Would you use a backend where you just define schema, access policy, and functions?
Basically something like making smart contracts on EVM, but instead they run on a hyperscaler, and have regular backend fundamentals.
Here's a mock frenchie made for me; I was thinking something like this:
schema User {
  email: string @private(owner)
  name: string @public
  balance: number @private(owner, admin)
}

policy {
  User.read: owner OR role("admin")
  User.update.balance: role("admin")
}

function transfer(from: User, to: User, amount: number) {
  assert(caller == from.owner OR caller.role == "admin")
  assert(from.balance >= amount)
  from.balance -= amount
  to.balance += amount
}
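To make the idea concrete, here's a hedged sketch of what the `transfer` function above might compile down to as plain TypeScript. The `User` and `Caller` types, the `id`/`role` fields, and the error messages are my assumptions, not part of the DSL:

```typescript
// Hypothetical compiled form of the DSL's `transfer` function.
// The runtime would construct `Caller` from the verified JWT.
type User = { owner: string; balance: number };
type Caller = { id: string; role?: string };

function transfer(caller: Caller, from: User, to: User, amount: number): void {
  // assert(caller == from.owner OR caller.role == "admin")
  if (caller.id !== from.owner && caller.role !== "admin") {
    throw new Error("forbidden");
  }
  // assert(from.balance >= amount)
  if (from.balance < amount) {
    throw new Error("insufficient balance");
  }
  from.balance -= amount;
  to.balance += amount;
}
```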
Was playing with OpenFGA, and AWS Lambda stuff, and got me thinking about this.
So you would "deploy" this contract on a hyperscaler, which then lets users access it from your lean JS front-end via something like this:
const res = await fetch("https://api.hyperscaler-example.com/c/your-contract-id/transfer", {
  method: "POST",
  headers: {
    "Authorization": "Bearer <user-jwt>",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({ from: "user_abc", to: "user_xyz", amount: 50 })
});
The runtime resolves the caller identity from the JWT, checks the policy rules, runs the function, and handles the encryption/decryption of fields, so your frontend never touches any of that.
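A minimal sketch of that dispatch path in TypeScript, assuming the verified JWT payload carries `sub` and `role` and that policy rules are plain predicate functions (all names here are hypothetical; real JWT verification would use a library like jose):

```typescript
// Identity as extracted from a verified JWT payload (assumed shape).
type Identity = { sub: string; role?: string };
// A policy rule is a predicate over the caller and the resource.
type Policy = (caller: Identity, resource: Record<string, unknown>) => boolean;

// The `policy { ... }` block could lower to a rule table like this:
const policies: Record<string, Policy> = {
  "User.read": (caller, user) =>
    caller.sub === user.owner || caller.role === "admin",
  "User.update.balance": (caller) => caller.role === "admin",
};

// Deny by default: unknown actions are rejected.
function authorize(
  action: string,
  caller: Identity,
  resource: Record<string, unknown>
): boolean {
  const rule = policies[action];
  return rule ? rule(caller, resource) : false;
}
```

The deny-by-default branch matters: a typo'd action name should fail closed, not open.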
That's it, would you use it? Is there something that does this exactly already? Feeling like building this.
New lines got nuked, great, here's the same doc on a github gist - https://gist.github.com/sssemil/fa94948ae0c12d92f9b1fdc272f7...
There are a lot of versions of this already; the main blocker is people not wanting to learn a bespoke DSL.
The second problem is the escape hatch story: how do people customize beyond the functions when they inevitably reach that point? If you haven't hit this, you haven't built complex enough applications with your framework yet.
The elephant in the room is: why this instead of AI? (I personally have an answer to this question, in the scope of my take on the framework described in your post.)
True, I'd hate learning a new DSL...
Also true. I suppose one would basically have to make it support full Node.js code? I wonder how restrictive AWS Lambda gets once you start building something complex.
Why this instead of AI? I actually started thinking about this more because of AI: if you have a clear definition of your backend, with solid primitives, that can just get deployed at each commit without much concern for the infra, doesn't that make it easier to slop your way through features?
Not if I have to load up a bunch of explanation and pollute my context because of a tool with a bespoke DSL.
I already built this same thing with CUE as the DSL. So (1) my questions come from experience, and (2) the LLMs already know CUE because it has been around longer and it's in their training data.