I've been evaluating schema libraries for a better-than-Zod source of truth, and ArkType is where I've been focused. Zod v4 just entered beta[1], and it addresses many of my problems with the library. For such a mature library to improve like this, v4 is a treat and speaks volumes about the quality of its engineering. But ArkType has a much larger scope, and feels to me more like a data modeling language than a library. Something I definitely want as a dev!
The main downside I see is that its runtime code size footprint is much larger than Zod's. For some frontends this may be acceptable, but it's a real cost that isn't wise to pay in many cases. The good news is that with precompilation[2] I think ArkType will come into its own, look more like a language with a compiler, and be suitable for lightweight frontends too.
I was so shocked by how good this is that I ended up putting together a small deck (haven't had time to turn it into a doc yet): https://docs.google.com/presentation/d/1fToIKvR7dyvQS1AAtp4Y...
Shockingly good (for backend)
[0] Typia: https://typia.io/
[1] Nestia: https://nestia.io/
This is because it relies on patching the TypeScript implementation. I'm curious if its approach is even feasible with Go?
So...it's a parser. Like Zod or effect schema.
Yes, it unfortunately really does bloat your bundle a lot, which is a big reason I personally chose to go with Valibot instead (it also helps that it's a lot closer to Zod's API, so it's easier to pick up).
Thanks for linking that issue, I'll definitely revisit it if they can get the size down.
export reflect type User = {
id: number;
username: string;
// ...
};
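The `reflect` keyword above is hypothetical syntax, but a compiler that supported it could emit a plain runtime descriptor next to the erased type. A minimal sketch of what that generated output might look like (all names here are illustrative, not from any real tool):

```typescript
// The declared type (erased at runtime as usual).
type User = {
  id: number;
  username: string;
};

// Sketch of compiler-generated output: a descriptor mapping each
// property to its expected typeof tag.
const UserSchema = {
  id: "number",
  username: "string",
} as const;

// Generic check driven by the descriptor.
function matches(value: unknown, schema: Record<string, string>): boolean {
  if (typeof value !== "object" || value === null) return false;
  const obj = value as Record<string, unknown>;
  return Object.entries(schema).every(([key, tag]) => typeof obj[key] === tag);
}

console.log(matches({ id: 1, username: "ada" }, UserSchema)); // true
console.log(matches({ id: "1", username: "ada" }, UserSchema)); // false
```

The appeal is that the descriptor stays mechanically in sync with the type, instead of being a second source of truth you maintain by hand.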
Edit: just remembered about this one: https://github.com/GoogleFeud/ts-runtime-checks

It's a miracle it can be 100x faster than Zod, but speed was never my issue with Zod to begin with.
The thing is Zod seems fairly standard in the ecosystem, and I value that more than novelty.
Heads up, it seems overall more scannable than an equivalent Zod schema though, given the similarity to 'raw' TS.
Also it seems like a fairly short hop to this engine being used with actual raw TS types, either in a compilation step or via Prisma-style codegen?
If you mess that up (either by being too flat, too customizable, or too limited), library users will start coming up with their own wrappers around it, which will make your stuff slower and your role as a maintainer hell.
(source: 15 years intermittently maintaining a similar project).
There is an obvious need nowadays for a validation library that bridges OOP, functional, and natural languages. Its value, if adopted as a standard, would be immense. The floor is still lava though; I can't make it work in today's software culture.

The need is obvious. As natural language becomes more prominent in programming, there will be a need for a bridge to the older traditional paradigms. I can't give more details; it's the kind of thing you can't put into prose yet.
I mean it.
I've been parsing (not just validating) runtime values for a decade (io-ts, Zod, effect/schema, tcomb, etc.) and I find the performance penalty irrelevant in virtually any project, either FE or BE.
Seriously, people will fill their website with Google tracking crap, 20,000 libraries, and React crap for a simple CRUD, and then complain about millisecond differences in parsing?
I agree though, that filling your website with tracking crap is a stupid idea as well.
Zod alone accounts for a significant portion of the CPU time.
Still far behind if the 100x is to be believed. v4 isn't even a 10x improvement. Nice changes though.
There are a few tools out there that generate code that TypeScript can prove validates your schema. That, I think, is the path forward.
Using a library like Zod requires you to trust that Zod will correctly validate the type. Instead, I much prefer to have schema validation code that TypeScript proves will work correctly. I want build-time checks that my runtime validation is correct.
Typia generates runtime code that TypeScript can check correctly validates a given schema: https://typia.io/docs/validators/assert/ . I've never actually used it, but this is closer to the realm I prefer.
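To illustrate the idea without the generator: this is a hand-rolled sketch of the kind of code such a tool emits, a type predicate whose signature TypeScript checks against the declared type. (TypeScript still trusts the predicate's body, so the trust shifts from a runtime schema library to the code generator; the `Point` type here is just an example, not from Typia's docs.)

```typescript
type Point = { x: number; y: number };

// A type predicate: if this returns true, TypeScript narrows the
// input to Point at the call site.
function isPoint(input: unknown): input is Point {
  return (
    typeof input === "object" &&
    input !== null &&
    typeof (input as Point).x === "number" &&
    typeof (input as Point).y === "number"
  );
}

const raw: unknown = JSON.parse('{"x": 1, "y": 2}');
if (isPoint(raw)) {
  // raw is typed as Point in this branch; no cast needed.
  console.log(raw.x + raw.y); // 3
}
```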
// zod 3 syntax
import { z } from 'zod'

const RGB = z.object({
  red: z.number(),
  green: z.number(),
  blue: z.number(),
})

type RGB = z.infer<typeof RGB>
// same thing as:
// type RGB = { red: number; green: number; blue: number };
For the initial migration, there are tools that can automatically convert types into the equivalent schema. A quick search turned up https://transform.tools/typescript-to-zod, but I've seen others too.

For what it's worth, I have come to prefer this deriving-types-from-parsers approach to the other way around.
import { z } from 'zod'

type Message = { body: string }

const messageSchema: z.ZodType<Message> = z.object({ body: z.string() })
Me: "Awesome, so I get an object from an API, it will be trivial to check at runtime if it's of a given type. Or to have a debug mode that checks each function's inputs to match the declared types. Otherwise the types would be just an empty charade. Right?"
TS: "What?"
Me: "What?"
Realizing this was a true facepalm moment for me. No one ever thought of adding a debug TS mode where it would turn
function myFunc(a: string, b: number) {}
into
function myFunc(a: string, b: number) {
  assert(typeof a === "string");
  assert(typeof b === "number");
}
to catch all the "somebody fetched a wrong type from the backend" or "someone did some stupid ((a as any) as B) once to silence the compiler and now you can't trust any type assertion about your codebase" problems. Nada.
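A self-contained sketch of what that hypothetical debug mode could emit (TypeScript does not do this; `assertType` is an illustrative helper using an assertion signature, not a real compiler feature):

```typescript
// Assertion helper: throws if the condition is false, and tells the
// compiler the condition holds afterwards.
function assertType(cond: boolean, msg: string): asserts cond {
  if (!cond) throw new TypeError(msg);
}

// What a "debug mode" could rewrite myFunc into: the same function
// with typeof checks injected for each declared parameter type.
function myFunc(a: string, b: number): string {
  assertType(typeof a === "string", "a must be a string");
  assertType(typeof b === "number", "b must be a number");
  return a.repeat(b);
}

console.log(myFunc("ab", 2)); // "abab"
// myFunc("ab", "2" as unknown as number) would now throw a TypeError
// at runtime instead of silently misbehaving.
```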
The debug mode sounds interesting at first thought, but quickly explodes in complexity when you deal with more complex object types and signatures. To enable automatic runtime validation for all cases, you would need to rewrite programs so thoroughly that you’re pretty much guaranteed to introduce bugs and behaviour changes that were not present in the source code.
In my opinion it's great that TS draws a strict boundary to avoid runtime impact at all costs, and leaves that to libraries like Zod, which handle dealing with external data.
> to catch all the "somebody fetched a wrong type from the backend" or "someone did some stupid ((a as any) as B) once to silence the compiler and now you can't trust any type assertion about your codebase" problems. Nada.
Those type casts are sure annoying, but what's the alternative? Even in your hypothetical debug mode, you would not be safe here, since you're effectively telling the compiler you know better and it's supposed to treat the value as that type, not assert it. Or do you want to remove the escape hatch that "as" is? Because that would be a major pain in the ass in situations where you just do know better, or don't want to ensure perfect type safety for something you know will work.
You can’t make things idiot proof, no matter how hard you try. That doesn’t make preprocessing type hints useless.
The closest thing to JavaScript would probably be Dart, now that it has sound types [1].
You’re right that it’s a pain in some situations.
I don't think so. I just added typia's is* checks to the places where I am digesting JSON input. It was rather trivial, and now I can actually trust, for the first time, that the object I am holding actually matches the declared type.
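The pattern is checking only at the JSON boundary, then trusting the type everywhere downstream. A minimal hand-rolled stand-in for a generated check (the `Config` type and `parseConfig` helper are illustrative, not typia's API):

```typescript
type Config = { retries: number; verbose: boolean };

// Hand-written stand-in for a generated is<Config>() check.
function isConfig(v: unknown): v is Config {
  return (
    typeof v === "object" && v !== null &&
    typeof (v as Config).retries === "number" &&
    typeof (v as Config).verbose === "boolean"
  );
}

// Validate once at the boundary; everything downstream gets a real Config.
function parseConfig(json: string): Config {
  const value: unknown = JSON.parse(json);
  if (!isConfig(value)) throw new Error("input does not match Config");
  return value;
}

console.log(parseConfig('{"retries": 3, "verbose": false}').retries); // 3
```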
> "You can’t make things idiot proof"
I just did. You can't have a non-idiot-proof program running in the wild and blindly trusting the outputs of whatever API it uses.