I'm not sure when it started, but I feel like there has been a recent increase of "for every dev" or "for everyone" in the marketing copy of tools like these. Though if you look into it for just a minute, it's not a polyglot tool that at its core exposes e.g. a REST API that could be talked to from any language, but a "library + SaaS" for a very specific subset of developers (though not a small one).
------
To the founders/team: Sorry for the rant - you are not the only ones doing it. Congrats on the funding!
Neither is critical, but I recommend using an underscore instead of a hyphen in the repo name for an Elixir project, and using "ex" in the name of your project is generally (informally) considered a signal of low code quality in the community.
We started with the TypeScript SDK to help us stay focused while we iterated and learned. As Tony said, we'll gradually expand to other languages (likely Python in the near future too!).
As for your "REST API instead of an SDK" concern: we invoke your functions, so we need some code in your app. We need to handle things like memoizing steps within a function, so there needs to be some orchestration within the app.
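To illustrate why that orchestration has to live in the app, here's a rough, self-contained sketch of step memoization in plain TypeScript. This is not the real Inngest SDK; `makeStepRunner`, the step IDs, and the workflow are all hypothetical, just showing the idea that a re-invoked function replays completed steps from a memo instead of re-executing them:

```typescript
// Hypothetical sketch: on each invocation the function is re-run from the
// top, but steps that already completed are replayed from a memo, so side
// effects (like charging a card) run exactly once even across retries.

type Memo = Record<string, unknown>;

function makeStepRunner(memo: Memo) {
  return function run<T>(id: string, fn: () => T): T {
    if (id in memo) return memo[id] as T; // replay the memoized result
    const result = fn(); // first time: actually execute the step
    memo[id] = result;
    return result;
  };
}

// A workflow written as plain procedural code:
let chargeCalls = 0;
function workflow(step: (id: string, fn: () => number) => number): number {
  const amount = step("load-cart", () => 42);
  const charged = step("charge-card", () => {
    chargeCalls++; // side effect we must not repeat
    return amount;
  });
  return charged;
}

// First invocation executes both steps; a second invocation (e.g. after a
// retry) replays them from the memo without re-charging the card.
const memo: Memo = {};
workflow(makeStepRunner(memo));
workflow(makeStepRunner(memo));
// chargeCalls is still 1: the charge ran exactly once.
```

A bare REST API couldn't do this, because the replay bookkeeping has to wrap the user's own function as it runs.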
Half of us are Go devs, so we definitely don't think TypeScript should be the default language everyone uses. We only chose to start with a TypeScript SDK!
Absolutely. Each additional SDK adds a lot of maintenance effort. It isn't just a write-it-once thing:

- More time spent writing/updating docs
- More context switching in support channels
- More time spent updating SDKs as new features are developed
Staying focused on one SDK helps keep velocity high in the early stages. But our product has matured enough that we feel comfortable gradually rolling out more language support, starting with Go soon.
The press release is unnecessarily hyperbolic, which turned me off, though:
> Deploying new jobs to production also requires tedious configuration of cloud infrastructure which often requires a handoff to another team or individual. Often weeks of developer time is spent on basic workflows, before anything complex like idempotency is handled. Using Inngest, developers can write, test, and deploy complex workflows to production in hours, not weeks — all without touching infrastructure or queues.
Using something like Dramatiq [1] with Redis, writing a background job takes minutes, and can be deployed alongside an existing Python web app. There are probably JS equivalents.
I think Inngest could be a useful service (I might have used it if I'd seen it a few weeks ago), but the comparison felt off for me - it made me feel like this wasn't solving a real problem.
Multi-step functions. This requires chained jobs, and your dependencies get messy. Operating this kind of sucks. Change management here is hard. It's much easier and faster to write a single function using Inngest; basically any engineer can do it without learning infrastructure.
Absence of events/activity (eg. when a user adds a product to the cart, send an email if they don't check out). You need to start writing crons and depending on implicit state that occurs within your DB. I've done this; it becomes really buggy. Much nicer to do `step.waitForEvent` in procedural code.
Fan-out (eg. when a user signs up, add them to Stripe, Intercom, etc, or mass ecommerce imports). You can definitely handle this with individual jobs, though these start to stack up and then monitoring becomes unwieldy.
Retries and dead-letter queues. It's much easier to replay functions en masse, or replay events that have failed, without worrying about dead-letter queues and redrives.
Config generally sucks. You can manage concurrency, backpressure, etc. yourself, alongside managing the infra, but that's no fun. Or, you can use some existing queues and then manage cloudformation, terraform, infrastructure code, DLQ config, etc. yourself too.
Batching is annoying to build. I'm not sure how you'd batch high-volume events or jobs in Dramatiq. Maybe you can add your own observability, deploy new workers, and try to avoid batching — deal with that yourself?
There's more, too. Rate limiting, concurrency controls (eg. per user, not per job), branch deploys/preview environments/staging envs, are all pretty frustrating and require infra investment.
As with anything, handling the initial case is pretty easy but quickly gets hard.
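On the batching point specifically, here's roughly what you end up hand-rolling if your queue doesn't support it. This is an illustrative TypeScript sketch, not any library's API; `makeBatcher`, `maxSize`, and `drain` are made-up names. It buffers items and flushes when the batch fills; a real system would also flush on a timer and handle partial-batch failures:

```typescript
// Hypothetical batching sketch: accumulate events, flush in groups of
// maxSize, and force out any partial batch with drain() (e.g. on a timer
// tick or shutdown).

function makeBatcher<T>(maxSize: number, flush: (batch: T[]) => void) {
  let buffer: T[] = [];
  return {
    push(item: T) {
      buffer.push(item);
      if (buffer.length >= maxSize) {
        flush(buffer);
        buffer = [];
      }
    },
    drain() {
      if (buffer.length > 0) {
        flush(buffer);
        buffer = [];
      }
    },
  };
}

const flushed: number[][] = [];
const batcher = makeBatcher<number>(3, (b) => flushed.push(b));
[1, 2, 3, 4, 5].forEach((n) => batcher.push(n));
batcher.drain();
// flushed is [[1, 2, 3], [4, 5]]
```

Even this toy version ignores the hard parts (timers, retries of a failed flush, observability), which is exactly the infra investment the parent comment is describing.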
It is hard to scale background tasks when you hit a high level of concurrency, as you've mentioned.
Your press release makes it sound like the initial setup of background tasks is hard, and doesn't mention the harder stuff.
It makes absolute sense to "event everything".
Also, the dev-ex is incredible. Especially in conjunction with RedwoodJS.
Well done, chaps!
We also collaborated with their core team on a GraphQL envelop plugin that automatically sends events for every mutation called in your API: https://github.com/inngest/envelop-plugin-inngest
- We’ve taken a different approach that removes the queue and worker abstraction from the end developer and instead uses an HTTP-based invocation. This enables Inngest functions to be run on any platform, including serverless.
- Inngest is powered by events which gives developers the ability to fan out work and build event-driven flows easily.
- Inngest also allows you to coordinate across events, enabling workflows to pause and wait for additional input (step.waitForEvent [1]). This is powerful for human-in-the-middle workflows (e.g. AI confirmation flows) or workflows that pause for days or weeks and conditionally run steps depending on what a user might do in your system (e.g. dynamic email drip campaigns)
- Lastly, events also provide a nice way to easily replay your workflows across any time window.
- One last thing is that we’ve heard many times that while devs are often quite excited about Temporal, they’ve found Inngest to be much easier to learn and build with. We’ve intentionally worked to make our first SDK with the simplest primitives so their workflows just look like normal code, with minimal DSL.
[1] https://www.inngest.com/docs/reference/functions/step-wait-f...
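To make the "pause and wait for an event" idea concrete, here's a tiny self-contained TypeScript sketch of event coordination. It is not the real `step.waitForEvent` (which is declarative and survives restarts); `makeCoordinator`, `deliver`, and `expire` are hypothetical names, just showing the pattern of a paused workflow resuming on a matching event or falling through to a timeout branch:

```typescript
// Hypothetical sketch of event coordination: a workflow registers a wait,
// and a later matching event resumes it; if no event arrives, the timeout
// branch runs instead. Each wait fires at most once.

type Wait = {
  event: string;
  onEvent: (data: any) => string;
  onTimeout: () => string;
};

function makeCoordinator() {
  const waits: Wait[] = [];
  // Remove and return every wait registered for this event name.
  const take = (event: string): Wait[] => {
    const matched = waits.filter((w) => w.event === event);
    for (const w of matched) waits.splice(waits.indexOf(w), 1);
    return matched;
  };
  return {
    // Register a paused workflow waiting for a matching event.
    waitForEvent(event: string, onEvent: (data: any) => string, onTimeout: () => string) {
      waits.push({ event, onEvent, onTimeout });
    },
    // An event arrived: resume every workflow waiting on it.
    deliver(event: string, data: any): string[] {
      return take(event).map((w) => w.onEvent(data));
    },
    // The wait expired with no matching event: run the timeout branch.
    expire(event: string): string[] {
      return take(event).map((w) => w.onTimeout());
    },
  };
}

const co = makeCoordinator();
// "When a user adds to cart, send a reminder if they don't check out."
co.waitForEvent(
  "cart.checkout",
  (data) => `thank ${data.user}`,
  () => "send reminder email",
);
const resumed = co.deliver("cart.checkout", { user: "ada" });
// resumed is ["thank ada"]; a later expire() finds nothing left to time out.
```

The durable version of this (waits measured in days or weeks, persisted across deploys) is what makes drip campaigns and human-in-the-middle flows tractable.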
And it's just code, so I'm not sure about the DSL reference (of course that's common with other systems)?
What use cases is this designed for? Easier to learn and build with usually comes with tradeoffs.
Typo on this line
Your CI/CD process remains the same as before and Inngest
Guillermo has been a fantastic supporter overall :)