This wouldn’t ordinarily rise to the level of commenting. But it’s an Andreessen post, so I’m perhaps primed to read its purpose as promotion rather than informing.
i.e.:
A i.e. B
———————-
B
(yes yes!)

Is there another Andreessen Horowitz that I don't know about? Because otherwise a $35B fund that invests in tech startups is very far from that description.
> And a giant hairball of queues and retry logic is too brittle. To deal with this, we’ve seen, over the last few years, the emergence of new solutions that bring sanity back to transactional logic. They can be roughly categorized as either workflow-centric approaches or database-centric approaches.
Ok, so using databases and queues is bad. What is good, then:
> we call this the application logic transactional platform (ALTP) approach because, ultimately, it extends OLTP transactions into the application
I'd really love to get a few of my brain farts funded as brilliant insights for the typical enterprise application.
Just publish your fucking read-only SQL credentials at this point.
Still probably not the tech one needs.
EDIT: add last paragraph
Accessing the website of any of the listed companies, the Pricing page is predictably there in the header: everyone is looking for rent. Just to be a contrarian, because I can't believe all these companies are making significant money, perhaps it's time for a return to the pay-for-binary model in such an Everything as a (payable) Service world: here's the blob for a nice database (or whatever), pay $1,000 (or maybe even $10,000 or $100,000) and the current version is yours forever.
I think it's even worse than that.
The massive disclaimer at the end includes a write-off that the whole thing "may not be reliable", "we haven't actually checked any of this", and "this might just be an advert"; and if anything is wrong with it, well, it's not a16z's views, it's just one of our ICs, we stand by none of it.
All presented as authoritative "industry" commentary, only to be revealed as diluted waffle regurgitating some investment thesis as a thinly veiled advert.
Thanks for the 2c a16z.
And what do you know, a16z has investments in companies that offer those SaaS products.
If the startup takes off? Great! If it doesn't? At least those SaaS products offered free trials.
I've never seen people trying to build transaction guarantees into their whole microservices stack using Fancy New Startup Tech[tm]. What I have seen a lot is people building transaction guarantees into their stack using something like event sourcing and cqrs. You pay some complexity but what you get in return is a transaction guarantee when events are processed which guarantees consistency when you need it (eg financial transactions etc), and the view part is eventually consistent but is always accurate as of a point in time.
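A minimal sketch of that split, with a toy in-memory log standing in for a real event store (all names here are illustrative, not any particular framework's API): the write side appends events atomically, giving the transaction guarantee at the moment an event is recorded, while the read side projects a view that is eventually consistent but always accurate as of the position it has processed.

```python
import threading
from dataclasses import dataclass

@dataclass
class Event:
    account: str
    amount: int  # signed: deposits positive, withdrawals negative

class EventStore:
    """Write side: appends are atomic, which is where the transaction
    guarantee lives in this pattern."""
    def __init__(self):
        self._lock = threading.Lock()
        self._log: list[Event] = []

    def append(self, event: Event) -> None:
        with self._lock:
            self._log.append(event)

    def events(self) -> list[Event]:
        with self._lock:
            return list(self._log)

class BalanceView:
    """Read side (the CQRS query model): projected from the log,
    eventually consistent, but accurate as of the position it has seen."""
    def __init__(self):
        self.position = 0
        self.balances: dict[str, int] = {}

    def project(self, store: EventStore) -> None:
        log = store.events()
        for ev in log[self.position:]:
            self.balances[ev.account] = self.balances.get(ev.account, 0) + ev.amount
        self.position = len(log)

store = EventStore()
view = BalanceView()
store.append(Event("alice", 100))
store.append(Event("alice", -30))
view.project(store)  # view is now accurate as of position 2
```

In a real system the projection would run asynchronously off a durable log, which is exactly where the eventual consistency on the view side comes from.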
[1] https://upstash.com/about doesn't show a16z as an investor right now.
that's not how it works, they already made their bets long before writing something like this (+ validated by talking to more people than can fit into this post... i know bc i've spent time with the authors haha)
> What I have seen a lot is people building transaction guarantees into their stack using something like event sourcing and cqrs. You pay some complexity but what you get in return is a transaction guarantee when events are processed which guarantees consistency when you need it (eg financial transactions etc), and the view part is eventually consistent but is always accurate as of a point in time.
correct. what's happening here is the rise of well tested frameworks that hopefully reduce the cost of implementation while preserving/extending the benefits (which can be very large if you go as far as temporal have done)
Correct. Andreessen is unusual in that its partners have a habit of plugging their books irrespective of the underlying companies’ health. So when e.g. PG says “look, I funded this company and they’re being badass,” people listen. When these guys write this tripe, it makes me wonder who’s running out of cash on that list.
When folks run out of ways to make money, they productize smaller and smaller chunks of the overall stack to attempt to extract more value/rent/payment out of their customer base.
They will bring in increasingly crazy-sounding architecture experts to justify this (oftentimes these folks get planted as FTEs in the very orgs they are selling to!).
When you find yourself asking 'is this complexity for complexities sake', know that you are being sold vaporware :)
At this point I’m diversified enough in my investments to just start agreeing. “Yes, you’re right, we’ll all be drinking our meals and everyone who uses Postgres will be fired”
Looking at it from an event-driven, eventually consistent view, it seems to me that one can achieve considerable reductions in complexity by explicitly avoiding distributed transactions.
In the past I often had the discussion: "Oh, no! Our business requirement X forces us to have atomic transactions across these 74 services." I can't speak in absolutes, but all the cases I have personally seen actually didn't need distributed transactions and could (in most cases easily) be translated into event-driven architectures. In the hard cases it usually came down to services being split badly or the IT org being too strongly reflected in the deployed architecture.
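As a toy illustration of that translation (the bus, topic names, and handlers here are hypothetical stand-ins for a real broker such as Kafka or RabbitMQ): instead of one atomic transaction spanning services, the ordering service commits only its own state and publishes a fact, and each downstream service reacts with its own local update.

```python
from collections import defaultdict

class EventBus:
    """Toy in-process event bus. Each service commits its own local
    transaction and emits an event; downstream services react
    independently, so no distributed transaction spans them."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, payload):
        # A real broker would deliver asynchronously and durably;
        # synchronous delivery keeps the sketch short.
        for handler in self.handlers[topic]:
            handler(payload)

# Local state owned by two hypothetical downstream "services".
inventory = {"widget": 5}
invoices = []

def reserve_stock(order):
    inventory[order["sku"]] -= order["qty"]

def create_invoice(order):
    invoices.append({"sku": order["sku"], "amount": order["qty"] * 10})

bus = EventBus()
bus.subscribe("order.placed", reserve_stock)
bus.subscribe("order.placed", create_invoice)

# The order service publishes the fact; it never coordinates a
# cross-service commit.
bus.publish("order.placed", {"sku": "widget", "qty": 2})
```

Each handler is its own consistency boundary; if one fails, you compensate with another event rather than rolling back a global transaction.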
I never coined a term, but I implemented an "ALTP" using an in-memory cache library in Elixir. The library is called nebulex. It was simple to implement: since the library supports distribution out of the box, I had a distributed in-memory transactional guarantee working well in a short amount of time.
The cool thing was that I also wrapped the database lock under the distributed in-memory lock, so in the case of a netsplit the database's locks had my back. In normal circumstances it made things scalable, because the in-memory lock would prevent the client from connecting to the database until the in-memory lock was released.
In Elixir, doing these things is really intuitive and enjoyable, and doesn't require an external dependency. I think it took me only a couple of days to design and implement, and it solved the problem.
how did you coordinate this, since both are different systems altogether?
edit: I went through the video at 1.5x speed and I understand it now. This is the grand idea:
1. Use a separate distributed lock manager, get lock on the wallet. This lock is in-memory
2. Start transaction on DB and make changes
3. Release the lock
4. Other transactions wait at #1, since they can't get the lock. Thus this reduces the overall load on the DB.
This is cool though!
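Roughly, the steps above might look like this, with a per-key `threading.Lock` standing in for the distributed lock manager and an in-memory SQLite database standing in for the real one (this is a sketch of the idea, not the original Elixir/nebulex code):

```python
import sqlite3
import threading

# Stand-in for the distributed in-memory lock manager: one lock per wallet.
locks = {}
locks_guard = threading.Lock()

def wallet_lock(wallet_id):
    with locks_guard:
        return locks.setdefault(wallet_id, threading.Lock())

db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE wallets (id TEXT PRIMARY KEY, balance INTEGER)")
db.execute("INSERT INTO wallets VALUES ('w1', 100)")
db.commit()

def debit(wallet_id, amount):
    # Step 1: take the in-memory lock first, so contending callers queue
    # here instead of piling load onto the database.
    with wallet_lock(wallet_id):
        # Step 2: the DB transaction (and its own locking) is the backstop
        # if the in-memory lock fails, e.g. during a netsplit.
        row = db.execute(
            "SELECT balance FROM wallets WHERE id = ?", (wallet_id,)
        ).fetchone()
        if row[0] < amount:
            raise ValueError("insufficient funds")
        db.execute(
            "UPDATE wallets SET balance = ? WHERE id = ?",
            (row[0] - amount, wallet_id),
        )
        db.commit()
    # Step 3: the lock is released on exiting the with-block, letting the
    # next waiter from step 4 proceed.

debit("w1", 30)
```

The load reduction comes from step 4: contenders block on a cheap in-memory lock rather than each opening a connection and waiting on a database row lock.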
Update: Thx for going through it! You got the idea.
This does not exist; at least, those are then not ACID, which is what many people understand as transactions.
"ACID (atomicity, consistency, isolation, durability)": in-memory state is usually (barring battery backing, ...) not durable.
Check out this video https://youtu.be/v-n8L-DiPL4
I always wonder how many of these "modern" deployments could be replaced by a monolith + DB running on a ~single server. What are we up to these days for off-the-shelf hardware? 128 cores, 6TB+ of RAM, x NVMe SSDs in RAID? Application on one server, cache-backed DB on another. Failover to a cloned setup.
Is anybody here running something like this on latest hardware and can provide rough performance metrics?
Does anyone know of any other more concrete explanation of how these kinds of systems could work?
and more recently they also did one at Strange Loop: https://www.youtube.com/watch?v=V_5WeVmyhzg
You can learn to architect better, but you can never get better at duct-taping together the capricious and arbitrary APIs between the hottest new techs.
I usually look at the authors, because if a partner at a VC had great tech insights, they would found a startup themselves; from what I see with my coachees, most VCs just follow the herd ;-) (yes, a few have). One of the authors was a CTO, though.
as someone who has worked in this space (https://www.swyx.io/why-temporal), here's a few quick responses:
> "We’re seeing the emergence of systems that extend strong transactional guarantees beyond the database, into the distributed apps themselves."
yes absolutely. but its not just transactions, but also the ability to route/distribute work, and "self provision" (https://www.swyx.io/self-provisioning-runtime) queues and timers and new application states (unconstrained by preset DAG) as you code
> a giant hairball of queues and retry logic is too brittle
my favorite img on how this looks (on a real diagram, from a real aws blogpost) https://twitter.com/swyx/status/1423025792783568899
> (temporal) will ensure the code runs to completion, even during a failure.
i'd correct to "transient failure" (eg due to network or uptime issues). retries cant solve application logic issues and mostly fatten the tail of timeout failures that still need to be dealt with. fault tolerant code is a much better default, but there's no totally-free lunch here.
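to illustrate the distinction (a hand-rolled sketch, not Temporal's actual retry machinery): a retry wrapper like the one below rescues transient failures such as network blips, but a deterministic application-logic bug fails identically on every attempt, so retries only fatten the tail.

```python
import time

def with_retries(fn, attempts=3, base_delay=0.0):
    """Retry a flaky callable with exponential backoff, the way a
    workflow engine retries an activity. Only helps when the failure
    is transient."""
    def wrapped(*args, **kwargs):
        for attempt in range(attempts):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if attempt == attempts - 1:
                    raise  # retries exhausted: surface the failure
                time.sleep(base_delay * 2 ** attempt)
    return wrapped

calls = {"n": 0}

def flaky_activity():
    # Simulated transient failure: fails twice, then succeeds, as if a
    # network partition healed between attempts.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network failure")
    return "done"

result = with_retries(flaky_activity)()  # succeeds on the third attempt
```

if `flaky_activity` instead raised on bad input every time, the same wrapper would just burn three attempts and re-raise, which is the "no totally-free lunch" point above.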
> The downside is that it (temporal) often requires a lot of integration work and wrapper code to expose useful and safe interfaces to application developers.
yes (example https://temporal.io/blog/why-rust-powers-core-sdk) - but there are lots of higher level systems built atop temporal (eg https://news.ycombinator.com/item?id=34686216 and https://www.youtube.com/watch?v=M25p1cgjM2U&list=PLl9kRkvFJr...) but this is also how netflix/stripe/etc are able to customize it for their stack and encode their opinions in a way that the other alternatives mentioned would not be able to
> The other approach, which is more popular with application developers (particularly Typescript/Javascript) is for the workflow engine to serve as an orchestrator of async functions (e.g. Inngest, Defer, and Trigger)... (truncated)
lest i sound like a pure temporal shill, i also agree that this is an attractive model for fullstack web developers (as opposed to say backend/systems devs). this is why i got excited about Defer enough to review it (https://www.youtube.com/watch?v=iGccpHaB1hA)
> Database-centric approaches start with a database, but extend it to support arbitrary code execution to allow for workflows alongside data management.
i've always used the analogy that workflow systems are "stored procedures on steroids". glad to see some signals that Convex (and Supabase?) have plans to make this available to their users. i think there'll be happy users of either approach, but personally am doubtful that the DB-centric approach will take significant share simply because 1) this stuff is not existential for them while it is for the workflow companies, so they will probably lag behind in features, investment, product/GTM focus, and 2), workflow systems easily bridge multiple systems and databases while the db-centric approach is necessarily limited to the system that db lives in. if you have a real microservices setup instead of a distributed monolith, its pretty obvious which approach will win here
I closely followed your work while you were at Temporal. "React for Backend" resonated with what I experienced while building a couple of backend systems!
It's been a year or two since then, and I'm frankly surprised that Temporal-like services don't appear that mainstream. At least in my small circle of friends, orchestration frameworks are still in the "early adopter" stage of innovation.
Any recent take on the adoption of Temporal-like frameworks? Do you still believe it will have a React like moment in the future?
> I'm frankly surprised that Temporal-like services don't appear that mainstream. At least in my small circle of friends, orchestration frameworks are still in the "early adopter" stage of innovation.
yup and the problem is in both demand and supply. it is still too difficult to get up and running (whether with temporal or orkes or other) and also not enough developers realize the business value of/(difficulty of doing well) doing long running async work (i tried to communicate that in all my temporal intros, but getting devs to care about what code has abnormal biz value is surprisingly difficult, only a minority look at code like that).
further - choreography (as referenced in my article) will always be simpler to start with than orchestration, so many people will start there first and basically stop there. so the activation energy to feel the pain is nonzero.
that said, we have evidence that app developers will eventually be convinced to overcome learning curves and shift work left, with typescript released in 2012 but only "winning" in 2018-19. there was no React "moment", it grew and grew through the work and advocacy of a small group of believers and the fortunate self immolation of Angular 2