> We carefully vet what we eager-load depending on the type of request and we optimize towards reducing instances of N+1 queries.
> Reducing Memory Allocations
> Implementing Efficient Caching Layers
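The first quoted step, avoiding N+1 queries via eager loading, can be sketched in plain Ruby. The in-memory "database", table names, and query counter below are all made up for illustration; in ActiveRecord this is essentially what `Product.includes(:variants)` does under the hood.

```ruby
# Toy in-memory "database" so the query counts are visible.
DB = {
  products: [{ id: 1 }, { id: 2 }, { id: 3 }],
  variants: [
    { id: 10, product_id: 1 },
    { id: 11, product_id: 2 },
    { id: 12, product_id: 3 }
  ]
}
$queries = 0

def query(table, &filter)
  $queries += 1
  rows = DB[table]
  filter ? rows.select(&filter) : rows
end

# N+1: one query for the products, then one more per product.
$queries = 0
query(:products).each { |p| query(:variants) { |v| v[:product_id] == p[:id] } }
n_plus_one_queries = $queries # 1 + 3 = 4

# Eager loading: one query for the products, one batched query for all variants.
$queries = 0
products = query(:products)
ids = products.map { |p| p[:id] }
variants_by_product = query(:variants) { |v| ids.include?(v[:product_id]) }
                        .group_by { |v| v[:product_id] }
eager_queries = $queries # 2, no matter how many products there are
```

The key property is that the eager-loading path stays at two queries as the product count grows, while the naive path grows linearly.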
All of those steps seem like pretty standard ways of optimizing a Rails application. I wish the article had made it clearer why they decided to pursue such a complex route (the whole custom Lua/nginx routing and two applications instead of a monolith).
Shopify surely has tons of Rails experts and I assume they pondered a lot before going for this unusual rewrite, so of course they have their reasons, but I really didn't understand (from the article) what they accomplished here that they couldn't have done in the Rails monolith.
You don't need to ditch Rails if you just don't want to use ActiveRecord.
The project does still use code from Rails. Some parts of ActiveSupport in particular are really not worth rewriting; they work fine and have a lot of investment behind them already.
The MVC part of Rails is not used for this project, because the storefront of Shopify works in a very different way than a CRUD app, and doesn’t benefit nearly as much. Custom code is a lot smaller and easier to understand and optimize. Outside of storefront, Shopify still benefits a lot from Rails MVC.
I’ll also add that storefront serves a majority of requests made to Shopify but it’s a surprisingly tiny fraction of the actual code.
Any interesting/successful patterns you can share, or resources on said patterns?
What I didn't understand was why the listed performance optimizations couldn't be implemented in the monolith itself, and instead warranted the development of a new application, which is still Ruby.
In a production env, the request reaches the Rails controller pretty fast.
I know for a fact that the view layer (.html.erb) can be a little slow compared to, say, a plain `render json:`. But if you're still going to be sending fully rendered HTML pages over the wire, the listed optimizations (caching, query optimization, and memory allocation) could all be implemented in Rails itself to a huge extent, and that's what I'd love to know more about.
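For reference, the kind of caching Rails already ships with, fragment caching keyed on a record and its `updated_at`, can be modeled in plain Ruby. The `Product` struct, the counters, and the cache hash below are illustrative stand-ins, not anyone's actual code; in ERB this corresponds to `<% cache product do %>...<% end %>`.

```ruby
# Plain-Ruby model of Rails-style fragment caching.
CACHE = {}
$renders = 0

Product = Struct.new(:id, :updated_at, :name)

def render_fragment(product)
  # The cache key includes updated_at, so changing the record busts the cache.
  key = [product.id, product.updated_at]
  CACHE[key] ||= begin
    $renders += 1 # the expensive template render happens once per key
    "<li>#{product.name}</li>"
  end
end

p1 = Product.new(1, 100, "Widget")
3.times { render_fragment(p1) } # one render, two cache hits
```

The point of the key-includes-timestamp convention is that invalidation is implicit: stale entries are simply never read again.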
Of course, everything you said is true for a small-to-medium sized Rails application.
They likely could have explored a separate Rails app to meet this goal, but then they have to maintain the dependency tree and security risks twice. And if Rails core later refactors away any optimizations they make, they have to keep maintaining and re-integrating those changes.
There’s definitely some wiggle room and a judgement call here but their custom implementation has merit.
This is a great approach and unfortunately I don't think many (most?) software projects start out like that.
Not defining conditions of victory and scope creep are possibly the biggest risks in software projects.
1) What is the goal? What defines success?
2) What are the KPIs? How are we going to measure them?
These are baseline questions to any endeavor of substance. Yet, they are rarely defined.
Does anyone on here, who has worked on this project or internally at Shopify, feel that this project was successful? Do you think this is the first step of a long and gradual process in which Shopify will rewrite itself into a microservice architecture? It seems like the mentality behind this project shares a lot of the commonly claimed benefits of microservices.
> Over the years, we realized that the “storefront” part of Shopify is quite different from the other parts of the monolith
Different goals that need to be solved with different architectural approaches.
> storefront requests progressively became slower to compute as we saw more storefront traffic on the platform. This performance decline led to a direct impact on our merchant storefronts’ performance, where time-to-first-byte metrics from Shopify servers slowly crept up as time went on
Noisy neighbors.
> We learned a lot during the process of rewriting this critical piece of software. The strong foundations of this new implementation make it possible to deploy it around the world, closer to buyers everywhere, to reduce network latency involved in cross-continental networking, and we continue to explore ways to make it even faster while providing the best developer experience possible to set us up for the future.
Smaller deployable units; you don't have to deploy all of shopify at edge, you only need to deploy the component that benefits from running at edge.
It makes more sense for us to extract things than to make everything a microservice.
Storefront makes sense to be on its own service, so we are making it so.
- Handcrafted SQL.
- Reduce memory usage, e.g. use mutable map.
- Aggressive caching with layers of caches, DB result cache, app level object cache, and HTTP cache. Some DB queries are partitioned and each partitioned result is cached in key-value store.
Pages rendered in 12.2ms-18.3ms, giving plenty of room for network latency.
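The layered-cache idea from the list above can be sketched as nested read-through caches: an app-level object cache in front of a DB result cache in front of the database. The structure, class, and key names here are my assumption for illustration, not the commenter's actual implementation.

```ruby
# A read-through cache layer that falls back to whatever is behind it.
class Layer
  def initialize(name, backing)
    @name, @backing, @store = name, backing, {}
  end

  def fetch(key, trace)
    if @store.key?(key)
      trace << "#{@name} hit"
      @store[key]
    else
      trace << "#{@name} miss"
      @store[key] = @backing.call(key, trace)
    end
  end
end

db        = ->(key, trace) { trace << "db query"; "row:#{key}" }
db_cache  = Layer.new("db-cache", db)
obj_cache = Layer.new("obj-cache", ->(k, t) { db_cache.fetch(k, t) })

t1 = []
obj_cache.fetch(:product_1, t1) # cold path: miss, miss, db query
t2 = []
obj_cache.fetch(:product_1, t2) # warm path: obj-cache hit, DB never touched
```

Each layer only absorbs the misses of the layer above it, which is why a warm outer cache keeps the database idle entirely.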
I have recently been playing with RavenDB (from my all-time favorite engineer-turned-CEO). It approaches most of these as an indexing problem in the database, where the view models are calculated offline as part of the indexing pipeline. It approaches the problem from a very pragmatic angle; its goal is to be a database that is very application-centric.
It remains to be seen whether we will end up adopting it, but it will be interesting to play with.
Disclaimer: I am a former NHibernate contributor, and have been very intimate with AR features and other pitfalls.
You could specify that some collections be eagerly loaded and have NHibernate issue an additional SELECT statement to load the children, producing a maximum of 2-3 queries (depending on the eager-loading depth) but avoiding both the N+1 problem and the cartesian row explosion problem.
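The row-count arithmetic behind that trade-off is easy to show with made-up numbers (the counts below are illustrative; the same idea is why ActiveRecord's `preload` issues separate SELECTs instead of one big JOIN):

```ruby
# Joining two child collections in one query multiplies their rows
# (cartesian explosion); separate SELECTs keep row counts additive.
orders   = 100
items    = 5 # line items per order
payments = 3 # payments per order

# One query joining orders x items x payments:
join_rows = orders * items * payments          # 1500 rows, 1 query

# Three queries: orders, then all items, then all payments:
select_rows = orders + orders * items + orders * payments # 900 rows, 3 queries
```

Two or three cheap queries often move far less data over the wire than a single joined result set, which is the point the comment is making.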
Are you going to restructure literally thousands of employees and their teams, staffed with Rubyists and organized around your current setup?
Will you re-hire and/or re-train everyone?
That doesn't seem so obvious... At the scale of a team like Shopify, refactoring to a different language is probably a non-starter.
Could someone please explain how the ‘as a result’ follows from the active-active replication setup?
Due to the power-law distribution of traffic, ecommerce generally benefits a lot from things like caching and read-write splits. Reading between the lines, it feels like Shopify may not yet have sufficient experience in dealing with async replication and all the potential issues caused by replication lag. Fun times ahead.
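The classic replication-lag pitfall being hinted at, failing to read your own write because the read lands on a stale async replica, can be shown with two in-memory hashes standing in for the primary and the replica (purely illustrative, not Shopify's setup):

```ruby
# Two stores standing in for a primary database and an async read replica.
primary = {}
replica = {}

primary[:order_1] = "paid"  # the write goes to the primary
stale = replica[:order_1]   # an immediate read from the replica sees nothing

replica.merge!(primary)     # replication catches up asynchronously, some time later
fresh = replica[:order_1]   # now the read sees "paid"
```

Real systems close this gap with techniques like pinning a session to the primary right after a write, or waiting on a replication position before reading.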
When San Diego Comic-Con went live on funko.com (Shopify), the website was fine, but the checkout was bottlenecked by the API calls to shipping providers. Many were never able to check out, and Funko had to issue an apology.
Unfortunate that no matter how much you improve your own product, you may still be dependent upon others.
https://comicbook.com/irl/news/funko-pop-comic-con-2020-excl...
What users saw in terms of response time and perceived response time, and what users are seeing after the improvements.
We had evaluated Shopify for one of our projects, and aesthetically it is really good. However, time-wise their store took forever to do stuff.
This was a couple of years back, so hopefully things are much better now.
Basically, the article covers how much better THE TEAM doing the coding feels
What is the effect on the users using the stores?
I wonder how you would do that. You can't just hash the HTML. Do you take screenshots and compare?
That kind of a tool could be handy in lots of scenarios (comparing the same service written in two different languages or with different dependencies, etc).
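One plausible approach to comparing HTML responses without naively hashing them (my assumption, not necessarily what Shopify did) is to normalize away the parts that legitimately differ between two implementations, such as CSRF tokens, request IDs, and whitespace, and then hash the result:

```ruby
require "digest"

# Strip legitimately-varying parts of an HTML response before hashing,
# so two equivalent renders compare equal. The attribute patterns here
# are examples; a real tool would need a list tuned to the application.
def normalized_digest(html)
  normalized = html
    .gsub(/name="csrf-token" content="[^"]*"/, 'name="csrf-token" content=""')
    .gsub(/request-id="[^"]*"/, 'request-id=""')
    .gsub(/\s+/, " ")
    .strip
  Digest::SHA256.hexdigest(normalized)
end

a = %(<meta name="csrf-token" content="abc123">\n<h1>Store</h1>)
b = %(<meta name="csrf-token" content="xyz789">  <h1>Store</h1>)
normalized_digest(a) == normalized_digest(b) # the token difference is ignored
```

Any genuine difference in the rendered markup still changes the digest, so real regressions are not masked by the normalization.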
But how does their verifier mechanism deal with changes in the production database between responses? If the response of the legacy service comes first and the response of the new service comes after, couldn't the data in the database change between the two responses (the request being the same), causing the responses to fail verification when they otherwise should have passed? How do they maneuver around that issue?
Great write-up by the way! I really liked it :)
One slightly helpful mitigation we have in place relies on a data versioning system meant for cache invalidation. The version is incremented after data changes (with debouncing). To reduce false negatives, we throw out verification requests where the two systems saw different data versions. It's far from perfect, but it's been effective enough.
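That mitigation might look roughly like this in code (structure and names are assumed, based only on the description above): each system reports the data version it saw, and pairs with mismatched versions are discarded rather than counted as failures.

```ruby
# A verification result from one of the two systems, tagged with the
# data version that system observed while rendering.
VersionedResponse = Struct.new(:body, :data_version)

def verify(legacy, candidate)
  # The two systems saw different data; this comparison proves nothing,
  # so throw it out instead of recording a false negative.
  return :skipped if legacy.data_version != candidate.data_version

  legacy.body == candidate.body ? :match : :mismatch
end

verify(VersionedResponse.new("<html>a</html>", 7),
       VersionedResponse.new("<html>a</html>", 7)) # same version, same body
verify(VersionedResponse.new("<html>a</html>", 7),
       VersionedResponse.new("<html>a</html>", 8)) # versions diverged: skipped
```

The debouncing mentioned above would sit on the version counter itself, so rapid-fire writes don't fragment every verification window.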
Which is good. At Reddit they would have tried to rewrite everything in ReasonML and then tried to prove at the end that it is now faster.