> (Previous article) Our hypothesis: By making smarter kitchen equipment we can collect more data. By applying data to our restaurant, we can build more intelligent systems. By building more intelligent systems, we can better scale our business.
I must admit, from an outsider's perspective, it really sounds like a bunch of buzzwords justifying a solution in search of a problem. Their example of forecasting waffle fries reminds me of a failed startup that forecasted how many checkout lines to open via computer vision (which I can't find on Google). In the end, it turned out to be a lot easier for a human manager to simply open a new line when required, and the computer vision forecasts were never accurate enough to be useful. I wonder what CFA's success criteria and metrics are for this project.
Tech-wise, wouldn't it be a lot simpler to run a single-node, single-application setup that gets updated via something like RAUC? Especially with a small team (which they emphasized), adding a Kubernetes cluster at the edge seems to add complication without much benefit, other than "redundancy" (how redundant is a single rack with the same power source, anyway?). Also, how would they get an important security update to the host if it became necessary?
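For reference, the single-node alternative would be RAUC's A/B slot scheme: two root filesystem partitions, with updates written to the inactive one. A minimal config might look something like this (device paths, names, and the bootloader choice here are illustrative assumptions, not anything from the article):

```ini
# /etc/rauc/system.conf - minimal A/B rootfs setup (illustrative sketch)
[system]
compatible=store-edge-node
bootloader=grub

[slot.rootfs.0]
device=/dev/sda2
type=ext4
bootname=A

[slot.rootfs.1]
device=/dev/sda3
type=ext4
bootname=B
```

`rauc install <bundle>` then writes the new image to whichever slot is currently inactive and marks it for boot, so a botched update just falls back to the old slot instead of bricking the store.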
It's a lot of nitpicks, but the project overall is very cool. Sounds like they solved a lot of hard tech problems and executed well on the ops.
For a restaurant chain, this is worth the development effort. Once you've figured it out and run it for a few years to demonstrate its reliability, you can pitch the shift from a network-optional edge at each site to a network-dependent site with intelligent components hanging off it and depending on it. That's a pathway to a major competitive advantage in the medium term, one your competitors won't be able to put into place overnight once they realize you've left them behind.
You can't get there with the amount of effort often put into untrusted edge sites like this - aka a PC in a cupboard. You also can't get there with cloud when the weakest element in the chain is unreliable site connectivity.
They could have done it in a lot of different ways, but going with cheap commodity hardware and avoiding expensive cluster license nonsense (vSphere etc.) were smart choices. Spend that money on a competent centralized tech team rather than vendor shininess, and you can do a hell of a lot more (and often move faster, to boot).
CFA leads the industry in revenue per site, and I think more accurate forecasting is a significant factor contributing to this. Their sites aren't larger or better located than their competitors'. In fact, they're often right next to their competitors in a similar footprint. Since I have a CFA nearby which I drive by multiple times a day, I've seen firsthand that they always have substantially more cars in the drive-thru line and parking lot than their competitors in the same strip mall parking lot.
Customers will see the overflowing CFA drive-thru line and still choose to pull in, because they've learned that CFA's throughput is dramatically faster than their competitors'. In my experience I'd guess about 2x-3x faster, which is incredible when you think about it. They achieve that by getting a lot of things right, but it seems obvious that keeping the order backlog moving as fast as possible through more accurate load prediction would be a key factor.
Mostly though, any data they collect will be very valuable, as forecasting is a core component of fast food logistics. Fast food lives and dies on efficiency.
Maybe this?
Saying you want to invest in ongoing, intense, data-driven store innovation, then building the whole thing atop a platform you don't control and can't rely on (it may get discontinued, the price may balloon, it may become a barrier to technical innovation), seems like an obviously bad move.
Finding smart people, rolling up your sleeves, and recognizing this as a core competency (an enabler, a driver of your business) rather than outsourcing the problem is the right move. If future teams do a better job building edge Kubernetes, there should also be good portability.
It's all the more important as walk-ins and drive-thrus decline while deliveries continue to rise.
> Our hypothesis: By making smarter kitchen equipment we can collect more data. By applying data to our restaurant, we can build more intelligent systems. By building more intelligent systems, we can better scale our business.
> As a simple example, imagine a forecasting model that attempts to predict how many Waffle Fries (or replace with your favorite Chick-fil-A product) should be cooked over every minute of the day. The forecast is created by an analytics process running in the cloud that uses transaction-level sales data from many restaurants. This forecast can most certainly be produced with a little work. Unfortunately, it is not accurate enough to actually drive food production. Sales in Chick-fil-A restaurants are prone to many traffic spikes and are significantly affected by local events (traffic, sports, weather, etc.).
> However, if we were to collect data from our point-of-sale system’s keystrokes in real-time to understand current demand, add data from the fryers about work in progress inventory, and then micro-adjust the initial forecast in-restaurant, we would be able to get a much more accurate picture of what we should cook at any given moment. This data can then be used to give a much more intelligent display to a restaurant team member that is responsible for cooking fries (for example), or perhaps to drive cooking automation in the future.
> Goals like this led us to develop an Internet of Things (IOT) platform for our restaurants. To successfully scale our business we need the ability to 1) collect data and 2) use it to drive automation in the restaurant.
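The micro-adjustment idea in the quoted passage can be sketched in a few lines. Everything below (function names, the blending weight, the numbers) is an illustrative assumption, not CFA's actual model:

```python
def adjust_forecast(baseline_per_min, observed_orders_per_min, wip_units, alpha=0.5):
    """Blend a cloud baseline forecast with the live in-store demand signal.

    baseline_per_min: units/min predicted by the cloud model
    observed_orders_per_min: units/min currently seen at the POS
    wip_units: units already cooking (work in progress, from the fryers)
    alpha: weight given to the live signal (illustrative choice)
    """
    # Nudge the cloud baseline toward what the registers are actually seeing.
    blended = (1 - alpha) * baseline_per_min + alpha * observed_orders_per_min
    # Subtract what's already in the fryer so we don't double-cook.
    return max(blended - wip_units, 0.0)

# Cloud says 12 units/min, registers show a spike to 20, 4 units already cooking:
print(adjust_forecast(12, 20, 4))  # 12.0
```

The point is just that the expensive global model runs in the cloud, while this cheap correction runs in-store against real-time signals, which is why it has to keep working when the site's uplink is down.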
The football game next door is over and the home team won? Start extra burgers in anticipation of hungry fans? Great, I buy that. But the whole thing can be one app running on an iPad with multiple redundant data plans enabled, eSIMs from AT&T and Verizon or whatever. You're going to need a touchscreen tablet for the POS anyway; no need for additional hardware or Kubernetes.
The iPad is going to handle that and signal to the cooks to drop more chicken tenders?
Making each location its own failure domain was also a huge win. Imagine a cloud outage taking out hundreds of stores.
Making everything in the store a dumb client is cheap and easy, but also fragile. Doing as much computing in each store (and even on each POS) as possible is great for HA but now you have more complicated hardware and software deployment problems. Different merchants trade these off in different ways.
CFA seem to have gone for a lot of computing in the store, and the rest of the design is about mitigating those deployment and maintenance problems and costs. I like the NUC cluster, GitOps, API, and support team stories. I'm less keen on the K3s deployment per store; it seems like a questionable choice of orchestration engine for this scenario, but maybe there are details of the rest of their store architecture that I'm missing.
The 486-based PC had a mix of grease and lint/dust on every possible surface, including the power supply fan, all cabling, and the entirety of the motherboard. It had been placed on a shelf near one of the deep fryers and had run without problems for years. Certainly the other end of the 'long tail' of computing!
https://medium.com/chick-fil-atech/enterprise-restaurant-com...
There are upstream benefits for marketing to see the feedback of their campaigns in real-time, but this is mostly about keeping inventories low and service times lower. That equates directly to profit.
CFA isn't the only chain working on this, but they're the most open about how they're implementing the infrastructure.
They're basically the mob of the fast food industry. They only want $10k from you to start your franchise, and they cover all the costs of starting the business. The tradeoff is that they take the highest percentage cut of any franchise.