It was a collaborative algorithmic optimization exercise. There wasn't a "right" answer I was looking for. If they noticed something I hadn't, that would have been great. Collaborative algorithmic optimization has been a part of my job across several industries.
Among other things, I used to work on Google's indexing system, then on high-volume risk calculations and market data for a large multinational bank, and now on high-volume trading signals for a hedge fund.
For instance, corporate clients of banks sometimes need insurance against some specific scenario, and if you can narrow that insurance down to exactly what they need, you can offer it cheaper than competitors. These structured products/exotic options can be difficult to model. For instance, say an Australian life insurance provider is selling insurance in Japan, getting paid in JPY, and doing their accounting in AUD. They might want insurance against shifts in the Japanese mortality curve (Japanese policyholders dying faster than expected) over the next 30 years, but they only need you to cover 100% of their losses over 100 million AUD. You run the numbers, you sell them this insurance at a set price in AUD for the next 30 years, and you do your accounting in USD. (The accounting currency, the numéraire, is relevant.) There's basically nobody who would be willing to buy these contracts off of you, so to a first-order approximation, you're on the hook for these products for the next 30 years.
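Roughly, the payoff behaves like a call option on the client's realized losses. Here's a minimal sketch with made-up names, using the numbers from the example above:

```python
# Rough sketch of the payoff shape described above, with made-up names.
# The contract behaves like a call option on the client's losses: the
# bank covers 100% of losses above a 100M AUD attachment point, and
# books the result in its own accounting currency (USD).

ATTACHMENT_AUD = 100_000_000  # losses below this stay with the client

def payout_usd(client_loss_aud: float, aud_usd_rate: float) -> float:
    """Bank's payout for one settlement period, in USD."""
    excess_aud = max(client_loss_aud - ATTACHMENT_AUD, 0.0)
    return excess_aud * aud_usd_rate
```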
If you can offer higher fidelity modeling, you can offer cheaper insurance than competitors. If you re-calculate the risk across your entire multinational bank daily, you can safely do more business by better managing your risk exposures.
Daily re-calculations of risk for some structured products end up cutting into the profit margins by double-digit percentages. Getting the algorithms correct, minimizing the amount of data that needs to be re-fetched, and maximizing re-use of partial results can make a difference of several million dollars in compute cost per year for just a handful of clients, and determine whether the return on investment justifies keeping an extra structurer or two on the desk.
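A minimal sketch of the partial-result re-use idea (the function and stage names are illustrative, not the real system): key each expensive intermediate on exactly the inputs that feed it, so a daily run only recomputes the stages whose inputs actually moved.

```python
import hashlib
import pickle

_cache: dict[str, object] = {}

def _key(stage: str, *inputs) -> str:
    return stage + ":" + hashlib.sha256(pickle.dumps(inputs)).hexdigest()

def cached(stage: str, compute, *inputs):
    """Recompute a stage only if its inputs changed since the last run."""
    k = _key(stage, *inputs)
    if k not in _cache:
        _cache[k] = compute(*inputs)
    return _cache[k]

# e.g. mortality-path simulations rerun only when the mortality curve
# moves, even on days when only the FX forwards changed:
# paths = cached("mortality_paths", project_mortality, mortality_curve)
```

A real system would also evict stale entries and persist the cache across runs, but the shape is the same: hash the inputs, skip the work.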
In order to properly manage your risk, determine what trades the rest of your businesses can safely put on each day, etc., you need to calculate your risk exposure every day. These exposures are basically the first partial derivatives of the value of the structured product with respect to potentially hundreds of different factors. Every day, you take current FX futures and forward contracts to estimate JPY/AUD and AUD/USD exchange rates for the next 30 years. You also make 30-year projections of the Japanese mortality curve, and use credit index prices to estimate the probability that the client goes out of business (counterparty risk) over those 30 years. Obviously, you do your best to minimize re-calculation across the 30-year horizon, incrementally updating the simulations as inputs change rather than re-calculating from scratch.
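Concretely, "first partial derivatives with respect to hundreds of factors" usually means bump-and-revalue. A hedged sketch, assuming a hypothetical `price()` function standing in for the real pricing model:

```python
from typing import Callable, Mapping

def risk_exposures(
    price: Callable[[Mapping[str, float]], float],
    factors: Mapping[str, float],
    bump: float = 1e-4,
) -> dict[str, float]:
    """One-sided finite-difference sensitivity to each input factor."""
    base = price(factors)
    exposures = {}
    for name, value in factors.items():
        bumped = dict(factors)       # bump one factor at a time
        bumped[name] = value + bump
        exposures[name] = (price(bumped) - base) / bump
    return exposures

# `factors` might hold hundreds of entries: each point on the JPY/AUD
# and AUD/USD forward curves, each tenor of the Japanese mortality
# curve, credit index levels for counterparty risk, and so on.
```

Note that each bump is a full repricing, which is exactly why re-using partial results matters so much.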
For the next 30 years, the structured product desk's exposure to shifts in the Japanese mortality curve affects how much trading in Japanese Yen and Australian Dollars the FX desk can do, how much the Japanese equities desk needs to hedge its FX exposure, etc. You could have fixed risk allocations for each desk, but ignoring potential offsetting exposures across desks means leaving money on the table.
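A toy illustration of why netting across desks matters (the desk names and numbers are invented):

```python
from collections import defaultdict

# Signed exposures (in USD) per risk factor, per desk -- all invented.
desk_exposures = {
    "structured_products": {"JPY/AUD": -4.0e6, "jp_mortality_10y": 2.5e6},
    "fx":                  {"JPY/AUD": +3.5e6},
    "jp_equities":         {"JPY/AUD": +0.8e6, "nikkei": 1.2e6},
}

net = defaultdict(float)
for exposures in desk_exposures.values():
    for factor, amount in exposures.items():
        net[factor] += amount

# Net JPY/AUD exposure is about +0.3e6 even though the gross is 8.3e6;
# fixed per-desk allocations would effectively reserve against the gross.
print(dict(net))
```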
You can't sell these trades to another party if you find your risk calculations are getting too expensive. You are stuck for 30 years, so your only option is to roll up your sleeves and really get optimizing. Either that, or you re-calculate your risk less often, add a larger safety margin, and do a bit less business across lots of different trading desks.
I started asking this as an interview question when I noticed a colleague had implemented several O(N^2) algorithms (and even one O(N^3)) that had O(N) alternatives. Analysis for a day of heavy equity trading went from 8 hours down to an hour once I replaced my colleague's O(N^2) algorithm with an O(N) algorithm.
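The actual code was nothing I can share, but the shape of the fix was often this kind of thing: replace a nested scan with a single pass over a hash index. A representative (made-up) example:

```python
def pair_fills_quadratic(fills, orders):
    # O(N^2): for each fill, linearly scan every order for its match
    return [(f, next(o for o in orders if o["id"] == f["order_id"]))
            for f in fills]

def pair_fills_linear(fills, orders):
    # O(N): index the orders by id once, then each lookup is O(1)
    by_id = {o["id"]: o for o in orders}
    return [(f, by_id[f["order_id"]]) for f in fills]
```

On a heavy trading day with millions of records, that difference is exactly the 8-hours-to-1-hour kind of change.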
The point of the exercise is mostly to see if they can properly analyze the effects of algorithm changes, in an environment where the right algorithm saves tens of millions of dollars per year.
Granted, it's a bit niche, but not really that niche. I've been doing this sort of stuff in several different industries.