The astonishing improvement here is that we can compute exact flows in almost-linear time. Algorithms for computing almost-optimal flows in almost-linear time have been known for a while, so it was expected that someone would eventually find an algorithm that computes exactly optimal flows in almost-linear time. Well, looks like it's finally here!
I've only skimmed the paper but it seems to me that the authors draw on a set of techniques established for the almost-optimal case. These come with rather enormous constants, so it is unlikely that there will be a practical implementation of this algorithm any time soon.
ELI5 the following, please:
---
Given that "enormous constants" are involved (i.e., huge overhead, if that's worded correctly), would it be conceivable that, in the future, we could feed such zygote algorithms to some other AI/ML/algorithmic system, such that it churns through implementation scenarios quickly and refines them into a practical tool?
There are a lot of problems where this shows up, notably primality testing (we can do that deterministically in polynomial time, but the exponent is around n^6 or something, iirc) and matrix multiplication (Strassen's algorithm).
Surprisingly, this result is (in contrast to the max flow result) entirely combinatorial!
This looks like a possible improvement over the current algorithm for optimal pathfinding on the Lightning Network by René Pickhardt. It would be interesting to know what he thinks about it:
That doesn't really mean anything without additional context: what the questions you're talking about were, what the positions were, what you claimed about your background, approximately how long ago said PhDs were earned, what the interviewers actually expected from you (which may not have been the final solution at all), etc.
So much of what we learn, use, and understand was developed by people far more talented than us. Literally everything from algebra to binary digital logic. It should be obvious that something can be orders of magnitude more difficult to come up with the first time, when you have nobody to learn it from, and when the rest of the world is far behind where the field is, than it might be to come up with it when the entire world has advanced and you've (presumably) been taught extremely relevant material & perspective in a nice and polished classroom setting. Now, obviously, it can also be just as difficult in the second case as in the first case for some problems too. It depends entirely on the problem and context.
I think the algorithm is randomized rather than deterministic. For example, Theorem 1.1 explicitly mentions probability:
> There is an algorithm that, on a graph G = (V, E) with m edges, vertex demands, edge costs, and upper/lower edge capacities, all integral and bounded by U in absolute value, computes an exact min-cost flow in m^(1+o(1)) log^2 U time with high probability.
(See how their "theorem" is much easier to understand than the abstract...)
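For concreteness, here's the problem the theorem is about, solved with the textbook successive-shortest-paths method. This is emphatically not the paper's algorithm (it runs in roughly O(V * E * flow), nowhere near almost-linear), and the function name is just illustrative:

```python
from collections import defaultdict

def min_cost_flow(n, edges, s, t):
    """Textbook successive-shortest-paths min-cost flow (illustration only).
    n: number of nodes labeled 0..n-1; edges: (u, v, capacity, cost)
    with non-negative costs. Returns (max flow from s to t, its min cost)."""
    graph = defaultdict(list)  # residual graph: [to, cap, cost, rev_index]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    total_flow = total_cost = 0
    while True:
        # Bellman-Ford: cheapest augmenting path in the residual graph
        dist, parent = {s: 0}, {}
        for _ in range(n):
            updated = False
            for u in range(n):
                if u not in dist:
                    continue
                for i, (v, cap, c, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + c < dist.get(v, float("inf")):
                        dist[v] = dist[u] + c
                        parent[v] = (u, i)
                        updated = True
            if not updated:
                break
        if t not in dist:
            return total_flow, total_cost  # no augmenting path left
        # Find the bottleneck capacity along the cheapest path, then push
        push, v = float("inf"), t
        while v != s:
            u, i = parent[v]
            push = min(push, graph[u][i][1])
            v = u
        v = t
        while v != s:
            u, i = parent[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        total_flow += push
        total_cost += push * dist[t]
```

On a small example (3 units of flow routed over paths of cost 3, 3, and 4), `min_cost_flow(4, [(0,1,2,1), (0,2,1,2), (1,2,1,1), (1,3,1,3), (2,3,2,1)], 0, 3)` returns `(3, 10)`. The paper's contribution is doing this exactly in m^(1+o(1)) time instead.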
This is a kind of advanced algorithm with a sprinkle of optimization, so normal dudes won't be able to get a grasp on this easily. Also, this is too much for my caffeinated Monday morning.
Here's an example of how that works. Suppose you have a polynomial f(x) of degree n. I'll show a probabilistic algorithm that'll find an integer k such that f(k) != 0 with probability p >= 1/2.
Here goes: pick a random non-negative integer k below 2n. That's it, that's our answer.
What's the probability that f(k) != 0? Well, a polynomial of degree n can have at most n zeros, so at most n out of 2n non-negative integers below 2n are bad answers, so the probability of k being bad (i.e. f(k) = 0) is <= 1/2. Thus, probability of getting a correct answer is >= 1/2.
Now, probability 1/2 might seem pretty bad; we're, after all, rather likely to get a wrong answer. There's a simple way to deal with it, though: just repeat the algorithm as many times as you need. For example, if we repeat the algorithm 10 times, the probability of getting 10 wrong answers is 1/2^10 = 1/1024, rather small indeed. After 100 times, the probability of getting a wrong answer is negligible.
Of course, it's easy to come up with an exact, non-probabilistic algorithm for the problem above. However, my probabilistic algorithm is O(1), while it's easy enough to prove that no constant time, non-probabilistic algorithm for the problem exists. I imagine that the situation is similar with the algorithm in the paper above: I'd wager that we get almost-linear time thanks to probabilistic nature.
Oh, and I was also cheating above somewhat: sure, my algorithm is O(1), but verifying that the answer is actually correct is O(n), so I cannot get an arbitrarily small failure probability with my algorithm using the method above in constant time. The situation is, however, better in the context of the maximum flow problem, because one can, in fact, check whether a flow is maximum in linear time, so we can amplify the probability of getting a correct answer without losing the linear complexity.
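The whole sketch above, with amplification, fits in a few lines of Python (the function name is made up for illustration; picking k is the O(1) step, and the `f(k) != 0` check is the O(n) verification mentioned above):

```python
import random

def find_nonzero_point(f, n, rounds=64):
    """Find k with f(k) != 0, where f is a polynomial of degree n.
    Each attempt samples k uniformly from {0, ..., 2n-1}; since f has
    at most n roots, each attempt succeeds with probability >= 1/2,
    so the failure probability after `rounds` attempts is <= 2**-rounds."""
    for _ in range(rounds):
        k = random.randrange(2 * n)  # the O(1) sampling step
        if f(k) != 0:                # the O(n) verification step
            return k
    return None  # astronomically unlikely for a genuine degree-n polynomial

# Example: f(x) = x(x-1)(x-2), degree 3, roots at 0, 1, 2
f = lambda x: x * (x - 1) * (x - 2)
k = find_nonzero_point(f, 3)
```

With the default 64 rounds the failure probability is at most 2^-64, which is the "negligible" regime from the comment above.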
Excusing such only encourages it IMHO. I think it was Einstein who said "If you can't explain it to a six year old, you don't really understand it." Stupid "mathies".
But I agree with the other commenters that implementing this is often quite difficult, and there might be extremely large constants involved.
If you're asking for a more blog-post-style exposition, this was also discussed on a competitive programming forum where the authors are active:
https://codeforces.com/blog/entry/100510
Archive link since the site seems to be down for maintenance at this moment: https://web.archive.org/web/20220309063531/https://codeforce...
Screenshots of the most relevant parts of the discussions: https://twitter.com/BooleanAnalysis/status/14991905346230845...