At the end of the video they say there is no way any implementation of their algorithm gets close to beating existing algorithms in the real world.
https://cacm.acm.org/research/almost-linear-time-algorithms-...
On a similar note, a lot of work has gone into finding an optimal matrix multiplication algorithm. We know the lower bound is N^2, the obvious upper bound is N^3, and the best current algorithm (complexity-wise; not practical at all) is about N^2.37, but we don't know how fast it can really get. Is an N^2 algorithm possible? We don't know.
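For context, the first sub-cubic algorithm was Strassen's, which multiplies two n×n matrices using 7 recursive multiplications instead of the naive 8, giving O(n^log2(7)) ≈ O(n^2.807). A minimal illustrative sketch (assuming n is a power of two; real implementations pad and switch to the naive method for small blocks):

```python
def strassen(A, B):
    """Multiply square matrices whose size is a power of two (illustrative sketch)."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2

    def quads(M):
        # Split M into four m x m quadrants.
        return ([row[:m] for row in M[:m]], [row[m:] for row in M[:m]],
                [row[:m] for row in M[m:]], [row[m:] for row in M[m:]])

    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    a, b, c, d = quads(A)
    e, f, g, h = quads(B)

    # Seven recursive products instead of the naive eight.
    p1 = strassen(a, sub(f, h))
    p2 = strassen(add(a, b), h)
    p3 = strassen(add(c, d), e)
    p4 = strassen(d, sub(g, e))
    p5 = strassen(add(a, d), add(e, h))
    p6 = strassen(sub(b, d), add(g, h))
    p7 = strassen(sub(a, c), add(e, f))

    # Recombine the quadrants of the product.
    c11 = add(sub(add(p5, p4), p2), p6)
    c12 = add(p1, p2)
    c21 = add(p3, p4)
    c22 = sub(sub(add(p1, p5), p3), p7)
    return ([l + r for l, r in zip(c11, c12)] +
            [l + r for l, r in zip(c21, c22)])

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The N^2.37-ish algorithms (Coppersmith–Winograd and successors) push the same recursive idea much further, at the cost of constants so large they never pay off in practice.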
> Available computational power may catch up to the crossover point, so that a previously impractical algorithm becomes practical.
Are there examples of Galactic Algorithms that now aren't Galactic Algorithms?
edit: turns out some matmul algorithms are?
From O(mn) to O(m) ... thus dropping n (the number of vertices) from the bound ...
Too good to be true?
Still, a neat theoretical result.
TFA didn’t describe Kyng’s breakthrough in terms of this score it considers so important. What’s up with that?
https://news.ycombinator.com/item?id=31675015 (72 comments)
https://cacm.acm.org/research/almost-linear-time-algorithms-...
o(n) seems like a stronger statement to me than O(n), since all o(n) algorithms are O(n), but the reverse is not true.
Also, if o(n) applies to all n, however small, whereas O(n) applies only as n -> inf,
(From the Wikipedia page example: 2n = O(n) but 2n != o(n))
then doesn’t that mean this algorithm should be applicable even to small n? Then it would be the opposite of a galactic algorithm, as someone above suggested, wouldn’t it?
Or am I missing something?
lim (n -> infinity) f(n)/g(n) = 0
Or, in other words: for sufficiently large n, g grows faster than f. For instance, this function is o(n), because 1e1000/n goes to 0 as n grows:
f(n) = 10**n if n < 1000 else 1e1000
(Pseudo-Python for a piecewise function that grows exponentially up to 10**1000 at n = 1000 and then remains constant.) Ps: 3↑↑64 is not quite Graham's number; Graham's number is g_64, where g_1 = 3↑↑↑↑3 and each subsequent g_k uses g_{k-1} arrows.
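A runnable version of that piecewise function, using 10**1000 as an exact integer (the literal 1e1000 overflows a Python float to inf), plus an exact-arithmetic check that f(n)/n really shrinks toward 0 as o(n) requires:

```python
from fractions import Fraction

def f(n):
    # Grows exponentially up to n = 1000, then stays constant at 10**1000.
    return 10**n if n < 1000 else 10**1000

# o(n) requires f(n)/n -> 0. Using exact rationals, the ratio at n = 10**2000
# is 10**1000 / 10**2000 = 10**-1000 -- tiny, and it keeps shrinking.
ratio = Fraction(f(10**2000), 10**2000)
assert ratio == Fraction(1, 10**1000)
```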
In other words, it's an algorithm scheme that allows you to get an algorithm running in time O(m^ɛ) for any ɛ>1.
I'm not asking for help decoding the notation; I'm asking whether anyone knows what more detailed bound O(m^(1+o(1))) is abstracting.
Both are fascinating problems, but quite different. Finding shortest paths was typically solved with Dijkstra's algorithm [1], until someone discovered an amazing optimization scheme: precalculating some information that speeds up the search dramatically [2]. Thanks to this breakthrough, one can now interactively drag routes on Google Maps, for instance, and services like Rome2Rio are possible.
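The precomputation scheme itself is too involved for a comment, but the baseline it accelerates, plain Dijkstra with a binary heap, fits in a few lines. A sketch with illustrative names, assuming an adjacency-list input:

```python
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, edge_weight), ...]} -> shortest distances from source."""
    dist = {source: 0}
    pq = [(0, source)]  # (distance-so-far, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry; a shorter path to u was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

roads = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}
print(dijkstra(roads, 'a'))  # {'a': 0, 'b': 1, 'c': 3}
```

This runs in O((m + n) log n); the precomputation-based schemes answer individual route queries orders of magnitude faster by searching only a tiny preprocessed subgraph.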
Solving a problem for computational efficiency is pointless.
Why?
Take a look at AI neural networks, where they throw massive computational resources at problems.
Maybe one day this will help.
Reply to myself
Appreciate this post. And get back to writing.
Appreciation
Out of so many less interesting posts, this one caught my attention, yet nowhere does it explain how the algorithm works or, more importantly, why it is needed.