- Unless somebody finds a polynomial algorithm for an NP-complete problem (which is a taller order than just proving P=NP), several interesting problems will continue to be infeasible to solve exactly in the general case with large data.
- If, in addition, quantum computers don't prove to be viable, commonly used cryptosystems such as RSA, AES, and ECC will probably continue to be secure, provided they're used correctly.
- Results like the Two Generals Problem, the CAP theorem, etc. will still make distributed systems difficult to work with and require tradeoffs.
- Rice's theorem, that it is impossible to determine computational properties of arbitrary programs, will still apply, making static analysis (including antivirus programs, security scans, etc.) heuristic rather than exact.
- etc.
I think this is misleading. There are many exact static analyses---proof-checking in theorem provers like Coq is an exact static analysis. More generally, type checking can be an exact static analysis that guarantees semantic properties of your programs, like termination.
If you can force your programs to be in a certain form (e.g., statically rejecting type incorrect programs), you can sufficiently restrict the class of programs (Turing machines) that you're considering that you can indeed determine non-trivial computational properties of your programs.
I should probably have been more specific by writing "decide" instead of "determine", because you can absolutely 'determine' a computational property as long as you're willing to ignore false negatives. For example, it's easy enough to write a termination checker by just checking for loops and equivalent constructs (or e.g. in Idris, by requiring that all functions are total), but that will of course reject a large number of programs that do in fact terminate.
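A conservative checker of the kind described above can be sketched in a few lines. This is a toy illustration, not a real analysis tool: it accepts a Python function only if its body contains no loops and no direct recursion, so everything it accepts terminates, at the cost of rejecting many programs that do in fact terminate.

```python
# Deliberately conservative "termination checker": sound but very incomplete.
# It answers True only when termination is obvious (no loops, no direct
# recursion); every False may be a false negative.
import ast
import inspect

def definitely_terminates(fn):
    """Return True only if fn provably terminates under this crude check."""
    tree = ast.parse(inspect.getsource(fn))
    func_def = tree.body[0]
    for node in ast.walk(func_def):
        # Reject any looping construct outright.
        if isinstance(node, (ast.For, ast.While, ast.AsyncFor)):
            return False
        # Reject direct recursion (a call to the function's own name).
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id == func_def.name:
                return False
    return True

def add(a, b):
    return a + b

def countdown(n):  # terminates for every n >= 0, but is rejected anyway
    while n > 0:
        n -= 1
    return n
```

Here `definitely_terminates(add)` is `True` while `definitely_terminates(countdown)` is `False`, which is exactly the false-negative tradeoff being described: Rice's theorem only forbids a checker that is sound *and* complete.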
Coq is not a Turing-complete language, so Rice's theorem doesn't apply. But almost all people are not writing programs in Coq.
I think static types are great, but they don't contradict any of this.
A proof that P=NP immediately gives a polynomial-time algorithm for NP complete problems via universal search. It’s so wildly impractical as to probably not change anything, but it _is_ in P.
Can you explain more about what universal search is, and/or where I can read about how it would solve the problem?
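Universal search (due to Levin) works roughly like this: enumerate all programs and run them in an interleaved ("dovetailed") schedule, checking every output against the problem's polynomial-time verifier; return the first output that verifies. If P=NP, some fixed program in the enumeration solves the problem in polynomial time, so the whole search runs in polynomial time too, with an astronomical constant factor. The sketch below is a toy: it replaces "all programs" with a fixed two-program list and uses subset-sum as the NP problem; every name in it is illustrative.

```python
# Toy model of universal (Levin) search: dovetail over an enumeration of
# "programs" (here: generators yielding candidate solutions) and stop at the
# first candidate accepted by the polynomial-time verifier. Note it only
# finds solutions; on a "no" instance it runs forever.
from itertools import product

def subset_sum_verifier(nums, target, candidate):
    """Polynomial-time check: does the chosen subset sum to target?"""
    return sum(n for n, chosen in zip(nums, candidate) if chosen) == target

def enumerate_programs(nums):
    """Stand-in for 'all programs': each yields candidate bit-vectors."""
    def brute():  # program 0: brute-force over all subsets
        yield from product([0, 1], repeat=len(nums))
    def useless():  # program 1: yields the empty subset forever
        while True:
            yield (0,) * len(nums)
    return [brute(), useless()]

def universal_search(nums, target):
    programs = enumerate_programs(nums)
    while True:  # dovetail: each round advances every program one step
        for prog in programs:
            candidate = next(prog, None)
            if candidate is not None and subset_sum_verifier(nums, target, candidate):
                return candidate
```

The standard reference is Levin's 1973 paper on universal sequential search problems; the key point is that correctness only needs the verifier, while the running-time guarantee comes from the (hypothetical) fast solver sitting somewhere in the enumeration.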
Yes, Einstein's theory of relativity was a change from Newtonian physics but it's a fairly minor correction for most practical purposes and Newtonian physics is still important to know and understand.
So yeah, our understanding of physics will likely change, but it'll only matter in more and more extreme edge cases and will likely build on our current understanding. Maybe it'll result in us finally having fusion reactors, room-temperature superconductors, or quantum computers, but you're still going to get a roughly parabolic arc when you throw a ball through the air.
The whole beauty of science is that it doesn't ever claim to have static, absolute answers - it's constantly growing and changing as we learn more about everything.
Likewise, the humanities are always growing and changing and being reinterpreted, reflecting what and how we can understand now.
> Who knows if software will be nearly recognizable in 10-20 years from now
Software goes through rapid cycles of invention and forgetting what's come before. Its totem animal is a Nobel laureate goldfish. That doesn't change.
Goldfish have good memories it turns out: https://www.bbc.com/news/uk-england-oxfordshire-63242200
That's wrong for maths and, by extension, theoretical CS. I mean, sure, some of the answers come with caveats ("assuming P != NP", etc.), and in theory, all of mathematics could be proven inconsistent (but that to me is completely unreasonable to believe), but for all intents and purposes, these answers are static and absolute.
- people give the orders
- people approve implementations (e.g., implementations handed over by an AI)
- people who approve implementations need to save face when the implementation turns out buggy
Even if AI reaches a level at which it can do all of the points above, it would diminish its own value. Example: if I could launch a Spotify alternative with a few prompts using ChatGPT version 10, then a million guys like me could do it as well... meaning, no one will bother doing it.
Of course, unless some sort of weird tech shift happens that makes the browser obsolete altogether, I suspect most HTML/CSS/JavaScript won't ever change anyway. Browsers are backwards compatible to a similar degree as Microsoft and Windows. If even stuff like the center tag is still supported in 2024, most things aren't going anywhere.
On a less specific note, I guess poor planning and software development practices? Feels like estimating how long things are going to take hasn't gotten much better in the last few decades, with things like 'agile' barely making a dent in it. I suspect projects overrunning, feature and scope creep, big-budget disasters, etc. will probably be issues in society till the end of time.
The Basecamp founders often talk about the advice they received from Jeff Bezos, which was "Focus on the things that won't change in your business." [1] He was referring to things like "fast delivery" and "good customer service." But it means a lot in a professional context, too, because those are the things worth learning well.
[1] https://medium.com/@seansheikh/bezos-wisdom-focus-on-the-thi...
And of course, maths. I graduated in maths decades ago, and I always find it amusing when I see some tutorial on linear algebra making it to the top of HN, as if it were some fashionable new technology. That being said, my math knowledge hasn't translated into software engineering skills.
1. LaTeX. I can still compile my documents that I wrote 20 years ago...
2. The thirst for profit/quick releases over reliability (except for a few examples like LaTeX)
3. The existence of open source software as the one antidote to all software being horrible
4. The obsession with creating new things for the sake of creating, instead of for the good of anything
I had an initial document and development environment running within twenty minutes. That's impossible with LaTeX. In fact, for years, I had a tailor-made Docker image just for keeping LaTeX running, compiling and sane (I use more advanced features to make LaTeX bearable in $CURRENT_YEAR). That setup broke the other day.
I never investigated why, because an ecosystem where one has to go to such lengths in the first place, only to have it break, is not one I want to be a part of any longer. For typst, I can just grab the binary of the version I used and it will just work forever (or just compile it, which I have confidence will also be pretty stable for many years to come thanks to Rust).
Certainly not impossible. I don't find LaTeX hard to install, at least on a modern Linux distribution such as Ubuntu (and if memory serves, it wasn't hard on macOS either).
I agree that setting up a basic template from scratch can be tedious and I wish this was better, but the common approach for newbies is to copy a template from somewhere, and for more advanced users, they probably have some base template with personal tweaks that they keep reusing (I know I do, not only because I hate Computer Modern).
There are still a whole number of issues with LaTeX (such as incompatible packages, the inconsistency in font handling between pdflatex and xelatex, beamer is generally IMHO a mess, etc.) but what GP wrote - that old documents will continue to compile and give the same results - is true.
How companies work, how humans interact, how users behave -- I don't see how that'll change anytime soon at all.
I've seen pretty dramatic changes in both of those in the last 5 years. Human interaction seems to have become a lot shittier. Users' behaviour seems more entitled. How people behave and interact also differs quite a lot based on culture/geographic location.
Sure, at the end of the day we're all human with more or less the same wants and needs, but how we express them is neither uniform nor fixed.
https://en.wikipedia.org/wiki/Lindy_effect
(e.g. it's more likely that C, Fortran and Cobol are still around in 50 years than some of the more recent programming languages)
https://www.amazon.com/Same-Ever-Guide-Never-Changes/dp/0593...
A product manager wants our team to build something that nobody actually needs? I may wind up getting laid off for that.
Imagine ChatGPT could "translate" from grumpy-old-curmudgeonish into friendly-human in realtime; then which soft skills would still be valuable? Imagine going to a shop and being served by a grumpy curmudgeon whose poor English was translated instantly into great customer service.
What changes with this? To what extent will this ever be possible?
unless you were being sarcastic.
Let's say I believe it is my destiny to make sure all humanity evolves to become enlightened and reach the next rung?
Let's also say a constraint is I disbelieve in humanity as an entire collective body can reach enlightenment by itself - and therefore must be pulled, due to human base desires.
Knowing this, how can I accomplish it? I have theories, but I don't want to pre-form the suggested solutions.
I'm in edtech building this.
it's like having 1 teacher with 25 assistants.
QWERTY keyboards
Taxes
or, wind event = trees down.
and on and on.