Now think about the kind of innovation that companies like Uber and Deliveroo represent. While ostensibly they might be tech companies, in reality their business models revolve around a workforce of contractors who lack employment benefits and job stability. Their innovation is primarily in making more people work for less and take on all the risk.
But proclaiming the "Value of Work" is just arguing for the "Merits of Drudgery."
I can't wait to not work ever again. The weak reply that "doctors have valuable employment that gives them meaning" is completely beside the point. Doctors like helping people, or the challenge of solving an ailment, or the high status of being a doctor in society, or the high pay.
But they don't like paperwork, or interacting with insurance companies. Most work is like that: low status, repetitive, boring, meaningless. Trading away the best hours of the day in the best years of your youth is a terrible bargain, but it persists because it is connected to survival and status.
Break the connection and humanity prospers.
I don't see UBI as freedom. It shares a lot with slavery, in that someone else is feeding you, and therefore has control over you.
What I hope for in the future is a fully independent machine that each person owns, capable of caring for them by providing food, shelter, etc., and capable of building a clone of itself.
In this way you are truly free, having a machine which you own and can shut down and leave at any time if you wish.
More or less any job a human can do, technology will do better. If not today, then soon. And so even if we took a purely market point of view, it's pretty obvious that humans and work aren't really an optimal cocktail once there is a technological alternative.
Work for less than... what? Than their side gig had Uber not existed? I see these companies' primary innovation as one of communication: they allow supply to connect and communicate with demand in a much more efficient way than ever before.
TBH, winning a game of chess or go has little value in itself, except for the limited market of selling chess or go software. The reason they are doing that is mostly for publicity. IBM makes computers, and they show how good they are at it by having one beat top players at chess, and Google makes machine learning based products and they use AlphaGo to show how good they are at it.
Chess and go don't drive innovation, they are just a side effect of real innovation.
If you start requiring high-dimensional empirical data where the generating dynamics aren't Markovian (or aren't neatly predictable with a Markovian simulator, even if God considers them fully determined), you start having to do stuff like full-blown physics simulations while also specifying agent goals in terms of those physical states. Then you've got the machine learning part and the simulation part taking up comparable amounts of compute power, and self-supervised training becomes much more difficult.
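A toy back-of-the-envelope in Python of that compute split (the FLOP figures are my own placeholder assumptions, not from any paper):

    # Illustrative only: assumed per-step costs for learner vs. simulator.
    nn_flops = 5e9            # one policy/value network forward pass (assumed)
    sim_flops_board = 1e5     # applying the rules of Go/chess: negligible
    sim_flops_physics = 5e9   # one step of a full physics simulation (assumed)

    for name, sim in [("board game", sim_flops_board),
                      ("physics sim", sim_flops_physics)]:
        share = sim / (sim + nn_flops)
        print(f"{name}: simulator takes {share:.0%} of per-step compute")

With numbers like these the board-game simulator is a rounding error, while the physics simulator eats half of every self-play step, which is exactly the regime where self-supervised training gets painful.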
I think you are overestimating it; there isn't a single interesting theoretical insight in AlphaGo's papers.
What theoretical innovations did AlphaGo Zero provide?
Google was working on machine learning for some practical application like image classification, better targeted ads or whatever thing Google does. A bunch of people then came up with the idea: "hey, we have all that AI stuff, we may be able to use it for computer go". And Google replied with "OK, sounds like good publicity, here is a budget, we also have a bunch of servers and if you need help, feel free to ask our machine learning department".
It is like making an industrial robot that can crush concrete blocks, or some other difficult but not-that-useful task. Maybe it is a huge deal because all previous attempts failed, but the point then is not that years of research into concrete-crushing robots have paid off, but rather that recent advances in practical engineering made it possible, and maybe even easy.
"Nevertheless, I believe that a world-champion-level Go machine can be built within 10 years" - Feng-Hsiung Hsu, 2007 (researcher who worked on Deep Blue). https://spectrum.ieee.org/computing/software/cracking-go
I'm not certain this is true. Take OpenAI as a potential counterexample. While not working on chess or go (at least, at the moment), they are likely to be considered innovative while still working on problems that would be considered roughly equivalent. Much of the field that has long been called AI (not the current deep learning approaches) was pioneered by working on chess and go. It may well be the case that proving that a new class of techniques works on these well-studied games is the first step in the innovation process, where those techniques are then taken and applied to other problems.
AGZ could probably be retrained for other board games, but the hardware cost to train it is quite high: the estimated cost to train AGZ (for 40 days?) was $25M.
Physics has made a lot of progress in materials (nanotech, weird polymers, and so on), in building batteries, and in finding the Higgs boson and gravitational waves, and I'm sure in plenty of other fields.
Medical research has advanced a lot with the invention of CRISPR.
CS has grown a lot in AI and quantum computing...
For example, fusion drives, not traditional stored rocket propellant engines, will be necessary to navigate between planets and the outer solar system. Also, existing known behaviors/laws of physics aside, the book posits that colonization of other planets/stars in the universe requires achieving light speed travel (along with hibernation technology).
However, the other argument the book series makes is that there needs to be a strong motivator to get all the countries and economies of the world to focus on fundamental research and applying it.
So the interests of Earth governments are not aligned with star travel, and only marginally aligned with colonizing e.g. Mars. The groups who want to go there will have to do it themselves. (See Elon Musk.)
You are overestimating the importance of CRISPR. Don’t get me wrong — it is hugely important and innovative. But it’s not even the most important biomedical research innovation of recent years (I’d give that title to RNA-seq or more generally next-gen sequencing technology; but there are several contenders).
But at any rate all the innovations you list are — at least partly — driven by fundamental research in the public sector, not private companies.
NGS, on the other hand, is more or less a gradual development. While the exact technology may be novel or unique, the whole process is not, as can be seen from the various competing technologies that existed, and still exist, within about one or two orders of magnitude of each other in pricing over time. And gene expression read-out in particular is not that novel at all, given that arrays existed for some time. Sure, the individual applications are cool, but quite a few could be done with related technologies.
But sure, in the end it's not about whether one thing matters more than another; all of them together make for great progress.
That is quite a positive view. As a university professor I see a lot of speculative work. It is presented as fundamental, but makes assumptions which do not hold in the real world. Hence a lot of it is mostly ignored.
Fundamental research is not less active, but it's happening in different places (e.g. the Google Brain team). Find the most profitable companies and you'll find the research.
And to suggest a computer Go player, taught in a few days, is a "marginal improvement" over decades of "AI research": as my kids say, "wut?"
If anything, today's deep-learning-driven AI is a prime example of how fundamental research can work (neural networks were considered "fringe" research by many until about 10 years ago).
Maybe I'm the one that has it backward, but I'm pretty sure that Harford would not agree with the statement that Alpha Go Zero is only a "marginal improvement".
To the contrary, he says that Alpha Go is an "outlier", and uses it as an example of the sort of "speculative research" we should be doing more of: "Productivity and technological progress are lacklustre because the research behind AlphaGo Zero is not typical of the way we try to produce new ideas."
Apparently he should have been clearer, but I took the article as a call for more real research of the type that produced Alpha Go, and fewer of the "pragmatic shortcuts" and "brute-force approaches that taught us little but played strong chess"
The author's choice of examples, featuring a counter-example prominently, seems odd - perhaps it is to capitalize on the interest in AlphaGo Zero. The article is something of an anachronism, in that it would have worked better immediately after Deep Blue (or even after Watson/Jeopardy).
Aren't you contradicting yourself a bit there? If it's only/mainly the most profitable companies and the topics of interest to them, then that hardly indicates fundamental research is very active. There can't be that many fundamental research positions across those companies.
(caveat: the article is currently unavailable "Error establishing a database connection" so i'm not entirely clear if it's just about fundamental research in AI areas or not).
I think the primary argument is that much of the industry has generally not been doing that sort of research for ground-breaking moon-shots. Instead most AI researchers focus on optimizing for metrics which lead to small and safer short-term improvements in a particular niche application as opposed to pursuing large and riskier long-term ones.
I've seen this same line of criticism before of the way AI systems have generally been designed for decades, and to me it rings true. Things like the Loebner Prize arguably lead researchers astray. Essentially, the crux of the argument is that most people focus on climbing trees to achieve success when the real goal is reaching the moon, or they build better springs when the real goal is achieving flight. [1] Both show some short-term improvements on certain heuristics, but they are obviously the wrong approach if you want serious breakthroughs.
I think it's a valid criticism that most of us are still focusing too much on the wrong metrics to make many serious advances. If you look at AI, there haven't really been that many major breakthroughs. We just haven't been all that creative, and most advances are ultimately tweaks on existing technology, or involve throwing more data through deeper neural nets.

Back-propagation (popularized by Geoffrey Hinton) for deep learning with neural nets was a big deal and an important idea. Since then, there hasn't been much that's earth-shattering. Generative adversarial networks are arguably a big idea. LSTM as well. The work by Naftali Tishby's group on understanding what's going on with information in neural nets is a significant development. Hinton's capsule networks also seem like they may be a big idea. A few people have recently started publishing work on building AI that builds better AI.

For the most part, though, the vast majority of people in AI aren't aiming to do anything fundamentally ground-breaking. They mostly look around at the existing body of research and slightly tweak the tools that seem most suitable for the problem domain they are working on. (This isn't entirely surprising for a number of reasons involving the incentives in the industry, but it is something worth discussing.)
Personally, I don't think that it's necessarily a bad thing that we've been sluggish with AI advances. You don't necessarily want to hand power tools to children. I think our society is unfortunately full of unwise and unkind people with already more power than they should have. Our social institutions for distributing power wisely and responding to abuse of power are far less mature than our technology and I suspect the risks of abusing advanced technology dwarf the enormous benefits they can bring.
[1] https://www.eecs.harvard.edu/shieber/Biblio/Papers/loebner-r...
Basically, we incentivize researchers to work on the most near-term solvable problems, rather than the most difficult problems where we understand how to check for a solution -- let alone to work on developing the solution-properties we can check, to get past wholesale conceptual confusions.
Research results should be public goods.
It seems hard to encourage companies to do public research, as they have no short- or medium-term interest in doing so.
Not sure the government should be involved. Not because basic research ain't great---in an ideal world we'd all get ponies from the government---but because budgets are finite and there are other opportunities, some of them with more definite benefits.
(Like eg funding education, especially early education. Or perhaps just taxing less, etc.)
One interesting thing to note is that in our world the American and British militaries funded some of the first computers. A clear example of government-funded research. But if the government hadn't paid for inventing computers for the militaries, IBM would have come up with computers only a few years later. (And in the counterfactual with less government expenditure, the private sector might have had more funds left over to build computers earlier?)
I dispute the implication of this -- that there is less certain benefit from fundamental research. There's clear historical evidence that fundamental research has massive societal benefits. We wouldn't have our modern society without it.
It's just that it can only be understood in "statistical terms". If you place bets on fundamental research then in the long term you get big payoffs. It's a bit like investing in the stock market. You can't predict the pay-off for any one particular bit of fundamental research, and its actual pay-off usually isn't obvious in the short-medium term.
From the other direction, there's a lot of claimed definite benefits to more applied work that doesn't actually pan out. Just because it's easier to claim that there's some particular benefit to doing something doesn't mean that benefit actually exists.
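A toy Monte Carlo sketch of that "statistical terms" point (the hit probability and payoff are made-up numbers, purely illustrative):

    import random
    random.seed(1)

    def portfolio_return(n_bets, p_hit=0.01, payoff=1000):
        # Each bet costs 1 unit; a rare hit returns `payoff` units.
        hits = sum(random.random() < p_hit for _ in range(n_bets))
        return hits * payoff / n_bets

    for n in (10, 100, 10_000):
        print(f"{n} bets -> return per unit invested: {portfolio_return(n):.1f}")

Any single bet almost certainly pays nothing; a large enough portfolio reliably returns several times its cost. Fundamental research funding only makes sense at the portfolio level.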
For the military computers, the military had a clear interest in funding them, and it keeps most of its research hidden, so I think it's not really comparable to public research.
Plenty? How would you quantify that? Certainly in the scientific area it's a tiny part of total funding.
> in the counterfactual with less government expenditures
In this counterfactual the Axis would have outspent the Allies and perhaps won the war.
Budgets are finite, but currently, the state leaves both money and economic legibility on the table. The simple system should be: if you build and commercialize technology based on a public research grant, the state owns the patent and can set a fixed licensing rate proportional to the amount of the original grant funding. The state then splits the royalties, with you the developer taking most of them, but some large portion poured back into public research budgets (including from money you make off the patent). As your patent term expires, it begins costing you progressively larger portions of royalties and revenues to renew it.
Eventually, the knowledge you documented to get the patent becomes public domain, or the state basically takes all the revenue from renewed patents.
This doesn't just make money for the public research system, it guarantees legible terms to prospective licensees and rules out patent trolling wholesale.
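A toy sketch of how the proposed split could work, in Python; every rate and threshold here is a placeholder assumption of mine, not part of the proposal:

    def royalty_split(revenue, grant, years_since_grant,
                      dev_share=0.8, term=20, ramp=0.1):
        """Return (developer_cut, public_research_cut) for one year."""
        # Licensing rate proportional to the original grant (assumed scaling).
        rate = min(0.10, 0.01 * grant / 1_000_000)
        royalty = rate * revenue
        public = (1 - dev_share) * royalty        # baseline share back to research
        if years_since_grant > term:              # renewals past the normal term
            extra = min(1.0, ramp * (years_since_grant - term))
            public += extra * (royalty - public)  # cost progressively more
        return royalty - public, public

    print(royalty_split(revenue=10_000_000, grant=2_000_000, years_since_grant=25))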
The idea that corporations would decide to replace federal funding for charitable reasons is absurd.
The article you link to may describe the current state of affairs, but it does not properly characterize the state of affairs in the 1950s and beyond, when many major technology corporations had research laboratories. While their activities were nominally directed towards future products and profitability, in practice this was interpreted quite freely, leading to things like Unix, as well as scientific work.
Please somebody downvote this.
AlphaGo is good and necessary engineering, but the ideas are pretty old and not especially illuminating. Start confusing it with fundamental research often enough, and people will start to believe it. And then corporate managers, academic research grants, and academic publishing venues like conferences will start expecting "fundamental research" to be like AlphaGo instead of actual fundamental research.
On a side note, I cannot wait for general super intelligence. It cannot come soon enough. I'm tired of being poor and stuck in a fucking rut, and contemplating my death in a few short decades.
In theory, taking the work done on AlphaGo (and more importantly AlphaGo Zero) and generalizing it to non-Go-related problems should be a lot easier than taking the work done on Deep Blue and generalizing it to non-chess-related problems.
So Deep Blue wasn't such a big step in terms of general artificial intelligence, because it was heavily dependent on human optimisations and mostly showed the power of number crunching rather than learning.
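A minimal sketch of why that is (the toy game and the random policy are my stand-ins): AGZ-style self-play only touches the game through a small, game-agnostic interface, with no hand-tuned evaluation heuristics.

    import random

    class Nim:
        """Trivial stand-in game: take 1-3 stones; taking the last stone wins."""
        def legal_moves(self, state):            # state = stones remaining
            return [n for n in (1, 2, 3) if n <= state]
        def next_state(self, state, move):
            return state - move
        def is_terminal(self, state):
            return state == 0

    def self_play_episode(game, state, policy):
        """Generate one self-play game; AGZ trains on exactly such records."""
        history = []
        while not game.is_terminal(state):
            move = policy(state, game.legal_moves(state))  # MCTS+network in AGZ
            history.append((state, move))
            state = game.next_state(state, move)
        return history, (len(history) - 1) % 2             # last mover wins

    history, winner = self_play_episode(Nim(), 10, lambda s, ms: random.choice(ms))
    print(f"player {winner} won after {len(history)} moves")

Deep Blue, by contrast, baked chess-specific evaluation heuristics into the search itself, so nothing transfers when you swap the game out.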
It's estimated that AlphaGo Zero took about 1,700 GPU-years to train. We can only reach that number through a distributed effort.
400k games have currently been submitted to the Leela Zero project: http://zero.sjeng.org/. It's still playing at amateur level. (AGZ had about 30M self-play games during training.)
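Rough arithmetic on why a volunteer project takes so long (the contributor count and the GPU-vs-TPU factor are my guesses, not project figures):

    gpu_years_needed = 1700   # estimate quoted above
    volunteers = 500          # assumed concurrent contributors
    speed_factor = 0.25       # assumed: consumer GPU vs DeepMind's hardware

    effective_gpus = volunteers * speed_factor
    print(f"~{gpu_years_needed / effective_gpus:.0f} years of wall-clock time")

So even optimistic assumptions put a community replication years out, which matches Leela Zero still being at amateur strength.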
Self-driving trucks and cars are likely to completely change the job landscape, and truck drivers are not going to switch jobs overnight.
But yes, you're right, politics play a big part, so do monetary policies, and culture (as in: it's more acceptable to be jobless in some cultures than in others).
Is "deep learning" just a new buzzword for neural networks, or is there something extra?
Those are instructions for over-fitting. Deep learning neural networks escape from this problem somehow, but it's not a given that other models would escape it too.
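A minimal numpy illustration of the generic over-fitting failure being described (my example, not the commenter's): a very flexible model fits the training points and still does badly on held-out data. Why heavily over-parameterized deep nets often escape this in practice is still an open research question.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 20)
    y = x + 0.3 * rng.normal(size=20)       # noisy linear ground truth

    x_test = rng.uniform(-1, 1, 1000)       # held-out inputs, noiseless target
    for degree in (1, 15):                  # modest model vs. wildly flexible one
        coeffs = np.polyfit(x, y, degree)
        mse = np.mean((np.polyval(coeffs, x_test) - x_test) ** 2)
        print(f"degree {degree}: held-out MSE = {mse:.3f}")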
Right now, we (as a society/industry) are unable to deploy well-known best practices of 5 years ago:
https://www.martinfowler.com/bliki/EditingPublishingSeparati...
(In software development it is even worse. I can't find the source to cite, but the saying goes along these lines: mainstream programming languages and paradigms are mostly at the state of research of the 1970s, and right now we are evolving towards the state of research of the 1980s.)