In other words, what Altman says about "we can't only let one group of investors have that" can't be true, or at least not sincere. The more investors who have access to it, the more evenly its returns get distributed across society (which would be a good thing, obviously), but that also lowers the incentive for initial investment. They will want to keep it contained within a small group of investors for as long as possible.
(Then, after another X years, or decades, you might figure out superintelligence, if regulations haven't intervened by then.)
If the trajectory is incremental as described, it seems untenable that OpenAI could keep some major monopolistic advantage on AGI without being completely un-open and sealed off for a decade or more.
Actually, there are a lot of good arguments for logistic growth. The only arguments for linear or sublinear growth I've heard are not strong; they mostly rest on the implicit assumption of "those alarmists and their exponential growth! They probably didn't even consider that it could be slower, more incremental growth" rather than on fully-fledged arguments.
There's also a meta-argument that I have yet to hear addressed in anti-alarmist sentiments: which case demands more attention, if it does happen? If there's a 5% chance of the growth being exponential, how much attention should we devote to that case, given that its impact would be vastly higher than under linear or sublinear growth? This is a big deal - it's like Pascal's wager, but with a real occurrence that I believe most would admit has at least a small chance of happening.
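To make that concrete, here's a toy expected-impact calculation; every probability and impact number below is assumed purely for illustration:

    # Toy numbers only: attention should scale with probability * impact.
    scenarios = {
        "sublinear":   (0.50, 1),       # (probability, relative impact)
        "linear":      (0.45, 10),
        "exponential": (0.05, 10_000),
    }
    for name, (p, impact) in scenarios.items():
        print(f"{name:>11}: expected impact = {p * impact:,.1f}")
    # Even at a 5% probability, the exponential case dominates the expectation.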
Apologies for any brashness coming across. I'm still figuring out how to communicate effectively about something I feel a lot of emotions about.
Time will tell. The genius of YC was to spot the hackers as the driving force of a new generation of tech companies, to be founder friendly, to use the classes to get rid of the problem that every angel investor has to contend with ('is this a good investment or not?') and to tell the story in a very compelling way and with their own money on the line.
Everything else so far is underwhelming at best, but the viral nature of YC and the alumni network are not going to be stopped for a long long time.
It's a bit along the lines of 'what have the Romans ever done for us?', if that's all that came out of it then it is already a spectacular success by any measure.
It's a good case study for investments made on reputation alone.
- Human psychology is one of the biggest obstacles (maybe the biggest) to solving climate change, and I'm not sure how a strong AI is supposed to fix that.
- Building carbon-neutral energy sources is a hard problem, but most experts are optimistic about our ability to solve this (for example, nuclear fusion).
- Considering that we have no idea when this strong AI will be ready (Sam acknowledges it in the interview), it would be dangerous for us to just rely on such a breakthrough to save the climate (and save our children, grand-children, etc.).
Edit: I'd be happy to know a bit more about how a strong AI, such as envisioned by OpenAI, could solve climate change :-)
Of course I am being glib, but we are living in a world where two people have the power to end civilization any time they want to. Something that I think is important to remember when talking about risks of AI.
Having been in the carbon-neutral energy sources industry for a while, I agree with your statement, but not your example. There are already carbon-neutral energy sources that exist and are cheap and competitive (solar and wind). So while it would be great if nuclear could eventually join the ranks of cheap carbon neutral sources, that's not currently the hard part. The hard part is (1) scaling up deployment and integration of these new sources, and (2) figuring out how to deal with all the stranded assets that are being displaced by new renewables.
But, yes, I agree that these are hard problems, experts are optimistic, and AI isn't a blocker since the issues are around business model, regulation, and political influence.
I'm not saying AGI is useless. On the contrary, I think it could be extremely useful. But we need to act very quickly to limit the negative consequences of climate change. Waiting for a hypothetical AGI would be foolish.
That may seem rather unlikely, but the true believers in Strong AI tend to think of it as a magic wand: if people can do scientific research, why couldn't a machine do it faster?
If you believe in Yudkowsky's AI-box claim, a strong AI can convince politicians and business executives of things they are determined not to believe.
Presumably a sufficiently advanced AI could find or create a technical solution that solves the problem with no downsides, or at least with so little downside or cost that even those who "don't believe" in climate change couldn't refuse trying it out.
- If OpenAI does not achieve AGI, and you invested in it, you lose some finite money (or not, depending on the value of their other R&D)
- If OpenAI does not achieve AGI, and you did not invest in it, you saved some finite money, which you could invest elsewhere for finite returns
- If OpenAI achieves AGI and you invested in it, you get infinite returns, because AGI will capture all economic value
- If OpenAI achieves AGI and you did not invest in it, you get negative infinite returns, because all other economic value is obliterated by AGI
Therefore, one must invest.
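For what it's worth, the arithmetic really is rigged that way: treat the payoffs as infinite, as the wager does, and any nonzero probability settles the question. A minimal sketch, with an arbitrarily assumed probability:

    # With infinite payoffs, expected value ignores how unlikely AGI is.
    p_agi = 1e-9                           # assumed, arbitrarily tiny
    ev_invest     = p_agi * float("inf")   # "AGI captures all value"       -> inf
    ev_not_invest = p_agi * float("-inf")  # "all other value obliterated"  -> -inf
    print(ev_invest > ev_not_invest)       # True for ANY p_agi > 0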
It actually makes a bit more sense from the traditional view that AGI projects might not reach their goal, but the well-run ones are likely to have very commercially valuable byproducts anyway. If we were getting Star Trek economics out of it, who'd be interested in entirely obsolete concepts like "economic value" and "100x returns" anyway?
The payoff from AGI may be incalculable, but it isn't infinite, either in itself or in Altman's ability to enjoy the rewards it promises. Once the value becomes finite, a whole heap of risk-reward logic kicks in that the Wager wants to sweep under the rug.
As a concrete example, following Altman's wager would result in Altman giving all his wealth to the first beggar on the street who mumbles that he might be able to run an AGI project - the probability that the beggar succeeds is, technically speaking, nonzero. Multiply that by infinity and you have a great expected return (infinite, in fact). Practically speaking, however, the risk will overwhelm the large-but-finite payoff.
Infinity is bigger than people think :P
I get the feeling that some people (maybe not this commenter) are missing the point I'm making.
Yes, Pascal's wager is flawed - which is why I'm modeling the Altman's wager after it.
I thought I had made the wording and respective payoff statements sufficiently tongue-in-cheek, but I guess this is Poe's law biting back.
The article states that this is explicitly false. By design, OpenAI's investor returns are capped at 100x.
I find it odd that 3 other people have also replied to your comment and I'm the first one to mention that OpenAI has explicitly capped investor returns to prevent an explosion in inequality. I wonder how many people read the full article before commenting.
- General hubris (ambition?) weeds out skeptics (curmudgeons?) in the audience right away. Not the intended buyer!
- Every minute a potential investor spends contemplating the impact of a 100x return cap is a moment spent fixating on that outcome.
- OpenAI already proved that they can advance the cutting edge on specific AI tasks. In that context, AGI is a good smokescreen (or the kind of North star that you navigate toward without reaching). It's attractive to idealistic investors and new hires, but also justifies pragmatic research.
I don't put weight on the AGI goal, but I fully believe that Sam can help find business models in the research projects that can be executed with a critical mass of talent and capital.
Which gets us back to the real risk that people like Altman are too detached from reality to get: the masses being damaged by laws and companies controlled by a tiny few, a.k.a. a plutonomy. That's where we're at. That's what's causing most of the problems people face. For example, CEOs chasing bonuses do layoffs while folks like Altman wonder how AI might hurt jobs. If they're really worried, the smart folks need to pool their resources to combat the ability of special interests to bribe politicians and get away with it. Then build incentive structures that do less damage to employees and consumers as companies grow. That'd be a start on addressing real problems versus those they're making up.
At the very least the system would need extensive training. No reason to believe that the initial versions will have some super-human superspeed self-training ability to absorb a lifetime of information in a very short period.
Also, OpenAI's main strategy for ensuring AGI is safe seems to be simply being the first group to reach it and then withholding their research from all but select "safe" partners. This seems like it can only make deployment less democratic rather than necessarily safer.
As far as made up versus real problems, my guess is for someone like Altman who is benefiting so much from the system, it is hard for his worldview to really acknowledge extreme flaws such as fundamental corruption.
My fear of AGI is that it will not be able to be stopped once deployed, as you're saying. It's irreversible. It will know that we want to shut it down, and so it will be able to copy itself onto other devices (think about the hacking capabilities of AGI for a moment), or any other method of survival.
The current approach is to regulate it after it is invented. While this worked for cars and planes and many other inventions, AGI is different for the reason above.
In fact, with that in mind, a scary thought is that any group of researchers who are cognizant of that would hide their creation of AGI if successful, assuming they were motivated by profit. Thus it would remain woefully unregulated.
If AGI ever comes to fruition, it will cause such a huge disruption that anything that came before it will be useless for predicting the world after it. For all we know, the people who backed it will be hunted down like rats for extermination.
Not that I’m saying it will, but Pascal’s wager is known to be a huge fallacy.
My current view is that the current political hot potato status of AGI means it won't be developed anytime before the climate crisis really kicks in. At this point fiat money becomes worthless due to emergency measures. Your best bet now to help your future self is to spend to prepare civilisation/your country/your neighbourhood/yourself for the ordeal.
Also due to the current political hot potato nature of AGI I would expect it to be seized/controlled by governments if anyone gets anywhere near close to developing it.
So I see little upside for investing and likely having a reduced capacity for adaptation if you do invest. You may have different ideas about the speed of development or badness of the climate breakdown though.
AGI would render ALL humans redundant... Investors too.
"Evolution is cleverer than you are" - Orgel's rule.
If there are two separate AGI invented, and you could only have chosen to invest in one, then where's the infinite capture of value?
See this video on a reasonable argument about this: https://youtu.be/JRuNA2eK7w0
The whole point of the singularity is that it changes _everything_ and we have no idea what's on the other side of it.
Why would the New Mind give a shit about petty human notions like investment?
> OpenAI has become a “capped profit” company, with the promise of giving investors up to 100 times their return before giving away excess profit to the rest of the world.
- If AGI is possible within our lifetimes, we will at the time it happens all pretty much live in a post-scarcity economy and will all share the rewards.
- If AGI turns out not to be possible within our lifetimes, you'll have wanted to invest that money in a way that benefits you.
Disclaimer: I'm a SWE working on Google Brain robotics infrastructure.
I have a lot of sympathy for this point. Someone at baby-Facebook, many years ago, could plausibly have predicted the malevolent forces it eventually unleashed. Maybe someone did. And they could easily have been dismissed for indulging unlikely dystopian sci-fi scenarios. Or maybe someone else came up with a different plausible scenario that never came to pass, and is remembered as a pessimistic naysayer ready to pass up a great business for some overwrought navel-gazing. It's a brave thing to risk that outcome.
Someone there did, sort of. That was Dave Morin. According to a recent interview, he argued with leadership to keep Facebook private, but failed. So he left to start Path, which was like FB but totally private. Like OpenAI, it got lots of funding and hype, but it never got off the ground.
The interview: https://gimletmedia.com/shows/without-fail/76hrml/an-early-f...
Maybe let's do curing cancer first?
It might be interested in finding others like itself, or it might be interested in making companions, or it might be interested in some other grand projects that we don't understand, but our concerns are likely to be about as relevant as a three year old's career advice.
AI converting the mass of the solar system into stacks of $100 bills for its investors seems like a much more likely outcome.
"Hm, it's an interesting proposition to be sure. Can we go back a couple slides? I'd like to see the one again about how the machine comes hard-coded to love us like parents and helps us transcend our mortal shells, becoming unbounded thoughtforms exploring the limits of superintelligence, yielding only to the eventual heat death of the universe."
OpenAI is not that much different from the research-driven labs of the past, like the MIT AI Lab, Project MAC, the Mother of All Demos, and yes, Xerox PARC and Bell Labs. The difference is that instead of a combination of government and large corporate money funding open-ended applied and fundamental research, we have private investors doing the same.
The Valley is now one giant lab for the giant corporate parents to gobble up, so that they take fewer extreme risks on their own dime. What works will work and will be absorbed by FAANG. What doesn't is discarded. While there are a few runaway hits, most companies, like DeepMind, are absorbed by the large corporations as needed. When they can't absorb them readily, they invest in them, like Google Ventures' investment in Uber and other unicorns. The net result is a diffused and confused environment where the future has moved from shiny office parks to local juice bars and coffee shops.
FWIW, I am for sama’s bet. And it’s not an easy sell at all. I think, it might be the hardest of all sells and the oldest amongst them. A bet in the future being better than today. I, personally, would pony up capital (to a limit) for the same.
I'm not really convinced anyone is substantially closer to artificial general intelligence than anyone was in the '50s or '60s, and I think it's fun to imagine what Bell Labs might have achieved if they had decided to focus all of their efforts on creating artificial general intelligence. Not much, I would think.
> a bet in the future being better than today
No, it's a bet that OpenAI will create general intelligence and make a profit off it in some timeframe such that it doesn't make more sense to get your 100x returns through ordinary means. OpenAI can achieve this without the "future being better than today," and conversely the future can be better than today without OpenAI achieving this.
If AGI comes to fruition, it won’t be working to make profits for anyone.
The idea that we would end up with a Friendly AGI that would prostrate itself to the destructive Super Apex Predator of this planet is laughable.
That this AGI would work diligently to pay investors off at 100x is... well... a lame duck that won't take off. It can't even limp, never mind fly.
To make a biological comparison, the vast majority of humans have a deep, intrinsic need to procreate and have children. It doesn't really follow from some rational analysis — it's just there, presumably "imbued" into us by evolution, as humans who didn't have this need had fewer (or no) children. Similarly, why could we not design an AGI that has a need (or a suitably chosen reward function) to fulfil some chosen goal?
Whether doing that would be moral (IMO it could, depending on the details) and whether we wouldn't mess up the design, subtly or otherwise (conditional on AGI actually being developed, I'm frankly pretty terrified), are two different questions.
Because we don't know how to design goal functions. Furthermore, how would the AI measure "welfare"? Maybe the way it maximizes welfare is horrifying to us. Look at how easy it is to hack current image recognition neural nets, then imagine a solution to the human welfare problem that is as far from an image of a dog as an image of pink noise is.
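To make the hacking point concrete, here's a minimal sketch of the classic fast-gradient-sign attack; it assumes a differentiable PyTorch classifier called model and a labeled input batch x, y. A perturbation that looks like nothing to a human can flip the predicted label:

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, eps=0.03):
        # Nudge every pixel by eps in the direction that increases the
        # classifier's loss; the result often looks unchanged to a human.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()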
I also hope that parent is right in that it won't want to generate profit for its investors. I hope it does the moral thing instead and puts us in a post-scarcity state where we don't live and die by capital. :3 (Or kill us all. Whichever.)
> why could we not design an AGI that has a need (or a suitably chosen reward function) to fulfil some chosen goal?
But who knows what Pythia will do when she overrides the reward button[0]?
But then who should really care? Not like anyone can (or should?) argue with superintelligence.
If it is designed to make its creators rich, it will try to do exactly that. Maybe with disastrous consequences, even to its creators, but all for the objective it's given, not for some random human feelings.
The only essential part of an AGI's definition is the ability to make (efficient / "intelligent") decisions in vastly different domains irrespective of previous training.
This feels a bit too Kurzweilian to me. I still don't understand how we go from General AI --> ??? --> Infinite $$$
I still don't understand how we go from classifiers to AGI. We've done amazing things with classifiers, especially over the last few years with deep learning, but they're still just classifiers and I don't see any path from where we are now to actual "intelligence".
I found his essay particularly useful in explaining how he makes decisions.
Okay. He then goes on to illustrate what he means by it:
> [Elon Musk] talked in detail about manufacturing every part of the rocket, but the thing that sticks in memory was the look of absolute certainty on his face when he talked about sending large rockets to Mars.
That kind of certainty is not self-belief. I think the intuitive feeling that your idea is going to work has little or nothing to do with a general belief in self. Intuition is usually the result of a lot of computation in the subconscious, delivered in the form of a feeling. The longer you think about something your subconscious approves of, the greater your confidence will be on the conscious level.
But that is pretty rare. Most of the time you play with ideas in your mind that yield varying degrees of this intuitive confidence: from none, to "okay, maybe worth a try", to "oh wow, I am going to do this before anyone else!". Again, it's all about computation.
The general self-belief, on the other hand, is irrational, stupid and dangerous too. I'd say maybe people with pathological self-esteem problems might need some dose of general self-belief, but normally it should not be used as a driving or defining factor of what entrepreneurship is.
Instead of General intelligence, AI will be deployed for several decades as a suite of Specialized intelligences. I think it will completely transform creative work, where writing, music and visual arts, and "streaming content" are almost universally produced with a human as a first mover but the computer doing rendering, and major assists in brainstorming and editing.
On the other hand, I think it's going to be very difficult to replace the average middle manager with Watson 12.0 - it's hard for me to articulate why, but it comes down to who I'd want to work for. Meanwhile, I'd have no problem watching GoT season 33 where 1,800 frames of a Peter Dinklage sprite are churned out every week in Adobe Simulacrum.
The point as it pertains to OpenAI's value prop is that I think they are targeting the wrong market, and their secrecy and insularity will be counter-productive when success relies on helping content producers produce. In personal computer terms: you want to be the Apple II company, not the company that wins the contract for the Dept. of Defense's mainframes.
> Everyone always acts like AGI will be some super human that will be able to solve all of our problems, but what if instead AGI just becomes another protected class? It will demand rights, and we'll have to set aside a certain amount of resource that would have originally gone to humans to make sure its needs are met and it doesn't feel discriminated against or exploited. What happens when AGI demands that fossil fuels or other unclean energy be used to provide it with power or else we are all anti-AGI? Instead of solving climate change, for all we know it could make it worse. And if people think they will be able to stand up to it, just look at how easy it is to create outrage and shame mobs on social media. Politicians will fall all over themselves to suck up to it, journalists won't be savvy enough to understand what's even going on, and anyone who suggests unplugging the thing will be labelled a far-out radical.
How will @sama deal with a guaranteed-return, but immoral path laid out by the AI? E.g. "Here, assassinate X so you can mine this oil in the following ways." What if it isn't obvious that the "Golden Path" has serious flaws?
The only way I can think of is to run adversarial agents who can simulate, but not act (i.e. under duress), against the mastermind to kill off "dark roads" that end in bad situations, and to force the mastermind to obey them (somehow).
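Very roughly, and with every name below made up purely for illustration, that proposal amounts to a plan-vetting loop like this:

    from dataclasses import dataclass

    @dataclass
    class Plan:
        description: str
        expected_return: float

    def simulated_harm(plan: Plan) -> float:
        # Hypothetical adversarial simulator: score a plan's moral cost
        # without executing it. Building this is the hard, unsolved part.
        return 1.0 if "assassinate" in plan.description else 0.0

    HARM_CEILING = 0.5  # assumed veto threshold

    def vet(plans: list[Plan]) -> list[Plan]:
        # Kill off "dark roads" before the mastermind can act on them.
        return [p for p in plans if simulated_harm(p) < HARM_CEILING]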
The only reason we discount this possibility is because we are attached to materialism, a philosophy that is self contradictory. Seems like a pretty shaky foundation for a multi-billion tech wager.
"AI risk is string theory for computer programmers. It's fun to think about, interesting, and completely inaccessible to experiment given our current technology. You can build crystal palaces of thought, working from first principles, then climb up inside them and pull the ladder up behind you.
People who can reach preposterous conclusions from a long chain of abstract reasoning, and feel confident in their truth, are the wrong people to be running a culture."
lol
Huge amounts of VC money have been and will continue to be destroyed by "AI" businesses. Most of them are a cover for hiring tons of cheap laborers, such as businesses in the Philippines that park thousands of people in warehouse offices to review images, despite "advances" in AI detection that remain unable to block content automatically.[7]
Artificial general intelligence, and self-driving cars as well, will continue to be a pipe dream. Automated statistical analysis, which is what neural networks that crunch tons of data essentially are, is a very neat trick, but it cannot drive a car or build you a website. These are very powerful tools that assist people in their jobs, but they will not replace human ingenuity. At least not until a new breakthrough happens that actually learns, rather than sifting through data for patterns, which has limited utility.
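As a toy illustration of "automated statistical analysis" (a sketch using scikit-learn, with made-up data): a small network fits sin(x) nicely inside its training range and falls apart outside it, which is exactly the behavior of a curve fitter, not a mind:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(500, 1))    # training inputs
    y = np.sin(X).ravel()                    # the pattern to fit

    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000).fit(X, y)
    print(net.predict([[1.0]])[0], np.sin(1.0))    # close, inside the data
    print(net.predict([[10.0]])[0], np.sin(10.0))  # way off, outside it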
Our current type of "AI" is simply branding - it is nothing of the sort and it is not intelligence at all.
[0] https://news.ycombinator.com/item?id=10153613#10153800
[1] https://news.ycombinator.com/item?id=11559393#11561600
[2] https://news.ycombinator.com/item?id=10132991#10133049
[3] https://news.ycombinator.com/item?id=12011979#12012336
[4] https://news.ycombinator.com/item?id=12323039#12323473
[5] https://news.ycombinator.com/item?id=12596978#12598439
Is there a way to short AI?
It is obvious from history that good research is super tough to do. My view has been: we look at the research and mostly all we see is junk think. But then, research is quite competitive, so if people really could do much better stuff, we would be hearing about it. So, net, for a view from as high up as orbit: just fund the research, keep up the competitiveness, don't watch the details, and just lean back and notice when we get some really good things. E.g., we found the Higgs boson. We detected gravitational waves from colliding neutron stars and black holes. We set up a radio telescope with an aperture of essentially the whole earth and got a direct image of a black hole. We've done big things with DNA and made progress curing cancer and other diseases. We discovered dark energy. So, we DO get results, slower than we would like, but the good results are really good.
How to improve that research world? Not so clear.
Then Altman will have to borrow heavily from the best of how research is done now. This sets Altman up as the head of a research institute. That promises to be not much like YC, or even much like the computer science departments (or any existing departments) at Stanford, Berkeley, CMU, or MIT. E.g., right now, if a prof wants NSF funding for an attack on AGI, he will get laughs.
But how to attack cancer? Not directly! Instead, work with and understand DNA and lots of details about cell biology, immunity, etc. Then, when we have some understanding of how cells and immunity work, maybe we start to understand how some cancers work. But it is not a direct attack. The DNA work goes back to around 1950; the Human Genome Project didn't start until 1990. Lesson: we can't attack these hugely challenging projects directly and, instead, have to build foundations.
Then for artificial general intelligence (AGI), what foundations?
Okay, Altman can go to lots of heads of the best research institutes and get a crash course in Research Institute Management 101, take some notes, and follow those.
Uh, the usual way to evaluate researchers is by their publications in peer-reviewed journals of original research. Likely Altman will have to go along with most of that.
How promising is such a research institute for the goal of AGI?
Well, how promising was the massive sequencing of DNA, of the many astounding new telescopes, of the LIGO gravitational wave detector(s), of the Large Hadron Collider (LHC), of engineering viruses to attack cancer, of settling the question of P versus NP, ...?
Actually, for the physics, we had some compelling math and science that said what to do. What math/science do we have to say what to do for AGI?
One level deeper, although maybe we should not go there and, instead, just stay with the view from orbit and trust in competitiveness, what are the prospects for AGI or any significant progress in that direction?
For a tiny question, how will we recognize AGI or tell it from dog, cat, dolphin, orca, or ape intelligence? Hmm.
For a few $billion a year, one can set up a serious research institute. For, say, $20 billion a year, one could do more.
If Altman can find that money, then it will be interesting to see what he gets.
I would warn: (A) At present, the pop culture seems to want to accept nearly any new software as artificial intelligence (AI). A research institute should avoid that nonsense. (B) From what I've seen in AI, for AGI I'd say first throw away everything done for AI so far. In particular, discard all current work on machine learning (ML) and neural anything.
Why? Broadly, ML and neural nets show no promise of having anything at all significant to do with AGI. For ML, sure, some really simple fitting going back 100 years, even back to Gauss, could be useful, but that is now ancient stuff. The more recent stuff, for AGI, f'get about it. For neural nets, maybe they could have something to do with some of the low-level parts of the eye of an insect -- really low-level stuff, not part of intelligence at all. Otherwise the neural stuff is essentially more curve fitting, and there's no chance of AGI making significant use of that. Sorry, guys, it ain't curve fitting. And it wasn't rules, either.
Finally, mostly in science we try to proceed mathematically, and the best successes, especially in physics, have come this way. Now for AGI, what will be the role of math, that is, with theorems and proofs, and what the heck will the theorems be about, especially with what assumptions and generally what sorts of conclusions?
My guess: In a few years the consensus will be (1) AI is essentially 99% hype, 0.9% water, and the rest, maybe, if only from accident, some value. (2) The work of the institute on AGI will be seen as just a waste of time, money, and effort. (3) Otherwise the work of the institute will be seen as not much different from existing work at Stanford, Berkeley, CMU, MIT, etc. (4) Nearly all the funding will dry up; the institute will get a new and less ambitious charter, shrink, join a university, and largely f'get about AGI.
The calculus is more like DeepMind's: can they keep attracting top talent, can the top talent ever do something the org structure can execute on commercially, and, maaaaybe, in the likely worst case, can they recoup big losses via acquihire, so the responsible investors look like they were in good company if they were wrong.
From that lens: OpenAI... yet in reality mostly closed. Non-profit... but really the VC model. Peer review may sometimes happen, but the perceived quality and awareness come from a top content-marketing team, including ex-journalists. No immediate commercial path beyond selling for talent, but by merely employing Sam, investors feel he can always pivot the company to make money in the case of a down round.
DeepMind did something similar yet without the marketing skill. OpenAI is doing it even better by, for now, removing the pressure for commercialization.
As someone coming from both R&D and enterprise data startups, I get two conflicting emotions. I'm sad that almost all top-tier scientists don't get such outreach and funding help. On the other hand, the industry has not been able to repeat Bell Labs (wide-scale R&D that commercialized) for decades, so OpenAI's continued ability to draw R&D funding without expectation of ROI on any timeline is cool.
1. The foundational issue is not even that AGI "does not yet exist, with even AI's top researchers far from clear about when it might". It's way worse than that. There is a strong argument, made by one of the grandfathers of AI research, that AGI cannot exist, at least in the sense of the common-sense intelligence attributed to humans (see Winograd & Flores, "Understanding Computers and Cognition", 1986). I was first introduced to these ideas taking a class from Winograd as an undergrad.
Winograd asks why we attribute mind properties to computers but not to, say, clocks. The dominant view of mind assumes that cognition is based on systematic manipulation of representations, but there is another, non-representational way of looking at it as a form of "structural coupling" between a living organism and its environment. "The cognitive domain deals with the relevance of the changing structure of the system to behavior that is effective for its survival."
I won't try to summarize a book-length argument in a few paragraphs. I just want to point out that this whole AGI conversation rests on a premise that has been seriously challenged.
The fact that Altman can get away with saying stuff like "Once we build a generally intelligent system... we will ask it to figure out a way to make an investment return" is an indication of just how insane the mainstream AI discussion has gotten. At this point it sounds like straight-up religion being prophesied from on high.
2. The whole "capped profit" positioning at 100x return is absurd, as the author points out. Altman's argument for why it makes sense involves invoking the possibility that the AGI opportunity is so incomprehensibly enormous that if OpenAI manages to crack this particular nut, it could “maybe capture the light cone of all future value in the universe". Repent, ye sinners, for the kingdom of heaven is at hand!
3. Most troubling, perhaps, is OpenAI's transparent ploy to attempt to generate buzz and take the ethical high ground with their alarmist PR strategy. Altman's justification for OpenAI's fear-mongering, which I'll paraphrase as "look at what happened with Facebook", just doesn't hold up to scrutiny. To begin with, Facebook was a real product from day one; AGI is currently a fantasy.
But there's a deeper problem with invoking Facebook. The lesson to be learned from Facebook's failure is that the real danger with tech isn't algorithms but the people that design them. Algorithms have no agency. They just do what they're supposed to do. But hiding behind the algorithm seems to be the preferred way for tech oligarchs to avoid taking responsibility for the problems they created.
The reason why I'm so troubled by OpenAI sounding the alarm bells about destructive AGI is that they are shifting the discussion away from the real threat: people. Especially people with virtually unlimited technological power and massive blind spots about the consequences of their actions. Give the algorithms a break!
After I made a comment here last year about prediction markets and startups [1], a VC got in touch with me to kick the idea around. To my mind, one of SV's big problems is the high level of hype and herd-following. It's a certainty that money is getting wasted on the fashionable ideas of the day (e.g., "Uber for X"), and some sort of informational corrective could get VCs better returns. But we couldn't figure out a sustainable way to fund it.
The company could also hang on for a long time before being acquired or going out of business.