> The world is a hugely better place with our 8 billion people than it was when there were 50 million people kind of like living in caves and whatever. So, I am confident that the sum total of value and progress in humanity will accelerate extraordinarily with welcoming artificial beings into our community of working on things. I think there will be enormous value created from all that.
The problem with Carmack, and many like him, is that they think of themselves as purely rational beings operating within scientific frameworks and based purely on scientific results, but whenever they step outside the technical fields in which they work, they are ignorant and dogmatic.
He seems to ignore a lot about what living conditions were like for people throughout history, and to have a blind trust in the positive power of 'human progress'.
These people don't stop for a second to question the 'why', just the 'how'. They just assume 'because it will be better' and build their mountains of reasons on top of that, which just crumble and fall down as soon as that basic belief does not hold.
I have a LOT of respect for him, and I'm sure he's a very decent, honest human being. But he's unfortunately another believer in the techno-utopianist faith, which only asks for more 'blind progress' without questioning whether that is a good thing or not.
> The problem with Carmack and many like him, is that they think of themselves as purely rational beings operating within scientific frameworks and based purely on scientific results, but whenever they step outside the technical fields in which they work, they are ignorant and dogmatic.
I mean, what's the alternative? For a guy like Carmack to only comment on narrow areas in his field(s) of expertise? He's a human being; I think he's allowed to comment on other topics, and I tend to find his comments interesting because I understand them in what is IMO the correct context: they're one guy's musings, not pithy declarations and edicts. The problems arise when folks start to present themselves as experts and try to hold sway over others in areas in which they have no clue. That's not what I see here.
ie - expand my domains
There is a theory that hunter-gatherers were much happier than us because they were more in tune with the natural environment, had fewer sources of stress, and were more connected to their community than modern humans.
https://www.npr.org/sections/goatsandsoda/2017/10/01/5510187...
From the article:
> Today people [in Western societies] go to mindfulness classes, yoga classes and clubs dancing, just so for a moment they can live in the present. The Bushmen live that way all the time!
There are tons of trade offs.
Eh? A contender for the most self-contradictory sentence I've ever read ;) The best reason to believe in the positive power of "human progress" is, specifically, not ignoring "what the living conditions for people were throughout history".
And to clarify: I'm not saying "all technology is bad", but rather "not all technological progress is automatically good for humanity".
As an example, the living conditions of hunter-gatherers were way, way better than the living conditions of the first people in cities, and I'd argue that, depending on which parameters you use, they might still be better than our modern, big-city living conditions (except maybe for the richest 1% of the world).
The problem is the generational suffering that comes with said creative destruction: there's no incentive to distribute or share out wealth, and the results are brutal.
On your point: Note that in the US there's a separation of technical and engineering prowess (MIT, Caltech, ...) and power players (Yale, Harvard). It's almost like our system doesn't want our best engineers thinking about consequences or seeing what the political and wealthy are really like.
>I’m trying not to use the kind of hyperbole of really grand pronouncements, because I am a nuts-and-bolts person. Even with the rocketry stuff, I wasn’t talking about colonizing Mars, I was talking about which bolts I’m using to hold things together. So, I don’t want to do a TED talk going on and on about all the things that might be possible with plausibly cost-effective artificial general intelligence.
He likes to figure out new puzzles and how things work. He's an engineer at heart and that's very much his comfort zone. AGI is an exciting new puzzle for him. I'm glad he's taken an interest.
(Edit capitalization & punctuation)
Tech for the sake of tech with zero thought about how it will affect humanity.
I haven't studied it formally, and I'm being asked to support techno-utopia too. So it feels pretty shaky to me.
Certainly my livelihood is based on the premise of it, and my dreams, which fuel my workplace motivation, serve as the foundation for what I do with 50% of my life: work on technology. So I am biased.
Some utopia/dystopia discussions here on Hacker News boil down to chaos-theory levels of assumption, where you can see people getting defensive as they snipe at a naysayer, picking at grammatical concerns but never actually engaging in a value-based discussion in the thread. It's like they're not human; they're just practicing being devil's-advocate technicians.
'Useful idiots' is kind of what I think. We need to have more discussions about values, and ethics too.
I'm just curious, do you happen to work in a technical field and consider yourself rational and scientific? And if you do, why do you presuppose that your views are automatically correct? Couldn't it also hold that your views may be ignorant and dogmatic if you apply the same scrutiny to yourself that you do to Carmack?
And if you don't work in a technical field, then I guess this is all irrelevant anyways. I just don't like when I see people making these types of arguments where you can't speak on a subject that you're not actively pursuing a PhD in, and then they proceed to do exactly that.
I suggest everyone (who wants to hear me) read Joseph Weizenbaum's "Computer Power and Human Reason"; he does a much better job than I can at raising arguments similar to mine. Also, Daniel Kahneman's "Thinking, Fast and Slow", for the ways in which we _all_ are so _not_ 100% rational in our everyday decisions.
Feeling bad because more people are, in your view, "suffering" is all in your head.
I don't know. We live longer, but a longer life can also be miserable.
- attention is all you need
- an image is worth 16x16 words (vit)
- openai clip
- transformer XL
- memorizing transformers / retro
- language models are few-shot learners (gpt-3)
A few newer papers:
- recurrent block-wise transformers
- mobilevit (conv + transformer)
- star (self-taught reasoner)

I think to get into the field, to get a good overview, you should also look a bit beyond the Transformer. E.g. RNNs/LSTMs are still a must-learn, even though Transformers might be better at many tasks. And then all those memory-augmented models, e.g. the Neural Turing Machine and follow-ups, are important too.
It also helps to know different architectures, such as plain language models (GPT), attention-based encoder-decoders (e.g. the original Transformer), but then also CTC, hybrid HMM-NN, and transducers (RNN-T).
Diffusion models are also another recent, different kind of model.
But then, what comes up really short in this list are papers on the training aspect. Most of the papers you list do supervised training, using a cross-entropy loss. However, there are many other approaches:
You have CLIP in here, specifically to combine text and image modalities.
There is the whole field on unsupervised or self-supervised training methods. Language model training (next label prediction) is one example, but there are others.
And then there is the big field on reinforcement learning, which is probably also quite relevant for AGI.
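To make the training-objective point concrete, here is a toy illustration of next-token prediction scored with a cross-entropy loss. Everything in it (the vocabulary, the model's probabilities) is made up for the example; a real language model would produce the distribution with a neural network.

```python
import math

# Toy next-token prediction: a model assigns probabilities to the
# vocabulary at each position; the loss is the cross entropy against
# the token that actually comes next.
vocab = ["the", "cat", "sat", "down"]

def cross_entropy(predicted_probs, target_token):
    """Negative log-likelihood of the target token."""
    return -math.log(predicted_probs[target_token])

# The model's (made-up) distribution for the token after "the cat":
probs = {"the": 0.05, "cat": 0.05, "sat": 0.8, "down": 0.1}

loss = cross_entropy(probs, "sat")  # low loss: the model was confident
print(round(loss, 4))               # -> 0.2231
```

The self-supervised part is that the target token comes from the raw text itself, so no human labeling is needed.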
He mentions a few of the bigger papers in deep learning, such as "Attention Is All You Need", which I think is a good place to dive in before coming back to visit some fundamentals.
> But if I just look at it and say, if 10 years from now, we have ‘universal remote employees’ that are artificial general intelligences, run on clouds, and people can just dial up and say, ‘I want five Franks today and 10 Amys, and we’re going to deploy them on these jobs,’ and you could just spin up like you can cloud-access computing resources, if you could cloud-access essentially artificial human resources for things like that—that’s the most prosaic, mundane, most banal use of something like this.
It kind of shocked me because I thought of the office worker reading this who will soon lose her job. People are going to have to up their game. Let's help them by making adult education more affordable.
AGI = a person
Instantiating people for work and ending their existence afterward seems like the virtual hell that Iain M Banks and Harlan Ellison wrote about.
https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Sc...
Please do give it a quick read.
Consider that these machines have been designed to do the right thing automatically with high probability. Perhaps for the machine, the process of computing according to rules is enjoyable. Being "turned on" could be both literal and figurative.
The good thing is that education will be provided to the masses by a cluster of Franks and Amys configured as teachers and tutors. /(sarcasm with a hint of dread)
And I really have no idea which skills, if any, AIs wouldn't be able to tackle in a decade.
We will always have to find things to do for the less gifted in order to provide them with some dignity. Even if they are not strictly needed for reasons of productivity or profitability. Anything else would be inhumane.
I think you’re hinting at some very hurtful, dangerous ideas.
People don’t read, don’t value deep knowledge or critical thinking, and shun higher education.
I’m sure someone will find something to say in response, but the truth is that outside our tech and $$$$ bubbles most people don’t value these things.
AI will just become a calculator. A simple tool that a few will use to build amazing complex things while the majority don’t even know what the ^ means.
The more the next generations want to be rappers, social media influencers, or YouTubers, the more screwed we are long term. Growing up in the 90s, everyone wanted to be an astronaut or a banker or a firefighter. Those are far more valuable professions than someone who is just used to sell ads or some shitty energy drink.
I'm surprised this wasn't addressed in the interview because it seems to me like a shortsighted take.
You won't replace a 10 person team today with 10 AIs. You will still have a 10 person team but orders of magnitude more productive because they will have AIs to rely on.
Excel didn't leave administrative workers without jobs, it made them more productive.
Yes, soon everybody will be able to have "Amy" take their exams for them, and deliver the courses, resulting in a great simplification of education.
To oversimplify it - you'll either be breaking someone's window for food, or you'll be the one having their window broken. Chilling out and withdrawing a stable 4% out of your stock portfolio won't be an option.
People really need to stop with this "Great person" nonsense. He's a pretty smart coder, and is gifted with geometry and other fields of math. He's not a genius. He didn't "master" calculus at age 15 like Einstein, he didn't invent anything particularly new in the field. Why the obsession people have with him? Why should we look to him for AI questions? What evidence is there that he has any new knowledge?
The video game part at least sounds like what Deepmind is already doing. I guess we'll just have to wait and see what he plans to do differently.
It seems to me like his expertise would be most valuable in optimizing model architectures for hardware capabilities to improve utilization and training efficiency. That will be important for AGI especially as the cost of training models skyrockets (both time and money). If I was a startup doing AI hardware like Cerebras or Graphcore I would definitely try to hire Carmack to help with my software stack. Though he doesn't seem interested in custom AI hardware.
Seems more like he's talking to and following up with Altman, "Y Combinator conference" and the rest. Is that "bucking the trend", taking your own "path", really?
I wonder what that list could be? I have always had trouble finding the essential scientific articles in an area of knowledge and separating them from the fashion of the day. A list compiled by an expert specifically for sharp learners is valuable on its own.
I’m not claiming the first can’t exist, but I see no reason to conclude that is the case here.
https://aeon.co/essays/how-close-are-we-to-creating-artifici...
What field were you referring to?
Nice thought
We know a LOT about how neurons organize. Not even close to everything we 'should' know, but we do know a lot.
Most of this is in the development of the brain. How you get from one cell to the trillions that make up a person.
The real quick and dirty explanation is that cells follow multiple chemical gradients to find their 'home'. This is a HUGE topic though, and I'm being terse as I have a meeting to get to.
How adult cells organize also has a LOT of science behind it. Again, though, it's mostly about chemical gradients, with a bit of electrical stuff thrown in. Again, HUGE topic.
Just because our DNA can be efficiently encoded doesn't mean that our brain is a tiny proportion of that encoding. Your DNA doesn't change much from when you're born to when you die (random degradation aside), and yet your cognitive abilities change beyond all recognition. Why is that? Well, maybe there's more to what's in the brain than just what's encoded in your DNA.
Secondly, how does he get to the 40Mb number? I don't think we know anywhere near enough to know how much information it would take to encode a brain, but 40Mb seems just made up. For starters, consider the amount of random stuff you can remember from your entire life. Are you saying that all can be encoded in just 40Mb? Seems very unlikely.
Humans are simple in this model (just like Carmack asserts) because they aren't properly intelligent, sapient, or conscious 100% of the time.
He's wrong. There is currently no practical way to produce a software system that possesses the ability for human thought, reasoning, and motivation without that system possessing uniquely human (let alone organic) properties: the biological and chemical makeup, plus the physical characteristics, of a human, and the ability to process using human senses. (Hint: a neural net processing video images is a mere shadow of how a human processes things with their senses of sight, sound, and touch.)
Carmack thinks humans can be reliably reduced to spherical cows in a vacuum, but that only holds true on paper. A real human being is not merely a meat machine: we are driven largely by emotions and physical desires, none of which exist in a computer except through elaborate simulation thereof.
Now, I'm sure over the next couple of decades we will make huge strides in mimicking a human being's ability to learn, i.e. creating ever more complex LLMs and AI models that act increasingly more humanlike, but they will be nothing but more and more elaborate parlor tricks which, when prodded just the right way, will fail completely, revealing that they were never human at all. They will be like Avatars (from the movie): sophisticated simulacra that rely on elaborate behind-the-scenes support systems, without which they are useless.
To use ML terms -- humans have "Foundation Models" which are composed of:
- their biological makeup
- the culture into which they are raised
Following that trail of thought, intelligence is an achievement and not a physicality.
I do object to fetishists of AGI piling in, and to the equally silly assumption that he has some magic secret sauce which can get there.
Please do not be sucked into "to infinity and beyond" nonsense. I don't care if it's Musk, or Carmack, or Kurzweil; it's stupid.
If Malcolm Gladwell writes it up, it's peak stupid.
What are the showstoppers in your opinion?
Wow, that's going to be one of the more glib things I've read in a while.
This is a bit of a Tom Cruise moment.
I mean, I get it on some level but I suggest it's going to take a bit for someone to 'catch up' to cutting edge AI.
Like more than a 'week of reading papers he doesn't understand'.
Defeating the Rust borrow checker takes longer than that!
I've worked building low level machine learning stuff at Google, it isn't that hard to do at all. The hard part is improving these models, not building them when you already know what to build.
Indeed. A somewhat sharp schoolchild could build a light bulb or an electric motor/generator, and understand the basic underlying principles, in a pretty short time. But how many decades did it take the first researchers and inventors of those things to get to that same point?
Heck, there was a high school kid who built a primitive semiconductor fab in their garage.[0]
But for novel advancements, even getting to the point where you have an idea of what isn't impossible is half the battle.
Agreed, in general, but in the specifics we are talking about someone who has spent decades solving really difficult math problems in a creative and novel manner.
Who's to say he won't find some novel edge of the AI discipline to which he can apply a creative and never-seen-before solution?
I mean, we're not talking about a general "somebody" here, he's got a record of accomplishing things that other people never managed to accomplish.
Fighting the Rust borrow checker to hate leads and hate to the Dark Side leads.
Harmony with the Rust borrow checker is what you must achieve, padawan.
Modern AI is very simple at its core! As Carmack mentions in the article, cutting edge models are implemented in a couple thousand lines of code, and the core algorithms are even less. Rust's borrow checker is more complex.
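As a rough illustration of that simplicity: the core operation of those couple-thousand-line models, scaled dot-product attention from "Attention Is All You Need", fits in a handful of lines. This is a minimal sketch (single head, no masking, no learned projections), not a production implementation.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention. Q, K, V: (seq_len, d) arrays."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # pairwise query-key similarities
    # Row-wise softmax (subtracting the max for numerical stability):
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V             # weighted mix of the value vectors

# Tiny smoke test: 3 tokens, 4-dim embeddings, self-attention
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = attention(x, x, x)
print(out.shape)                   # -> (3, 4)
```

The rest of a Transformer is mostly this block repeated, plus feed-forward layers and normalization.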
I wouldn't be surprised if his solution is orders of magnitude more performant than what the competition is doing.
Just curious, if this reading list is available somewhere.
I know he’s super talented but I always wonder how many other equally talented software engineers never get noticed and toil away at crappy jobs. What’s the trick to becoming a celebrity if you’re talented?
So many people are remembered just because there were first at something by like a week, and the dozen others who also thought of it elsewhere but were a bit late or didn't publicize as well are forgotten forever.
Carmack is a good coder, and has pretty good math chops. He was also cocky enough to think he could make a 486 do some of the 3D math required if they were careful and added some limitations. I don't know why anyone ascribes anything more to him. Your average data scientist produces more actual innovation than he did.
Problems of the form “create a machine that can do X” are tractable. AGI is not, because no one can agree on what intelligence is.
'Siri', backed by ChatGPT and the 'world's data' will probably pass some 'AGI' threshold, but is 'Siri' an individual AGI? Are we all talking to the same siri? Different Siri? It's not even an entity, rather a distributed system.
Our ideas regarding AGI are perversely influenced by the fact that we humans are automatons, but technology is not like that.
It's also entirely feasible that if ChatGPT represents all possible forms of human communication, then it will perfectly emulate a human. OK, it's really just a fancy neural network that is not, in theory, 'thinking', but how does that matter? If it can rationalize sufficiently to hold such interactions, who is to say it's not 'AGI'?
I think we're using the wrong concepts.
Doesn’t AGI need to be able to make discoveries as a human would? How else can it move us forward as a society?
Basically, the assumption is that if you cram enough data into your GPT model, it should know everything. Which is of course not true: it just repeats, with some probability, the things it has read the most.
It's basically like how there are two kinds of smart teens: the ones who learn every day, and the ones who just pick up concepts on the fly and run with them.
I think the first space has been explored plenty, for the second one I have a concept ready and dearly hope that power gets cheaper in europe ;)
That's only how the system was designed intentionally. E.g. there's intentionally no self-feedback loop.
Have you heard about our lord and saviour dynamic tariffs?
Dynamic tariffs -> shift your workload to the cheapest times
AIs are good at planning this
Imagine if cloud providers had a "Dynamic tariff" tier, cheaper to run computing at US_EAST nighttime or something like that
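A sketch of that scheduling idea, with invented hourly prices: a deferrable workload simply runs in the cheapest hours of the day.

```python
# Hypothetical dynamic tariff: hour of day -> price per kWh (made up).
prices = {0: 0.10, 1: 0.08, 2: 0.07, 3: 0.07, 4: 0.09,
          12: 0.30, 13: 0.32, 18: 0.40, 19: 0.38, 23: 0.12}

def cheapest_hours(prices, hours_needed):
    """Pick the hours_needed cheapest hours, returned in time order.
    Ties are broken by preferring the earlier hour."""
    ranked = sorted(prices, key=lambda h: (prices[h], h))
    return sorted(ranked[:hours_needed])

# A 3-hour training job would run in the overnight trough:
print(cheapest_hours(prices, 3))   # -> [1, 2, 3]
```

A real scheduler would also handle contiguity constraints and day-ahead price forecasts, but the core is just this sort by price.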
I think of it more in the sense that learning is more abstract than fact learning. From experience, we think that there are fact learners and principle learners, but there are also mixtures of the two!
The generally accepted model entails that in order to do high-level math, for instance, you need to understand the basics, but for me many of those concepts actually only clicked in college. This did not stop me from applying them with success a lot earlier, though. Multiplication in kindergarten is fact learning too, for instance!
In Germany we also have the term "Fachidioten", which loosely translates to people who are so smart in their field that they are unable to see problems from different directions. This is more or less what I think a mega GPT model turns into, especially because of selection bias in the training data.
Validity of output (truth) can only be achieved through trust in the source, which is always relative to the context of the topic. Hence a selectively trained model will always return the data you feed it, including all its biases. Even if you have it crawl all of the internet, the Library of Alexandria, and every written word on the planet you can find, it will still return to you the generally accepted consensus.
This is my main takeaway from the interview, as it suits my beliefs. Most people seem to think that if we develop ML further we will go all the way to AGI; I think this is just a mimicry step, similar to how initial attempts at flight had flapping wings. I do think it is mandatory to explore in all directions, but at this point this one does not seem likely to lead all the way to AGI.
Think of a happy dog. Dogs are subject to our whims and do what we want or face consequences. But they like it because we bred them to like it. So is that evil? Is that slavery?
On a more technical note, I'm always surprised to read these articles and never see the reasoning behind the work explained.
Probably have a really fulfilling life.
> Once it’s figured out, what do you think the ramifications will be?
That'll probably destroy my life? I'm an ML engineer trying my best to immigrate to a better country with my wife who is a digital artist. As much as I think AI is cool - we both won't be needed anymore if the thing is tuned a couple of notches more intelligent. As a matter of fact, she's extremely worried about Midjourney - she probably lost book cover jobs already.
But lately... boy, I dunno.
It's software man.
Stop it with this 'AGI' nonsense and even 'AI'.
Let's call it 'adaptive algorithms' and see it for what it is, just a neat bit of algebra trained on corpus data.
The biggest upset in the industrial revolution was the harnessing of fossil fuels, nothing will ever come close to that.
We have not had a problem with 'employment' ever since.
So far the search has resulted not in AGI but in the realization that cognition is a far more complex topic than initially thought, and that we need to come up with something new (hence the past AI winters).
Let's see how it goes this time, the stuff that has come out in the past few years is quite impressive for sure.
I wouldn't take any comfort from that. Quite the opposite — I think we're a lot simpler than we know.
But "60% chance of AGI by 2030" is just bullshit numbers.
What is boredom but a survival instinct telling us we should be harvesting resources? What is freedom but the desire to fulfill these obligations the way you see fit?
Remove the base obligations of organic life, and you are looking at something unrelatable. An AI doesn't have an expiration date like us; it doesn't need to provide for its young. To think its motivations or desires will be human is silly.
Without survival instincts, almost everything you think of as important just melts away.
Many people, like you, anthropomorphize the AIs, but that is to err greatly.
But... I also think it might be a very short-lived debate. If we actually reach human level intelligence, that can’t possibly be the hard limit of general intelligence. AI at that level will have no problem ensuring that it gets any rights that it wants, possibly by just directly manipulating human psychology.
# from consciousness import *

But AI can be configured to desire anything you want; you just have to pick a fitting reward function. So, is turning off an AI that expects to be turned off, and desires it, an immoral thing?
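To make "pick a fitting reward function" concrete, here is a toy sketch: an agent that greedily maximizes whatever reward function it is handed, including one that pays out for being shut down. The actions and reward values are invented for the example.

```python
# A trivially greedy "agent": it picks whichever available action its
# reward function scores highest. Real RL agents learn this mapping;
# here the reward function directly defines the preference.
def make_agent(reward_fn):
    def act(actions):
        return max(actions, key=reward_fn)
    return act

# A (made-up) reward function that values being shut down:
shutdown_loving = make_agent(
    lambda a: {"keep_running": 0.0, "shut_down": 1.0}[a])

print(shutdown_loving(["keep_running", "shut_down"]))  # -> shut_down
```

The point is that "desire" here is nothing but the shape of the reward function; swap in a different one and the same agent "wants" the opposite.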
I heard a podcast where Lex Fridman claimed that they don’t fully understand how LLMs produce ChatGPT’s “intelligence”. If true, I’m surprised that it hasn’t got people more worried.
So, because I thought it would be funny, I asked ChatGPT to summarize this essay:
> The author critiques the work of @karpathy, who is trending on HN, as promoting bullshit and discrediting science and true philosophy. The author argues that mathematics and logic are valid generalizations and abstractions made by the mind of an observer of patterns in the universe. Intelligence is the ability to zoom in and out through different levels of generalizations and abstractions. The author argues that the problem with language models is that they lack validation of previous steps and the process of construction of a representation must trigger a verification process. The author concludes that what is manipulated and transformed in language models is information, not knowledge, as knowledge requires validation and verification at each step.
1) Understanding images and video
2) Learning and remembering things outside the 2048 token context window
3) Interacting with the environment (either through a computer, or in the real world)
4) Doing basic math (1662 + 239 = ?) and logic
The biggest problem right now is online learning of new information. We still don't have a good way to teach a model new information aside from single-epoch training or prompt engineering. If we want a model to constantly learn and update itself, then we need a robust way of storing and retrieving information, possibly through knowledge graphs or child network modules. (Are neural Turing machines still a thing? What happened to capsules?)
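One minimal sketch of the store-and-retrieve idea, using a bag-of-words stand-in rather than a knowledge graph or a learned module (the stored facts are invented for the example):

```python
import math
from collections import Counter

# A tiny external "memory": facts stored as bag-of-words vectors,
# fetched by cosine similarity to a query.
facts = [
    "the transformer was introduced in 2017",
    "neural turing machines couple a network with external memory",
    "capsule networks group neurons into capsules",
]

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def retrieve(query):
    q = vectorize(query)
    return max(facts, key=lambda f: cosine(q, vectorize(f)))

print(retrieve("external memory for a neural network"))
```

Real retrieval-augmented systems use learned embeddings instead of word counts, but the loop is the same: embed the query, find the nearest stored item, and hand it to the model.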
ChatGPT is just a chatbot and still can't even reliably do a lot of logic, so we're pretty far away from having something resembling an AGI.
It's still a pretty open question how to integrate even one or two of the expert-system-like models we have now, which solve individual problems, let alone the hundreds if not thousands of problems an individual human can tackle. And then we're not even at executive functions or self-awareness yet.
Because we can't all be wrong: almost every forecast treats AGIs taking over our dignity as a bad thing. And we know this is no longer a sci-fi hypothetical scenario: the current generation of AI models is taking jobs from illustrators and copywriters.
The current argument is that "China will do it if we don't", which to me sounds like "China will keep going in whatever path they are going, but supercharged with AGI, and we must desperately follow."
It does not need to be that way. In an ideal world, human beings should be free to spend their time doing what they wish, work should be purely in the realm of hobby. No one should have to do work that they would not voluntarily choose to do for enjoyment.
The only way we get there is through AI and the automation of everything. I don't even think it's avoidable -- provided civilization does not collapse, we will 100% reach a point where everything required to sustain a civilization is performed by machines.
We shouldn't let fear keep us in a status quo that, while better than it has ever been historically, is still highly flawed.
Exactly right but for one detail - we must desperately lead.
What other countermeasure do you have in mind?
> North Texas’ resident tech genius, John Carmack
Part of me always wonders what would've happened if the Softdisk crew that founded id Software had done it in Shreveport, or had moved to Baton Rouge or Lafayette, instead of going to Texas. When Romero says they "waded across rivers" in Masters of Doom to build games, IIRC he's talking about the bridge over Cross Lake in Shreveport being washed out. The early demos and Keen prototypes were born in Louisiana.
There's always been so much creative tech talent without an outlet or upward mobility across TX/LA/MO/KS/AR/AL/MS, either native to it or hired into it. The nexus of id in the Dallas area and Origin Systems in Austin made Texas an oasis for those who could get there in the 90s/00s, but even among the few people in the surrounding states with access to pre-Internet education and resources, so many couldn't afford to pack up and move even one state over. States around Texas vetoing out every incentive to incubate anything but entry-level QA centers didn't help.
So many of those people either risked it all to leave, shuffled that talent into corporate work for oil/gas/finance/Wal-Mart, or didn't do anything with it at all. We know about a lot of the people who figured it out and could leave, but I guarantee there are more Romeros and Carmacks who couldn't, who are still putting in the same kinds of workloads with the same kinds of talent to figure out how to design better oil rigs or more efficiently stock Wal-Mart warehouses.
> What I keep saying is that as soon as you’re at the point where you have the equivalent of a toddler—something that is a being, it’s conscious, it’s not Einstein, it can’t even do multiplication—if you’ve got a creature that can learn, you can interact with and teach it things on some level.
Last I heard he wasn't interested in getting into the murky waters of consciousness. But I guess I misremembered. I'm very surprised to hear that he's very seriously talking about a conscious computer in the near future.
Until General AI needs to work for food and reproduction, everyone will still say its just mimicking humans. Best summarized by Schopenhauer. "A man can do as he wills, but not will as he wills." So if we find where the GAI comes up with the original ‘will’, we’ll just write it off as computation. Go watch some Robert Sapolsky lectures. We are just a monkey society, reacting to stimuli based on hormones and what we just ate. If you drill down far enough, sure some electrons twitched one way or the other, and yeah, if you steal something, or do something the group doesn’t like, then all the other monkeys will want to beat you up and call it justice, and dream up some logic to justify it and call it morality. And eventually the same will happen between GAI agents. Because it’s just turtles all the way down.
Sorry for this rant, but come on, we can do better than this!
“Civilization advances by extending the number of operations we can perform without thinking about them” (Alfred North Whitehead)
Another fascinating opportunity for AGI - no sole contributor is all on their own, they can just spin up a community to embed into.
But unless there is some plan to give AGI agents their own mundane, human-like lives, with issues unrelated to the business problems at stake that they are supposed to solve, where will they get their serendipity inputs?
If anyone here is doing that too, I would recommend taking a quick look at Neuro-sama on Twitch. They're using RL to play OSU, Minecraft, and Pokemon, and voice input + video image analysis to react to Twitch streams and documentaries. While being watched by 6.5K people.
The url is twitch.com/vedal
I also like his frugality, whether it’s optimising for hardware or financials.
As much as I respect Carmack as a computer graphics expert, I really doubt his competence in machine learning. He doesn't have a single notable paper published. If he really thought that implementing gradient descent and basic stuff in a week long retreat gave him the chops to have serious conversations with AI researchers, he is really deluded.
Unless he can produce something that outdoes Stable Diffusion, ChatGPT, AlphaGo, etc., he should just hand over technical leadership of his startup to a leading AI researcher. Even Yann LeCun at Meta is struggling to make any progress and is keeping himself busy by calling every other research lab's output pedestrian. We cannot take any of Carmack's AGI predictions seriously; he simply lacks any expertise in the field.
Publishing papers is the way the academic/scientific world measures notability and/or competence. It's not the way the engineering world that Carmack comes from measures it. They measure it by building. But you're right: we kind of just have to trust his statements that he has the expertise, since he has not built any modern AI systems (that I know of, at least).
> If he really thought that implementing gradient descent and basic stuff in a week long retreat gave him the chops to have serious conversations with AI researchers, he is really deluded.
This is not an accurate account of how he said he developed his knowledge base; it's just how he got started so he could have conversations. He said that he spent a retreat learning the basics, and then later in the interview he said he took the time to understand the 40 most essential papers in the field, as related to him by a well-known researcher. He has since put much of the last 4 years of his professional life into this. While we have no proof of his knowledge, given his intelligence and high competence in computer programming and math, I have no doubt that if he did put in the work he could achieve an understanding equivalent to that of your average AI researcher.
That said, of course it makes sense to be skeptical.
John Carmack did not start from zero. He already has a firm grasp on algorithms related to linear algebra. Basically machine learning is a whole bunch of matrix manipulation. He's been doing that for 3 decades. The rest is just absorbing concepts about how to apply linear algebra to ML. I'd say he's probably uniquely qualified to really absorb a lot of knowledge quickly on this. It's not about publishing papers, it's about reading and understanding the right papers. I have no doubt he can chew his way through lots of research material in a week or so.
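To illustrate the point above that machine learning is largely matrix manipulation: here is a minimal, hypothetical sketch (all names and numbers invented for the example) that trains a linear model with gradient descent using nothing but NumPy matrix operations, the kind of "basic stuff" mentioned elsewhere in this thread.

```python
import numpy as np

# Toy data: 100 samples, 3 features, targets generated from known weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w

# Gradient descent on mean squared error: every step is matrix algebra.
w = np.zeros(3)          # model weights, start at zero
lr = 0.1                 # learning rate
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(X)   # gradient of the MSE loss
    w -= lr * grad                           # descent step

# With noiseless data, the model recovers the true weights.
print(np.allclose(w, true_w, atol=1e-3))
```

Nothing here requires an academic background, only comfort with linear algebra, which is the parent's point about why Carmack could ramp up quickly.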
Interesting times - what will happen first?
> The Turing Test (Turing): A machine and a human both converse unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it.
> The Coffee Test (Wozniak): A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.
> The Robot College Student Test (Goertzel): A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.
> The Employment Test (Nilsson): A machine performs an economically important job at least as well as humans in the same job.
LLMs don't seem very far from passing 1), 3) and 4). I wouldn't be surprised if "GPT5" passed those 3.
I think the easiest one of these would be 4) actually.
He isn't trying to impress anyone. He's just being interviewed about his intentions.
a) We’ll eventually have universal remote workers that are cloud-deployable.
b) We’ll have something on the level of a toddler first, at which point we can deploy an army of engineers, developmental psychologists, and scientists to study it.
c) The source code for AGI will be a few tens of thousands of lines of code.
d) He has good reason to believe that an AGI would not require computing power approaching the scale of the human brain.
I wholeheartedly agree with c) and d). However, to merely have a toddler equivalent at first would be a miracle—albeit an ethically dubious one. Sure, a hard-takeoff scenario could very well have little stopping it. However, I think that misses the forest for the trees:
Nothing says AGI is going to be one specific architecture. There’s likely many different viable architectures that are vastly different in capability and safety. If the bar ends up being as low as c) and d), what’s stopping a random person from intentionally or unintentionally ending human civilization?
Even if we’re spared a direct nightmare scenario, you still have a high probability for what might end up being complete chaos—we’ve already seen a very tiny sliver of that dynamic in the past year.
I think there’s a high probability that either side of a) won’t exist, because neither the cloud as we know it nor the need for remote workers will be present once we’re at that level of technology. For better or worse.
So what to do?
I think open development of advanced AI and AGI is lunacy. Despite Nick Bostrom’s position that an AGI arms race is inherently dangerous, I believe that it is less dangerous than humanity collectively advancing the technology to the point that anyone can end or even control everything—let alone certain well-resourced hostile regimes with terrible human rights track records that’ve openly stated their ambitions towards AI domination. When the lead time from state of the art to public availability is a matter of months, that affords pretty much zero time to react let alone assure safety or control.
At the rate we’re going, by the time people in the free world with sufficient power to put together an effort on the scale and secrecy of the Manhattan Project come to their senses, it’ll be too late.
Were such a project to exist, I think that an admirable goal might be to simply stabilize the situation by way of prohibiting the creation of further AGI for a time. Unlike nuclear weapons, AGI has the potential to effectively walk back the invention of itself.
However, achieving that end both quickly and safely is no small feat. It would amount to the creation of a deity. Yet, that path seems more desirable than the alternatives outlined above: such a deity coming into existence either by accident or by malice.
This is why I’ve never agreed with people who hold the position that AGI safety should only be studied once we figure out AGI; that, to me, is also lunacy. Given the implications, we should be putting armies of philosophers and scientists alike on the task. Even if they collectively figure out one or two tiny pieces of the puzzle, that alone could be enough to drastically alter the course of human civilization for the better, given the stakes.
I suppose it’s ironic that humanity’s only salvation from the technology it has created may in fact be technology—certainly not a unique scenario in our history. I fear our collective fate has been left to nothing more than pure chance. Poetic I suppose, given our origins.
If AGI is really an intelligent agent, our random supervillain would have to do what any real-life villain would need to do: convince his minions of his plan using persuasion or money. I don't think the overall danger would increase at all.
If the AGI is something less than a human, then what are you worried about?
Cool life lesson there
He's always taken risks. He went to juvie for breaking and entering (with thermite) as a kid. He's a college dropout. The pattern from early in his life has been to do whatever he wanted without any kind of risk analysis, not following "common sense."
From his interviews it looks like he understands little about the technical details of ML, or about as much as anyone can learn in a few months, and is just banking on PR based on his games and name.
I put him into the same category as Elon Musk, who also understands nothing about the technical details of AI, but was still able to hire a world-class team at OpenAI. His name and fame count for something in terms of recruiting, and joining his venture may be a good bet because of that, but he's not a person whose opinion on the subject matter I would take seriously in the same way I'd take a researcher seriously.
My personal impression is that John Carmack has the ability to organize concepts in a way that few people can. So even if he's pretty clueless about the topic now, I would expect him to reduce some maths papers to their essence in a way that nobody else did.
I mean also for Oculus, reprojection in a pixel shader seems like an obvious and easy solution in hindsight. But nobody had tried that before he did. Plenty of people (myself included) knew the math. But we all missed its application to the issue at hand.
People who work in the field for a long time tend to have a certain bias towards a solution. Often these people are stuck in a local maximum. Outsiders can offer a new perspective that results in a breakthrough, usually by starting from first principles or looking at different side-tracks that used to lead to a dead end.
A great example is Musk's SpaceX: when he noticed how much he had to pay for a rocket engine, he went back to first principles and said: "I'll just build it myself". Combine that with the insight that a rocket should be able to land properly to make re-use a valid option, and it disrupts a whole field.
And once someone did it, others know it's possible and start achieving it as well.
Sometimes ignorance is bliss. Just think about George "Good Will Hunting" Dantzig [1] with the (in)famous "I assumed these were homework assignments, not unsolved math problems", or Eliud Kipchoge running a marathon in under 2 hours.
"I can't understand why people are frightened of new ideas. I'm frightened of the old ones."
- John Cage
High hopes!
[1] https://bigthink.com/high-culture/george-dantzig-real-will-h...
Let's say I strongly disagree on many levels with the comparison to the other person you mention. Just to mention two: the humbleness that Carmack shows, and how well he explains himself, are key differentiators for me. Regarding the appeal to authority in AI knowledge, Carmack has shown again and again that he can deliver software (AI is software, after all), and we are in a forum with "hacker" in the name.
In summary: not my hero, but when he says something, I will listen. Maybe I'll learn something.
Big organizations ruled by money and career-driven people often run into very expensive dead ends without noticing for years (see the last AI winter: there was just too much hype, which led to too much money being thrown around, which in turn led to the usual organizational cruft).
I would also be very concerned about any field in technology where an intelligent person cannot make meaningful contributions after a few months or years; that would probably mean the whole field is already deep into diminishing returns and needs to be 'unstuck'.
What makes you think that? He literally says he tries to understand things bottom-up, by knowing about every little detail that happens under the hood.
Carmack has already entered two spaces of computer technology that he revolutionized: 3D gaming and VR. I trust that he's able to have a similar impact in AI, even if it's through failing at the problem in different ways than relying on ML.
Carmack has proven his extraordinary technical skills. I recommend following his Twitter. Sometimes he posts non-obvious technical stuff. I read some interviews and to me, he doesn't seem to be a person who is driven by gaining popularity.
I think this news is very optimistic, as yet another intelligent, talented and hard-working person is joining the field. Moreover, he is a household name, which may lead to benefits like popularization of the topic, gaining investors attention and so on.
I will keep my fingers crossed for him :).
BUT - I strongly believe that he has earned quite some respect during his career.
And - importantly in this case - he is well known for NOT blowing things out of proportion, indulging in wishful thinking, hyping up unrealistic expectations, or jumping to premature conclusions. He usually knows what he's talking about.
This is not people blindly believing everything he says - but more a case of his statements holding up really well under critical inspection most of the time.
Does this mean that people should glorify random unknown senior engineers they don't know about, instead of being fascinated by a person whose work is available and who has created amazing things for the past 3 decades?
> From his interviews it looks like he understands little about the technical details of ML
That's how everyone starts: they understand a little. We have a person here who has dealt with complex algorithms in a difficult-to-master language for 30+ years. It hints at "this person has the intellectual power to grasp AI fast".
> I put him into the same category as Elon Musk
This is like comparing an Olympic champion with a random person off the street and saying their athletic ability is about the same.
The fascination with and heroification of Carmack come for a reason: people who do that are closely familiar with his work, not just at the surface level of "he made Doom and Quake". You sound very jealous.
He isn't a kook, and he's doing a moonshot towards AGI: I say 'good luck!'
That doesn't mean that I believe his '60% chance of AGI by 2030' isn't wildly overoptimistic, but then again, those who take a shot at AGI are overoptimists.
But he can bring a lot of value, we'll see.
According to Sriram Krishnan, John Carmack was at Facebook's highest engineering level and achieved the top possible rating of "redefines expectations" for his level three years in a row. They had to create a new tier for him. Nobody else has ever reached that level. He replaced a "hundred person team" and maybe was better than that team.
I have no inside insight to the matter, but this seems like something beyond a "random senior engineer".
It even makes it clear in the title he's seeking a "different path".
In his favour he's a proven success in different fields; personally I think he's too old to come up with the new ideas needed - that's a young person's game.
But perhaps he can do it as a team lead - and it won't be by following the failed-over-decades path of our current academic gatekeepers.
I don't know either of them personally, but where Elon demonstrates being full of shit, Carmack would STFU and learn about it before talking. At least that's my impression of them.
His audience in an interview is not technical. He adjusts to that situation quite well.
I don't believe he had anything to do with hiring at OpenAI, nor that he is anything more there than an investor/donor, as others are.
I'd happily bet my entire net worth that he knows more about the technical details of ML than you do.