https://www.instructables.com/How-to-Build-a-Fusion-Reactor-... https://makezine.com/projects/nuclear-fusor/ https://fusor.net/board/viewtopic.php?t=3247
The "artificially intelligent" aspect is trivial.
Reminds me of all these YouTubers making video essays parroting something they have just learned, without actually mastering the subject.
And actually, I'm not sure the switch from one ubiquitous digital format to another, lossier one is the big step change here.
HudZah is seemingly using the AI as a search engine for the reference materials he collected for the project, which is a legitimate use of the technology.
So how would we feel if the headline for this (and many other articles) was:
X uses search engine to find data required to do Y?
See fusor.net.[1] It's unlikely this rig is doing any fusion. It's just a plasma created by high voltage, like a neon lamp. He's not putting in deuterium gas. He's not detecting neutrons. Most of the people who try to do this get the blue glow, but not neutrons.
The main hazard is then the high voltage.
These things are not energy producers. It takes about a billion times as much energy input as comes out in neutrons. They can be useful neutron sources for imaging and research.
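The "billion times" figure holds up to back-of-the-envelope arithmetic. A minimal sketch, using assumed, typical-hobbyist numbers (the 500 W input and 10^6 neutrons/s are illustrative, not from the article):

```python
# Back-of-the-envelope energy balance for an amateur fusor.
# Assumed figures (typical hobbyist values, not from the article):
#   - wall-plug input power: ~500 W
#   - neutron output: ~1e6 neutrons/s (a good amateur result)
#   - each D-D fusion neutron carries 2.45 MeV

EV_TO_J = 1.602e-19                   # joules per electron-volt

input_power_w = 500.0                 # assumed supply input power
neutron_rate = 1e6                    # assumed neutrons per second
neutron_energy_j = 2.45e6 * EV_TO_J   # 2.45 MeV per D-D neutron

output_power_w = neutron_rate * neutron_energy_j  # fusion power carried by neutrons
ratio = input_power_w / output_power_w

print(f"fusion power out: {output_power_w:.2e} W")
print(f"input/output ratio: {ratio:.2e}")  # on the order of 1e9
```

Even with generous assumptions for the neutron rate, the ratio stays around a billion to one.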
We now have good evidence that AI-assisted learning can be substantially more effective than traditional methods, at incredibly low marginal cost.
https://news.harvard.edu/gazette/story/2024/09/professor-tai...
https://blogs.worldbank.org/en/education/From-chalkboards-to...
I respectfully disagree. I don't think it's wrong or useless to get an LLM to help; I recently did a similar project myself (though I manually fact-checked all the high-voltage material).
If you find a guide that explains too much then you can skip the parts you know. If it doesn't explain something you don't know yet then you recursively look that stuff up. It doesn't matter if it's a book or a teacher or a search engine or an LLM.
It's just not good journalism here, because evidently this project has been done lots of times in similar circumstances without LLMs.
It sounds like he got exactly what he wanted to.
>I must admit, though, that the thing that scared me most about HudZah was that he seemed to be living in a different technological universe than I was. If the previous generation were digital natives, HudZah was an AI native.
The one thing I'm somewhat ambivalent about is that LLMs are extremely atomizing. It's incredibly easy to go offline for months on end when you're talking to your computer.
If I got hit by a bus in 2005 when I was contributing to Linux and Postgres there were people who would pick up what I was doing and carry it forward.
If I get hit by a bus today, unless someone went through my chats, no one would really have any idea what I'd been working on and carry it on. I have a suspicion that a ton of the best and brightest have gone dark for this reason in the last two years.
I can't exactly blame them.
In the latest furor over DeepSeek R1, the conversation online was _substantially_ worse than what you'd get from feeding the original paper into R1 and talking to it.
This was the first time I genuinely wondered what the point of reading groups, message boards, and the like is. A model that you can run locally for $6,000 at 20 tokens/s beat however many thousands of people because it actually spent the time to read what it talked about.
Suffice it to say, that's not how you will understand the behavior of others, especially in non-trivial situations, other cultures, and so on. Just accept that people are different and that you may very well never understand how they see the rest of the world and other people.
"Hikikomori" is not a new thing, long pre-dates AI, but I think similar things are starting to be observed outside of Japan. Internet-enabled entertainment competes with the real world and occasionally achieves total victory.
"It also excited me. Just spending a couple of hours with HudZah left me convinced that we’re on the verge of someone, somewhere creating a new type of computer with AI built into its core. I believe that laptops and PCs will give way to a more novel device rather soon."
"I’m not sure that people know what’s coming for them. You’re either with the AIs now and really learning how to use them or you’re getting left behind in a profound way."
It would be worth following the money of this submarine article.
https://en.m.wikipedia.org/wiki/Fusor
Neutron radiation from fusors is particularly dangerous.
Paste the post text to chatgpt and ask chatgpt instead.
Then weep.
Also, I would like to see some evidence of how dangerous the AI-inspired fusor experiment actually was. I recently read here that "hiking in jeans" is dangerous.
Fusors are somewhat dangerous: they use extremely high voltage, in the thousands to hundreds of thousands of volts, and X-rays become an issue above around 30,000 volts. Still, they are frequently built by high school students, and I'm not aware of any deaths.
Lots of details are available here: https://fusor.net/board/viewtopic.php?t=4843
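The 30,000-volt threshold follows directly from the physics: an electron falling through a potential of V kilovolts gains V keV, which caps the energy of the bremsstrahlung X-rays it can produce. A small sketch (the voltages below are illustrative, not from any particular build):

```python
# The hardest bremsstrahlung photon an electron can radiate after
# falling through a potential V carries energy e*V, so a 30 kV
# supply can make x-rays up to 30 keV.
# Voltages below are illustrative, not from any specific fusor.

HC_KEV_NM = 1.2398  # Planck constant times c, in keV*nm

def max_photon_kev(voltage_kv: float) -> float:
    """V kilovolts of potential gives the electron V keV of energy,
    which bounds the hardest photon it can emit."""
    return voltage_kv

def photon_wavelength_nm(energy_kev: float) -> float:
    """Wavelength of a photon with the given energy (lambda = hc/E)."""
    return HC_KEV_NM / energy_kev

for kv in (10.0, 30.0, 75.0):
    e = max_photon_kev(kv)
    print(f"{kv:5.0f} kV supply -> x-rays up to {e:.0f} keV "
          f"(~{photon_wavelength_nm(e):.3f} nm)")
```

At 30 keV and above, these photons are hard enough to escape the chamber walls, which is why shielding becomes a real concern past that voltage.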
> Eventually, however, HudZah wore Claude down. He filled his Project with the e-mail conversations he’d been having with fusor hobbyists, parts lists for things he’d bought off Amazon, spreadsheets, sections of books and diagrams. HudZah also changed his questions to Claude from general ones to more specific ones. This flood of information and better probing seemed to convince Claude that HudZah did know what he was doing, and the AI began to give him detailed guidance on how to build a nuclear fusor and how not to die while doing it.
> It’s more that I was watching HudZah navigate his laptop with an AI fluency that felt alarming to me. He was using his computer in a much, much different way than I’d seen someone use their computer before, and it made me feel old and alarmed by the number of new tools at our disposal and how HudZah intuitively knew how to tame them.
I often feel like a complete newb with AI tools, but I don't really know how to level up. I'd love to watch someone like this, just to see what's possible.
He knew what a fusor is, knew how to find more information, already had contact with hobbyist groups, and had been warned how dangerous it can be.
At this stage, AI is just a glorified search engine.
> it made me feel old and alarmed by the number of new tools at our disposal
> I believe that laptops and PCs will give way to a more novel device rather soon.
> So, er, like, good luck if you’re not paying attention to this stuff.
All I see is some serious effort to foist some FOMO on me. "Something big is coming soon" is repeated ad nauseam in this article and many, many others.
FortiGuard Intrusion Prevention - Access Blocked Web Page Blocked
You have tried to access a web page that is in violation of your Internet usage policy. Category Pornography URL https://www.corememory.com/
> FortiGuard Labs provides (…) AI-powered threat intelligence (…)
Ah, that explains it. The article is just a Substack with a custom domain, this sounds like an error on their part, not something the author can or should concern themselves with.
I'm really not sure what to make of that.
"*BS 5824:2013* is a British Standard titled "Wall and floor tiling – Design and installation of ceramic, natural stone, and mosaic tiling in normal conditions – Code of practice". It provides guidelines for the *design, materials, installation, and testing* of tiling systems in interior and exterior applications..."
What? OK, let's check the standard on the website of the actual body that publishes it: BSIgroup.com
"BS 5824:1980 Specification for low voltage switchgear and controlgear for industrial use. Mounting rails. C-profile and accessories for the mounting of equipment... Cost £149"
Oh shit, the manufacturer's website is completely wrong, and so is the AI. They literally have no clue what they are talking about. 1. Let's not specify their fire curtains in my building. 2. Don't trust the AI.
My conclusion: if the info you need to do your job is behind a paywall or only in expensive textbooks, then the AI hasn't seen it and will make something up that's probably wrong. And you probably shouldn't let it write your website, or you will look like an idiot...
How do you think the "real scientists(TM)" work? They use AI tools too.
Do you really think you can design a tokamak or a stellarator with pen and paper?
What good engineers do is click the "topological optimization" button in their physics simulator, and then they build the machine according to the plan the computer makes.
Do you really think DeepSeek can't use your COMSOL or Ansys multiphysics tool?
The finite element method was invented in the 1950s; our more modern AIs use variants of physics-informed neural networks to solve the differential equations of physics.
An LLM without reinforcement learning won't invent your flying saucer from reading stuff on the internet, but let your local AI play for a day with an MHD simulator https://www.jp-petit.org/science/mhd/m_mhd_e/m_mhd_e.htm and the sky is not the limit.
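For a sense of what "solving the differential equations of physics" means at its simplest, here is a toy finite-difference solve of a 1D Poisson problem, the kind of classical baseline that both FEM codes and PINNs target. This is a deliberately simple stand-in, nothing like a real MHD or tokamak code:

```python
# Toy illustration of classical numerical PDE solving: the 1D Poisson
# problem u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0, discretized
# with second-order finite differences and solved as a linear system.
# Real plasma/MHD simulations are vastly more involved.
import numpy as np

n = 99                       # interior grid points
h = 1.0 / (n + 1)            # grid spacing
x = np.linspace(h, 1 - h, n)

# Choose f so the exact solution is known: u(x) = sin(pi x)
f = -np.pi**2 * np.sin(np.pi * x)

# Tridiagonal Laplacian: (u[i-1] - 2*u[i] + u[i+1]) / h^2 = f[i]
A = (np.diag(np.full(n, -2.0))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

u = np.linalg.solve(A, f)
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(f"max error vs exact solution: {err:.2e}")  # O(h^2), roughly 1e-4
```

A PINN attacks the same equation by training a network whose loss penalizes the PDE residual instead of assembling a linear system; the contrast is the point here.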
No, but you can draw a very nice strawman with pen and paper.
> The "topological optimization" button on their physical simulator,
An engineering tool for topological optimization is similar to "AI tools" only in the sense that both crunch a big pile of numbers. Saying they "are the same" is like saying a shark is the same as a kangaroo.
Besides, for tokamak and others, I doubt that off-the-shelf tools were enough for them. I would bet that they had to build their own tools anyway.
> Do you really think deep-seek can't use your COMSOL or Ansys multiphysics tool
A monkey can "use" those tools, as long as they have bright buttons and it's conditioned to press them for food. That doesn't mean I would trust the results it comes up with.
> but let your local AI play for a day with a MHD simulator ... and the sky is not the limit.
The limit is still death, which can come much much faster than the sky.