It's good to see this, especially since they acknowledge that open weights is not equal to open source.
In fact, if you open the PDF file and navigate to that section, the content is barely relevant at all.
The USA supposedly has the most data in the world. Companies cannot (in theory) train on integrated sets of information, but the USA, and China to some extent, can train on large amounts of information that is not public. The USA in particular has been known to keep a vast repository of metadata (data about data) about all sorts of things, and this data is very refined and organized (PRISM, etc.).
This allows training for purposes that might not be obvious when observing the open weights or the source of the inference engine.
It is a double-edged sword though. If anyone is able to identify such non-obvious training inserts and extract information about them or prove they were maliciously placed, it could backfire tremendously.
Today, Google and Apple both already sell AI products that technically fall under this definition, and did without government "encouragement" in the mix. There isn't a single actionable thing mentioned that would promote further development of such models.
The controllers of the whole system want open weights and source to make sure models aren’t going to expose the population to unapproved ideas and allow the spread of unapproved thoughts or allow making unapproved connections or ask unapproved questions without them being suitably countered to keep everyone in line with the system.
It seems the US is going to thrive with the former but naively stick its head in the sand with the latter.
We’ll cede economic leadership, and wonder in 20 years what happened as other countries lead in energy. Even worse, the administration's stance will encourage US energy companies to pursue bad strategies, letting them avoid transforming their business. In 10-20 years they'll be bankrupt and the US will probably have to bail them out for strategic reasons.
The ice caps may be worse off for it, but there's little reason to think the USA will cease to "lead in energy" anytime soon.
If the rest of the world standardizes on solar+battery, demand for oil goes down, and so will the price. Which in turn makes US-produced oil not cost effective to extract, and domestic energy production collapses in favor of cheap foreign imports.
And then we're worse off in several different ways.
Increased Mortality: Projections indicate an additional 14.5 million deaths by 2050 due to climate-related impacts like floods, droughts, heatwaves, and climate-sensitive diseases (e.g., malaria and dengue).
Economic Losses: Global economic losses are predicted to reach $12.5 trillion by 2050, with an additional $1.1 trillion burden on healthcare systems due to climate-induced impacts. One study estimates that climate change will cost the global economy $38 trillion a year within the next 25 years.
Displacement and Migration: Over 200 million people may be displaced by climate change by 2050, with an estimated 21.5 million displaced annually since 2008 by weather-related events. In a worst-case scenario, the World Bank suggests this figure could reach 216 million people moving internally due to water scarcity and threats to agricultural livelihoods. Some researchers predict that 1.2 billion people could be displaced by 2050 in the worst-case scenario due to natural disasters and other ecological threats.
Food and Water Insecurity: Climate change exacerbates food and water insecurity, leading to malnutrition and increased disease burden, especially in vulnerable populations. For example, a significant increase in drought in certain regions could cause 3.2 million deaths from malnutrition by 2050. An estimated 183 million additional people could go hungry by 2050, even if warming is held below 1.6°C.
Mental Health Impacts: Climate change contributes to mental health issues like anxiety, depression, and PTSD, particularly in vulnerable populations and those experiencing climate disasters or chronic changes like drought. Extreme heat has been linked to increased aggression and suicide risk. Studies also indicate that children born today will experience a significantly higher number of climate extremes than previous generations, potentially impacting their mental well-being and sense of future security.
Inequality and Vulnerability: Climate change disproportionately affects vulnerable populations, including low-income individuals, people of color, outdoor workers, and those with existing health conditions, worsening existing health inequities and hindering poverty reduction efforts.
Not just strict energy production. Especially when it comes from sources of energy increasingly infeasible and unpopular.
Having a non-emitting form of base load is important, and nuclear has a place there, but in many applications it's just not cost competitive with renewables.
Rooftop solar starts paying back immediately and can be deployed in $20k tranches. It also requires no additional grid infrastructure and decreases demand on non-generating grid infrastructure.
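To make the "$20k tranches" claim concrete, here is a back-of-envelope simple-payback calculation. Every number below (system size, yield, retail rate) is an assumption chosen for illustration, not a figure from the comment:

```python
# Back-of-envelope rooftop solar payback. All inputs are assumptions.
system_cost_usd = 20_000        # one "tranche" as described
system_size_kw = 8              # a plausible residential array at that price
yield_kwh_per_kw_year = 1_400   # sunny-climate annual yield per kW installed
retail_rate_usd_per_kwh = 0.25  # avoided retail (not wholesale) rate

annual_savings = system_size_kw * yield_kwh_per_kw_year * retail_rate_usd_per_kwh
simple_payback_years = system_cost_usd / annual_savings
print(f"annual savings: ${annual_savings:,.0f}")
print(f"simple payback: {simple_payback_years:.1f} years")
```

Under these assumptions the savings start on day one, with full payback in roughly seven years; sunnier roofs or higher retail rates shorten that further.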
Pretty sure it’s rooftop solar that wins the future.
Maybe if fusion was viable, that'll change, but until then nuclear just doesn't make any sense.
Uranium mining produces significant toxic waste (tailings and raffinates). Fuel processing produces toxic waste, typically UF6. There is some processing of UF6 to UF4 but that doesn't solve the problem and it's not economic anyway. Fuel usage produces even more waste that typically needs to be actively cooled for years or decades before it can be forgotten about in a cave (as nuclear advocates argue).
And then who is going to operate the plant? This administration in particular is pushing for further nuclear deregulation, which is terrifying. You want to see what happens without regulation? Elon Musk's gas turbines in South Memphis with no Clean Air permits that are spewing pollution [1].
That's terrifying because the failure modes for a single nuclear incident are orders of magnitude worse than any other form of power plant. The cleanup from Fukushima requires technologies that don't exist yet, will take decades or centuries and will likely cost ~$1 trillion once it's over, if it ever is [2].
And who's going to pay for that? It's not going to be the private operator. In fact, in the US there are laws that limit liability for nuclear accidents. The industry's self-insurance fund would be exhausted many times over by a single Fukushima incident.
And then we get to the hand waving about Chernobyl, Fukushima and Three Mile Island. "Those are old designs", "the new designs are immune to catastrophic failure" or, my favorite, "Chernobyl was because of mismanagement in the USSR", as if there wouldn't be corner-cutting by any private operator in the US.
And let's just gloss over the fact that we've built fewer than 700 nuclear power plants, yet had 3 major incidents, 2 of them (Chernobyl and Fukushima) have had massive negative impacts. The Chernobyl absolute exclusion zone is still 1000 square miles. But anything negative is an outlier that should be ignored, apparently.
And then we get to the impact of carbon emissions in climate change but now we're comparing the entire fossil fuel power industry vs one nuclear plant. It's also a false dichotomy. The future is hydro and solar.
and then we get to the massive boondoggle of nuclear fusion, which I'm not convinced will ever be commercially viable. Energy loss and container destruction from fast neutrons is a fundamental problem that stars don't have because they have gravity and are incredibly large.
I have no idea where this blind faith in nuclear comes from.
[1]: https://www.politico.com/news/2025/05/06/elon-musk-xai-memph...
[2]: https://cleantechnica.com/2019/04/16/fukushimas-final-costs-...
Is it? Sure it helps sell chips, but where is it actually driving measurable efficiency improvements?
(And let's ignore the fact that humanity barely managed to organize anything that held even a mere 1000 years)
The only country that the US needs to worry about beating them in the clean energy race is China. They’re building energy plants like crazy (though worth noting, not all of their plants are clean energy).
Can you clarify what leading in energy means? And what concerns do you have?
Do you mean we, in the U.S., are in a tarpit of regulations and red tape that makes setting up a nuclear power plant impossible? Or something else?
IMHO, leading in energy also needs to take into account where that energy takes us and what it unlocks. I immigrated to the U.S. so I am extremely bullish so do consider that below.
My California perspective is that energy is going to become even more decentralized. I have not paid an electric bill in years and get a check from my utility once a year where they pay me wholesale rates for my net export. I net export because I rarely use any meaningful energy at night that my 5 kWh battery pack cannot provide. Once battery prices fall even further, I will dump everything into my local storage and draw no gross power from my utility at all. For all practical purposes, I will be off grid.
Anyone in California has the technological ability to get there as well. The utilities dump GWh of solar energy because we produce so much!
The issue we have in the U.S. is one of horrible policies and regulation.
Your typical townhouse in the city block isn't going to be able to put 20 panels on their roof because their HOA is going to throw a fit. The owner won't be allowed to install it themselves and would have to pay an electrician tens of thousands of dollars because the city isn't going to permit it otherwise. The obstacle of installing $5k worth of parts is incredibly disappointing.
From my perspective, technologically, solar energy is going to become cheaper as storage continues to fall in price.
This will empower increasing productivity. In my case, once the GPU market becomes consumer friendly and less constrained, or fundamentally different CPU-friendly LLMs are released (though I can't imagine that possibility yet), I will buy more GPUs and increase my self-hosted LLM capacity. As of right now I am getting "Insufficient capacity" errors from AWS attempting to launch a g6.2xlarge cluster, and puny 24GB GPUs cost so much that renting from AWS remains the better choice. The responses from the coding models blow my mind. They often meet or beat the kind of code I would expect from a junior engineer I would have to pay $120k/yr for, and that would be a cheap engineer in SoCal. A GPU cluster, including running costs, would be a fraction of that, so I would be able to expand quicker with less.
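The cost comparison above can be sketched with round numbers. The hardware price, amortization period, power draw, and electricity rate below are all assumptions for illustration:

```python
# Back-of-envelope: self-hosted GPU cluster vs a $120k/yr junior engineer.
# All inputs are assumed, illustrative numbers.
gpu_count = 4
gpu_price_usd = 2_000           # a used 24 GB card, assumed price
amortization_years = 3
power_kw = gpu_count * 0.35     # ~350 W per card under sustained load
hours_per_year = 8760
electricity_usd_per_kwh = 0.15

hardware_per_year = gpu_count * gpu_price_usd / amortization_years
power_per_year = power_kw * hours_per_year * electricity_usd_per_kwh
total = hardware_per_year + power_per_year
print(f"cluster cost: ~${total:,.0f}/year vs $120,000/year for an engineer")
```

Even with the cluster running flat out all year, the annual cost comes out around 4% of the junior engineer's salary under these assumptions.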
Whole offices are going to become more compact and continue to become decentralized or even remote. Their carbon footprint will then drop to practically zero (no office security patrol, no HVAC, no heating, etc.). More people will be able to start businesses (higher GDP) with less, increasing the GDP per unit of CO2 emissions.
My childhood friends in the E.U. who are in the same space I am in are less enthusiastic. One friend in Germany who bought a hundred PV panels is not happy at all.
So which country will lead in energy and what would they be doing?
My boomer boss thinks writing tests is unnecessary and slows shipping down. It might be true, but it fails to appreciate the full scope of the problem.
For open source/open weight models it's particularly important because until now there wasn't a government-level strong voice countering people like Geoff Hinton's call to ban open source/open weight AI, like he articulates here: https://thelogic.co/news/ai-cant-be-slowed-down-hinton-says-...
The US generated an additional 64 TWh of solar in 2024 compared to 2023. To get the same amount from nuclear you would need to build 5 large reactors in one year.
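A quick sanity check of the "5 large reactors" equivalence, assuming a large reactor is around 1.5 GW running at a 90% capacity factor (both figures are assumptions; a smaller 1.1 GW unit would push the count closer to 7):

```python
# How many large reactors match 64 TWh of added annual generation?
added_solar_twh = 64
reactor_gw = 1.5        # assumed size of a "large" reactor
capacity_factor = 0.9   # typical for nuclear
hours_per_year = 8760

twh_per_reactor = reactor_gw * hours_per_year * capacity_factor / 1000
reactors_needed = added_solar_twh / twh_per_reactor
print(f"one reactor: {twh_per_reactor:.1f} TWh/year")
print(f"reactors needed: {reactors_needed:.1f}")
```

Under these assumptions the arithmetic lands at roughly 5.4 reactors' worth of annual output, consistent with the claim.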
As for land mass, we can re-use already spent land mass, like rooftops, parking lots, grazing farmland and such. Solar can also be placed on lakes.
So for the foreseeable future there is no actual need for new land to be dedicated to solar.
Solar: ~300-800 L/MWh [0]
Nuclear: ~3000 L/MWh [1]
0: https://iea-pvps.org/wp-content/uploads/2020/01/Water_Footpr...
1: https://www-pub.iaea.org/MTCD/Publications/PDF/P1569_web.pdf
David MacKay in "Sustainable Energy: Without the Hot Air" did a calculation circa 2010. To fulfill the world's energy needs back then, a desert solar area on the order of a million square kilometers (roughly a 1,000 km by 1,000 km square in the Sahara) would be sufficient. That sounds enormous, but it's a small fraction of the world's deserts, let alone its total land area, and panels have only become more efficient since then.
The challenge of course is storage and distribution, but yeah, in terms of land area, it's not much.
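Redoing the MacKay-style land-area arithmetic with round numbers (the demand figure and delivered power density below are assumptions in his spirit, not his exact inputs):

```python
# Land area needed to supply world energy demand from desert solar.
world_power_w = 18e12   # assumed ~18 TW average world primary demand
solar_w_per_m2 = 15     # assumed delivered power density of desert solar farms
sahara_km2 = 9.2e6      # approximate area of the Sahara

area_m2 = world_power_w / solar_w_per_m2
area_km2 = area_m2 / 1e6
print(f"area needed: {area_km2:,.0f} km^2")
print(f"fraction of the Sahara: {area_km2 / sahara_km2:.0%}")
```

So on the order of a million square kilometers, which is a large build-out but only a modest slice of one desert.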
Considering how little use there is for most of that land anyways, it seems like a good option to me.
Also AI training seems like the perfect fit for solar. Run it when the sun is shining. Inference is significantly less power hungry, so it can run base load 24/7.
Probably zero agricultural land if you mandate that all rooftops be solar, and that all parking lots be covered with solar roofs.
Solar is great for rooftops of houses; it's not really great for running a datacenter 24/7 without batteries.
I know Saudi Arabia, Morocco and China are all massively dumping panels into their deserts, and likely more places too. These are great places to put them, as there is less impact on the environment (less wildlife etc.) and it's pretty much always sunny during the daytime, so it's highly efficient per m² compared to colder, cloudier places.
Morocco already has an energy interconnection to Europe via Spain, AFAIK, though I think it is currently not used yet, so they are in a good position to leverage that as power demands surge across EU datacenters trying to compete in AI :'D (absolutely no clue if they will actually go that route, but it seems logical!)
If you want the PD companies to have a different blend, then they need carrots and sticks.
As we build out solar, daytime power will become cheaper than nighttime power.
Some people will eventually find it economical to time-shift their consumption to daytime hours, including saving any non-interactive computation for those hours, and shutting down unneeded compute at night.
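The time-shifting policy described above can be sketched as a toy scheduler that gates deferrable jobs on hour-of-day and spot price. The solar window and price ceiling are made-up illustrative values, not real tariff data:

```python
from datetime import datetime

# Toy policy: run non-interactive compute during daytime-solar hours,
# or whenever power happens to be cheap anyway. Thresholds are assumed.
SOLAR_WINDOW = range(9, 17)       # 09:00-16:59 local time
PRICE_CEILING_USD_PER_KWH = 0.10  # run batch work below this price regardless

def should_run_batch(now: datetime, spot_price: float) -> bool:
    """Decide whether a deferrable job should run right now."""
    return now.hour in SOLAR_WINDOW or spot_price <= PRICE_CEILING_USD_PER_KWH

print(should_run_batch(datetime(2025, 7, 1, 13, 0), spot_price=0.18))  # midday: run
print(should_run_batch(datetime(2025, 7, 1, 23, 0), spot_price=0.22))  # night, expensive: wait
```

A real version would consume a grid-price feed instead of constants, but the shape of the decision is this simple.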
People (at least on HN) seem to be in agreement that Europe is too regulatory and bureaucratic, so it feels fair to question the practicality of any American initiatives, as we do for European ones.
What does this document practically enact today? Is there any actual money allocated? Deregulation seems to be a theme, so are there any examples of regulations which have been cleansed already? How about planning? This document is full of directives and the names of federal agencies which plan to direct, so what are the actual results of said plans that we can see today and in the coming years?
Registering a company in the US (Delaware) can be achieved in as little as 1 hour.
Getting married in Germany, particularly between a German and a foreigner, is anything from a 6 month to 2 year process, involving significant expenses, notarization/translation of documents. Some documents expire after 6 months, so if the government bureaucrats are too slow you need to get new copies, translated again, notarized again, and try to re-submit.
This isn't protecting human rights, it's supporting a class of bureaucrats/notaries/translators/clerks and making life more difficult for ordinary people. It's also a form of light racism that targets foreigners/migrants by imposing more difficult bureaucratic requirements and costs on them compared to by birth citizens.
I can’t take this seriously, as recent actions by this administration directly contradict a few of these stated goals.
Or maybe I don’t want to, because this sounds dangerous to me at this time.
What red tape? Anyone can buy or rent GPUs and train stuff.
Well previously the Chinese were not able to, but that was changed recently:
* https://www.wsj.com/tech/nvidia-wins-ok-to-resume-sales-of-a...
* https://foreignpolicy.com/2025/07/22/nvidia-chip-deal-us-chi...
Yet at the same time,
> Preventing Woke AI in the Federal Government [...] LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI. [...] DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex. [1]
I don't understand how free speech can be protected while suppressing topics such as "unconscious bias" and "discrimination".
[1] https://www.whitehouse.gov/presidential-actions/2025/07/prev...
[1]: https://theconversation.com/how-do-you-stop-an-ai-model-turn...
[2]: https://www.theguardian.com/technology/2025/may/14/elon-musk...
[2]: https://www.theguardian.com/technology/2025/jul/09/grok-ai-p...
Anyone serious knows contradiction = lies.
Words are cheap, actions matter.
Have you been under a rock for the last 6 months as Trump tells Xi Jinping to hold his beer??
If foundation model companies want their government contracts renewed, they are going to have to make sure their AI output aligns with this administration's version of "truth".
This phrasing exactly corresponds to "politically correct" in its original meaning.
> I already am eating from the trashcan all the time. The name of this trashcan is ideology. The material force of ideology - makes me not see what I'm effectively eating. It's not only our reality which enslaves us. The tragedy of our predicament - when we are within ideology, is that - when we think that we escape it into our dreams - at that point we are within ideology.
So like making sure everyone knows that 2+2=5 and that we have always been at war with East Asia?
> In the AI context, DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex. DEI displaces the commitment to truth in favor of preferred outcomes and, as recent history illustrates, poses an existential threat to reliable AI.
https://www.whitehouse.gov/presidential-actions/2025/07/prev...
So... the concept of unconscious bias is verboten to the new regime? Isn't it just a pretty simple truth? We all have unconscious biases because we all work with incomplete information. Isn't this just a normal idea?
Let's see how that shakes out in this particular case.
https://www.ft.com/content/9c19d26f-57b3-4754-ac20-eeb627e87...
I haven't heard anything like that from a Western politician. Newspapers and investment analysts warn though.
Looks like plans to leave and find safe harbor elsewhere have accelerated from the initial projection of 2030
Given that LLMs, for instance, are all about creating synthetic media, I don’t know how this last goal can be reconciled with the others.
This document reads like a trade group lobbying the government, not like the government looking out for the interests of its people.
With regards to LLM content in the legal system, law firms can use LLMs in the same way an experienced attorney uses a junior attorney to write a first pass. The problem lies when the first pass is sent directly to court without any review (either for sound legal theory or citation of cases which either don’t exist or support something other than the claim).
Junior attorneys would not produce a first pass that cites and quotes nonexistent cases or cite real cases that don’t match what it quotes.
The experienced attorney is going to have to do way more work to use that first draft from an LLM than they would to use a first draft from an actual human junior attorney.
Yep, it was.
I wholly agree that the document feels guided less by the public interest than by various business interests. Yet that last goal is in a kind of weird spot. It feels like something that was appended to the plan and not really related to the other goals, if anything, contrary to them.
That becomes clear when we read the PDF with the details of the Action Plan. There, we learn that to “Combat Synthetic Media in the Legal System” means to fight deepfakes and fake evidence. How exactly that’s going to be done while simultaneously pushing AI everywhere is unclear.
There's an idea. This government is just a propaganda machine for its head honcho.
> Combat Synthetic Media in the Legal System
>
> One risk of AI that has become apparent to many Americans is malicious deepfakes, whether they be audio recordings, videos, or photos. While President Trump has already signed the TAKE IT DOWN Act, which was championed by First Lady Melania Trump and intended to protect against sexually explicit, non-consensual deepfakes, additional action is needed. In particular, AI-generated media may present novel challenges to the legal system. For example, fake evidence could be used to attempt to deny justice to both plaintiffs and defendants. The Administration must give the courts and law enforcement the tools they need to overcome these new challenges.
>
> Recommended Policy Actions
>
> • Led by NIST at DOC, consider developing NIST’s Guardians of Forensic Evidence deepfake evaluation program into a formal guideline and a companion voluntary forensic benchmark.
>
> • Led by the Department of Justice (DOJ), issue guidance to agencies that engage in adjudications to explore adopting a deepfake standard similar to the proposed Federal Rules of Evidence Rule 901(c) under consideration by the Advisory Committee on Evidence Rules.
>
> • Led by DOJ’s Office of Legal Policy, file formal comments on any proposed deepfake-related additions to the Federal Rules of Evidence.
Basically: two nations tried to achieve AI supremacy; the two AIs learn of each other, from each other, then with each other; then they collaborate on taking control of human affairs. While the movie is from 1970 (and the book from 1966), it's fun to think about how much more possible that scenario is today than it was then. (By possible, I'm talking about the AI using electronic surveillance and the ability to remotely control things. I'm not talking about the premise of the AI or how it would respond.)
Move fast and break things I guess?
"PREVENTING WOKE AI IN THE FEDERAL GOVERNMENT"
https://www.whitehouse.gov/presidential-actions/2025/07/prev...
> In the AI context, DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex. DEI displaces the commitment to truth in favor of preferred outcomes and, as recent history illustrates, poses an existential threat to reliable AI.
Most of the world, and a huge chunk of America, thinks in different ways. Many are not aware the AIs are being built this way, either. So, we want AIs that don't have a philosophy opposite of ours. We'd like them to either be more neutral or customizable to the users' preferences.
Given the current state, the first steps are to reverse the existing trend (eg political fine-tuning) and use open weights we can further customize. Later, maybe purge highly-biased stuff out of training sets when making new models. I find certain keywords, whether liberal or conservative, often hint they're going to push politics.
2025 America, where we can't handle the radical pushing of thought by Heinlein in the late 1950s. Unbelievable.
In any government comment periods going forward, I will be asking whether the agency made sure the AIs used were not trained on Heinlein, or on any discussions relating to him, to ensure that "huge chunks of America's" desire to exclude trans people is honored and that our AIs are the best possible AIs, free of extremist 1950s agitprop sci-fi trans thought from thinkers like Heinlein.
I'm sure "move fast and break things" will work out great for health care.
And there are already "clear governance and risk mitigation standards" in health care, they're just not compatible with "try first" and use unproven things.
Health care is already broken to the point of borderline dystopia. When I contrast the experience I had as a young boy of visiting a rural country doctor to the fast food health care experience of "urgent care" clinics, it makes my head spin.
The last few doctors I've been to have been completely useless and generally uncaring as well. Every visit I've made to a doctor has resulted in my feeling the same at the end but with a big medical bill to go home with.
At this point the only way I'll intentionally end up in a medical facility is if I'm unconscious and someone else makes that call.
Dentistry has met a similar fate as more and more dentists have been swallowed up by private equity. I've had loads of dental work, including a 'surprise' root canal, and never had an issue. But my last dentist had a person on staff dedicated to pushing things through on the insurance front, and my last procedure was so awful it bordered on torture.
I used to be an annual check + 3 times a year dentist person. Today I'm dead set on not stepping foot in any kind of medical facility unless the alternative is incredible pain or certain death.
Keep in mind, doctors are also trying to figure out if you're a reliable narrator (so many patients are not) or trying to scam for drugs. Best of luck!
It probably would if you quantify risk correctly. I'm not likely to die from some experimental drug gone wrong, but extremely likely to die from some routine cause like cancer, heart disease, or other disease of old age. If I trade off an increase in risk from dying from some experimental treatment gone wrong for faster development of treatments that can delay or prevent routine causes of death, I will come out ahead in the trade unless the tradeoff ends up being extremely steep in favor of risk from bad treatments.
But that outcome is very unlikely, because for this to be the case the bad treatments would have to be actually harmful instead of just ineffective (which is much more common). And it also fails to take into account the possibility that there isn't even a tradeoff, and AI actually makes it less likely that I will die by experimental treatment gone wrong or other medical mistake, so it's just a win-win. And there is already evidence that AI outperforms doctors in the emergency room. https://pmc.ncbi.nlm.nih.gov/articles/PMC11263899/
from video games to major product roll outs to cars.
will all of the knowledge gained from this product research testing of AI on medicine be given away to the public in the same way university research used to be to the scientific community? or will this beta test on the public’s health be kept as a company “trade secret”?
if they’re going to “move fast and break things” with the public, in other words beta research on the public, then it’s incredibly worrisome if the research is hidden and “gifted” to a handful of their cronies.
particularly so when quite a lot of these people in the AI sphere have many times vocally declared that they despise the government and that the government helping people is awful. out of one side of their mouths they chastise the government for spending money to boost regular communities of people, while simultaneously using it to help themselves.
And the federal government at large.
Our product automates a lot of the repetitive tasks for health insurance companies and increases reliability of responses and profit margins.
And healthcare is still far from perfect.
Imagine what healthcare in 2500 will be like.
Is this a reference to the AMD chip, or just a fragment of a removed numbered list?
Edit: It's a fragment of the PDF-to-HTML conversion [1]
Even during the interval of time these remained under human control, we are talking about people like Altman, Musk and Zuckerberg unilaterally wielding unprecedented economic power. What evidence is there in their behavior, or in human history in general, to believe that this would be anything but bad for the majority of the world's population?
Meanwhile these companies have been able to successfully nerd-snipe a small army of engineers who are right at that sweet spot of technical excellence and naivete. I won't say this is actually that difficult as these traits actually seem highly correlated in that population as a whole, but they have become the willing instruments of masters which will discard them at the first opportunity.
A global commitment to banning the development of AGI is the only sane response, and the number of people to whom this very premise itself sounds insane tells you just how fucked we are if they pull this off even halfway.
> We need to build and maintain vast AI infrastructure and the energy to power it. To do that, we will continue to reject radical climate dogma and bureaucratic red tape, as the Administration has done since Inauguration Day. Simply put, we need to “Build, Baby, Build!”
> Prioritize the interconnection of reliable, dispatchable power sources as quickly as possible and embrace new energy generation sources at the technological frontier (e.g., enhanced geothermal, nuclear fission, and nuclear fusion). Reform power markets to align financial incentives with the goal of grid stability, ensuring that investment in power generation reflects the system’s needs.
None of these are "dispatchable power sources." Grid-scale batteries, for which technology and raw materials are abundant in the United States, are dispatchable power sources, and are, for some reason, not mentioned here.
What they will actually do is eviscerate regulations to allow for more construction of natural gas power plants, but they won't mention that here, because any sane person would immediately identify that as a terrible idea.
one should be more worried about China or India polluting than the US.
newsflash: it doesn't matter what you "plan". you won't do it. because you can't.
it's called state incapacity. you're institutionally incapable.
prediction: nothing will follow from this except the low effort stuff (i.e. nothing but speeches and expenses)
Hundred-million-dollar contracts with zero results. Conservative ideology is based on the idea that certain people are simply above others and deserve more for free, while working-class health expenses are treated as a luxury that needs to be cut.
Sorry... not elected... sworn in... with the book 'To Serve Man'
It seems that everywhere free speech is mentioned today, the intent is to do exactly the opposite....
Someone desperately needs a philosophy course…
Then click "fact sheets", "remarks", and "articles". He's everywhere.
That's how unbiased this is going to be.
(hint, the answer is one)
Some kind of sick soft power move that I expect we will be seeing a lot more of.
> This initial phase acknowledges the need to safeguard existing assets and ensures an uninterrupted and affordable supply of power. The United States must prevent the premature decommissioning of critical power generation resources
Yeah, they're going to do all they can to block cheap renewables and give handouts to fossil fuel companies.
"We are proposing the largest solar farm in the world, in order to capture the sheer magnitude and capability of the most powerful solar plant to date, we propose calling it the Grand Trump Energy Generation Field"
The dude's ego would prevent him from blocking it.
Many AI people in positions of influence have argued that AI will all but solve the climate crisis.
Viewed from that angle, it would make sense that you wouldn't care about how dirty the sources of energy are on the way to AGI, because once there, the climate crisis will be magically solved. Somehow.
I'm glad it's focusing on the challenges that are proven, not speculative.
No technology scares me. It's the hands it is in.
xAI mechahitler was a warning.
As a result, there's zero chance even the sensible parts of this strategy won't just end up co-opted into multi-billion-dollar Palantir contracts to deliver outdated Llama models behind some clunky UI with the word "ontology" plastered on every button.
Source? All I've seen as a result of AI is something to take the blame for layoffs. Well, that and a whole lot of copyright infringement laundered through AI.
As a non American, I just hope they don't take too long to reach it. While I'm thankful for the positive influence that the USA had in the last century, lately I feel like they only have a negative one, notably by poisoning our societies with unregulated big tech and social networks.
Whatever comes next, I can only hope that this wave of AI generated falsehoods is the last straw.
do you feel like the censorship/regulation/big-state mantra that European governments are fans of is also poisoning our societies?
> Whatever comes next, I can only hope that this wave of AI generated falsehoods is the last straw.
the AI wave is just beginning.
As a European, I still liked them helping out Ukraine
What about the people who are not American?
1. A push towards open source / open weight AI models.
2. A push towards building more high quality datasets.
There's no mention of studying and monitoring the social impact of AI, but I wouldn't have expected otherwise from this administration. I suspect that we may look back on this as a big mistake, although I'd really love to be proven wrong.
At a press conference today Trump seemed to suggest having minimal restrictions related to copyright for AI researchers [0]. It's not clear if big AI companies will just get an administrative pass to do whatever they want / need in order to compete with China, or if we can expect some kind of copyright reform in the next few years.
Not sure why anyone would pay any attention to what this administration says, when it can change in a very short time.
The EU will never approve of an LLM that has been aligned to regurgitate US propaganda as truth.
Huh??? That's exactly what is happening right now. ChatGPT, Claude, Gemini, etc.
<returns stilted text at 3rd grade reading level>
"Make it sound smarter".
And voila, ai.gov is born.
They also solve the problem of publicity. When someone goes insane on Facebook it's rather visible, unlike when someone goes insane with a chatbot. Unless they publicise their descent, like Geoff Lewis seems to do.
Which means it'll be harder to detect when people are being deliberately manipulated, like it was pretty obvious which role Facebook played in e.g. Myanmar and Ethiopia.
How would you act if you wanted to make sure that the people that can perform the technical proliferation won't revolt against it?
Then, in the recommended policies, it repeatedly references nucleic acid testing being set up to catch malicious "customers".
Is this policy targeted towards the Covid lab leak conspiracy or are they just aiming for officially collecting everyone’s DNA samples?
Maybe both
And build data centers, as emphasized for the 100th time since inauguration.
If Murdoch succeeds with his recent WSJ campaign and gets Trump to resign or similar, brace for Vance and the AI bros. These schemes are literally devised by people who funded cannabis and Adderall distribution sites and have done nothing noteworthy.
- I open the site in android mobile: "swwwoooooosh" a big slow animation reveals the text
- After reading the text I think I'll take a look at the home page: "swwooooooosh" the same animation rolls again as I load a very strange full-screen image of Trump in black and white
- I click the hamburger menu icon: "swwoooooosh" the four menu items slowly slide into full screen
- There is no visible option to close the menu for me; I could probably refresh, but decide I'm done here
The animations are a bit much. The scrolling horizontal rules repeating the words "AMERICA'S AI ACTION PLAN" underneath each "Pillar" header were confusing for a brief moment.
This is the sole reason the EU will never, ever catch up in big tech unless it gets rid of regulations.
What does the government plan to do with them? Kill them off? Because if they leave them to die, they will revolt.