However even by that metric I don't see how Claude is doing that. Seth is the one researching the suppliers "with the help of" Claude. Seth is presumably the one deciding when to prompt Claude to make decisions about whether they should plant in Iowa and in how many days. I think I could also grow corn if someone came and asked me well defined questions and then acted on what I said. I might even be better at it because unlike a Claude output I will still be conscious in 30 seconds.
That is a far cry from sitting down at a command line and saying "Do everything necessary to grow 500 bushels of corn by October".
The particular problem here is it is very likely that the easiest people to replace with AI are the ones making the most money and doing the least work. Needless to say those people are going to fight a lot harder to remain employed than the average lower level person has political capital to accomplish.
>seems to end up requiring the hand-holding of a human at top,
I was born on a farm and know quite a bit about the process, but in the process of trying to get corn grown from seed to harvest I would still contact/contract a set of skilled individuals to do it for me.
One thing I've come to realize about the race to achieve AGI: the humans involved don't want AGI, they want ASI. A single model that can do what an expert can, in every field, in a short period of time is not what I would consider a general intelligence at all.
I think this is the new Turing test. Once it's been passed we will have AGI and all the Sam Altmans of the world will be proven correct. (This isn't a perfect test obviously, but neither was the Turing test)
If it fails to pass we will still have what jdthedisciple pointed out
> a non-farmer, is doing professional farmer's work all on his own without prior experience
I am actually curious how many people really believe AGI will happen. There's a lot of talk about it, but when can I ask Claude Code to build me a browser from scratch and I get a browser from scratch? Or when can I ask Claude Code to grow corn and Claude Code grows corn? Never? In 2027? In 2035? In the year 3000?
HN seems rife with strong opinions on this, but does anybody really know?
This is what people said while transitioning from horse carriages to combustion engines, steam engines to modern day locomotives. Like it or not, the race to the bottom has already begun. We will always find a way to work around it, like we have done time and again.
they know they won't be able to make a fully autonomous product while navigating liability and all sorts of problems so they're using technology to make drivers more comfortable while still in control
none of this hype about full autonomy, just realistic ideas about how things can be easier for the humans in control
This is pretty much the whole goal of capitalism since the 1800's
The point, I think, is that even if LLMs can't directly perform physical operations, they can still make decisions about what operations are to be performed, and through that achieve a result.
And I also don't think it's fair to say there's no point just because there's a person prompting and interpreting the LLM. That happens all the time with real people, too.
Still an interesting experiment to see how much of the tasks involved can be handled by an agent.
But unless they've made a commitment not to prompt the agent again until the corn is grown, it's really a human doing it with agentic help, not Claude working autonomously.
Model UIs like Gemini have "scheduled actions", so in the initial prompt you could have it do things daily and send updates or reports, etc., and it will start the conversation with you. I don't think it's powerful enough to, say, spawn sub-agents, but there is some ability for them to "start chats".
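For illustration, here's a minimal sketch of rolling such a scheduled check-in yourself — `ask_model` is a hypothetical stub, not Gemini's actual scheduled-actions API:

```python
from datetime import datetime

def ask_model(prompt: str) -> str:
    """Placeholder for a real model call (hypothetical)."""
    return f"[model reply to: {prompt}]"

def daily_report(state: dict) -> str:
    """Build one scheduled prompt from current farm state and ask the model."""
    prompt = (
        f"Daily corn update for {datetime.now():%Y-%m-%d}: "
        f"soil moisture {state['moisture']}%, day {state['day']} of season. "
        "What should be done today?"
    )
    return ask_model(prompt)

state = {"moisture": 31, "day": 12}
print(daily_report(state))
```

The scheduler would just call `daily_report` on a timer; the model still only "starts a chat" — a human decides whether anything happens after that.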
A plot line in Ray Nayler's great book The Mountain in the Sea, set in a plausible, strange, not-too-distant future, is that giant fish trawler fleets are run by AI connected to the global markets, fully autonomously. They relentlessly rip every last fish from the ocean, driven entirely by the goal of maximising profits at any cost.
The world is coming along just nicely.
Also, Seth, a non-farmer, was already capable of using Google, online forums, and Sci-Hub/Libgen to access farming-related literature before LLMs came on the scene. In this case the LLM is just acting as a super-charged search engine. A great and useful technology, sure. But we're not utilizing any entirely novel capabilities here.
And tbh until we take a good crack at World Models I doubt we can
2) Regardless, I think it proves a vastly understated feature of AI: It makes people confident.
The AI may be truly informative, or it may hallucinate, or it may simply give mundane, basic advice. Probably all 3 at times. But the fact that it's there ready to assert things without hesitation gives people so much more confidence to act.
You even see it with basic emails. Myself included. I'm just writing a simple email at work. But I can feed it into AI and make some minor edits to make it feel like my own words and I can just dispense with worries about "am I giving too much info, not enough, using the right tone, being unnecessarily short or greeting too much, etc." And it's not that the LLMs are necessarily even an authority on these factors - it simply bypasses the process (writing) which triggers these thoughts.
I'll see if my 6 year old can grow corn this year.
They could also just burn their cash. Because they aren’t making any money paying someone to grow corn for them unless they own the land and have some private buyers lined up.
The only framework we have figured out in which LLMs can build anything of use, requires LLMs to build a robot and then we expose the robot to the real world and the real world smacks it down and then we tell the LLMs about the wreckage. And we have to keep the feedback loops small and even then we have to make sure that the LLMs don't cheat. But you're not going to give it the opportunity to decrease the wealth tax or increase the income tax so it will never get the feedback it needs.
You can try to train a neural network with backpropagation to simulate the actual economy, but I think you don't have enough data to really train the network.
You can try to have it build a play economy where a bunch of agents have different needs and different skills and have to provide what they can when they can, but the "agent personalities" that you pick embed some sort of microeconomic outlook about what sort of rational purchasing agent exists -- and a lot of what markets do is just kind of random fad-chasing, not rationally modelable.
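To make that concrete, here's a toy version of such a play economy — the agent roster and the fad-chasing buyer are made-up assumptions, and the point is exactly that whatever "personalities" you hard-code end up determining the outcome:

```python
import random

random.seed(0)

# Each agent produces one good and needs another; trades happen when needs match.
# The "fad_buyer" picks its need at random -- a stand-in for fad-chasing demand.
AGENTS = [
    {"name": "farmer", "makes": "corn", "needs": "tools", "cash": 10},
    {"name": "smith", "makes": "tools", "needs": "corn", "cash": 10},
    {"name": "fad_buyer", "makes": None,
     "needs": random.choice(["corn", "tools"]), "cash": 10},
]

def trade_round(agents):
    """One round: every buyer pays 1 unit to the first seller making what they need."""
    trades = 0
    for buyer in agents:
        for seller in agents:
            if seller["makes"] == buyer["needs"] and buyer["cash"] >= 1:
                buyer["cash"] -= 1
                seller["cash"] += 1
                trades += 1
                break
    return trades

print(trade_round(AGENTS))
```

Every behavioral choice here (fixed prices, first-match sellers, a random fad buyer) bakes in a microeconomic outlook; a real market's fad-chasing isn't captured by any of it, which is the parent comment's objection.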
I just don't see why you'd use that square peg to fill this round hole. Just ask economics professors, they're happy to make those predictions.
Please tell me you've watched the Mitchell & Webb skit. If not, google "Mitchell Webb kill all the poor" and thank me later.
Edit: also please tell me you know (if not played) of the text adventure "A Mind Forever Voyaging"... without spoiling anything, it's mainly about this topic.
Everything old is new again :)
Then it could do things like: "hey, do you have seeds? Send me pictures. I'll pay if I like them" or "I want to lease this land, I'll wire you the money." or "Seeds were delivered there, I need you to get your machinery and plant it"
I'd say the only acceptable proof is one prompt context. But that's Gödel numbering Zeno's paradox of a halting problem.
Do people think prompting is not adding significant intelligence?
Then what you asked “do everything to grow …” would be a matter of “when?”, not “can?”
I think it would be unlikely but interesting if the AI decided that in furtherance of whatever its prompt and developing goals are to grow corn, it would branch out into something like real estate or manufacturing of agricultural equipment. Perhaps it would buy a business to manufacture high-tensile wire fence, with a side business of heavy-duty paperclips... and we all know where that would lead!
We don't yet have the legal frameworks to build an AI that owns itself (see also "the tree that owns itself" [1]), so for now there will be a human in the loop. Perhaps that human is intimately involved and micromanaging, merely a hands-off supervisor, or relegated to an ownership position with no real capacity to direct any actions. But I don't think that you can say that an owner who has not directed any actions beyond the initial prompt is really "doing the work".
It also doesn't help that Claude is incapable of coming up with an idea, incapable of wanting corn, and has no actual understanding of what corn is.
Like if a human said they started a farm, but it turns out someone else did all the leg work and they were just asked for an opinion occasionally, they'd be called out for lying about starting a farm. Meanwhile, that flies for an AI, which would be fine if we acknowledged that there's a lot of behind the scenes work that a human needs to do for it.
Overall I don't think this is useful. They might or might not get good results. However it is really hard to beat the farmer/laborer who lives close to the farm and thus sees things happen and can react quickly. There is also great value in knowing your land, though they should get records of what has happened in the past (this is all in a computer, but you won't always get access to it when you buy/lease land). Farmers are already using computers to guide decisions.
My prediction: they lose money. Not because the AI does stupid things (though that might happen), but because last year harvests were really good and so supply and demand means many farms will lose money no matter what you do. But if the weather is just right he could make a lot of money when other farmers have a really bad harvest (that is he has a large harvest but everyone else has a terrible harvest).
Iowa has strong farm ownership laws. There is real risk he will get shut down somehow because what he is doing is somehow illegal. I'm not sure what the laws are, check with a real lawyer. (This is why Bill Gates doesn't own Iowa farm land - he legally can't do what he wants with Iowa farm land)
>Farmers are already using computers to guide decisions.
For way longer than most people expect. I remember reading farming magazines in the '80s showing computer based control for all kinds of farming operations. These days it is exceptionally high tech. Combines measure yield on a GPS grid. This is fed back into a mapping system for fertilization and soil amendment in the spring to reduce costs where you don't need to put fertilizer. The tractors do most of the driving themselves if you choose to get those packages added. You can get services that monitor storm damage and predict losses on your fields, and updated satellite feed information on growth patterns, soil moisture, vegetation loss, and more. Simply put, super high automation is already available for farming. I tell my uncle his job is to make sure the tractor has diesel in it, and that nothing is jammed in the plow.
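As a rough illustration of the yield-map feedback loop described above — all rates and the scaling rule here are invented for the sketch, not real agronomy:

```python
# Toy variable-rate prescription: grid cells with last year's yield (bu/ac)
# get less nitrogen where yield was already near the target.
TARGET_YIELD = 200   # bu/ac, assumed target
BASE_RATE = 180      # lb N/ac, assumed base application rate

yield_map = {        # GPS grid cell -> measured yield from the combine
    (0, 0): 210,
    (0, 1): 150,
    (1, 0): 190,
}

def n_rate(measured: float) -> float:
    """Scale nitrogen down toward half-rate where the cell met the target."""
    shortfall = max(0.0, TARGET_YIELD - measured)
    return round(BASE_RATE * (0.5 + 0.5 * shortfall / TARGET_YIELD), 1)

prescription = {cell: n_rate(y) for cell, y in yield_map.items()}
print(prescription)
```

Real precision-ag software does this with far richer soil and imagery data, but the shape of the loop — measure per cell, prescribe per cell — is the same.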
When it comes to animal farming in the mid-west, a huge portion of it is done by contracts with other companies. My uncle owns the land and provides the labor, but the company provides the buildings, birds, food, and any other supplies. A faceless company setting up the contract like now, or an AI sending the same paperwork, really may not look much different.
Auto-steer can often get another row in without overcrowding. It also shuts off individual rows as you cross ground you've already planted (saving thousands of dollars in seed).
Replacing the farm manager with an AI multiplies that problem by a hundred. A thousand? A million? A lot. AI may get some sensor data but it's not going to stick its hand in the dirt and say "this feels too dry". It won't hear the weird pinging noise that the tractor's been making and describe it to the mechanic. It may try to hire underlings but, how will it know which employees are working hard and which ones are stealing from it? (Compare Anthropic's experiments with having AI run a little retail store, and get tricked into selling tungsten cubes at a steep discount.)
I got excited when I opened the website and at first had the impression that they'd actually gotten AI to grow something. Instead it's built a website and sent some emails. Not worth our attention, yet.
That's all rich people do. The premise of capitalism is that the people best at collecting rent should also be in total control of resource allocation.
Of course software can affect the physical world: Google Maps changes traffic patterns; DoorDash teleports takeout food right to my doorstep; the weather app alters how people dress. This list is unending. But these effects are always second-order. Humans are always there in the background bridging the gap between bits and atoms (underpaid delivery drivers in the case of DoorDash).
The more interesting question is whether AI can __directly__ impact the physical world with robotics. Gemini can wax poetic about optimizing fertilizers usage, grid spacing for best cross-pollination, the optimum temperature, timing, watering frequency of growing corn, but can it actually go to Home Depot, purchase corn seeds, ... (long sequence of tasks) ..., nurture it for months until there's corn in my backyard? Each task within the (long sequence of tasks) is "making PB&J sandwich" [1] level of difficulty. Can AI generalize?
As is, LLMs are better positioned to replace decision-makers than the workers actually getting stuff done.
[1] http://static.zerorobotics.mit.edu/docs/team-activities/Prog...
Yet you get credited for all that work, because a car's ability to move people isn't special compared to your ability to operate it without running people over. Similarly, your ability to buy things from a store isn't special compared to an AI's ability to design a hydroponics farm or fusion reactor or whatever out of those things. Yes, you can do things the AI can't, but on the other hand, your car can do things you can't.
All this talk about "doing things in the physical world" is just another goalpost moving, and a really dumb one at that.
Aren't these companies in the business of leasing land? I don't see how contacting them about leasing land would be spam or bothering them. And I don't really know what you mean by "with no legal authority to actually follow up with what is requested."
Claude: Go to the owner of the building and say "if you tell me the height of your building I will give you this fine barometer."
The timing might need to be different but it would be good to see what the same amounts invested would yield from corn on the commodity market as well as from securities in farming partnerships.
Would it be fair if AI was used to play these markets too, or in parallel?
It would be interesting to see how different "varieties" of corn perform under the same calendar season.
Corn, nothing but corn as the actual standard of value :)
You don't get much any way you look at it for your $12.99 but it's a start.
Making a batch of popcorn now, I can already smell the demand on the rise :)
1. Do some research (as it's already done)
2. Rent the land and hire someone to grow the corn
3. Hire someone to harvest it, transport it, and store it
4. Manage to sell it
Doing #1 isn't terribly exciting - it's well established that AIs are pretty good at replacing an hour of googling - but if it could run a whole business process like this, that'd be neat.
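The four steps could be sketched as a simple orchestration loop — every function here is a hypothetical stub, and in practice a human still executes each plan:

```python
def run_corn_business(ask_model, execute):
    """Drive the research -> rent -> harvest -> sell pipeline step by step."""
    steps = [
        "research corn growing in the target region",
        "rent land and hire someone to plant",
        "hire harvest, transport, and storage",
        "sell the crop",
    ]
    results = []
    for step in steps:
        plan = ask_model(f"Plan this step: {step}")
        results.append(execute(plan))  # the human-in-the-loop acts on the plan
    return results

# Stub model and executor just to show the control flow.
out = run_corn_business(lambda p: f"plan({p})", lambda plan: f"done: {plan}")
print(len(out))
```

The interesting question is whether the `execute` side can ever be anything other than a person.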
Incidentally I clicked through to this guy's blog and found his predictions for 2025 and he was 0 for 13: https://avc.xyz/what-will-happen-in-2025-1
I would have to look up farm services. Look up farmhand hiring services. Write a couple emails. Make a few payments. Collect my corn after the growing season. That's not an insurmountable amount of effort. And if we don't care about optimizing cost, it's very easy.
Also, how will Claude monitor the corn growing? I'm curious. It can't receive and respond to the emails autonomously, so you still have to be in the loop.
The estimate seems to leave out a lot of factors, including irrigation, machinery, the literal seeds, and more. $800 for a "custom operator" for 7 months - I don't believe it. Leasing 5 acres of farmable land (for presumably a year) for less than $1400... I don't believe it.
The humans behind this experiment are going to get very tired of reading "Oh, you're right..." over and over - and likely end up deeply underwater.
[1] https://www.extension.iastate.edu/agdm/crops/html/a1-20.html
I am extremely worried by the amount of hype I see around. I hope I'm just in a bubble.
(And if you read the linked post, … like this value function is established on a whim, with far less thought than some of the value-functions-run-amok in scifi…)
(and if you've never played it: https://www.decisionproblem.com/paperclips/index2.html )
"Thinking quickly, Dave constructs a homemade megaphone, using only some string, a squirrel, and a megaphone."
To make this a full AI experiment, emails to this inbox should be fielded by Claude as well.
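A minimal sketch of what fielding the inbox with a model might look like — `ask_model` is a stand-in for a real model call, and only the message parsing uses a real (stdlib) API:

```python
from email import message_from_string

def field_inbound(raw: str, ask_model) -> str:
    """Parse an inbound email and let the model draft the reply."""
    msg = message_from_string(raw)
    prompt = f"Reply to {msg['From']} about '{msg['Subject']}': {msg.get_payload()}"
    return ask_model(prompt)

# Example inbound message (made up for the sketch).
raw = (
    "From: landowner@example.com\n"
    "Subject: 5 acre lease\n"
    "\n"
    "Still interested in the parcel?\n"
)
reply = field_inbound(raw, lambda p: f"[draft: {p[:40]}...]")
print(reply)
```

A real setup would poll the mailbox (e.g. over IMAP) and gate outgoing replies behind human review, which is exactly the part that keeps this from being "full AI".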
Let's step back.
"there's a gap between digital and physical that AI can't cross"
Can intelligence of ANY kind, artificial or natural, grow corn? Do physical things?
Your brain is trapped in its skull. How does it do anything physical?
With nerves, of course. Connected to muscle. It's sending and receiving signals, that's all its doing! The brain isn't actually doing anything!
The history of humanity's last 300k years tells you that intelligence makes a difference, even though it isn't doing anything but receiving and sending signals.
An AI that can also plant corn itself (via robots it controls) is much more impressive to me than an AI just sending emails.
- Yes it can
- Prove it
- AI, tell me instructions to grow corn
- Go buy seeds, plant them, water the field and once you gather the corn report back
- I'm back with the corn, proving AI can grow corn!
This is the experiment here, with nuance added to it. The thing is, though, if you "orchestrate" other people, you might as well do it with a single sentence as I described. Or you can manage more thoroughly. Some decisions you make may actually be detrimental to the end result.
So the only meaningful experiment would be to test a bot against a human being: who earns more money orchestrating the corn farm, a bot or a human? Consider also the expenses which is electricity/water for a bot and also food, medicine etc. for a human being.
I'll be following along, and I'm curious what kind of harness you'll put on TOP of Claude code to avoid it stalling out on "We have planted 16/20 fields so far, and irrigated 9/16. Would you like me to continue?"
I'd also like to know what your own "constitution" is regarding human oversight and intervention. Presumably you wouldn't want your investment to go down the drain if Claude gets stuck in a loop, or succumbs to a prompt injection attack to pay a contractor 100% of its funds, or decides to water the fields with Brawndo.
How much are you allowing yourself to step in, and how will you document those interventions?
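One way such a harness might look, as a sketch — the stall phrases and turn cap are assumptions, not anything Anthropic ships:

```python
CONTINUE_PHRASES = ("would you like me to continue", "shall i proceed")
MAX_TURNS = 50  # hard cap so an auto-continue loop can't burn the budget

def run_with_harness(ask_model, first_prompt: str) -> list:
    """Re-prompt automatically whenever the model stalls asking for permission."""
    transcript = [ask_model(first_prompt)]
    for _ in range(MAX_TURNS):
        last = transcript[-1].lower()
        if not any(p in last for p in CONTINUE_PHRASES):
            break  # the model produced a final answer, stop nudging
        transcript.append(ask_model("Yes, continue."))
    return transcript

# Stub model that stalls twice, then finishes.
replies = iter([
    "Planted 16/20 fields. Would you like me to continue?",
    "Planted 20/20. Shall I proceed to irrigation?",
    "All fields irrigated.",
])
log = run_with_harness(lambda prompt: next(replies), "Plant all fields")
print(len(log))
```

The hard part isn't the loop; it's that the same auto-"Yes, continue" would also wave through a prompt-injected payment, which is why the intervention policy matters.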
--Hammurabi
1) Context: lack of sensors and sensor processing. Maybe solvable with webcams in the field, but manual labor is still required for soil testing etc.
2) Time bias: orchestration still has a massive recency bias in LLMs and a huge underweighting of established ground truth, causing it to weave and pivot on recent actions in a wobbly, overcorrecting style.
3) Vagueness: by and large, most models still rely on noncommittal vagueness to hide a lack of detailed or granular expertise. That granular expertise tends to hallucinate more, or just miss context and get it wrong.
I’m curious how they plan to overcome this. It’s the right type of experiment, but I think too ambitious of a scale.
Unequivocally awful
The remaining work is only bad because it's low paying, and it's low paying because the wealth created by machines is unfairly distributed.
"Stop staring at screens"
"Stop sitting at your desk all day"
"Stop loafing around contributing nothing just sending orders from behind a computer"
"Touch grass"
but now that the humans are finally gonna get out and DO something you're outraged
I've been rather expecting AI to start acting as a manager with people as its arms in the real world. It reminds me of the Manna short story[1], where it acts as a people manager with perfect intelligence at all times, interconnected not only with every system but also with other instances in other companies (e.g. for competitive wage data to minimize opex / pay).
This seems like something along the lines of "We know we can use Excel to calculate profit/loss for a Mexican restaurant, but will it work for a Tibetan-Indonesian fusion restaurant? Nobody's ever done that before!"
Pure dystopia.
The endless complaining and goalpost shifting is exhausting
There's no goalpost shifting here - it's l'art pour l'art at its finest. It'd be introducing an agent where no additional agent is required in the first place, i.e. telling a farmer how to do their job, when they already know how to, and already do it.
No one needs an LLM if you can just lease some land and then tell some person to tend to it (i.e. do the actual work). It's baffling to me how out of touch with reality some people are.
Want to grow corn? Take some corn, put it in the ground in your backyard and harvest when it's ready. Been there, done that, not a challenge at all. Want to do it at scale? Lease some land, buy some corn, contract a farmer to till the land, sow the corn, and eventually harvest it. Done. No LLM required. No further knowledge required. Want to know when the best time for each step is? Just look at when other farmers in the area are doing it. Done.
I’m guessing this will screw up in assuming infinite labor & equipment liquidity.
I do not have a positive impression/experience of most middle/low level management in the corporate world. Over 30 years in the workforce, I've watched it evolve into a "secretary/clerk, usually male, who agrees to be responsible for something they know little about or aren't very good at doing, pretending at orchestrating".
Like growing corn, lots of literature has been written about it. So models have lots to work with and synthesize. Why not automate the meetings and metric gatherings and mindless hallucinations and short sighted decisions that drone-ish be-like-the-other-manager people do?
Betting millions of dollars in capital on its decision-making process, for something it wasn't even designed for and is way more complicated than even I believed coming from a software background into farming, is patently ludicrous.
And 5 acres is a garden. I doubt he'll even find a plot to rent at that size, especially this close to seeding in that area.
So, where are the exact logs of the prompts and responses to Claude? Under "/log" I do not see this.
Managing all the decisions in growing a crop is too far a reach. Maybe someday, not today. Way too many variables and unexpected issues. I'm a former fertilizer company agronomist and the problem is far harder than say self driving cars.
This of course will never happen, so instead those in power will continue to try to shoehorn AI into making slaves, which is what they want, but not the ideal usage for AI.
The point could be made by having it design and print implements for an indoor container grow and then run lights and water over a microcontroller. Like Anthropic's vending machine this would also be an already addressed, if not solved, space for both home manufacturing and ag/garden automation.
It'd still be novel to see an LLM figure it out from scratch step by step, and a hell of a lot more interesting than whatever the fuck this is. Googling farmland in Iowa or Texas and then writing instructions for people to do the actual work isn't novel or interesting; of course an LLM can write and fill out forms. But the end result still primarily relies on people to execute those forms and affect the world, invalidating the point. Growing corn would be interesting, project managing corn isn't.
This is all addressed in the original blog post.
Seriously, what does this prove? The AI isn't actually doing anything, it's just online shopping basically. You're just going to end up paying grocery store prices for agricultural quantities of corn.
We feed it the information as a context to help us make a plan or strategy to achieve or get something.
They are also doing the same. They will be feeding the sensor, weather, and other info, so Claude can give them a plan to execute.
Ultimately, they need to execute everything.
So this is a very legitimate test. We may learn some interesting ways that planting, growing, harvesting, storing, and selling corn can go wrong.
I certainly wouldn't expect to make money on my first or second try!
Look up precision ag.
I dug a little deeper and found this study showing cash rental rates per acre per year ranging from $215 to $295.[1] So it actually looks like Claude got this one right.
Of course I know nothing about renting farmland, but if you ask to rent 5 acres the average farm size is in 300+ acre region, the land owners might tell you to get lost or pony up. A little bit like asking Amazon to give you enterprise rates for a single small EC2 instance.
[0] https://farmland.card.iastate.edu/overview [1] https://www.extension.iastate.edu/agdm/wholefarm/pdf/c2-10.p...
  choice = random() % 5
  switch choice:
    case 0: blog_post
    case 1: tell_to_plant_corn
    case 2: register_website
    case 3: pause
    case 4: move_money

Anyway, turned it off; sure enough, misaligned.
But where is the prompt or api calls to Claude? I can't see that in the repo
Or did Claude generate the code and repo too? And there is a separate project to run it
We, as in humans?
Such a nice term to use as an AI-related alternative to "jump the shark".
I'll definitely be using it.
Huh? I have no doubt that mega corporate farms have a “farm manager”, but I can tell you having grown up in small town America, that’s just not a thing. My buddies’ dads were “farm managers”, and absolutely planted every seed of corn (until the boys were old enough to drive the tractor, and then it was split duty), and the big farms also harvested their own and the smaller ones hired it out.
So unless Claude is planning on learning to drive a tractor, it’s going to be a pretty useless task manager telling a farmer to do something he or she was already planning on doing.
"Hey AI, draft an email asking someone to grow corn. See, AI can grow corn!"
This project is neat in itself, sure, but I feel the author is wayyy missing the point of the original thought.
I have zero doubt Claude is going to do what AI does and plough forward. Emails will get sent, recommendations made, stuff done.
And it will be slop. Worse than what it does with code, the outcomes of which are highly correlated with the expertise of the user past a certain point.
Seth wins his point. AI can, via humans giving it permission to do things, affect the world. So can my chaos monkey random script.
Fred should have qualified: _usefully_ affect the world. Deliver a margin of Utility.
We’re miles off that high bar.
Disclosure: all in on AI
The important part is the stuff happening in the physical world.
I mean, more or less, but you see what I'm getting at.
Most food is picked by migrant laborers, not machines.
The real question isn't "Can AI do x thing?" but "SHOULD AI do x thing". We know how to grow and sell corn. There is zero that AI can do to make it more "efficient" than it already is.
Come on.
If people are involved then it's not an autonomous system. You could replace the orchestrator with the average logic defined expert system. Like come on, farming AGVs have come a long way, at least do it properly.
Claude: Oh. My. God.
They're (very impressive) next word predictors. If you ask it 'is it time to order more seeds?' and the internet is full of someone answering 'no' - that's the answer it will provide. It can't actually understand how many there currently are, the season, how much land, etc, and do the math itself to determine whether it's actually needed or not.
You can babysit it and engineer the prompts to be as leading as possible to the answer you want it to give - but that's about it.
The worlds most impressive stochastic parrot, resulting from billions of dollars of research by some of the world's most advanced mathematicians and computer scientists.
And capable of some very impressive things. But pretending their limitations don't exist doesn't serve anyone.