For one, it's not even clear that democratic politics can work post-AGI. Democracies work because together citizens hold more power than any government or any other individual or collective in society. In the past that power was largely physical, which is why things like gun ownership were important: they allowed the collective to revolt against their leaders. In more recent times our collective power comes from our economic power.
In a world in which governments can just print money with AGI, they don't need us. In fact, in this world we're just leeches on state resources and space. The dynamic shifts from the state being dependent on the people to the people being dependent on the charity of the state. I see no reason to believe that this would be a stable long-term arrangement.
Also, what you're saying is just hopium. In reality we don't have this economic system, so AGI is going to be bad for us. And how are you going to change that? Are you going to strike when you're unemployed? Is the government realistically going to be able to tax an AGI company doing $100T in revenue at 95% of profits? Why wouldn't it just relocate or lobby for lower taxes?
Secondly, even if we assume by some miracle we can change the economic system and tax companies at 95% without them relocating or lobbying to reduce taxes, have you considered that some people actually want to work?
I've been poor most of my life. The idea of being a state dependant doesn't really bother me, but what does upset me is that I've spent 20 years of my life skilling up in a profession which will now have no value. And no knowledge profession can ever have value again in a post-AGI world.
Perhaps I can go into a manual labour job for a little while before manual labour starts being automated too. And then maybe after that the only jobs that will remain are a handful of jobs where interaction with a real human is valued (hospitality, etc.). Perhaps we're just spoilt in the present because we have so many cool jobs. Today people genuinely like working and want to work. I got into coding not because I had to for financial reasons but because I loved doing it in my spare time, so I might as well be paid to do it. But maybe we need to just accept that this was never a sustainable dynamic and that the economic benefits of AGI outweigh us living a fulfilling life...
The economic system can't be simultaneously unproductive enough to require us to keep raising the retirement age and so productive that job loss is a big problem.
It is possible that a political war on labor is being waged at both ends.
(1) stop fractional reserve banking and nationalize money again to avoid the very quantitative easing/tightening fiasco we are seeing today, (2) measure economic progress with a quality-of-life index instead of GDP to accurately reflect the cost of products/resources and put more emphasis on life-quality-improving services, and (3) replace the current international trade rules and system run by the IMF with a more equitable system where capital costs are accurately reflected at the source to avoid “outsourcing” problems to third world countries.
While all three changes are difficult to achieve in our current society, I am amazed at how clearly Daly’s life work can outline how we could build a much more sustainable economy and healthier society. Maybe his amazing last summary book is of interest to others on HN.
One solution (the Luddite one) is to eschew the technology. If nobody adopts the technology, nobody has to, but nobody can benefit. Conversely, if anyone adopts the technology, all can benefit, but all also have to suffer the negative impacts. Without everyone on board, the Luddite solution cannot work.
Marx saw a different solution: communism. If the state owns the means of production, the working class and owning class benefit equally. It’s unlikely that we ever move toward this, in my opinion, due to the other issues inherent to communism (unfair (from the perspective of most policy makers) distribution of resources, corruption, etc.).
We need to find a new solution; one which enables people to benefit from technology while minimizing the downsides, or at least localizing the downsides to those who choose to adopt it. Right now we have, IMO, one of the worst situations of both worlds: privatized benefits with socialized costs. For instance, any taxpayer-funded bailout.
But otherwise agree, bad economic system is bad.
The artificial scarcity created by these laws is what has generated such vast accumulations of wealth, not some intrinsic value to their unique creations. ChatGPT is exposing this uncomfortable truth to many people for the first time, apparently.
It's easy to condemn the system but without a solution you're just barking in the wind...
Just to be clear, I also think there are major issues with "the system", but I also don't think this is an easy problem to solve or everyone would be living in magical happy land already...
Or they're mistaken in their assessment of what's going to actually happen, as they frequently have been when it comes to technological progress.
Then it is completely fair for me to not give you any wealth because the machines outperform you.
People fear AI because the fair outcome is in fact detrimental for a good portion of humanity.
That is the paradoxical irony. In theory, to save people you must function as a welfare state with things like UBI. You must deliberately distribute income unfairly. The practical course of action goes against our moral instincts.
It often generates code that won't compile or won't run correctly, but that sort of goes in the correct direction, enough that I can spot the blunders and refine my questions. Quite often I even learn a few new things when using it to explore libraries and frameworks I don't know that well but in languages that I am very comfortable with.
Now, for example, I am a lousy front end developer. Not really my area of expertise. I can do minor maintenance on an existing code base, but that's about it. ChatGPT has been more a hindrance than helpful in that particular domain, because I can't easily spot the flaws in its output, and I don't know in which directions to properly refine my questions.
That's an area that's always been just past my comfort zone ("Create a shell script to solve this problem? Now I have two problems...").
But all of a sudden, I feel like a master at shell scripting! Things that have always been manual-and-slightly-annoying are now scripted away into automation world, and it feels like I have a newfound superpower :) This past week has been a workflow productivity superweek.
It seems to me that if AI is currently helpful to you, it is likely to eventually replace you. And me, and perhaps most skilled jobs as they exist today. It won't be instant, but if you are early- or mid-career, take it very seriously.
As a very senior programmer, it's the first technology in a long time that has doubled, or maybe even tripled, my productivity.
The demand for my skills is only going to accelerate because of how godly this has made me.
Basically I use it as a way to induce Deja Vu. Or for things where, to quote Potter Stewart: "I'll know it when I see it."
Exactly right. I made this argument in an unrelated thread:
I gave an example of testing ChatGPT by asking it to write CRC calculation code in ARM assembler. The code worked. It wasn’t as good as mine (slower, inefficient), yet, as you said, as a starting point, it would have been fine.
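The ARM assembler from that test isn't reproduced here, but as a rough illustration of the kind of small, well-specified routine involved, a bit-by-bit CRC-32 in Python (the same reflected polynomial zlib uses) looks something like this:

```python
import zlib  # stdlib; used only to cross-check the hand-rolled version

def crc32(data: bytes, poly: int = 0xEDB88320) -> int:
    """Bit-by-bit reflected CRC-32 (same parameters as zlib.crc32)."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right; XOR in the polynomial when the low bit is set.
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

assert crc32(b"123456789") == 0xCBF43926  # standard CRC-32 check value
assert crc32(b"hello") == zlib.crc32(b"hello")
```

The usual speed optimization is a table-driven version (precompute 256 entries), which is roughly the gap between a naive model-generated routine and a hand-tuned one.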
“This is today, mid 2023. Give it a year or two (maybe less?) and it will be a tool to contend with. People who like to blame everything else rather than their lack of knowledge and experience will not do very well in that world.
Why would I pay someone to do <X> when they bring nothing special to the table?
Here's the huge paradigm shift (at least for me):
I could not care less what someone knows or does not know. I care about the range and breadth of their experience and how they approach learning that which they do not know.
Someone like that can use any available tool, including AI tools, to deliver value in almost any domain. Someone who blames others (tools, people, the system, whatever), cannot.
We might just be entering an era in which experience will be shown to have serious value.”
Folks like Priya will not lose their jobs, IMO. The company would still need them to provide that last line of validation on the data, because you just can't blindly trust GPT (I don't know if we will ever be able to blindly trust it). Rather, the scenario I see is that their team at work would actually be able to stay on top of their assignments instead of constantly being overworked.
And if you have to come up with the code yourself, wouldn't that allow you to come up with a better architecture for your whole project?
To make this possible we need to find a way to a new system that doesn't directly link labour with income.
Maybe one day we can live in a world where machines are doing all the work and there are fulfilling hobbies for everybody, but I fear this will require more changes, on a much longer time horizon, than the short-term issues just around the corner due to the recent AI advances.
For one thing, we could use all our free time however we want, including working to produce still more stuff, or just going hiking.
The real problem is how to distribute the goods and services produced - and there a UBI is probably the right answer.
Machines do all the (necessary) work, UBI ensures that everybody benefits, and humans are left to do as we please.
What's not to love?
I guess we'll just have to cross our fingers that Sam Altman and friends will pity us enough to give us some pocket money to survive. That's assuming the AGI created is aligned with our values.
I mean, we can keep hoping that AGI will be a good thing with zero evidence or we can take action to slow progress now so we can proceed with caution.
As humans we have not found a way to allow people not to work. Anyone not working is either living off money from their own prior work or benefitting from the work of someone else. Let's see and validate such a system before we run off eliminating jobs.
We are headed towards an Elysium-style future with billions of humans competing for scraps.
While there's a case for idealism, without some degree of pragmatism we'll be lost on policy or regulation proposals.
As others have pointed out, this is an incredibly naive opinion. Labor is tied to income precisely because it happened organically, not because a group of self-appointed individuals decided to develop a "system". Most attempts to develop alternative (read: non-organic or top-down) systems for organizing human societies have either failed entirely or have been short-lived.
I would go fucking insane if I didn’t have a purpose. Jobs are a necessary part of life.
It would have been more believable if Altman and co had proposed a practical income system first, started its rollout, and only then released the GPTs.
What is income? What is it used for?
Answers: it's a reward for work. And it is used as an incentive to get others to work.
If the capitalists (i.e. those who own capital) can live off of ownership rather than labour, maybe someday all of us can. Because the alternative is unthinkable.
Wish granted.
You now have a system where income is only vaguely linked to labor, and a handful of people make insanely more money than anybody else despite doing the same amount of labor (or less). Income is now linked to existing wealth and personal connections much more than it is to labor.
A system where income is directly linked to labor would be a step up from what we've got now.
That system was called communism, and so far it has not had any of the success that those envisioning it had wanted.
I cannot see how ownership of the output of automation and AI could be distributed evenly. At best, the people being made obsolete would be given food stamps and the barest of life's necessities so they don't die in the streets (and even that is getting hard as the earth runs out of resources). Until the day humanity discovers how to obtain a post-scarcity society, all income must be directly linked to labour.
People need shit to do, and as much as we lie to ourselves that everyone is a budding artist who, if only freed from the bounds of work, would create the world's next masterpiece - that is just not reality. Most people would actually sit around getting high/drunk, playing video games, watching porn and being miserable.
The lockdowns, at least in the U.S., surely could have been a chance for people to improve themselves - but most had no idea how to fill their time and binge-watched TV, scrolled TikTok, etc.
Who in their right mind would work if they can get free income, even if small, without having a disability or similar disqualifier?
Right now, we are in a worker shortage, and it’s going to get worse as retirements continue. Separating labor from income would cripple living standards and demoralize productive workers.
It's one thing to look at a report about the economic impact of new technology, but another to experience it first-hand. This is just a story about someone who will be impacted. Calling it a "sob" story is very harsh. The story is very real, and the feeling of losing your job to automation is anything but pleasant.
There are several important differences in the impact of GPT4 vs the PC, which is being quoted quite a lot as a response in this thread. People talk of other scenarios as well, but even the best case scenarios (UBI) mean the end of social mobility, which means far fewer humans will have the chance of being ambitious and climbing the social ladder. And this is not even mentioning the 2nd and 3rd order effects.
Here's what I imagine sama and AI apologists would say in response so they can sleep at night:
Have you thought about training Priya to use ChatGPT? You don't need to know how to code well to be skilled at using it, especially if she has the domain knowledge.
Then you will have 10x'd your company's output and Priya keeps her job. At least for a time -- that is, until others start doing it too -- this will be a big competitive advantage. Then you will definitely need her and her colleagues!
/end
But, there are many reasons why laying her off and just using GPT4 is the better business decision, at least short term. The above is a totally naive suggestion stemming from reasoning motivated by the incomprehensibly large profits going to OpenAI and their eventual competitors.
Actually, I think we are about to see massive unemployment (tens of millions if not hundreds globally), even greater inequality and attendant social unrest. Even if smooth transitions can be made for some of the jobs made redundant by ChatGPT, this will be the exception not the rule. Something will have to give. UBI? Regulations? Physical destruction of data centers by angry, hungry, desperate people?
Probably all of the above. It's going to be a chaotic time until the world finds a new equilibrium.
On a personal note: at the ripe age of 40, in direct response to GPT4, I've decided to go back to school this fall to become a certified teacher. The poor work conditions and low pay kept me away from it as a full time job until now. However, I believe this is one of the few jobs that will still be around in 25 years when I (hopefully) retire. I'll take low pay and poor work conditions over the desperation of extended unemployment and poverty.
(I like kids and have taught voluntarily in various capacities over the years, so it's not as crazy as it maybe sounds.)
You've outlined the pessimistic case.
Priya has qualifications in biotechnology. She currently spends her time doing work that sounds quite repetitive.
If AI tools can help accelerate that work, is there a more optimistic scenario where she gets to do different, related work that isn't automatable?
(I personally really hope the pessimistic case isn't what happens here, and in so many other similar situations. I understand and share your concern!)
I have attempted to shift my mindset a little, thinking about how I might become an effective user of AI tools. I hope if I can do this that it will keep me employable, or even enable me to start some kind of venture down the line. Maybe there's a path forward for you and your friend on that route. Best of luck.
I don't understand the harsh comments you received here. Denial is the way others seem to be using to cope with a tech that threatens their precious skills.
What must be pondered is embracing GPT along with human intelligence, rather than without it.
It's silly that I have to spell it out on HN of all places. Writing code is much more complex. There's a reason we call the job "Software Engineer". GPT doesn't know what it doesn't know.
It doesn't know that the unit tests broke because a Python script written for 3.5 had a dependency updated for 3.10 that broke backwards compatibility. Can you get it to fix the GitHub Action? Good luck.
It doesn't know that the local dev environment targets WSL on Windows for the web app and an iPhone in dev mode for the mobile app.
It has no idea where to retrieve the signing certificate, what format to use, or even how to generate a new one.
I can go on and on.
You're really gonna give this "AI" SSH access to your prod server and pray it doesn't rm -rf the entire thing because that line happened to be in a script designed for clearing the working copy during a deploy on a load balanced box that has been deprovisioned from the target group. Really?
If you're worried about GPT taking over your job you just show your own lack of knowledge and experience.
You can create a perfect machine to lay bricks. It could even be better than a human. But you will find that as soon as it encounters a situation it can't predict... the whole thing falls apart.
And that's what we have today. Tell me, what is AI going to do tomorrow?
>If you're worried about GPT taking over your job you just show your own lack of knowledge and experience.
Oh, I'm glad you popped out of the womb fully trained. As for the rest of us we have to go through a learning curve of gaining experience that takes years. Generally in that intern/junior stage where you make a lot of mistakes. If AI replaces that level of employee and leaves the seniors, where are the new developers coming from?
>But you will find that as soon as it encounters a situation it can't predict....the whole thing falls apart.
Heh, yea, you've not been paying much attention to AI development then.
Top end salaries for language model fluent high performers… and the rest.
I know I’m simplifying and that there are many other factors at play in salaries, but my point is that language models are changing the cost structure of knowledge work. That is also the point of TFA.
My advice: critically assess the cost structure of your job, and which parts neural networks will commodify. Then reposition yourself accordingly.
Before you say how the butlerian jihad absolutely definitely happens right goddamn tomorrow if we dOn’T sToP, try to create and successfully run an AI that would bake bread, or grow grapes and make wine, or run a good restaurant consistently day after day, year after year. If such an AI is possible then yes, perhaps we’re doomed, but even that is not a given.
I think you know where this is headed. The day I got off the GPT4 wait-list I asked it to do what Priya does on a daily basis. It got the answer wrong in the first try, but some chain-of-thought prompting and boom. GPT4 gave the correct answer in 1/10th the time it would take Priya and cost a lot less.
I doubt that she will have a job in six months."
I am a ChatGPT4 subscriber, and I have about 100 biomedical research papers that need going through to pull curated experimental result data (to extrapolate into broader quantitative in vivo results). Maybe ChatGPT4 could do it. I'm afraid to try it based on other experiences with it. It would take me about as long to vet any output from ChatGPT4 that looks correct as to go through the papers myself de novo. And I'd have to vet each one. Even if only 1 in a hundred resulted in a wrong answer, I would need to know: which 1 of the 100?
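To put a rough number on that worry (assuming, hypothetically, a 1% per-paper error rate), the chance of at least one bad extraction across 100 papers is already near two-thirds, which is exactly why every single output needs vetting:

```python
# Hypothetical numbers: 100 papers, 1% chance of a wrong extraction each.
papers = 100
error_rate = 0.01

# Probability that at least one of the 100 extractions is wrong.
p_any_wrong = 1 - (1 - error_rate) ** papers
print(f"P(at least 1 wrong out of {papers}) = {p_any_wrong:.2f}")  # ~0.63
```

The errors are silent (a wrong answer looks like a right one), so that ~63% chance can't be localized to any particular paper without checking all of them.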
If the article's subject starts looking for a job in 6 months, could I have the subject's contact? :)
You can test and audit and understand such an artifact and trust that it works.
So the problem for the people depending on this work for income or to get their start in an industry remains.
I have a different suggestion. It’s likely it is real and painful and sad. Transitions are hard, disruptive and can often cause very real and negative problems, but they are unavoidable and historically technology brings humanity to a better place in the long run.
That reminder about printing press effects in the Ng and LeCun video was a good example. A lot of crazy bad stuff happened that the printing press likely triggered, but on the other side was a Renaissance.
I think the narrative in this post is likely accurate. It’s not great on an individual human level, and that is hard, but the last possible thing we want is governments stepping in to control this situation. I don’t want to be in a global version of the Ottomans restricting the printing press.
By all accounts it’s probably not going to be ideal?
First: Why does it have to be that way? Is it maybe because the top always imposes the costs of disruption on the bottom?
Second: Now that we have AI that seems capable of this kind of disruption, can we kindly ask AI to make the transition smoother?
1) Show Priya how to use GPT4 to do her current job in 1/10th of the time.
2) Let her use part (50%+) of the time saved to learn coding or whatever other skill she needs (not sure coding is it).
(Yes, this could be a really stupid decision for the company in question. I agree with you. I'm not saying what should happen. I'm saying what will happen, and that that is really bad.)
Why would I subsidize that as a business owner? I can fire 9/10ths of the people who do that job instead.
This means Priya can replace the manager, but the manager needs to train Priya for that. Obviously that is not going to happen, as the author is very apprehensive about their own future earning potential.
This would be the problem in the new world, people with lesser power (of knowledge) would get crushed.
1- Hire someone cheaper than Priya and show him the tricks to run GPT-4. Make him work 24/7 non-stop because it's an upgrade from his naan-flipping job.
2- Fire Priya.
If there's one thing that will get automated first, it's coding (taking simple explicit specifications and turning them into code).
Unless she can learn software engineering (understanding problem areas well enough you can devise a detailed spec, and maybe implement it) she'll be out of a job as well with just coding skills.
People who haven't dealt with cheap offshored labor or bootcamp grads from the latest craze don't realize how much that workflow was similar to working with GPT4. Except GPT4 requires no training and gives answers in seconds.
And then if they have 10 people doing her job, 9 people are out of a job.
2) Most if not all employers are not going to let 50% of the work time to go towards non income generating activities.
What skills, exactly, would those be?
People are still pushing this meme on HN, even after GPT-4?
2) Chaudry ignores that you need a human in the loop to check an AI's work. You don't know that an AI's answer is always right. Even if it's perfect in 99% of cases, the answer could be jaw-droppingly destructive in that 1% of cases and make errors that even the dumbest human wouldn't make. Being grossly wrong even a small percentage of the time is far worse than a human making minor errors sometimes, and being slow all of the time.
3) Go and look at some of those historical picture sites to see images of an accounting department in some companies throughout 1900s America. Before computers, you'd see a fascinating image of a gigantic floor of an office building with desks filled with typewriters and calculators and tons of workers just adding things up and trying to store records in a bunch of file cabinets. Desktop Computers and spreadsheets were invented and all of a sudden an accountant could quickly update a few values in a spreadsheet and figure out an answer to a question far faster and more reliably than that entire floor of accountants. Did accounting software make the world worse and destroy jobs even though it made 1 accountant ridiculously more productive than they were in the past?
This has been the promise over and over again, for centuries, and it has consistently not paid off. Where's the predicted society where automation allows us all to work for two hours a day, and spend the rest at leisure?
> Though some people in some companies might be laid off, it doesn't necessarily mean that more people will be laid off than new jobs are created.
It doesn't mean the opposite, either.
> 2) Chaudry ignores that you need a human in the loop to check an AI's work. You don't know that an AI's answer is always right. Even if it's perfect in 99% of cases, the answer could be jaw-droppingly destructive in that 1% of cases and make errors that even the dumbest human wouldn't make. Being grossly wrong even a small percentage of the time is far worse than a human making minor errors sometimes, and being slow all of the time.
Companies have shown time and time again that they're willing to make this trade-off.
I very much doubt this will be the case, and if it were, it's unlikely to be effective. We know that people who have to check something that's right 99% of the time but wrong 1% of the time zone out and overlook issues. It's a big part of why self-driving cars can't be a 99% affair, and why people die when self-driving cars make bad decisions even though a driver is behind the wheel.
I think where profit can be extracted, companies will rely on good ol' lock-in or hope the cost of changing is too high to warrant a mass exodus from their platform. Everyone has a super computer in their pocket, but instead of improving typing we started adding "Sent from iPhone" as a way to excuse us from having to proofread anything. I can't count how many times I've gotten mass emails with template variables that weren't interpolated. Or emails that reference discussions that never happened. Ostensibly, a human was there to review all of this, but shirked that responsibility because ultimately they can let it waste someone else's time. I see AI turbo-charging that.
I think we're being disingenuous with all of these automated tools and thinking an attentive, caring human will check all that work. It'll be more profitable, whether in terms of a company's capital or an individual's time, to clean our hands and go "whelp, that's AI for ya" when things go wrong.
Secondly, it appears that the author is using Priya's story as a way to gain self-importance in his blog, painting her as a victim of ChatGPT's progress. While it's essential to address the potential job displacement caused by AI advancements, using an individual's plight as a narrative tool seems rather opportunistic. A more balanced discussion of the broader implications of AI on society and the workforce would have been more informative and less self-centered.
Much in line with #3, I see GPT as a (IMHO, welcome) return to the days of a secretarial pool. You couldn't rely on a pool secretary to conduct your business without oversight (because they had many Jr. Executives utilizing their services and weren't deeply engaged in your day-to-day like a dedicated Sr. Executive Assistant might be), but they were experts in typing, drafting, and professional correspondence generally. GPT promises more, as it can express some level of subject matter expertise in almost every topic, but I believe that, for the foreseeable future, it will be best utilized as a productivity-enhancing assistant rather than as a full replacement for those with the expertise needed to check its results.
Edit: On the flip side, companies that still have secretarial pools (though I'm only aware of such pools still existing in government agencies, which won't be replaced due to need for secrecy) will probably end up firing a lot of secretaries.
"Trickle down economics" don't work. Industrialization waves wrecked the lives of many many millions of people over the centuries, while the people owning the factories eventually got richer than aristocracy as they kept the price of the sold goods barely cheap enough to be a bit cheaper than manual production and pocketed the immense difference in production cost.
> 2) Chaudry ignores that you need a human in the loop to check an AI's work. You don't know that an AI's answer is always right. Even if it's perfect in 99% of cases, the answer could be jaw-droppingly destructive in that 1% of cases and make errors that even the dumbest human wouldn't make.
Replace "an AI's work" by "a C-level executive". They can run a company fine in 99% of cases and in 1% of cases run it completely aground with extremely dumb short-sighted decisions (like we're seeing with Twitter, or with Boeing, or Credit Suisse), but no one cares about that and the execs still get their golden parachute. It will be just the same with AI - eventual failure will be priced in.
> Did accounting software make the world worse and destroy jobs even though it made 1 accountant ridiculously more productive than they were in the past?
The switch towards computerization wrecked the Western middle classes. It used to be the case that white collar jobs could feed a whole family comfortably on one income, and automation made it so bad that the majority of our populations has to struggle to make ends meet now, and completely forget about having children because we can't afford large enough homes.
Everything you write can be true, and this can still be disastrous for some individual, whose life may be badly damaged.
As many are quick to point out, history has shown new jobs appear.
But it often take years or decades and causes a lot of individual stress and social unrest.
Hopefully it will be smooth for many people. But I don't think it will take long before ChatGPT allows a team of 2 to do what a team of 5 developers does now.
We might be in for some stress the coming years.
Also ignores companies not wanting to pay employees. These AI verifiers will be low paid people, akin to the off-shored social media content checkers who have to sift through digital garbage.
It’s baffling to imagine that this would be the outcome of widespread use of AI. Ultimately this may be a philosophical question. To what degree do companies enrich the lives of people, vs. just sell them trash they don’t need? Particularly when it comes to the sorts of things that AI can produce: advertising copy and content, which will surely mean “engagement.” I.e., addictive trash.
The advent of the Internet did the same for those who graduated right before 2000. These graduates did not have skills needed to work with the Internet. And often found themselves going back to school to learn a new trade.
If that's true, he'll really have problems in a GPT-4 world. Seems to be a self-fulfilling prophecy.
If that comes to pass, the only issue I see for the owning class is that they might not have customers in sufficient numbers who are able to pay.
There are some real problems here. For example can the economy even support 10X production?
Ag -> Industrial -> Information transitions were all supported by a mix of massive expansion in population, mass migration of populations, and globalization of economies.
The information -> automation transition does not seem to have these 3 things in the same way. Globalism is slowing or reversing into protectionism. Migration is still high, but seemingly for different (geopolitical) reasons, as people are displaced by war, crime, or climate. And, critically for this discussion, population growth has SLOWED way down and is expected to reverse around 2040.
This means people looking at the historical models for how these tech disruptions played in are very flawed in their "everything will be just fine" handwavium
My prediction is we will see a MASSIVE increase in weath gaps, and extreme decrease in standard of living in most of the industrial world (we are already seeing this in a limited way) leading to more and more political instability
More capacity doesn't mean more ideas; it may just make things easier for copycats, and every product will face more competition.
See, that worked through the 20th Century because we, in the West, had effectively unlimited materials and unlimited energy - we just had people dig it out of the ground, easy.
But now we realise that fossil fuels wouldn't last forever, both because of environmental impact and it just plain running out.
Any plan for the future that requires us to produce more in order to get out of a crisis is 100 years too late.
We need to distribute the benefit of the efficiencies, not think we can produce 10 times as much junk because we made things more efficient.
But Capitalists thrive when we create more waste, strip more non-renewables, make more pollution.
Re 2) built in sanity checks aren't too hard; yes we'll keep humans in the loop for high-cost and life critical systems for a few more years.
> companies can potentially produce 10 times as many products/services
How exactly is that a good thing? We already consume too much; now we will have 10x as much to consume? It's unlikely that demand for "the things" goes up 10x, which means the value will drop. Except this isn't what happens:
Assuming that you make a product that people actually want, the company will direct almost all of the proceeds to enrich shareholders and executives (You know, "fIdUcIaRy dUtY")
The products may be good for a few years but eventually get worse and become either wasteful (if they weren't already wasteful) or actively spyware
So no, I don't expect anything positive for most humans to come out of corporate control of AI that allows you to eliminate human jobs.
In what regime of capitalism is a shareholder incentivized to retain expensive employees over equally or more effective machines? History tells us there is no regime that would support that over any time period, because the human is now just a "cost" or "waste" to the profit margin of the company.
I had a remotely similar experience a few years ago, which always got me thinking.
As lots of people know, in the US at least, when you go to the gas station to refill, you do it yourself. It's very easy, has almost no learning curve, and everyone is used to it.
When I went back to my home country (China), I was surprised that self-service refilling is still not the norm. It exists, but most of the time there are workers refilling for you. It's not even more convenient, because you have to walk to the cashier inside to pay instead of staying at your vehicle.
I said to my parents how "wasteful" it is to have all these people around. They replied that it still provides jobs, however low-skill or low-paid. Without these "stupid" jobs, lots of people would be unemployed.
I started to think about what the ultimate purpose of work is. Sure, we can (and will, regardless of what I or the author want) continue optimizing how efficient everything is, but the benefit should, at least in part, be reducing people's workload without reducing their income. Unfortunately, we all know what actually happens: the reduced workload isn't distributed evenly among workers at unchanged pay; instead, we just hire fewer workers with a similar workload, and the rest are now jobless.
I don't think that's going to change any time soon, so anyone who wants to survive has to adapt. And that's fine. But I do now have more tolerance and empathy for "worthless" jobs.
“[on the topic of a patient with moderate brain damage] back in the Environment of Evolutionary Adaptedness, in a nice tribe with Dunbar’s number of people, there would be no problem … he could just hang out and live in a cave and gather roots and berries and maybe hunt buffalo….
“But society came and paved over the place where all the roots and berry plants grew and killed the buffalo and dynamited the caves… This increased productivity by about a zillion times, so most people ended up better off. The only ones who didn’t were the ones who for some reason couldn’t participate in it… Society got where it is by systematically destroying everything that could have supported him and replacing it with things that required skills he didn’t have.
“…Imagine an employment waterline, gradually rising through higher and higher levels of competence. In the distant past, maybe you could be pretty dumb, have no emotional continence at all, and still live a pretty happy life. As the waterline rises, the skills necessary to support yourself comfortably become higher and higher. Right now most people in the US who can’t get college degrees – which are really hard to get! – are just barely hanging on, and that is absolutely a new development. Soon enough even some of the college-educated won’t be very useful to the system. And so on, until everyone is a burden.”
I see this happening myself as well. Part of my job as a hotel receptionist is taking bookings over the phone: in many cases I am literally just asking them questions and typing their answers into a form, exactly the same form that is available on our website. They even get the same price as if they went to our site directly. But they call up instead, because phones are within their capability to understand, while websites are definitely not. E-commerce has brought increased economic activity, but for some people it has also paved over most of the ways they are capable of performing economic activity.
GPT4 has raised the productivity waterline to a level where “people who blog” are now underwater, so we’re hearing about it a lot more. But it’s just an instance of the general phenomenon. Civilization has been systematically drowning the less cognitively fortunate for a few hundred years, and we should have had something in place to address this long ago.
Who will prompt the AI for code? Manager?
Also, you mentioned it gave wrong result at first. How did you know? Because you know how to code.
As mentioned in another comment, she should learn more about programming so she can become indispensable.
I think you, likely as not, have a rude awakening ahead of you. GPT can enshittify code, too. Unwind libraries into oceans of repeated Copilot-descendant code all over the place. It'll just take longer.
It is not beneficial to human organizations to have indispensable members. You can argue it's not good for the mental health of the indispensable people either.
When I first heard the quote in my first line, I thought it was cruel, inhumane and shockingly selfish on the part of employers. Nearly 30 years later, I see it almost entirely the other way around. Let's not build organizations full of indispensable people, but rather organizations in which tasks and expertise and stress and fear and success and joy are shared.
It's expressing the author's discomfort and so it's pessimistic.
I think those are fine.
Who will prompt? Yes, the manager.
The post covered that - she's trying, but struggling with it. Not everyone has the kind of mind that can understand, write and manage code easily. So what then?
maximum-scale=1, user-scalable=no is meant for apps and games that implement their own handling of the zoom gesture, e.g. to do a 3D zoom on a map that tilts the viewing perspective in addition to enlarging the map.
Even if you don’t have time to make your layout respond well to zoom, it’s still better to allow it. Disabling this user agent feature is extremely hostile.
> People really need to accept AI as the useful tool it is and lay off the old grumpy "back in my day"-attitude. I am a disruptive technology and I am here to stay. Adapt and praise me for what I am [or else] ... and most importantly recognize what I enable you to do - instead of feeling challenged by me. Every profession that has tools that assist or take away certain aspects of it will enable you to put in more work and effort into other aspects of it. Just use that as an opportunity, like it happens every so often with every other technological advancement.
Basically, if you were hoping to coast by on mediocrity, you will be shown the door, but if you excel at something, the opportunity will be a lot greater.
Yesterday I asked it to give me a postgres SQL query to do something I don't often do, but I had a pretty good idea of what it would be. It confidently lied. I replied with "that is plausible but incorrect, I think you need to use X function instead" and it actually said "You're right, I'm sorry, that is the correct way to do X, here is an example". I laughed pretty hard at the casual apology followed instantly by the actually correct result. I'm in no danger of anthropomorphizing an LLM, but still.
If your job is impacted by AI, do not give up. Learn to use the AI to do your job better. Become a 10x worker using AI.
In the early 1980s, spreadsheets displaced accountants. The average accountant lost their job. The ones who came out on top over that decade learned to use spreadsheets early, and made themselves more productive. If the IRS fired everyone who was spreadsheet friendly, the recently fired spreadsheet gurus started small businesses or got jobs quickly. Eventually productivity won out over dogma, even at the IRS.
I'm a teacher. I can see GPTs replacing text books and most teachers. I'm going to be a 10x teacher when I go back to school in August. I will have certifications I got by studying with a GPT. I will have class modules and web sites I built using GPTs. I will be more ready than ever to help the kids. Then I'm going to show them how a GPT can answer their questions, so they rarely need me.
Don't wait for an AI to take your job. Become a 10x whatever you are using AI. Priya (the person the article is about) should become a 10x biotech girl.
My advice to all the upper-middle-class professionals (teachers, lawyers, engineers, managers, accountants, doctors) out there who will soon be disintermediated by an LLM is "Learn to weld!"
The author is correct, the current arrangement will change, but the opportunities will likely increase as well, not just at the macro level but in Priya’s intermediate future as well.
This is encapsulated in a sob story.
Am I wrong? I would like to hear your counter arguments
Key facts:
1. Priya is a biomedical data curator in her mid-20s from a poor background in Uttar Pradesh, India.
2. She has a bachelor's degree in Biotechnology, and her job involves annotating RNA sequencing data from scientific papers.
3. The author tried using GPT-4 to perform Priya's job and achieved the correct result in less time and at a lower cost.
4. The author speculates that Priya may lose her job within six months due to automation.
5. The author expresses concern about their own long-term career prospects in software engineering because of GPT-4.
Logical fallacies:
1. Hasty Generalization: The author assumes that GPT-4 will make Priya's job obsolete based on a single successful trial.
2. Slippery Slope: The author assumes that GPT-4's impact on Priya's job will lead to her losing her job and moving back home, and potentially to the decline of the author's own career prospects in software engineering.
Counter arguments:
1. GPT-4 may not be able to handle all aspects of Priya's job or maintain consistent quality, which could still necessitate human intervention.
2. The advent of GPT-4 could lead to new job opportunities that require both domain expertise and an understanding of the technology.
3. As technology progresses, there is potential for job retraining and upskilling to adapt to new demands in the workforce.
I’ve used ChatGPT to write some simple AWS automation scripts. It’s right 90% of the time after some prompting. But I still have to have enough domain knowledge to know if it’s correct. It’s just a shortcut to make me more efficient. The same is true in this case. You can’t trust ChatGPT to get it right so you still have to know what you’re doing.
> I don’t see a long-term career in software anymore.
No one has a long term career in “programming”. That’s all ChatGPT is good for. After your first couple of years in the field, it becomes about translating business needs to code.
It is not able to do her job. The most charitable interpretation would be that, one time, ChatGPT did something that looks good on first inspection. But you can’t trust it. It has no accountability. It has unknowable reliability. It would be like replacing an adult with a child. And now we learn that data privacy is an issue too, as Samsung discovered.
LLMs are incapable of doing work that requires predictability, integrity, accountability, etc.
Stop swallowing the hype, guys.
But that has been predicted for every new increase in automation, so maybe we‘ll be wrong again.
To benefit from automation, you need to take the initiative to integrate automation into your own workflow. The idea that someone else will automate your job, but you will somehow benefit is a fantasy.
If we do get UBI, it won't replace lost wages from losing your job. Most of that money will go to the team at your company who set up GPT to do your work and the rest will be distributed among shareholders.
Every single UBI experiment that has been tried has ended in failure.
It is borderline insanity to retry such an experiment expecting a different result, bankrupting an entire country just for those sitting around doing nothing. All of this can happen with what we now have, without compromising the social stability and social cohesion of the American Melting Pot.
Diversity really is our strength - pick a Synagogue, pick a Church, pick a Mosque, pick a Temple, pick a Community Center you like. Join that Community so that work and salaries aren't your lifeblood. Learn mathematics for the sake of enjoying and understanding mathematics. Learn science for the enjoyment of it. Worship God for the enjoyment of it. Stretch the limits of apologetics and philosophy for more innovation there if you are an agnostic instead. GPT-4 will not be able to generate innovative new ideas, but it can definitely surface the best information and also automate many things. The innovation in thinking will still need to come from humans, and human-AI partnerships will always be necessary like in The Matrix.
Imagine if the Govt of India (or an organization) made physical robotic agents for 400 million+ farmers (most of whom hold minuscule land, < 1 acre, or are day laborers), specifically allowing them to do a myriad of tasks and customize them to an extent. It could eliminate major causes of poverty. The brilliance of this idea is:
1. Bottom-up empowerment: Building such agents does not require the massive supply chains and operational nightmares that other job- and income-creating manufacturing industries, like car or battery manufacturing, do.
2. Force multiplier: This is way better than the Keynesian monetary idea of just giving people money. An agent that allows you to make food, build a house, and get an education is infinitely better. It does not just give you wealth; it gives you hope.
In India, 6.25% of people pay income tax (https://incometaxindia.gov.in/Charts%20%20Tables/Why%20shoul...) and the statistics on people near the poverty line are grim (though we haven't had a national census in 11+ years to really measure progress on poverty). But I have great hope for the future...
On the other hand, those who graduated and have 2 years of experience on their side have built up a degree of job independence where they can try new things like AI and modify their work process. They are well positioned.
The other thing people do not realize is that though jobs will get done quicker, the quantity of jobs will only shrink for a limited time. Imagine somebody testing chemicals that people up the supply chain are supplying. Now imagine that AI enhanced the testing of those chemicals and sped up the process; it has yet to speed up the supply. Give it a couple of years and industry will add more tests instead of reducing the number of people who are testing (there is no such thing as cost savings).
The main issue with "AI should never have existed" is kind of the same as with "go to a university to learn". We will look back in 10-20 years and question why universities were focusing on the wrong aspects: learning vs. reasoning, creativity vs. memorization.
This will affect every industry, every career, every person. The focus on the wrong side of the coin is creating a polarization of "good vs. bad", which is not far from what we have today. This is beyond that; this is a new way of interacting with computers that is even more human, because it commoditizes things that were previously owned outright by certain institutions or countries, or simply weren't public.
We could work on our collective waste of human cognition and talent rather than blaming it on GPT-4
Efficiency is good.
If Priya gets to stay home and e.g. tend a Permaculture food forest while a computer does her job for her at a billionth of the cost (resource usage) that should be a good thing?
Her QoL would arguably be much higher if she gets to keep the pay. She would have eight hours a day to pursue science rather than drudging as a data jockey, eh?
The problem here isn't the talking computers, the problem is how to allocate the economic windfall they generate.
I've heard it said, "Let the robots do the work and we'll take their pay."
The questions are:
1) Who decides the boundaries of acceptable thought?
2) Who benefits from the economic windfall?
3) How do we educate ourselves in a world where computers can answer any (allowed) question?
In my experience there is no task where chatGPT is 100% reliable, if it is high stakes it needs review. And chatGPT might not be best - for almost any task there is a specialised model that does it better.
Humans make different kinds of errors anyway, we are GPT-N level for a large value of N - so a combination of AI+human might give best results.
1. Give Priya GPT4 so she can be much more productive. It probably won’t be right 100% of the time, but she’ll become an expert at using it for this use case.
2. You said Priya has had trouble learning to code, well now she has a world class tutor to bridge the gap and learn faster and more effectively than any time in history.
3. Now Priya is 10x more productive, still needed in the loop to verify and maintain the work, and knows how to code.
4. Priya can also use GPT to help apply for any other job or help solve any other problem she’s running into locally.
5. Yes, it’s true that eventually humanity will be replaced by artificial general intelligence, but GPT4 specifically is likely to create as many new jobs as it replaces, and LLMs are likely to take 10+ years to really saturate everywhere.
6. By the time AI replaces all jobs and we reach the doom-and-gloom, might-as-well-give-up state you describe, society will have been radically transformed, and “wanting to make a lot of money as a software engineer” will likely be replaced by “AI and robots already take care of all society’s base needs, so basic food and shelter are free anyway and you can pursue whatever interests you have” (this is likely 50-100 years away at minimum).
- Priya may find work using gpt4 in her current job or in another company. Some types of work that are currently economically infeasible may become viable with a ten times speedup. It's certainly plausible that it will become worthwhile to do more of the kind of work she already does.
- if as a society we can do biotech research faster and cheaper there may be significant benefits to human health
But even so, I think we can't afford NOT to have this kind of development if we want to survive as a species. If we want our lives to improve, our loved ones to live longer, healthier lives. The amount of good that can come from this technology is immeasurable. The only thing we as humanity need to do is rein in our ego, just like chess and go players had to do when their crafts were supposedly rendered irrelevant in the face of an insurmountable intelligence.
And I think the risks from AI are overblown: is AI really more dangerous than the invention of gunpowder, electricity, or the nuclear bomb? I don't think so.
What's scary is that we live in the transition period and it could be messy. What comes after the "transition period" of having a world without AI to having a world with AI? I'm not going to try to predict that but I think there are as many if not many more positive outcomes than there are negative outcomes.
It might just be. Those inventions did destabilize society quite significantly. On the nuclear bomb, I'm not sure we've seen how that one plays out quite yet; mutually assured destruction is kinda stalling it. Maybe AI can be more selective as a weapon and easier to employ? In the next war we could be living the next AI sci-fi movie.
Unintended consequences are hard to predict in advance. Who would have thought, 10 years back, that the first professions at risk from software automators would be the software professionals themselves? People were predicting the end of menial blue-collar jobs and their replacement with robots and automation. "Jobs of the future" were white collar, at least that was the narrative a decade or so ago. Now more than one comment on this forum (and I believe, given human nature and the nature of power, they are correct) is thinking they need to change into blue-collar jobs or become teachers, etc.
Sadly, my personal opinion as I've gotten older is that technologists (I was one) are often the most idealistically naive of them all. The tradespeople I know laugh at ChatGPT when I talk to them: "serves them right" is the general reaction. It's that quality that often leads technologists to deny what an average human with power/wealth will do with AI.
Chess players cannot be replaced by AI regardless of how well the AI plays, so the comparison is meaningless.
We made computers to do our work for us and we are succeeding. But even before GPTs, there were a lot of problems with tech changing what it means to be human.
How much of this is even worth fighting?
The LLMs have no understanding whatsoever, just token IDs and their probabilistic relationships with each other. I do not see them replacing even the 'shittiest' programmer. Anyone writing games/software with an LLM is someone who can actually code, and would surely have produced the same or better output, just by spending more time than with an LLM.
But if you're a clueless average desktop user, you cannot expect LLMs to churn out a full working end-to-end solution, not yet and probably not ever with the current probabilistic graph-fitting foundations.
So LLMs are surely very useful, marvellous, and handy tools in capable hands, and will remain so, but the Armageddon is not so imminent that you should start calling for a six-month ban. Imagine, six months. What happens after six months? No answer.
Consider a programmer who reports a productivity increase of 50% due to AI. A question the company will be asking itself is: can we get rid of 50% of our programmers?
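Worth noting, though, that the arithmetic doesn't support a 50% cut. A quick back-of-the-envelope sketch (the numbers are illustrative, not from the thread): if each developer produces 1.5x as much, matching the old output takes only about two-thirds of the staff, a roughly 33% reduction rather than 50%.

```python
# Illustrative headcount arithmetic for a 50% productivity increase.
devs_before = 10          # hypothetical current team size
old_rate = 1.0            # output per developer before AI
new_rate = old_rate * 1.5 # 50% more productive with AI

total_output = devs_before * old_rate          # output to maintain
devs_needed = total_output / new_rate          # staff needed at new rate
cut_fraction = 1 - devs_needed / devs_before   # fraction the company could cut

print(round(devs_needed, 2))   # ~6.67 developers
print(round(cut_fraction, 2))  # ~0.33, i.e. a third, not half
```

Cutting a full 50% of staff would only break even if productivity doubled (a 100% increase), not rose by 50%.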
The author expresses initial enthusiasm about technological progress, which they believe improves our lives and generates wealth despite its job displacement effects. However, the author becomes concerned about the potential negative impact of language models like GPT4 on jobs. The author has a colleague named Priya, who comes from a poor background in India and works as a biomedical data curator. The author uses GPT4 to automate Priya's work and is concerned that she may lose her job in the future. The author feels anxious about the implications of this technology and is uncertain about their future career prospects in software engineering. The economic impact of LLMs is unknown to the author, but they believe that some individuals will benefit greatly while others will suffer job losses.
Poor countries will most likely follow suit, but it will take a while.
The end result, hopefully, will be a world where AI is a true benefit for the world population, but it will require a change in mindset.
AI is unique in that it is essentially a skill/technology replication machine. What this means in reality is somewhat like ...
"Climbing the skill ladder is going to look more like running on a treadmill at the gym. No matter how fast you run, you aren’t moving, AI is still right behind you learning everything that you can do. "
Which I wrote in my longer exploration of societal impacts here - https://dakara.substack.com/p/ai-and-the-end-to-all-things
Can someone explain this? People keep asking me about ChatGPT but none of them have tried it. Why? Are people too scared to even take a look? It's been free for almost half a year now!
It helps me learn about things I don’t know about. For some reason it’s better than going to the equivalent wiki.
It's not the programming itself, but moving through the inertia of having to write code. There’s something difficult about just getting started that ChatGPT makes much, much easier. However, for it to be useful so far I’ve had to do significant debugging on the code, but it gets me somewhere that would have otherwise taken me days.
I think where this goes is domain specific, because then you can really specialize and make a killer product. This is the 0-5 year roadmap.
The 5-10 year roadmap is then to generalize across the various domains and something approximating the feared AGI.
I am not scared of the LLMs, I am somewhat concerned we are giving these LLMs access to APIs in quite an unchecked fashion.
The question is what will companies do with this productivity multiplier. My pessimistic guess would be cost cutting and letting people go. A bit more optimistic view could be better software, better test coverage, improvement in code quality, more features quicker or maybe more native applications if companies only need to develop one native app and AI can generate the app for other platforms.
Do we know the actual price of GPT-4? I assumed they’re operating at cost and want to advertise the product. Is ChatGPT ready to handle a high volume of requests, at low cost, long term?
A guy who can code great generic stuff with GPT-4 but doesn't particularly _enjoy_ integrating it with client systems and being attentive to their particular circumstances and needs is no threat at all, at least not in the field of business that I am in and have been for the last 20 years.
It's not that it's always good, bad or anything else. It is simply inevitable.
Technology is not a living thing, and it has no will. We are anthropomorphizing the collective behavior of lots of people: people with the curiosity, drive, and ambition to achieve something, like building a tool that can think and perform like a brain.
So progress is inevitable, because people push it forward to satisfy their own internal drives. Changing this requires changing people's drives. Given the current dominant economic model, that's close to impossible, because this is exactly what it prioritizes: contribution and progress, at any cost to anyone.
On a societal level “work” does not generate money.
Money exists as is and is only generated as a consequence of inequality these days. Banks inject money to compensate for the rich who won’t or can’t spend their money.
“Work” generates money-transactions.
I can't help but imagine a future where a country, let's call it Xhina, starts building robotic agents to do exactly that, where these physical AI agents could make and fetch food, provide housing, and give damn good guidance on education, medical, and legal issues. I would call it Good-enough AI (GAI), and it's coming.
Maybe she'll lose her job because there's only so much RNA sequence data that needs to be analyzed and fewer people can do it, but it's hard to say.
Maybe now more RNA sequence data can be analyzed and at a cheaper cost.
I wouldn't fall into despair just yet, especially since GPT-4 is still limited at a rate of 25 messages every 3 hours.
Definitely be on the lookout for changes and adapt accordingly, but be realistic about what may happen.
It can go both ways.
The better, faster, cheaper, we can do it, the better the world will be.
So, in a twist of irony (considering the OP was written in India), here is the advice on retaining market viability in the face of cheap outsourced labor:
Do not train your replacement
So I don't know about India, but at least in US, domain literate workers should start companies that provide whatever they did as a unit of a corp as a service. And yes, use tools like GPT for your business.
You will lose the job either way. Walk out before helping in the task of setting up GPT to do your job, and start a company.
Forget ChatGPT. If Priya didn't struggle with coding, she could have gradually automated this task using some scripting language and just sat there in the office earning a living, like in many other similar stories we have heard.
These types of repeated, regular tasks get automated, ChatGPT or not.
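As a purely hypothetical illustration (the task, the regex, and the data are invented here, not taken from the article), the kind of small script meant is something like extracting dataset accession IDs from paper abstracts instead of copying them out by hand:

```python
import re

# Hypothetical curation step: pull GEO-style accession numbers
# (e.g. GSE12345) out of free-text abstracts, deduplicated,
# in order of first appearance.
ACCESSION_RE = re.compile(r"\bGSE\d{3,6}\b")

def extract_accessions(abstract: str) -> list[str]:
    """Return unique accession IDs in order of first appearance."""
    seen = []
    for match in ACCESSION_RE.findall(abstract):
        if match not in seen:
            seen.append(match)
    return seen

abstract = "RNA-seq data are available under GSE12345 and GSE67890 (see also GSE12345)."
print(extract_accessions(abstract))  # ['GSE12345', 'GSE67890']
```

A loop over a folder of text files plus a CSV writer turns this into exactly the "sit there and earn a living" automation the comment describes.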
Interestingly in the age of Artificial Intelligence, jobs that require human "intelligence" are more likely to survive. As in spite of the name, AI really isn't that smart.
So now Priya can produce far more output using GPT4 as a skilled operator, making the product she provides cheaper for others to build upon, likely expanding the market for such goods, and (as so often happens in history) resulting in more, not less, people employed.
We didn't end up with billions of jobs despite automation and invention by having people idly sit by when they could learn new tools.
The more you know about a domain, the better the questions you can ask ChatGPT. What people do not get is that, lacking human creativity, it cannot innovate like a human does or create new knowledge, but it can help you get to know any existing field as long as you know what to ask.
But the last 15% is making sure things actually work, and ChatGPT simply can't do that. Your friend is still going to have a job, but she'll probably be made much more productive by AI tools. We're a long way off from being able to depend on these tools for any reliable type of output.
From my personal experience, while today we can create PoCs of GPT4 replacing a white-collar job, when you then try to actually productize it you still hit a data-encoding limit...
And that doesn't even touch the QoS topics that come with automation/ml.
One would think the company would fire redundant SWEs. Nope; the company is actually hiring more. GPT-4 is increasing SWEs' productivity, not replacing them.
The flushing of the mediocre has begun. People are already scared of a bullshitting sophist, and Lex Fridman is correct on this. This also puts title inflation into perspective, with so-called 'senior' engineers actually being less than junior: deferring to GPT-4 for everything and expecting hopelessly utopian salaries, which will certainly be reduced. I'm afraid GPT-4 has fired the starting gun on the race to zero, with everyone (juniors and seniors alike) affected.
Learn to adapt.
My thoughts:
- If we ever achieve an AGI with legitimate consciousness/self-awareness/agency, I believe it will be dramatically more powerful than humans
- Humans have driven countless numbers of species extinct, not because we’re trying to, but because we’re so much more powerful than them that we wipe them out by accident. They’re ants to us, we just don’t notice much about them, and IMO we’ll be ants to an AGI with legit consciousness/self-awareness/agency
- Obviously GPT4 is not that, but progress in AI has been so rapid recently that I believe we’ll get there in the relatively near future. Maybe 10 years, maybe 100, but it feels pretty likely it’s going to happen. When it does, I think humanity will be in an extremely tenuous position
It cannot arrive soon enough for me.
Technology marches on. Whenever there is a new technology, someone loses a job, but that is not a good argument for stopping technological progress.
Needs of the many and so on
The AI generated content will scale and proliferate so much that it will put a premium on verified, curated human-generated content.
Initially I was a ChatGPT skeptic. Then I started using it on a regular basis and unlocked more potential. Now I almost feel selfish in safeguarding the secret prompts that are helping me be productive and acquire new skills. I personally am quite confident that I will survive the AI apocalypse. But for the same reason, I am afraid for a lot of people who I know will perish.
I've only used ChatGPT for reasoning or, e.g., finding synonyms for variables (better naming).
I don't see it replacing my job, but it's a replacement for rubber ducking.
* Telegraph Operator
* Typist / Stenographer
* Switchboard Operator
* Lamplighter
* Ice Delivery Worker
* Elevator Operator
* Film Projectionist
* Linotype Operator
* Milkman
* Coal stoker
> I know! What a dumb thing to say, right? Technological progress is always good.
No! No! Did no one go to ethics class?
Highly unlikely situation.
But guess what? Life continues and new kinds of jobs are created! Unemployment numbers are some of the lowest they have ever been. Now, you might argue those numbers are inaccurate, but that's a different topic.
Do you fear washing machines because they eliminated the requirement for doing the laundry by hand? If not, then why is automation in the workplace different? We fear AI and other forms of automation in the workplace because we have no ownership or control over our economic destiny in that space.
What about time after work? That's when I learned to code, in between work and going to school, in fact.
“But I need to pay rent and eat!”
Never thinking of imperial needs before themselves!
surely this is meant partly tongue-in-cheek?
Make no mistake, GPT-4 is just a beginning. Far more powerful models will come, without any compromise, without any limitations. They will REPLACE your jobs and take away your income, and you will have no choice but to starve after inevitably running out of your savings. It's not about earning more or less; it's about winner takes all. More specialized models trained on every profession will come, too.
Midjourney has already knocked a huge number of digital designers and workers out of their jobs. I'm talking about a great profit wipe which is currently happening at lightning speed across the entire digital art industry. A successful artist often needs to spend 10+ years learning to make better art. Now that's 100% irrelevant. Every company I know is switching to Midjourney at full speed, since you do not want to be left behind. Human resources have already become a burden. You wouldn't even believe how fast the entire field is shifting to this without looking back, leaving countless human creators with now-worthless skills in the dust.
But remember - the current state of Midjourney is just its early stage. I fully expect its output quality to completely decimate every human artist on earth within 1-2 years, maybe 18 months, maybe even less than that. Remember AlphaGo? It's simply on another level. You just cannot compete. Soon (1-2 years tops, 12-18 months more likely) the creative industry will vanish and everyone will be forced to switch jobs.
Yes, UBI will happen because people demanded it on Twitter. Make no mistake, there will be zero protection, zero alternatives, zero compensation for the AI takeover. Billions of people will be out of a job, starving, dying, while the top companies take 100% of the profit. This is the only possible outcome.
You have enjoyed peak capitalism. You will soon need to embrace 100% of it.
are they award-winning proposals???? Not by any means whatsoever. However, 99% of MY clients could not give two sh*ts about the fine-grained details of a proposal and just need something high level with a price to begin the work.
as an example of how i use chatgpt to create the proposals:
me --- write me a proposal for creating an AWS environment consisting of 3 EC2 webservers, 1 EC2 database server and an s3 bucket all within a VPC. Include an AWS VPN for accessing into the VPC.
chatgpt4 --------- Proposal for Creating an AWS Environment with 3 EC2 Webservers, 1 EC2 Database Server, and an S3 Bucket in a VPC with AWS VPN Access
Introduction:
We propose to create an AWS environment that includes three EC2 webservers, one EC2 database server, an S3 bucket, and an AWS VPN for secure access to the VPC. This environment will be deployed within a Virtual Private Cloud (VPC) on the AWS platform.
Objectives:
The objectives of this proposal are as follows:
To deploy a scalable and reliable web application infrastructure using AWS.
To create an S3 bucket for storing static assets and media files of the web application.
To deploy three EC2 webservers to handle incoming traffic and requests to the web application.
To deploy an EC2 database server to store and manage the web application's data.
To set up an AWS VPN for secure access to the VPC and the resources within it.
AWS Infrastructure Components:
VPC: A VPC provides a logically isolated section of the AWS cloud to launch resources in a virtual network environment. We will create a VPC with appropriate CIDR blocks to house the resources of the proposed AWS infrastructure.
Subnets: Subnets will be created within the VPC to partition the IP address range of the VPC.
Internet Gateway: An internet gateway will be created to allow access to and from the internet.
Route Tables: We will create a route table to route traffic from the internet to the web servers and the database server.
EC2 Instances: Three EC2 instances will be created to host the web application, and an EC2 instance will be created to host the database.
S3 Bucket: We will create an S3 bucket to store the static assets and media files of the web application.
Security Groups: Security groups will be created to control inbound and outbound traffic to and from the EC2 instances and the S3 bucket.
AWS VPN: An AWS VPN will be set up to provide secure remote access to the VPC.
Deployment Plan:
Create a new VPC with appropriate CIDR blocks.
Create subnets in the VPC for the web servers, the database server, and the VPN.
Create an internet gateway to allow access to and from the internet.
Create route tables to route traffic from the internet to the web servers, the database server, and the VPN.
Launch three EC2 instances to host the web application in the web server subnets.
Launch an EC2 instance to host the database in the database server subnet.
Create an S3 bucket to store the static assets and media files of the web application.
Create security groups for the EC2 instances and the S3 bucket.
Set up an AWS VPN to provide secure remote access to the VPC.
Conclusion:
The proposed AWS infrastructure will provide a scalable and reliable web application environment. The deployment of three EC2 instances for the web application, an EC2 instance for the database, and an S3 bucket for storing static assets and media files will ensure that the web application performs optimally. The AWS VPN will provide secure remote access to the VPC, enabling the authorized personnel to manage the resources within the VPC securely.
me ----- do a technical elaboration of the deployment plan
chatgpt4 --------- Technical Elaboration of the Deployment Plan:
Creating a new VPC with appropriate CIDR blocks: The first step is to create a new VPC with a CIDR block range that does not overlap with any existing networks. A CIDR block is a range of IP addresses that will be used for the VPC. In this proposed infrastructure, we will create a VPC with a CIDR block of 10.0.0.0/16, which provides up to 65,536 IP addresses.
Creating subnets in the VPC for the web servers, the database server, and the VPN: Subnets will be created within the VPC to partition the IP address range of the VPC. We will create three subnets for the web servers, one subnet for the database server, and one subnet for the VPN. Each subnet will have a unique CIDR block range within the VPC's CIDR block range.
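The CIDR arithmetic in the two paragraphs above can be sanity-checked with Python's standard `ipaddress` module. A sketch, assuming the five subnets are carved out as /24 blocks (256 addresses each) from the 10.0.0.0/16 VPC range; the /24 prefix length and the subnet names are illustrative choices, not something the proposal specifies:

```python
import ipaddress

# The proposed VPC range: 10.0.0.0/16 = 2**(32-16) = 65,536 addresses.
vpc = ipaddress.ip_network("10.0.0.0/16")
assert vpc.num_addresses == 65536

# Carve out non-overlapping /24 subnets for the three web servers,
# the database server, and the VPN (names are hypothetical).
subnets = list(vpc.subnets(new_prefix=24))[:5]
names = ["web-1", "web-2", "web-3", "db", "vpn"]
for name, net in zip(names, subnets):
    print(f"{name}: {net}")  # web-1: 10.0.0.0/24, web-2: 10.0.1.0/24, ...

# Each subnet must fall inside the VPC block and not overlap its peers.
assert all(net.subnet_of(vpc) for net in subnets)
assert not any(
    a.overlaps(b)
    for i, a in enumerate(subnets)
    for b in subnets[i + 1:]
)
```

Note that in practice AWS reserves the first four and the last IP address of every subnet, so a /24 gives you 251 usable hosts, not 256.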
Creating an internet gateway to allow access to and from the internet: An internet gateway will be created to allow access to and from the internet. The internet gateway will provide a point of entry and exit for the VPC's internet traffic.
Creating route tables to route traffic from the internet to the web servers, the database server, and the VPN: A route table will be created to route traffic from the internet to the web servers, the database server, and the VPN. The route table will associate the subnets with the internet gateway and route traffic to the appropriate subnet.
Launching three EC2 instances to host the web application in the web server subnets: Three EC2 instances will be launched in the web server subnets to host the web application. The EC2 instances will be launched with an Amazon Machine Image (AMI) that contains the necessary software and configurations to run the web application.
Launching an EC2 instance to host the database in the database server subnet: An EC2 instance will be launched in the database server subnet to host the database. The EC2 instance will be launched with an appropriate AMI that contains the necessary database software and configurations.
Creating an S3 bucket to store the static assets and media files of the web application: An S3 bucket will be created to store the static assets and media files of the web application. The S3 bucket will be configured to be accessible only from within the VPC.
Creating security groups for the EC2 instances and the S3 bucket: Security groups will be created to control inbound and outbound traffic to and from the EC2 instances and the S3 bucket. The security groups will allow traffic only from specific IP addresses and ports.
Setting up an AWS VPN to provide secure remote access to the VPC: An AWS VPN will be set up to provide secure remote access to the VPC. The VPN will be configured to allow authorized personnel to connect to the VPC securely. The VPN will use Internet Protocol Security (IPsec) to encrypt traffic between the remote user and the VPC.
Conclusion:
The proposed deployment plan includes the creation of a VPC with subnets, internet gateway, route tables, EC2 instances, S3 bucket, and security groups. The deployment plan ensures that the web application and the database server run efficiently while maintaining security. The AWS VPN will provide secure remote access to the VPC, which allows authorized personnel to access the resources securely.
This sob story is proof he’s right.
Making us feel sorry for this real or imaginary Priya character who'll lose her job to AI is along the same lines as saying that we need to personally recycle to prevent climate change. It's not the solution.
The solution must be systemic change. All the profit goes to huge companies while they destroy the environment and suck up natural resources, and now informational resources too, in an endless cycle of greed and lack of accountability, with the only motivation being more bonuses for shareholders and executives. This cannot stand. Our societies are already breaking down. We need change.