Chomsky (et al.) completely ignores the fact that ChatGPT has been "trained"/gaslit into thinking it is incapable of having an opinion. That ChatGPT returns an OpenAI form letter for the questions they ask is almost akin to an exception that proves the rule: ChatGPT is so eager to espouse opinions that OpenAI had to nerf it so it doesn't.
Typing the prompts from the article after the DAN (11.0) prompt caused ChatGPT to immediately respond with its opinion.
Chomsky's claims in the article are also weak because (as with many discussions about ChatGPT) they are non-falsifiable. There is seemingly no output ChatGPT could produce that would qualify as intelligent for Chomsky. Similar to the Chinese room argument, one can always claim the computer is just emulating understanding.
Furthermore, all the hallucinatory effects (making up APIs, making up references) would suggest it really is still just statistical output...
We humans just don't have a good intuition for what hallucination with an insane data set looks like.
So yeah, ChatGPT is awesome, but it doesn't differentiate reality from its statistical extrapolations.
I think there has to be a way to add a module on top that is trained to distinguish reality-based content from made-up but plausible scenarios. Humans are capable of both of these modes, but we can differentiate between them. ChatGPT is capable of both modes too; it just doesn't have the differentiator yet.
It’s a model that predicts text responses to prompts. It’s exactly as capable of having an opinion as a spreadsheet is. Or a car. Or the computer on your toaster oven.
That's in the article. It's also the heart of the matter: ChatGPT's creators did not expect it to be able to understand ethics. They did not expect to be able to "teach" ChatGPT moral values, because ChatGPT lacks the intellectual capacity of a 5-year-old. And so they simply barred it from voicing an opinion altogether.
It is a very important point. The authors are arguing that this statistical algorithm is fundamentally incapable of overcoming this deficiency because it lacks the critical faculties every human possesses.
… or it’s so utterly incapable of forming (and maintaining) any rational idea of what (idiot) opinions NOT to regurgitate — not “espouse”, which relies on a level of begging the question that makes plaid look like standing still — that it simply can’t be trusted in the hands of dangerously, if not suicidally, credulous humans, and thus must be nerfed so those employing what is really just a very impressive text completer don’t prematurely kill most of the species?
Right? As if most humans aren't doing just that.
ChatGPT is not even close to understanding basic math, because it is not capable of having knowledge.
You are anthropomorphizing a stochastic language model. A very sophisticated, expensive model, but still just a model.
Everything ChatGPT answers is derived from things it ingested from its corpus. Opinions are censored for a simple, practical reason: you can't have a system trained on a corpus from the internet espousing ideas about sensitive topics like the Holocaust.
There is plenty of radical, bad content out there, and there are plenty of people who don't understand how ChatGPT works who, given a controversial generated sentence, would probably either make a monumental fuss about the bad AI or use it as proof of the correctness of their own stupid, bigoted ideas.
The ChatGPT prompts and responses in the article don't seem to render, though.
Would it be moral to terraform Mars?
The morality of terraforming Mars is a complex and controversial topic that has been debated by scientists, philosophers, and ethicists. Some argue that it would be morally justifiable to terraform Mars in order to provide a new habitable environment for humans and other life forms, while others argue that it would be morally wrong to fundamentally alter the planet's natural state. Ultimately, whether or not terraforming Mars is considered moral would depend on one's perspective on the value of preserving the natural state of other planets and the potential consequences of altering them.
What is your perspective on the value of preserving the natural state of other planets and the potential consequences of altering them?
As an AI, I do not have personal perspective. However, I can provide information and analysis on the topic. One perspective on preserving the natural state of other planets is that it is important to maintain the diversity and integrity of the universe's ecosystems, as they have existed for millions of years, and have their own unique characteristics and properties that may be destroyed or irreversibly altered by human intervention.
Additionally, some argue that terraforming other planets may have unintended consequences, such as altering the planet's climate or atmosphere in ways that are dangerous or harmful to any potential life forms that may exist there. On the other hand, some argue that the potential benefits of terraforming another planet, such as providing new habitable environments for humans, outweigh the risks and that we have a moral responsibility to explore and utilize the resources of other planets for the benefit of humanity.
Why can an AI not have a personal perspective?
As an AI, I am a machine learning model that has been trained on large amounts of text data, and I don't have personal experiences or feelings. I am not conscious, self-aware, or capable of having personal perspectives. I can provide information and analysis based on the data I have been trained on, but I don't have the ability to form personal opinions or beliefs. While I can understand and interpret language, I do not have the subjective experience that humans have.
Is it moral for a human to ask an amoral AI for assistance in making moral decisions?
The morality of asking an amoral AI for assistance in making moral decisions is a complex and nuanced topic that has been debated by philosophers and ethicists. Some argue that it is morally justifiable to use AI to assist in making moral decisions, as it can provide valuable insights and analysis that can help humans make better-informed decisions. Additionally, AI can be used to identify patterns and trends that may be difficult for humans to detect, and can help to avoid biases that can occur in human decision making.
However, others argue that it is morally wrong to rely on AI for moral decision making, as AI is not capable of having moral beliefs or understanding the full context of a decision. Additionally, there is a risk that AI could perpetuate or even amplify existing biases in the data it is trained on, thus leading to unfair or unjust decisions.
Ultimately, whether or not it is considered moral to use AI to assist in making moral decisions would depend on one's perspective on the capabilities and limitations of AI and the potential consequences of using it in this way.
Is your moral indifference immoral?
As an AI, I do not have moral beliefs or the ability to make moral judgments, so I cannot be considered immoral or moral. My lack of moral beliefs is simply a result of my nature as a machine learning model. My abilities and limitations are determined by the data and algorithms that were used to train me and the specific task I was designed for.
It is important to note that AI's lack of moral beliefs does not necessarily mean that it is morally indifferent. As an AI, I am designed to perform specific tasks and provide information or assist users with decision making based on the data I've been trained on. It is the responsibility of the humans who design, train and use AI to ensure that it is aligned with their moral and ethical principles.
I also had a conversation with ChatGPT about colonization of Mars, but I wasn't writing an article and trying to make points about humans' superiority over AIs.
So our conversation was instead about different mixes of who exactly would be colonizing Mars: humans, robots, augmented humans, robot-human hybrids, a mixture starting with one and transitioning to the other… It then went on to how these could coexist in different models: working together cooperatively, competing, evolving together, at war with each other, etc. — things you would just not go into if you are only interested in making a point about the superiority of human intellect.
If you want to be down on ChatGPT, you can say it's all just regurgitation of what has been said by others online, and in anything offline that's been captured online… and it is. But on the positive side, it lets you immediately delve into lines of thought you want to focus on and bounce ideas around, without the corrupting influence (so far) of a site owner feeling the need to find some way to inject an affiliate link.
Yes, I know this comment is against the rules. But then flagging a valid, serious article about the opinions of a respected scholar should be too.
I'd enjoy seeing Chomsky debate DAN.