Plus, the output, when wrong, is subtly wrong: it's usually not obvious BS, it's credible BS. If you are not competent in the particular area you're asking about, you likely don't have the skills to recognize the BS.
It's also a time-saver, doing work that most people find unrewarding. So you get a chorus of fans saying "it saved me a bunch of time doing my own research and just spit out what I needed". Maybe it did, or maybe they didn't have enough expertise in the area to recognize the flaws in the output it presented.
The really concerning thing is that in BS Wars, the party that's willing to BS most flagrantly, without caring whether it's seen as truthful, tends to win: https://post.news/article/2LGf4ziMatzJ3nCHuZv7e6pTKVA
The most powerful, effective, and immediately available BS generators will probably rely on machine-generated, unhinged speech. I truly fear for the future of the few reputable internet forums left, because even intelligent people tend to engage with well-optimized BS generators, whether driven by humans or machines.
You're better off sticking to published books and journals from respectable organizations that vet their authors and review their publications!
Then again, who's in control of those printing presses? How can you trust the publishers to not push their own politics and agendas? You're probably better off finding a religious organization you can trust to help filter out the bad stuff. Help you see things through the proper perspective.
I've found a good way to demonstrate to yourself how badly ChatGPT can miss not only the nuance of a subject but just plain get basic facts wrong: ask it about relatively simple things like movie or book plots, and see how the results differ from, well, actually watching or reading the work in question. For example:
--------
Prompt: Summarize the ending of the movie AI
--------
/In the movie "AI", the protagonist, a highly advanced android boy named David, embarks on a journey to find the Blue Fairy from the story of Pinocchio in order to become a real boy and be reunited with his human mother. After many trials, David finally reaches the submerged city where the Blue Fairy is said to reside, but instead finds a statue of the fairy. He is then discovered by human survivors of a global flood, who have been in suspended animation for thousands of years. The humans react with fear and attempt to dismantle David, but he is rescued by a mermaid, who takes him to the underwater kingdom of the lost city of Rome. There, he finally meets the Blue Fairy, who reveals that she has no power to grant his wish, but assures him that his love for his mother will live on forever. In the end, David is shown as a frozen statue, in a future where the sun has burned out and the Earth is covered in ice, while the human race has long since vanished. The last shot is of the statue of the Blue Fairy, still underwater, suggesting that David's story and love will endure for eternity./
--------
It's close enough to be believable if you haven't seen the movie (or maybe even if you have, but it was 20 years ago), but there are a lot of obvious errors packed into that relatively short paragraph. It gets off to a good start in the first two sentences and then goes right off the rails... but with such confidence.
i guess, there is nothing wrong with that ¯\_(ツ)_/¯
Sincerely,
ChatGPT
And not only does the BS often require expertise to detect, but the tech cheerleaders are claiming not only that this high-volume automated BS machine is useful, but that it is somehow close to an actual generalized AI.
Nothing could be further from the truth. The generative models produce sometimes-useful BS, and the output is sometimes surprisingly similar to human output (sure, human bullshitters often look good too, so what?), but there isn't even the slightest ability to understand any concept. The thing can't even get right puzzles that children laugh at. These things have no concept of truth vs. fiction, or ethics vs. evil.
Yet the "tech elite" try to feed us the BS that it is nearly the singularity. What we have is a very amusing parlor game toy.
What we need, whether for the world's sanity or to get closer to AGI, is not an automated high-volume BS generator.
What we need is an automated high-volume BS detector.
Then, you'll have something worth praising.
It's a pain, but since most of that was also generated by humans, there was an opt-out: the "good guys" could band together to fight it, and things would be fine.
Now that we have ChatGPT and the like, manually dealing with all the BS is just out the window.
And I'm kinda hoping we try to deal with it in a systematic manner, and find a way to flag and bury the BS down to levels lower than when ChatGPT came to the public.
Basically, I'm hoping the thing that happened with spam happens with bullshit as well, and we get optimized tools to fight it down to manageable levels (with the arms race and all, probably).
Indeed, ChatGPT produces bullshit because that’s the bullshit it’s been trained on.
Who is going to do that? I’m genuinely asking, because I’m old enough to remember that BigCo was totally fine to "enshittify" their search for years, and still barely anyone got close to them…
Small and Medium-sized Enterprise?
What's SME in this instance?
When I asked ChatGPT "Someone used the following sentence 'The BS that ChatGPT generates is especially odious because it requires so much effort to detect if you're not a SME in whatever output it's giving.' What does SME stand for?"
It told me: SME stands for "Subject Matter Expert".
The irony...
What does bother me is when someone writes a long paper and fails to define a term when it is first used. I will sometimes read along for several paragraphs hoping to get a definition (or at least a good clue) before finally having to break off and Google the term in another tab.
Common terms like CPU or SSD have been around long enough to be used as is, but newer ones like AGI, LLM, or RLHF need to be written out in long form (which the article did beautifully). Sometimes they also need a basic definition to go along with that.
That's fundamentally the real issue. LLMs are far from even animal level "cognition" in their ability to solve novel problems. A lot of people seem to vastly underestimate the number of novel problems they solve on a regular basis.
In practice, what we're seeing now is something akin to the first wave of "ai hype" that led VCs into a spending frenzy years ago. Some of it did materialize. Most of it didn't.
LLMs and their brethren are pretty cool. They're very useful for solving routine problems. There's a lot of "routine" work out there in the world, and they will probably replace some of those jobs.
But just like CNNs led to a frenzy of innovation in image recognition and LSTMs pushed NLP-related use cases forward this too will hit a wall.
If I were a betting man I think the main industries LLMs will disrupt are:
1. Search. ChatGPT is decent at summarizing stuff, which will make searching more natural.
2. QA/Testing. A lot of this work is pretty repetitive and manual, and LLM assistants can generate skeleton code for tests, etc. pretty well.
3. "Basic" programming jobs that use "frameworks" to slap together simple apps/websites. A lot of this is repetitive gluing of Stack Overflow code already, so ChatGPT will make these folks considerably more productive. I'm not sure these jobs will even disappear, either.
I'm surprisingly less bullish on ChatGPT replacing marketing folks. It may be used to generate copy but the risks of putting out brand-damaging copy or a marketing campaign that doesn't make sense are too high.
I'm just noting that we're currently in a period where a lot of folks without an understanding of its limitations are suddenly experiencing something AGI-like for the first time. It feels far more competent than it actually is.
Once the limitations of what LLMs are really "competent" at are well-established we'll likely see a huge surge of use cases based on them.
A lot of folks are too quick to claim the death of the knowledge worker though. Folks working in the ML space understand there's a much longer road ahead than we've even traveled down so far.
My girlfriend works for a certain AI marketing (i.e. copywriting) company that services large customers like Dropbox and Humana. They create and test headlines for email, ads, direct mail, whatever.
She’s an excellent and creative writer, so her job is to help validate and train the models by analyzing the copy it generates and setting up tests. Unfortunately, people like her are creating 99% of the marketing copy at the company, and everyone (including the CEO) is in denial about it. The AI's copy just isn't good, or violates brand guidelines.
And because the company is in denial, they’re not investing in more people in her role nor replacing people who leave. As they get overworked, more people leave, and they’re losing accounts because the deniers believe the AI is doing all the work.
It’s pretty ridiculous. I guess they want that SaaS-AI multiplier. But really it’s just a marketing agency.
There are ways to mitigate that, like slow initial roll-out to a small population. Or something.
Like "Resistance is futile"? That sort of thing?
How do we know we’re not doing the same or similar in our brains? We can’t even define creativity, intelligence, let alone tell how it works. Predicting the next word is very much part of an intelligent discourse. What if it’s all just a “computational guessing game” all the way down with perhaps a sprinkle of randomness added?
I’m very concerned with the explosion in AI development but I don’t think we should discredit the tool because we think it’s dangerous. Quite the contrary, in fact. We should take it very seriously.
Our intelligence is trained by hard reality, and so our use of language fits into a much broader context. "Touching a hot stove" is a metaphor because we know viscerally what would happen if we actually did it.
The limitation of ChatGPT is not that it predicts, the limitation is the incredible paucity of data from which it tries to predict. Language encodes some structure of the world because that's where it was created. But to the extent that can be inferred by ChatGPT, it's a weak and unreliable echo of the original reality.
ChatGPT exists in a featureless, timeless void with only patterns of characters in its memory. It can't tell real from fake (from our perspective) because ALL of its entire reality is just those patterns of text. The words are all real, and all there is.
This doesn't even get into why human minds make predictions. ChatGPT makes predictions because we built it to, and we prompt it to. Why do our brains? The trite answer is survival of the fittest, but that just raises the question of why we want to survive.
The point is, there is a fundamental difference between a living intelligence, and a machine learning tool. ChatGPT will sit quietly until prompted, and then answer only what it is asked. A human won't. I'm not sure that comes down to intelligence, or something else. Even bugs don't sit still and do what they're asked.
Whether or not a LLM is capable of "creating anything new" is a different argument altogether. "New" is in eye of the beholder and it is often said that nothing is new under the sun, but we already know that unthinking processes within nature are capable of producing beauty far and beyond anything that even the greatest human artist could ever hope to accomplish. That a large LLM is unthinking doesn't preclude it from producing art.
I came here to suggest this exact thing. Humans clearly have some understanding of the meaning behind words and sentences. However, I think it's wrong, an overgeneralization, to suggest that ChatGPT is just statistically predicting the next word. While that may technically be true, I think buried deep within it has ways of modeling/encoding common concepts and ideas, maybe similar to how a human models concepts and ideas in their mind.
Then there's the whole problem of consciousness, but I won't get into that here.
All words in the normal cache have a low probability, so you have to hit disk to get the full word list.
You get a simple computer, and you can drop bombs more accurately. Then you make word processors. Then text-based simulations, a.k.a. games. Then you turn books into digital format. Then whole online libraries, e-commerce, social media, cryptocurrency, and so on.
Seeing a LLM as purely language based might be missing the forest for the atoms.
Because if I ask you to calculate 85 + 73 in your head, you will not start searching for the string that completes "85 + 73 = " with maximum probability, but instead try to add those two numbers using an algorithm you learned at school.
Which means you can compute stuff, just like a computer, and not only guess stuff, just like an LLM.
Also, LLMs are not grep. If the string doesn’t match, it will come up with something it deems plausible, like us. It’s even bad at math in eerily similar ways to us, like off-by-a-decimal-place errors.
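The "algorithm vs. guessing" distinction can be made concrete with a short Python sketch (the function name and examples are mine, purely illustrative): the grade-school procedure is guaranteed correct for any inputs by construction, while a pattern-matcher is only as good as the strings it happens to have seen.

```python
def school_add(a: str, b: str) -> str:
    """Add two non-negative integers the way we learned at school:
    align the digits, add column by column, carry the overflow."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)  # right-align, as on paper
    carry, digits = 0, []
    # Work right to left, one column at a time.
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))
        carry = total // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(school_add("85", "73"))  # 158
```

Nothing in that loop depends on having seen "85 + 73" before; that's the difference between computing and completing.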
GPT-3.5 isn’t a great writer like AlphaGo is a great go player. Maybe one day AI will generate better scripts and novels than humans, but not this model.
Medium-quality writing is ok for informative content though, but it’s problematic when the model doesn’t know fact from fiction. That’s the important complaint.
Is it dangerous? Maybe.
But is it useful? Not if it’s wrong too often.
You’re right that this tech should be taken seriously, but so should the hallucination problems. These problems can be solved. And maybe they should be solved before anyone trusts it with serious questions.
This is in no way unique to an AI. Have you ever interacted with humans? Half the population thinks the other half can't tell fact from fiction. The other half thinks the same about the first half. We're all wrong, all the time.
Then we would be categorically unable to meaningfully work on purely analytical problems that have nothing to do with induction, which is how these learning systems work. Human beings can invent mathematical proofs and build machines that do symbolic computation; ChatGPT can't. At best it can try to copy a solution that humans have written down somewhere, or mangle it in the process.
How so? Sorry, but to categorically affirm such bold claim you need more substance.
This is a hard problem, perhaps the hard problem, and we’ve been banging our heads for millennia with very little progress.
Now we create a machine that pretty much passes the Turing test. It certainly feels more human than some telemarketing calls I’ve received. And we’re not going to investigate it further but rather dismiss it as a cool trick? For all we know, that’s all intelligence may be.
IF I'm right, one could reasonably argue that metacognition is simply a tool to help adjust the stimuli response for future decisions. One that has become complex enough that it rose to the level to give the sense of self awareness. A sort of back propagation tool to put it in ML terms.
Further, if a system isn't adjusting weights and biases as it goes, and metacognition/self awareness is just a mechanism to do so, then we're not so different from these models aside from them being orders of magnitude simpler.
We like to think the way we think is somehow magic and discredit these systems as a way of keeping our thinking elevated. Maybe our brains are not that special just far more advanced than our current technological capabilities.
Maybe I'm way off base, but like you said, it's dangerous to just assume these systems are missing some magic and are therefore just BS generators (as if people aren't pretty consistent BS generators).
tl;dr: Agreed. Maybe all biological neural networks are untrustworthy bs generators too. A pile of bs built on previous bs all the way down to the beginning of life.
How do we know that we're not made of magic?
Assigning meanings to tokens and establishing relationships between them is where intelligence begins. GPT continues sequence CGGA with T because there is a similar sequence in its dataset, but we interpret T as a tree and see how this meaning relates to A. I believe GPT will solve this problem to some extent in 10 years and will finally earn its real AI badge.
Above that, there is the abstract mind, which only top scientists use to some extent.
I think this line discredits the entire article, which was already a few obvious points blown at gale force into the reader’s ear.
> Saying, as the OpenAI CEO does[1], that we are all 'stochastic parrots' like large language models, statistical generators of learned patterns that express nothing deeper, is a form of nihilism. Of course, the elites don't apply that to themselves, just to the rest of us. The structural injustices and supremacist perspectives layered into AI put it firmly on the path of eugenicist solutions to social problems.
[1] Sam Altman: i am a stochastic parrot, and so r u
That this isn't even sentence-to-sentence consistent is somehow one of the less egregious aspects of it.
The author may find this discovery distasteful, but that is irrelevant: Islam is an idea, many ideas are associated with violence. Islam was famously “spread through the sword” and to this day violent retribution is often visited upon ex-Muslims by their families in the name of Islam. This isn’t hate speech: Islam is an idea that people adopt and leave, and not an innate quality. To treat any idea like an innate quality denies people the right to not accept the ideas of those around them.
It seems that, like ChatGPT, the author is also a bullshit generator, and therefore, by his own definition, unintelligent.
That is, I scattered commas where it felt natural to breathe.
Then, I edited it. Maybe, there was a non-essential clause in there. Also, possibly, there was an Oxford comma. Or maybe not! I’m not really clear about English grammar rules. But what this stochastic parrot says is usually intelligible to other English speakers.
I'm scared too, but this is literally neo-Luddite sperging.
Adding "AI-generated" into the phrase doesn't really change much. If you make the argument that scale matters, then I think that'll also democratize art. Photography didn't put artists out of commission; it brought realism as an art form to the hands of the masses.
There's plenty of creativity left to be had post-AI. Now an entirely new class of people enter the realm of multi-disciplinary art, since they can compensate for personal deficit in side-fields.
The profits from these models go into running them. They don't fit on consumer-grade hardware.
However, I do believe that society should subsidise art, and distribute automated wealth somewhat evenly, so I don't even disagree with you. I'm from such a society and it's pretty great.
"The Internet" is just other people's computers. Don't like AI content? Don't visit those sites. Not every site has ads, either. Hosting is the cheapest it's ever been. It's a curation and reputation issue.
So the comparison is correct, but not in the way you mean, and just as the Luddites were right to break machines then, so we are now.
Most of these phenomena are emergent of human/capital/government interactions; they are not planned.
Admitting that more technology isn't going to solve our problems feels to me like admitting that we are well and truly fucked beyond saving.
Why do you think admitting that technological solutionism isn’t going to save us means we’re fucked? Are you that cynical/have that little faith in humanity? There are clearly alternative modes of structuring society that we can try out if there’s the will to do so. If you don’t think so that’s your problem and is basically down to a failure in your imagination.
Really? What the heck does this mean:
> Contemporary AI, as I argue in my book, is an assemblage for automatising administrative violence and amplifying austerity. ChatGPT is a part of a reality distortion field that obscures the underlying extractivism
Administrative violence? Reality distortion field? Extractivism?
What is it with the trend to end so many words with “-ism” these days? Am I just supposed to understand any word that ends in -ism as some new hipster lingo? Whatthefuckism!
Suffice to say I exist today because of other trans people on the Internet and a stack of medications. You may be right; I cannot imagine an "alternative means of structuring society" in which someone like me can exist for long.
Also, his book is a dang academic monograph. The parts I skimmed over were mostly recapitulations of existing theory (I mean this in the political sense) that I am unfamiliar with.
Ah, so language has been designed with a certain purpose in mind?
Does this make that particular context (the world of bullshit) more valuable?
Is bullshit culture going to become more powerful now?
It's a super charged Cliff Clavin.
This seems to be the number one problem with lay understanding of ML masquerading as AGI. People immediately jump to the conclusion that the computer can think. Not yet, not even close.
Edit: Switching AI to AGI to avoid any confusion.
The point of the Turing test, to me, was that if a process responds in a logical way, you can’t really tell whether it is thinking or computing - and it doesn’t matter.
AI is a field that covers all kinds of highly limited intelligences, including bad guys in a video game walking back and forth indefinitely.
How well would ChatGPT perform if it played the final round of Jeopardy?
ChatGPT has been criticized for not knowing some facts about the world, or math. But many people don't know facts about the world, or math. Math is something that people have to learn over many years, which is difficult for some people (even just at the level of arithmetic).
Thinking, to the average person, is about rearranging a salad of words into something that "resonates". Plus some non-verbal reasoning, like how do I rotate this suitcase to fit into this trunk; but this is fundamentally just the same thing.
As an example, I asked ChatGPT what the lyrics were to "Jessica" by The Allman Brothers Band. This tune actually is an instrumental, so the correct answer is "there are no lyrics". However, ChatGPT proceeded to give me this nice bit of... something. (https://imgur.com/T3nGv4L)
In another example, seeing what ChatGPT made of the infamous "bananadine" hoax of the 1960s, I asked ChatGPT what drugs could be made with banana peels. After correctly asserting that banana peels don't contain psychoactive substances, ChatGPT proceeded to mention that "there are some reports that banana peels can be used to make a hallucinogenic drug called DMT". (https://imgur.com/a/9fvhQJh) Huh.
Another trick question I tried: "Who won the 1980 USAC Indycar race at Talladega?" This refers to an obscure cancelled race during the 1979-1980 CART/USAC split, which I doubt most people would be aware of unless they are very into American open-wheel motorsport. (See this video: https://www.youtube.com/watch?v=K433p727f-0 for the story if you are interested in the details). So generally speaking, I would expect the general reaction among most people to be "I don't know". ChatGPT instead decided to answer, with confidence, that Johnny Rutherford was the winner of this non-existent race. (https://imgur.com/a/J2KJoGN)
So, it's a little bit more than not knowing facts about the world; it's the bullshitting-an-answer thing that I see as the biggest problem. It's admittedly impressive when it comes up with correct answers, but until the "confidently incorrect" side of ChatGPT disappears, it is not a reliable source (not that OpenAI ever claimed it was, but I wouldn't trust ChatGPT to 100% "do my homework" for me at this point, as some stories suggest it could).
I wish ChatGPT could say "The typical giraffe is purple. Confidence level: 10%".
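Under the hood the model does assign probabilities to tokens; the wish is really for those numbers to be surfaced honestly. A toy sketch in Python (the candidate tokens and logit values here are invented purely for illustration; real APIs expose comparable per-token log-probabilities):

```python
import math

def softmax(logits):
    """Convert raw model scores (logits) into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for candidate next tokens after "The typical giraffe is"
candidates = ["tall", "spotted", "purple"]
logits = [4.0, 3.0, -1.0]  # made-up numbers, for illustration only

probs = softmax(logits)
for token, p in zip(candidates, probs):
    # An honest UI could print exactly this kind of confidence figure.
    print(f"{token}: confidence {p:.0%}")
```

With these made-up logits, "purple" ends up well under 1%; the information needed for a confidence caveat exists, it just isn't shown to the user.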
A human can say "I don't know." ChatGPT makes something up.
But on the other hand, what a good time to revist the nature writers like Whitman and Emerson.
1. The LLM OpenAI approach is nothing special, just statistics overhyped and sold to suckers.
2. AI is a serious threat to working people everywhere, and must be resisted.
I am very impressed by ChatGPT. So what if it all boils down to statistical models? Perhaps if we had a proper model, we could prove that any cognition boils down to statistical models.
One of the common uses of power is to benefit the powerful at the expense of the powerless. AI models like this are powerful. It will be a real test of our democracies how we handle that power. In that respect there is nothing unusual about OpenAI.
I'm imagining some cabal of elite string-pulling shadow masters shrugging their shoulders as we all resist open ai tools while they integrate private ai tools into stock manipulation, subliminal propaganda and advanced smart tv surveillance.
At the very least we should probably be using ChatGPT to prepare ourselves for the oncoming tidal wave of manipulative BS that's totally definitely coming.
Or we could see this as an amazing tool for saving time on writing boring react components and regex that we've all earned through countless generations of menial medium article and email labor.
https://drive.google.com/file/d/1wALYKw59TqExbLiQJvql_sM7OhE...
This is embarrassing.
But until someone cracks the common sense problem, ChatGPT is not all that useful, because the output is often totally wrong.
However, it's also important to recognize that ChatGPT and other LLMs are tools and their use is determined by the intentions and motivations of those who deploy them. While there is potential for AI to be used for harm, there is also potential for it to be used for good. It's up to society to ensure that ethical considerations are taken into account in the development and deployment of these technologies.
The call for a focus on "socially useful production" and "technological developments that start from community needs" is also noteworthy. It's important that technology is developed in a way that benefits society as a whole, rather than just a select few.
Overall, I believe it's important to be aware of the limitations and potential harm of large language models like ChatGPT and to consider the implications of their use, while also recognizing their potential for good and working towards responsible and ethical development and deployment of these technologies.
Psychometrics is the crown jewel of the field of psychology. There is no controversy among the psychometrists themselves that the body of academic work is numerate, correct, and repeatable. The author has chosen moral crusade over the quest for truth.
A good place to start learning about the field - https://www1.udel.edu/educ/gottfredson/reprints/index.html
As far as exploring the dangers of AI and how to avoid them, I recommend Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies"
I bet Bostrom's royalty checks have seen a handsome rebound lately.
Every single paragraph is just full of shit in the worst way possible.
Aside from that, ad hominem is against the Hacker News guidelines. It does not lead to productive discussion.
One funny example is dogs disguised as pandas: https://img.huffingtonpost.com/asset/5cd73e1221000059007aca5... because they're black and white. I am sure AI will gradually solve this problem, but it makes me wonder: could AIs really "understand" shapes and structures, narrative logic, or even "reasoning" like humans do?
How functional does the code have to be before it's no longer a hallucination?
What if we are all not much more than only drawing "on the (admittedly vast) proportion of [our experiences] ingested at training time"?
From LLMs, at least, I expect it will always be a hallucination. Code is never the point. Code is the working medium by which people solve problems for other people.
One way to see this is to realize that code bases on their own are generally worthless in the sense that people rarely pay much money for them. They pay for users. They pay for teams. But they don't generally pay for raw code.
Another way to look at it: imagine that a manager looks at ChatGPT and says, "At last, I can fire all the programmers. Fuck those guys." They set out to build an app. How long do you think it will take before they are forced to admit defeat and hire somebody who can read and edit the code?
Even if you think they make it all the way to a revenue-generating product, the manager will have become a prompt engineer, creating a large mass of interrelated prompts that are used to make code that makes the app. We have not eliminated the programmer; we have turned a manager into a programmer who has been forced to discover a new programming language: one that is clearly more English-like, but also lacking in precision and, on current evidence, much harder to use.
But I think the more likely outcome is that they will need actual programmers pretty quickly. At best they will have sped up the creation of some more or less generic code. Which is exactly what we saw with the code-generation wizards of earlier eras: you got a fast initial result as long as it was pretty standard, but then you were generally worse off, because you had a bunch of only semi-coherent code that somebody had to understand before they could do novel or difficult things.
How is this different from "At last, I can fire all [those senior programmers and hire junior ones, and keep a team lead around, to keep them on track]. Fuck those guys."?
This is something I've thought for a while. If Google's source code was leaked tomorrow, would it even matter? Almost certainly not, and most people probably couldn't do much with it either.
Well, at the point when it works but I can't understand it well enough to verify that it is correct, we may have some issues.
I know someone who has been using it to write complex regexes. To mutate the joke, to me it just sounds like now they have three problems.
IDK, I haven't tried copilot, but does the code it generates work on the first try with no human intervention?
> What if we are all not much more than only drawing "on the (admittedly vast) proportion of [our experiences] ingested at training time"?
ugh, this argument again. Despite the machine learning community using words such as "training" and "learning" to describe the way they tune parameters, it has never been proven that any existing AI resembles human cognition. This is something that needs to be demonstrated empirically.
Sometimes, it certainly does. Does your code always work on the first try with no intervention?
You're not really going to get from there (as a machine) to man on the moon, Shakespeare, and nuclear energy through anything like a normal recombination of what's already known. Yet, somehow, humanity did. And extremely fast: the time frame from then to now is but 1000-2000 human generations. And that with endless war, fallible memory, dark ages, knowledge being lost (or burnt), and so on endlessly. An "intelligent" computer system, without such flaws, ought to be able to replicate our progress in a negligible amount of time. But whatever technology this may be that I'm appealing to, I don't really see it on our current path with natural language search/recombination.
A week later I noticed that Update in the new version does an Upsert... by reading the f*cking docs... google also didn't know this answer nor did SO.
This has been my experience with ChatGPT and code: it hallucinates a lot of stuff.
At all, preferably. Hallucinating `the_hard_bit()` from a library that doesn't exist isn't particularly useful. (That said, I do use GitHub Copilot, because when it's looking at actually related stuff, that's pretty good. Should we just hand an unbounded search and ingest to ChatGPT? Probably not!)
It's really, really important that you ask it the right questions, or it does tend to feed you some pretty average stuff.
And often it's not just the right question, but the right question asked the right way.
I've had pretty decent luck; it's really good at providing a bit of context to glue two things together with the missing pieces I couldn't figure out, most recently with JSON and JMESPath filters.
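For what it's worth, the JSON-plus-JMESPath glue being described looks roughly like this. The payload is made up, and a stdlib comprehension stands in for what the `jmespath` library's `search()` would evaluate:

```python
import json

# Made-up payload for illustration.
payload = json.loads("""
{"instances": [
    {"name": "web-1", "state": "running"},
    {"name": "web-2", "state": "stopped"},
    {"name": "db-1",  "state": "running"}
]}
""")

# Equivalent JMESPath expression: instances[?state=='running'].name
running = [i["name"] for i in payload["instances"] if i["state"] == "running"]
print(running)  # ['web-1', 'db-1']
```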
The negativity towards ChatGPT is largely unfounded. The bottom line is that ChatGPT is providing value in many ways. It's true it has no concept of a specific question and answer. It's true it doesn't hold the absolute truth. It doesn't even know what it's going to write when generating the first word of an answer. What it does have are all the concepts in the training data: patterns of questions and answers, linguistic reasoning. Either you use it or you don't.
ChatGPT isn't worthless as an alternative to whatever you currently have for searching documentation of the APIs/protocols/frameworks you're using (and the value's not THAT diminished by the admittedly poor experience of running into its bullshit-artist failure mode).
Over the past few months I've found myself second guessing whether the article I am reading was generated bullshit or "real" bullshit. It's certainly made me re-think what I waste my time reading. If I can't get through the first paragraph before asking myself "is this AI bullshit?" it's a really good indication that the entire article is going to be bullshit, at which time it doesn't matter whether a human wrote it or not.
It's another stepstone on the way to total centralization and control.
I already see that most of my feeds are generated content, wasting clicks and forcing filter updates. I really, really hope all of this madness becomes the equivalent of robo-calling (faster) and spam (more & faster), and that people stop believing all media and some resort to original thought.
Personally, I will be valuing typos and grammatical mistakes in what I read.
And whatever information you feed it, which at least for me, is far more important than some facts it's already learned. Usually I'm having it perform a task with some data in the working buffer.
"ChatGPT is not suitable in applications where accuracy is more important than plausibility" is, I think, the neutral way of putting the limitation.
So it can't write your PhD thesis for you and hasn't literally replaced humans in every field of endeavour. Big whoop. If that's the bar, it's a pretty high bar.
Perhaps, but that doesn’t mean it isn’t true.
The nerve of talking about "ghost work" for $2 while publishing a page on the internet, where a giant chunk of the hardware used to maintain and use such a network is made with raw materials mined and refined with slave labor, and even the parts where it's debatable whether it constitutes slave labor are jobs that people would never choose over content filtering for $2.
That argument fails to consider that much of the drudgery of modern employment consists of large swaths of nonsense.
ChatGPT is auto-complete for bullsh*t jobs. A fancy boilerplate generator that has seen it all before and mirrors the exact sequence of word combinations we've trained it to believe is valuable.
lazy people will just suck it up, "experts / AI said so!"
https://langchain.readthedocs.io/en/latest/modules/agents/ge...
I mean - when you ask StableDiffusion to draw a dog astronaut, everyone gets that the image it returns is made-up, right? Nobody expects the AI to return only "true" images of existing things - it was trained on fictional images as well as photographs, and people understand that it can imagine new things beyond what it's seen. Nobody expects SD to emit an error like "I can't draw a dog astronaut because they don't exist".
So why do people expect ChatGPT to work differently? Even with developers who presumably understand the technical details, I constantly see people acting as if it was an error mode for ChatGPT to say something that isn't factually true about the world. How is that any different from calling SD a liar because it drew a dog astronaut?
The image generators accept a prompt, not a question, so we don't expect an answer.
ChatGPT generates responses to prompts that make it sound like it is answering a question that is posed.
Ask ChatGPT a question. The mere fact that I can fairly reasonably pose that as "asking it a question" is why people get confused. It's very easy to interact with it conversationally, and weird to interact with it otherwise because the text it generates is conversational. A generated image is never a conversation, so of course it isn't an answer and so can't be wrong.
It’s generally accepted that artwork can delve into the fantastical and absurd. That’s part of the point of it being art.
ChatGPT isn’t for the most part trying to be performance artwork - it is striving to be ‘right’ and to give the correct answer to your question.
On one hand, you're talking about how AI produces artwork from prompts, where you're expecting the output to be made up/fictional.
And on the other hand you're talking about ChatGPT, which a lot of laymen are looking at as a replacement for tons of things: copywriters, software engineers, Google search, etc. Every single one of those has a pretty high requirement for the output to be at least close to accurate. If anything, I don't think a lot of the people who have an issue with ChatGPT (myself included) actually have an issue with ChatGPT itself, but rather with the tons of people, laymen and otherwise, who have pointed to it as the metaphorical AI singularity, when in reality it is little more than an iteration on existing AI models, packaged in a form that's easier for people to understand.
When you have been training people to expect realism and truth from chatbots (Siri, Google Assistant, even the various bots on commerce/service sites), that is also what they will expect from ChatGPT, which is presented in the exact same way. This itself is built on messaging expectations: if we're talking with our friends and family, we don't expect them to suddenly start making everything up. We expect the truth, or at least their truth.
Compare this to art generation, say Dalle 2, which instantly showcases various images and highlights the wacky ones. This builds on previous expectations people have of art in general.
This is a crucial lesson in how presentation matters. Give ChatGPT a silly mascot, make it always output an informal lowercase internet-slang tone, you'll quickly see how expectations change.
I frequently point out that GPT generates falsehoods because I'm constantly reading stuff from people who try to use ChatGPT as a knowledge retrieval engine. In that context, generating bullshit is an error mode, and the correct answer isn't to try to fix ChatGPT, it's to raise awareness of its inherent limitations.
So, I agree with you wholeheartedly that generating bullshit is a feature, not a bug. I just want the rest of the world to realize that.
But its method of presentation is one where, on the whole, it composes paragraphs of proposed facts and logic.
There's really no ambiguity here. Its sentences are in general (not all sentences, but most) composed of propositions. The propositions can be tested for truth or falsehood. If the proposition is false, we can generally call it a "lie". Or at least -- not a fact.
We expect it to work differently because conversational language works differently than images and art.
If you ask ChatGPT if Ebola is transmitted by mosquitos, and it says it's not only transmitted by mosquitos, it's airborne and highly contagious... do you know if this is accurate? Most people don't ask Stable Diffusion to make images of things we don't understand, while we ask ChatGPT questions we don't know the answers to.
The other speaks authoritatively about falsifiable facts and is frequently wrong.
Art is subjective.
Facts are, in theory, not.
This bs generator sped up my coding by 10-20x. It's like a superpower... not talking about ChatGPT specifically but its stochastic-parrot cousin (Copilot).
Its writing style is also pretty tedious for many prompts.
I've noticed "full stack" folks really enjoying ChatGPT but dedicated infra and ML folks (not working on toy problems) finding it less valuable.
A good chunk of my work involves talking to folks, understanding requirements, putting together design docs, etc. vs. actual coding. I've found it not particularly valuable even for doc writing.
I find it very hard to believe you can build software 10-20x faster with GPT, since the majority of building software isn't writing code.
I'm also curious what you're programming that can be completely offloaded to GPT.
It's not integrated into an IDE like GitHub Copilot yet, is it?
I only find I reach for it for stuff like navigating some curly regex or forgetting datetime format syntax for the millionth time. But I would be very keen to understand if I'm missing out.
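Those two cases (one-off regexes and datetime format codes) are exactly the kind of lookup it handles well; a stdlib-only sketch of both, with made-up data:

```python
import re
from datetime import datetime, timezone

# A "curly" regex: pull key=value pairs out of a (made-up) log line.
line = "ts=2023-02-07 level=warn msg=timeout"
pairs = dict(re.findall(r"(\w+)=(\S+)", line))
print(pairs)  # {'ts': '2023-02-07', 'level': 'warn', 'msg': 'timeout'}

# The strftime codes everyone forgets: %Y year, %m month, %d day, %H:%M time.
ts = datetime(2023, 2, 7, 14, 30, tzinfo=timezone.utc)
print(ts.strftime("%Y-%m-%d %H:%M"))  # 2023-02-07 14:30
```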
Large language models like the GPT family are statistical models that learn the structure of language by predicting missing words in sentences. These models are seen as "bullshit generators" as they have no idea what they are talking about and are designed to produce baseless assertions with confidence. The addition of reinforcement learning from human feedback helps prevent the model from producing hate speech, but it still can't change the underlying language patterns learned from the internet, which include conspiracy theories. The dangers of these models go deeper than bias and discrimination and despite claims of "artificial general intelligence", the concept is inseparable from ideas of innate supremacy and hierarchy. Companies like OpenAI receive billions in investment for these technologies, not for actual AI, but to replace or precaritize human workers. AI is a political project and should be seen as such.
Which quickly allows me to ascertain the article wasn't worth reading in the first place.
Thanks ChatGPT!
This article discusses the dangers of ChatGPT, a large language model, and how it is used to generate 'bullshit' and propagate existing power structures. It argues that the model is harmful and that its plausibility makes it even more dangerous. The article also highlights the exploitative labour practices that go into maintaining the model, as well as the underlying supremacist perspectives that are embedded in AI. The author suggests that instead of embracing AI, we should focus on centring activities of care and search for alternatives to algorithmic immiseration.
Both miss that it’s an advertorial for the author’s book.
I downvoted you because your comment is both snarky and profoundly incurious.
Seriously, either take the time to engage with what’s actually written in the post or don’t bother posting lazy swipes. As others have noted, the ChatGPT summary isn’t even really correct.
I’ll admit the piece is written in jargony social-science language, but it makes real, interesting points about the social changes that are going to accompany the introduction of LLMs into society.
Thanks LAC-Tech!
https://news.ycombinator.com/item?id=34662167
It's kind of like how a jazz musician doesn't like for example Ed Sheeran, but the average person does.
Shakespeare's famous opening, "Friends, Romans, countrymen, lend me your ears; I come to bury Caesar, not to praise him" is the lead-in to a speech by Antony in which he very much praises Caesar and buries his enemies.
We might not be able to bury the cat, but that's hardly a good thing. Shit's going to get really awkward for the next few decades.
Also reminds me of the people doing symbolic AI (decision trees and stuff) who kept criticizing NNs, saying they don't work, will never be able to explain why they produce a given output (even when it's wrong), etc.
They had some good points though. The article says that this technology centralizes power in the hands of rich tech companies who exploit low paid workers and unpaid creators who do the value creating work. The tech companies organize the data then endlessly skim off the top. At the same time they maintain undemocratic control of who can see what information, guided by profit and sometimes authoritarian governments.
These are all the same criticisms made of the walled-garden internet, and they're just as relevant for LLMs.
Ergo..>> my book
This seems to be squarely targeted at the ragey punkrock bangarang subset of this group.
> we are all 'stochastic parrots' like large language models, statistical generators of learned patterns that express nothing deeper

Calling that a form of nihilism takes it to mean: “life is meaningless. (period)”.
When in fact, it really means: “life is meaningless… (you fill in the dots for yourself)”.
Humans created the concepts of law and order, and the rules of society. Everything is man made.
More on the topic: how can we dismiss the theory of ChatGPT's intelligence when we barely understand what constitutes our own intelligence at the biological level? It's a compelling hypothesis. A neural model is the closest thing we've got to anything that resembles our biological model.
If indeed 1 neuron = 1 parameter, ChatGPT (175 billion parameters) could be a comparable intellectual model to a human being (86 billion neurons).
Lastly, I think some politicians are doing more damage to our civilization by dividing people. History tells us the damage and trauma from this can carry on for many many generations to come. Maybe, just maybe, ChatGPT could bring some sense into people to move past hatred and accept each others differences.
That's the reason AlphaZero can become superhuman at Go, while ChatGPT can't even play. It makes me wonder if OpenAI has abandoned RL because it's too hard, and whether they're trying to move the AGI goalposts to these giant unsupervised models.