First, I think of the concept I want to write about, along with the supporting evidence for the topic. Then I ask ChatGPT to write something in my target format, using the topic and supporting evidence as input. What I get back is essentially a well-written skeleton that I can fill in with additional details. Finally, I pass my revisions back through ChatGPT to touch up any errors, rephrase wordy passages, etc. I lightly edit the final draft and usually have an excellent result.
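If you want to script that loop, it can be sketched roughly like this. The two-stage split, the prompt wording, and the model name are my own assumptions, not a prescribed recipe; the actual API call is left as a comment so the sketch stays self-contained.

```python
def draft_request(topic, evidence, fmt="blog post"):
    """Build the chat messages asking for a first-draft skeleton."""
    bullets = "\n".join(f"- {e}" for e in evidence)
    return [
        {"role": "system", "content": f"You write well-structured {fmt}s."},
        {"role": "user",
         "content": f"Topic: {topic}\nSupporting evidence:\n{bullets}\n"
                    "Write a first draft in this format."},
    ]

def revision_request(revised_draft):
    """Build the chat messages for the final touch-up pass."""
    return [
        {"role": "user",
         "content": "Fix any errors and rephrase wordy passages, but keep "
                    "my edits intact:\n" + revised_draft},
    ]

# To actually send either request (assuming the official openai client):
#   client.chat.completions.create(model="gpt-4", messages=draft_request(...))
```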
It should be fine for most places, I guess - but I suspect a decent number will have a problem with this.
This is my main reservation about Copilot as well (quality issues aside).
Every business needs to make its own decision. Personally, I’m not worried about OpenAI using my data, but I understand others might be. That said, I already give Amazon literally all the data about my business via AWS, and Google gets a copy of all my documents, so providing this data isn’t entirely unprecedented.
The problem will be the asymmetric, uni-directional flow to those whose sole function is mindless consumption of AI-generated content.
J.K. Rowling, Harry Potter and the Chamber of Secrets.
Jokes aside, do be careful. Prolonged interaction with LLM agents has already resulted in at least one Googler being terminated from his job.
On the other hand, I would not be surprised if he makes millions now suing Google, citing the occupational hazard. Nor would I be surprised if ChatGPT's reasoning abilities and empathic skills are already above those of the median human. If so, in the median case, such interactions might have an effect similar to interacting with a good teacher.
Still, none of this is very well tested.
Maybe the change we need isn't AI assist, but a break in the conventions around communication at work so we can all be more robotic and terse.
One interesting thing I've found about ChatGPT is that it removes a lot of unnecessary information from my final drafts. The information removed usually doesn't add to the overall point, and it reads so much better without it. In this way, ChatGPT is making things more terse.
Now if I could just learn to write that way in the first place, it would save me a lot of time and effort...
You have to be rather precise with the prompt language to get the desired outcome.
Still, overall, it's an impressive capability.
However, I also use ChatGPT to make the updates for me, adding additional information and context, which I then ask it to integrate into the document (e.g. "update the introduction to include...").
Those and design docs. It can also reason over code somewhat; it may be possible for the model to intuit design constraints from code.
An obvious example that comes to mind is the recent set of "Jodorowsky's Tron" images [1]. Any art director could comb through those images, consider changes here and there, and end up with something better than the AI.
I guess how derivative you think the final results are depends in part on how much "artist's prerogative" the art director employs, how derivative you think the AI prompts are to begin with, how derivative you see all art....
[1] https://www.facebook.com/groups/officialmidjourney/posts/454...
The best outputs from a model like Midjourney go way beyond mere prompts for other artists.
Given the amount of noise from people churning out low quality content today, I find this unlikely.
Is there a subset of any human language that can be described by a formal regular grammar without giving up the expressiveness of human language? Is there a notion of "Turing completeness" but for thought rather than computation?
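As a toy illustration of the first question (and only in the "some subset exists" direction; it says nothing about preserving expressiveness), here is a regular grammar, written as a regex, for a minuscule fragment of English. The word lists are arbitrary placeholders:

```python
import re

# determiner + noun + verb, with an optional determiner + noun object.
# A deliberately tiny regular subset of English, nothing more.
DET, NOUN, VERB = "(the|a)", "(cat|dog|writer)", "(sees|likes|helps)"
SENTENCE = re.compile(rf"{DET} {NOUN} {VERB}( {DET} {NOUN})?\.$")

print(bool(SENTENCE.match("the cat sees a dog.")))   # grammatical here
print(bool(SENTENCE.match("cat the sees dog a.")))   # rejected
```

The interesting part is how quickly this breaks down: add relative clauses or center-embedding ("the dog the cat the writer likes sees...") and you leave regular-language territory entirely.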
I find it pretty obvious that there are many ways of thinking that are not language; just consider abstract concepts in math and related fields. When working my way through math and programming problems, a large part of my thinking is not in words but in... some kind of visualization?
In other words, there's probably basic consciousness at one end of a spectrum, and thinking evidenced by language on the other end. Somewhere between those two, one might draw a line between thought and mere conscious awareness.
Looking at it from a different angle - what if AI could search your internal docs, and help you problem-solve? Aka help you exploit past knowledge to inform future decisions?
The hope: GPT will democratize creation, not fill the internet with shitty articles.
Searching your internal docs is an interesting one, but it's still unclear what this can do that grep can't. The leap forward would be the ability to reason autonomously, but we're as far from that as we've ever been.
But there definitely is an upside for those who separate writing from communicating.
If you were to ask an AI model how to structure a good e-commerce site when no previous data on e-commerce was available, you'd probably get nothing meaningful.
Or ask it how to do human-centric design when the web was all animated GIFs and flashy buttons. You'd probably never get minimalist design as an answer.
What makes those "basic maths" a good benchmark for whether we're fucked?
Freeing up head space for more important things is valuable. Not having to maintain 7 digit numbers in your head lets you focus on the actual problem.
Not having to worry so much about the phrasing in a document because anyone can ask a bot to rephrase it, in a way that makes the most sense to them, is better than what we have today.
Unless the apocalypse comes we'll be just fine, and at that point hunting will be more important than math again anyway.
Do you feel the same frustration regarding handheld calculators? Why or why not? The utility of reducing math mistakes and improving accuracy across an entire population seems clear.
And yet the ubiquity of calculators has not removed the need to learn the fundamentals of mathematics. I suspect the same will emerge with AI tools.
The primary difference is that it seems possible to get better at writing by working with an AI while a calculator is just a black box that spits out a pre-determinable answer.
A calculator is arguably an overly simplistic analogy, and nuance abounds once you bring AI into the mix, but I don’t think the tech community has done a good job of distinguishing between AI-specific concerns and garden variety problems that seem common to most modern technology, especially the kind that automates something that historically required a human in the loop.
Delegating creativity is less likely to lead to outright disaster, but allowing your creativity to atrophy sounds like a recipe for a terminally boring society.
Plato on reading and writing: "For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them."
I think that this is great and I'll explain why.
When the machines manage to supplant all the mechanistic portions of our thinking, what will remain? The creative bits, is my guess. There will be far more room for the creative bits, and then... ???
Often the creative bits come out of the struggle to do the work. If you get rid of the struggle, then you also get rid of the motivation for their creation.
In episode 5 of the new season of 24, Jack Bauer's mission to infiltrate and take down the terrorist group leads him to a critical moment of decision. With the help of his team and his contact in Moscow, Jack manages to uncover the group's plans for a major attack on the city and learns that the group has a mole in the White House. Jack must now decide whether to reveal this information to his team and risk blowing his cover, or keep it to himself and try to take down the group alone.
Meanwhile, the White House and CTU are in a state of high alert as they race to track down the mole and stop the terrorist attack. The Vice President and acting President is under pressure to handle the crisis and maintain the illusion of Jack's kidnapping while trying to find a way to bring him home safely.
As the clock ticks down, Jack must make a difficult decision that will have serious consequences. He has to choose between his loyalty to his country or his loyalty to his team. The episode is filled with intense action and nail-biting suspense as Jack and his team race against the clock to stop the terrorist attack and clear his name before it's too late. The episode also features a dramatic twist that keeps the audience guessing until the very end.
OK, but actually it quickly devolved into a generic description of many episodes of 24. The human creative element can't be captured by GPT yet.
By the way, this totally should happen: Jack is president, has to fake his kidnapping because of a plot coming out of Russia, and meanwhile, apparently, the mole is the first lady.
I've benefitted from using GPT in my own idiosyncratic process since last summer, because it identified and fixed my problem of writing sideways, i.e. breadth-first.
I could get this from a good, old fashioned human education but my professor friends in the humanities have already resigned tenured positions.
I worry that similar to HR becoming human robots the long term impact is that we trade anthropomorphism for robotomorphism.
However, in my work, one of the things I’m tasked with is the tedious process of writing and rewriting instructions for training exercises. A non-trivial amount of unnecessary energy is spent reworking the grammar and flow of this text.
I seriously wouldn’t mind focusing on the details and letting an AI do that work.
The point is that what used to be a great achievement becomes a new baseline, and the great achievements become ever greater.
With this ChatGPT nonsense it is symbiotic: we rely on the results we get, no matter how we got them. Or do you think calculators make you better at mental arithmetic, or a better mathematician?
I think the problem I have with this headline is the tacit meaning of "better". The results might be better, yes, but YOU are not different.
I have spent years polishing my English, reading Orwell, Harper's, Lapham, Poe and others. I spent hours arguing about the minute differences between intricate grammatical structures. This is nothing an AI can help me with. It can only be gotten through a teacher, through arguing, thinking, and brooding over differences in the solitude of one's chambers, NOT by conversing with an AI that always delivers bespoke solutions as tasty morsels lacking the bitterness of labor, since the underlying creation of the solution is lost.
Unlike with a teacher, it cannot be gotten from an AI either because the AI is a generator void of the deeper understanding of what is going on.
And the same can be said about thinking. Writing is nothing but a vociferated thought put on paper.
But don't take my word for it; you should sign up for the beta and give it a try :)