1.> What will happen when evil players get hold of this tech? Are we going to witness another cold war or something, except this time the threat will be who develops better models?
2.> These incredibly powerful models are built and maintained by private orgs. Isn't that scary in itself?
3.> How do we adapt so that we can coexist with such tech (and the more that is yet to come in the near future)?
Maybe some more points can be added. What are your thoughts?
It first made a basic admin interface in HTML+CSS I could copy-paste into CodePen, then made it interactive with pure JS, then refactored it to Vue 3, then refactored it into the now-obscure Angular 1.4 just to test it, then back to Vue 3, then added Pinia for persistent state in the browser, then converted the application to PHP with Laravel+Livewire, and then to Python with Django+HTMX for the backend.
EDIT: Wow, then I had it make a graph with Canvas showing a sine wave whose speed I could control via an input, using the prompt "make a an application in JS and Html that shows a sinus curve that you can speed up or down via an input".
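For anyone curious, the heart of a demo like that is a phase accumulator advanced once per animation frame. Here is a minimal sketch of just that logic (in Python rather than the JS+HTML the prompt asks for, with invented names, and with the drawing code omitted):

```python
import math

class SineAnimator:
    """Phase accumulator behind a speed-controllable sine animation.

    Changing `speed` between frames speeds the wave up or down without a
    visual jump -- the same per-frame logic a JS/Canvas version would use.
    """

    def __init__(self, speed=1.0):
        self.speed = speed   # user-controlled, e.g. bound to an <input>
        self.phase = 0.0     # accumulated phase in radians

    def step(self, dt=1 / 60):
        """Advance one frame of length dt seconds; return the new y value."""
        self.phase += 2 * math.pi * self.speed * dt
        return math.sin(self.phase)
```

Bumping `speed` mid-animation only changes how fast the phase advances from the next frame on, which is why the curve accelerates smoothly instead of jumping.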
Absolutely mind-boggling! It means it can already create a simple working application and transform it between most known stacks, even older ones.
I have no idea why some people here on HN say that it doesn't understand logic when it can refactor like that?
This is honestly making me reconsider being a developer as a career choice just a little.
I also think there will be a long period where a programmer's job instead turns into writing extremely detailed comment-like code where you define nearly every requirement of the code (down to "this button should be padded 5px from the top right and hover states should function as such...").
Until we reach the day when a designer can feed a design into an ML algorithm and out pops the backend, frontend, and infra to do the job and scale perfectly without bugs, devs will be needed.
This will follow Gates' law, which says we overestimate the impact of technology in the short-term and underestimate the effect in the long run.
Because it literally is not interpreting logic or using logical reasoning; it's not a matter of opinion. The people who made it wouldn't claim that, because that's not what it has been programmed to do.
It's an incredible example of machine learning, but all it's essentially doing is parsing StackOverflow answers. Everything you just said can be done by novices reading StackOverflow and copy-pasting things together. Yes, it makes that process much quicker, but there's a reason the invention of StackOverflow didn't displace programming jobs:
If you want to create complex production software and grow and maintain it long term, it's simply not enough to copy and paste from the internet.
If that's the extent of your ability, then your software will turn into complete crap, riddled with technical debt, and will be almost impossible to develop or maintain. You will be far from the efficiency required to build a real business from that software.
If you think your job can be replaced by someone with virtually no programming experience copy and pasting things together from the internet, then yes you should be somewhat worried: get better ASAP. For me personally, that is not a threat to my job. ChatGPT is genuinely impressive and I expect even better things to be released in the coming years, but even so we're a long, long way from building complex software without the need for programmers to be involved.
The people who are extremely worried about this are probably the same people who believed Elon Musk when he said we would have fully self driving cars 2 years ago.
Could you try adding some logical/calculation bug to the code and asking it to fix it?
I'm learning Kotlin at the moment, then getting up to speed on Android dev so I can put together a simple app I don't want to pay someone else to build for me. And I'm bored to death already. If I could have ChatGPT do it for me, cool.
I don't understand how it works. I've finally signed up, logged in, tried the 'summarize.site' Chrome extension for summarizing articles, and asked a couple of test questions on the ChatGPT site. But can it build me a site and deploy it for me? Or is there a ton of setup?
Guess I will find out.
We will always have to supervise the machines. Otherwise we wouldn't still be putting Prometheus on 'stable' systems.
Interesting. IME even for toy problems it generally spits out code which fails to do what I requested for many inputs or at all. Using this tool to solve a problem requires not only that I understand what it spits out but also that I understand how to actually solve the problem, so that I can work out which parts of the rubbish it spits out are broken and ask it to iterate on those.
It doesn't seem frightening or particularly transformative; I'm not even convinced that using it could save more time than not. It's not doing anything radically different to https://github.com/drathier/stack-overflow-import and the latter works better.
The worst part is that just as with tools like Grammarly, the people who would most benefit from a properly working version of such a tool are exactly the people least well placed to understand when and how the output of the tool is wrong.
I welcome evil players attempting to use it: their evil plans will self-destruct in hilarious ways.
1) We are forever in a 'cold war'; nuclear weapons have been able to destroy the earth multiple times over since 70 years ago. Not sure how a less potent weapon supersedes this fact.
2) Those models are successful with computer code because programmers have been very pro-open-source and pro-open-data since forever. We'll have open-source versions of those things soon -- we'll still need to find the hardware, but I think that can be done too. It's much easier to do than a nuclear bomb, so I expect these systems to become ubiquitous.
I wish biology and medicine had had a similar attitude toward open data and open science. Imagine if you could run similar queries against genome databases or neuroscience images.
3) First we get excited, then some people will turn that fun tech into billions, as before.
I really don't remember people being such doomers when the internet came about. What happened?
On the code completion side, I'm already feeling the pain of having to interact linearly with Copilot. Would be nice to have a few different panes - "written description", "suggested changes", and "autocomplete", and have them update each other. Just having autocomplete is like peeping through a keyhole.
Don't get me wrong, I do think there are dangers to runaway progress. But we can't stop the internet now. I think our best bet is opening ourselves up to what comes and help skew attention towards tech that leads to more compassion.
Universal Basic Income? xd
My friend wrote a "remote c2c server, basically a mutating malware" using ChatGPT. He had no bad intentions, he just works in the security domain.
People with malicious intentions will be the biggest and earliest adopters of AI. Somehow that is my first thought.
In the past, software engineers were dealing with punched cards (few engineers), later on they were dealing with assembler (more demand), then with low-level programming languages like C (demand starts to increase), and nowadays engineers deal with high-level programming languages like Python, JS, etc. (high demand). As the technology makes software more ubiquitous and reachable in every aspect of life, the demand for people who know about software increases.
Maybe in the future software engineers will have to deal with even higher-level languages (prompts?), but that would only mean that making software is easier than before, and you'll see software become even more ubiquitous than it is right now. Demand for people who know about software will go up even if the tooling seems to require fewer people with such knowledge.
Even before large language models, we'd had the option for years to copy-paste working code for RNN-based time-series forecasting.
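The snippets people copy-pasted were typically full RNN pipelines, but the "fit the history, emit the next value" shape is the same. As a deliberately simpler stand-in (named plainly: this is simple exponential smoothing, not an RNN), a one-step-ahead forecast looks like:

```python
def ses_forecast(series, alpha=0.5):
    """One-step-ahead forecast via simple exponential smoothing:
    level = alpha * latest + (1 - alpha) * previous level."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level
```

The point of the comment stands either way: code in this shape has been freely copy-pastable for years; the model just generates it on demand.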
It scares me that in a few years I won't be able to tell whether something is fake most of the time, and the global mindset will be controlled by even more fake content.
We may need to figure out a way to identify genuine human-generated content.
It's run by OpenAI, and like its current GPT models such as davinci, it absolutely will be monetized.
1. There is a good subset of jobs across multiple industries that are simply "decision tree lookup" operations. These types of jobs will most certainly be replaced. For example, I worked for an aerospace company, and we hired a consultant to advise on a manufacturing process. He basically looked at what we were trying to make and advised on the tooling, process, etc. This is the type of job that can easily be done by a future version of ChatGPT that is sufficiently trained on both textual and mathematical contexts. Software jobs often fall into the above category, replicating common patterns that developers have learned. ChatGPT right now is even smart enough to take an input JSON and an output JSON and write code to transform one into the other.
2. The actual "compute" operations jobs (like making software that requires figuring out a new pattern of transforming data, or interfacing with a new piece of hardware like a 3D display) won't be replaced, but the skill set will shift to something a lot more computer-science-centric: being able to either a) additively train generic models on specific tasks, or b) use state-of-the-art AI-assisted tools effectively.
3. Overall, quality of life is going to improve, as it will get a lot cheaper to do things.
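To make point 1 concrete, here's a hypothetical example of the JSON-to-JSON transform described above; the field names and the mapping are invented for illustration:

```python
import json

def transform(record: dict) -> dict:
    """Map one (invented) input shape onto one (invented) output shape --
    the kind of mechanical field-by-field code these models generate well."""
    return {
        "id": record["user_id"],
        "name": f'{record["first_name"]} {record["last_name"]}',
        "active": record.get("status") == "active",
    }

raw = '{"user_id": 7, "first_name": "Ada", "last_name": "Lovelace", "status": "active"}'
print(json.dumps(transform(json.loads(raw))))
```

Given a sample input and the desired output, generating this kind of mapping is exactly the "decision tree lookup" work described in point 1.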
TL;DR: if you are a software dev and you haven't already, get super familiar with ML concepts, PyTorch, etc.
https://github.com/karpathy/micrograd is a very good primer to start with once you understand the basic concepts.
From the following table, create a PHP CRUD application: CREATE TABLE `rating` ( `rating_id` int(11) NOT NULL, `review` varchar(4048) DEFAULT NULL, `rating` tinyint(4) NOT NULL )
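The prompt asks for PHP, but to sketch the shape of the CRUD code such a prompt produces, here is the equivalent logic in Python using the stdlib sqlite3 module (column types adapted from the MySQL DDL; the function names are invented for the sketch):

```python
import sqlite3

# In-memory database standing in for the MySQL table from the prompt.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE rating ("
    " rating_id INTEGER PRIMARY KEY,"
    " review TEXT,"
    " rating INTEGER NOT NULL)"
)

def create_rating(review, rating):
    cur = conn.execute(
        "INSERT INTO rating (review, rating) VALUES (?, ?)", (review, rating)
    )
    conn.commit()
    return cur.lastrowid

def read_rating(rating_id):
    return conn.execute(
        "SELECT rating_id, review, rating FROM rating WHERE rating_id = ?",
        (rating_id,),
    ).fetchone()

def update_rating(rating_id, review, rating):
    conn.execute(
        "UPDATE rating SET review = ?, rating = ? WHERE rating_id = ?",
        (review, rating, rating_id),
    )
    conn.commit()

def delete_rating(rating_id):
    conn.execute("DELETE FROM rating WHERE rating_id = ?", (rating_id,))
    conn.commit()

# Quick demo of the full create/read/update/delete cycle.
rid = create_rating("solid", 4)
update_rating(rid, "solid, would use again", 5)
print(read_rating(rid))
delete_rating(rid)
```

A PHP version would wrap the same four queries in form handlers; the generated code differs mostly in boilerplate, not in structure.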
It's worse than that, though. We didn't have nuclear fission being made by multiple competing parties in the private sector with no possibility of government regulation or oversight. That's what we've got today for ML.
Decentralization sounds great until someone is cooking with plutonium. Our values of freely sharing information and technology are simply unequipped for technologies this dangerous, but we have not yet recognized ML as a weapon of mass destruction. We should, as it plainly can now be used thusly.
I predict that in the future, open-source ML models will be regarded the same way that loose nukes are treated today.
Of course, the problem will be far worse. We never set up multiple competing MOOCs teaching nuke-making at scale to the next generation. In effect, we have made ML terrorism unavoidable and endemic after 2025 or so.
Given that democracy was already hanging by a thread, I'm going to wager that the profusion of next-generation ML bots will make it effectively impossible at scale, as it becomes a simple matter to create 10,000 supporters, say, with unique faces, voices, and opinions, out of the thinnest of air.
This means that democracy will denature into a sort of suspicious populism. And before you say, "isn't that where we're already at," I'll say, "yes and it just got a whole heck of a lot worse."