So much of the world of software development is building variations of custom CRUD applications that take user input, store it, and then present it back to the user in various ways, allowing them to read, update or delete it.
On top of this is often a layer of other features such as workflow management, notifications etc.
How do you believe this type of software development will be impacted by the advancements in AI in general and LLMs in particular?
Cheers!
I doubt it will be business/stakeholder people interacting with the AI. It could potentially be business analysts, but I doubt they'll want to; it would be an addition to their current job.
That leaves the software engineers. Maybe a lot of software engineers will turn into _solution_ engineers or _product_ engineers. Their job will be to create the solution/product even if they're not writing code.
At least that's what it feels like I get paid to do...
I suppose AI needs to be able to reason, and according to experts we are still a long, long time away from AGI.
Btw, what’s the CAT step?
Remember, this isn't actually AI, it's machine learning, and the learning part is a misnomer.
That reminded me very much of this skit "The Expert". https://www.youtube.com/watch?v=BKorP55Aqvg&ab_channel=Lauri...
Yep.
You can make a good argument that today, any technical profession doesn't have to know as much because of existing software in most any given sector, whether it's CAD software with built-in stress/CFD analysis for mechanical parts, or frameworks for actual software development that are larger building blocks than pure code.
In the same way, future software engineers will likely be using generative software based on models like GPT that can take plain English and translate it into code. There will still be knowledge required of which model to run, what the parameters do, how to tweak the output, etc.
Of course, in the further future (although exponentially less time), those engineers will no longer be needed because there will be general models trained on all of their work as well. But really though, at that point, we wouldn't be that far from AGI.
What you've described is a bare-bones CRUD app. The little I know about things like AI/LLMs is that you feed them text so they can learn. If the input is not good, then the output is not going to be good either, no matter what it does with the input.
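For what it's worth, the "bare-bones CRUD app" core really is tiny, which is why everyone expects it to be automated first. A minimal in-memory sketch (no persistence, no validation, all names mine):

```python
# Minimal in-memory CRUD store: the kernel that most business apps wrap
# in auth, validation, workflow, notifications, reporting, etc.
store = {}
next_id = 0

def create(record):
    """Insert a record and return its new id."""
    global next_id
    next_id += 1
    store[next_id] = record
    return next_id

def read(record_id):
    """Return the record, or None if it doesn't exist."""
    return store.get(record_id)

def update(record_id, record):
    """Replace an existing record; return False if the id is unknown."""
    if record_id in store:
        store[record_id] = record
        return True
    return False

def delete(record_id):
    """Remove a record; return True if something was actually deleted."""
    return store.pop(record_id, None) is not None
```

Everything interesting in a real app lives in the layers around these four functions, which is exactly the part that's hard to specify in plain text.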
We (as an industry) can't get feature requirements or business logic documented well enough to be interpreted consistently by humans, who understand those problem domains in high fidelity, let alone by computers reading that text. If the translation of requirements to code isn't great, code-to-LLM-to-new-code isn't going to cut it either.
Our industry jokes about all we do is CRUD apps but once an app is mature and beyond simple models, has integrations from a dozen APIs, has customers integrating via APIs, does reporting, needs to guarantee-ish transactions and most importantly is using derived sets of data for billing/invoicing it is much much more than a "CRUD app".
It would be pretty cool if LLM was able to be trained on your company's private code and then you were able to ask it "hey, we're thinking about making XYZ change with one of our vendors/components, what ramifications should we think of" and then it scans all of your code and tells you where stuff is touching it.
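The crude, non-LLM version of that "what touches this component?" scan is something you can do today with a text search over the repo; the hypothetical LLM part would be reasoning about the hits, not finding them. A rough sketch (all names mine, real tools like grep do this better):

```python
import os

def find_references(root, identifier, exts=(".py", ".js", ".java")):
    """Crude impact scan: return (path, line_no, line) for every source
    line under root that mentions the given identifier (e.g. a vendor
    client or component name)."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for line_no, line in enumerate(f, start=1):
                    if identifier in line:
                        hits.append((path, line_no, line.strip()))
    return hits
```

The hard part an LLM would add is telling you *why* a hit matters (billing path vs. dead code), which plain text search can't do.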
I can think of a ton of reasons why we're years away from that (small, simple breaking changes that send the entire thing out the window), but I'd be surprised if we aren't marching towards some version of it in the next 5-10 years. Think of how many executives will throw money at Microsoft if they promise (they can basically lie outright in the sales pitch and it doesn't even have to work, cough Azure cough) that it'll scan the entire company's code base and do advanced analytics on it.
As the requirements become more complex (like having an email sent off after a Create or Update, or some SQL stored procedure needing to be triggered), it's harder to say. I haven't seen enough of the dataset used for training to really know, but these models could potentially have learned enough to replace a large group of programmers.
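That "email after a Create" kind of requirement is where plain CRUD stops. One common shape for it is a hook that fires after the write; a toy sketch (hypothetical names, the real mailer/stored-procedure call is stubbed out):

```python
# Hypothetical post-create hook: side effects (emails, stored procedures)
# hung off the CRUD operation, the part that's hard to generate from a
# one-line requirement.
hooks = {"create": []}
sent = []  # stand-in outbox so the example is self-contained

def on_create(fn):
    """Register fn to run after every successful create."""
    hooks["create"].append(fn)
    return fn

def create_order(db, order):
    """Insert the order, then run all registered post-create hooks."""
    order_id = len(db) + 1
    db[order_id] = order
    for fn in hooks["create"]:
        fn(order_id, order)
    return order_id

@on_create
def send_confirmation_email(order_id, order):
    # In a real app this would call a mailer or trigger a DB procedure.
    sent.append(f"order {order_id} confirmed for {order['customer']}")
```

Getting a model to infer *when* the hook should fire, and what happens if it fails mid-transaction, is the part I'd want to see before believing the "replace programmers" claim.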
What reduces my confidence in that thought is that things like ChatGPT-3 can't even get history right from a wiki page, and can't reliably code in a specific language.
It's generating pseudocode in some instances, so I think they have a way to go still.
Oh, and the other very popular request that we get is "Hey, we have an app built with <insert no-code platform here> and we need to re-build it with traditional ways because we can't get new features in it because the no-code platform doesn't support them..."
So basically a parking app that saves you a spot, i.e. low-hanging fruit: sure, the AI stuff will eat that market. But if you want something like Carrot Weather, ToDoist, etc., you are going to need someone to build it.
Maybe the next generation of text AI models will do it :p
I honestly have no idea what next-generation LLM performance will look like, but it could _possibly_ be used to generate entire simple CRUD apps.
In the long term this is a bit like asking how ICBMs will affect fortress wall construction.
A husband is told by his wife before going out: "Buy a watermelon. If you see eggs, buy a dozen." So he went home with a dozen watermelons.