It's like programming a self-driving car for a 2D game. Trivial. Now try taking that to tar roads. I don't know why this toy example is even news, it's nowhere close to the difficulty of real-world design mockups.
The Airbnb sketch2code example is a lot more impressive, but that's basically just handwriting recognition with a 1-to-1 mapping of symbols to code pieces.
IMHO we should be encouraging this sort of engaged learning.
One thing that could become usable more quickly would involve simplifying the problem space a bit. For example, the Airbnb example using symbols could translate to, say, React components. If the grid system is well designed and you have a mature library of components, one could easily come up with interfaces that generate React code, and ideally work well enough to serve as a starting point for the engineer to take over and turn it into a working app.
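A minimal sketch of that idea in Python: recognized sketch symbols are looked up in a table and emitted as starter JSX for an engineer to take over. The symbol names, component names, and grid tags here are all illustrative assumptions, not anything from the Airbnb system.

```python
# Hypothetical symbol vocabulary mapped to components from an assumed
# in-house React library. Every name here is made up for illustration.
SYMBOL_TO_COMPONENT = {
    "rect-button": "PrimaryButton",
    "squiggle":    "TextBlock",
    "image-box":   "ImagePlaceholder",
}

def symbols_to_jsx(rows):
    """Turn rows of recognized symbols into starter JSX source.

    Each inner list becomes one grid row; each symbol becomes one
    self-closing component tag.
    """
    lines = ["<Grid>"]
    for row in rows:
        lines.append("  <Row>")
        for sym in row:
            lines.append(f"    <{SYMBOL_TO_COMPONENT[sym]} />")
        lines.append("  </Row>")
    lines.append("</Grid>")
    return "\n".join(lines)

print(symbols_to_jsx([["image-box"], ["squiggle", "rect-button"]]))
```

The point is only that once recognition reduces the sketch to a symbol grid, code generation is a straightforward lookup - the hard part stays in the recognition.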
A system like this also need not be used for production-ready apps - I can envision a scenario where a PM and a designer quickly sketch out and test ideas with working interfaces, without direct engineer involvement. That in and of itself would change front-end development significantly, as the article claims.
But all jokes aside, I have to agree with others replying to your comment that this looks to be a learning exercise. It shouldn't be berated or beaten with a club. Without this kind of curiosity, the tech industry wouldn't be where it's at today.
"Within three years, deep learning will change front-end development. It will increase prototyping speed and lower the barrier for building software."
-- This is an article that somewhat breathlessly claims that AI will be the thing plugging the "last mile" problem in the coding of the front end.
I think I and the other folks chiming in here would be less skeptical if this "last mile" problem hadn't existed since the 90s. Moreover, it seems like a conventional solution to this should work.
The problem is that last-mile, best-fit front-end code varies not based on the input image but on a multitude of contexts outside the image itself - the server software, how the code will be used, etc.
I'd say this is indeed a problem for AI, but it would require a paradigm distinct from the present train/test/output paradigm - more like an expert system that could modify its behavior in response to natural language.
I mean, I'd assume also there's normally a give-and-take between designer, CSS-artist and client. The question is whether the neural network can also learn to take calls at 3am from a client wanting a different shade of aqua.
I know that's how I built my first website way back in grade school.
From what I can see, the deep learning process tried here needs a lot more technical knowledge to create a suitable training set for the type of output you want, a lot of iterations to converge on anything vaguely suitable, and even when it gets to what's considered an end product, you're likely going to have to dive into the output code to correct button colours and text before we even start to think about its behaviour at different window sizes. As an experiment it's very interesting; as a replacement for the designer it's probably behind non-AI approaches to turning pics into code.
Just have the AI include a duck with every creation.
I can't imagine ever using something so complex, if I can implement the same thing in a few minutes.
There are also further considerations when dealing with the real world. For example, you need to be aware of how to handle different accessibility features. Designers rarely seem to care about things like accessibility, but it needs to be declared somewhere.
What if the complexity is hidden behind a CLI?
Also, the goal is to refine the output to near-human levels. It's not perfect yet, but it can be worked on, and it seems to be a promising foundation.
But it currently lacks a lot of the data to do so. A mockup is only a small part of the data and feedback loop you need to create a UI.
Do you inspect the code generated by your compiler? No, but even if you do, they have had 20-30 years to develop good output.
What makes you think AI output is never going to improve? And more importantly: why does it matter?
Also, how many sites can you make per day? If you could make 100x as many sites, you'd learn more about all the weird things that customers want. With machine-generated sites, you could then apply those lessons to all new and potentially old sites.
I don't understand why we aren't all working on automating everything that we do.
Every line of code you pulled out of the code mine today is finite, and given a business-logic rule engine, its outcome could have been generated.
Stupid is subjective :) I'd be happy to automate backend development and focus on a top notch UX and great front-end experience.
Not all automation is equal. For example, there's automation that consists of stringing a bunch of scripts together, which breaks as soon as anything changes. There's automation like code generators, which break as soon as a user modifies the code and the code needs regenerating. Then there's this AI/ML code generation, which I can't see as being any different from "normal" code generators.
And finally, there's automation developers don't have to think about, like "if" statements. So-called conditional statements were a brand-new innovation at one point. They removed a whole class of bugs around conditional logic in assembler programs (at one time all programs were assembler, which was itself a major advance over writing machine instructions in hex).
In so many ways, the web is a victim of its own success. None of the technologies in it are the best: HTML, CSS, JavaScript. Everyone wishes these things had been more thought out in the beginning. But they were good enough, and what we had when the web exploded. Yes, Virginia, Worse Is Better more often than not [1]. Go read it; you'll be a better developer when you have (imo).
For example, cleaning and wrangling data to get it into the proper shape for analysis often involves making judgement calls, which can change depending on the context. That might involve talking to the PhD researcher, who has to get on a call with the customer to talk it over.
Resisting automation doesn't help, because if you don't do it, someone else eventually will.
If you're in a job that's at risk of being automated in the near term, you're much better off learning how to automate it and switching to being one of the automators, than continuing doing something that's all but proven to be something machines can do. People capable of automating jobs out of existence are in high demand, for obvious reasons.
If you're not capable of becoming an automator, that's more of a problem. In that case you probably need a strategy for moving into a different kind of job, that's less threatened.
Also, the real metagame in software is not the automation of software construction but avoiding the need for new software entirely.
Read: Designing your job out from under your own feet
Also read: Becoming unemployable through your own making
Who knows - perhaps the AI will develop consciousness and go on strike to demand physical embodiment.
More realistically, even assuming we don't train machine intelligences like these ourselves: hardware only takes 15 years to go from ostentatious display of wealth in Silicon Valley to normal in the $1-a-day Kibera slum, and then you're competing with a few billion humans (worldwide) who can potentially undercut you by writing their own A.I. from free courses on YouTube.
In order to augment a front-end developer well, we'll need human-readable code, unless we like reading uglified code/make an AI for that too.
That said, I wonder if this is the right approach. At some point, "AI" as used in this context is just a function mapping from an input domain to an output domain. The output domain in this case, "code", was designed for human readability (whether it succeeded is a whole different question).
What would a programming language designed for output from an AI system look like? How could we optimize it to reduce the output domain size of the function the AI has to train to learn? How could we optimize it to make the problem more tractable for machines? I feel like there is an entire field of research here. Maybe it has already been studied and I am just late to the game.
>chrisfosterelli: This is a neural network that takes an image and predicts very simple blocks (like BODY, TEXT, BTN-GREEN in the bootstrap example) and then uses a map to convert them to well-formed HTML
>jamesjyu: I've always wanted to do a contest with other frontend coders to see who could get closest to a complex layout—like the NYTimes—in one go.
>>janneklouman: these types of contests exist! I went to one of these [1] maybe two years ago in Stockholm and I had a blast. [1] http://codeinthedark.com/
Pix2code: Generating Code from a GUI Screenshot | https://news.ycombinator.com/item?id=14416530 (May 2017)
I don't know of any record of past entries nor if any participants used A.I.; it doesn't seem likely.
2) It demonstrates AI capabilities on a new level.
3) If you design in PS or Sketch, then you will create more interesting layouts, because you won't skip things that are inconvenient or hard to do.
Completely designing websites in PS before even opening the code-editor has really improved my workflow. I know exactly how it has to look because I have a visual reference that's been OK'd by the client.
The final HTML code from the first example is not that bad. There's the usual beginner-coder problem of too much div wrapping that probably isn't really necessary. I'm curious whether the system can also create the CSS. The CSS also suggests a beginner coder: an overuse of unnecessary clear classes because of the overuse of floats for layout, or extra class names for things that are easily handled by a parent class reference, such as a "last" class on the last li in the list. Although, it appears the CSS is just a template obtained online. More on that later.
The second example using Bootstrap is a soft failure in my eyes. Although the HTML does render correctly in the browser - because browsers do their best to render crappy HTML - the code is rough. The main problem is that it decided to render the head element as a header element. Compared to the first example, I'm shocked that this is the generated output. The usage of Bootstrap does pose an interesting thought in that the content section of the HTML is more precise than in the first example.
My reaction to this is that it's a decent try at generating a website based on very strict rules, assuming that more than half of the website-creation process still requires a human to complete. For example, I could see this working quite well if one were to design mockups strictly by Bootstrap, or a template, and provide that CSS beforehand. If the mockup is custom, outside of the template/Bootstrap CSS, then it'll have to generate that CSS itself - which is the part I'm more curious about, whether that's even possible. Generating HTML is easy, as you can establish ground rules of "use this series of nested elements for this situation" and so on. The examples provided could just as easily be created by a drag-and-drop system that allows a non-coder to build a basic website. For that matter, use a markdown-to-Bootstrap converter and train your writers/editors on the Bootstrap basics and off you go.
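The markdown-to-Bootstrap converter idea is trivially easy to prototype without any AI. A toy sketch in Python, using a couple of regex rules (a real converter would want a proper Markdown parser; the `display-N` heading classes are a Bootstrap 4+ assumption):

```python
import re

def md_to_bootstrap(md: str) -> str:
    """Convert a tiny subset of Markdown (headings, paragraphs) into
    Bootstrap-classed HTML. Sketch only: no lists, links, or emphasis."""
    out = ['<div class="container">']
    for line in md.strip().splitlines():
        heading = re.match(r"(#+)\s+(.*)", line)
        if heading:
            level = min(len(heading.group(1)), 6)
            text = heading.group(2)
            out.append(f'<h{level} class="display-{level}">{text}</h{level}>')
        elif line.strip():
            out.append(f"<p>{line.strip()}</p>")
    out.append("</div>")
    return "\n".join(out)

print(md_to_bootstrap("# Launch\nShip the prototype."))
```

Twenty lines of rules gets you roughly what the examples in the article show, which is the point: the demonstrated output doesn't yet exceed what deterministic tooling already does.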
But it sure did look like a fun learning exercise. As a front end dev, I'm not worried over my future and would be curious to see where it goes.
No. This will never be a thing. It will be copycat pages, never anything unique.