As far as I know there's a sizeable number of devs who don't intend to ever rely on Copilot, and I would expect a similar trend in the drawing community, with amateurs and pros who aren't especially anti-AI but don't want a random generator meddling with their art.
Is that really a thing?
I mean, I also don't want to rely on Microsoft and therefore not on Copilot either, but people who avoid AI tools in general on principle are probably a very rare minority. I would simply prefer my own local LLM.
But in the thread linked above I read "AI never had and never will have its place in art." That stance would seem very weird to me coming from devs.
I'm one of those. I've been programming for a while now and there's no way I'm gonna trust a neural network with my code. Debugging is painful enough without having to deal with subtle bugs hallucinated by ML.
Some machines are really useful for reducing human suffering and augmenting our collective capabilities. Some machines are just useless, polluting gadgets. I think ML sits in the middle ground: if your job is pissing out meaningless, repetitive code all day, it can probably do it for you... but if you have to actually do R&D to develop new tools, I don't think ML will be of any use.
So yes, AI can reduce work, but arguably work that was never required nor beneficial to humanity to begin with. I would be way more interested in society reflecting on "bullshit jobs" and how to actually share the workload so that we can have 1-day work-weeks planet-wide, just as the scientists of the 19th/20th century envisioned, instead of continuing to destroy the planet so we can run bullshitting neural nets in the cloud that produce arguably little value.
But sure, ML is fun. Let's just pretend we don't see the whole world burning outside the window.
Hm, just a suggestion: I would be careful with such statements if you don't want to insult people's work you know nothing about.
Because LLMs enable a very broad spectrum of work. I don't use them in my current workflow (nor am I that easily insulted), but the times I did use them, they were useful. My problem with them was mainly that ChatGPT-4 was out of date, but it did produce very useful results for me for WebGPU and PixiJS, which I had not used before, and the solutions it gave me I could not find on the internet. So for my novel work they don't help me in general, but they do help if I need a new custom part, without having to reinvent the wheel.
And then of course there are people who greatly benefit from them who did not study CS, like a friend who is an ecologist and all he wants are some custom Python scripts to modify his GIS tool. I think he is doing useful work, and with LLMs he is indeed spending less time on his (freelance) work and has more time for his children. Isn't that what you are also hoping for?
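(For a sense of scale, the kind of one-off script meant here is usually something like the following. A hypothetical sketch, Python standard library only; the file name, column names, and bounding box are all invented.)

    # Hypothetical one-off helper an ecologist might ask an LLM for:
    # filter a CSV of species observations down to a bounding box.
    import csv

    MIN_LAT, MIN_LON, MAX_LAT, MAX_LON = 47.0, 7.0, 48.0, 9.0

    with open("observations.csv", newline="") as src, \
         open("observations_filtered.csv", "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            lat, lon = float(row["lat"]), float(row["lon"])
            if MIN_LAT <= lat <= MAX_LAT and MIN_LON <= lon <= MAX_LON:
                writer.writerow(row)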
Moving code around a loop, extracting a series of variables, etc.
Like a sibling comment says, I also find it useful for completing the end of a line.
As for writing code, it mostly produces code that looks very convincing at a glance, but is full of shit on a second look.
But as a smart completion and local refactoring tool, I really find it useful.
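(To make "smart completion and local refactoring" concrete, here is the kind of edit meant above: hoisting a loop-invariant expression into a variable. A toy Python sketch; all names are invented.)

    # Toy illustration: extracting a loop-invariant expression.
    class Customer:
        def discount_rate(self):
            return 0.1

    # Before: the discount factor is recomputed on every iteration.
    def total_before(prices, customer):
        result = 0.0
        for price in prices:
            result += price * (1.0 - customer.discount_rate())
        return result

    # After: once you start typing "factor =", a completion tool can
    # usually finish the line and adjust the loop body for you.
    def total_after(prices, customer):
        factor = 1.0 - customer.discount_rate()
        result = 0.0
        for price in prices:
            result += price * factor
        return result

    assert total_before([10.0, 20.0], Customer()) == total_after([10.0, 20.0], Customer())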
It’s like having an over-eager coworker to pair-program with, one you can kinda boss around as you please, never tiring or needing a break. Not senior-level (outside of knowing “deep” small pieces and snippets), but not fresh out of school either.
And it is great for fleshing out comments (if you’re into that, as I am), picking up your style and notation as you go.
While impressive, the two issues I have with Copilot and other AI tools are:
1. The code is usually the same code I'd get a few web searches away, except then it would have the appropriate copyright. As a FOSS developer (in my free time), I do not want to risk using code I don't have a license for, and thus dirtying up my entire project and putting it in danger of being taken down.
2. I really don't need it. At very few points in a project do I both think "I want to continue this" and also "I want my code written for me". I like autocomplete, I use autocomplete, and I like Visual Studio's suggestions, too. It's only wrong about 50% of the time. I have no interest in a tool that writes my code for me, because I have learned everything I know from solving problems myself.
Edit: Clauses in the AI's ToS like "all code generated is yours" are akin to a sign in a bar saying "if you hit someone in here it's not assault": they don't change the facts whatsoever, and the fact is that it's still a crime to hit somebody, even if the bar's ToS says otherwise.
My impression is that people normally don't use Copilot as a substitute for finding solutions (ChatGPT is much better for that), but as a way to help with otherwise tedious tasks that are really specific to your codebase. Check out 6:05 and 6:25 in this Andreas Kling video for a good example: https://www.youtube.com/watch?v=8mxubNQC5O8
Regarding your second point, Copilot helps me when I least expect it. I think the video illustrates what I mean by that as well.
Yes.
I'm a Vim user with 100 WPM typing speed, and I can say with confidence that Copilot isn't that useful to me. Typing boilerplate is not an issue - understanding what I wrote is most of the work. And having an AI spew code that I have to read is more work for me than just writing it myself.
In Go, it's great for dealing with a lot of the repetitive code one finds oneself writing.
When writing Android apps, it's useful for API discovery!
I had my share of auto-generation with enterprise Java stacks, and tried as hard as I could to move to stacks where what we write is concise and relevant (Rails is the closest I came to this; not perfect, but clearly going in the right direction).
I think AI has its place, but I also hope to be lucky enough to not have to use it.
Illustrators might have similar issues, where some of them need to produce boilerplate drawings a lot, but I think they'd also prefer working on projects that aren't that.
If it is possible to guess what we’re going to write, then we aren’t transmitting much information to the computer.
Copilot seems to be very popular though.
Giving away code and the rights to it for free is commonplace. Also, it’s not like you can use “by Ryan Dahl” to make the output from Copilot better.
But these art AIs were trained on CC, CC-BY, and closed-license pieces of art. And you can use “by Greg Rutkowski” to get art in that artist’s specific style.
I don’t think comparing the use of AI or the general attitude towards AI between artists and devs makes sense. Very apples and oranges.
I dunno if it would help, but people do seem(?) to be improving their ChatGPT responses by telling it to answer as if it is an expert on a topic.
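(For what it's worth, the pattern people mean is just a role-setting preamble before the actual question, something like this made-up example:)

    You are an expert Python developer. Review the following function
    for subtle bugs before explaining what it does.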
Not only does it feel like something that might be dangerous to become reliant on (what happens when it’s not working or I don’t have access to it?), but I also have no idea what material it was trained on, which makes it ethically gray. I might be more receptive to a local LLM where I can personally vet what it was trained on (primarily, I’m concerned with whether the material was obtained fully consensually or not).
My attitude towards image generators is similar. Adobe’s is totally out of the question, for example: they claim it’s 100% ethically trained because all the material came from their stock image service, but I know that’s bullshit because I’ve seen stolen art put up for sale there more times than I can count (and worse, they’re unresponsive when theft is reported).
- Not enough to switch (yet, at least)
- I would have to carefully review the generated code, which is not as fun as writing it
The license issue is something I expect will be solved in the next few years (a dropdown menu to choose from, maybe).
I'm not sure it will, as no one who uses it appears to really care about other people's licenses anyway. It's just a method of BSD-washing GPL code.
In both cases you're also risking being accused of plagiarism when the model literally remembers, or reconstructs, a piece it has seen and finds it a perfect match for your request.
I think "AI" tools in Krita may have their place: object detection and selection / tracing, upsampling, seamless resizing, cutting and pasting, texture generation, light adjustment, stuff like that. An integrated analog of DALL-E or Midjourney would likely be a poor for.
Hi! This is me! I'm good enough that I can draw and paint whatever I want manually. I (generally) don't want it in my (main) workflow and I don't want telemetry training models against my work (without knowledge & consent). However, I don't have any qualms against other people using it and I think it's exciting technology.
But the line between "random generator" and "artistic finer control" isn't that sharp and clear. How do digital artists draw leaves and bushes in the background? If not photobashing, most experienced people will use some kind of brushes[0] with some randomness built into them, like random rotation or spray.
Randomness is even more prevalent in traditional media.
And I'm 100% sure AI will evolve to cover both ends.
[0]: Not necessarily a leaf brush. A common misconception among digital painting newbies is that you need an X brush to paint X efficiently. Experienced artists don't want X; they want some controllable randomness.
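(A toy sketch of what "controllable randomness" could mean in a brush engine; this is invented Python, not actual Krita code. One leaf stamp is repeated with bounded random jitter, rotation, and scale, and the artist tunes the bounds.)

    # Hypothetical sketch, not real Krita code: one stamp repeated with
    # bounded randomness. The artist controls the bounds (spread, max
    # rotation, scale range); the per-stamp draw is the random part.
    import math
    import random

    def leaf_stamps(x, y, count=20, spread=30.0, max_rotation=math.pi,
                    scale_range=(0.8, 1.2), seed=None):
        rng = random.Random(seed)
        stamps = []
        for _ in range(count):
            dist = rng.uniform(0.0, spread)
            theta = rng.uniform(0.0, 2.0 * math.pi)
            stamps.append((x + dist * math.cos(theta),      # jittered position
                           y + dist * math.sin(theta),
                           rng.uniform(0.0, max_rotation),  # stamp rotation
                           rng.uniform(*scale_range)))      # stamp scale
        return stamps

    print(leaf_stamps(100.0, 200.0, count=3, seed=42))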
Of those who don't ever intend to rely on something like Copilot: is the majority view "I can code better without it, given its current capabilities", or is it a principled matter of the technology wronging them in some way?
For example, this recent GitHub presentation about productivity improvements: 35% acceptance rate, 50% more pull requests, etc. I believe these numbers, and even if you don't, they will be a reality soon.
The main difference is that in development, more of the tedium gets removed-- e.g. interacting with some API or UI boilerplate-- and more of the more satisfying work-- how the program, generally, is going to solve a problem-- remains. In art, the more satisfying part-- conceptualization and forming those ideas into images-- is entirely removed but the tedium remains.
Commissioning a piece of art from an artist entails describing what you want, maybe supplying some inspo images, and then going through a few rounds of drafts or waypoint updates to course-correct before arriving at a final image. Sound familiar? Generative AI art isn't making art: it is commissioning art from a computer program that makes it from an amalgam of other people's art. It reduces the role of the "artist" to making up for the machine artist's shortcomings.
When you're making art, making the details is ingrained in that process: a requisite step in forming your ideas into images. Details are critical in high-level commercial art, and despite the insistence of many developers who know far less than they realize, current generative AI isn't even close to sufficient.
Economic realities aside, when you're merely editing someone else's images, you've basically transitioned from "writer" to "spell checker" and I don't understand how so many refuse to see how a professional artist would be distraught about that.