> This article is not part of anything
Seems incoherent, I think?
In the end, there is a human curator that decides which creation by the AI is art. There is a human that writes the prompt that creates it. There is human intent as part of the mix, so AI can certainly create art, but it is no different from a camera, which removes many of the choices people make and presents new ones - which means it is a tool for creating art at a different level of abstraction.
Something about this article just felt like a kind of gatekeeping, in hopes of convincing people it's not worth even trying.
… and this analysis generalizes all types of AI based on the characteristics of just one type - LLMs - which select the "next best word" (my layman's understanding of what they do). With so many other types of AI out there, we will eventually get new approaches to AI where the arguments presented won't make sense.
About throwing punches in the air in a tantrum - that was a funny comment. So funny that I wanted to see if someone made art of it. Found something and sharing a link here that is pretty short and to the point, sweet and cool
(I hate mystery links so some info - it's a performance art piece called "Plastic Bag" on YouTube, 5 mins long, about 60k views, where someone throws punches in the air at a plastic bag)
There's art. And then there is Art.
HN is an excellent place to find hundreds of articles and comments predicting the future, i.e., something authors cannot know but like to guess at anyway, especially regarding the future capabilities and predicted use of "AI". There has been a steady stream of this gibberish pertaining to "AI" ever since the announcement of ChatGPT.
In a recent HN poll a majority of voters indicated they thought that "AI" was overhyped.
The parent comment is yet another example of a comment that tells us something the commenter cannot know but wants to guess at anyway. For example,
"Artists will use AI to make some of it. The debate about whether or not it's art will be part of the art."
As it happens I have been using Claude quite extensively as a drafting partner over the past few months for writing a novel. I enjoy plotting, planning and editing, but not drafting, so I let it do the zeroth draft for me. It has been quite a productive arrangement.
Are the thousands of choices of which brush strokes to put where actually the seed of creativity? According to some artists - not really.
"the old tale that Michelangelo painted the entire vault by himself is not quite true; he did have help from assistants, and not just to assist in menial tasks such as mixing the plaster, grinding the pigments, moving the scaffolding and aligning the cartoons.
Some less important aspects of the painting were delegated too – minor angels fluttering around the fringes of the main images for example, as well as oak leaves and other ornamental details.
We even know the names of four assistants that arrived from Michelangelo’s native Florence in 1508: Bastiano da Sangallo, Giuliano Bugiardini, Agnolo di Donnino and Jacopo del Tedesco. They were relatively poorly paid, however, and it seems unlikely that they were entrusted with any significant tasks in the project. "
Chiang is not saying this at all. I'm not sure how you interpreted it this way.
> Chiang is not saying this at all. I'm not sure how you interpreted it this way.
He says this, very explicitly, in his direct comparison with photography. I'm not sure how you managed to miss it.
"When photography was first developed, I suspect it didn’t seem like an artistic medium because it wasn’t apparent that there were a lot of choices to be made; you just set up the camera and start the exposure. But over time people realized that there were a vast number of things you could do with cameras, and the artistry lies in the many choices that a photographer makes. It might not always be easy to articulate what the choices are, but when you compare an amateur’s photos to a professional’s, you can see the difference. So then the question becomes: Is there a similar opportunity to make a vast number of choices using a text-to-image generator? I think the answer is no."
https://www.rogerebert.com/roger-ebert/video-games-can-never...
The argument rings just as silly today as it did 12 years ago.
Because at that point "art" becomes simply something that is "aesthetically pleasant", and that will change from person to person. "Art" as a compliment is useless.
If we try to use "art" as a descriptive, then we would need to draw a hypothetical Venn diagram, and define things that are art and things that are not, so we could try to categorize videogames, or whatever AI produces. This implies a lot more agreement than there is currently.
'The true work of art is but a shadow of the divine perfection.' — Michelangelo (1475-1564)
'Art is a mediator of the unspeakable.' — Goethe (1749-1832)
'A work of art is a corner of nature seen through a temperament.' — Emile Zola (1840-1929)
'Art has absolutely no existence as veracity, as truth.' — Marcel Duchamp (1887-1968)
'Art is science made clear.' — Jean Cocteau (1889-1963)
'Art is anything you can get away with.' — Andy Warhol (1928-1987)
I suspect the rate of individualized adoption of AI augmented writing is well beyond what a casual observer here on HN would think it is.
I also share Chiang's worry about this:
> We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?
I do not think OpenAI et al. set out to create a self-perpetuating slop machine like this, but it sure feels like this is where it is going. For individuals it improves their lives, I guess, but when zoomed out there is something quite dystopian about it.
I can’t tell you why, but I don’t really react to any of them or really any AI art I’ve seen.
Take one of those images, put it in a nice frame, hang it in a quiet art museum, stick a little placard on the wall next to it with a made up backstory, and your emotional response will probably differ.
Conversely, go to your local art museum and randomly pick 10 paintings. Take a hi-res picture of each and place them on a web page entitled, "AI is getting better at generating art" and you'll probably pick out a bunch of tell-tale flaws that are evidence of machine-generated pictures. :)
With the context of looking at images on the dall-e page, you're looking at a gallery of AI images, not art in its natural habitat.
Art deserves to be contextualized properly, not evaluated in large batches.
My point is that your argument is nonsensical. Just because you don't feel anything from AI art doesn't mean others don't.
But some of my friends say as you do.
I wonder if this could be uncanny valley? I understand that sense of off-ness will be in different places for everyone, as we know (and therefore spot) different things about what we're looking at.
It is still considered art.
One example I recommend is the Unanswered oddities series. Really funny. All visuals and audio AI generated.
https://www.reddit.com/r/singularity/comments/1e6c1d4/unansw...
For an essay that hinges on the notion that "art requires making choices", attempts to apply this to AI image generation, and even specifically tries to draw a contrast between that and photography, I wish the author had demonstrated even a superficial knowledge of what choices go into AI art and how it compares with photography, much less delved into the kind of deep philosophical examination of the underlying premise that you propose. But it looks like exploring the subject beyond the shallowest text-box-only UIs presented by a few big firms is too much to ask before the author does exactly what he says was naively done early on about photography.
And you didn't prompt it any more than commissioning a piece, or making a thematic suggestion to a painter friend.
You may not like its art, and it may not come up with some whole new original style, but that doesn't mean it isn't making art in known styles.
TFA is just a bit of a silly fearful protest, IMO.
I just checked Wiktionary, which indeed says it's the 'conscious arrangement [of whatever medium]', so it just becomes the same question as whether AI can be conscious, which I think is just pointless semantics: it hinges entirely on how you define it, and there's no deeper meaning or interest for that debate to reveal.
The model can't output the average because the average is usually completely meaningless; that's why it's a generative model and not a regressive one. As always, these articles are made by people who don't really understand the technology and create their own interpretation of how it works, whether or not they end up being right.
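A toy sketch of the point above (not the actual diffusion or transformer math, and the "cat"/"dog" clusters are purely illustrative): when the data is multimodal, the average lands in a region where almost no real example exists, while a sample from the distribution lands near a real mode.

```python
import random
import statistics

# Hypothetical bimodal data: one cluster of "cat-like" outputs near -1
# and one cluster of "dog-like" outputs near +1.
samples = ([random.gauss(-1, 0.1) for _ in range(1000)]
           + [random.gauss(1, 0.1) for _ in range(1000)])

# Regressing to the mean lands near 0 -- a point where almost no real
# sample lives, which is why "outputting the average" is meaningless.
mean = statistics.mean(samples)

# A generative model instead draws from the learned distribution,
# so its output sits near one of the actual modes.
draw = random.choice(samples)

print(abs(mean) < 0.2)   # True: the mean falls between the two modes
print(abs(draw) > 0.5)   # True: the sample sits near a real mode
```

The same logic is why averaging all faces in a dataset gives a blur, while sampling gives a plausible face.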
I think it's pretty clear that generative AI makes a lot of small decisions. They might not be groundbreaking, or novel (as they aren't in a custom portrait), or somehow lack overall vision, but they are there.
[1] https://www.sciencedirect.com/science/article/abs/pii/S09218...
We can debate his generalized definition of art as making creative choices that carry subjective, intentional, and performative value for human beings (and therefore LLMs fall short of this), but I think he makes a couple strong points nonetheless:
1. The argument others like François Chollet have also made, that we have yet to see any AI systems capable of exhibiting intelligence beyond stylistic mimicry or forming generalized knowledge about concepts from large data sets.
2. The subjective experience of human interaction is valuable and desirable, and will remain so in the face of increasingly capable models, not because they won't be able to compete in producing inspiring art or enjoyable fiction, but because of the inherent primacy of human intentionality and experience.
I've said this long before useful LLMs, but I don't think we've observed this in humans, either. Human creativity can be put into two very similar categories:
1) Metaphor; the arbitrary application of the dynamics of one thing to another. "What if information is like water?" "What if the economy is like the human body?" "That woman is like a bird."
2) Bad copies. When you see someone's output and try to imitate it, but have to speculate about the creative and mechanical process that resulted in that output. You sometimes guess right and sometimes wrong, but the output is similar. Then you vary the parameters in order to create a new example, but since your process was different, with different parameters and different interactions, you create something different than the person you copied would have created.
1+2) both randomly often create emergent effects that are then copied by others, sometimes badly.
This is how Japanese metal can be the result of Black Americans copying songs from musicals and English/Irish drinking songs, British people copying the blues from Black Americans, Americans copying British Invasion music and NWOBHM, and then Japanese people copying American metal.
Sure. But suppose we had an AGI that was just as smart as a human: clearly it would be able to make all these decisions just fine and make art. If current AIs are somewhere between that and dirt, then they ought to be able to make decisions of less complexity, but still of some importance, to the final product. As AIs improve, we would expect the decisions they make to become more complex.
Recall that traditional artists have always had a number of assistants to help them. They produced the sketch and outline, and had other artists - skilled, but not as much as them - fill in the details. A modern artist, who already is less skilled than these artists, and furthermore has less need for their creation to stand the test of time, can benefit from an even less skilled assistant helping them.
Of course it’s not going to be creative on its own - it obviously is not intelligent.
But for me comfyui is an incredibly cool tool to be creative.
Such a boring topic after all - all the noise it attracts won’t amount to much once people understand this technology
You're right that just going yes it is - no it isn't isn't so interesting, but this mainly stems from the fact that "intelligence" is a poorly defined, pre-scientific term. And really, most times you're talking about whether some X is a Y, you're not so much talking about X as about your definition of Y.
I think the thing is, with LLMs/generative AI we see some aspects of ourselves, but not enough that we can accept that it is fully like us, hence the resistance. To me, the answer is clear: what is usually called intelligence is actually several different things, of which whatever it is an LLM does is one.
This idea that people who disagree couldn't possibly understand is misguided at best.
It's bizarre on HN of all places, given its connection to the tech industry and the history of tools sold by that industry, that someone would imagine these are mutually exclusive options.
As is often the case with tech industry tools, the tool in fact augments a human, and is (often) sold to replace a human. That's barely even a superficial contradiction -- enhancing the productivity of each human doing a task, from the perspective of consumer of the kind of human labor involved that is making short-term decisions around a fixed amount of needed output -- replaces some share of those humans rather directly. (On a broader analysis, this is often not true, because it also expands the market for the kind of work it augments by reducing the price per unit output to the point where more marginal uses become viable, but the individual existing purchaser of output often isn't concerned with that.)
https://www.newyorker.com/tech/annals-of-technology/chatgpt-...
ChatGPT is a blurry JPEG of the web (newyorker.com)
574 points by ssaddi on Feb 9, 2023
306 comments
https://news.ycombinator.com/item?id=34724477
About the author:
Lunch with the FT: Sci-fi writer Ted Chiang (ft.com)
https://cc.bingj.com/cache.aspx?d=4753927952212231&w=P5zSV2b...
Recommended by HN moderator:
Painting a fence? Not art, because it keeps the wood from rotting.
Painting a fence hot pink? Art because there's no good reason to paint a fence that color.
If we discover that birds hate hot pink fences, and that makes them last longer? Not art again.
A rich guy pays a million dollars for a hot pink fencepost? Art. Who's the guy who sold the hot pink fencepost? Does he have any other colors?
Whatever anyone thinks about the limitations of LLMs, or whether AI in its current form is sales hype - can anyone sensibly claim that AI 1000 years from now won't be capable of an artistic sensibility? Until there's some proof that there is a secret ingredient in human consciousness that can never be developed by AI - not even a self-aware AI - anyone attempting to lay an imaginary ceiling over the tech is deceiving themselves.
- but even assuming the rate of advancement slows down, eventually it will be making Art…
> he crafted detailed text prompts and then instructed DALL-E to revise and manipulate the generated images again and again. He generated more than a hundred thousand images to arrive at the twenty images in the exhibit.
Doing art via the mind is like breathing through a soda straw.
Mind promises and promises but just keeps missing the mark.
I can make a perfect line with my hand in a moment. I can spend a year creating a "perfect line drawing device" and never get there.
The promise of mind is so tempting tho. Succumb to it and you end up living in a flavorless cartoon.
"""The film director Bennett Miller has used dall-e 2 to generate some very striking images that have been exhibited at the Gagosian gallery; to create them, he crafted detailed text prompts and then instructed dall-e to revise and manipulate the generated images again and again. He generated more than a hundred thousand images to arrive at the twenty images in the exhibit."""
Probably because that's the predominant experience of people encountering AI art on the Internet. I have no doubt whatsoever that there are people out there using AI to do interesting things, but like with basically every technology, the vast, vast majority of the output you're going to see is people who see a labor saving device that can make doing... something, at scale, brain-dead easy. Be that generating shitty coloring books and selling them to overworked parents, generating shitty books on niche, dumb topics like the crystal healing woo shit and selling them to uncritical audiences, or just generating page upon page of boring, shitty artwork and uploading it to DeviantArt and paywalling it.
And that's just individuals. Many online businesses are actively enshittifying themselves too, adding AI generated content alongside (or in place of) human created content. On the note of DeviantArt, they built an AI generator into the damn site so people can fill it with even more low-effort garbage than was already getting uploaded. And of course Google now headlines your search results with a shitty LLM summary that runs the gamut between "dull, uninteresting summary of somewhat relevant information" to "complete nonsense that actively endangers lives" while also depriving even more websites of even more traffic that gave them whatever information in the first place.
Like, again, I have no problem envisioning some people and some orgs some place are doing interesting stuff with this tech. However I cannot overemphasize how utterly, completely, totally dog-shit my experience personally has been with it and how harshly I now tend to judge any project parading around AI integration. I'm open to being wrong... but I'm usually not.
There was that Vaudeville game that made the rounds that I felt was at least trying to do something interesting with LLMs, but like... the tech just wasn't there yet. You're talking to characters and can say basically whatever you want, and then an LLM generates an answer based on the context of that character and it's read back to you by text-to-speech. It's... neat? For like ten minutes, and then you're just playing a detective game with impressively bad writing and zero-effort VO, and the fact that the entire game was built of pre-built, unchanged assets made it feel incredibly cheap and low-effort. The only thing it's really good for is as streamer fodder, weird garbage for people to overreact to and fuck with for an audience.
There is still a lot of interesting work making use of ML tools. Maybe I'm biased towards art that embraces experimentation and new technology, but I found even images like https://i.imgur.com/Jybvj0r.png (zoom out) far more interesting than most of what I see in galleries.
Which makes ChatGPT (or whatever) just as valid as any tool for creating art.
> What I’m saying is that art requires making choices at every scale; the countless small-scale choices made during implementation are just as important to the final product as the few large-scale choices made during the conception.
As a life-long artist and musician, I agree with this. However, I find the artist's perspective lacking from this article. For many artists (myself included), the process is why we do it. It's truly therapeutic. I honestly cannot imagine my life without creative expression. Whether entering a prompt fulfills that for someone is up to them to decide. But, for me, it would remove the parts of creating art that give me joy.
I've used AI generators a few times because they're interesting little toys, but fundamentally, a creative process is literally thousands if not millions of tiny decisions that are informed by other decisions. If anything, what I would call an "artist's voice" in any given creative product is an at least somewhat consistent through-line through those decisions that gives the final piece the "life" so clearly missing from AI art, because all those millions of decisions, instead of being made by one or a few "voices," if you will, are replaced by millions of weighted-average decisions designed to reduce "error" in the product. It's quite literally soulless, and people pick up on this; no matter how much the AI lovers want to scream Luddite at me, it's true.
That's not to say it's completely without purpose, I think this stuff is going to do gangbusters for corporate news pieces, blogs, spam sites, etc. If you want royalty free imagery to use for a thing, and don't give much of a shit about what it is, AI can handle that quite well. But I simply can't fathom someone with an intention, who wants to say something with an art piece using AI much, if at all.
The "AI artists" using this tool lack the technical and artistic competency to realize this. They didn't write the algorithm, draw the dataset, or train the model. They prompted. They have the smallest amount of creative input into this whole pipeline.
I do believe AI can be used in the process to create art, as it's just an image generator like fractal art, but the problem is most people are going to use AI not as a means to create art, but as an end. You could fix the problem above by simply importing the image into GIMP and changing the brightness, but nobody does that, because they aren't interested in creating an art piece with a set goal in mind; they're just being entertained by generating dozens or hundreds of images with this technology.
Amusingly, you could also just type text in GIMP. Instead there is now something called "flux" that can do text literals.
While I see the point in making a prompt interpreter capable of generating text literally, if I were creating something, I wouldn't let an AI randomly pick a font, color, weight, serifs/slabs, etc. for me. These are creative choices in design that make all the difference. Prompting gives the illusion of (creative) choice. You get something that looks good, but "getting something that looks good" is the default state. Anyone can do that. It's the AI art equivalent of drawing a stickman. The prompters just don't realize it because they're comparing themselves to artists of other media, not to other AI artists.
When everything is AI, and anyone can generate an image with a prompt, the whole market will be so saturated with this (perhaps it already is at the rate these are generated) that all the novelty will be gone.
It was cool when AI was able to generate video, just like it was able to generate text. But in my opinion, those are feats of the technology, not artistic feats. The piece itself isn't interesting. It could be any video. Just the fact that the tech can do this is impressive. But it's just the tech that is impressive, not its output. Once the tech can do it once, it can do it every time, so the second time AI generates video is never going to be as impressive as the first time. By the thousandth time it will be as impressive as my ability to send this message to the other side of the world at the speed of light.
Everything you know is made with Stable Diffusion looks the same because if it doesn't look the same you probably don't know it was made with Stable Diffusion.
> The "AI artists" using this tool lack the technical and artistic competency to realize this.
No, they don't. It's been a frequent comment in the AI art community, and a thing for which the community has sought and produced both in-generation and auxiliary-tooling solutions from very early on.
> They didn't write the algorithm, draw the dataset, or train the model.
Perhaps not for the base model, but people in the AI art community have done all three of those for improvements to, and tools built around, the base models and the original code implementation of them.
> I do believe AI can be used in the process to create art as it's just an image generator like fractal art, but the problem is most people are going to use AI not as a means to create art, but as an end.
Most of the people who are using any tool that can be used artistically are going to use it at the most superficial level. Is that true of AI image generators? Sure. But no more so than it is true of, say, pencils.
> You could fix the problem above by simply importing the image into GIMP and changing the brightness, but nobody does that because they aren't interesting into creating an art piece with a set goal in mind, they're just being entertained by generating dozens or hundreds of images with this technology.
People are using AI image generation with a set goal in mind, and people absolutely do import generated images into traditional image editors for adjustments. Though a lot of the people that really know what they are doing have that built into their workflows, reducing the need to do manual spot correction in a separate editor.
> Amusingly, you could also just type text in GIMP. Instead there is now something called "flux" that can do text literals.
Image generation models have been able to do text to a certain extent for a while, and improvements in text generation have been a major trumpeted feature of many recent model releases. Flux isn't interesting because "it can do text literals", it is interesting because the community has discovered that it can be finetuned (specifically, that LoRA can be trained for it) that will allow control of text style, similar to fonts.
I wasn't aware that GIMP could conform typed text to the implicit 3D shape of the surfaces it is being placed on in a 2D image, though.
> When everything is AI, and anyone can generate an image with a prompt, the whole market will be so saturated with this (perhaps it already is at the rate these are generated) that all the novelty will be gone.
Probably. So what? Novelty isn't the point in every image people produce. Lowering the cost and effort to produce basically "looks good" images for lots of casual uses isn't, itself, an advance in fine art, sure. But it is, in itself, useful.
What you misunderstand is what "looks good" means. Before AI, art that looked good was valuable and impressive precisely because few artists could produce it. It was amazing (and it still is) that a human being can reproduce realistic or surrealistic imagery with just pencil and paint.
If anyone can do it with one click, there is no value.
Like I said before, you're using AI art and comparing it to other media, like pencil and paint. That's like comparing photography to pencil and paint. Just because photography is more "realistic" that doesn't mean people will value a photo more than an artist's realistic rendering of the same thing.
When AI just looks good, that is actually the most worthless possible thing, as valuable as a sketch of a stickman.
Ask an artist what's art? Hell no.
Myself, I like results: A metaphor about the scent of roses is just as sweet, after I find it came from an LLM.
> I think the answer is no. An artist—whether working digitally or with paint—implicitly makes far more decisions during the process of making a painting than would fit into a text prompt of a few hundred words.
In the art of words, Even the briefest form has weight, Prompt and haiku both.
> This hypothetical writing program might require you to enter a hundred thousand words of prompts in order for it to generate an entirely different hundred thousand words that make up the novel you’re envisioning.
That would be an improvement on what I've been going through with the novel I started writing before the Attention Is All You Need paper — I've probably written 200,000 words, and it's currently stuck at 90% complete and 90,000 words long.
> Believing that inspiration outweighs everything else is, I suspect, a sign that someone is unfamiliar with the medium. I contend that this is true even if one’s goal is to create entertainment rather than high art.
I agree completely. The better and worse examples of AI-generated content are very obvious, and I think relate to how much attention to detail people have with the result. This also applies to both text and images — think of all the cases in the first few months where you could spot fake reviews and fake books because they started "As a large language model…"
The quality of the output then becomes how good the user is at reviewing the result: I can't draw hands, but that doesn't stop me from being able to reject the incorrect outputs. Conversely I know essentially nothing about motorbikes, so if an AI (image or text) makes a fundamental error about them, I won't notice the error and would therefore let it pass.
> Effort during the writing process doesn’t guarantee the end product is worth reading, but worthwhile work cannot be made without it.
This has been the case so far, but even then not entirely. To use the example of photographs, even CCTV footage can be interesting and amusing. Yes, this involves filtering out all the irrelevant stuff, and yes this is itself an act of effort, but even there that greatest effort is the easiest to automate: has anything at all even happened in this image?
To me, this matches the argument between the value of hand-made vs. factory made items. Especially in the early days, the work of an artisan is better than the same mass-produced item. An automated loom replacing artisans, pre-recorded music replacing live bands in cinemas and bars, cameras replacing painters, were all strictly worse in the first instance, but despite this they remained worth consuming — even in, as per the acknowledgement in the article itself: "When photography was first developed, I suspect it didn’t seem like an artistic medium because it wasn’t apparent that there were a lot of choices to be made; you just set up the camera and start the exposure."
> Language is, by definition, a system of communication, and it requires an intention to communicate.
I do not see any requirement for "intention", but perhaps it is a question of definitions — at most I would reverse the causality, and say that if you believe such a requirement exists, then whatever it is you mean by "intention" must be present in an AI that behaves like an LLM.
> There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you.
Despite knowing how they work, I am unsure of this. I do not know how it is that I, a bag of mostly-water whose thinking bits are salty protein electrochemical gradients, can have subjective experiences.
I do know that ChatGPT is learning to act like us. On the one hand, it is conceivable that it could use some of its vector space to represent emotional affect that itself will closely correspond to the levels of serotonin, adrenaline, dopamine, and oxytocin in a real human — and I can even test this simply by asking it to pretend it has elevated or suppressed levels of these things.
On the other, don't get me wrong, my base assumption here is that it's just acting: I know that there are many other things, such as VHS tapes, which can reproduce the emotional affect of any real human, present any argument about their own personhood, or beg not to be switched off, and I know that it isn't real. Even the human who gets filmed, whose affect and words end up on the tape, will most likely be faking all those things.
I have no way to tell if what ChatGPT is doing is more like consciousness, or more like a cargo-cult's hand-carved walkie-talkie shaped object is to the US forces in the Pacific in WW2.
But when it's good enough at pretending… if you can't tell, does it matter?
> Because language comes so easily to us, it’s easy to forget that it lies on top of these other experiences of subjective feeling and of wanting to communicate that feeling.
> it’s the same phenomenon as when butterflies evolve large dark spots on their wings that can fool birds into thinking they’re predators with big eyes.
100% true. Even if, for the sake of argument, I assume that an LLM has feelings, there's absolutely no reason to assume that those feelings are the ones that it appears to have to our eyes. The author gives an example of dogs, writing "A dog can communicate that it is happy to see you" — but we know from tests that owners believe dogs have a "guilty face" which is really a "submission face", because we can't really read canine body language as well as we think we can: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4310318/
Also, these models are trained to maximise our happiness with their output. One thing I can be sure of is they're sycophants.
> The point of writing essays is to strengthen students’ critical-thinking skills; in the same way that lifting weights is useful no matter what sport an athlete plays, writing essays develops skills necessary for whatever job a college student will eventually get. Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.
> By Chollet’s definition, programs like AlphaZero are highly skilled, but they aren’t particularly intelligent, because they aren’t efficient at gaining new skills.
Both fantastic examples.