I understand the value proposition here and it's neat technology; I just think it might still be a little early for millions of dollars to be pouring into the field. Every example of "AI music" I've heard has been some level of failure. But maybe they just need a few million dollars to throw at the problem, and it'll go away.
The real utopia here is putting on your headphones and hearing appropriate, good music.
Music that responds to gaming, sports, chilling, riding the train, or falling asleep, all without user input, would be pretty cool.
With this service, we could now implement a better version of this: https://www.youtube.com/watch?v=XPM1o9QKw1Q
(Something like this has already existed for years on iOS and Android, but those apps could be improved.)
There are also video games and a lot of amateur YouTube videos that could use it as well.
From what the article says, it looks like this company is looking to provide its software as a tool for companies to generate stock music with minimal effort. They claim that the stock music currently available for purchase is both overpriced and generally insufficient for commercial projects with specific demands. Perhaps it's also implied that paying a producer would similarly be too steep a cost.

My main issue with Amper's concept is not that creating music with AI is too far-fetched; that is not a new concept. My issue is that the value of Amper's software depends on its ability to optimize the process of creating music to strict specifications, something that I think is bottlenecked by the speed of human conversation. Like any producer, AI software can get the ball rolling, but ultimately creative decisions have to be made intelligently somewhere, and for Amper to succeed there must be an exchange of feedback that is more efficient than regular human conversation. Not only must the AI be very good, but the user interface itself must be so good that someone without a music production skill set and only a vague understanding of what's needed can do the work. I think creating this kind of interface for a layperson is the bigger challenge here.
I was also extremely unimpressed with the demo. The sound quality was very poor for commercial music, and there really isn't a lot of range shown in the tracks. And it isn't because of SoundCloud; there are plenty of even amateur composers there with more professionally produced music. I know it's supposed to be stock music, but isn't this software supposed to be capable of generating excellent, detailed stock music?
It would be better if you could download the individual tracks to import into a DAW, where you could further edit and manipulate them to get a more custom sound. I went through the "Pro" tutorial and this wasn't an option.
I have no dog in this fight, just curious.
That may be good enough for production music.
At least it sounds like music, which puts it way ahead of most musical AI projects.
One can hope that one day we'll be auto-generating music based on individual preferences, so that every piece you hear is a unique arrangement.
Also, there are plenty of able workers ready to fill Excel sheets for a very reasonable price, but that is not the point here either.
It's never just about the music, which is why you see a ton of amazingly talented SoundCloud producers no one has ever heard of. They understand the music but lack the branding and persona development that gets people to actually stick with their music.
AI-produced music will be like computer-generated art: too distant for most people unless there is another component on top of the music.
That's my initial take. Curious to see what people think of my opinion.
A large part is the evolution of the persona too. Following his growth as an individual has been amazing (although House of Balloons is still my favorite project).
AI-generated music is a completely different field, because the AI is the composer there. That's why, in the original article, they talk about wanting to create a symbiosis of AI and human composer, essentially creating another tool, i.e. an instrument, but this time the human needs to do even less of the actual music creation.
That's exactly the point of this tool. It's meant to produce background music for commercials that's cheaper than what's currently available. The "another component" is the crap that the music plays in the background of.
I personally don't care. I just want to listen to something that sounds good.
What it really needs is the ability to add cues, similar to how ML painting programs are now giving you the ability to define areas of a photo/painting (this is sky, this is water, etc).
If you could say I need 30 seconds of uplifting music for my trailer but here are the 3 places where things are fast paced and here is the point that is shocking, the AI could in theory accommodate that into one piece that aligns with your already assembled visuals.
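To make the idea concrete, a cue-driven request like that could be sketched as a simple data structure. This is purely hypothetical — nothing suggests Amper exposes such an API; `Cue`, `build_request`, and every field name here are invented for illustration:

```python
# Hypothetical cue specification for a generated track (not a real Amper API).
# Each cue pins a musical directive to a timestamp in the finished video.
from dataclasses import dataclass


@dataclass
class Cue:
    start: float      # seconds into the piece
    duration: float   # seconds the directive should hold
    directive: str    # e.g. "fast-paced", "shocking sting"


def build_request(length: float, mood: str, cues: list[Cue]) -> dict:
    """Assemble a generation request, rejecting cues that overrun the piece."""
    for cue in cues:
        if cue.start + cue.duration > length:
            raise ValueError(f"cue at {cue.start}s overruns the {length}s piece")
    return {
        "length": length,
        "mood": mood,
        "cues": sorted(cues, key=lambda c: c.start),  # timeline order
    }


# The 30-second trailer example from above: three fast-paced spots
# plus one shocking moment, aligned to already-assembled visuals.
request = build_request(
    length=30.0,
    mood="uplifting",
    cues=[
        Cue(start=22.0, duration=2.0, directive="shocking sting"),
        Cue(start=5.0, duration=4.0, directive="fast-paced"),
        Cue(start=12.0, duration=4.0, directive="fast-paced"),
        Cue(start=25.0, duration=4.0, directive="fast-paced"),
    ],
)
```

The generator would then have to resolve each directive into tempo, dynamics, and orchestration changes at the right bar, which is exactly where the "basic themes put together" problem shows up.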
However, it still sounds like basic themes strung together, not a song with a beginning, middle, and end.
AI-generated music will go along with generic food and other products. Just listen to the generic soundtracks already on most Kickstarters. It works because they're paired with conspicuous consumption. Just use the phone app "SoundBot" to make a generic chord progression with eighth-note rhythms.
"Compelling" (a la Magenta) art and music will have to understand the context, have a perspective and understand what is ineffable and attempt to convey it. That's an unbelievably large machine learning task, having essentially no boundaries and with very little data to source from.
Nevertheless, as a company, Amper may create some useful musical man/machine collaboration tools.
I wonder how this will turn out. I did like the sound sample at the bottom.
Kidding aside, this looks useful for a specific audience, and at a price point that will hopefully get traction. I've been fond of royalty-free stuff via purchase/service for a long time; it's a great platform. This will be a good one to keep an eye on!
Most of the sample tracks on SoundCloud are 30 seconds long. Is the platform capable of creating longer, 2-4 minute songs right now?
Amazon's recommender systems adjust dynamically based on what users buy. Google's handwriting recognition learns every time you select the correct choice and continue with your next scribble. How does this learn anything?
I use a light theme because I get a lot of light in the afternoon (even with curtains closed) and the glossy mac monitor turns into a mirror with a dark theme.
I don't know why dark and ugly is the norm in media manipulation tools. I find it annoying (mostly just hard to read, due to too-low contrast), and I often switch to a lighter theme if it's available in the tools I use.
:-1:
For the first nights it could just monitor DJs, and then it could learn to be its own DJ. But I think, in a way, it could only come up with similar styles.