Very few domains readily offer data as well curated as the corpus built up over many centuries of music. Music generation models shouldn't be too far behind LLMs in terms of sophistication.
So what has happened in this area? Might it simply be that the field is overlooked because the state of the art is harder to demonstrate in print and on social media? Or are there other reasons, technical or legal (copyright, for one)?
And generally, is anyone else interested in this space? High-end efforts like the Neuralink mission naturally draw headlines, while at the other end of the scale, devices like the Muse and Emotiv headbands clearly have a market, yet remain far from mainstream.
At some point, these kinds of devices will become widespread, but in what form, and for what applications?
My organization invited a candidate to complete a live-coding whiteboard test for a second-stage interview. It was a really simple problem (essentially 'FizzBuzz' with a strong UI component).
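For context, a hedged sketch of the kind of core logic such a test involves (the function name and exact rules here are illustrative; the real test also asked for a UI on top of this):

```javascript
// Classic FizzBuzz mapping: multiples of 3 -> "Fizz", multiples of 5 -> "Buzz",
// multiples of both -> "FizzBuzz", everything else -> the number itself.
function fizzBuzz(n) {
  if (n % 15 === 0) return "FizzBuzz"; // divisible by both 3 and 5
  if (n % 3 === 0) return "Fizz";
  if (n % 5 === 0) return "Buzz";
  return String(n);
}

console.log(fizzBuzz(9));  // "Fizz"
console.log(fizzBuzz(10)); // "Buzz"
console.log(fizzBuzz(15)); // "FizzBuzz"
```

The point of such a test isn't the logic itself, which takes minutes, but watching how a candidate structures, renders, and talks through the solution.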
When we invite people to take the test, we encourage them to talk us through their solution while they work. That way, we get a sense of how they approach problems, and how they work with others.
A recent candidate surprised us, though. After we introduced the test via Teams, the candidate switched off both their mic and camera and started working on the test.
The first thing they did was start writing a component (React) containing all the logic for the test artifact, with no markup/JSX, straight into the comment section describing the test! We pointed this out to show the candidate that we were there to help, and that the session should be a kind of collaboration. They replied curtly and didn't speak to us for the rest of the session.
It could have just been nerves, but the comment block sat above the module imports, which made it look as if the code were being pasted in from elsewhere.
The logic was fine, though. But then, when it came to adding the markup, the candidate hit an error caused by referencing variables outside the function scope in which they were declared (this was JS). The type of thing an experienced dev would fix along the way. Again, we raised the issue; the candidate acknowledged the suggestion but didn't touch it until the very end...
After another 10 minutes of absolute silence (muted mic), the candidate finished in record time, and only then did they switch on their mic (still no camera).
In isolation, the two glaring mistakes (writing code into the comment section, and misunderstanding variable scope) could well be nerves. But when coupled with the order in which the candidate approached the test (writing all the logic first, without testing the rendering logic until the end), and the lack of communication, I personally found the situation suspicious.
Either the candidate had another, more experienced programmer in the room with them, or they were using ChatGPT. I raised an objection; my colleague gave them the benefit of the doubt. Management progressed their application...
So I have two questions. Am I right to be suspicious in this hypothetical case? And, going forward, what can my organization do to head off such concerns?