Let's take the medical assistant example.
> Medical assistants are unlicensed, and may only perform basic administrative, clerical and technical supportive services as permitted by law.
If they're labelling data as "tumor" or "not tumor", with any agency in the process, does that fit within their unlicensed scope? Or would that labelling be closer to a diagnosis?
What if the AI is eventually used to diagnose, based on data that was labeled by someone unlicensed? Should there need to be a "chain of trust" of some sort?
I think the answer on liability will be that it all falls on the doctor agreeing/disagreeing with the AI... for now.
It does open something of a loophole: "Oh, I wasn't diagnosing a friend, I was helping him label a case just like his, as an educational experience." My completely-IANAL guess is that judges would look at how the person is doing it, primarily whether they're receiving any compensation or running it like a business.
But wait... the example the OP was talking about is being run like a business, and likely doesn't properly send any disclaimers to the AI, so maybe that doesn't help us decide.
Of course, it's the org and not the individual who would be practicing, since labelling itself is not practicing.