In my armchair-quarterback opinion, if people want something not to be crawled, they must make a best effort to ensure only humans can access it: strong authentication, best-effort bot detection, and binding legal agreements that impose punitive measures for using the data in ways that were not approved. They then have to actually follow through with legal action when those contracts are breached.
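As one illustration of what "best-effort bot detection" can mean, here is a minimal sketch of reverse-then-forward DNS verification, the method search engines publicly recommend for confirming their crawlers. The function name and the injectable lookup parameters are my own for illustration; a real deployment would combine this with rate limiting and authentication.

```python
import socket

def verify_crawler_ip(ip, allowed_suffixes=(".googlebot.com", ".google.com"),
                      reverse=socket.gethostbyaddr, forward=socket.gethostbyname):
    """Best-effort check that an IP claiming to be a known crawler really is one.

    Steps: reverse-DNS the IP, confirm the hostname falls under an allowed
    crawler domain, then forward-resolve that hostname and confirm it maps
    back to the same IP (so the reverse record wasn't spoofed).
    """
    try:
        hostname = reverse(ip)[0]
    except OSError:
        return False
    if not hostname.endswith(allowed_suffixes):
        return False
    try:
        return forward(hostname) == ip
    except OSError:
        return False
```

The `reverse` and `forward` hooks default to the standard library resolvers but can be stubbed out in tests, which also keeps the check testable without network access.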
Companies like OpenAI, for their part, also have to do a lot to ensure compliance with regulation.