Because the societal harm from certain industries' unregulated activities outweighs the economic cost of regulating them.
Despite what the Libertarian Party's pamphlets might say, regulation is invariably reactive rather than proactive; the saying is "safety codes are written in blood", after all.
Note that I'm not advocating we "regulate AI" now; I believe we're still in the "wait-and-see" phase (whereas we're definitely past that for social media services like Facebook, but that's another story). There are hypothetical, but plausible, risks, and if they become real then we (society) need to be prepared to respond appropriately.
I'm not an expert in this area, and I don't need to be: I trust people who know better than me to come up with workable proposals. How about that?