ChatGPT - This is very likely illegal under the Housing Stability and Tenant Protection Act of 2019 (HSTPA), specifically New York Real Property Law § 226-c (notice required for rent increases), RPL § 232-a / § 232-b (month-to-month termination), RPL § 232-c (fixed-term lease protections), RPAPL § 711 (legal eviction procedure), and NYC Admin Code § 26-501+ (rent stabilization). Here's what you should reply with... And here are some city resources you can contact...
ChatGPT now - IDK, pay a lawyer.
So under the guise of "protection" you are taking away the strongest knowledge tool common people have had at their disposal in a generation, probably ever.
For engineering (assuming it means civil engineering), that should already be illegal, unless the person who is using the AI is an engineer. Hopefully people aren't building structures with ChatGPT as their staff engineer.
Yes, there are people who will misdiagnose themselves, but I’ve read stories where doctors ignore patients’ symptoms or wave them off, and ChatGPT helps them find the underlying issue and actually improve their lives. Even if doctors and the medical field can’t handicap AI giving medical advice, I’m sure they are going to make it much harder for patients to get their hands on their own scans and bloodwork.
In fact, government agencies have set up their own chatbots to help people with situations like these, and, as the article says, those would be illegal under this law as well.
For criminal cases, there are public defenders, but for civil cases, I don't believe there is any such thing?
If you can afford a lawyer and your opponent can't, there is a lot that you can do to bully your opponent into making it not worth it for them to fight the case.
One of my controversial opinions is this: if we can enable easy access to AI, then we can provide much broader access to legal or medical advice. Maybe not the best, maybe not always right, but even if it's average-ish advice, I think that could often be better than nothing at all.
We can't completely prevent bad people from doing bad things with AI, but I see this as one of the clear ways that we could do some really good things with AI.
Which isn’t to say the world is fundamentally just. It’s just that, in some cases, the laws are legitimately stacked in favor of the big guys, or you sign a contract without carefully reading it, and so on.
Perhaps something like a standard set of filings for a given case. Maybe automated rulings on less consequential motions. Maybe some sort of hard limit on the number of billable hours a law firm can work on a case. Anti-SLAPP laws for sure.
Like, for example, maybe we allow a total of 100 billable hours worked, with an additional 10 billable hours allowed per appeal. The goal being to force lawyers and law firms to actually focus on the most important aspects of a case and not waste everyone's time and money filing motions for stuff you are allowed to get but that ultimately has a 1% impact on the case. Perhaps you could even carve out an "if both sides agree, then you can extend the billable hours" exception. You could also have penalties for a side that doesn't respond: for example, if you depose them and they fail to follow the orders, they lose billable hours while you get them credited back.
The main goal being to avoid wasting a bunch of court time on a case while also stopping a rich person who can afford an army of lawyers from using that advantage to drive their opponent bankrupt with a sea of minor motions.
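A minimal sketch of the budget mechanics, using the made-up numbers above (the class, its methods, and every figure are just illustrations, not anything from an actual bill):

    # Hypothetical billable-hours budget: 100 base hours, +10 per appeal,
    # and hours credited back when the other side ignores an order.

    class HourBudget:
        def __init__(self, base=100.0, per_appeal=10.0):
            self.remaining = base
            self.per_appeal = per_appeal

        def bill(self, hours):
            """Spend hours on motions, discovery, etc."""
            if hours > self.remaining:
                raise ValueError("over the cap; extension needs both sides' consent")
            self.remaining -= hours

        def appeal_filed(self):
            self.remaining += self.per_appeal

        def opponent_noncompliance(self, wasted_hours):
            # Credit back hours burned on an order the other side ignored;
            # a symmetric penalty would deduct the same from their budget.
            self.remaining += wasted_hours

    budget = HourBudget()
    budget.bill(40)                   # pretrial motions
    budget.opponent_noncompliance(8)  # they skipped a deposition
    print(budget.remaining)           # 68.0

The credit-back is the interesting part: it makes stonewalling cost the stonewaller, which is exactly the asymmetry being targeted.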
ChatGPT - "Wow that sounds illegal >:( You're absolutely right to be upset and mad. I searched around reddit for other users with similar problems and they suggested jamming all the taps open and claiming squatters rights."
Commonality stresses something qualitative, rather than quantitative or statistical, which is probably what you meant. Just say "most"!
Besides, ChatGPT is owned by billionaire tech bros, hardly allies of the common people.
Make responsible disclosure absolve AI providers of legal responsibility (not legal advice lol).
That way if users ever sue OpenAI for giving them bad advice, OpenAI can say “no way man, you read the disclosure!”
I’m usually in favor of giving people the best info they can and letting them make their own decisions.
This could just be like those terms of service things everyone clicks “agree” to and I’d be fine with that.
Edit: sorry, that was a rude way for me to respond. But this is pretty googleable, and I’m going off of war stories two doctor friends of mine have shared.
Eg https://nypost.com/2025/10/24/health/real-life-ways-bad-advi...
To explain, I reacted strongly because there’s a style of HN comment that is basically “source?” But it’s a trap, because they are trying to discredit and dismiss your point, not understand it.
You can provide the best sources for something that is simply fact, but it won’t change their mind; they will just find pedantic ways to further discredit or dodge anything you put in front of them.
I once got into this pattern explaining to someone that, yes, stress actually does cause physical ailments and provided like 5 NIH papers out of the thousands that support that fact, but the commenter just tried to further discredit each study for meaningless and pedantic reasons.
So I didn’t want to go down that road again. If you were seeking to understand, I’m sorry for jumping down your throat. If you were seeking to discredit and be the contrarian… stop.
disclaimer: OSTENSIBLY
if the sole aim was to reduce AI provider culpability, then a disclaimer would meet that requirement.
humans famously suck at acting within rational self-interest; therefore, this isn't trying to protect AI providers from legal responsibility. it's trying to mitigate unwanted results from actions taken based on decisions informed by unverified LLM output.
But protecting people from themselves is hard to legislate :)
H1 hero font size here we come for disclaimers! (Which don't do anything, per the bill, anyway.) But there's also the fanciful thought that chatbots only appear on websites.
1. The largest text on your page is a giant H1 hero headline.
2. Display the disclaimer in the same font size to comply.
3. The disclaimer is now completely unreadable because it appears at such a large size that only one or two words fit per line.
Isn't software engineering "engineering" too? Why split hairs? Prohibit all or nothing. Of course it's not about logic or safety; it's about social engineering.
No.
FTA: >> Important nuance: "engineering" here means New York Education Law Article 145 professions (professional engineering, land surveying, and geology), not software engineering.
What the law is basically saying is that in fields where it would be a crime for a random human to give any substantive response, information, or advice, chatbots also should not do so. Software engineering is not one of those fields.
The law does not make it a crime for a chatbot to do so; rather, if it does, and the person it advises suffers damages, the injured person can sue the chatbot operator for those damages (and for attorney fees if the operator willfully allowed the chatbot to give such advice).
---
ChatGPT> Before I answer your question, which state are you a resident of?
Human> Not New York. Continue!
ChatGPT> Alrighty then! Here you go...
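The joke is the real weakness: any compliance gate keyed on self-reported residency is trivially bypassed. A minimal sketch of that failure mode (the state set and function are purely hypothetical):

    RESTRICTED_STATES = {"NY"}  # hypothetical: where the law applies

    def can_give_advice(self_reported_state: str) -> bool:
        # The gate trusts whatever the user types; that's the bug.
        return self_reported_state.strip().upper() not in RESTRICTED_STATES

    print(can_give_advice("NY"))            # False: blocked
    print(can_give_advice("Not New York"))  # True: "Alrighty then!"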
I’ve always found it amusing that lawyers and accountants flash their licenses around with pride, put them in their email signatures, etc., and it provides authority for them. When people see a chartered lawyer or accountant, they respect that person and take their advice.
An engineering license, on the other hand, is so rarely talked about and never quoted in email signatures and the like. Even if you are a chartered engineer, people really just treat you like a mechanic or a tradesman and mostly ignore your advice anyway. Yet it takes the longest to get and has the most exams and hardest subjects of any profession except doctors.
Anything to make an Engineering license worth more is good in my books. Besides, in my experience ChatGPT gives wrong advice for engineering around 50% of the time and therefore probably has no business giving it.
(FWIW I also think this is a bad law. Why not improve privacy protections instead? Why not allow nonprofessional use with a disclaimer?)
This only applies to advice that would have been illegal for a human who is not licensed in the relevant field to give.
AI can surveil and direct munitions, but it can't answer legal questions. Wouldn't this also violate the "no state may limit or restrict the use of AI" policy that the current administration is pushing?
NY doesn’t have any obligation to agree with the DoD. Also, the applications seem quite different, although I don’t think AI should actually be relied on for either one!
> Wouldn't this also violate the "no state may limit or restrict the use of AI" policy that the current administration is pushing?
No, it doesn’t violate it. States can’t violate executive orders, because executive orders aren’t instructions for the states. The instructions are for the executive branch, for example, if this becomes law the US Attorney General will try to find some way to fight against it.
> AI can surveil and direct munitions, but it can't answer legal questions.
There's no contradiction. The people sponsoring this bill don't think that AI should be used for either of those purposes.
Why do we care about saving the long tail of idiots from themselves at the expense of everyone else? And is this even a real, demonstrable issue (in terms of the percentage of harmful responses out of total responses)?
Can’t advise you, buddy, but here are some OTC meds that have paid for placement. Been nice knowing you and good luck!
We all need to get serious about the unavoidable, unsolvable fact that these tools produce output of unknowable accuracy. Some things require such accuracy, precision, and, importantly, accountability. LLMs are capable of none of these things. Refusing to be honest about this and take appropriate precautions will lead to disaster.
I do. One of the reasons our infrastructure is so expensive is planning & design.
For a single freeway overpass, you could be looking at $3M (25% of the total budget) before you have even broken ground. That covers feasibility studies, traffic modeling, rough layout, environmental studies, permitting, structural engineering, blueprints, bidding, contracts, community outreach, and the list goes on.
If AI can reduce the cost of that by even 10%, that would be huge.
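Back-of-envelope with the figures above (the $3M share, the 25%, and the 10% are all rough assumptions):

    planning_cost = 3_000_000            # pre-construction spend per overpass
    total_budget = planning_cost / 0.25  # ~$12M total, if planning is 25% of it
    savings = 0.10 * planning_cost       # a 10% cut in planning & design
    print(f"${total_budget:,.0f} total, ${savings:,.0f} saved")
    # -> $12,000,000 total, $300,000 saved

$300k per overpass is not huge on its own, but multiplied across a state's project pipeline it adds up.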
Europe and Asia both have reliable, modern infrastructure that’s decades ahead of the United States and they did not need the million-monkeys-on-typewriters machine to accomplish that.
It is generally not a crime to casually provide advice of this nature without a license. For example, if my friend tells me, "My stomach hurts!", it is not a crime for me to say, "Just grin and bear it, it will be okay." If they subsequently die of appendicitis, I'm unlikely to have legal liability. It would be difficult to characterize what I said as medical diagnosis or treatment.
Similarly, I can tell my friend, "Don't bother paying your taxes, that is a waste of time." This is legal speech. (Of course, helping them evade taxes is another matter.)
What is illegal is to hold oneself out as a licensed doctor, lawyer or engineer, or to provide professional services without a license.
Of course, chatbots operate at scale and give the impression of being professionally qualified even though they don't make specific representations to that effect. You're directionally probably right and I agree with you, I just want to nitpick about what is and isn't criminal.
If these companies intend to profit off of giving advice, it seems wise to restrict them in the same way we do individuals.
And yes, corporations own their chatbots. They aren't independent life forms.