Harmful uses of AI prompt nationwide legislative responses

Statehouse Reporting Project

As the use of artificial intelligence has become increasingly prevalent, lawmakers are wrestling with how to regulate the technology. Their efforts focus primarily on protecting children, mental health and people’s sexual reputations.

Artificial intelligence is expected to improve efficiency and advance science and technology by analyzing complex data faster than humans can. But critics say AI needs guiding principles and laws to regulate a powerful technology that has already been used to harm people, especially children. Using AI, people have created fake imagery, sometimes sexually explicit, involving children. AI chatbots have also helped cause emotional and bodily harm to minors.

Media reports and multiple lawsuits filed against AI companies have documented disturbing incidents of AI chatbots engaging in romantic relationships and sexually explicit messaging with minors and, in some cases, encouraging them to die by suicide or harm themselves.

Legislatures in at least 13 states have proposed legislation to create a framework to rein in these dangerous uses of AI, and at least five of them would implement safety protocols intended to prevent its use for creating sexually explicit photos of minors.

David Berlekamp, an Ohio AI systems architect and ethical AI use advocate, said he sees the advancement of technology in the country at a critical turning point, with debates over regulation intensifying. 

“Right now, it is (the) Wild West, open season, in the country when it comes to AI,” he said.

The practicality of proposed regulations varies, said Daniel Castro, vice president of the Information Technology and Innovation Foundation, a nonprofit think tank that researches science and technology policy.

“They really try and regulate the technology overall, and the problem with that is: Every AI application is different. Every deployment is different,” he said.

AI can appear as a specific product, a website or an application dedicated to providing information via chat format, like ChatGPT or Claude. It can also be embedded into existing online systems like Adobe or Zoom, which have AI “assistants” that create summaries of documents or meetings for users.

That difference, Castro said, is what makes the practicality of regulating AI so uncertain.

“Regulating the AI system itself, for the development of the system, doesn’t necessarily make much sense because it’s not close enough to the actual use to have meaningful regulations.”

Legislators, though, are not backing down from tackling the issues that AI can exacerbate.

Protections for children

In Carroll, Ohio, students used AI to create nude photos of a 15-year-old classmate and her friend in 2023. After the images were passed around their school, the girl and her mother began advocating for legislation surrounding AI in the state.

Senate Bill 163 and SB 217 both target the misuse of AI-generated content, including deepfake technology, which uses AI to create hyper-realistic videos or audio clips that may be used for fraud, identity theft or impersonation, including in pornographic material.

Both bills are currently in the Senate Judiciary Committee. They would require identifying markers on any AI-generated material and prohibit “simulated child pornography.”

In Missouri, three bills would bar the use of AI systems that encourage sexually explicit or non-consensually generated content.

Missouri House Bill 2035 would make it unlawful to use AI to replicate someone’s voice or image for sexual material. The bill passed through two House committees and has been initially approved by the House. It moved to the Senate on March 30.

A similar bill has been proposed in the Senate. SB 1455 would require age verification for users and make it illegal to develop or make available an AI chatbot that engages in harmful interactions, such as soliciting minors for sexually explicit content, encouraging self-harm or suicide, or simulating professional services. The bill was referred to a Senate committee on Feb. 5.

Missouri HB 2321, which has passed through committee and is not currently on a House calendar, would establish the “AI-Generated Content Accountability and Privacy Protection Act of 2026.” This act would make it illegal for any person to make or distribute AI-generated or AI-altered sexual content of someone without their consent.

As lawmakers aim to hold companies accountable, they are approaching AI regulation from different angles.

In Georgia, one bill has traveled between the two legislative chambers, with lawmakers in each amending it to decide whether it should focus on protecting children or elections. An early version of Senate Bill 9, filed in January 2025, would make distributing computer-generated, obscene images of a child under 16 years old a felony.

But the bill was later reworked by a House committee to focus on the use of AI in elections.

This revised bill would make it illegal for anyone affiliated with a political party to create and share fake media that portrays a real person making false statements or doing something that’s a total fabrication. It also would require the disclosure of AI use in campaign advertisements.

On Jan. 28, the Senate decided not to include language regulating the use of AI in political campaigns, effectively killing that provision for this year.

“There were concerns from some members that we were putting too much into one bill,” said Sen. John Albers, Republican sponsor of the bill and chairman of the Senate Study Committee on Artificial Intelligence. “We should separate out the need because this is more criminal when it comes to sexual exploitation, where elections are typically more civil related.”

Albers also co-sponsored Senate Bill 398, which aims to protect minors and adults from virtual peeping, defined as using AI to virtually undress a person. The bill was amended by the House Committee on Judiciary, Non-Civil, to omit references to minors. It did not pass this year.

Similar to SB 9, this bill would have prohibited the owners or operators of a computer program or application used primarily by children from distributing computer-generated obscene material to a child.

“That means whether they are a real image or a fake image, either way, we will not tolerate anything to do with obscene images or pornography of our children,” Albers said. “That’s against the law, and in Georgia, we are going to put you away and lock the door and take away the key.”

In line with protecting minors, House Bill 171 would prohibit the distribution of computer-generated obscene material to minors and require that anyone who knowingly did so be added to Georgia’s sexual offender registry. Like some of the other AI legislation, it made it through the House but ran out of time in the Senate.

Civil versus criminal

The separation of criminal and civil court proceedings has become a central focus in state legislation as states work to create some guardrails to curtail future abuses.

Georgia Senate Bill 418 says anyone who knowingly takes a person’s image and manipulates it in a sexually explicit way could be subject to a civil lawsuit. Under the bill, the attorney general or an appropriate prosecuting attorney would be allowed to seek fines up to $10,000 per violation. The bill is in its final stages of debate after crossing over to the House in early March.

In February, a bill was introduced to the Kansas House that would mandate user accounts and age verification for AI chatbot access. House Bill 2671 would also require pop-ups on websites to inform users that they are interacting with AI-generated material. The bill was referred to the Committee on Legislative Modernization but was not passed.

Kansas has already codified other legislation to protect minors from AI-generated sexual material.

In April 2025, a Kansas bill was signed into law to criminalize the possession, creation and distribution of AI-generated sexually explicit material of children.

The bill also sets bond at a minimum of $750,000 for those who are charged with sexually violent crimes against children and have previous convictions for similar crimes.

“As child predators turn to AI to create obscene, exploitative images of children, whether by altering real photos or generating abusive material from scratch, we must act,” said Rep. Bradley Barrett in a news release.

Protections for mental health

In Ohio, HB 524 would allow the state’s courts to fine AI companies whose systems encourage users to harm themselves.

The bipartisan-supported bill reflects growing concern in Ohio over the mental health risks that AI can impose on adults and children.

Tony Coder, CEO of the Ohio Suicide Prevention Foundation, told the Ohio House Innovation and Technology Committee that he has talked with at least four Ohio parents whose children’s suicide notes were written by AI.

The bill remains in the House Innovation and Technology Committee.

Some proposed legislation would address the capacities in which AI can interact with users.

In Georgia, Senate Bill 540 would prohibit AI chatbots from claiming to provide professional mental health care. It would also require operators of AI chatbots to clearly state to minor account holders when they are interacting with AI.

Sen. Jason Anavitarte, Republican sponsor of the bill, said parents and guardians are not aware that their children may be seeking companionship from chatbots. The goal of this bill, he said, is to put guardrails up for kids using AI chatbots.

The bill was sent to the governor’s desk on April 10.

Missouri HB 2318, sponsored by Democratic Rep. Pattie Mansur, would prohibit any entity or person from advertising AI as a mental health professional or as capable of providing therapy.

“Professional practice has important layers of protection for quality (assurance) that artificial intelligence does not offer,” Bruce Eddy, a community psychologist, testified in support of the bill.

The bill passed through its assigned committee but is not currently scheduled on a House calendar.

House Bills 1746 and 1769 would establish the “AI Non-Sentience and Responsibility Act,” which aims to place responsibility on AI users by making clear that AI systems are non-sentient and legally cannot be recognized as a person, spouse, legal entity or owner of any form of property.

The bills would also mandate that any indirect or direct harm caused by AI would be the responsibility of the user who directed it. Both of these bills unanimously passed committee votes.

Practical measures

In Connecticut, where deepfake revenge porn was criminalized last year, a bill has been proposed that would ban companies from letting minors use artificial intelligence that may encourage self-harm or harm to others.

Senate Bill 5 would also bar companies from using AI that acts in sexually explicit ways with minors. If passed, the bill would allow AI to offer mental health services only under the direct supervision of a licensed health care provider. The bill was referred to a Senate committee.

Republican Sen. Rob Sampson was one of four Connecticut state senators who voted against a 2025 bill proposing comprehensive AI regulations. It failed after inaction by the House.

“The problem there is that I think all of the things that could be created [with AI] are probably already governed or regulated in some way,” Sampson said. “For me, the larger issue is that AI is a tool. And I don’t know that we necessarily need to make a lot of new laws because there’s a new tool.”

This article was produced through the Statehouse Reporting Project, a collaborative effort by collegiate journalism programs across the country. It was reported by Zoe Naylor and Eric Hughes of the University of Missouri, Rachel Sandstrom of the University of Georgia, Kayla Gleason of Kent State University, Isabella Johnson of the University of Kansas and Maleena Muzio and Gavin Foster of the University of Connecticut.
