Bass, Berry & Sims attorneys Roy Wyman and Joelle Hupp co-authored an article for Innovation & Tech Today discussing how California’s recently enacted chatbot law could help mitigate risk associated with chatbots mimicking human interactions.

The law applies to chatbots that provide human-like responses and interact with users socially or emotionally. Customer service chatbots, smart home devices, and video game bots, however, are excluded from the law's scope.

Roy and Joelle highlighted key requirements and compliance guidelines that companies with applicable chatbots must follow, including explicit notice to users that they are not interacting with a human; safety protocols to prevent chatbots from promoting suicidal ideation, suicide or self-harm; and reporting to California's Office of Suicide Prevention on instances where safety protocols failed and how they were remedied.

The law officially went into effect in January 2026, requiring companies to meet notice and transparency requirements and implement mandatory safety protocols before the reporting requirements take effect on July 1, 2027.

“While the federal government is looking at preempting state law regarding AI, recent tragic events involving AI-powered chatbots and suicidal ideation suggest that increased regulation of AI-powered companion chatbots may continue, so don’t be surprised if this is just the first of numerous regulations,” said the authors. “Creating a process for review and approval of new AI is therefore a must-have.”

The full article, “California’s Chatbot Laws Confront the Risks of Friendly AI,” was published by Innovation & Tech Today on April 2 and is available online.