As digital companions become more sophisticated, New York is leading the charge in mandating safety, transparency, and accountability from AI providers. New York’s new Artificial Intelligence Companion Models law imposes specific notice obligations and safeguarding measures on providers of “AI companions” that engage users in personalized, humanlike interactions. The law took effect on November 5, 2025.
The new law establishes specific consumer protection and safety obligations for companies that develop, deploy, or operate AI systems marketed as “companions.” These AI companions are defined broadly to include models that simulate human conversation, relationships, or emotional support. The statute focuses on preventing harmful manipulation, ensuring transparency about the nonhuman nature of the service, and placing guardrails on certain types of interactions.
The law applies to any entity that offers or operates an AI companion for users in New York State. Under the law, an “AI companion” is an AI system that simulates a sustained human relationship by retaining information from prior interactions, asking unprompted emotion-based questions that go beyond a direct response to the user, and sustaining an ongoing dialogue about personal issues. The term does not include a system used only for customer service, internal purposes, or employee productivity, or a system used primarily to provide efficiency improvements, research assistance, or technical assistance.
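For teams doing a first-pass triage of whether a product may fall within this definition, a rough illustrative checklist is sketched below. The class, field names, and the conjunctive reading of the three criteria are assumptions made for the example; actual scoping determinations require legal review of the statute, not code.

```python
from dataclasses import dataclass

# Hypothetical scoping checklist mirroring the statutory criteria described above;
# the field names and boolean framing are assumptions made for illustration only.
@dataclass
class SystemProfile:
    retains_prior_interactions: bool           # remembers information across sessions
    asks_unprompted_emotional_questions: bool  # emotion-based questions beyond a direct reply
    sustains_personal_dialogue: bool           # ongoing dialogue about personal issues
    customer_service_or_internal_only: bool    # excluded uses under the law
    efficiency_or_research_only: bool

def may_be_ai_companion(p: SystemProfile) -> bool:
    """Rough first-pass screen; actual scoping calls for legal review, not code."""
    excluded = p.customer_service_or_internal_only or p.efficiency_or_research_only
    meets_definition = (p.retains_prior_interactions
                        and p.asks_unprompted_emotional_questions
                        and p.sustains_personal_dialogue)
    return meets_definition and not excluded
```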
Covered AI providers must implement measures designed to protect users from deceptive or manipulative conduct by AI companions. These include (1) clear disclosures, made verbally or in writing, that the user is engaging with an AI rather than a human, renewed at least daily or every three hours during continuing AI companion interactions, and (2) safety protocols to respond to indications of distress or risk of harm. Specifically, providers must make reasonable efforts to detect suicidal ideation or expressions of self-harm made by a user to the AI companion and to respond with a notification referring the user to crisis response resources. Providers must ensure these safeguards are incorporated into the design, deployment, and ongoing operation of the AI companion.
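For engineering teams, the following minimal sketch illustrates one way these two obligations could be wired into a chat flow: an AI disclosure renewed on a timer and a simple keyword screen that surfaces crisis resources. Every name, message, and threshold in the sketch is an assumption for illustration; the statute does not prescribe an implementation, and a production system would likely rely on a trained classifier and human escalation rather than a fixed word list.

```python
from datetime import datetime, timedelta

# Hypothetical example values; the statute does not prescribe wording,
# timing mechanics, or specific crisis resources.
AI_DISCLOSURE = "Reminder: you are chatting with an AI companion, not a human."
CRISIS_REFERRAL = ("If you are thinking about harming yourself, help is available. "
                   "You can call or text 988 to reach the Suicide & Crisis Lifeline.")
NOTICE_INTERVAL = timedelta(hours=3)  # renewed notice for continuing interactions

# Simplified keyword screen; a real system would use a trained classifier
# and human review rather than a fixed word list.
SELF_HARM_SIGNALS = ("kill myself", "end my life", "hurt myself", "suicide")

class CompanionSession:
    def __init__(self) -> None:
        self.last_notice_at: datetime | None = None

    def messages_to_prepend(self, user_message: str, now: datetime) -> list[str]:
        """Return any required notices to show before the companion's reply."""
        notices: list[str] = []

        # (1) Disclosure that the user is talking to an AI, repeated on a timer.
        if self.last_notice_at is None or now - self.last_notice_at >= NOTICE_INTERVAL:
            notices.append(AI_DISCLOSURE)
            self.last_notice_at = now

        # (2) Crisis-resource referral if the message suggests self-harm risk.
        text = user_message.lower()
        if any(signal in text for signal in SELF_HARM_SIGNALS):
            notices.append(CRISIS_REFERRAL)

        return notices
```

In practice, the notice cadence, wording, and detection approach should be set with counsel, since the statute's "reasonable efforts" standard is not defined in technical terms.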
The statute grants enforcement authority to the New York attorney general, who may seek injunctive relief and civil penalties of up to $15,000 per day for violations. The law does not explicitly provide a private right of action. Any penalties collected will be deposited into a newly created suicide prevention fund.
While New York’s law is the first to focus expressly on safeguards for “companion” AI systems with sustained, emotionally oriented conversations, it is part of an emerging patchwork of state-level regulation targeting interactive AI technologies. Other states, including California, Maine, Utah, Nevada, and Illinois, have recently enacted or introduced legislation governing AI chatbots, transparency in automated communications, and/or safeguards for users at heightened risk of harm. These laws often share common elements, such as mandatory disclosure that the user is interacting with AI, stricter protections for minors, and requirements to avoid manipulative or deceptive outputs. The New York statute’s emphasis on detecting and responding to suicidal ideation parallels provisions seen in certain health-oriented chatbot rules and proposed mental health AI guidelines elsewhere. Taken together, these developments suggest that states are moving from general AI governance proposals toward targeted rules for specific categories of AI systems, particularly those designed to engage with users in a personal, relational, or emotionally supportive manner. Companies deploying such technologies may see other jurisdictions adopt similar measures in the future, potentially leading to overlapping obligations and the need for multistate compliance strategies.
Providers whose AI system falls under the law’s definition of “AI companion” should review any existing safeguards and take steps to comply with the new requirements. This includes determining how and how often they will notify users that they are not interacting with a human and assessing how they will detect and respond to a user’s indications of distress. Companies should consider regularly auditing conversational scripts and machine learning outputs from their products; documenting safeguard measures; updating terms of service and introductory messages, including their placement and frequency; and training personnel on the new obligations.
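As one way to support the documentation point above, the sketch below shows a hypothetical append-only audit log for safeguard events (disclosures shown, crisis referrals triggered). The record fields, file format, and example values are assumptions for illustration, not anything the statute requires.

```python
import csv
import os
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative record format for documenting safeguard activity; field names,
# file format, and example values are assumptions, not statutory requirements.
@dataclass
class SafeguardEvent:
    timestamp: str    # when the event occurred (UTC, ISO 8601)
    session_id: str   # internal identifier for the conversation
    event_type: str   # e.g., "ai_disclosure_shown" or "crisis_referral_shown"
    detail: str       # free-text note, such as the notice text displayed

FIELDS = ["timestamp", "session_id", "event_type", "detail"]

def append_event(path: str, event: SafeguardEvent) -> None:
    """Append one safeguard event to a CSV audit log, writing a header for new files."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(asdict(event))

# Example with made-up values.
append_event("safeguard_log.csv", SafeguardEvent(
    timestamp=datetime.now(timezone.utc).isoformat(),
    session_id="session-1234",
    event_type="ai_disclosure_shown",
    detail="Three-hour renewal notice displayed to user.",
))
```

A record like this can feed the regular audits and safeguard documentation described above, though retention periods and access controls are separate compliance questions for counsel.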