The New Regulatory Reality for AI in Healthcare: How Certain States Are Reshaping Compliance

By: Jennifer Yoo, Ana Razmazma, Sari Heller Ratican, Zach Harned, Natalie Kim

What You Need To Know

  • California, Nevada, Texas, and Illinois are bringing artificial intelligence-related healthcare regulations into sharper focus by limiting how AI is portrayed and what kind of care can be provided with the help of AI.
  • California laws prohibit AI systems from implying the presence of licensed medical oversight where none exists and create new compliance considerations by giving the state professional licensing boards direct authority to investigate violations.
  • Illinois’ law prohibits the use of AI to provide mental health therapy or to make therapeutic decisions, unless an individual, corporation, or entity falls under an exemption.
  • Nevada’s law prohibits AI providers from utilizing AI systems to provide or claim to provide professional mental or behavioral healthcare.
  • Texas laws require providers to disclose AI use in clinical care and maintain oversight of AI-generated medical records.

The landscape for AI in healthcare is shifting dramatically as state legislators move beyond general guidelines to establish concrete enforcement mechanisms. Healthtech companies using AI may be better served by integrating regulatory readiness into core business strategy rather than treating compliance as an afterthought. Below, we provide an overview of these laws, along with questions companies can use to guide their analysis of how such laws may apply to their AI use.

California Targets Misleading Systems and Design

California had already targeted the regulation of generative AI use in the healthcare setting with AB 3030, which took effect on January 1, 2025. It imposes disclosure requirements on healthcare providers, including clinics, hospitals, and physician offices, that use generative AI to produce written or verbal communications containing clinical information. Under AB 3030, such communications must include (1) a disclaimer indicating that the content was produced by generative AI and (2) clear instructions for patients on how to contact a licensed human healthcare provider. These requirements were designed to ensure transparency and preserve patient trust in clinical communications.

California’s AB 489 represents a critical evolution in healthcare AI regulation by targeting systems (rather than healthcare practitioners) that could mislead patients about the presence of licensed medical oversight. Signed into law on September 2, 2025, and effective October 1, 2025, AB 489 prohibits AI systems from using professional terminology, interface elements, or post-nominal letters (such as M.D., D.O., or R.N.) that suggest users are receiving care from licensed human healthcare professionals when no such oversight exists. The prohibition extends beyond obvious misrepresentations and is meant to capture subtle design choices that could convey professional authority. Healthtech companies should therefore consider avoiding any language, design, or branding that could be interpreted as implying medical authority or licensed professional involvement, such as “Virtual Physician,” “AI Doctor,” or “Nursebot.” Additionally, healthtech companies should not (1) use clinical terminology implying that the care or advice being offered is provided by a person holding a professional license or (2) market products using terms implying that a medical professional may be involved.

AB 489’s enforcement mechanism begins October 1, 2025, when state professional licensing boards gain direct authority to investigate violations, with each prohibited term, letter, or phrase constituting a separate offense. This creates a new compliance consideration alongside the existing privacy, security, and consumer protection requirements companies must navigate.

Companies developing diagnostic AI tools or virtual health assistants should consider conducting a comprehensive review of all product features, user interfaces, and marketing materials to assess whether their systems use any language, design, or branding that could be interpreted as implying medical authority.
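
One way a product or compliance team might operationalize that review is a first-pass script that flags professional titles and post-nominal letters in UI copy and marketing strings for human and legal follow-up. The term list, function name, and sample strings below are illustrative assumptions, not an exhaustive or authoritative screen for AB 489.

```python
import re

# Illustrative (not exhaustive) patterns that could imply licensed medical
# oversight; counsel should tailor this list to the product and the statute.
FLAGGED_PATTERNS = [
    r"\bM\.?D\.?\b", r"\bD\.?O\.?\b", r"\bR\.?N\.?\b",
    r"\bvirtual physician\b", r"\bAI doctor\b", r"\bnursebot\b",
]

def scan_copy(strings: dict[str, str]) -> list[tuple[str, str]]:
    """Return (location, matched term) pairs for human review."""
    hits = []
    for location, text in strings.items():
        for pattern in FLAGGED_PATTERNS:
            match = re.search(pattern, text, flags=re.IGNORECASE)
            if match:
                hits.append((location, match.group(0)))
    return hits

# Example: scan a few strings pulled from a hypothetical copy deck.
ui_copy = {
    "onboarding.header": "Meet your AI Doctor, available 24/7",
    "chat.disclaimer": "This assistant is not a licensed clinician.",
}
for location, term in scan_copy(ui_copy):
    print(f"Review needed: '{term}' found in {location}")
```

A scan like this only surfaces candidates; whether a flagged term actually implies licensed oversight in context is a judgment for the product, legal, and compliance teams.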

Illinois Takes a Comprehensive Approach to AI in Mental Health

Illinois has enacted even more sweeping restrictions through the Wellness and Oversight for Psychological Resources Act (HB 1806) (WOPRA). Effective August 4, 2025, WOPRA prohibits the use of AI to (1) make independent therapeutic decisions, (2) directly interact with clients in any form of therapeutic communication, or (3) generate therapeutic recommendations or treatment plans without review and approval by a licensed professional.

WOPRA permits the use of AI systems for “administrative or supplementary support,” defined as tasks performed to assist a licensed professional in the delivery of therapy or psychotherapy services not involving therapeutic communication. Such tasks include:

  • Appointment scheduling
  • Processing billing and insurance claims
  • Preparing and maintaining client records (including therapy notes)
  • Analyzing anonymized data to track client progress or identify trends, subject to review by a licensed professional
  • Identifying and organizing external resources or referrals for client use
  • Drafting general communications related to therapy logistics that do not include therapeutic advice or recommendations

Section 25 of WOPRA provides that WOPRA does not apply to religious counseling, peer support, or publicly available self-help materials and educational resources not offering therapy or psychotherapy services. As a result, services delivered for these purposes (including mental health coaching) may fall outside the scope of WOPRA. However, organizations should conduct careful legal analysis to determine whether a particular service qualifies for an exemption.

The law, enforced by the Illinois Department of Financial and Professional Regulation with penalties up to $10,000 per violation, establishes the nation’s first statutory restriction on AI therapy while imposing strict conditions on how licensed professionals may incorporate AI into care delivery. Such financial penalties add to the existing risks of eroding client trust and privacy that may result from misuse of AI in therapeutic contexts.

For healthtech companies and AI providers, these restrictions require a clear separation between administrative and therapeutic functions, robust oversight by licensed professionals, and careful review of product features to enhance compliance. The law may limit the scope of AI-driven mental health solutions in Illinois, necessitating product redesign, enhanced compliance protocols, and ongoing legal review for businesses operating in this space.
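
As a minimal sketch of that separation (the task categories, data model, and approval field below are assumptions for illustration, not machinery prescribed by WOPRA), a product could route AI-drafted content through a gate that releases administrative drafts directly but holds anything therapeutic until a licensed professional has reviewed and approved it:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class TaskType(Enum):
    ADMINISTRATIVE = auto()  # e.g., scheduling, billing, record preparation
    THERAPEUTIC = auto()     # e.g., treatment recommendations, therapy plans

@dataclass
class AIDraft:
    task_type: TaskType
    content: str
    approved_by_license_no: Optional[str] = None  # licensed reviewer, if any

def release_to_client(draft: AIDraft) -> str:
    """Release administrative drafts directly; hold therapeutic drafts
    until a licensed professional has recorded an approval."""
    if draft.task_type is TaskType.ADMINISTRATIVE:
        return draft.content
    if draft.approved_by_license_no is None:
        raise PermissionError(
            "Therapeutic content requires licensed review before release."
        )
    return draft.content
```

The value of a gate like this is the audit trail: every therapeutic output carries a record of the licensed professional who approved it, which supports the review-and-approval requirement described above.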

Nevada Targets AI Providers Offering Mental Health Services

Nevada’s AB 406, signed into law on June 5, 2025, and effective July 1, 2025, prohibits AI providers from offering or programming AI systems to provide services constituting the practice of mental or behavioral healthcare. AB 406 defines “professional mental or behavioral health care” as mental or behavioral healthcare or services relating to the diagnosis, treatment, or prevention of mental illness or emotional or behavioral disorders that are typically provided by a provider of mental or behavioral healthcare within their authorized scope of practice.

Under Nevada’s AB 406, AI providers face clear prohibitions designed to safeguard the delivery of mental and behavioral health care:

  • Prohibition on Misleading Titles or Claims: AI providers may not (1) represent or state that an AI system is capable of providing professional mental or behavioral healthcare; (2) allow users to interact with conversational features that simulate human conversation for the purpose of obtaining professional mental or behavioral healthcare; or (3) use features, avatars, or titles such as therapist, clinical therapist, counselor, psychiatrist, doctor, or any similar term that implies the system is a licensed provider of professional mental or behavioral healthcare.
  • Ban on Direct Care Delivery: AI systems cannot be made available to individuals in Nevada if (1) they are programmed to provide any service or experience constituting the practice of professional mental or behavioral healthcare as if performed by a human provider or (2) the provider makes a representation or statement explicitly or implicitly indicating that such AI system is capable of providing professional mental or behavioral healthcare.
  • Restrictions in Telehealth and Schools: The law extends to telehealth platforms and prohibits public schools from using AI systems to perform the functions of school counselors, psychologists, or social workers related to student mental health. Using AI systems in connection with providing professional mental and behavioral health care directly to a patient is also prohibited.

Like Illinois’ WOPRA, AB 406 permits the use of AI systems designed for use by mental and behavioral health providers to perform administrative support tasks, such as scheduling, managing records, analyzing operational data, and organizing, tracking, and managing files and notes pertaining to students. Additionally, AB 406 does not prohibit any advertisement, statement, or representation for or relating to materials, literature, and other products meant to provide advice and guidance for self-help relating to mental or behavioral health, if the material, literature, or product does not purport to offer or provide professional mental or behavioral healthcare. Thus, as under WOPRA, services delivered for administrative purposes or for purposes such as self-help may fall outside the scope of AB 406. However, companies should conduct a careful analysis to determine whether they qualify for an exemption.

Violations of the Nevada law may result in civil penalties up to $15,000 per instance and disciplinary action for licensed providers.

For healthtech companies and AI providers, Nevada’s AB 406 calls for a clear separation between administrative support functions and any features that could be construed as clinical care, along with careful review of product language and marketing to avoid any implication of professional mental or behavioral health services. The law may restrict the deployment of AI-driven mental health solutions in Nevada, compelling businesses to redesign products, enhance compliance oversight, and regularly consult legal counsel to mitigate the risk of substantial penalties and regulatory action.

Texas’ Approach to AI Disclosure, Oversight, and Utilization by Health Care Providers

Under the Texas Responsible Artificial Intelligence Governance Act (HB 149, TRAIGA), signed into law on June 22, 2025, and effective January 1, 2026, health care providers must disclose to patients or their personal representatives when AI systems are used in diagnosis or treatment. This disclosure must be made before or at the time of interaction in clinical settings, except in emergencies, when it must be provided as soon as reasonably possible. The law is designed to ensure patients are fully informed about the involvement of AI in their care and allow them to make decisions accordingly. TRAIGA also includes a cure period, in which the company has 60 days after the receipt of a written notice of violation from the state attorney general to cure the alleged violation, provide supporting documentation to show the manner in which the violation was cured, and make any necessary changes to internal policies to prevent further such violations. However, it is unclear whether the failure to timely disclose AI use in diagnosis or treatment will be deemed curable under this mechanism.
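
As one hedged illustration of how a product might support that disclosure requirement (the record fields and function names are assumptions for the sketch, not anything TRAIGA prescribes), an application could decline to surface AI-assisted diagnostic content until a disclosure has been recorded for the encounter, with an emergency path that defers the disclosure but still tracks that it is owed:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Encounter:
    patient_id: str
    is_emergency: bool = False
    ai_disclosure_at: Optional[datetime] = None  # when disclosure was made

def record_disclosure(encounter: Encounter) -> None:
    """Record that the patient (or their representative) was told AI is involved."""
    encounter.ai_disclosure_at = datetime.now(timezone.utc)

def can_show_ai_output(encounter: Encounter) -> bool:
    """Allow AI-assisted content only after disclosure, except in emergencies,
    where disclosure is deferred and must follow as soon as reasonably possible."""
    if encounter.is_emergency:
        return True  # disclosure is still owed afterwards; track it separately
    return encounter.ai_disclosure_at is not None
```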

Additionally, SB 1188, effective September 1, 2025, imposes further requirements on providers using AI in diagnostic contexts. Licensed practitioners may use AI to support diagnosis and treatment planning only if all AI-generated records are reviewed to ensure the data is accurate and properly managed. Providers must review any AI-generated recommendations and retain ultimate responsibility for clinical decisions.

Healthtech companies and AI providers should ensure their systems enable clear patient disclosures for diagnosis or treatment, support practitioner oversight, and comply with all licensure and record review requirements under Texas Medical Board standards. These laws may require product modifications, enhanced transparency features, and ongoing collaboration with legal and compliance teams. On August 18, 2025, Texas Attorney General Ken Paxton opened an investigation into AI chatbot platforms for potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools, signaling heightened enforcement risk for companies operating in this space.

Compliance Decision Models

The evolving patchwork of state laws creates a complex compliance landscape for AI deployment in mental health and healthcare. Each statute draws clear boundaries between what is permitted, restricted, or prohibited, often hinging on the AI system’s function and the degree of human oversight. The following decision models may help organizations evaluate the applicability of these laws to their products and practices.

California

  • Does your AI system use professional titles or post-nominal letters (e.g., M.D., D.O., R.N.)? If yes, this is likely prohibited unless licensed oversight is present and may trigger an investigation.
  • Does the AI interface imply professional authority (e.g., icons, tone, terminology)? If yes, this is prohibited unless a licensed professional is involved; each instance could be a separate offense.
  • Is the AI marketed as providing care from licensed professionals? If yes, such misrepresentation is prohibited; marketing language must be carefully vetted.
  • Is there licensed medical oversight for the AI system? If yes, oversight may allow AI use but must be clearly disclosed.
  • Is the AI used only for administrative support (e.g., scheduling)? If yes, this is permitted, but the product must avoid misleading design or terminology.

Illinois 

  • Does your product diagnose, treat, or support mental health conditions? If yes, the product is likely considered therapy and must comply with WOPRA.
  • Does the AI interact with users in an emotional or therapeutic context (e.g., conversations)? If yes, this is likely prohibited unless a licensed provider is directly involved and specific consent is obtained.
  • Is the AI used only for administrative or back-end support with oversight from a licensed professional? If yes, this is permitted but requires informed consent and confidentiality protections.
  • Is the service framed as “wellness” or “health coaching” with no clinical language or claims? If yes, it is possibly outside WOPRA’s scope, but marketing language must be carefully vetted.
  • Are licensed professionals directly supervising the AI’s therapeutic output? If yes, this is permitted, but professional review, documented consent, and confidentiality compliance remain critical.

Nevada 

  • Does your AI system provide or claim to provide mental/behavioral health care? If yes, this is prohibited; violations may result in penalties up to $15,000 per instance.
  • Is the AI used for administrative support (e.g., scheduling, billing)? If yes, this is permitted, but outputs must be independently reviewed for compliance.
  • Is the AI used in telehealth platforms or public schools for counseling? If yes, this is prohibited; schools may only use AI for administrative tasks.
  • Does the AI use titles like “therapist,” “psychiatrist,” or similar? If yes, this is prohibited; misleading titles or claims are not allowed.
  • Are clinical decisions made by AI without human review? If yes, this is prohibited; all decisions must be made by licensed practitioners.

Texas 

  • Was AI used in diagnosis or treatment of a patient? If yes, this is permitted, so long as disclosure to the patient or their representative is made before or at the time of interaction, except in emergencies (when disclosure must follow as soon as reasonably possible).
  • Was the disclosure made clearly and in a timely manner? Disclosure is required; failure to disclose may violate TRAIGA and trigger enforcement actions.
  • Are AI-generated records reviewed by the provider? Review is required; the provider must review all AI outputs per Texas Medical Board standards.
  • Is the AI marketed as a mental health tool without oversight? If yes, this is prohibited and may trigger an investigation for deceptive trade practices.
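
Teams that track these assessments programmatically could encode each table above as a simple checklist that surfaces the relevant note for every “yes” answer. The sketch below uses the Nevada questions as an example; the data structure and function are hypothetical illustrations meant as a triage aid for internal review, not legal advice.

```python
# Hypothetical encoding of the Nevada decision model: each entry pairs a key
# question with the compliance note that applies when the answer is "yes".
NEVADA_CHECKLIST = [
    ("Does the AI system provide or claim to provide mental/behavioral health care?",
     "Prohibited; violations may result in penalties up to $15,000 per instance."),
    ("Is the AI used for administrative support (e.g., scheduling, billing)?",
     "Permitted, but outputs must be independently reviewed for compliance."),
    ("Does the AI use titles like 'therapist,' 'psychiatrist,' or similar?",
     "Prohibited; misleading titles or claims are not allowed."),
    ("Are clinical decisions made by AI without human review?",
     "Prohibited; all decisions must be made by licensed practitioners."),
]

def triage(answers: list[bool]) -> list[str]:
    """Return the compliance notes triggered by 'yes' answers."""
    return [note for (_, note), yes in zip(NEVADA_CHECKLIST, answers) if yes]

# Example: a product with a "therapist" persona that only handles scheduling.
for note in triage([False, True, True, False]):
    print(note)
```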

Strategic Considerations for Product Development and What’s Next for Healthcare AI Regulation

Given these regulatory developments, healthtech companies and AI providers should consider embedding compliance considerations into early-stage product design rather than treating them as post-launch modifications. Companies should consider conducting comprehensive audits to classify all AI tools as administrative, supplementary, or potentially therapeutic, and implementing geofencing controls to disable prohibited features for users in regulated states.
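
A minimal sketch of such a geofencing control, assuming the product already resolves a user's U.S. state (the state-to-feature mapping and feature names below are illustrative placeholders, not a legal determination of what each statute allows):

```python
# Illustrative mapping of features a product might disable by state; the
# actual mapping should be set with counsel, not inferred from this sketch.
RESTRICTED_FEATURES_BY_STATE = {
    "IL": {"ai_therapeutic_chat", "ai_treatment_planning"},
    "NV": {"ai_therapeutic_chat", "ai_mental_health_persona"},
    "CA": {"licensed_provider_branding"},
    "TX": set(),  # no feature disabled here, but the disclosure flow must still run
}

def feature_enabled(feature: str, user_state: str) -> bool:
    """Return False when the user's state restricts the feature.
    States not listed are treated as unrestricted in this sketch; a real
    rollout would decide that default with counsel."""
    restricted = RESTRICTED_FEATURES_BY_STATE.get(user_state, set())
    return feature not in restricted

# Example: a Nevada user requesting the therapeutic chat feature.
print(feature_enabled("ai_therapeutic_chat", "NV"))  # False
```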

The emphasis on clear disclosure and transparency creates opportunities for companies to build competitive advantages through proactive compliance. Healthtech companies and products that clearly communicate AI capabilities and limitations, implement explainable decision pathways, and engage licensed practitioners in development processes may find stronger market acceptance as regulatory scrutiny intensifies.

The future belongs not to the fastest AI innovators, but to those who earn and maintain public trust through responsible development and deployment practices.