Life sciences companies racing to harness artificial intelligence often stumble into preventable traps that can derail innovation and create lasting legal exposure.
For their webinar “Top Mistakes to Avoid When Building an AI Policy for Life Sciences Companies,” Fenwick’s Pinar Bailey, Sari Ratican, Jennifer Yoo, and Fredrick Tsang dove into four hypothetical situations to uncover critical mistakes that life sciences companies can make when it comes to AI policies.
Drawing from their conversation, here are the 10 mistakes to avoid when developing your life sciences company’s AI governance framework.
1. Misunderstanding AI inventorship. A common IP misstep is misunderstanding who qualifies as an inventor when AI assists in drug discovery. Simply adding manual testing steps after an AI system generates novel compounds is unlikely to establish human inventorship: the human contribution must typically occur at conception, not at reduction to practice.
2. Treating all AI tools as carrying the same risk. Companies frequently fail to assess the risk profile of each AI tool individually. Different tools carry different risks and warrant a nuanced approach to managing the associated IP and compliance exposure. For example, an AI tool that generates antibody sequences poses vastly different patent risks than one that predicts protein folding. Understanding these distinctions upfront helps shape sensible documentation and AI governance policies that accelerate, rather than hinder, the company's AI progress. A one-pager for each model can help companies manage AI tools individually and effectively.
3. Assuming third parties have de-identified data properly. An expensive data governance error is trusting that third parties have properly de-identified patient health information. Companies bear responsibility for verifying that the data meets applicable legal standards, whether HIPAA's Safe Harbor method or its expert determination requirements.
4. Letting approved tools drift into new uses. Novel AI tools inherently invite new applications that may not have been contemplated when legal risks were evaluated at adoption. Without a model-based governance framework, a tool approved for one use may quietly expand into uses with substantially different risk profiles. For example, a system approved for analyzing public documents can become a compliance nightmare when researchers later upload sensitive personal data or trade secrets to the same platform.
5. Overlooking vendor training-data clauses. Some AI service agreements include provisions allowing vendors to use client data for model training. These clauses may seem harmless for public documents but can create significant trade-secret exposure when applied to proprietary or sensitive data.
6. Treating governance as a one-time exercise. Companies that treat AI governance as a one-time exercise rather than an evolving framework inevitably fall behind rapidly changing regulatory requirements. Successful policies adapt to new U.S. Food and Drug Administration (FDA) guidance and emerging legal precedents.
7. Developing policy in a silo. AI governance requires input from legal, IT, R&D, security, and compliance teams. Companies that limit policy development to a single department miss critical risk factors and implementation challenges.
8. Getting documentation wrong in both directions. Poor documentation practices create dual problems: weakened patent positions and compliance gaps. Companies need clear protocols for recording human contributions to AI-assisted inventions while meeting regulatory documentation requirements for device submissions. At the same time, documentation practices should be designed pragmatically so they do not overburden research and operations.
9. Skipping employee training. Even comprehensive policies fail without proper training. Employees should understand the critical differences between data types, the implications of using AI tools beyond approved parameters, and the importance of following established governance procedures.
10. Ignoring draft regulatory guidance. Guidance from regulatory bodies such as the FDA is often issued in draft form, but disregarding draft requirements on the assumption that they can be addressed later can prove costly. That approach may force expensive redesigns when final guidance emerges, delaying market entry and increasing development costs.
The stakes for AI governance continue rising as regulators intensify scrutiny and competitors advance their capabilities. Companies that proactively address these common missteps position themselves as trusted partners capable of navigating complex regulatory landscapes while maintaining innovation momentum. Without a proper governance framework, companies may face mounting legal exposure and irreversible damage to their IP and clinical trials.
Building resilient AI governance requires embracing a collaborative approach that balances innovation with risk management. In formulating an AI governance framework, companies should weigh practical factors, such as the burden on governance teams and employees, while protecting valuable intellectual property and maintaining regulatory compliance.
Watch the full webinar.