Top 10 Considerations for In-House Counsel on Privacy and Data Protection Concerns with AI:
- Know your legal role: Privacy laws vary by jurisdiction, and your obligations when using AI with personal information depend on your legal role: for example, whether you act as a controller or processor in Europe, or as a covered business or service provider in California.
- Be transparent: Update privacy policies/notices frequently with the required disclosures to communicate key aspects of your organization’s AI use, especially concerning automated decision-making.
- Know the source of your training data: Regulators have required the destruction of AI/ML models built on improperly obtained data.
- Understand that AI can transform non-personal information into personal information: Non-personal information, when processed collectively by AI, could produce an output that qualifies as personal information, triggering potential legal obligations.
- Maintaining AI models is a continuing obligation: Hallucinations and/or biased outputs stemming from incomplete training data or poorly designed models may be alleged to be unfair or deceptive business practices.
- Expectations of “reasonable” security are evolving: Regulators are hiring technologists to assess and modernize the duty of “reasonable” care, which may evolve more quickly than your product design cycle.
- Implement an AI governance program: Properly conducted privacy/data protection impact assessments, informed by regulators’ guidance, will become the norm before product/feature development.
- Honor individual privacy rights: Laws will continue to provide individuals with rights such as the right to opt out, correct, or delete, which may be challenging to honor given AI model constraints.
- Know how to work with vendors: Depending on your role in the AI ecosystem, jurisdictions may impose specific data protection obligations on you, and a software bill of materials may become a new requirement (especially when contracting with the government).
- Develop internal controls and employee training: Employees/contractors may be the biggest risk to the security of an AI system or the catalyst for inadvertent leakage of confidential information.