Evaluating AI Risks: What Companies Need to Know

By: Heidi Lawson, Stuart P. Meyer

What You Need To Know

  • Artificial intelligence (AI) models have a variety of quirks and capabilities depending on how they were trained and how they are used.
  • Understanding these differences is crucial to mitigating the risks that will inevitably arise when integrating AI into your business.
  • Choosing the right model for the right job, and being vigilant about course-correcting when problems do emerge, will help you protect your business.

Artificial intelligence (AI) has become a vital part of modern business, but it does not come without risks. At a high level, those risks depend largely on the model's complexity, the sensitivity and type of data it uses, and how the model is deployed or integrated within business processes.

AI risk profiles vary from model to model. To help users gauge those risk factors, below are a few takeaways from a recent webinar that partners Heidi Lawson and Stuart Meyer presented with Armilla AI and WTW on the risks you may encounter:

  • Bias in AI models. AI systems are trained on large datasets that may reflect real-world biases, creating the possibility that they could discriminate unfairly or disproportionately impact certain groups. Regulatory provisions are emerging to address this issue, with regulations like those in New York now requiring that models be tested for bias before deployment.
  • Copyright issues. AI's extensive use of online text for training models raises questions about potential copyright infringement. Both training inputs and system outputs could infringe copyrights if they closely mimic existing works, and current lawsuits are exploring whether such uses constitute infringement or fair use of copyrighted works.
  • Model accuracy drift over time. From a technical perspective, AI models have inherent uncertainties due to their probabilistic nature, and even highly accurate systems will sometimes make mistakes. Risks can increase over time as models are retrained and their decisions drift. Ongoing monitoring can detect these discrepancies and catch issues before they affect people, enabling prompt mitigation.

Risk profiles also change with different use cases, such as decision support systems and customer-facing applications. It’s important to take a holistic approach based on your unique use case, rather than a one-size-fits-all strategy. Businesses can consider the following to manage and mitigate risks:

  • Conduct thorough risk assessments of all AI systems and use cases, tailored to the specific model, data, and deployment context.
  • Establish strong governance and oversight of AI projects. Involve top management and designate people responsible for risk management.
  • Audit models for biases, inaccuracies, and other issues on an ongoing basis as data and conditions change over time.
  • Follow established frameworks and standards, such as those from the International Organization for Standardization (ISO) and the National Institute of Standards and Technology (NIST), to guide responsible development and use of AI.
  • Consider specialized model architectures, training techniques, and use constraints for particularly high-risk applications.

For a deeper dive into considerations for evaluating AI risks, watch the full webinar recording here.