FTC Takes Aim at Deceptive AI Practices

By: Vejay Lalla, Su Li, Ph.D., Zach Harned

In February 2023, the Federal Trade Commission (FTC) issued guidance urging companies to keep their artificial intelligence (AI) claims in check and not to exaggerate the capabilities of their AI products or technology. We provided an analysis of this guidance, and the FTC has since expanded on it: in March 2023, the agency issued follow-up guidance asserting that the original guidance dealt with the “fake AI problem,” while the newer guidance deals with the “AI-fake problem.”

This AI-fake problem is closely tied to the issue of deepfakes, such as the viral Pope Francis puffer coat image. Deepfakes have become easier to create and increasingly convincing over the last few years, and they span multiple modalities: the synthetic media can take the form of images, video, or even audio, with the FTC specifically calling out risks related to “voice clones.” The FTC cites the risk of such synthetic media being used deceptively in spear-phishing emails, fake websites, fake posts, fake profiles, and fake consumer reviews, as well as to create malware or ransomware, mount prompt-injection attacks, and facilitate imposter scams, extortion, and financial fraud. Synthetic media can also be used legitimately in the entertainment industry, although such uses raise legal complexities that require careful consideration.

The FTC warns companies to consider the following four issues if they plan to develop a generative AI product capable of creating synthetic media and deepfakes.

  1. Generative AI developers should consider, at the design stage, the risk of the product being misused for fraud or causing other harm. If the risk is too high, the company should not make or sell the product.
  2. AI developers should actively work to mitigate the risk of deepfakes and misused synthetic media. More than just warning their customers about such misuses, companies need to create built-in deterrence features that are not easily circumvented (see the sketch after this list).
  3. AI developers should not place the burden on consumers to identify the ways in which an AI product may be misused (whether against themselves or others). Instead, the developer should attempt to detect such potential misuses before the product is released into the wild, where it could expose consumers to harm.
  4. Advertisers should be careful not to use synthetic media to mislead people about what they are seeing, hearing, or reading. Misleading consumers via synthetic doppelgängers or mimetic models, including fake dating profiles or phony followers, can lead (and has led) to FTC enforcement actions, so both advertisers and AI developers should tread carefully.
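
To make the “built-in deterrence” idea in item 2 concrete, below is a minimal Python sketch (our illustration, not a technique the FTC prescribes) of an invisible watermark stamped into every generated image so that downstream tools can flag it as synthetic. A simple least-significant-bit (LSB) mark like this one is trivially stripped, so on its own it would not meet the “not easily circumvented” bar; production systems rely on more robust schemes, such as cryptographically signed C2PA content credentials.

```python
# Illustrative only: a toy LSB watermark marking an image as AI-generated.
# MARK and the file names below are hypothetical placeholders.
import numpy as np
from PIL import Image

MARK = "SYNTHETIC"  # hypothetical provenance tag

def embed_watermark(img: Image.Image, mark: str = MARK) -> Image.Image:
    """Hide `mark` in the least-significant bits of the red channel."""
    bits = [int(b) for byte in mark.encode() for b in f"{byte:08b}"]
    arr = np.array(img.convert("RGB"))
    red = arr[..., 0].flatten()
    red[: len(bits)] = (red[: len(bits)] & 0xFE) | bits  # overwrite LSBs
    arr[..., 0] = red.reshape(arr.shape[:2])
    return Image.fromarray(arr)

def detect_watermark(img: Image.Image, mark: str = MARK) -> bool:
    """Return True if the expected tag is recoverable from the LSBs."""
    n_bits = len(mark.encode()) * 8
    red = np.array(img.convert("RGB"))[..., 0].flatten()
    bits = (red[:n_bits] & 1).astype(int)
    recovered = bytes(
        int("".join(map(str, bits[i : i + 8])), 2) for i in range(0, n_bits, 8)
    )
    return recovered == mark.encode()

# Example usage (hypothetical file names):
# stamped = embed_watermark(Image.open("generated.png"))
# stamped.save("generated_stamped.png")
# assert detect_watermark(Image.open("generated_stamped.png"))
```

Note that even this toy mark only survives lossless formats such as PNG; a single round of JPEG compression destroys it, which is precisely why robust, tamper-resistant watermarking remains an active engineering problem for generative AI developers.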
