Partnership on AI Publishes Framework for Responsible Generative AI Practices

Generative artificial intelligence (AI) and the synthetic media it can produce are all the rage these days, but many questions linger about how to responsibly develop, use and host such tools and content to avoid public relations backlash and minimize liability risk. The Partnership on AI (PAI) recently released its Responsible Practices for Synthetic Media framework, offering guidance on how to responsibly develop, create and share synthetic media, which includes the audio, visual and multimodal outputs of generative AI models. The framework recommends a number of actions, centered on themes of transparency and proper disclosure, for three distinct groups of stakeholders in generative AI technology.

For those building generative AI models and their supporting infrastructure: (1) be transparent with users about the capabilities, limitations and potential risks of the technology; (2) provide disclosure mechanisms to creators who might use your generative AI technology (e.g., via provenance standards such as C2PA) and build the components of the AI system in a manner that facilitates such disclosure; and (3) publish a publicly accessible policy regarding the restricted uses of your synthetic media and generative AI tools.
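To make the disclosure idea concrete, the sketch below writes a simple JSON "sidecar" file declaring that a media asset is synthetic. This is an illustrative, hypothetical schema of our own devising, not the C2PA format: real C2PA manifests are cryptographically signed structures embedded or bound to the asset, whereas this sketch only captures the kind of information such a disclosure might carry.

```python
import json
from datetime import datetime, timezone

def write_disclosure_sidecar(media_path: str, generator: str, model: str) -> str:
    """Write a JSON sidecar declaring that a media asset is synthetic.

    Hypothetical schema for illustration only -- this is NOT a C2PA
    manifest, which is a signed, tamper-evident structure rather than
    plain JSON.
    """
    manifest = {
        "asset": media_path,          # the media file being described
        "is_synthetic": True,         # the core disclosure
        "generator": generator,       # tool or service that produced it
        "model": model,               # underlying model name (assumed field)
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    sidecar_path = media_path + ".disclosure.json"
    with open(sidecar_path, "w") as f:
        json.dump(manifest, f, indent=2)
    return sidecar_path
```

A sidecar like this could travel with the asset so that downstream creators and distributors can detect and surface the disclosure; a production system would instead bind the claim cryptographically to the file, as the C2PA specification does.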

For creators using generative AI to make synthetic media: (1) be transparent regarding how you obtained proper informed consent from the subjects portrayed or utilized in your synthetic media (unless used for reasonable artistic, satirical or expressive purposes); (2) publicly provide your policy for approaching the ethical issues raised by synthetic media, and comport with such policy when generating synthetic media; and (3) make robust and accurate disclosures that the media is synthetic in nature, with the goal of limiting speculation about the content.

For distributors and publishers of synthetic media: (1) disclose when you are confident that hosted content is synthetic in nature; (2) provide a publicly accessible policy that outlines your approach to synthetic media, and consistently adhere to and enforce this policy; (3) take quick action to mitigate the harm from your distribution of any harmful synthetic media when it comes to your attention; and (4) attempt to educate potential consumers regarding synthetic media, including the permissible types of synthetic media that may be created or shared on your platform.

This framework by PAI dovetails nicely with recent research taxonomizing various mitigations that may be taken by those designing such generative AI models (e.g., use of radioactive data to make the models detectable), by those making such AI models accessible (e.g., imposing certain restrictions on use of the models), and by those disseminating synthetic media (e.g., requiring “proof of personhood” in order to post content).

Read more AI news and insights here.

