
White House Announces Voluntary AI Governance Commitments from Seven Leading Companies

On July 21, the White House announced that seven leading AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) have agreed to make voluntary commitments around three key areas of their AI systems: safety, security, and trust (the "AI Commitments"). Notably, these companies agreed to undertake several actions to mitigate risks and increase the transparency of their AI systems, including: significant new security testing, treating model weights as core intellectual property, developing mechanisms such as watermarking to identify AI-generated audio or visual content, and publishing "transparency reports" that detail system capabilities, limitations, and domains of appropriate and inappropriate use.

The AI Commitments are generally framed to be forward-looking, and will apply only to the next generation of generative AI models. Specifically, the AI Commitments document states that "where commitments mention particular models, they apply only to generative models that are overall more powerful than the current industry frontier," which means models that are more powerful than any currently released models, including GPT-4, Claude 2, PaLM 2, Titan, and, in the case of image generation, DALL-E 2.

Because the commitments are voluntary, the announcement includes no enforcement or compliance mechanism to ensure that the companies adhere to them. However, because the companies have made the commitments publicly, failing to adhere to the AI Commitments could expose them to the Federal Trade Commission's general Section 5 authority to enforce the prohibition on unfair and deceptive acts or practices. The FTC has previously found that a company's failure to adhere to certain public promises or statements about its operations may be an unfair or deceptive practice in violation of Section 5, and it has warned that it will carefully monitor the public statements of generative AI companies.

Looking ahead, the announcement also stated that the Administration is developing an executive order, presumably on similar AI governance issues, and will pursue bipartisan legislation on responsible AI development. The AI Commitments will remain in effect until regulations "covering substantially the same issues" are implemented.

This announcement further demonstrates the Administration's prioritization of efforts to ensure the responsible development of AI systems. Two federal agencies have issued recent requests for information about AI governance issues (the National Telecommunications and Information Administration and the White House Office of Science and Technology Policy), the FTC is actively engaged in rulemaking and enforcement activities, and the White House, in May of this year, announced several initiatives to support the development of a National Artificial Intelligence Strategy.

The AI Commitments include eight specific obligations, grouped into the categories of safety, security, and trust of AI systems. Each commitment is set out in detail below.

Safety

  • The companies commit to internal and external red teaming of models or systems in areas including misuse, societal risks, and national security concerns, such as bio, cyber, and other safety areas. ("Red teaming" refers to the process of testing the safety and security of systems via adversarial processes and techniques intended to exploit any identified deficiencies in the system.)
  • The companies will work to enhance information sharing regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards by establishing a mechanism through which they can develop shared standards and best practices for frontier AI safety.

Security

  • The companies will invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.

    As part of this commitment, the companies will treat unreleased AI model weights for models in scope as "core intellectual property" for their business, will limit access to model weights to those whose job function requires access, and will establish a robust insider threat detection program consistent with protections provided for their most valuable intellectual property and trade secrets.

  • The companies will incentivize third-party discovery and reporting of issues and vulnerabilities, including by establishing bug bounty systems for responsible disclosure of weaknesses.

Trust

  • The companies will develop and deploy mechanisms, such as watermarking, that enable users to determine whether audio or visual content is AI-generated.
  • The companies will publish "transparency reports" detailing their AI systems' capabilities, limitations, and domains of appropriate and inappropriate use.

Conclusion

The White House has indicated that it intends to use a number of policy-making approaches to address concerns around the development of responsible AI. However, binding and enforceable new requirements for companies developing and deploying AI systems in the US, whether in the form of executive orders, agency regulations, or legislation, will take time to be finalized. In the meantime, the AI Commitments provide some measure of accountability and may serve as a framework for future government action geared toward ensuring the safe and trustworthy development of AI systems.

DWT's AI Team regularly advises clients on the rapidly evolving AI regulatory landscape and will continue to monitor developments in US and international policy decisions regarding AI governance.