Apple Signs White House AI Safeguards

Apple's AI event at WWDC in June 2024. Image credit: Apple

Apple signs up to the White House Voluntary AI Safeguards programme, joining 15 other major companies in agreeing to safety and transparency measures

Apple has signed up for the Joe Biden administration’s voluntary AI safeguards that seek to ensure the emerging technology is developed in a safe and secure way, the White House said.

Apple joins 15 other companies that have signed the Voluntary AI Safeguards over the past year.

The initial group of companies signing onto the programme in July 2023 included Amazon, Anthropic, Google, AI firm Inflection, Meta, Microsoft and OpenAI, developer of the ChatGPT chatbot.

The administration said at the time it had secured “voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology”.

The White House. Image credit: US government

Watermarks

The move came amid ongoing concern from experts, regulators and governments over the potential misuse of AI technologies in the years ahead.

In September a further eight firms signed the commitments, including Adobe, IBM and AI accelerator chip maker Nvidia.

Companies that are developing these emerging technologies have a responsibility to ensure their products are safe, the White House said.

It said the commitments underscore three principles that must be fundamental to the future of AI – safety, security, and trust – and mark a critical step toward developing responsible AI.

Among the measures companies agree to is the use of watermarks on AI-generated content such as text, images, audio and video, amidst concern that deepfake content can be utilised for fraudulent and other criminal purposes.

Companies also commit to internal and external security testing before the release of their AI systems and to publicly reporting their AI systems’ capabilities.

AI safety

In October the administration released a wide-ranging executive order on AI that, amongst other measures, obliged companies developing the most powerful models to submit regular security reports to the federal government.

The 111-page document built on an AI “Bill of Rights” issued in late 2022 that similarly sought to address some of the technology’s main potential drawbacks while pushing to explore its benefits.

Last November the UK hosted the first AI Safety Summit, which issued an international declaration that “for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible”.

Tens of thousands of Hollywood video game actors began a strike on Friday over concerns that generative AI could be used to put them out of work.