Apple Signs White House AI Safeguards

Apple has signed up for the Joe Biden administration’s voluntary AI safeguards that seek to ensure the emerging technology is developed in a safe and secure way, the White House said.

Apple joins 15 other companies that have signed the voluntary AI safeguards over the past year.

The initial group of companies signing onto the programme in July 2023 included Amazon, Anthropic, Google, AI firm Inflection, Meta, Microsoft and OpenAI, developer of the ChatGPT chatbot.

The administration said at the time it had secured “voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology”.

Image credit: US government

Watermarks

The move came amid ongoing concern from experts, regulators and governments over the potential misuse of AI technologies in the years ahead.

In September a further eight firms signed the commitments, including Adobe, IBM and AI accelerator chip maker Nvidia.

Companies that are developing these emerging technologies have a responsibility to ensure their products are safe, the White House said.

It said the commitments underscore three principles that must be fundamental to the future of AI – safety, security, and trust – and mark a critical step toward developing responsible AI.

Among the measures companies agree to is the use of watermarks on AI-generated content such as text, images, audio and video, amid concern that deepfake content could be used for fraud and other criminal purposes.

Companies also commit to internal and external security testing before the release of their AI systems and publicly reporting their AI systems’ capabilities.

AI safety

In October 2023 the administration released a wide-ranging executive order on AI that, amongst other measures, obliged companies developing the most powerful models to submit regular security reports to the federal government.

The 111-page document built on an AI “Bill of Rights” issued in late 2022 that similarly sought to address some of the technology’s main potential drawbacks while pushing to explore its benefits.

Last November the UK hosted the first AI Safety Summit which issued an international declaration that “for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible”.

Separately, tens of thousands of Hollywood video game performers began a strike on Friday over concerns that generative AI could be used to put them out of work.

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications
