Apple Signs White House AI Safeguards

Apple has signed up for the Joe Biden administration’s voluntary AI safeguards that seek to ensure the emerging technology is developed in a safe and secure way, the White House said.

Apple joins 15 other companies that have signed up to the voluntary AI safeguards over the past year.

The initial group of companies signing onto the programme in July 2023 included Amazon, Anthropic, Google, AI firm Inflection, Meta, Microsoft and OpenAI, developer of the ChatGPT chatbot.

The administration said at the time it had secured “voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology”.

Image credit: US government

Watermarks

The move came amid ongoing concern from experts, regulators and governments over the potential misuse of AI technologies in the years ahead.

In September a further eight firms signed the commitments, including Adobe, IBM and AI accelerator chip maker Nvidia.

Companies that are developing these emerging technologies have a responsibility to ensure their products are safe, the White House said.

It said the commitments underscore three principles that must be fundamental to the future of AI – safety, security, and trust – and mark a critical step toward developing responsible AI.

Among the measures companies agree to is the use of watermarks on AI-generated content such as text, images, audio and video, amid concern that deepfake content can be used for fraudulent and other criminal purposes.

Companies also commit to internal and external security testing before the release of their AI systems and publicly reporting their AI systems’ capabilities.

AI safety

In October the administration released a wide-ranging executive order on AI that amongst other measures obliged companies developing the most powerful models to submit regular security reports to the federal government.

The 111-page document built on an AI “Bill of Rights” issued in late 2022 that similarly sought to address some of the technology’s main potential drawbacks while pushing to explore its benefits.

Last November the UK hosted the first AI Safety Summit which issued an international declaration that “for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible”.

Tens of thousands of Hollywood video game actors began a strike on Friday over concerns that generative AI could be used to put them out of work.

Matthew Broersma

Matt Broersma is a long-standing technology freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
