Elon Musk’s well-documented concerns about artificial intelligence (AI) have not stopped the world’s richest man from launching an AI startup.

The startup, known as xAI, was founded on Wednesday by Musk. According to its website, “the goal of xAI is to understand the true nature of the universe.”

Musk will be CEO of the firm, alongside his other leadership roles at Tesla, SpaceX, and Twitter. According to the website, xAI is “a separate company from X Corp, but will work closely with X (Twitter), Tesla, and other companies to make progress towards our mission.”

Image credit: Pexels

xAI startup

“Our team is led by Elon Musk, CEO of Tesla and SpaceX,” said the website, before naming some of its other staff, including former employees of OpenAI, Google DeepMind and Microsoft.

“We have previously worked at DeepMind, OpenAI, Google Research, Microsoft Research, Tesla, and the University of Toronto,” the website states. “Collectively we contributed some of the most widely used methods in the field, in particular the Adam optimiser, Batch Normalisation, Layer Normalisation, and the discovery of adversarial examples.”

“We further introduced innovative techniques and analyses such as Transformer-XL, Autoformalisation, the Memorising Transformer, Batch Size Scaling, and μTransfer,” said the website. “We have worked on and led the development of some of the largest breakthroughs in the field including AlphaStar, AlphaCode, Inception, Minerva, GPT-3.5, and GPT-4.”

xAI also said it is “actively recruiting experienced engineers and researchers to join our team as members of our technical staff in the Bay Area.”

The firm said it will hold a Twitter Spaces event on 14 July.

Vocal critic

Musk has made clear his thoughts about AI on multiple occasions, and in 2014 described AI as an “existential threat”.

“I think we should be very careful about artificial intelligence,” he told an audience at the Massachusetts Institute of Technology (MIT) AeroAstro Centennial Symposium in October 2014. “If I had to guess at what our biggest existential threat is, it’s probably that.”

He compared the construction of artificial minds to the calling up of unknown forces that could easily go beyond the inventor’s control.

“With artificial intelligence we’re summoning the demon,” he said at the time. “You know those stories where there’s the guy with the pentagram, and the holy water, and … he’s sure he can control the demon? Doesn’t work out.”

Also in 2014, Musk stated that he believed AI posed a real threat to humans if left unchecked, and tweeted that artificial intelligence could evolve to be “potentially more dangerous than nukes”.

Elon Musk joined other well-known figures at the time, such as Steve Wozniak and the late Professor Stephen Hawking, in warning about the dangers of AI.

Indeed, Professor Hawking warned that artificial intelligence could spell the end of life as we know it on Planet Earth, and predicted that humanity has just 100 years left before the machines take over.

In 2015 Musk donated $10 million to the Future of Life Institute (FLI) – a non-profit advisory board dedicated to weighing up the potential of AI technology to benefit humanity.

Also in 2015, OpenAI was founded, with Musk serving as one of its initial board members.

However, in 2018 Musk resigned from the OpenAI board and no longer has any involvement with the AI pioneer. Indeed, he has been critical of the company and warned it has strayed from its initial goals.

More recently AI researchers and tech industry figures including Elon Musk signed an open letter in March 2023 warning that AI systems can pose “profound risks to society and humanity”.

The open letter, also signed by Apple co-founder Steve Wozniak, called for a “pause” on the development of cutting-edge AI systems for at least six months to “give society a chance to adapt”.

Then in April Musk said he would launch TruthGPT, a “maximum truth-seeking AI” that tries to understand the nature of the universe, to rival Google’s Bard and Microsoft’s Bing AI.

Safer AI

Fast forward three months, and Reuters reported that in a Twitter Spaces event on Wednesday evening, Musk explained his plan for building a safer AI.

Rather than explicitly programming morality into its AI, xAI will seek to create a “maximally curious” AI, Musk reportedly said.

“If it tried to understand the true nature of the universe, that’s actually the best thing that I can come up with from an AI safety standpoint,” Musk reportedly said. “I think it is going to be pro-humanity from the standpoint that humanity is just much more interesting than not-humanity.”

Musk also predicted that superintelligence, or AI that is smarter than humans, will arrive in five or six years.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
