Elon Musk’s well-documented concerns about artificial intelligence (AI) have not stopped the world’s richest man from launching an AI startup.

The startup, known as xAI, was founded on Wednesday by Musk. According to its website, “the goal of xAI is to understand the true nature of the universe.”

Musk will be CEO of the firm, alongside his other leadership roles at Tesla, SpaceX, and Twitter. According to the website, xAI is “a separate company from X Corp, but will work closely with X (Twitter), Tesla, and other companies to make progress towards our mission.”


xAI startup

“Our team is led by Elon Musk, CEO of Tesla and SpaceX,” said the website, before naming some of its other staff, including former employees of OpenAI, Google DeepMind and Microsoft.

“We have previously worked at DeepMind, OpenAI, Google Research, Microsoft Research, Tesla, and the University of Toronto,” the website states. “Collectively we contributed some of the most widely used methods in the field, in particular the Adam optimiser, Batch Normalisation, Layer Normalisation, and the discovery of adversarial examples.”

“We further introduced innovative techniques and analyses such as Transformer-XL, Autoformalisation, the Memorising Transformer, Batch Size Scaling, and μTransfer,” said the website. “We have worked on and led the development of some of the largest breakthroughs in the field including AlphaStar, AlphaCode, Inception, Minerva, GPT-3.5, and GPT-4.”

xAI also said it is “actively recruiting experienced engineers and researchers to join our team as members of our technical staff in the Bay Area.”

The firm said it will hold a Twitter Spaces event on 14 July.

Vocal critic

Musk has made clear his thoughts about AI on multiple occasions, and in 2014 described AI as an “existential threat”.

“I think we should be very careful about artificial intelligence,” he told an audience at the Massachusetts Institute of Technology (MIT) AeroAstro Centennial Symposium in October 2014. “If I had to guess at what our biggest existential threat is, it’s probably that.”

He compared the construction of artificial minds to the calling up of unknown forces that could easily go beyond the inventor’s control.

“With artificial intelligence we’re summoning the demon,” he said at the time. “You know those stories where there’s the guy with the pentagram, and the holy water, and … he’s sure he can control the demon? Doesn’t work out.”

Also in 2014, Musk stated that he believed AI posed a real threat to humans if left unchecked, and tweeted that artificial intelligence could evolve to be “potentially more dangerous than nukes”.

Musk joined other well-known figures at the time, such as Steve Wozniak and the late Professor Stephen Hawking, in warning about the dangers of AI.

Indeed, Professor Hawking warned that artificial intelligence could spell the end of life as we know it on Planet Earth, and predicted that humanity had just 100 years left before the machines take over.

In 2015 Musk donated $10 million to the Future of Life Institute (FLI) – a non-profit dedicated to weighing up the potential of AI technology to benefit humanity.

Also in 2015, OpenAI was founded, with Musk serving as an initial board member.

But in 2018 Musk resigned from the OpenAI board and has since cut ties with the AI pioneer. Indeed, he has been critical of the company and has warned that it has strayed from its initial goals.

More recently AI researchers and tech industry figures including Elon Musk signed an open letter in March 2023 warning that AI systems can pose “profound risks to society and humanity”.

The open letter, also signed by Apple co-founder Steve Wozniak, called for a “pause” on the development of cutting-edge AI systems for at least six months to “give society a chance to adapt”.

Then in April Musk said he would launch TruthGPT, a “maximum truth-seeking AI” that tries to understand the nature of the universe, to rival Google’s Bard and Microsoft’s Bing AI.

Safer AI

Fast forward three months, and Reuters reported that in a Twitter Spaces event on Wednesday evening, Musk explained his plan for building a safer AI.

Rather than explicitly programming morality into its AI, xAI will seek to create a “maximally curious” AI, Musk reportedly said.

“If it tried to understand the true nature of the universe, that’s actually the best thing that I can come up with from an AI safety standpoint,” Musk reportedly said. “I think it is going to be pro-humanity from the standpoint that humanity is just much more interesting than not-humanity.”

Musk also predicted that superintelligence, or AI that is smarter than humans, will arrive in five or six years.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long-standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
