DeepSeek Says Open Source AI Image Model Beats OpenAI, Stability

Disruptive Chinese AI start-up DeepSeek has launched a family of image generation models that it says can perform better than those from better-funded rivals such as OpenAI and Stability AI.

The models in the Janus-Pro family range from 1 billion to 7 billion parameters, a rough measure of a model’s size and capability.

They are available under the permissive MIT licence, meaning they can be used commercially with few restrictions.

The models, which can both analyse and generate new images, performed better than OpenAI’s DALL-E 3 on benchmarks such as GenEval and DPG-Bench, DeepSeek said in a technical paper published on Monday.

Outputs from DeepSeek's latest Janus artificial intelligence (AI) image models. Image credit: DeepSeek

Image generation

The DeepSeek models also beat competitors such as PixArt-alpha, Emu3-Gen and Stability AI’s Stable Diffusion XL.

“Janus-Pro surpasses previous unified model and matches or exceeds the performance of task-specific models,” the company said in a post on AI developer platform Hugging Face.

“The simplicity, high flexibility, and effectiveness of Janus-Pro make it a strong candidate for next-generation unified multimodal models.”

Janus-Pro is an update to the Janus model initially released in late 2024.

DeepSeek last week released an update to its AI chatbot model that drove its app to the top of the free iPhone download charts in the US on Monday, supplanting OpenAI’s ChatGPT.

The unexpected development roiled technology stocks around the world as investors questioned the huge investments companies have made into AI over the past two years.

DeepSeek’s app surged in popularity after the AI lab launched its latest reasoning model, R1, on 20 January.

‘Sputnik moment’

The little-known start-up, whose staff are mostly fresh university graduates, says the performance of R1 matches OpenAI’s o1 series of models.

The company said it spent only $5.6 million (£4.5m) training its base model, compared to the hundreds of millions or billions of dollars US companies have typically spent developing their models.

DeepSeek said in a technical report that it trained its V3 model using a cluster of more than 2,000 Nvidia chips, compared to the tens of thousands of such chips typically used to train a model of similar scale.

Marc Andreessen, a supporter of US president Donald Trump and a leading tech investor, called DeepSeek’s R1 “one of the most amazing and impressive breakthroughs I’ve ever seen”.

He called it a “Sputnik moment”, in reference to the Soviet Union’s launch of the Sputnik satellite in 1957.

Matthew Broersma

Matt Broersma is a long-standing freelance technology journalist who has worked for Ziff-Davis, ZDNet and other leading publications.
