DeepSeek Says Open Source AI Image Model Beats OpenAI, Stability

Disruptive Chinese AI start-up DeepSeek has launched a family of image generation models that it says can perform better than those from better-funded rivals such as OpenAI and Stability AI.

The models in the Janus-Pro family range from 1 billion to 7 billion parameters, a rough measure of a model's size and capability.

They are available under the permissive MIT licence, meaning they can be used commercially with few restrictions.

The models, which can both analyse and generate new images, performed better than OpenAI’s DALL-E 3 on benchmarks such as GenEval and DPG-Bench, DeepSeek said in a technical paper published on Monday.

Outputs from DeepSeek's latest Janus artificial intelligence (AI) image models. Image credit: DeepSeek

Image generation

The DeepSeek models also beat competitors such as PixArt-alpha, Emu3-Gen and Stability AI’s Stable Diffusion XL.

“Janus-Pro surpasses previous unified model and matches or exceeds the performance of task-specific models,” the company said in a post on AI developer platform Hugging Face.

“The simplicity, high flexibility, and effectiveness of Janus-Pro make it a strong candidate for next-generation unified multimodal models.”

Janus-Pro is an update to the Janus model initially released in late 2024.

DeepSeek last week released an update to its AI chatbot model that drove its app to the top of the free iPhone download charts in the US on Monday, supplanting OpenAI’s ChatGPT.

The unexpected development roiled technology stocks around the world as investors questioned the huge investments companies have made into AI over the past two years.

DeepSeek’s app surged in popularity after the AI lab launched its latest reasoning model, R1, on 20 January.

‘Sputnik moment’

The little-known start-up, whose staff are mostly fresh university graduates, says the performance of R1 matches OpenAI’s o1 series of models.

The company said it spent only $5.6 million (£4.5m) training its base model, compared to the hundreds of millions or billions of dollars US companies have typically spent developing their models.

DeepSeek said in a technical report that it trained its V3 model using a cluster of more than 2,000 Nvidia chips, compared to the tens of thousands of such chips typically used to train a model of similar scale.

Marc Andreessen, a supporter of US president Donald Trump and a leading tech investor, called DeepSeek’s R1 “one of the most amazing and impressive breakthroughs I’ve ever seen”.

He called it a “Sputnik moment”, in reference to the Soviet Union’s launch of the Sputnik satellite in 1957.

Matthew Broersma

Matt Broersma is a long-standing freelance technology journalist who has worked for Ziff-Davis, ZDNet and other leading publications.
