Press release

Elasticsearch Open Inference API Now Supports Mistral AI Embeddings

Sponsored by Businesswire

Elastic (NYSE: ESTC), the Search AI Company, today announced the Elasticsearch vector database now stores and automatically chunks embeddings from mistral-embed, with native integrations to the Open Inference API and the semantic_text field. By combining chunking with vector storage and eliminating the need to architect bespoke chunking strategies, this reduces time to market for RAG applications and simplifies the development process.

“We are invested in delivering open-first, enterprise-grade GenAI tools to help developers build next generation search applications,” said Shay Banon, founder and chief technology officer at Elastic. “Through our collaboration with the Mistral AI team, we’re simplifying the process of storing and chunking embeddings in Elasticsearch to a single API call.”

“Mistral AI has always been committed to open-weights and making AI accessible to all,” said Arthur Mensch, co-founder and CEO of Mistral AI. “Working with Elastic allows us to bring Mistral’s tools to more developers through the Elastic open inference API, and gives us the opportunity to work with a company that shares our value of accessibility. We’re excited to see what developers will create.”

Support for Mistral AI’s embedding model is available today; read the Elastic blog to get started.
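As a sketch of the workflow described above, the integration comes down to two requests: one to create an inference endpoint backed by the mistral service, and one to map a semantic_text field that references it. The endpoint ID (mistral-embeddings), index name (my-rag-index), and field name (content) below are illustrative placeholders, not names from the announcement.

```
PUT _inference/text_embedding/mistral-embeddings
{
  "service": "mistral",
  "service_settings": {
    "api_key": "<api_key>",
    "model": "mistral-embed"
  }
}

PUT my-rag-index
{
  "mappings": {
    "properties": {
      "content": {
        "type": "semantic_text",
        "inference_id": "mistral-embeddings"
      }
    }
  }
}
```

With this mapping in place, documents indexed into the content field are chunked and embedded automatically at ingest time, so no separate chunking pipeline is required.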

About Elastic

Elastic (NYSE: ESTC), the Search AI Company, enables everyone to find the answers they need in real-time using all their data, at scale. Elastic’s solutions for search, observability and security are built on the Elastic Search AI Platform, the development platform used by thousands of companies, including more than 50% of the Fortune 500. Learn more at elastic.co.

Elastic and associated marks are trademarks or registered trademarks of Elastic N.V. and its subsidiaries. All other company and product names may be trademarks of their respective owners.