New Jina AI small language models deliver unmatched quality and efficiency on search and semantic tasks
Elastic (NYSE: ESTC), the Search AI Company, today announced the availability of jina-embeddings-v5-text, a family of two small, Elasticsearch-native multilingual embedding models at 0.2B and 0.6B parameters that deliver state-of-the-art performance across key search and semantic tasks.
Despite their compact size, they outperform significantly larger models with 7B to 14B parameters and achieve best-in-class results on the MMTEB (Multilingual MTEB) benchmark among models of comparable size and purpose. Their small footprint enables outstanding hybrid search at lower infrastructure cost, faster query response, and new deployment scenarios where memory and compute budgets are tight, including edge devices and resource-constrained environments.
The jina-embeddings-v5-text models are available through multiple channels: as open-weight models on Hugging Face for self-hosted deployment via vLLM, llama.cpp, or MLX, and on Elastic Inference Service (EIS), a GPU-accelerated inference-as-a-service that makes it easy to run fast, high-quality inference without complex setup. By bringing the Jina v5 family to EIS, users get a complete data platform that consolidates state-of-the-art multilingual embedding models, a high-performance vector database, and more into one unified enterprise stack across cloud and on-premises environments.
“Vector search, RAG, and AI agents depend on high-quality retrieval,” said Steve Kearns, general manager, Search, Elastic. “With the addition of Jina v5’s multilingual embeddings, Elasticsearch continues to be the platform of choice for end-to-end context engineering.”
The family includes two models, jina-embeddings-v5-text-small (677M parameters) and jina-embeddings-v5-text-nano (239M parameters). Both models are optimized for four common tasks in search and agentic applications:
- Retrieval: Allowing users to query with natural language and find the most relevant documents
- Text Matching: Allowing users to find duplicates in their data, and align paraphrases or translations
- Classification: Allowing users to categorize documents, detect sentiment, and find anomalies
- Clustering: Allowing users to group documents by topic, subject, or meaning
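All four tasks above reduce to comparing embedding vectors for similarity. The sketch below illustrates the principle with toy vectors and cosine similarity; the actual model is not loaded here, and real jina-embeddings-v5-text vectors would be produced by the model itself at much higher dimensionality.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: dot product of the vectors divided by the
    # product of their magnitudes, yielding a score in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" standing in for model output.
query        = np.array([0.9, 0.1, 0.0, 0.2])  # e.g. a user's search query
doc_relevant = np.array([0.8, 0.2, 0.1, 0.1])  # a semantically similar document
doc_offtopic = np.array([0.0, 0.1, 0.9, 0.7])  # an unrelated document

# Retrieval ranks documents by similarity to the query; text matching,
# classification, and clustering apply the same comparison between
# document pairs, class centroids, or cluster members respectively.
assert cosine_sim(query, doc_relevant) > cosine_sim(query, doc_offtopic)
```

The same pattern extends to the other tasks: near-duplicate pairs score close to 1.0 for text matching, and clustering groups documents whose pairwise similarities are high.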
Availability
The Jina v5 models are now available through the Elastic Inference Service (EIS) on Elastic Cloud Serverless and Elastic Cloud Hosted. All Elastic Cloud Trials include access to EIS. To get started, visit the Elastic Inference Service (EIS) documentation.
These models are also available via an online API and for local self-hosting via vLLM, llama.cpp, and MLX. Detailed instructions can be found on Hugging Face.
Additional Resources
- Release Note: jina-embeddings-v5-text: New SOTA Small Multilingual Embeddings
- Hugging Face weights
- MMTEB Leaderboard
- Technical Report on arXiv
- Elasticsearch integration guide
- Blog: jina-embeddings-v5-text: Compact state-of-the-art text embeddings for search and intelligent applications
About Elastic
Elastic (NYSE: ESTC), the Search AI Company, integrates its deep expertise in search technology with artificial intelligence to help everyone transform all of their data into answers, actions, and outcomes. Elastic's Search AI Platform — the foundation for its search, observability, and security solutions — is used by thousands of companies, including more than 50% of the Fortune 500. Learn more at elastic.co.
Elastic and associated marks are trademarks or registered trademarks of Elasticsearch B.V. and its subsidiaries. All other company and product names may be trademarks of their respective owners.
View source version on businesswire.com: https://www.businesswire.com/news/home/20260223625535/en/
Contacts
Media Contact
Elastic PR
PR-team@elastic.co
