Current text embedding models, such as BERT, can process only 512 tokens at a time, which hinders their effectiveness on long documents and often results in lost context and diminished nuance. Jina Embeddings v2 addresses this limitation by supporting sequences of up to 8192 tokens, preserving context and improving the accuracy and relevance of embeddings for long documents. This marks a substantial improvement in handling complex text data.
Learning Objectives
- Understand the limitations of traditional text embedding models like BERT in handling long documents.
- Learn how Jina Embeddings v2 overcomes these limitations with its 8192-token support and advanced architecture.
- Explore the key innovations behind Jina Embeddings v2, including ALiBi, GLU, and its three-stage training process.
- Discover real-world applications of Jina Embeddings v2 in fields like legal research, content management, and generative AI.
- Gain practical knowledge on integrating Jina Embeddings v2 into your projects using Hugging Face libraries.
This article was published as a part of the Data Science Blogathon.
The Challenges of Long-Document Embeddings
Long documents pose unique challenges in NLP. Traditional models process text in chunks, truncating context or producing fragmented embeddings that misrepresent the original document. This results in:
- Increased computational overhead
- Higher memory usage
- Diminished performance in tasks requiring a holistic understanding of the text
Jina Embeddings v2 directly addresses these issues by expanding the token limit to 8192, eliminating the need for excessive segmentation and preserving the document’s semantic integrity.
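To make the limitation concrete, the snippet below is a minimal sketch of the usual chunk-and-average workaround for short-context models; the model name and word-based chunking heuristic are illustrative, not a recommended pipeline.

import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative short-context model; any 512-token-class encoder behaves similarly
short_model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')

def chunked_embedding(text, max_words=200):
    # Split the document into word-based chunks that fit the short context window
    words = text.split()
    chunks = [' '.join(words[i:i + max_words]) for i in range(0, len(words), max_words)]
    chunk_vectors = short_model.encode(chunks)  # one forward pass per chunk
    # Naive mean pooling discards cross-chunk structure and long-range context
    return np.mean(chunk_vectors, axis=0)

With an 8192-token window, the whole document can instead be encoded in a single pass.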
Innovative Architecture and Training Paradigm
Jina Embeddings v2 builds on a BERT-style backbone and extends it with several key innovations. Here’s how it works:
- Attention with Linear Biases (ALiBi): ALiBi replaces traditional positional embeddings with a linear bias applied to attention scores. This allows the model to extrapolate effectively to sequences much longer than those seen during training. Unlike earlier implementations designed for unidirectional generative tasks, Jina Embeddings v2 employs a bidirectional variant, ensuring compatibility with encoding-based tasks.
- Gated Linear Units (GLU): The feedforward layers use GLU, known for enhancing transformer efficiency. The model employs variants like GEGLU and ReGLU depending on model size (a minimal GEGLU sketch follows this list).
- Optimized Training Process: Jina Embeddings v2 follows a three-stage training paradigm:
  - Pretraining: The model is trained on the Colossal Clean Crawled Corpus (C4), leveraging masked language modeling (MLM) to build a robust foundation.
  - Fine-Tuning with Text Pairs: Focused on aligning embeddings for semantically similar text pairs.
  - Hard Negative Fine-Tuning: Incorporates challenging distractor examples to improve the model’s ranking and retrieval capabilities.
- Memory-Efficient Training: Techniques like mixed precision training and activation checkpointing ensure scalability for larger batch sizes, critical for contrastive learning tasks.
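Here is that GEGLU sketch in PyTorch; the layer sizes are illustrative and may not match the dimensions Jina Embeddings v2 actually uses.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GEGLU(nn.Module):
    # GEGLU feedforward: the hidden activation is gated by a GELU-transformed projection
    def __init__(self, d_model, d_ff):
        super().__init__()
        self.proj_in = nn.Linear(d_model, 2 * d_ff)  # produces both the value and the gate
        self.proj_out = nn.Linear(d_ff, d_model)

    def forward(self, x):
        value, gate = self.proj_in(x).chunk(2, dim=-1)
        return self.proj_out(value * F.gelu(gate))

x = torch.randn(2, 16, 768)            # (batch, tokens, hidden); illustrative sizes
print(GEGLU(768, 2048)(x).shape)       # torch.Size([2, 16, 768])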
With ALiBi attention, a linear bias is added to each attention score before the softmax operation. Each attention head uses a distinct constant scalar, m, which diversifies the computation across heads. Jina Embeddings v2 adopts the encoder variant, in which all tokens attend to one another, in contrast to the causal variant originally designed for language modeling, where a causal mask restricts each token to attending only to preceding tokens in the sequence.
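The following sketch illustrates the general idea of bidirectional ALiBi in PyTorch: a per-head slope scales a symmetric token-distance penalty that is added to the attention scores before the softmax. It follows the ALiBi paper’s slope schedule and is not the exact Jina implementation.

import torch

def alibi_bias(seq_len, num_heads):
    # Per-head slopes m form a geometric sequence (assumes num_heads is a power of two)
    slopes = torch.tensor([2 ** (-8 * (h + 1) / num_heads) for h in range(num_heads)])
    # Symmetric |i - j| distances: the encoder variant lets all tokens attend to each other
    positions = torch.arange(seq_len)
    distances = (positions[None, :] - positions[:, None]).abs()
    # Shape (num_heads, seq_len, seq_len); added to attention scores before softmax
    return -slopes[:, None, None] * distances[None, :, :]

scores = torch.randn(8, 128, 128)              # illustrative raw attention scores, 8 heads
attention = torch.softmax(scores + alibi_bias(128, 8), dim=-1)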
Performance Benchmarks
Jina Embeddings v2 delivers state-of-the-art performance across multiple benchmarks, including the Massive Text Embedding Benchmark (MTEB) and newly designed long-document datasets. Key highlights include:
- Classification: Achieves top-tier accuracy in tasks like Amazon Polarity and Toxic Conversations classification, demonstrating robust semantic understanding.
- Clustering: Outperforms competitors in grouping related texts, validated by tasks like PatentClustering and WikiCitiesClustering.
- Retrieval: Excels in retrieval tasks such as NarrativeQA, where comprehensive document context is essential.
- Long Document Handling: Maintains MLM accuracy even at 8192-token sequences, showcasing its ability to generalize effectively.
The accompanying chart compares embedding models’ performance on retrieval and clustering tasks at varying sequence lengths. Text-embedding-ada-002 performs strongly, especially at its 8191-token cap, showing significant gains on long-context tasks. Other models, such as e5-base-v2, show consistent but less dramatic improvements with longer sequences, possibly because the evaluation setup omits the prefixes (such as query:) the model expects. Overall, handling longer sequences proves critical for maximizing performance on these tasks.
Applications in Real-World Scenarios
- Legal and Academic Research: Jina Embeddings v2’s ability to encode long documents makes it ideal for searching and analyzing legal briefs, academic papers, and patent filings. It ensures context-rich and semantically accurate embeddings, crucial for detailed comparisons and retrieval tasks.
- Content Management Systems: Businesses managing vast repositories of articles, manuals, or multimedia captions can leverage Jina Embeddings v2 for efficient tagging, clustering, and retrieval.
- Generative AI: With its extended context handling, Jina Embeddings v2 can significantly enhance generative AI applications. For example:
  - Improving the quality of AI-generated summaries by providing richer, context-aware embeddings.
  - Enabling more relevant and precise completions for prompt-based models.
- E-Commerce: Advanced product search and recommendation systems benefit from embeddings that capture nuanced details across lengthy product descriptions and user reviews (a retrieval sketch follows this list).
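Here is a minimal sketch of such a retrieval flow with Jina Embeddings v2; the document texts and the query are illustrative placeholders.

from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True)
model.max_seq_length = 8192                    # use the full long-context window

docs = [
    'Full text of a lengthy product manual ...',   # placeholder documents
    'Full text of a long legal brief ...',
]
doc_embeddings = model.encode(docs)

query_embedding = model.encode('warranty terms for battery replacement')
scores = cos_sim(query_embedding, doc_embeddings)  # shape (1, number of documents)
best = scores.argmax().item()
print(f'Best match: document {best} (score {scores[0, best].item():.3f})')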
Comparison with Existing Models
Jina Embeddings v2 stands out not only for its ability to handle extended sequences but also for its competitive performance against proprietary models like OpenAI’s text-embedding-ada-002. While many open-source models cap their sequence lengths at 512 tokens, Jina Embeddings v2’s 16x improvement enables entirely new use cases in NLP.
Moreover, its open-source availability ensures accessibility for diverse organizations and projects. The model can be fine-tuned for specific applications using resources from its Hugging Face repository.
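As an illustration of such fine-tuning, here is a minimal sketch using the classic sentence-transformers training API with in-batch negatives; the training pairs below are hypothetical placeholders, and real fine-tuning would use a proper in-domain dataset and tuned hyperparameters.

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True)

# Hypothetical in-domain text pairs (anchor, positive)
train_examples = [
    InputExample(texts=['clause limiting supplier liability', 'liability cap provisions in the agreement']),
    InputExample(texts=['battery cell patent claim', 'claims covering lithium-ion cell chemistry']),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# In-batch negatives, similar in spirit to the text-pair fine-tuning stage described earlier
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)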
How to Use Jina Embeddings v2 with Hugging Face?
Step 1: Installation
!pip install transformers
!pip install -U sentence-transformers
Step 2: Using Jina Embeddings with Transformers
You can use Jina embeddings directly through the transformers library:
import torch
from transformers import AutoModel
from numpy.linalg import norm
# Define cosine similarity function
cos_sim = lambda a, b: (a @ b.T) / (norm(a) * norm(b))
# Load the Jina embedding model
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True)
# Encode sentences
embeddings = model.encode(['How is the weather today?', 'What is the current weather like today?'])
# Calculate cosine similarity between the two sentence embeddings
print(cos_sim(embeddings[0], embeddings[1]))
Output: a single similarity score close to 1.0, since the two sentences are near-paraphrases.
Handling Long Sequences
To process longer sequences, specify the max_length parameter:
embeddings = model.encode(['Very long ... document'], max_length=2048)  # truncate inputs to 2048 tokens
Step 3: Using Jina Embeddings with Sentence-Transformers
Alternatively, utilize Jina embeddings with the sentence-transformers library:
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
# Load the Jina embedding model
model = SentenceTransformer('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True)
# Encode sentences
embeddings = model.encode(['How is the weather today?', 'What is the current weather like today?'])
# Calculate cosine similarity
print(cos_sim(embeddings, embeddings))
Setting Maximum Sequence Length
Control input sequence length as needed:
model.max_seq_length = 1024 # Set maximum sequence length to 1024 tokens
Important Notes
- Ensure you are logged into Hugging Face to access gated models, and provide an access token if needed (a minimal login snippet follows these notes).
- The guide applies to English models; use the appropriate model identifier for other languages (e.g., Chinese or German).
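The login snippet, assuming an access token created under your Hugging Face account settings:

from huggingface_hub import login

login(token='hf_...')   # paste your access token, or run `huggingface-cli login` in a terminal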
Conclusion
Jina Embeddings v2 marks an important advancement in NLP, addressing the challenges of long-document embeddings. By supporting sequences of up to 8192 tokens and delivering strong performance, it enables a variety of applications, including academic research, enterprise search, and generative AI. As NLP tasks increasingly involve processing lengthy and complex texts, innovations like Jina Embeddings v2 will become essential. Its capabilities not only improve current workflows but also open new possibilities for working with long-form textual data in the future.
For more details or to integrate Jina Embeddings v2 into your projects, visit its Hugging Face page.
Key Takeaways
- Jina Embeddings v2 supports up to 8192 tokens, addressing a key limitation in long-document NLP tasks.
- ALiBi (Attention with Linear Biases) replaces traditional positional embeddings, allowing the model to process longer sequences effectively.
- Gated Linear Units (GLU) improve transformer efficiency, with variants like GEGLU and ReGLU enhancing performance.
- The three-stage training process (pretraining, fine-tuning, and hard negative fine-tuning) ensures the model produces robust and accurate embeddings.
- Jina Embeddings v2 performs exceptionally well in tasks like classification, clustering, and retrieval, particularly for long documents.
Frequently Asked Questions
Q. How does Jina Embeddings v2 differ from traditional embedding models like BERT?
A. Jina Embeddings v2 supports sequences up to 8192 tokens, overcoming the 512-token limit of traditional models like BERT. This allows it to handle long documents without segmenting them, preserving global context and improving semantic representation.
Q. What innovations enable Jina Embeddings v2 to handle long texts?
A. The model incorporates cutting-edge innovations such as Attention with Linear Biases (ALiBi), Gated Linear Units (GLU), and a three-stage training paradigm. These optimizations enable effective handling of lengthy texts while maintaining high performance and efficiency.
Q. How can I integrate Jina Embeddings v2 into my project?
A. You can integrate it using either the transformers or sentence-transformers libraries. Both provide easy-to-use APIs for text encoding, handling long sequences, and performing similarity computations. Detailed setup steps and example code are provided in the guide.
Q. What should I keep in mind when accessing the model through Hugging Face?
A. Ensure you’re logged into Hugging Face to access gated models, and provide an access token if needed. Also, confirm compatibility of the model with your language requirements by selecting the appropriate identifier (e.g., for Chinese or German models).