Juicebox recruits Amazon OpenSearch Service for improved talent search


This post is cowritten by Ishan Gupta, Co-Founder and Chief Technology Officer, Juicebox.

Juicebox is an AI-powered talent sourcing search engine that uses advanced natural language models to help recruiters identify the best candidates from a dataset of over 800 million profiles. At the core of this functionality is Amazon OpenSearch Service, which provides the backbone for Juicebox’s powerful search infrastructure, enabling a seamless combination of traditional full-text search methods with modern, cutting-edge semantic search capabilities.

In this post, we share how Juicebox uses OpenSearch Service for improved search.

Challenges in recruiting search

Recruiting search engines have traditionally relied on simple Boolean or keyword-based searches. These methods aren’t effective at capturing the nuance and intent behind complex queries, and they often return large volumes of irrelevant results that recruiters must filter through manually, a slow and inefficient process.

In addition, recruiting search engines often struggle to scale with large datasets, creating latency issues and performance bottlenecks as more data is indexed. At Juicebox, with a database growing to more than 1 billion documents and millions of profiles being searched per minute, we needed a solution that could not only handle massive-scale data ingestion and querying, but also support contextual understanding of complex queries.

Solution overview

The following diagram illustrates the solution architecture.

OpenSearch Service securely unlocks real-time search, monitoring, and analysis of business and operational data for use cases like application monitoring, log analytics, observability, and website search. You send documents to OpenSearch Service and retrieve them with search queries that match text or vector embeddings for fast, relevant results.
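As a minimal illustration of that index-then-search flow, the sketch below uses the opensearch-py client to index a candidate profile document and retrieve it with a full-text query. The endpoint, credentials, index name, and field names are placeholders, not Juicebox's actual schema.

```python
# Minimal index-then-search sketch with the opensearch-py client.
# The endpoint, credentials, index name, and field names are placeholders.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("user", "password"),  # or SigV4 signing with AWS credentials
    use_ssl=True,
)

# Send a candidate profile document to the domain
client.index(
    index="profiles",
    id="profile-1",
    body={
        "name": "Jane Doe",
        "headline": "Senior Data Scientist",
        "skills": ["nlp", "python", "machine learning"],
    },
)

# Retrieve it with a full-text query
response = client.search(
    index="profiles",
    body={"query": {"match": {"headline": "data scientist"}}},
)
print(response["hits"]["hits"])
```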

At Juicebox, we solved five challenges with Amazon OpenSearch Service, which we discuss in the following sections.

Problem 1: High latency in candidate search

Initially, we faced significant delays in returning search results due to the scale of our dataset, especially for complex semantic queries that require deep contextual understanding. Other full-text search engines couldn’t meet our requirements for speed or relevance when it came to understanding recruiter intent behind each search.

Solution: BM25 for fast, accurate full-text search

BM25, the default relevance scoring algorithm in OpenSearch Service, quickly proved invaluable, allowing Juicebox to optimize full-text search performance while maintaining accuracy. Through keyword relevance scoring, BM25 ranks profiles by how likely they are to match the recruiter’s query. This optimization reduced our average query latency from around 700 milliseconds to 250 milliseconds, so recruiters retrieve relevant profiles much faster than with our previous search implementation.
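The sketch below shows what a BM25-scored keyword query could look like, reusing the client from the earlier sketch. The field names and boosts are assumptions for illustration, not Juicebox's production query.

```python
# Hypothetical keyword query scored by BM25 across a few profile fields.
# Field names and boosts are illustrative assumptions.
query = {
    "query": {
        "multi_match": {
            "query": "senior backend engineer golang",
            # BM25 ranks each profile by keyword relevance across these fields
            "fields": ["headline^2", "summary", "skills"],
        }
    },
    "size": 25,
}
response = client.search(index="profiles", body=query)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["headline"])
```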

With BM25, we observed a nearly threefold reduction in latency for keyword-based searches, improving the overall search experience for our users.

Problem 2: Matching intent, not just keywords

In recruiting, exact keyword matching can often lead to missing out on qualified candidates. A recruiter looking for “data scientists with NLP experience” might miss candidates with “machine learning” in their profiles, even though they have the right expertise.

Solution: k-NN-powered vector search for semantic understanding

To address this, Juicebox uses k-nearest neighbor (k-NN) vector search. Vector embeddings allow the system to understand the context behind recruiter queries and match candidates based on semantic meaning, not just keyword matches. We maintain a billion-scale vector search index capable of low-latency k-NN search, thanks to OpenSearch Service optimizations such as product quantization. The neural search capability allowed us to build a Retrieval Augmented Generation (RAG) pipeline that embeds natural language queries before searching. OpenSearch Service also lets us tune Hierarchical Navigable Small World (HNSW) hyperparameters such as m, ef_search, and ef_construction, which enabled us to achieve our target latency, recall, and cost goals.
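For illustration, the sketch below creates a k-NN index with HNSW method parameters and runs a vector query. The dimension, engine, hyperparameter values, and the embed_query() helper are assumptions, not Juicebox's tuned configuration.

```python
# Hedged sketch of a k-NN index mapping with HNSW hyperparameters and a
# vector query. Dimension, engine, and parameter values are illustrative.
client.indices.create(
    index="profiles-vectors",
    body={
        "settings": {"index": {"knn": True, "knn.algo_param.ef_search": 100}},
        "mappings": {
            "properties": {
                "profile_embedding": {
                    "type": "knn_vector",
                    "dimension": 768,
                    "method": {
                        "name": "hnsw",
                        "space_type": "cosinesimil",
                        "engine": "nmslib",
                        "parameters": {"m": 16, "ef_construction": 256},
                    },
                }
            }
        },
    },
)

# embed_query() is a hypothetical helper standing in for the embedding model
# that turns the recruiter's natural language query into a vector.
query_vector = embed_query("data scientists with NLP experience")

response = client.search(
    index="profiles-vectors",
    body={
        "size": 10,
        "query": {"knn": {"profile_embedding": {"vector": query_vector, "k": 10}}},
    },
)
```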

Semantic search, powered by k-NN, allowed us to surface 35% more relevant candidates than keyword-only searches for complex queries. These searches remained fast and accurate, with vectorized queries achieving 0.9+ recall.

Problem 3: Difficulty in benchmarking machine learning models

There are several key performance indicators (KPIs) that measure the success of your search. When you use vector embeddings, you have a number of choices to make: which model to select, how to fine-tune it, and which hyperparameters to use. You need to benchmark your solution to make sure that you’re getting the right latency, cost, and, above all, accuracy. Benchmarking machine learning (ML) models for recall and performance is challenging due to the vast number of fast-evolving models available (such as those on the MTEB leaderboard on Hugging Face). We faced difficulties selecting models and measuring them accurately while making sure they performed well across large-scale datasets.

Solution: Exact k-NN with scoring script in OpenSearch Service

Juicebox used the exact k-NN with scoring script feature of OpenSearch Service to address these challenges. This feature allows for precise benchmarking by executing brute-force nearest neighbor searches and applying filters to a subset of vectors, making sure that recall metrics are accurate. Model testing was streamlined using the wide range of pre-trained models and ML connectors (integrated with Amazon Bedrock and Amazon SageMaker) provided by OpenSearch Service. The flexibility of applying filtering and custom scoring scripts helped us evaluate multiple models across high-dimensional datasets with confidence.
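As a rough sketch of how such a benchmark can be expressed, the query below uses the k-NN plugin's exact scoring script against the vector index from the earlier sketch and compares the approximate HNSW results with the brute-force ground truth. The index and field names remain assumptions.

```python
# Exact (brute-force) k-NN via the scoring script, used as ground truth.
# A filter query can replace match_all to benchmark a filtered subset.
exact_query = {
    "size": 10,
    "query": {
        "script_score": {
            "query": {"match_all": {}},
            "script": {
                "source": "knn_score",
                "lang": "knn",
                "params": {
                    "field": "profile_embedding",
                    "query_value": query_vector,
                    "space_type": "cosinesimil",
                },
            },
        }
    },
}
exact_hits = client.search(index="profiles-vectors", body=exact_query)

# Recall of the approximate HNSW results against the exact ground truth
approx_ids = {hit["_id"] for hit in response["hits"]["hits"]}
exact_ids = {hit["_id"] for hit in exact_hits["hits"]["hits"]}
recall = len(approx_ids & exact_ids) / len(exact_ids)
print(f"recall@10: {recall:.2f}")
```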

Juicebox was able to measure model performance with fine-grained control, achieving 0.9+ recall. The use of exact k-NN allowed Juicebox to benchmark quickly and reliably, even on billion-scale data, providing the confidence needed for model selection.

Problem 4: Lack of data-driven insights

Recruiters need to not only find candidates, but also gain insights into broader talent industry trends. Analyzing hundreds of millions of profiles to identify trends in skills, geographies, and industries was computationally intensive. Most other search engines that support full-text search or k-NN search didn’t support aggregations.

Solution: Advanced aggregations with OpenSearch Service

The powerful aggregation features of OpenSearch Service allowed us to build Talent Insights, a feature that provides recruiters with actionable insights from aggregated data. By performing large-scale aggregations across millions of profiles, we identified key skills and hiring trends, and helped clients adjust their sourcing strategies.
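A Talent Insights-style query might look like the hedged sketch below, which buckets top skills and their geographic distribution for a given search. The field names ("skills", "location") are assumptions about the profile schema.

```python
# Illustrative aggregation query: top skills and their locations for a search.
# "skills.keyword" and "location.keyword" are assumed keyword fields.
insights = client.search(
    index="profiles",
    body={
        "size": 0,  # aggregations only, no individual hits
        "query": {"match": {"headline": "machine learning engineer"}},
        "aggs": {
            "top_skills": {
                "terms": {"field": "skills.keyword", "size": 20},
                "aggs": {
                    "top_locations": {
                        "terms": {"field": "location.keyword", "size": 5}
                    }
                },
            }
        },
    },
)
for bucket in insights["aggregations"]["top_skills"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```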

Aggregation queries now run on over 100 million profiles and return results in under 800 milliseconds, allowing recruiters to generate insights instantly.

Problem 5: Streamlining data ingestion and indexing

Juicebox ingests data continuously from multiple sources across the web, reaching terabytes of new data per month. We needed a robust data pipeline to ingest, index, and query this data at scale without performance degradation.

Solution: Scalable data ingestion with Amazon OpenSearch Ingestion pipelines

Using Amazon OpenSearch Ingestion, we implemented scalable pipelines. This allowed us to efficiently process and index hundreds of millions of profiles every month without worrying about pipeline failures or system bottlenecks. We used AWS Glue to preprocess data from multiple sources, chunk it for optimal processing, and feed it into our indexing pipeline.
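The OpenSearch Ingestion pipeline itself is defined as a managed pipeline configuration rather than application code. Purely as an illustration of the chunk-and-index step, the sketch below bulk-writes a batch of preprocessed profiles with the opensearch-py bulk helper, reusing the same hypothetical "profiles" index as above.

```python
# Illustrative bulk indexing of preprocessed profile records. This is not the
# managed OpenSearch Ingestion pipeline configuration, only a sketch of the
# chunk-and-index step against the hypothetical "profiles" index.
from opensearchpy import helpers

def profile_actions(profiles, index="profiles"):
    """Yield one bulk index action per preprocessed profile dict."""
    for profile in profiles:
        yield {
            "_op_type": "index",
            "_index": index,
            "_id": profile["id"],
            "_source": profile,
        }

profiles_batch = [
    {"id": "p-1", "headline": "Data Scientist", "skills": ["nlp", "python"]},
    {"id": "p-2", "headline": "Backend Engineer", "skills": ["go", "kubernetes"]},
]

# chunk_size controls how many documents go into each bulk request
success, errors = helpers.bulk(client, profile_actions(profiles_batch), chunk_size=500)
print(f"indexed {success} documents")
```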

Conclusion

In this post, we shared how Juicebox uses OpenSearch Service for improved search. We can now index hundreds of millions of profiles per month, keeping our data fresh and up to date, while maintaining real-time availability for searches.


About the authors

Ishan Gupta is the Co-Founder and CTO of Juicebox, an AI-powered recruiting software startup backed by top Silicon Valley investors including Y Combinator, Nat Friedman, and Daniel Gross. He has built search products used by thousands of customers to recruit talent for their teams.

Jon Handler is the Director of Solutions Architecture for Search Services at Amazon Web Services, based in Palo Alto, CA. Jon works closely with OpenSearch and Amazon OpenSearch Service, providing help and guidance to a broad range of customers who have search and log analytics workloads for OpenSearch. Prior to joining AWS, Jon’s career as a software developer included four years of coding a large-scale, eCommerce search engine. Jon holds a Bachelor of Arts from the University of Pennsylvania, and a Master of Science and a Ph.D. in Computer Science and Artificial Intelligence from Northwestern University.

