Aerospike’s Latest Version of its Vector Search Functionality Emphasizes Choice and Simplicity


Aerospike Inc., the real-time database built for infinite scale, speed, and savings, is debuting the latest version of Aerospike Vector Search, equipped with new indexing and storage capabilities. Centered on flexibility, performance, and streamlined operations, Vector Search addresses current customer problems with new engineering techniques and innovations in the vector space.

Aerospike is designed to unlock real-time semantic search across data, delivering consistency and accuracy at scale. Enterprises can ingest large volumes of real-time data and search billions of vectors within milliseconds, at a fraction of the cost of other databases, according to Aerospike.

In this release, Aerospike extends its flexible storage innovations to its vector product, letting users choose in-memory storage for small indexes or hybrid memory for large ones, helping to reduce cost.

“We're trying to…provide…a continuum of choices for the user that they can use to position themselves wherever they want in the performance/cost spectrum,” said Naren Narendran, chief engineering officer at Aerospike. These choices mitigate data duplication across systems and ease management and compliance, reducing overall complexity.

“Companies want to use all their data to fuel real-time AI decisions, but traditional data infrastructure chokes quickly, and as the data grows, costs soar,” said Subbu Iyer, CEO, Aerospike. “Aerospike is built on a foundation proven at many of the largest AI/ML applications at global enterprises. Our Vector Search provides a simple data model for extending existing data to take advantage of vector embeddings. The result is a single, developer-friendly database platform that puts all your data to work—with accuracy—while removing the cost, speed, scale, and management struggles that slow AI adoption.”

Another component of this release is a unique self-healing indexing capability, which addresses the phenomenon in which index quality deteriorates as the index grows. Traditionally, repairing a deteriorating index requires manual intervention from the user. Aerospike eliminates this inefficiency, keeping index quality high through seamless background processes.

The self-healing hierarchical navigable small world (HNSW) index enables data to be ingested immediately while asynchronously building the index for search across devices. This scale-out ingestion—where ingestion and index growth scale independently from query processing—helps deliver uninterrupted performance, fresh, accurate results, and optimal query speed for real-time decision-making, according to Aerospike.
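To see what an HNSW index is accelerating, consider a minimal sketch of the underlying operation: finding the vectors most similar to a query. The brute-force scan below is illustrative only—the data and function names are hypothetical and are not Aerospike's API—but an HNSW graph returns the same kind of answer while visiting only a handful of nodes rather than every vector.

```python
# Illustrative sketch: brute-force top-k vector similarity search in
# plain Python. An HNSW index accelerates exactly this query, reaching
# the nearest neighbors in a few graph hops instead of a full O(n) scan.
import heapq
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, vectors, k=2):
    # Rank every (label, embedding) pair by similarity to the query.
    return heapq.nlargest(k, vectors, key=lambda item: cosine_similarity(query, item[1]))

# Toy two-dimensional "embeddings"; real ones have hundreds of dimensions.
vectors = [
    ("cat", [1.0, 0.0]),
    ("dog", [0.9, 0.1]),
    ("car", [0.0, 1.0]),
]
nearest = top_k([1.0, 0.05], vectors)  # "cat" and "dog" rank highest
```

Because this scan touches every vector, its cost grows linearly with the dataset—which is why a graph index like HNSW, plus the independent scaling of ingestion and indexing described above, matters at billion-vector scale.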

“The index building now doesn't have to be dependent on the ingestion part alone…the index building happens in a different set of nodes [so that they] can scale differently,” explained Narendran. “You can throw more resources at that depending on how fast you want the indexing to happen. You have that flexibility…[of] the indexing [being] handled by a completely different subsystem, and therefore you can scale that subsystem as small or as large as you want and be able to handle the indexing rate appropriately.”

Aerospike’s database is both multi-model and multi-cloud, offering document, key-value, graph, and vector search within a single system. Acknowledging how important this is for AI use cases—including retrieval-augmented generation (RAG), semantic search, recommendations, fraud prevention, and ad targeting—the latest iteration of Aerospike’s vector search allows developers to easily start with, or swap their existing AI stack to, Aerospike Vector Search. This seamlessness enables developers to drive better outcomes at a lower cost, affording them the flexibility to choose the AI models that best suit their needs for years to come.

Additionally, Aerospike is offering a new Python client and sample apps for common vector use cases, accelerating deployment. Other integrations—such as a LangChain extension that expedites the building of RAG applications and an AWS Bedrock embedding example that hastens the building of enterprise-ready data pipelines—are also offered by Aerospike.
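The retrieval step such RAG integrations automate can be sketched in a few lines. The toy snippet below is a hedged illustration only: the bag-of-words "embedding" stands in for a real model (such as one served via AWS Bedrock), the in-memory list stands in for a vector index, and every name is hypothetical rather than part of Aerospike's or LangChain's API.

```python
# Illustrative sketch of the retrieval step in a RAG pipeline, using a
# toy in-memory corpus. All names here are hypothetical stand-ins.

def embed(text, vocab):
    # Toy bag-of-words "embedding": count occurrences of each vocab term.
    # A real pipeline would call an embedding model instead.
    words = text.lower().split()
    return [words.count(term) for term in vocab]

def retrieve(question, corpus, vocab, k=1):
    # Score each document by dot product with the query "embedding"
    # and return the k best matches, as a vector index would.
    q = embed(question, vocab)
    scored = sorted(
        corpus,
        key=lambda doc: sum(a * b for a, b in zip(q, embed(doc, vocab))),
        reverse=True,
    )
    return scored[:k]

vocab = ["index", "vector", "storage", "search"]
corpus = [
    "hybrid storage keeps large vector indexes affordable",
    "self healing keeps index search quality high",
]
context = retrieve("how does vector search storage scale", corpus, vocab)
# The retrieved passage is then stuffed into the model prompt.
prompt = f"Answer using this context: {context[0]}"
```

In a production RAG application, the embedding and retrieval calls would go through the vector database client and an embedding model; the pattern—embed the question, fetch the nearest passages, prepend them to the prompt—stays the same.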

To learn more about Aerospike Vector Search, please visit https://aerospike.com/.

