Open source databases are a popular choice for many teams because they provide the flexibility and scalability that latency-sensitive applications require. However, open source databases like Cassandra, Redis, and RocksDB carry inherent overhead from functions such as compaction, compression, networking, and storage I/O.
These functions can consume as much as 75% of CPU cycles, so how do you get efficient performance? Options include implementing a new database (rip and replace), continuously adding database nodes, manual tuning, pairing database software with hardware such as FPGAs, and adding a cache layer (a minimal sketch of that last option follows below).
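To make the cache-layer option concrete, here is a minimal cache-aside sketch in Python, assuming a Redis instance on localhost fronting a Cassandra cluster; the keyspace, the users table, and the hostnames are hypothetical examples, not anything described in the webinar.

```python
# Minimal cache-aside sketch: check Redis before hitting Cassandra.
# Hostnames, keyspace, and the users table are hypothetical examples.
import json

import redis
from cassandra.cluster import Cluster

cache = redis.Redis(host="localhost", port=6379)
session = Cluster(["127.0.0.1"]).connect("example_keyspace")

CACHE_TTL_SECONDS = 300  # expire cached rows after five minutes


def get_user(user_id: str):
    cache_key = f"user:{user_id}"

    # 1. Serve from the cache when possible, sparing the database a read.
    cached = cache.get(cache_key)
    if cached is not None:
        return json.loads(cached)

    # 2. On a miss, read from Cassandra and populate the cache.
    row = session.execute(
        "SELECT user_id, name FROM users WHERE user_id = %s", (user_id,)
    ).one()
    if row is None:
        return None

    user = {"user_id": row.user_id, "name": row.name}
    cache.setex(cache_key, CACHE_TTL_SECONDS, json.dumps(user))
    return user
```

The pattern trades a bounded window of staleness (the TTL) for fewer reads against the underlying store, which is exactly the tension the cache-layer option raises for read-heavy workloads.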
DBTA recently held a webinar with Prasanna Sundararajan, CEO and co-founder of rENIAC, who discussed the ways organizations are approaching open source database optimization.
Open source databases help companies start small and scale as the team grows; provide flexibility and scalability; allow for specialized uses such as relational, graph, multi-value, and key-value; and can be standardized on and customized as the business evolves, Sundararajan said.
A CPU cycle profile of open source databases shows roughly 75% of cycles going to I/O and system compute, which means these databases are approaching an optimization ceiling (and even reaching it is hard). The problem is aggravated further when write-optimized open source databases are paired with read-intensive applications, as many customer-facing apps are.
FPGA-based solutions are already bringing hyper-speed and hyper-scale to commercial databases, but how could they work with open source databases?
The rENIAC Data Engine is a drop-in data accelerator for mission-critical, latency-sensitive data workloads that requires no changes to software or to the existing data architecture, Sundararajan explained.
The platform can deliver significant performance gains for latency-sensitive workloads, according to Sundararajan. A cache deployment provides more than 2TB of hot, operational data, while a proxy deployment provides immediate I/O and latency gains.
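The webinar does not detail the proxy deployment's wiring, but since the engine is described as a drop-in accelerator requiring no software changes, adopting a Cassandra-protocol proxy would presumably amount to changing the driver's contact points. The sketch below illustrates that idea; the proxy hostname is an assumption for illustration, not a documented endpoint.

```python
# Hypothetical sketch: pointing a Cassandra client at a transparent
# proxy instead of the database nodes. Only the contact point changes;
# queries and schema stay exactly as they were.
from cassandra.cluster import Cluster

# Before: clients connect straight to the Cassandra nodes.
# session = Cluster(["cassandra-node-1", "cassandra-node-2"]).connect("example_keyspace")

# After: clients connect to the proxy, which fronts the same cluster.
# "reniac-proxy" is an assumed hostname used here only for illustration.
session = Cluster(["reniac-proxy"], port=9042).connect("example_keyspace")

# Application code is unchanged: reads the accelerator can serve come
# back immediately, and everything else passes through to Cassandra.
row = session.execute("SELECT name FROM users WHERE user_id = %s", ("42",)).one()
```

If the proxy really is transparent at the protocol level, this one-line configuration change is the entire integration, which is what "no changes to software or data architecture" would imply.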
An archived on-demand replay of this webinar is available here.