Splice Machine, provider of an SQL RDBMS powered by Hadoop and Spark, now offers native PL/SQL support. Announced at Strata + Hadoop World in NYC, the new capabilities are available through the Splice Machine Enterprise Edition.
The PL/SQL support has two components, according to the vendor: a compiler and an interpreter. The compiler converts PL/SQL into a type-checked, optimized runtime representation. The interpreter then executes that representation with PL/SQL semantics, so users can ensure their applications behave the same on Splice Machine as they do on Oracle. It maintains a procedural context, handles scoping for variable dereferencing, iteration, and conditional testing, and dispatches DDL and DML to the Splice Machine RDBMS for execution.
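To illustrate the kinds of constructs such a compiler and interpreter must handle, here is a minimal PL/SQL sketch with a scoped local variable, a loop, a conditional test, and DML. The procedure, table, and column names are hypothetical and not taken from the announcement; the intent is only to show typical procedural logic that a migration would expect to run unchanged.

```sql
-- Hypothetical example: flag high-value orders for review.
-- Table and column names (orders, total, review_flag) are illustrative only.
CREATE OR REPLACE PROCEDURE flag_large_orders (p_threshold IN NUMBER) AS
  v_flagged INTEGER := 0;                               -- scoped local variable
BEGIN
  FOR rec IN (SELECT order_id, total FROM orders) LOOP  -- iteration over a query
    IF rec.total > p_threshold THEN                     -- conditional test
      UPDATE orders                                     -- DML dispatched to the RDBMS
         SET review_flag = 'Y'
       WHERE order_id = rec.order_id;
      v_flagged := v_flagged + 1;                       -- variable dereferencing and assignment
    END IF;
  END LOOP;
  COMMIT;
END;
/
```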
The goal of the PL/SQL support, the company says, is to reduce the time and cost for companies to move big data workloads off Oracle databases.
Describing a key use case for the technology, Monte Zweben, CEO of Splice Machine, said that many customers are developing artificial intelligence applications that deploy machine learning models, but that this real-time intelligence can be difficult to achieve in PL/SQL applications because of the time it takes to move data out of the engine into an analytical framework via ETL. Now, he said, with the PL/SQL capability, applications can run their native application logic while running new machine learning processes on Splice Machine's RDBMS, which is powered by Apache Spark and Apache Spark MLlib.
For more information, go to www.splicemachine.com.