Qubole, a provider of cloud-based big data as a service, and Snowflake Computing, which offers a data warehouse built for the cloud, are partnering to enable customers to use Apache Spark in Qubole with data stored in Snowflake.
Businesses are increasingly looking to build cloud-based data infrastructure to gain agility, broaden their analytics capabilities, and lower total cost of ownership. At the same time, moving data warehouses to the cloud and building data lakes can dramatically improve an organization's performance, concurrency and simplicity. With this announcement, the companies say, organizations can get the best of both worlds.
With the new integration between the two cloud services, data teams can build, train and put into production powerful machine learning (ML) and artificial intelligence (AI) models in Spark using data stored in Snowflake.
The integration also enables data engineers to use Qubole to read and write data in Snowflake for data preparation tasks such as wrangling, augmentation and advanced ETL to refine existing Snowflake data sets.
Qubole provides an enterprise cloud data platform that enables organizations to operationalize data lakes through infrastructure automation and cloud optimizations for open source engines.
The new partnership automates the connection between Qubole and Snowflake, eliminating the complexity of manually configuring Spark and reducing the time needed to train and deploy ML and AI models with Snowflake data.
This integration also provides one-time, secure credential management between Qubole and Snowflake, and access to Snowflake data from Scala and Python via Qubole's DataFrame API for Apache Spark.
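As a rough illustration of what DataFrame-based access to Snowflake data looks like, the sketch below uses the open-source Spark-Snowflake connector's option names in PySpark. The account URL, credentials, database, warehouse and table names are placeholders, and the exact convenience APIs Qubole layers on top (such as its managed credential handling) are not shown here.

```python
# Hypothetical sketch: reading a Snowflake table into a Spark DataFrame
# with the Spark-Snowflake connector. All connection values below are
# placeholders, not real endpoints or credentials.

SNOWFLAKE_SOURCE = "net.snowflake.spark.snowflake"

def snowflake_options(url, user, password, database, schema, warehouse):
    """Assemble the option map the connector's Spark reader/writer expects."""
    return {
        "sfURL": url,
        "sfUser": user,
        "sfPassword": password,
        "sfDatabase": database,
        "sfSchema": schema,
        "sfWarehouse": warehouse,
    }

# In a live Spark session on Qubole, a read and a write-back would look
# roughly like this (commented out because it needs a real cluster and
# Snowflake account):
#
#   opts = snowflake_options("acme.snowflakecomputing.com", "analyst",
#                            "***", "SALES_DB", "PUBLIC", "ETL_WH")
#   df = (spark.read
#             .format(SNOWFLAKE_SOURCE)
#             .options(**opts)
#             .option("dbtable", "ORDERS")   # or .option("query", "...")
#             .load())
#
#   # ...Spark transformations, feature engineering, ML training...
#
#   (df.write
#        .format(SNOWFLAKE_SOURCE)
#        .options(**opts)
#        .option("dbtable", "ORDERS_CLEAN")
#        .mode("overwrite")
#        .save())
```

With Qubole's automated integration, the credential plumbing shown in `snowflake_options` is handled once by the platform rather than repeated in every notebook or job.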
Find out more at www.snowflake.net and www.qubole.com.