Talend, a leading provider of open source data management software, has announced the availability of Hadoop-based integration for Big Data via its enterprise platform, Talend Integration Suite. According to the vendor, Talend Integration Suite is the first enterprise open source data integration solution to natively support Hadoop's distributed computing architecture for high performance and highly scalable data integration.
Enterprises face challenges in handling ever-increasing amounts of data. These problems, driven by both data volume and usage demands, have pushed IT managers and developers toward solutions such as Apache Hadoop for Big Data processing. Hadoop, an open source software platform, meets the needs of organizations requiring sophisticated analysis and transformation of petabytes of structured and complex data. With Talend Integration Suite's native support of Hadoop, users can now access, manipulate and store large volumes of data in Hadoop and benefit from high-performance, cost-optimized, and scalable data integration.
Talend Integration Suite offers native support for both the Hadoop Distributed File System (HDFS), which provides access to Hadoop data, and Hive, the data warehouse infrastructure built on top of Hadoop, for structured and complex data processing. Talend Integration Suite leverages Hadoop's MapReduce architecture for highly distributed data processing. Talend's Hadoop components generate native Hadoop code and run data transformations directly inside the Hive database for maximum scalability. This makes it easy for customers to combine Hadoop-based integration with traditional data integration processes, whether ETL- or ELT-based, for superior overall performance.
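Talend's generated Hadoop code is not shown here, but the MapReduce pattern the paragraph above refers to can be sketched minimally in plain Python (an illustrative local simulation, not Talend's or Hadoop's actual implementation): a map phase emits key-value pairs, a shuffle groups values by key, and a reduce phase aggregates each group. Hadoop runs the same three stages distributed across a cluster.

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (word, 1) pair for every word in every input record.
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key.
    # In Hadoop this grouping happens across the cluster's nodes.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's list of values into one result.
    return {key: sum(values) for key, values in groups.items()}

# Sample input standing in for records read from HDFS.
records = ["big data needs big tools", "hadoop handles big data"]
counts = reduce_phase(shuffle(map_phase(records)))
print(counts["big"])   # 3
print(counts["data"])  # 2
```

Because each map call and each reduce call depends only on its own input, Hadoop can run them in parallel on different machines, which is what gives the architecture its scalability.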
"Apache Hadoop has rapidly become a model reference for highly scalable and cost-effective data analysis and transformation, proven by its early enterprise-wide success," said Fabrice Bonan, co-founder and COO, Talend. "Talend's native Hadoop support now allows our customers to tap into this powerful architecture for processing massive data volumes."
For more information, visit the Talend website.