IBM scientists have devised a solution to the huge volumes of data now slowing down organizations' systems. At the recent Supercomputing 2010 conference, they unveiled details of a new storage architecture designed to convert terabytes of raw information into actionable insights twice as fast as previously possible.
The new architecture will shave hours off complex computations without requiring heavy infrastructure investment, IBM says. Created at IBM Research - Almaden, the General Parallel File System-Shared Nothing Cluster (GPFS-SNC) architecture is designed to deliver higher availability through advanced clustering technologies, dynamic file system management and advanced data replication techniques.
By "sharing nothing," new levels of availability, performance and scaling become achievable. GPFS-SNC is a distributed computing architecture in which each node is self-sufficient; tasks are divided among these independent computers, and no node waits on another. One area that may see early adoption is large financial institutions, which will be able to run complex algorithms to analyze risk across petabytes of data. With billions of files spread across multiple computing platforms and stored around the world, these calculations require significant IT resources and cost because of their complexity. Under the GPFS-SNC design, running such complex analytics workloads could become far more efficient, since the design provides a common file system and namespace across disparate computing platforms, streamlining the process and reducing disk space.
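The shared-nothing idea described above can be illustrated with a short sketch. This is a hypothetical Python example, not the GPFS-SNC API: data is partitioned into shards, each "node" (here, a worker process) computes a partial result over only its own shard with no shared state, and the partials are combined at the end. The function and variable names (`partition`, `local_risk`, `total_risk`) are illustrative assumptions.

```python
# Hypothetical sketch of the shared-nothing pattern: each worker owns its
# shard of the data and computes independently; no worker waits on another.
from concurrent.futures import ProcessPoolExecutor

def local_risk(shard):
    # Each node computes a partial aggregate over only its own data.
    return sum(abs(x) for x in shard)

def partition(records, n_nodes):
    # Distribute records round-robin across self-sufficient nodes.
    shards = [[] for _ in range(n_nodes)]
    for i, record in enumerate(records):
        shards[i % n_nodes].append(record)
    return shards

def total_risk(records, n_nodes=4):
    # Fan out: each shard is processed in its own process, independently.
    shards = partition(records, n_nodes)
    with ProcessPoolExecutor(max_workers=n_nodes) as pool:
        partials = pool.map(local_risk, shards)
    # Fan in: combine the partial results once all nodes finish.
    return sum(partials)
```

The key property is that no step inside `local_risk` touches data owned by another node, which is what lets the independent computers proceed without coordinating with one another.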
To stay on top of all the trends, subscribe to Database Trends and Applications magazine.