The rise of digital organizations means greater attack surfaces for cybercriminals and more data at risk. “Data is increasingly being stored across multiple clouds, SaaS applications, and the edge,” said W. Curtis Preston, chief technical evangelist at Druva. “This means the data center is no longer the center of data, which is creating more complexity and risk. Organizations have been faced with trying to keep pace with a relentless volume of cyberattacks. These challenges are driving a need to build more resilient architectures that can keep data secure and ensure recovery is fast and easy to manage when an attack does hit.”
The movement of data between centralized and decentralized environments is another factor driving the need for resiliency. Data gravity is constantly shifting from centralized to decentralized, said Tapan Patel, senior manager for data management at SAS. “This creates barriers and stresses as organizations struggle with conflicting data connectivity, integration, and governance needs. With growing data complexities, delivering reliable and trusted data has become more time-consuming, expensive, labor-intensive, and siloed.”
Data resiliency is essential to avoiding cascading failures, which are an ongoing threat to today’s highly networked enterprises. “This is when a small, local failure propagates through different types of dependencies and takes down an entire system,” said Sush Apshankar, principal consultant for cognitive and analytics at ISG. For example, he said, overload occurs “when one cluster goes down and all its traffic moves to another cluster. A resilient architecture enables the business to be more self-sufficient and quicker to respond to changes.”
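One common way to contain the failover-overload scenario Apshankar describes is capacity-aware routing with load shedding, so a surviving cluster is never handed more traffic than it was sized for. The following Python sketch is purely illustrative and not drawn from any vendor quoted here; the cluster names, capacities, and request counts are hypothetical placeholders.

```python
# Illustrative sketch: capacity-aware routing that sheds excess load when one
# cluster fails, rather than letting the surviving cluster be overwhelmed.
# Cluster names and capacity figures are hypothetical.

import random

CLUSTERS = {
    "cluster-a": {"healthy": True, "capacity": 100, "in_flight": 0},
    "cluster-b": {"healthy": True, "capacity": 100, "in_flight": 0},
}

def route_request() -> str:
    """Pick a healthy cluster with spare capacity, or shed the request."""
    candidates = [
        name for name, c in CLUSTERS.items()
        if c["healthy"] and c["in_flight"] < c["capacity"]
    ]
    if not candidates:
        # Shedding (or queuing) the overflow keeps the remaining cluster
        # from collapsing under traffic it was never sized to carry.
        return "shed: retry later"
    chosen = random.choice(candidates)
    CLUSTERS[chosen]["in_flight"] += 1
    return f"routed to {chosen}"

# Simulate cluster-a failing: its traffic moves to cluster-b only up to
# cluster-b's capacity; the remainder is shed instead of cascading.
CLUSTERS["cluster-a"]["healthy"] = False
for _ in range(150):
    print(route_request())
```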
THE COMPONENTS OF RESILIENCY
It’s important to look at data resiliency from both a user and a technology perspective. “From a user’s perspective, resiliency is typically defined by how well an application continues to perform in the event of unplanned interruptions,” said Carsten Baumann, director of strategic initiatives and solution architect at Schneider Electric. “From a hardware perspective, the network, storage, and compute platforms must be operational or provide redundancy levels that ensure the applications continue to perform. Looking further down the hardware stack, power is of the utmost criticality. Without it, none of the required services can be offered. These fundamental requirements are often overlooked.”
Support for real-time computing needs to be at the heart of data resiliency initiatives. “As users adopt online business apps, mobile apps, and streaming apps, harnessing real-time data is a top analytics requirement,” said Patel. The next generation of data architecture “needs to support real-time data processing by default. Traditional data architectures tend to lock up data assets in repositories, slowing down insights and application development.”
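As a small illustration of what “real-time by default” can mean in practice, the sketch below acts on each event the moment it arrives instead of landing it in a repository and analyzing it later. The event source, sensor name, and alert threshold are hypothetical stand-ins, not anything named in the article.

```python
# Illustrative sketch: processing events as they arrive rather than in
# periodic batches. The event source and threshold are placeholders.

import time
from typing import Iterator

def event_stream() -> Iterator[dict]:
    """Stand-in for a real stream source (e.g., a message queue consumer)."""
    for i in range(5):
        yield {"sensor": "pump-7", "reading": 40 + i * 5}
        time.sleep(0.1)  # simulate gaps between arrivals

def process(events: Iterator[dict], threshold: float = 50.0) -> None:
    """Act on each event immediately instead of waiting for a batch load."""
    for event in events:
        if event["reading"] > threshold:
            print(f"ALERT: {event['sensor']} reading {event['reading']}")
        else:
            print(f"ok: {event['sensor']} reading {event['reading']}")

process(event_stream())
```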
A resilient data architecture “must be built around the ability to provide continuous service—no matter what,” said Carlos Rivero, vice president, data and analytics at GCOM, and former CDO for the state of Virginia. “This means that opportunities for networking or connectivity failures must be minimized as they are the most common source of service interruptions.” Rivero also pointed to the importance of managing cloud for resiliency, noting that “care must be taken to choose multiple availability zones for backups and data storage, and these different zones must span multiple geographical regions with underlying infrastructure that is both mutually exclusive and redundant.”
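Rivero’s point about backups spanning mutually exclusive regions can be made concrete with a small sketch. The Python example below, using AWS S3 via boto3 purely as an example (the article names no specific provider), writes the same backup artifact to buckets in two separate regions; the bucket names, regions, and file are hypothetical, and in practice a managed cross-region replication feature would usually do this work rather than an explicit double write.

```python
# Minimal sketch: copy one backup artifact to storage in two independent
# regions so no single region holds the only copy. All names are placeholders.

import boto3

BACKUP_FILE = "nightly_backup.tar.gz"  # hypothetical local artifact
TARGETS = [
    {"region": "us-east-1", "bucket": "example-backups-use1"},  # placeholder
    {"region": "eu-west-1", "bucket": "example-backups-euw1"},  # placeholder
]

def replicate_backup(path: str) -> None:
    """Upload the backup to every target region."""
    for target in TARGETS:
        s3 = boto3.client("s3", region_name=target["region"])
        s3.upload_file(path, target["bucket"], f"backups/{path}")
        print(f"copied {path} to {target['bucket']} ({target['region']})")

if __name__ == "__main__":
    replicate_backup(BACKUP_FILE)
```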
Underestimating “region-level failure protection in public clouds can be problematic,” agreed Ranganathan. “Regions can fail for many reasons, such as snowstorms, building fires, and extended power outages.”