Cloud services can help alleviate many of these challenges. However, “too many organizations are not able to take advantage of native cloud services to help improve their data architecture,” said Rivero. “Furthermore, legacy technologies are a significant obstacle to resilience. Rather than using a lift-and-shift approach to modernization and cloud adoption, organizations would be wise to document which business processes contribute to quality data used to deliver their most critical services.” In addition, he added, many databases “that offer resilience out of the box may not have the desired feature set. It’s tempting to supplement existing databases with one that has a limited feature set but is resilient. This approach usually impacts the agility of feature development severely over time.”
Skills availability presents another challenge, requiring “finding and retaining the talent needed for data engineering, data governance, and data science,” said O’Connor. “Most data professionals still spend their time preparing data for use in analytical use cases, regardless of their role.” Data analysts, data engineers, and developers “are facing all sorts of bottlenecks in building and generating collaborative data pipelines,” said Patel. “As business rules become more dynamic, traditional data integration patterns are not agile enough to meet the new demands of modern users and applications.”
A NEW ERA
Industry leaders and experts say this is a new era for data resiliency, one in which it has gained the full attention of the business. That’s why it’s important to bring data resiliency initiatives to the forefront of corporate IT agendas. The following is their advice for developing a highly resilient data architecture.
Align with the business. To ensure greater data resiliency, industry leaders and experts emphasize keeping such efforts closely aligned with the business. This alignment helps establish priorities for investing staff time and technology in boosting resiliency. “Develop a resiliency strategy that fits the needs of the enterprise,” said Baumann. “Being clear on the business impact when applications fail can help to establish a realistic budget.” This should be built upon “a use-case approach to data architecture,” Patel advised. “It may be pragmatic to have a framework to add a new project—like adding a data lake to an existing data warehouse—to meet demands and build on established strengths.”
Automate as much as possible. Automation needs to be a key part of all resilient data architectures. A resilient data architecture “should be completely automated and continuously monitored, preferably by a system that can proactively identify, alert, and respond to problems in the data infrastructure,” said Preston. “Backups should be stored in a way that protects from malicious activity, accidental deletion, or other damage. This should include encrypting all backups in transit and at rest using military-grade encryption and storing multiple copies of backups with at least one copy air-gapped from the production environment. Ultimately, this will help position IT teams to protect data and rapidly recover in the event of a mass deletion or cyberattack without ever having to pay the ransom.” Manual data management “is a thing of the past, and no organization that refuses to adopt automation tools will stand a chance of competing in our data-driven economy,” said Varshney.
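To make Preston’s checklist concrete, here is a minimal Python sketch of one automated step: encrypting a backup at rest and writing multiple copies. It is an illustration under stated assumptions, not a production tool: BACKUP_SRC and the destination paths are hypothetical, the cryptography package’s Fernet scheme (authenticated AES) stands in for whatever encryption a backup platform provides, and a true air-gapped copy would be written out of band rather than over the network.

```python
# Minimal sketch of an automated, encrypted backup step. Assumes the
# "cryptography" package is installed; all paths below are hypothetical.
from pathlib import Path
from cryptography.fernet import Fernet

BACKUP_SRC = Path("/var/backups/db_dump.sql")   # hypothetical dump to protect
DESTINATIONS = [
    Path("/mnt/onsite/db_dump.sql.enc"),        # fast local restore copy
    Path("/mnt/offsite/db_dump.sql.enc"),       # second copy, separate site
    # A third copy would go to air-gapped media out of band, so malware
    # on the production network cannot reach it.
]

def encrypt_and_replicate(src: Path, destinations: list[Path]) -> bytes:
    """Encrypt the backup at rest and write multiple copies.

    Returns the key, which must be stored separately from the backups
    (e.g., in a key management service), or the copies are unrecoverable.
    """
    key = Fernet.generate_key()
    # Fernet provides authenticated encryption (AES-128-CBC with HMAC),
    # covering the "encrypted at rest" requirement for this sketch.
    ciphertext = Fernet(key).encrypt(src.read_bytes())
    for dest in destinations:
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_bytes(ciphertext)
    return key

if __name__ == "__main__":
    key = encrypt_and_replicate(BACKUP_SRC, DESTINATIONS)
    print(f"Backup replicated to {len(DESTINATIONS)} locations")
```

Whatever tooling is used, the design point is the same: the encryption key must live separately from the backup copies, or the copies offer no protection against an attacker who reaches the storage.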