Data Integrity with DevOps


Companies are competing to become more data-driven in pursuit of market leadership. As a result, they are generating, collecting, analyzing, and sharing more data, and they are adopting the DevOps methodology to shorten development cycles and release software faster.

However, the applications created using DevOps are only as good as the quality of the data they produce. Maintaining data integrity through every stage of the data lifecycle is more difficult today because data volumes are multiplying rapidly and the data is stored in many types of databases and formats, on-premises and in the cloud.

Maintaining data hygiene is not one solution but a combination of disciplines spanning all teams and tools, each doing its part to improve data integrity and database performance. Each part of the data ecosystem needs to work in parallel, with transparency, and must also be interoperable to prevent the black holes that lead to data corruption and processing bottlenecks.

Keeping Data Clean

The goal of DevOps is to improve the coordination between operations and development to produce better products faster. In short, more code must be shipped faster and at a higher quality.

But at the same time, the data ecosystem has become more complex. Different types of databases from multiple vendors can be found in different departments, regions, and subsidiaries of the same organization. Data can be stored in SQL, NoSQL, and cloud-native databases, data warehouses, and data lakes, in formats such as text, binary, XML, JSON, and more.

One way to ensure that data integrity is maintained during rapid development is to create an integrated repository of data assets that links data from different sources. This way, applications have a single path to search and query data. In addition to linking together different databases, a data repository includes a built-in API to store and manage metadata about entities, modules, and table definitions. This information can go beyond simple definitions of the various data structures; typical repositories store up to hundreds of pieces of information about each data structure.
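
To make the idea concrete, here is a minimal sketch in Python of such a repository: a registry that maps logical dataset names to their physical sources and metadata, so applications resolve data through one path instead of hard-coding per-database connections. The class and field names are illustrative assumptions, not any particular product's API.

```python
from dataclasses import dataclass, field


@dataclass
class DataSource:
    name: str          # logical dataset name, e.g., "customer_orders"
    system: str        # e.g., "postgres", "mongodb", "s3-data-lake"
    location: str      # connection string or path
    fmt: str           # "table", "json", "parquet", ...
    metadata: dict = field(default_factory=dict)  # table definitions, owners, etc.


class Repository:
    """Single lookup path for data assets across heterogeneous stores."""

    def __init__(self):
        self._sources: dict[str, DataSource] = {}

    def register(self, source: DataSource) -> None:
        self._sources[source.name] = source

    def resolve(self, name: str) -> DataSource:
        # Applications ask the repository where a dataset lives instead
        # of embedding vendor-specific connection details.
        return self._sources[name]


repo = Repository()
repo.register(DataSource("customer_orders", "postgres",
                         "postgresql://warehouse/orders", "table",
                         {"owner": "sales-ops", "pii": True}))
print(repo.resolve("customer_orders").location)
```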

To maintain data integrity, it is essential to keep a record of every aspect of data storage, access, retrieval, and sharing. DevOps developers are often pressured to meet quick release times as part of rapid development, so there is a risk that data movements are not fully documented. It is important to run a full fact-check process covering all the details: the origin of the data, where it is stored, checks on data integrity, who owns it, which queries or analytics use the data, how often it is accessed, and the actual level of risk if, for any reason, the data is compromised. It is not enough to log this information once; it must be periodically updated and checked for accuracy.
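
As an illustration, a catalog record along these lines might look like the following sketch. The field names are assumptions chosen for this example, and a simple staleness check stands in for the periodic review.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class DataAssetRecord:
    name: str
    origin: str               # where the data came from
    storage_location: str     # where it is stored now
    owner: str                # accountable team or person
    consumers: list[str]      # queries/analytics that use the data
    access_frequency: str     # e.g., "hourly", "monthly"
    risk_if_compromised: str  # e.g., "low", "high"
    last_verified: datetime

    def needs_review(self, max_age_days: int = 90) -> bool:
        # Logging once is not enough; flag entries for periodic re-checks.
        return datetime.utcnow() - self.last_verified > timedelta(days=max_age_days)


record = DataAssetRecord(
    "customer_orders", "order-service events", "warehouse.orders",
    "sales-ops", ["daily_revenue_report"], "hourly", "high",
    last_verified=datetime(2023, 1, 15),
)
if record.needs_review():
    print(f"{record.name}: metadata is stale, schedule re-verification")
```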

In addition, it’s necessary to keep pace with the constantly changing regulations regarding data privacy (e.g., GDPR), security (e.g., HIPAA, PCI-DSS, and GLBA), and (in the case of financial institutions) controls against money laundering and terrorism such as Know Your Customer (KYC). It isn’t enough to know about federal regulations. It is also important to account for regional policies, such as the California Consumer Privacy Act (CCPA), depending on where the data is stored. With the rise of new procedures and standards, all compliance policies should be understood and cross-checked by developers to make sure that the data is handled properly and that all applications are in compliance.
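
As a deliberately simplified illustration (real applicability depends on the data subjects and the industry, not just on where the data is stored), a cross-check of storage regions against the regulations mentioned above might be sketched like this:

```python
# The region-to-regulation mapping below is an assumption for demonstration only.
APPLICABLE_RULES = {
    "eu": ["GDPR"],
    "us-ca": ["CCPA"],
    "us": ["HIPAA", "PCI-DSS", "GLBA"],  # sector-specific; shown for illustration
}


def regulations_for(storage_regions: list[str]) -> set[str]:
    rules: set[str] = set()
    for region in storage_regions:
        rules.update(APPLICABLE_RULES.get(region, []))
    return rules


# A dataset replicated to EU and Californian stores picks up both regimes.
print(regulations_for(["eu", "us-ca"]))  # {'GDPR', 'CCPA'}
```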

Data should be cleaned often. Preferably, this should start from the very beginning, when the data is loaded. It is a good idea to run regular tests to identify errors, validate accuracy, and periodically scrub to remove duplicate data. All applications developed throughout the organization should follow the same procedures to keep bad data from leaking in. 
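
A minimal sketch of what load-time cleaning can look like follows; the specific checks (required fields, typed values, duplicate keys) are illustrative assumptions.

```python
def clean_batch(rows: list[dict]) -> tuple[list[dict], list[dict]]:
    """Validate and de-duplicate records before they enter the store."""
    seen_ids: set = set()
    good, rejected = [], []
    for row in rows:
        # Validate: required fields must be present and well-typed.
        if not row.get("id") or not isinstance(row.get("amount"), (int, float)):
            rejected.append(row)
            continue
        # Scrub: drop duplicates on the primary key.
        if row["id"] in seen_ids:
            rejected.append(row)
            continue
        seen_ids.add(row["id"])
        good.append(row)
    return good, rejected


batch = [{"id": 1, "amount": 9.5}, {"id": 1, "amount": 9.5}, {"id": 2}]
loaded, errors = clean_batch(batch)
print(len(loaded), "loaded,", len(errors), "rejected")  # 1 loaded, 2 rejected
```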

Version control is another important aspect of data integrity. A database administrator should have access to all versions of the database, including results from load testing, limit testing, and scalability testing, and then give full feedback to the relevant people involved with product delivery regarding any data integrity issues. This is necessary in order to maintain full control of data quality before an application version is released.
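
One common way to implement this is to treat schema changes as ordered, versioned migrations and to record which versions have been applied to each environment. The sketch below uses an in-memory stand-in for the database, and the migration names are hypothetical.

```python
MIGRATIONS = [
    ("001", "CREATE TABLE orders (id INT PRIMARY KEY, amount DECIMAL)"),
    ("002", "ALTER TABLE orders ADD COLUMN created_at TIMESTAMP"),
]


def apply_pending(db_state: dict) -> list[str]:
    """Apply any migrations the database has not yet seen, in order."""
    applied = db_state.setdefault("schema_versions", [])
    newly_applied = []
    for version, ddl in MIGRATIONS:
        if version not in applied:
            # A real system would execute the DDL transactionally and
            # record results from load, limit, and scalability testing.
            applied.append(version)
            newly_applied.append(version)
    return newly_applied


db = {}
print(apply_pending(db))  # ['001', '002'] on a fresh database
print(apply_pending(db))  # [] -- already up to date
```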

Maintaining Database Performance

DevOps teams can face challenges in building a high-performance, data-centric application. Ensuring an efficient ETL (extract, transform, load) design is one of the most important, yet often overlooked, aspects of developing an application, especially as the quantity and variety of data that must be loaded keep growing.
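
To illustrate the separation of concerns, here is a minimal ETL sketch with a transform step that can be tested in isolation and loading done in batches to cope with growing volumes. The source data and the load target are hypothetical placeholders.

```python
from typing import Iterable, Iterator


def extract(source_rows: Iterable[dict]) -> Iterator[dict]:
    yield from source_rows  # stand-in for reading a database, API, or file


def transform(row: dict) -> dict:
    # Normalize types and formats up front so downstream stores stay consistent.
    return {"id": int(row["id"]), "amount_usd": round(float(row["amount"]), 2)}


def load(rows: Iterator[dict], batch_size: int = 1000) -> int:
    total, batch = 0, []
    for row in rows:
        batch.append(row)
        if len(batch) >= batch_size:
            total += len(batch)  # stand-in for a bulk INSERT
            batch.clear()
    total += len(batch)
    return total


source = [{"id": "1", "amount": "9.50"}, {"id": "2", "amount": "3.25"}]
print(load(map(transform, extract(source))), "rows loaded")
```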

This is becoming even more crucial with the rapid increase in the deployment of smart sensors and the growth of online transactions with customers, partners, and suppliers throughout the supply chain. Scalability to handle higher data volumes is essential to keep systems running smoothly. If queries or analytics processes take too long, the resulting insights can be irrelevant by the time they arrive.

There are solutions that can boost computing power to keep pace with the rapid rise in data volumes. Data can be stored on acceleration platforms that utilize the power of a GPU database to more rapidly access and analyze massive amounts of data. A GPU data accelerator can process multi-billion-row datasets in seconds so that applications from machine learning to geospatial analysis can run complex advanced queries that would take days to run using standard CPUs. 
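
As one illustration of this pattern, the RAPIDS cuDF library mirrors the pandas API and runs dataframe operations on an NVIDIA GPU. The sketch below assumes RAPIDS is installed and a GPU is available, and the dataset path is a hypothetical example.

```python
import cudf  # RAPIDS GPU dataframe library; requires an NVIDIA GPU

# Columnar formats such as Parquet suit very large scans well.
df = cudf.read_parquet("events.parquet")  # hypothetical large dataset

# The aggregation runs on the GPU with essentially the same code
# a pandas (CPU) version would use.
summary = df.groupby("region")["amount"].sum()
print(summary.head())
```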

Database performance is also measured by agility. DevOps is all about being able to respond quickly to business changes. The most basic way to accommodate this need is to ensure full interoperability between the databases in the organization, including those that are provided by different vendors and developed using different coding languages. This also applies to the ability to ingest and process data from different private and public clouds.
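
One common approach is to reach different vendors' databases through a single abstraction layer so that application code is not tied to any one engine. The sketch below uses SQLAlchemy, with placeholder connection URLs.

```python
from sqlalchemy import create_engine, text

# Engines for different vendors behind one interface (URLs are placeholders).
ENGINES = {
    "orders": create_engine("postgresql://warehouse/orders"),
    "legacy": create_engine("mysql+pymysql://erp/legacy"),
    "local": create_engine("sqlite:///cache.db"),
}


def row_count(store: str, table: str) -> int:
    # The same call works against any registered engine.
    with ENGINES[store].connect() as conn:
        return conn.execute(text(f"SELECT COUNT(*) FROM {table}")).scalar_one()
```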

The same level of interoperability needs to apply to data exchanges with third parties such as customers, partners, and suppliers. Wherever possible, databases should use certified connectors so that data exchanged with outside databases is checked for accuracy, privacy, and security with the same level of diligence.

With DevOps, applications typically have rapid development cycles, and new versions are developed and released at a fast pace. All of the diligence regarding data integrity must be applied at every stage of the application lifecycle, including when a new query or report is created, when software updates and patches are applied, and when enhancements and bug fixes are released.
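
For example, a lightweight integrity test that runs in the CI pipeline after every schema change, patch, or new report might be sketched as follows; the checks and table names are illustrative.

```python
import sqlite3


def test_orders_integrity():
    # In-memory stand-in for the database under test.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL NOT NULL)")
    conn.execute("INSERT INTO orders VALUES (1, 9.5), (2, 3.25)")

    # No NULL amounts should survive a release.
    nulls = conn.execute(
        "SELECT COUNT(*) FROM orders WHERE amount IS NULL").fetchone()[0]
    assert nulls == 0
    # Primary keys must stay unique; duplicates indicate a bad migration.
    total, distinct = conn.execute(
        "SELECT COUNT(*), COUNT(DISTINCT id) FROM orders").fetchone()
    assert total == distinct


test_orders_integrity()
print("integrity checks passed")
```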

Stay in Control with Data Discipline

DevOps processes are challenged by a larger number of changes and releases over a shorter period of time, often carried out by distributed work teams. This increases the risk of losing control over quality, including data governance. It is critical that these recommendations for ensuring data integrity are followed by everyone on every development team, especially when DBA functions are assigned to other team members as part of accelerating product release cycles.

Developing applications rapidly with DevOps requires data discipline. From integrating and managing data attributes in data repositories to tracking data from its source to its final destination, all the necessary controls need to be in place to ensure that the data is accurate and protected. With all the different types of data stores, data formats, and movements of data to the cloud, creating and maintaining one complete, up-to-date picture of the data is a challenge. One thing is certain: with data volumes multiplying and more and varied types of data stores being added, keeping data accessible and analytics-ready in a DevOps continuous development environment is not going to get any easier. However, having the right tools and technologies to manage huge volumes of data distributed across different data stores can help.

