The hype, and equally the perplexity, surrounding data fabric and data mesh architectures has led a variety of organizations to ask whether these approaches might remediate the data problems running rampant within their infrastructures. Although the two are often framed as competing choices, what if, through proper understanding, they can be woven together into a single, effective architecture?
Jeff Fried, director of product management at InterSystems, led the Data Summit session, “Viewing Data Fabric and Data Mesh as Complementary,” exploring how data fabric and data mesh can be lifted beyond an “either/or” choice; united, these paradigms can propel enterprises into becoming accessible, comprehensively data-driven organizations.
The annual Data Summit conference returned to Boston, May 10-11, 2023, with pre-conference workshops on May 9.
The confusion surrounding data fabric and data mesh is tied directly to their popularization and, in turn, their misunderstanding. As the Eckerson Group puts it, “If nothing else, the data mesh is now a huge marketing bonanza for vendors who want to hitch themselves to a hot industry buzzword,” an observation that applies equally to data fabric.
Fried defined these terms as the following:
- A data mesh is an intentionally designed distributed data architecture, under centralized governance and standardization for interoperability, enabled by a shared and harmonized self-serve data infrastructure.
- A data fabric dynamically orchestrates disparate data sources intelligently and securely in a self-service manner, leveraging various data platforms to deliver integrated and trusted data to support various applications, analytics, and use cases.
As Fried pointed out, these definitions are virtually the same, which speaks to the value of combining them. “These are both cloths that you can put over your data infrastructure, but they’re not really at odds with each other,” explained Fried. “What they’re really doing is unifying the data plane by simplifying systems.”
The need for these strategies is the same: democratizing data accessibility, increasing productivity, reducing cycle time, optimizing cost and data integration, and improving communication between data managers and data consumers to create a collaborative culture.
Fried continued, explaining that neither data fabric nor data mesh is a product. “You can’t go out and buy either of these concepts; neither of them are purely technological,” he said.
Nonetheless, the two terms remain distinct, and the distinction matters. According to Fried, “The difference between data mesh and data fabric is all about governance.”
A data mesh, then, emphasizes the process view with distributed governance, while a data fabric emphasizes the technology view with centralized governance; however, both of these methods’ goals are to provide easy access to data across multiple technologies and platforms.
Fried explained the components of each strategy, underpinned by examples, architectural principles that guide their implementation, as well as their intrinsic benefits and challenges.
He also emphasized that these paradigms work effectively when the problems that an enterprise is attempting to solve—particularly when it comes to using data to drive decision-making—are consistently put in the foreground of concern.
“At the end of the day, the problems are the things to keep your eyes on. Are you solving the problems facing your organization?” he explained.
Many Data Summit 2023 presentations are available for review at https://www.dbta.com/DataSummit/2023/Presentations.aspx.