Data virtualization provides access to disparate data as abstracted, virtual data services. It allows organizations to access, virtually integrate, and deliver critical data to business intelligence and analytical tools.
In recent years, we have seen increased traction for data virtualization across analytical, operational, and data management uses: single-view enterprise and mobile applications; cloud and on-premises integration; cognitive, predictive, and big data analytics; data discovery and democratized information access that takes specialized, complex result sets and delivers them to business users as canonical data services; and more. Organizations are looking to reduce upfront costs, accelerate value on projects, and build an agile information architecture that makes it easier to access and consume fast-moving data from anywhere in the context of other business information. As a result, data virtualization, which is already a must-have capability for agile business intelligence and logical data warehousing, is evolving into a broader data management capability.
Similar to server, storage, and network virtualization, data virtualization simplifies how data is presented and managed for users, while employing technologies under the covers for abstraction, decoupling, performance optimization, and the efficient use and reuse of scalable resources. One of the key aspects of data virtualization that enables it to serve as an efficient common data layer is push-down optimization. Simply put, this is the ability to take a user's query to the data virtualization layer, delegate parts of it to the systems best able to process them, reassemble the results, and then present them to the user. Another key aspect is how it can quickly expose disparate data held in multiple systems through a virtual data layer for data discovery, which in turn helps governance and reduces duplication.
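To make push-down optimization more concrete, here is a minimal sketch of the idea in Python. The two sqlite3 "sources," their schemas, and the virtual_query function are hypothetical stand-ins; a real data virtualization platform delegates to many different engine types and optimizes far more aggressively.

```python
# Minimal sketch of push-down optimization in a federated query layer.
# Source names, schemas, and the sqlite3 "sources" below are illustrative only.
import sqlite3

# Two independent "source systems," each holding part of the data.
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE customers (id INTEGER, region TEXT)")
crm.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "EMEA"), (2, "APAC"), (3, "EMEA")])

erp = sqlite3.connect(":memory:")
erp.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
erp.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 120.0), (1, 80.0), (2, 200.0), (3, 50.0)])

def virtual_query(region):
    # Push the region filter down to the CRM source instead of pulling all rows.
    ids = [row[0] for row in crm.execute(
        "SELECT id FROM customers WHERE region = ?", (region,))]
    if not ids:
        return {}
    # Push the aggregation down to the ERP source for just those customers.
    placeholders = ",".join("?" * len(ids))
    totals = erp.execute(
        f"SELECT customer_id, SUM(amount) FROM orders "
        f"WHERE customer_id IN ({placeholders}) GROUP BY customer_id", ids)
    # Reassemble the partial results into one answer for the user.
    return dict(totals.fetchall())

print(virtual_query("EMEA"))  # {1: 200.0, 3: 50.0}
```

The point is that the filter and the aggregation each run where the data lives, and only small partial results travel to the virtualization layer for reassembly.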
Companies have brought in data virtualization for agile business intelligence: to create single views of master data for operational or regulatory reporting, share virtualized data services with partners, decouple data sources from applications during migrations, enable rapid integration after mergers, create analytical sandboxes that quickly bring together new sources of data, and more. In all of these cases, the value of data virtualization has to be proven in time-to-solution, in lower costs, and in how well it supports or integrates with existing technologies. However, many organizations across a wide range of vertical industries are beginning to see that, over the longer term, data virtualization provides numerous benefits. These include creating a flexible and responsive information infrastructure that is critical for business agility, and leveraging data assets to quickly capitalize on market opportunities as they emerge.
There’s no doubt that data virtualization is a powerful technology. It provides abstraction and decoupling between physical systems and locations on one side and logical data needs on the other, includes tools for semantic integration of data ranging from structured to highly unstructured, and enables intelligent caching and selective persistence to balance source and application performance. Data virtualization platforms can also introduce a layer of security, governance, and data services delivery capabilities that the original sources may not offer in a form suited to the intended new applications, or may not offer at all.
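As an illustration of intelligent caching and selective persistence, the sketch below wraps a virtual view in a simple time-to-live cache so that repeated queries are served without hitting the underlying source. The CachedVirtualView class, the fetch function, and the TTL values are assumptions for illustration, not any vendor's actual implementation.

```python
# Minimal sketch of caching in a virtual data layer: results for a virtual view
# are selectively persisted for a short TTL so repeated queries do not hit
# slow or heavily loaded sources. All names and values here are illustrative.
import time

class CachedVirtualView:
    def __init__(self, fetch_fn, ttl_seconds=300):
        self.fetch_fn = fetch_fn          # pulls fresh rows from the source(s)
        self.ttl = ttl_seconds            # how long cached results stay valid
        self._rows, self._fetched_at = None, 0.0

    def query(self):
        # Serve from cache while it is fresh; otherwise refresh from the source.
        if self._rows is None or time.time() - self._fetched_at > self.ttl:
            self._rows = self.fetch_fn()
            self._fetched_at = time.time()
        return self._rows

# Example: wrap an expensive source call behind the cached view.
view = CachedVirtualView(lambda: [("ACME", 42)], ttl_seconds=60)
print(view.query())   # first call hits the source
print(view.query())   # second call is served from cache
```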
As progressive organizations continue to adopt complete data virtualization platforms to meet their ongoing data needs, four key trends are bolstering this strong market demand:
1-The New Business Intelligence
While traditional business intelligence primarily involved reporting organized information in a waterfall approach from business requirements to IT implementation, the new business intelligence is different in many ways, and organizations are adapting quickly to this reality. The new BI recognizes three distinct patterns: discovery/investigative; analytics (including big data analytics); and reporting/traditional BI. Each of these patterns varies in the types of data involved, the technology and devices used (mobile, in-memory, etc.), and the context of the business need (regulatory, operational, strategic, etc.). Tech-savvy businesses are taking a more prominent role in the new BI, as they recognize that the tools, skills, and data needs for the three patterns are different, weakening the idea of consolidation and giving way to the idea that heterogeneous data and tools must coexist now and into the future.
2-Convergence of Operational and Informational; Separation of Logical and Physical
Time and cost have replaced technology as the factors that define where to draw the boundaries between operational and informational data and which systems should leverage them. With the convergence of real-time analysis of transactional streams and dynamic decision making that affects operational flows, and with the availability of improved technology, there is a stronger pull toward re-integrating both data types. In doing so, organizations can make more operational use of BI, gain strategic insights, and feed intelligence gathered during operational processes back into strategy. A corollary benefit of this convergence is that organizations are now focusing more on logical and semantic issues (i.e., business integration) rather than on the physical layer (i.e., technology integration). This is the reason for the rise of virtualization (data, network, systems) to decouple and abstract the physical layer from the business layer.
3-The Evolution of Broad Spectrum Data Virtualization: ROI Now, Unified Architecture for Future
Broad spectrum data virtualization is the ability of a unified, logical common data layer to support a broad spectrum of informational, operational, and data management use cases. Organizations increasingly recognize that these uses can share the same common data layer, and they are moving their architectures in that direction. However, this change can't happen all at once, and organizations should crawl before they walk, delivering tactical ROI first.
4-Turning Data into “Information as an Asset” Moves to the C-Suite
While organizations have long sought to maximize the value of their data, there is now a convergence of forces among top executives as they seek to maximize the value and use of information for the business, from the strategic to the utterly tactical. The CEO and COO want to exploit new opportunities, reduce costs, improve customer service and value, and reduce risk. The CFO is tired of paying for data assets and not seeing them used to their fullest potential; CFOs are now measuring usage and pushing for a demonstrable return on their data asset investments. Finally, CIOs have found themselves on the defensive, largely because they have been unable to keep up with the business, and are seeking to become strategic again by delivering “ahead of” the business’s needs.
In an uncertain world and an ever-changing market, only businesses that react quickly and insightfully can thrive. Yet the ability to react rapidly requires organizations to have access to up-to-the-minute information and the ability to change direction with ease. Making insightful decisions requires a broad view of the current situation and its full historical context, which, needless to say, is a lot of diverse information. Using data virtualization, organizations of any size and across any vertical industry can integrate information beyond the bounds of traditional business intelligence needs and mine their data assets for greater overall value.
About the Author
Suresh Chandrasekaran, senior vice president at Denodo Technologies, is responsible for global strategic marketing and growth. Before Denodo, Chandrasekaran served in executive roles as general manager and VP of product management and marketing at leading Web and enterprise software companies, and as a management consultant at Booz Allen & Hamilton.