The AI Era: Crafting the Perfect Environment for AI to Thrive


All of the hype and high hopes around AI in recent years have focused on compelling use cases, from creating new medicines to taking over customer service. However, all AI efforts will be nonstarters without well-governed, well-sourced data. That is the leading challenge for enterprises in the competitive economy of 2025.

Achieving satisfactory levels of data governance to drive AI is an issue for many organizations, according to a survey of 220 business and IT professionals by Quest Software and Enterprise Strategy Group. Bringing data and governance to an AI-ready state was cited by 33% of respondents as one of their top three hurdles, along with understanding the quality of source data and finding, identifying, and harvesting data assets.

The respondents reported challenges with data mapping, data lineage, and data policies as applied to emerging AI models and data. A separate survey of 1,000 IT executives by Presidio found that 86% reported data-related barriers, such as difficulties in gaining meaningful insights and issues with real-time data access. These barriers are slowing AI investment plans, the study’s authors pointed out.

“Much of our $80-billion-plus industry is currently built on dated relational architecture that struggles to meet the demands of modern applications,” said Mindy Lieberman, CIO at MongoDB. “As the pace of AI-driven innovation continues to accelerate, organizations must modernize their applications to not only keep pace but surpass evolving expectations.”

THE MISSING INGREDIENTS

Having high-quality data “is akin to consuming nutritious food,” said Gordon Robinson, senior director of data management at SAS. “Just as high-quality food helps develop our bodies and maintain health, good data is fundamental to building effective AI models.”

While AI is rapidly transforming various industries, the perennial issue that businesses have faced for decades persists: data quality.

Because data quality for AI may be lacking, it can be difficult to establish trust in AI output. “AI is nothing without trusted data, as unreliable data leads to unreliable models,” said Manasi Vartak, chief AI architect at Cloudera. “If an enterprise lacks a foundation of trust, the probability of its AI initiatives failing—due to inaccurate outputs, flawed decision making, and more—increases tremendously.”

Data silos and fragmented data environments also hamper efforts to deliver well-performing AI applications. “What tends to be missing are integrated and unified data systems,” said Mary Hamilton, managing director and global lead for Accenture Technology’s Innovation Center Network. “Data silos prevent the flow of information, making it difficult to build and train AI models effectively.”

“A data-to-AI strategy needs to incorporate all the elements required for AI applications: semi-structured and unstructured data, real-time data APIs, knowledgebase management, and digitized processes,” said Alex Baldenko, head of data science at MassMutual. “In general, enterprise data environments are focused on governing structured and, most commonly, tabular data. Many emerging AI capabilities rely on having access to semi-structured and unstructured data with the same level of coverage, management, and governance that has been successfully applied to structured data warehouses.”

Knowledgebase management also comes into play as an essential tool for data-to-AI deployments, Baldenko continued. “Knowledgebases haven’t generally been considered part of an enterprise’s data environment. However, many AI capabilities leverage them, presenting enterprises with opportunities to apply data management practices, including entitlements, repository structure, maintenance, and quality control.”

The lack of a unified data framework creates a critical gap, “especially when it comes to dealing with unstructured data,” said Jim Liddle, chief innovation officer of data intelligence and AI for Nasuni.

A lack of a systematic approach to data classification also hinders the data-to-AI pipeline. “Clear taxonomies distinguish between various types of data—those that are relevant for AI and those that are not. It is also critical that governance policies tailored to AI-specific requirements, such as data lineage tracking, bias detection, and the handling of sensitive data, are implemented. Without these foundational elements, it becomes nearly impossible to create reliable, resilient, and scalable data pipelines for AI.”
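To make the idea of an AI-specific taxonomy and governance policy concrete, the short Python sketch below is a hypothetical illustration (not drawn from any vendor quoted here) of how data assets might carry classification metadata, such as a business-domain label, a sensitivity level, lineage, and an AI-relevance flag, so that a simple policy check can run before data reaches a training pipeline.

    from dataclasses import dataclass
    from enum import Enum

    class Sensitivity(Enum):
        PUBLIC = "public"
        INTERNAL = "internal"
        CONFIDENTIAL = "confidential"
        RESTRICTED = "restricted"   # e.g., regulated personal data

    @dataclass
    class DataAsset:
        name: str
        domain: str                 # business taxonomy bucket, e.g., "claims"
        sensitivity: Sensitivity
        ai_relevant: bool           # in scope for model training?
        lineage_source: str         # upstream system of record, for lineage tracking

    def approve_for_training(asset: DataAsset) -> bool:
        """Illustrative policy: only AI-relevant assets below RESTRICTED
        sensitivity may feed training pipelines."""
        return asset.ai_relevant and asset.sensitivity != Sensitivity.RESTRICTED

    if __name__ == "__main__":
        assets = [
            DataAsset("claims_history", "claims", Sensitivity.INTERNAL, True, "core_dwh"),
            DataAsset("patient_records", "care", Sensitivity.RESTRICTED, True, "ehr_system"),
            DataAsset("press_releases", "comms", Sensitivity.PUBLIC, False, "cms"),
        ]
        for a in assets:
            print(f"{a.name}: {'approved' if approve_for_training(a) else 'excluded'}")

In practice, such checks would be enforced by a governance platform rather than ad hoc scripts, but the structure shows how taxonomy, sensitivity, and lineage fields become machine-enforceable policy inputs.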

Also at issue is that data executives often do not have a complete inventory of their data assets. “Most organizations underestimate how much ‘unknown’ data exists in their environments,” said Ravi Ithal, GVP and CTO of Proofpoint DSPM. “It’s not just about finding the data—it’s about understanding what’s valuable, sensitive, and relevant for AI. Without that foundation, you’re either training on incomplete information or exposing yourself to risks like data leaks or regulatory violations.”
