As AI and machine learning take hold, databases will require “less and less time and effort to manage as the software gets smarter about managing itself,” said Roberts. “Data analytics and the automated decision making of AI are becoming less a competitive edge and more table stakes to stay in the competitive game in nearly every industry. Smart people who can figure out the best way to combine, analyze, and put data to work will always be in demand.”
RISKS
Along with the shift or displacement of jobs, AI and machine learning introduce other types of risk into data environments. Since AI and machine learning are relatively new on the scene and require new processes, tools, and skills, careful planning and deliberation are needed, industry experts advise. Boomi’s Macosky cautioned against rushing into AI and machine learning too quickly, as the challenge with implementing these technologies is to “define proper architectures without impacting production operations.” AI and machine learning operations “can be technically intensive, and you want to be careful that you don’t impact the UX of the daily users of the data.”
AI and machine learning operations also “require a lot of computing power, resulting in additional costs on top of the existing business intelligence infrastructures already in place for an organization today,” Macosky added. In addition, there is a risk that the wrong data may be analyzed, leading to conclusions that are not helpful to business outcomes. “This leads to some risk with implementing AI and machine learning for BI,” Macosky said.
Along these lines, industry experts raised concerns about the quality of AI-driven decisions. “The system could make the wrong, or suboptimal, decision,” said Kazmaier. “This could be, for example, selecting the wrong objectives or the system’s inability to optimize when there are competing objectives, both of which would result in poor service levels. The challenge, therefore, lies in equipping database systems with the knowledge and tools to harness automated technologies and build trust in AI and machine learning-driven components. DBAs will need to have a strong understanding of the decision-making capabilities of ML- and AI-based models to qualify the risks of automated decisions.”

Humans need to stay in the process, and there will still need to be “human checks and balances—especially if the DBAs are not well-versed on how AI handles tuning,” said Fredell. “While some tasks can be fully automated with AI and ML, others will need to be reviewed by a human for approval before being committed. Robust logging, reporting, and rollback will be essential if automated changes need to be undone.”
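To make Fredell’s point concrete, the following is a minimal sketch of what logging and rollback around an automated change might look like. It is illustrative only: the TuningChange and ChangeLog classes, the sample index statements, and the stand-in execute function are hypothetical and not tied to any particular database or tuning product.

```python
import logging
from dataclasses import dataclass, field
from typing import Callable, List

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("auto_tuning")


@dataclass
class TuningChange:
    """One automated change, with the statement to apply it and the statement to undo it."""
    description: str
    apply_sql: str
    rollback_sql: str


@dataclass
class ChangeLog:
    """Records every automated change so a DBA can review, report on, or undo it later."""
    applied: List[TuningChange] = field(default_factory=list)

    def apply(self, change: TuningChange, execute: Callable[[str], None]) -> None:
        log.info("Applying automated change: %s", change.description)
        execute(change.apply_sql)
        self.applied.append(change)

    def rollback_last(self, execute: Callable[[str], None]) -> None:
        if not self.applied:
            log.info("Nothing to roll back")
            return
        change = self.applied.pop()
        log.info("Rolling back automated change: %s", change.description)
        execute(change.rollback_sql)


if __name__ == "__main__":
    # Stand-in for a real database call; here the statement is only logged.
    def execute(sql: str) -> None:
        log.info("Executing: %s", sql)

    changes = ChangeLog()
    change = TuningChange(
        description="Add index suggested by the tuning model",
        apply_sql="CREATE INDEX idx_orders_customer ON orders (customer_id)",
        rollback_sql="DROP INDEX idx_orders_customer",
    )
    changes.apply(change, execute)
    # If a human reviewer rejects the change, it can be undone from the log.
    changes.rollback_last(execute)
```

The design choice here is simply that every automated action carries its own undo step and leaves an audit trail, which is what allows the human checks and balances Fredell describes.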
AI and machine learning may also add to the problem of technical debt, said Chris Bergh, CEO of DataKitchen. “Machine learning tools are evolving to make it faster and less costly to develop AI systems. But deploying and maintaining these systems over time is getting exponentially more complex and expensive. Data science teams are incurring enormous technical debt by deploying systems without the processes and tools to maintain, monitor, and update them. Further, poor quality data sources create unplanned work and cause errors that invalidate results.”
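One lightweight safeguard implied by Bergh’s comment is a data quality gate that stops a batch before bad inputs invalidate results. The sketch below is a hypothetical example, not part of any specific tool; the check_batch function, column names, and missing-value threshold are all assumptions made for illustration.

```python
from typing import Dict, List, Optional


def check_batch(rows: List[Dict[str, Optional[float]]],
                expected_columns: List[str],
                max_null_fraction: float = 0.05) -> List[str]:
    """Return a list of problems found in an incoming data batch; empty means it passed."""
    problems: List[str] = []
    if not rows:
        return ["batch is empty"]
    for col in expected_columns:
        values = [row.get(col) for row in rows]
        null_fraction = sum(v is None for v in values) / len(values)
        if null_fraction > max_null_fraction:
            problems.append(f"{col}: {null_fraction:.0%} missing values")
    return problems


if __name__ == "__main__":
    batch = [
        {"order_total": 120.0, "items": 3.0},
        {"order_total": None, "items": 1.0},
        {"order_total": 75.5, "items": None},
    ]
    issues = check_batch(batch, ["order_total", "items"])
    if issues:
        # In a real pipeline this would halt the run and alert the data team
        # rather than silently feeding bad data to a model.
        print("Data quality check failed:", issues)
    else:
        print("Batch passed basic checks")
```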
Clarity and explainability also must be addressed, and this is increasingly a concern for organizations that need to trust the results their systems produce. “When you employ machine learning algorithms to make decisions, by their very nature, the decision process is opaque as compared to a rules-based system,” said Roach. “They essentially write their own rules based on the data so that data needs to be accurate, well-understood, and appropriate for the purpose.” As a result, the reliance on good data will be no different than it is today, said Roach. “The fact that an ML system may process orders of magnitude more in a black box increases the danger.”