Kimberly Nevala

Kimberly Nevala is a strategic advisor at SAS (www.sas.com). She provides counsel on the strategic value and real-world realities of emerging advanced analytics and information trends to companies worldwide, and is currently focused on demystifying the business potential and practical implications of AI and machine learning.

Articles by Kimberly Nevala

I once wrote an article titled "Do Your Metrics Matter?" It was inspired by the joy of sitting in an airplane, at the gate, going nowhere fast for close to 2 hours. All the while the airline app was proclaiming an on-time departure. Having ample time to discuss this perceived error with the crew, I learned that the closed aircraft door equaled departure. Departed, it turned out, was not an indication of a flight in motion but a process gone awry (a circumstance that may persist to this day, for what it's worth).

Posted December 19, 2024

Generative AI (GenAI) is having an Agile moment or two, but this is not a positive development. One aspect of this moment is the continued hyping of GenAI's unlimited agility, wherein GenAI is trumpeted as something akin to an analytic Swiss Army knife. The analogy is not only problematic; it is actively harmful.

Posted October 16, 2024

Most times, if asked, I will rattle off, without hesitation, a list of ways AI is being applied along with all the ways it can go wrong. You may also get a bit of a diatribe about why speaking of AI in the royal sense (i.e., "the AI") is always a bad idea. It's practically a (bad) party trick. Yet recently, when asked to characterize the primary ethical issues in AI, I had a perfect moment of mental blankness. It was an uncomfortable and instructive moment on many fronts.

Posted June 12, 2024

There is much ado these days regarding the language facility that popular AI tools such as ChatGPT, Midjourney, Microsoft Copilot, and Gemini appear to display. What do these systems comprehend? Do they understand? Are the underlying architectures a big leap toward artificial general intelligence (AGI) or an entertaining dead end? This article is not about that. It is a loosely related thread on how the language we use to describe AI systems affects our ability to govern them effectively.

Posted April 16, 2024

Generative AI (GenAI) and large language models (LLMs) burst into the corporate and public consciousness this past year like confetti from a carnival cannon or the glitter from a child's art project. Beyond splashy headlines, discussions of imminent opportunity and threat permeated every organizational crevice overnight, or so it seemed from the general public's point of view. And while the technical underpinnings were not all that new, the resultant capabilities of these gargantuan models certainly surprised most observers.

Posted January 17, 2024

Market research, strategic planning, and research and development (R&D) are commonplace components of business operations, each a means of proactively researching and strategizing for the future. The exception is governance teams, which are far too often recipients of, rather than participants in, strategic planning. As a result, existing policies and practices quickly stagnate or deviate from current usage.

Posted October 10, 2023

Ah, those comprehensive, yet amazingly unclear terms and conditions (T&C). You know the ones. They include a minimum of 10 pages of scrolling text detailing the company's rights and obligations. Of course, the critical bits regarding your data or rights are beyond the point at which even young eyes go blurry.

Posted June 19, 2023

A short treatise on three mistakes organizations commonly make when designing or extending governance programs. Loosely inspired by discussions about (but not written by) ChatGPT. Decision rights—who needs to make what decisions—are the crux of governance. Success is not determined by the seniority of your governance council(s) or how many data stewards you have. Successful governance hinges on understanding how decisions are effectively made and made effective.

Posted February 16, 2023

Data mesh is all the rage. The objective? To eliminate artificial roadblocks and extend the means of data production across the enterprise—thereby expanding the scope of data products the organization generates. And, ultimately, increasing the value and use of data in decision making and operational practice.

Posted December 15, 2022

The information imbalance between purveyors of AI-enabled systems and their oft-unwitting subjects is profound. So much so that leading AI researchers point to this chasm as a critical ethics issue in and of itself. This is largely because public perceptions or, more accurately, misperceptions can enable (however unintentionally) the deployment of insidiously invasive or unsound AI applications.

Posted September 29, 2022

Questioning whether your governance efforts are merely inquisitive? Here are five signs.

Posted May 16, 2022

It is easy to attribute catastrophic outcomes and insidious, unintended side effects to failures of governance. Or, more often, to a lack of governance. In practice, however, all organizations are governed, either formally or informally. Formal governance involves discretely defined accountability and expectations encoded in principles, policies, and processes. Informally—and more influentially—organizations are governed by the behaviors and norms modeled and rewarded by their leadership and peers.

Posted April 01, 2022

Organizations, public and private, are codifying principles, regulations are emerging, and standards are proliferating.

Posted December 22, 2021

In the rush to bring AI and data solutions to bear, don't guess and don't just ask, "Why?"; also ask, "Why not?" Consider why this application might not be a good idea, may not lead to our intended outcome, might not be well-received, and might not safeguard human dignity and liberties.

Posted September 27, 2021

Deploying AI fairly, safely, and responsibly requires clarity about the risks and rewards of an imperfect solution, not the attainment of perfection. An AI algorithm will make mistakes. The error rate may be equal to or lower than that of a human. Regardless, until data perfectly representing every potential state—past, current, and future—exists, even a perfectly prescient algorithm will err. Given that neither perfect data nor perfect algorithms exist, the question isn't whether errors will happen but instead: When, under what conditions, and at what frequency are mistakes likely?

Posted May 26, 2021

After a wild and turbulent 2020, the new year has ushered in a renewed commitment to establishing or improving corporate governance. Yet, positive energy aside, our traditional approach to endorsing governance of data, analytics, or AI remains fraught. As a result, governance initiatives springing from an earnest desire to do right (e.g., responsible AI), as well as the need to not do wrong (e.g., regulatory/compliance), struggle to enlist broad coalitions of the willing.

Posted April 05, 2021

For ethics to take root, sustainable governance practices must be infused into the fabric of an organization's AI ecosystem.

Posted January 18, 2021

Never have charts and graphs been more prominent in the collective public consciousness. The increased focus on data-driven insights has, just as so much in life, been both positive and negative.

Posted September 14, 2020

It is a matter of when, not if, your organization will confront a never-before-seen data source, one that, if managed improperly, could result in catastrophic consequences to your brand and bottom line. In some cases, that data will be imported from outside your four walls. In others, it will spring from new business processes or from the fertile minds of your employees manipulating existing assets to create altogether new analytic insights.

Posted May 19, 2020

To democratize data and analytics is to make them available to everyone. It is an admirable goal and one with its roots in the earliest days of the self-service movement. If an organization is to truly be data-driven, it follows that all key decisions—from tactical operational priorities to strategic vision—must be data-informed. So where is democratization going wrong?

Posted March 20, 2020

Opportunity and Threat: The Intersection of AI and Data Governance

Posted December 23, 2019
