Enterprise AI governance has a structural tendency to live at the wrong level of the organization.
The teams with the deepest technical grasp of AI systems, specifically the engineers who build models, the data scientists who train and validate them, and the technical leadership that oversees deployment, typically have limited visibility into how those systems are actually used once they enter production. The managers who make daily decisions informed by AI outputs, including hiring, forecasting, performance evaluation, and customer engagement, typically have limited grasp of how those outputs were produced.
Between those two positions, the accountability for what AI actually does in an organization ends up diffuse. Technical teams are responsible for what the model does. Management is responsible for what the organization does with the model’s outputs. The oversight that should exist at that connection point tends to be assumed rather than designed.
Technology executive Phaneesh Murthy has been explicit about the result. “Technology scales intent,” he has said. “If your intent lacks responsibility, the scale will magnify that flaw.” His broader advisory career, including his current engagement with Covasant Technologies and his advisory position at InfoBeans, has been built in part on addressing the gap that statement describes.
How Phaneesh Murthy Defines AI Fluency at the Management Layer
Murthy’s definition of AI fluency is operationally specific. AI fluency describes the ability to evaluate AI-generated outputs, recognize where those outputs might be unreliable, and apply genuine judgment to decisions that AI informs but cannot settle. The concept is distinct from technical proficiency.
In practice, this requires three things.
First, capability recognition: knowing what AI does well. Large-scale pattern detection. High-volume task automation. Anomaly identification across complex datasets. Probabilistic forecasting from structured inputs. These are real AI strengths, and managers who have a working grasp of them can deploy AI in ways that produce genuine organizational value. One of these capabilities, anomaly identification, is sketched in code after this list.
Second, limitation recognition: knowing where AI fails. Generative models hallucinate. Training data carries historical bias into present decisions. Output quality is a function of input quality, and most organizations don't scrutinize their data pipelines as carefully as their model outputs; a minimal version of that kind of pipeline check is also sketched below. A manager who doesn't have a working sense of these failure modes can't calibrate their trust in AI-generated results.
Third, strategic contextualization: recognizing what AI’s presence changes about the management function itself. AI generates more options, more scenarios, more data-supported possibilities. The function of narrowing those possibilities into decisions that reflect the organization’s actual goals is a management responsibility that AI intensifies rather than eliminates.
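To make the capability side concrete: "anomaly identification across complex datasets" usually amounts to fitting a model that flags records which deviate sharply from the rest of the data. The sketch below is one minimal, hypothetical way to do that with scikit-learn's IsolationForest on synthetic data; the library choice, the contamination rate, and the data are illustrative assumptions, not part of Murthy's framework.

```python
# Minimal sketch: anomaly identification over a tabular dataset.
# Synthetic data and an assumed contamination rate; illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 5))   # typical records
outliers = rng.normal(loc=6.0, scale=1.0, size=(10, 5))   # injected anomalies
X = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(X)   # -1 = flagged as anomalous, 1 = normal

print(f"flagged {np.sum(labels == -1)} of {len(X)} records as anomalous")
```

The point for a manager is not the code itself but what the output represents: a statistical judgment about deviation from the bulk of the data, not a finding of fact.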
“You do not need to build the machine,” Murthy has said. “But if you lead people who use it, you must understand what it can and cannot do.”
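On the "cannot do" side, the limitation point above, that output quality is a function of input quality, lends itself to an equally small illustration. The sketch below shows the kind of basic input-data check that often gets less scrutiny than model outputs; the column names, thresholds, and data are hypothetical assumptions for illustration.

```python
# Minimal sketch: basic input-data checks run before trusting model outputs.
# Column names and thresholds are hypothetical.
import pandas as pd

def basic_input_checks(df: pd.DataFrame, max_null_rate: float = 0.05) -> list[str]:
    """Return human-readable warnings about obvious input-quality problems."""
    warnings = []
    null_rates = df.isna().mean()
    for col, rate in null_rates.items():
        if rate > max_null_rate:
            warnings.append(f"{col}: {rate:.1%} missing values")
    if df.duplicated().any():
        warnings.append(f"{df.duplicated().sum()} duplicate rows")
    if "tenure_years" in df.columns and (df["tenure_years"] < 0).any():
        warnings.append("tenure_years contains negative values")
    return warnings

# Example usage with a small hypothetical frame
df = pd.DataFrame({
    "tenure_years": [1.0, 2.5, -3.0, None, 4.0],
    "region": ["NA", "EU", "EU", None, "NA"],
})
for w in basic_input_checks(df):
    print("WARNING:", w)
```

Checks like this don't make a pipeline trustworthy on their own, but knowing whether they exist is exactly the kind of question a fluent manager can ask.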
The Accountability Gap Phaneesh Murthy Has Identified in Enterprise AI Deployment
Accountability structures for AI decisions tend to be designed around systems rather than decisions. Organizations define who is responsible for model performance, data quality, and deployment protocols. They rarely define who is responsible for the quality of judgment applied to AI outputs at the management layer.
The consequences appear in the public record. Hiring systems that produced discriminatory outcomes did so because training data encoded historical biases, and because the managers who deployed those systems didn’t ask what the models were trained on or how their outputs were distributed across demographic groups. These failures had a technical component, but the governance gap was organizational.
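The question those managers didn't ask, how outputs were distributed across demographic groups, is simple to operationalize. The sketch below is a minimal, hypothetical version of that check; the column names and data are assumptions, and the 0.8 threshold echoes the commonly cited four-fifths rule rather than anything from the cases referenced above.

```python
# Minimal sketch: how a screening model's recommendations are distributed
# across demographic groups. Hypothetical data; 0.8 echoes the four-fifths
# rule and is illustrative, not legal guidance.
import pandas as pd

def selection_rate_ratio(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Selection rate per group, divided by the highest group's rate."""
    rates = df.groupby(group_col)[selected_col].mean()
    return rates / rates.max()

# Hypothetical screening results
results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})
ratios = selection_rate_ratio(results, "group", "selected")
print(ratios)                                        # group B falls well below group A
print("possible disparate impact:", (ratios < 0.8).any())
```

A check this small would not have prevented those failures by itself, but asking whether anyone ran it is a management question, not a technical one.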
Murthy has framed the issue in terms of intent, which places it squarely in management’s domain. The organization’s intent, reflected in what questions get asked before deployment and what accountability structures get built afterward, determines whether AI produces value or risk. Technical construction can’t substitute for that.
What Phaneesh Murthy’s Framework Means for Leadership Development
Murthy’s view is that AI fluency builds through engagement. Direct use of AI tools develops working intuition about output reliability. Structured dialogue with technical colleagues about model assumptions builds the vocabulary for substantive governance conversations. Engagement with published research on AI ethics and governance develops the conceptual frame for evaluating emerging risks.
The career arc Murthy has followed, from scaling Infosys's global delivery operations to building iGATE into a multi-billion-dollar enterprise and then founding Primentor, reflects a consistent conviction: that the distance between leadership comprehension and operational reality is a primary source of organizational risk. AI has added a new dimension to that distance.
Murthy has put it plainly: “Leadership today requires technological awareness. Ignorance is no longer neutral.”
Organizations building AI fluency at the management layer are closing a governance gap that already exists in every organization deploying AI without ensuring that the people making decisions with AI outputs can evaluate what those outputs actually represent. That gap isn't hypothetical. And as Murthy's track record across enterprise technology consistently demonstrates, the leadership teams that take it seriously tend to be the ones that don't discover it the hard way.