DBMT Articles for our Clients 


You Have AI All Over the Place. But Is It Working?
You won't know without AI Impact Governance
 
By David Bernard, Managing Director, DBMT

Here is a conversation I have more than any other right now. An executive tells me their organization has embraced AI. Hundreds of people are using it. Tools have been deployed. There is genuine enthusiasm. And then, after a pause: "But we can't really point to what it's done for us." 

Or a different version: the tools are there, but only some people are using them well. The results are uneven. Time and money have been spent — meaningfully spent — and nobody can credibly explain what came back. ROI conversations are uncomfortable because the honest answer is: we don't actually know.

Or a third version, in more mature organizations: we're running agentic systems now, automating real workflows at real scale, and we're starting to realize we have no clear view of whether they're performing — or what they're costing us to run.

Different stages. Same underlying problem — the absence of measurement, transparency, and accountability for what AI is actually doing. The absence, in other words, of AI Impact Governance.

What AI Impact Governance Actually Means

AI Impact Governance is not a compliance framework or an IT policy. It is the discipline of ensuring that AI use — at every stage of maturity, from an individual experimenting with ChatGPT to a fully autonomous workflow running without human intervention — consistently produces outcomes worth producing, at a cost worth paying.

The reason this discipline is hard to see clearly is that it looks completely different depending on where your organization is. In the early stages, the governance failure presents as an inability to justify investment — you can't measure it, you can't attribute it, you can't defend it in a budget conversation. In the later stages, the stakes grow exponentially. Autonomous systems that aren't actively monitored don't just plateau — they drift, and the drift is expensive. Running AI-driven processes without visibility into their performance is like trading options without watching the market. If you blink, you've lost millions. And unlike a bad quarter, nobody sees it coming, because nobody is looking.

What connects every stage is the same requirement: measurement and transparency. Not the same measurement — it has to evolve as the systems evolve — but always the discipline of knowing what is working, what isn't, and what it is costing you either way.

When AI Is Still Personal

The first wave of enterprise AI is almost always individual and largely self-directed. Someone, for example, discovers ChatGPT or Claude, finds it useful, starts incorporating it into how they work. This spreads — sometimes officially encouraged, sometimes simply tolerated. Before long, a significant portion of the organization is using general-purpose AI in ways that nobody fully understands or tracks.

I've seen organizations treat this as a win. High usage numbers, genuine enthusiasm, visible behavior change. And in one sense it is — the tools are finding real utility. But without governance at this stage, what you actually have is a measurement vacuum. You know people are using AI. You don't know whether the time they're spending is producing better outcomes, equivalent outcomes, or outcomes that quietly need to be redone by someone else.

The governance challenge here is twofold: scaffolding individual judgment so that people develop the instinct to use AI well, and building the measurement capability to distinguish the usage that is creating real value from the usage that is simply creating activity. Without that second piece, you cannot justify the investment — to your organization, your leadership, or yourself.

When Advanced Users Start Building

Among early adopters, a more sophisticated pattern emerges. Advanced users stop using AI as a conversational tool and start building with it — chaining workflows, automating reasoning steps, creating personal systems that embed AI deeply into how they work. This is AI as personal infrastructure. These users are often producing genuinely impressive results. They are also, in my experience, creating one of the most underappreciated governance risks in enterprise AI right now.

The workflows they build reflect their individual judgment and individual blind spots. When those workflows get noticed — and the good ones always do — they get replicated, often without anyone asking whether they actually perform as well for someone else, or whether the time invested in building and running them is being returned in measurable value.

Governing this stage means developing the organizational ability to evaluate what your advanced users are building with rigor — not just enthusiasm. Which workflows genuinely outperform the baseline? What is the measurable difference in output quality or time savings? What does it actually cost to maintain them? These are not bureaucratic questions. They are the questions that determine whether your most capable AI users are building organizational assets or sophisticated personal habits that don't scale.

When the Tools Arrive Pre-Built

Running parallel to individual experimentation is the procurement of function-specific AI tools — platforms purpose-built for content creation, video production, sales enablement, and a growing range of other functions. These tools arrive with opinionated workflows and built-in constraints. They reduce variance by design. They are easier to deploy at scale than general-purpose tools. And they carry a governance trap that I have watched organizations fall into repeatedly.

Because function-specific tools produce consistent outputs, organizations assume they are producing good outputs. Consistency and quality are not the same thing. I've seen teams spend months refining their workflows within a tool — genuinely improving them, by the metrics the tool surfaces — before anyone asked whether the outputs were actually performing better against the business outcomes that matter. The tool was being governed. The outcomes weren't.

At this stage, measurement must extend beyond the tool itself. The question is not whether the AI is being used correctly within its constraints. It is whether those constraints are aligned with the outcomes your organization actually needs — and whether you have the measurement discipline to know the difference.

When AI Becomes the Work

At a certain point of maturity, AI moves from tool to infrastructure. Workflows are formalized, embedded in systems, and no longer optional. AI isn't augmenting how work gets done — it is how work gets done.

The governance questions here become architectural. You are no longer asking whether people are using AI well. You are asking whether the systems themselves are designed to produce good outcomes reliably — and whether you have sufficient visibility into their performance to know when they stop doing so. Expertise must be encoded correctly. Constraints must be enforced. Output quality must be built into the system, not dependent on whoever is running it.

What I've found is that organizations that built strong measurement habits in the earlier stages arrive here with a significant advantage. They understand what quality looks like in their context. They know how to attribute outcomes to specific system design decisions. They have the organizational instinct to ask hard questions about whether their infrastructure is actually performing. Organizations that coasted through earlier stages find themselves governing systems they don't fully understand, by metrics they haven't defined, without a baseline to measure against.

When the Systems Run Themselves

At full maturity, AI systems operate continuously — improving through usage data, adapting to changing conditions, retiring workflows that no longer perform. Human oversight shifts from operating the system to governing it. And this is where the stakes of measurement failure become existential.

Autonomous systems running without active performance monitoring are a financial exposure that compounds in silence. Unlike a failed project with a defined budget, an underperforming agentic workflow can run indefinitely — consuming compute, driving decisions, producing outputs that nobody is evaluating — while costs accumulate and value quietly fails to materialize. It is, in the most literal sense, like trading options without watching the market. The position is open. The exposure is real. And if you're not looking, you won't know until the damage is done.

This is precisely why the most forward-looking enterprise platforms are investing heavily in agent observability and governance tooling — Salesforce's Agentforce among them. The architecture is promising. Whether it delivers the transparency and control that enterprise-scale agentic workflows actually require, at the pace and complexity of a real transformation, is a question organizations are still answering in practice. That gap — between what platforms promise and what governance actually demands — is exactly where the real work happens.

Mature AI Impact Governance at this stage is an operational discipline: continuously measuring workflow performance against quality and cost benchmarks, making deliberate decisions about what to retire and what to invest in further, and maintaining the visibility to see clearly what the systems are doing and what they are costing. This is not glamorous work. It is the work that determines whether your AI investment compounds in value or quietly compounds in waste and losses.
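For readers who want to see the shape of this discipline, here is a deliberately minimal sketch of the kind of periodic governance check described above. Every name, metric, and threshold in it is hypothetical — real benchmarks would be defined per organization, and real quality scores would come from evaluation against your own business outcomes.

```python
from dataclasses import dataclass

@dataclass
class WorkflowReport:
    """One AI workflow's measured performance for a review period."""
    name: str
    quality_score: float  # e.g. evaluated output quality, scaled 0-1
    monthly_cost: float   # compute plus licensing, in dollars
    monthly_value: float  # estimated value delivered, in dollars

def governance_decision(report: WorkflowReport,
                        quality_floor: float = 0.8,
                        min_roi: float = 1.5) -> str:
    """Classify a workflow against quality and cost benchmarks.
    Thresholds are illustrative, not recommendations."""
    roi = report.monthly_value / report.monthly_cost
    if report.quality_score < quality_floor:
        return "retire or redesign"   # output quality below the benchmark
    if roi < min_roi:
        return "review: below target return"  # works, but costs too much
    return "keep and invest"          # performing on both dimensions
```

The point of the sketch is not the thresholds; it is that every running workflow gets an explicit retire/review/invest decision on a schedule, instead of running indefinitely by default.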

The Thread That Runs Through All of It

Across every stage of AI maturity, the governance challenge is the same in one fundamental way: you cannot manage what you cannot measure, and you cannot justify what you cannot see.

In the early stages, that means building the measurement capability to distinguish real impact from activity — to answer the budget question credibly, and to guide people toward the AI use that actually moves the needle. In the later stages, it means maintaining the visibility and operational discipline to govern systems that are running faster and at greater scale than any individual can directly oversee.

The organizations getting this right are not necessarily the ones with the most sophisticated AI. They are the ones that asked the hard measurement questions early, built the transparency to answer them honestly, and maintained that discipline as their systems grew in complexity.

The ones that didn't are still having the uncomfortable ROI conversation.

Or they haven't had it yet — but they will.

© 2026 by DB Marketing Technologies. 
