
Stop building “cool agents” and build outcome engines instead

Replace pilot sprawl and shadow authority with performance systems built on permission, accountability, and measurable impact.


Watch the argument for adaptability, on demand

We presented our perspective on adaptability in the AI journey for an audience of business leaders attending the inaugural HumanX AI conference. Watch the on-demand session for free—no conference pass required.


“Let’s do agents” can be your undoing

Most organizations are rolling out agentic AI the wrong way. They deploy agents broadly, celebrate activity, and then get blindsided when things go wrong:

  • Exceptions surge
  • Accountability blurs
  • Trust takes a hit

This isn’t a failure of intelligence. It’s an authority problem, and most enterprises haven’t solved for it. If you don’t define what an agent is allowed to decide, when it must stop, and who owns the outcome, you don’t have an agentic AI strategy. You have confusion, inefficiency, and risk at scale.

The leaders pulling away into deep advantage are redesigning who has authority (human or AI), what gets delegated, and how mistakes get caught and corrected fast. Their success is not determined by the total volume of agents their organizations deploy. Rather, it’s determined by who can grant, bound, and revoke authority fastest when conditions change, so the right work is handed to software under the right conditions at the right time.



Design agentic AI for outcomes, not outputs.

Drive measurable growth through AI-enabled innovation and insight with Slalom.



Turning permission into an agentic operating system

Machine-speed execution without machine-speed governance and authority doesn't deliver outcomes that hold. So, the bottleneck isn’t compute. It’s permission design: What can act, within what bounds, with what proof?

Here are four forces shaping the urgent need for explicit leadership choices when deploying agentic AI:

  1. Commoditization: Agent tech will be broadly available, so advantage shifts from who buys tools first to your ability, as an enterprise, to authorize action responsibly.
  2. Autonomy pressure: Agents can execute end to end, so leaders must decide where agents can act alone and where humans must step in by design.
  3. Trust stakes: Agents will touch customers, money, and policy, so leaders must choose engineered accountability or accept avoidable brand and regulatory shocks.
  4. Coordination collapse: Agents can eliminate handoffs, status busywork, and approval chain churn, so leaders must decide to reinvest that attention dividend into growth, rather than waste it on activities that don’t drive outcomes.


Governance vs. permissions

Traditional boundaries of governance are evolving with agentic AI, and there’s some nuance now between “governance” and “permissions.”

Governance refers to the overarching policies, decision rights, and accountability structures that ensure agents are used safely under the guidance of accountable leaders.

Permissions define an agent’s access and span (its operational reach): which systems, data domains, workflows, and decisions it is allowed to touch or influence within the guardrails of governance.
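
To make the distinction concrete, here is a minimal, purely illustrative sketch of a permission record for one agent. The field names and the `AgentPermission` class are hypothetical, not a Slalom framework or product API; the point is only that governance (a named owner, escalation rules) and permissions (operational reach) live in one auditable record:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPermission:
    """Hypothetical permission record: one agent's span, under one accountable owner."""
    agent_id: str
    owner: str                                        # governance: named accountable leader
    systems: set = field(default_factory=set)         # reach: systems it may call
    data_domains: set = field(default_factory=set)    # reach: data it may read or write
    decisions: set = field(default_factory=set)       # decisions it may execute alone
    must_escalate: set = field(default_factory=set)   # decisions that always go to a human

    def may_decide(self, decision: str) -> bool:
        # An agent acts alone only on decisions inside its span
        # that governance has not flagged for escalation.
        return decision in self.decisions and decision not in self.must_escalate
```

In use, an agent with `decisions={"approve_refund"}` and `must_escalate={"approve_refund_over_10k"}` can clear routine refunds on its own but hands the large ones to a person, which is exactly the governance-versus-permissions split described above.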



Designing AI agents to work as outcome engines

78% of AI users bring their own AI tools to work, scattering agentic tools across an organization and, consequently, working around proof, permission, and authority. But delegating to and collaborating with agentic AI only scales value when accountability scales with it.

With clear owners, hard limits, and proof-gated expansion, AI agents serve as dependable operators, even trusted coworkers. Proof gates earn permission, and with permission agentic autonomy can grow, because that autonomy is earned, measured, and reversible. Authority and autonomy are how you turn an array of AI agents into a business outcome engine.

Leaders need to ask:

  • Who owns agent permission, end-to-end?
  • What proof raises the authority ceiling and what proof lowers it?
  • How fast can we revoke authority across the fleet, and what are the implications?
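
The questions above can be sketched as a proof gate in code. This is an assumption-laden illustration, not a reference implementation: the `AuthorityCeiling` class, the authority levels, and the specific thresholds (1% exception rate, 99% audit coverage) are all hypothetical, chosen only to show how authority can expand on evidence and shrink, or revoke entirely, when risk appears:

```python
from dataclasses import dataclass

@dataclass
class AuthorityCeiling:
    """Hypothetical proof gate: autonomy is earned, measured, and reversible."""
    level: int = 1        # current authority level (1 = narrow autonomy)
    max_level: int = 3    # ceiling set by governance

    def record_proof(self, exception_rate: float, audit_coverage: float) -> None:
        # Expand authority only when measured proof clears the bar...
        if exception_rate < 0.01 and audit_coverage >= 0.99:
            self.level = min(self.level + 1, self.max_level)
        # ...and lower the ceiling as soon as risk indicators rise.
        elif exception_rate > 0.05 or audit_coverage < 0.95:
            self.level = max(self.level - 1, 0)

    def revoke(self) -> None:
        # Revocation is designed before delegation: one call drops autonomy to zero.
        self.level = 0
```

The design choice this illustrates is that revocation is a single, fast operation owned by the permission system, while expansion is slow and evidence-gated, answering "what proof raises the ceiling" and "how fast can we revoke" in the same place.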

How to build for an agentic future and avoid costly tradeoffs

| Risk if authority isn’t designed into AI agents | Upside if authority is designed into AI agents | Tradeoff |
| --- | --- | --- |
| Runaway actions: Small errors become customer events | Safe delegation: Agents move fast inside hard limits | Open vs. bounded permission |
| Permission ping-pong: Nobody owns the call, so work stalls | Clear accountability: One owner can grant and revoke in hours | Committee drift vs. named owner |
| Pilot noise: More demos, more meetings, no durable results | Repeatable value: End-to-end ownership cuts handoffs and waste | Pilot sprawl vs. outcome engines |
| Trust freeze: One incident shuts down the whole effort | Scaled confidence: Autonomy expands only after measured proof | Trust-first vs. proof-gated autonomy |
| Unprovable outcomes: Disputes drag on and risk escalates | Defensible decisions: See who authorized what, when, and why | Black box execution vs. auditable by default |


Starting with authority systems, not “cool tasks”

Opportunities for agents will look endless. That’s the trap. If you start with “cool tasks,” you end up with scattered agents, conflicting rules, and leaders buried in exceptions. Start where authority is most unclear and the cost of mistakes is real.

Define:

  • Who can delegate permission
  • Where the limits are
  • What must escalate
  • What proof earns more authority

Then aim that permission at a small number of agents that power outcome engines with clear ownership and visible results.



Do not hand “Let’s do agents” to a steering committee. Senior leadership needs to design the permission system with clearly defined owners, limits, escalation paths, the proof bar, and revocation speed.

If you don’t, shadow authority becomes the default, and leaders will spend their time cleaning up outcomes they didn’t approve and cannot defend.



Winning leadership moves for building an agentic AI future

Most competitors are building AI agents. Organizations that win are building the right to let them act, the ability to stop them fast, and the discipline to make good on the time agentic co-workers give back to human talent.

Here’s a roadmap for building the foundation for your agentic future in one quarter:

The first 30 days

End shadow authority: The CEO assigns one enterprise owner of agent permission and sets the hard lines for what can act, what must escalate, and what is off-limits.

The first 60 days

Delegate the right decisions: The COO names two or three outcome engines, then defines the specific decisions agents may execute, with a named business owner for outcomes.

The first 90 days

Proof buys permission: The general counsel or chief risk officer (CRO) defines the proof required to expand authority, and they define the stop rules that shrink authority fast when risk shows up.

After the quarterly financial close

Turn returned time into advantage: The CFO tracks leadership time pulled out of escalations and rework, then they direct it into a short list of growth decisions with targets.

 

KPIs to measure agentic authority as a business system

To build for an agentic future, leaders must shift away from the idea that authority is a policy and shift toward an operating model where authority is a business system. This breaks the old bargain where leaders could “approve” work without owning how it gets done. In an agentic enterprise, approval is predetermined by your permission system while still being owned, bounded, auditable, and reversible.

Here’s how to measure the effectiveness of your agentic authority:

| KPI | What it shows | Executive lens |
| --- | --- | --- |
| Permission cycle time | Time from request to granted authority | COO: “Are we a bottleneck?” |
| Revocation cycle time | Time to narrow or stop authority | CISO: “Can we contain fast?” |
| Audit coverage rate | Percentage of actions with provable authority trail | General counsel: “Can we prove accountability?” |
| Escalation load | Exceptions sent to humans per 1,000 actions | COO: “Are we buying back attention?” |
| Outcome engine hit rate | Percentage of engines meeting target outcomes | CEO: “Where is value concentrating?” |
| Exception cost | Dollars and hours burned on rework | CFO: “What is the tax?” |
| Proof-to-permission ratio | Authority expanded only after evidence | CRO: “Is proof running the show?” |


Be sure to set quarterly thresholds. If an agent cannot produce proof, it does not get more permission.




How Slalom helps leaders build for an agentic future

We see a predictable pattern: Most teams can deploy agents; few can engineer authority systems that scale. When systemic authority is missing, pilots multiply, exceptions surge, and leaders must step back in. When authority is engineered, AI agents become outcome engines that return leadership attention as dividend to reinvest quickly.

At Slalom, we advise leadership teams to adopt a new operating discipline:

  • Treat permission like capital: Decide where you will “spend” authority first and who signs for it.
  • Design revocation before delegation: Allow yourself to go big since you can stop quickly and turn on a dime.
  • Concentrate on outcome engines, not scattered agents: Put authority where results are visible, repeatable, and owned end to end.
  • Measure the attention dividend and spend it on purpose: With less time spent on resolving exceptions and escalations, redirect the time you gain back into a short list of growth decisions.

 

Key takeaway

Agents will spread in your company, with or without your blessing. They already are. You can let them encroach on your organization with shadow authority. Or you can design them to spread with explicit, accountable authority that frees leaders to redirect their attention and operate with greater confidence and competitive speed.


Start an authority-first roadmap.




FAQs

What is agentic AI?

Agentic AI refers to AI systems, often called AI agents, that can take action, make decisions, and execute workflows autonomously within defined boundaries.

Unlike traditional automation, agentic AI doesn’t just assist; it can draft, decide, route, reconcile, and execute tasks end to end. However, that autonomy must be governed by clear permission systems, defined ownership, and proof-based controls to avoid scaling risk instead of value.

How is agentic AI different from traditional AI?

Traditional AI typically supports humans with recommendations or narrow task automation.

Agentic AI operates with delegated authority, meaning it can make decisions and act independently within predefined limits. This shift from assistance to autonomy introduces new governance requirements, including decision rights, escalation rules, and revocation speed.

What is shadow authority?

Shadow authority occurs when AI agents are deployed without clearly defined ownership, permission boundaries, or governance controls.

This often happens when teams experiment independently, adopt external AI tools, or bypass formal approval systems. Over time, shadow authority leads to unclear accountability, higher exception rates, and increased regulatory and reputational risk.

What is proof-gated autonomy?

Proof-gated autonomy means that AI agents earn expanded authority only after proving measurable, reliable performance.

Instead of granting full autonomy upfront, organizations define performance thresholds, audit trails, and outcome metrics. Authority expands only when evidence supports it, and authority shrinks quickly if risk indicators rise.

Why does revocation speed matter?

Revocation speed determines how quickly an enterprise can narrow or stop an AI agent’s authority when risk appears.

Without fast revocation, small errors can escalate into customer events, regulatory exposure, or brand damage.

The fastest-moving companies are not those that grant the most autonomy, but those that can safely pull it back in minutes when conditions change.

How can organizations prevent AI pilot sprawl?

Preventing AI pilot sprawl requires shifting from experimentation to outcome-focused design. Leaders should:

  • Assign one enterprise owner of agent permission
  • Define clear decision rights and limits
  • Focus on a small number of high-impact workflows
  • Measure exception rates, audit coverage, and revocation speed

How should enterprises govern AI agents?

Effective governance of AI agents requires designing an authority system, not just policies. This includes:

  1. Named owners for delegated decisions
  2. Clear permission boundaries
  3. Escalation paths for exceptions
  4. Proof-gated expansion of autonomy
  5. Fast revocation mechanisms



Let’s find a solution together.