Lean OKRs as a forcing constraint
Move the needle with AI

Most organisations say they’re “using AI.” Copilots get rolled out. Pilots get funded. Demos impress. A few teams ship faster.
And then the business looks… the same.
That’s the illusion of progress: more activity, nicer outputs, but no real shift in customer value, cycle time, or cost.
The problem is rarely the model, the data, or the talent. It’s intent. And this is something I’ve watched exec teams stumble over for years.
If your goals are written as tasks and deliverables, AI will help you finish those tasks faster — and keep your operating model exactly as it is.
In my book, Moving the Needle with Lean OKRs, I make a point that most leadership teams miss: OKRs aren’t a performance scorecard. They’re a thinking and goal-achievement system. Used well, they force a harder question than “how do we execute better?”
They force: what would change the system and improve business results?
That’s the question that turns AI from a faster keyboard into a new way of working.
OKRs, in plain language
OKRs stands for Objectives and Key Results.
An Objective is a clear statement of what you want to change. A Key Result is the measurable proof that the change is happening.
The trap — and I’ve seen it in nearly every organisation I’ve worked with — is turning Key Results into a project plan:
- “Launch the chatbot.”
- “Roll out Copilot to 2,000 employees.”
- “Build the new dashboard.”
Those are deliverables. They can all be “done” while customers notice nothing.
Lean OKRs cut through that.
In my book, I argue that Key Results need to describe outcomes and behaviour change — not work packages. And critically, they should keep the method for achieving those results open. That mix — clear outcome, open route — is the constraint that pushes teams out of safe work.
Here’s a simple example:
Objective: Cut customer onboarding time so new clients get value in days, not weeks.
Key Results:
- Reduce median onboarding time from 10 days to 2 days.
- Raise activation in week one from 40% to 70%.
- Keep support tickets during onboarding below 5% of new accounts.
No mention of tools. No mention of features. Just the change you want.
Pragmatic goals need innovation
A pragmatic goal isn’t timid. It’s grounded.
It starts with a real business problem, expressed as an outcome, with a clear measure of success. That’s what makes it pragmatic. But if the goal matters, it usually can’t be met with “try harder.” The gap between today and the goal forces innovation.
In my work with clients, I’ve seen two kinds:
- Small “i” innovation improves what exists. It cuts waste and removes friction inside the current model.
- Big “I” innovation changes the model. It removes whole steps, collapses handoffs, and creates new economics.
AI is useful in both. The catch is that most organisations only buy the small “i” version: faster writing, faster analysis, faster delivery. That’s helpful, but it rarely moves the needle.
Big “I” work starts when the goal is strong enough to make old solutions look silly.
Why AI efforts stall
Most AI initiatives live inside an output-driven management system. Leaders define success as deliverables, timelines, and utilisation. Teams get rewarded for shipping, not learning.
In that environment, AI becomes a well-behaved assistant: write quicker, analyse faster, summarise more neatly.
From a Lean perspective — and this is something I’ve written about extensively — this is entirely predictable. Improving activities does not improve outcomes. Making tasks faster does not challenge assumptions. And AI won’t question goals that humans have defined badly.
AI exposes the quality of thinking upstream.
Lean OKRs as the forcing constraint
Lean OKRs add discomfort on purpose. They insist on clarity of outcome without prescribing the route to get there. They replace detailed plans with testable intent.
That’s why, in my experience, they pair so well with AI.
When leaders write objectives that describe hard challenges — change customer behaviour, cut lead time, lift conversion, reduce risk — teams lose the safety of the familiar solution. Activity is no longer proof of progress.
The question opens up: what are things we’ve never tried before, and how might we get there?
At that point, AI stops being a tool for getting through the backlog and becomes a partner for exploration: generating options, spotting patterns, testing ideas, and speeding up learning loops.
Use AI to remove work, not speed it up
Take expense approvals.
A speed-only move is to have AI read receipts and fill in fields. The form still exists; approvals still queue up.
An outcome move — the kind I advocate for in my book — is to set a Key Result like: “95% of expenses flow through with no approvals, with checks after the fact.”
Now the work changes. You’re looking at policy, spend limits, clearer rules, and exception handling. Systems flag outliers; people handle the small slice that needs judgement.
Same story in customer support.
If the outcome is “cut time-to-resolution for complex cases in half,” a nicer chatbot won’t do it. You need better triage, cleaner knowledge capture, and faster escalation. AI can draft replies and search past cases, but the bigger win comes from redesigning the workflow so humans spend time where judgement matters.
From assistant to agent
Most organisations use AI as a helper: it suggests, drafts, and summarises.
Agents go further. They can plan steps, call tools, and complete a task across systems with limited back-and-forth. That makes them suited to work that’s currently held together by handoffs: onboarding, compliance checks, contract cycles.
Outcome targets often expose a blunt truth: humans can’t hit them at scale.
If your Key Result is “reduce contract cycle time from 14 days to 4 hours,” you don’t get there by asking lawyers to type faster. You get there by redesigning the entire process: standard clauses, clear guardrails, exception routes, and a system that drafts, compares, routes, and tracks — while humans handle the edge cases.
That’s big “I” work.
The weekly rhythm that keeps it honest
In my book, one of the things I’m most insistent about is this: Lean OKRs don’t just change what you write down. They change how you run the week.
Instead of status theatre, the weekly check-in asks one question: how confident are we that we’ll hit the Key Results? A low score isn’t a blame signal. It’s a signal to adjust the plan, remove obstacles, or learn faster.
Key Results are learning signals. They tell you whether your bets are working.
How to start
Pick one needle to move. Choose a business outcome that matters — not a tool rollout dressed up as strategy.
Write Key Results that describe behaviour change. What will customers do differently? What will teams stop doing? What will the system handle on its own?
Then redesign the work around that outcome. Map the current process. Decide which steps should vanish, which should become rules, and which need human judgement.
If AI isn’t changing how your organisation thinks about success, the problem is rarely AI.
Your goals aren’t Lean enough to force the change.
Do you want to learn more about OKRs? Contact me or join one of our OKR trainings at Xebia Academy.