The Measurement Problem

Measuring coaching ROI. Without the guesswork.

Your CEO asks "is coaching working?" You pull up attendance numbers and satisfaction scores. They're not impressed. Here's how to build a measurement approach that actually answers the question.

14 min read · February 2026
The Challenge

Why measurement is so hard (and why most teams fake it).

Here's the uncomfortable truth about measuring coaching ROI: most organizations don't do it well. Not because they don't care, but because coaching is genuinely harder to measure than most business investments. The outcomes are behavioral (how someone leads differently), the timeline is long (months, not weeks), and the causal chain is indirect (coaching improves a manager, who improves a team, who improves retention, which saves money).

Compare that to measuring a marketing campaign. You spend money, you get clicks, you track conversions, you calculate return. Clean. Linear. Leadership development doesn't work that way. The best you can do is build a rigorous case using converging evidence, not a single silver-bullet metric.

The result? Most L&D teams either avoid measurement altogether ("coaching is qualitative, you can't put a number on it") or they rely on vanity metrics that don't survive executive scrutiny. Neither approach protects the budget when the CFO starts asking hard questions.

Impact becomes guesswork. And guesswork doesn't survive budget season.


The Traps

The metrics everyone uses that don't prove anything.

These metrics aren't useless. They're necessary for program management. But they don't answer the question your CEO is actually asking, which is: "Is this making our leaders better, and is that worth what we're spending?"

Participation rates

Tells you people showed up. Doesn't tell you anything changed.

"95% of enrolled leaders completed at least one session" sounds impressive until you realize attendance and impact are completely different things. People can attend every session and still not change a single behavior.

Satisfaction scores

Measures whether people enjoyed the experience, not whether it worked.

A 4.8/5 coach rating means the coachee liked their coach. That's good. But liking your coach and actually growing as a leader are not the same thing. Some of the most effective coaching is uncomfortable.

Self-reported improvement

People are poor judges of their own behavior change.

"I feel more confident as a leader" is a real feeling, but it's not evidence. Self-assessments consistently overestimate change. If your entire measurement strategy depends on people saying they improved, you're measuring optimism, not outcomes.

Number of sessions completed

Activity, not impact.

Your organization consumed 500 coaching sessions last quarter. Great. What changed? If you can't connect those sessions to observable behavior shifts or business outcomes, the number means nothing.

The core issue: These metrics measure the coaching process, not coaching outcomes. They answer "did coaching happen?" not "did coaching work?" And the second question is the only one that matters to anyone controlling a budget.


The Framework

A practical measurement framework.

Forget trying to calculate a single ROI number for coaching. That precision is misleading at best and dishonest at worst. Instead, build a measurement approach across three layers, each providing a different type of evidence that coaching is producing value.

1. Leading Indicators (Weeks 2-12)

Is the coaching taking hold? Behavior shifts, coach observations, applied learning between sessions.

2. Competency Growth (Months 3-9)

Are leaders building specific capabilities? Coach-observed, competency-mapped growth over time.

3. Business Outcomes (Months 6-18)

Is it moving the business? Engagement, retention, promotion readiness, 360-degree feedback trends.

The power of this framework is that it gives you something credible to report at every stage. You don't have to wait twelve months to answer "is this working?" Leading indicators appear within weeks. Competency data emerges over months. Business outcomes confirm the trajectory over quarters. Each layer builds on the one before it.


Layer 1

Leading indicators: is the coaching taking hold?

These are the earliest signals that coaching is producing change. They won't convince a CFO on their own, but they tell you whether the program is on the right track or needs adjustment.

Engagement and follow-through

Not just "did they attend?" but "are they actively engaged between sessions?" Are leaders bringing specific situations to coaching? Are they completing commitments they made with their coach? Are session notes showing progressively deeper conversations? Active engagement correlates with outcomes in ways that attendance alone does not.

Coach-observed behavior shifts

Trained coaches are skilled observers of change. They can report (at the aggregate level, never identifying individuals) patterns like: "70% of coachees have shifted from avoiding difficult conversations to initiating them" or "leaders in the delegation cohort are showing measurable improvement in their ability to define outcomes rather than prescribe tasks." This is more credible than self-report because the observer is trained and external.

Qualitative feedback with specificity

Generic feedback ("coaching was great!") tells you nothing. Specific feedback tells you everything. "I used the framework from coaching to restructure how I run my team meetings, and my team said the meetings are more focused now." That level of specificity indicates actual application, not just satisfaction. Design your feedback collection to elicit specifics, not ratings.

Manager observations

The coached leader's manager is one of the best sources of evidence. "Sarah has been much more direct in her communication with the team since starting coaching" carries more weight than any self-assessment. Giving managers a simple framework to observe and report on development themes (without exposing coaching content) creates a powerful data source.


Layer 3

Lagging indicators: is it moving the business?

These are the outcomes that executives care about. They take longer to materialize, and they're influenced by many factors beyond coaching. But when you can show correlation between coaching investment and business outcomes, the argument for continued investment becomes much stronger.

Metric | What to measure | Timeline
Employee engagement | Compare engagement scores in teams with coached leaders vs. without. Look for trends over two or more survey cycles. | 6-12 months
Retention | Track voluntary turnover in coached leaders' teams vs. a control group. Regrettable attrition is the sharpest signal. | 6-18 months
Promotion readiness | Are coached leaders advancing faster? Are their direct reports being promoted at higher rates? Both matter. | 12-18 months
360 feedback trends | Compare 360 results before coaching and 6-12 months after. Look for shifts in specific competencies, not just overall scores. | 6-12 months
Time-to-productivity | For new managers: how quickly do coached new managers reach full effectiveness vs. unsupported ones? | 6-12 months
Internal mobility | Are coached leaders more likely to take on new challenges, cross-functional roles, or expanded scope? | 12+ months

The comparison principle: The most persuasive data isn't absolute numbers. It's the difference between coached and uncoached populations. "Engagement in coached managers' teams increased 12 points vs. 3 points in the control group" is dramatically more convincing than "engagement went up 12 points." If you can design even a rough comparison, do it.
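The comparison principle reduces to a simple difference-in-differences calculation. A minimal sketch, using hypothetical scores (the group labels and numbers are illustrative, not real program data):

```python
# Hypothetical engagement scores (0-100 scale) before and after one
# coaching cycle, for a coached group and a comparison group.
coached = {"before": 62.0, "after": 74.0}  # +12 points
control = {"before": 63.0, "after": 66.0}  # +3 points

def lift(group):
    """Point change within one group over the measurement window."""
    return group["after"] - group["before"]

# Difference-in-differences: the change associated with coaching
# after netting out the organization-wide trend.
did = lift(coached) - lift(control)
print(f"Coached lift: {lift(coached):+.0f} pts")
print(f"Control lift: {lift(control):+.0f} pts")
print(f"Net effect:   {did:+.0f} pts")
```

Even a rough version of this calculation turns "engagement went up" into "engagement went up more where we coached," which is the claim that survives scrutiny.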


Layer 2

Competency growth tracking: the missing middle.

This is the layer most coaching programs are missing entirely, and it's arguably the most valuable. Competency tracking sits between leading indicators ("is the coaching engaging?") and lagging indicators ("did the business improve?"). It answers the most specific question: are leaders actually building the capabilities we're developing them on?

How it works

At the start of a coaching engagement, you map development goals to specific competencies: feedback delivery, delegation, strategic thinking, conflict management, executive presence, whatever the organization and the individual need. The coach observes progress against these competencies over time, based on the situations the leader brings to coaching and how they handle them.

This creates a development record that's objective (based on coach observation, not self-assessment), longitudinal (tracked over months, showing trajectory), and connected to the organization's leadership framework (mapped to the competencies that actually matter for the business).

Why this is powerful

Competency growth data lets you say things like: "Across our manager cohort, average competency scores in feedback delivery improved from 2.3 to 3.8 over six months of coaching." That's not a satisfaction score. That's not attendance. That's verified capability growth in a specific skill the organization identified as critical. Try cutting that budget.
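A cohort-level claim like the one above is just an aggregation over individual coach-observed scores, reported without exposing any individual. A minimal sketch with hypothetical, anonymized data:

```python
from statistics import mean

# Hypothetical coach-observed scores (1-5 scale) for one competency,
# keyed by anonymized leader ID. Individual scores stay private;
# only the cohort aggregate is ever reported.
feedback_delivery = {
    "L01": {"baseline": 2.0, "month_6": 3.5},
    "L02": {"baseline": 2.5, "month_6": 4.0},
    "L03": {"baseline": 2.4, "month_6": 3.9},
}

def cohort_average(scores, checkpoint):
    """Aggregate a competency checkpoint across the cohort."""
    return round(mean(s[checkpoint] for s in scores.values()), 1)

baseline = cohort_average(feedback_delivery, "baseline")
month_6 = cohort_average(feedback_delivery, "month_6")
print(f"Feedback delivery: {baseline} -> {month_6} over six months")
```

The design choice matters as much as the math: the aggregation happens before anything leaves the coaching layer, which is how you get reportable trends without surveilling individuals.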

Visibility without surveillance. See what's changing across your leadership population without compromising what makes coaching work.


The Tension

The confidentiality balancing act.

Here's the paradox every organization faces: the more data you collect about coaching, the better you can prove ROI. But the more you surveil coaching, the less effective it becomes. Coaching works because it's a safe space. When leaders worry that what they say will be reported to their manager or HR, they stop being honest. And coaching without honesty is just performance theater.

The solution isn't to choose between measurement and confidentiality. It's to measure at the right level of aggregation.

Safe to share

Aggregate participation rates across the program

Competency growth trends at the cohort level

Themes emerging across coaching engagements (e.g., "delegation is the most common focus area")

Satisfaction and engagement scores

Business outcome correlations at the group level

Must protect

Content of any individual coaching session

Specific challenges an individual brought to coaching

Individual competency scores shared with anyone besides the coachee

Coach's assessment of an individual leader shared with their manager

Anything that could be used in performance reviews or promotion decisions

The principle: organizations see the forest. Individuals own the trees. A leader can choose to share their coaching insights with their manager. The system should never do it for them.


The Playbook

Building the business case for leadership coaching.

Whether you're proposing a new coaching program or defending an existing one, the business case follows the same structure. Lead with the problem, connect coaching to the solution, and show how you'll measure progress.

Step 1: Name the business problem

Don't start with "we should invest in coaching." Start with what's broken. "We promoted 40 new managers last year. Engagement scores in their teams dropped an average of 8 points. Three teams lost their top performer within six months of the promotion. The estimated cost of that turnover alone is $600K." Now you have a problem worth solving.

Step 2: Connect coaching to the problem

Explain why coaching is the right intervention for this specific problem. Not "coaching is generally good for leaders" but "new managers struggle because the transition from IC to people leader requires entirely different skills. Coaching provides the individualized practice and accountability that workshops and training programs can't. Here's the research that supports this approach."

Step 3: Show what you'll measure and when

This is where most proposals fall apart. Don't promise "improved leadership." Promise specific metrics at specific timeframes: "At 90 days, we'll measure coaching engagement depth and coach-observed behavior shifts. At six months, we'll compare engagement scores in coached vs. uncoached managers' teams. At twelve months, we'll analyze retention trends." The specificity signals that you're serious about accountability.

Step 4: Anchor to the cost of inaction

The most powerful business case isn't about what coaching costs. It's about what doing nothing costs. Manager-driven turnover. Disengaged teams. Underperforming leaders in critical roles. Failed promotions that require external hires at 2-3x the cost. When the cost of the status quo is clear, the coaching investment looks small by comparison.
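The cost-of-inaction anchor is back-of-envelope arithmetic, and it's worth showing your work. A minimal sketch; every figure here is an assumption for illustration, to be replaced with your organization's actuals:

```python
# Hypothetical cost-of-inaction estimate. All inputs are assumptions;
# substitute real numbers from your HR and finance data.
regrettable_exits = 4        # departures attributed to manager-driven turnover
avg_salary = 150_000         # average salary of the departed employees
replacement_multiple = 1.0   # replacement cost as a multiple of salary

turnover_cost = regrettable_exits * avg_salary * replacement_multiple
program_cost = 120_000       # e.g., a 40-manager coaching program

print(f"Cost of inaction: ${turnover_cost:,.0f}")
print(f"Coaching program: ${program_cost:,.0f}")
print(f"Ratio: {turnover_cost / program_cost:.1f}x")
```

Showing the inputs explicitly is what makes the anchor defensible: an executive can challenge the replacement multiple or the attribution, but the structure of the argument holds.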

Weak pitch

"We'd like to invest in coaching for our managers. Research shows coaching improves leadership effectiveness. The cost is $120K per year for 40 managers. We'll measure satisfaction and participation."

Strong pitch

"Manager-driven attrition cost us $600K last year. Engagement in newly-promoted managers' teams dropped 8 points on average. We're proposing a 40-person coaching program at $120K that targets the specific skills new managers are missing. We'll track competency growth monthly and compare team engagement against a control group at 6 and 12 months."

Coaching with built-in measurement.

Boon tracks participation, competency growth, and development themes automatically. No spreadsheets. No guesswork. Just clear evidence that coaching is working.

See the Dashboard

How We Do It

How Boon measures impact.

Measurement isn't a feature we bolted on. It's built into how the entire platform works. When an organization uses Boon, measurement happens as a natural byproduct of the coaching itself, not as extra work for coaches, coachees, or L&D teams.

Competency-mapped coaching. Every coaching engagement starts with development goals mapped to specific competencies. As coaching progresses, coaches track growth against these competencies based on real coaching conversations. This creates a longitudinal view of capability development that organizations can access at the aggregate level through the customer portal.

Manager visibility, not surveillance. Managers can see that their direct report is engaged in coaching, what competency areas they're focused on, and the general trajectory of their development. They never see session content, specific challenges discussed, or individual competency scores. This gives managers enough context to support development without undermining confidentiality.

Theme-level organizational intelligence. Across your entire coaching population, Boon surfaces patterns: the most common development themes, where leaders are growing fastest, where they're getting stuck, and how different cohorts compare. This isn't data for data's sake. It tells L&D teams where to invest more, where to create supplementary programming, and what's working across the board.

Integrated with your existing data. Coaching data becomes powerful when it connects to what you already track. Combine Boon's competency growth data with your engagement survey results, retention metrics, and promotion data, and you have a complete picture of how coaching is contributing to business outcomes.

The Boon difference: Most coaching platforms track sessions and satisfaction. Boon tracks competency growth, development themes, and organizational patterns. That's the difference between proving coaching happened and proving coaching worked.


FAQ

Frequently asked questions

What's a realistic ROI to expect from a coaching program?

It depends entirely on what you're measuring and the maturity of your program. A well-designed program targeting new managers can show measurable competency growth within three months and meaningful engagement and retention improvements within six to twelve months. Be skeptical of anyone promising a specific multiplier (e.g., "7x ROI") because that level of precision requires assumptions that rarely hold up to scrutiny.

How do you isolate coaching's impact from other factors?

You can't perfectly isolate it, and anyone who claims otherwise is overselling. What you can do is build a strong correlational case: compare coached populations to similar uncoached ones, control for obvious variables (tenure, role level, team size), and triangulate across multiple data sources. A comparison group is the single most powerful design choice you can make.

How many coaching sessions before you can see results?

Leading indicators (engagement depth, coach-observed shifts, qualitative specificity) should appear within two to four sessions. Measurable competency growth typically takes three to six months. Business outcome correlations require six to twelve months of data. If you're seeing zero signal after three months, something in the program design needs to change.

Should we measure individual coaching ROI or program-level ROI?

Program-level. Individual coaching outcomes are too variable and too influenced by factors outside coaching's control (team dynamics, organizational changes, personal circumstances). At the program level, these factors average out and patterns become meaningful. The exception is executive coaching, where the investment per person is high enough that individual outcome tracking makes sense.

How do we measure ROI if we don't have a control group?

Use pre/post comparisons with the same population. Measure competency baselines before coaching starts, then track growth over time. Compare 360-degree feedback results across cycles. Survey the coached leaders' teams before and after. It's not as rigorous as a true control group, but it's vastly better than relying on satisfaction scores alone.
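Without a control group, the pre/post design is a paired comparison on the same leaders. A minimal sketch with hypothetical 360 scores:

```python
from statistics import mean

# Hypothetical pre/post 360 scores (1-5 scale) for the same coached
# leaders, in the same order. Paired design: each leader is their
# own baseline, since no control group is available.
pre  = [3.1, 2.8, 3.4, 2.9, 3.2]
post = [3.6, 3.3, 3.5, 3.4, 3.8]

deltas = [after - before for before, after in zip(pre, post)]
improving = sum(d > 0 for d in deltas)

print(f"Mean change: {mean(deltas):+.2f}")
print(f"Leaders improving: {improving}/{len(deltas)}")
```

Report both numbers: the mean shift shows magnitude, and the share of leaders improving guards against one outlier driving the average.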

What if our CEO just wants a single ROI number?

Resist the temptation to manufacture one. A fabricated ROI number feels precise but is built on assumptions that collapse under questioning. Instead, present the converging evidence: "Here are the behavior changes coaches are observing. Here are the competency growth trends. Here's how engagement and retention compare between coached and uncoached populations. Here's the estimated cost of the attrition we're preventing." That story is more credible and more defensible than any single number.


Stop guessing. Start measuring.

110+ enterprise customers. Competency tracking built in. The data your CFO actually wants.

Book a Strategy Call
Read the complete leadership development guide →