ROI Projections for Automation That Executives Actually Believe
An automation vendor presents a slide showing $4 million in annual savings from robotic process automation. The CFO asks three questions: what are the implementation costs, how long before we see those savings, and what assumptions are you making about adoption rates? The vendor does not have good answers to any of them. The proposal goes back to the bottom of the priority list.
This scenario plays out regularly because most automation ROI projections are built to impress rather than to inform. They showcase theoretical maximum savings while minimizing or omitting the costs, risks, and timeline realities that have burned executives before. The result is a credibility gap that delays good automation investments as often as it prevents bad ones.
Why Most ROI Models Lack Credibility
The typical automation ROI projection calculates labor hours saved, multiplies by an hourly rate, and presents the result as annual savings. This approach has several problems that experienced executives immediately recognize.
It ignores implementation costs. Building, testing, deploying, and integrating automation is not free. Depending on the complexity of the process and the systems involved, implementation can cost anywhere from $20,000 for a simple robotic process automation bot to several million dollars for enterprise-wide intelligent automation. Omitting these costs makes the ROI look better on a slide but destroys credibility with anyone who has approved a technology budget before.
It assumes immediate adoption. The projection typically shows full savings from day one, as if the automation deploys and everyone immediately stops doing the manual process. In reality, there is a transition period where both the manual and automated processes run simultaneously, where productivity dips as people learn new workflows, and where adoption ramps gradually from 20% to 80% over weeks or months.
It omits ongoing costs. Automation requires maintenance, monitoring, updates when business rules change, and periodic retraining for AI-based systems. These ongoing costs typically run 15 to 25% of the initial implementation cost per year. A projection that shows savings without maintenance costs overstates the net benefit every year of operation.
It uses gross labor cost rather than realistic displacement. Automating a task that takes two hours per day does not save two hours of salary. The employee still works a full day. The savings only materialize if the freed time is redirected to higher-value work or if headcount reductions are planned. Neither of these happens automatically, and projections that assume they do lose credibility with leaders who understand workforce dynamics.
Building a Credible Model
A believable ROI projection is honest about costs, conservative about benefits, and explicit about assumptions. Here is the framework.
Total Cost of Ownership
Start with costs, not savings. Executives respond better to a model that leads with realistic costs because it signals that the analysis is honest.
Implementation costs include software licensing or development, system integration, data preparation and cleaning, process documentation and redesign, user training, and project management overhead. For complex automation, include a 15 to 20% contingency buffer because implementation projects routinely exceed initial estimates.
Ongoing annual costs include software licensing renewals, monitoring and maintenance staff time, periodic retraining and updates, infrastructure costs (cloud computing, storage, API calls), and escalation handling for cases the automation cannot process.
Transition costs cover the period of parallel operation (running both manual and automated processes), temporary productivity loss during adoption, additional support resources during rollout, and potential rework from early automation errors.
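The three cost categories above can be collected into a small model. A minimal sketch follows; every dollar figure is an illustrative placeholder to be replaced with real quotes and estimates, and only the structure is the point.

```python
# Total-cost-of-ownership sketch. All line-item amounts are placeholders,
# not benchmarks; the categories mirror the ones described in the text.

implementation = {
    "licensing_or_development": 120_000,
    "system_integration": 60_000,
    "data_preparation": 25_000,
    "process_redesign": 20_000,
    "training": 15_000,
    "project_management": 30_000,
}
CONTINGENCY = 0.20  # 15-20% buffer, since projects routinely overrun

ongoing_annual = {
    "license_renewals": 30_000,
    "maintenance_staff_time": 40_000,
    "retraining_and_updates": 15_000,
    "infrastructure": 10_000,
    "exception_handling": 12_000,
}

transition = {
    "parallel_operation": 25_000,
    "productivity_dip": 20_000,
    "rollout_support": 10_000,
    "early_error_rework": 8_000,
}

def first_year_tco() -> float:
    """Upfront cost with contingency, plus year-one ongoing and transition costs."""
    upfront = sum(implementation.values()) * (1 + CONTINGENCY)
    return upfront + sum(ongoing_annual.values()) + sum(transition.values())

print(f"Year-one TCO: ${first_year_tco():,.0f}")  # → Year-one TCO: $494,000
```

Leading a presentation with a number like this, before any savings appear, is exactly the signal of honesty the section describes.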
Realistic Benefit Estimation
Benefits should be calculated in tiers based on confidence level.
Hard savings are directly measurable cost reductions that can be tracked in financial statements. Examples: reduced overtime, eliminated temporary staffing, lower error-related costs (refunds, penalties, rework). These are the most credible benefits because they show up in real budgets.
Soft savings are real but harder to measure precisely. Examples: time freed for higher-value work, faster processing improving customer satisfaction, reduced employee turnover from eliminating tedious tasks. Include these in the model but separate them visually from hard savings so executives can evaluate them independently.
Strategic benefits are long-term and speculative. Examples: ability to scale without proportional headcount growth, improved compliance posture, competitive advantage from faster operations. These are worth mentioning but should not drive the ROI calculation. They provide context, not justification.
The Adoption Curve
Projections should show a realistic adoption ramp, not a step function from zero to full value. A typical adoption curve for enterprise automation looks like month one at 20 to 30% adoption, month three at 50 to 60%, month six at 70 to 80%, and month twelve at 85 to 95%. Full adoption rarely reaches 100% because edge cases, exceptions, and system limitations keep some volume in manual processing.
Apply the adoption percentage to the benefit estimates for each period. This produces a benefit curve that starts low and ramps, rather than a flat line that overstates early returns.
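One way to apply the ramp is to interpolate between a few anchor points drawn from the adoption curve above (midpoints of the stated ranges) and scale the mature-state benefit month by month. The anchor points and the benefit figure here are assumptions to tune per project, not fixed benchmarks.

```python
# Apply an adoption ramp to a mature-state benefit estimate.

FULL_MONTHLY_BENEFIT = 100_000  # benefit at full adoption (illustrative)

# (month, adoption fraction) anchors; linear interpolation between them.
CURVE = [(1, 0.25), (3, 0.55), (6, 0.75), (12, 0.90)]

def adoption_at(month: int) -> float:
    """Adoption fraction for a given month, plateauing below 100%."""
    if month <= CURVE[0][0]:
        return CURVE[0][1]
    for (m0, a0), (m1, a1) in zip(CURVE, CURVE[1:]):
        if month <= m1:
            return a0 + (a1 - a0) * (month - m0) / (m1 - m0)
    return CURVE[-1][1]  # edge cases keep some volume manual

year_one = sum(FULL_MONTHLY_BENEFIT * adoption_at(m) for m in range(1, 13))
flat_line = FULL_MONTHLY_BENEFIT * 12
print(f"Ramped year-one benefit: ${year_one:,.0f} vs flat ${flat_line:,.0f}")
```

With these assumed anchors, the ramped year-one figure comes out around $827,500 against a flat-line $1,200,000, which is the overstatement the flat projection hides.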
Payback Period
Executives care about payback period as much as total ROI. A project that costs $500,000, saves $300,000 per year at maturity, but takes eighteen months to reach maturity has a payback period of approximately two years when you account for the adoption ramp. Presenting this honestly is more credible than claiming a twenty-month payback based on theoretical immediate full adoption.
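The payback arithmetic can be made explicit by accumulating ramped savings until they cover the upfront cost, using the $500,000 / $300,000-per-year example above. The stepwise adoption curve below is an assumed illustration, not a benchmark.

```python
# Payback period with an adoption ramp vs. the naive full-adoption figure.
# Cost and savings echo the example in the text; the curve is an assumption.

IMPLEMENTATION_COST = 500_000
FULL_MONTHLY_SAVINGS = 300_000 / 12  # $25,000/month at maturity

def adoption(month: int) -> float:
    """Stepwise adoption fraction, plateauing below 100%."""
    if month <= 3:
        return 0.25
    if month <= 6:
        return 0.55
    if month <= 12:
        return 0.75
    return 0.90

def payback_month(cost: float, full_savings: float) -> int:
    """First month in which cumulative ramped savings cover the cost."""
    cumulative, month = 0.0, 0
    while cumulative < cost:
        month += 1
        cumulative += full_savings * adoption(month)
    return month

naive = IMPLEMENTATION_COST / FULL_MONTHLY_SAVINGS  # 20 months, theoretical
print(payback_month(IMPLEMENTATION_COST, FULL_MONTHLY_SAVINGS))  # → 27
```

Under these assumptions the honest answer is month 27, in the ballpark of the two-year figure above, versus the theoretical 20-month claim that assumes immediate full adoption.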
Industry benchmarks show realistic payback periods for automation projects ranging from six to eighteen months for simple RPA implementations and eighteen to thirty-six months for complex intelligent automation. Organizations report average ROI of approximately 171% on successful automation investments, but those returns reflect mature implementations, not first-quarter results.
Presenting to Executives
The presentation format matters as much as the numbers. Structure the case in layers.
Layer one: The conservative case using only hard savings, full costs, and a gradual adoption curve. This is the floor scenario. If the executive approves based on this case alone, the project has a strong foundation.
Layer two: The expected case adds soft savings at a discounted rate (typically 50% of estimated value to account for measurement uncertainty). This is the most likely outcome.
Layer three: The optimistic case includes strategic benefits and assumes faster adoption. This is the ceiling scenario and should be labeled as such.
Presenting all three scenarios with explicit assumptions for each gives executives the information to make their own judgment. It respects their intelligence and experience, and it builds the trust that gets future projects approved more easily.
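The three layers can be assembled from the benefit tiers defined earlier. A minimal sketch, with every benefit figure an illustrative assumption; only the layering logic (hard savings as the floor, discounted soft savings in the expected case, strategic value only in the ceiling) comes from the text.

```python
# Layered scenario model: floor, expected, and ceiling annual benefit.
# Benefit amounts are placeholders; the 50% soft-savings discount follows
# the rule of thumb in the text.

hard_savings = 400_000     # measurable, shows up in real budgets
soft_savings = 200_000     # freed time, satisfaction, retention (estimated)
strategic_value = 150_000  # speculative long-run upside
SOFT_DISCOUNT = 0.50       # haircut for measurement uncertainty

scenarios = {
    "conservative (floor)": hard_savings,
    "expected": hard_savings + soft_savings * SOFT_DISCOUNT,
    # Ceiling counts soft savings at full value and adds strategic upside
    # (an assumption about how optimistic the ceiling should be).
    "optimistic (ceiling)": hard_savings + soft_savings + strategic_value,
}

for name, annual_benefit in scenarios.items():
    print(f"{name}: ${annual_benefit:,.0f}/year")
```

Pairing each scenario's annual benefit with the cost and adoption models above yields three payback periods, which is usually the single most persuasive slide in the deck.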
Common Credibility Killers
Avoid vendor-provided ROI numbers without independent verification. Vendors have an obvious incentive to inflate projected returns. Use their estimates as inputs to your model, not as the model itself.
Avoid comparing current-state costs to future-state savings without accounting for the transition. The transition period is where budgets get strained, and ignoring it signals either naivety or dishonesty.
Avoid presenting a single number without ranges. Any honest projection includes uncertainty. A range of $800,000 to $1.2 million in annual savings is more credible than a precise figure of $1,043,000, even though the latter looks more rigorous. Precision without accuracy is a red flag for experienced decision-makers.
The goal is not to make the ROI look as attractive as possible. The goal is to make it as accurate as possible. Executives who have seen inflated projections fail before will trust conservative, well-reasoned estimates over optimistic ones. And trusted projections get funded.
Related Reading
- Building a Competitive Intelligence Habit That Takes 15 Minutes a Day
- How Companies Are Managing Hybrid Teams of Humans and AI Agents
- The Automation Readiness Score and How It Works
- Why Automating Company Analysis Does Not Mean Removing Human Judgment
- Workforce Utilization Mapping and What It Tells Management