I4R.

The Future of ROI-Based Philanthropy in the Age of AI

March 25, 2026 · Juan Posada Aparicio

ROI-based philanthropy is entering a new phase. For a long time, the basic playbook was simple enough to feel sturdy: find a study with a promising result, use the reported effect size to estimate social benefit, compare that benefit to cost, and let the result help shape funding decisions. That model did something important. It gave philanthropy a way to be more disciplined, more evidence-oriented, and less vulnerable to gut feelings, polished storytelling, or whatever issue happened to be attracting the most attention at the moment.

But that model is starting to wobble. Much of philanthropic ROI still depends on published estimates that were never meant to carry quite this much weight, and now AI is making it dramatically easier to scan papers, summarize results, compare interventions, and turn all of that into polished funding proposals. That is genuinely useful, but it also creates a new kind of risk. If philanthropy uses AI to speed up proposal formation without improving how it checks the evidence underneath, it is not necessarily becoming smarter. It may simply be getting faster at sounding confident.

1. The old model worked – until it didn’t

The older version of ROI-based giving treated research as a source of usable numbers. A paper would report that a program raised earnings, improved test scores, or changed long-run outcomes; a philanthropic team would plug that estimate into a model; and out would come an expected social return. That approach brought structure into decision-making, made opportunities easier to compare, and pushed giving in a more evidence-based direction.

Its weakness was subtler. Over time, it encouraged people to treat a published estimate as if it were a stable fact about the world, when in reality that estimate is the product of a research design, a set of assumptions, a particular sample, a series of specification choices, and a publication process that is far from neutral.

2. The real problem is not uncertainty – it is hidden uncertainty

Published estimates are often much noisier than ROI models make them look. They carry sampling error, reflect researcher judgment calls, and may be shaped by selective reporting, publication bias, or the peculiarities of a specific context. When ROI models collapse all of that into one crisp number, they do not remove uncertainty. They bury it.
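To make this concrete, here is a minimal sketch of how a single published estimate can hide a wide ROI range. All numbers are hypothetical, chosen only for illustration; the point is that the same study supports both the crisp headline figure and a much wider interval.

```python
# Hypothetical figures for illustration: a study reports a $450 earnings
# gain per participant (standard error $180) for a $300-per-person program.
effect = 450.0  # reported point estimate (USD per participant)
se = 180.0      # standard error of that estimate
cost = 300.0    # cost per participant

# The point-estimate ROI looks crisp...
point_roi = effect / cost
print(f"point ROI: {point_roi:.2f}x")

# ...but a 95% confidence interval on the same effect tells another story.
low, high = effect - 1.96 * se, effect + 1.96 * se
print(f"ROI range: {low / cost:.2f}x to {high / cost:.2f}x")
```

With these illustrative inputs, the "1.5x return" headline is consistent with anything from roughly 0.3x (a net loss) to 2.7x, before even considering publication bias or context shift.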

3. AI makes this both better and riskier

This is where AI changes the game. Used well, it can be an extraordinary evidence assistant. It can pull claims from papers, summarize methods, compare outcome measures, spot missing pieces in reporting, and help teams build draft proposals from far more literature than any one analyst could process alone. But speed changes the bottleneck. The central risk of AI-enabled philanthropy is not that it invents evidence out of nowhere, but that it gives weak evidence a cleaner narrative, a nicer structure, and a more persuasive tone than it deserves.

4. What philanthropy needs next: a replication-centered filter

That is why ROI-based philanthropy needs something between “AI found supporting studies” and “let’s fund this.” Call it a replication-centric ROI test. At a minimum, that filter should ask: Can the core claim be reproduced? Does the result remain robust under reasonable changes? How confident should anyone be that it carries over to a different setting? Can the funder work with a range of plausible effects rather than one point estimate?
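One way to picture the filter is as a hard gate that a proposal must clear before its numbers feed an ROI model. The sketch below is illustrative only; the field names and the all-or-nothing pass rule are assumptions, not an established standard.

```python
from dataclasses import dataclass

# Illustrative encoding of the four replication-filter questions.
@dataclass
class EvidenceCheck:
    reproduced: bool            # can the core claim be reproduced?
    robust_to_respec: bool      # does it survive reasonable spec changes?
    transfers_to_context: bool  # plausible in the funder's setting?
    has_effect_range: bool      # is a range of effects available, not one point?

def passes_filter(check: EvidenceCheck) -> bool:
    """A proposal advances only if every question gets a yes."""
    return all([check.reproduced, check.robust_to_respec,
                check.transfers_to_context, check.has_effect_range])

# A study that reproduces and is robust, but has never been tested
# outside its original context, does not clear the gate.
print(passes_filter(EvidenceCheck(True, True, False, True)))
```

In practice each question would be a graded judgment rather than a boolean, but even this crude version captures the key design choice: the filter sits between evidence gathering and funding, not after it.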

5. The future is probabilistic, not performative

Philanthropic ROI should become probabilistic by default. Instead of asking for one best estimate, teams should ask what range of impacts is actually plausible after accounting for fragility, missing evidence, and context shift. The strongest organizations in an AI proposal era will not be the ones producing the most summaries. They will be the ones using AI to widen the top of the funnel while being much stricter about what gets through the gate.
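A probabilistic ROI can be sketched with a simple Monte Carlo simulation: draw the effect from a distribution around the reported estimate, apply discounts for replication fragility and context transfer, and report percentiles instead of a single number. Every parameter below is an illustrative assumption, not a calibrated value.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical inputs, matching no particular study.
reported_effect = 450.0  # USD per participant, the published estimate
se = 180.0               # its standard error
cost = 300.0             # cost per participant

def simulate_roi() -> float:
    effect = random.gauss(reported_effect, se)  # sampling error
    effect *= random.uniform(0.5, 1.0)          # replication shrinkage (assumed)
    effect *= random.uniform(0.6, 1.0)          # context-transfer discount (assumed)
    return effect / cost

draws = sorted(simulate_roi() for _ in range(10_000))
p10, p50, p90 = (draws[int(len(draws) * q)] for q in (0.10, 0.50, 0.90))
print(f"plausible ROI: {p10:.2f}x to {p90:.2f}x (10th-90th pct), median {p50:.2f}x")
```

The output is a range, which is exactly the point: a funder can then ask whether the program clears the bar even at the pessimistic end, rather than betting on the single most flattering number.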