AI is challenging the status quo everywhere—philanthropy is no exception
Grantmakers are eager to explore how AI can streamline grant cycles, accelerate decision-making, and reveal hidden insights in data. However, when used as a quick fix rather than a strategic tool, AI risks amplifying the very inequities philanthropy aims to solve. At Grantbook, we help funders not only explore AI solutions but also evaluate whether AI truly adds value to their mission or, in fact, undermines their commitment to equity.
Risks of Jumping Straight to AI
Applying AI too quickly can lead to disappointment, or even deeper challenges for grantmakers. Here are three common misconceptions that drive organizations toward AI prematurely:
1. AI will fix broken workflows.
If your grants management process is overly complex, layering AI on top won’t resolve the root issues. A long, duplicative application is still a burden on grantees, even if AI helps your staff read it faster. Process improvement should come first; otherwise AI just makes a bad process move faster.
2. AI will automatically reduce staff workload.
While AI can accelerate some tasks, it also introduces new ones: model training, data validation, governance, and ethical review. In fact, TAG’s 2024 State of Philanthropy in Tech report found that the top barriers to AI adoption include privacy and security concerns (55%), lack of skills (43%), and lack of certainty about relevant use cases within philanthropy (40%).
For AI to be implemented responsibly and effectively, it requires significant investment in new staff skills, governance, and strategic planning. This ultimately adds to the workload rather than automatically reducing it, at least initially and sometimes longer term.
3. AI won’t undermine human-centered practices.
Philanthropy isn’t just about processing information. It’s about relationships, trust, and accountability. Automating too much risks flattening the nuance of grantee voices. For example, auto-summarizing reports may capture content but miss the context, cultural differences, or lived experiences of the very changemakers that funders are seeking to understand.
When AI Isn’t the Real Fix: A Case Study
One of our foundation partners recently approached us, armed with two specific (and very common!) challenges, and excited about AI's potential to solve them:
- Could a chatbot help grantees prepare stronger applications?
- Could AI help generate board reports, saving program staff hours of manual writing?
When we unpacked these requests, we discovered underlying issues that prompted different solutions.
First, grantees were rewriting applications every year. Program staff believed their grants management system prevented reusing past submissions; in reality, a built-in copy/clone function already existed but wasn’t being used.
Instead of jumping to an AI solution, we enabled the feature and trained staff on how to make it part of the renewal process. Returning grantees no longer had to rewrite entire applications each year, a shift that not only eliminated frustration and saved significant time but also reduced inequitable burdens on smaller organizations with fewer resources.
Second, staff were spending weeks on board reports. Program officers were manually compiling lengthy reports, a time-consuming process prone to inconsistency. Again, rather than turning to AI, we leveraged existing reporting tools within their system that could automatically populate Word templates directly from structured fields. We also scaffolded workflows so that, in the future, AI could support drafting concise summaries. But the immediate relief came from better use of features already available.
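The merge-field idea behind that kind of reporting tool can be sketched in a few lines. This is an illustrative example only, not the partner’s actual system: the organization, field names, and values are hypothetical, and Python’s standard-library `string.Template` stands in for the Word-template merge that many grants management platforms provide.

```python
from string import Template

# Hypothetical structured fields, as they might be exported from a
# grants management system for one grant record.
grant = {
    "org_name": "Riverside Youth Collective",
    "amount": "$50,000",
    "program_area": "Education",
    "status": "Active",
}

# A board-report snippet written as a template with merge fields.
# Real platforms apply the same idea to Word documents: staff design
# the template once, and each report is filled from structured data.
report_template = Template(
    "Grantee: $org_name\n"
    "Award: $amount ($program_area)\n"
    "Status: $status\n"
)

print(report_template.substitute(grant))
```

Because the report is generated from the same fields staff already maintain, every grant is summarized in a consistent format with no manual re-typing, which is exactly the inconsistency problem the program officers were facing.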
In reality, many “AI problems” are really process problems in disguise. Addressing those first not only delivers faster impact but also sets the stage for AI to succeed when, and if, it’s the right tool.
A Framework for Deciding if AI Will Help
Try this four-step framework to cut through the hype and determine if AI is the right tool for the job.
Step 1: Clarify the Root Cause of the Problem
Start by asking what problem you are actually trying to solve. Is it a problem of operational inefficiency, equity, capacity, or relationships?
Often, what looks like an “AI problem” is really one of process, governance, or design.
Step 2: Explore Non-AI Solutions First
In many cases, the fix to your problem could be hiding in plain sight. These potential solutions include:
- Process redesign: Simplifying forms, streamlining approvals, or reducing reporting cycles can deliver more impact than any algorithm.
- Exploring existing system features: Many grants management platforms have underutilized functionality for cloning applications, auto-generating reports, or triggering automated workflows.
- Automation within and between existing tools: Workflow engines, dashboards, and data integrations can eliminate manual tasks without the complexity of AI.
Step 3: Evaluate AI With Five Lenses
If non-AI options won’t cut it, then it’s time to evaluate AI. Some questions to ask yourself are:
- Will AI meaningfully improve the experience of both grantees and staff?
- Does this AI reduce or deepen power imbalances?
- Do you have the governance, staff capacity, and data maturity to maintain responsible use?
- Is sensitive information, whether funder or grantee, protected from unauthorized access, breaches, or misuse?
- Are you collecting, storing, and using grantee or community data in ways that respect consent, confidentiality, and ethical standards?
Step 4: Pilot and Learn
If AI looks like the right path, start small. Choose a low-risk, high-impact use case and set clear success measures before piloting. Make sure to maintain human oversight to validate results and catch issues early, and document your learning as you go. If you want to drive your learning further, incorporate a feedback loop and be prepared to adapt.
AI isn’t a silver bullet. It won’t fix broken processes or transform inequitable practices on its own. In fact, applied hastily, it risks adding complexity or deepening existing imbalances. By focusing on the root cause of the problem and remaining open to both AI and non-AI fixes, you can leverage effective solutions that work responsibly in your unique context.
