What I Keep Seeing When Nonprofits Say They're Using AI
19-Apr-2026 • Written by: Mohamed Hamad
Somewhere between the board meeting where AI got added to the strategic plan and the Tuesday afternoon when a program coordinator pasted a client intake note into ChatGPT, something important got skipped.
Nobody set the rules.
I've heard this story more times than I can count working with nonprofits over the past few years. Leadership gets the message from sector reports, peer organizations, and conference keynotes that AI is the answer to doing more with less. The pressure to adopt is real: a 2026 Imagine Canada survey found that 80% of Canadian nonprofits are already using AI in some form. The tools arrived fast. The thinking about how to use them responsibly didn't keep pace.
That gap isn't a technology problem. It's a trust problem, and it will eventually catch up with every organization.
80% of Canadian nonprofits are using AI. Only 10% have a policy governing it.
The same Imagine Canada research found that only about 1 in 10 organizations have any formal policy governing how AI gets used. Another fifth are working on one. That leaves roughly seven in ten nonprofits using AI with no guidelines, no accountability structure, and no clear answer to a simple question: who in this organization is responsible for how we use these tools?
What makes that number worth paying attention to is that AI tools aren't neutral containers. When a staff member pastes information into a chat-based AI tool like ChatGPT, or when an AI feature inside a platform like a CRM or project management tool processes workspace data, that information is touching third-party infrastructure. Whether it gets retained, whether it trains a model, and what rights the vendor has to it depend entirely on the vendor's terms of service. Not your intentions. Not your values. Their terms.
Most organizations I work with haven't read those terms for a single tool in their stack.
Two ways this quietly goes wrong
Leadership is all-in and the team has no guidance
The executive director uses AI constantly. Board reports, strategic documents, donor correspondence, program summaries. It's become indispensable to how she works. The team hasn't been onboarded. There are no guidelines. Staff have no visibility into what organizational information is being fed into the tool or what the vendor's data policy actually says.
The enthusiasm outpaced any governance conversation. And because the results look good, nobody raises a hand.
The team moves fast and leadership has no idea
A program coordinator finds a free AI writing tool that cuts her grant report drafting time in half. She tells two colleagues. It spreads quietly. Within a few months, several staff members are using it through personal or free accounts. The tool's free tier explicitly permits training on user inputs. The organization's service agreements contain confidentiality clauses about program data. Nobody's connected those two facts, because nobody knows both are true at the same time.
In a recent conversation, Sandra Peterffy, a compliance and privacy consultant with 20 years of experience helping tech and healthcare organizations build responsible data practices, made an observation that's stayed with me: organizations that were doing data handling right simply stopped thinking about it when AI entered the picture. Not from carelessness. From speed.
Both situations above are speed problems. The tools aren't the issue. The absence of anyone accountable for how they get used is.
Privacy law doesn't care how good your intentions are
I'm not a lawyer and this isn't legal advice. But I work with enough organizations in Quebec to know this is worth flagging.
Law 25, Quebec's privacy legislation, requires organizations to know what personal information they hold, have a designated Privacy Officer, and conduct a Privacy Impact Assessment before deploying technologies that process personal information. Most nonprofits haven't done that assessment for a single AI tool in their stack. If your team is using AI tools that touch client names, case notes, intake forms, or any personal data about the people you serve, that's a conversation worth having with someone who knows privacy law.
For organizations operating beyond Quebec, the GDPR works from the same foundation: you're responsible for how personal data is processed, including by the third-party tools you use. Choosing a reputable vendor doesn't transfer that obligation.
As Sandra put it in that same conversation: it's no longer enough to say "trust us." Clients, beneficiaries, and regulators increasingly want organizations to show they're trustworthy. There's a meaningful difference between those two things.
Three questions worth asking before this becomes a harder conversation
I'm not suggesting a compliance overhaul. I'm suggesting a conversation most organizations haven't had yet.
What tools are actually in use?
Ask your team. Include personal accounts and free tools people are using on their own initiative. You can't govern what you can't see. The list you get back will probably surprise you.
Does any of it touch personal information?
Client names, case notes, service records, intake information, any personal details about the people your organization serves. If yes, those tools go on a short list for closer review. Start by reading each vendor's privacy policy, specifically whether your data is used for model training and whether you can opt out.
Who's accountable?
Not who uses AI, but who's responsible for how it gets used. In a small organization that may be the executive director. In a larger one it should be a named role. Law 25 requires a designated Privacy Officer. That person needs to know your AI stack exists and what it touches.
Three questions. One conversation. One person accountable. That's the foundation everything else builds on.
Final Thoughts
The goal isn't to stop using AI. The sector genuinely needs it. The capacity pressures facing nonprofits are real, and the tools can help. The goal is to use AI the way you'd want someone to use the most sensitive information about the people you serve.
The organizations that pause to ask these questions now will be the ones their clients and communities continue to trust. For a mission-driven organization, that trust isn't a soft outcome.
It's the whole point.
Not sure where your organization stands?
That's exactly the question The JumpStart was built to answer. It's a practical, human-led program built specifically for nonprofits and mission-driven organizations that want to use AI responsibly, without the overwhelm of figuring it out alone.
Book a vibe check. A 30-minute conversation to see where you are and whether The JumpStart is the right fit.
Mohamed Hamad
Mohamed Hamad is the founder of Third Wunder, a Montreal-based digital marketing agency, and has 15 years of experience in web development, digital marketing, and entrepreneurship. Through his blog, "Thought Strings", he shares insights on digital marketing and design trends, along with lessons from his entrepreneurial journey, aiming to inspire and educate fellow professionals and enthusiasts.