From curiosity to caution: How non-profit leaders are approaching responsible AI
From automating administrative tasks to generating high quality communications in seconds, generative and agentic AI tools offer clear opportunities for efficiency and innovation. However, alongside these opportunities come real and pressing challenges: ethics, accountability, transparency, and trust.
At a recent City of Kelowna conference, I had the opportunity to facilitate a session with non-profit leaders focused on practical approaches to responsible AI adoption and governance. The session was designed with a clear premise: AI is not simply a technical upgrade; it is a leadership decision with organization-wide implications.
Moving beyond the hype
Rather than focusing on tools or technical capabilities, the session centered on a practical, executive-level governance framework grounded in seven essential questions that organizations should consider before adopting AI.
These questions shift the conversation from “Can we use AI?” to “Should we, and under what conditions?”
A hands-on approach to decision-making
To bring this framework to life, participants were divided into five groups (37 participants total), each tasked with working through a realistic AI use case. The scenarios were intentionally diverse, reflecting the breadth of AI applications across the sector.
Each group was asked to evaluate its scenario and choose one of three options: approve, approve with conditions, or reject.
A clear and consistent outcome
Despite the diversity of use cases, every single group arrived at the same conclusion: approve with conditions. This unanimous result is telling. It reflects neither blind enthusiasm nor outright resistance. Instead, it signals a thoughtful and measured approach: leaders are open to AI, but only when appropriate safeguards are in place.
When asked to define those conditions, one theme emerged consistently across all groups: human oversight is essential. Participants emphasized the need for humans to remain actively involved in decision-making processes, particularly in areas affecting people, equity, and organizational reputation. AI, in this view, is a support tool, not a replacement for judgment or accountability.
The innovation governance gap
One of the most important insights from this session is what I would describe as an "innovation governance gap." Organizations are ready, and in many cases eager, to adopt AI, but they lack the structured governance, policies, and confidence to do so responsibly. This gap is not about technology; it is about leadership.
Non-profit organizations, like many others, are navigating a complex landscape where the pressure to innovate is matched by the responsibility to protect stakeholders, uphold values, and maintain public trust. Without clear governance frameworks, AI adoption risks becoming reactive, inconsistent, or misaligned with organizational priorities.
From insight to action
To close the session, participants were asked to identify one immediate action they could take to strengthen responsible AI use in their organization. This step was critical: it reinforced that governance is not an abstract concept, but something that can and should begin now.
Participants proposed a range of immediate actions to bring back to their organizations.
Final reflection
What stood out most from this session was not just the consistency of the results, but the clarity of perspective from non-profit leaders. There is no question that AI will play a role in the future of the sector. The real question is how that role is defined. The takeaway is simple but significant: organizations are not saying "no" to AI; they are saying "yes, but responsibly." That "but" is where leadership, governance, and thoughtful decision-making must lead the way.
CT Thompson
BAYTED is a freelance research and consulting firm offering professional services in English, French, and Polish.
We tailor evidence-based research and strategic consulting services to your organization's unique goals.
