7 Questions every leader should ask when deploying or assessing the use of AI
For many organizations, the use of Artificial Intelligence (AI) promises efficiency, innovation, and competitive advantage. It is also transforming how we work, from automating administrative tasks to supporting complex decision-making and producing polished written material.
Generative AI tools like ChatGPT (the GPT stands for generative pre-trained transformer) can produce a wide variety of credible writing in seconds, then respond to feedback from the user to make the writing more polished. Any organization that needs to produce clear written materials stands to benefit. Those benefits come with a responsibility to use AI safely, ethically, and in alignment with your organization’s values.
Before launching any system, whether it’s an internal chatbot, predictive analytics tool, automated service, or writing tool, every leader should take a step back and ask a few critical questions.
1. Who is accountable for the AI system? Someone must take ownership of outcomes: not just technical performance, but also social and ethical impacts. When responsibility is not clearly assigned, it’s assumed to be no one’s job.
2. Whose values and rights does it reflect? The use of AI reflects the values, biases, and priorities of the people and data behind the systems. Leaders looking to adopt a system in their organization must ask whether it aligns with the organization’s ethics, privacy, community expectations, and human rights principles.
3. Who owns the data, and under what authority? Data is king, but it is also a point of risk. Ownership and jurisdiction matter, especially when dealing with sensitive, cultural, or community-based data. Understanding consent, storage, and sovereignty (among other considerations) is key to maintaining trust.
4. Who is protected, and who is at risk of being harmed? As with every innovation, there are unintended consequences. AI can amplify inequalities if not designed or used carefully. Asking who benefits and who might be left behind will help leaders ensure their organizations build or use systems that serve, rather than exclude.
5. Can clients, stakeholders, or the public understand how it works? If those affected by an AI system can’t make sense of it, transparency is missing. Being able to explain the system and the material it generates means communicating clearly how and why the AI reaches its conclusions.
6. Does the AI system meet current and future compliance standards? Regulation in the field of AI is moving fast. From Canada’s proposed Artificial Intelligence and Data Act (AIDA) to the EU’s AI Act, new rules will demand proof of ethical and compliant practices. Forward-thinking organizations are preparing now.
7. Are we being transparent about its use? When people know where and how AI is being used, for example in hiring, service delivery, or communication, they are more likely to engage with confidence, because transparency builds trust.
If you can’t answer these seven questions, your AI strategy may not be ready.
Taking the time to ask the right questions helps leaders build AI systems and policies that are responsible, resilient, and ready for the future. This is the work I help organizations navigate: turning complex AI challenges into clear, actionable strategies rooted in accountability, ethics, and governance.
-CT
ThompsonBAYTED is a freelance research and consulting firm offering professional services in English, French, and Polish.
We tailor evidence-based research and strategic consulting services to your organization's unique goals.
