At the NTEN 26NTC conference this year, and in the many conversations that have followed, a consistent theme kept surfacing. Nonprofit leaders are thinking seriously about AI, not just in terms of opportunity, but in terms of responsibility. They’re asking hard questions about how AI might impact their communities, both positively and negatively. 

Data privacy. The risk of perpetuating bias. Hallucinations presenting false information as fact. The fear of losing the authentic voice that makes an organization’s communications feel human. These are the instincts of leaders who take their responsibility to their communities seriously.

I want to be clear about something: those concerns are valid. Anyone telling you otherwise is not being honest with you. There are ways we can help address some of those concerns, and there are areas where AI is simply not the right tool for the job. At AGP, we focus on practical, responsible uses of AI that align with nonprofit values. Below are some of the strategies we use to help organizations navigate these common challenges without compromising trust, mission, or impact.

Protecting Your Data 

Data governance is the starting point. Every organization using AI needs a clear policy defining what information can and cannot go into an AI tool. Beyond that, make sure any tool you use has model training turned off, so your data is not being used to train someone else’s system. And if your use case involves particularly sensitive information about the communities you serve, consider a private language model or a contained AI environment rather than a public model. The risk is real, but it is also manageable with the right guardrails in place. 
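To make that concrete, here is a minimal sketch of what one such guardrail can look like: a screening step that redacts obvious personal identifiers before any text leaves your systems for an external AI tool. The patterns and names here are illustrative, not a complete privacy solution; your own data policy should define exactly what gets screened.

```python
import re

# Illustrative first-pass patterns; a real policy would cover more
# (names, addresses, case numbers) and be reviewed by your team.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers before text goes to an AI tool."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Reach our client at maria@example.org or 555-867-5309."))
# Reach our client at [EMAIL REDACTED] or [PHONE REDACTED].
```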

Guarding Against Bias 

Test your AI outputs against the communities you serve before relying on them. Your internal team will not always catch what your communities will. Keep humans in control of any high-stakes decisions that affect people’s access to services or resources — AI should never be the final word in those moments. And be transparent with the people you serve about where and how you are using AI. Transparency builds trust and creates the kind of accountability that protects both your communities and your organization. 
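One simple way to enforce the human-in-the-loop rule is to build it into the systems themselves. The sketch below (hypothetical categories and function names) refuses to finalize any high-stakes outcome without a named human reviewer, and logs who approved what so there is an accountability trail.

```python
HIGH_STAKES = {"eligibility", "benefit_amount", "housing_referral"}  # hypothetical categories

def apply_decision(category: str, ai_suggestion: str, reviewer: str | None = None) -> str:
    """Let AI draft, but only a named human may finalize high-stakes outcomes."""
    if category in HIGH_STAKES and reviewer is None:
        raise PermissionError(f"'{category}' decisions require human sign-off.")
    # Record who approved what, creating the accountability trail transparency depends on.
    print(f"category={category} reviewer={reviewer or 'automated'} decision={ai_suggestion}")
    return ai_suggestion

apply_decision("newsletter_topic", "spotlight the food pantry")            # low stakes: fine
apply_decision("eligibility", "approve application", reviewer="J. Ortiz")  # human signed off
```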

When AI Gets It Wrong 

The most effective mitigation is grounding your AI in your own data and giving the model specific instructions not to fabricate answers when it does not have enough information to respond accurately. An AI that tells you it does not know is far more useful than one that invents a confident but wrong answer. You have built up trust with your communities, and you need to keep that trust intact wherever AI is being used.
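Here is a minimal sketch of what that grounding looks like in a prompt, assuming a chat-style model API. The `ask_model` and `retrieve_relevant_docs` calls are placeholders for whichever provider and search step you use; the key parts are the restriction to provided context and the explicit permission to say "I don't know."

```python
SYSTEM_PROMPT = """You answer questions using ONLY the context provided below.
If the context does not contain the answer, reply exactly:
"I don't have enough information to answer that."
Never guess and never invent facts, names, or numbers."""

def build_grounded_prompt(question: str, documents: list[str]) -> list[dict]:
    """Attach the organization's own material so answers stay grounded in it."""
    context = "\n\n".join(documents)
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

# Hypothetical usage with your provider's chat API:
# docs = retrieve_relevant_docs("2024 program outcomes")   # your search step
# answer = ask_model(build_grounded_prompt("What were our 2024 program outcomes?", docs))
```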

Keeping Your Voice Your Own 

The solution here is not to avoid AI but to train it on who you already are. Feed it your existing content, past communications, and tone guidelines. Give it specific instructions in every context where it is generating content on your behalf. Keep a human in the loop to review and approve any content generated by AI. At the end of the day, you are responsible for any output you produce, whether it was made with AI or not.
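As a sketch of what "training it on who you already are" can mean in practice, the snippet below assembles a prompt from a tone guide and excerpts of past writing (both hypothetical here), so the model imitates your voice rather than a generic one. A person still reviews every draft before it goes out.

```python
TONE_GUIDE = "Warm, direct, plain language. First person plural. No jargon."  # from your style guide

PAST_EXAMPLES = [  # hypothetical excerpts from your own communications
    "Thanks to you, 40 families had a safe place to sleep last night.",
    "We listen first. Every program we run began as a community conversation.",
]

def build_voice_prompt(task: str) -> list[dict]:
    """Anchor the model in the organization's existing voice, not a generic one."""
    examples = "\n".join(f"- {e}" for e in PAST_EXAMPLES)
    system = (
        "Write in this organization's voice.\n"
        f"Tone guidelines: {TONE_GUIDE}\n"
        f"Examples of our past writing:\n{examples}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

# draft = ask_model(build_voice_prompt("Draft a thank-you email for spring donors."))
# A human editor approves or revises the draft before anything is published.
```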

Getting AI right takes time and iteration. What matters is that you approach it thoughtfully, with your mission and your communities as your guide. 

The nonprofit sector has always found ways to do more with less, to stretch limited resources in service of its mission. I have seen what happens when organizations approach AI with intention, the right governance, and a clear sense of what they are trying to protect. They reach more people. They free up capacity for the work that matters most. They do not lose their voice; they find new ways to use it. The concerns are real, and AI may not always be the right tool, but used in the right way, AI can help move the mission forward in ways that weren’t possible before. Move carefully, but move.
