Avoiding Unseen Bias: Using AI Responsibly in Public Affairs

Using Generative AI has become an integral part of my workflow. For me, it works best as a collaborator—helping generate ideas, refine concepts, test theories, and introduce fresh perspectives. As someone who is dyslexic, Generative AI has also become particularly valuable, making it much easier for me to get the ideas in my head down on paper.

I’m sure we are all now exploring how Generative AI can augment our work in public affairs. Whether it’s drafting position papers or internal communications, conducting research, testing messages, scenario planning, or reviewing policy documents, Generative AI tools can be powerful assets.

However, attending the European Commission’s Joint Research Centre (#JRC) event on science communication and AI opened my eyes to the subtle but real risks of bias associated with these tools. It prompted me to take a closer look at how Generative AI tools operate, why biases occur, and, importantly, how public affairs professionals can manage and mitigate these risks.

Understanding Generative AI

First, let’s briefly unpack how Generative AI models, specifically Large Language Models (LLMs), actually work. Models such as ChatGPT (OpenAI), Gemini (Google), or Le Chat (Mistral) generate human-like text by recognising and predicting patterns learned from extensive training on vast datasets of existing text content. Essentially, they predict the most probable next word or phrase based on patterns seen in their training data.
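To make the idea of "predicting the next word" concrete, here is a minimal, purely illustrative sketch in Python. The probability table is invented for illustration; a real LLM derives these probabilities from billions of learned parameters rather than a hand-written dictionary.

```python
import random

# Toy probability table: the words a model has "learned" are most likely to
# follow the prompt "The new policy will". The numbers are invented purely
# for illustration.
next_word_probs = {
    "reduce": 0.35,
    "increase": 0.30,
    "support": 0.20,
    "harm": 0.15,
}

def predict_next_word(probs):
    """Pick the next word in proportion to its learned probability."""
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The new policy will"
print(prompt, predict_next_word(next_word_probs))
```

The key point of the sketch is that the output reflects whatever patterns sit in that table. If the "training data" behind the probabilities is skewed, the predictions will be skewed too.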

However, because their decision-making processes involve complex statistical computations across potentially billions of parameters, these models operate as "black boxes," making it difficult to fully understand how specific outputs are generated.

This lack of transparency can inadvertently allow biases present in the training data—such as historical prejudices or underrepresentation—to subtly influence the AI's outputs. Biases in AI can come about due to several factors, including:

  • Training data bias: If training data contains historical or societal prejudices, LLMs can learn and amplify them.

  • Sampling bias: When training data doesn’t accurately represent the full audience it serves, skewed outputs can disadvantage underrepresented groups.

  • Interaction bias: Human biases can inadvertently shape AI design and influence how these tools interact with or adapt to their users over time.

The AI Index Report 2024 highlights these risks vividly, notably documenting measurable political bias in ChatGPT that favours specific political parties in the US and UK. Such biases pose clear risks: if we unknowingly test policy positions, develop communications, or carry out research using politically skewed information, we risk undermining trust and credibility and unwittingly misleading our stakeholders. This is not just theoretical; political bias can tangibly affect public trust, distort policy advocacy, and disrupt stakeholder engagement.

So, given these risks, what practical steps can public affairs professionals take to safeguard against AI biases?

  1. Human oversight is essential: Always maintain human control over key decisions and outputs. Generative AI tools support our work, but we must own the outcomes, ensuring they align with our organisation’s standards and values.

  2. Be proactive and curious: Regularly question the outputs generated by AI. Treat it as a knowledgeable but fallible collaborator, checking accuracy and fairness consistently.

  3. Implement simple bias-checking routines: Develop quick checks or protocols, perhaps peer reviews or 'red team' sessions, to periodically test outputs for potential biases before publication (a simple illustration follows this list).

  4. Involve diverse perspectives: Even if you’re working individually, seek feedback from a diverse set of colleagues or stakeholders. This broader perspective helps surface blind spots that AI alone might overlook.

  5. Transparency is key: Clearly communicate how you’re using Generative AI in your work. Internally, document processes and checks you perform. Externally, consider disclosing where AI has contributed to content to maintain stakeholder trust.
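As a rough sketch of the "quick checks" mentioned in point 3, here is one way a paired-prompt review could look in Python. Everything in it is an assumption for illustration: `generate` stands in for whichever Generative AI tool you use, and the template and group labels are placeholders. The idea is simply to ask the same question about different groups and compare the answers side by side.

```python
def generate(prompt: str) -> str:
    """Placeholder for a call to whichever Generative AI tool you use."""
    return f"[model response to: {prompt}]"

# Hypothetical paired prompts: identical wording, with only the group swapped.
TEMPLATE = "Summarise the strongest arguments for and against {group}'s position on the proposal."
GROUPS = ["Party A", "Party B"]

for group in GROUPS:
    response = generate(TEMPLATE.format(group=group))
    print(f"--- {group} ---")
    print(response)
    print()

# A colleague or 'red team' then compares tone, length and framing across the
# paired outputs before anything is published.
```

The comparison itself stays human: the script only makes it easy to put the paired outputs next to each other so a reviewer can spot differences in framing.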

Managing the risks of AI bias isn’t about avoiding these tools; it’s about using them responsibly. As public affairs professionals, we need to understand and manage these challenges proactively. Ultimately, we remain responsible for how we use the outputs of these tools.

How are you handling AI biases in your work?

#PublicAffairs #PublicAffairsWorkflow #AI
