


AI & Engagement: How Digital Tools Are Transforming Local Government Consultations

  • Writer: Claire Roper
  • Jun 22
  • 6 min read

Updated: Jun 24

Co-authors: Lynda George and Claire Roper


Meta now uses AI to automatically summarise comment threads on posts—supposedly to highlight common themes and reduce the moderation load. But with AI now influencing around 20% of all feed content through algorithmic recommendations, we must ask: What’s being amplified? What’s being filtered out?


Text on Meta AI card discusses reactions to Bob's Stores closing in Connecticut. Reasons include "going woke," poor selection, and online retail.
Meta’s AI summarising tool.

Summaries, while convenient, can oversimplify or misrepresent nuanced opinions. When AI becomes the lens through which decision-makers view engagement, there’s a real risk that public sentiment will be flattened, misread, or even manipulated. When AI interprets the conversation, are we still hearing the people?


“Responsible AI is not just about liability — it's about ensuring what you are building is enabling human flourishing” - Rumman Chowdhury, CEO at Parity AI, 2023

A recent Forbes survey revealed:

  • 75% of consumers worry about AI-generated misinformation.

  • 93% want mandatory labels for AI-created content.

  • 97% demand safeguards on how platforms use personal data to train AI models.

Robot presenting to businesspeople in a modern conference room. Large table, wood panel walls, and city view. Mix of engagement and curiosity.
Is the conversation still authentic when AI has a seat at the table? (Image created by AI)

Did you know? ChatGPT describes itself as “an AI program that reads and learns from lots of text on the internet and books, then uses that knowledge to generate answers and have conversations based on the questions you ask.” - ChatGPT



AI-Assisted Responses in Consultations


With tools like ChatGPT, Google Gemini, and Microsoft Copilot, community members can now craft polished, structured, and persuasive consultation submissions. While this can empower those with limited time or literacy, it also raises equity concerns:


  • Are these responses representative of genuine community sentiment?

  • Does this skew consultation data toward those who know how to use AI tools?

  • Are we, perhaps unintentionally, measuring digital literacy as much as we’re measuring public opinion?


As practitioners, we must acknowledge that AI-written content could amplify certain voices while marginalising others, and plan engagement strategies accordingly. AI struggles with nuance, such as sarcasm or emotional complexity, and can reflect biases; in one comparison, text-only AI misclassified 23% of comments relative to human judgment. The average number of participants in local government consultations in New Zealand also varies significantly depending on the size of the council, the nature of the consultation, and the methods used to engage the community, so smaller samples are easily distorted.



There are already issues finding a balance between groups and individuals submitting on a consultation. Groups will often provide set wording or ideas for participants to submit, which can cause an influx of repeated messages or skew the community sentiment in favour of one group.


A Victorian council experienced this when passionate pickleball enthusiasts accounted for half the submissions on a recreation plan. In a 2024 paper, Social Pinpoint predicted that in 2025 the increase in AI usage will make it easier for groups to mobilise and influence consultation outcomes. Practitioners and their organisations will need to plan ahead for how they will process this data.
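One practical first step in processing that data is flagging near-identical wording across submissions, which often signals a coordinated campaign. A minimal sketch using only Python’s standard library follows; the function names, threshold, and sample submissions are all hypothetical, and a real engagement platform would need more robust matching:

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Ratio (0.0-1.0) of matching characters between two normalised texts."""
    def norm(t: str) -> str:
        return " ".join(t.lower().split())
    return SequenceMatcher(None, norm(a), norm(b)).ratio()


def flag_campaign_groups(submissions: list[str], threshold: float = 0.85) -> list[set[int]]:
    """Group indices of submissions whose wording is near-identical."""
    groups: list[set[int]] = []
    for i, text in enumerate(submissions):
        placed = False
        for group in groups:
            # Compare against one representative member of the group.
            rep = submissions[next(iter(group))]
            if similarity(text, rep) >= threshold:
                group.add(i)
                placed = True
                break
        if not placed:
            groups.append({i})
    # Only groups with more than one member suggest coordinated wording.
    return [g for g in groups if len(g) > 1]


subs = [
    "Please fund more pickleball courts at Riverside Reserve.",
    "Please fund more pickleball courts at riverside reserve!",
    "I would rather see the budget spent on walking tracks.",
]
print(flag_campaign_groups(subs))  # → [{0, 1}]
```

Flagged groups would still go to a human reviewer; the point is to surface the pattern, not to discount those voices automatically.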

Four people play pickleball on a blue court. Two in the foreground have green and yellow paddles. The background shows a net and fence.


The Risk of Deepfakes in Video Consultations


As online engagement becomes more visual, through online forums, video submissions, or digital hearings, a disturbing possibility emerges: AI-generated deepfakes could be used to impersonate individuals or simulate community consensus. This is not science fiction; it is already happening.



This question is critical for councils and organisations conducting video consultations: how do we validate identity in a virtual town hall?


In the last decade, AI capabilities have started to exceed human performance. HumanizeAI put this to the test with 30,000 human participants, asking them to determine whether an image was generated by AI or created by a person. Out of 5 images, only 10% of participants guessed 4 or more correctly. Apply this to video content and the written word: how do our decision-makers know what has been created, and by whom? We need to think seriously about the potential consequences and how we are going to mitigate them. Currently, there is no training on these matters; it is up to individuals to pursue it and to convince management to get ahead of the game.


Try the game for yourself and test your skill: https://humanizeai.com/human-or-ai/

Four faces with text indicating if they're human or AI. Results show guesses as correct or wrong, with percentages and URLs provided.
“I think it’s promising that we have policymakers who are trying to get smart about this technology and get in front of risks before we’ve had mass deployment across the product space. I think there are some very obvious things that we need to establish, one of which is the right to know whether you’re consuming content from a bot or not.” - Clem Delangue, co-founder and CEO, Hugging Face


AI as a Double-Edged Sword for Engagement Specialists


AI offers engagement teams the promise of scale and efficiency. It can:

  • Process thousands of survey responses.

  • Summarise key themes.

  • Even detect patterns in sentiment or tone.
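As an illustration of the “summarise key themes” step, here is a minimal sketch that counts the most frequent substantive words across free-text responses, using only Python’s standard library. The stopword list and sample responses are hypothetical, and production tools use far more sophisticated topic modelling:

```python
import re
from collections import Counter

# A tiny illustrative stopword list; real tools use much larger ones.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "for",
             "we", "our", "it", "on", "be", "but", "more", "please"}


def key_themes(responses: list[str], top_n: int = 5) -> list[tuple[str, int]]:
    """Count how often each substantive word appears across all responses."""
    words = []
    for text in responses:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in STOPWORDS and len(word) > 2:
                words.append(word)
    return Counter(words).most_common(top_n)


responses = [
    "We need safer cycling lanes on the main road.",
    "Cycling to school feels unsafe; please add lanes.",
    "Parking is fine, but cycling infrastructure is lacking.",
]
print(key_themes(responses, top_n=3))
```

Even this toy version shows the ethical question in miniature: whoever chooses the stopword list and the counting logic decides which words count as a “theme”.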


But it also raises tough ethical questions:

  • Who programs the summary logic?

  • What voices are excluded or softened in the algorithm?

  • Could reliance on automated summaries erode diversity of thought—especially dissent?


AI should support, not replace, human interpretation and relationship-building.

“Artificial intelligence, deep learning, machine learning — whatever you’re doing, if you don’t understand it, learn it. Because otherwise, you’re going to be a dinosaur within three years.” - Mark Cuban, American entrepreneur and investor, 2020



Do AI Platforms Answer Public Questions Accurately?


As more people turn to AI tools like ChatGPT for help understanding policies or consultation documents, a new risk emerges: misleading information delivered with high confidence. While generative AI is improving rapidly, it can still:


  • Oversimplify complex issues.

  • Omit context.

  • Deliver outdated or inaccurate details.


AI tools are only as good as the proficiency of the person using them. Many workplaces have style guides and report-writing templates; we now need to consider data culture as well. How and when do we use AI? Do we have an organisationally trained AI? If not, how do we get staff to train their AI appropriately, within the realms of the organisation’s data culture?


When people ask, “What does this proposed change mean for me?”, and AI answers incorrectly, trust can be undermined—and decisions based on misinformation become a real threat.


A man in a beige suit intently faces a silver humanoid robot in a modern setting. The mood is tense and curious, with a blurred background.
A human and AI having a conversation (image created by AI).


Implications for Public Engagement

“Every team in your organisation is looking to the IT team to help them deliver AI powered experiences. And I know we don’t want to admit it, but IT doesn’t have all the answers. Because AI isn’t as easy as just turning it on. Delivering great AI experiences requires time, expertise and data.” — Ahyoung An, senior director, product management, MuleSoft

The integration of AI into public engagement demands a shift in thinking. Here's what we recommend:


  • Treat social media feedback with scrutiny: Platforms like Facebook are filtering and summarising engagement through AI lenses. Be critical.

  • Interrogate who is speaking—and how: Is this a genuine voice, or an AI-assisted one? How accessible are the tools for different groups?

  • Beware of AI content hiding in plain sight: A polished consultation response may mask a lack of personal experience or stake.

  • Prepare for identity authentication in video consultations: Start asking now how you’ll verify participants if deepfakes rise in prevalence.

  • Create policies and guidelines for staff: Be specific on what tools can and cannot be used, when and why. Empower staff to become AI champions.

  • Push for transparency: Use clear labelling, update disclosure and privacy statements, and include opt-outs when AI is part of your process—just like the public is asking for.



A New Lens for the Future

“In an era where technology shapes governance, adopting AI/ML is not merely an option — it is a requirement for future-ready public service.” - Granicus, white paper, “The time is now - Leveraging Secure, Responsible AI/ML to Transform Public Services”

As AI becomes more embedded in our lives, we must ensure engagement remains transparent, inclusive, and human-centric. The future of public consultation might be faster and more data-rich, but it must also be trustworthy, equitable, and genuinely reflective of the people it serves.


We need to re-evaluate the tools, methods, and assumptions that guide our work before the line between authentic voice and synthetic noise becomes too blurred to distinguish.



Meet guest co-author Lynda George

Standards & Integrity Lead | Operations Nerd | Strategic Advisor | Volunteer Advocate | People-First Leader | Making Systems Work for Humans


Operations lead with a track record of building systems that work for people. I’ve delivered national certifications, managed cross-functional teams, developed policies that balance compliance with human impact, and reported outcomes to executive teams and boards. I focus on practical solutions, sound processes, and keeping momentum.


Connect on LinkedIn



