Does the public comment system have an AI problem?

Fake comments are easy. Proving them isn’t. Last year, when an air quality agency in Southern California proposed a new rule to encourage consumers to buy heat pumps instead of gas heaters, the agency was flooded with 20,000 comments opposing the idea—many more than usual. “Due to the volume and nature of these submissions, South Coast AQMD had concerns about their authenticity,” says Rainbow Yeung, an agency spokesperson. The agency’s executive director got an email thanking him for his “opposition” to a rule that his own team had drafted.
Why This Matters
This article highlights the growing challenge of fake comments in public consultation processes, which can drown out genuine public input and distort policy decisions. As AI-generated comments become easier to produce at scale, verifying authenticity is crucial for maintaining trust and transparency in government rulemaking. Addressing this issue is vital for ensuring that public feedback accurately reflects community concerns and informs effective policymaking.
Key Takeaways
- AI can generate convincing fake comments, complicating authenticity verification.
- Public comment systems need better tools to detect and prevent fake submissions.
- Ensuring genuine public input is essential for transparent and effective policymaking.