Empower federal agencies to regulate AI under existing laws (or new authority) and aggressively enforce against AI harms.
Push federal agencies to set and enforce clear expectations for AI harms using existing authority (and any new authority Congress grants). This strategy focuses on targeted rules, guidance, investigations, and enforcement that require fixes in practice, not just promises. It’s especially valuable because agencies can act while Congress debates a broader framework.
Why this works
- Agencies like the FTC, FDA, EEOC, etc., can use their domain expertise to issue targeted rules – for instance, treating biased AI decisions as an “unfair or deceptive practice” under FTC Act authority.
- This path can be faster than waiting for Congress; indeed, agencies are already investigating AI misuse (e.g., the FTC’s investigation into OpenAI).
- It also allows flexibility: agencies can update guidelines as AI evolves.
Public Citizen
Advocacy: Championing consumer rights and accountable government
Mechanism
- Identify high-impact AI uses where errors, bias, or privacy harms are most likely to affect people.
- Develop grounded legal theories under existing statutes for targeted agency action.
- Engage agencies with petitions, briefings, and evidence that supports guidance, rules, and enforcement priorities.
- Support investigations and enforcement that treat harmful AI practices as actionable, not just reputational issues.
- Coordinate across agencies so oversight is consistent and gaps are reduced.
- Update guidance as AI evolves so enforcement stays relevant over time.
Milestones
Checkpoints and the expected timing for each step
- 1. Agency priority map defined (Early): Priority harms, high-impact contexts, and responsible agencies are mapped with a clear engagement plan.
- 2. Petitions and briefings delivered (As engagement begins): Agencies receive a concrete package of evidence, recommendations, and draft guidance concepts.
- 3. Rulemaking or guidance launched (During agency action): One or more agencies initiate formal guidance or rulemaking tied to AI harms.
- 4. Enforcement outcomes established (As cases proceed): Early investigations or actions result in remedies that demonstrate real accountability.
- 5. Iteration loop in place (Ongoing): Guidance is updated as needed and enforcement remains active as AI use expands.

