Trump’s AI Regulatory Order Threatened as Consumer Groups Call It Unlawful
President Trump signs an AI initiative in the Oval Office of the White House, Dec. 11, 2025, in Washington. Alex Brandon, Associated Press
By Solange Reyner | Friday, 12 December 2025 03:47 PM EST
Consumer advocacy groups contend President Donald Trump’s executive order—which seeks to prevent states from establishing their own artificial intelligence regulations—may be unlawful.
Robert Weissman, co-president of Public Citizen, said in a statement: “The good news is, this EO [executive order] is mostly bluster. The President cannot unilaterally preempt state law. States should refuse to be cowed in regulating Big Tech.” He added: “We expect the EO to be challenged in court and defeated; in the meantime, states should continue their efforts to protect residents from the mounting dangers of unregulated AI.”
Public Citizen describes itself as a nonprofit consumer advocacy organization that “champions the public interest in the halls of power.” The National Consumer Law Center asserted the executive order “flies in the face of the critical and constitutional role states have in protecting the public.”
Lauren Saunders, associate director and director of federal advocacy at the center, explained: “States have a vital role in protecting the public from the myriad of known and unknown risks of AI, which can be used to improperly reject people for credit, jobs, and housing; freeze or steal bank accounts; abuse and share private data; and raise the cost of living through surveillance pricing.”
John Bergmayer, legal director of Public Knowledge, noted an executive order cannot preempt state legislative action: “They’re trying to find a way to bypass Congress with these various theories in the executive order. Legally, I don’t think they work very well.”
Trump’s executive order, signed Thursday, directs the attorney general to create a task force to challenge state AI laws and orders the Department of Commerce to compile lists of problematic regulations. It also threatens to withhold broadband deployment funds and other grants from states that enact AI legislation.
According to the International Association of Privacy Professionals, California, Colorado, Texas, and Utah have enacted private-sector AI rules requiring transparency and limited data collection. These laws respond to artificial intelligence embedded in daily life—where systems increasingly influence critical decisions like employment interviews, housing applications, and healthcare access. Research shows such technologies can make errors, including prioritizing specific genders or races.
State regulatory proposals mandate companies disclose practices and assess potential discriminatory risks from their AI programs. The Associated Press contributed to this report.