It’s everywhere: in your phone’s apps and already in the systems that decide credit applications. The rise of AI has fundamentally transformed our economy and society in record time, far outpacing lawmakers’ ability to establish clear rules. It is in this regulatory vacuum that the federal government is now seeking a way to rein in the future of unbridled AI.
On December 8, 2025, President Donald Trump announced that he would sign a “ONE RULE” Executive Order to establish a single, national standard for the regulation of artificial intelligence. Does the move make sense, or is he simply trying to close the barn door after the horse has bolted? We explain it all to you.
The “One Rule,” explained
The main complaint driving the White House and big tech companies toward a single rule is the legislative patchwork that has grown up around AI across the 50 states. The fact that the US government has been slow to legislate does not mean that states, acting independently, have not rushed to defend their citizens from this unregulated technology.
However, this situation was unsustainable. Companies such as OpenAI, Google, and Meta Platforms argue that it is unfeasible and paralyzing to comply with 50 different, and often contradictory, sets of AI laws, one for each state. President Trump agreed with them: “You can’t expect a company to get 50 approvals every time it wants to do something.” In practice, to stay compliant everywhere, a tech company inevitably ends up following the most restrictive state standard (in many cases, California’s). This creates a national bottleneck that slows the pace of development and the ability of start-ups to thrive.
Critics of the patchwork see state bureaucracy as an obstacle that consumes time and capital. For both the White House and the industry, a single national rule is the fastest and most efficient way to accelerate the development of AI and catch up with its global rival, China.
The AI “Regulatory Gold Rush”
In the months when there was no federal legislation, AI ran unchecked, so individual states rushed to fill the regulatory void. No state was left out: all 50 introduced some form of AI-related legislation, and 38 states have enacted or adopted nearly 100 measures on artificial intelligence. These state laws are not broad regulatory frameworks; rather, they target specific, demonstrable problems.
The first laws prohibited, or required the disclosure of, AI-generated political deepfakes to prevent misinformation during campaigns and elections.
Laws have also been enacted that strictly penalize the use of AI to create non-consensual intimate images. Colorado went further, passing a comprehensive law that imposes transparency, monitoring, and bias-mitigation obligations on AI systems used in hiring or credit scoring. Other states have required their public agencies to publish inventories of the automated decision-making tools they use that could affect residents.
State leaders argue that they needed to put these safeguards in place to protect their citizens, given that Congress has not acted and states have a duty to ensure consumer protection.
Outside the United States, other jurisdictions have also moved to regulate the technology; the European Union, for example, has passed its strict AI Act.
FAQs
Is the news about Trump’s executive order on AI true?
Yes. Multiple sources, including Reuters, reported on December 8, 2025, that President Donald Trump plans to sign a “ONE RULE” Executive Order to unify AI regulation at the national level.
What kind of laws have states passed?
State laws are narrow and focus on specific risks:
- Deepfakes: Prohibition or mandatory disclosure in political campaigns.
- Discrimination: Risk-assessment obligations for “high-risk” systems, such as those used in hiring and credit scoring (e.g., Colorado).
- Privacy: Transparency and consumer protection requirements. This year alone, 38 states adopted nearly 100 measures on AI.
