The Shifting Landscape of Federal AI Policy
In October 2023, Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence introduced a sweeping framework for federal AI governance. It required safety testing, bias assessments, transparency measures, and new reporting obligations across agencies.
Then, on January 20, 2025, a day-one executive order rescinded EO 14110. Three days later, Executive Order 14179, titled Removing Barriers to American Leadership in Artificial Intelligence, set the new direction, signaling a shift in priorities: from prescriptive compliance requirements toward accelerating American competitiveness in AI development.
For federal technology leaders, the policy reversal raises an important question: if the specific requirements of EO 14110 no longer apply, does responsible AI governance still matter?
The answer is unequivocally yes. And the agencies that built their governance around principles rather than specific compliance checklists are in the strongest position today.
What Changed and What Remains
The revocation of EO 14110 removed specific federal mandates around AI safety testing, bias reporting, and watermarking requirements. However, several foundational obligations remain in place.
Existing Law Still Applies
The Privacy Act, the Administrative Procedure Act, civil rights statutes, and agency-specific authorities all govern how AI systems can be used in federal decision-making. These laws predate any executive order and remain fully in effect.
OMB Memorandum M-24-10 Status
OMB issued M-24-10 in March 2024, establishing requirements for agencies to designate Chief AI Officers and implement AI governance structures. While the enforcement posture may evolve under the current administration, agencies that have already stood up governance bodies would be unwise to dismantle them.
NIST AI Risk Management Framework
The NIST AI RMF (AI 100-1) was developed independently of EO 14110 and remains the most widely adopted voluntary framework for AI governance. Its four functions (Govern, Map, Measure, and Manage) provide a durable, policy-agnostic structure.
Why Governance Still Matters
Regardless of which executive orders are in effect, federal AI systems carry unique responsibilities.
When an algorithm influences benefits eligibility, threat assessments, or resource allocation, the consequences of errors are borne by citizens who had no say in the system's design. This reality does not change with administrations.
Operational Risk
AI systems that lack governance create operational risk: models that drift without monitoring, training data that becomes stale, and outputs that no one can explain when challenged. These risks affect mission outcomes whether or not a compliance mandate exists.
Legal Exposure
Agencies face litigation risk when AI systems produce discriminatory outcomes. The absence of a specific AI executive order does not provide a legal shield; it simply removes one layer of proactive guidance.
Public Trust
Citizens and oversight bodies expect transparency in government decision-making. Deploying opaque AI systems, even when legally permissible, erodes the public trust that agencies depend on.
A Durable AI Governance Framework
The best governance frameworks are built to outlast any single policy directive. Here is a practical approach.
1. Establish an AI Governance Board
Maintain a cross-functional body that includes technical leaders, legal counsel, privacy officers, and mission stakeholders. This board should own the AI risk framework and have authority to approve or halt deployments.
2. Adopt the NIST AI RMF
The NIST AI Risk Management Framework provides a voluntary, flexible structure. It is not tied to any executive order and is recognized across both government and industry.
3. Implement Model Cards and Data Sheets
Every AI model should have a model card documenting its intended use, training data, performance metrics, known limitations, and ethical considerations. Similarly, every training dataset should have a data sheet describing its provenance, collection methodology, and known biases.
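To make this concrete, a model card can live next to the model as structured data rather than as a standalone document. Below is a minimal sketch in Python; the schema, the model name, and every sample value are hypothetical illustrations, not a standard:

```python
# A minimal, illustrative model card as structured data. Field names are
# our own shorthand, not an official schema; adapt to your agency's needs.
from dataclasses import dataclass


@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str                      # the mission decision the model supports
    out_of_scope_uses: list[str]           # uses the model was NOT validated for
    training_data: str                     # pointer to the data sheet, not the data
    performance_metrics: dict[str, float]  # e.g., accuracy, false-positive rate
    known_limitations: list[str]
    ethical_considerations: list[str]
    last_reviewed: str                     # ISO date of last governance review


# Hypothetical example; all values are invented for illustration.
card = ModelCard(
    model_name="benefits-triage-classifier",
    version="2.3.1",
    intended_use="Prioritize benefits applications for human review",
    out_of_scope_uses=["Final eligibility determinations"],
    training_data="datasheets/benefits-claims-2019-2023.md",
    performance_metrics={"accuracy": 0.91, "false_positive_rate": 0.04},
    known_limitations=["Underperforms on applications filed on paper"],
    ethical_considerations=["Disparate error rates audited quarterly"],
    last_reviewed="2025-01-15",
)
```

Keeping the card in a machine-readable form also lets the deployment pipeline verify it exists and is current, a point we return to below.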
4. Build Continuous Monitoring Infrastructure
Models degrade, data distributions shift, and the populations they serve change. Automated monitoring systems that track model performance, fairness metrics, and data quality in real time are an engineering best practice, not just a compliance requirement.
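As one example of what that monitoring can check, the sketch below computes the Population Stability Index (PSI), a common statistic for detecting distribution shift between a baseline and the current scoring run. The synthetic data and the 0.25 alert threshold are illustrative assumptions, not prescribed values:

```python
# A minimal drift check over model score distributions, assuming scores
# are available as numpy arrays. Thresholds are rules of thumb, not policy.
import numpy as np


def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               bins: int = 10) -> float:
    """Compare the current score distribution against a baseline."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside baseline range
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip empty bins to avoid division by zero in the log term.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))


# Synthetic data for demonstration: today's scores have shifted upward.
rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.5, 0.1, 10_000)  # distribution at deployment
todays_scores = rng.normal(0.6, 0.1, 1_000)     # today's scoring run

psi = population_stability_index(baseline_scores, todays_scores)
if psi > 0.25:  # common rule-of-thumb threshold for significant shift
    print(f"ALERT: PSI={psi:.2f}; input distribution has drifted")
```

In practice a check like this runs on a schedule, and a breach opens a ticket or pages the team that owns the model rather than just printing a message.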
5. Design for Human Oversight
Especially for high-impact decisions, system architectures should include clear intervention points where human reviewers can examine, override, or escalate AI recommendations.
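A minimal sketch of one such intervention point, assuming recommendations carry a confidence score and an impact flag; the 0.90 threshold and the queue interface are illustrative, not a standard:

```python
# A sketch of a human-in-the-loop routing gate. The threshold and the
# review-queue interface are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Recommendation:
    case_id: str
    action: str
    confidence: float
    high_impact: bool  # e.g., affects benefits eligibility


def route(rec: Recommendation, review_queue: list[Recommendation],
          threshold: float = 0.90) -> str:
    """Auto-apply only low-impact, high-confidence recommendations;
    everything else is escalated to a human reviewer who can approve,
    override, or send the case back."""
    if rec.high_impact or rec.confidence < threshold:
        review_queue.append(rec)
        return "escalated"
    return "auto-applied"


queue: list[Recommendation] = []
print(route(Recommendation("A-1042", "approve", 0.97, high_impact=True), queue))
# -> "escalated": high-impact decisions always get a human in the loop
```

The design choice worth noting: high-impact recommendations are escalated unconditionally, so no confidence score can route a consequential decision around the human reviewer.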
Common Pitfalls to Avoid
In our work with federal agencies, we see several recurring mistakes in AI governance implementations.
First, treating governance as a documentation exercise. Producing voluminous policy documents that no one reads or follows does not reduce risk. Governance must be embedded in technical workflows and enforced through automated tooling wherever possible.
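For example, the model cards from step 3 above can be enforced as a pre-deployment gate rather than a filing requirement. The sketch below fails a CI job when a card is missing, incomplete, or overdue for review; the file layout, field names, and 180-day review window are assumptions for illustration:

```python
# A sketch of governance enforced in the deployment pipeline rather than
# in a policy binder. Paths, fields, and the review window are assumed.
import json
import sys
from datetime import date, timedelta
from pathlib import Path

MAX_REVIEW_AGE = timedelta(days=180)  # illustrative review cadence


def check_model_card(model_dir: Path) -> list[str]:
    problems = []
    card_path = model_dir / "model_card.json"
    if not card_path.exists():
        return [f"{model_dir}: no model_card.json; deployment blocked"]
    card = json.loads(card_path.read_text())
    for field in ("intended_use", "known_limitations", "last_reviewed"):
        if not card.get(field):
            problems.append(f"{model_dir}: model card missing '{field}'")
    if card.get("last_reviewed"):
        reviewed = date.fromisoformat(card["last_reviewed"])
        if date.today() - reviewed > MAX_REVIEW_AGE:
            problems.append(f"{model_dir}: last review {reviewed} is stale")
    return problems


if __name__ == "__main__":
    issues = check_model_card(Path(sys.argv[1]))
    for issue in issues:
        print(issue)
    sys.exit(1 if issues else 0)  # nonzero exit fails the CI job
```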
Second, underestimating the data challenge. Most agencies have significant data quality issues that must be resolved before responsible AI deployment is feasible. Investing in data governance is a prerequisite, not an afterthought.
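Even a simple automated profile helps surface those issues early. The sketch below checks a pandas DataFrame for missingness and duplicate rows; the 5% threshold and the toy data are illustrative:

```python
# A minimal data-quality gate, assuming tabular training data in pandas.
# Thresholds are illustrative; real limits belong in the data sheet.
import pandas as pd


def quality_report(df: pd.DataFrame, max_missing: float = 0.05) -> list[str]:
    findings = []
    for column, rate in df.isna().mean().items():
        if rate > max_missing:
            findings.append(f"{column}: {rate:.1%} missing exceeds {max_missing:.0%}")
    dup_rate = df.duplicated().mean()
    if dup_rate > 0:
        findings.append(f"{dup_rate:.1%} duplicate rows")
    return findings


# Toy data for demonstration: one missing value and one duplicate row.
df = pd.DataFrame({"age": [34, None, 51, 51], "zip": ["20001"] * 4})
for finding in quality_report(df):
    print(finding)
```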
Third, abandoning governance because the mandate changed. The worst outcome of EO 14110's revocation would be agencies interpreting it as permission to skip governance entirely. The risks that motivated governance have not gone away.
Looking Ahead
AI policy will continue to evolve. New executive orders, legislation, and agency directives will reshape the compliance landscape. Agencies that build flexible, principle-based governance structures today will be better positioned to adapt than those that react to each policy shift.
The goal is not to slow down AI adoption in government. It is to accelerate it responsibly, ensuring that the systems we build are worthy of the public trust they require, regardless of which executive order is in effect.
EaseOrigin Team
The EaseOrigin editorial team shares insights on federal IT modernization, cloud strategy, cybersecurity, and program delivery drawn from real-world project experience.