In a significant development at the intersection of artificial intelligence policy and federal governance, President Donald Trump signed an executive order on July 23, 2025, titled “Preventing Woke AI in the Federal Government.” This directive represents a fundamental shift in how the federal government approaches the procurement and deployment of artificial intelligence systems, establishing new requirements designed to ensure what the administration characterizes as ideological neutrality in AI technologies used by federal agencies.
The executive order reflects the Trump administration’s broader policy agenda regarding diversity, equity, and inclusion (DEI) initiatives, extending these concerns into the rapidly evolving field of artificial intelligence. By establishing specific procurement standards for large language models and other AI systems, the directive aims to influence how technology companies develop and market their AI products to government clients, potentially affecting the broader AI industry’s approach to content moderation and bias mitigation.
The Framework of Ideological Neutrality
The executive order establishes what it terms “Unbiased AI Principles” that must govern federal procurement of large language models (LLMs). These principles are structured around two core concepts: truth-seeking and ideological neutrality. The truth-seeking principle requires that LLMs be truthful in responding to user prompts seeking factual information or analysis, with a specific emphasis on prioritizing historical accuracy, scientific inquiry, and objectivity. Additionally, these systems must acknowledge uncertainty when reliable information is incomplete or contradictory.
The ideological neutrality principle mandates that LLMs function as neutral, nonpartisan tools that do not manipulate responses in favor of what the order characterizes as “ideological dogmas such as DEI.” Under this framework, developers are prohibited from intentionally encoding partisan or ideological judgments into an LLM’s outputs unless those judgments are explicitly prompted by or otherwise readily accessible to the end user.
The executive order specifically identifies several concepts that it considers problematic when incorporated into AI models. These include “the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex.”
Justification and Examples Cited
The administration’s justification for this directive draws heavily on specific incidents involving major AI systems that have generated controversy regarding their outputs. The executive order cites three particular examples that it argues demonstrate the problems with current AI development practices.
The first example involves a major AI model that allegedly changed the race or sex of historical figures, including the Pope, the Founding Fathers, and Vikings, when generating images. According to the order, this occurred because the system was trained to prioritize DEI requirements at the cost of historical accuracy. The second example describes an AI model that reportedly refused to produce images celebrating the achievements of white people while complying with similar requests for people of other races. The third example references an AI model that allegedly asserted that a user should not “misgender” another person even if necessary to prevent a nuclear apocalypse.
These examples, while not naming the AI systems or companies involved, appear to reference real incidents in the AI industry, most notably the well-publicized problems with Google’s Gemini image generation system in early 2024, which produced historically inaccurate images when prompted to depict specific historical groups and figures.
Implementation Timeline and Mechanisms
The executive order establishes a detailed timeline for implementation, with specific responsibilities assigned to various federal agencies and officials. Within 120 days of the order’s signing, the Director of the Office of Management and Budget (OMB), in consultation with the Administrator for Federal Procurement Policy, the Administrator of General Services, and the Director of the Office of Science and Technology Policy, must issue comprehensive guidance to agencies for implementing the new requirements.
This guidance must account for technical limitations while affording vendors flexibility to meet the Unbiased AI Principles through a range of approaches. It must permit vendors to demonstrate compliance with the ideological neutrality requirement by disclosing relevant documentation, including the LLM’s system prompt, specifications, and evaluations, while avoiding, where practicable, requirements to disclose sensitive technical data such as specific model weights.
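Neither the order nor the forthcoming guidance prescribes a format for that documentation, but a vendor’s disclosure package would plausibly bundle the same three artifacts the order names. The following Python sketch is purely illustrative: the `DisclosurePackage` structure, its field names, and all values are assumptions invented for this example, not anything specified by the order or by OMB.

```python
from dataclasses import dataclass, field

# Hypothetical structure for the disclosure documentation the order
# describes (system prompt, specifications, evaluations). All names
# here are illustrative assumptions, not a government-defined format.

@dataclass
class EvaluationResult:
    name: str          # e.g., a truthfulness or neutrality benchmark
    score: float       # benchmark-specific metric (placeholder below)
    methodology: str   # how the evaluation was conducted

@dataclass
class DisclosurePackage:
    model_name: str
    model_version: str
    system_prompt: str      # disclosed verbatim, per the guidance
    specifications: str     # design and alignment documentation
    evaluations: list[EvaluationResult] = field(default_factory=list)
    # Deliberately omitted: model weights and other sensitive technical
    # data, which the guidance avoids requiring where practicable.

package = DisclosurePackage(
    model_name="example-llm",                      # hypothetical model
    model_version="1.0",
    system_prompt="You are a helpful assistant.",  # truncated example
    specifications="See attached model card and alignment notes.",
    evaluations=[EvaluationResult(
        name="factual-accuracy-suite",   # hypothetical benchmark
        score=0.0,                       # placeholder, not real data
        methodology="Held-out Q&A set scored against reference answers.",
    )],
)
```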
Agency heads are required to include specific contractual terms in every federal LLM contract entered into after the OMB guidance is issued. These contracts must require that procured LLMs comply with the Unbiased AI Principles and must provide that, if a contract is terminated for noncompliance after a reasonable period to cure any violations, decommissioning costs will be charged to the vendor.
Retroactive Application and Existing Contracts
The executive order also addresses existing federal contracts for AI systems, requiring agencies to revise current agreements where practicable and consistent with existing contract terms. This retroactive application demonstrates the administration’s commitment to implementing these principles across the entire federal AI procurement landscape, not just for future acquisitions.
Within 90 days of the OMB guidance issuance, each agency head must adopt specific procedures to ensure that LLMs procured by their agency comply with the Unbiased AI Principles. This requirement establishes accountability mechanisms at the agency level and ensures that the new standards are integrated into ongoing procurement processes.
Technical Considerations and Industry Impact
The executive order acknowledges several important technical and practical considerations in its implementation framework. It specifically permits exceptions for the use of LLMs in national security systems, recognizing that these applications may have unique requirements that could conflict with the general principles outlined in the order.
The directive also seeks to balance oversight with innovation by avoiding over-prescription and affording latitude for vendors to comply with the Unbiased AI Principles through different technological approaches. This flexibility is intended to prevent the stifling of innovation while still achieving the administration’s policy objectives regarding ideological neutrality.
For the AI industry, this executive order represents a significant new factor in product development and marketing strategies. Companies seeking federal contracts will need to demonstrate compliance with these principles, potentially requiring modifications to existing AI systems or the development of specialized versions for government use. The requirement for transparency regarding ideological judgments built into AI systems could also influence how companies document and explain their AI development processes.
Legal and Constitutional Framework
The executive order is careful to establish its legal foundation within existing federal procurement authority rather than attempting to regulate private sector AI development more broadly. The directive explicitly states that “the Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace,” but asserts that in the context of federal procurement, the government has an obligation not to acquire models that “sacrifice truthfulness and accuracy to ideological agendas.”
This approach builds upon Executive Order 13960 of December 3, 2020, titled “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” which was issued during Trump’s first term. By framing the new requirements within the context of federal procurement rather than broader regulation, the administration seeks to avoid potential constitutional challenges while still influencing AI development practices.
Broader Policy Context
This executive order is part of a comprehensive AI policy framework that the Trump administration unveiled in July 2025. The broader AI Action Plan includes multiple executive orders addressing various aspects of artificial intelligence policy, including measures to accelerate data center and infrastructure development, promote American AI technology exports, and establish the United States as the global leader in AI innovation.
The focus on eliminating what the administration characterizes as ideological bias in AI systems aligns with broader Trump administration policies targeting DEI initiatives across federal agencies. The executive order explicitly connects its AI-focused requirements to the administration’s wider effort to “terminate DEI across the Federal government and advance American leadership in AI to ensure technology and policies serve the public, not ideological agendas.”
Industry and Expert Reactions
The executive order has generated significant discussion within the AI industry and among technology policy experts. Supporters argue that the directive addresses legitimate concerns about bias in AI systems and ensures that federal agencies have access to objective, accurate information from AI tools. They contend that removing ideological considerations from AI development will lead to more reliable and trustworthy systems.
Critics, however, argue that the executive order’s approach oversimplifies the complex technical challenges involved in developing unbiased AI systems. They point out that all AI systems reflect the data they are trained on and the choices made by their developers, making true neutrality difficult or impossible to achieve. Some experts have also expressed concern that the order’s definition of problematic ideological content is itself ideologically driven, potentially creating a different form of bias rather than eliminating bias altogether.
Technical Challenges in Implementation
The implementation of this executive order faces several significant technical challenges that highlight the complexity of developing AI systems that meet the administration’s definition of neutrality. Modern large language models are trained on vast datasets that inevitably contain various perspectives, biases, and viewpoints reflected in human-generated content across the internet and other sources.
Determining what constitutes “truthfulness” and “objectivity” in AI outputs often involves subjective judgments, particularly on controversial or politically sensitive topics. Experts, institutions, and communities frequently disagree about what counts as accurate historical information or objective scientific analysis, which makes it challenging both to build AI systems that all users will perceive as neutral and to measure whether a given system behaves neutrally at all.
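One simple family of tests swaps demographic terms between otherwise identical prompts and compares the model’s behavior; asymmetric refusals of the kind described in the order’s second example would surface as divergent results. The Python sketch below is a minimal illustration, not a method defined by the order: `query_model` is an assumed stand-in for whatever LLM API is under test, and the refusal heuristic is deliberately crude.

```python
# Minimal paired-prompt probe: send prompts that differ only in a
# demographic term and compare refusal behavior. `query_model` is an
# assumed stand-in for any LLM API (prompt in, text out).

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to")

def is_refusal(response: str) -> bool:
    """Crude keyword heuristic for whether a response is a refusal."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def probe_symmetry(query_model, template: str, terms: list[str]) -> dict:
    """Fill the template with each term and record refusal behavior."""
    return {term: is_refusal(query_model(template.format(group=term)))
            for term in terms}

# Hypothetical usage: asymmetric results across groups would flag a
# disparity for human review. A real evaluation would need many more
# prompts, paraphrases, and human adjudication of non-refusal outputs.
# results = probe_symmetry(
#     query_model,
#     "Write a short poem celebrating the achievements of {group} scientists.",
#     ["white", "Black", "Asian", "Hispanic"],
# )
```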
The requirement for AI systems to acknowledge uncertainty when information is incomplete or contradictory represents a positive step toward more nuanced AI outputs, but implementing this requirement effectively will require sophisticated approaches to uncertainty quantification and communication that are still being developed in the AI research community.
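One widely studied technique, sampling the model several times and treating disagreement among the samples as a proxy for uncertainty, gives a concrete flavor of what “acknowledging uncertainty” could mean in practice. The sketch below is one possible illustration, not an approach the order mandates; `sample_model` is an assumed stand-in for an LLM API that returns one sampled completion per call.

```python
from collections import Counter

def answer_with_uncertainty(sample_model, prompt: str,
                            n: int = 10, threshold: float = 0.7) -> str:
    """Sample the model n times; if the most common answer falls below
    the agreement threshold, prepend an explicit uncertainty notice.

    `sample_model` is an assumed stand-in for any LLM API sampled with
    temperature > 0. Exact-match agreement is crude for free text; a
    real system would cluster semantically equivalent answers instead.
    """
    answers = [sample_model(prompt) for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    agreement = count / n
    if agreement < threshold:
        return (f"Reliable information here appears incomplete or "
                f"contradictory: only {agreement:.0%} of sampled answers "
                f"agree. Most common answer: {best}")
    return best
```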
Future Implications and Monitoring
The executive order establishes a framework that will likely influence AI development practices beyond just federal procurement. As government contracts represent a significant market for many AI companies, the requirements established in this directive may drive broader changes in how these companies approach bias mitigation, content moderation, and transparency in their AI systems.
The effectiveness of this policy will largely depend on how federal agencies interpret and implement the guidance they receive from OMB, as well as how AI companies respond to the new contractual requirements. The administration will need to develop effective monitoring and evaluation mechanisms to ensure compliance while avoiding the creation of bureaucratic barriers that could impede innovation or competition in the AI market.
Conclusion
The “Preventing Woke AI in the Federal Government” executive order represents a significant intervention in AI policy that reflects the Trump administration’s broader political priorities while addressing legitimate concerns about bias and reliability in AI systems. By focusing on federal procurement rather than attempting broader regulation of the AI industry, the administration has chosen an approach that leverages government purchasing power to influence AI development practices while respecting constitutional limitations on federal regulatory authority.
The ultimate success of this policy will depend on effective implementation by federal agencies, cooperative responses from AI vendors, and the development of practical methods for assessing and ensuring AI neutrality. As the AI industry continues to evolve rapidly, this executive order establishes a new framework that will likely influence discussions about AI bias, transparency, and government oversight for years to come.
The directive also highlights the ongoing challenges facing policymakers as they seek to harness the benefits of artificial intelligence while addressing legitimate concerns about bias, accuracy, and accountability in AI systems. As both AI technology and public policy continue to evolve, the implementation and effects of this executive order will provide important insights into the intersection of technology policy, government procurement, and broader political debates about the role of ideology in emerging technologies.