Trump unveils AI plan that aims to clamp down on regulations and 'bias'

Former President Donald Trump has introduced a new artificial intelligence initiative that places a strong emphasis on limiting federal regulations and addressing what he describes as political bias within AI systems. As the use of artificial intelligence rapidly expands across sectors including healthcare, national security, and consumer technology, Trump’s approach signals a departure from broader bipartisan and international efforts to apply tighter oversight of the evolving technology.

Trump’s latest proposal, part of his broader 2024 campaign strategy, presents AI as both an opportunity for American innovation and a potential threat to free speech. Central to his plan is the idea that government involvement in AI development should be minimal, focusing instead on reducing regulations that, in his view, may hinder innovation or enable ideological control by federal agencies or powerful tech companies.

While other political leaders and regulatory bodies around the world are developing frameworks aimed at ensuring the safety, transparency, and ethical use of artificial intelligence (AI), Trump is presenting his strategy as a corrective measure against what he considers growing political interference in the development and use of these technologies.

At the core of Trump’s AI strategy is a sweeping call to reduce what he considers bureaucratic overreach. He proposes that federal agencies be restricted from using AI in ways that could influence public opinion, political discourse, or policy enforcement in partisan directions. He argues that AI systems, particularly those used in areas like content moderation and surveillance, can be manipulated to suppress viewpoints, especially those associated with conservative voices.

Trump’s proposal suggests that any use of AI by the federal government should undergo scrutiny to ensure neutrality and that no system is permitted to make decisions with potential political implications without direct human oversight. This perspective aligns with his long-standing criticisms of federal agencies and large tech firms, which he has frequently accused of favoring left-leaning ideologies.

His strategy also involves establishing a team to oversee the deployment of AI in government operations and recommend measures to avoid what he describes as “algorithmic censorship.” The plan suggests that systems used to identify false information, hate speech, or unsuitable material could be misused against individuals or groups, and should therefore be subject to strict safeguards aimed at preserving impartiality rather than restricting their use.

Trump’s artificial intelligence platform also focuses on alleged biases embedded in algorithms. He argues that many AI systems, especially those built by large technology companies, carry built-in political leanings shaped by the data they are trained on and the objectives of the organizations that develop them.

Although experts in the AI field recognize the dangers of bias in large language models and recommendation algorithms, Trump’s perspective emphasizes the possibility that these biases might be introduced deliberately rather than accidentally. He proposes auditing and disclosure measures for these systems, advocating transparency about their training processes, the data they use, and how political or ideological factors may shape their outputs.

His proposal does not outline specific technical methods for identifying or reducing bias; however, he calls for the creation of an independent body to evaluate AI tools used in sectors such as law enforcement, immigration, and digital communication. He emphasizes that the aim is to ensure these tools remain “unaffected by political influence.”

Beyond concerns over bias and regulation, Trump’s plan seeks to secure American dominance in the AI race. He criticizes current strategies that, in his view, burden developers with “excessive red tape” while foreign rivals—particularly China—accelerate their advancements in AI technologies with state support.

In response, he proposes offering tax incentives and loosening regulations for businesses developing AI in the United States. He also advocates increased funding for partnerships between the public sector and private companies. These measures aim to strengthen innovation at home and reduce dependence on foreign technology networks.

On national security, Trump’s plan is less detailed, but he does acknowledge the dual-use nature of AI technologies. He advocates for tighter controls on the export of critical AI tools and intellectual property, particularly to nations deemed strategic competitors. However, he stops short of outlining how such restrictions would be implemented without stifling global research collaborations or trade.

Interestingly, Trump’s AI strategy hardly addresses data privacy, a subject that has become crucial in numerous other plans both inside and outside the U.S. Although he recognizes the need to safeguard Americans’ private data, the focus is mainly on controlling what he considers ideological manipulation, rather than on the wider effects of AI-driven surveillance or improper handling of data.

This absence has drawn criticism from privacy advocates, who argue that AI systems—particularly those used in advertising, law enforcement, and public services—can pose serious risks if deployed without adequate data protections in place. Trump’s critics say his plan prioritizes political grievances over holistic governance of a transformative technology.

Trump’s approach to AI policy differs sharply from new legislative efforts in Europe. The EU is advancing the AI Act, which classifies AI systems by risk level and imposes strict compliance requirements on high-impact applications. In the United States, there are bipartisan efforts to craft regulations that promote transparency, restrict biased outcomes, and curb dangerous autonomous decision-making, especially in areas such as hiring and the criminal justice system.

By advocating a hands-off approach, Trump is betting on a deregulatory strategy that appeals to developers, entrepreneurs, and those skeptical of government intervention. However, experts warn that without safeguards, AI systems could exacerbate inequalities, propagate misinformation, and undermine democratic institutions.

The timing of Trump’s AI proposal appears closely tied to his 2024 election campaign. His message—framed around freedom of speech, fairness in technology, and protection against ideological control—resonates with his political base. By positioning AI as a battleground for American values, Trump seeks to differentiate his platform from other candidates who support tighter oversight or more cautious adoption of emerging tech.

The proposal also reinforces Trump’s broader narrative of battling what he characterizes as an entrenched political and tech establishment. In this framing, AI becomes not only a technological matter but also a cultural and ideological one.

The success of Trump’s AI proposal largely hinges on the outcome of the 2024 election and the composition of Congress. Even if some elements are adopted, the plan will likely face resistance from civil liberties organizations, privacy advocates, and technology experts who warn against leaving AI unchecked.

As artificial intelligence advances and transforms various sectors, countries around the world are weighing how best to balance innovation with accountability. Trump’s plan embodies a clear, if contentious, vision: one centered on deregulation, skepticism toward institutional oversight, and deep concern about perceived political interference through digital technologies.

What remains uncertain is whether such an approach can provide both the freedom and the safeguards needed to guide AI development in a direction that benefits society at large.

By Ethan Brown Pheels