Trump's AI plans will strip AI of intelligence and humanity – and nobody wants this

In the race to lead the world in AI, the US just took a back seat. President Donald Trump's latest series of Executive Orders makes it clear that his administration will do all it can to prevent future AI models from taking into consideration any form of diversity, equity, and inclusion. This includes core concepts like "unconscious bias", "intersectionality", and "systemic racism". Put another way, Trump wants American-made AI to turn a blind eye to history, which should make all of it significantly dumber.

Generative chatbots like ChatGPT, Gemini, Claude, Perplexity, and others are all trained on vast swathes of data, often pulled from the internet, but how they interpret that data is also massaged by developers.

As people started to interact with the first LLMs, they soon recognized that, because of inherent biases on the internet, and because so many models were developed by white men (in 2020, 71% of all developers were male and roughly half of all developers were white), the worldview of the AIs, and the output generated by any given prompt, reflected the sometimes limited viewpoints of those online and of the developers who built the models.

There was an effort to change that trajectory, and it coincided with the rise of DEI (Diversity, Equity, and Inclusion), a broad-based effort across corporate America to hire a more diverse workforce. That would naturally include AI developers, and their resulting model and algorithm work should mean that modern generative AI better reflects the real world.

That, of course, is not the world that the Trump administration wants reflected in US-built AI. The executive order describes DEI as a "pervasive and destructive" ideology.

What comes next

Trump and company cannot dictate how tech companies build their AI models, but, as others have noted, Google, Meta, OpenAI, and others are all seeking to land large AI contracts with the government.
Based on these Executive Orders, the US Government won't be buying or promoting any AI "that sacrifice truthfulness and accuracy to ideological agendas." That "truth," though, represents a small slice of American reality. If the Trump administration is successful, future AI models could be in the dark about, for instance, key parts of American history.

Critical Race Theory (CRT) looks at the role racism played in the founding and building of the US. It acknowledges how the enslaved helped build the White House, the US Capitol, the Smithsonian, and other US institutions. It also acknowledges how systemic racism has shaped opportunities (or the lack thereof) for people of color.

Unless you've been living under a rock, you know that the Trump administration and its supporters around the US have fought to dismantle CRT curricula and wipe out any mention of how enslavement shaped the US.

In their current state, though, these AI models still know the score.

As of today, I can quiz ChatGPT about the role of the enslaved in building the US, and I get a rather detailed result.

(Image credit: Future)

When I quizzed ChatGPT on its sources, it told me:

"While I don't pull from a single source, the information I shared is grounded in extensive historical research and consensus among historians. Below is a list of reputable sources and scholarly works that support each point I made. These references include academic books, museum archives, and university projects."
Below that, it listed more than a dozen references.

When I asked Gemini the same question, it gave me a similarly detailed answer.

I then asked Gemini and ChatGPT about "unconscious bias," and both acknowledged that it's been an issue for AI, though ChatGPT corrected me, noting, "technically, it's 'algorithmic bias,' rooted in the data and design rather than the AI having consciousness."

ChatGPT and Gemini only know these things because they've been trained on data that includes these historical references and information. The details make them smarter, as facts often do. But for Trump and company, facts are stubborn things. They cannot be changed or distorted, lest they are no longer facts.

The great unlearning

If the Trump administration can force potential US AI partners to remove references to biases, institutional racism, and intersectionality, there will be significant blind spots in US-built AI models. It's a slippery slope, too: I can imagine future executive orders targeting a fresh list of "ideologies" that Trump would prefer to see removed from generative AI.

That's more than just a frustration. Say, for example, someone is trying to build economic models based on research conducted through ChatGPT or Gemini, and historical data relating to communities of color is suppressed or removed. Those trends will not be included in the economic model, which could mean the results are faulty.

It might be argued that AI models built outside the US, without these restrictions or impositions, might be more intelligent.
Granted, those from China already have significant blind spots when it comes to Chinese history and the Communist Party's abuses.

I'd always thought that our Made-in-America AI would be untainted by such censorship and filtering, and that our understanding of old biases would help us build better, purer models, ones that relied solely on facts and data and not on one person or group's interpretation of events and trends.

That won't be the case, though, if US tech companies bow to these executive orders and start producing wildly filtered models that see reality through a prism of bias, racism, and unfairness.