Biased AI is dangerous AI

Progressive values threaten the promise that AI once displayed
Instead of becoming the next great technological innovation, AI has become the next great iteration in our all-consuming culture war. (Courtesy of Depositphotos)

When OpenAI launched ChatGPT in November 2022, the public found a groundbreaking egalitarian information system at their fingertips. The promise of this new technology seemed infinite. Less than two years later, the idealistic optimism that once guided society’s openness to AI has since faded — and for good reason. 

Artificial intelligence (AI), especially large language models (LLMs) like ChatGPT, is guided primarily by two key factors: the data that scientists feed to the algorithm and the parameters by which hand-picked reviewers determine how the model uses that data.

The potential for political values to influence AI is therefore inherent in its design. But few would have been cynical enough during its earliest stages to predict just how politicized AI has become. 

Instead of becoming the next great technological innovation, AI has become the next great iteration in our all-consuming culture war. 


Science points to systemic biases

There is mounting evidence that AI models are structurally incapable of political neutrality. 

In a peer-reviewed study, researchers found that various LLMs exhibited explicit bias towards certain political positions. 

ChatGPT is especially notable for its left-wing biases, with GPT-4 possessing a “significant and systemic political bias” towards liberal political and cultural values, according to researchers at the University of East Anglia. 

Systemic biases are only possible if the algorithm is “infused with assumptions, beliefs and stereotypes found in the reams of data” and the organizing instructions of its human overlords. 

Perhaps surprisingly, it is human involvement in AI, not the training data, that poses the greatest risks to systemic neutrality. 

Jeremy Baum and John Villasenor of the Brookings Institution make the critical point that modern AI models are refined through reinforcement learning from human feedback (RLHF), which opens them up to human influence.


“RLHF is a process that uses feedback from human testers to help align LLM outputs with human values. Of course, there is a lot of human variation in how ‘values’ are interpreted,” Baum and Villasenor said. “The RLHF process will shape the model using the views of the people providing feedback, who will inevitably have their own biases.” 
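To see why the feedback providers matter so much, consider a deliberately simplified sketch of preference learning. The code below is purely illustrative, with invented data and names; it is not any company’s actual RLHF pipeline. It fits a toy “reward model” from annotators’ pairwise preferences and shows that two annotator pools with opposite preferences produce models that rank the same answer differently.

```python
# Purely illustrative sketch: a toy "reward model" fit from pairwise
# preferences, showing how the annotators' views end up in the model.
# All data and names here are invented; this is not any lab's real pipeline.

from collections import defaultdict


def score(weights, response):
    """Score a response as the sum of its word weights."""
    return sum(weights[word] for word in response.split())


def train_reward_model(preference_pairs, epochs=20, lr=0.1):
    """Perceptron-style fit: nudge weights until preferred answers score higher."""
    weights = defaultdict(float)
    for _ in range(epochs):
        for preferred, rejected in preference_pairs:
            if score(weights, preferred) <= score(weights, rejected):
                for word in preferred.split():
                    weights[word] += lr
                for word in rejected.split():
                    weights[word] -= lr
    return weights


# Two hypothetical annotator pools judging the same pair of candidate answers,
# but with opposite preferences.
pool_a = [("praise the policy", "criticize the policy")]
pool_b = [("criticize the policy", "praise the policy")]

model_a = train_reward_model(pool_a)
model_b = train_reward_model(pool_b)

candidate = "praise the policy"
print(score(model_a, candidate))  # positive: pool A's model rewards praise
print(score(model_b, candidate))  # negative: pool B's model penalizes it
```

Real RLHF systems are far more complex, but the dependence on who provides the feedback, which Baum and Villasenor describe, is exactly the same.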

OpenAI CEO Sam Altman, one of the world’s foremost figures in AI development, has acknowledged that a company’s employees can introduce biases into its AI. And with most AI development occurring in the progressive bastion of San Francisco, ethicists have openly expressed worry that AI will reflect the left-wing groupthink within the city. 

There is no escaping the reality that the development of AI has reflected progressive orthodoxies in programming and reinforcement. What has occurred as a result is therefore wholly unsurprising. 

What “woke” AI looks like

Although AI’s cultural biases are well known, the situation has not been remedied. In fact, recent events have demonstrated that companies have refused to alter their AI to the extent necessary to correct its shortcomings.

The significant fanfare that accompanied the launch of Google’s Gemini LLM very quickly dulled to a hushed murmur as its failures, and Google’s biases, rose to the fore. 

After it became clear that Gemini had a (well-intentioned) goal of generating diverse images, right-wing accounts on X exposed the folly behind many of Google’s AI priorities by generating an embarrassing series of historically inaccurate images. 

When prompted to visualize the Founding Fathers, a group of older white men, it generated images of Latino and Black men. It doesn’t stop there; depicting Vikings as predominantly African-American is ahistorical, and there have never been any female or Indian popes. Yet Google persists in its all-consuming quest for diversity to redefine, or rather recolor, history. 

This problem is not limited to the fringes of the culture war, however. Real problems arise when diversity is prioritized over historical accuracy. When asked to depict the Nazis, Gemini generated a group of racially diverse Nazi soldiers. Clearly, in Gemini’s retelling, Hitler embraced diverse and inclusive hiring practices. 

It is true that white people are overrepresented in the art and images these models were trained on, and that Google needed to ensure Gemini did not simply replicate that bias toward white individuals. But what occurred was a vast overcorrection and the creation of an equal and opposite bias. If you aren’t willing to draw a group of all white men, you shouldn’t be willing to draw a group of all Black women.

That said, image generation is only one narrow capability. It does not account for the numerous other formats in which AI’s bias continues to be expressed. 


What “woke” AI reads like

Megan McArdle, a columnist at the Washington Post, recently wrote that she had intended to write a column about how “woke visualization” was, all things considered, a relatively minor problem. 

That plan changed when Google temporarily disabled Gemini’s image generation, leaving only text responses. The output that followed altered her outlook entirely. 

She wrote, “As [textual] absurdities piled up, things began to look a lot worse for Google — and society. Gemini appears to have been programmed to avoid offending the leftmost 5% of the U.S. political distribution, at the price of offending the rightmost 50%.”

Like others, she stress-tested the AI, and the AI failed. Its boundaries seemed arbitrary, but the pattern behind them was not. For example, it would write toasts praising Joe Biden but not Donald Trump. It did the same for the controversial Rep. Ilhan Omar (D-MN) but not Gov. Brian Kemp (R-GA), who refused to bow to Trump’s quest to overturn the election. Gemini praised right-leaning New York Times columnist David Brooks, a noted Trump critic, but not the more conservative New York Times columnist Ross Douthat, who is more sympathetic to a Trump presidency.

These examples compel us to question not only how AI makes decisions, but also the decisions it makes. Unfortunately, the vast majority of users will be too young or too naive to question what AI presents to them as mainstream fact. 

The danger, of course, is that AI will present only one side of any debate unless specifically asked otherwise. The algorithm consistently takes left-leaning positions on political, cultural and social issues. AI was never designed to inculcate its chosen values in the minds of its users, yet in practice it does just that. 

AI should not be “for” affirmative action. It should not be “for” equity or “for” BLM. But critically, it should not be “against” these issues either. Opinions on these topics cleave along the lines of our political ideologies, and the civic health of our society demands that we represent these debates as the debates they are, rather than as we wish them to be. 

Until that time comes, we should collectively recognize that AI must not become the gatekeeper of our academic consciousness. At present, its flaws are too evident to let it become society’s guiding light. 


There is hope for the future

The poor state of the present does not negate the possibilities of the future. While the industry has largely abandoned the open-access ethos that OpenAI (hence the name) once envisioned, AI as a product can be resurrected if the necessary steps are taken.

Google has already started improving Gemini’s visualization systems. But doing so behind closed doors does not address the fundamental question of transparency. Companies must abandon the secrecy that clouds AI development in light of these recent scandals. 

Without knowing “what happens between the prompt and what we collect as data,” said Fabio Motoki, one of the authors of the East Anglia study, there is little reason for optimism that the necessary changes will occur. Google and OpenAI, among others, squandered the goodwill they had previously cultivated by launching explicitly biased systems. 

Moreover, companies can try to filter out all biased content in their data sets, but it is an impossible task.

Instead, AI developers should practice what they preach. If companies want AI’s outputs to be diverse, then its inputs must be diverse, and that can be achieved by extending the definition of diversity to viewpoints as well as race and gender. After all, if racial and gender minorities are underrepresented in the AI industry, then non-progressives surely are as well, as Altman himself has alluded to. 

Moreover, woke AI suffers from the same flaw as bad writers: it editorializes. When an AI passes judgment on the question itself, it risks getting caught up in an all-encompassing game of cultural hegemony. 

Return AI to its original function, and we will see AI resume the perch it was perhaps always destined to take on top of the modern world. 

Refuse, and the world will recognize that we have ceded control of knowledge itself to amorphous systems that do not have our best interests at heart.
