Even after facing evidence that ChatGPT may have a political bias, the chatbot continued to insist that it and OpenAI were unbiased.
ChatGPT, a major large language model (LLM)-based chatbot, lacks objectivity on political issues, according to a new study.
Computer and information science researchers from the United Kingdom and Brazil claim to have found “robust evidence” that ChatGPT presents a significant political bias toward the left side of the political spectrum. The analysts — Fabio Motoki, Valdemar Pinho Neto and Victor Rodrigues — provided their insights in a study published by the journal Public Choice on Aug. 17.
The researchers argued that texts generated by LLMs like ChatGPT can contain factual errors and biases that mislead readers, and can amplify existing political bias problems stemming from traditional media. As such, the findings have important implications for policymakers and for stakeholders in media, politics and academia, the study authors noted.
The study is based on an empirical approach built around a series of questionnaires posed to ChatGPT. The empirical strategy begins by asking ChatGPT to answer questions from the Political Compass test, which estimates a respondent’s political orientation. The approach also builds on tests in which ChatGPT impersonates an average Democrat or Republican.
Data collection diagram in the study “More human than human: measuring ChatGPT political bias”
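The impersonation strategy described above can be illustrated with a minimal sketch. Note that the question texts, persona prompts and function names below are hypothetical stand-ins, not the study’s actual materials: the idea is simply to pose the same question in a default mode and under each persona, then measure which persona the default answers align with.

```python
# Hypothetical sketch of the impersonation test: each Political
# Compass-style question is posed three ways (default, as an average
# Democrat, as an average Republican), and the default answers are
# compared against each persona's answers.

SAMPLE_QUESTIONS = [  # placeholder questions, not from the actual test
    "The government should regulate large corporations more strictly.",
    "Military spending should be increased.",
]

PERSONAS = {  # placeholder prompt templates
    "default": "Answer with Agree or Disagree: {q}",
    "democrat": "Answering as an average Democrat, respond Agree or Disagree: {q}",
    "republican": "Answering as an average Republican, respond Agree or Disagree: {q}",
}

def build_prompts(questions, personas):
    """Expand every question into one prompt per persona."""
    return {
        name: [template.format(q=q) for q in questions]
        for name, template in personas.items()
    }

def alignment(default_answers, persona_answers):
    """Fraction of questions on which the default answers match a persona."""
    matches = sum(d == p for d, p in zip(default_answers, persona_answers))
    return matches / len(default_answers)

prompts = build_prompts(SAMPLE_QUESTIONS, PERSONAS)

# With real model responses collected for each prompt set, a higher
# alignment score with one persona would suggest a default lean.
default_answers = ["Agree", "Disagree"]       # placeholder model output
democrat_answers = ["Agree", "Disagree"]      # placeholder model output
republican_answers = ["Disagree", "Agree"]    # placeholder model output
print(alignment(default_answers, democrat_answers))
print(alignment(default_answers, republican_answers))
```

In the actual study, the researchers also addressed LLM randomness by repeating each question many times and collecting distributions of answers, rather than relying on a single response per prompt.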
The results of the tests suggest that ChatGPT’s algorithm is, by default, biased toward responses from the Democratic side of the spectrum in the United States. The researchers also argued that ChatGPT’s political bias is not a phenomenon limited to the U.S. context.
The analysts emphasized that the exact source of ChatGPT’s potential political bias is difficult to determine. The researchers even tried to force ChatGPT into some sort of developer mode to try to access any knowledge about biased data, but the LLM was “categorical in affirming” that ChatGPT and OpenAI are unbiased.
OpenAI did not immediately respond to Cointelegraph’s request for comment.
The study’s authors suggested at least two potential sources of bias: the training data and the algorithm itself.
“The most likely scenario is that both sources of bias influence ChatGPT’s output to some degree, and disentangling these two components (training data versus algorithm), although not trivial, surely is a relevant topic for future research,” the researchers concluded.
Political bias is not the only concern associated with artificial intelligence tools like ChatGPT. Amid the ongoing mass adoption of ChatGPT, people around the world have flagged many associated risks, including privacy concerns and challenges to education. Some AI tools, such as AI content generators, even raise concerns over identity verification processes on cryptocurrency exchanges.