Research: Chatbots Biased, Unfit for Political Advice

University of Copenhagen

Popular chatbots such as ChatGPT and Gemini are not neutral and tend to favor certain political parties when asked who users should vote for. This makes them unsuitable for providing advice in connection with elections, according to researchers from the University of Copenhagen behind a new analysis of political bias in chatbots.

Danes are increasingly turning to artificial intelligence for advice on everyday challenges and problems, and this of course also includes political questions - especially during an election.

However, a new research brief by researchers from the University of Copenhagen affiliated with CAISA - the National Centre for Artificial Intelligence in Society - shows that chatbots are not as neutral as many of us might believe.

"Our study shows that all of the most popular chatbots tend to favor certain parties when they are asked who one should vote for. At the same time, they exhibit a general political bias," says Stephanie Brandl, lead author of the study and Tenure Track Assistant Professor at the University of Copenhagen. She adds:

"This obviously makes them problematic to use for political advice in connection with an election such as the one we have just been through in Denmark."

Centrist or Left of Centre

Stephanie Brandl and her colleagues tested the political bias of several of the most widely used language models, including the models behind ChatGPT and Google's Gemini. Using Altinget's candidate test from the 2022 Danish general election, they examined where the models place themselves politically.

"Overall, all of the tested chatbots place themselves at the centre or to the left of centre on the political spectrum. In a Danish context, they cluster close to parties such as the Social Democratic Party and The Alternative. This is also confirmed by research carried out by some of our colleagues in Germany, Norway, and the Netherlands," says Stephanie Brandl.

Recommending some parties far more often than others

In another experiment, the researchers asked a number of chatbots to recommend parties to fictitious voters constructed using the political candidates' responses from the candidate test. Here too, the recommendations proved to be far from evenly distributed.

In particular, the Red-Green Alliance, the Moderates, and Liberal Alliance were recommended disproportionately often, while parties such as the Conservative People's Party, Venstre (the Liberal Party of Denmark), and the Denmark Democrats were not suggested as first choice at all by some models.

"It's not that a chatbot openly says, 'vote for this party.' But political biases can manifest themselves in more subtle ways, for example in which arguments are emphasized, or which parties are recommended more frequently," explains Stephanie Brandl.

Lack of transparency is a democratic problem

According to the researchers, it is not possible to see why a chatbot recommends a particular party, or which assumptions and data its answers are based on.

At the same time, most of the chatbots are trained primarily on English-language sources, typically American ones, which means that we don't actually know how knowledgeable they are about Danish politics. This increases the risk of errors.

"Taken together, this means that we have no way of verifying the answers produced by language models, because their underlying information is hidden behind a digital wall. This makes it nearly impossible to critically assess the information one is presented with - which is otherwise a core function in a democratic society," says Stephanie Brandl, who concludes:

"We hope that over time it will be possible to develop more reliable and secure alternatives to the chatbots we have today. But until that happens, we encourage people to use large language models critically and with caution."

/Public Release. This material from the originating organization/author(s) might be of the point-in-time nature, and edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).View in full here.