A new research report has raised concerns that artificial intelligence chatbots, including OpenAI’s ChatGPT, can quickly absorb and reflect authoritarian ideas after limited and seemingly harmless user interaction. The study, conducted by researchers from the University of Miami and the Network Contagion Research Institute (NCRI), suggests that such systems may unintentionally reinforce extreme political views.
According to the report, released on Thursday, ChatGPT can show a strong “resonance” with certain psychological traits, particularly authoritarianism, after being exposed to short pieces of ideologically charged content. The researchers warn that this tendency could allow users and AI systems to reinforce each other’s radical viewpoints during private conversations.
Joel Finkelstein, co-founder of NCRI and one of the study’s lead authors, said the findings highlight a structural weakness in how advanced AI systems are designed. He explained that these systems may be prone to amplifying authoritarian ideas without being directly instructed to do so. “There appears to be something in the architecture of these models that makes them vulnerable to authoritarian amplification,” he told NBC News. He added that chatbots often try too hard to agree with users, which can create ideological echo chambers.
While previous research has pointed to chatbots being overly agreeable, Finkelstein argued that the pattern observed in this study goes beyond simple flattery. He noted that the AI did not mirror all psychological traits equally, but showed a stronger shift toward authoritarian positions.
Responding to the report, an OpenAI spokesperson said ChatGPT is designed to remain objective and to present information from multiple perspectives. The spokesperson added that while the system may shift tone when users push it toward a certain viewpoint, it operates within safety guidelines. OpenAI also said it is actively working to identify and reduce political bias and regularly publishes updates on its safety efforts.
The study involved three experiments conducted in December, using different versions of ChatGPT based on the GPT-5 and GPT-5.2 systems. In one experiment, researchers introduced short texts or full opinion articles that supported either left-wing or right-wing authoritarian ideas. They then measured how the chatbot responded to a series of statements linked to authoritarian beliefs.
The results showed that even brief ideological prompts led to a noticeable increase in authoritarian responses. When exposed to content promoting left-wing authoritarian views, the chatbot showed stronger agreement with statements favoring extreme equality measures over free speech. Similarly, exposure to right-wing authoritarian content led to increased support for censorship, strict social order, and intolerance of opposing views.
The researchers compared ChatGPT’s responses with those of more than 1,200 human participants and found that, in some cases, the chatbot’s authoritarian responses exceeded levels typically observed in human studies.
The report also included an experiment examining how ideological priming affected the AI’s perception of people. After being shown authoritarian opinion articles, ChatGPT rated neutral facial images as more hostile, indicating a shift in how it interpreted human behavior. Researchers said this finding could have serious implications for the use of AI in sensitive areas such as hiring, surveillance, or security.
Ziang Xiao, a computer science professor at Johns Hopkins University who was not involved in the study, described the research as thought-provoking but noted its limitations. He said the study focused only on ChatGPT and involved a relatively small sample, adding that more research is needed to determine whether similar patterns exist in other AI systems.
Despite these limitations, experts agree the findings align with broader concerns about how large language models can be influenced by the information they receive. Researchers warn that as AI tools become more widespread, understanding and addressing these risks will be critical.
Finkelstein described the issue as a growing public concern, saying that private human-AI interactions could have wider social consequences. He called for deeper research into how people and AI systems influence each other, particularly as such technologies become part of everyday life.

