AI vs 100,000 humans: Which wins the creativity contest? - NTS News



AI can beat average human creativity — but the most imaginative minds are still unmistakably human. A large study, comparing more than 100,000 people with today’s most advanced AI systems, has delivered a surprising result: Generative AI can now beat the average human on certain creativity tests. Models like GPT-4 showed strong performance on tasks designed to measure original thinking and idea generation, sometimes outperforming typical human responses.

At the same time, the most creative people retain a clear and consistent advantage over even the strongest AI models. The most creative humans, especially the top 10%, still leave AI well behind, particularly on richer creative work such as poetry and storytelling.

For now, at least. The study was led by Professor Karim Jerbi of the Department of Psychology at the Université de Montréal, and is described as the largest direct comparison ever conducted between human creativity and the creativity of large language models. The researchers evaluated several leading large language models, including ChatGPT, Claude, and Gemini, and compared their performance with results from more than 100,000 human participants.

The findings highlight a clear turning point. Some AI systems, including GPT-4, exceeded average human scores on tasks designed to measure divergent linguistic creativity. “Our study shows that some AI systems based on large language models can now outperform average human creativity on well-defined tasks,” explains Jerbi in a research brief. “This result may be surprising — even unsettling — but our study also highlights an equally important observation: even the best AI systems still fall short of the levels reached by the most creative humans.” To evaluate creativity fairly across humans and machines, the research team used multiple methods.

The primary tool was the Divergent Association Task (DAT), a widely used psychological test of divergent thinking: the ability to generate multiple, varied, and original ideas in response to an open-ended prompt. In the task, participants are asked to name ten nouns (or sometimes seven, depending on the platform) that are as unrelated in meaning as possible.

For example, “cat” and “book” are more divergent than “cat” and “dog” because they are less semantically related. Performance on this task is considered to be strongly linked to results on other established creativity tests used in writing, idea generation, and creative problem solving. Although the task is language-based, it goes well beyond vocabulary. It engages broader cognitive processes involved in creative thinking across many domains.
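The DAT is typically scored as the average pairwise semantic distance between the submitted words, computed as cosine distance over word embeddings and scaled so scores land roughly in a 0–200 range. Here is a minimal sketch of that scoring idea; the three-dimensional toy vectors are invented for illustration, standing in for the high-dimensional word embeddings (such as GloVe vectors) a real implementation would use:

```python
from itertools import combinations
import math

def cosine_distance(u, v):
    """1 minus the cosine similarity of two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def dat_score(words, embeddings):
    """Average pairwise semantic distance, scaled toward a 0-200 range."""
    vectors = [embeddings[w] for w in words]
    distances = [cosine_distance(u, v) for u, v in combinations(vectors, 2)]
    return 100.0 * sum(distances) / len(distances)

# Invented toy embeddings: "cat" and "dog" are deliberately placed close
# together, while "book" points in a different direction.
toy = {
    "cat":  [0.9, 0.1, 0.0],
    "dog":  [0.8, 0.2, 0.1],
    "book": [0.1, 0.9, 0.3],
}

score = dat_score(["cat", "dog", "book"], toy)
```

With real embeddings the same property holds: "cat" and "dog" sit close together while "cat" and "book" are farther apart, so a list of mutually unrelated nouns earns a higher score.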

The DAT also has practical advantages, as it takes only two to four minutes to complete and can be accessed online by the general public. The researchers then explored whether AI success on this simple word association task could extend to more complex and realistic creative activities. To test this, they compared AI systems and human participants on creative writing challenges such as composing haiku (a short three-line poetic form), writing movie plot summaries, and producing short stories.

The results followed a familiar pattern. While AI systems sometimes exceeded the performance of average humans, the most skilled human creators consistently delivered stronger and more original work. These findings raised another important question. Is AI creativity fixed, or can it be shaped? The study shows that creativity in AI can be adjusted by changing technical settings, particularly the model’s temperature.

This parameter controls how predictable or adventurous the generated responses are. At lower temperature settings, AI produces safer and more conventional outputs. At higher temperatures, responses become more varied, less predictable, and more exploratory, allowing the system to move beyond familiar ideas. The researchers also found that creativity is strongly influenced by how instructions are written.
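The effect of the temperature setting described above can be sketched with the standard temperature-scaled softmax used when sampling the next token; the logit values below are invented purely for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw next-token scores (logits) into sampling probabilities.

    Dividing by a small temperature sharpens the distribution toward the
    top-scoring token (safe, conventional output); a large temperature
    flattens it, giving low-scoring tokens a real chance (exploratory output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for four candidate next tokens.
logits = [4.0, 2.0, 1.0, 0.5]

cold = softmax_with_temperature(logits, 0.2)  # conservative sampling
hot = softmax_with_temperature(logits, 2.0)   # exploratory sampling
```

At temperature 0.2 nearly all of the probability mass lands on the top-scoring token, while at 2.0 it spreads across the alternatives, which is why higher temperatures yield more varied, less predictable text.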

For example, prompts that encourage models to think about word origins and structure using etymology lead to more unexpected associations and higher creativity scores. These results emphasize that AI creativity depends heavily on human guidance, making interaction and prompting a central part of the creative process. The study offers a balanced perspective on fears that artificial intelligence could replace creative professionals.

While AI systems can now match or exceed average human creativity on certain tasks, they still have clear limitations and rely on human direction. “Even though AI can now reach human-level creativity on certain tests, we need to move beyond this misleading sense of competition,” says Jerbi. “Generative AI has above all become an extremely powerful tool in the service of human creativity: it will not replace creators, but profoundly transform how they imagine, explore, and create — for those who choose to use it.” Rather than signalling the end of creative careers, the findings suggest a future where AI serves as a creative assistant.

By expanding ideas and opening new paths for exploration, AI may help amplify human imagination rather than replace it. The research appears in the journal Scientific Reports, titled “Divergent creativity in humans and large language models.” Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism.

He is additionally a practising microbiologist and an author, with further interests in history, politics, and current affairs.



Original Source: Digital Journal | Author: Dr. Tim Sandle | Published: March 1, 2026, 7:02 am
