Published: April 9, 2023

Don't bet with ChatGPT – study shows language AIs often make irrational decisions

The past few years have seen an explosion of progress in large language model artificial intelligence systems that can do things like write poetry, conduct humanlike conversations and pass medical school exams. This progress has yielded models like ChatGPT that could have major social and economic ramifications ranging from job displacements and increased misinformation to massive productivity boosts. Despite their impressive abilities, large language models don’t actually think. They tend to make elementary mistakes and even make things up. However, because they generate fluent language, people tend to respond to them as though they do think. This has led researchers to study the models’ “cognitive” abilities and biases, work that has grown in importance now that large language models are widely accessible.

This line of research dates back to early large language models such as Google’s BERT, which is integrated into its search engine, and so the field has been coined BERTology. This research has already revealed a lot about what such models can do and where they go wrong.

For instance, cleverly designed experiments have shown that many language models have trouble dealing with negation – for example, a question phrased as “what is not” – and doing simple calculations. They can be overly confident in their answers, even when wrong. Like other modern machine learning algorithms, they have trouble explaining themselves when asked why they answered a certain way.

Words and thoughts

Inspired by the growing body of research in BERTology and related fields like cognitive science, my student Zhisheng Tang and I set out to answer a seemingly simple question about large language models: Are they rational?

Although the word rational is often used as a synonym for sane or reasonable in everyday English, it has a specific meaning in the field of decision-making. A decision-making system – whether an individual human or a complex entity like an organization – is rational if, given a set of choices, it chooses to maximize expected gain.
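To make that definition concrete, here is a minimal Python sketch of expected-gain maximization. The choices, probabilities and dollar amounts are made up for illustration and do not come from the study:

    # Each choice is a list of (probability, payoff) outcomes.
    choices = {
        "sure_thing": [(1.0, 40)],               # guaranteed $40
        "coin_flip": [(0.5, 100), (0.5, -10)],   # 50/50 bet: win $100 or lose $10
    }

    def expected_gain(outcomes):
        # Expected gain = sum of probability * payoff over all outcomes.
        return sum(p * payoff for p, payoff in outcomes)

    # A rational decision-maker picks the choice with the highest expected gain.
    best = max(choices, key=lambda name: expected_gain(choices[name]))
    for name, outcomes in choices.items():
        print(f"{name}: expected gain = {expected_gain(outcomes):.2f}")
    print(f"rational choice: {best}")

In this toy example the bet has the higher expected gain (0.5 × $100 + 0.5 × −$10 = $45, versus a guaranteed $40), so a rational decision-maker takes the bet even though it can lose.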
