AI like ChatGPT processes information like humans with a brain disorder, study finds
2 Articles
Large language models like ChatGPT and LLaMA have become known for their fluent, sometimes eerily human-like responses. However, they also have a well-documented problem: confidently producing information that is outright wrong. A new study suggests that the way AI processes information may have surprising parallels with certain human brain disorders. Researchers at the University of Tokyo explored the internal signal dynamics o…
University of Tokyo: AI overconfidence mirrors human brain condition | ResearchBuzz: Firehose
University of Tokyo: AI overconfidence mirrors human brain condition. “So-called large language model (LLM)-based agents, such as ChatGPT and Llama, have become impressively fluent in the responses they form, but quite often provide convincing yet incorrect information. Researchers at the University of Tokyo draw parallels between this issue and a human language disorder known as aphasia, where sufferers may speak fluently but make meaningless …
Coverage Details
Bias Distribution
- 100% of the sources lean Left