{"id":41308,"date":"2023-12-20T09:22:46","date_gmt":"2023-12-20T09:22:46","guid":{"rendered":"https:\/\/www.innovationnewsnetwork.com\/?p=41308"},"modified":"2023-12-20T09:22:46","modified_gmt":"2023-12-20T09:22:46","slug":"large-language-models-found-to-repeat-misinformation","status":"publish","type":"post","link":"https:\/\/www.innovationnewsnetwork.com\/large-language-models-found-to-repeat-misinformation\/41308\/","title":{"rendered":"Large language models found to repeat misinformation"},"content":{"rendered":"
<h2>New research into large language models has shown that they repeat conspiracy theories, harmful stereotypes, and other forms of misinformation.<\/h2>\n
Researchers at the University of Waterloo have tested an early version of the large language model ChatGPT\u2019s understanding of common claims as part of their research into interactions between humans and technology.<\/p>\n