You write: "Amateur researchers should go to Google or ChatGPT or their local library and try and find out why this happens."
No one in their right mind should go to ChatGPT expecting accurate information. All the large language models (LLMs) appear to be prone to confabulating, i.e. making up something plausible-ish when they don't have a good answer. (If you google this, the commonly used term appears to be "hallucinate" rather than "confabulate", for reasons I don't understand; "confabulate" describes the behaviour better.) This includes such helpful tricks as adding footnotes that point to articles which never existed. See e.g. https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article
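If you're tempted to reuse a reference a chatbot hands you, the very least you can do is check that it exists. Here is a minimal sketch of my own (not from the original post, plain Python standard library) that checks whether a cited URL even resolves; the example URL is just the Guardian piece above standing in for whatever the model gave you:

import urllib.request
import urllib.error

def url_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with an HTTP status below 400."""
    req = urllib.request.Request(
        url, method="HEAD",
        headers={"User-Agent": "citation-check/0.1"},  # some sites reject requests with no UA
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.HTTPError, urllib.error.URLError, ValueError):
        # 4xx/5xx, DNS failures, malformed URLs: treat all as "does not resolve"
        return False

if __name__ == "__main__":
    # Hypothetical usage: paste in a citation copied out of a chatbot answer.
    cited = "https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article"
    print(cited, "->", "resolves" if url_resolves(cited) else "does not resolve")

Of course a URL that resolves can still say nothing like what the model claims it says, so this only catches the crudest fabrications; you still have to read the thing.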
Have you read Nick Bostrom's book Superintelligence? If not... you should! https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom-ebook/dp/B00LOOCGB2/