Hallucinated Content
(Redirected from Hallucination in a foundation model (FM))
A Hallucinated Content is a generated text, produced by an AI language model, that contains false or misleading information that is not supported by the model's training data or by real-world facts.
- Context:
- It can (typically) occur because the model relies on identifying patterns in its training data and lacks the ability to verify information against real-world events or facts.
- It can (often) result from the model's attempt to construct Plausible-Sounding Responses based on the provided context, sometimes leading to the generation of inaccurate or fabricated information.
- ...
- It can range from being a Hallucinated Content with Minor Inaccuracy to being a Hallucinated Content with Completely Fabricated Content.
- It can range from being an Intrinsic Hallucination (contradicting the source) to being an Extrinsic Hallucination (unverifiable from the source).
- ...
- It can manifest as a Detached Hallucination when a major part of the generated output is unfaithful to the input.
- It can be influenced by various aspects of the model's architecture, including its Objective Function, Training Data, and Inference Mechanism.
- It can pose significant challenges in scenarios where the accuracy and reliability of information are critical, affecting fields such as finance, healthcare, and news reporting.
- It can be addressed with mitigation strategies that include: LLM Model Fine-tuning, Human-in-the-loop (HITL) Approaches, and the use of External Knowledge Bases for enhanced fact-checking and accuracy.
- It can be exacerbated by certain Model Training practices, like overfitting, or the lack of mechanisms for verifying the accuracy of the content against trusted sources.
- It can involve techniques such as specialized algorithms for hallucination detection and correction, alongside the strategic use of External Databases for fact verification (a minimal detection sketch appears below, after the See list).
- It can be viewed as a form of AI Confabulation, similar to human memory reconstruction errors, rather than a purely artificial phenomenon.
- It can occur even with nonsense prompts composed of random tokens, suggesting it may be a basic feature of LLMs rather than a bug.
- ...
- Example(s):
- Historical Fabrication: A generated account of a historical event that never happened, such as a meeting between two famous personalities who lived in different centuries.
- Medical Misinformation: Providing recommendations based on outdated treatments or on treatments that recent studies have disproven.
- Scientific Misrepresentation: Creating a lesson on a scientific concept that misrepresents fundamental principles, such as incorrectly stating the laws of thermodynamics.
- Financial Fabrication: Generating a detailed report on a company's financial performance without access to the company's actual financial data.
- Legal Misinformation: A generated Legal Filing with hallucinated legal cases.
- Confident Falsehood: An AI chatbot confidently stating incorrect information as fact, such as providing a false historical date or inventing non-existent scientific studies.
- ...
- Counter-Example(s):
- Factual Generated Content that accurately reflects information from reliable sources.
- Explicitly Uncertain Statements where the model expresses low confidence.
- Creative Fiction deliberately generated as non-factual content.
- Paraphrased Content that maintains the original meaning without adding false information.
- Audit Report based on verified financial data.
- Data-Driven Decision, where decisions are made based on real-world data analysis.
- Fact-Checking Output, which verifies information against a database of verified facts.
- See: Hallucinated Content Recognition Task, AI Confabulation, Machine Learning Bias, AI Safety, Fact Verification in NLP, Knowledge-Grounded Generation, AI Ethics, Language Model Evaluation, Natural Language Generation, Model Robustness, Factuality in Language Models, Misinformation, ChatGPT, Artificial Intelligence, Misleading Information, Fact, Association For Computing Machinery, Chatbot, Financial Report, Hallucination, Percept#Process And Terminology, AI Boom, Large Language Model.
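The fact-verification idea referenced in the Context list above can be illustrated with a minimal sketch. The snippet below is a toy heuristic, not an established algorithm: it splits generated text into sentences and flags any sentence whose content words barely overlap with a trusted reference passage, treating those as candidate extrinsic hallucinations. The function names (naive_support_score, flag_unsupported_sentences), the stopword list, and the 0.5 threshold are illustrative assumptions; real systems typically replace the overlap score with retrieval plus a natural-language-inference or fact-verification model.

```python
import re
from typing import List, Tuple

# Small stopword list used only to focus the overlap check on content words.
STOPWORDS = {"the", "a", "an", "of", "and", "or", "to", "in", "is", "was",
             "for", "on", "by", "its", "with", "from", "up"}

def content_words(text: str) -> set:
    """Lowercased word tokens with common stopwords removed."""
    return {w for w in re.findall(r"[a-z0-9']+", text.lower()) if w not in STOPWORDS}

def naive_support_score(sentence: str, reference: str) -> float:
    """Fraction of the sentence's content words that also appear in the reference.
    A crude stand-in for a retrieval + entailment (fact-verification) model."""
    sent_words = content_words(sentence)
    if not sent_words:
        return 1.0  # nothing checkable in this sentence
    return len(sent_words & content_words(reference)) / len(sent_words)

def flag_unsupported_sentences(generated: str, reference: str,
                               threshold: float = 0.5) -> List[Tuple[str, float]]:
    """Return (sentence, score) pairs whose overlap with the reference falls below
    the threshold, i.e. candidate extrinsic hallucinations under this toy heuristic."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", generated) if s.strip()]
    flagged = []
    for sentence in sentences:
        score = naive_support_score(sentence, reference)
        if score < threshold:
            flagged.append((sentence, score))
    return flagged

if __name__ == "__main__":
    reference = ("Acme Corp reported revenue of 2.1 billion dollars in 2023, "
                 "up 4 percent from 2022.")
    generated = ("Acme Corp reported revenue of 2.1 billion dollars in 2023. "
                 "Its CEO announced a merger with Globex in March 2024.")
    for sentence, score in flag_unsupported_sentences(generated, reference):
        print(f"possible hallucination (support={score:.2f}): {sentence}")
```

Even this crude heuristic only detects content that is unverifiable from the reference (extrinsic hallucination); catching contradictions of the source (intrinsic hallucination) would require an entailment-style model.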
References
2024
- Perplexity.ai
- a Hallucinated Content in the context of artificial intelligence and natural language processing can be defined as:
Generated text that contains false or misleading information presented as fact, despite appearing plausible or factual[4].
- Key characteristics of hallucinated content include:
1. It is ungrounded, meaning it is not supported by or faithful to the input or source information[4][6].
2. It can appear convincing and fluent, making it difficult to distinguish from accurate information[4].
3. It is produced by AI systems, particularly large language models, in response to prompts or queries[4][5].
4. It may contradict the input (intrinsic hallucination) or include information that cannot be verified from the input (extrinsic hallucination)[6].
5. It can range from minor inaccuracies to completely fabricated content[4].
6. In severe cases, a major part of the generated output may be unfaithful to the input, known as a "detached hallucination"[6].
- Citations:
[1] https://www.merriam-webster.com/dictionary/hallucination
[2] https://my.clevelandclinic.org/health/symptoms/23350-hallucinations
[3] https://en.wikipedia.org/wiki/Hallucination
[4] https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
[5] https://dictionary.cambridge.org/us/dictionary/english/hallucinated
[6] https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00563/116414/Understanding-and-Detecting-Hallucinations-in
[7] https://aclanthology.org/2021.findings-acl.120.pdf
[8] https://dictionary.cambridge.org/us/dictionary/english/hallucinate
2024
- (Xia et al., 2024) ⇒ Yu Xia, Xu Liu, Tong Yu, Sungchul Kim, Ryan A. Rossi, Anup Rao, Tung Mai, and Shuai Li. (2024). “Hallucination Diversity-Aware Active Learning for Text Summarization.” doi:10.48550/arXiv.2404.01588
- QUOTE: Large Language Models (LLMs) have shown propensity to generate hallucinated outputs, i.e., texts that are factually incorrect or unsupported. ....
2024
- (Huang et al., 2024) ⇒ Chengyu Huang, Zeqiu Wu, Yushi Hu, and Wenya Wang. (2024). “Training Language Models to Generate Text with Citations via Fine-grained Rewards.” doi:10.48550/arXiv.2402.04315
- QUOTE: ... While recent Large Language Models (LLMs) have proven useful in answering user queries, they are prone to hallucination, and their responses often lack credibility due to missing references to reliable sources. An intuitive solution to these issues would be to include in-text citations referring to external documents as evidence. While previous works have directly prompted LLMs to generate in-text citations, their performances are far from satisfactory, especially when it comes to smaller LLMs. ...
- NOTES:
- It introduces a novel training framework to enhance the accuracy and credibility of LLMs in generating text with in-text citations. ...
- It tackles the issues of hallucination, credibility, and reliability in AI-generated text, aiming for more reliable outcomes. ...
2024
- (Hinton, 2024) ⇒ Geoffrey Hinton. (2024). “Will digital intelligence replace biological intelligence?” Romanes Lecture.
- QUOTE:
- "hallucinations" show that LLMs don't really understand what they are saying?
- They should be called "confabulations" and they are very characteristic of human memory.
- Just like LLMs, our brains store knowledge in weights.
- They use these weights to reconstruct events.
- If the events are recent the reconstructions are usually fairly accurate.
- If the events are old, we typically get a lot of the details wrong (unless we rehearsed frequently).
- We are often remarkably confident about details that we get wrong.
2024
- (Wikipedia, 2024) ⇒ https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence) Retrieved:2024-3-2.
- In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called confabulation or delusion) is a response generated by an AI which contains false or misleading information presented as fact. For example, a hallucinating chatbot might, when asked to generate a financial report for a company, falsely state that the company's revenue was $13.6 billion (or some other number apparently "plucked from thin air").[1] Such phenomena are termed "hallucinations", in loose analogy with the phenomenon of hallucination in human psychology. However, one key difference is that human hallucination is usually associated with false percepts, but an AI hallucination is associated with the category of unjustified responses or beliefs.[2] Some researchers believe the specific term "AI hallucination" unreasonably anthropomorphizes computers.[3] AI hallucination gained prominence during the AI boom, alongside the rollout of widely used chatbots based on large language models (LLMs), such as ChatGPT. Users complained that such chatbots often seemed to pointlessly embed plausible-sounding random falsehoods within their generated content. By 2023, analysts considered frequent hallucination to be a major problem in LLM technology, with some estimating chatbots hallucinate as much as 27% of the time and a study finding factual errors in 46% of generated responses.
2024
- https://www.economist.com/science-and-technology/2024/02/28/ai-models-make-stuff-up-how-can-hallucinations-be-controlled
- NOTES:
- LLM Hallucination refers to instances where AI models, especially LLMs, produce confident yet incorrect or misleading responses, akin to fabricating information or "making things up."
- LLM Hallucination is problematic for real-world applications, leading to issues such as the spread of misinformation, copyright infringement through generated images, incorrect customer service responses, and potentially dangerous errors in medical advice.
- LLM Hallucination arises from the generative nature of LLMs, which are designed to produce content by estimating probability distributions for text generation, leading to a natural tendency towards generating incorrect information due to their probabilistic framework.
- LLM Hallucination is exacerbated by the process of fine-tuning LLMs for specific tasks, which can introduce biases or encourage the model to generate more "interesting" responses, further increasing the risk of hallucinations.
- Various strategies to mitigate LLM Hallucination include adjusting the model's "temperature," limiting generation to top-ranked tokens, employing clever prompting techniques, and fine-tuning with data that encourage accurate responses (a decoding-control sketch follows these notes).
- Advanced techniques such as Retrieval Augmented Generation (RAG) and integration with external tools (e.g., search engines) aim to reduce LLM Hallucination by supplementing the model's capabilities with reliable external information.
- Despite efforts to decrease LLM Hallucination, challenges persist due to the inherent probabilistic nature of LLMs and the complex task of aligning AI outputs with human intentions, highlighting the need for both technical advancements and user education on the limitations and proper use of AI models.
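The "temperature" and top-ranked-token controls described in these notes can be made concrete with a small sketch. The snippet below implements plain top-k truncation followed by temperature-scaled softmax sampling over a toy, hard-coded table of next-token scores; the token strings, scores, and parameter values are invented for illustration and are not taken from any real model or from the article.

```python
import math
import random
from typing import Dict, Optional

def sample_next_token(logits: Dict[str, float],
                      temperature: float = 1.0,
                      top_k: int = 3,
                      rng: Optional[random.Random] = None) -> str:
    """Sample one token from a toy next-token score table.

    Lower temperature sharpens the distribution toward the highest-scoring tokens,
    and top_k discards low-ranked tokens entirely; both reduce the chance of picking
    an unlikely (and often unfounded) continuation, at the cost of less varied text.
    """
    rng = rng or random.Random()
    # Keep only the k highest-scoring candidate tokens (top-k truncation).
    candidates = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Temperature-scaled softmax over the remaining candidates.
    scaled = [score / max(temperature, 1e-6) for _, score in candidates]
    max_s = max(scaled)
    weights = [math.exp(s - max_s) for s in scaled]  # subtract the max for stability
    return rng.choices([tok for tok, _ in candidates], weights=weights, k=1)[0]

if __name__ == "__main__":
    # Hypothetical scores for the token after "The company's 2023 revenue was ..."
    logits = {"$2.1 billion": 4.0, "$2.0 billion": 3.2,
              "$13.6 billion": 1.0, "record-breaking": 0.5}
    rng = random.Random(0)
    cautious = [sample_next_token(logits, temperature=0.3, top_k=2, rng=rng) for _ in range(5)]
    loose = [sample_next_token(logits, temperature=1.5, top_k=4, rng=rng) for _ in range(5)]
    print("low temperature, top_k=2 :", cautious)
    print("high temperature, top_k=4:", loose)
```

With a low temperature and a small top_k the sampler almost always returns the highest-scoring continuation; raising both makes low-probability (and more error-prone) tokens reachable, which is the trade-off between accuracy and variety the article describes.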
2023
- (Bottou & Schölkopf, 2023) ⇒ Léon Bottou, and Bernhard Schölkopf. (2023). “Borges and AI.” doi:10.48550/arXiv.2310.01425
- QUOTE: ... As new words are printed on the tape, the story takes new turns, borrowing facts from the training data (not always true) and filling the gaps with plausible inventions (not always false). What the language model specialists sometimes call hallucinations are just confabulations [9].
2023
- (Rawte et al., 2023) ⇒ Vipula Rawte, Amit Sheth, and Amitava Das. (2023). “A Survey of Hallucination in Large Foundation Models.” In: arXiv preprint arXiv:2309.05922. [1]
- QUOTE: “...hallucinations, detect them in LLM-generated text, and mitigate their impact to improve the overall quality and trustworthiness of LLM-… edit and correct hallucinations in language models. …”
- ABSTRACT: Hallucination in a foundation model (FM) refers to the generation of content that strays from factual reality or includes fabricated information. This survey paper provides an extensive overview of recent efforts that aim to identify, elucidate, and tackle the problem of [[LLM Hallucination|hallucination]], with a particular focus on “Large Foundation Models” (LFMs). The paper classifies various types of hallucination phenomena that are specific to LFMs and establishes evaluation criteria for assessing the extent of [[LLM Hallucination|hallucination]]. It also examines existing strategies for mitigating hallucination in LFMs and discusses potential directions for future research in this area. Essentially, the paper offers a comprehensive examination of the challenges and solutions related to hallucination in LFMs.
2023
- (Yao et al., 2023) ⇒ Jia-Yu Yao, Kun-Peng Ning, Zhen-Hui Liu, Mu-Nan Ning, and Li Yuan. (2023). “LLM Lies: Hallucinations are Not Bugs, But Features as Adversarial Examples.” In: arXiv preprint arXiv …. [2]
- QUOTE: “...to respond with [[LLM Hallucination|hallucination]]s. This phenomenon forces us to revisit that [[LLM Hallucination|hallucination]]s may be … Therefore, we formalize an automatic hallucination triggering method as the hallucination …”
- ABSTRACT: Large Language Models (LLMs), including GPT-3.5, LLaMA, and PaLM, seem to be knowledgeable and able to adapt to many tasks. However, we still can not completely trust their answer, since LLMs suffer from hallucination -- fabricating non-existent facts to cheat users without perception. And the reasons for their existence and pervasiveness remain unclear. In this paper, we demonstrate that non-sense prompts composed of random tokens can also elicit the LLMs to respond with hallucinations. This phenomenon forces us to revisit that hallucination may be another view of adversarial examples, and it shares similar features with conventional adversarial examples as the basic feature of LLMs. Therefore, we formalize an automatic hallucination triggering method as the hallucination attack in an adversarial way. Finally, we explore basic feature of attacked adversarial prompts and propose a simple yet effective defense strategy. Our code is released on GitHub.
2023
- (Ji et al., 2023) ⇒ Ziwei Ji, Tiezheng Yu, Yan Xu, Nayeon Lee, Etsuko Ishii, .... (2023). “Towards Mitigating LLM Hallucination via Self Reflection.” In: The 2023 Conference …. [3]
- QUOTE: “…notably the issue of 'hallucination', where models generate … This paper analyses the phenomenon of hallucination in … approach in hallucination reduction compared to baselines.…”