# Survey of hallucination in natural language generation

## Abstract

> Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence technologies such as [Transformer](https://wiki.g15e.com/pages/Transformer%20architecture.txt)-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation, and data-to-text generation. However, it is also apparent that deep learning based generation is prone to [hallucinate](https://wiki.g15e.com/pages/Hallucination%20%28AI%29.txt) unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before.
>
> In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions, and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, and machine translation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.

https://dl.acm.org/doi/10.1145/3571730