sigir24-llm-misinformation.github.io - SIGIR 2024 Tutorial: Preventing and Detecting Misinformation Generated by Large Language Models



As large language models (LLMs) become increasingly capable and widely deployed, their potential to generate misinformation poses a critical challenge. Misinformation from LLMs can take many forms, from factual errors caused by hallucination to intentionally deceptive content, and can have severe consequences in high-stakes domains.

This tutorial covers comprehensive strategies to prevent and detect misinformation generated by LLMs. We first introduce the types of misinformation LLMs can produce and their root causes. We then explore two broad categories:

Preventing misinformation generation:
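As a concrete illustration of one detection-side technique commonly discussed in this line of work, a self-consistency check samples the model several times on the same question and treats low agreement among the samples as a hallucination signal. A minimal sketch, assuming the sampled answers are already collected as strings (a real pipeline would obtain them from repeated LLM calls, which are omitted here):

```python
from collections import Counter


def consistency_score(answers):
    """Fraction of sampled answers that agree with the majority answer."""
    if not answers:
        raise ValueError("need at least one sampled answer")
    counts = Counter(a.strip().lower() for a in answers)
    majority_count = counts.most_common(1)[0][1]
    return majority_count / len(answers)


def flag_possible_misinformation(answers, threshold=0.6):
    """Flag a response as suspect when the samples disagree too often.

    The threshold is an illustrative choice, not a recommended value.
    """
    return consistency_score(answers) < threshold
```

For example, five samples that all answer "Paris" score 1.0 and are not flagged, while samples split across several conflicting answers fall below the threshold and are flagged for further verification. This kind of check is a cheap heuristic; it complements rather than replaces fact verification against external sources.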
