2nd Workshop on Misinformation Detection in the Era of LLMs (MisD) - Call For Papers

2026-02-13

Workshop Homepage: https://sites.google.com/view/misd-2026/home/

In conjunction with ICWSM-2026, May 26, 2026, Los Angeles, CA

Introduction

With the rise of social media platforms such as X, Facebook, and Weibo, an increasing number of people get their information online. The latest statistics show that active social media user identities have passed the 5.24 billion mark, equivalent to 63.9 percent of the world's population. However, due to lax online regulation, the internet is flooded with misinformation, including fake news, rumors, and conspiracy theories (a recent example being Facebook and Instagram dropping their fact-checkers). Such false information, or misleading combinations of factual information used to support unwarranted conclusions, leads people to believe untrue content, sways public opinion, and causes serious harm to society, the economy, and politics. Furthermore, recent advances in Artificial Intelligence (AI) and large language models (LLMs) such as ChatGPT and GPT-4 have made it easier than ever to generate seemingly persuasive false information. There is therefore an urgent global need for methods that can effectively detect erroneous and misleading information.

LLMs have significantly advanced the field of misinformation detection by improving the efficiency and accuracy of predictive models. However, using LLMs for misinformation detection still faces many challenges, including scalability, bias, contextual understanding, interpretability, and adaptability to new types of fake content. LLMs can also be used to generate convincing fake content at scale. Furthermore, given their well-documented tendency to hallucinate, careful consideration is needed of how much of the detection pipeline can responsibly be automated.
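To make the baseline setting concrete, the sketch below flags a claim with an off-the-shelf zero-shot classifier from the Hugging Face transformers library; the model choice and label set are illustrative assumptions, not part of the workshop, and such a baseline exhibits exactly the robustness and interpretability limits discussed above.

```python
# Minimal zero-shot misinformation-flagging sketch (illustrative only).
# Assumes `pip install transformers torch`; the model and labels below are
# our own choices, not prescribed by the workshop.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # an NLI model repurposed for zero-shot labeling
)

claim = "Drinking bleach cures viral infections."
result = classifier(
    claim,
    candidate_labels=["misinformation", "reliable information"],
)

# Labels come back sorted by score; print the top hypothesis and its confidence.
print(result["labels"][0], round(result["scores"][0], 3))
```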

This year, we introduce a shared task built on a Reference-Free Counterfactual (RFC) benchmark, which focuses on detecting plausible yet false financial narratives on the web and social platforms. The task encourages models to assess causality, contextual coherence, and credibility without relying on external fact sources, bridging NLP with web information ecosystems and social science perspectives on misinformation and trust. The shared task web page is https://sites.google.com/view/icwsm-2026-fmd/
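For intuition only, the following sketch shows one way such a reference-free judgment could be posed to an LLM, scoring causality, coherence, and credibility from the text alone; the rubric, verdict labels, and the query_llm placeholder are our own assumptions and do not describe the official shared-task protocol.

```python
# Hypothetical reference-free check: the model judges internal plausibility
# signals rather than retrieving external evidence. `query_llm` stands in for
# any chat-completion client; the rubric and verdict labels are assumptions,
# not the official shared-task protocol.
from typing import Callable

RUBRIC = (
    "Without consulting external sources, rate the following financial "
    "narrative from 1 (weak) to 5 (strong) on three axes:\n"
    "1. Causality: do the claimed cause-effect links make economic sense?\n"
    "2. Coherence: is the narrative internally consistent?\n"
    "3. Credibility: does the framing resemble reliable reporting?\n"
    "Finish with a single-line verdict: PLAUSIBLE or LIKELY-FALSE.\n\n"
    "Narrative: {narrative}"
)

def reference_free_verdict(narrative: str, query_llm: Callable[[str], str]) -> str:
    """Return the raw LLM judgment for one narrative (no external retrieval)."""
    return query_llm(RUBRIC.format(narrative=narrative))
```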

Call for papers

This workshop aims to explore the potential of LLMs to address such complex mis/disinformation detection problems and the implications for content moderation systems. The workshop will facilitate discussion of the current state and future directions of NLP techniques for misinformation detection and understanding, and drive the development of comprehensive frameworks that address the multifaceted nature of misinformation detection challenges. Topics include but are not limited to:

  • Methodology - Applying LLMs to identify fake news, rumors, or conspiracy theories.
    • Fact checking - Determining the ‘truth’ of claims against given background references.
    • Multi-modal/multi-lingual misinformation detection - Leveraging different modalities/languages, and combinations thereof, to tackle online multimodal misinformation.
    • Cross-domain misinformation detection - Identifying misinformation across domains such as health, education, finance, politics, and technology.
    • Stance detection - Identifying stances toward claims, along with associated topics and sentiment/emotions.
    • Rhetoric detection - Identifying sarcasm, exaggeration, irony, and other rhetorical strategies commonly used in mis/disinformation.
    • Network analysis - Analyzing the social networks, dissemination patterns, etc., of misinformation.
    • Implication - Developing methods to identify misleading reasoning that uses true facts but leads to unwarranted conclusions.
  • Interpretability - Providing explanations for misinformation detection and fact-checking decisions.
  • User psychology - Analyzing the psycholinguistic features that may drive engagement with misinformation.
  • Feature analysis - Analyzing the impact of different features for misinformation detection, such as emotion, style, stance, etc.
  • Hallucination mitigation and evaluation in LLMs.
  • Data sources and benchmarks - We encourage contributions of new datasets and benchmarks, as well as analyses of misinformation generated by LLMs.
  • Fairness of LLM moderation - Existing work has shown that LLMs exhibit systematic biases against different demographics (e.g., religion, age, or other cultural characteristics). To what extent does this impact misinformation detection?
  • Policies and practical usage - LLMs can perform this task to a certain degree, but is deploying them advisable? We welcome position papers on this topic.

Important Dates

Submission Deadline: TBD
Notification of acceptance: April 15, 2026
Camera-ready paper due: TBD
Workshop Date: May 26, 2026
