TY - GEN
T1 - Overview of the CLEF-2022 CheckThat! Lab Task 1 on Identifying Relevant Claims in Tweets
AU - Nakov, Preslav
AU - Barrón-Cedeño, Alberto
AU - Da San Martino, Giovanni
AU - Alam, Firoj
AU - Míguez, Rubén
AU - Caselli, Tommaso
AU - Kutlu, Mucahid
AU - Zaghouani, Wajdi
AU - Li, Chengkai
AU - Shaar, Shaden
AU - Mubarak, Hamdy
AU - Nikolov, Alex
AU - Kartal, Yavuz Selim
N1 - Funding Information:
Part of this work is made within the Tanbih mega-project, developed at the Qatar Computing Research Institute, HBKU, which aims to limit the impact of “fake news”, propaganda, and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking.
Publisher Copyright:
© 2022 Copyright for this paper by its authors.
PY - 2022
Y1 - 2022
N2 - We present an overview of CheckThat! lab 2022 Task 1, part of the 2022 Conference and Labs of the Evaluation Forum (CLEF). Task 1 asked participants to predict which posts in a Twitter stream are worth fact-checking, focusing on COVID-19 and politics in six languages: Arabic, Bulgarian, Dutch, English, Spanish, and Turkish. A total of 19 teams participated, and most submissions managed to achieve sizable improvements over the baselines using Transformer-based models such as BERT and GPT-3. Across the four subtasks, approaches that targeted multiple languages (be it individually or in conjunction) in general obtained the best performance. We describe the dataset and the task setup, including the evaluation settings, and we give a brief overview of the participating systems. As usual in the CheckThat! lab, we release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research on finding relevant tweets that can help different stakeholders such as fact-checkers, journalists, and policymakers.
AB - We present an overview of CheckThat! lab 2022 Task 1, part of the 2022 Conference and Labs of the Evaluation Forum (CLEF). Task 1 asked participants to predict which posts in a Twitter stream are worth fact-checking, focusing on COVID-19 and politics in six languages: Arabic, Bulgarian, Dutch, English, Spanish, and Turkish. A total of 19 teams participated, and most submissions managed to achieve sizable improvements over the baselines using Transformer-based models such as BERT and GPT-3. Across the four subtasks, approaches that targeted multiple languages (be it individually or in conjunction) in general obtained the best performance. We describe the dataset and the task setup, including the evaluation settings, and we give a brief overview of the participating systems. As usual in the CheckThat! lab, we release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research on finding relevant tweets that can help different stakeholders such as fact-checkers, journalists, and policymakers.
KW - Check-Worthiness Estimation
KW - Computational Journalism
KW - COVID-19
KW - Fact-Checking
KW - Social Media Verification
KW - Veracity
UR - http://www.scopus.com/inward/record.url?scp=85136928741&partnerID=8YFLogxK
UR - http://ceur-ws.org/Vol-3180/
M3 - Conference contribution
AN - SCOPUS:85136928741
VL - 3180
T3 - CEUR Workshop Proceedings
SP - 368
EP - 392
BT - CLEF 2022: Conference and Labs of the Evaluation Forum
PB - CEUR Workshop Proceedings (CEUR-WS.org)
T2 - 2022 Conference and Labs of the Evaluation Forum, CLEF 2022
Y2 - 5 September 2022 through 8 September 2022
ER -