By Dierickx, Laurence; Van Dalen, Arjen; Opdahl, Andreas L.; Linden, Carl-Gustav
Reference: Multidisciplinary International Symposium on Disinformation in Open Online Media (MISDOOM 2024), Springer, Vol. 15175
Publication: Published, 2024-08-31
Published in conference proceedings
Abstract: The launch of ChatGPT at the end of November 2022 triggered a broad reflection on its potential to support fact-checking workflows and practices. Between the excitement over AI systems that no longer require programming skills and the exploration of a new field of experimentation, academics and professionals foresaw the benefits of such technology. Critics, however, have raised concerns about the fairness of the data used to train Large Language Models (LLMs), including the risk of artificial hallucinations and the proliferation of machine-generated content that could spread misinformation. Given the ethical challenges LLMs pose, how can professional fact-checking mitigate these risks? This narrative literature review explores the current state of LLMs in the context of fact-checking practice, highlighting three complementary mitigation strategies related to education, ethics, and professional practice.