Warren B. Nelms Institute Researchers Work to Combat the Spread of Misinformation in the Digital Age

Researchers Dr. My T. Thai, Associate Director of the Warren B. Nelms Institute; Dr. Yan Wang, assistant professor in the Department of Urban and Regional Planning; and Dr. Sylvia Chan-Olmsted, professor in the Department of Media Production, Management, and Technology, have received a new National Science Foundation (NSF) grant. The project, titled “Collaborative Research: SaTC: CORE: Medium: Information Integrity: A User-centric Intervention,” will work toward increasing information integrity and reducing misinformation at the user level.

Combating misinformation in the digital age is a challenging problem with significant social implications, as misinformation continues to shape contentious contemporary events, from elections to pandemic responses. Despite decades of research, misinformation remains a serious threat: most technical mitigation methods focus on improving detection accuracy and fail to consider social and emotional perspectives. This project enhances information integrity by identifying influential communities, agents, and culturally resonant information to locate tipping points in public dialogue on controversial issues, and by offering avenues for user-centric interventions at scale.

This project moves away from source-centric accuracy detection and debunking toward user-centric interventions that integrate psychological and socio-cultural constructs, computational theories, and machine learning (ML) algorithms to prototype interventions for testing. The first research pillar aims to analyze and identify social norm emergence (the shared beliefs or acceptable behaviors of communities) and the tipping points at which beliefs are about to change rapidly. The second pillar seeks to uncover the cultural contexts of belief, personalized to each individual, to optimize the receptivity of scientific evidence disseminated through online networks. The third pillar (Interaction) provides a human-in-the-loop visual analytics framework that supports users in verifying information and making their own decisions about what to believe. Underpinning this work is the development and testing of novel deep learning models based on topology ML that predict heterogeneous social norm emergence for timely intervention, identify the most trusted features for engagement, and deliver temporal explainable artificial intelligence for transparent interaction with users. The involvement of leading misinformation mitigation and journalism education organizations, such as the Poynter Institute, helps ensure the project's social impact in the field.
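The notion of a tipping point in social norm emergence can be illustrated with a classic threshold model of belief adoption (a simplified, standard model, not the project's actual topology-ML approach): each person adopts a belief once the fraction of their neighbors who hold it exceeds a personal threshold, and a small seed set can tip the whole network.

```python
def simulate_threshold_cascade(n, neighbors, thresholds, seeds, steps=50):
    """Granovetter-style threshold model: a node adopts a belief once the
    fraction of its adopting neighbors reaches its personal threshold."""
    adopted = set(seeds)
    for _ in range(steps):
        newly = set()
        for v in range(n):
            if v in adopted:
                continue
            nbrs = neighbors[v]
            if nbrs and sum(u in adopted for u in nbrs) / len(nbrs) >= thresholds[v]:
                newly.add(v)
        if not newly:          # no change: the cascade has stabilized
            break
        adopted |= newly
    return adopted

# Tiny ring-lattice example: each node is linked to its two neighbors.
n = 10
neighbors = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
thresholds = [0.5] * n         # adopt once half of one's neighbors have
cascade = simulate_threshold_cascade(n, neighbors, thresholds, seeds={0, 1})
print(len(cascade))  # prints 10: two adjacent seeds tip the entire ring
```

A single isolated seed would not tip this ring (each non-seed neighbor sees only one adopter out of two, exactly at threshold here, but below it for any threshold above 0.5), which is the kind of sensitivity that makes predicting emergence useful for timely intervention.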

An Ongoing Problem

The spread of misinformation online is not a new issue; Dr. Thai has been working to combat it for over a decade. In 2012, she and her collaborators published the paper “Containment of Misinformation Spread in Online Social Networks.” In 2023, it received the ACM Web Science Test-of-Time Award for its continued high impact, relevance, and timeliness in today's world.

With their rapid expansion in recent years, popular online social sites such as Twitter, Facebook, and Bebo have become major news sources as well as highly effective channels for viral marketing. Alongside these promising features, however, comes the threat of misinformation propagation, which can lead to undesirable effects, such as the widespread public panic caused by faulty swine-flu tweets on Twitter in 2009. Because of the huge number of online social network (OSN) users and the highly clustered structures commonly observed in these networks, efficiently containing the viral spread of misinformation in large-scale social networks is a substantial challenge.

In this paper, we focus on how to limit the viral propagation of misinformation in OSNs. In particular, we study a family of problems, the β1T — Node Protectors, which aim to find the smallest set of highly influential nodes whose decontamination with good information helps contain the viral spread of misinformation, initiated from a set I, to a desired ratio (1 − β) within T time steps. For this family, we analyze and present solutions including an inapproximability result, greedy algorithms that provide better lower bounds on the number of selected nodes, and a community-based heuristic for the Node Protector problems. To verify the suggested solutions, we conduct experiments on real-world traces, including the NetHEPT, NetHEPT_WC, and Facebook networks. Empirical results indicate that our methods are among the best at pinpointing these important nodes in comparison with other available methods.
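The core idea of the Node Protectors formulation can be sketched with a much-simplified greedy procedure (an illustration only, not the paper's actual algorithms or spread model): repeatedly "protect" the node whose removal most shrinks the misinformation reachable within T hops, until the infected fraction falls to the target.

```python
from collections import deque

def spread(neighbors, sources, blocked, T):
    """Nodes reachable from `sources` within T hops under a naive
    breadth-first spread model, never passing through protected nodes."""
    infected = {s for s in sources if s not in blocked}
    frontier = deque((s, 0) for s in infected)
    while frontier:
        v, depth = frontier.popleft()
        if depth == T:
            continue
        for u in neighbors[v]:
            if u not in infected and u not in blocked:
                infected.add(u)
                frontier.append((u, depth + 1))
    return infected

def greedy_protectors(neighbors, sources, T, beta):
    """Greedily add protectors, each time choosing the node whose
    protection most reduces the T-step spread, until at most a
    (1 - beta) fraction of the network can be infected."""
    n = len(neighbors)
    protectors = set()
    while len(spread(neighbors, sources, protectors, T)) > (1 - beta) * n:
        candidates = (v for v in neighbors
                      if v not in protectors and v not in sources)
        best = min(candidates,
                   key=lambda v: len(spread(neighbors, sources,
                                            protectors | {v}, T)))
        protectors.add(best)
    return protectors

# Star graph: hub 0 connects leaves 1..5; misinformation starts at leaf 1.
neighbors = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
protectors = greedy_protectors(neighbors, sources={1}, T=2, beta=0.5)
print(protectors)  # prints {0}: protecting the hub contains the spread
```

Protecting the hub isolates the source at a cost of one node, reflecting the abstract's intuition that a small set of highly influential nodes can contain a viral spread; the paper's actual contributions (the inapproximability result, the lower-bound guarantees, and the community-based heuristic) go well beyond this sketch.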