Debunking Misinformation through Social Ties
When and how can social ties debunk false claims circulating in messaging apps?
Debunk campaigns that harness existing social capital or dyadic trust in the social network can be more effective at changing beliefs than simple broadcast messages from authorities or experts. In this project, we examine how debunk messages shared via Mobile Instant Messaging (MIM) apps are perceived and what makes them more likely to be reshared. Through a randomized survey experiment, we find that debunk messages are more likely to be reshared when they come from a strong tie or an in-group member. Our experiment tests how the following three factors contribute to successful fact-checking and the dissemination of debunk messages:
- Correction Format: Given that interest has been shown to increase message shareability, richer audio- and image-based formats should be more engaging than text-based formats, and this should result in greater belief change.
- Relational Closeness of Source: Given that strong ties are generally considered more credible and trustworthy, fact-checks shared via MIMs should be more persuasive when they come from strong ties than from weak ties.
- Attitudinal Similarity of Source: Given that messages become more persuasive when offered by someone apparently speaking against their own self-interest, and that attitude homophily has been shown to increase perceived source credibility, debunks of pro-attitudinal false claims shared by an in-group member (same party affiliation) should have larger effects on beliefs.
Figure 1 shows how the quality of the tie from whom the correction originates interacts with the effectiveness of the debunk message. Debunk messages originating from strong ties or in-group members are significantly more likely to be reshared (p < 0.05, N = 717). An ordered logistic regression model with five categories for intention to share also reveals that strong in-group ties have the largest effect on intention to share, while weak out-group ties have the smallest. We also find that voice correction formats lead to larger effects on both belief change and intention to share (p < 0.05, N = 717) compared to text and image formats. These effects are robust across model formulations and dependent-variable specifications, as long as the model controls for the perceived credibility of the debunk message.
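To illustrate how an ordered logistic model maps treatment conditions to a five-category share-intention outcome, the sketch below computes category probabilities from a linear predictor and four cutpoints. All coefficient and cutpoint values here are hypothetical illustrations chosen by us, not estimates from the study:

```python
import numpy as np

def ordered_logit_probs(eta, cutpoints):
    """Probabilities of each ordinal category given linear predictor eta
    and ascending cutpoints c1 < ... < c_{K-1} for K categories."""
    # Cumulative probability P(Y <= k) = logistic(c_k - eta)
    cdf = 1.0 / (1.0 + np.exp(-(np.asarray(cutpoints, dtype=float) - eta)))
    cdf = np.concatenate(([0.0], cdf, [1.0]))
    return np.diff(cdf)  # per-category probabilities, summing to 1

# Hypothetical coefficients: tie strength, in-group status, and voice format
# all shift the latent share intention upward (signs match the reported results)
beta = {"strong_tie": 0.8, "in_group": 0.5, "voice_format": 0.6}
cutpoints = [-1.5, -0.5, 0.5, 1.5]  # 4 cutpoints -> 5 response categories

eta_strong_ingroup = beta["strong_tie"] + beta["in_group"]  # both indicators = 1
eta_weak_outgroup = 0.0                                      # reference condition

p_strong = ordered_logit_probs(eta_strong_ingroup, cutpoints)
p_weak = ordered_logit_probs(eta_weak_outgroup, cutpoints)
# The strong in-group condition shifts probability mass toward the
# highest share-intention categories relative to weak out-group.
print(p_strong.round(3), p_weak.round(3))
```

A positive coefficient shifts the latent index rightward past the fixed cutpoints, so the top response category ("very likely to share") gains probability, which is the pattern the reported ordered logit captures for strong in-group ties.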
Young, Dannagal, et al. "Fact-checking effectiveness as a function of format and tone: Evaluating FactCheck.org and FlackCheck.org." Journalism & Mass Communication Quarterly.
Karapanos, Evangelos, Pedro Teixeira, and Ruben Gouveia. "Need fulfillment and experiences on social media: A case on Facebook and WhatsApp." Computers in Human Behavior 55 (2016).
Berinsky, Adam J. "Rumors and health care reform: Experiments in political misinformation." British Journal of Political Science 47.2 (2017): 241-262.
Housholder, Elizabeth E., and Heather L. LaMarre. "Facebook politics: Toward a process model for achieving political source credibility through social media." Journal of Information Technology & Politics 11.4 (2014): 368-382.