Developing a Supervised Model of Online COVID Vaccine Information
Problem
Health-related misinformation on social media exacerbates health inequities.
The spread of false or inaccurate information about health issues on social media—particularly regarding COVID-19 vaccines—has a disproportionately negative effect on historically marginalized communities.
Though artificial intelligence (AI) can be used—deliberately or inadvertently—to create and propagate misinformation, AI can also be part of the solution. It can be used to analyze information, shedding light on how messages are created and how they may shape people’s knowledge, attitudes, and beliefs about health topics.
Solution
NORC developed an AI model to identify and classify health misinformation on social media.
With support from Amazon Web Services, NORC used supervised learning (training the model on datasets that pair inputs with their correct outputs so it learns to make accurate predictions) to develop an AI model that can identify and classify health misinformation on Instagram and Twitter.
The model focuses on:
- Analyzing text and images using classification models powered by Amazon Web Services
- Developing a typology of misinformation strategies, including:
- Rhetorical strategies common in misinformation, such as anecdotes, expressions of uncertainty, and appeals to authority
- Image types, including memes, screenshots, logos, medical professional images, charts, and quotes
- Mitigating bias by documenting potential bias introduction points in model development
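The supervised-learning approach described above can be illustrated with a minimal sketch. This is not NORC's actual model or data: the labels, example posts, and choice of a TF-IDF plus logistic-regression pipeline (a common text-classification baseline in scikit-learn) are assumptions for illustration only.

```python
# Illustrative sketch only: a minimal supervised text classifier.
# NORC's actual model, training data, and AWS pipeline are not public;
# the posts and labels below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled posts: 1 = uses a misinformation rhetorical
# strategy (e.g., anecdote, appeal to authority), 0 = does not.
posts = [
    "My cousin got the vaccine and got sick the next day",    # anecdote
    "A doctor I follow says the trials were rushed",          # appeal to authority
    "CDC data show severe side effects are rare",
    "Clinical trials enrolled tens of thousands of participants",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic-regression classifier: the model
# learns from input-output pairs, as in any supervised setup.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Classify a new, unseen post.
print(model.predict(["My neighbor says her aunt fainted after the shot"])[0])
```

A production system would of course need far more labeled data, evaluation against held-out posts, and image models alongside the text model; the sketch only shows the input-to-label structure of supervised classification.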
Result
Our new AI tool empowers stakeholders to combat misinformation and address health inequities responsibly.
NORC's AI model helps individuals, researchers, policymakers, and practitioners to:
- Better understand and evaluate health information encountered on social media
- Identify and mitigate potential misinformation about COVID-19 vaccines
- Address health inequities by using AI responsibly and equitably
The project provides a tool for identifying misinformation and contributes to the broader understanding of how AI can be developed and deployed ethically in health contexts. By focusing on bias mitigation and transparency, NORC's work sets a standard for responsible AI use in combating health misinformation and reducing its impact on marginalized communities.
Learn More About the Study
Want to learn more about our model, or work with us to adapt it to your health topic or use case? Please contact us.
Project Leads
- Amelia Burke-Garcia, Director (Project Director & Co-Principal Investigator)
- Brandon Sepulvado, Senior Research Methodologist (Co-Principal Investigator & Chief Methodologist)
- Joshua Lerner, Research Methodologist (Model Development)
- Mehmet Celepkolu, Senior Data Scientist (Model Development)
- Hy Tran, Senior Data Scientist (Data Analyst)
- Ashani Johnson-Turbes, Vice President (Advisor)
- Sherry Emery, Director (Advisor)