Developing a Supervised Model of Online COVID Vaccine Information
Problem
Inaccurate health information on social media exacerbates health disparities.
The spread of false or inaccurate information about health-related issues on social media—particularly regarding COVID-19 vaccines—has a disproportionately negative effect on historically marginalized communities.
Though artificial intelligence (AI) can be used—deliberately or inadvertently—to create and propagate inaccurate information, AI can also be part of the solution. It can be used to analyze information, shedding light on how messages are created and how they may shape people’s knowledge, attitudes, and beliefs about health topics.
Solution
NORC developed an AI model to identify and classify inaccurate health information on social media.
With support from Amazon Web Services, NORC used supervised learning (training a model on datasets that pair each input with its correct output label, so the model learns to make accurate predictions on new data) to develop an AI model that can identify and classify inaccurate health information on Instagram and Twitter.
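As a rough illustration of what that supervised setup looks like in code, here is a minimal sketch using scikit-learn with a few hypothetical labeled posts. The posts, labels, and model choice are assumptions for demonstration only; NORC's actual training data, features, and architecture are not described in this summary.

```python
# Minimal supervised text classification sketch with hypothetical data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training posts paired with their correct labels (the
# "output data" that supervised learning requires).
posts = [
    "My doctor recommended the vaccine after reviewing my history.",
    "A friend of a friend got sick right after the shot, so it must be unsafe.",
    "The CDC publishes safety monitoring data for COVID-19 vaccines.",
    "They are hiding the real ingredients from the public.",
]
labels = ["accurate", "inaccurate", "accurate", "inaccurate"]

# TF-IDF features plus logistic regression: a simple, transparent baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The fitted model can now predict labels for unseen posts.
print(model.predict(["Vaccine trial results were peer reviewed."]))
```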
The model focuses on:
- Analyzing text and images using classification models powered by Amazon Web Services
- Developing a typology of strategies used to frame inaccurate information (one way to model this appears in the sketch after this list), including:
  - Rhetorical strategies, such as anecdotes, expressions of uncertainty, and appeals to authority
  - Image types, including memes, screenshots, logos, images of medical professionals, charts, and quotes
- Mitigating algorithmic bias by documenting the points in model development where bias could be introduced
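Because a single post can use several framing strategies at once, the typology lends itself to multi-label classification. The sketch below shows one way this could be set up, with hypothetical posts and scikit-learn components; it is illustrative, not NORC's published pipeline.

```python
# Multi-label classification sketch: each post can exhibit several
# rhetorical strategies from the typology at the same time.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical posts with hand-assigned strategy label sets.
posts = [
    "My cousin fainted an hour after her shot.",             # anecdote
    "Nobody really knows what's in these vaccines.",         # uncertainty
    "As a nurse, I would never take this vaccine.",          # authority + anecdote
    "Experts say the trials were rushed, who can be sure?",  # authority + uncertainty
]
strategies = [
    {"anecdote"},
    {"uncertainty"},
    {"appeal_to_authority", "anecdote"},
    {"appeal_to_authority", "uncertainty"},
]

# Binarize the label sets so each strategy gets its own 0/1 column,
# then train one binary classifier per strategy (one-vs-rest).
binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(strategies)
model = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression()))
model.fit(posts, y)

predicted = model.predict(["A pharmacist told me her patients all had bad reactions."])
print(binarizer.inverse_transform(predicted))
```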
Result
Our new AI tool empowers stakeholders to combat inaccurate information and help reduce health disparities responsibly.
NORC's AI model helps individuals, researchers, policymakers, and practitioners to:
- Better understand and evaluate health information encountered on social media
- Identify and mitigate potentially inaccurate information about COVID-19 vaccines
- Help reduce health disparities by using AI responsibly and fairly
The project provides a tool for identifying inaccurate information and contributes to the broader understanding of how AI can be developed and deployed ethically in health contexts. By focusing on bias mitigation and transparency, NORC's work sets a standard for responsible AI use in combating inaccurate health information and reducing its impact on communities that have historically experienced disparities.
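One concrete way to document a bias introduction point is to disaggregate the model's performance across subgroups of posts and flag gaps. The short sketch below illustrates that check with hypothetical held-out results; the source does not specify NORC's exact audit procedure, and the grouping variable shown here is an assumption.

```python
# Sketch of a routine bias check: compare accuracy across subgroups of
# posts (here grouped by platform, a hypothetical choice). Large gaps
# between groups flag a bias point worth documenting and investigating.
from collections import defaultdict

# Hypothetical (true_label, predicted_label, group) triples from a held-out set.
results = [
    ("inaccurate", "inaccurate", "instagram"),
    ("accurate",   "inaccurate", "instagram"),
    ("inaccurate", "inaccurate", "twitter"),
    ("accurate",   "accurate",   "twitter"),
]

# Tally per-group accuracy.
correct, total = defaultdict(int), defaultdict(int)
for true, pred, group in results:
    total[group] += 1
    correct[group] += (true == pred)

for group in sorted(total):
    print(f"{group}: accuracy = {correct[group] / total[group]:.2f}")
```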
Learn More About the Study
Want to learn more about our model or work with us to adapt it to your health topic or use case? Please contact us.
Project Leads
- Amelia Burke-Garcia, Director (Project Director & Co-Principal Investigator)
- Brandon Sepulvado, Senior Research Methodologist (Co-Principal Investigator & Chief Methodologist)
- Joshua Lerner, Senior Research Methodologist (Model Development)
- Mehmet Celepkolu, Senior Data Scientist (Model Development)
- Hy Tran, Senior Data Scientist (Data Analyst)
- Ashani Johnson-Turbes, Vice President (Advisor)
- Sherry Emery, Director (Advisor)