Mendez is a doctoral candidate in population health sciences at Harvard University's T.H. Chan School of Public Health and a Public Voices fellow with The OpEd Project and AcademyHealth.
During a heated Senate Judiciary Committee hearing on Jan. 31, a bipartisan group of lawmakers confronted the leaders of Meta, TikTok, Snap, X, and Discord over the harms children are subjected to on their platforms. They denounced the executives, threatened to regulate their businesses out of existence, and accused them of killing people.
I hope this moment heralds meaningful policy change, but I'm pessimistic. We've been here before. Past congressional hearings on social media have covered election interference, extremism and disinformation, national security, and privacy violations. While the energy behind this latest hearing is encouraging, the track record of inaction by our elected officials is disheartening. We seem doomed to repeat the same harms, and now that we have entered a new era of artificially generated media, the potential for harm is even greater.
One bill gaining attention in the wake of this hearing is the Kids Online Safety Act, which would require social media platforms to provide minors with mechanisms to opt out of personalized recommendation systems and to fully delete their personal data. But Congress must aim higher than protecting people from these practices only until they turn 18. Elected officials must be willing to follow through on the bold claims they make on the national stage. Will they actually regulate Meta and X out of business? Will they actually act as though people's lives are at stake?
If you think that's extreme, just look back at the past few years. In 2020, hydroxychloroquine was unscientifically promoted on social media as a treatment for COVID-19, contributing to hundreds of deaths in May and June of that year alone. Between May 2021 and September 2022, COVID-19 vaccines could have saved an estimated 232,000 lives in the U.S., but far too many people instead gave in to the misinformation spreading on social media. In August 2022, Boston Children's Hospital faced a wave of harassment and bomb threats following a social media smear campaign. Protecting children from the harms of social media must include addressing the medical misinformation that can lead to death and violence.
As a public health researcher, I am attuned to high-profile medical disinformation. But the harm from its spread extends beyond physical health and threatens the well-being of our democracy. Anti-science has become a viable political platform, one that distracts from the needs of politically marginalized groups. House hearings last summer focused on debunked coronavirus conspiracy theories that questioned the research practices of leading virologists. Rehashing these conspiracy theories does nothing to address the long-term consequences of the COVID-19 pandemic, including its lasting economic costs and its elevated death toll in rural and BIPOC communities. Meanwhile, the mainstreaming of the anti-vaccination movement in U.S. politics threatens to exacerbate existing disparities in other viral diseases, such as the increased influenza hospitalizations seen in census tracts with high poverty rates.
Medical disinformation fuels political chaos, and it also overlaps with voter suppression, meaning the communities experiencing the worst downstream effects also have the least voice in holding elected officials accountable. Many rural voters rely on early voting, mail-in voting, and same-day registration, all of which have come under attack in recent years. Stricter voter ID requirements disproportionately impact communities of color. All of this compounds the fundamental relationship between poor health and low voter turnout.
It is perhaps not surprising, then, that this hearing on social media promises no change in the current balance of social media's harms and benefits: the would-be voters most affected by these issues have already lost their voice in electoral politics. These interrelated problems will likely only grow in the next few years, as artificial intelligence tools promise to flood social networks with an even more immense volume of content, further optimized for algorithmic discoverability. The era of bots talking to bots has arrived, and we humans will be the collateral damage of the ad sales along the way.
The recent rise of the ChatGPT app ecosystem carries troubling echoes of a core problem with social media companies. One ChatGPT plugin provides up-to-date information on regional respiratory disease risks in the United States. Another helps users search for clinical trials, and yet another helps users understand eligibility criteria. Others offer general medical information or personalized nutritional insights. Never mind that we don't know the sources of the data driving their responses, why they surface some information over other information, or how what they tell us is tailored by our chat histories or language choices. It is not enough for the ChatGPT prompt window to warn users: "Consider checking important information."
But technology industry leaders want to have their cake and eat it too, and elected officials seem content with the status quo. Social media and artificial intelligence are positioned as transformative tools that improve our lives and connect people through the sharing of information. Yet technology companies disclaim responsibility for the information people encounter there, as if all the human decisions involved in platform design, data science, and content moderation don't matter. It is not enough for social media companies to occasionally slap a disclaimer on content.
We are asked to believe that technology companies are changing the world yet somehow lack the power to intervene in its harms, and that we as individuals, rather than multibillion-dollar corporations, bear the ultimate responsibility.
It's only a matter of time before a new wave of human and synthetic influencers emerges, leveraging AI-generated scripts and visuals to push content at even greater speed. Focusing on protecting children from these products is not enough to shield them from the harms of extremist content and misinformation, and it may not be enough to protect adults from the intersecting problems of medical disinformation, political disinformation, and voter suppression.
As multiple congressional hearings have reminded us, the underlying design and profit motives of social media companies are already costing lives and hindering public debate. They have already fueled bullying, extremism, and mass disinformation. They have already interfered with elections. We need and deserve fundamental policy changes around social media and AI, with a breadth and intensity to match the emotion on display at this hearing. We deserve more than a theater of name-calling and public rebuke.