Despite our society’s collective addiction to scrolling through social media, many of us can’t help but feel a twinge of dread when we see notifications we’ve missed. For every clever meme, scintillating fact or adorable animal that crosses our feeds, we’re just as likely to encounter a snarky attack, racial slur or hate-filled comment.
But the potential dangers go far beyond anxiety. A 2021 Pew Research Center survey found that a quarter of Americans have experienced severe forms of harassment online, including physical threats, stalking, sexual harassment and sustained harassment, often tied to their political beliefs. And this figure does not capture the additional harms caused by hate speech, misinformation and disinformation, which are even harder to measure.
While many social media companies are developing community guidelines and investing in both human and algorithmic content moderators to uphold them, these efforts hardly feel like enough to stem the tide of toxicity. And even when platforms succeed in removing problematic content and banning perpetrators, people targeted by online hate or harassment remain extremely vulnerable. After all, it takes only a few clicks for a banned user to create a new account or repost removed content.
At UC Berkeley, researchers are reimagining how to support freedom of expression online while minimizing the potential for harm. By holding social media companies accountable, building tools to combat online hate speech, and uplifting the survivors of online abuse, they are working to create safer and more welcoming online spaces for everyone.