Professor Stuart Soroka in the Daily Bruin!

In the final episode of Code Red, Podcasts contributors Zoë Bordes and Alicia Ying sit down with UCLA professors Sarah Roberts and Stuart Soroka to get their perspective on the causes and consequences of online extremism. Correction: This podcast and the original version of its description incorrectly referred to Stuart Soroka as Robert Soroka.

Zoë Bordes: Welcome to the third episode of Code Red. In episode one, we explored the different definitions of online extremism, the reasons people turn to extremism and a few real-life examples. Then in episode two, we discussed how algorithms and misinformation contribute to online extremism. Today, we’ll be talking with experts to get their opinions on these topics.

Alicia Ying: In this episode, we have two amazing speakers lined up. First, we’ll hear from Dr. Sarah Roberts – an assistant professor at UCLA who’s a leading scholar on social media policy, commercial content moderation and the role of the internet in perpetuating global inequities. She’s also the faculty director of the UCLA Center for Critical Internet Inquiry, co-director of the Minderoo Initiative on Technology and Power, and a research associate of the Oxford Internet Institute. Dr. Roberts brings a unique perspective informed by feminist science and technology studies, and we’re excited to have her on the show.

ZB: Our second speaker is Dr. Stuart Soroka, a professor in the departments of communication and political science at UCLA. He is also the series editor for Cambridge Elements in Politics and Communication and an associate member of the Centre for the Study of Democratic Citizenship. Dr. Soroka specializes in political communication and political psychology, as well as the relationships between public policy, public opinion, and mass media. He is mainly interested in negativity and positivity within news coverage and the role of mass media in shaping representative democracy.

So, we’re going to begin by hearing Dr. Roberts’ take on the ethics of Big Tech.

AY: We live in a world where technology is seamlessly integrated into our lives, leading us to pay little attention to the hidden tools and algorithms designed to keep us hooked on these online platforms. To understand how Big Tech companies thrive off their users, we went to Dr. Roberts.

She says that users’ endless clicks, scrolls, swipes and content interactions themselves aren’t the endgame for companies.

Sarah Roberts: The companies are all about taking our activity, our preferences, our behavior, our networks of friends and others, and monetizing. And they monetize it because, it’s true, their true clientele is other companies. So these are really what you would call like business to business corporations. They are trying to sell ad space, not an unfamiliar model to, you know, people who work in news media, but they’re trying to sell ad space, they’re trying to sell ads aligned with particular users and particular behaviors or particular preferences. And the thing with social media is because we do so much and all of that is tracked and analyzed to an absurd degree, the way in which those ads can be targeted and sold, the value that’s placed on them, is predicated on that specificity. We are being commoditized and sold. So that is what I mean when I say that we’re not really users, we’re being used.

AY: Basically, the way material is really evaluated is in terms of its monetary value in this ad marketplace.

ZB: Understanding this concept is crucial to grasping how economic models and expectations of user engagement drive platforms’ decisions, and those decisions shape which content is visible. Whatever social media platform we open, it’s natural to anticipate fresh, engaging content tailored to our past interactions. Dr. Roberts emphasized that at the end of the day, users are part of a production chain that shapes content. Content moderation, in essence, is an editorial practice driven by the platform’s economic interests. This perspective reveals that content moderation decisions are driven not just by safety concerns but also by the need to maintain user engagement and satisfy advertisers, an alignment that can lead to inconsistencies and challenges in moderating content effectively.

AY: With all of this in mind, we wonder at what point these companies will feel some sort of responsibility for all of the misinformed, extremist, violent and hateful content on their platforms.

SR: I think the answer is in, you know, the history of other industries that caused harm. They are disinclined to change a business model that is basically turning on a faucet from which, you know, Benjamins pour out, right? It’s just, “Hey guys, can you maybe make less money?”

AY: In our interview, Dr. Roberts listed some previous examples of companies acting only when they were pressured. For example, the tobacco industry knew that smoking caused cancer but said nothing. Similarly, the automobile industry didn’t always include safety features such as seatbelts and airbags to protect people in accidents. People needed to educate themselves and push elected officials in Congress to add such regulations.

With all of this in mind, our crucial question is: At what point will social media companies acknowledge their responsibility for the spread of misinformed and extremist content on their platforms? The history of other industries suggests that companies are often reluctant to alter profitable business models, even in the face of harmful outcomes. Just as the tobacco industry long denied the dangers of smoking and the automobile industry resisted adding essential safety features, social media companies may need significant external pressure – whether from the public, regulators, or both – to implement meaningful changes.

As we’ve seen throughout this series, the ease with which users can be drawn into extremist content underscores the urgent need for a proactive approach. Social media platforms must consider their ethical responsibility to monitor and manage their content effectively, not only to improve user safety but also to foster a healthier digital environment for all of us. Understanding the complex dynamics of content moderation, which is driven by economic incentives, is essential in this endeavor. Only through informed advocacy and demands for accountability can we hope to influence these powerful entities to prioritize the well-being of their users over mere profit.

And with that, we turn to Dr. Soroka.

ZB: So, Dr. Soroka has done a lot of work on the negativity bias. This is how he defines it:

Stuart Soroka: So mammals have evolved with brains that prioritize negative information over positive information. And we attach valence to information, we identify the valence of information very, very quickly within milliseconds. And that identification of valence then structures how that information then finds its way through our brains and how we think about things, whether we pay attention to them, whether we believe them or don’t believe them, and all kinds of other things. So we along with other mammals exhibit negativity biases. And that means that when we go to read news, like any other situation in which we’re receiving information, when we go to read news, we’re going to be more attentive, more responsive to that negative information. So that means we basically set media up to do this for us, right? The whole notion of media as a fourth estate, monitoring error and identifying error and letting us know, we kind of set media up so that media processes information in the same way that our brains do, right? We’re all – we are and the media that we read – are prioritizing negative information. And that might make sense in an information environment in which we have to make decisions about what to pay attention to and what not to pay attention to. Right, it might make sense because the consequences of negative information are bigger than the consequences of positive information. But it might also make sense because in a very complex information environment, we can’t pay attention to everything all the time. We have to decide what to pay attention to. We have to have some kind of quick way of deciding, like not deciding by reading all of it, but some kind of quick, within milliseconds decision like, “This is the thing I’m going to be attentive to, and this is the thing I’m not going to be attentive to.” Because we just don’t have enough attention for all of it. So for all of those reasons, what you get is media consumption that prioritizes negative information and media production that prioritizes negative information.

Click here to read the full article.