Overview
The worldwide spread of social media both opens new horizons and poses new challenges to peacebuilders. Social media platforms contribute to freedom of speech. Even so, the filter bubbles they create can also foster polarization and hate-speech, with serious offline implications. Recent elections – in the Global North and the Global South alike – have shown both the potential and dangers of this dynamic.
This session focused on the phenomenon of online and offline polarization, how it impacts our peacebuilding concepts and tools, and the role of peacebuilding approaches in responding to polarization. It connected peacebuilders with researchers and practitioners testing out new social media approaches. An introduction to ‘dangerous speech’ set the stage, followed by an introduction to the concrete experiences and examples of social media projects focused on depolarization in South Sudan, the USA and Germany. A facilitated dialogue and breakout sessions deepened the introductory remarks. The dialogue focused on questions about the current situation, the challenges, and promising practices.
This session was organized by Claudia Meier and Kate Mytty (Build Up) and Sonja Vorwerk-Halve (GIZ) and facilitated by Julia Schappert (GIZ).
Key Takeaways
Digital tools can enhance the reach of peacebuilding activities, if we find ways to translate analog peacebuilding methodologies to the online context. There are specific dimensions of social media that need to be considered when using it for peacebuilding:
Dangerous or hate speech needs to be analysed within a framework of communication that considers the speaker, the message, the audience, the context and the medium of the specific speech.
Dangerous speech lowers human barriers to violence – online and offline. It can be any form of expression, whether images or speech, that increases the risk that people will condone or commit violence against members of another group. It overlaps with hate speech, which often has a legal definition, but neither category fully encompasses the other. Fear and threat, evoked through dangerous speech, are powerful in inciting violence – even more so than hatred. The hallmarks of dangerous speech include:
Dehumanization of the ‘other’.
Accusation in a mirror, where one group portrays the other as a threat in order to stoke violence against it.
Assertions of attacks on women or girls, used to provoke a society towards violence.
Threat to purity, where even a small segment of a group is portrayed, through dangerous speech, as able to pollute or corrupt the whole.
Social dynamics change with social media. Societal norms do not transfer from offline to online; people behave differently online than offline. Social norms change because people are not physically in the same space. The 'disinhibition effect' created by that distance can unleash negative expressions. Social media thus makes it easier to attack people collectively, and geographic boundaries vanish. In South Sudan, social media is used by the diaspora, and while polarization happens online at a distance, it can translate into real conflict in South Sudan.
Algorithms can limit or amplify the information that people are exposed to, thus biasing our perspective of reality. Automation (bots) can be a useful tool to reach a mass of people, but it is important to also have a human element.
Social media can be a tool for peacebuilding. It is possible to reach and organize large numbers of people against polarization by training people on how to recognize and respond to online polarization. However, it is important to combine the offline and the online space for maximum effect.
Recommendations
We need to understand dangerous speech better. There are different ways to attack people online; we need to understand who is driving the attacks and the rationale for their actions. Often, those engaging in online attacks do not fit the stereotype and may not yet be fully dedicated to dangerous speech. Knowing these human patterns would help us prevent these actors from becoming more polarized.
We need to team up in the same way dangerous speech actors do. Trolling is an activity, not an identity, but trolls are well coordinated. We need to organize in the same way – not to further polarize, but to break polarization.
We need to target the bystanders. They are important both in not adding fuel to the fire and in intervening.
We need more research. There is little research on the benefits of the connecting work we do or on what makes counter-speech effective. We mostly study the perpetrators of dangerous speech, not what works to combat it.
We need to bring peacebuilding methods to Silicon Valley to help fix the underlying issues. The big companies are facilitators who fundamentally shape the way people interact online.
We need to focus on impact. First, we need to understand what we are aiming to achieve, and then target our actions to that.
We need to exchange more on promising practices, on how we can cooperate with each other, and on how to integrate existing tools – such as the #defyhatenow social media hate speech mitigation field guide – into peacebuilding and development activities.
We need to work to increase the effectiveness of offline and online peacebuilding, e.g. through promoting civic education and social media literacy both on- and offline, and through including different actors such as psychologists, lawyers, journalists, teachers and politicians.