Danger Of Unregulated Online Communications

Social media gives people a voice but also fuels online hate, especially against marginalised groups. PhD candidate Eva Nave: 'While end-to-end encryption protects activists, it also enables criminal activity, creating a more accessible version of the Darkweb.'

It started with a rumour that members of the Rohingya, a Muslim community in Myanmar, had raped a woman. The accusations, later confirmed to be false, erupted on Facebook. The Rohingya began to receive death threats, and Facebook's algorithm raised the visibility of the posts so they reached even more people. This contributed to the mass persecution of the Rohingya: many of them were killed, raped and driven out of the country (see box text).

At present, over five billion people use social media to communicate. The example of the Rohingya shows how the spread of hate speech on social media can have extreme consequences - in the offline world too. Eva Nave conducted PhD research on online hate speech and examined the responsibility of social media platforms to counter it.


The importance of human rights compliant content moderation

Social media platforms need to find the right balance when monitoring content - a task also referred to as content moderation. Content that is legal must remain, while illegal messages, photos and videos must be demoted and, if necessary, removed and reported to the police.

'WhatsApp has already been linked to lynchings in India.'

Nave also warns about the consequences of content moderation policies that take down legal content: 'Syrian human rights activists posted videos on YouTube exposing war crimes, but YouTube deleted the content without archiving it as potential evidence for future criminal investigations or sharing it with law enforcement bodies.' At the same time, criminal hate speech, such as incitement to violence, should be taken offline as soon as possible. Nave: 'Take the example of the genocide of the Rohingya. Not only did Facebook fail to remove online hate but it even amplified the visibility of the posts by, for example, automatically showing hateful content in the "up-next" video feature.'

Meta's role in contributing to genocide of the Rohingya

The Rohingya, a Muslim community in Myanmar, were killed, raped and persecuted by the Myanmar military. The United Nations classified the violence as genocide. Hundreds of thousands of Rohingya had to flee to neighbouring Bangladesh. False reports about the Rohingya were posted on Facebook and hate-inciting content was shared (see main text). According to investigations by Amnesty International and the United Nations, Facebook's algorithm fuelled offline violence against the Rohingya, and Meta, Facebook's parent company, significantly contributed to the genocide. Although Facebook was aware of the hate-inciting content, the platform did not remove the posts and even promoted them (see main text). In 2018, Facebook admitted that it had been too slow to prevent hate speech and stop the spread of disinformation. Several lawsuits are now pending against Meta for its role in the genocide.

Encrypted mega group chats: Darkweb for all

Good content moderation is therefore important, but it is becoming increasingly challenging due to the rise of encrypted communication channels. Platforms such as WhatsApp, Signal and even Facebook offer 'end-to-end encryption', which means that only the sender and the intended recipient of a message can read its content - good news when it comes to freedom of speech and the protection of human rights activists. 'So it's not surprising that Signal, which has offered end-to-end encrypted messaging from the very start, advertises itself as the platform for activists,' says Nave.
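
To illustrate what 'only the sender and the intended recipient can read the content' means in practice, here is a minimal sketch using the PyNaCl library's public-key Box. It is a simplified illustration only; messengers such as WhatsApp and Signal rely on more elaborate schemes (the Signal protocol), and the key names below are invented for this example.

    # Minimal illustration of end-to-end encryption with PyNaCl (pip install pynacl).
    # Simplified example; real messengers use more elaborate protocols.
    from nacl.public import PrivateKey, Box

    # Each user generates a key pair; only public keys are ever shared.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts for Bob using her private key and Bob's public key.
    sender_box = Box(alice_key, bob_key.public_key)
    ciphertext = sender_box.encrypt(b"Meet at the square at noon.")

    # The platform relaying the ciphertext cannot read it:
    # decryption requires Bob's private key.
    receiver_box = Box(bob_key, alice_key.public_key)
    print(receiver_box.decrypt(ciphertext))  # b'Meet at the square at noon.'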

'Amplify the voices of the people who were the target of hate-inciting comments'

Nevertheless, end-to-end encryption also poses new threats. 'If there's no monitoring at all, criminal activities can proliferate more easily within these chats. So, in a way, these chats can also work as a more accessible Darkweb 2.0.'

Large end-to-end encrypted group chats are the biggest problem, as groups on platforms such as Signal and WhatsApp allow up to thousands of users to join. 'At first, only one-on-one conversations were protected; now group chats are too. The larger the group, the greater the threat to human rights.' According to Nave, the trend of very large online platforms such as Meta adding end-to-end encryption to their messaging applications facilitates the spread of online hate speech. She notes that this could lead to offline violence: 'WhatsApp has already been linked to lynchings in India.'

Disrupting hate speech without breaking privacy

Nave has been working on a solution for content moderation in end-to-end encrypted communication. Together with technical experts, she has proposed a disruption technique that detects hate speech in large group chats while protecting the privacy of the users. The tool contains a database of very specific expressions, in multiple languages, that incite violence. As soon as such an expression appears, the system can automatically freeze a group or split it into smaller groups.
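
As a rough illustration of that disruption idea, the hypothetical Python sketch below matches messages against a phrase database and then freezes or splits a group. The phrase list, group model, size threshold and actions are invented for this example and do not reproduce the actual system proposed in the dissertation.

    # Hypothetical sketch of the disruption idea, not the system from the dissertation.
    from dataclasses import dataclass

    # Placeholder for a multilingual database of expressions that incite violence.
    INCITEMENT_PHRASES = {"example incitement phrase", "another incitement phrase"}

    @dataclass
    class GroupChat:
        members: list[str]
        frozen: bool = False

    def disrupt_if_inciting(message: str, group: GroupChat,
                            max_subgroup_size: int = 50) -> list[GroupChat]:
        """Freeze or split a group chat when a message matches the phrase database."""
        text = message.lower()
        if not any(phrase in text for phrase in INCITEMENT_PHRASES):
            return [group]  # no match: the group continues unchanged
        if len(group.members) <= max_subgroup_size:
            group.frozen = True  # small group: freeze it
            return [group]
        # Large group: split it into smaller subgroups to limit the message's reach.
        return [GroupChat(members=group.members[i:i + max_subgroup_size])
                for i in range(0, len(group.members), max_subgroup_size)]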

Before such a tool goes live, it must be transparent to users what the database monitors. 'The database should prevent people from inciting violence and also from resorting to offline violence. It should encourage people to communicate respectfully and make users aware that incitement to violence is unacceptable.'

Nave acknowledges that the system is by no means perfect. The greatest risk is that users will adapt their vocabulary or the group size. Its success also depends on reliable cooperation with law enforcement authorities. 'Content that incites hatred would have to be archived and reported to the police. But my proposal also acknowledges the increasing infiltration of violent extremists within law enforcement.' According to Nave, there is also a risk that the database will be misused to detect content that is merely deemed disagreeable, which would put already marginalised groups at greater risk.

'A megaphone' for victims

Nave's proposals aim to prevent online hate speech. But what about the Rohingya who have already become victims of it? The researcher has ideas about that too. 'I believe that one possible solution to repair the harm caused would be to amplify the voices of the people who were the target of hate-inciting comments,' says Nave. As a means of remedying the damage caused, Meta could therefore tailor its content moderation algorithms to actively spread the content posted by the affected community.

Eva Nave defended her dissertation on 3 July. This research was funded with a grant from Marie Skłodowska-Curie Actions - Innovative Training Network.
