Twitter: Racist tweets after Euros final didn't rely on anonymity

A mural of footballer Marcus Rashford is covered in messages of support from fans after it was defaced by racist vandals.

Alex Livesey - Danehouse / Getty Images

If you have never been a victim of online abuse, it would be easy to assume that the perpetrators of such abuse are hiding behind anonymous avatars and usernames that hide their real identities. But that is not the case.

Twitter revealed in a blog post on Tuesday that when England footballers were targeted with racist abuse last month after losing the Euros final, 99% of the accounts it suspended were not anonymous.

The torrent of racist abuse targeting three black members of the England squad appeared on Twitter and Instagram in the hours after the game. It led commenters, including Piers Morgan, to demand that social media platforms prevent people from creating anonymous accounts to discourage them from posting racist comments.

The idea that anonymity is a primary factor in enabling abuse is not new, and in the UK there has even been debate over whether to include a ban on anonymous online accounts in the upcoming Online Safety Bill. But the argument for social media sites to conduct mandatory identity checks rests on the fallacy that if people can be held accountable for their actions, they will not be racist.

The evidence Twitter provided on Tuesday validates what people of color have already been saying: that people will be racist regardless of whether or not an anonymous account protects them from consequences. "Our data suggests that identity verification is unlikely to have prevented the abuse from occurring, as the accounts we suspended were not anonymous," the company said in a blog post.

Instagram did not immediately respond to a request for data on the accounts or comments it removed for directing abuse at England footballers.

Also included in the Twitter data was evidence that while the abuse came from around the world, the UK was by far the largest source of abusive tweets. The company added that the majority of discussion about British football on the platform did not involve racist behavior, and that the word "proud" was tweeted more frequently the day after the final than on any other day this year.

For Twitter and other social media giants, building tools to prevent racist abuse is an ongoing challenge. On Tuesday, Twitter said it will soon test a new product feature that temporarily and automatically blocks accounts using harmful language. It will also continue rolling out reply prompts, which encourage people to reconsider a tweet before sending it if their language appears to be harmful. In more than a third of cases, these prompts led people to rewrite their tweet or not send it at all, according to the company.

"As long as racism exists offline, we will continue to see people try to bring these views online; it is a scourge that technology cannot solve on its own," Twitter said in the blog post. "Everyone has a role to play, including the government and football authorities, and we will continue to call for a collective approach to combat this deep social problem."
