Footballers facing a torrent of racist, homophobic abuse on social media

Social media accounts that send racist abuse to footballers remain active despite repeated complaints, according to the PFA.

Research from the data science firm Signify, carried out on behalf of the players' union, shows that three out of four accounts reported for sending explicit abuse to players are still active on Twitter.

Using machine learning, Signify examined more than six million posts on Twitter, monitoring the accounts of players in the Premier League, Women's Super League and English Football League.

The study revealed a 48 percent increase in racist abuse online during the second half of the 2020/21 season. Notably, half of the abusive accounts identified were based in the UK.

During the entire 2020/21 campaign, Signify reported 1,674 accounts for abuse, and around a third of them were affiliated with UK-based clubs.

"The report also found that players in all leagues faced homophobic, trained and sexist abuse," the PFA said.

Signify found that homophobic comments were included in a third of all abusive posts, and that this particular type of abuse peaked in December 2020.

This spike, the PFA said, corresponded with high-profile anti-homophobia campaigns, such as the Rainbow Laces campaign.

According to the association, research shows that social media platforms still have a lot of work to do to tackle abusive content and harassment.

Only 56% of the racial abuse posts identified during the season had been removed, the PFA said. Some posts were active for months, while others stayed online for the entire season.

"The data in this report suggests that platforms are focusing on removing offensive individual posts rather than holding those who write them accountable," the PFA said.

Watford captain and PFA Player Board representative Troy Deeney said the Signify report highlights a poor response from social media companies and called for a change.

"Social media companies are big businesses with the best tech people," he said. "If they wanted to find solutions to abuse online, they could."

"When is enough, enough? Now that we know abusive accounts and club affiliations can be identified, more must be done to hold these individuals accountable," Deeney added.

PFA CEO Maheta Molango echoed Deeney's comments, noting that "the time has come to move from analysis to action."

"The PFA's work with Signify clearly shows that the technology exists to identify large-scale abuse and the people behind offensive accounts," he said.

"Having access to this data means that the consequences of online abuse can be pursued in the real world. If the players' union can do this, so can the tech giants."


In May, the PFA joined a football-wide boycott to draw attention to the extent of racist abuse on social media.

Although abusive posts declined immediately after the boycott, Signify's data shows that racist abuse of players nevertheless peaked during that same month.

Two months later, racist abuse on social media once again attracted global attention after England's defeat to Italy in the final of Euro 2020.

England footballers Marcus Rashford, Bukayo Saka and Jadon Sancho were the target of a torrent of racist and abusive comments on social media in the hours after the defeat.

The English FA condemned the abuse and called for harsher punishments for people who posted racist abuse.

In response to the public outcry, social media companies such as Twitter and Facebook pledged to do more to address the problem.

Speaking at the time, a Twitter spokesperson said the company "promptly removed" tweets containing abuse and suspended "multiple accounts."

To address abusive posts and content, Twitter says it uses a combination of machine learning-based automation and human review.

Facebook moderates content in a similar way, and provides users with additional tools aimed at stopping abuse, including its 'Hidden Words' filter.

Despite these apparent technical capabilities, former Manchester United and England defender Rio Ferdinand questioned why social media companies seem unable to monitor and address abuse.

"Now is the time to change," he said. "If we have this kind of technology at our disposal, why aren't social media companies using it to eliminate racist and discriminatory abuse?"

Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), said the research results bear similarities to his own organisation's findings. In the wake of the Euro 2020 final, the CCDH identified 105 Instagram accounts linked to racist messages sent to England footballers.

Despite this, Ahmed revealed that Instagram had only taken action against six of the reported accounts within 48 hours.

Speaking to DIGIT, Ahmed said that financial sanctions should now be considered to force social media platforms to act.

"We have heard enough from the big tech platforms; research consistently shows that there is a chronic and systemic failure to deal with hate speech and misinformation across the board," he said.

"Now is the time for the government to introduce tough financial penalties to incentivize social media companies and their billionaire owners to finally clean up their platforms."


