
Racism in football - the battle on social media: a summary of legal and technological tools

Insight

The scourge of racism has reared its ugly head again recently in football, with a series of high-profile instances of shocking abuse on social media. Indeed, it seems that barely a round of fixtures goes by without another example (Anthony Martial being the latest victim, following Manchester United's 1-1 draw with West Bromwich Albion on 14 February 2021). Players across the men's and women's games (including Marcus Rashford, Paul Pogba, Reece James and Lauren James) have been subjected to unacceptable online content in the last few weeks, leading to calls for more action by social media platforms in particular.

While not a novel problem (and sadly it does feel like a conversation that has been had multiple times before), this points to ongoing deep-rooted cultural issues that are not specific to football and which must be overcome. However, the immediate task is to protect players from racist and other online abuse in the here and now. It is worth saying that this issue is not exclusive to social media, and there are plenty of recent examples of unacceptable behaviour by fans in grounds all over the world (England's game in Bulgaria in October 2019 being one infamous example). But the absence of fans from stadiums has brought online content into sharper focus, with abuse taking the form of tweets and other posts, rather than screamed obscenities from the stands. 

Here we consider some of the technological and legal tools that can be used to root out racist behaviour (and other forms of abuse and harassment online). We recognise that these must of course be combined with political and social pressure, as well as education, but they all form part of the solution.

Challenges

The founding origins of most social media platforms pose an additional level of complexity: before they attained multi-billion dollar corporate status, the platforms' “raisons d'être” grew from libertarian, free speech values. For the US platforms, this is also derived from the First Amendment, the part of the US Constitution that provides very strong protections for freedom of speech. US legislation (section 230 of the Communications Decency Act 1996 in particular) states that content hosts (such as social media platforms) should not be treated as publishers, giving them effective immunity from legal proceedings brought in the US concerning user-generated content. The position is very different in the UK and EU, but for many years these cultural, constitutional and legislative factors informed the big technology platforms' approach to requests to remove content in any jurisdiction. Thanks to legal developments and public pressure, as well as a recognition on the platforms' part that they need to do more, this is beginning to change, but there remains a long way to go.

The ability of users to post anonymously, and the sheer volume of content, also remain central to the challenges facing institutions, organisations and individuals who have been subjected to racist (and other) abuse on social media. Users who feel able to hide behind a keyboard and a pseudonym unquestionably engage in more brazen practices than they would in person. Likewise, the misuse of VPNs (virtual private networks) and the creation of false email addresses linked to “burner numbers” have facilitated a multitude of “fake” accounts across all platforms. The prevalence of this general trend of behaviour online has created the impression that those who wish to engage in bigotry will remain untouched and unaccountable for their actions. Increasingly, groups and individuals are also joining what used to be perceived as peripheral, anomalous platforms, such as Gab or Telegram, which have now been elevated to mainstream status (ex-President Trump’s arrival on Gab serving as a recent example). The misuse of these platforms (and their unwillingness to take preventative action) has been well documented in myriad circumstances (most recently the events of 6 January 2021 on Capitol Hill).

Finally, sports organisations and athletes/players have also recognised that the international reach and the brand and commercial benefits associated with social media engagement are modern-day necessities. Indeed, social media has had very positive impacts on the ability to engage with fans across the world and has made players more accessible. The unacceptable trade-off, to date at least, has been the exposure of players to racist and other forms of abuse.

How can technology assist?

Jamie Barter, Managing Partner at Harod Associates, a digital investigations and intelligence company based in London, provides his insights.

Frequently, software has only been able to track and report trends of racist sentiment and rhetoric, rather than focusing on the individuals responsible. Unique proprietary technology now empowers organisations, clubs and individuals who have been subjected to any form of bigotry to pursue and expose those responsible in a lawful and ethical manner. The platform identifies relevant connections between users and cross-references this information against millions of data sources at a speed that would be impossible for a human. The use of this technology is becoming increasingly relevant against the backdrop of further policing cuts, an underlying misunderstanding of social media networks and a lack of access to the apparatus and technology needed to combat such bigotry.

With access to over 100 social media platforms, combined with image and emoji recognition systems and a bots/avatar database, the system maps out network intelligence, enabling cross-referenced identification at lightning speed. This even makes it possible to cross-compare the networks of repeat offenders who have had previous accounts banned and reappear under a new name, using a VPN.

The purpose of the technology is to gather intelligence on those users engaging in racist activities, as well as identify aliases they may be using and groups they are members of, on other social media platforms or areas of the web. This intelligence is invaluable to informing what steps clubs, governing bodies and other organisations might take. The ultimate idea, of course, is to force accountability on people who hide behind unknown or anonymous accounts.
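Harod's platform is proprietary and its internals are not public, but the cross-referencing idea described above can be illustrated in outline. The following is a minimal Python sketch, not the actual system: it uses hypothetical account names, treats each account's network as a simple set of connections (followers, group memberships) gathered from public sources, and flags pairs of accounts whose networks overlap strongly enough to suggest the same person is behind both.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets of connections (followers, group memberships)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical data: each account mapped to the set of accounts/groups it
# is publicly connected to, collected across platforms.
networks = {
    "banned_account": {"fan_group_1", "user_a", "user_b", "user_c"},
    "new_account_x": {"fan_group_1", "user_a", "user_b", "user_d"},
    "unrelated_account": {"other_group", "user_e"},
}

# Flag pairs whose networks overlap strongly enough to suggest a repeat
# offender re-appearing under a new name (threshold is illustrative only).
THRESHOLD = 0.5
for (name_a, net_a), (name_b, net_b) in combinations(networks.items(), 2):
    score = jaccard(net_a, net_b)
    if score >= THRESHOLD:
        print(f"possible alias: {name_a} <-> {name_b} (similarity {score:.2f})")
```

In practice an overlap score of this kind would be one signal among many (image and emoji recognition, posting patterns, bot/avatar databases), feeding human review rather than automated accusations.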

Existing legal tools and new developments

While court orders can be sought against social media platforms to reveal information they hold on users who engage in unlawful activity, the process can be an expensive one and often only leads to the revelation of fictitious email addresses. There have been some calls for platforms to require proper identification details when users set up accounts, but this is set against concerns about data privacy. In some cases, these third-party disclosure orders are useful tools, but high-speed digital intelligence is often a more expeditious route to identifying those responsible.

Racist content on social media can be the subject of criminal proceedings, and there are a variety of offences that might apply (including harassment, hate crime, section 1 of the Malicious Communications Act 1988 or section 127 of the Communications Act 2003). However, while there have been examples of arrests in recent cases, doubts remain as to whether the police (and Crown Prosecution Service) have the resources to deal with all such instances.

Turning to the civil (non-criminal) law, individuals or organisations may be able to argue that repeated abusive posts by a user constitute harassment (which is a civil cause of action as well as a criminal offence), are defamatory, or infringe an individual's data protection rights. That said, it is rarely in the best interests of a player or club to launch costly legal proceedings against individual users unless the actions form part of a sustained campaign that is causing material harm to reputation and/or significant distress. The wide reach of the General Data Protection Regulation has made it considerably more straightforward to require platforms themselves to remove content relating to individuals; so-called “right to be forgotten” cases against Google and social media platforms are very common, although few reach the courts. Meanwhile, clubs and governing bodies can of course impose lifetime banning orders on those found responsible.

However, the most important legal development is the Online Safety Bill that is due to be enacted by the UK Parliament this year. This legislation will introduce a new regulatory framework in the UK, imposing a duty of care on companies (including search engines and social media platforms). The purpose is to improve the safety of users by requiring platforms to have robust processes that provide protection against a range of online harms (including hateful content such as racism, activity affecting children and disinformation). The duty of care will involve taking action to prevent user content from causing physical or psychological harm to individuals, and it is envisaged that this will include content moderation but also an obligation to have effective and accessible reporting and redress mechanisms (something which has been a frequent source of complaint against social media platforms). What the Online Safety Bill will not do is create new avenues to sue technology companies; instead, the idea is that the framework (which will be overseen by Ofcom as regulator) will create a greater degree of accountability. It is currently intended that Ofcom will have the power to issue fines of up to £18 million or 10 per cent of annual global turnover, whichever is higher, in respect of non-compliance. Unquestionably, this is an ambitious piece of legislation, and it is to be hoped that it will make a significant contribution to the fight against racism in sport.
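By way of illustration only, and assuming the “whichever is higher” formulation set out in the government's published proposals, the maximum penalty would scale with a platform's size along these lines:

```python
def max_fine_gbp(annual_global_turnover_gbp: float) -> float:
    """Illustrative ceiling on the proposed Ofcom fines: the greater of
    £18 million and 10% of annual global turnover (assumed formulation)."""
    return max(18_000_000, 0.10 * annual_global_turnover_gbp)

print(max_fine_gbp(500_000_000))  # 50000000.0 - the 10% limb bites for large platforms
print(max_fine_gbp(50_000_000))   # 18000000.0 - the £18m floor applies to smaller ones
```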

With special thanks to Jamie Barter, Managing Partner at Harod Associates, for co-authoring this article.

If you require further information about anything covered in this briefing, please contact Tom Rudkin, or your usual contact at the firm on +44 (0)20 3375 7000.

This publication is a general summary of the law. It should not replace legal advice tailored to your specific circumstances.

© Farrer & Co LLP, February 2021


About the authors

Thomas Rudkin

Partner

Tom is a leading reputation, media and information lawyer. He advises the firm’s clients on all issues relating to their reputation, privacy, confidential information and data. Tom is a member of the firm’s Reputation Management and Data, IP and Technology Disputes practices.

Email Thomas +44 (0)20 3375 7586