FIFA reiterates commitment to football without hate
Social Media Protection Service (SMPS) implemented at the FIFA Club World Cup 2025™
SMPS has been permanently available to all 211 FIFA Member Associations and their players since 2024
As part of its efforts to protect participants and make football a safe space, FIFA has marked this year's International Day for Countering Hate Speech by reiterating its commitment against hate in football and disclosing the latest figures from its Social Media Protection Service (SMPS), which is in place at the FIFA Club World Cup 2025™. An enhanced service covers the 32 teams participating in the FIFA Club World Cup™, as well as 2,019 accounts belonging to players, coaches and officials.

Since its launch at the FIFA World Cup 2022™ in Qatar, the SMPS has analysed 33 million posts and comments on 15,302 accounts across 23 tournaments, qualifiers and friendlies. The service - which has been permanently available to all 211 FIFA Member Associations (MAs) and their players since 2024 - has hidden over ten million abusive comments from public view, protecting the intended targets and their friends, family and followers from potential psychological harm.

Abusive content deemed to have broken a platform's terms of service is reported to the social media platforms, triggering concrete actions, including account suspensions; when the relevant threshold is met, information is also submitted to law enforcement authorities for further action. Strengthening the link between football authorities and the respective justice system of each MA is critical to taking the fight against online abuse forward.
About the Social Media Protection Service
What is the FIFA Social Media Protection Service (SMPS)?
The SMPS protects players, teams and officials from online abuse, keeping their social feeds free from hate and allowing them to enjoy taking part in FIFA events. It also stops their followers from being exposed to abusive, discriminatory and threatening posts, preventing the normalisation of this kind of behaviour.
Step 1: Monitor participants' public accounts for abusive, discriminatory and threatening comments and replies.
Step 2: Moderate abusive and offensive comments and replies by instantly and automatically hiding them, where the account owner has provided permission to do so.
Step 3: Report comments and replies directly to social media platforms for further action where they are deemed to have broken the platforms' respective terms of service. Submit the relevant information to law enforcement authorities.
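The three steps above amount to a monitor-moderate-report pipeline. As a rough illustration only, the workflow can be sketched in Python; the keyword-based detection, the opt-in flag and the reporting hooks below are invented placeholders, not the service's actual methods:

```python
from dataclasses import dataclass

# Hypothetical sketch of the SMPS workflow described above.
# The detection list and flags are placeholders for illustration;
# the real service uses far more sophisticated detection.
ABUSIVE_TERMS = {"hateful-term"}  # placeholder detection list

@dataclass
class Comment:
    author: str
    text: str
    hidden: bool = False
    reported: bool = False

def is_abusive(comment: Comment) -> bool:
    # Step 1: monitor - flag comments that match the detection list.
    return any(term in comment.text.lower() for term in ABUSIVE_TERMS)

def moderate(comment: Comment, owner_opted_in: bool) -> None:
    # Step 2: moderate - hide abusive comments automatically,
    # but only where the account owner has given permission.
    if is_abusive(comment) and owner_opted_in:
        comment.hidden = True

def report(comment: Comment, breaks_terms_of_service: bool) -> None:
    # Step 3: report - escalate to the platform (and, where the legal
    # threshold is met, to law enforcement) for further action.
    if is_abusive(comment) and breaks_terms_of_service:
        comment.reported = True

comments = [Comment("fan1", "great match!"), Comment("troll", "hateful-term")]
for c in comments:
    moderate(c, owner_opted_in=True)
    report(c, breaks_terms_of_service=True)

print([(c.text, c.hidden, c.reported) for c in comments])
```

The clean comment passes through untouched, while the flagged one is both hidden from public view and marked for reporting, mirroring how moderation protects followers even before any platform action is taken.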