
GLAAD has published the 2025 edition of its Social Media Safety Index, and it shows that the six biggest social media platforms are doing nowhere near enough to keep LGBTQIA+ users safe.
The report looks at the policies and protections of TikTok, Facebook, Instagram, Threads, YouTube, and X, ranking them with a score out of 100. X comes in last place with a score of just 30, and with the highest score being a mere 56 out of 100, it’s clear there is a lot of work to do.
GLAAD, the world’s largest lesbian, gay, bisexual, transgender and queer (LGBTQ) media advocacy organization, used 14 indicators to rank each of the main social media platforms. Key among them was whether or not the platforms have public-facing policies that protect LGBTQ people from hate, harassment, and violence, as well as whether they have policies related to “targeted misgendering” and “targeted deadnaming”.
This is the fifth annual report from GLAAD. President and CEO Sarah Kate Ellis explains its importance, saying:
Recent years undeniably illustrate how online hate speech and misinformation negatively influence public opinion, legislation, and the real-world safety and health of lesbian, gay, bisexual, transgender, and queer (LGBTQ) people. The landscape of social media platform accountability work has shifted dramatically since GLAAD’s first SMSI report in 2021, with new and dangerous challenges in 2025.
Meta is singled out for having “recent major ideological shifts”. The report notes that the company “announced it would retreat from established norms of trust and safety in favor of welcoming hate speech, and further place the onus on users to block blatantly harmful content that would otherwise violate its policies”.
Key findings from the report include:
The most notable highlight of the 2025 research is a pair of findings: in addition to inadequate moderation of harmful anti-LGBTQ material (for example, see GLAAD’s 2024 report, Unsafe: Meta Fails to Moderate Extreme Anti-trans Hate Across Facebook, Instagram, and Threads), platforms also frequently over-moderate legitimate LGBTQ expression. This includes wrongful takedowns of LGBTQ accounts and creators, mislabeling of LGBTQ content as “adult” or “explicit,” unwarranted demonetization of LGBTQ material, shadowbanning, and other kinds of suppression of LGBTQ content. (Such unwarranted restrictions occur with non-LGBTQ content as well.)
The takeaway from the report is that everyone involved needs to do better, but X, Threads, and YouTube fare particularly poorly.
GLAAD makes several recommendations:
- Strengthen and enforce (or restore) existing policies and mitigations that protect LGBTQ people and others from hate, harassment, and misinformation; while also reducing suppression of legitimate LGBTQ expression.
- Improve moderation by providing mandatory training for all content moderators (including those employed by contractors) focused on LGBTQ safety, privacy, and expression; and moderate across all languages, cultural contexts, and regions. AI systems should be used to flag for human review, not for automated removals.
- Work with independent researchers to provide meaningful transparency about content moderation, community guidelines, development and use of AI and algorithms, and enforcement reports.
- Respect data privacy. Platforms should reduce the amount of data they collect, infer, and retain, and cease the practice of targeted surveillance advertising, including the use of algorithmic content recommender systems, and other incursions on user privacy.
- Promote and incentivize civil discourse including working with creators and proactively messaging expectations for user behavior, such as respecting platform hate and harassment policies.
The full report is available to view here (PDF).
Image credit: Manuel Bejarano / Dreamstime.com