How the 2026 Social Media Safety Index exposes rising harms to LGBTQ users on Meta and major platforms

GLAAD’s research reveals policy backslides, mass account removals, and AI-driven moderation problems that are harming LGBTQ communities online

The 2026 Social Media Safety Index from GLAAD confirms what many in the LGBTQ community have been warning about: recent policy changes at major tech companies have made some platforms less safe for queer users, particularly for transgender and non-binary people. GLAAD assessed a range of services — including TikTok, YouTube, X, and the Meta family of apps (Facebook, Instagram, and Threads) — and found broad declines in platform performance on basic safety measures. The study highlights not only the prevalence of hateful content and disinformation but also systemic moderation practices that disproportionately silence LGBTQ expression.

Across the evaluated services, only TikTok improved its score; every other platform's score fell in 2026, with some reaching historic lows. The report documents concrete consequences: algorithmic amplification of harmful narratives, automated removals linked to unreliable AI, and diminished corporate commitments to Diversity, Equity, and Inclusion (DEI). These shifts have translated into real-world impacts — from targeted harassment to the erasure of community spaces that previously relied on social platforms for support and outreach.

Key findings from the 2026 index

GLAAD’s analysis highlights a cluster of interrelated problems. First, the early 2026 policy rollbacks at Meta and YouTube — including alterations to what counts as hate speech and the removal of explicit protections for gender identity — remain in force and continue to endanger LGBTQ users. Second, platforms are increasingly opaque about enforcement decisions, algorithmic logic, and the use of AI in moderation. Third, the decline in workplace commitments such as DEI has weakened internal expertise and safeguards that once helped mitigate bias. Collectively, these changes mean more hate, less clarity, and fewer avenues for harmed users to contest decisions.

Platform scores and notable declines

The 2026 scorecard assigns platforms numeric ratings that reflect performance across safety, transparency, moderation, and privacy criteria. The headline scores were: TikTok 56, Instagram 41, Facebook 40, Threads 39, YouTube 30, and X 29. While TikTok was the only service to avoid a year-over-year fall, several major players landed near the bottom of the scale, signalling systemic failures. GLAAD notes that reduced policy protections combined with weak enforcement have produced a landscape where shadowbanning and disproportionate content suppression are common complaints from LGBTQ creators and organisations.

Policy rollbacks and concrete harms

Changes implemented in early 2026 at Meta included revisions to its hateful conduct standards, the elimination of many global protections for LGBTQ people, the termination of certain DEI programs, and the end of a U.S. fact-checking initiative. Around the same period, YouTube removed gender identity from its list of protected characteristics in hate policy. These rollbacks coincided with waves of account removals: in December 2026 dozens of Instagram profiles belonging to queer performers, BIPOC events, sex-positive entrepreneurs and pole dance professionals were suspended or disabled, sometimes flagged with severe allegations such as “human exploitation”. Affected creators reported no practical appeal routes, leaving community hubs and livelihoods suddenly erased.

Moderation, AI, and the wider political context

GLAAD identifies several amplification mechanisms that make online harms worse. Over-reliance on automated systems means AI frequently generates false positives, removing or restricting legitimate LGBTQ content and accounts instead of routing problematic content for human review. The report also links political trends — notably shifts in the U.S. political climate since January 2026 — to a rise in coordinated anti-LGBTQ campaigns that exploit platform weaknesses. Independent trackers cited by GLAAD documented enormous volumes of anti-LGBTQ content, including more than 97,000 posts identified by the Institute for Strategic Dialogue (ISD) around the 2026 election period, while GLAAD's ALERT Desk recorded over 1,000 anti-LGBTQ incidents in 2026.

Compounding the issue, new forms of abuse such as non-consensual intimate images (NCII) and deepfake material surged, with documented incidents tied to generative systems in late 2025 and early 2026. These harms are magnified by policy and design choices that limit transparency and restrict meaningful user control over data — concerns GLAAD frames as both a privacy and a safety crisis.

Recommendations and what communities can do

GLAAD’s recommendations call for immediate action: platforms should restore and strengthen protections against anti-LGBTQ hate, reinstate robust DEI practices, minimize harmful automated removals by ensuring AI flags content for human review, and publish clearer enforcement reporting and algorithmic transparency. The index urges better moderator training focused on LGBTQ safety, stronger data privacy safeguards, and careful drafting of any age-related or safety laws so they do not inadvertently harm LGBTQ youth who rely on online communities for support. Advertisers and civil society are also encouraged to pressure companies to demonstrate meaningful safety commitments rather than mere rhetoric.

For those seeking the full dataset and platform-by-platform analysis, GLAAD’s 2026 Social Media Safety Index provides detailed scorecards and policy recommendations. Community outlets such as QNews continue to document firsthand consequences of enforcement shifts, and advocates urge users and creators to report harms, demand transparency, and keep pressure on platforms and policymakers to restore safer online spaces.

Written by Francesca Lombardi