The story first surfaced in January, when QNews documented a cluster of account removals aimed at LGBTQIA+ performers, BIPOC creators, sex workers, and pole-dancing communities, all accused, often inaccurately, of human exploitation or trafficking. Those affected described the experience as bewildering and isolating; many were locked out with no clear explanation or route to appeal. The pattern resurfaced publicly after the deletion of a high-profile brand account, but that event was not an isolated glitch: it sits within a longer-running trend tied to platform governance and automated enforcement.
The practical fallout has been severe for people who rely on social networks for income, promotion and community support. Performers such as Sydney-based Basjia have reported multiple suspensions across professional profiles after losing both a primary account and the account for the platform known as The Pyramid. Queer artists, nightlife promoters and dancers continue to share similar experiences, and comedians including Alexis Sakellaris have reportedly lost both main and backup profiles. Even established sexual wellness brands have been swept up: Bellesa Boutique, an LGBTQIA+-focused company with over 700,000 followers, says its account was permanently removed after Meta flagged the term “clitoris” as “sexually explicit language”, illustrating how moderation can catch both individuals and organisations.
Scale and geographic spread
The removals are not limited to one country or sector. Reporting by The Guardian in late 2026 documented what campaigners called one of the “biggest waves of censorship” in years, counting more than 200 incidents in a single year affecting abortion access providers, reproductive health groups and queer organisations. Similar stories have emerged from Canada, the UK and across Europe, while Australian HIV and queer health organisations have publicly described how content rules and enforcement actions have disrupted health education and access to care. Together these accounts suggest a global pattern in which community organisations and marginalised individuals are disproportionately affected.
How the moderation system fails
Researchers and advocates point to a predictable cluster of failures: heavy reliance on automated moderation, policy language that is broad or ambiguous, and insufficient human oversight or transparency in decision-making. Automated filters often misread context, while rigid enforcement escalates removals without meaningful checks. Terms like “shadowbanned” have become common shorthand for content being throttled or accounts being hidden without notification. The result is a system that tends to punish those already working at the edges of visibility rather than protect them.
Real-world consequences
The impact of account removals extends beyond vanity metrics. For many performers and organisers, social platforms are lifelines for ticket sales, health outreach and audience building. Reported outcomes include interrupted income streams, erased archives of work, and curtailed access to sexual health information for communities that rely on digital outreach. Performers such as Cleo Rapture have told QNews that the removals make it almost impossible to promote shows or present creative work, while organisations report disrupted channels for client education and support. These accounts show how digital governance decisions translate into tangible harm.
Accountability and routes forward
Attempts to obtain clarity have largely stalled. Those affected, along with journalists and political representatives who have reached out to Meta, report receiving no substantive explanations or remedies. That lack of response highlights the need for stronger demands for transparency and community accountability. Practical steps include documenting wrongful bans, amplifying affected voices, pressing policymakers to impose meaningful oversight of platform rules, and urging companies to build human-led review paths that recognise the specific contexts of sex-positive, queer and BIPOC communities.
What readers and allies can do
Change will require collective action: keep records of removals, share verified accounts of harm, support creators financially off-platform, and lobby for clearer moderation standards. Platform policy should not be set privately by a single corporation without independent oversight; decisions about who can exist online have real-world consequences. In the meantime, publications such as QNews continue to track developments, and users should consider diversifying their channels and backing organisations that centre marginalised communities. The goal is straightforward: refuse opaque systems that erase vulnerable voices and demand accountability.

