How AI-driven fake posts about celebrities reveal gaps in free speech and digital literacy

Social feeds are filling with AI-crafted falsehoods about public figures that inflame anti-LGBTQ+ sentiment; experts argue the fix lies in stronger digital literacy and careful protection of free speech

The social media ecosystem has recently seen a surge of AI-generated stories alleging that well-known performers and public figures hold anti-LGBTQ+ views. These posts — often accompanied by convincing images and fabricated quotes — circulate rapidly on Facebook and other networks, reaching users who may not check their sources. Although many items are easily disproved through a quick search, the combination of emotionally charged content and viral amplification means damage can spread before corrections arrive. The phenomenon highlights how modern disinformation exploits both technical tools and human biases to reshape reputations.

What makes the issue especially complex is the policy backdrop. Following the 2024 U.S. presidential election, Meta announced it would end its formal third-party fact-checking programme, a move its leadership framed as a pushback against what it called excessive censorship. That policy shift removed a visible layer of accountability for viral claims and left moderation choices more in the hands of automated systems and platform discretion. The result is a fertile environment for AI-manufactured hoaxes that amplify divisive narratives, frequently targeting minority communities such as the LGBTQ+ population.

The anatomy of online falsehoods

At the technical level, several forces combine to make false stories effective. Cheap content production tools let bad actors create realistic images and composite videos; algorithmic ranking rewards engagement, not accuracy; and emotionally loaded messages are more likely to be shared. The distinction between misinformation and disinformation is central here: the former spreads through error or misunderstanding, while the latter is intentionally manufactured to persuade or manipulate. Platforms that once relied on editors, broadcasters and validators as imperfect gatekeepers now face a landscape where any motivated individual or group can reach millions with minimal cost.

How celebrities become vectors

Typical examples include fake posts that place allies such as Cher or Pink in fabricated contexts, or that attach spurious quotes to figures like Pedro Pascal and Mick Jagger to suggest they hold views they do not. Earlier incidents, such as false claims about Sam Smith, show the pattern is not new — only the tools have become more powerful. These items frequently echo political talking points and can inflame reactions in comment threads, creating the impression of organic controversy even when none exists. Importantly, many of these claims are simple to refute with basic verification, but the momentary emotional impact has already done harm.

Free speech, censorship and democratic risk

Debates about how to respond often split along a basic tension: policing content can slide toward censorship, but laissez-faire platforms risk letting targeted lies reshape public debate. Scholars warn that handing governments or unaccountable corporations broad authority to label and remove content can be misused against minorities, whistle-blowers or dissenting voices. At the same time, doing nothing lets weaponised narratives distort civic life. This trade-off underpins a wider conversation about the responsibilities of platforms, the rights of speakers, and the conditions necessary for healthy democratic deliberation.

From regulation to civic skill-building

Many academics argue that the most sustainable response is not stronger censorship but a focus on education: equipping citizens with critical habits that blunt the impact of disinformation. A proposal gaining traction in philosophical and policy circles reframes digital competence as a civic duty on a par with voting or jury service. That means teaching people to inspect sources, recognise motivated reasoning, test claims against primary documents, and demand transparency from both platforms and public institutions. The aim is to cultivate epistemic virtues that protect democratic decision-making without concentrating censorship power in any one authority.

Scholarly engagement and opportunities to contribute

Those ideas are central to a workshop organised for the IVR World Congress in Istanbul, scheduled for 28 June–3 July 2026. The session, chaired by Oscar Pérez de la Fuente and Enrique Armijo, invites research on the intersections of democracy, free speech, disinformation and digital literacy. Contributors are asked to submit a 300–400 word abstract, a title and a short bio by emailing [email protected] before the deadline of 20 March 2026. Organisers plan to select presented work for a collective volume or a special journal issue with a reputable publisher.

The current moment is a reminder that technology can both empower and erode civic life. The spread of AI-generated libels on social platforms shows how quickly public perceptions can be bent; the policy choices of major companies determine how easy that bending becomes. Real resilience will come from a mix of clearer platform accountability, legal safeguards for rights, and broad-based digital literacy that helps citizens detect manipulation. For those interested in shaping this research agenda, the IVR workshop offers a concrete avenue to contribute evidence-based ideas toward preserving open, fact-informed democratic spaces.

Written by Viral Vicky