Composite story based on real events: Three years ago, a high‑school teacher in a modest Czech town posted an article on social media warning of alleged vaccine dangers. She had no political clout, no influencer following, and certainly no agenda. She was an ordinary woman who fell for a manipulative video and then believed that sharing it would help others. The cascade was swift: local newspapers named her, online communities dug up her home address, and her employer was flooded with hundreds of angry emails. Within a week she suffered severe psychological distress and ended up under medical care. The story didn’t spread because it was an extraordinary falsehood; it spread because it reflected our collective dilemma: where does justified defense of society end, and where does digital shaming begin?
I. The Anatomy of a Lie: Why the Brain Believes Nonsense
Before we can discuss the ethics of punishment, we need to understand the mechanism that makes punishment possible. Disinformation isn’t just “bad information” that can be swapped out for the right facts. It’s a parasite that exploits our cognitive biases, feeding on our deepest fears and desires.
Human brains evolved a cognitive shortcut mechanism—known in psychology as confirmation bias—not as a flaw but as an energy-saving adaptation. Ten thousand years ago, spotting a tiger once was enough to avoid all brushy thickets thereafter. Today that instinct means that once we believe, say, that vaccines contain microchips, the brain filters contradictory evidence as dangerous noise. Paradoxically, we end up rejecting not the lie but any correction of it. This bias acts like an automatic filter, shielding our convictions from uncomfortable facts, and in a digital landscape of unlimited information it becomes a weapon of self‑isolation in echo chambers.
Even more dangerous is the phenomenon psychologists call psychological reactance—resistance to prohibition. When someone publicly brands a conspiracy‑theory spreader as a “madman” or “criminal,” the accused’s brain doesn’t start doubting; it does the opposite. Forbidden fruit tastes sweeter, even if it’s rotten. Studies by political scientists Brendan Nyhan and Jason Reifler suggest that public shaming often reinforces the beliefs of already‑committed individuals—a backfire effect—while neutral observers simply walk away. More recent research suggests the effect is less universal than originally thought, but it has still been observed within heavily polarized groups. Shame doesn’t lead to correction; it fuels tribalism. The harder we try to convince someone they’re wrong, the deeper we entrench their convictions.
Enter social‑media algorithms, which are not neutral conduits but emotion accelerators. MIT research from 2018 (Soroush Vosoughi, Deb Roy, and Sinan Aral in Science) found that false news spreads significantly farther and faster than true news; in one measurement, true stories took roughly six times as long as false ones to reach 1,500 people. Not because falsehoods are more sophisticated, but because they provoke stronger emotions—disgust, fear, anger. Platforms optimized for user engagement thus create resonance chambers where lies bounce around, amplify, and eventually fill every available space. The algorithm isn’t looking for truth; it’s looking for reaction. Shame, scandal, and outrage become the most potent catalysts for that reaction.
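To make that objective concrete, here is a minimal, hypothetical sketch of an engagement‑optimized feed ranker. The post fields, weights, and scoring formula are our illustrative assumptions, not any real platform’s formula; the point is only that when the objective rewards predicted reactions, veracity never enters the computation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    is_true: bool   # invisible to the ranker; shown only for the experiment
    anger: float    # predicted probability of an angry reaction (0..1)
    shares: float   # predicted share rate (0..1)
    dwell: float    # predicted seconds of attention

def engagement_score(p: Post) -> float:
    # Hypothetical weights: predicted reactions and shares dominate.
    # Note that truth is not a term in this objective at all.
    return 3.0 * p.anger + 2.0 * p.shares + 0.05 * p.dwell

feed = [
    Post("Calm correction with sources", is_true=True, anger=0.05, shares=0.02, dwell=40),
    Post("Enraging (false) vaccine claim", is_true=False, anger=0.70, shares=0.30, dwell=15),
]

for p in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(p):5.2f}  true={p.is_true}  {p.text}")
# The false but enraging post ranks first: the objective rewards reaction, not truth.
```

Even this toy objective reproduces the asymmetry the MIT team measured: whatever maximizes the predicted reaction rises, and truth is simply not a variable.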
II. The Hat of Shame and Digital Pillory: Historical Parallels
Public shaming isn’t a Twitter invention. Medieval squares used the pillory and the “stone of shame” as everyday tools of public justice. The difference was stark: shame was local and time‑limited. Neighbors knew each other’s transgressions, but a month later life returned to normal. Today, digital humiliation is permanent and global.
In 17th‑century Puritan New England, judges ordered a scarlet “A” sewn onto the clothing of a convicted adulteress, believing that public punishment served two purposes: deterring others and reforming the sinner. Modern proponents of “naming and shaming” disinformation agents use similar rhetoric. The argument goes: transparency is the antidote to darkness, and sunlight is the best disinfectant.
But the digital pillory isn’t sunlight; it’s a black hole. Once a name is linked to shame in search results, it stays there forever. Here we encounter the first major ethical conflict: while a medieval judge could calibrate punishment to the severity of the crime, algorithms have no sense of proportionality. One moment of weakness, one careless share, can earn a lifelong digital scar. Medieval pillories at least had a grim courtesy: after serving the sentence, the sinner could re‑enter society. The digital pillory has no final station.
III. Argument FOR: When Silence Kills
If we refuse to name disinformation spreaders, we aren’t morally pure—we become complicit. The argument sounds harsh, but it has a logic. Imagine a doctor who deliberately promotes dangerous alternative cancer “cures.” If journalists hide his name under the banner of “privacy,” patients who trust him may die.
Society’s right to self‑defense is as legitimate as an individual’s right to privacy. When an investigative reporter exposed a network of bots pushing foreign propaganda, or an analyst identified the specific author of an anti‑vaccine campaign, it wasn’t revenge—it was a warning. A concrete precedent: the exposure of the Russian Internet Research Agency (IRA) troll farm and its influence on the 2016 U.S. presidential election. Once the organization and its tactics were named and documented, society gained tools for defense—regulators could act, platforms could improve filters, and the public built up immunity against similar manipulation. Transparency truly functioned as a protective shield.
Just as we have a right to know who the repeat offenders in our neighborhood are, we have a right to know who systematically poisons the information environment. Yet we must draw a crucial line that often blurs in the heat of emotion: the difference between identifying an author and publishing exploitable personal data (doxxing). Investigative journalism reveals the identities of public actors—politicians, lobbyists, campaign organizers. Doxxing, by contrast, releases private addresses, family phone numbers, or school locations—details that have no public relevance and serve only to intimidate and harm.
Ethical journalism walks a tightrope between these poles. When a mainstream outlet names an internet provocateur—a troll who anonymously spreads hate—is it a public service or gratuitous exposure? The answer hinges on the balance between public interest and privacy. If the individual wields influence over elections, transparency is essential. If the person is a marginal agitator whose name adds nothing to systemic understanding, the act may be more about editorial catharsis than public good. Documented cases where naming led to remediation are few, but they exist—for example, the exposure of several Czech doctors spreading COVID‑19 misinformation resulted in the removal of their posts and disciplinary proceedings, curbing harmful content within specific communities.
IV. Argument AGAINST: Weapons of Mass Identity Destruction
The philosophical case against public shaming predates the internet. Hannah Arendt warned of a “society of atomized individuals” in The Origins of Totalitarianism, where everyone becomes judge of everyone else. The digital culture of public shame has realized that vision with chilling precision.
When the digital pillory activates, it isn’t justice; it’s a collective intoxication with punishment. Psychologists describe “moral licensing”—the feeling that after punishing “the bad guy,” we’re entitled to be less critical of our own sins. Public shaming becomes a drug that masks helplessness.
Even more perilous is the Streisand effect: attempts to silence a disinformation spreader often amplify their reach. Algorithms love controversy. The more people comment on “that terrible article,” the higher the algorithm boosts its visibility. Shaming thus turns into inadvertent advertising.
We must avoid the cliché of “cancel culture.” This is deeper: a culture of digital exclusion where judgment comes not from courts but from algorithms, and punishment is disproportionate—lifelong, irreversible, and destructive to both the offender and their family. A digital footprint has no statute of limitations.
V. Unintended Victims: When the System Fails
The previous sections examined the ethics of shaming itself; now we look at a specific class of victims: those caught in the crossfire of digital punishment mechanisms due to error, technical glitch, or circumstance. Philosophical principles clash with reality when identification systems misfire.
Documented cases show innocent people being wrongly accused of spreading disinformation. One of the most heartbreaking examples followed the 2013 Boston Marathon bombings. Reddit users and others launched their own “investigations,” mistakenly identifying Sunil Tripathi—a missing college student—as a suspect. A massive wave of public shaming descended on his name and family, only for it to emerge that he had died by suicide weeks before the attacks. His family endured not only the loss of a son but also a digital lynch mob that blamed him for terrorism. Similarly, teenager Salaheddin Barhoum, whose photograph the New York Post printed on its front page as that of a suspect, faced a torrent of hatred—despite having nothing to do with the attacks.
Elderly people who don’t grasp the context of sharing, mentally ill individuals who fall prey to conspiracy theories, and hacking victims whose accounts are hijacked—all can suffer devastating consequences when named and publicly vilified. Losing a job when search engines preserve every article forever means losing one’s future. The “right to be forgotten,” enshrined in Article 17 of the EU’s GDPR, offers formal recourse against search engines but not against internet archives and media outlets, making it largely illusory. How do you forget when every version of every page is archived?
Tragically, public shaming can trigger severe mental health crises, including suicide attempts. Cyberbullying that begins with “moral appeals” and escalates to systematic life‑destroying attacks is not an outlier. The line between “raising awareness” and “lynching” is thin and is often crossed before we realize it. Technical errors, false‑positive identifications, and compromised accounts form the dark side of digital punishment that cannot be ignored.
VI. Technology: New Judge or Executioner?
Artificial intelligence promises an objective fix: algorithms that automatically flag disinformation and suppress it without human shaming. The promise is seductive but dangerously naïve.
AI‑driven detection is a black box riddled with bias. Training data often embed cultural prejudices—what’s deemed a dangerous lie in one country may be legitimate critique in another. False positives (labeling true information as false) and false negatives (letting lies slip through) have fatal consequences. When an algorithm mistakenly tags a legitimate scientific study as disinformation, it stifles science. When it lets a sophisticated lie pass, it amplifies harm.
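A minimal numeric sketch of this trade-off (all scores and labels below are synthetic, and the threshold values are arbitrary): as a flagging threshold moves, false positives and false negatives trade off against each other, and no single setting eliminates both.

```python
# Toy illustration of the false-positive / false-negative trade-off in
# automated disinformation flagging. Scores and labels are synthetic.
items = [
    # (model's "disinformation" score, is_actually_false)
    (0.95, True), (0.80, True), (0.75, False),  # a legitimate study scored high
    (0.60, True), (0.40, False), (0.30, True),  # a subtle lie scored low
    (0.20, False), (0.10, False),
]

def error_rates(threshold: float) -> tuple[int, int]:
    false_pos = sum(1 for s, bad in items if s >= threshold and not bad)  # truth flagged
    false_neg = sum(1 for s, bad in items if s < threshold and bad)       # lie let through
    return false_pos, false_neg

for t in (0.25, 0.50, 0.70, 0.90):
    fp, fn = error_rates(t)
    print(f"threshold={t:.2f}  false positives={fp}  false negatives={fn}")
```

Lowering the threshold suppresses more lies but silences more legitimate speech; raising it spares legitimate speech but lets more lies circulate. The dial can be turned, but the error it produces only changes shape, which is one reason human review of contested flags remains indispensable.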
Automation also strips away human judgment. A person can assess context—whether a share was an unfortunate mistake, whether the author regrets it, whether the individual is ill or merely misinformed. An algorithm cannot. Moreover, authoritarian regimes could weaponize such systems to silence dissent, turning a protector into a censor. Machine decision‑making lacks the nuance to distinguish dangerous propaganda from unpopular truth, error from intent. Technology, therefore, offers no moral escape route; it creates a new form of arbitrary power.
VII. Legislation: The Fragile Line Between Protection and Oppression
The European Union’s Digital Services Act (DSA) attempts to regulate the space: platforms must remove illegal content while preserving transparency. It sounds reasonable, but in practice it creates a dangerous precedent.
When a state defines what constitutes “disinformation,” it opens the door to political abuse. What if “disinformation” includes criticism of government policy? What if “preventing hate” becomes a pretext to silence independent media? Experiences from authoritarian regimes show that “fake‑news” laws often become tools against opposition.
At the same time, a regulatory vacuum yields anarchy in which the strongest algorithm wins. Solutions lie at neither extreme—neither total freedom for lies nor total state censorship. They lie in precise legal distinctions between misinformation (honest error) and disinformation (deliberate falsehood), between public interest and private persecution. Laws, however, don’t address the root cause. We need a shift in how we communicate about disinformation.
VIII. A Way Forward: Detoxification Instead of Revenge
If public shaming fails and censorship is dangerous, a third path remains: restorative narratives and digital literacy.
In post‑conflict regions of Africa and Latin America, “restorative storytelling” projects have shown promise—rather than punishing hate spreaders, they focus on their reintegration. Specifically, the organization Search for Common Ground operates community dialogue programs in Kenya, where those who participated in spreading hatred are integrated into the reconciliation process. They aren’t labeled criminals; they are invited to face the consequences of their actions directly with those they harmed. Similar initiatives emerged in Colombia after the peace accord with the FARC, where former combatants and victims co‑create reconciliation narratives. This approach demands time and resources but yields longer‑lasting effects than digital exile.
Digital literacy isn’t just the ability to tell truth from falsehood. It’s understanding why lies arise—what social pains, economic frustrations, or identity crises fuel them. When we stop seeing disinformation spreaders as “agents of evil” and start viewing them as symptoms of deeper societal problems, dialogue becomes possible.
That doesn’t mean tolerating harm. It means seeking treatment instead of vengeance. It means media should allocate more space to facts and less to sensational “villain” profiles—analyzing the systems that propagate falsehoods rather than merely showcasing individual “sinners” for clicks. It means building a society that can resist lies without destroying lives.
Conclusion: A Society Without Pillories?
We may not be able to break the mirror of digital memory, but we can at least choose how to angle it. That memory is relentless—every decision to name or to stay silent leaves a permanent imprint. If we condemn the innocent, they remain condemned forever. If we protect the guilty, they remain dangerous. Yet we need not stand helpless before that mirror, staring into our own impotence. We can choose how to light it—whether with a blinding spotlight that burns or with a soft glow that reveals without annihilating.
The ethical battle against disinformation isn’t a war of “good versus evil.” It’s a perpetual balancing act of legitimate values: society’s right to defend itself versus the individual’s right to err and be corrected. Public shaming is easy—it offers instant emotional satisfaction and an illusion of control. Easy paths often lead to hell.
Can we protect society from the poison of disinformation without turning it into a court of judges and executioners? Can we speak the truth without destroying the lives of those who are mistaken? The answer does not lie in more laws or in smarter algorithms. It lies in our ability to resist the lure of the public pillory and in the courage to defend the world’s complexity against the craving for simple villains. Because once condemnation becomes a routine tool of self-defense, its blade will sooner or later fall on those we meant to protect.
Content Transparency & AI Assistance
How this article was created:
This article was generated with artificial intelligence assistance. Specifically, we employed an agentic workflow composed of eight language models running in the OpenWebUI application. Our editorial team established the topic, research direction, and primary sources; the AI then generated the initial structure and draft text.
Want to learn more about the process?
Read our article:
Agentic Workflow on limdem.io: how eight AI specialists and a human editor co‑create deep popularization articles
Editorial review and fact-checking:
- ✓ The text was editorially reviewed
- ✓ Fact-checking: All key claims and data were verified
- ✓ Fact corrections and enhancement: Our editorial team corrected factual inaccuracies and added subject matter expertise
AI model limitations (important disclaimer):
Language models can generate plausible-sounding but inaccurate or misleading information (known as “hallucinations”). We therefore strongly recommend:
- Verifying critical facts in primary sources (official documentation, peer-reviewed research, subject matter authorities)
- Not relying on AI-generated content as your sole information source for decision-making
- Applying critical thinking when reading
Used language models:
| Role | Model | License |
|---|---|---|
| 🧠 Planner | deepseek-ai/DeepSeek-R1 | MIT License |
| 🔍 Proofreader | zai-org/glm-5:thinking | MIT License |
| ✍️ Writer | moonshotai/kimi-k2.5:thinking | Modified MIT License |
| 🔍 Fact-checker A | deepseek/deepseek-v3.2 | MIT License |
| 🧠 Fact-checker B | minimax/minimax-m2.5 | MiniMax Model License |
| 📝 Fact-checker C | qwen/qwen3.5-397b-a17b-thinking | Apache 2.0 |
| 👔 Supervisor | nousresearch/hermes-4-405b | Llama 3.1 Community License |
| 🌍 Translator | openai/gpt-oss-120b | Apache 2.0 |
Source code of the workflow used:
`limdemioarticlewriterprov27frontier.py`