The Liar’s Dividend: How Disinformation Erodes Trust and Shields Deceit
The liar's dividend lets scandal-plagued politicians and others claim plausible deniability for real actions. (📷:politicamericana)
In an era of ubiquitous misinformation, a new phenomenon has emerged: the liar’s dividend. In simple terms, the liar’s dividend occurs when bad actors dismiss real news as “fake”, using the very existence of deepfakes and false content to muddy the waters and evade accountability. This means even genuine evidence (videos, audio recordings, or photos) can be shrugged off as forgeries. Researchers warn that as society becomes more aware of sophisticated deepfake and AI-manipulated media, cynical public perceptions may grow, “primed to doubt the authenticity of real audio and video evidence”. In practice, the liar’s dividend is already undermining confidence in media and institutions: by casting doubt on truth itself, it lets scandal-plagued politicians and others claim plausible deniability for real actions. As noted by scholars, people can learn that deepfakes are increasingly realistic, making false claims that true content is AI-generated “more persuasive”.
Origins and Definition
The term “liar’s dividend” was coined in the context of the deepfake threat. Law professors Bobby Chesney and Danielle Citron (2019) first flagged this dynamic: if the public expects sophisticated video forgeries, liars can exploit that awareness to dodge responsibility. Britannica succinctly defines the liar’s dividend as cases where actors claim that real information is mis- or disinformation, thereby “muddying the waters” so truth-tellers lose credibility. In other words, the existence of fake news and manipulated media provides cover: a political leader facing incriminating evidence can simply say, “That recording isn’t real – it’s a deepfake”, even if the evidence is genuine. Experts describe it as a perverse reward for liars: deepfakes make it easier for wrongdoers “to avoid accountability for things that are in fact true”. Thus the liar’s dividend is not merely about creating lies, but about weaponising doubt and distrust to turn any factual claim on its head.
Deepfakes and the Erosion of Trust
Deepfakes and generative AI lie at the heart of this phenomenon. These technologies can fabricate convincing images, videos, or audio of public figures, magnifying uncertainty about reality. As generative AI proliferates, the public’s trust in what they see and hear weakens. Chesney and Citron warn that once people “have difficulty believing what their eyes or ears are telling them — even when the information is real”, the spread of deepfakes “threatens to erode the trust necessary for democracy to function”. Corporate risk analysts echo this concern: they note that as fake content grows, “individuals find it easier to dismiss even genuine content as fabricated, undermining trust in media and information sources”. In effect, every authentic news clip or expert report gains a preemptive cloud of suspicion.
This distrust compounds existing media scepticism. Even before recent AI advances, trust in shared facts had been decaying: surveys show that majorities of people in many countries report encountering made-up news daily. The liar’s dividend rides this wave of cynicism by exploiting doubts sown by terms like “fake news” and long-standing partisan attacks on the media. For example, U.S. President Donald Trump popularised “fake news” as a slur against outlets, weakening faith in reporting. As Chesney and Citron observe, educating people about deepfakes can perversely empower liars. They write: “liars aiming to dodge responsibility for their real words and actions will become more credible as the public becomes more educated about the threats posed by deep fakes”. In practice, a video or audio piece that should clarify the truth instead becomes suspect by default.
Political and Media-System Impacts
The liar’s dividend has stark political implications. Researchers have tested how claims of misinformation about true events affect public opinion. In one major study of U.S. politics, experiments with over 15,000 Americans found that false claims that true scandals were fake did pay a “liar’s dividend”, especially when the scandal was reported in text rather than video. Specifically, when politicians flatly declared a negative story was “misinformation” or a “deepfake”, their supporters tended to rally behind them, boosting approval even in the face of real wrongdoing. Notably, these support gains often exceeded what honest responses (like apology or denial) achieved. A Brookings commentary summarising such research notes that this effect “occurs across the political spectrum”: any public figure, left or right, can sometimes trick followers into discounting true allegations.
The key is the medium. In this 2024 study, bogus claims of misinformation failed to sway people when confronted with clear video evidence. Text articles can be more plausibly labelled “fake” than a news broadcast or viral clip. Even so, experts caution that as deepfake tech improves, visual evidence may become less convincing. For now, however, strong visual proof (like a grainy-but-real news clip) still holds more sway: fabricated denials get less traction when actual video is available. Politicians have even invoked the liar’s dividend in court. For example, a January 6 insurrection defendant tried to dismiss incriminating video as a deepfake, and Tesla has argued Elon Musk’s statements on safety shouldn’t count as evidence because they “could be deepfakes”.
Critically, research so far suggests the liar’s dividend does not always further erode general media trust. The aforementioned study found no consistent drop in people’s overall trust in the press when they saw leaders accuse media of faking the news. In other words, telling lies about lies seems to convince some partisans or sceptics enough to boost leaders’ popularity, without simultaneously sinking everyone’s faith in news organisations. Yet experts warn this could change: in societies where trust is already very low, even small shifts could undermine social cohesion. A European Parliament report explicitly flagged the liar’s dividend as a grave risk: it notes that “politicians will be able to deny responsibility by suggesting that a true audio or video content is false, even if it is true”, much like “fake news” has been used to deflect media reporting. Over time, habitual denial of facts (video after video brushed off as “fake”) could deepen cynicism and empower autocrats who thrive on disinformation.
Global Examples and Context
Though much of the research originates in the United States, the liar’s dividend is a global concern. From Europe to Asia to Africa, political figures are leveraging false claims to sow doubt. In Spain, former Foreign Minister Alfonso Dastis infamously dismissed clear video of police crackdowns as “fake photos”. In India, a state legislator insisted an audio recording implicating him was AI-generated, despite experts confirming its authenticity. Human rights advocates document similar patterns worldwide: reports from organisations like WITNESS highlight cases in the U.S., Brazil, sub-Saharan Africa, and Southeast Asia where criminals or officials label real evidence a “deepfake” to escape justice. They warn that this “ability to claim plausible deniability on any compromising content … is a threat to a diverse civic sphere and to free expression”.
Beyond individual cases, international observers note systemic impacts. EU policy studies emphasise that truth-scepticism (the liar’s dividend’s backdrop) is already eroding the informational environment, especially where populist leaders declare “fake news” at will. In developing democracies, a lack of technical resources compounds the problem: deepfake clips are harder to debunk, and communities may lack media literacy tools to recognise manipulation. Experts call for a global push, from labelling synthetic content to promoting digital literacy. For example, the Brookings Institution recommends expanding fact-checking, watermarking AI-generated media, and teaching media and AI literacy in schools. Indeed, even U.S. states are acting – Illinois requires a news literacy course in every high school, aiming to arm students against disinformation. Such efforts seek to close the gap that the liar’s dividend exploits: when audiences can better distinguish real from fake, spurious denials lose their power.
Countering the Phenomenon
Is the liar’s dividend inevitable? Scholars and practitioners argue it can be countered through a mix of technology, law, and education. Technologically, improving media authentication is key. Initiatives like the Content Authenticity Initiative encourage embedding tamper-evident “signatures” in footage, so viewers (or algorithms) can verify if content is untouched. As Bobby Chesney has noted, policing deepfakes will be an arms race: video forensic tools and provenance tracking need rapid advances to keep up. In the legal realm, new regulations on AI content (such as proposed transparency mandates for synthetic media) could raise the cost of fabricating lies. Meanwhile, the threat of liability might dissuade some from amplifying misinformation.
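The idea of tamper-evident “signatures” can be made concrete with a minimal sketch. Real provenance standards such as C2PA (which underpins the Content Authenticity Initiative) embed cryptographically signed metadata inside the media file itself; the toy example below only illustrates the core principle with Python’s standard library, using a hypothetical publisher key, so that any later modification of the bytes is detectable.

```python
import hashlib
import hmac

# Hypothetical publisher signing key -- a placeholder for illustration,
# not how real provenance systems manage keys.
SECRET_KEY = b"publisher-signing-key"

def sign_media(content: bytes) -> str:
    """Return a tamper-evident signature for the given media bytes."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_media(content: bytes, signature: str) -> bool:
    """Check whether the media bytes still match their signature."""
    expected = sign_media(content)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, signature)

footage = b"\x00raw-video-bytes\x01"
sig = sign_media(footage)
print(verify_media(footage, sig))          # True: footage untouched
print(verify_media(footage + b"x", sig))   # False: content was altered
```

A viewer (or platform) holding the published signature can thus distinguish genuine footage from an edited copy, which is precisely the capability that makes blanket “it’s a deepfake” denials harder to sustain.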
Crucially, bolstering public resilience matters. Educating people about the liar’s dividend itself can inoculate them: understanding that a real clip is not automatically “fake news” just because someone says so helps maintain trust in evidence. Media literacy programs, coupled with civil society campaigns, encourage audiences to demand verification rather than accept denials at face value. For example, NGOs suggest easy-access fact-checking (like reverse video searches) and community workshops so that ordinary citizens can question and confirm what they see.
Deepfakes and generative AI lie at the heart of the liar's dividend phenomenon. (📷:gwu.edu)
Combating the liar’s dividend demands vigilance on all fronts. Without reliable authentication and public trust, “the press may find it difficult to fulfil its … obligation to spread truth”. Encouragingly, research shows that the tactic works better on text than video, and robust evidence can still sway audiences. By strengthening authentication technologies, enacting clear policies, and fostering a culture of critical inquiry, society can narrow the liar’s dividend (turning the tide so that the real story is seen as real once more). Until then, every citizen and journalist must remember: in the battle for truth, scepticism is healthy, but cynicism is the liar’s reward.
⭐⭐⭐