Seeing Isn’t Believing: Deepfakes and the 2025 Canadian Vote

Author: Denis Avetisyan


A new analysis reveals the surprisingly limited impact of AI-generated deepfakes during the recent Canadian election, despite widespread concerns about their potential to disrupt democracy.

During the election period, analysis across X, Bluesky, and Reddit reveals distinct platform-specific patterns in the prevalence of deepfake intents: each platform exhibits its own distribution of synthetic content, while the overall composition is summarized by the inset pie chart.

Research indicates that while deepfakes circulated online, they primarily existed within echo chambers and rarely achieved significant visibility or partisan influence.

Despite growing anxieties about AI-driven disinformation, empirical evidence detailing the actual circulation of deepfakes in real-world political events remains scarce. This study, ‘Deepfakes in the 2025 Canadian Election: Prevalence, Partisanship, and Platform Dynamics’, presents a large-scale analysis of election-related content on X, Bluesky, and Reddit, revealing that while deepfakes comprised nearly 6% of shared images, their overall reach was modest and largely confined to low-visibility accounts. Notably, right-leaning users shared deepfakes at a significantly higher rate, though most detected instances were benign or non-political, raising questions about the potential for more impactful, targeted disinformation campaigns in future elections.


The Illusion of Influence: Deepfakes and the Cracking Facade of Democracy

The integrity of democratic processes faces a growing challenge from the rapid increase in AI-generated imagery, particularly during election cycles. The analysis presented here finds that deepfakes accounted for 5.86% of election-related images shared on the studied platforms, a figure that signals a tangible threat to informed civic engagement. This proliferation isn’t merely a matter of increased content volume; the sophistication of synthetic media makes it increasingly difficult to distinguish from authentic sources. In the context of the 2025 Canadian Federal Election, the potential for malicious actors to deploy deepfakes that fabricate statements or actions by candidates presents a significant risk to public trust and the fairness of the electoral process. This necessitates a proactive approach to detection, verification, and public awareness, as traditional methods of combating misinformation struggle to keep pace with the speed and realism of AI-generated content.

The escalating production of politically motivated deepfakes presents a formidable challenge to conventional fact-checking procedures. While traditional verification relies on source corroboration and evidence-based analysis, the sheer volume of AI-generated content now circulating online overwhelms these systems. Moreover, the increasing sophistication of deepfake technology makes detection significantly harder; subtle manipulations, realistic audio cloning, and convincing visual forgeries can easily bypass human reviewers and even automated detection tools. This creates a situation where disinformation can spread rapidly and widely before it is debunked, eroding public trust and potentially influencing political outcomes. The speed at which these fabricated narratives proliferate, combined with the difficulty of distinguishing them from authentic content, necessitates a reevaluation of existing information verification strategies and the development of more robust detection mechanisms.

Determining the motivation behind artificially generated political content is paramount to countering its potential impact. While the technology creating these “deepfakes” rapidly advances, simply detecting their artificiality isn’t enough; understanding why a deepfake was created, whether for harmless satire, benign entertainment, or deliberate disinformation, fundamentally alters the appropriate response. Maliciously motivated deepfakes, designed to damage reputations or manipulate public opinion, demand swift debunking and potential legal action, while satirical or artistic creations may require only minimal contextualization. Ignoring this nuance risks stifling legitimate expression under the guise of combating misinformation, or conversely, failing to address genuine threats to democratic processes. Consequently, analysis must extend beyond technical detection to incorporate contextual understanding of the content creator’s intent and the broader socio-political landscape.

Although currently representing a relatively small fraction of overall online viewership – accounting for just 0.52% of all views on X – the impact of political deepfakes is disproportionately magnified by the architecture of social media platforms. Simple metrics such as ‘View Count’ and the ‘Author Follower Count’ create an echo chamber effect, granting fabricated content an artificial sense of legitimacy and reach. A deepfake shared by an account with a large following, even if demonstrably false, gains immediate visibility and can quickly bypass critical assessment. This dynamic means that even a small volume of deepfakes can generate significant engagement and potentially sway public opinion, making it crucial to understand not just the presence of these forgeries, but also how their spread is facilitated by the very systems designed to connect people.
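For readers who want to reproduce the kind of comparison shown in the figure below from exported post-level data, a minimal sketch is given here; the column names and the numbers are illustrative assumptions, not the study’s schema.

```python
# Minimal sketch (not the paper's pipeline): average views per post for deepfake
# vs. non-deepfake content within each political leaning. Column names and the
# values are hypothetical placeholders.
import pandas as pd

posts = pd.DataFrame({
    "political_leaning": ["right", "right", "left", "left", "unknown", "unknown"],
    "is_deepfake":       [True,    False,   True,   False,  True,      False],
    "view_count":        [120,     3400,    95,     2100,   40,        870],
})

summary = (
    posts.groupby(["political_leaning", "is_deepfake"])["view_count"]
         .agg(total_views="sum", avg_views_per_post="mean", posts="count")
         .reset_index()
)
print(summary)
```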

Posts identified as deepfakes consistently receive fewer views than non-deepfake posts across all political leanings, as demonstrated by view counts and average views per post.

Beyond Detection: Understanding the ‘Why’ Behind the Forgery

Deepfake detection within this study utilizes the ConvNeXt-V2-Base model, a convolutional neural network architecture, to identify artificially generated images. Performance was evaluated using an ‘in-the-wild’ dataset, representing real-world image sources and conditions, resulting in an F1-score of 0.852. The F1-score, representing the harmonic mean of precision and recall, indicates a balanced ability to both correctly identify deepfakes and minimize false positives. This metric provides a quantitative assessment of the model’s efficacy in distinguishing between authentic and manipulated imagery.
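As a point of reference, the sketch below shows how an F1-score like the reported 0.852 is computed from binary deepfake predictions, along with (commented out) one plausible way to instantiate a ConvNeXt-V2-Base classifier with the timm library; the labels are toy values and the paper’s training setup is not reproduced.

```python
# Illustrative only: F1 as the harmonic mean of precision and recall for a
# binary deepfake classifier. Labels and predictions are toy values.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = deepfake, 0 = authentic
y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]   # hypothetical model outputs

precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)

# F1 is the harmonic mean of precision and recall.
assert abs(f1 - 2 * precision * recall / (precision + recall)) < 1e-9
print(f"precision={precision:.3f}  recall={recall:.3f}  f1={f1:.3f}")

# One plausible way to build the backbone (an assumption, not the paper's exact setup):
# import timm
# model = timm.create_model("convnextv2_base", pretrained=True, num_classes=2)
```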

This research extends deepfake identification by analyzing the underlying communicative purpose of the generated imagery. Utilizing both Vision-Language Models (VLMs) and Large Language Models (LLMs) in tandem, the system assesses not just that an image is artificial, but why it was created and what message it intends to convey. The VLM, specifically ‘Qwen3-VL-32B-Instruct’, processes visual content, while the LLM, ‘Llama 3.3 70B Instruct’, interprets associated textual information, enabling a nuanced understanding of the deepfake’s intended communication and contextual meaning. This dual-model approach allows for classification beyond simple binary detection, facilitating a more comprehensive analysis of potentially misleading or manipulative content.

Two distinct large models underpin the intent analysis: the Qwen3-VL-32B-Instruct Vision-Language Model (VLM) and the Llama 3.3 70B Instruct Large Language Model (LLM). The Qwen3-VL-32B-Instruct model processes both visual and textual data, enabling it to correlate image content with associated text, while the Llama 3.3 70B Instruct model focuses on the semantic meaning of textual prompts and generated outputs. Together, they move the analysis beyond simple image identification toward interpreting the communicative purpose and potential misinformation conveyed by the deepfake content.
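A minimal sketch of how such a two-stage pipeline could be wired up is shown below, assuming both models are served behind an OpenAI-compatible endpoint (for example via vLLM). The endpoint URL, the prompts, and the exact intent label set are illustrative assumptions, not the authors’ implementation.

```python
# Sketch of a two-stage VLM + LLM intent pipeline, assuming both models are
# served behind an OpenAI-compatible API (e.g. vLLM). Prompts, endpoint, and
# the label set are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Label names partly assumed; the text names Defamatory, Conspiratorial,
# Propaganda, Fabricated, and Hate among the malicious categories.
INTENTS = ["Defamatory", "Conspiratorial", "Propaganda", "Fabricated",
           "Hate", "Satire", "Other/Benign"]

def describe_image(image_url: str, post_text: str) -> str:
    """Stage 1: the VLM grounds the image together with its accompanying post text."""
    resp = client.chat.completions.create(
        model="Qwen/Qwen3-VL-32B-Instruct",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Describe this image and how it relates to the post: {post_text}"},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return resp.choices[0].message.content

def classify_intent(description: str) -> str:
    """Stage 2: the LLM maps the grounded description to a single intent label."""
    resp = client.chat.completions.create(
        model="meta-llama/Llama-3.3-70B-Instruct",
        messages=[{
            "role": "user",
            "content": ("Given this description of an AI-generated image shared online, "
                        f"pick exactly one intent from {INTENTS} and answer with the label only.\n\n"
                        f"{description}"),
        }],
    )
    return resp.choices[0].message.content.strip()
```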

The OpenFake Dataset is a resource specifically constructed to facilitate the development and evaluation of deepfake detection models within the domain of political misinformation. It comprises a collection of images, many of which have been synthetically altered to depict fabricated scenarios or misrepresent political figures and events. This dataset distinguishes itself from general-purpose deepfake datasets by its focus on the specific visual characteristics and contextual cues commonly found in politically motivated disinformation campaigns. The dataset’s creation involved careful curation and annotation to ensure the quality and relevance of the training data, enabling more accurate and robust detection of deepfakes intended to influence political discourse.

Our analysis categorizes seven distinct intents behind AI-generated imagery, ranging from politically motivated disinformation and conspiracy theories to harmless artistic expression.

The Echo Chambers of Disinformation: Platform-Specific Trends

Analysis of political deepfake content across three social media platforms – X, Bluesky, and Reddit – indicates varying levels of prevalence. The study determined that 7.9% of analyzed content on X consisted of political deepfakes, representing the highest concentration among the platforms examined. Reddit exhibited an intermediate prevalence rate, while Bluesky demonstrated the lowest, with only 2.2% of content identified as deepfakes. This disparity suggests platform-specific factors, such as content moderation policies or user demographics, influence the dissemination of this type of media.
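A hedged illustration of how such prevalence figures are derived from post-level detector output follows; the platform labels mirror the study, but the records are placeholder data.

```python
# Illustrative: per-platform deepfake prevalence from post-level detector labels.
# The records below are toy data, not the study's corpus.
from collections import defaultdict

records = [("X", True), ("X", False), ("X", True),
           ("Bluesky", False), ("Bluesky", False),
           ("Reddit", True), ("Reddit", False)]

counts = defaultdict(lambda: [0, 0])          # platform -> [deepfakes, total posts]
for platform, is_deepfake in records:
    counts[platform][0] += int(is_deepfake)
    counts[platform][1] += 1

for platform, (fakes, total) in counts.items():
    print(f"{platform}: {100 * fakes / total:.1f}% of analyzed posts flagged as deepfakes")
```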

Analysis of political deepfakes revealed a spectrum of expressed intents beyond purely malicious activity. Identified malicious intents included Defamatory Intent (attempts to damage reputations), Conspiratorial Intent (promotion of unsubstantiated narratives), Propaganda Intent (dissemination of biased or one-sided information), and Fabricated Intent (depiction of events that never occurred). These categories were observed alongside content lacking clear malicious intent, indicating a range of motivations driving the creation and sharing of deepfakes, from deliberate misinformation to less harmful forms of expression.

Analysis of political deepfakes distributed across social media platforms revealed the presence of ‘Hate Intent’ directed towards specific identity groups. This intent, identified within the disseminated content, signifies a deliberate effort to utilize deepfake technology to target and potentially incite animosity towards these groups. The documented presence of this intent underscores the risk that deepfakes can be leveraged not merely to misinform, but to actively exacerbate existing social divisions and contribute to the polarization of public discourse. The study highlights this as a critical concern regarding the weaponization of these technologies.

Analysis of deepfake sharing patterns indicates a significant disparity in activity between users identified as right-leaning and left-leaning. Data reveals that 9.24% of deepfake content was shared by right-leaning users, while the rate for left-leaning users was 3.87%. This represents a more than two-fold increase in sharing activity originating from the right-leaning user base. This difference in propagation rates suggests a non-neutral distribution of deepfake content across the political spectrum and highlights potential asymmetries in online information dissemination.
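To gauge whether such a gap is more than noise, one could run a two-proportion test along the lines sketched below. The group sizes are placeholders, since only the rates are quoted here; this test is illustrative and not reported in the paper.

```python
# Hedged illustration (not from the paper): a two-proportion z-test on the quoted
# sharing rates requires the underlying counts; the group sizes are placeholders.
from statsmodels.stats.proportion import proportions_ztest

n_right, n_left = 10_000, 10_000                            # hypothetical group sizes
shares = [round(0.0924 * n_right), round(0.0387 * n_left)]  # deepfake shares per group

stat, p_value = proportions_ztest(shares, [n_right, n_left])
print(f"z = {stat:.2f}, p = {p_value:.3g}")
```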

Deepfake content demonstrates a clear political bias, with right-leaning accounts disproportionately generating defamatory and conspiratorial narratives compared to left-leaning or unknown accounts, as shown by the distribution of intent categories and overall contribution.

Beyond Reactive Measures: A Proactive Defense Against Synthetic Deception

Current approaches to combating deepfakes often center on identifying false content after it has already circulated, a reactive strategy proving increasingly insufficient. This research demonstrates a critical shift is needed: prioritizing the detection of malicious intent behind the creation of deepfakes. By classifying the purpose – whether defamation, political manipulation, or inciting hatred – platforms can intervene before the content spreads, even if the fabrication isn’t immediately obvious. This proactive stance moves beyond simply flagging inaccuracies and addresses the underlying harm, allowing for targeted responses and potentially preventing significant damage to individuals and societal trust. The focus isn’t on what is fake, but why it was created, offering a more effective and nuanced defense against the evolving threat of synthetic media.

Effective mitigation of deepfake harms requires a shift in focus for social media platforms, moving beyond simply identifying potentially false content to actively addressing the intent behind its creation and dissemination. Platforms are increasingly recognizing the necessity of prioritizing interventions aimed at malicious uses – including defamation, the propagation of hate speech, and the bolstering of unsubstantiated conspiracy theories – while simultaneously safeguarding the fundamental right to freedom of expression. This delicate balance necessitates nuanced content moderation policies and the development of AI tools capable of discerning harmful intent without suppressing legitimate discourse. Such strategies demand a proactive approach to platform governance, ensuring that interventions target demonstrably malicious behavior rather than merely the content itself, thereby preserving open communication while minimizing the potential for harm.

Continued advancement in combating deepfakes necessitates a dual approach centered on both technological refinement and public awareness. Researchers are increasingly focused on developing artificial intelligence models capable of not simply detecting whether content is manipulated, but discerning the intent behind the manipulation – is it satire, artistic expression, or malicious disinformation? Crucially, these models must move beyond “black box” functionality, offering explainable reasoning for their classifications to build trust and facilitate effective content moderation. Simultaneously, initiatives aimed at improving public literacy regarding deepfake detection are vital; these programs should equip individuals with the critical thinking skills and tools necessary to evaluate online content and recognize potential manipulation, fostering a more resilient information ecosystem.

Despite the increasing prevalence of deepfake content, which comprised 5.86% of the images analyzed, current data from X indicates a surprisingly limited reach, accounting for just 0.12% of total views. This suggests that while deepfake creation is becoming more common, the impact on overall information consumption remains relatively small at present. However, the researchers emphasize that this low viewership should not be misconstrued as a signal to diminish detection and mitigation efforts; the potential for rapid dissemination and concentrated impact on specific communities remains a significant threat. Continued vigilance, proactive monitoring, and the development of robust countermeasures are crucial to preventing future exploitation and safeguarding the information ecosystem, as even a small share of widely circulated malicious deepfakes can have disproportionately negative consequences.
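The two reach metrics used in this comparison, the share of total views attributable to deepfake posts and the empirical CDF of per-post views shown in the figure below, can be sketched as follows; the view counts are placeholder values.

```python
# Minimal sketch of the two reach metrics: deepfake share of total views, and an
# empirical CDF of per-post view counts. The values below are placeholders.
import numpy as np

views = np.array([120, 3400, 95, 2100, 40, 870, 15, 50_000])
is_deepfake = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=bool)

view_share = views[is_deepfake].sum() / views.sum()
print(f"Deepfake share of total views: {view_share:.2%}")

def ecdf(x):
    """Empirical CDF: fraction of posts with at most a given view count."""
    xs = np.sort(x)
    ys = np.arange(1, len(xs) + 1) / len(xs)
    return xs, ys

xs_fake, ys_fake = ecdf(views[is_deepfake])
xs_real, ys_real = ecdf(views[~is_deepfake])
```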

Empirical cumulative distribution functions of view counts reveal that non-political deepfakes generally achieve higher exposure and average views per post compared to their political counterparts.

The study’s findings, namely that deepfakes circulated primarily within low-visibility accounts, feel less like a revelation and more like a predictable outcome. It echoes a certain inevitability; the sophisticated tools garner attention, yet deployment always finds the path of least resistance. As Bertrand Russell observed, “The difficulty lies not so much in developing new ideas as in escaping from old ones.” The expectation of widespread, impactful manipulation collided with the reality of limited reach, a testament to how quickly even novel threats are absorbed into the existing noise. The architecture wasn’t a failure of design, simply a compromise that survived deployment and, in doing so, revealed a surprising resilience in the information ecosystem. Everything optimized for virality will one day be optimized back into obscurity.

What’s Next?

The observation that these early deepfakes largely failed to ignite the political landscape is, predictably, not a reason for complacency. It simply confirms a longstanding truth: the problem isn’t the technology, it’s the audience. The limited reach documented here wasn’t due to superior detection algorithms; it was because nobody particularly cared about the content. Future work should therefore focus less on identifying synthetic media and more on understanding the conditions under which such content gains traction. The bar for ‘convincing’ is perpetually lowering, while the public’s critical faculties… remain stubbornly static.

One anticipates a shift in strategy. If blatant forgeries fail, the focus will move toward subtle manipulations – ‘cheapfakes’ and context distortion. Detecting these will be significantly harder, not because of technological limitations, but because the line between truth and falsehood will become increasingly blurred. The study correctly identifies platform dynamics as crucial, but assumes those platforms will want to be part of the solution. History suggests a different outcome: better one centralized arbiter of truth, however flawed, than a thousand decentralized echo chambers.

The field will, of course, continue to chase ever-more-sophisticated detection methods. This is inevitable. But the truly interesting question isn’t can we detect deepfakes, it’s whether anyone will believe the detection in the first place. A perfectly accurate detector is useless if the public has already decided what to believe. The next election will undoubtedly present new challenges, and the researchers who document them will, with any luck, have access to larger servers.


Original article: https://arxiv.org/pdf/2512.13915.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/


2025-12-17 21:34