Why Hackers Hate Uncertainty: A New Approach to Cyber Defense

Author: Denis Avetisyan


Understanding how attackers react to ambiguous information can significantly improve cybersecurity strategies beyond traditional loss aversion models.

This review examines the role of ambiguity aversion in cyberattack behavior and proposes cognitive modeling techniques to inform more effective defense mechanisms, leveraging frameworks like MITRE ATT&CK and insights from large language models.

While cybersecurity often focuses on rational threat actor behavior, human attackers are predictably irrational, frequently exhibiting cognitive biases beyond simple loss aversion. This research, detailed in ‘Detecting Ambiguity Aversion in Cyberattack Behavior to Inform Cognitive Defense Strategies’, introduces a novel framework for modeling and detecting ambiguity aversion, a preference for known risks over unknown ones, during simulated cyberattacks. By parsing human-subject red-team data with large language models and applying a new computational model, the authors demonstrate the ability to infer an attacker’s ambiguity aversion level in near-real time. Could operationalizing these cognitive traits fundamentally reshape adaptive cyber defense strategies and anticipate attacker decision-making?


The Erosion of Rationality: Beyond Expected Utility

Conventional cybersecurity strategies often operate under the premise that malicious actors are fundamentally rational, diligently calculating risks and rewards to maximize their expected utility – a concept borrowed from economics. However, this model presents a significant oversimplification of real-world attack behaviors. The assumption that adversaries consistently weigh potential gains against the probability of detection and consequences ignores the complexities of human motivation and cognitive processes. This rational actor model fails to account for impulsive attacks, attacks driven by ideology rather than profit, or situations where attackers miscalculate risks due to limited information or flawed judgment. Consequently, defenses built solely on anticipating rational behavior can be surprisingly vulnerable to attacks that deviate from this predicted pattern, highlighting the need for a more nuanced understanding of attacker psychology.
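To make the critique concrete, the rational actor baseline reduces to a simple expected utility calculation over candidate actions. The sketch below illustrates that baseline; the option names and payoff numbers are invented for this example, not drawn from the paper.

```python
# Minimal sketch of the rational-actor baseline this section critiques:
# an attacker scores each option by expected utility and picks the max.
# All option names and numbers are invented for illustration.

options = {
    # option: (probability of success, payoff on success, cost on failure)
    "phish_helpdesk":  (0.30, 100.0, 5.0),
    "exploit_old_cve": (0.60, 40.0, 20.0),
    "bruteforce_vpn":  (0.10, 80.0, 50.0),
}

def expected_utility(p, gain, cost):
    """E[U] = p * gain - (1 - p) * cost."""
    return p * gain - (1 - p) * cost

best = max(options, key=lambda o: expected_utility(*options[o]))
for name, params in options.items():
    print(f"{name}: EU = {expected_utility(*params):.1f}")
print("rational choice:", best)
```

Real attackers, as the rest of this section argues, routinely deviate from this tidy maximization.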

The prevailing cybersecurity framework often rests on the premise that attackers meticulously calculate risks and rewards, striving to maximize their gains – a notion that frequently disregards the significant impact of cognitive biases. These inherent, systematic patterns of deviation from norm or rationality in judgment manifest in attacker behavior, creating predictable vulnerabilities defenders can exploit. For example, the availability heuristic might lead an attacker to overestimate the success rate of a commonly publicized exploit, while confirmation bias could reinforce a flawed attack strategy despite evidence to the contrary. Consequently, systems designed solely against rational actors often fail to account for these predictable irrationalities, leaving openings for attacks based not on superior strategy, but on common human failings. Recognizing and modeling these biases, therefore, is becoming increasingly vital for building more robust and anticipatory defense mechanisms.

Recognizing the systematic ways human cognition deviates from perfect rationality is paramount to fortifying cybersecurity infrastructure. Attackers, like all humans, are susceptible to biases such as loss aversion – feeling the pain of a loss more strongly than the pleasure of an equivalent gain – which can lead to impulsive decisions and exploitable errors. Similarly, the availability heuristic, where decisions are influenced by easily recalled examples, may cause defenders to overemphasize recent threats while neglecting less publicized, but equally dangerous, vulnerabilities. By modeling these cognitive shortcuts, security architects can move beyond assuming optimal attacker behavior and instead design systems that account for predictable irrationalities, proactively mitigating risks and enhancing resilience against increasingly sophisticated threats. This shift towards a psychologically informed security posture represents a crucial advancement in the ongoing effort to protect digital assets.
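Loss aversion is commonly formalized with a prospect-theory-style value function in which losses are scaled by a coefficient $\lambda > 1$. A minimal sketch, with parameter values taken from the behavioral economics literature for illustration rather than estimated from any attacker data:

```python
# Prospect-theory-style value function: losses loom larger than gains.
# alpha < 1 gives diminishing sensitivity; lam > 1 encodes loss aversion.
# Parameter values are illustrative literature defaults, not fitted here.

def pt_value(x, alpha=0.88, lam=2.25):
    """v(x) = x^alpha for gains, -lam * (-x)^alpha for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A gain and an equal-sized loss are valued asymmetrically:
print(pt_value(10))   # ~  7.6
print(pt_value(-10))  # ~ -17.1 (the loss "hurts" more than twice as much)
```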

Beyond Simulation: Capturing Attackers in the Act

The GAMBiT experiments represent a departure from traditional cybersecurity research methodologies by utilizing human participants to actively simulate cyberattacks within a controlled, realistic network environment. This approach, conducted in a dedicated cyber range, allows researchers to observe and record attacker behaviors as they unfold, rather than relying on post-incident analysis or theoretical models. Participants are given specific objectives – such as gaining access to designated systems or exfiltrating data – and are permitted to employ a range of publicly available tools and techniques. The resulting data focuses on the process of attack, capturing not just successful exploits but also failed attempts, reconnaissance activities, and the dynamic adaptation of tactics, providing a more comprehensive understanding of attacker methodologies.

The GAMBiT experiments systematically collect detailed ‘Operation Notes’ during live-fire exercises within a controlled cyber range. These notes consist of free-text entries, contemporaneously recorded by the participant simulating an attacker, documenting their reasoning for each action taken. Specifically, subjects are instructed to verbalize and then record why they chose a particular exploit, scanning technique, or lateral movement strategy. This data includes justifications for bypassing security controls, assessments of risk versus reward, and explanations of assumptions made about the target network. The resulting logs are timestamped and correlated with all network traffic and system events, providing a comprehensive record of the attacker’s cognitive process alongside their technical actions.
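The paper does not publish the GAMBiT schema, but a structured record along these lines is one plausible shape for a single Operation Note once timestamped and linked to log events; all field names below are assumptions for illustration.

```python
# Hypothetical structure for one Operation Note entry, matching the
# description above: a timestamped free-text rationale that can later be
# correlated with network events. Field names are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OperationNote:
    timestamp: datetime   # when the note was recorded
    subject_id: str       # anonymized participant identifier
    action: str           # the technical action taken
    rationale: str        # free-text reasoning, recorded verbatim
    correlated_events: list = field(default_factory=list)  # linked log IDs

note = OperationNote(
    timestamp=datetime(2024, 5, 1, 14, 3, tzinfo=timezone.utc),
    subject_id="red-07",
    action="nmap -sS 10.0.3.0/24",
    rationale="SYN scan first: quieter than a full connect scan, "
              "and I assume the IDS thresholds are loose on this subnet.",
)
print(note.action, "|", note.rationale[:40], "...")
```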

Operation Notes derived from the GAMBiT experiments provide a granular level of detail regarding attacker reasoning previously unavailable in cybersecurity research. Traditional security modeling often relies on abstract representations of threat actors and their motivations, resulting in predictive models lacking fidelity to real-world conditions. In contrast, these free-text logs capture in situ cognitive processes – the specific justifications, risk assessments, and prioritization strategies employed during a simulated attack. Analysis of these logs reveals the nuanced interplay of factors influencing attacker decision-making, including perceived effort, likelihood of success, available resources, and the attacker’s individual risk tolerance. This data allows researchers to move beyond hypothetical attacker profiles and build more accurate, empirically-grounded models of adversarial behavior, informing the development of more effective defensive strategies.

From Observation to Insight: LLM-Powered Behavioral Analysis

LLM-Powered Annotation automates the processing of unstructured text within ‘Operation Notes’ by utilizing Large Language Models to identify and extract critical tactical decisions and the associated rationales. This process moves beyond simple keyword searches, employing natural language understanding to discern the intent and reasoning documented by security analysts. The extracted information is structured data, enabling programmatic access and analysis of previously inaccessible qualitative insights. This automated extraction reduces manual effort, accelerates incident response timelines, and provides a standardized format for analyzing complex attack narratives contained within the free-form text of operation notes.
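As an illustration of how such extraction is typically wired up, the sketch below sends one note to a large language model and parses a structured response, using the OpenAI Python client as a stand-in. The prompt, output schema, and model name are assumptions; the paper’s actual pipeline and prompts are not reproduced here.

```python
# Sketch of LLM-powered annotation of a free-text Operation Note.
# Prompt, schema, and model name are assumptions for illustration.
import json
from openai import OpenAI  # assumes the `openai` package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = """Extract from the attacker's note below:
- "decision": the tactical action taken
- "rationale": why the attacker chose it
- "uncertainty": any stated unknowns or assumptions
Return strict JSON with exactly those keys.

Note: {note}"""

def annotate(note_text: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": PROMPT.format(note=note_text)}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

print(annotate("Skipped the SMB exploit; patch level unknown, "
               "went with credential reuse instead."))
```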

The system correlates data from LLM-powered annotation of operation notes with Suricata network intrusion detection system logs. Suricata logs provide detailed information regarding network traffic, including source and destination IPs, ports, protocols, and detected anomalies or signatures. By integrating these logs with the extracted tactical decisions and rationales from operation notes, the pipeline establishes a link between observed network activity and the attacker’s inferred reasoning. This combined view facilitates a more complete understanding of attacker behavior, moving beyond simple alerts to provide context regarding why specific actions were taken and how they relate to broader campaign objectives. The integration allows for reconstruction of attack sequences and identification of previously obscured attacker intent.
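A minimal sketch of this correlation step joins an annotated note to Suricata EVE JSON events that fall near it in time; the 30-second window and the join logic are assumptions, not the paper’s pipeline.

```python
# Sketch: link each annotated note to Suricata eve.json events within a
# small time window around the note's timestamp. Window size is assumed.
import json
from datetime import datetime, timedelta

def load_eve_events(path="eve.json"):
    """Parse Suricata's newline-delimited EVE JSON log."""
    with open(path) as f:
        for line in f:
            ev = json.loads(line)
            # Suricata timestamps look like "2024-05-01T14:03:22.123456+0000"
            ev["_ts"] = datetime.strptime(ev["timestamp"],
                                          "%Y-%m-%dT%H:%M:%S.%f%z")
            yield ev

def events_near(note_ts, events, window_s=30):
    lo = note_ts - timedelta(seconds=window_s)
    hi = note_ts + timedelta(seconds=window_s)
    return [ev for ev in events if lo <= ev["_ts"] <= hi]

# Usage sketch (given a `note` with a .timestamp attribute):
# events = list(load_eve_events())
# for ev in events_near(note.timestamp, events):
#     print(ev["event_type"], ev.get("src_ip"), "->", ev.get("dest_ip"))
```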

Following automated annotation of operational notes, the extracted attacker behaviors are mapped to the MITRE ATT&CK Framework. This process involves identifying specific techniques and sub-techniques employed by the attacker, as defined within the ATT&CK knowledge base. Mapping enables categorization of observed behaviors, facilitating large-scale analysis and reporting. Each identified behavior is assigned a corresponding ATT&CK ID, allowing for quantitative tracking of attacker tactics, techniques, and procedures (TTPs) across multiple engagements and facilitating comparisons between threat actors. This categorization supports incident response, threat hunting, and the development of security controls tailored to mitigate specific ATT&CK techniques.
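In its simplest form, the mapping step can be sketched as a lookup from extracted decisions to technique IDs. The table below uses real ATT&CK identifiers but is deliberately tiny; a production mapping would be far richer (or itself LLM-assisted), and the keyword heuristic is an assumption.

```python
# Sketch: map annotated decisions onto MITRE ATT&CK technique IDs via a
# keyword lookup. IDs are real ATT&CK techniques; the matching heuristic
# and the table's coverage are deliberately simplified for illustration.
ATTACK_MAP = {
    "scan":       ("T1046", "Network Service Discovery"),
    "phish":      ("T1566", "Phishing"),
    "brute":      ("T1110", "Brute Force"),
    "credential": ("T1078", "Valid Accounts"),
    "lateral":    ("T1021", "Remote Services"),
}

def map_to_attack(decision: str):
    hits = [(tid, name) for kw, (tid, name) in ATTACK_MAP.items()
            if kw in decision.lower()]
    return hits  # an empty list means: route to analyst review

print(map_to_attack("credential reuse against the file server"))
# -> [('T1078', 'Valid Accounts')]
```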

The PsychSim Framework: Modeling the Cognitive Landscape of Attack

The PsychSim framework leverages the mathematical structure of Partially Observable Markov Decision Processes (POMDPs) to represent the complex decision-making processes of cyber attackers. Unlike traditional security models that assume perfect information, this approach acknowledges that attackers rarely possess a complete understanding of the target system. POMDPs allow for the modeling of uncertainty – an attacker’s belief about the system’s state is represented as a probability distribution, which is updated as they gather evidence through reconnaissance and probing. This probabilistic representation is crucial because it enables the simulation of realistic attack strategies where decisions are made based on incomplete and potentially misleading information. By framing the problem as a POMDP, researchers can systematically explore how attackers weigh risks, prioritize targets, and adapt their tactics in the face of uncertainty, ultimately leading to more robust and predictive cybersecurity models.
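The core POMDP operation described here is the belief update: the attacker’s probability distribution over hidden system states is revised by Bayes’ rule after each observation. A minimal sketch with an invented two-state example:

```python
# Bayesian belief update, the core POMDP operation described above.
# States, priors, and observation likelihoods are invented for illustration.

def update_belief(belief, likelihoods):
    """b'(s) is proportional to P(obs | s) * b(s), renormalized."""
    posterior = {s: likelihoods[s] * p for s, p in belief.items()}
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

# Attacker's prior: is the target patched or unpatched?
belief = {"patched": 0.5, "unpatched": 0.5}

# Observation: a banner grab reports an old service version.
# Assumed likelihood of seeing that banner under each hidden state:
obs_likelihood = {"patched": 0.1, "unpatched": 0.8}

belief = update_belief(belief, obs_likelihood)
print(belief)  # {'patched': ~0.11, 'unpatched': ~0.89}
```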

The PsychSim framework moves beyond traditional cybersecurity modeling by directly integrating principles from behavioral economics, specifically cognitive biases like ambiguity aversion and loss aversion. These biases, well-documented in human decision-making, are incorporated as parameters influencing how simulated attackers evaluate and respond to incomplete information. Rather than assuming purely rational actors, the framework allows for the exploration of how an attacker might, for instance, weigh the prospect of losing an established foothold more heavily than an equivalent potential gain, and so decline an otherwise attractive move – reflecting loss aversion. Similarly, the model can simulate how an attacker might avoid options with ambiguous probabilities, even if the expected value is high, preferring a known, albeit less optimal, course of action – demonstrating ambiguity aversion. This nuanced approach allows researchers to move beyond predicting what an attacker might do, and begin to understand why they might choose a particular strategy, even if it appears suboptimal from a purely logical standpoint.
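One standard way to parameterize ambiguity aversion is the $\alpha$-maxmin rule, in which an option whose success probability is only known to lie in an interval is scored by a pessimism-weighted blend of its worst and best cases. The sketch below uses that common formalization, which is not necessarily the paper’s exact model:

```python
# Alpha-maxmin sketch of ambiguity aversion: an ambiguity-averse agent
# (alpha near 1) scores an interval-valued probability by its worst case.
# A standard formalization, not necessarily the paper's model.

def alpha_maxmin(p_low, p_high, gain, cost, alpha=0.8):
    """Score = alpha * worst-case EU + (1 - alpha) * best-case EU."""
    eu = lambda p: p * gain - (1 - p) * cost
    return alpha * eu(p_low) + (1 - alpha) * eu(p_high)

# Known risk: exploit works with probability exactly 0.5.
known = alpha_maxmin(0.5, 0.5, gain=100, cost=50)
# Ambiguous: probability somewhere in [0.3, 0.7] (same midpoint).
ambiguous = alpha_maxmin(0.3, 0.7, gain=100, cost=50)

print(known, ambiguous)  # 25.0 vs 7.0: the ambiguous option scores lower
# even though its midpoint probability is identical to the known risk.
```

Sweeping $\alpha$ from 0 to 1 traces the spectrum from ambiguity-seeking to maximally ambiguity-averse behavior, which is the kind of trait parameter such a framework can try to infer from observed choices.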

The PsychSim framework’s efficacy hinges on its ability to mirror real-world attacker behavior, and validation is achieved through rigorous comparison with the GAMBiT dataset. Analysis reveals a nuanced interplay between modeled cognitive biases and observed cybersecurity actions; while a loss aversion model currently explains a greater proportion of the overall variance in attacker strategies, the ambiguity aversion model demonstrably generates a significantly higher number of high-confidence observations, 237 compared to zero for the loss aversion model ($\chi^2(0) = 254.02$, $p < .001$). This suggests that, despite explaining less of the overall data, the ambiguity aversion model excels at predicting specific, confidently executed attack patterns, highlighting the crucial role of incomplete information in shaping adversarial decision-making and offering a pathway toward more accurate predictive cybersecurity modeling.

Beyond Prediction: Towards an Adaptive Security Paradigm

Recent investigations reveal that malicious actors, much like individuals in other decision-making contexts, are demonstrably susceptible to cognitive biases, especially when facing Knightian Uncertainty – situations characterized by both unknown probabilities and unknown outcomes. This challenges traditional cybersecurity models that assume rational actors consistently maximizing utility. The research indicates attackers frequently exhibit predictable irrationalities, leaning towards loss aversion – feeling the pain of a loss more strongly than the pleasure of an equivalent gain – and ambiguity aversion, preferring known risks to uncertain ones. Consequently, security protocols can be proactively designed to exploit these biases, subtly influencing attacker choices and increasing the efficacy of defensive measures by framing options in ways that align with these inherent cognitive tendencies. This shift moves beyond simply predicting what an attacker might do, to understanding how they think, opening new avenues for adaptive security strategies.

The PsychSim Framework represents a significant advancement in cybersecurity by moving beyond static defenses to proactively model attacker decision-making. This platform leverages principles from behavioral science, specifically cognitive biases and prospect theory, to simulate realistic attacker behaviors within a virtualized environment. By constructing these computational models, security professionals can forecast likely attack vectors, assess the effectiveness of various countermeasures, and even anticipate an attacker’s response to implemented defenses. The framework doesn’t merely predict what an attacker might do, but seeks to understand why they might choose a particular course of action, allowing for the creation of adaptive security measures that exploit predictable irrationalities. This dynamic approach contrasts sharply with traditional security protocols, offering a pathway towards systems that learn and evolve alongside the threat landscape, ultimately increasing resilience and minimizing potential damage.

Security systems traditionally focus on predicting what an attacker might do, but a growing body of research indicates success lies in understanding how they think. Recent work demonstrates the potential of leveraging cognitive biases – systematic patterns of deviation from norm or rationality in judgment – to proactively strengthen defenses. Specifically, research reveals significant differences in how attackers respond to loss aversion – the tendency to strongly prefer avoiding losses to acquiring equivalent gains – versus ambiguity aversion – the dislike of uncertainty. A Wilcoxon signed-rank test, yielding a p-value of less than .001, statistically confirms these differing trait probability distributions. This insight allows for the design of security measures that subtly exploit these biases; for instance, presenting potential attackers with scenarios framed as avoiding a loss, rather than achieving a gain, can demonstrably alter their behavior and create opportunities for effective intervention, effectively transforming inherent vulnerabilities into robust security advantages.
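To show the kind of comparison being reported, the sketch below runs SciPy’s paired Wilcoxon signed-rank test on two sets of per-subject trait estimates; the arrays are random placeholders, not the study’s data.

```python
# Sketch of the reported comparison: a paired Wilcoxon signed-rank test
# between per-subject loss-aversion and ambiguity-aversion trait estimates.
# The arrays are random placeholders, not the study's data.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
loss_aversion_traits = rng.beta(5, 2, size=40)       # placeholder values
ambiguity_aversion_traits = rng.beta(2, 5, size=40)  # placeholder values

stat, p = wilcoxon(loss_aversion_traits, ambiguity_aversion_traits)
print(f"W = {stat:.1f}, p = {p:.2g}")
# A small p-value indicates the two trait distributions differ, as the
# study reports (p < .001 on the actual GAMBiT-derived estimates).
```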

The study of ambiguity aversion, as detailed in this work, reveals a fascinating facet of attacker psychology. It suggests that defenders might gain an advantage not simply by increasing the cost of successful attacks (addressing loss aversion), but by strategically introducing uncertainty into the attacker’s decision-making process. This resonates with John McCarthy’s observation that, “It is better to deal with a problem that you understand, even if it is hard, than to deal with a problem you don’t understand, even if it is easy.” Just as McCarthy highlights the value of confronting complexity, this research posits that attackers, facing ambiguous scenarios, may exhibit predictable behaviors, creating openings for cognitive defense strategies. Versioning these strategies, adapting them over time, becomes a form of memory, ensuring resilience against evolving threats and gracefully accommodating the inevitable decay of any single solution.

What Lies Ahead?

The pursuit of modeling ambiguity aversion in cyberattackers reveals a fundamental truth: security, like all systems, isn’t about preventing entropy, but managing its expression. Loss aversion, previously the dominant paradigm, addresses only a single facet of predictable response. This work suggests attackers, faced with incomplete information, don’t simply maximize expected gain; they actively avoid the discomfort of uncertainty. Technical debt accumulates not from malicious intent, but from the erosion of foresight – a preference for the known, even if suboptimal.

Future efforts must move beyond static models of attacker cognition. The current landscape treats threat actors as rational, if adversarial, agents. A more nuanced approach acknowledges that cognition is inherently a process of approximation, a constant negotiation between incomplete data and the drive for coherent narrative. The challenge lies in identifying the shape of that negotiation – the specific heuristics and biases that dictate how ambiguity is resolved.

Ultimately, the goal isn’t to achieve perfect defense – an impossible state – but to extend the rare phases of temporal harmony we call ‘uptime’. This requires a shift from reactive patching to proactive anticipation – understanding not just what attacks might occur, but how attackers will make decisions when faced with the inevitable fog of war. The field needs to embrace the inherent imperfections of information and model the resulting cognitive distortions, accepting that security is a continuous adaptation to decay, not a conquest of chaos.


Original article: https://arxiv.org/pdf/2512.08107.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/
