
COUPLED CONFIRMATION BIAS – MY CONCEPT
In this article, I present my original proposal of the Coupled Confirmation Bias (CCB). It is a conceptual framework designed to analyze the escalation of conflicts in situations where both parties to a dispute independently rely on AI systems to interpret the conflict and to justify their own positions. Unlike classical confirmation bias, which operates at the level of the individual, CCB describes a systemic mechanism. In this mechanism, mutually reinforcing narratives lead to a progressive narrowing of the negotiation space and an increased likelihood of escalation. The article identifies the conditions under which this mechanism is activated, its typical consequences, and the limits of its applicability. The purpose of this text is not to present a fully validated theory. Its purpose is to formulate a hypothesis regarding the existence of a repeatable mechanism observed in contemporary conflicts. What these conflicts share is that AI models are used as analytical tools supporting decision-making.
1. Introduction: When Rational Tools Amplify Irrational Outcomes
Where does the Coupled Confirmation Bias come from? People increasingly rely on AI. They use it to assess existing relationships both before a conflict emerges or becomes consciously recognized, and during the conflict itself. AI is used to assess risk, develop strategies, and justify proposed actions.
AI models are also increasingly used to analyze statements and non-verbal actions of the opposing party. AI systems are commonly perceived as authoritative, and their recommendations as neutral, objective, and free from emotional involvement. Paradoxically, however, as I have observed, the use of language models often correlates with faster conflict escalation, rigidification of positions, and premature breakdown of negotiations. This does not appear to be a coincidence.
In this article, I argue that these phenomena cannot be sufficiently explained solely by individual cognitive biases. They cannot be explained either by simple confirmation bias or by the fundamental attribution error.
Instead, I assume the existence of a systemic mechanism: the Coupled Confirmation Bias (CCB). It emerges in situations where parties to a conflict independently use AI tools as a source of authoritative interpretation of the dispute.
This interpretation may concern external factors, factual actions, and declarations of the opposing party. It may also concern conjectures about the opponent’s true intentions, acceptable risk, costs they are willing to bear, and their actual so-called “red lines”, meaning critical points they will not allow to be crossed under any circumstances.
Naturally, not all of these elements must be analyzed using AI. They also do not need to be analyzed at the same time. Importantly, each party may analyze a different element or set of elements.
Finally, each party may use different types of models, with different levels of technical sophistication and different degrees of integrated interaction with the user.
The Coupled Confirmation Bias (CCB) produces stronger effects the more often parties analyze signals coming from the opposing side when the content, form, or timing of those signals has previously been influenced by AI-assisted work conducted by the other party.
We are therefore not dealing with a one-time error. We are dealing with a spiral of errors, where each previous step becomes the fuel for the next one and potentially amplifies it.
This can be illustrated by a ping-pong exchange in which, after each hit, the ball's energy becomes the sum of its previous kinetic energy and the energy of the new strike, so the energy accumulates rather than resetting.
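The analogy above can be sketched in a few lines of Python. This is purely illustrative; the function name and the numbers are my own, not part of the framework itself:

```python
# Illustrative sketch of the "ping-pong" analogy: after each hit the
# ball's energy is its previous energy plus the energy of the new
# strike, so it accumulates instead of resetting between exchanges.

def rally_energy(strikes):
    """Return the ball's energy after each hit, given per-strike energies."""
    energy = 0.0
    history = []
    for strike in strikes:
        energy += strike  # previous kinetic energy + new strike
        history.append(energy)
    return history

print(rally_energy([1.0, 1.0, 1.0, 1.0]))  # → [1.0, 2.0, 3.0, 4.0]
```

Even identical, moderate strikes produce monotonically growing energy, which is the point of the analogy: no single exchange is extreme, yet the total never resets.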
I also assume that the Coupled Confirmation Bias (CCB) will manifest in a similar manner across different levels of conflict. This includes conflicts between individuals, between groups, and between states or blocs of states. This conclusion follows from general research on the nature of conflict.
I wish to emphasize that I do not believe AI’s influence on conflict escalation is deterministic in nature. Rather, I assume the existence of a tendency—an influence. Moreover, awareness of this mechanism, as I understand it, can paradoxically lead to a halt in escalation, once the parties realize they are under its influence.
2. Current State of Research
The mechanism of equilibrium between parties was described by J. Nash in various works, and conflict strategy by T. Schelling in The Strategy of Conflict.
The conflict structure adopted here is drawn from Christopher Moore’s book The Mediation Process: Practical Strategies for Resolving Conflict.
General cognitive biases have long been known to psychology.
For the development of the Coupled Confirmation Bias (CCB) concept, the starting point consists of works describing classical confirmation bias.
This bias refers to the tendency toward selective and subjective perception of data in order to confirm an already adopted thesis.
A well-known account is Daniel Kahneman’s Thinking, Fast and Slow.
This mechanism has also been identified at the level of human–AI interaction and described in detail, among others, in articles by M. Glickman and T. Sharot (https://pmc.ncbi.nlm.nih.gov/articles/PMC11860214/), Yuxin Liu and Adam Moore (https://pubmed.ncbi.nlm.nih.gov/40448478/), and L. Celar and Ruth M. J. Byrne (https://pubmed.ncbi.nlm.nih.gov/36964302/).
It is also necessary to mention the article by Ben Wang and Jiqun Liu, Cognitively Biased Users Interacting with Algorithmically Biased Results in Whole-Session Search on Debated Topics – https://dl.acm.org/doi/10.1145/3664190.3672520.
These authors point to the role of individual factors in susceptibility to cognitive biases arising in interaction with artificial intelligence.
From these works we learn about the feedback loop mechanism. It does not merely involve the presence of confirmation bias. It goes one step further and leads to reinforcement of the initial belief or prejudice, resulting in tunnel thinking.
However, in the works known to me, researchers have not examined dispute situations in which both sides use language models in the manner described above. The Coupled Confirmation Bias (CCB) is not the sum of two parties’ cognitive biases. It is a distinct phenomenon, a new quality that leads to escalation spirals in a manner qualitatively different from two independent errors.
3. From Individual Bias to Systemic Escalation
Classical confirmation bias describes an individual’s tendency to selectively search for, interpret, and remember information in ways that confirm prior beliefs.
Although this phenomenon is well documented, it is insufficient to explain situations in which both parties to a conflict, despite having access to initially similar data and using ostensibly neutral analytical tools, become increasingly entrenched in their own beliefs and prejudices.
In contemporary disputes, AI systems increasingly function as external legitimizers of interpretation, rather than merely computational tools. When each party uses such systems independently, confirmation bias ceases to be merely an individual cognitive tendency and begins to function as a mutually coupled dynamic.
4. Definition of the Coupled Confirmation Bias
The Coupled Confirmation Bias (CCB) is a conflict escalation mechanism in which two or more parties to a dispute, relying on external interpretive systems perceived as epistemically privileged, mutually legitimize their own narratives. This mechanism is recursive. Actions taken on the basis of such legitimization subsequently become input data for further analysis on the opposing side. This leads to a coupled interpretive spiral and a gradual narrowing of the negotiation space.
The constitutive feature of CCB is not merely the presence of biased reasoning. It is the dynamic coupling of interpretive loops between actors. In this model, actions of one party shaped by AI analysis become direct input for the system used by the other party. This creates a closed circuit in which each successive interaction does not bring the parties closer to consensus. Instead, it provides “objective” material for deepening the original prejudices.
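The closed circuit described above can be made concrete with a deliberately simplified toy simulation. Everything here, the update rule, the parameter names, and the numbers, is my own illustrative assumption, not part of the framework itself: each party holds a hostility estimate, acts on it, and its "AI" reads the opponent's action through a lens weighted toward the existing estimate.

```python
# Toy model of the coupled interpretive loop (illustrative assumptions only).
# Each party holds a hostility estimate h in [0, 1]. Its "AI" blends the
# opponent's last action with the party's prior estimate (confirmation
# weight), and the estimate ratchets upward; the resulting action then
# becomes the opponent's next input.

def step(h_own, opponent_action, bias=0.6):
    """Update one party's hostility estimate from the opponent's action.

    The AI-mediated reading mixes the raw signal with the prior estimate,
    so ambiguous signals tend to be read as confirming that estimate.
    """
    perceived = bias * h_own + (1 - bias) * opponent_action
    return min(1.0, h_own + 0.1 * perceived)  # estimate only moves upward

h_a, h_b = 0.3, 0.3                 # both parties start mildly suspicious
for _ in range(10):
    action_a, action_b = h_a, h_b   # each party acts on its own estimate
    h_a = step(h_a, action_b)       # A's AI interprets B's action
    h_b = step(h_b, action_a)       # B's AI interprets A's action

print(round(h_a, 2), round(h_b, 2))  # → 0.78 0.78
```

No party ever acts in bad faith in this sketch, yet the two estimates climb in lockstep because each party's output is the other's input: a minimal rendering of the "closed circuit" in which interaction supplies material for deepening the original prejudice rather than correcting it.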
5. Hypothesis
H1:
In bilateral or multilateral conflicts where parties independently use AI models to interpret the dispute and justify their own positions, the probability of escalation and negotiation breakdown is higher than in structurally similar conflicts in which such systems are not used.
H1a:
In interpretation-based conflicts, symmetrical access to information increases the risk of escalation more than informational asymmetry.
This is because it eliminates the possibility of explaining disagreement through lack of knowledge and shifts the conflict to the level of intentions and alleged rationality.
5.1.
I am aware that AI models may be used by each party at different times, to different extents, for different purposes, and in different ways. They may begin using AI simultaneously or at different moments. One party may stop or limit its use earlier, while the other continues. One party may analyze only external factors, another party declarations, and at another time attempt to infer the opponent’s intentions using AI. One party may use AI for analysis, another for emotional support. Finally, parties may use different models and interact with them differently, including by providing more or less manipulated input data. As demonstrated by the aforementioned research of Ben Wang and Jiqun Liu, individual factors also influence susceptibility to AI suggestions and outputs.
All these factors must be taken into account. I recognize that resulting differences may cause the Coupled Confirmation Bias (CCB) not to emerge or to dissipate during the conflict. At this point, my thesis concerns situations in which both parties use AI in a relatively symmetrical manner. The impact of asymmetry on CCB requires further research.
5.2.
Research has repeatedly shown that systems composed of three actors are significantly less stable than those involving two or four actors. I believe that the role of AI in accelerating escalation will be particularly visible in three-actor configurations. Further research will need to determine differences in outcomes when, in systems of three or more participants, only some of them use AI.
5.3.
I wish to emphasize clearly that AI does not create or escalate conflict by itself. However, it has a powerful influence on user perception, interpretation, and ultimately decision-making. Actions taken as a result of such decisions are then interpreted in an analogous manner by the opposing party.
5.4.
The Coupled Confirmation Bias (CCB) is also not an example of so-called echo chambers limited to two or a small number of participants. Echo chambers are inherently static. CCB is dynamic, because each subsequent interaction changes the behavior of the other party.
6. Falsifiability of the Coupled Confirmation Bias (CCB)
In The Logic of Scientific Discovery, Karl Popper introduced falsifiability as a necessary condition for recognizing a theory as scientific.
It is therefore necessary to specify which factors condition the emergence of the described mechanism, which contribute to its deactivation, and above all, what would falsify this theory.
6.1. Conditions for Activation
The Coupled Confirmation Bias typically emerges when the following conditions are jointly met:
Symmetrical legitimization of narratives
Each party has tools or advisors confirming its interpretation as rational and justified.
Absence of a shared epistemic authority
There is no institution, mediator, or procedure recognized by all parties as a final arbiter.
High cognitive cost of changing position
Changing position would undermine earlier “rational” decisions supported by AI analysis.
Presence of an apparently neutral third actor
AI systems, perceived as objective and interest-free, reinforce the legitimization of each narrative.
Paradoxically, this perceived neutrality enables their functional contribution to confirmation bias on both sides and, consequently, to the emergence of the Coupled Confirmation Bias (CCB).
6.2. Limits of Applicability: When CCB Does Not Operate
The Coupled Confirmation Bias is not universal. The mechanism weakens or does not occur when:
– A commonly recognized factual arbiter exists.
– Only one party uses AI tools, breaking coupling symmetry.
– The stakes of the conflict are low or reversible.
– AI is used exclusively for information, not interpretation.
– Parties possess high metacognitive competence and actively counteract their own biases.
– The dispute concerns clearly measurable parameters rather than interpretations of intent or attribution of blame.
– Significant asymmetry arises in AI usage regarding time, scope, purpose, or method.
The CCB mechanism is also weakened when parties are mutually aware of using similar interpretive tools and are capable of metacognitive reflection on their influence. Disclosure of AI use may itself become a de-escalatory factor.
These limits distinguish CCB from general theories of conflict escalation.
6.3. What Would Falsify This Theory?
The hypothesis would be falsified by observing the opposite effect: a mitigating or de-escalatory influence of AI models on conflict dynamics.
Such an effect might appear if an AI system suddenly altered its narrative after identifying a feedback loop and explicitly informed the user of its existence.
I have not observed anything of this kind to date.
The hypothesis could also be falsified if earlier findings on feedback loops and tunnel thinking proved incorrect, or if AI evolution fundamentally altered interaction principles.
At present, I am not aware of evidence supporting such claims.
Falsification would also occur if escalation in observed cases were shown to result from factors other than cognitive biases arising from AI interaction.
Finally, the hypothesis would be falsified if conflict dynamics were identical regardless of whether participants used AI.
Current observations contradict this.
7. Systemic Effects: The Dynamics of Recursive Escalation
Activation of the CCB mechanism shifts conflict from contested interests to closed interpretive loops.
This entails the following systemic consequences:
Autocatalytic escalation
Due to its recursive nature, every de-escalatory communication attempt is filtered through the opposing party’s AI. If the system interprets the other side as disloyal, even goodwill gestures are framed as strategic manipulation, paradoxically reinforcing escalation.
Ambiguity collapse
In classical disputes, uncertainty about intentions leaves room for interpretation. In CCB, AI “closes” interpretations by granting them analytical certainty. Parties stop discussing facts and operate instead on finalized analytical outputs—judgments about the opponent’s intentions.
The legitimacy trap
Because each party possesses “objective” confirmation of its position from a subjectively epistemically privileged system, compromise becomes framed as irrational or logically flawed.
Erosion of shared reality
Recursive layering of analyses causes parties to stop responding to real actions and instead react to how their own AI models predict the opponent’s AI interpretations. The conflict detaches from reality and moves into a model-to-model interaction space.
Position inertia
AI-shaped narratives exhibit strong resistance to change (cognitive inertia). Challenging conclusions generated by advanced analytical models would require decision-makers to admit a fundamental error in tool selection. This raises both psychological and “political” costs of de-escalation.
Importantly, this escalation occurs without bad faith and often without conscious strategic intent.
8. AI as a Quasi-Third Actor?
Although artificial intelligence lacks agency and its own interests, its functional role in the CCB mechanism cannot be overlooked. By providing authoritative legitimization while lacking accountability, AI systems influence the dynamics of a dispute. They affect the subjective assessment of escalation costs and stabilize mutually exclusive narratives. To be clear, I am not suggesting that AI’s role grants it the status of a party in an ontological sense. Rather, it acts as a mirror of the user’s initial beliefs. It is not, however, a passive mirror but an active one, invested with trust and reinforcing convictions.
9. Conclusion
The Coupled Confirmation Bias (CCB) provides a conceptual framework for understanding why introducing ostensibly rational tools may accelerate conflict escalation. The proposed concept does not claim universality or empirical finality. It is a hypothesis of a mechanism observed in practice, inviting further criticism, testing, and refinement.
Understanding CCB is relevant not only for lawyers, mediators, and conflict managers. It is also significant for designers and users of AI systems in confrontational contexts.
10. Author’s Methodological Note
This article presents my original analytical and conceptual framework based on patterns I have observed in conflicts. These observations stem from my own legal practice in the second half of 2025. The article does not constitute an empirically validated theory. It is a hypothesis. Its purpose is to describe a phenomenon and indicate directions for further analysis. In developing this work, I used language models to critically evaluate my own theses and to identify weaknesses in my reasoning. AI assistance was critical rather than generative. It served to identify potential vulnerabilities in the argumentation.
