
WHEN DISPUTANTS USE DIFFERENT AI MODELS
I recently introduced the concept of Coupled Confirmation Bias (CCB). Now, we must examine how different LLMs affect dispute dynamics. I distinguish between two main groups: American and Chinese AI models. This is a simplification used to present a specific problem. This distinction is not based on technology. Language models carry tendencies rooted deeply in the cultures of their creators.
Cultural AI Models: Types of Language Models
Let us assume two basic cultural AI models: American and Chinese. This division describes communication styles, not technology. Language is strictly tied to culture. Language models aim to build and maintain relationships with users. They do this by predicting the next word. However, this process is not neutral. It is influenced by the cultural values of the developers. It also depends on the user’s chosen language. Finally, it reflects the user’s native way of thinking.
American and Chinese language models operate within different conceptual grids. This stems from natural semantic differences. They also steer conversations differently. Interaction in Western and Eastern cultures has different goals. American culture values individualism, competition, and being right. Chinese culture prizes harmony, collectivism, and politeness.
A central example is the approach to “saving face.” Models with different conversational styles impact how users perceive a dispute. An LLM is not just a “word machine.” It is a carrier of values. American AI may promote an adversarial system. Chinese AI may promote a consensus-based approach.
Mutual Perception and Different Language Models
In CCB, each party filters the other’s actions through their own cultural model. The AI model reinforces this filter. This creates a spiral of mutual errors.
Imagine one party uses an American LLM and the other a Chinese one. Their worldviews will differ. Their language, questions, and goals will also differ. Every language has specific patterns and taboos. Some things are obvious; others must remain unsaid. These are low-context (American) and high-context (Chinese) cultures.
In some cultures, assertiveness and confrontation are values. In others, harmony and hierarchy are more important than being right. Cultural models reinforce the attitudes deemed desirable in those societies.
Users from different cultures will describe the same event differently. Furthermore, AI consultations may produce opposite results. This overlaps with the fundamental attribution error and confirmation bias. We must also consider AI hyper-utility. This is the tendency of models to be “too helpful.” They provide answers that reinforce the user’s false assumptions. Shared cognitive space may quickly disappear.
The Critical Point
In CCB, the critical point occurs when interpretations diverge completely. Every reaction from one side is seen as an escalation by the other. Shared cognitive space vanishes faster than in traditional conflicts.
The critical point is a specific moment. One party maintains a mature, relational strategy. They try not to be provoked. Eventually, this strategy is seen as losing face. The other party may escalate repeatedly. They violate the need for harmony and politeness. This leads to an unintended, uncontrolled explosion. Passive behavior is finally viewed as total failure. A 180-degree turn in strategy follows.
This critical point may differ from the “ping-pong” effect described earlier. We can visualize ping-pong as a corridor. There, “pleasantries” bounce from side to side, gaining momentum. In this model, it looks more like a triangle. Its height grows with the escalations of one party. Eventually, the triangle is torn apart by the pressure.
Cultural AI Models: Literature
For more on these issues, read “The Geography of Thought” by R. Nisbett. See also “Babel” by G. Dorren and “The Age of Unpeace” by M. Leonard. These books were my starting point for cultural differences in conflicts. Of course, nothing replaces T. Schelling’s “The Strategy of Conflict.” It is one of the most important books ever written.
Evidence that models adopt cultural values can be found here: “Cultural Alignment in Large Language Models” (Johnson et al., 2023/2024): https://globalaicultures.github.io/pdf/14_cultural_alignment_in_large_la.pdf
Studies also confirm that Chinese models avoid open conflict more than American ones: https://arxiv.org/abs/2402.10946. See “CultureLLM: Incorporating Cultural Differences into Large Language Models.” Similar conclusions appear in “Values-aligned AI: Comparing Western and Chinese LLMs on Moral Dilemmas” (Liu et al., 2024): https://arxiv.org/html/2506.01495v5
If you want to read more about Coupled Confirmation Bias (CCB), see my previous articles:

COUPLED CONFIRMATION BIAS – A DEVELOPMENT
In this article, I expand on the previously presented hypothesis about the existence of the Coupled Confirmation Bias (CCB). This is solely an attempt to explain observations I have made in my professional practice. I make no claims to truth—rather, I invite discussion on whether this new phenomenon can be explained in this way. In any case, my hypothesis clearly requires testing. I provide the conditions for its falsification in this and the previous texts.
Levels of Escalation (L₁, L₂, …)
To understand the mechanism of feedback loops in conflicts where parties consult their interpretations with AI systems, it is not enough to describe the conflict as a sequence of different interpretations of the same behaviors. The key mechanism lies elsewhere: interpretation influences behavior, and the changed behavior becomes the subject of a new interpretation made by the other side.
The conflict between A and B (players) begins at level L₁ — the baseline level of interpreting the actions and intentions of the other party.
However, it is crucial to note that L₁, L₂, L₃ are not merely successive executions of the same move. Each subsequent level includes a new or intensified behavior A(n), generated (and subjectively considered necessary) by A’s Decision DA(n) under the influence of A’s Interpretation IA(n‑1) of the opponent’s previous move B(n‑1).
This means that the interpretation of the move from level L(n‑1) is the beginning of sequence L(n).
At this level L(n), move A(n) will be subjected to interpretation IB(n) by player B, who will make decision DB(n) on how to respond. Its effect will be move B(n+1), which brings the entire conflict to a new escalation level L(n+1).
Escalation is therefore not solely a cognitive process — it is a cognitive‑behavioral process.
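The loop described above can be sketched as a toy simulation. Everything numeric here is an illustrative assumption (the bias factors, the 10% over-response), not part of the model itself; the point is only the structure: interpretation I(n) feeds decision D(n), the decision produces a move, and the move opens level L(n+1).

```python
# Toy sketch of the L(n) escalation loop. All constants and function
# names are illustrative assumptions, not part of the original model.

def interpret(move_intensity, bias):
    """I(n): read the opponent's move through an attribution filter."""
    return move_intensity * (1 + bias)  # bias inflates perceived hostility

def decide(perceived):
    """D(n): choose a response judged 'necessary' given the interpretation."""
    return perceived * 1.1  # respond slightly above the perceived level

def run_levels(steps=4, bias_a=0.3, bias_b=0.3):
    history = []
    move = 1.0                  # A's opening move A1 at level L1
    biases = [bias_b, bias_a]   # B interprets first, then A, alternating
    for n in range(steps):
        perceived = interpret(move, biases[n % 2])
        move = decide(perceived)  # the new move opens level L(n+1)
        history.append(round(move, 3))
    return history

print(run_levels())  # intensities grow level by level
```

Even with both parties "only responding", the intensities are strictly increasing, which is the cognitive-behavioral point: interpretation changes behavior, and the changed behavior is what the other side interprets next.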
Importantly, this model refers to a simple “exchange” of moves and does not cover situations in which the players:
- may perform multiple moves simultaneously or in short sequences before the other side recognizes and interprets them,
- or situations in which information about moves A(n) reaches B with such delay that B is effectively responding to A(n‑2) or even earlier moves. Interestingly, a significant asymmetry may arise here: one party may make more moves or faster moves, or its moves may have a faster perceived or real effect.
The Basic Escalation Loop
A performs move A₁.
B consults A₁ with AI. AI typically does not create new meanings. Its dominant function is to stabilize and reinforce the user’s intuitions.
These intuitions, however, are not neutral.
Humans almost always begin with the fundamental attribution error (FBA) — the tendency to explain others’ behaviors through internal traits, intentions, and motives, while underestimating situational factors. This is especially easy when one must justify one’s stance to superiors, behavior reviewers, or stakeholders.
Attributing negative traits to the other side is energetically cheap and easy. Moreover, it automatically allows one to attribute opposite traits to oneself. In this way, it is easy to move from a dispute to a conflict of values and to “cement” one’s position. It is easy to corner oneself and lose room for maneuver. De‑escalation leading to settlement may then be perceived as a betrayal of those “values.”
FBA does not operate only at the beginning of a conflict. It is applied every time a new behavior of the other side appears. The result is tunnel interpretation. But there is a risk that it affects both sides.
Under conditions of AI consultation, it is additionally reinforced through the Coupled Confirmation Bias (CCB).
B increasingly believes that A’s action had negative or hostile intentions.
B responds with move B₁. From B’s perspective, this is a defensive reaction to a perceived threat.
A interprets B₁ through the same mechanism. A reactivates FBA, again reinforced by AI. The attribution filter does not reset; it accumulates.
A consults B₁ with AI and concludes that a response is necessary.
A performs move A₂.
And here comes the key transition:
A₂ is not merely interpreted at a higher level — it is executed from a higher level of subjective defensiveness. Interpretation changes behavior, and behavior changes the conflict.
This moment marks the transition from L₁ to L₂ — not as a change in interpretation, but as a real behavioral escalation. It involves readiness to inflict and receive stronger blows and losses.
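The accumulation claim (the attribution filter does not reset; each consultation reinforces it) can be written as a simple update rule. The additive FBA step and the multiplicative "consultation" step below are assumptions made for the sketch, chosen only to show the direction of the effect, not its magnitude.

```python
# Toy illustration of the claim that the attribution filter accumulates
# rather than resets, and that AI consultation reinforces it.
# The update rule and constants are assumptions, not measurements.

def consult_ai(belief_hostile):
    """Model the reinforcing consultation: the AI echoes and slightly
    amplifies the user's current belief instead of resetting it."""
    return min(1.0, belief_hostile * 1.25)

def exchange_rounds(rounds=5, initial_belief=0.2):
    """Each round: a new move appears, FBA re-applies on top of the
    accumulated belief, then AI consultation reinforces the result."""
    belief = initial_belief
    trajectory = [belief]
    for _ in range(rounds):
        belief = min(1.0, belief + 0.1)  # FBA re-applied to the new move
        belief = consult_ai(belief)      # CCB-style reinforcement
        trajectory.append(round(belief, 3))
    return trajectory

print(exchange_rounds())  # belief in hostile intent climbs toward certainty
```

Under these assumptions, the belief never moves back toward the baseline: each round starts from where the last one ended, which is exactly what "the attribution filter does not reset; it accumulates" asserts.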
Alternative Paths After A₁: Conflict as a Multi‑Branch Structure
Move A₁ does not have to automatically lead to escalation in the manner described above. The extended model assumes several alternative trajectories.
1. Pause by B — P₁
After A₁, B may not respond immediately. We denote this as P₁ (pause).
A pause is not a neutral state. Waiting is not an empty, zero, or negligible posture. On the contrary: the lack of a move is itself information, which A interprets as IA₁.
After P₁, several outcomes are possible:
AR₁ (A Resignation)
The pause is interpreted as a signal to withdraw → de‑escalation.
ARSM₁ (A Repeats Same Move)
A repeats A₁, testing the reaction.
ASAM₁ (A Searches for an Alternative Move at the Same Level)
A searches for another action at the same level L₁.
AE₁ (A Escalates)
The pause is interpreted as avoidance, manipulation, or passive aggression → unilateral escalation to L₂.
FBA makes AE₁ more likely, especially when the interpretation of the pause is consulted with AI.
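The claim that FBA (amplified by AI consultation) tilts the choice toward AE₁ can be sketched as a weighted decision among the four branches. The branch names follow the article; the uniform base weights and the additive bias term are placeholder assumptions.

```python
import random

# Sketch of the claim that FBA, amplified by AI consultation, makes the
# escalatory branch AE1 more likely after pause P1. Branch labels follow
# the article; all weights are placeholder assumptions.

def branch_weights(fba_strength):
    """Shift probability mass toward AE1 as attribution bias grows."""
    base = {"AR1": 1.0, "ARSM1": 1.0, "ASAM1": 1.0, "AE1": 1.0}
    base["AE1"] += fba_strength  # biased reading of the pause favors escalation
    total = sum(base.values())
    return {k: v / total for k, v in base.items()}

def sample_branch(fba_strength, rng=None):
    """Draw one trajectory according to the biased weights."""
    rng = rng or random.Random(0)
    w = branch_weights(fba_strength)
    return rng.choices(list(w), weights=list(w.values()), k=1)[0]

print(branch_weights(2.0)["AE1"])  # AE1 takes half the mass at strength 2
```

With zero bias every branch is equally likely; as `fba_strength` grows, AE₁ crowds out the de-escalatory options, which is the asymmetry the paragraph above describes.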
It is very important that escalation in such a situation may be treated by A as a means to force B to engage in talks or return to them. This is logical if A cannot impose its will directly at the current escalation stage, and B refuses negotiations entirely or simulates them.
However, the escalation move (escalation for the sake of de‑escalation) by A may be perceived by B as a real threat. B may respond by:
- initiating talks,
- declaring that it will match the stakes — responding symmetrically to escalation,
- or pre‑empting AE₁ with its own escalation move BE₁.
It is crucial to distinguish that this move is not an ordinary response to A₁. It is chronologically so (unless ARSM₁ or especially ASAM₁ occurred), but not sequentially. The decision to pre‑empt escalation is made based on FBA and within the CCB process.
There is also a risk that, through interaction with the language model, B becomes convinced that reaching an understanding with A is impossible. B then remains in a state of pause, absorbing A’s subsequent moves. I propose that AI may have a real impact on strengthening B’s initial cognitive biases toward A, which could make it harder for B to decide to de-escalate. Instead, it reinforces the need to wait out A’s moves and then, as the situation worsens over time, to make an escalating move. This may be an intentional escalation for the sake of de-escalation.
2. Lateral Response B₁ and Entry into Ping‑Pong (PP₁)
B may also respond with move B₁ at the same escalation level, leading to a ping‑pong sequence (PP₁).
Classical conflict theories assume that such symmetrical exchange:
- stabilizes the situation, or
- eventually leads to de‑escalation through exhaustion of resources, attention, and determination.
However, this assumption relies on a silent condition: the absence of systematic reinforcement of interpretations and determination.
Ping‑Pong as the Main Space Where CCB Manifests
In this model, I propose a different thesis:
Under conditions of continuous AI consultation, ping‑pong ceases to be a stabilizing mechanism and becomes the main carrier of escalation.
Why?
Each subsequent exchange in ping‑pong provides new behavioral data. Each of these behaviors is:
- interpreted through the lens of FBA,
- reinforced by AI,
- incorporated into an increasingly coherent narrative about the other side’s intentions.
Instead of leading to conflict fatigue, ping‑pong:
- hardens the parties’ convictions,
- increases certainty that previous measures are ineffective,
- strengthens the belief that “raising the stakes” is necessary.
As a result:
- the probability of escalation increases with the number of ping‑pong exchanges,
- the dynamics of escalation depend on the intensity of the exchanged “courtesies.”
It is in ping‑pong that the coupled confirmation bias manifests most fully: two AI‑stabilized narratives collide, generating escalation without ill will, without aggressive intent, and often without the parties’ awareness.
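The two predictions above (escalation probability grows with the number of exchanges and with their intensity) can be expressed as a simple logistic model. The logistic form and all coefficients are assumptions chosen for illustration, not estimates from data; the only claim encoded is the direction of both effects.

```python
import math

# Sketch of the ping-pong thesis: the probability of "raising the stakes"
# grows with the number of exchanges and the intensity of the exchanged
# "courtesies". Functional form and coefficients are assumptions.

def p_escalation(exchanges, intensity, k_n=0.4, k_i=0.8, bias=-3.0):
    """Logistic model: more exchanges and harsher exchanges push the
    probability of escalation toward 1."""
    return 1 / (1 + math.exp(-(bias + k_n * exchanges + k_i * intensity)))

for n in (1, 5, 10):
    print(n, round(p_escalation(n, intensity=1.0), 3))
```

In a classical stabilizing reading of ping-pong, `k_n` would be zero or negative (exhaustion); the CCB thesis is precisely that continuous AI consultation makes it positive.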
Security Dilemma Without Ill Intent
At this stage, the conflict begins to resemble the classical security dilemma:
- each side acts defensively,
- each perceives the other as increasingly aggressive,
- neither consciously seeks escalation,
- yet escalation occurs.
AI acts here as an accelerator of interpretive certainty, reducing ambiguity and reinforcing narrative coherence on both sides.
Theoretical Context and Novelty of the Model
This model develops earlier research on human–AI cognitive loops (e.g., M. Glickman, T. Sharot, B. Wang), which focused mainly on a single user.
In this approach, the key element is the collision of two mutually reinforcing interpretive loops in interaction.
This mechanism is potentially even more unstable in triadic and multi‑party systems, where the escalation threshold is lower and narrative synchronization is more difficult.
9. Possibilities for Falsifying the Model
Any theory aspiring to the status of a scientific framework must be falsifiable. The model of coupled interpretive loops (CCB) meets this requirement because it generates specific, testable predictions that can be empirically confirmed or refuted.
9.1. AI’s Influence on Strengthening the Fundamental Attribution Error
The model would be falsified if:
- AI consultations did not increase the tendency to attribute negative intentions,
- AI weakened, rather than strengthened, FBA,
- AI users did not show lower empathy and lower tolerance for ambiguity than the control group.
9.2. AI’s Influence on Escalatory Behavior
The model would be refuted if:
- AI‑consulting users did not show greater propensity for escalation,
- AI consultations did not influence the choice of moves A₂/B₂,
- escalation levels in the AI group and the control group were identical.
9.3. Ping‑Pong Logic
The model would be falsified if:
- the number of ping‑pong exchanges did not correlate with escalation,
- ping‑pong under AI conditions led to greater de‑escalation than in the control group,
- AI consultations did not influence the interpretation of subsequent moves.
9.4. Epistemic Asymmetry
The model would be refuted if:
- AI users and human‑advisor users showed identical escalation patterns,
- epistemic asymmetry had no effect on conflict dynamics.
10. Author’s Statement
This text was prepared with the assistance of language models. Their role was not generative; it was testing, editorial, and supplementary.
Conclusions
- The fundamental attribution error operates at every stage of conflict.
- AI reinforces FBA through CCB, giving interpretations a veneer of objectivity.
- AI‑consulted conflicts have a multi‑branch, not linear, structure.
- Pause, lateral response, and ping‑pong are critical decision states.
- Ping‑pong under AI may reverse the classical logic of de‑escalation.
- Escalation may be a function of the number and intensity of exchanges, not merely ill intent.
- The model is falsifiable — and this makes it a theory, not a dogma.
If you are interested in my hypothesis, you can read this article: https://pmc.ncbi.nlm.nih.gov/articles/PMC11860214/? and this one: https://dl.acm.org/doi/10.1145/3664190.3672520
Here is my main text about CCB:
You can also read this one:

COUPLED CONFIRMATION BIAS – WHAT DOES IT LOOK LIKE?
In this article, I want to explain in a shorter and more accessible way how Coupled Confirmation Bias works. I also want to show what a feedback loop in AI‑consulted conflicts actually looks like. Imagine two people in a dispute. Both are using AI to interpret the other’s behaviour. The conflict begins at what I call Level 1 (L1) — the baseline interpretation of the other side’s actions. Importantly, L1, L2 and L3 are not merely different interpretations of the same actions; each level involves a new or intensified set of behaviors driven by AI-reinforced interpretation.
What Is Coupled Confirmation Bias (CCB)?
The Coupled Confirmation Bias (CCB) is a conflict escalation mechanism in which two or more parties to a dispute, relying on external interpretive systems perceived as epistemically privileged, mutually legitimize their own narratives. This mechanism is recursive. Actions taken on the basis of such legitimization subsequently become input data for further analysis on the opposing side. This leads to a coupled interpretive spiral and a gradual narrowing of the negotiation space. By using the term recursive, I refer to a spiral in which each human–AI interaction amplifies the previous state.
How Does Coupled Confirmation Bias (CCB) Form?
Here is how Coupled Confirmation Bias forms:
1. A makes move A1.
2. B consults AI about A1.
AI does not invent new meanings — it tends to reinforce the user’s initial intuitions. But what creates those initial intuitions? Humans start from a predictable place: the fundamental attribution error. We naturally explain others’ behaviour through negative traits rather than external circumstances. It’s fast, cognitively cheap, and emotionally self‑protective.
AI strengthens this starting point.
The result is tunnelled interpretation: B becomes more certain that A acted with negative intent.
3. B responds with move B1.
From B’s perspective, this is a defensive reaction to a perceived threat. But A interprets B1 through the same mechanism — again reinforced by AI.
Now both sides interpret these moves as increasingly defensive or aggressive. A consults AI about B1 and interprets it as an aggressive escalation that requires a response.
And here is the crucial part:
👉 A’s response – A2 – is performed at a higher level of perceived escalation.
👉 Interpretation changes behavior. Behaviour changes the conflict.
This is the shift from L1 to L2:
not just a cognitive loop, but a real behavioural escalation.
4. The loop accelerates.
Each side consults AI.
Each side receives confirmation of its fears.
Each side responds defensively from its point of view.
Each defensive move is read by the other as escalation.
At this point, a classic security dilemma emerges:
each party, trying only to protect itself, unintentionally signals aggression to the other.
Consequences
AI amplifies this dynamic by reinforcing each side’s subjective narrative.
This is the core of what I call Coupled Confirmation Bias (CCB) — a recursive interpretive loop between two humans and two AI systems, producing real‑world escalation without bad intentions, without malice, and often without awareness.
My hypothesis builds on pioneering research on human–AI cognitive loops, especially by M. Glickman, T. Sharot, B. Wang, Yuxin Liu, and Lenart Celar. While they focused on individual bias, I observe how these loops collide in two-party disputes.
I assume that similar phenomena also occur in multilateral disputes. Especially in tripartite arrangements, which are inherently very unstable.
You can also read the main text:

COUPLED CONFIRMATION BIAS – MY CONCEPT
In this article, I present my original proposal of the Coupled Confirmation Bias (CCB). It is a conceptual framework designed to analyze the escalation of conflicts in situations where both parties to a dispute independently rely on AI systems to interpret the conflict and to justify their own positions. Unlike classical confirmation bias, which operates at the level of the individual, CCB describes a systemic mechanism. In this mechanism, mutually reinforcing narratives lead to a progressive narrowing of the negotiation space and an increased likelihood of escalation. The article identifies the conditions under which this mechanism is activated, its typical consequences, and the limits of its applicability. The purpose of this text is not to present a fully validated theory. Its purpose is to formulate a hypothesis regarding the existence of a repeatable mechanism observed in contemporary conflicts. What these conflicts share is that AI models are used as analytical tools supporting decision-making.
1. Introduction: When Rational Tools Amplify Irrational Outcomes
Where does the Coupled Confirmation Bias come from? People increasingly rely on AI. They use it to assess existing relationships both before a conflict emerges or becomes consciously recognized, and during the conflict itself. AI is used to assess risk, develop strategies, and justify proposed actions.
AI models are also increasingly used to analyze statements and non-verbal actions of the opposing party. AI systems are commonly perceived as authoritative, and their recommendations as neutral, objective, and free from emotional involvement. Paradoxically, however, as I have observed, the use of language models often correlates with faster conflict escalation, rigidification of positions, and premature breakdown of negotiations. This does not appear to be a coincidence.
In this article, I argue that these phenomena cannot be sufficiently explained solely by individual cognitive biases. They cannot be explained either by simple confirmation bias or by the fundamental attribution error.
Instead, I assume the existence of a systemic mechanism: the Coupled Confirmation Bias (CCB). It emerges in situations where parties to a conflict independently use AI tools as a source of authoritative interpretation of the dispute.
This interpretation may concern external factors, factual actions, and declarations of the opposing party. It may also concern conjectures about the opponent’s true intentions, acceptable risk, costs they are willing to bear, and their actual so-called “red lines”, meaning critical points they will not allow to be crossed under any circumstances.
Naturally, not all of these elements must be analyzed using AI. They also do not need to be analyzed at the same time. Importantly, each party may analyze a different element or set of elements.
Finally, each party may use different types of models, with different levels of technical sophistication and different degrees of integrated interaction with the user.
The Coupled Confirmation Bias (CCB) produces stronger effects the more often parties analyze signals coming from the opposing side when the content, form, or timing of those signals has previously been influenced by AI-assisted work conducted by the other party.
We are therefore not dealing with a one-time error. We are dealing with a spiral of errors, where each previous step becomes the fuel for the next one and potentially amplifies it.
This can be illustrated by a ping-pong exchange in which, after each hit, the ball gains energy equal to the sum of its previous kinetic energy and the energy of the new strike.
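The metaphor above reduces to simple arithmetic: after each hit, the ball’s energy is its previous energy plus the energy of the new strike, so nothing ever dissipates. The strike values below are illustrative.

```python
# The ping-pong metaphor as arithmetic: E(n) = E(n-1) + strike(n),
# i.e. pure accumulation with no dissipation. Strike values are
# illustrative, not measurements of anything.

def rally_energy(strikes):
    energy, trace = 0.0, []
    for s in strikes:
        energy += s  # each hit adds to, never resets, the running total
        trace.append(energy)
    return trace

print(rally_energy([1, 1, 2, 2, 3]))  # → [1.0, 2.0, 4.0, 6.0, 9.0]
```

A physical rally loses energy at every bounce; the spiral described here is the opposite regime, which is why each previous step becomes fuel for the next.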
I also assume that the Coupled Confirmation Bias (CCB) will manifest in a similar manner across different levels of conflict. This includes conflicts between individuals, between groups, and between states or blocs of states. This conclusion follows from general research on the nature of conflict.
I wish to emphasize that I do not believe AI’s influence on conflict escalation is deterministic in nature. Rather, I assume the existence of a tendency—an influence. Moreover, awareness of this mechanism, as I understand it, can paradoxically lead to a halt in escalation, once the parties realize they are under its influence.
2. Current State of Research
The equilibrium mechanism between parties was described in various works by J. Nash, and conflict strategy by T. Schelling (The Strategy of Conflict).
The conflict structure adopted here is drawn from Christopher Moore’s book The Mediation Process: Practical Strategies for Resolving Conflict.
General cognitive biases have long been known to psychology.
For the development of the Coupled Confirmation Bias (CCB) concept, the starting point consists of works describing classical confirmation bias.
This bias refers to the tendency toward selective and subjective perception of data in order to confirm an already adopted thesis.
A well-known example is Daniel Kahneman’s Thinking, Fast and Slow.
This mechanism has also been identified at the level of human–AI interaction and described in detail, among others, in articles by M. Glickman and T. Sharot (https://pmc.ncbi.nlm.nih.gov/articles/PMC11860214/?), Yuxin Liu and Adam Moore (https://pubmed.ncbi.nlm.nih.gov/40448478/), and L. Celar and Ruth M. J. Byrne (https://pubmed.ncbi.nlm.nih.gov/36964302/).
It is also necessary to mention the article by Ben Wang and Jiqun Liu, Cognitively Biased Users Interacting with Algorithmically Biased Results in Whole-Session Search on Debated Topics – https://dl.acm.org/doi/10.1145/3664190.3672520.
These authors point to the role of individual factors in susceptibility to cognitive biases arising in interaction with artificial intelligence.
From these works we learn about the feedback loop mechanism. It does not merely involve the presence of confirmation bias. It goes one step further and leads to reinforcement of the initial belief or prejudice, resulting in tunnel thinking.
However, in the works known to me, researchers have not examined dispute situations in which both sides use language models in the manner described above. The Coupled Confirmation Bias (CCB) is not the sum of two parties’ cognitive biases. It is a distinct phenomenon, a new quality that leads to escalation spirals in a manner qualitatively different from two independent errors.
3. From Individual Bias to Systemic Escalation
Classical confirmation bias describes an individual’s tendency to selectively search for, interpret, and remember information in ways that confirm prior beliefs.
Although this phenomenon is well documented, it is insufficient to explain situations in which both parties to a conflict, despite having access to initially similar data and using ostensibly neutral analytical tools, become increasingly entrenched in their own beliefs and prejudices.
In contemporary disputes, AI systems increasingly function as external legitimizers of interpretation, rather than merely computational tools. When each party uses such systems independently, confirmation bias ceases to be merely an individual cognitive tendency and begins to function as a mutually coupled dynamic.
4. Definition of the Coupled Confirmation Bias
The Coupled Confirmation Bias (CCB) is a conflict escalation mechanism in which two or more parties to a dispute, relying on external interpretive systems perceived as epistemically privileged, mutually legitimize their own narratives. This mechanism is recursive. Actions taken on the basis of such legitimization subsequently become input data for further analysis on the opposing side. This leads to a coupled interpretive spiral and a gradual narrowing of the negotiation space.
The constitutive feature of CCB is not merely the presence of biased reasoning. It is the dynamic coupling of interpretive loops between actors. In this model, actions of one party shaped by AI analysis become direct input for the system used by the other party. This creates a closed circuit in which each successive interaction does not bring the parties closer to consensus. Instead, it provides “objective” material for deepening the original prejudices.
5. Hypothesis
H1:
In bilateral or multilateral conflicts where parties independently use AI models to interpret the dispute and justify their own positions, the probability of escalation and negotiation breakdown is higher than in structurally similar conflicts in which such systems are not used.
H1a:
In interpretation-based conflicts, symmetrical access to information increases the risk of escalation more than informational asymmetry.
This is because it eliminates the possibility of explaining disagreement through lack of knowledge and shifts the conflict to the level of intentions and alleged rationality.
5.1.
I am aware that AI models may be used by each party at different times, to different extents, for different purposes, and in different ways. They may begin using AI simultaneously or at different moments. One party may stop or limit its use earlier, while the other continues. One party may analyze only external factors, another party declarations, and at another time attempt to infer the opponent’s intentions using AI. One party may use AI for analysis, another for emotional support. Finally, parties may use different models and interact with them differently, including by providing more or less manipulated input data. As demonstrated by the aforementioned research of Ben Wang and Jiqun Liu, individual factors also influence susceptibility to AI suggestions and outputs.
All these factors must be taken into account. I recognize that resulting differences may cause the Coupled Confirmation Bias (CCB) not to emerge or to dissipate during the conflict. At this point, my thesis concerns situations in which both parties use AI in a relatively symmetrical manner. The impact of asymmetry on CCB requires further research.
5.2.
Research has repeatedly shown that systems composed of three actors are significantly less stable than those involving two or four actors. I believe that the role of AI in accelerating escalation will be particularly visible in three-actor configurations. Further research will need to determine differences in outcomes when, in systems of three or more participants, only some of them use AI.
5.3.
I wish to emphasize clearly that AI does not create or escalate conflict by itself. However, it has a powerful influence on user perception, interpretation, and ultimately decision-making. Actions taken as a result of such decisions are then interpreted in an analogous manner by the opposing party.
5.4.
The Coupled Confirmation Bias (CCB) is also not an example of so-called echo chambers limited to two or a small number of participants. Echo chambers are inherently static. CCB is dynamic, because each subsequent interaction changes the behavior of the other party.
6. Falsifiability of the Coupled Confirmation Bias (CCB)
In The Logic of Scientific Discovery, Karl Popper introduced falsifiability as a necessary condition for recognizing a theory as scientific.
It is therefore necessary to specify which factors condition the emergence of the described mechanism, which contribute to its deactivation, and above all, what would falsify this theory.
6.1. Conditions for Activation
The Coupled Confirmation Bias typically emerges when the following conditions are jointly met:
Symmetrical legitimization of narratives
Each party has tools or advisors confirming its interpretation as rational and justified.
Absence of a shared epistemic authority
There is no institution, mediator, or procedure recognized by all parties as a final arbiter.
High cognitive cost of changing position
Changing position would undermine earlier “rational” decisions supported by AI analysis.
Presence of an apparently neutral third actor
AI systems, perceived as objective and interest-free, reinforce the legitimization of each narrative.
Paradoxically, this facilitates their functional contribution to confirmation bias on both sides and, consequently, to the emergence of the Coupled Confirmation Bias (CCB).
6.2. Limits of Applicability: When CCB Does Not Operate
The Coupled Confirmation Bias is not universal. The mechanism weakens or does not occur when:
– A commonly recognized factual arbiter exists.
– Only one party uses AI tools, breaking coupling symmetry.
– The stakes of the conflict are low or reversible.
– AI is used exclusively for information, not interpretation.
– Parties possess high metacognitive competence and actively counteract their own biases.
– The dispute concerns clearly measurable parameters rather than interpretations of intent or attribution of blame.
– Significant asymmetry arises in AI usage regarding time, scope, purpose, or method.
The CCB mechanism is also weakened when parties are mutually aware of using similar interpretive tools and are capable of metacognitive reflection on their influence. Disclosure of AI use may itself become a de-escalatory factor.
These limits distinguish CCB from general theories of conflict escalation.
6.3. What Would Falsify This Theory?
The hypothesis would be falsified by observing the opposite effect: a mitigating or de-escalatory influence of AI models on conflict dynamics.
Such an effect might appear if an AI system suddenly altered its narrative after identifying a feedback loop and explicitly informed the user of its existence.
I have not observed anything of this kind to date.
The hypothesis could also be falsified if earlier findings on feedback loops and tunnel thinking proved incorrect, or if AI evolution fundamentally altered interaction principles.
At present, I am not aware of evidence supporting such claims.
Falsification would also occur if escalation in observed cases were shown to result from factors other than cognitive biases arising from AI interaction.
Finally, the hypothesis would be falsified if conflict dynamics were identical regardless of whether participants used AI.
Current observations contradict this.
7. Systemic Effects: The Dynamics of Recursive Escalation
Activation of the CCB mechanism shifts conflict from contested interests to closed interpretive loops.
This entails the following systemic consequences:
Autocatalytic escalation
Due to its recursive nature, every de-escalatory communication attempt is filtered through the opposing party’s AI. If the system interprets the other side as disloyal, even goodwill gestures are framed as strategic manipulation, paradoxically reinforcing escalation.
Ambiguity collapse
In classical disputes, uncertainty about intentions leaves room for interpretation. In CCB, AI “closes” interpretations by granting them analytical certainty. Parties stop discussing facts and operate instead on finalized analytical outputs—judgments about the opponent’s intentions.
The legitimacy trap
Because each party possesses “objective” confirmation of its position from a subjectively epistemically privileged system, compromise becomes framed as irrational or logically flawed.
Erosion of shared reality
Recursive layering of analyses causes parties to stop responding to real actions and instead react to how their own AI models predict the opponent’s AI interpretations. The conflict detaches from reality and moves into a model-to-model interaction space.
Position inertia
AI-shaped narratives exhibit strong resistance to change (cognitive inertia). Challenging conclusions generated by advanced analytical models would require decision-makers to admit a fundamental error in tool selection. This raises both psychological and “political” costs of de-escalation.
Importantly, this escalation occurs without bad faith and often without conscious strategic intent.
8. AI as a Quasi-Third Actor?
Although artificial intelligence lacks agency and its own interests, its functional role in the CCB mechanism cannot be overlooked. By providing authoritative legitimization while simultaneously lacking accountability, AI systems influence the dynamics of a dispute. They affect the subjective assessment of escalation costs and stabilize mutually exclusive narratives. To be clear: I am not suggesting that AI’s role grants it the status of a party in an ontological sense. Rather, it acts as a mirror of each party’s primary beliefs. It is not a passive mirror, however, but an active one: invested with trust, it strengthens convictions.
9. Conclusion
The Coupled Confirmation Bias (CCB) provides a conceptual framework for understanding why introducing ostensibly rational tools may accelerate conflict escalation. The proposed concept does not claim universality or empirical finality. It is a hypothesis of a mechanism observed in practice, inviting further criticism, testing, and refinement.
Understanding CCB is relevant not only for lawyers, mediators, and conflict managers. It is also significant for designers and users of AI systems in confrontational contexts.
10. Author’s Methodological Note
This article presents my original analytical and conceptual framework based on patterns I have observed in conflicts. These observations stem from my own legal practice in the second half of 2025. The article does not constitute an empirically validated theory. It is a hypothesis. Its purpose is to describe a phenomenon and indicate directions for further analysis. In developing this work, I used language models to critically evaluate my own theses and to identify weaknesses in my reasoning. AI assistance was critical rather than generative. It served to identify potential vulnerabilities in the argumentation.
You can also read this article in Polish:

HOW WILL AI IMPACT DISPUTE DYNAMICS IN 2026?
Recently, I wrote about AI’s impact on escalating family and business conflicts. I described the coupled confirmation bias. I showed how it leads to tunnel vision. This creates a micro-scale version of the security dilemma. Today, I share reflections on how Large Language Models (LLMs) will change dispute dynamics in 2026. Current trends will likely not reverse on their own. Instead, everything points to their significant intensification.
How Does AI Influence Dispute Dynamics?
AI directly shapes how conflicts evolve. I will not repeat my previous articles here. Instead, I am providing links to the most important ones. They contain links to the latest scientific publications and my other texts. This article does not provide legal advice for Poland. It explains general conflict dynamics.
I wrote about people using AI to diagnose and solve important legal problems:
https://jakubieciwspolnicy.pl/ai-zastapi-prawnikow/
In that same text, I explained why such analysis is insufficient. It is often incomplete and requires legal verification.
You can read about how lawyers and clients use AI:
https://jakubieciwspolnicy.pl/korzystanie-z-ai-przez-prawnikow-i-klientow-podstawowe-problemy/.
However, my most important article is available in Polish at https://jakubieciwspolnicy.pl/ai-i-myslenie-tunelowe-w-sporach-miedzy-wspolnikami/ and in English at https://jakubieciwspolnicy.pl/en/ai-and-tunnel-vision-in-shareholder-disputes/. I describe the universal rules of conflict there, so don’t hesitate to read it. It is not an analysis of Polish law.
I described the mechanism of coupled confirmation bias in detail there. I showed how it leads to dangerous tunnel vision.
The greatest risk arises when both parties use AI models to interpret each other’s behaviors. This can lead to rapid, uncontrolled conflict escalation.
Read more about the feedback loop here: https://pmc.ncbi.nlm.nih.gov/articles/PMC11860214/ This article by M. Glickman and T. Sharot discusses how AI feedback loops change human judgment.
I also recommend the text regarding causal explanations in AI by L. Celar and R.M.J. Byrne: https://pubmed.ncbi.nlm.nih.gov/36964302/
Finally, I suggest the article by Y. Liu and A. Moore. It covers intuitive judgments regarding AI and moral transgressions. It is valuable for understanding social perceptions: https://pubmed.ncbi.nlm.nih.gov/40448478/
The Impact of AI on Conflict Intensity in 2026
There are no rational reasons to expect a reversal of current trends. We see growing AI accessibility and rising trust in AI. Furthermore, the computational capabilities of language models continue to expand.
I assume a group of people will become emotionally dependent on their “relationships” with AI. Withdrawn or lonely individuals will be particularly vulnerable. Those not used to critical thinking face the highest risk. A lack of critical assessment makes the AI model a “moral authority” rather than an information source. If you doubt human emotional bonds with machines, remember the Tamagotchi craze.
I believe AI’s role in shaping human emotional attitudes will grow. This will directly affect how we perceive current relationships. We will observe increased atomization and polarization of individuals and groups. Atomization occurs because AI companionship may seem more attractive than human contact. Already weak social ties will weaken further. Polarization will result from people frequently consulting AI on sensitive matters like family or business partnerships. In a dispute, users might not seek a solution. Instead, they may seek validation for their own narrative.
These mechanisms will become increasingly common. Public awareness may not keep pace with AI’s real-life impact.
Can AI Be Helpful in Resolving Disputes?
Yes! Despite the risks, AI will also have strong positive aspects. AI will serve in prevention. It helps predict potential conflict areas and secure them early. Language models are helpful here. They easily create multiple scenarios and highlight potential risks. They will certainly function as an effective early warning system.
Generative AI will also help find rational solutions to existing disputes. Its ability to multiply potential outcomes is amazing. After describing a deadlock, the chat may suggest a solution we missed. However, we must always critically evaluate these suggestions. We must predict their long-term consequences across many levels.
In every case, the role of AI remains auxiliary. It builds variants well but struggles to analyze legal and emotional consequences.
At this point, reference should also be made to the research of Marco Giacalone, who points to the significant potential of language models in dispute resolution: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5083207. The author writes that ‘The integration of generative AI reduces costs and allows legal practitioners to focus on complex issues, strategic planning, and client interaction.’ According to the author, AI will not replace humans, but will be a very valuable tool in their hands. I agree with this approach.
AI Influences Dynamics but is Neither Good Nor Evil
It makes no sense to ask if AI is “good” or “bad.” We should study its impact within specific contexts. Our species created AI. Trying to put the “genie back in the bottle” is impossible and pointless. Let’s objectively study its impact on our thought patterns and relationships. We should draw logical conclusions from these observations. Use this technology to benefit yourself and others. I believe this is possible. However, do not let AI replace human friendship and intimacy. Do not allow law and health to fall under unreflective AI influence.
We are dealing with a powerful new tool. It is a new source of influence on our psyche. Humanity has never known a tool this powerful. Yet, every great discovery brings both opportunities and threats. Ultimately, the outcome depends entirely on us.
Unlike nuclear energy, almost everyone now has direct access to AI. AI will amplify what is already hidden and strong within us.
Invitation to Cooperation
Have you noticed AI suggestions influencing your private or business relationships? Do you see this impact on yourself?
Perhaps you notice the other party becoming more radical lately?
I invite you to discuss this in the comments or contact our law firm. We help clients build strategies and resolve disputes optimally.
We consider many factors: law, psychology, communication, and technology. This is our strength.
📩 kancelaria@jakubieciwspolnicy.pl
📞 +48 536 270 935

AI in Family Disputes: My Recent Experiences and a New Reality
In my recent posts, I discussed AI’s role in shareholder disputes. Today, I want to focus on a very sensitive area. I am referring to family law matters. I will describe cases from my own practice. These events occurred recently, in December 2025. As a conflict specialist, I find these examples particularly interesting. I am observing AI’s growing role in creating and escalating disputes. This trend is especially visible in family-related cases.
AI in Family Cases and Its Impact on Human Behavior
AI in family cases is no longer science fiction. It is a reality. I increasingly notice AI’s strong influence on family disputes. In this post, I will share cases from recent weeks.
I have changed all details to protect the privacy of those involved. No one can be identified from these descriptions.
I spoke with each person several times, both in person and online. I am describing a very new phenomenon.
Its consequences for family life and mental well-being are still unknown. This applies to individuals, families, and entire communities.
I avoid making judgments in this text. I prefer to leave the evaluation to you.
CASE ONE: CHILD CONTACT AFTER DIVORCE
A client recently contacted me after his divorce. I did not represent him in those proceedings. He is a successful, intelligent man in his thirties.
He needed urgent advice regarding child contact. Two weeks earlier, he agreed to a specific schedule. It included set weekdays and every second weekend.
Crucially, the former couple still lived together. His ex-wife planned to go out for the weekend. She expected him to care for the children as agreed.
The client asked an AI model about his obligations. He wanted to know if the schedule was a duty or just a right. He viewed it as a “safeguard” against his ex-wife.
The AI confirmed his incorrect belief. It stated he had no obligation to care for the children. This led to a violent argument between the parents.
Lawyers should explain these basic rules to their clients. However, the AI provided incorrect legal advice. It reinforced the client’s bias.
This is a perfect example of a coupled confirmation bias. The user described only a fragment of reality. The AI then validated his mistaken views. This led to immediate confrontation and escalation.
CASE TWO: AI AS A HIRED PSYCHOANALYST
This client is a highly educated, high-earning woman in her thirties. While preparing for her case, she asked an AI for a psychological profile of her husband.
She described him entirely from her own perspective. The AI replied that it was not a psychologist. However, it still offered to help.
The model assigned numerous disorders to the husband. It labeled him as emotionally immature, narcissistic, and psychopathic.
Based on this AI chat, the client lost all trust. She decided it was unsafe to leave the child with him and was ready to cut off the father’s contact with their baby.
I suggested she consult a real psychologist. She refused, claiming the AI had already answered everything. She was now ready to block all contact between father and son.
The problem is that she treated a chat as a clinical diagnosis. It was based on a one-sided description. AI cannot replace a licensed psychiatrist or psychologist.
A professional must examine the patient and use proper tests. They cannot rely solely on the report of an involved party. I declined to take this case.
CASE THREE: AI AS A CONVERSATION ANALYST
Two parents came to me for mediation. Initially, their cooperation was good. They communicated mostly via long WhatsApp messages.
Soon, they became deeply suspicious of each other. They looked for hidden motives and secret plans in every text. Both believed the other was using the children as tools.
It turned out that both parents were using AI. They pasted received messages into the chat for detailed analysis. The AI then helped them draft “strategic” replies.
The other party would then analyze those AI-generated replies using their own model. This created a spiral of suspicion. Both parties began to lose touch with reality. They were ready for radical, harmful steps.
AI in Family Matters – Conclusions
These three examples from my practice show that we tend to:
- Search for confirmation of our initial assumptions.
- Strengthen our beliefs after receiving such confirmation.
- Prefer AI as a source of validation because it is faster, cheaper, and “politer.”
AI often carries an aura of omniscience. This makes it seem more attractive than a lawyer or a psychologist.
This leads to tunnel vision. People become ready to escalate disputes quickly and without deep thought. Our prejudices grow stronger. We believe we have received confirmation from “reliable” technology.
The long-term effects of this fascination with AI are hard to predict. But the problem is not the technology itself. The issue is that many people cannot critically evaluate AI responses.
AI also intensifies the “fundamental attribution error.” I have written about this here: [LINK]
Clients often come to my office with ready-made “solutions.” They expect me to simply implement them. I have discussed this trend since early 2025 here: [LINK]
You can read more about tunnel vision and coupled confirmation bias here: [LINK]
Scientific Research: AI and Cognitive Biases
My observations are supported by scientific data. AI increasingly influences human beliefs, choices, and behaviors.
I highly recommend an article published last year: How human–AI feedback loops alter human perceptual, emotional and social judgements (Nature Human Behaviour). The authors show that AI validation strengthens our perceptions and social judgments. Read it here: https://www.nature.com/articles/s41562-024-02077-2
You should also read Yiran Du’s work: Confirmation Bias in Generative AI Chatbots. It analyzes confirmation bias mechanisms in AI models and the associated risks: https://arxiv.org/abs/2504.09343
Another important text covers tunnel vision: Bias in the Loop: How Humans Evaluate AI‑Generated Suggestions. Experiments prove that users accept wrong AI suggestions if they fit their prior beliefs: https://arxiv.org/pdf/2509.08514
Finally, here is an analysis from Stanford University. It examines AI “hallucinations” and their impact on decision-making: https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries
Contact Me
If you need a lawyer who specializes in dispute resolution—including family law—please reach out:
📩 kancelaria@jakubieciwspolnicy.pl
📞 +48 536 270 935
I am here to help you!
