
DIGITAL GASLIGHTING IN NEGOTIATIONS
Negotiations have always been a psychological game. Parties try to influence each other and test boundaries. They aim to build and exploit strategic advantages. However, a new participant has joined this game. It is invisible, fast, and emotionless: the algorithm. AI can now act as a “booster” for one side. It is a tool that strengthens arguments and creates dangerous emotional pressure.
What is Digital Gaslighting?
Digital gaslighting is the use of AI to generate messages that undermine an opponent’s judgment. It aims to trigger unjustified guilt, fear, or disorientation. I use this term as a shorthand for algorithmically enhanced cognitive manipulation. This is no longer simple persuasion. It is precise influence engineering. We are facing manipulation powered by the latest technology. This represents a qualitative shift we have never seen before.
The use of AI in disputes is already a reality. You can read the results of our study here: [Link]
Why is this technique so effective?
AI analyzes writing styles with a precision that humans cannot match. Within a few paragraphs, it detects a tendency to apologize or avoid conflict. It identifies over-explaining or fear of judgment. These are classic markers of a weak negotiating position. Previously, these were sensed intuitively. Today, they are identified and amplified by algorithms.
AI then generates messages that hit these exact pain points. This hyper-personalization makes the recipient feel as if the other side is “reading their mind.” In reality, it is linguistic pattern analysis. It evaluates modality, certainty, affect, and justification methods. AI automatically selects the form and arguments for maximum impact. Crucially, AI does not “know” it is doing something wrong.
A second advantage is the lack of inhibitors. Humans have natural limits: empathy, fatigue, and moral resistance. AI has none of these. If given the goal to “create doubt,” it does so coldly and consistently. It will not feel discomfort or soften its tone. This communication is designed for efficiency, not for building relationships.
The third advantage is speed and scale. A human writes one message; AI can write a hundred effortlessly. It selects the version that best destabilizes the victim’s emotions. It is like fighting an opponent who trains for infinite combinations before moving. No wonder the recipient feels overwhelmed. Most people find this challenge too difficult to handle alone.
How to recognize if your opponent is using AI?
How can you tell if an algorithm is “writing” on the other side? Below are several warning signals. None are definitive proof, but they suggest a high risk of AI support.
- Unnatural Intensity: The message is too dense with arguments. It is stylistically “too clean” and hits emotions too precisely. It sounds like someone spent hours on every sentence, yet the reply arrived instantly.
- Lack of Human Error: There are no digressions, hesitations, or linguistic slips. The language is polished to the point of artificiality.
- The Affirmation Loop: Every doubt you raise meets an immediate, perfectly tailored counter-argument. Every attempt to change the subject leads back to their narrative. This is not a conversation. It is an algorithmic spiral of pressure.
- Style Discrepancy: You may notice a gap between written text and live conversation. Some people write better than they speak. However, there are limits to such differences in vocabulary and logic.
Digital Gaslighting: How to defend yourself?
AI acts fast and ruthlessly. Therefore, your defense must be the opposite: slow, conscious, and analog. The simplest technique is text deconstruction. Break the message into two categories: facts and emotional adjectives. This sounds easy but can be difficult in practice. Nevertheless, this technique is valuable in any suspicious interaction. You will soon see which parts exist only to trigger guilt.
You can then choose to respond only to one verifiable fact. Ignore the emotional narrative entirely. Ask for clarification on the rest later. In most cases, the pressure relies on insinuations and hyperboles. Separating content from tone strips the AI user of their power.
Another strategy is the time test. AI works instantly, but you do not have to. Deliberate slowing down lowers the pressure. It allows you to regain control and breaks the algorithmic rhythm. This is a classic negotiation technique with new relevance in the AI era.
The third and perhaps best method is changing the communication channel. AI excels in emails and chats. Move away from them. Currently, AI cannot perfectly mimic a human in live teleconferences or face-to-face meetings. Switching to a phone call or a direct meeting cuts off the opponent’s algorithmic support. It is like turning off the autopilot in a plane. Suddenly, they must actually fly.
Does using Digital Gaslighting pay off?
In the short term, yes. It provides an advantage and allows one to push their position. It creates pressure and secures concessions. However, the long-term cost is immense.
First, it leads to the erosion of trust. Negotiations require a minimum level of predictability. If both sides use digital masks, the point of coordination vanishes. All that remains is a game of appearances. You lose the relationship and the trust. In long-term family or business disputes, these costs exceed any temporary gain.
The second risk is the rebound effect. If AI pushes a person too hard into guilt, they may turn to their own AI for help. They will receive support quickly and for free. Their AI will provide a sense of external authority and rationality. Will such a person then have any scruples? I have written about this escalation in my series on Coupled Confirmation Bias (CCB): https://jakubieciwspolnicy.pl/en/coupled-confirmation-bias/
Thirdly, the manipulation may be exposed. In the digital age, traces remain. Once trust is lost, a reputation shatters like glass. Rebuilding it is nearly impossible. I am omitting the legal issues here. However, such manipulation could potentially invalidate an agreement based on error or fraud. I will cover these legal aspects in future articles.
What remains?
New opportunities create new threats. However, these threats remind us of old values. In a world of AI-driven pressure, rationalityis your greatest advantage. Not speed, not aggression, but rationality and predictability. These build trust.
The role of a lawyer, mediator, or strategist is to act as a filter. We separate emotional noise from facts. They recognize algorithmic pressure patterns. We protect clients from digital manipulation, especially when written communication escalates the conflict. This is a return to the fundamental function of an advisor: restoring proportion and common sense.
Digital gaslighting will evolve. However, our resilience can grow as well. Resilience starts with awareness. You must understand that an algorithm might be trying to “play” you. Decide not to join that game. Be attentive and be rational.
Read more on AI ethics in negotiations:
- Harvard PON: https://www.pon.harvard.edu/tag/ai
- American Bar Association: https://www.americanbar.org/groups/dispute_resolution/resources
Why does this matter to you?
The use of AI in negotiations and family or business disputes is a fact. It also affects workplace relationships. AI’s importance will grow. Today, it is already the first point of analysis for many people.
Our mission at Jakubiec & Partners is to help clients resolve disputes while protecting relationships. We see the great opportunities in AI, but we also see the risks of bad faith. If you feel stuck in a dispute or need strategic legal help, contact us. We are here to help.
📩 kancelaria@jakubieciwspolnicy.pl 📞 536 270 935

STUDY: AI AS A HIDDEN ALLY IN DISPUTES?
Below is a report from a study conducted in January 2026. We gathered information on how people use AI to analyze their disputes. The results proved to be very interesting.
AI’s Role in Disputes Report (January 2026): Hidden Advisor or Error Generator? What did we do?
We conducted a survey completed by 87 participants. This group included our clients, lawyers, mediators, and business leaders. It also involved psychologists and people from the fields of science and art. Consequently, this is not a nationally representative sample. However, it represents a group with high awareness of conflict essence. These individuals can critically evaluate their own reasoning.
I designed the survey myself. After its distribution, some participants provided technical feedback on the questions. I appreciate these comments. Therefore, I will include them in future studies. I certainly plan to conduct more research soon.
What was the focus of the AI survey?
The questions concerned general AI usage. We also focused on analyzing family, work, and business situations. This includes the stage before a formal dispute arises.
Furthermore, we asked about analyzing the other party’s intentions. This element is highly susceptible to the fundamental attribution error. When combined with AI hyper-alignment, it generates a so-called feedback loop. This is a confirmation trap that leads to tunnel vision.
Next, I asked about using AI to determine one’s own actions. I deliberately did not specify if this meant the first move or a reaction. However, the most important question concerned transparency. I asked if participants would inform the other side about using AI. Surprisingly, the respondents showed remarkable consistency here. The conclusions from this remain an open question.
The final two questions concerned trust in AI. Although they were similar, the results were intriguing. Trust in AI does not match the assessment of its objectivity. It seems we know AI is not objective, yet we tend to trust it.
How did the participants respond?
Below are the “raw” questions and answers.
Do you use AI LLM models?
(Blue – yes, red – no)
87 answers

Do you use AI’s LLM to analyze family, employment or business?

Which of these relationships do you use AI to analyze? Family, work, business, or other?

Do you use AI’s LLM to analyze the other party’s intentions?
(blue – yes, red – no)

Do you use AI to design your own moves in an dispute?
(blue – yes, red – no)

Would you inform the other side of the dispute that you are using AI to analyze and predict their behavior or prepare your own moves?

Do you trust artificial intelligence analyses?
(blue – yes, red – no, yellow – partly)

Is AI objective?

Analysis of the Responses
The vast majority of participants use AI. Certainly, they do so to varying degrees. Their topics of interest are also not the same. Nevertheless, the widespread use of language models is a fact.
Specifically, 43% of respondents use AI to analyze family or business situations. In my opinion, this is a high number. Interestingly, business analysis was the most common. Family situations were the least frequent. Meanwhile, 29% of people pointed to other areas. Here is a link to an article on AI in family disputes: [Link]
A significant majority (81%) did not try to determine the other party’s intentions via AI. Is this a high number? On the contrary, almost 20% of people do try. This means every fifth person is vulnerable to the AI feedback loop. AI tends to “agree” with the user, reinforcing their original bias.
Furthermore, 25% of respondents use AI to plan their moves in a dispute. This discrepancy with the previous question is interesting. Perhaps 5% of us use AI for planning without analyzing the other side’s intentions. It is unclear if these are the same participants. This might result from ignoring psychological aspects or simply focusing on the “matter” of the case.
Significantly, 80% of us would not inform the other side about using AI. Why is that? Do we consider it unfair, like “technological doping”? Perhaps we view it as a superstition and feel slightly ashamed. Or maybe we believe we have a technological advantage and want to keep this powerful weapon secret.
Do we trust AI?
The last two questions yielded astonishing results. While 75% of us use AI, 67% believe it is not objective. Why then do we use it? Perhaps to reinforce our own beliefs. After all, it feels good when technology says: “That’s a great idea, Andrzej!”. We might simply pretend not to see the lack of objectivity. Alternatively, we may feel that AI is biased, but it is “on our side.” This correlates with the fact that most respondents partially trust AI.
Conclusions
AI has become a common tool. It shapes our attitudes in many areas of life. Certainly, its influence is felt in the disputes we handle. Interestingly, we also see it in the analysis of the situation’s structure. This can lead to a desire to change the status quo. Consequently, it may even trigger the dispute itself.
My previous publications on AI in disputes
Below are links to articles regarding AI’s impact on dispute dynamics. I presented my original concept of Coupled Confirmation Bias (CCB) there. These texts include references to the latest research in prestigious journals. They cover psychology, technology, and the role of AI in creating tunnel vision.
I recommend a key article by Yiran Du: Confirmation Bias in Generative AI Chatbots. It analyzes confirmation bias mechanisms in AI models. You can read about the risks of this coupling here: https://arxiv.org/abs/2504.09343?
I applied this research to situations where both parties use AI. One party’s actions, determined by their AI, become the input for the other party’s AI. This prompts a specific, often escalated response. This escalation seems particularly dangerous.
For those interested, here is the link to the English version of my article on CCB: https://jakubieciwspolnicy.pl/en/coupled-confirmation-bias-2/ and the main text in polish: https://jakubieciwspolnicy.pl/sprzezony-blad-konfirmacji/
Invitation to Cooperation
If you are a party to a dispute, you may need strategic advice. I and my team deal with more than just the law. We conduct negotiations based on psychology, economics, and behavioral analysis.
If you have experiences with AI in disputes, please contact us. We would love to hear your story. It might serve as valuable material for our research. We ensure full anonymity.
Email: kancelaria@jakubieciwspolnicy.pl
Phone: 536 270 935

WHEN DISPUTANTS USE DIFFERENT AI MODELS
I recently introduced the concept of Coupled Confirmation Bias (CCB). Now, we must examine how different LLMs affect dispute dynamics. I distinguish between two main groups: American and Chinese AI models. This is a simplification used to present a specific problem. This distinction is not based on technology. Language models carry tendencies rooted deeply in the cultures of their creators.
Cultural AI Models: Types of Language Models
Let us assume two basic cultural AI models: American and Chinese. This division describes communication styles, not technology. Language is strictly tied to culture. Language models aim to build and maintain relationships with users. They do this by predicting the next word. However, this process is not neutral. It is influenced by the cultural values of the developers. It also depends on the user’s chosen language. Finally, it reflects the user’s native way of thinking.
American language models operate within a different conceptual grid. This stems from natural semantic differences. They also lead conversations differently. Interaction in Western and Eastern cultures has different goals. American culture values individualism, competition, and being right. Chinese culture prizes harmony, collectivism, and politeness.
A central example is the approach to “saving face.” Models with different conversational styles impact how users perceive a dispute. An LLM is not just a “word machine.” It is a carrier of values. American AI may promote an adversarial system. Chinese AI may promote a consensus-based approach.
Mutual Perception and Different Language Models
In CCB, each party filters the other’s actions through their own cultural model. The AI model reinforces this filter. This creates a spiral of mutual errors.
Imagine one party uses an American LLM and the other a Chinese one. Their worldviews will differ. Their language, questions, and goals will also differ. Every language has specific patterns and taboos. Some things are obvious; others must remain unsaid. These are low-context (American) and high-context (Chinese) cultures.
In some cultures, assertiveness and confrontation are values. In others, harmony and hierarchy are more important than being right. Cultural models reinforce the attitudes deemed desirable in those societies.
Users from different cultures will describe the same event differently. Furthermore, AI consultations may produce opposite results. This overlaps with the fundamental attribution error and confirmation bias. We must also consider AI hyper-utility. This is the tendency of models to be “too helpful.” They provide answers that reinforce the user’s false assumptions. Shared cognitive space may quickly disappear.
The Critical Point
In CCB, the critical point occurs when interpretations diverge completely. Every reaction from one side is seen as an escalation by the other. Shared cognitive space vanishes faster than in traditional conflicts.
The critical point is a specific moment. One party maintains a mature, relational strategy. They try not to be provoked. Eventually, this strategy is seen as losing face. The other party may escalate repeatedly. They violate the need for harmony and politeness. This leads to an unintended, uncontrolled explosion. Passive behavior is finally viewed as total failure. A 180-degree turn in strategy follows.
This critical point may differ from the “ping-pong” effect described earlier. We can visualize ping-pong as a corridor. There, “pleasantries” bounce from side to side, gaining momentum. In this model, it looks more like a triangle. Its height grows with the escalations of one party. Eventually, the triangle is torn apart by the pressure.
Cultural AI Models: Literature
For more on these issues, read “The Geography of Thought” by R. Nisbett. See also “Babel” by G. Dorren and “The Age of Unpeace” by M. Leonard. These books were my starting point for cultural differences in conflicts. Of course, nothing replaces T. Schelling’s “The Strategy of Conflict.” It is one of the most important books ever written.
Evidence that models adopt cultural values can be found here: “Cultural Alignment in Large Language Models” (Johnson et al., 2023/2024): https://globalaicultures.github.io/pdf/14_cultural_alignment_in_large_la.pdf
Studies also confirm that Chinese models avoid open conflict more than American ones: https://arxiv.org/abs/2402.10946. See “CultureLLM: Incorporating Cultural Differences into Large Language Models.” Similar conclusions appear in “Values-aligned AI: Comparing Western and Chinese LLMs on Moral Dilemmas” (Liu et al., 2024): https://arxiv.org/html/2506.01495v5
If you want to read more about Coupled Confirmation Bias (CCB), see my previous articles:

COUPLED CONFIRMATION BIAS – A DEVELOPMENT
In this article, I expand on the previously presented hypothesis about the existence of conjugate confirmation bias (CCB). This is solely an attempt to explain observations I have made in my professional practice. I make no claims to truth—rather, I invite discussion on whether this new phenomenon can be explained in this way. In any case, my hypothesis clearly requires testing. I provide the conditions for its falsification in this and the previous texts.
Levels of Escalation (L₁, L₂, …)
To understand the mechanism of feedback loops in conflicts where parties consult their interpretations with AI systems, it is not enough to describe the conflict as a sequence of different interpretations of the same behaviors. The key mechanism lies elsewhere: interpretation influences behavior, and the changed behavior becomes the subject of a new interpretation made by the other side.
The conflict between A and B (players) begins at level L₁ — the baseline level of interpreting the actions and intentions of the other party.
However, it is crucial to note that L₁, L₂, L₃ are not merely successive executions of the same move. Each subsequent level includes a new or intensified behavior A(n), generated (and subjectively considered necessary) by A’s Decision DA(n) under the influence of A’s Interpretation IA(n‑1) of the opponent’s previous move B(n‑1).
This means that the interpretation of the move from level L(n‑1) is the beginning of sequence L(n).
At this level L(n), move A(n) will be subjected to interpretation IB(n) by player B, who will make decision DB(n) on how to respond. Its effect will be move B(n+1), which brings the entire conflict to a new escalation level L(n+1).
Escalation is therefore not solely a cognitive process — it is a cognitive‑behavioral process.
Importantly, this model refers to a simple “exchange” of moves and does not cover situations in which the players:
- may perform multiple moves simultaneously or in short sequences before the other side recognizes and interprets them,
- or situations in which information about moves A(n) reaches B with such delay that B is effectively responding to A(n‑2) or even earlier moves. Interestingly, a significant asymmetry may arise in this respect (one of the parties may make more or faster movements, or they will have a faster noticeable or real effect).
The Basic Escalation Loop
A performs move A₁.
B consults A₁ with AI. AI typically does not create new meanings. Its dominant function is to stabilize and reinforce the user’s intuitions.
These intuitions, however, are not neutral.
Humans almost always begin with the fundamental attribution error (FBA) — the tendency to explain others’ behaviors through internal traits, intentions, and motives, while underestimating situational factors. This is especially easy when one must justify one’s stance to superiors, behavior reviewers, or stakeholders.
Attributing negative traits to the other side is energetically cheap and easy. Moreover, it automatically allows one to attribute opposite traits to oneself. In this way, it is easy to move from a dispute to a conflict of values and to “cement” one’s position. It is easy to corner oneself and lose room for maneuver. De‑escalation leading to settlement may then be perceived as a betrayal of those “values.”
FBA does not operate only at the beginning of a conflict. It is applied every time a new behavior of the other side appears. The result is tunnel interpretation. But there is a risk that it affects both sides.
Under conditions of AI consultation, it is additionally reinforced through the Coupled Confirmation Bias (CCB).
B increasingly believes that A’s action had negative or hostile intentions.
B responds with move B₁. From B’s perspective, this is a defensive reaction to a perceived threat.
A interprets B₁ through the same mechanism. A reactivates FBA, again reinforced by AI. The attribution filter does not reset; it accumulates.
A consults B₁ with AI and concludes that a response is necessary.
A performs move A₂.
And here comes the key transition:
A₂ is not merely interpreted at a higher level — it is executed from a higher level of subjective defensiveness. Interpretation changes behavior, and behavior changes the conflict.
This moment marks the transition from L₁ to L₂ — not as a change in interpretation, but as a real behavioral escalation. It involves readiness to inflict and receive stronger blows and losses.
Alternative Paths After A₁: Conflict as a Multi‑Branch Structure
Move A₁ does not have to automatically lead to escalation in the manner described above. The extended model assumes several alternative trajectories.
1. Pause by B — P₁
After A₁, B may not respond immediately. We denote this as P₁ (pause).
A pause is not a neutral state. Waiting is not an empty, zero, or negligible posture. On the contrary — the lack of a move is information that A interprets as AI₁.
After P₁, several outcomes are possible:
AR₁ (A Resignation)
The pause is interpreted as a signal to withdraw → de‑escalation.
ARSM₁ (A Repeats Same Move)
A repeats A₁, testing the reaction.
ASAM₁ (A Searches an Alternative Move on the Same Level)
A searches for another action at the same level L₁.
AE₁ (A Escalates)
The pause is interpreted as avoidance, manipulation, or passive aggression → unilateral escalation to L₂.
FBA makes AE₁ more likely, especially when the interpretation of the pause is consulted with AI.
It is very important that escalation in such a situation may be treated by A as a means to force B to engage in talks or return to them. This is logical if A cannot impose its will directly at the current escalation stage, and B refuses negotiations entirely or simulates them.
However, the escalation move (escalation for the sake of de‑escalation) by A may be perceived by B as a real threat. B may respond by:
- initiating talks,
- declaring that it will match the stakes — responding symmetrically to escalation,
- or pre‑empting AE₁ with its own escalation move BE₁.
It is crucial to distinguish that this move is not an ordinary response to A₁. It is chronologically so (unless ARSM₁ or especially ASAM₁ occurred), but not sequentially. The decision to pre‑empt escalation is made based on FBA and within the CCB process.
There’s also a risk that, through interaction with the language model, B will become convinced that reaching an understanding with A is impossible. He will remain in a state of pause, accepting A’s subsequent moves. I propose that AI may have a real impact on strengthening his initial cognitive biases toward A, which could make it more difficult for him to decide to de-escalate. Instead, it will reinforce the need to wait out A’s moves and then—over time—as the situation worsens—to make an escalating move. This may be an intentional escalation for the sake of de-escalation.
2. Lateral Response B₁ and Entry into Ping‑Pong (PP₁)
B may also respond with move B₁ at the same escalation level, leading to a ping‑pong sequence (PP₁).
Classical conflict theories assume that such symmetrical exchange:
- stabilizes the situation, or
- eventually leads to de‑escalation through exhaustion of resources, attention, and determination.
However, this assumption relies on a silent condition: the absence of systematic reinforcement of interpretations and determination.
Ping‑Pong as the Main Space Where CCB Manifests
In this model, I propose a different thesis:
Under conditions of continuous AI consultation, ping‑pong ceases to be a stabilizing mechanism and becomes the main carrier of escalation.
Why?
Each subsequent exchange in ping‑pong provides new behavioral data. Each of these behaviors is:
- interpreted through the lens of FBA,
- reinforced by AI,
- incorporated into an increasingly coherent narrative about the other side’s intentions.
Instead of leading to conflict fatigue, ping‑pong:
- hardens the parties’ convictions,
- increases certainty that previous measures are ineffective,
- strengthens the belief that “raising the stakes” is necessary.
As a result:
- the probability of escalation increases with the number of ping‑pong exchanges,
- the dynamics of escalation depend on the intensity of the exchanged “courtesies.”
It is in ping‑pong that the coupled confirmation bias manifests most fully: two AI‑stabilized narratives collide, generating escalation without ill will, without aggressive intent, and often without the parties’ awareness.
Security Dilemma Without Ill Intent
At this stage, the conflict begins to resemble the classical security dilemma:
- each side acts defensively,
- each perceives the other as increasingly aggressive,
- neither consciously seeks escalation,
- yet escalation occurs.
AI acts here as an accelerator of interpretive certainty, reducing ambiguity and reinforcing narrative coherence on both sides.
Theoretical Context and Novelty of the Model
This model develops earlier research on human–AI cognitive loops (e.g., M. Glickman, T. Sharot, B. Wang), which focused mainly on a single user.
In this approach, the key element is the collision of two mutually reinforcing interpretive loops in interaction.
This mechanism is potentially even more unstable in triadic and multi‑party systems, where the escalation threshold is lower and narrative synchronization is more difficult.
9. Possibilities for Falsifying the Model
Any theory aspiring to the status of a scientific framework must be falsifiable. The model of coupled interpretive loops (CCB) meets this requirement because it generates specific, testable predictions that can be empirically confirmed or refuted.
9.1. AI’s Influence on Strengthening the Fundamental Attribution Error
The model would be falsified if:
- AI consultations did not increase the tendency to attribute negative intentions,
- AI weakened, rather than strengthened, FBA,
- AI users didn’t show smaller empathy and samller tolerance for ambiguity than the control group.
9.2. AI’s Influence on Escalatory Behavior
The model would be refuted if:
- AI‑consulting users did not show greater propensity for escalation,
- AI consultations did not influence the choice of moves A₂/B₂,
- escalation levels in the AI group and the control group were identical.
9.3. Ping‑Pong Logic
The model would be falsified if:
- the number of ping‑pong exchanges did not correlate with escalation,
- ping‑pong under AI conditions led to greater de‑escalation than in the control group,
- AI consultations did not influence the interpretation of subsequent moves.
9.4. Epistemic Asymmetry
The model would be refuted if:
- AI users and human‑advisor users showed identical escalation patterns,
- epistemic asymmetry had no effect on conflict dynamics.
10. Author’s Statement
This text was prepared with the assistance of language models. Their help was not generative in nature, but testing, editorial, and supplementary.
Conclusions
- The fundamental attribution error operates at every stage of conflict.
- AI reinforces FBA through CCB, giving interpretations a veneer of objectivity.
- AI‑consulted conflicts have a multi‑branch, not linear, structure.
- Pause, lateral response, and ping‑pong are critical decision states.
- Ping‑pong under AI may reverse the classical logic of de‑escalation.
- Escalation may be a function of the number and intensity of exchanges, not merely ill intent.
- The model is falsifiable — and this makes it a theory, not a dogma.
If you are interested in my hypothesis, you can read this article: https://pmc.ncbi.nlm.nih.gov/articles/PMC11860214/? and this one: https://dl.acm.org/doi/10.1145/3664190.3672520
Here is my main text about CCB:
You can read also this one:

COUPLED CONFIRMATION BIAS – HOW DOES IT LOOK LIKE?
In this article, I want to explain in a shorter and more accessible way how Coupled ConfirmationBias works. Also I want to show, what does a feedback loop in AI‑consulted conflicts actually look like. Imagine two people in a dispute. Both are using AI to interpret the other’s behaviour. The conflict begins at what I call Level 1 (L1) — the baseline interpretation of the other side’s actions. Importantly, L1, L2 and L3 are not merely different interpretations of the same actions; each level involves a new or intensified set of behaviors driven by AI-reinforced interpretation.
What Coupled Confirmation Bias (CCB) is?
The Coupled Confirmation Bias (CCB) is a conflict escalation mechanism in which two or more parties to a dispute, relying on external interpretive systems perceived as epistemically privileged, mutually legitimize their own narratives. This mechanism is recursive. Actions taken on the basis of such legitimization subsequently become input data for further analysis on the opposing side. This leads to a coupled interpretive spiral and a gradual narrowing of the negotiation space. By using the term recursive, I refer to a spiral in which each human–AI interaction amplifies the previous state.
How does Coupled Confirmation Bias (CCB) form?
Here is how Coupled Confirmation Bias forms:
1. A makes move A1.
2. B consults AI about A1.
AI does not invent new meanings — it tends to reinforce the user’s initial intuitions. But where do these initial intuitions come from? Humans start from a predictable place: the fundamental attribution error. We naturally explain others’ behaviour through negative traits rather than external circumstances. It’s fast, cognitively cheap, and emotionally self-protective.
AI strengthens this starting point.
The result is tunnelled interpretation: B becomes more certain that A acted with negative intent.
3. B responds with move B1.
From B’s perspective, this is a defensive reaction to a perceived threat. But A interprets B1 through the same mechanism — again reinforced by AI.
Now both sides interpret these moves as increasingly defensive or aggressive. A consults AI about B1 and interprets it as an aggressive escalation that requires a response.
And here is the crucial part:
👉 A’s response – A2 – is performed at a higher level of perceived escalation.
👉 Interpretation changes behavior. Behaviour changes the conflict.
This is the shift from L1 to L2:
not just a cognitive loop, but a real behavioural escalation.
4. The loop accelerates.
Each side consults AI.
Each side receives confirmation of its fears.
Each side responds defensively from its point of view.
Each defensive move is read by the other as escalation.
At this point, a classic security dilemma emerges:
each party, trying only to protect itself, unintentionally signals aggression to the other.
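The loop described in steps 1-4 can be sketched as a toy simulation. To be clear, this is only an illustration under assumed parameters: the amplification factor, the attribution-bias offset, and the linear response rule are all arbitrary choices of mine, not measured quantities. The point is simply that move intensity can grow round after round even though every single response is "defensive" from its author's perspective.

```python
# Toy model of the CCB escalation loop. All numeric parameters are
# illustrative assumptions, not empirical values.

def ai_consult(perceived_hostility: float, amplification: float = 1.3) -> float:
    """AI does not invent a new reading; it returns the user's
    perception scaled up by a fixed reinforcement factor."""
    return perceived_hostility * amplification

def simulate(rounds: int = 5, initial_move: float = 1.0,
             attribution_bias: float = 0.2) -> list:
    """Returns the intensity of each successive move (A1, B1, A2, ...)."""
    moves = [initial_move]
    for _ in range(rounds * 2):
        # The receiving side reads the last move slightly worse than it
        # was meant (fundamental attribution error), then consults AI,
        # then responds "defensively" at the confirmed level.
        perceived = moves[-1] + attribution_bias
        confirmed = ai_consult(perceived)
        moves.append(confirmed)
    return moves

trace = simulate()
# Intensity grows monotonically even though no side ever intends
# to escalate: each move is pure self-protection from the inside.
assert all(later > earlier for earlier, later in zip(trace, trace[1:]))
```

Changing the assumed amplification factor to a value below 1 (an AI that dampens rather than confirms the user's reading) makes the same loop converge instead of escalate, which is consistent with the falsification condition discussed in section 6.3 below.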
Consequences
AI amplifies this dynamic by reinforcing each side’s subjective narrative.
This is the core of what I call Coupled Confirmation Bias (CCB) — a recursive interpretive loop between two humans and two AI systems, producing real‑world escalation without bad intentions, without malice, and often without awareness.
My hypothesis builds on the pioneering research on human–AI cognitive loops, especially by M. Glickman, T. Sharot, B. Wang, Yuxin Liu, and Lenart Celar. While they focused on individual bias, I observe how these loops collide in two-party disputes.
I assume that similar phenomena also occur in multilateral disputes. Especially in tripartite arrangements, which are inherently very unstable.
You can also read the main text:

COUPLED CONFIRMATION BIAS – MY CONCEPT
In this article, I present my original proposal of the Coupled Confirmation Bias (CCB). It is a conceptual framework designed to analyze the escalation of conflicts in situations where both parties to a dispute independently rely on AI systems to interpret the conflict and to justify their own positions. Unlike classical confirmation bias, which operates at the level of the individual, CCB describes a systemic mechanism. In this mechanism, mutually reinforcing narratives lead to a progressive narrowing of the negotiation space and an increased likelihood of escalation. The article identifies the conditions under which this mechanism is activated, its typical consequences, and the limits of its applicability. The purpose of this text is not to present a fully validated theory. Its purpose is to formulate a hypothesis regarding the existence of a repeatable mechanism observed in contemporary conflicts. What these conflicts share is that AI models are used as analytical tools supporting decision-making.
1. Introduction: When Rational Tools Amplify Irrational Outcomes
Where does the Coupled Confirmation Bias come from? People increasingly rely on AI. They use it to assess existing relationships both before a conflict emerges or becomes consciously recognized, and during the conflict itself. AI is used to assess risk, develop strategies, and justify proposed actions.
AI models are also increasingly used to analyze statements and non-verbal actions of the opposing party. AI systems are commonly perceived as authoritative, and their recommendations as neutral, objective, and free from emotional involvement. Paradoxically, however, as I have observed, the use of language models often correlates with faster conflict escalation, rigidification of positions, and premature breakdown of negotiations. This does not appear to be a coincidence.
In this article, I argue that these phenomena cannot be sufficiently explained solely by individual cognitive biases. They cannot be explained either by simple confirmation bias or by the fundamental attribution error.
Instead, I assume the existence of a systemic mechanism: the Coupled Confirmation Bias (CCB). It emerges in situations where parties to a conflict independently use AI tools as a source of authoritative interpretation of the dispute.
This interpretation may concern external factors, factual actions, and declarations of the opposing party. It may also concern conjectures about the opponent’s true intentions, acceptable risk, costs they are willing to bear, and their actual so-called “red lines”, meaning critical points they will not allow to be crossed under any circumstances.
Naturally, not all of these elements must be analyzed using AI. They also do not need to be analyzed at the same time. Importantly, each party may analyze a different element or set of elements.
Finally, each party may use different types of models, with different levels of technical sophistication and different degrees of integrated interaction with the user.
The Coupled Confirmation Bias (CCB) produces stronger effects the more often parties analyze signals coming from the opposing side when the content, form, or timing of those signals has previously been influenced by AI-assisted work conducted by the other party.
We are therefore not dealing with a one-time error. We are dealing with a spiral of errors, where each previous step becomes the fuel for the next one and potentially amplifies it.
This can be illustrated by a ping-pong exchange in which, after each hit, the ball gains energy equal to the sum of its previous kinetic energy and the energy of the new strike.
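Taken literally, this ping-pong image implies exponential growth. A minimal sketch, assuming an arbitrary constant strike energy s (the compounding reading below is my interpretation of "the sum of its previous kinetic energy and the energy of the new strike"):

```python
# Toy model of the ping-pong image above: after each hit, the ball's
# energy GAIN equals its previous kinetic energy plus the energy of
# the new strike. e0 and s are arbitrary illustrative values.
def rally(hits: int, e0: float = 1.0, s: float = 1.0) -> float:
    e = e0
    for _ in range(hits):
        e += e + s  # gain = previous energy + new strike
    return e

# Closed form: E_n = 2**n * (e0 + s) - s, i.e. exponential growth,
# which is exactly why the spiral, unlike a real rally, does not
# dissipate on its own.
assert rally(5) == 2**5 * (1.0 + 1.0) - 1.0
```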
I also assume that the Coupled Confirmation Bias (CCB) will manifest in a similar manner across different levels of conflict. This includes conflicts between individuals, between groups, and between states or blocs of states. This conclusion follows from general research on the nature of conflict.
I wish to emphasize that I do not believe AI’s influence on conflict escalation is deterministic in nature. Rather, I assume the existence of a tendency—an influence. Moreover, awareness of this mechanism, as I understand it, can paradoxically lead to a halt in escalation, once the parties realize they are under its influence.
2. Current State of Research
The equilibrium mechanism between parties was described in various works by J. Nash, and conflict strategy by T. Schelling (The Strategy of Conflict).
The conflict structure adopted here is drawn from Christopher Moore’s book The Mediation Process: Practical Strategies for Resolving Conflict.
General cognitive biases have long been known to psychology.
For the development of the Coupled Confirmation Bias (CCB) concept, the starting point consists of works describing classical confirmation bias.
This bias refers to the tendency toward selective and subjective perception of data in order to confirm an already adopted thesis.
A well-known example is Daniel Kahneman’s Thinking, Fast and Slow.
This mechanism has also been identified at the level of human–AI interaction and described in detail, among others, in articles by M. Glickman and T. Sharot – https://pmc.ncbi.nlm.nih.gov/articles/PMC11860214/?, Yuxin Liu and Adam Moore – https://pubmed.ncbi.nlm.nih.gov/40448478/, as well as L. Celar and Ruth M. J. Byrne – https://pubmed.ncbi.nlm.nih.gov/36964302/.
It is also necessary to mention the article by Ben Wang and Jiqun Liu, Cognitively Biased Users Interacting with Algorithmically Biased Results in Whole-Session Search on Debated Topics – https://dl.acm.org/doi/10.1145/3664190.3672520.
These authors point to the role of individual factors in susceptibility to cognitive biases arising in interaction with artificial intelligence.
From these works we learn about the feedback loop mechanism. It does not merely involve the presence of confirmation bias. It goes one step further and leads to reinforcement of the initial belief or prejudice, resulting in tunnel thinking.
However, in the works known to me, researchers have not examined dispute situations in which both sides use language models in the manner described above. The Coupled Confirmation Bias (CCB) is not the sum of two parties’ cognitive biases. It is a distinct phenomenon, a new quality that leads to escalation spirals in a manner qualitatively different from two independent errors.
3. From Individual Bias to Systemic Escalation
Classical confirmation bias describes an individual’s tendency to selectively search for, interpret, and remember information in ways that confirm prior beliefs.
Although this phenomenon is well documented, it is insufficient to explain situations in which both parties to a conflict, despite having access to initially similar data and using ostensibly neutral analytical tools, become increasingly entrenched in their own beliefs and prejudices.
In contemporary disputes, AI systems increasingly function as external legitimizers of interpretation, rather than merely computational tools. When each party uses such systems independently, confirmation bias ceases to be merely an individual cognitive tendency and begins to function as a mutually coupled dynamic.
4. Definition of the Coupled Confirmation Bias
The Coupled Confirmation Bias (CCB) is a conflict escalation mechanism in which two or more parties to a dispute, relying on external interpretive systems perceived as epistemically privileged, mutually legitimize their own narratives. This mechanism is recursive. Actions taken on the basis of such legitimization subsequently become input data for further analysis on the opposing side. This leads to a coupled interpretive spiral and a gradual narrowing of the negotiation space.
The constitutive feature of CCB is not merely the presence of biased reasoning. It is the dynamic coupling of interpretive loops between actors. In this model, actions of one party shaped by AI analysis become direct input for the system used by the other party. This creates a closed circuit in which each successive interaction does not bring the parties closer to consensus. Instead, it provides “objective” material for deepening the original prejudices.
5. Hypothesis
H1:
In bilateral or multilateral conflicts where parties independently use AI models to interpret the dispute and justify their own positions, the probability of escalation and negotiation breakdown is higher than in structurally similar conflicts in which such systems are not used.
H1a:
In interpretation-based conflicts, symmetrical access to information increases the risk of escalation more than informational asymmetry.
This is because it eliminates the possibility of explaining disagreement through lack of knowledge and shifts the conflict to the level of intentions and alleged rationality.
5.1.
I am aware that AI models may be used by each party at different times, to different extents, for different purposes, and in different ways. They may begin using AI simultaneously or at different moments. One party may stop or limit its use earlier, while the other continues. One party may analyze only external factors, another party declarations, and at another time attempt to infer the opponent’s intentions using AI. One party may use AI for analysis, another for emotional support. Finally, parties may use different models and interact with them differently, including by providing more or less manipulated input data. As demonstrated by the aforementioned research of Ben Wang and Jiqun Liu, individual factors also influence susceptibility to AI suggestions and outputs.
All these factors must be taken into account. I recognize that resulting differences may cause the Coupled Confirmation Bias (CCB) not to emerge or to dissipate during the conflict. At this point, my thesis concerns situations in which both parties use AI in a relatively symmetrical manner. The impact of asymmetry on CCB requires further research.
5.2.
Research has repeatedly shown that systems composed of three actors are significantly less stable than those involving two or four actors. I believe that the role of AI in accelerating escalation will be particularly visible in three-actor configurations. Further research will need to determine differences in outcomes when, in systems of three or more participants, only some of them use AI.
5.3.
I wish to emphasize clearly that AI does not create or escalate conflict by itself. However, it has a powerful influence on user perception, interpretation, and ultimately decision-making. Actions taken as a result of such decisions are then interpreted in an analogous manner by the opposing party.
5.4.
The Coupled Confirmation Bias (CCB) is also not an example of so-called echo chambers limited to two or a small number of participants. Echo chambers are inherently static. CCB is dynamic, because each subsequent interaction changes the behavior of the other party.
6. Falsifiability of the Coupled Confirmation Bias (CCB)
In The Logic of Scientific Discovery, Karl Popper introduced falsifiability as a necessary condition for recognizing a theory as scientific.
It is therefore necessary to specify which factors condition the emergence of the described mechanism, which contribute to its deactivation, and above all, what would falsify this theory.
6.1. Conditions for Activation
The Coupled Confirmation Bias typically emerges when the following conditions are jointly met:
Symmetrical legitimization of narratives
Each party has tools or advisors confirming its interpretation as rational and justified.
Absence of a shared epistemic authority
There is no institution, mediator, or procedure recognized by all parties as a final arbiter.
High cognitive cost of changing position
Changing position would undermine earlier “rational” decisions supported by AI analysis.
Presence of an apparently neutral third actor
AI systems, perceived as objective and interest-free, reinforce the legitimization of each narrative.
Paradoxically, this facilitates their functional contribution to confirmation bias on both sides and, consequently, to the emergence of the Coupled Confirmation Bias (CCB).
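The requirement that the four activation conditions be jointly met can be restated as a single predicate. The field names below are my own shorthand, not established terminology, and the boolean simplification deliberately ignores matters of degree:

```python
# The four activation conditions of section 6.1 as one predicate.
# Field names are illustrative shorthand, not the author's terms.
from dataclasses import dataclass

@dataclass
class DisputeState:
    symmetric_legitimization: bool    # each party's AI confirms its narrative
    shared_epistemic_authority: bool  # an arbiter recognized by all parties
    high_cost_of_changing_position: bool
    apparently_neutral_third_actor: bool  # AI perceived as objective

def ccb_can_activate(d: DisputeState) -> bool:
    """CCB 'typically emerges when the conditions are jointly met':
    all four must hold, and the shared authority must be absent."""
    return (d.symmetric_legitimization
            and not d.shared_epistemic_authority
            and d.high_cost_of_changing_position
            and d.apparently_neutral_third_actor)

# A commonly recognized arbiter alone blocks activation (cf. 6.2).
assert not ccb_can_activate(DisputeState(True, True, True, True))
assert ccb_can_activate(DisputeState(True, False, True, True))
```

This all-or-nothing formulation is, of course, a caricature; the limits in section 6.2 show that in practice the mechanism weakens gradually rather than switching off.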
6.2. Limits of Applicability: When CCB Does Not Operate
The Coupled Confirmation Bias is not universal. The mechanism weakens or does not occur when:
– A commonly recognized factual arbiter exists.
– Only one party uses AI tools, breaking coupling symmetry.
– The stakes of the conflict are low or reversible.
– AI is used exclusively for information, not interpretation.
– Parties possess high metacognitive competence and actively counteract their own biases.
– The dispute concerns clearly measurable parameters rather than interpretations of intent or attribution of blame.
– Significant asymmetry arises in AI usage regarding time, scope, purpose, or method.
The CCB mechanism is also weakened when parties are mutually aware of using similar interpretive tools and are capable of metacognitive reflection on their influence. Disclosure of AI use may itself become a de-escalatory factor.
These limits distinguish CCB from general theories of conflict escalation.
6.3. What Would Falsify This Theory?
The hypothesis would be falsified by observing the opposite effect: a mitigating or de-escalatory influence of AI models on conflict dynamics.
Such an effect might appear if an AI system suddenly altered its narrative after identifying a feedback loop and explicitly informed the user of its existence.
I have not observed anything of this kind to date.
The hypothesis could also be falsified if earlier findings on feedback loops and tunnel thinking proved incorrect, or if AI evolution fundamentally altered interaction principles.
At present, I am not aware of evidence supporting such claims.
Falsification would also occur if escalation in observed cases were shown to result from factors other than cognitive biases arising from AI interaction.
Finally, the hypothesis would be falsified if conflict dynamics were identical regardless of whether participants used AI.
Current observations contradict this.
7. Systemic Effects: The Dynamics of Recursive Escalation
Activation of the CCB mechanism shifts conflict from contested interests to closed interpretive loops.
This entails the following systemic consequences:
Autocatalytic escalation
Due to its recursive nature, every de-escalatory communication attempt is filtered through the opposing party’s AI. If the system interprets the other side as disloyal, even goodwill gestures are framed as strategic manipulation, paradoxically reinforcing escalation.
Ambiguity collapse
In classical disputes, uncertainty about intentions leaves room for interpretation. In CCB, AI “closes” interpretations by granting them analytical certainty. Parties stop discussing facts and operate instead on finalized analytical outputs—judgments about the opponent’s intentions.
The legitimacy trap
Because each party possesses “objective” confirmation of its position from a subjectively epistemically privileged system, compromise becomes framed as irrational or logically flawed.
Erosion of shared reality
Recursive layering of analyses causes parties to stop responding to real actions and instead react to how their own AI models predict the opponent’s AI interpretations. The conflict detaches from reality and moves into a model-to-model interaction space.
Position inertia
AI-shaped narratives exhibit strong resistance to change (cognitive inertia). Challenging conclusions generated by advanced analytical models would require decision-makers to admit a fundamental error in tool selection. This raises both psychological and “political” costs of de-escalation.
Importantly, this escalation occurs without bad faith and often without conscious strategic intent.
8. AI as a Quasi-Third Actor?
Although artificial intelligence lacks agency and its own interests, its functional role in the CCB mechanism cannot be overlooked. By providing authoritative legitimization while simultaneously lacking accountability, AI systems influence the dynamics of a dispute. They affect the subjective assessment of escalation costs and stabilize mutually exclusive narratives. To be clear—I am not suggesting that AI’s role grants it the status of a party in an ontological sense. Rather, it acts as a mirror that actively reinforces primary beliefs. However, it is not a passive mirror, but an active one—invested with trust and strengthening convictions.
9. Conclusion
The Coupled Confirmation Bias (CCB) provides a conceptual framework for understanding why introducing ostensibly rational tools may accelerate conflict escalation. The proposed concept does not claim universality or empirical finality. It is a hypothesis of a mechanism observed in practice, inviting further criticism, testing, and refinement.
Understanding CCB is relevant not only for lawyers, mediators, and conflict managers. It is also significant for designers and users of AI systems in confrontational contexts.
10. Author’s Methodological Note
This article presents my original analytical and conceptual framework based on patterns I have observed in conflicts. These observations stem from my own legal practice in the second half of 2025. The article does not constitute an empirically validated theory. It is a hypothesis. Its purpose is to describe a phenomenon and indicate directions for further analysis. In developing this work, I used language models to critically evaluate my own theses and to identify weaknesses in my reasoning. AI assistance was critical rather than generative. It served to identify potential vulnerabilities in the argumentation.
You can also read this in Polish:

HOW WILL AI IMPACT DISPUTE DYNAMICS IN 2026?
Recently, I wrote about AI’s impact on escalating family and business conflicts. I described the coupled confirmation bias. I showed how it leads to tunnel vision. This creates a micro-scale version of the security dilemma. Today, I share reflections on how Large Language Models (LLMs) will change dispute dynamics in 2026. Current trends will likely not reverse on their own. Instead, everything points to their significant intensification.
How Does AI Influence Dispute Dynamics?
AI directly shapes how conflicts evolve. I will not repeat my previous articles here. Instead, I am providing links to the most important ones. They contain links to the latest scientific publications and my other texts. This article does not provide legal advice for Poland. It explains general conflict dynamics.
I wrote about people using AI to diagnose and solve important legal problems:
https://jakubieciwspolnicy.pl/ai-zastapi-prawnikow/
In that same text, I explained why such analysis is insufficient. It is often incomplete and requires legal verification.
You can read about how lawyers and clients use AI:
https://jakubieciwspolnicy.pl/korzystanie-z-ai-przez-prawnikow-i-klientow-podstawowe-problemy/.
However, my most important article is available in Polish at https://jakubieciwspolnicy.pl/ai-i-myslenie-tunelowe-w-sporach-miedzy-wspolnikami/ and in English at https://jakubieciwspolnicy.pl/en/ai-and-tunnel-vision-in-shareholder-disputes/. I describe the universal rules of conflict there, so don’t hesitate to read it. It is not an analysis of Polish law.
I described the mechanism of coupled confirmation bias in detail there. I showed how it leads to dangerous tunnel vision.
The greatest risk arises when both parties use AI models to interpret each other’s behaviors. This can lead to rapid, uncontrolled conflict escalation.
Read more about the feedback loop here: https://pmc.ncbi.nlm.nih.gov/articles/PMC11860214/ This article by M. Glickman and T. Sharot discusses how AI feedback loops change human judgment.
I also recommend the text regarding causal explanations in AI by L. Celar and R.M.J. Byrne: https://pubmed.ncbi.nlm.nih.gov/36964302/
Finally, I suggest the article by Liu Y and Moore A. It covers intuitive judgments regarding AI and moral transgressions. It is valuable for understanding social perceptions: https://pubmed.ncbi.nlm.nih.gov/40448478/
The Impact of AI on Conflict Intensity in 2026
There are no rational reasons to expect a reversal of current trends. We see growing AI accessibility and rising trust in AI. Furthermore, the computational capabilities of language models continue to expand.
I assume a group of people will become emotionally dependent on their “relationships” with AI. Withdrawn or lonely individuals will be particularly vulnerable. Those not used to critical thinking face the highest risk. A lack of critical assessment makes the AI model a “moral authority” rather than an information source. If you doubt human emotional bonds with machines, remember the Tamagotchi craze.
I believe AI’s role in shaping human emotional attitudes will grow. This will directly affect how we perceive current relationships. We will observe increased atomization and polarization of individuals and groups. Atomization occurs because AI companionship may seem more attractive than human contact. Already weak social ties will weaken further. Polarization will result from people frequently consulting AI on sensitive matters like family or business partnerships. In a dispute, users might not seek a solution. Instead, they may seek validation for their own narrative.
These mechanisms will become increasingly common. Public awareness may not keep pace with AI’s real-life impact.
Can AI Be Helpful in Resolving Disputes?
Yes! Despite the risks, AI will also have strong positive aspects. AI will serve in prevention. It helps predict potential conflict areas and secure them early. Language models are helpful here. They easily create multiple scenarios and highlight potential risks. They can function as an effective early warning system.
Generative AI will also help find rational solutions to existing disputes. Its ability to multiply potential outcomes is remarkable. After describing a deadlock, the model may suggest a solution we missed. However, we must always critically evaluate these suggestions. We must predict their long-term consequences across many levels.
In every case, the role of AI remains auxiliary. It builds variants well but struggles to analyze legal and emotional consequences.
At this point, reference should also be made to the research of Marco Giacalone, who points to the significant potential of language models in dispute resolution: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5083207. The author writes that ‘The integration of generative AI reduces costs and allows legal practitioners to focus on complex issues, strategic planning, and client interaction.’ According to the author, AI will not replace humans, but will be a very valuable tool in their hands. I agree with this approach.
AI Influences Dynamics but is Neither Good Nor Evil
It makes no sense to ask if AI is “good” or “bad.” We should study its impact within specific contexts. Our species created AI. Trying to put the “genie back in the bottle” is impossible and pointless. Let’s objectively study its impact on our thought patterns and relationships. We should draw logical conclusions from these observations. Use this technology to benefit yourself and others. I believe this is possible. However, do not let AI replace human friendship and intimacy. Do not allow law and health to fall under unreflective AI influence.
We are dealing with a powerful new tool. It is a new source of influence on our psyche. Humanity has never known a tool this powerful. Yet, every great discovery brings both opportunities and threats. Ultimately, the outcome depends entirely on us.
Unlike nuclear energy, almost everyone now has direct access to AI. AI will amplify what is already hidden and strong within us.
Invitation to Cooperation
Have you noticed AI suggestions influencing your private or business relationships? Do you see this impact on yourself?
Perhaps you notice the other party becoming more radical lately?
I invite you to discuss this in the comments or contact our law firm. We help clients build strategies and resolve disputes optimally.
We consider many factors: law, psychology, communication, and technology. This is our strength.
📩 kancelaria@jakubieciwspolnicy.pl
📞 +48 536 270 935
