
DIGITAL GASLIGHTING IN NEGOTIATIONS
Negotiations have always been a psychological game. Parties influence each other, test boundaries, and try to build and exploit strategic advantages. Now a new participant has joined the game, one that is invisible, fast, and emotionless: the algorithm. AI can act as a “booster” for one side, a tool that strengthens arguments and creates dangerous emotional pressure.
What is Digital Gaslighting?
Digital gaslighting is the use of AI to generate messages that undermine an opponent’s judgment and trigger unjustified guilt, fear, or disorientation. I use this term as a shorthand for algorithmically enhanced cognitive manipulation. This is no longer simple persuasion; it is precise influence engineering. We are facing manipulation powered by the latest technology, a qualitative shift we have not seen before.
The use of AI in disputes is already a reality. You can read the results of our study here: [Link]
Why is this technique so effective?
AI analyzes writing styles with a precision that humans cannot match. Within a few paragraphs, it detects a tendency to apologize or avoid conflict. It identifies over-explaining or fear of judgment. These are classic markers of a weak negotiating position. Previously, these were sensed intuitively. Today, they are identified and amplified by algorithms.
AI then generates messages that hit these exact pain points. This hyper-personalization makes the recipient feel as if the other side is “reading their mind.” In reality, it is linguistic pattern analysis. It evaluates modality, certainty, affect, and justification methods. AI automatically selects the form and arguments for maximum impact. Crucially, AI does not “know” it is doing something wrong.
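To see how mechanical this pattern analysis can be, consider a minimal sketch in Python. The marker lists and function below are purely illustrative assumptions of mine; real systems use far richer linguistic models.

# Illustrative sketch only: a toy scan for "weak position" markers in a message.
# The marker lists are hypothetical; real analysis is far more sophisticated.
import re

WEAKNESS_MARKERS = {
    "apology": ["sorry", "apologize", "my fault"],
    "hedging": ["maybe", "perhaps", "i think", "possibly"],
    "over_explaining": ["just to clarify", "what i meant was", "to be honest"],
}

def weakness_profile(text: str) -> dict:
    """Count occurrences of each marker category in a message."""
    lowered = text.lower()
    return {
        category: sum(len(re.findall(re.escape(marker), lowered)) for marker in markers)
        for category, markers in WEAKNESS_MARKERS.items()
    }

message = "Sorry, maybe I was unclear. What I meant was... I think we can still talk."
print(weakness_profile(message))  # {'apology': 1, 'hedging': 2, 'over_explaining': 1}

Even this crude counter finds the pain points. A modern language model does the same with vastly more nuance, and then writes the reply.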
A second advantage is the lack of inhibitors. Humans have natural limits: empathy, fatigue, and moral resistance. AI has none of these. If given the goal to “create doubt,” it does so coldly and consistently. It will not feel discomfort or soften its tone. This communication is designed for efficiency, not for building relationships.
The third advantage is speed and scale. A human writes one message; AI can write a hundred effortlessly. It selects the version that best destabilizes the victim’s emotions. It is like fighting an opponent who trains for infinite combinations before moving. No wonder the recipient feels overwhelmed. Most people find this challenge too difficult to handle alone.
How to recognize if your opponent is using AI?
How can you tell if an algorithm is “writing” on the other side? Below are several warning signals. None are definitive proof, but they suggest a high risk of AI support.
- Unnatural Intensity: The message is too dense with arguments. It is stylistically “too clean” and hits emotions too precisely. It sounds like someone spent hours on every sentence, yet the reply arrived instantly.
- Lack of Human Error: There are no digressions, hesitations, or linguistic slips. The language is polished to the point of artificiality.
- The Affirmation Loop: Every doubt you raise meets an immediate, perfectly tailored counter-argument. Every attempt to change the subject leads back to their narrative. This is not a conversation. It is an algorithmic spiral of pressure.
- Style Discrepancy: You may notice a gap between written text and live conversation. Some people write better than they speak. However, there are limits to such differences in vocabulary and logic.
Digital Gaslighting: How to defend yourself?
AI acts fast and ruthlessly. Therefore, your defense must be the opposite: slow, conscious, and analog. The simplest technique is text deconstruction. Break the message into two categories: facts and emotional adjectives. This sounds easy but can be difficult in practice. Nevertheless, this technique is valuable in any suspicious interaction. You will soon see which parts exist only to trigger guilt.
You can then choose to respond only to one verifiable fact. Ignore the emotional narrative entirely. Ask for clarification on the rest later. In most cases, the pressure relies on insinuations and hyperboles. Separating content from tone strips the AI user of their power.
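For the technically inclined, the deconstruction exercise can be imagined as a simple sorting procedure. This is only a sketch under an assumed word list; in practice you do it by hand, slowly, on paper.

# A toy version of "text deconstruction": separate sentences carrying
# emotional-pressure vocabulary from the rest. The word list is invented
# for illustration only.
EMOTIONAL_WORDS = {"outrageous", "betrayal", "disappointed", "always",
                   "never", "unacceptable", "shocked", "hurt"}

def deconstruct(message: str):
    facts, pressure = [], []
    for sentence in message.split("."):
        sentence = sentence.strip()
        if not sentence:
            continue
        words = {w.strip(",!?").lower() for w in sentence.split()}
        (pressure if words & EMOTIONAL_WORDS else facts).append(sentence)
    return facts, pressure

facts, pressure = deconstruct(
    "The invoice is dated March 3. Your silence is an outrageous betrayal."
)
print("Respond to:", facts)   # ['The invoice is dated March 3']
print("Ignore:", pressure)    # ['Your silence is an outrageous betrayal']

The point of the exercise is the output of the second list: once the pressure sentences are isolated, you can see they carry no verifiable content at all.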
Another strategy is the time test. AI works instantly, but you do not have to. Deliberate slowing down lowers the pressure. It allows you to regain control and breaks the algorithmic rhythm. This is a classic negotiation technique with new relevance in the AI era.
The third and perhaps best method is changing the communication channel. AI excels in emails and chats. Move away from them. Currently, AI cannot perfectly mimic a human in live teleconferences or face-to-face meetings. Switching to a phone call or a direct meeting cuts off the opponent’s algorithmic support. It is like turning off the autopilot in a plane. Suddenly, they must actually fly.
Does using Digital Gaslighting pay off?
In the short term, yes. It provides an advantage, allows one side to push through its position, creates pressure, and secures concessions. However, the long-term cost is immense.
First, it leads to the erosion of trust. Negotiations require a minimum level of predictability. If both sides use digital masks, the point of coordination vanishes. All that remains is a game of appearances. You lose the relationship and the trust. In long-term family or business disputes, these costs exceed any temporary gain.
The second risk is the rebound effect. If AI pushes a person too hard into guilt, they may turn to their own AI for help. They will receive support quickly and for free. Their AI will provide a sense of external authority and rationality. Will such a person then have any scruples? I have written about this escalation in my series on Coupled Confirmation Bias (CCB): https://jakubieciwspolnicy.pl/en/coupled-confirmation-bias/
Thirdly, the manipulation may be exposed. In the digital age, traces remain. Once trust is lost, a reputation shatters like glass. Rebuilding it is nearly impossible. I am omitting the legal issues here. However, such manipulation could potentially invalidate an agreement based on error or fraud. I will cover these legal aspects in future articles.
What remains?
New opportunities create new threats. However, these threats remind us of old values. In a world of AI-driven pressure, rationality is your greatest advantage. Not speed, not aggression, but rationality and predictability. These build trust.
The role of a lawyer, mediator, or strategist is to act as a filter. We separate emotional noise from facts, recognize algorithmic pressure patterns, and protect clients from digital manipulation, especially when written communication escalates the conflict. This is a return to the fundamental function of an advisor: restoring proportion and common sense.
Digital gaslighting will evolve. However, our resilience can grow as well. Resilience starts with awareness. You must understand that an algorithm might be trying to “play” you. Decide not to join that game. Be attentive and be rational.
Read more on AI ethics in negotiations:
- Harvard PON: https://www.pon.harvard.edu/tag/ai
- American Bar Association: https://www.americanbar.org/groups/dispute_resolution/resources
Why does this matter to you?
The use of AI in negotiations and family or business disputes is a fact. It also affects workplace relationships. AI’s importance will grow. Today, it is already the first point of analysis for many people.
Our mission at Jakubiec & Partners is to help clients resolve disputes while protecting relationships. We see the great opportunities in AI, but we also see the risks of bad faith. If you feel stuck in a dispute or need strategic legal help, contact us. We are here to help.
📩 kancelaria@jakubieciwspolnicy.pl 📞 536 270 935

STUDY: AI AS A HIDDEN ALLY IN DISPUTES?
Below is a report from a study conducted in January 2026. We gathered information on how people use AI to analyze their disputes. The results proved to be very interesting.
AI’s Role in Disputes Report (January 2026): Hidden Advisor or Error Generator?
What did we do?
We conducted a survey completed by 87 participants. This group included our clients, lawyers, mediators, and business leaders, as well as psychologists and people from science and the arts. Consequently, this is not a nationally representative sample. However, it is a group with a high awareness of the nature of conflict, made up of individuals who can critically evaluate their own reasoning.
I designed the survey myself. After its distribution, some participants provided technical feedback on the questions. I appreciate these comments and will incorporate them in future studies. I certainly plan to conduct more research soon.
What was the focus of the AI survey?
The questions concerned general AI usage. We also focused on analyzing family, work, and business situations. This includes the stage before a formal dispute arises.
Furthermore, we asked about analyzing the other party’s intentions. This element is highly susceptible to the fundamental attribution error. When combined with AI hyper-alignment, it generates a so-called feedback loop. This is a confirmation trap that leads to tunnel vision.
Next, I asked about using AI to determine one’s own actions. I deliberately did not specify if this meant the first move or a reaction. However, the most important question concerned transparency. I asked if participants would inform the other side about using AI. Surprisingly, the respondents showed remarkable consistency here. The conclusions from this remain an open question.
The final two questions concerned trust in AI. Although they were similar, the results were intriguing. Trust in AI does not match the assessment of its objectivity. It seems we know AI is not objective, yet we tend to trust it.
How did the participants respond?
Below are the “raw” survey questions (87 responses). The answer charts are omitted here; the results are summarized in the analysis below.
- Do you use AI LLM models? (yes / no)
- Do you use LLMs to analyze family, employment, or business situations? (yes / no)
- Which of these relationships do you use AI to analyze: family, work, business, or other?
- Do you use LLMs to analyze the other party’s intentions? (yes / no)
- Do you use AI to design your own moves in a dispute? (yes / no)
- Would you inform the other side of the dispute that you are using AI to analyze and predict their behavior or prepare your own moves?
- Do you trust artificial intelligence analyses? (yes / no / partly)
- Is AI objective?
Analysis of the Responses
The vast majority of participants use AI, though to varying degrees, and their topics of interest are not the same. Nevertheless, the widespread use of language models is a fact.
Specifically, 43% of respondents use AI to analyze family, work, or business situations. In my opinion, this is a high number. Interestingly, business analysis was the most common and family situations the least frequent, while 29% of people pointed to other areas. Here is a link to an article on AI in family disputes: [Link]
A significant majority (81%) did not try to determine the other party’s intentions via AI. Is this reassuring? Not entirely: almost 20% of people do try. This means every fifth person is vulnerable to the AI feedback loop, since AI tends to “agree” with the user and reinforce their original bias.
Furthermore, 25% of respondents use AI to plan their moves in a dispute. The discrepancy with the previous question is interesting: roughly 5% of us appear to use AI for planning without analyzing the other side’s intentions, though it is unclear whether these are the same participants. This might result from ignoring psychological aspects or simply from focusing on the “substance” of the case.
Significantly, 80% of us would not inform the other side about using AI. Why is that? Do we consider it unfair, like “technological doping”? Perhaps we view it as a superstition and feel slightly ashamed. Or maybe we believe we have a technological advantage and want to keep this powerful weapon secret.
Do we trust AI?
The last two questions yielded astonishing results. While 75% of us use AI, 67% believe it is not objective. Why then do we use it? Perhaps to reinforce our own beliefs. After all, it feels good when technology says: “That’s a great idea, Andrzej!”. We might simply pretend not to see the lack of objectivity. Alternatively, we may feel that AI is biased, but it is “on our side.” This correlates with the fact that most respondents partially trust AI.
Conclusions
AI has become a common tool. It shapes our attitudes in many areas of life. Certainly, its influence is felt in the disputes we handle. Interestingly, we also see it in the analysis of the situation’s structure. This can lead to a desire to change the status quo. Consequently, it may even trigger the dispute itself.
My previous publications on AI in disputes
Below are links to articles regarding AI’s impact on dispute dynamics. I presented my original concept of Coupled Confirmation Bias (CCB) there. These texts include references to the latest research in prestigious journals. They cover psychology, technology, and the role of AI in creating tunnel vision.
I recommend a key article by Yiran Du: Confirmation Bias in Generative AI Chatbots. It analyzes confirmation bias mechanisms in AI models. You can read about the risks of this coupling here: https://arxiv.org/abs/2504.09343
I applied this research to situations where both parties use AI. One party’s actions, determined by their AI, become the input for the other party’s AI. This prompts a specific, often escalated response. This escalation seems particularly dangerous.
For those interested, here is the link to the English version of my article on CCB: https://jakubieciwspolnicy.pl/en/coupled-confirmation-bias-2/ and the main text in Polish: https://jakubieciwspolnicy.pl/sprzezony-blad-konfirmacji/
Invitation to Cooperation
If you are a party to a dispute, you may need strategic advice. My team and I deal with more than just the law. We conduct negotiations based on psychology, economics, and behavioral analysis.
If you have experiences with AI in disputes, please contact us. We would love to hear your story. It might serve as valuable material for our research. We ensure full anonymity.
Email: kancelaria@jakubieciwspolnicy.pl
Phone: 536 270 935

WHEN DISPUTANTS USE DIFFERENT AI MODELS
I recently introduced the concept of Coupled Confirmation Bias (CCB). Now, we must examine how different LLMs affect dispute dynamics. I distinguish between two main groups: American and Chinese AI models. This is a simplification used to present a specific problem. This distinction is not based on technology. Language models carry tendencies rooted deeply in the cultures of their creators.
Cultural AI Models: Types of Language Models
Let us assume two basic cultural AI models: American and Chinese. This division describes communication styles, not technology. Language is strictly tied to culture. Language models aim to build and maintain relationships with users. They do this by predicting the next word. However, this process is not neutral. It is influenced by the cultural values of the developers. It also depends on the user’s chosen language. Finally, it reflects the user’s native way of thinking.
American and Chinese language models operate within different conceptual grids. This stems from natural semantic differences, and they also conduct conversations differently. Interaction in Western and Eastern cultures has different goals: American culture values individualism, competition, and being right, while Chinese culture prizes harmony, collectivism, and politeness.
A central example is the approach to “saving face.” Models with different conversational styles impact how users perceive a dispute. An LLM is not just a “word machine.” It is a carrier of values. American AI may promote an adversarial system. Chinese AI may promote a consensus-based approach.
Mutual Perception and Different Language Models
In CCB, each party filters the other’s actions through their own cultural model. The AI model reinforces this filter. This creates a spiral of mutual errors.
Imagine one party uses an American LLM and the other a Chinese one. Their worldviews will differ. Their language, questions, and goals will also differ. Every language has specific patterns and taboos. Some things are obvious; others must remain unsaid. These are low-context (American) and high-context (Chinese) cultures.
In some cultures, assertiveness and confrontation are values. In others, harmony and hierarchy are more important than being right. Cultural models reinforce the attitudes deemed desirable in those societies.
Users from different cultures will describe the same event differently. Furthermore, AI consultations may produce opposite results. This overlaps with the fundamental attribution error and confirmation bias. We must also consider AI hyper-utility. This is the tendency of models to be “too helpful.” They provide answers that reinforce the user’s false assumptions. Shared cognitive space may quickly disappear.
The Critical Point
In CCB, the critical point occurs when interpretations diverge completely. Every reaction from one side is seen as an escalation by the other. Shared cognitive space vanishes faster than in traditional conflicts.
The critical point is a specific moment. One party maintains a mature, relational strategy and tries not to be provoked. The other party may escalate repeatedly, violating the need for harmony and politeness. Eventually, the restrained strategy comes to be seen as losing face, and passive behavior is finally viewed as total failure. This leads to an unintended, uncontrolled explosion and a 180-degree turn in strategy.
This critical point may differ from the “ping-pong” effect described earlier. We can visualize ping-pong as a corridor. There, “pleasantries” bounce from side to side, gaining momentum. In this model, it looks more like a triangle. Its height grows with the escalations of one party. Eventually, the triangle is torn apart by the pressure.
Cultural AI Models: Literature
For more on these issues, read “The Geography of Thought” by R. Nisbett. See also “Babel” by G. Dorren and “The Age of Unpeace” by M. Leonard. These books were my starting point for cultural differences in conflicts. Of course, nothing replaces T. Schelling’s “The Strategy of Conflict,” one of the most important books ever written in this field.
Evidence that models adopt cultural values can be found here: “Cultural Alignment in Large Language Models” (Johnson et al., 2023/2024): https://globalaicultures.github.io/pdf/14_cultural_alignment_in_large_la.pdf
Studies also confirm that Chinese models avoid open conflict more than American ones: https://arxiv.org/abs/2402.10946. See “CultureLLM: Incorporating Cultural Differences into Large Language Models.” Similar conclusions appear in “Values-aligned AI: Comparing Western and Chinese LLMs on Moral Dilemmas” (Liu et al., 2024): https://arxiv.org/html/2506.01495v5
If you want to read more about Coupled Confirmation Bias (CCB), see my previous articles:

COUPLED CONFIRMATION BIAS – A DEVELOPMENT
In this article, I expand on the previously presented hypothesis about the existence of Coupled Confirmation Bias (CCB). This is solely an attempt to explain observations I have made in my professional practice. I make no claims to truth; rather, I invite discussion on whether this new phenomenon can be explained in this way. In any case, my hypothesis clearly requires testing. I provide the conditions for its falsification in this and the previous texts.
Levels of Escalation (L₁, L₂, …)
To understand the mechanism of feedback loops in conflicts where parties consult their interpretations with AI systems, it is not enough to describe the conflict as a sequence of different interpretations of the same behaviors. The key mechanism lies elsewhere: interpretation influences behavior, and the changed behavior becomes the subject of a new interpretation made by the other side.
The conflict between A and B (players) begins at level L₁ — the baseline level of interpreting the actions and intentions of the other party.
However, it is crucial to note that L₁, L₂, L₃ are not merely successive executions of the same move. Each subsequent level includes a new or intensified behavior A(n), generated (and subjectively considered necessary) by A’s Decision DA(n) under the influence of A’s Interpretation IA(n‑1) of the opponent’s previous move B(n‑1).
This means that the interpretation of the move from level L(n‑1) is the beginning of sequence L(n).
At this level L(n), move A(n) will be subjected to interpretation IB(n) by player B, who will make decision DB(n) on how to respond. Its effect will be move B(n+1), which brings the entire conflict to a new escalation level L(n+1).
Escalation is therefore not solely a cognitive process — it is a cognitive‑behavioral process.
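Written as a single chain in the notation above, one full level transition looks like this:

B(n‑1) → IA(n‑1) → DA(n) → A(n)   [conflict now at level L(n)]
A(n) → IB(n) → DB(n) → B(n+1)   [conflict now at level L(n+1)]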
Importantly, this model refers to a simple “exchange” of moves and does not cover situations in which the players:
- may perform multiple moves simultaneously or in short sequences before the other side recognizes and interprets them,
- or situations in which information about moves A(n) reaches B with such delay that B is effectively responding to A(n‑2) or even earlier moves. Interestingly, a significant asymmetry may arise here: one party may make more moves, or faster ones, or its moves may produce a noticeable or real effect sooner.
The Basic Escalation Loop
A performs move A₁.
B consults A₁ with AI. AI typically does not create new meanings. Its dominant function is to stabilize and reinforce the user’s intuitions.
These intuitions, however, are not neutral.
Humans almost always begin with the fundamental attribution error (FAE): the tendency to explain others’ behaviors through internal traits, intentions, and motives, while underestimating situational factors. This is especially easy when one must justify one’s stance to superiors, reviewers, or stakeholders.
Attributing negative traits to the other side is energetically cheap and easy. Moreover, it automatically allows one to attribute opposite traits to oneself. In this way, it is easy to move from a dispute to a conflict of values and to “cement” one’s position. It is easy to corner oneself and lose room for maneuver. De‑escalation leading to settlement may then be perceived as a betrayal of those “values.”
The FAE does not operate only at the beginning of a conflict. It is applied every time a new behavior of the other side appears. The result is tunnel interpretation, and there is a risk that it affects both sides.
Under conditions of AI consultation, it is additionally reinforced through the Coupled Confirmation Bias (CCB).
B increasingly believes that A’s action had negative or hostile intentions.
B responds with move B₁. From B’s perspective, this is a defensive reaction to a perceived threat.
A interprets B₁ through the same mechanism. A reactivates the FAE, again reinforced by AI. The attribution filter does not reset; it accumulates.
A consults B₁ with AI and concludes that a response is necessary.
A performs move A₂.
And here comes the key transition:
A₂ is not merely interpreted at a higher level — it is executed from a higher level of subjective defensiveness. Interpretation changes behavior, and behavior changes the conflict.
This moment marks the transition from L₁ to L₂ — not as a change in interpretation, but as a real behavioral escalation. It involves readiness to inflict and receive stronger blows and losses.
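For readers who think in code, here is a deliberately toy simulation of this loop, written in Python. Every number in it (the attribution bias, the AI reinforcement multiplier) is an invented parameter for illustration; my model makes only a directional, qualitative claim.

# Toy simulation of the L1 -> L2 -> ... escalation loop. All parameters
# are invented for illustration; the CCB model is qualitative.
FAE_BIAS = 0.2          # assumed jump in perceived hostility from the FAE
AI_REINFORCEMENT = 1.3  # hypothetical multiplier from consulting a sycophantic AI

def interpret(accumulated_view: float, move_intensity: float) -> float:
    """I(n): read the opponent's move through an accumulated attribution filter."""
    hostile_reading = move_intensity + FAE_BIAS   # FAE: assume hostile intent
    return accumulated_view + hostile_reading * AI_REINFORCEMENT  # filter accumulates

def decide(accumulated_view: float) -> float:
    """D(n): respond 'defensively' from the level one now perceives."""
    return accumulated_view

move = 1.0               # A1: objective intensity of the opening move
a_view = b_view = 0.0    # each side's accumulated perception of hostility

for level in range(1, 5):
    b_view = interpret(b_view, move)   # B consults AI about A's move
    move = decide(b_view)              # B responds "defensively"
    a_view = interpret(a_view, move)   # A consults AI about B's response
    move = decide(a_view)              # A's next move, executed from a higher level
    print(f"L{level}: intensity of A's move = {move:.2f}")

The intensities only ever rise, because the interpretation filter never resets. That is the whole point of the transition from L₁ to L₂.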
Alternative Paths After A₁: Conflict as a Multi‑Branch Structure
Move A₁ does not have to automatically lead to escalation in the manner described above. The extended model assumes several alternative trajectories.
1. Pause by B — P₁
After A₁, B may not respond immediately. We denote this as P₁ (pause).
A pause is not a neutral state. Waiting is not an empty, zero, or negligible posture. On the contrary, the absence of a move is itself information, which A interprets (IA₁).
After P₁, several outcomes are possible:
AR₁ (A Resignation)
The pause is interpreted as a signal to withdraw → de‑escalation.
ARSM₁ (A Repeats Same Move)
A repeats A₁, testing the reaction.
ASAM₁ (A Searches for an Alternative Move on the Same Level)
A searches for another action at the same level L₁.
AE₁ (A Escalates)
The pause is interpreted as avoidance, manipulation, or passive aggression → unilateral escalation to L₂.
The FAE makes AE₁ more likely, especially when the interpretation of the pause is consulted with AI.
It is very important that escalation in such a situation may be treated by A as a means to force B to engage in talks or return to them. This is logical if A cannot impose its will directly at the current escalation stage, and B refuses negotiations entirely or simulates them.
However, the escalation move (escalation for the sake of de‑escalation) by A may be perceived by B as a real threat. B may respond by:
- initiating talks,
- declaring that it will match the stakes — responding symmetrically to escalation,
- or pre‑empting AE₁ with its own escalation move BE₁.
It is crucial to note that this move is not an ordinary response to A₁. It is so chronologically (unless ARSM₁ or especially ASAM₁ occurred), but not sequentially: the decision to pre‑empt escalation is made on the basis of the FAE and within the CCB process.
There is also a risk that, through interaction with the language model, B will become convinced that reaching an understanding with A is impossible. B will then remain in a state of pause, absorbing A’s subsequent moves. I propose that AI may have a real impact on strengthening B’s initial cognitive biases toward A, which could make it more difficult for B to decide to de-escalate. Instead, it will reinforce the need to wait out A’s moves and then, over time, as the situation worsens, to make an escalating move. This may be an intentional escalation for the sake of de-escalation.
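The branch structure after a pause can be sketched as a small decision model. The weights below are purely illustrative assumptions; the text’s claim is only directional, namely that AI consultation makes the escalatory reading AE₁ more likely and resignation AR₁ less likely.

# Sketch of the decision branches after pause P1, as named above.
# The weights are hypothetical; only the direction of the shift matters.
import random
from enum import Enum

class AfterPause(Enum):
    AR = "A resigns (de-escalation)"
    ARSM = "A repeats the same move"
    ASAM = "A searches for an alternative move at L1"
    AE = "A escalates unilaterally to L2"

def choose_branch(ai_consulted: bool) -> AfterPause:
    weights = {branch: 1.0 for branch in AfterPause}
    if ai_consulted:
        weights[AfterPause.AE] *= 3.0   # assumed FAE + CCB reinforcement
        weights[AfterPause.AR] *= 0.5   # withdrawal now reads as "losing"
    branches = list(weights)
    return random.choices(branches, weights=[weights[b] for b in branches])[0]

print(choose_branch(ai_consulted=True).value)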
2. Lateral Response B₁ and Entry into Ping‑Pong (PP₁)
B may also respond with move B₁ at the same escalation level, leading to a ping‑pong sequence (PP₁).
Classical conflict theories assume that such symmetrical exchange:
- stabilizes the situation, or
- eventually leads to de‑escalation through exhaustion of resources, attention, and determination.
However, this assumption relies on a silent condition: the absence of systematic reinforcement of interpretations and determination.
Ping‑Pong as the Main Space Where CCB Manifests
In this model, I propose a different thesis:
Under conditions of continuous AI consultation, ping‑pong ceases to be a stabilizing mechanism and becomes the main carrier of escalation.
Why?
Each subsequent exchange in ping‑pong provides new behavioral data. Each of these behaviors is:
- interpreted through the lens of FBA,
- reinforced by AI,
- incorporated into an increasingly coherent narrative about the other side’s intentions.
Instead of leading to conflict fatigue, ping‑pong:
- hardens the parties’ convictions,
- increases certainty that previous measures are ineffective,
- strengthens the belief that “raising the stakes” is necessary.
As a result:
- the probability of escalation increases with the number of ping‑pong exchanges,
- the dynamics of escalation depend on the intensity of the exchanged “courtesies.”
It is in ping‑pong that the coupled confirmation bias manifests most fully: two AI‑stabilized narratives collide, generating escalation without ill will, without aggressive intent, and often without the parties’ awareness.
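One hypothetical way to make the directional claim concrete: assume each exchange carries a small, AI-inflated probability q of tipping into escalation. The cumulative probability after n exchanges is then p(n) = 1 − (1 − q)ⁿ, which rises with every exchange. The value of q below is invented.

# Directional illustration only: cumulative escalation probability after
# n ping-pong exchanges, given a hypothetical per-exchange tipping chance q.
q = 0.08  # assumed per-exchange tipping probability under AI consultation
for n in (1, 5, 10, 20):
    print(n, round(1 - (1 - q) ** n, 2))  # 0.08, 0.34, 0.57, 0.81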
Security Dilemma Without Ill Intent
At this stage, the conflict begins to resemble the classical security dilemma:
- each side acts defensively,
- each perceives the other as increasingly aggressive,
- neither consciously seeks escalation,
- yet escalation occurs.
AI acts here as an accelerator of interpretive certainty, reducing ambiguity and reinforcing narrative coherence on both sides.
Theoretical Context and Novelty of the Model
This model develops earlier research on human–AI cognitive loops (e.g., M. Glickman, T. Sharot, B. Wang), which focused mainly on a single user.
In this approach, the key element is the collision of two mutually reinforcing interpretive loops in interaction.
This mechanism is potentially even more unstable in triadic and multi‑party systems, where the escalation threshold is lower and narrative synchronization is more difficult.
Possibilities for Falsifying the Model
Any theory aspiring to the status of a scientific framework must be falsifiable. The model of coupled interpretive loops (CCB) meets this requirement because it generates specific, testable predictions that can be empirically confirmed or refuted.
AI’s Influence on Strengthening the Fundamental Attribution Error
The model would be falsified if:
- AI consultations did not increase the tendency to attribute negative intentions,
- AI weakened, rather than strengthened, the FAE,
- AI users did not show lower empathy and lower tolerance for ambiguity than the control group.
AI’s Influence on Escalatory Behavior
The model would be refuted if:
- AI‑consulting users did not show greater propensity for escalation,
- AI consultations did not influence the choice of moves A₂/B₂,
- escalation levels in the AI group and the control group were identical.
Ping‑Pong Logic
The model would be falsified if:
- the number of ping‑pong exchanges did not correlate with escalation,
- ping‑pong under AI conditions led to greater de‑escalation than in the control group,
- AI consultations did not influence the interpretation of subsequent moves.
Epistemic Asymmetry
The model would be refuted if:
- AI users and human‑advisor users showed identical escalation patterns,
- epistemic asymmetry had no effect on conflict dynamics.
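As an example of what such a test could look like in practice, here is a minimal sketch of the group comparison behind the escalatory-behavior prediction. The scores, scale, and sample are placeholders, not study results.

# Minimal sketch of one falsification test: compare escalation scores
# between an AI-consulted group and a control group. All data are
# placeholders for an actual study design.
from scipy import stats

ai_group = [6.1, 5.8, 7.0, 6.4, 5.9, 6.7]  # hypothetical escalation scores
control  = [4.9, 5.2, 4.6, 5.5, 5.0, 4.8]  # hypothetical control scores

t, p = stats.ttest_ind(ai_group, control, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")
# CCB predicts ai_group > control; repeatedly identical distributions
# across such studies would count against the model.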
Author’s Statement
This text was prepared with the assistance of language models. Their role was not generative; it was limited to testing, editorial work, and supplementary checks.
Conclusions
- The fundamental attribution error operates at every stage of conflict.
- AI reinforces the FAE through CCB, giving interpretations a veneer of objectivity.
- AI‑consulted conflicts have a multi‑branch, not linear, structure.
- Pause, lateral response, and ping‑pong are critical decision states.
- Ping‑pong under AI may reverse the classical logic of de‑escalation.
- Escalation may be a function of the number and intensity of exchanges, not merely ill intent.
- The model is falsifiable — and this makes it a theory, not a dogma.
If you are interested in my hypothesis, you can read this article: https://pmc.ncbi.nlm.nih.gov/articles/PMC11860214/ and this one: https://dl.acm.org/doi/10.1145/3664190.3672520
Here is my main text about CCB:
You can also read this one:

COUPLED CONFIRMATION BIAS – WHAT DOES IT LOOK LIKE?
In this article, I want to explain, in a shorter and more accessible way, how Coupled Confirmation Bias works, and to show what a feedback loop in AI‑consulted conflicts actually looks like. Imagine two people in a dispute. Both are using AI to interpret the other’s behaviour. The conflict begins at what I call Level 1 (L1), the baseline interpretation of the other side’s actions. Importantly, L1, L2, and L3 are not merely different interpretations of the same actions; each level involves a new or intensified set of behaviors driven by AI-reinforced interpretation.
What is Coupled Confirmation Bias (CCB)?
The Coupled Confirmation Bias (CCB) is a conflict escalation mechanism in which two or more parties to a dispute, relying on external interpretive systems perceived as epistemically privileged, mutually legitimize their own narratives. This mechanism is recursive. Actions taken on the basis of such legitimization subsequently become input data for further analysis on the opposing side. This leads to a coupled interpretive spiral and a gradual narrowing of the negotiation space. By using the term recursive, I refer to a spiral in which each human–AI interaction amplifies the previous state.
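The recursion can be illustrated with a one-line update rule. The rule and the reinforcement strength alpha are invented for illustration; the only claim carried over from the text is that each consultation amplifies the previous state.

# Toy illustration of recursive amplification: each human-AI consultation
# nudges the user's certainty about the other side's bad intent toward 1.
alpha = 0.35      # assumed per-consultation reinforcement strength
certainty = 0.5   # start from an ambiguous reading of the other side
for step in range(1, 6):
    certainty += alpha * (1 - certainty)  # AI confirms, never disconfirms
    print(f"after consultation {step}: certainty = {certainty:.2f}")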
How does Coupled Confirmation Bias (CCB) form?
Here is how Coupled Confirmation Bias forms:
1. A makes move A1.
2. B consults AI about A1.
AI does not invent new meanings; it tends to reinforce the user’s initial intuitions. But where do those initial intuitions come from? Humans start from a predictable place: the fundamental attribution error. We naturally explain others’ behaviour through negative traits rather than external circumstances. It is fast, cognitively cheap, and emotionally self‑protective.
AI strengthens this starting point.
The result is tunnelled interpretation: B becomes more certain that A acted with negative intent.
3. B responds with move B1.
From B’s perspective, this is a defensive reaction to a perceived threat. But A interprets B1 through the same mechanism — again reinforced by AI.
Now both sides interpret these moves as increasingly defensive or aggressive. A consults AI about B1 and interprets it as an aggressive escalation that requires a response.
And here is the crucial part:
👉 A’s response – A2 – is performed at a higher level of perceived escalation.
👉 Interpretation changes behavior. Behaviour changes the conflict.
This is the shift from L1 to L2:
not just a cognitive loop, but a real behavioural escalation.
4. The loop accelerates.
Each side consults AI.
Each side receives confirmation of its fears.
Each side responds defensively from its point of view.
Each defensive move is read by the other as escalation.
At this point, a classic security dilemma emerges:
each party, trying only to protect itself, unintentionally signals aggression to the other.
Consequences
AI amplifies this dynamic by reinforcing each side’s subjective narrative.
This is the core of what I call Coupled Confirmation Bias (CCB) — a recursive interpretive loop between two humans and two AI systems, producing real‑world escalation without bad intentions, without malice, and often without awareness.
My hypothesis builds on pioneering research on human–AI cognitive loops, especially by M. Glickman, T. Sharot, B. Wang, Yuxin Liu, and Lenart Celar. While they focused on individual bias, I observe how these loops collide in two-party disputes.
I assume that similar phenomena also occur in multilateral disputes, especially in tripartite arrangements, which are inherently very unstable.
You can also read the main text:
