
STUDY: AI AS A HIDDEN ALLY IN DISPUTES?
Below is a report from a study conducted in January 2026. We gathered information on how people use AI to analyze their disputes. The results proved to be very interesting.
AI’s Role in Disputes Report (January 2026): Hidden Advisor or Error Generator?

What did we do?
We conducted a survey completed by 87 participants. The group included our clients, lawyers, mediators, and business leaders, as well as psychologists and people from the fields of science and art. Consequently, this is not a nationally representative sample. It does, however, represent a group with a keen awareness of the nature of conflict: individuals who can critically evaluate their own reasoning.
I designed the survey myself. After it was distributed, some participants offered technical feedback on the questions. I appreciate these comments and will incorporate them in future studies, which I certainly plan to conduct soon.
What was the focus of the AI survey?
The questions concerned general AI usage. We also focused on analyzing family, work, and business situations. This includes the stage before a formal dispute arises.
Furthermore, we asked about analyzing the other party’s intentions. This element is highly susceptible to the fundamental attribution error. Combined with AI hyper-alignment (the model’s tendency to agree with its user), it generates a so-called feedback loop: a confirmation trap that leads to tunnel vision.
Next, I asked about using AI to determine one’s own actions. I deliberately did not specify if this meant the first move or a reaction. However, the most important question concerned transparency. I asked if participants would inform the other side about using AI. Surprisingly, the respondents showed remarkable consistency here. The conclusions from this remain an open question.
The final two questions concerned trust in AI. Although they were similar, the results were intriguing. Trust in AI does not match the assessment of its objectivity. It seems we know AI is not objective, yet we tend to trust it.
How did the participants respond?
Below are the “raw” questions. Each received 87 answers; the results were displayed as charts, which are not reproduced here, so only the answer legends are noted.

Do you use AI LLM models? (yes / no)

Do you use AI LLMs to analyze family, employment, or business situations?

Which of these relationships do you use AI to analyze? Family, work, business, or other?

Do you use AI LLMs to analyze the other party’s intentions? (yes / no)

Do you use AI to design your own moves in a dispute? (yes / no)

Would you inform the other side of the dispute that you are using AI to analyze and predict their behavior or prepare your own moves?

Do you trust artificial intelligence analyses? (yes / no / partly)

Is AI objective?
Analysis of the Responses
The vast majority of participants use AI, though certainly to varying degrees and with different topics of interest. Nevertheless, the widespread use of language models is a fact.
Specifically, 43% of respondents use AI to analyze family, work, or business situations. In my opinion, this is a high number. Interestingly, business analysis was the most common and family situations the least frequent, while 29% of people pointed to other areas. Here is a link to an article on AI in family disputes: [Link]
A significant majority (81%) did not try to determine the other party’s intentions via AI. Is that reassuring? Not entirely: almost 20% do try, meaning every fifth person is vulnerable to the AI feedback loop, in which AI tends to “agree” with the user and reinforce their original bias.
Furthermore, 25% of respondents use AI to plan their moves in a dispute. The discrepancy with the previous question is interesting: perhaps around 5% of us use AI for planning without analyzing the other side’s intentions, although it is unclear whether these are the same participants. This might result from ignoring psychological aspects, or simply from focusing on the “matter” of the case.
Significantly, 80% of us would not inform the other side about using AI. Why is that? Do we consider it unfair, like “technological doping”? Perhaps we view it as a superstition and feel slightly ashamed. Or maybe we believe we have a technological advantage and want to keep this powerful weapon secret.
Do we trust AI?
The last two questions yielded astonishing results. While 75% of us use AI, 67% believe it is not objective. Why then do we use it? Perhaps to reinforce our own beliefs. After all, it feels good when technology says: “That’s a great idea, Andrzej!”. We might simply pretend not to see the lack of objectivity. Alternatively, we may feel that AI is biased, but it is “on our side.” This correlates with the fact that most respondents partially trust AI.
Conclusions
AI has become a common tool that shapes our attitudes in many areas of life. Its influence is certainly felt in the disputes we handle, and, interestingly, we also see it in how parties analyze the structure of a situation. That analysis can feed a desire to change the status quo and may even trigger the dispute itself.
My previous publications on AI in disputes
Below are links to articles regarding AI’s impact on dispute dynamics. I presented my original concept of Coupled Confirmation Bias (CCB) there. These texts include references to the latest research in prestigious journals. They cover psychology, technology, and the role of AI in creating tunnel vision.
I recommend a key article by Yiran Du: Confirmation Bias in Generative AI Chatbots. It analyzes confirmation bias mechanisms in AI models. You can read about the risks of this coupling here: https://arxiv.org/abs/2504.09343
I applied this research to situations where both parties use AI. One party’s actions, determined by their AI, become the input for the other party’s AI, prompting a specific and often escalatory response. This escalation dynamic seems particularly dangerous.
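The coupled dynamic described above can be illustrated with a toy simulation. This is only a sketch under stated assumptions, not a model from the cited research: the GAIN and REACT parameters, the linear update rule, and the one-dimensional “stance” axis are all my own simplifications, chosen to show how mutual amplification widens the gap between the parties.

```python
# Toy model of Coupled Confirmation Bias (CCB): two disputants each consult
# a sycophantic AI that echoes and slightly amplifies its user's stance.
# Each party's next position reacts to the other's last move, so the two
# amplification loops feed each other and the positions diverge.

GAIN = 1.2   # assumed: >1.0 means the AI exaggerates its user's stance
REACT = 0.1  # assumed: how strongly a party hardens against the opponent

def ai_advice(stance: float) -> float:
    """A sycophantic AI: returns the user's stance, amplified."""
    return GAIN * stance

def simulate(rounds: int = 6) -> list[tuple[float, float]]:
    a, b = 1.0, -1.0  # opposing initial positions on a single axis
    history = [(a, b)]
    for _ in range(rounds):
        # Each party hardens in proportion to the distance from the other
        # party's position, then acts on its AI's amplified advice. That
        # move becomes the input the other party (and its AI) reacts to.
        a_next = ai_advice(a + REACT * (a - b))
        b_next = ai_advice(b + REACT * (b - a))
        a, b = a_next, b_next
        history.append((a, b))
    return history

if __name__ == "__main__":
    for i, (a, b) in enumerate(simulate()):
        print(f"round {i}: A={a:+.2f}  B={b:+.2f}  gap={a - b:.2f}")
```

With these assumed parameters the gap between the parties grows every round, which is the escalation pattern the CCB concept warns about; with GAIN below 1.0 (a genuinely moderating advisor), the same loop would instead converge.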
For those interested, here is the link to the English version of my article on CCB: https://jakubieciwspolnicy.pl/en/coupled-confirmation-bias-2/ and the main text in Polish: https://jakubieciwspolnicy.pl/sprzezony-blad-konfirmacji/
Invitation to Cooperation
If you are a party to a dispute, you may need strategic advice. My team and I deal with more than just the law: we conduct negotiations based on psychology, economics, and behavioral analysis.
If you have experiences with AI in disputes, please contact us. We would love to hear your story. It might serve as valuable material for our research. We ensure full anonymity.
Email: kancelaria@jakubieciwspolnicy.pl
Phone: 536 270 935
