
WHEN DISPUTANTS USE DIFFERENT AI MODELS
I recently introduced the concept of Coupled Confirmation Bias (CCB). Now we must examine how different LLMs affect dispute dynamics. I distinguish two main groups: American and Chinese AI models. This is a simplification, used to present a specific problem, and the distinction is not technological: language models carry tendencies rooted deep in the cultures of their creators.
Cultural AI Models: Types of Language Models
Let us assume two basic cultural AI models: American and Chinese. This division describes communication styles, not technology, because language is inseparable from culture. Language models aim to build and maintain a relationship with the user, and they do so by predicting the next word. That process is not neutral: it is shaped by the cultural values of the developers, by the language the user chooses, and by the user’s native way of thinking.
American and Chinese language models operate within different conceptual grids. This stems from natural semantic differences between the languages, and the models also conduct conversations differently, because interaction in Western and Eastern cultures serves different goals. American culture values individualism, competition, and being right; Chinese culture prizes harmony, collectivism, and politeness.
A central example is the approach to “saving face.” Models with different conversational styles shape how their users perceive a dispute. An LLM is not just a “word machine”; it is a carrier of values. An American AI may promote an adversarial approach to conflict, while a Chinese AI may promote a consensus-based one.
Mutual Perception and Different Language Models
In CCB, each party filters the other’s actions through its own cultural model, and the AI reinforces that filter. This creates a spiral of mutual errors.
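To make the spiral concrete, here is a minimal toy sketch, not drawn from any cited study: each party’s belief about the disputed event sits on a scale from -1 (party B’s framing) to +1 (party A’s framing), and each AI consultation amplifies the user’s current lean by a confirmation gain. The scale, the gain parameter, and the clamping are my own illustrative assumptions.

```python
# Toy model of Coupled Confirmation Bias (CCB): two disputants hold
# beliefs about the same event on a scale from -1 (B's framing) to
# +1 (A's framing). Each round, each party consults an AI that
# partially confirms their current reading, pushing the beliefs
# further apart. All names and parameters are illustrative.

def clamp(x: float) -> float:
    """Keep a belief inside the [-1, +1] interval."""
    return max(-1.0, min(1.0, x))

def ccb_spiral(belief_a: float, belief_b: float,
               gain: float = 0.3, rounds: int = 10) -> list[tuple[float, float]]:
    """Simulate belief divergence under AI-reinforced confirmation."""
    history = [(belief_a, belief_b)]
    for _ in range(rounds):
        # Each AI amplifies its own user's current lean by (1 + gain),
        # i.e. it confirms whatever reading the user already holds.
        belief_a = clamp(belief_a * (1 + gain))
        belief_b = clamp(belief_b * (1 + gain))
        history.append((belief_a, belief_b))
    return history

if __name__ == "__main__":
    for step, (a, b) in enumerate(ccb_spiral(0.2, -0.2)):
        print(f"round {step}: A={a:+.2f}  B={b:+.2f}  gap={a - b:.2f}")
```

Even with a small initial disagreement (A at +0.2, B at -0.2), the gap saturates at its maximum within a few rounds; the shared cognitive space closes.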
Imagine that one party uses an American LLM and the other a Chinese one. Their worldviews will differ, and so will their language, their questions, and their goals. Every language has specific patterns and taboos: some things are obvious, others must remain unsaid. This is the classic distinction between low-context (American) and high-context (Chinese) cultures.
In some cultures, assertiveness and confrontation are values. In others, harmony and hierarchy are more important than being right. Cultural models reinforce the attitudes deemed desirable in those societies.
Users from different cultures will describe the same event differently, and consulting an AI may produce opposite results. This overlaps with the fundamental attribution error and with confirmation bias. We must also consider AI hyper-utility: the tendency of models to be “too helpful,” providing answers that reinforce the user’s false assumptions. The shared cognitive space may quickly disappear.
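The mechanism of hyper-utility (closely related to what AI researchers call sycophancy) can be sketched in a few lines. The word-overlap score below is a deliberately crude stand-in of my own, assuming only that a model tuned for user satisfaction prefers answers that echo the user’s framing; it does not reflect how any real model is trained.

```python
# Toy illustration of AI hyper-utility: if candidate answers are
# ranked by how much they agree with the user's own framing, the
# "most helpful" answer is the one that confirms the user's
# assumptions, true or not. The agreement score is a crude
# word-overlap heuristic chosen only for illustration.

def agreement_score(user_claim: str, response: str) -> float:
    """Fraction of the user's words echoed back in the response."""
    claim_words = set(user_claim.lower().split())
    response_words = set(response.lower().split())
    return len(claim_words & response_words) / max(len(claim_words), 1)

def hyper_utility_pick(user_claim: str, candidates: list[str]) -> str:
    # A model tuned purely for user satisfaction picks the most
    # agreeable candidate, not the most accurate one.
    return max(candidates, key=lambda r: agreement_score(user_claim, r))

if __name__ == "__main__":
    claim = "my opponent ignored the deadline on purpose"
    candidates = [
        "yes, your opponent ignored the deadline on purpose",
        "the delay may have an innocent explanation worth checking",
    ]
    print(hyper_utility_pick(claim, candidates))
```

The selector returns the confirming answer, because it echoes six of the seven words in the user’s claim; the cautious answer, which might defuse the dispute, scores near zero.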
The Critical Point
In CCB, the critical point occurs when interpretations diverge completely. Every reaction from one side is seen as an escalation by the other. Shared cognitive space vanishes faster than in traditional conflicts.
The critical point is a specific moment. One party maintains a mature, relational strategy and tries not to be provoked, while the other escalates repeatedly, violating the first party’s need for harmony and politeness. Eventually the restrained strategy comes to be seen as losing face: passive behavior is reinterpreted as total failure, the strategy turns 180 degrees, and the conflict erupts in an unintended, uncontrolled explosion.
This critical point may differ from the “ping-pong” effect described earlier. Ping-pong can be visualized as a corridor in which “pleasantries” bounce from side to side, gaining momentum with each return. The present model looks more like a triangle whose height grows with each escalation by one party, until the triangle is torn apart by the accumulated pressure.
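The contrast between the two shapes can be expressed as a toy simulation; the momentum factor, the pressure increments, and the breaking threshold below are illustrative assumptions, not measurements.

```python
# Toy contrast between the two escalation geometries.
# Ping-pong: both sides return each exchange with added momentum,
# so visible intensity grows on every bounce.
# Triangle: one side absorbs repeated escalations while pressure
# accumulates silently, until a threshold is crossed and the
# restrained party's strategy flips all at once.

def ping_pong(start: float = 1.0, momentum: float = 1.3,
              bounces: int = 8) -> list[float]:
    intensity, trace = start, []
    for _ in range(bounces):
        intensity *= momentum          # each return comes back harder
        trace.append(round(intensity, 2))
    return trace

def triangle(pressure_per_hit: float = 1.0, threshold: float = 5.0,
             hits: int = 8) -> list[float]:
    pressure, trace = 0.0, []
    for _ in range(hits):
        pressure += pressure_per_hit   # one side escalates, the other absorbs
        if pressure > threshold:       # the critical point: the triangle tears
            trace.append(float("inf")) # uncontrolled explosion, 180-degree turn
            break
        trace.append(0.0)              # visible response stays calm until then
    return trace

if __name__ == "__main__":
    print("ping-pong:", ping_pong())   # smooth, mutual escalation
    print("triangle: ", triangle())    # silence, silence, ... rupture
```

In the ping-pong corridor the escalation is visible and mutual from the first bounce; in the triangle nothing seems to happen until the single, terminal rupture.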
Cultural AI Models: Literature
For more on these issues, read “The Geography of Thought” by R. Nisbett, as well as “Babel” by G. Dorren and “The Age of Unpeace” by M. Leonard. These books were my starting point for thinking about cultural differences in conflict. Of course, nothing replaces T. Schelling’s “The Strategy of Conflict,” one of the most important books ever written.
Evidence that models adopt cultural values can be found in “Cultural Alignment in Large Language Models” (Johnson et al., 2023/2024): https://globalaicultures.github.io/pdf/14_cultural_alignment_in_large_la.pdf
Studies also confirm that Chinese models avoid open conflict more than American ones: see “CultureLLM: Incorporating Cultural Differences into Large Language Models” (https://arxiv.org/abs/2402.10946). Similar conclusions appear in “Values-aligned AI: Comparing Western and Chinese LLMs on Moral Dilemmas” (Liu et al., 2024): https://arxiv.org/html/2506.01495v5
If you want to read more about Coupled Confirmation Bias (CCB), see my previous articles.
