
AI in Family Disputes: My Recent Experiences and a New Reality
In my recent posts, I discussed AI’s role in shareholder disputes. Today, I want to focus on a very sensitive area: family law matters. I will describe cases from my own practice, all of which occurred recently, in December 2025. As a conflict specialist, I find these examples particularly interesting. I am observing AI’s growing role in creating and escalating disputes, a trend that is especially visible in family-related cases.
AI in Family Matters and Its Impact on Human Behavior
AI in family cases is no longer science fiction. It is a reality. I increasingly notice AI’s strong influence on family disputes. In this post, I will share cases from recent weeks.
I have changed all details to protect the privacy of those involved. No one can be identified from these descriptions.
I spoke with each person several times, both in person and online. I am describing a very new phenomenon.
Its consequences for family life and mental well-being are still unknown. This applies to individuals, families, and entire communities.
I avoid making judgments in this text. I prefer to leave the evaluation to you.
CASE ONE: CHILD CONTACT AFTER DIVORCE
A client recently contacted me after his divorce. I did not represent him in those proceedings. He is a successful, intelligent man in his thirties.
He needed urgent advice regarding child contact. Two weeks earlier, he agreed to a specific schedule. It included set weekdays and every second weekend.
Crucially, the former couple still lived together. His ex-wife planned to go out for the weekend. She expected him to care for the children as agreed.
The client asked an AI model about his obligations. He wanted to know if the schedule was a duty or just a right. He viewed it as a “safeguard” against his ex-wife.
The AI confirmed his incorrect belief. It stated he had no obligation to care for the children. This led to a violent argument between the parents.
A lawyer would have explained these basic rules to the client. The AI, however, provided incorrect legal advice and reinforced the client’s bias.
This is a perfect example of coupled confirmation bias. The user described only a fragment of reality. The AI then validated his mistaken views. This led to immediate confrontation and escalation.
CASE TWO: AI AS A HIRED PSYCHOANALYST
This client is a highly educated, high-earning woman in her thirties. While preparing for her case, she asked an AI for a psychological profile of her husband.
She described him entirely from her own perspective. The AI replied that it was not a psychologist. However, it still offered to help.
The model assigned numerous disorders to the husband. It labeled him as emotionally immature, narcissistic, and psychopathic.
Based on this AI chat, the client lost all trust. She decided it was unsafe to leave the child with her husband and was prepared to cut off his contact with their baby.
I suggested she consult a real psychologist. She refused, claiming the AI had already answered everything. She was now ready to block all contact between father and son.
The problem is that she treated a chatbot conversation as a clinical diagnosis. It was based on a one-sided description. AI cannot replace a licensed psychiatrist or psychologist.
A professional must examine the patient and use proper tests. They cannot rely solely on the report of an involved party. I declined to take this case.
CASE THREE: AI AS A CONVERSATION ANALYST
Two parents came to me for mediation. Initially, their cooperation was good. They communicated mostly via long WhatsApp messages.
Soon, they became deeply suspicious of each other. They looked for hidden motives and secret plans in every text. Both believed the other was using the children as tools.
It turned out that both parents were using AI. They pasted received messages into the chat for detailed analysis. The AI then helped them draft “strategic” replies.
The other party would then analyze those AI-generated replies using their own model. This created a spiral of suspicion. Both parties began to lose touch with reality. They were ready for radical, harmful steps.
AI in Family Matters – Conclusions
These three examples from my practice show that we tend to:
- Search for confirmation of our initial assumptions.
- Strengthen our beliefs after receiving such confirmation.
- Prefer AI as a source of validation because it is faster, cheaper, and “politer.”
AI often carries an aura of omniscience. This makes it seem more attractive than a lawyer or a psychologist.
This leads to tunnel vision. People become ready to escalate disputes quickly and without deep thought. Our prejudices grow stronger. We believe we have received confirmation from “reliable” technology.
The long-term effects of this fascination with AI are hard to predict. But the problem is not the technology itself. The issue is that many people cannot critically evaluate AI responses.
AI also intensifies the “fundamental attribution error.” I have written about this here: [LINK]
Clients often come to my office with ready-made “solutions.” They expect me to simply implement them. I have discussed this trend since early 2025 here: [LINK]
You can read more about tunnel vision and coupled confirmation bias here: [LINK]
Scientific Research: AI and Cognitive Biases
My observations are supported by scientific data. AI increasingly influences human beliefs, choices, and behaviors.
I highly recommend an article published last year: How human–AI feedback loops alter human perceptual, emotional and social judgements (Nature Human Behaviour). The authors show that AI validation strengthens our perceptions and social judgments. Read it here: https://www.nature.com/articles/s41562-024-02077-2
You should also read Yiran Du’s work: Confirmation Bias in Generative AI Chatbots. It analyzes confirmation bias mechanisms in AI models and the associated risks: https://arxiv.org/abs/2504.09343
Another important text covers tunnel vision: Bias in the Loop: How Humans Evaluate AI‑Generated Suggestions. The experiments show that users accept wrong AI suggestions if they fit their prior beliefs: https://arxiv.org/pdf/2509.08514
Finally, here is an analysis from Stanford University. It examines AI “hallucinations” and their impact on decision-making: https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries
Contact Me
If you need a lawyer who specializes in dispute resolution—including family law—please reach out:
📩 kancelaria@jakubieciwspolnicy.pl 📞 536 270 935
I am here to help you!
