A new reflex that reveals discomfort at work
Julien is 28 years old. He is a promising executive, committed and independent. This Monday morning, he opens his computer after a difficult weekend: rumination, fatigue, tension, and stress from a strategic presentation for the executive committee.
But Julien doesn't contact his manager, occupational health, or even a trusted colleague. He talks... to a conversational AI.
A few lines typed in haste. An instant, empathetic, structured response. Temporary relief. Then back to work, as if nothing had happened.
This gesture, once marginal, is becoming widespread behavior in organizations.
According to a study by Oracle x Workplace Intelligence, 82% of employees worldwide say they prefer to talk to robots rather than humans about mental health.
This figure does not mean that AI is better. It means that something is wrong with the human system:
- A lack of listening,
- A lack of reference points,
- Fear of judgment,
- A lack of availability,
- Internal support channels that are poorly identified.
So, what does this trend reveal? What psychosocial risks could AI be masking? And how should companies regulate this new practice?
As an expert in workplace mental health, I offer here an in-depth analysis and a comprehensive set of operational recommendations for HR, QVCT (quality of working life), CSR, and line managers.
In summary:
Why are employees turning to AI to talk about mental health?
Because AI offers immediate, nonjudgmental listening, is available 24/7, and is perceived as less risky than a human conversation within the company.
What does this reveal about the organization?
A lack of credible listening spaces, a weakened managerial culture, low visibility of internal mechanisms, and sometimes a gradual loss of trust.
What are the risks for the company?
Blindness to the real state of the social climate, delays in detecting weak signals, ethical lapses, decisions made on the basis of unreliable data, and confusion between listening and taking charge.
What are the risks for employees?
A lack of therapeutic relationship, no follow-up, no alliance, no supervision, sometimes inappropriate responses, and an illusion of help that can delay real treatment.
What should HR do?
Regulate the use of AI, strengthen human listening resources, make internal channels visible and credible, train managers, and clarify an ethical framework for its use.
Why employees are turning to AI: a psychological phenomenon before a technological one
Figures that challenge HR
The Oracle figure (82%) signals a massive shift: employees are adopting conversational AI as their primary outlet for emotional expression. This is not a "fad." It has become a psychological shortcut.
The real psychological reasons
Employees are turning to artificial intelligence not because it is effective, but because it does not judge.
- Complete anonymity: You can say anything without fear of consequences.
- Immediate response: Suffering never waits for HR to be available.
- Emotional neutrality: No tone, no sigh, no glance.
- Relational avoidance: Many employees no longer dare to expose their vulnerabilities to their peers or managers.
AI is becoming a decompression chamber. But a chamber that leads nowhere if humans don't take over.
The limits of AI in workplace mental health: what a chatbot can never do
Contrary to what some marketing messages suggest, AI cannot take charge of mental health. It can imitate an empathetic response, but nothing more.
Limitation #1: Lack of clinical evaluation
AI does not assess actual risks such as the intensity of distress, duration, signs of collapse, or organizational contextual factors.
Limitation #2: No psychological nuance
She "responds" but does not understand.
Limitation #3: Inability to detect organizational psychosocial risks
Psychosocial risks (PSRs) depend on workload, management, perceived fairness, social support, and the organizational framework.
No chatbot has access to these elements.
Limitation #4: No therapeutic relationship
What is missing: the alliance (a genuine bond of trust between two people), transference (the person projects themselves and feels secure with the therapist), supervision (the therapist shares their analysis of their practice with colleagues), and follow-up, which provides an understanding of progress.
AI may provide immediate relief, but it cannot offer long-term protection.
Risks for employees: the paradox of the "illusion of listening"
AI creates a psychological trap: it provides relief without treatment.
Immediate relief... worsening in the medium term
A harassed employee may receive an empathetic response: "I understand your suffering, you are not alone." He feels better. He doesn't talk to anyone in the company. The company detects nothing. The harassment continues. Three months later: long-term sick leave + burnout.
The risk is not AI, it is the lack of internal feedback.
Gradual isolation
Expression drifts away from the collective.
The employee:
- talks less to the manager,
- cuts himself off from his colleagues,
- avoids face-to-face conflicts,
- takes refuge in AI.
A silent escalation
Without human interaction, weak signals disappear:
- irritability,
- decline in performance,
- unusual behaviors,
- repeated delays,
- avoiding meetings.
The company doesn't see anything... until the crisis hits.

Risks for the company: invisibility, liability, and business impact
The legal obligation remains, even if AI "listens."
According to Article L4121-1 of the French Labor Code, employers must protect the physical and mental health of their employees.
If employees confide their distress to AI instead of using internal mechanisms, this reveals a failure in prevention.
A loss of HR vision: the company becomes blind
When employees confide elsewhere, the company loses what matters most: subtle signals, understanding of the social climate, tensions that arise between teams, managerial difficulties, and situations of overload that should have been detected. This is no longer just an HR issue: it is a real strategic risk for the organization.
Very concrete business consequences
- Mental health-related sick leave: direct costs + disruption.
- Increased turnover: loss of key skills.
- Unresolved latent conflicts: more complex and costly investigations.
- Undetected harassment: major labor court risks.
- Deteriorated atmosphere: measurable decline in productivity.
- Loss of confidence in management: brain drain.
When AI absorbs speech, the company loses control over its prevention.
Priority actions for HR right now
Strengthening spaces for human listening
To reduce the use of AI as a substitute, the company must first make its psychological helpline much more visible and accessible, then communicate regularly via Teams, posters, or newsletters to remind employees that these resources exist and are there to be used. It is also important to demystify the use of internal services by showing that they are neither stigmatizing nor reserved for extreme situations. Sharing anonymous testimonials can help build this trust, as can incorporating real "well-being check-ins" into management meetings to normalize discussions about mental health on a daily basis.
Train managers in identification (primary prevention)
The manager does not become a psychologist. He becomes an observer and a relay to the right resources.
Key skills are simple to articulate but essential to master: knowing how to notice changes in behavior, daring to ask questions without awkwardness, avoiding any form of minimization, and being able to refer people to the right professionals at the right time. This is exactly what PSR training courses provide: they give managers the tools, benchmarks, and methods they need to act appropriately and truly protect their teams.
Create an AI ethics charter
An AI usage charter must clearly define permitted uses, the tool's limitations, individual responsibilities, and the types of data that cannot be collected. It must also specify situations in which AI must redirect users to a human professional. In concrete terms, certain keywords must automatically trigger an alert or a transfer to an occupational psychologist or internal system: for example, "harassment," "I want to disappear," "I can't take it anymore," "overload," or "I want it to stop." These signals require immediate human intervention, which is beyond the legitimate scope of a chatbot.
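As an illustration only, here is a minimal sketch of such a safeguard, assuming a simple phrase filter sits between the employee and the chatbot. The phrase list, function names, and hand-off behavior are hypothetical, not features of any specific tool; a real deployment would need far more robust detection and a defined human escalation path.

```python
# Minimal sketch of the "safeguard" idea: certain phrases bypass the chatbot
# and trigger a hand-off to a human. Phrase list and function names are
# illustrative assumptions, not part of any real chatbot product.

ALERT_PHRASES = (
    "harassment",
    "i want to disappear",
    "i can't take it anymore",
    "overload",
    "i want it to stop",
)

def requires_human_escalation(message: str) -> bool:
    """True when the message contains a signal that must leave the chatbot."""
    text = message.lower()
    return any(phrase in text for phrase in ALERT_PHRASES)

def route_message(message: str) -> str:
    """Decide whether the chatbot may answer or must redirect to a human."""
    if requires_human_escalation(message):
        # The charter requires an immediate human relay here
        # (occupational psychologist, helpline, HR contact).
        return "ESCALATE_TO_HUMAN"
    return "CHATBOT_MAY_RESPOND"

if __name__ == "__main__":
    print(route_message("I can't take it anymore, the workload is crushing me"))
    # -> ESCALATE_TO_HUMAN
```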
Integrate AI INTO prevention (not alongside it)
AI can play an interesting supporting role when properly supervised: it directs employees to the appropriate internal resources, provides psychoeducational content on topics such as stress or mental load, clearly reminds them of the limits of its own role, and, when the situation requires it, encourages them to contact the helpline or a qualified professional.
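To make the "gateway" role concrete, here is a hedged sketch assuming a hypothetical directory of internal resources; the topic labels and resource names are purely illustrative and would need to reflect the company's actual offer.

```python
# Illustrative sketch of AI as a gateway rather than a substitute: the tool
# only points employees to existing internal resources and restates its limits.
# Topic labels and resource names are assumptions, not a real directory.

INTERNAL_RESOURCES = {
    "stress": "Psychological helpline (24/7) and stress-management resources",
    "workload": "Manager check-in and HR business partner",
    "conflict": "Mediation service via HR",
    "harassment": "Occupational psychologist and formal HR reporting channel",
}

def suggest_resource(topic: str) -> str:
    """Return the internal resource for a topic, with a reminder of the AI's limits."""
    resource = INTERNAL_RESOURCES.get(
        topic, "Psychological helpline (default entry point)"
    )
    return (f"Suggested next step: {resource}. "
            "Reminder: this assistant provides orientation only, not care.")

print(suggest_resource("workload"))
```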
HR indicators to monitor for detecting substitute use
Essential QVCT KPIs:
- Absenteeism rate
- Mental health sick leave
- Calls to the helpline
- Turnover
- Declared conflicts
- Social climate index
- Mental load barometer
If these figures deteriorate while the use of AI increases, it is a major warning sign.
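As a rough illustration of that warning sign, the sketch below flags the pattern, assuming the indicators are expressed so that a higher value means deterioration; the metric names, figures, and the two-indicator threshold are arbitrary examples, not validated HR benchmarks.

```python
# Rough sketch of the warning sign described above: QVCT indicators worsen
# while chatbot usage climbs. Metric names, thresholds, and data are
# illustrative assumptions only.

def substitution_warning(kpis_prev: dict, kpis_now: dict,
                         ai_usage_prev: int, ai_usage_now: int) -> bool:
    """Flag when key indicators deteriorate while AI usage increases."""
    deteriorating = sum(
        1 for name in kpis_prev
        if kpis_now.get(name, 0) > kpis_prev[name]  # higher = worse for these KPIs
    )
    ai_rising = ai_usage_now > ai_usage_prev
    return ai_rising and deteriorating >= 2  # threshold chosen arbitrarily

previous = {"absenteeism_rate": 4.1, "mental_health_leave": 12, "turnover": 8.5}
current = {"absenteeism_rate": 5.3, "mental_health_leave": 17, "turnover": 9.0}

if substitution_warning(previous, current, ai_usage_prev=340, ai_usage_now=910):
    print("Warning: AI may be absorbing speech that no longer reaches the company.")
```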
Immediate HR Checklist
To be checked tomorrow:
- Visible internal support channels
- Trained managers
- AI usage charter in place
- Safeguards on strong signals
- Clear communication
- QVCT KPIs monitored
- Automatic messages: "AI = a gateway, not a substitute."
Towards an AI/human balance: the real challenge for HR in 2026
The challenge is twofold. On the one hand, AI can become a technology that "listens" but, paradoxically, cuts off internal communication: it diverts expression away from human support channels, delays alerts, and sets prevention back.
On the other hand, it also represents a real opportunity: it facilitates the first step, unlocks emotional expression, and allows employees to be directed toward human support earlier on.
Everything therefore hinges on balance: making AI a tool for guidance, never a substitute.
Conclusion: Protecting humans by using AI as a revealer, not a substitute
Julien's story is not an isolated case. It is symptomatic of a profound transformation within companies.
When employees choose to confide in AI rather than their organization, it is often a sign of a lack of opportunities to be heard, a fragile managerial culture, internal mechanisms that are not very visible, fears of judgment or consequences, and sometimes even a gradual loss of confidence in the company.
The challenge then becomes twofold: regulating the use of AI while strengthening human listening resources, making internal listening truly accessible and credible, and restoring trust in human relationships. Because while AI can help people speak freely, only human presence can truly protect mental health in the workplace.

