When Psychologists Meet AI: Trust, Bias and Diagnostic Decisions in Triage

I designed and led this research study, later published in Computers in Human Behavior: Artificial Humans (2024). Using a randomised controlled trial with 114 participants, I examined how psychologists interact with AI-assisted triage tools in mental health care. The study revealed confirmation bias: practitioners trusted the AI more when its recommendations matched their own diagnoses. These findings provide the first empirical evidence of this bias in AI-assisted mental health triage and carry key implications for Human–Computer Interaction in healthcare, showing that adoption depends as much on practitioner trust and cognition as on technical accuracy.

Role

Lead Researcher

Timeline

4 months

Tools

Figma, Qualtrics, Stata, GPT-4

Challenge

Mental health services are under growing strain, increasing interest in AI triage assistants to support assessments and reduce practitioner workload. However, the cognitive dynamics of human–AI decision-making in clinical contexts remain underexplored. In particular, no study had empirically tested how psychologists’ diagnostic judgements interact with AI recommendations. Addressing this gap is essential to understand how such systems influence practitioner trust and decision quality in healthcare.

Results

This study was the first to investigate confirmation bias in mental health AI triage. Key findings:

  • Psychologists were significantly more likely to accept AI recommendations that confirmed their own initial diagnosis.

  • Trust in the AI increased when its suggestions aligned with practitioner intuition, and decreased sharply when they diverged.

  • Expertise intensified bias: more experienced practitioners were the least willing to trust AI when it contradicted them.

These findings highlight that keeping humans “in the loop” is essential, but relying on expert intuition alone does not guarantee unbiased outcomes.

+0.9 points

higher acceptance when AI matched practitioner diagnosis

r = 0.75

correlation between trust and acceptance

High self-reported expertise → stronger confirmation bias

Process

Research Design: A between-subjects randomised controlled trial with 114 participants, comprising practising psychologists, trainees, and psychology students with knowledge of assessment and triage.

Stimuli: Three ambiguous clinical case vignettes generated with GPT-4 to reflect diagnostic uncertainty, reviewed and validated by clinical psychologists.

Interfaces: Eighteen Figma prototypes of an AI triage chatbot (“MindAssist”), presenting demographic details, symptom reports, standardised assessments (e.g. PHQ-9), and a preliminary diagnosis, with recommendations varied systematically between congruent and incongruent conditions.

Procedure: The study was administered through Qualtrics. Participants provided an initial diagnosis, reviewed the AI recommendation, and rated how likely they were to accept it and how trustworthy they perceived it to be. Additional measures included self-reported expertise, general attitudes toward AI, and preference for human versus AI input, alongside open-ended qualitative responses.

Analysis: Statistical tests conducted in Stata, including t-tests, regression models, and moderation analyses, focusing on the effects of congruence and the moderating role of self-reported expertise.
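To make the analysis step concrete, here is a minimal Stata sketch of the congruence comparison, the trust–acceptance correlation, and the expertise moderation model described above. The dataset and variable names (triage_trial.dta, acceptance, trust, congruent, expertise) are illustrative assumptions rather than the study's actual code.

    * Minimal illustrative sketch (hypothetical dataset and variable names)
    * acceptance : rated likelihood of accepting the AI recommendation
    * trust      : rated trustworthiness of the AI recommendation
    * congruent  : 1 if the AI recommendation matched the initial diagnosis, 0 otherwise
    * expertise  : self-reported expertise score

    use triage_trial.dta, clear

    * Acceptance in congruent vs incongruent conditions
    ttest acceptance, by(congruent)

    * Association between trust and acceptance
    pwcorr trust acceptance, sig

    * Moderation: does self-reported expertise strengthen the congruence effect?
    regress acceptance i.congruent##c.expertise, vce(robust)
    margins congruent, at(expertise = (1 4 7))    // example expertise levels; actual scale may differ
    marginsplot

The i.congruent##c.expertise term fits both main effects and their interaction in a single model, which is the standard way to test moderation in Stata; the margins and marginsplot calls then show how the congruence effect changes at low, medium, and high self-reported expertise.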

“The research itself was not only academically but also logistically complex. Anya engaged with stakeholders from companies such as Limbic AI and Wysa, as well as clinical psychologists, and demonstrated advanced stakeholder management skills, communicating the importance of research on AI ethics and practitioner bias.”

Dario Krpan

Tenured Associate Professor at LSE

Conclusion

Psychologists do not blindly follow AI recommendations, but confirmation bias strongly shapes trust and acceptance.

  • The study shows that practitioners are not prone to automation bias, but instead selectively trust AI when it confirms their own judgements.

  • This highlights the importance of keeping clinicians ‘in the loop’ while recognising that reliance on professional intuition can introduce its own biases.

  • Findings suggest that AI design must address the risk that biased recommendations will be accepted when they align with existing beliefs.

  • For Human–Computer Interaction in healthcare, this demonstrates that successful AI integration depends on balancing technical accuracy with practitioner trust, explainability, and appropriate reliance.

Looking Forward

  • Clear organisational policies are needed to guide ethical AI adoption in mental health care.

  • Training for practitioners should include not only technical skills but also awareness of cognitive biases in human–AI collaboration.

  • Collaborative AI systems must be designed to actively engage practitioners, supporting trust and critical evaluation rather than passive acceptance.
