AI Aligns With Human Therapists In Motivational Talks

JAMA Network

About The Study: The findings of this study suggest large language models (LLMs) can produce contextually appropriate motivational interviewing-consistent responses, but limitations in coherence and stylistic alignment highlight the need for further validation before clinical use. Motivational interviewing, a structured counseling approach, provides an empirically grounded setting for evaluating alignment between LLM-generated and human therapist responses.

/Public Release. This material from the originating organization/author(s) may be of a point-in-time nature and edited for clarity, style, and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).