Adapting to the Rise in AI Self-Diagnosis Among Patients

Research presented at APNA 2025 warns that unchecked reliance on AI-generated health information can lead to misdiagnosis, misguided treatment decisions, and ethical challenges. Clinicians must promote digital health literacy in practice, encourage co-navigation of AI tools with patients, advocate for ethical AI design that supports rather than replaces care, and train fellow health care professionals to manage AI conversations.
As artificial intelligence (AI) tools become more accessible, more patients are turning to them for self-diagnosis—often with risky results. Research presented at the American Psychiatric Nurses Association 39th Annual Conference (APNA 2025), held from October 15 to 18, 2025, in New Orleans, Louisiana, warns that unchecked reliance on AI-generated health information can lead to misdiagnosis, misguided treatment decisions, and ethical challenges.
The poster highlights the risks associated with AI-driven self-diagnosis, describes strategies to mitigate misdiagnosis, and discusses the ethical implications of AI use in health care.
Patients use AI self-diagnosis tools because the technology offers quick answers, helps overcome barriers to accessing care, appeals to those who distrust health care providers or wish to remain anonymous, and is promoted through marketing and social media, said Kristen Vanderberg, DNP, FNP, PMHNP-BC, of the University of Colorado, Colorado Springs.
Dr Vanderberg conducted a literature review, which demonstrated that unsupervised use of AI not only leads to misdiagnosis but also can delay professional care and increase anxiety among patients. ‘While AI chatbots and symptom checkers can provide general health insights, the accuracy and safety [of results] remain inconsistent without clinical insight,’ Dr Vanderberg said. Misdiagnosis often stems from a lack of clinical context, as AI can generate answers based on an incomplete patient history. AI chatbots also lack clinical nuance, and their answers may not always be derived from guideline-recommended treatment algorithms.
One strategy to mitigate the risk of AI-driven self-diagnosis among patients is to develop strong patient-provider communication. Health care providers who acknowledge a patient’s external research while clarifying the limitations of AI improve trust and adherence among patients.
Dr Vanderberg’s poster outlines specific steps that health care providers can take to address patient concerns arising from AI-generated self-diagnosis:
- Validate the patient’s effort to understand their health
- Ask open-ended questions, such as ‘What have you found online that concerns you?’
- Clarify misconceptions using evidence-based information
- Reinforce the value of clinical evaluation and context
Taking these steps will help ensure patients feel heard and can reinforce trust in the patient-provider relationship, she said.
The responsibility to regulate AI in health care should not rest solely in the hands of health care providers, Dr Vanderberg said; the developers of these programs also have a responsibility to ensure patient safety and the accuracy of information. Increased regulatory oversight and transparency are important in the development of these tools.
Dr Vanderberg calls for health care providers and the public to ‘promote digital health literacy in practice, encourage co-navigation of AI tools with patients, advocate for ethical AI design that supports, not replaces, care, and train healthcare professionals to manage AI conversations.’
References:
Vanderberg K. Proceed with caution: guiding patients away from self-diagnosis in the age of AI. Poster presented at: American Psychiatric Nurses Association 39th Annual Conference (APNA 2025); October 15-18, 2025; New Orleans, LA. Poster 196.