OpenAI Launches ChatGPT Health, Transforming Patient Care

OpenAI has launched ChatGPT Health, a specialized feature designed to assist users seeking health and wellness information. This initiative comes in response to the significant demand for health-related queries, with over 40 million individuals reportedly using ChatGPT daily for such inquiries. To delve into the implications of this new tool, Northwestern Now spoke with Dr. David Liebovitz, co-director of the Institute for Artificial Intelligence in Medicine’s Center for Medical Education in Data Science and Digital Health at Northwestern University Feinberg School of Medicine.

Dr. Liebovitz, who has decades of experience in clinical informatics, highlighted the importance of guiding patients in their interactions with AI. “The question isn’t whether patients will use AI for health information,” he stated, “but rather how we can help them do so effectively and safely.” He emphasized the necessity for appropriate guidelines and realistic expectations regarding the capabilities of such tools.

Empowering Patients with AI Insights

The introduction of ChatGPT Health aligns with the 21st Century Cures Act, which mandates that healthcare systems grant patients complete access to their medical records. This access is facilitated through standardized application programming interfaces (APIs), such as those based on the HL7 FHIR standard, which electronic health record vendors like Epic are now required to implement. Dr. Liebovitz noted that AI tools can assist patients in deciphering this data at minimal cost.
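As a rough illustration of what that API access can look like, the sketch below pulls a patient's laboratory results from a FHIR server. The base URL, patient ID, and access token are placeholders; in practice an app would obtain them through the patient portal's authorization flow, and details vary by health system.

```python
import requests

# Placeholder values: a real app obtains these from the patient portal's
# authorization flow (these are not real endpoints or credentials).
FHIR_BASE = "https://fhir.example-health-system.org/api/FHIR/R4"
PATIENT_ID = "example-patient-id"
ACCESS_TOKEN = "example-oauth-token"

def fetch_lab_results(base_url: str, patient_id: str, token: str) -> list[dict]:
    """Download the patient's laboratory Observation resources."""
    resp = requests.get(
        f"{base_url}/Observation",
        params={"patient": patient_id, "category": "laboratory"},
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/fhir+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()  # the server returns a FHIR Bundle resource
    return [entry["resource"] for entry in bundle.get("entry", [])]

if __name__ == "__main__":
    for obs in fetch_lab_results(FHIR_BASE, PATIENT_ID, ACCESS_TOKEN):
        code = obs.get("code", {}).get("text", "unknown test")
        value = obs.get("valueQuantity", {})
        print(code, value.get("value"), value.get("unit"))
```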

He explained, “A patient can download their records using these APIs, run them through an AI model on their phone, and receive personalized insights without their data ever touching a third-party server.” This approach promises true democratization of health AI, enabling patients to interpret lab results, prepare questions for medical appointments, and identify care gaps without incurring subscription fees or compromising privacy.
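Continuing the sketch above, the on-device half of that workflow might look like the following: the downloaded results are handed to a small language model running locally, so the records never leave the phone or laptop. The example assumes llama-cpp-python as one of several possible local inference runtimes; the model file and prompt are illustrative, not a recommendation.

```python
import json
from llama_cpp import Llama  # pip install llama-cpp-python; inference runs on-device

# Hypothetical local model file; any small instruction-tuned GGUF model would do.
MODEL_PATH = "models/local-health-assistant.Q4_K_M.gguf"

def summarize_locally(observations: list[dict]) -> str:
    """Ask a locally running model to explain lab results in plain language.
    No network calls are made; the records stay on the device."""
    llm = Llama(model_path=MODEL_PATH, n_ctx=4096, verbose=False)
    prompt = (
        "Explain these lab results in plain language and list questions "
        "the patient might ask their clinician:\n"
        + json.dumps(observations, indent=2)
    )
    result = llm.create_chat_completion(
        messages=[{"role": "user", "content": prompt}],
        max_tokens=512,
    )
    return result["choices"][0]["message"]["content"]
```

The key property of this design is that the only thing produced off the raw records is the plain-language summary shown to the patient; nothing is uploaded to a third-party server.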

The Future of Patient Care and Privacy Concerns

Despite these benefits, Dr. Liebovitz raised concerns about patient privacy when using ChatGPT Health. Health data shared with the AI is not protected by the U.S. Health Insurance Portability and Accountability Act (HIPAA), which applies to healthcare providers and insurers rather than to consumer chatbots. Unlike conversations with healthcare providers, these interactions carry no legal privilege, so the data could potentially be subpoenaed or otherwise obtained through legal process. That risk is particularly concerning for sensitive matters such as reproductive or mental health.

To address these privacy concerns, Dr. Liebovitz advocates local AI processing. “Modern smartphones possess the processing capabilities necessary to run AI models directly on the device,” he explained. Keeping the model on the phone means sensitive health data never has to leave it, avoiding the risks that come with cloud storage and third-party servers. He cited Apple’s advances in on-device AI as validation of the approach, suggesting that within a few years patients could have sophisticated health assistants on their devices analyzing their medical records with complete privacy.

Dr. Liebovitz’s research group is actively exploring ways to make this vision a reality for the public. With the technical infrastructure for standardized health records now in place and mobile hardware growing steadily more powerful, the goal is to give patients meaningful second opinions on their health data while keeping them in full control of their information.

This initiative by OpenAI marks a significant step towards integrating AI into healthcare, promising improved accessibility and understanding of medical data for patients. As technology evolves, the potential for AI to enhance the patient experience while safeguarding privacy continues to grow, paving the way for a future where individuals can navigate their health journeys with confidence.