Artificial intelligence (AI) is increasingly being integrated into surgical procedures, and reports of patient injuries linked to these tools are raising concerns. Investigations and several lawsuits are prompting medical experts to critically evaluate the role of AI in operating rooms, where these technologies are primarily used to assist human surgeons rather than to perform surgery independently.
According to a recent report by Reuters, the FDA has authorized at least 1,357 AI-integrated medical devices—double the number approved through 2022. Among these is the TruDi Navigation System, developed by Johnson & Johnson, which employs machine-learning algorithms to aid ear, nose, and throat specialists during operations. Other AI-assisted devices focus on enhancing visual capabilities in various surgical contexts, addressing challenges such as smoke obscuring the surgical field and difficulties in distinguishing critical anatomical structures.
Despite these advancements, a rising number of allegations and lawsuits claim that several AI tools have harmed patients. Notably, the TruDi system has faced scrutiny: the FDA has received "unconfirmed reports of at least 100 malfunctions and adverse events." Many of these incidents involved the AI providing inaccurate information about the positions of surgical instruments inside patients' bodies. In one instance, this led to cerebrospinal fluid leaking from a patient's nose; in another, a surgeon inadvertently punctured the base of a patient's skull.
Further complicating matters, two additional cases reportedly resulted in strokes caused by injuries to major arteries. In one of these instances, the plaintiff alleged that TruDi's AI misled the surgeon, resulting in injury to a carotid artery and the subsequent formation of a blood clot. Because the FDA's reports on device malfunctions do not establish the causes of medical mishaps, the extent to which AI contributed to these incidents remains unclear.
The concerns are not limited to the TruDi system. Sonio Detect, a device that uses AI to analyze prenatal images, has been accused of relying on a faulty algorithm that misidentifies fetal structures. Additionally, Medtronic, a manufacturer of AI-assisted heart monitors, has faced allegations that its devices failed to detect abnormal heart rhythms or pauses in patients.
Research published in JAMA Health Forum indicates that at least 60 AI-assisted medical devices have been linked to 182 FDA product recalls. Alarmingly, approximately 43% of these recalls occurred within the first 12 months after the devices received FDA approval. Such statistics raise questions about the adequacy of the FDA's approval process, which may overlook early performance failures in AI technologies.
To address these challenges, experts suggest that strengthening premarket clinical testing requirements and enhancing postmarket surveillance could help identify and mitigate device errors more effectively. As the use of AI in surgery continues to grow, the medical community faces a critical juncture regarding patient safety and the efficacy of these innovative technologies.
