AI in the OR: Anesthesia Without Anesthesiologists 

Written by Gabby Coleman
Edited by Aseel Albokhari

After pre-operative testing and final preparations, one of the last things a patient may do before going into surgery is have a conversation with their anesthesia provider [1]. Anesthesiologists ensure that patients receive the appropriate type of anesthetic and remain safe and comfortable throughout the procedure—but what if some of these decisions were informed not only by human expertise, but also by artificial intelligence? 

Artificial Intelligence (AI) can be understood as a collection of technologies that interpret data inputs and continuously learn from them. In anesthesiology, early efforts focused on developing algorithms that could replicate aspects of human judgment in order to assist clinicians. More recent innovations center on machine learning: systems capable of learning from data in real time to support anesthetic management in the operating room (OR) [2]. Integrating this analytic capacity into the OR allows earlier recognition of trends that may signal patient instability or complications.

Photo from nysora.com. Curated by Kayla Vance (kmv53@cornell.edu)

AI in anesthesiology now spans multiple domains, from monitoring and decision support to postoperative pain management and education. Systems such as Anesthesia Information Management Systems (AIMS) and Smart Anesthesia Monitors (SAMs) collect and analyze patient data to identify patterns associated with risk and to provide real-time decision support. These platforms enable anesthesiologists to act proactively rather than reactively, while improving the accuracy of recordkeeping and data storage. Research suggests that AIMS can also improve clinician adherence to drug protocols, including timely administration of beta blockers and antibiotics, which are essential for maintaining stable circulation and reducing cardiac stress during non-cardiac procedures [3,4]. Commercial systems such as the Philips IntelliVue, now standard in many ORs, are designed to streamline data displays and mitigate alarm fatigue, helping prevent delayed responses to critical changes in patient condition. 

Beyond clinical decision support, researchers are exploring how AI might aid in pharmacological and mechanical anesthetic tasks. Experimental systems are being developed to assist in dose calculations based on patient-specific variables such as age, weight, body mass index, and medical history, potentially reducing the risk of over- or underdosing and improving patient comfort. While many of these automated systems remain in the testing phase, simulation studies on mannequins have shown that AI-driven models can support procedures such as intubation, ventilation, and regional anesthesia. In controlled laboratory settings, some of these models have achieved precision comparable to (or occasionally greater than) human clinicians [2].
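To make the dosing idea concrete, the toy sketch below shows what a rule-based dose estimator might look like. Everything here is an illustrative assumption (the function name, the coefficient, and the age adjustment are invented for this example, not clinical guidance); the point of the AI systems described above is that they learn such relationships from clinical data rather than relying on fixed rules.

```python
# Toy sketch of patient-specific dose estimation.
# All values below are illustrative assumptions, NOT a clinical dosing model.

def estimate_induction_dose(weight_kg: float, age_years: int,
                            base_mg_per_kg: float = 2.0) -> float:
    """Estimate an induction dose (mg) from weight and age.

    Uses a simple weight-based rule with a hypothetical reduction
    for older patients; a real system would fit these relationships
    to patient outcome data instead of hard-coding them.
    """
    dose_mg = base_mg_per_kg * weight_kg
    if age_years >= 65:
        dose_mg *= 0.8  # hypothetical adjustment for older patients
    return round(dose_mg, 1)

# Example: a 70 kg, 40-year-old patient vs. a 70 kg, 70-year-old patient
print(estimate_induction_dose(70, 40))  # 140.0
print(estimate_induction_dose(70, 70))  # 112.0
```

A machine-learning version would replace the fixed coefficient and cutoff with parameters learned from records spanning many patients, which is what lets such systems adapt to individual variation.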

Outside the OR, AI and simulation technologies are reshaping postoperative care and medical training. A 2022 multicenter study led by researchers at Yale, Boston University, and the University of Michigan found that an AI-assisted cognitive behavioral therapy program for chronic pain management produced patient progress comparable to a standard 45-minute therapy session but required only half the time [5]. Though this study addressed pain therapy rather than anesthesia itself, it illustrates AI’s potential to enhance efficiency and accessibility in perioperative care. Other work, such as the randomized controlled trial by Bruppacher et al. (Anesthesiology, 2010), has shown that simulation-based training can improve technical and nontechnical skills and translate more effectively into real-world clinical performance [6]. Together, these studies demonstrate how AI and simulation tools are becoming valuable components of both patient care and physician education. 

Photo from oklahoman.com. Curated by Kayla Vance (kmv53@cornell.edu)

Despite their promise, AI models face important limitations and ethical challenges. Machine learning systems require continuous access to high-quality data to remain accurate, and gaps or biases in that data can lead to unreliable or inequitable results [2,7]. These systems also struggle with complex or unfamiliar clinical situations that fall outside their training data and cannot easily incorporate nonquantifiable factors such as patient anxiety or intraoperative uncertainty. As AI begins to influence more clinical decisions, questions of accountability become critical: who bears responsibility when an algorithm contributes to an adverse event—the manufacturer, the provider, or the institution? Clear regulatory standards and oversight will be essential to address these emerging concerns. 

Though the age of anesthesia without anesthesiologists has yet to arrive, these advances suggest it may lie in the not-too-distant future.


Gabby Coleman ‘29 is in the College of Engineering. She can be reached at ggc38@cornell.edu.

