Guidance systems, similar to those used by rearview cameras, could make learning to perform robotic surgery as simple as backing up a car. That's the basic idea behind research by University Distinguished Professor Jerzy Rozenblit in the University of Arizona department of electrical and computer engineering.
Rozenblit, holder of the Raymond J. Oglethorpe Endowed Chair, will travel to Poland for six months in 2017 as a Fulbright Scholar to collaborate on development of simulation models and devices to train physicians in minimally invasive laparoscopic surgery. He has previously traveled to Poland as a Fulbright Senior Specialist, and is also a former Fulbright Research Fellow.
"The researchers in Poland are strong in control systems, automation, and robotics. They'll help with the image processing, and I'll get to shadow surgeons in the field to gain additional insights into the clinical aspects of my research," said Rozenblit, who has a joint appointment in the UA department of surgery.
Used extensively for gynecological and urological procedures, robotic surgical systems are also becoming the norm for trickier operations involving the brain, neck and spine. In these procedures, the surgeon makes a small incision and inserts a camera attached to a thin metal telescope, or laparoscope. The camera's images are displayed on operating room monitors, which the surgeon watches while manipulating the system's controls and surgical tools.
For all of its advantages, robotic laparoscopic surgery has some serious limitations, among them the loss of three-dimensional depth perception. The surgeon is guided only by the camera's onscreen images and, by extension, the robotic arms moving the tools inside the patient's body. Go too deep, or not deep enough, and a nerve, artery or organ could be seriously damaged. A 2013 study found that human error caused 11 percent of reported cases of patient injury or death during robotic surgeries.
These are "life-critical computing systems," the term coined and used by Rozenblit. "Clearly, if you mess up a payroll, it will cause a lot of inconvenience and annoyances," he said. "But if you mess up a surgery, we're talking about mistakes that are irreversible."
Surgeons must complete extensive training before using robotic and laparoscopic surgical systems. Rozenblit believes that adding a guided simulation model that gradually corrects depth-perception mistakes can improve on current training methods and reduce surgical errors.
The Fulbright award will allow Rozenblit to work with researchers at the Wroclaw University of Technology in Poland to build a training simulation that incorporates both visual and haptic force, or tactile, guidance.
"You have to develop situational awareness, and then the actual motor skills," he explained. "There's no technology on the market like this that uses haptic force guidance. In essence, I take your hand to guide you through the anatomy. Gradually, the guidance is reduced as your experience and skill grow."
One way to develop that skill is to use a distance grid similar to those in rearview camera displays. If a grid can help guide a car safely into a parking spot, why can't it be used to guide a surgeon through a body's anatomy?
"We have a responsibility as scientists and engineers to ensure that the technologies we create are reliable and safe," Rozenblit said. "Such a system would offer unlimited training opportunities without sacrificing patient safety. What we want to know is, can the visual guidance coupled with the forced guidance make you better at performing actual surgery?"
###