Virtual, augmented, and mixed reality applications in orthopaedic surgery
Abstract
Innovation in computer-assisted surgery (CAS) and surgical training aims at increasing operative accuracy and improving patient safety by decreasing procedure-related complications. CAS has been shown to improve surgical outcomes by providing more accurate measurements and enhanced spatial position feedback of surgical tools relative to anatomical features and targets. The application of reality technologies, including virtual reality (VR), augmented reality (AR), and mixed reality (MR) to computer-based surgical systems, has begun to revolutionize orthopaedic training and practice. Trainees now have authentic and highly interactive operative simulations without the need for supervision. The practicing surgeon is better able (i) to pre-operatively plan and intra-operatively navigate without the use of fluoroscopy, (ii) to gain access to three-dimensional reconstructions of patient imaging within view of the surgical field, and (iii) to remotely interact with colleagues located outside the theatre. Virtual reality techniques have been previously reviewed for the training of surgical residents. However, there have not been any reviews exploring the applications for virtual, augmented, and mixed reality in orthopaedic practice and training. This review provides a current and comprehensive examination of the reality technologies and their applications in orthopaedics.
Introduction
The exponential growth of computing power and imaging technology over the last century has revolutionized the global health system and drastically improved patient care from diagnosis to therapy. Emerging computer-driven reality technologies, including virtual, augmented, and mixed reality technologies, have the potential to further improve patient care and are already becoming important in several primary sectors of healthcare: resident training, patient education, preoperative and intraoperative visualization, applications in psychology and psychiatry, telesurgery and telehealth, and athletic training. Due to the potential application of these technologies, the global market for VR and AR healthcare technologies is estimated to be worth $5.1 billion by 2025, second only to the VR and AR gaming market (Touchstone Research).
Given the increasing age of our population and forecast increased need for orthopaedic therapies, tools to increase efficiency, improve outcomes, and reduce costs are essential. Orthopaedics is receptive to the application of reality technologies. Currently, reality technologies are being applied to orthopaedic CAS systems and training simulators to increase surgical accuracy, improve outcomes, and reduce complications (Vaughan et al. 2018, Vavrà et al. 2017, Wu et al. 2018). These same technologies can be used for prosthetic sizing/placement (Foutouhi 2018), remote surgery (Loescher et al. 2014, Ponce et al. 2014), phantom limb pain therapy (Ortiz-Catalan et al. 2014), physical therapy (Darter & Wilken 2011), joint injection (Agten et al. 2018), mobile app-based education (Jain et al. 2017, Eckhoff 2015), and research applications.
The following survey examines the emerging computer-driven reality technologies, the current status of reality technology application to orthopaedic training and surgery, and potential future applications in orthopaedics.
Methodology
For this review of the literature, related published reports were found via searches of PubMed using the following subject terminology: “virtual reality” or “augmented reality” or “mixed reality” with “orthopaedics” or “orthopaedic surgery.” In total, these searches returned 141 published articles. Abstracts of all 141 articles were screened for relevance to the study and unrelated publications were excluded. Seventy-five articles remained for final review.
Historical Context
All surgical procedures require the surgeon to visualize a three-dimensional arrangement of anatomical structures relative to the actual anatomical target. The surgeon must constantly confirm the precise location of the target relative to their surgical instruments and how to reach the target safely by avoiding critical structures. While still a relatively recent development, surgical navigation systems are important measurement and verification tools used by surgeons to plan and evaluate their actions by providing this spatial positioning information (Mezger et al. 2013). These navigation systems can also aid in the proper placement of a prosthetic device. The advent of more advanced navigation technologies has its origin primarily in medical imaging and stereotaxy techniques.
The first use of image-based technology in medicine began with Wilhelm Röntgen in 1895 with the first ever production and detection of X-rays. Harnessing the X-ray ushered in new possibilities for medical diagnosis and treatment, allowing physicians to observe anatomy and disease processes without making an incision. X-rays were initially used to locate bullets in the extremities of gunshot wound victims (Holmes 2018). The major disadvantages of plain radiographs are that they display little soft-tissue detail and that the images are two-dimensional. The subsequent development of ultrasound, CT, and MRI, along with other imaging techniques, has allowed physicians to use two-dimensional imaging and three-dimensional reconstructions in the diagnosis and treatment of illness. Additionally, CT and MRI have enabled much better visualization of soft tissue. Finally, increased computer processing power has enabled extensive manipulation of the CT- and MRI-acquired datasets. These medical imaging technologies are a critical prerequisite for surgical planning and navigation. With further imaging technology advancements, it may soon be possible to merge functional and anatomical imaging (fMRI and SPECT/CT).
Stereotaxy is a neurosurgical technique that allows for the precise localization of intracranial structures for the placement of devices, such as electrodes, needles, or catheters. Initially, stereotaxy was carried out using an anatomical atlas for preoperative planning. A mechanical head frame was appropriately positioned to reflect the preoperative plan and anchored to the patient’s skull intraoperatively. With the surgical trajectory defined by the mechanical frame, only a burr hole was needed to enter the intracranial space. Devices could then be advanced into the burr hole, causing minimal damage to intracranial structures. While stereotactic procedures were less invasive, they severely limited the surgeon’s field of view (Holmes 2018, Enchev 2009).
In the 1990s, David Roberts first developed frameless stereotaxy for neurosurgical procedures to overcome the limitations of frame-based stereotaxy (Enchev 2009). Frameless stereotaxy constantly updates the surgeon with spatial positioning information, superimposing the “real-time” location of the instruments in the operative field onto the preoperative imaging data, usually CT or MRI (Enchev 2009). These frameless stereotactic systems make use of several cameras which emit infrared light. The infrared light reflects from targets of interest and returns to the camera to determine spatial position in real time. Reflective markers (often called fiducial markers) are placed on surgical tools or the patient so that they can be tracked. At least three of these markers are needed to determine the position and orientation of an object. Additional, even redundant, fiducial markers may be needed in case some become obstructed during an operation. The cameras targeting the markers are positioned at different angles to produce a stereoscopic image (an image with depth), which is displayed on a computer display. Navigational software is needed to process the sensor input and to output the spatial information as an image on the display (Mezger et al. 2013).
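To make the optical-tracking geometry concrete, the position of a single reflective marker seen by two calibrated cameras can be recovered by ray triangulation. The sketch below (Python/NumPy, purely illustrative and not taken from any commercial navigation system; the camera centres and ray directions are assumed to be known from calibration) uses the midpoint method:

```python
import numpy as np

def triangulate_marker(c1, d1, c2, d2):
    """Midpoint triangulation of one reflective marker.

    c1, c2 : 3-D centres of the two tracking cameras.
    d1, d2 : ray directions from each camera toward the marker
             (derived from the 2-D detections plus camera calibration).
    Returns the point midway between the closest points on the two rays.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Closest points: p1 = c1 + s*d1, p2 = c2 + t*d2, with (p1 - p2)
    # perpendicular to both rays; this yields a 2x2 linear system.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(c2 - c1) @ d1,
                  (c2 - c1) @ d2])
    s, t = np.linalg.solve(A, b)
    return ((c1 + s * d1) + (c2 + t * d2)) / 2.0
```

With noiseless, intersecting rays the midpoint coincides with the true marker position; with real detections the rays are slightly skew and the midpoint is a compromise between them. Triangulating three or more such markers is what then allows the full position and orientation of a tool to be recovered.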
More recently, the advent of computer-assisted surgery (CAS) has continued to advance surgical procedures, increasing accuracy, improving efficiency, and reducing complications and patient recovery time. CAS refers to a broad set of surgical methods that make use of computer-based technologies for guiding and performing surgical procedures and includes many different types of robotic and image-guided surgery (Zheng & Nolte 2015, Koh et al. 2008). CAS devices in orthopaedic surgery have been shown to reduce the variability in implant placement and improve patient outcomes; however, their efficacy in these regards is highly dependent on the surgeon’s understanding of the device (Langlotz 2004). The most modern additions to the CAS device family include the reality technologies of VR, AR, and MR.
Basic Principles of Reality Technologies
Virtual reality (VR) is a technology that immerses the user in a completely artificial, computer-generated environment. Most standalone VR systems are headset devices that work in combination with smartphones. However, the most advanced VR experiences are capable of providing freedom of movement through an artificial environment. Beyond the visual aspects of VR, the computer may also generate artificial sounds and other stimuli. There are also special hand-operated controllers that can be used to enhance the VR experience (Mabrey et al. 2010, Vaughan et al. 2013). While VR technology is still in its infancy, it has had the greatest impact on the training of surgical residents. The VR environment allows trainees to interact with an artificial environment to practice operative techniques and hone procedural skills (Blyth et al. 2006). In addition, VR can support patient-specific preoperative simulation (Petterson et al. 2015, Willaert et al. 2012).
In contrast to virtual reality systems in which the user interacts with a completely artificial environment, augmented reality (AR) technology provides a computer-generated overlay onto real world surfaces, providing the user stereoscopic visualization. AR has the potential to increase precision during surgical procedures and facilitate better placement of instruments, guides, and devices (Nikou et al., 2000). Additionally, AR is more intuitive to use than other forms of surgical navigation (Blackwell et al. 1998). The ideal manifestation of AR technology would be centered on mobile devices or head-mounted displays, allowing the surgeon to visualize the anatomical target within the field of view rather than using an alternative screen. With special AR headsets that are already on the market, such as Google Glass or Microsoft HoloLens, digital content is displayed in real time on a small screen in front of a user’s eye (Microsoft). This type of mobile AR is rapidly becoming prevalent across all industries as smartphone-based AR systems advance with respect to image registration and tracking (Okamoto et al., 2014). The continuous integration of real and virtual worlds in AR provides surgeons with “X-ray vision” of important anatomical structures and surgical targets without the need for repeated use of ionizing radiation.
Mixed reality (MR) is the most recent development in reality technology. Similar to AR, an MR system produces a stereoscopic image, created by combining the three-dimensional virtual model produced from preoperative radiologic images (CT or MRI) with a real world surface. With MR devices, virtual objects are not simply projected on real world surfaces as in AR. Instead, the technology actually allows the MR user to interact with both the real world and the digital content which is added to it. Cameras work in concert with the user’s eyes and allow the user to manipulate the three-dimensional image (“hologram”) from their point of view (Muelstee et al. 2018).
Image registration is a critical process for proper use of the reality technologies, in which the feature points from patient imaging data must be mapped onto the corresponding feature points of the actual patient anatomy within the same spatial coordinate system (Maintz et al. 1998). The registration process ensures that the imaging data is presented to the viewer in a way that accurately matches the orientation and position of the patient’s anatomical structures. It is essential that the registration process does not interrupt or hinder operative workflow. In particular, the registration must be highly accurate, require low computation time, and, most importantly, cause no detrimental side effects to the patient. As with frameless stereotaxy navigation, multiple fiducial markers are required to combine the CT/MRI imaging coordinate system with the physical coordinate system (Mezger et al. 2013). Real-time tracking of these fiducial markers in surgery currently relies on image-based, optical, or electromagnetic tracking methods. To superimpose the three-dimensional image with accurate spatial position, both AR and MR use an optical tracking system. The optical tracking system matches patient anatomy to imaging data using a marker-based or markerless registration method. Marker-based registration requires placement of fiducial markers on anatomical targets, while markerless registration can target prominent anatomical features (Ma et al. 2018).
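The marker-based step described above is, at its core, a paired-point rigid registration. A minimal illustrative sketch (not any vendor's actual algorithm) uses the SVD-based Kabsch method to map fiducial positions from image coordinates into physical coordinates, and reports the fiducial registration error (FRE) as a sanity check:

```python
import numpy as np

def register_fiducials(img_pts, phys_pts):
    """Paired-point rigid registration (Kabsch/SVD method).

    img_pts  : (N, 3) fiducial positions in CT/MRI image coordinates.
    phys_pts : (N, 3) the same fiducials measured in physical space.
    Returns rotation R and translation t with phys ~= img @ R.T + t,
    plus the RMS fiducial registration error (FRE).
    """
    img_c, phys_c = img_pts.mean(axis=0), phys_pts.mean(axis=0)
    H = (img_pts - img_c).T @ (phys_pts - phys_c)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = phys_c - R @ img_c
    # RMS residual at the fiducials: a standard check before
    # trusting the overlay during the procedure.
    residuals = phys_pts - (img_pts @ R.T + t)
    fre = np.sqrt((residuals ** 2).sum(axis=1).mean())
    return R, t, fre
```

In practice a system would reject a registration whose FRE exceeds some clinically acceptable threshold and prompt the user to re-acquire the fiducials, since an inaccurate transform would misplace every subsequent overlay.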
Applications in Orthopaedic Surgery
Virtual Reality (VR)
Mastery of all the skills required to reach full competence as an orthopaedist is no longer feasible strictly in the clinical environment. Rising costs of training residents combined with decreasing resident work hours and ethical concerns regarding patient care demand that trainees acquire skills outside of the operating room (Vaughan et al. 2016, Ahktar et al. 2015). VR is able to fill these gaps in the modern residency program by providing an alternative venue for procedural training. The technology is capable of creating an authentic and highly interactive operative simulation with options for recording and analysis of performance. In addition, these simulators are readily available for use without the need for an expert trainer to guide and monitor the trainee. In a study testing the MIST-VR laparoscopic simulator, residents trained with VR were shown to perform surgery much faster, whereas residents without access to VR were more likely to cause inadvertent injury (Seymour et al. 2002). Similar outcomes were found using shoulder and knee arthroscopy simulators (Cannon et al. 2014, Pedowitz et al. 2012). While there is substantial evidence that training on VR simulators improves operative technical skills, its use in orthopaedic training programs lags behind other surgical specialties (Aïm et al. 2015). Currently, training simulators are only available at a few residency programs.
Although VR is the most developed and validated of the reality technologies, its use is currently limited primarily to education and preoperative planning because it immerses the user in a completely computer-generated environment without view of the real world. There is a vast array of VR training hardware and software on the market for both open and arthroscopic procedures, although simulators are more frequently available for arthroscopy techniques (Vaughn et al., 2016). VR simulators have been shown to support resident learning more effectively than benchtop or synthetic trainers (Blythe et al. 2006, Middleton et al. 2017, Leblanc et al. 2013).
Most of the currently available VR training systems employ the 3D Systems Geomagic Touch X haptic device, which allows the user to feel on-screen computer-generated surgical instruments and anatomy by applying force feedback to the user’s hand. The Geomagic Touch X is even capable of providing low-frequency vibration feedback (up to 1 kHz) to simulate drilling forces and torques (3D Systems).
Many of the 3D Systems-based simulators are used for training and preoperative planning, particularly for trauma and arthroplasty scenarios. Tsai et al. (2007) describe a surgical simulator for hip drilling with screw and plate placement for trochanteric hip fracture using the Geomagic Touch X device. A similar simulator was described by Vankipuram et al. (2010) to provide the user with realistic drilling simulation; the software was capable of differentiating skill levels between novices, residents, and expert surgeons. Linköping University developed a hip fracture fixation simulator in 2012 that utilizes patient-specific virtual reconstructions from CT data, allowing the user to practice positioning of nail implants into the femur for preoperative planning. The user can preview multiple approaches, with haptic feedback through bone of varying density, to find the optimal path for the pilot drill.
In addition to general training and preoperative planning for open surgeries, several VR arthroscopy simulators have been developed and tested with the Geomagic Touch X device. The knee arthroscopy surgical trainer (KAST) was developed by the AAOS and allows users to virtually complete the steps required for knee arthroscopy. The KAST system provides haptic feedback based on simulated cartilage and tendon, with software based on the Visible Human Project. Another arthroscopic training simulator—the Simbionix Arthromentor—uses a pair of Geomagic Touch haptic feedback devices to mimic surgical tools, and a computer-generated endoscopic image with virtual surgical tools is displayed to the user. The Arthromentor simulator was the subject of a trial with 26 physicians to test reliability and validity, which showed significant differences in time and operative accuracy across four medical procedures between 13 novices and 13 experts (3D Systems).
A few VR simulator systems for training have been developed utilizing technology different from the Geomagic Touch X hardware, such as the Haptic Orthopaedic Training (HOOT) simulator. Developed by Imperial College London, this simulator provides training for hip surgery drilling and guide wire placement in the dynamic hip screw procedure. Another simulator, the BoneDoc dynamic hip screw simulator, provides training for screw and plate fixation of hip fractures (Blyth et al. 2008). Finally, Johns (2014) developed a fluoroscopy-based wire navigation training simulator for hip fractures, which makes use of a real drill and guide wire instead of a haptic feedback device and has been found to be an appropriate measure of surgical skill for wire navigation of the proximal femur.
Within the prosthetic development and placement arena, Jun and Park developed a three-dimensional pre-operative hip implant planning program, which simulates femoral head resection and implantation of a patient-specific prosthesis created from CT data. HipNav, a similar program developed in 1995 by Carnegie Mellon University, has been the most complete THA planner since 2002. It enables preoperative optimization of the position of the acetabular component within the pelvis using CT data. HipNav also includes kinematic hip joint models and tools for predicting post-operative range of motion and functionality. Based on these data, the software provides feedback to help the surgeon determine the positioning of the prosthesis. The results from the HipNav simulation can then be transferred to a computer workstation in the operating room for in-vivo surgical navigation. The HipNav platform was the first of the virtual reality simulators to be used as an intraoperative aid (DiGioia et al. 2000).
Augmented Reality (AR)
While VR technology is used almost exclusively for preoperative planning and training, augmented reality technology has the ability to transform the workflow and precision in the operating room. AR technology combines digital images and preoperative planning information, giving the surgeon an overlay experience of the patient’s individual anatomy without the use of ionizing radiation. The preoperative imaging data from CT or MRI can be combined with further visual data, including the location of incisions or drill points, that are displayed in the correct spatial alignment on the surface of the patient. In much the same way as frameless stereotaxy navigational systems, AR technology projects the graphics onto real world surfaces by using fiducial markers that the camera tracks. In this manner, AR could provide more precise navigational assistance than traditional CAS systems, while remaining more intuitive and less costly. In addition, AR technology allows for the visualization of anatomy that may be obscured or may not typically be exposed during a surgical procedure. Although previous use of AR has been somewhat limited by low resolution and incompatibility of AR hardware within the sterile environment, recent improvements in AR systems have made them practical for intraoperative data visualization (Okamoto et al., 2014).
The technologies capable of AR available today are diverse, from handheld devices, such as tablets and smartphones, to specialized head-mounted displays (HMDs). These HMD devices, which have the most significant potential in orthopaedic surgery, can be grouped into three categories. The first category comprises non-tethered headsets designed from the ground up, with all computer processing on board, such as the Microsoft HoloLens. The second category comprises tethered devices that rely on an external computing source, such as the Meta 2. The advantage of the tethered device is that all the computing power is placed externally, allowing for larger processing capabilities and fewer space constraints. However, cabling and compatibility between the external computer and headset could lead to challenges with the OR workflow. Both of these categories of AR devices make use of an optical see-through (OST) display (Zhang et al. 2015, Dahler et al. 2017), which allows the user to see through a lens with a reflective property that superimposes the augmented graphics. Accurately superimposing the computer images onto the actual patient is accomplished with a set of fiducial markers or an image marker that the camera can track. The third and final category of HMD devices comprises video see-through (VST) displays, which make use of a smartphone or tablet (Dahler et al. 2017). Smartphones and tablets are widely employed for VR purposes, such as the Google Cardboard or Samsung Gear VR platforms (Holmes 2018). The user sees a two-dimensional image of the real world on the display with graphics overlaid on the image. However, this approach does not provide a proper stereoscopic image with depth to enable accurate user interaction with real world surfaces. Despite this current limitation, there are smartphones and tablets in development with “dual depth” sensing cameras that may provide a better sense of depth for the user.
OST devices are significantly more costly than VST platforms. However, Seebright, an augmented reality startup based in Santa Cruz, has developed OST hardware that is comparable in price to the Google Cardboard device but more intuitive to use.
Due to the current prohibitive cost of OST hardware, there are a relatively limited number of studies which validate AR technology when compared to studies exploring VR applications in orthopaedic surgery. The early results of these studies on AR, however, are highly promising for the validation of the technology as a means to improve surgical accuracy and patient safety.
An AR platform was evaluated for training THA placement efficacy in a study by Logishetty et al. The investigators utilized an AR headset to aid in acetabular cup orientation and compared placement accuracy between trainees with only AR-assistance and those with only surgeon guidance. Over the course of the four training sessions, the participants using the AR system had significantly smaller mean errors in orientation (1 ± 1 degrees) than those receiving guidance from the surgeon (6 ± 4 degrees). In the fourth and final assessment, participants in both groups had improved to a comparable level (mean difference in accuracy between the two groups, 1.2 degrees; 95% CI, -1.8 to 4.2 degrees; p = 0.281). Although there were no differences in the final assessment between the AR-trained group and the surgeon-trained group, the AR guidance system may be useful in the education setting because it can provide feedback for motor skills required for arthroplasty without the need for attending surgeon oversight.
Further, AR has several applications in preoperative planning. In a study by Ogawa et al. (2018), the investigators developed a similar acetabular cup placement device, the AR-HIP system. The AR-HIP device presents the surgeon with an acetabular cup graphic overlay in the surgical field through a smartphone. The smartphone also provides feedback on the orientation of the acetabular cup. Fifty-four patients undergoing primary total hip arthroplasty were divided randomly into two groups, one with cup orientation measured using a goniometer and the other group measured with the AR-HIP device. This study demonstrated improved operative accuracy with the AR-HIP device compared with the traditional goniometric measurement, potentially reducing wear and increasing stability.
Intraoperative navigation for fracture fixation can also be assisted by AR (Wang et al. 2016, Terander et al. 2018). In a study by Wang et al., the investigators utilized a three-dimensional reconstruction of patient CT data displayed on the HoloLens head-mounted device for placement of a sacroiliac screw. The trajectory of the screw was planned using a cylindrical model combined with the CT data. Following the operation, a repeat CT was performed to confirm the position of the screws, demonstrating only minimal screw deviation between planned and final screw position. This study suggests an intuitive and accurate approach for guiding screw placement by way of AR-based navigation. In another study by Shen et al., preoperative CT data was used to create a patient-specific reconstruction plate for unilateral pelvic and acetabular fracture reduction and internal fixation. An AR system then combined preoperative CT data with virtual models of the fracture and the surgical implants, enabling accurate implant placement and fracture reduction. The accuracy of this AR system was assessed using six clinical cases, with the variability of the virtual implant geometry averaging 0.63 mm (standard deviation 0.49 mm). On average, it took users 10 minutes to create the graphical representation of the implant. This study by Shen et al. provides evidence that AR can improve preoperative planning efficiency and better accommodate the patient’s individual anatomy.
AR can also provide remote training experiences as well as new solutions for global surgery. Greenfield et al. (2018) describe a remotely-guided surgery using AR technology in Gaza, Palestine for a complex traumatic hand injury. The remote surgeon in Beirut, Lebanon, utilized a combination of video teleconferencing technology and an AR system called Proximie, providing step-wise guidance to the local surgeon through hand gestures, annotations, diagrams, and clinical imaging. This proof-of-concept case report demonstrates how innovative AR technologies, such as Proximie, may be utilized to address structural inequities in remote locations by providing a cost-effective telesurgery platform. In a similar study by Ponce et al., the authors utilized a Google Glass® augmented reality device during a total shoulder arthroplasty, enabling the local surgeon to interact with the remote surgeon within the local surgical field. The AR system provided livestream video and allowed remote mentoring and guidance between the two surgeons. The surgery was successful and well-tolerated by the 66-year-old patient, who had an improvement of shoulder pain and range of motion postoperatively.
Mixed Reality (MR)
While AR is a technique that allows the surgeon to visualize a superimposed digital image on the surgical field, mixed reality presents the surgeon with holographic elements that align with the real world context (Gregory et al., 2018). Mixed reality may eventually be the most effective and functional of the reality technologies for the orthopaedic surgeon, as it allows more freedom of control over the CT reconstructions for preoperative planning, as well as intraoperative visualization. Less preoperative calibration is required of the surgeon, as the data can be manipulated by the user at any time. Similar to AR, MR provides the surgeon with access to patient data while remaining in the sterile environment, decreasing radiation exposure for both patient and surgeon, and allowing the surgeon to video-conference with colleagues and trainees in remote locations. The hardware required for a mixed reality system is nearly identical to currently available AR devices and many of these devices will work with both AR and MR applications. Although a growing number of systems have been introduced to the market, Microsoft has forged the path in the mixed reality domain with their headset device (HoloLens) enabling the user to control the headset by verbal command and hand gesture. A recent literature study on the evaluation of OST-HMD suitability for mixed-reality surgical intervention shows that Microsoft HoloLens outperforms other currently available OST HMDs (Qian et al., 2017).
The most advanced devices for AR and MR are ergonomic, intuitive, and stable for practical operative usage. The HoloLens device and comparable devices are lightweight and comfortable. Surgeons did not report discomfort with the extended use of these devices (Sinkin et al. 2016). The amount of information displayed on the device can be dynamically controlled, allowing the surgeon to customize the experience. These devices are responsive to voice and gesture commands (Léger et al. 2017), and the image quality has markedly improved with successive device generations, limiting previously problematic motion sickness issues. Finally, length of battery life and stability of the video stream have also improved, minimizing interruptions.
In a study by Gregory et al. (2018), the investigators performed a standard reverse shoulder arthroplasty aided by the HoloLens MR system. A point-of-view live video feed was also provided to four other surgeons in remote locations in the U.S. and U.K., who were able to share information in the primary surgeon’s field of view throughout the intervention via Skype. The time required to complete the procedure was characteristic of the traditional operative time at 90 minutes and postoperative CT evaluation showed proper positioning of the prosthesis. The patient was without complications on a clinical visit 45 days after the procedure.
Because of its ease of use and rapid integration into workflow, MR technology has been effectively employed for comprehensive management of complicated cervical fractures and trauma in a study by Wu et al. (2018). The authors utilized MR technology for treatment of cervical fractures requiring reduction, fixation, and decompression. Laminar screw placement for stabilization of the fracture is the standard of care for fixation of cervical fractures, however the placement of screws carries the risk of neurovascular injuries and potentially catastrophic complications. Consequently, the use of navigational systems to assist with these procedures has become increasingly common. In this study, a patient with traumatic high paraplegia was taken to the hospital and underwent a complicated, successful cervical fracture operation guided by MR technique. The authors found the MR navigation system allowed improved preoperative doctor-patient communication, as the surgeon was able to display a 360 degree view of the fracture while discussing operative risks and the surgical plan. Intraoperatively, communication between surgeons is facilitated with the use of real-time holographic images in addition to the increased operative accuracy with use of the three-dimensional digital model.
Lee et al. (2017) present a mixed reality system that makes use of markerless surgical tool tracking with the aim of providing guidance and support for fast entry point localization. The user selects the surgical tool and establishes a planned trajectory using the patient’s CT data. By combining the tracking results and augmenting the tracked tools in the mixed reality field of view, the user can rapidly perceive the target depth, orientation, and spatial position. The planned trajectory is displayed alongside the estimated trajectory of the tool, allowing the user to orient the tool properly. By aligning the two trajectories, a complicated k-wire or screw placement procedure is simplified without the need for numerous fluoroscopic images.
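The alignment check at the heart of such guidance reduces to two quantities: the angular deviation between the planned and tracked tool axes, and the distance from the tracked tool tip to the planned entry point. The following minimal Python sketch illustrates how these could be computed; it is not the authors' implementation, and the vectors used in the example are purely illustrative:

```python
import math

def angle_between(u, v):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    # Clamp to guard against floating-point drift outside [-1, 1].
    c = max(-1.0, min(1.0, dot / (nu * nv)))
    return math.degrees(math.acos(c))

def entry_point_error(tool_tip, entry_point):
    """Euclidean distance (mm) between the tracked tool tip and the planned entry point."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(tool_tip, entry_point)))

# Illustrative comparison of a planned trajectory with a slightly tilted tracked tool axis
planned_axis = (0.0, 0.0, 1.0)
tracked_axis = (0.0, 0.1, 1.0)
deviation = angle_between(planned_axis, tracked_axis)  # ~5.7 degrees
```

In a guidance system, these two values would be recomputed on every tracker update and displayed to the surgeon, who adjusts the tool until both fall below an acceptance threshold.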
Fushima and Kobayashi (2016) present an MR simulation for orthognathic surgery that aims to track mandibular motion using skeletal/dental models reconstructed from CT data. Their solution integrates the MR skeletal/dental model with the real motion of the dental cast mounted in the simulator setup. The CT reconstruction and the physical model were visually combined on a PC display, and the skeletal change from osteotomy with the resulting mandibular movement was modeled using the Fastrak motion-tracking system (Polhemus Co., USA). The accuracy of the MR system was shown to be sufficient for clinical demands, as the average error between the MR model and the actual value ranged from -0.308 to 0.136 mm. A similar platform could be constructed to model biomechanical function in arthroplasty and other orthopaedic procedures.
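In a setup of this kind, each tracker pose update amounts to applying a rigid-body transform to the CT-reconstructed model so that it follows the physical cast. The Python sketch below shows one such pose update applied to a single model vertex; it is a simplified illustration under assumed coordinate conventions, not the Fastrak interface or the authors' software:

```python
import math

def transform_vertex(vertex, yaw_deg, translation):
    """Apply a rigid-body pose update (rotation about the vertical axis
    plus a translation, coordinates in mm) to one model vertex."""
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    x, y, z = vertex
    rx, ry = c * x - s * y, s * x + c * y  # rotate in the horizontal plane
    tx, ty, tz = translation
    return (rx + tx, ry + ty, z + tz)

# Illustrative update: a 2-degree mandibular rotation with a 1.5 mm forward translation
moved = transform_vertex((10.0, 0.0, 0.0), 2.0, (0.0, 1.5, 0.0))
```

A full implementation would apply the same transform (typically a 4x4 homogeneous matrix delivered by the tracker) to every vertex of the reconstructed mesh on each frame.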
Beyond operative uses, MR also offers an improved training modality for orthopaedic residents. Virtual reality training simulators are often limited in their visualization of real-world surfaces and by a lack of haptic feedback, whereas mixed reality allows interaction with both real and holographic objects during training. A study by Stefan et al. (2018) explores the potential of mixed reality to improve on the virtual reality simulator training systems currently on the market. The authors address a training solution for C-arm-based fluoroscopy in orthopaedic surgery by constructing a realistic, radiation-free system that combines patient-specific 3D-printed anatomy with simulated X-ray imaging using a model C-arm. This mixed reality training system was verified with six surgeons performing a facet joint injection simulation.
In a similar study by Condino et al. (2018), a mixed-reality platform using the HoloLens was developed as a hybrid training system for orthopaedic open surgery. Hip arthroplasty was chosen as the benchmark to evaluate the proposed system, and patient-specific anatomical 3D-printed models were constructed from patient CT imaging. Along with the hardware components, the Vuforia SDK was utilized to register the virtual and physical contents, and the Unity3D game engine was employed to develop the software allowing interaction with the virtual content through voice and gesture commands. The study evaluated both the quantitative accuracy of the system and the qualitative response of the participants. The image registration error of the system was low (RMSE = 0.6 mm), and the participants found the system ergonomic, accurate, and intuitive to use. These findings support mixed reality as a platform for the simulation of open orthopaedic surgery.
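The reported registration accuracy is a root-mean-square error over corresponding points in the virtual and physical scenes. A short Python sketch of that metric is given below; the fiducial coordinates are hypothetical and are not the study's data:

```python
import math

def registration_rmse(virtual_pts, physical_pts):
    """Root-mean-square error (mm) over corresponding fiducial pairs
    after registering the virtual content to the physical model."""
    squared = [sum((a - b) ** 2 for a, b in zip(v, p))
               for v, p in zip(virtual_pts, physical_pts)]
    return math.sqrt(sum(squared) / len(squared))

# Hypothetical fiducial coordinates (mm) on a 3D-printed model and its hologram
virtual_pts  = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
physical_pts = [(0.3, 0.0, 0.0), (10.0, 0.4, 0.0), (0.0, 10.0, 0.5)]
print(round(registration_rmse(virtual_pts, physical_pts), 2))  # → 0.41
```

A sub-millimetre RMSE, as reported in the study, indicates that the holographic overlay deviates from the physical anatomy by less than the tolerances of most open orthopaedic tasks.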
Conclusions
As an advancement over traditional computer-assisted surgery methods, reality technologies hold great promise for orthopaedic training and practice. The ability to visualize patient data in real time, enhance preoperative planning, and improve the accuracy and precision of interventions has been shown to translate into improvements in both treatment quality and patient outcomes. From an educational perspective, the reality technologies provide a new method for live evaluation and remote mentoring. Interest in the technology has grown rapidly, as evidenced by the increasing number of publications on the subject in recent years.
The gap between research and wide application is mainly due to the prohibitive cost of reality technology devices. Regional disparities in the availability of simulators are a further limitation to widespread use. Issues of data protection will also have to be addressed before greater adoption of this technology (Muensterer et al. 2014, Khor et al. 2016, Vávra et al. 2017). However, issues of cost and availability will likely be overcome as more inexpensive devices enter the market.
With further research, technological improvement, and widespread implementation into surgical curricula and practice, reality technology is sure to become commonplace in orthopaedic surgery. It has the power to transform healthcare as it is currently delivered and will allow physicians to further personalize and tailor their care to the individual patient. Surgeons will be able to become more efficient and precise while patients will benefit commensurately from this higher quality of care. In all, the development of virtual, augmented, and mixed reality technologies represents an exciting new trend with a diversity of potential applications in orthopaedics.