
VIRTUAL (COMPUTED) ENDOSCOPY:

DEVELOPMENT AND EVALUATION USING THE VISIBLE HUMAN DATASETS

[Presented at the Visible Human Project Conference, October 7-8, 1996,

National Library of Medicine, National Institutes of Health, Bethesda, Maryland]

RICHARD A. ROBB, PH.D.

Mayo Foundation/Clinic

Rochester, MN 55905

Introduction

Virtual endoscopy (or computed endoscopy) is a new method of diagnosis that uses computer processing of 3-D image datasets (such as CT or MRI scans) to provide simulated visualizations (1,2) of patient-specific organs similar or equivalent to those produced by standard endoscopic procedures (3). Conventional CT and MRI scans produce cross-sectional "slices" of the body that are viewed sequentially by radiologists, who must imagine or extrapolate from these views what the actual three-dimensional anatomy should be. By using sophisticated algorithms and high-performance computing, these cross sections may be rendered as direct 3-D representations of human anatomy. Specific anatomic data appropriate for realistic endoscopic simulations can be obtained from 3-D MRI digital imaging examinations (4) or from 3-D spiral CT acquisitions (5).

Thousands of endoscopic procedures are performed each year. They are invasive and often uncomfortable for patients. They sometimes have serious side effects such as perforation, infection and hemorrhage. Virtual endoscopic visualization avoids the risks associated with real endoscopy, and when used prior to performing an actual endoscopic exam can minimize procedural difficulties and decrease the rate of morbidity, especially for endoscopists in training. Additionally, there are many body regions not accessible to or compatible with real endoscopy that can be explored with virtual endoscopy. Eventually, when refined, virtual endoscopy may replace many forms of real endoscopy.

The term "virtual endoscopy" is not quite syntactically correct. Webster's definition of "virtual" has two main components: 1) "...possessed of certain physical virtues..." and 2) "...being such in effect, though not formally recognized...". The definition of "endoscopy" is also two-fold: 1) "...visualizing the interior of a hollow organ..." (Webster), and 2) "...visual inspection of any cavity of the body by means of an endoscope..." (Dorland's Medical Dictionary). Thus, the concatenation of the terms virtual and endoscopy does not result in a syntactically congruent phrase (e.g., "being such in effect though not formally recognized in visualizing the interior of a hollow organ"). The term "virtuality" is actually better -- meaning 1) "essence of..." and 2) "potential existence of..." (Webster). This leads to a syntactically and semantically correct phrase: "endoscopic virtuality", meaning "the essence of visualizing the potentially existing interior of an object...". This is what is meant in reality by the term virtual endoscopy. It conveys the extended power and capability of this technique for looking inside any object, hollow or not, existing or not. Additionally, the resultant visualizations are actually computed images, so the term "computed endoscopy" is perhaps more accurate and descriptive still (similar to comparing computed tomography with conventional tomography). But the term virtual endoscopy has been used so prevalently as to probably preclude modification in the common vernacular, so for practical and historic purposes, the term virtual endoscopy will be used in this paper to convey the same meaning as endoscopic virtuality.

Although there has been speculation about virtual endoscopic capabilities since the early 1970s, as dramatized in the science fiction movie "Fantastic Voyage", modern realization of these capabilities has been made possible by the recent availability of the Visible Human Datasets (VHD) from the National Library of Medicine (see Figure 1 and Figure 2), coupled with the development of computer algorithms that accurately and rapidly render high-resolution images in 3-D and perform fly-throughs without inserting long instruments (endoscopes of any kind) into a patient. The VHD provides a rich opportunity to help advance this important new methodology from theory to practice.

Table 1 lists some of the real virtues of the Visible Human Datasets. Combined, they constitute a uniqueness and completeness unmatched by any other standardized dataset: high-resolution (~1 mm), isotropic, multimodality images of the entire body, both male and female, of a kind not routinely available in any laboratory or hospital.


Table 1

Some Virtues of the Visible Human Dataset

Whole Body Images
Male and Female
Multimodality (CRYO, CT, MR)
High Resolution Digital Anatomy
Good Isotropic (3-D) Resolution
Available Free of Charge


These combined attributes have resulted in VHD popularity and utilization exceeding the expectations and goals of its conceivers. Facilitated by free availability from the National Library of Medicine, National Institutes of Health (39), it has become an international standard dataset for development and implementation of a wide variety of useful applications.

Advanced approaches to visualization of specific anatomical models in support of endoscopic training, rehearsal, or diagnosis can be implemented on a virtual reality display system (6-8). Such displays are "immersive", and allow the endoscopist to simultaneously visualize the anatomy and manipulate the viewing orientation in a realistic way (9,10). In fact, virtual endoscopy provides viewing control and options that are not possible with real endoscopy, such as direction and angle of view, scale of view, immediate translocation to new views, lighting, and measurement. Visual feedback positioning systems and navigation guides can orient the virtual endoscopist relative to the actual anatomy. Image values from the original data can be associated with any view and quantitatively assessed. Simulated views can be compared to actual endoscopic images and post-operative pathologic data to validate their usefulness. There are many body regions not accessible to real endoscopy that can be explored with virtual endoscopy. Several important body systems are incompatible with invasive probes, including the heart, spinal canal, inner ear (cochlea, semicircular canals, etc.), biliary and pancreatic ducts, and large blood vessels. These important anatomic structures are ideally suited for virtual endoscopy.

The VHD is being used by an ever-increasing number of investigators to develop, test, and compare 3-D visualization and image processing methods, and to evaluate the effectiveness of these methods, including endoscopic simulation, for eventual applications in clinical diagnosis and therapy. These methods include algorithms for registration, segmentation, classification, modeling and rendering, all of which may be used in various forms of computed virtual endoscopy. As 3-D medical imaging and computer power have improved, virtual representations of anatomic and physiologic data have become increasingly realistic and interactive, significantly advancing the science of computed non-invasive endoscopic examination. However, there remains a critical need to refine and validate these simulations for routine clinical use, and the VHD is ideally suited for this purpose. Table 2 categorizes some useful applications of the VHD into four main groups.


Table 2

Some Real Applications of the Visible Human Datasets

Algorithms       Education             Modeling                   Prototyping
---------------  --------------------  -------------------------  -------------------
rendering        digital anatomy       deformable models          surgery planning
segmentation     reference libraries   physical attributes        anesthesia delivery
registration     knowledge bases       physiological attributes   virtual laparoscopy
classification   biology framework     realistic animations       virtual endoscopy


The Biomedical Imaging Resource at Mayo Clinic was one of the first laboratories to receive (in August 1994) this unique, high-resolution image dataset -- the Visible Human Male (VHM) -- and has been actively engaged in developing and evaluating a variety of image processing methods with it. These efforts have included accurate segmentation and modeling of all major anatomical structures of the VHM. More recently, similar efforts with the Visible Human Female (VHF) have been carried out.

These data and studies have been used to develop and evaluate virtual endoscopic procedures applied to a variety of intraparenchymal regions of the body, using both anatomic modeling and perspective volume rendering, to produce fly-through movie sequences of the stomach, colon, spine, esophagus, airway, heart and aorta of the VHM, and of the sinuses, bronchial tree and female reproductive system of the VHF. Figure 3 shows a transparent rendering of a torso model of the VHM and several computed endoscopic views. Illustrated in the simulated endoscopic views are different types of navigation guides superimposed upon the display to help the user interactively determine body orientation and precise anatomical localization while performing the virtual endoscopic examination.

The virtual visualizations of the trachea, esophagus and colon have been compared with standard endoscopic views by endoscopists who judge them to be realistic and useful. Especially useful is texture mapping onto the computed virtual intra-parenchymal surfaces of the trachea, esophagus and colon using patient-specific samples from the real endoscopic images of these regions. Quantitative measurements of geometric and densitometric information obtained from the computed virtual endoscopic images ("virtual biopsy") are being carried out and compared with other direct measures on the original VHM and VHF data. Preliminary analyses suggest that virtual endoscopy can provide accurate and reproducible measurements.

We have developed specific clinical protocols to compare virtual endoscopy with real endoscopy and other diagnostic procedures. Preliminary results from these studies are encouraging and will help drive improvements in and lend credibility to virtual endoscopy as a clinical tool. However, there are a number of technical problems that have yet to be solved to make virtual endoscopy a clinical tool with sufficient scientific validity and ease of use to be employed on a routine basis. These will be addressed later in this paper. But rapid progress is being made toward successful solution of all of these problems and toward realization of the significant promise of virtual endoscopy.

Background

The history of virtual endoscopy is a brief one. It is a new technology in diagnostic medical imaging. Virtual endoscopy derives principally from digital medical imaging, and in particular from visualization of 3D CT and MRI datasets. However there are roots in non-medical areas of 3-D visualization, including computer science, terrain guidance, flight path planning and flight simulation (11,12). The fundamental concept common to all of these efforts is the representation of real world objects (in the case of medicine, patient organs and tissues) as spatial information. For digital medical image information, the spectrum of physical and computer science methods available to acquire, process, analyze, convert, scale, enhance, fuse, distribute and transmit information can be applied in ways that permit diagnostic and therapeutic capabilities beyond current human physical abilities and possibilities. This is the promise of the Information Age.

A number of investigators have been working in this field. Some of the earliest work was published by Vining (13,14) on virtual colonoscopy, Lorensen (15) performing 3-D fly-throughs of carotid arteries and A-V malformations, Robb (6,16,17) who began with patient specific 3-D organ visualizations and progressed to interactive organ fly-throughs, Hara and Johnson (18) who have published early clinical observations in the colon, Jolesz and Kikinis (19,20) who developed "enhanced reality" using 3-D visualization and image fusion for stereotactic neurosurgery, and Rubin and Napel and colleagues (2,21) who have applied simulated endoscopy to a variety of intra-parenchymal visualizations. As 3-D medical imaging and computer power improved, these early pioneers appreciated the power and promise of virtual representations for realistic visualization and manipulation to advance the science of non-invasive endoscopic diagnosis.

Continuing recent work (22-26) characterizing the rapidly maturing development and evaluation of virtual endoscopy in a variety of applications suggests that this technology is "a winner", and will in fact become a routine clinical tool in the near future.

Methods

One general schema for producing virtual endoscopy procedures is diagrammed in Figure 4. Three-dimensional images are first acquired from a scanner (e.g., spiral CT, MRI, confocal microscopy). Invariably, some preliminary processing of this data is required to properly prepare it for modeling. This "pre-processing" step may include interpolation to transform the dataset into isotropic elements, registration to bring all images into spatial synchrony, and segmentation to reduce the dataset to the desired specific anatomic structure(s). Many approaches to model creation have been proposed (27-33), but generally single anatomic objects must first be segmented from the 3-D images and their surfaces extracted. The isolated surface is then converted to a geometric representation, a process referred to as "tiling", by transforming surface coordinates to a meshwork of polygons. The polygonal surface representation may then have appearance-modifying information added to it, such as color, lighting, textural patterns, etc. This processed dataset comprises the model, which may then be rendered for visualization using a surface or volume rendering algorithm, of which there are many available (16,35-37). The user or operator previews the rendering and accepts it as faithful and/or useful, or can choose to repeat some phase of the process (e.g., segmentation and/or surface definition) to obtain an acceptable model for rendering. This iteration and "acceptance" decision is generally made by a human expert (e.g., radiologist, surgeon, endoscopist).
The endoscopic display procedure is then simulated in one of two ways -- 1) on-line, real-time display using an interactive simulator, such as a virtual reality display system (6,38) with rapid computational capabilities which can produce updated displays at real-time rates in response to user interactions (e.g., using a head-mounted display, head tracking and 3-D input devices); or 2) a pre-determined "flight path" is used to compute sequential frames of views which are rendered in an animated video sequence. There are variants to this overall process, but it is generally representative of current methods for virtual endoscopy.
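The pre-processing step above mentions interpolation to transform the dataset into isotropic elements. A minimal sketch of that idea follows; the function name and data layout are illustrative assumptions, not part of the system described in this paper. Adjacent acquired slices are linearly blended to synthesize new slices until the through-plane spacing matches the in-plane pixel size.

```python
# Illustrative sketch (not the authors' code): resample an anisotropic
# image stack (slice spacing larger than in-plane pixel size) to
# isotropic voxels by linear interpolation between adjacent slices.

def resample_isotropic(slices, slice_spacing, pixel_size):
    """Interpolate new slices so voxel depth matches in-plane pixel size.

    slices        -- list of 2-D slices (each a list of rows of floats)
    slice_spacing -- distance between acquired slices (e.g. 3.0 mm)
    pixel_size    -- in-plane pixel size (e.g. 1.0 mm)
    """
    n_out = int(round((len(slices) - 1) * slice_spacing / pixel_size)) + 1
    out = []
    for k in range(n_out):
        z = k * pixel_size / slice_spacing       # position in input-slice units
        i = min(int(z), len(slices) - 2)         # lower neighbouring slice
        t = z - i                                # fractional distance to next
        lower, upper = slices[i], slices[i + 1]
        out.append([[(1 - t) * lower[r][c] + t * upper[r][c]
                     for c in range(len(lower[0]))]
                    for r in range(len(lower))])
    return out
```

For example, two 3 mm-spaced slices resampled to 1 mm spacing yield four slices, with two new slices blended in between.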

Another approach to virtual endoscopy is diagrammed in Figure 5. In this method a model is not generated; rather, the appropriately segmented images are rendered directly using volume rendering with perspective (2,16,36). This is a well-known ray-casting approach (35-37) which generates various surface views using a specified set of conditions and/or constraints on the rendering process as rays are passed mathematically through the voxels of the 3-D image. Perspective volume rendering differs from parallel volume rendering in the direction that the rays are cast. Parallel volume rendering projects all rays in a parallel trajectory through the volume. This is computationally simple, resulting in speed and efficiency. Rays cast in parallel can be considered to originate from an eye position at infinity, and do not converge. Conversely, rays cast in perspective originate at a finite eye position and are cast at divergent angles relative to one another through a projection plane and then into the volume. Parallel renderings are usually acceptable for large datasets with the viewing point outside of and far removed from the volume. Virtual endoscopy, which places the eye position in close proximity to visible surfaces, requires the casting of rays in a perspective geometry, especially from viewpoints inside the volume. By appropriately modifying the algorithms and parameters used in perspective rendering, the visualizations produced can effectively emulate images captured from actual endoscopic examinations. The shading models available for parallel volume rendering are equally applicable to perspective volume rendering.
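The geometric distinction between the two ray-casting schemes can be made concrete with a short sketch. This is a simplified illustration under assumed conventions (a z-aligned projection plane sampled at integer pixel centres), not the rendering code described above: in the parallel case every pixel shares one ray direction, while in the perspective case each ray diverges from a finite eye position.

```python
# Illustrative contrast between parallel and perspective ray generation.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def parallel_rays(width, height, direction):
    """Every pixel gets the same ray direction: the eye is 'at infinity'."""
    d = normalize(direction)
    return [[d for _ in range(width)] for _ in range(height)]

def perspective_rays(width, height, eye, plane_z):
    """Rays diverge from a finite eye position through a projection plane.

    The plane z = plane_z is sampled at integer pixel centres; the eye
    sits behind it at `eye` (an assumed, simplified camera model).
    """
    rays = []
    for y in range(height):
        row = []
        for x in range(width):
            point = (float(x), float(y), plane_z)    # pixel on the plane
            row.append(normalize(tuple(p - e for p, e in zip(point, eye))))
        rays.append(row)
    return rays
```

The divergence of the perspective rays is what makes nearby luminal surfaces loom large, exactly the viewing geometry an endoscope provides.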

Because virtual endoscopy requires an eye position inside the volume, voxels in close proximity to the eye appear larger than voxels at a distance from the eye. Because a nearby voxel covers many pixels of the resulting visualization, it is in effect super-sampled by the algorithm, which can result in a visualization that appears "blocky". By using trilinear interpolation in perspective volume rendering, the display can be effectively smoothed by resampling the data to match the sampling rate of the rays being cast through the volume. Trilinear interpolation increases the computational load of the algorithm by an order of magnitude compared to parallel volume rendering. Current workstations are able to render a perspective image from a 256³ volume in under 10 seconds.
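Trilinear interpolation itself is straightforward: a sample at a fractional position is blended from the eight surrounding voxel values, weighting each by its proximity along the three axes. The following self-contained sketch (with an assumed volume[z][y][x] indexing convention) shows the computation:

```python
# Trilinear interpolation: blend the eight voxels surrounding a
# fractional sample position, first along x, then y, then z.

def trilinear(volume, x, y, z):
    """volume is indexed volume[z][y][x]; x, y, z may be fractional
    (and must lie strictly inside the last cell along each axis)."""
    x0, y0, z0 = int(x), int(y), int(z)
    x1, y1, z1 = x0 + 1, y0 + 1, z0 + 1
    fx, fy, fz = x - x0, y - y0, z - z0

    def v(i, j, k):
        return volume[k][j][i]

    # Blend along x on each of the four edges of the surrounding cell...
    c00 = v(x0, y0, z0) * (1 - fx) + v(x1, y0, z0) * fx
    c10 = v(x0, y1, z0) * (1 - fx) + v(x1, y1, z0) * fx
    c01 = v(x0, y0, z1) * (1 - fx) + v(x1, y0, z1) * fx
    c11 = v(x0, y1, z1) * (1 - fx) + v(x1, y1, z1) * fx
    # ...then along y, then along z.
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```

The eight multiply-adds per sample, repeated at every step along every ray, account for the order-of-magnitude increase in computational load noted above.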

Since perspective volume rendering is a computationally intensive procedure, even the fastest of modern workstations cannot render full visualizations of large images at real-time rates (30 frames per second), although this is a solvable problem with continuing advances in computer technology. Therefore, similar to the off-line option of the modeling approaches, pre-determined flight paths and animation are performed to produce cine sequences that can be viewed subsequently at video frame rates. In contrast to modeling, the potential power of perspective volume rendering, especially when computational rates become less prohibitive for large datasets, is that the intrinsic 3-D richness of the volume data is preserved, including depth layers that may hold subtle but important information to enhance the usefulness of the visualization, such as blood vessels in the luminal wall, fine trabeculations, subdural lesions, etc. Such detail is not likely to be captured in surface models.
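The pre-determined flight-path idea can be sketched simply: camera positions for a cine sequence are interpolated along a polyline of hand-picked key points (e.g., a luminal centreline), one position per video frame. The function below is an illustrative sketch, not the flight-path code used in the work described here; it parameterises uniformly by segment rather than by arc length for brevity.

```python
# Illustrative flight-path generator: one interpolated camera position
# per frame along a polyline of key points.

def flight_path(key_points, n_frames):
    """Linearly interpolate camera positions along a polyline.

    key_points -- list of (x, y, z) tuples (at least two)
    n_frames   -- number of frames to generate (at least two)
    """
    frames = []
    segments = len(key_points) - 1
    for f in range(n_frames):
        t = f / (n_frames - 1) * segments    # position along the polyline
        i = min(int(t), segments - 1)        # which segment we are on
        u = t - i                            # fraction within that segment
        a, b = key_points[i], key_points[i + 1]
        frames.append(tuple((1 - u) * ac + u * bc for ac, bc in zip(a, b)))
    return frames
```

Each returned position would be paired with a viewing direction (e.g., toward the next point) and passed to the renderer to produce one frame of the cine sequence.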

The user interface to a virtual endoscopy system varies depending on the display mode. In real-time simulations (6,9-11,19,38) the user wears a head-mounted display or special stereo glasses and manipulates the 3-D image with various feedback and 3-D input devices. This interface is immersive and responsive on-line, and places the user inside the visualization domain (vis-a-vis "Fantastic Voyage"). In cine or video path tracking, the interface is generally a computer workstation screen and pointing device that the user employs outside of the visualization domain to control replay of pre-determined fly-throughs. These displays can also be rendered and replayed on videotape, CD-ROMs or other multimedia devices.

Evaluation

Critical to the success of any new method for clinical applications is careful validation. The VHD is playing an important role in this regard. Table 3 depicts this role in the schematic hierarchy of approaches to validation of any new technology or method. Traditionally, mathematical simulations and phantom studies are carried out to help validate a new method, but assessing the performance (accuracy and reproducibility, sensitivity and specificity) of new methods for clinical diagnosis or therapy necessarily requires both in vitro and in vivo studies as well. The difficulty lies in the absence of a "ground truth" measure for validation. The VHD provides this with the cryosection data, against which image processing effects using the CT and MR data can be calibrated and judged. The ground truth cryosection data itself can also be used to produce very realistic simulations for testing procedures. The standardized dataset also provides a basis for multi-center studies and facilitates important end user evaluation (i.e., by the physician, surgeon, anatomist, and/or endoscopist).


Table 3

Evaluation and Validation Using the VHD

Simulations
Phantoms
Real Data (VHD)
Cryosections "ground truth"
Multi-Center Comparisons
End User Participation


The Visible Human Male (VHM) dataset from the National Library of Medicine (39) has been used to develop methods for eventual application in clinical diagnosis and therapy. These methods include the algorithms for segmentation, modeling and rendering which are used in virtual endoscopy. Models of the torso and its contents made from the VHM have been used to evaluate several virtual endoscopic procedures, as was illustrated in Figure 3.

Figure 6 shows a virtual endoscopy view captured from a fly-through of the segmented and modeled stomach of the VHM. This visualization illustrates the surface detail that can be visualized with virtual endoscopy (the procedure used is similar to that described in Figure 4). Figure 6 also illustrates three different navigation guides superimposed upon the display to help the user interactively determine body orientation and precise anatomical localization while performing the virtual endoscopic examination. The icon at the upper right dynamically updates body position relative to the current eye viewing position; the transparent thumbnail of the model at upper left shows a bright dot at the 3-D location of the current view; and the CT section at lower left is perpendicular to the current viewpoint, with a bright dot showing the current projected anatomic location of the virtual endoscopic probe.

Figure 7 shows four panels of virtual endoscopy views with navigation guides captured at selected locations during fly-throughs of the VHM trachea (upper left), esophagus (upper right), colon (lower left) and aorta (lower right). The view from mid-trachea shows the first main bifurcation of the airway; the esophagus view is near the opening of the stomach; and the colon view is from within the sigmoid. The virtual visualizations of the trachea, esophagus and colon have been compared with real endoscopic views by endoscopists who judge them to be realistic and useful.

The view in the lower right panel of Figure 7 is from within the aorta and shows the junction of the innominate, left common carotid and left subclavian arteries. Such visualizations are not possible with real endoscopy. Figure 8 further illustrates this advantage of virtual endoscopy, showing the segmented heart model from the VHM and 3 additional endoscopic views within the major vessels and chambers, including the superior vena cava, right ventricle and pulmonary trunk. These different views also demonstrate the capability of virtual endoscopy for rapid translocation of viewpoint. Again, such views are impossible with real endoscopy.

Figure 9 is another visualization of the colon of the VHM, illustrating the model of the descending colon. Figure 10 shows four different endoscopic views from within the central portion of the descending colon (top) and from within the sigmoid (bottom).

Figure 11 illustrates rendered views from front and back of the skeleton of the VHF. The segmentations were produced by thresholding on the CT data. More refined segmentations using 3-D region growing and mathematical morphology were performed on the skull and brain. Figure 12 shows transparent renderings of the skull model, including segmented sinus and middle ear passages through which virtual endoscopic fly-throughs can be performed. Figure 13 is a composite image showing transparent volume renderings of the brain, including two virtual endoscopic views from within the cerebrospinal ventricular system, one looking from the base into the fourth ventricle and the other from within the right lateral ventricle adjacent to the caudate nucleus.

Figure 14 shows models of the trachea segmented from the VHF, including an endoscopic view looking toward the first main bronchial branch. These model surfaces have been texture-mapped with the spatially-congruent pixel values from the digitized cryosection data.

Figure 15 shows a model of segmented pelvic contents of the VHF, showing the hip bones, female reproductive system, and urinary system. Figure 16 illustrates simulated endoscopic views from within the left and right fallopian tubes, the uterus, and a ureter.

Quantitative measurements of geometric and densitometric information contained in these models and the correlated image data can be carried out. Preliminary studies and analyses (40,45) are providing growing evidence of the potential of virtual endoscopy for accurate and reproducible visualizations.



Some Applications and Problems

One of the useful applications of the VHD models has been to serve as a framework for medical education, anesthesiology training, surgery rehearsal and endoscopic simulations. Although the focus of this paper is on virtual endoscopy, it is instructive to illustrate an example of one of these other applications, namely surgery rehearsal, as it relates to virtual endoscopy. Figure 17 is a posterior oblique view of the pelvic girdle of the VHM with a model of the bladder, prostate gland and other adjoining structures, including a tumor, computed from a patient MRI volume scan and carefully registered and inserted into the VHM pelvis. The VHM pelvic girdle model serves as an anatomic framework for studying the patient-specific models of the prostate. (The pelvic bones are not available on the MRI scans.) The urologic surgeon can use such displays in a real-time virtual environment to critically study the anatomic relationships pre-operatively. Virtual endoscopy can be performed in the bladder, urethra, and prostate gland itself for detailed visualization of locations, shapes and sizes, particularly relative to the tumor and critical structures like the seminal vesicles and neuro-vascular bundles. Segmentation of the VHM pelvis and prostate gland and adjacent anatomy was one of the first "proofs-of-concept" for helpful 3-D visualization of prostate anatomy in patients. Figure 18 shows a detailed model of the VHM prostate and related structures, as well as models from two cancer patients. The patient-specific models can be studied independently, within the framework of the VHD pelvic model, or within the patient's own pelvic girdle if it is available from CT scan data (the CT scan data can be accurately registered with the MRI data before producing the visualization (6,17)).
Virtual endoscopy can be performed on these patient specific models to precisely visualize the interior of the prostate and embedded tumor relative to critical anatomic structures, such as the urinary sphincter, seminal vesicles and neuro-vascular bundles, helping to define appropriate surgical margins in order to reduce the risk of post-operative complications (such as incontinence or impotence).

Although virtual endoscopy is in the embryonic evaluation stages in clinical practice, descriptions of methods and preliminary results are increasing in scientific meetings, workshops and publications (2,6,13,15,17,18,40-47). Reported here are two typical clinical examples of virtual endoscopy -- one in esophageal cancer and one in colon cancer -- which have been made possible using methods first developed and tested on the VHD.

Figure 19 illustrates a sequence of image processing steps in preparation for a virtual endoscopy exam of the esophagus. The upper left panel in Figure 19 shows four cross-sections through the chest from a spiral CT scan of an esophageal cancer patient. The cancer, which constricts and almost completely occludes the esophagus, can be seen in section 3. The upper right panel is a transparent volume rendering of several segmented anatomic structures from this scan, including the skin, lungs, trachea, esophagus, cancer and part of the stomach. The locations of the selected cross-sections are indicated by the dashed lines. The lower two panels illustrate renderings with some objects removed. In the lower right panel, the esophagus is seen to be "squeezed off" by the circumferential constricting cancer, making it impossible to segment the distal portion adjoining the stomach.

Figure 20 illustrates four virtual endoscopy views of this patient's esophagus, computed from the segmented esophagus shown in Figure 19. The upper left panel shows a segment of normal esophagus near level 1. The upper right panel is a view from just above the pronounced narrowing in the esophagus near level 2. The lower left panel is yet a more distal view just above level 3 which reveals an infiltrating portion of the cancer (red). The lower right panel is the same view as the lower left, but the esophageal wall has been rendered transparent to reveal the entire lesion wrapped around the esophagus. Such global views of the cancer are impossible with real endoscopy.

Figure 21 illustrates volume renderings of segmented anatomic structures from a spiral CT scan of a patient with colon cancer and polyps. The upper left image is a transparent rendering of a portion of the large bowel selected for segmentation and the rather large circumferential rectal cancer at its distal extent. The upper right and lower left images reveal these same anatomic segmentations at different angles of view with the skin removed. The lower right panel shows a volume rendering of the isolated large colon and cancer from a posterior oblique view. Also identified, segmented and rendered in this image is a polyp (blue) in the mid-sigmoid region.

Figure 22 shows different ways of digitally analyzing the patient polyp with virtual endoscopy. The upper left panel is a texture-mapped virtual endoscopic view of the polyp at close range, and the upper right panel shows an enhancement of the polyp against the luminal wall. Such enhancement is possible only with virtual endoscopy, since the polyp itself can be digitally segmented and processed (e.g., brightened) as a separate object. The lower left panel is a transparent rendering of the polyp, revealing a dense interior region which was also segmented. This is most likely a denser-than-normal vascular bed, perhaps a precursor of malignancy. The lower right panel illustrates the capability for "virtual biopsy". Both geometric and densitometric measures may be obtained numerically from the segmented polyp (density measures can be computed from the original image data).

A short video sequence illustrating a virtual endoscopic fly-through of the rectal and sigmoidal portions of the patient colon is included with this manuscript. The movie demonstrates dynamic updating of on-screen navigation guides for anatomic orientation and location. Near the end of the sequence a 180 degree turn-around is performed to move back toward the polyp shown in Figure 22. This maneuver is not possible with real endoscopy.

The visual fidelity of current generation virtual endoscopy images is not yet at the level of diagnostic accuracy suitable for regular clinical use. There are a number of technical problems that have to be solved to make virtual endoscopy a clinical tool with sufficient scientific validity and ease of use to be employed on a routine basis. These include: 1) 3-D image resolution, 2) accurate surface rendering, 3) automatic segmentation, 4) robust registration and 5) appropriate preparation.

Current 3-D images from CT and MRI scans readily demonstrate resolving power to 5 mm, and some laboratory results on phantoms are demonstrating images of lesions with 1 and 2 mm accuracy (39-42). The current generation of helical or spiral CT scanners is capable of 1 mm resolution (5), but this is not used routinely. As scanner resolution improves to the submillimeter level, the resolution of virtual endoscopy will also improve. Current scanning resolution can be used for screening procedures of clinically relevant lesions in the colon, but improvements to at least the 1 mm level are required to support a broad spectrum of use.

Higher 3-D resolution will provide the requisite structural and anatomical detail for clinical use of virtual endoscopy; however, surface textures comparable to those in the optical image obtained from standard endoscopy still require significant improvement. Surface rendering uses a variety of shading techniques and/or application of generic texture mappings related to the specific organ (27,31); therefore surface (e.g., mucosal) details in such renderings are only a form of mimicry. Since these are not patient specific, they are not helpful for diagnostic purposes. Diagnoses based upon anatomic deformities by mass lesions (polyp, cancer, cyst, edema from inflammation, ulcer, stricture, etc.) can be made, but other lesions which are diagnosed by subtle mucosal changes (inflammation, superficial ulcers, vascular ectasia, etc.) cannot be faithfully represented by current virtual endoscopy methods, since the requisite information is not captured in the CT or MR scan.
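The shading techniques referred to above typically compute a brightness for each surface point from its normal vector and a light direction. A minimal sketch of the simplest such model, Lambertian (diffuse) shading, is shown below; the function name and the ambient term are illustrative assumptions, not a description of any particular rendering system.

```python
def lambertian_shade(normal, light_dir, ambient=0.1):
    """Diffuse shading: brightness is proportional to the cosine of
    the angle between the surface normal and the light direction,
    plus a small ambient term so shadowed surfaces remain visible.
    Both vectors are 3-tuples, assumed to be unit length."""
    cos_theta = sum(n * l for n, l in zip(normal, light_dir))
    diffuse = max(cos_theta, 0.0)  # surfaces facing away get no diffuse light
    return min(ambient + (1.0 - ambient) * diffuse, 1.0)

# A surface facing the light is fully lit; one at 90 degrees
# to the light receives only the ambient term.
print(lambertian_shade((0, 0, 1), (0, 0, 1)))  # 1.0
print(lambertian_shade((1, 0, 0), (0, 0, 1)))  # 0.1
```

This is exactly why such renderings are "a form of mimicry": the computed brightness depends only on geometry and lighting, so any mucosal texture must be painted on from a generic pattern rather than measured from the patient.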

Achieving accurate, reproducible, and ideally automatic organ or tissue segmentation remains a significant challenge. Segmentation is an essential step to permit individual organs to be distinguished from one another. Currently, segmentation is most often done by "post production" techniques -- after the full image data set is acquired, the data is segmented either manually (structures meticulously drawn or identified by hand) or by various semi-automatic algorithms. Some promising progress toward fully automated 3-D segmentation of both hard (bone) and soft (skin, muscle, etc.) tissue is being realized (17,48). What is ultimately desirable is for the image to be automatically segmented "on the fly" as it is being acquired, and then be immediately usable as the fully segmented organ with or without a series of cross sectional or multiple orthogonal views. Until this occurs, capabilities for routine visualization of 3-D image datasets as virtual organs and tissue will be available at only a few centers with sufficient resources to effectively process the images.
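A representative semi-automatic technique of the kind mentioned above is seeded region growing: the operator marks one voxel inside the organ, and the algorithm collects all connected voxels within an intensity window. The sketch below assumes a simple nested-list volume and 6-connectivity; it illustrates the principle rather than any specific clinical implementation.

```python
from collections import deque

def region_grow(volume, seed, low, high):
    """Semi-automatic segmentation by region growing: starting from a
    user-supplied seed voxel, collect all 6-connected voxels whose
    intensity falls within [low, high]. `volume` is a nested list
    indexed (z, y, x); returns the set of segmented coordinates."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    segmented, frontier = set(), deque([seed])
    while frontier:
        z, y, x = frontier.popleft()
        if (z, y, x) in segmented or not (0 <= z < nz and 0 <= y < ny and 0 <= x < nx):
            continue
        if not (low <= volume[z][y][x] <= high):
            continue  # outside the intensity window: not part of the organ
        segmented.add((z, y, x))
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            frontier.append((z + dz, y + dy, x + dx))
    return segmented

# Toy single-slice volume: a bright 2x2 "organ" in a dark background.
vol = [[[0, 0, 0, 0],
        [0, 9, 9, 0],
        [0, 9, 9, 0],
        [0, 0, 0, 0]]]
seg = region_grow(vol, (0, 1, 1), 5, 10)
print(len(seg))  # 4
```

The manual labor in practice lies in choosing seeds and thresholds and in correcting leaks where distinct organs touch with similar intensities, which is why fully automatic segmentation remains the goal.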

As with segmentation, solutions to the problem of automatic, frameless registration between two or more images are needed for routine clinical implementation of these visualization methods. Currently, fiducial markers are required for proper acquisition and alignment (fusion) of images from different imaging sources, such as CT and MRI, or video and CT. While this can be routinely accomplished with static images, the application to dynamic images as obtained during surgery is limited to a few selected applications, such as stereotactic neurosurgery. In order to extend the utilization to a broader field of procedures, robust solutions will need to be developed for on-line dynamic acquisition (49) and fusion of in vivo organs and tissues which change in shape and position.
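Once matched fiducial coordinates are available in both scans, the rigid alignment itself has a closed-form least-squares solution via the singular value decomposition (the Kabsch algorithm). The sketch below shows that step under the assumption of already-paired marker lists; the names and test points are illustrative.

```python
import numpy as np

def register_fiducials(src, dst):
    """Least-squares rigid alignment (rotation R, translation t)
    mapping fiducial points `src` onto matched points `dst`, via the
    SVD of the cross-covariance matrix (Kabsch algorithm).
    Both inputs are (N, 3) arrays of corresponding markers."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Recover a known 90-degree rotation about z plus a translation.
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_true = np.array([5., -2., 3.])
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
dst = src @ R_true.T + t_true
R, t = register_fiducials(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

The hard part clinically is not this computation but obtaining reliable correspondences without implanted markers, and extending the rigid model to organs that deform between (or during) acquisitions.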

Some organs or tissues require special preparation in order to be properly visualized. Gastrointestinal structures such as the colon need to be "prepped" with various regimens to remove feces that interfere with segmentation and interpretation. Under most circumstances, blood vessels require injections of contrast material to enhance their contrast for visualization using medical imaging scanners. While some traditional preparatory methods may prove effective, other newer techniques will be needed to provide a level of visualization adequate for routine clinical implementation.

Summary

There is a broad range of potential applications of the Visible Human and Specific Patient datasets, of which virtual endoscopy is an integral part. For example, once the image data is acquired and "virtualized", it can serve as a virtual gross anatomic (or pathologic) specimen for diagnosis, as a virtual endoscopy model for pre-operative planning of a specific complicated surgical (or radiotherapy) procedure, as a surgical (or catheter-based or radiotherapy) procedure simulator to be practiced upon, and in certain cases as a prognostic planner. To help relate the current state of the art to what appears to be a useful clinical goal for a virtual patient, Table 4 classifies the evolutionary types or generations of medical procedure simulation using virtual environments for applications based on all levels of gross, microscopic, and physiologic representation.


Table 4

Taxonomy for Several Generations of Virtual Anatomy

Generation   Properties                     Example

1            Geometric anatomy              3-D organ shapes
2            Physical dynamics modeling     Kinematics, deformations
3            Physiologic characteristics    Bleeding, leaking bile
4            Microscopic anatomy            Neurovascular, glandular
5            Biochemical systems            Endocrine, immune, shock


Each succeeding generation is more complex, integrates the preceding generation, and follows a chronological development based upon technical limitations of the time period. The ultimate goal is to produce a virtual photo-realistic human that has complete physical, physiologic and systemic fidelity. Initially 3-D geometric anatomic shapes were computed, with some simple interactivity (generation 1). This permitted identification of gross anatomic structures, simple fly-throughs, and very crude manipulation of cartoon-like organs, such as gall bladder, liver, etc. Computer power rapidly increased to usher in generation 2, which had more realistic graphics, included physical properties of tissue, such as stretching and deformation (plasticity), and allowed interactive positioning of anatomic structures. Interactive simulations could be performed (deform, cut, divide, etc.) on any portion of the virtual organ. There are a few virtual representations that are beginning to incorporate generation 3, which includes physiologic properties such as breathing, bleeding, leaking of bile or urine, motility, etc. in highly realistic renderings. Generation 3 has utilized either the Visible Human or patient specific data derived from CT, MRI and other imaging modalities. These comparatively high resolution actual data sets have significantly increased the realism, visual fidelity and clinical usefulness because the virtual representations are based upon real human anatomy, not upon graphic drawings or approximations.

One new application in "spatial physiology" developed in the Biomedical Imaging Resource at Mayo Clinic could be considered one of the first examples of generation 4. The purpose of generation 4 is to add microscopic-level detail to the anatomy, rendering with high fidelity miniature structures such as neurovascular bundles, glandular structures within mucosal surfaces, and even individual cells. In Figure 23 a 3-D model (left) of a single neuron from the inferior mesenteric ganglia of the gastro-intestinal system obtained from a confocal microscope scan is shown, along with four virtual endoscopy views (right) of this extraordinary model captured from a fly-through of the neuron. The views include a close-up of the surface and three views from "inside" the neuron. The location of somal junctions for several dendrites can be seen, and one view (lower right) projects into a single dendritic appendage. Images of such miniature structures, including the ganglia and individual cells obtained from microscopes, can be accurately placed within the anatomy of the Visible Human, as shown in Figure 24, to provide a global framework and context for study of the microstructures. Figure 25 illustrates another "celluloscopic" fly-through of a single mesenteric neuron, but one for which receptor sites for specific neurotransmitters have been determined and mapped onto the model surface. These receptor sites can be seen at specific locations from within the body of the cell. This type of fourth-generation synthesis of micro-, histo-, and gross anatomy demonstrates the potential for a seamless integration of human anatomy and spatial physiologic function from macroscopic to microscopic levels.

The fifth generation in the taxonomy of virtual anatomy will include complex biochemical parameters with multi-organ system integration to represent systemic functions, such as neuro-endocrine and immunologic functions, or pathologic states such as shock. As the fourth, fifth, and perhaps successive generations are realized, the virtual representation will continue to become more realistic, and hence more useful. A virtual endoscopic procedure at this level of image representation might eventually become indistinguishable from the actual patient and permit a continuous, seamless fly-through from gross anatomic, through endoscopic, and finally to microscopic realms.

Virtual endoscopy is the fountainhead of an entire generation of new diagnostic opportunities. Theoretically, most if not all internal structures can be visualized using this modality (i.e., "endoscopic virtuality"). The clinical benefits are evident a priori, though clinical validation will be needed. No longer will the patient require a sedative, insertion of an instrument into a natural body opening or minimally invasive opening, or hospitalization or ambulatory center observation following the procedure, all of which increase the risk of complications and/or add cost to the procedure. Early success will come in those organ systems of large size that have no intrinsic motion and which maintain a lumen without special assistance or preparation. This includes but is not limited to the tracheo-bronchial tree, renal system from calyces to bladder, pancreatico-biliary tree, uterus, cerebro-ventricular system, spinal canal, and major joints. Areas that require special attention include the upper GI tract (requires insufflation), colon (requires bowel prep), vascular tree (requires contrast material), temporal bone and inner ear (requires higher resolution), and heart (requires motion accommodation/correction). In addition to the benefits for non-invasive diagnosis, the potential exists for use of virtual endoscopy in combination with non-invasive energy-directed therapeutics, such as high-intensity focused ultrasound. And the surface has only been scratched in the areas of education, training, and treatment planning (48,50). Virtual endoscopy epitomizes a first realization of the remarkable promise of modern imaging and computing technologies. A real fantastic voyage into the inner sanctum of the human body has been launched with the aid of the Visible Human Datasets.

Acknowledgments

The author is grateful to his colleagues in the Biomedical Imaging Resource at Mayo Clinic who provided technical assistance in producing the software described and the visualizations illustrated in this manuscript. Clinical image data was generously provided by Dr. Michael Vannier and Dr. Charlene Prather. The neuron data was provided by Dr. Joseph Szurszewski. The author also acknowledges the use of the Visible Human Male and Female Datasets from the National Library of Medicine. Special thanks are extended to Darlene Bernard for preparation of the manuscript. Portions of this paper are reproduced from a manuscript accepted for publication in PRESENCE, by permission of its authors, R. Satava and R. Robb.

References

1. Geiger, B., R. Kikinis: Simulation of endoscopy. AAAI Spring Symposium Series: Applications of Computer Vision in Medical Images Processing, Stanford University, pp.138-140, 1994.

2. Rubin, G.D., C.F. Beaulieu, V. Argiro, H. Ringl, A.M. Norbash, J.F. Feller, M.D. Drake, R.B.Jeffrey, S. Napel: Perspective volume rendering of CT and MR images: Applications for endoscopic imaging. Radiology, 199:321-330, 1996.

3. Wickham, J.E.A.: Minimally invasive surgery: Future developments. BMJ, 308:193-196, 1994.

4. Rusinek, H., M.R. Mourino, H. Firooznia, J.C. Weinreb, N.E. Chase: Volumetric rendering of MR images. Radiology, 171:269-272, 1989.

5. Napel, S.A.: Basic principles of spiral CT. In: Principles and Techniques of 3D Spiral CT Angiography, Eds: E.K. Fishman, R.B. Jeffrey, Raven, New York, New York, p. 167-182, 1995.

6. Robb, R.A., B. Cameron: Virtual reality assisted surgery program. In: Interactive Technology and the New Paradigm for Healthcare, Chap. 48, IOS Press and Ohmsha, 1995.

7. Merril, J.R., G.M. Preminger, R.K. Babayan, R.T. Roy, G.L. Merril: Surgical simulation using virtual reality technology: Design, implementation and implications. Surgical Technology International, III:53-60, 1994.

8. Satava, R.M.: Computers in surgery: Telesurgery, virtual reality and the new world order of medicine. Contemp Surg 47:204-208, 1995.

9. Fischer, H., B. Neisius, R. Trapp: Tactile feedback for endoscopic surgery. In: Interactive Technology and the New Paradigm for Healthcare, Eds.: K. Morgan, R.M. Satava, H.B. Sieburg, R. Mattheus, J.P. Christensen, IOS Press and Ohmsha, p. 114, 1995.

10. Fröhlich, B., W. Krüger, G. Grunst, G. Wesche: The responsive workbench: A virtual working environment for physicians. In: Interactive Technology and the New Paradigm for Healthcare, Eds.: K. Morgan, R.M. Satava, H.B. Sieburg, R. Mattheus, J.P. Christensen, IOS Press and Ohmsha, p. 118, 1995.

11. Holloway, R., H. Fuchs, W. Robinette: Virtual worlds research at the University of North Carolina at Chapel Hill as of February 1992. In: Computer Graphics-International Visual Computing, Ed. T.L. Kunii, Springer, pp. 109-128, 1992.

12. Kaltenborn, K.F., O. Rienhoff: Virtual reality in medicine. Meth. Inform. Med., 32(5):407-417, 1993.

13. Vining, D.J., D.W. Gelfand: Noninvasive colonoscopy using helical CT scanning, 3D reconstruction, and virtual reality (Abstr). SGR Scientific Program, 70, 1994.

14. Vining, D.J., R.Y. Shifrin, E.K. Grishaw, et al.: Virtual colonoscopy (abstr.). Radiology, 193:446, 1994.

15. Lorensen, W.E., F.A. Jolesz, R. Kikinis: The exploration of cross-sectional data with a virtual endoscope. In: Interactive Technology and the New Paradigm for Healthcare, Eds.: K. Morgan, R.M. Satava, H.B. Sieburg, R. Mattheus, J.P. Christensen, IOS Press and Ohmsha, p.221-230, 1995.

16. Robb, R.A., C. Barillot: Interactive display and analysis of 3-D medical images. IEEE Transactions on Medical Imaging, 8(3):217-226, 1989.

17. Robb, R.A., D.P. Hanson: The ANALYZE software system for visualization and analysis in surgery simulation. In: Computer Integrated Surgery, Eds. Steve Lavalle, Russ Taylor, Greg Burdea and Ralph Mosges, MIT Press, 1993.

18. Hara, A.K., C.D. Johnson, J.E. Reed, et al.: Detection of colorectal polyps by computed tomographic colography: feasibility of a novel technique. Gastroenterology, 110:284-290, 1996.

19. Jolesz, F., F. Shtern: The operating room of the future. Proc. Nat'l Cancer Inst. Workshop, 27:326-328, April 1992.

20. Kikinis, R., P. Langham Gleason, F.A. Jolesz: Surgical planning using computer-assisted three-dimensional reconstructions. In: Computer Integrated Surgery, Eds. Russell Taylor, Stephane Lavalle, Greg Burdea, Ralph Mosges. MIT Press, pp. 147-154, 1995.

21. Napel, S., G. Rubin, C. Beaulieu, R. Jeffrey Jr., V. Argiro: Perspective volume rendering of cross-sectional images for simulated endoscopy and intra-parenchymal viewing. SPIE's Medical Imaging 1996, Newport Beach, CA, 2707-07, p. 16, February 10-15, 1996.

22. Shahidi, R., V. Argiro, S. Napel, L. Gray, H.P. McAdams, G.D. Rubin, C.F. Beaulieu, R.B. Jeffrey, A. Johnson: Assessment of several virtual endoscopy techniques using computed tomography and perspective volume rendering. Proceedings of Visualization in Biomedical Computing '96, Ed., G. Goos, J. Hartmanis and J. van Leeuwen, Springer, 1131:521-528, 1996.

23. Mori, K., J. Hasegawa, J. Toriwaki, H. Anno, K. Katada: A fast rendering method using the tree structure of objects in virtualized bronchus endoscope system. Proceedings of Visualization in Biomedical Computing '96, Ed., G. Goos, J. Hartmanis and J. van Leeuwen, Springer, 1131:33-42, 1996.

24. Ramaswamy, K., W.E. Higgins: Endoscopic exploration and measurement in 3D radiological images. In: SPIE's Medical Imaging, number 2710-51, page 243, Newport Beach, CA, February 1996.

25. Hong, L., A. Kaufman, Y-C. Wei, A. Viswambharan, M. Wax, Z. Liang: 3D virtual colonoscopy. In: IEEE Symposium on Frontiers in Biomedical Visualization, pp. 26-32, 1995.

26. Satava, R.M., R.A. Robb: Virtual endoscopy: Application of 3-D visualization to medical diagnosis. In press, PRESENCE, January 1997.

27. Cameron, B.M., A. Manduca, R.A. Robb: Patient specific anatomic models: Geometric surface generation from 3 dimensional medical images using a specified polygonal budget. In: Health Care in the Information Age: Future Tools for Transforming Medicine, Eds: S.J. Weghorst, H.B. Sieburg, K.S. Morgan. IOS Press and Ohmsha, pp. 447-460, 1996.

28. Gueziec, A., D. Dean: The Wrapper: A surface optimization algorithm that preserves highly curved areas. Proceedings of Visualization in Biomedical Computing '94, Ed., R.A. Robb, SPIE, pp.631-642, 1994.

29. Hoppe, H.: Surface reconstruction from unorganized points. Doctoral Dissertation, University of Washington, 1994.

30. Karron, D.B., J. Cox, B. Mishra: New findings from spiderweb algorithm: Toward a digital morse theory. Proceedings of Visualization in Biomedical Computing '94, Ed., R.A. Robb, SPIE, pp. 643-657, 1994.

31. Thalmann, N.M., D. Thalmann: Towards virtual humans in medicine: A prospective view. Computerized Medical Imaging & Graphics, 18:97-106, 1994.

32. McMillan, L., G. Bishop: Plenoptic modeling: An image-based rendering system. Computer Graphics: Proceedings of SIGGRAPH '95, (Los Angeles, California, August 6-11, 1995), Annual Conference Series, ACM SIGGRAPH, pp. 39-46, 1995.

33. Lorensen, W.E., H.E. Cline: Marching cubes: A high resolution 3-D surface construction algorithm. Computer Graphics, 21(3):163-169, 1987.

34. Rosen, J.: From computer-aided design to computer-aided surgery. Proceedings of Medicine Meets Virtual Reality. San Diego, CA, June 1-2, 1992.

35. Drebin, R., L. Carpenter, P. Hanrahan: Volume Rendering. SIGGRAPH '88, pp. 665-674, 1988.

36. Hohne, K.H., M. Bomans, A. Pommert, U. Tiede, G. Weibeck: Rendering tomographic volume data: Adequacy of methods for different modalities and organs. In: 3D Imaging in Medicine, Eds. K.H. Hohne, et al., NATO ASI Series, Vol. F 60, pp. 197-215, 1990.

37. Levoy, M.: Display of surfaces from volume data. IEEE Computer Graphics and Applications, 8(5):29-37, 1988.

38. Satava, R.M.: Virtual reality surgical simulator. Surg. Endosc., 7:203-205, 1993.

39. National Library of Medicine (US) Board of Regents: Electronic Imaging: Report of the Board of Regents. US Department of Health and Human Services, Public Health Service, National Institutes of Health, 1990. NIH Publication 90-2197.

40. Laurent, C.R., D.K. Ysebaert, G. Hubens, R.H. Van Hee: An analysis of 3D-endoscopic systems: Differences in image acquisition, -processing and -reproduction. Surg. Endosc. 8(8):1014, 1994.

41. Jones, S.B., R.M. Satava: Virtual endoscopy of the head and neck: Diagnosis using three-dimensional visualization and virtual representation. In: Health Care in the Information Age, Eds: H. Sieburg, S. Weghorst, K. Morgan, IOS Press and Ohmsha, p. 152, 1996.

42. Peifer, J.W., W.D. Curtis, M.J. Sinclair: Applied virtual reality for simulation of endoscopic retrograde cholangio -- Pancreatography (ERCP). In: Health Care in the Information Age, Eds: H. Sieburg, S. Weghorst, K. Morgan, IOS Press and Ohmsha, p. 36, 1996.

43. Preminger, G.M., R.K. Babayan, G.L. Merril, R. Raju, A. Millman, J.R. Merril: Virtual reality surgical simulation in endoscopic urologic surgery. In: Health Care in the Information Age, Eds: H. Sieburg, S. Weghorst, K. Morgan, IOS Press and Ohmsha, p. 157, 1996.

44. Silverman, S.G., R. Kikinis, D.M. Chernoff, D.F. Adams, S.E. Seltzer, K.R. Loughlin: Three-dimensional imaging of the kidneys using spiral CT: A potential surgical planning tool. Radiological Society of North America, Chicago, IL, p. 136, December 1992.

45. Ramaswamy, K., W.E. Higgins: Endoscopic exploration and measurement in 3D radiological images. SPIE's Medical Imaging 1996, Newport Beach, CA, 2710-51, p.243, February 10-15, 1996.

46. Davis, C.P., M.E. Ladd, B.J. Romanowski, S. Wildermuth, J.F. Knoplioch, J.F. Debatin: Human aorta: Preliminary results with virtual endoscopy based on three-dimensional MR imaging data sets. Radiology, 199:37-40, 1996.

47. Kimura, F., Y. Shen, S. Date, S. Azemoto, T. Mochizuki: Thoracic aortic aneurysm and aortic dissection: New endoscopic mode for three-dimensional CT display of aorta. Radiology, 198:573-578, 1996.

48. Robb, R.A., D.P. Hanson, J.J. Camp: Computer-aided surgery planning and rehearsal at Mayo Clinic. Computer, 29(1):39-47, 1996.

49. Blumenfeld, S.M.: A glimpse at the surgery of the next century. In: Health Care in the Information Age, Eds: H. Sieburg, S. Weghorst, K. Morgan, IOS Press and Ohmsha, p. 319, 1996.

50. Robb, R.A.: Three-Dimensional Biomedical Imaging. Principles and Practice. VCH Publishers, Inc., New York, NY, 1995.


First published: 01 October 1996
Last updated: 29 June 2001
Date Archived: 12 June 2012