Abstract:
Historically, medical students have practiced on cadavers to learn human anatomy, as
have physicians wanting to brush up on their knowledge. However, because of storage costs
and the limited availability of cadavers, such practice has proven problematic. As computers
have become more powerful, medical professors have envisioned a day when they will be
able to dissect bodies with the assistance of virtual reality. We have developed the Virtual
Human Anatomy and Surgery System (VHASS) as a potential solution. VHASS uses
cryosection images (natural-color images generated by slicing a frozen cadaver) to reconstruct
computerized three-dimensional cadavers. VHASS enhances human anatomy education
by creating three-dimensional volume models that include details of human organs,
giving medical students and physicians unlimited access to realistic virtual cadavers. Major
components in VHASS include three-dimensional virtual humans, direct volume rendering
of virtual humans, surface models of segmented human parts, and real-time manipulation
of virtual humans.
Direct volume rendering of unsegmented cryosection images is still an open research
topic. Unlike traditional volume rendering, which uses transfer functions to map
scalar values to colors and opacity, direct volume rendering of cryosection images needs
efficient transfer functions that map color vectors to opacity, a mapping complicated by the
non-linearity of color space. We have created a series of new transfer functions for volume
rendering of unsegmented cryosection images.
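The sketch below illustrates one possible form such a vector-to-opacity mapping could take; it is an illustration only, not the transfer functions developed in this work. It maps an RGB cryosection sample to an opacity value by its distance from a reference tissue color in CIELAB, a more perceptually uniform space; the reference color and linear falloff radius are assumed for the example.

```cpp
// Illustrative sketch only: one way to map an RGB color vector to opacity.
// The reference tissue color and falloff radius are assumptions, not values
// used by VHASS.
#include <algorithm>
#include <cmath>

struct Lab { double L, a, b; };

// sRGB (components in 0..1) -> CIELAB under a D65 white point.
Lab rgbToLab(double r, double g, double b) {
    auto linear = [](double c) {
        return c <= 0.04045 ? c / 12.92 : std::pow((c + 0.055) / 1.055, 2.4);
    };
    r = linear(r); g = linear(g); b = linear(b);
    // Linear RGB -> XYZ, normalized by the D65 reference white.
    double X = (0.4124 * r + 0.3576 * g + 0.1805 * b) / 0.95047;
    double Y = (0.2126 * r + 0.7152 * g + 0.0722 * b);
    double Z = (0.0193 * r + 0.1192 * g + 0.9505 * b) / 1.08883;
    auto f = [](double t) {
        return t > 0.008856 ? std::cbrt(t) : 7.787 * t + 16.0 / 116.0;
    };
    return { 116.0 * f(Y) - 16.0, 500.0 * (f(X) - f(Y)), 200.0 * (f(Y) - f(Z)) };
}

// Opacity transfer function: fully opaque at the reference color, fading to
// transparent once the CIELAB distance exceeds a chosen radius.
double colorToOpacity(double r, double g, double b,
                      const Lab& reference, double radius = 25.0) {
    Lab s = rgbToLab(r, g, b);
    double d = std::sqrt((s.L - reference.L) * (s.L - reference.L) +
                         (s.a - reference.a) * (s.a - reference.a) +
                         (s.b - reference.b) * (s.b - reference.b));
    return std::clamp(1.0 - d / radius, 0.0, 1.0);
}
```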
To create surface models of human parts, we separate human tissues within cryosection
images, dissect all human organs according to their anatomical structures, and reconstruct
a three-dimensional volume model for each part. VHASS renders each part as a high-resolution,
natural-color three-dimensional model and labels it to facilitate learning. This
enables users to group different parts to better understand human anatomy.
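As a rough illustration of the reconstruction step, the sketch below assembles a per-part binary volume from a stack of segmented slices. The slice format, structure names, and function are assumptions made for the example, not the VHASS implementation.

```cpp
// Illustrative sketch only: build a binary volume for one part from segmented
// cryosection slices, where each slice pixel stores the ID of its part.
#include <cstddef>
#include <cstdint>
#include <vector>

// One segmented slice: width*height part IDs in row-major order (assumed format).
struct LabelSlice {
    int width = 0, height = 0;
    std::vector<std::uint16_t> labels;   // labels[y * width + x] = part ID
};

// Binary volume for a single part: voxel = 1 inside the part, 0 outside.
struct PartVolume {
    int width = 0, height = 0, depth = 0;
    std::vector<std::uint8_t> voxels;    // voxels[(z * height + y) * width + x]
};

PartVolume extractPart(const std::vector<LabelSlice>& slices, std::uint16_t partId) {
    PartVolume v;
    if (slices.empty()) return v;
    v.width  = slices[0].width;
    v.height = slices[0].height;
    v.depth  = static_cast<int>(slices.size());
    v.voxels.assign(static_cast<std::size_t>(v.width) * v.height * v.depth, 0);
    for (int z = 0; z < v.depth; ++z)
        for (int y = 0; y < v.height; ++y)
            for (int x = 0; x < v.width; ++x)
                if (slices[z].labels[static_cast<std::size_t>(y) * v.width + x] == partId)
                    v.voxels[(static_cast<std::size_t>(z) * v.height + y) * v.width + x] = 1;
    return v;
}
```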
VHASS allows real-time interactions, such as drilling, scanning, and slicing of human
parts. We regenerate human part surface models at run-time for deforming interactions.
We have analyzed the limitations of the well-known Marching Cubes algorithm and modified
the algorithm to work with our data. We have also developed a new neighbor-based surface
reconstruction algorithm, which matches the performance of Marching Cubes without its
limitations. For better performance, the new algorithm has been ported to modern graphics
hardware using the geometry shader. Our geometry shader implementation serves as an
example of exploiting GPU parallel processing hardware.
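To give a flavor of neighbor-driven extraction, the sketch below emits a quad wherever a filled voxel of a binary volume borders an empty 6-neighbor (a cuberille-style approach under assumed data structures, not the neighbor-based algorithm developed in this work). The same independent per-voxel expansion is the kind of work a geometry shader can parallelize, one input point expanding into face quads.

```cpp
// Illustrative sketch only: cuberille-style boundary extraction on a binary
// volume. Data layout and names are assumptions, not the VHASS implementation.
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Quad { std::array<float, 3> corners[4]; };      // four corner positions

using Volume = std::vector<std::uint8_t>;              // voxels[(z*H + y)*W + x], 1 = inside

std::vector<Quad> extractBoundaryQuads(const Volume& v, int W, int H, int D) {
    auto inside = [&](int x, int y, int z) {
        if (x < 0 || y < 0 || z < 0 || x >= W || y >= H || z >= D) return false;
        return v[(static_cast<std::size_t>(z) * H + y) * W + x] != 0;
    };
    // Offsets to the six face neighbors.
    const int n[6][3] = { {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1} };
    std::vector<Quad> quads;
    for (int z = 0; z < D; ++z)
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                if (!inside(x, y, z)) continue;
                for (const auto& d : n) {
                    // Faces shared with another filled voxel are interior: skip them.
                    if (inside(x + d[0], y + d[1], z + d[2])) continue;
                    // Emit the unit face between this voxel and its empty neighbor.
                    int axis = (d[0] != 0) ? 0 : (d[1] != 0) ? 1 : 2;
                    int u = (axis + 1) % 3, w = (axis + 2) % 3;
                    std::array<float, 3> base = { float(x), float(y), float(z) };
                    base[axis] += (d[axis] > 0) ? 1.0f : 0.0f;
                    Quad q;
                    for (int c = 0; c < 4; ++c) {
                        q.corners[c] = base;
                        q.corners[c][u] += float(c == 1 || c == 2);
                        q.corners[c][w] += float(c >= 2);
                    }
                    quads.push_back(q);
                }
            }
    return quads;
}
```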
VHASS supports stereo rendering, haptic interaction, tracking, and three-dimensional
content production. Using the Sharp three-dimensional display on a laptop, VHASS provides
low-cost, portable stereo rendering of human parts without requiring special glasses.
Integrated with a large stereo projector and ultrasonic trackers, VHASS allows users to
manipulate human parts in an immersive stereo environment. By integrating the SensAble
Omni haptic device, VHASS enables users to touch and feel human parts. VHASS also
supports three-dimensional content production by allowing students to print physical
models of human parts.