Three Dimensional Projection Environment for
Molecular Design and Surgical Simulation
Matthew Wampole (a), Eric Wickstrom (a,d), Chang-Po Chen (a), Devakumar Devadhas (b), Yuan-Yuan Jin (a), Jeffrey M. Sanders (a),
John C. Kairys (c), Martha L. Ankeny (e), Rui Hu (f), Kenneth E. Barner (f), Karl V. Steiner (f) and Mathew L. Thakur (b,d)
(a) Biochemistry and Molecular Biology, (b) Radiology, (c) Surgery, (d) Kimmel Cancer Center, (e) Academic and Instructional Support and Resources, Thomas Jefferson University, Philadelphia, PA 19107
(f) Electrical and Computer Engineering, University of Delaware, Newark, DE 19716
Introduction
Surgery involves palpating and manipulating tissues in the operating room environment. However, sophisticated radiographic systems present only visual images. At present, the surgeon must imagine the actual arrangement of a particular patient's organs before the operation. Unanticipated complications, such as bleeding from unusually placed arteries or veins, or unusual lesion geometry, lengthen the procedure and place extra stress on the patient and the surgeons.
We are developing agents for positron emission tomography (PET) imaging of
cancer gene mRNA expression to positively identify malignant tissues. We will fuse
mRNA PET images with anatomical computerized tomography (CT) images to enable
volumetric (3D) haptic (touch-and-feel) simulation of pancreatic cancer and
surrounding organs prior to surgery in a particular patient.
We hypothesize that our fusion of genetic, visual, and tactile information will
improve demarcation of clear margins, and will ultimately permit surgeons to better
plan operations and to prepare for the actual pathology found.
Method

Patient Data → Amira®
- Collect image data
- Locate tumor masses
- Segment images
- Render segmented organs in 3D
- Create surface and tetrahedral meshes

3D Image Data → Simulation Open Framework Architecture (SOFA)
- Convert meshes to accepted formats (see the sketch after this list)
- Characterize mechanics of organs

Surgical Simulation
- Combine visual and physical properties
- Include haptic device into simulation
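The poster does not name a conversion tool for the mesh step, so the following is a minimal sketch assuming the Python library meshio and illustrative file names: it reads the surface and tetrahedral meshes exported from Amira and rewrites them in formats that SOFA's loaders accept.

```python
# Hedged sketch of "convert meshes to accepted formats": the library (meshio)
# and the file names are illustrative assumptions, not the authors' toolchain.
import meshio

surface = meshio.read("pancreas_surface.stl")   # triangulated surface exported from Amira
volume = meshio.read("pancreas_volume.vtk")     # tetrahedral mesh exported from Amira

meshio.write("pancreas_surface.obj", surface)                # for SOFA visual/collision models
meshio.write("pancreas.msh", volume, file_format="gmsh22")   # for SOFA volumetric FEM loaders
```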
Patient Data (CT/PET)
Patient-specific images provide the anatomical positions of normal and cancerous tissue in two-dimensional image slices. Many modalities are available; in this study we use de-identified CT and PET images. The 2D images are valuable for diagnosis, but their lack of depth limits their usefulness. PET imaging of [18F]deoxyglucose accumulation assists in locating cancerous regions in the CT images that might otherwise be hidden.
[Figure: patient CT slice showing the tumor, and the same CT fused with FDG-PET.]
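As an illustration of how the PET hotspots can be brought into the CT frame, here is a minimal sketch assuming SimpleITK, NIfTI file names of our choosing, and a simple percentile threshold; the poster does not specify the fusion pipeline actually used.

```python
# Minimal sketch of CT/PET fusion: resample the PET volume onto the CT grid
# and flag high-uptake voxels. SimpleITK, the file names, and the percentile
# threshold are illustrative assumptions.
import numpy as np
import SimpleITK as sitk

ct = sitk.ReadImage("patient_ct.nii.gz")
pet = sitk.ReadImage("patient_fdg_pet.nii.gz")

# Resample PET into the CT geometry (identity transform, linear interpolation).
pet_on_ct = sitk.Resample(pet, ct, sitk.Transform(), sitk.sitkLinear, 0.0, pet.GetPixelID())

pet_arr = sitk.GetArrayFromImage(pet_on_ct)
hotspots = pet_arr > np.percentile(pet_arr, 99.5)   # crude high-uptake mask
print("candidate tumor voxels:", int(hotspots.sum()))
```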
Incorporating Haptics

Palpation is important for locating the cancerous tumor and determining surgical margins. Using the Phantom Omni to provide haptic feedback, the simulation will give surgeons a chance to practice estimating the expected margins before going into surgery. Surgeons will be interviewed on the 'feel' of the tumor and organs to fine-tune the material properties of the models.
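To make the palpation feedback concrete, here is a toy sketch of the kind of penalty-force calculation a haptic rendering loop could use. The function, organ names, and stiffness values are placeholders of ours (the quantities to be tuned from the surgeon interviews); device I/O and collision queries are assumed to come from the haptic SDK and the simulation.

```python
# Toy sketch of penalty-based haptic feedback for palpation: when the stylus
# tip penetrates an organ surface, push back along the surface normal with a
# per-organ stiffness. All names and numbers are illustrative placeholders.
import numpy as np

STIFFNESS = {"pancreas": 400.0, "tumor": 1200.0}  # N/m, illustrative values


def palpation_force(tip_pos, surface_point, surface_normal, tissue):
    """Return the force (N) to send to the haptic device for one frame."""
    normal = surface_normal / np.linalg.norm(surface_normal)
    penetration = np.dot(surface_point - tip_pos, normal)
    if penetration <= 0.0:          # stylus is outside the tissue
        return np.zeros(3)
    return STIFFNESS[tissue] * penetration * normal  # Hooke-style penalty force


# Example: the stylus tip has pressed 2 mm into the tumor surface.
print(palpation_force(np.array([0.0, -0.002, 0.0]),
                      np.array([0.0, 0.0, 0.0]),
                      np.array([0.0, 1.0, 0.0]),
                      "tumor"))
```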
Amira®
Amira® is a powerful platform for visualizing biomedical images. The patient's data were segmented manually with assistance from pre-installed tools. Automated segmentation of the entire patient was complicated by noise and by nearly indistinguishable differences in the greyscale values of various organs. Typically, the image stacks are reviewed for days or weeks before surgery to identify small features. Manual segmentation takes about the same amount of time while improving spatial recognition of the anatomy. Amira can also be used to build meshes of the segmented images for use in other programs.
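A small sketch makes the segmentation difficulty concrete: a single greyscale (Hounsfield-unit) window plus morphological cleanup lumps neighboring soft-tissue organs into one region, which is why the data were segmented manually in Amira. The window values and library choice (NumPy/SciPy) are illustrative assumptions.

```python
# Hedged sketch of why naive automated segmentation struggles: overlapping
# soft-tissue greyscale ranges leave organs merged into one component.
import numpy as np
from scipy import ndimage


def soft_tissue_mask(ct_volume, low=30, high=80):
    """Threshold a CT volume (Hounsfield units) to a rough soft-tissue mask."""
    mask = (ct_volume > low) & (ct_volume < high)
    mask = ndimage.binary_opening(mask, iterations=2)   # suppress noise speckle
    labels, n = ndimage.label(mask)                     # connected components
    # The pancreas, bowel, and nearby vessels often land in the same component,
    # so manual correction in Amira is still required.
    return labels, n
```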
EGFR Molecular Dynamics

EGF binding to EGFR enables cell entry. We will identify a fragment of EGF to serve as a hook for internalization of reporter-PNA-peptide hybridization probes for imaging of cancer gene mRNA. At the current stage, we show the result of a Langevin dynamics simulation in explicit water 40 nsec after EGF binding. EGF 32-48 behaves similarly.
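The poster does not name the molecular dynamics package, so the following is a hedged sketch of a Langevin dynamics run in explicit water using OpenMM; the structure file, force field, and solvation settings are illustrative assumptions.

```python
# Hedged sketch of a Langevin dynamics run in explicit water. OpenMM, the input
# file name, and the force field choice are illustrative assumptions, not the
# parameters used for the poster.
from openmm import app, unit, LangevinMiddleIntegrator

pdb = app.PDBFile("egf_egfr_complex.pdb")          # hypothetical starting structure
ff = app.ForceField("amber14-all.xml", "amber14/tip3p.xml")

modeller = app.Modeller(pdb.topology, pdb.positions)
modeller.addSolvent(ff, padding=1.0 * unit.nanometer)   # explicit water box

system = ff.createSystem(modeller.topology,
                         nonbondedMethod=app.PME,
                         nonbondedCutoff=1.0 * unit.nanometer,
                         constraints=app.HBonds)
integrator = LangevinMiddleIntegrator(300 * unit.kelvin,
                                      1.0 / unit.picosecond,
                                      0.002 * unit.picoseconds)

sim = app.Simulation(modeller.topology, system, integrator)
sim.context.setPositions(modeller.positions)
sim.minimizeEnergy()
sim.step(20_000_000)   # 40 ns at a 2 fs time step
```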
SOFA Framework
SOFA is an open-source simulation framework developed by researchers at INRIA and their collaborators. Its node-based architecture makes simulated scenes highly customizable. Each 'object' consists of a behavior, collision, and visual node. A typical simulation advances the scene with ordinary differential equation solvers such as implicit Euler integration, but other solvers can easily be implemented.
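As a concrete illustration of the node/object structure, here is a hedged sketch of a SOFA scene written against the SofaPython3 bindings; the component parameters, material values, and mesh file names (matching the conversion sketch above) are illustrative assumptions rather than the poster's actual scene.

```python
# Hedged sketch of a SOFA scene with the behavior/visual split the poster
# describes, using SofaPython3. Parameters and file names are illustrative.
def createScene(root):
    root.gravity = [0.0, -9.81, 0.0]
    root.addObject("DefaultAnimationLoop")

    organ = root.addChild("Pancreas")

    # Behavior: implicit Euler time integration and a deformable FEM model
    # on the tetrahedral mesh.
    organ.addObject("EulerImplicitSolver", rayleighMass=0.1, rayleighStiffness=0.1)
    organ.addObject("CGLinearSolver", iterations=25, tolerance=1e-5, threshold=1e-5)
    organ.addObject("MeshGmshLoader", name="loader", filename="pancreas.msh")
    organ.addObject("MechanicalObject", src="@loader")
    organ.addObject("TetrahedronFEMForceField", youngModulus=3000, poissonRatio=0.45)
    organ.addObject("UniformMass", totalMass=0.1)

    # Visual: a surface model mapped onto the deforming volume.
    # A collision child node (e.g. a TriangleCollisionModel on a simplified
    # surface) would be added the same way.
    visual = organ.addChild("Visual")
    visual.addObject("MeshOBJLoader", name="surf", filename="pancreas_surface.obj")
    visual.addObject("OglModel", name="model", src="@surf")
    visual.addObject("BarycentricMapping", input="@..", output="@model")
    return root
```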
Conclusions

Turning 2D CT/PET slices into 3D objects assists in understanding the topology surrounding tumor masses. Incorporating the visual and physical characteristics of a patient's anatomy will provide surgeons with an informative pre-operative tool to plan and practice the operation before the first incision. Including haptic feedback provides a familiar 'feel' as surgeons palpate the target organ, locate the tumor, and determine how large a margin of resection will be needed. Combining genetic PET imaging and contrast CT into a single fused visualization will further improve the surgeons' knowledge by more accurately pinpointing malignant tissue and any hidden blood vessels.
Future Work

- Include contrast-enhanced CT images in the model for improved vascular modeling.
- Improve simulation performance with multi-threading and CUDA.
- Improve collision detection and model interaction algorithms.
- Incorporate genetic PET imaging of cancer gene expression of tumors.

SUPPORTED BY DOD W81XWH-09-1-0577
Contact: [email protected]