Job offer: doctoral thesis position
The Research Institute Against Digestive Cancer (IRCAD) of Strasbourg and the ICube laboratory (IGG team, Computer Graphics and Geometry) of the University of Strasbourg seek high-quality candidates for a PhD position.
Title: Interactive Visualization of 3D Echography Images with Volume Rendering.
Thesis Director: Jean-Michel Dischler, Professor (dischler@unistra.fr)
Co-advisors:
Flavien Bridault, Software Development Director, IRCAD (flavien.bridault@ircad.fr)
Jonathan Sarton, Associate Professor (sarton@unistra.fr)
Location: Strasbourg, France
Keywords: Volume rendering, 3D ultrasound imaging, classification, transfer function, empty space skipping
The candidate should hold a Master's degree in Computer Science or equivalent, with the following desired skills:
• Scientific Visualization
• Volume data processing
• Knowledge of GPU/shader programming
• C++ programming
Context and motivations:
In the past few decades, surgery has made significant advances in the fight against cancer. Today, computer science research offers surgery a new revolution: augmented surgery. Augmented surgery allows the surgeon to surpass standard human cognitive skills and significantly improve the quality of care by enhancing vision, decision and gesture. An important imaging modality in surgery is ultrasound: it is painless, safe, real-time, non-invasive and compatible with standard surgical tools. Unlike computed tomography (CT), it does not require the use of X-rays, and unlike magnetic resonance (MR) imaging, it can be performed in the presence of ferromagnetic objects. In addition, its cost is much lower than that of these modalities. However, understanding these images requires great expertise to mentally reconstruct the observed structures in 3D space from the sequence of images acquired during an examination.
This is why, for several years, researchers and manufacturers in the sector have been seeking to provide 3D ultrasound images. Nowadays, this can be achieved in two ways. The first consists in reconstructing a volume from a sequence of 2D images, using a 2D ultrasound probe equipped with an electromagnetic or optical sensor; this approach is commonly referred to as "freehand 3D ultrasound". True 3D, or even 4D, ultrasound probes also exist, but they are less common and provide lower resolution and refresh rates.
Whatever the source, visualizing a 3D ultrasound image remains a particularly difficult and unresolved research topic. Today, direct volume rendering is widely accepted as the most suitable method for viewing 3D images [1]. Each voxel is assigned a colour and an opacity, allowing the operator to look through the volume. To operate globally on the image, a transfer function, conventionally 1D, maps ranges of intensity to pairs of colour and opacity.
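As an illustration of this classical pipeline, the following minimal Python sketch (illustrative only, not IRCAD code; the control points and sample values are invented) applies a piecewise-linear 1D transfer function to intensity samples along one ray and accumulates them with front-to-back alpha compositing:

```python
def make_tf_1d(control_points):
    """Piecewise-linear 1D transfer function built from (intensity, rgba) control points."""
    pts = sorted(control_points)
    def tf(intensity):
        if intensity <= pts[0][0]:
            return pts[0][1]
        if intensity >= pts[-1][0]:
            return pts[-1][1]
        for (x0, c0), (x1, c1) in zip(pts, pts[1:]):
            if x0 <= intensity <= x1:
                t = (intensity - x0) / (x1 - x0)
                return tuple(a + t * (b - a) for a, b in zip(c0, c1))
    return tf

def composite_ray(samples, tf):
    """Front-to-back compositing: C += (1 - A) * a_i * c_i, A += (1 - A) * a_i."""
    color = [0.0, 0.0, 0.0]
    alpha = 0.0
    for s in samples:
        r, g, b, a = tf(s)
        w = (1.0 - alpha) * a          # remaining transparency times sample opacity
        color = [c + w * ch for c, ch in zip(color, (r, g, b))]
        alpha += w
        if alpha > 0.99:               # early ray termination once nearly opaque
            break
    return color, alpha

# Example: fully transparent below intensity 0.3, increasingly opaque white above.
tf = make_tf_1d([(0.0, (0, 0, 0, 0.0)), (0.3, (0, 0, 0, 0.0)),
                 (1.0, (1, 1, 1, 0.8))])
c, a = composite_ray([0.1, 0.5, 0.9, 0.9], tf)
```

The early-termination test is the simplest of the acceleration tricks mentioned later: once accumulated opacity saturates, further samples cannot change the pixel.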
However, ultrasound images are very noisy and have a low dynamic range. Unlike CT and MR images, ultrasound captures changes in physical properties rather than the physical properties themselves. Consequently, conventional 1D transfer functions fail to segment homogeneous structures.
Finally, conventional ultrasound probes offer a second mode, the Doppler mode, mainly used to distinguish arteries from veins. The fusion of 3D B-mode and Doppler images is technically difficult due to the increase in data size and the blending required between the two images [5, 6].
In this thesis, we are interested in the visualization of 3D ultrasound volumes for computer-assisted sonography, diagnostics, interventional radiology, percutaneous surgery and training simulators. We will focus on the liver and the kidney, to distinguish target structures (organ, tumour), critical surrounding structures (blood vessels, lymph nodes, etc.) and needles in the volume.
PhD Objectives:
In the above context, the objective of this thesis is to tackle the challenges of interactive visualization of 3D ultrasound images [1] by direct volume rendering, with a particular focus on data classification combined with rendering performance. There are three main scientific and technological objectives.
The first objective is to propose a system for designing efficient transfer functions [3] that define appropriate opacity levels [8] for the different target structures to be visualized in volumes acquired by ultrasound. For such volumes, with their low dynamic range, significant noise and variable intensities for the same tissue, it will be necessary to adopt approaches that consider additional information beyond the simple voxel intensity level [2] used by classical 1D functions. In addition, we will focus on the visualization of multi-modal ultrasound image fusion (B-mode, Doppler, elastography), and possibly fusion with other types of acquisitions (CT scan, MRI). Methods will have to be developed to combine data classification, either with a single transfer function covering all modalities or with several independent transfer functions. It will also be necessary to adapt the visualization to multi-modality according to the chosen type of classification.
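The two classification routes mentioned above can be caricatured in a few lines of Python. Everything below is a hypothetical illustration, not part of the thesis proposal: the thresholds, colours and the simple opacity-weighted fusion rule are invented. It shows a 2D transfer function indexed by intensity and gradient magnitude (one example of "additional information beyond the voxel intensity"), and the fusion of two independently classified modalities:

```python
def tf_2d(intensity, grad_mag):
    """Toy 2D transfer function: separates homogeneous tissue (low gradient)
    from boundaries (high gradient), even when intensities overlap."""
    if grad_mag < 0.2:                        # homogeneous interior: faint, mostly transparent
        return (0.8, 0.6, 0.5, 0.05 * intensity)
    return (1.0, 0.2, 0.2, 0.6 * intensity)   # boundary: highlighted, more opaque

def fuse(rgba_bmode, rgba_doppler):
    """Opacity-weighted fusion of two modalities classified by independent TFs."""
    wa, wb = rgba_bmode[3], rgba_doppler[3]
    w = wa + wb
    if w == 0.0:
        return (0.0, 0.0, 0.0, 0.0)           # both transparent: nothing to show
    rgb = tuple((wa * a + wb * b) / w
                for a, b in zip(rgba_bmode[:3], rgba_doppler[:3]))
    return rgb + (min(1.0, w),)
```

The alternative single-transfer-function route would instead index one lookup table by a tuple of modality values; which of the two scales better with interactive editing is precisely one of the questions the thesis would address.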
The second objective focuses on the efficiency of the rendering itself. The performance of the direct volume rendering algorithm can be improved by using acceleration structures, such as pyramidal representations [7]. Indeed, approaches that divide the volume space into appropriate sub-regions not only make it possible to handle larger data volumes, but also to efficiently apply empty-space skipping [4] over regions that are fully transparent under the current transfer function, which accelerates rendering. The aim is to implement an acceleration data structure that updates itself dynamically as the user interacts with the chosen classification tool, and thus to provide efficient rendering adapted to the visualization of structures with empty regions, which are not needed for diagnosis and can reduce rendering costs.
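The brick-based acceleration idea can be sketched in plain Python on a toy 1D "volume" (brick size, data and transfer function are invented for the sketch; real implementations such as SparseLeap [4] work on the GPU with hierarchical bricks, and typically precompute a maximum-opacity table per intensity range rather than sampling the opacity as done here):

```python
BRICK = 4  # brick edge length in voxels (illustrative)

def build_minmax(volume):
    """Partition a 1D intensity list into bricks and store (min, max) per brick."""
    return [(min(volume[i:i + BRICK]), max(volume[i:i + BRICK]))
            for i in range(0, len(volume), BRICK)]

def flag_empty(minmax, opacity, steps=16):
    """Re-run whenever the transfer function changes: a brick is empty if the
    opacity is zero over its whole (min, max) intensity range."""
    flags = []
    for lo, hi in minmax:
        xs = [lo + (hi - lo) * k / (steps - 1) for k in range(steps)]
        flags.append(all(opacity(x) == 0.0 for x in xs))
    return flags

def march(volume, empty):
    """Toy 1D 'ray': visit only the voxels of non-empty bricks."""
    visited = []
    for b, is_empty in enumerate(empty):
        if not is_empty:
            visited.extend(range(b * BRICK, min((b + 1) * BRICK, len(volume))))
    return visited

# A 16-voxel volume where only the third brick contains visible tissue.
vol = [0.0] * 8 + [0.5, 0.6, 0.7, 0.8] + [0.0] * 4
mm = build_minmax(vol)
empty = flag_empty(mm, lambda x: 0.0 if x < 0.3 else 0.5)
```

Because the emptiness flags depend only on the per-brick (min, max) summary and the transfer function, editing the transfer function requires updating the flags but not re-reading the full volume, which is what makes a dynamically self-updating structure plausible at interactive rates.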
Finally, the goal is to design a complete interactive visualization tool. To this end, the development of methods based on recent advances in the literature in terms of data filtering, clipping, lighting and shading, and the management of uncertainty in the context of medical ultrasound image acquisition will be considered.
Work environment:
The PhD student will be hosted in the Surgical Data Science team of IRCAD Strasbourg for three years, benefiting from existing software, infrastructure, agile management, support from experts in computer graphics and the possibility of testing results in a clinical setting. Part of the research time will also be spent in the Engineering, Computer Science and Imaging Laboratory (ICube) of the University of Strasbourg, with researchers from the Computer Graphics and Geometry team.
For 20 years, the IRCAD Surgical Data Science team has been researching and developing augmented surgery software intended to assist surgeons, interventional radiologists and gastroenterologists. The complexity and multiplicity of the challenges associated with augmented surgery naturally require a team of suitable size. Consequently, in addition to its collaborations with the University of Strasbourg, the Surgical Data Science team is forging international partnerships through the twin IRCAD institutes, in particular IRCAD Africa, located in Kigali.
The growth of the IRCAD Africa Surgical Data Science team has been carefully planned: the team now has 9 members and aims to reach 40 within 5 years. To achieve this ambitious goal, IRCAD Africa is supporting the most deserving African computer scientists in obtaining funding to complete their doctoral training in Strasbourg, in collaboration with the best research teams of the University of Strasbourg. The best post-graduates will then have the opportunity to help lead, mentor and train new talent at IRCAD Africa, in a virtuous cycle.
References:
[1] Å. Birkeland, V. Solteszova, D. Hönigmann, O. H. Gilja, S. Brekke, T. Ropinski, and I. Viola, "The ultrasound visualization pipeline - a survey", arXiv preprint arXiv:1206.3975, 2012.
[2] C. Schulte zu Berge, M. Baust, A. Kapoor, and N. Navab, "Predicate-Based Focus-and-Context Visualization for 3D Ultrasound", IEEE Trans. Vis. Comput. Graph., vol. 20, no. 12, pp. 2379-2387, Dec. 2014, doi: 10.1109/TVCG.2014.2346317.
[3] P. Ljung, J. Krüger, E. Gröller, M. Hadwiger, C. D. Hansen, and A. Ynnerman, "State of the Art in Transfer Functions for Direct Volume Rendering", Computer Graphics Forum, vol. 35, no. 3, pp. 669-691, June 2016, doi: 10.1111/cgf.12934.
[4] M. Hadwiger, A. K. Al-Awami, J. Beyer, M. Agus, and H. Pfister, "SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering", IEEE Trans. Vis. Comput. Graph., vol. 24, no. 1, pp. 974-983, Jan. 2018, doi: 10.1109/TVCG.2017.2744238.
[5] E.-H. Kim, R. Managuli, and Y. Kim, "New flexible multi-volume rendering technique for ultrasound imaging", Medical Imaging 2009: Ultrasonic Imaging and Signal Processing, 2009.
[6] O. Zettinig, C. Hennersperger, C. Schulte zu Berge, M. Baust, and N. Navab, "3D Velocity Field and Flow Profile Reconstruction from Arbitrarily Sampled Doppler Ultrasound Data", Medical Image Computing and Computer-Assisted Intervention (MICCAI), Boston, USA, 2014.
[7] J. Sarton, N. Courilleau, Y. Remion, and L. Lucas, "Interactive Visualization and On-Demand Processing of Large Volume Data: A Fully GPU-Based Out-Of-Core Approach", IEEE Trans. Vis. Comput. Graph., 2019, doi: 10.1109/TVCG.2019.2912752.
[8] S. Marchesin, J.-M. Dischler, and C. Mongenet, "Per-Pixel Opacity Modulation for Feature Enhancement in Volume Rendering", IEEE Trans. Vis. Comput. Graph., vol. 16, no. 4, pp. 560-570, 2010.