Master thesis topics proposed by the laboratory for master students (Computer Science & EPB) for 2023-2024 ...

Enhancement of Immersive Video Compression Artifacts using Machine Learning Method

Promotor, co-promotor, advisor : mehrdad.teratani@ulb.be, - , Hamed Razavi Khosroshahi, Eline Soetens

Research Unit : LABORATORY OF IMAGE SYNTHESIS AND ANALYSIS - VIRTUAL REALITY (LISA-VR)

Description

Enhancement of Immersive Video Compression Artifacts using Machine Learning Method

Context

In this research, we investigate the characteristics of the artifacts introduced by the compression of 3D video and by the generation of free viewpoints from it.

Objective

This research aims to introduce a machine learning approach that removes the artifacts (noise, or lack of information in some areas) at the decoder side, so that the quality of the synthesized immersive video content is improved.
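
As a concrete starting point, the following sketch shows a residual denoising CNN in PyTorch (a DnCNN-style design, in line with the residual neural networks listed in the prerequisites). It is a minimal illustration, not the lab's actual method; the architecture, tensors, and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class ArtifactRemovalCNN(nn.Module):
    """DnCNN-style network: predicts the compression-artifact residual
    and subtracts it from the decoded image (residual learning)."""
    def __init__(self, channels=3, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Clean estimate = decoded image minus the predicted residual.
        return x - self.body(x)

# One illustrative training step: 'decoded' stands for compressed or
# synthesized views, 'reference' for the corresponding clean originals.
model = ArtifactRemovalCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
decoded, reference = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
loss = nn.functional.mse_loss(model(decoded), reference)
loss.backward()
optimizer.step()
```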

Prerequisite

  • A strong interest in machine learning (deep learning, convolutional neural networks, residual neural networks)
  • INFO-H516: Visual Media Compression
  • Any course on machine learning

Contact person

mehrdad.teratani@ulb.be, hamed.razavi.khosroshahi@ulb.be


references

  • Yue, Linwei & Shen, Huanfeng & Li, Jie & Yuan, Qiangqiang & Zhang, Hongyan & Zhang, Liangpei. (2016). Image super-resolution: The techniques, applications, and future. Signal Processing. 128. 10.1016/j.sigpro.2016.05.002.
  • Zhang, Peipei. (2022). Image Enhancement Method Based on Deep Learning. Mathematical Problems in Engineering. 2022. 1-9. 10.1155/2022/6797367.

From Kinect to 3D Layered Display: A Real-time 3D System

Promotor, co-promotor, advisor : mehrdad.teratani@ulb.be, - , Laurie Van Bogaert, Armand Losfeld

Research Unit : LABORATORY OF IMAGE SYNTHESIS AND ANALYSIS - VIRTUAL REALITY (LISA-VR)

Description

From Kinect to 3D Layered Display: A Real-time 3D System

3D Layered displays are a class of 3D displays that do not require any glasses. They are composed of stacked LCD panels placed in front of a backlight. The light rays emitted by the backlight cross each panel of the display and are therefore modulated by a combination of pixels, instead of a single pixel as in a conventional LCD display. By controlling the pixel intensities in each layer, several views can be projected to several positions, and depth cues can be perceived through stereoscopic vision.
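
As a rough illustration of this multiplicative pixel combination, the sketch below composites the views seen through a stack of layers under a simplified geometry where each viewing direction corresponds to an integer pixel shift per layer; real displays require calibrated ray geometry and an optimization of the layer contents.

```python
import numpy as np

def render_view(layers, shift_per_layer):
    """Composite one viewing direction through stacked LCD layers.
    layers: list of HxW transmittance maps in [0, 1], back to front.
    shift_per_layer: horizontal parallax (pixels) between adjacent layers
    for this viewing direction (simplified geometry)."""
    view = np.ones_like(layers[0])
    for i, layer in enumerate(layers):
        # Each layer attenuates the backlight; the ray crosses the layers
        # at laterally shifted positions, which produces the parallax.
        view *= np.roll(layer, i * shift_per_layer, axis=1)
    return view

layers = [np.random.rand(64, 64) for _ in range(3)]
left_view = render_view(layers, shift_per_layer=-1)
right_view = render_view(layers, shift_per_layer=+1)
```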

By acquiring multiple viewpoints of a scene, multilayer images, i.e. the images displayed on the LCD panels of the 3D display, can be generated with machine learning algorithms. However, this optimization procedure uses dozens of viewpoints of a scene, which represent redundant data. To reduce this redundancy, fewer input views can be used together with depth information, i.e. depth maps.

Azure Kinect cameras possess a depth sensor, i.e. a Time-of-Flight (ToF) depth sensing device, allowing the capture of depth maps in real time. Due to their affordable price, they are used in a wide range of applications related to computer vision, virtual reality, and 3D vision. However, because of the relatively limited quality of the acquired depth maps, post-processing operations may be needed for some applications.
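
As an example of such post-processing, the sketch below fills invalid pixels (reported as 0 by the depth sensor) using OpenCV inpainting; the 8-bit round trip is a simplification for illustration only.

```python
import cv2
import numpy as np

def fill_depth_holes(depth_mm):
    """Fill invalid (zero) pixels of a 16-bit Kinect-style depth map with
    inpainting. Normalizes to 8 bits for cv2.inpaint, then maps back to
    millimetres (a lossy but simple illustration)."""
    holes = (depth_mm == 0).astype(np.uint8)
    d_max = max(int(depth_mm.max()), 1)
    depth_8u = (depth_mm.astype(np.float32) / d_max * 255).astype(np.uint8)
    filled_8u = cv2.inpaint(depth_8u, holes, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
    return (filled_8u.astype(np.float32) / 255 * d_max).astype(depth_mm.dtype)

depth = np.random.randint(1, 4000, (240, 320)).astype(np.uint16)
depth[100:120, 150:180] = 0  # simulated sensor dropout
filled = fill_depth_holes(depth)
```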

Context

The aim of this thesis is to use the video stream with depth information from a single Kinect Azure camera for the computation of the multi-layer images that are displayed on the 3D Layered display. Therefore, the development of this pipeline can offer a real-time 3D vision for teleoperation tasks.

Note that this project is closely related to the master thesis subject "From Plenoptic to 3D Layered Display: A Real-time 3D System" and can be done in collaboration with another student taking the other subject.

Objective

At the end of the year, the student must present a real-time pipeline that

  1. Acquires and post-processes RGBD data from one Kinect camera
  2. Generates multilayer images from this RGBD data
  3. Displays the multilayer images with the available 3D Layered display

Prerequisite

  • Strong interest in programming and computer vision/virtual reality
  • Good knowledge of C++
  • Not required, but a bonus:
    • Any multimedia course (INFOH502, INFOH503, or similar courses)
    • CUDA API / OpenCL
    • Azure Kinect API
    • OpenCV or any other libraries for Image Processing

Contact person

mehrdad.teratani@ulb.be, laurie.van.bogaert@ulb.be



From Plenoptic to 3D Layered Display: A Real-time 3D System

Promotor, co-promotor, advisor : mehrdad.teratani@ulb.be, - , Armand Losfeld, Hamed Razavi Khosroshahi, Sarah Fachada

Research Unit : LABORATORY OF IMAGE SYNTHESIS AND ANALYSIS - VIRTUAL REALITY (LISA-VR)

Description

From Plenoptic to 3D Layered Display: A Real-time 3D System

3D Layered displays are a class of 3D displays that do not require any glasses. They are composed of stacked LCD panels placed in front of a backlight. The light rays emitted by the backlight cross each panel of the display and are therefore modulated by a combination of pixels, instead of a single pixel as in a conventional LCD display. By controlling the pixel intensities in each layer, several views can be projected to several positions, and depth cues can be perceived through stereoscopic vision.

By acquiring multiple viewpoints of a scene, multilayer images, i.e. the images displayed on the LCD panels of the 3D display, can be generated with machine learning algorithms. However, this optimization procedure uses dozens of viewpoints of a scene, which represent redundant data. To reduce this redundancy, fewer input views can be used together with depth information, i.e. depth maps.

Plenoptic cameras (such as Raytrix) possess a main lens, a micro-lens array, and a CMOS sensor. This special design offers the possibility to capture directional light rays; in other words, plenoptic cameras do not capture information from a single viewpoint only. These cameras are also called light field cameras for their ability to capture dense viewpoints, i.e. a huge number of light rays. From this dense data, a depth map can be computed in real time. Thus, these cameras are theoretically more suitable for 3D and VR applications than conventional cameras. Although plenoptic cameras are promising, the number of real-world applications remains low due to their difficult calibration, their price, and their non-user-friendly usage.

Context

The aim of this thesis is to use the video stream with depth information from a single plenoptic camera for the computation of the multi-layer images that are displayed on the 3D Layered display. Therefore, the development of this pipeline can offer a real-time 3D vision for teleoperation tasks.

Note that this project is closely related to the master thesis subject "From Kinect to 3D Layered Display: A Real-time 3D System" and can be done in collaboration with another student taking the other subject.

Objective

At the end of the year, the student must present a real-time pipeline that

  1. Acquires and post-processes RGBD data from one plenoptic camera
  2. Generates multilayer images from this RGBD data
  3. Displays the multilayer images with the available 3D Layered display

Prerequisite

  • Strong interest in programming and computer vision/virtual reality
  • Good knowledge of C++
  • Not required but preferred:
    • Any multimedia course (INFOH502, INFOH503, or similar courses)
    • CUDA API / OpenCL
    • OpenCV or any other libraries for Image Processing

Contact person

mehrdad.teratani@ulb.be, armand.losfeld@ulb.be


references

  • 3D Layered display:
    • G. Wetzstein, D. Lanman, M. Hirsch, R. Raskar. Tensor Displays: Compressive Light Field Synthesis using Multilayer Displays with Directional Backlighting. Proc. of SIGGRAPH 2012 (ACM Transactions on Graphics 31, 4), 2012
    • A. Losfeld et al., "3D Tensor Display for Non-Lambertian Content," 2022 IEEE International Conference on Visual Communications and Image Processing (VCIP), Suzhou, China, 2022.
  • Plenoptic Cameras and VR applications:
    • Bonatto, D., Fachada, S., Senoh, T., Guotai, J., Jin, X., Lafruit, G., & Teratani, M. (2021). Multiview from micro-lens image of multi-focused plenoptic camera. In 2021 International Conference on 3D Immersion (IC3D) (pp. 1-5) IEEE xplore.
    • Georgiev, Todor & Lumsdaine, Andrew. (2010). Focused plenoptic camera and rendering. J. Electronic Imaging. 19. 021106. 10.1117/1.3442712.
    • Raytrix, https://raytrix.de. (Plenoptic camera manufacturer)
    • HoviTron resources, https://www.hovitron.eu/public-resources
    • HoviTron documentation, https://hovitron.gitlab-pages.ulb.be
    • RaytrixDLL for real-time RGBD acquisition and view synthesis, https://hovitron.gitlab-pages.ulb.be/RVSVulkan/RaytrixStreamer/README_how/.

Accelerated Volumetric Rendering: Towards Real-time processing of Radiance Fields

Promotor, co-promotor, advisor : mehrdad.teratani@ulb.be, - , Laurie Van Bogaert, Daniele Bonatto

Research Unit : LABORATORY OF IMAGE SYNTHESIS AND ANALYSIS - VIRTUAL REALITY (LISA-VR)

Description

Accelerated Volumetric Rendering: Towards Real-time processing of Radiance Fields

View synthesis is a field of computer vision that involves generating novel views of a scene or object from a limited set of input views. This technique is particularly useful in virtual/augmented reality and 3D modeling applications, where it allows for a more immersive and interactive experience. View synthesis techniques typically use machine learning algorithms or classical algorithms to learn the underlying structure of the scene or object, and then use this knowledge to generate new views.

Context

Recently, deep learning methods for view synthesis have rapidly gained popularity, due to the introduction of NeRF (short for Neural Radiance Fields), a novel deep learning method based on volume rendering.

The basic idea behind NeRF is to represent a 3D scene as a continuous 5D function that maps a 3D point in space and a viewing direction to a color and opacity value. This function is learned using a neural network, which is trained on a set of 2D images of the scene taken from different viewpoints. Once the network has been trained, it can be used to generate new images of the scene from any viewpoint, including viewpoints that were not present in the training set.
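
The rendering half of this idea fits in a few lines: colors and densities sampled along a camera ray are alpha-composited into a single pixel. The sketch below implements this standard volume-rendering quadrature with NumPy; the toy_field function stands in for the trained network and is purely illustrative.

```python
import numpy as np

def render_ray(field, origin, direction, near=2.0, far=6.0, n_samples=64):
    """Alpha-composite samples along one ray (standard NeRF quadrature).
    field(points, direction) -> (rgb in [0,1], density >= 0) per point."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction            # (N, 3) sample positions
    rgb, sigma = field(points, direction)
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))  # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)                # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = alpha * trans                             # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)         # final pixel color

# Toy field: a soft orange sphere of radius 1 centred at the origin.
def toy_field(points, direction):
    r = np.linalg.norm(points, axis=1)
    sigma = np.where(r < 1.0, 5.0, 0.0)
    rgb = np.tile([1.0, 0.5, 0.2], (len(points), 1))
    return rgb, sigma

color = render_ray(toy_field, np.array([0.0, 0.0, -4.0]), np.array([0.0, 0.0, 1.0]))
```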

One of the main advantages of NeRF is that it can generate photorealistic images with realistic lighting and shading effects, even in scenes with complex geometry and materials. However, as is common with deep learning methods, it is slow: training a single scene takes on the order of several hours, and rendering novel views is far from real time.

To address this problem, Nvidia introduced instant-ngp, a much faster variant of NeRF. Instead of encoding the whole scene in a large neural network, it combines a small network with a multiresolution hash-grid encoding of the scene, which reduces training from hours to seconds or minutes and aims at real-time rendering.

Despite their impressive performance, many NeRF variants still suffer from issues such as noise in the final images and the usual deep learning-related challenges. Plenoxels has emerged as an alternative that does not rely on neural networks at all: it keeps the volume-rendering machinery of NeRF but directly optimizes a sparse voxel grid of densities and view-dependent colors.

We want to have an in-lab version of the Plenoxels software. The original version is publicly available in Python, but its implementation can be improved in terms of speed. You will work with state-of-the-art technologies, in an exciting field, with a dynamic team of researchers, each with complementary specialties.

As a master's student in view synthesis, you are expected to have a strong foundation in computer vision, machine learning, and image processing. If you do not have it yet, we expect you to pick it up quickly. You will likely study a variety of techniques for view synthesis, including traditional methods such as 3D modeling and ray tracing, as well as more modern approaches using neural networks and deep learning.

Objective

There are three objectives to this project:
  1. A C++ implementation following good programming practice, as this project should serve as a basis for future work.
  2. A speed-up through software optimization or the use of specialized hardware, e.g. a multi-GPU computer.
  3. At the end of the project, the results should be compared in terms of speed and quality to the original Plenoxels implementation and possibly to other NeRF variants.

Prerequisite

  • Interested in fast and efficient code.
  • Strong interest in programming and computer vision/virtual reality.
  • Willing to work with cutting-edge technologies.
  • Good knowledge of C++ or willing to put in the extra work.
  • Interested in learning notions of deep learning.

Contact person

mehrdad.teratani@ulb.be, laurie.van.bogaert@ulb.be, daniele.bonatto@ulb.be


references

  • NeRF: Mildenhall, B., et al. "NeRF: Representing scenes as neural radiance fields for view synthesis." Communications of the ACM 65.1 (2021): 99-106. DOI: 10.1145/3503250
  • Volume Rendering: Kajiya, J. T., and Brian P. Von Herzen. "Ray tracing volume densities." ACM SIGGRAPH computer graphics 18.3 (1984): 165-174. DOI: 10.1145/964965.808594
  • Instant-ngp: Müller, T., et al. "Instant neural graphics primitives with a multiresolution hash encoding." ACM Transactions on Graphics (ToG) 41.4 (2022): 1-15. arXiv:2201.05989
  • Plenoxels: Fridovich-Keil, S., et al. "Plenoxels: Radiance fields without neural networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. arXiv:2112.05131

From Tensor Display-like Multilayer Representation of Light Field to Tensor Display Data Format

Promotor, co-promotor, advisor : mehrdad.teratani@ulb.be, - , Eline Soetens, Laurie Van Bogaert

Research Unit : LABORATORY OF IMAGE SYNTHESIS AND ANALYSIS - VIRTUAL REALITY (LISA-VR)

Description

From Tensor Display-like Multilayer Representation of Light Field to Tensor Display Data Format

Tensor Displays are a kind of 3D display composed of multiple LCD panels and a backlight. Tensor Displays use a data format where 3D scenes are represented as multilayer images. When each of the image layers is displayed on one LCD panel, the 3D scene is reproduced by the tensor display.

Context

3D scenes are usually captured as a light field, an array of images capturing different views of the scene, then converted to an N-layer image for an N-layer Tensor Display. This conversion is computation- and time-intensive and reduces the quality of the scene; a multilayer image with fewer layers degrades the quality more. Conversely, converting the light field to a multilayer image with more layers preserves the quality of the scene while reducing the amount of data needed to represent it compared to the light field data type.

Objective

The objective is to perform the computation-intensive conversion from the light field data to an M-layer image once, as a prior step, and then to quickly convert the M-layer image into the N-layer image shown on the tensor display. In practice, the student will have to develop a technique to transform an M-layer image into an N-layer image, with M > N.

Prerequisite

  • Good grasp of C++ and Python
  • As a "good to have":
    • INFO-H516: Visual Media Compression, already credited or in PAE
    • INFO-H500: Image Processing (or any similar course), already credited

Contact person

mehrdad.teratani@ulb.be, eline.soetens@ulb.be



Simulation of Camera-In-The-Loop Method for Imperfection Removal on Simulated 3D Layered Display

Promotor, co-promotor, advisor : mehrdad.teratani@ulb.be, - , Armand Losfeld, Hamed Razavi Khosroshahi, Eline Soetens

Research Unit : LABORATORY OF IMAGE SYNTHESIS AND ANALYSIS - VIRTUAL REALITY (LISA-VR)

Description

Simulation of Camera-In-The-Loop Method for Imperfection Removal on Simulated 3D Layered Display

3D Layered displays are a class of 3D displays that do not require any glasses. They are composed of stacked LCD panels placed in front of a backlight. The light rays emitted by the backlight cross each panel of the display and are therefore modulated by a combination of pixels, instead of a single pixel as in a conventional LCD display. By controlling the pixel intensities in each layer, several views can be projected to several positions, and depth cues can be perceived through stereoscopic vision.

Context

By acquiring multiple viewpoints of a scene, multilayer images, i.e. the images displayed on the LCD panels of the 3D display, can be generated with machine learning algorithms. However, this optimization procedure assumes a noiseless 3D display, which is never the case in practice. Indeed, the 3D display prototyped in our lab was built from disassembled old LCD monitors, and artifacts are present. Therefore, a quality mismatch exists between the reproduction of the 3D content in simulation and on our prototype.

One recent method for imperfection removal is to place a camera in front of the 3D display to capture the noisy 3D content, compare it with the noiseless one, and finally introduce a noise compensation in the multilayer images. This process is then repeated a dozen times to get rid of most of the imperfections. While this method remains simple, it is not robust to different lighting conditions, datasets, and hardware. Therefore, a convolutional neural network can be trained on various input data and then used for novel configurations.
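
A minimal sketch of one such compensation loop is given below, for the central viewing direction only, where the perceived image is ideally the product of the layer transmittances, so a captured-vs-target error in the log domain can be split equally across the layers. capture_display is a hypothetical helper standing in for the physical camera/display pair; the real optimization is more involved.

```python
import numpy as np

def compensate(layers, target, capture_display, n_iter=12, step=0.5):
    """Camera-in-the-loop sketch (central view only): log(view) is the
    sum of log(layer), so the log-domain error is divided over layers."""
    eps = 1e-6
    for _ in range(n_iter):
        captured = capture_display(layers)
        log_err = np.log(np.clip(captured, eps, 1)) - np.log(np.clip(target, eps, 1))
        correction = step * log_err / len(layers)
        layers = [np.clip(np.exp(np.log(np.clip(lay, eps, 1)) - correction), 0, 1)
                  for lay in layers]
    return layers

# Toy stand-in for the physical display: product of the layers times a
# fixed multiplicative imperfection pattern.
noise = 1.0 + 0.1 * np.random.rand(64, 64)
capture_display = lambda layers: np.clip(np.prod(layers, axis=0) * noise, 0, 1)
layers = [0.5 + 0.5 * np.random.rand(64, 64) for _ in range(3)]
target = np.prod(layers, axis=0)
compensated = compensate(layers, target, capture_display)
```

Pairs of noisy captures and compensated layer sets produced by such a loop are exactly the kind of training data the CNN of goal 3 could learn from.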

This master thesis aims to implement this camera-in-the-loop method in simulation, to improve the quality of the displayed 3D content for any input.

Objective

The goals of this thesis are to:

  1. Introduce noise/imperfections in the simulation of the 3D display
  2. Implement the Camera-in-The-Loop method for imperfections removal
  3. Use noiseless and noisy data to train a CNN for further imperfections removal

Prerequisite

  • Strong interest in machine learning/artificial intelligence for computer vision applications
  • Multimedia and machine learning/AI courses (INFO-H500, INFO-H502, INFO-F422, INFO-H501, others...)
  • Good knowledge of C++/Python
  • Not mandatory but preferred:
    • Experience in machine learning or artificial intelligence
    • OpenGL

Contact person

mehrdad.teratani@ulb.be, armand.losfeld@ulb.be


references

  • 3D Layered Display:
    • G. Wetzstein, D. Lanman, M. Hirsch, R. Raskar. Tensor Displays: Compressive Light Field Synthesis using Multilayer Displays with Directional Backlighting. Proc. of SIGGRAPH 2012 (ACM Transactions on Graphics 31, 4), 2012
    • A. Losfeld et al., "3D Tensor Display for Non-Lambertian Content," 2022 IEEE International Conference on Visual Communications and Image Processing (VCIP), Suzhou, China, 2022.
  • Camera-In-The-Loop:
    • Y. Peng, S. Choi, N. Padmanaban, G. Wetzstein, “Neural holography with camera-in-the-loop training”, ACM Transactions on Graphics., 2020.
    • C. Chen, D. Kim, D. Yoo, B. Lee, “Off-axis camera-in-the-loop optimization with noise reduction strategy for high-quality hologram generation”, Opt. Lett. 47, 2022.

Scalable Neural Network Representation for Immersive Video

Promotor, co-promotor, advisor : mehrdad.teratani@ulb.be, - , Hamed Razavi Khosroshahi, Sarah Fachada, Daniele Bonatto

Research Unit : LABORATORY OF IMAGE SYNTHESIS AND ANALYSIS - VIRTUAL REALITY (LISA-VR)

Description

Scalable Neural Network Representation for Immersive Video

Given several posed images, novel view synthesis aims to generate photorealistic images depicting the scene from unseen viewpoints. This task has applications in graphics and VR/AR. It requires a deep visual understanding of geometry and semantics, making it an appealing test of visual understanding.

Volume rendering methods generate images of a 3D volumetric data set without explicitly extracting geometric surfaces from the data. These techniques use an optical model to map data values to optical properties, such as color and opacity. During rendering, optical properties are accumulated along each viewing ray to form an image of the data.

Context

In recent years, machine learning techniques have attracted a great deal of attention in this field. Neural Radiance Fields (NeRF), based on volume rendering, is a fully-connected neural network that can generate novel views of complex 3D scenes from a partial set of 2D images. It is trained with a rendering loss to reproduce the input views of a scene, and renders a complete scene by interpolating between those input images. NeRF (or any other version of NeRF for immersive video) is a highly effective method for generating images from synthetic as well as real data.

Such methods train large neural networks whose inputs are images, and training normally takes several hours. Using down-sampled input images can reduce the training time and makes it easier to verify that the model is correct.

Objective

In this research, we will study synthesizing full-size images from such neural networks trained on down-sampled images, considering the relation between the full-size and scaled trained models. Additionally, we aim to investigate the possibility of generalizing this method to other neural networks used in immersive video applications.
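
A minimal sketch of the experimental setting, assuming OpenCV for resizing: training views are down-sampled, the model (elided here) is trained on them, and full-size synthesized images are compared to ground truth with PSNR.

```python
import cv2
import numpy as np

def downsample(img, factor=4):
    """Area interpolation is the usual choice when shrinking images."""
    h, w = img.shape[:2]
    return cv2.resize(img, (w // factor, h // factor), interpolation=cv2.INTER_AREA)

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

full = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)  # stand-in training view
small = downsample(full)              # what the scaled model would be trained on
# Naive full-size reconstruction from the small view, as a lower bound
# for what the scaled trained model should beat:
upsampled = cv2.resize(small, (256, 256), interpolation=cv2.INTER_CUBIC)
print("PSNR vs. ground truth:", psnr(upsampled, full))
```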

Prerequisite

  • Good knowledge of Python
  • Strong interest in programming and computer vision/virtual reality
  • Preferred to have:
    • Knowledge of Machine Learning
    • Knowledge of computer vision/virtual reality – Any multimedia course (INFOH502, INFOH503, or similar courses)

Contact person

mehrdad.teratani@ulb.be, hamed.razavi.khosroshahi@ulb.be


references

  • Mildenhall, Ben, et al. "NeRF: Representing scenes as neural radiance fields for view synthesis." Communications of the ACM 65.1 (2021)
  • https://www.matthewtancik.com/nerf
  • Thomas Müller, Alex Evans, et al. Instant Neural Radiance Fields. In ACM SIGGRAPH 2022 Real-Time Live! (2022).

In-vivo / ex-vivo 3D registration of liver tissue

Promotor, co-promotor, advisor : christine.decaestecker@ulb.be, - , Adrien.Foucart@ulb.be, Arthur.Elskens@ulb.be

Research Unit : LISA

Description

In-vivo / ex-vivo 3D registration of liver tissue

In medical image analysis, it is often useful to combine information from several imaging modalities, such as in-vivo CT and ex-vivo microscopy. Interpretation of this information requires good colocalization of the corresponding regions of the organ in the different modalities. Image registration consists of finding the best image transformation to create a correspondence between the regions of a reference image and the corresponding regions of a target image.
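
For intuition, the sketch below runs a toy 3D registration with SimpleITK, a library commonly used for this task: a mutual-information metric is optimized over a translation-only transform. The synthetic blob volumes are placeholders for the µCT images, and a real pipeline would chain rigid, affine, and deformable stages.

```python
import numpy as np
import SimpleITK as sitk

def make_blob(shift):
    """Synthetic 3D volume with one Gaussian blob (stand-in for a lobe)."""
    z, y, x = np.mgrid[0:32, 0:32, 0:32]
    vol = np.exp(-((x - 16 - shift) ** 2 + (y - 16) ** 2 + (z - 16) ** 2) / 30.0)
    return sitk.GetImageFromArray(vol.astype(np.float32))

fixed, moving = make_blob(0), make_blob(3)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
reg.SetInitialTransform(sitk.TranslationTransform(3))  # translation only
reg.SetInterpolator(sitk.sitkLinear)
transform = reg.Execute(fixed, moving)
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
print(transform.GetParameters())  # recovered offset, roughly (3, 0, 0)
```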

Context

This topic is proposed within the framework of the ProTherWal project, which aims to develop and better understand proton therapy. It is funded by the Walloon Region and is conducted in collaboration with laboratories from several French-speaking universities in Belgium. The data are provided by the Centre for Microscopy and Molecular Imaging (CMMI) of the Biopark of Gosselies. They include in-vivo µCT images of mice, ex-vivo µCT images of liver lobes extracted from mice, and high resolution images of whole tissue slides.

Objective

The objective is to study and implement image segmentation and/or registration techniques that can facilitate the different steps of the in-vivo / ex-vivo registration process. They may relate to any of the following steps:

  • Registration of the in-vivo µCT image to the ex-vivo µCT image
  • Segmentation of the liver lobes in the in-vivo µCT
  • Registration of the ex-vivo µCT to whole tissue slide images
  • Segmentation of regions of interest in whole tissue slide images

The exact specifications of the thesis objectives will be determined with the student at the beginning of the thesis.

Prerequisite

  • Python
  • Image processing techniques (INFO-H-500)
  • Interest in pattern recognition and image analysis (cf INFO-H-501)

Contact person

For more information please contact : Adrien Foucart - Adrien.Foucart@ulb.be, Arthur Elskens - Arthur.Elskens@ulb.be, Christine Decaestecker - Christine.Decaestecker@ulb.be.


references

  • Oliveira, 2012 - Medical image registration: a review
  • Bandi, 2017 - Comparison of different methods for tissue segmentation in histopathological whole-slide images
  • Kiemen, 2022 - CODA: quantitative 3D reconstruction of large tissues at cellular resolution
  • Goubran, 2015 - Registration of in-vivo to ex-vivo MRI of surgically resected specimens: A pipeline for histology to in-vivo registration

Study of the dynamics of co-current flow of oil and water in porous subsurface by image processing

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, agiotis@tuc.gr,

Research Unit : LISA

Description

Study of the dynamics of co-current flow of oil and water in porous subsurface by image processing

Context

The project is done in collaboration with Prof. Andreas Giotis of the School of Mineral Resources Engineering, Technical University of Crete.

A series of image sequences has been taken using a special microscopic setup to study the displacement of droplets (ganglia) on a substrate.

Different locomotion patterns related to experimental settings can be qualitatively observed.

Objective

A first exploratory phase will identify different image processing approaches (object-based or pixel-based) to extract the physical features of interest.

The best methods will then be made as automatic as possible, in order to process numerous datasets and obtain statistically significant measures.

see also [Chevalier2015]
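
As a first object-based sketch (thresholds and toy frames are illustrative; real sequences will need tuned preprocessing): each frame is thresholded, connected components are labeled, and per-ganglion features are extracted so that centroids can be tracked across frames.

```python
import numpy as np
from skimage import filters, measure

def droplet_features(frame):
    """Object-based approach: threshold a frame, label connected
    components, and extract per-droplet (ganglion) features."""
    mask = frame > filters.threshold_otsu(frame)
    labels = measure.label(mask)
    return [(r.centroid, r.area, r.eccentricity)
            for r in measure.regionprops(labels)]

# Toy sequence: one bright droplet drifting to the right.
frames = []
for t in range(5):
    img = np.zeros((64, 64))
    img[30:36, 10 + 3 * t:16 + 3 * t] = 1.0
    frames.append(img + 0.05 * np.random.rand(64, 64))

tracks = [droplet_features(f) for f in frames]  # centroids over time -> velocity
```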

Prerequisite

  • image processing

  • Python

Contact person

For more information please contact : olivier.debeir@ulb.be, Andreas Giotis (agiotis@tuc.gr)


reference

Chevalier2015

Deep learning to denoise and classify volcano-seismic data.

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, corentin.caudron@ulb.be,

Research Unit : LISA

Description

Deep learning to denoise and classify volcano-seismic data.

Context

Volcano-seismic data typically capture a wide variety of earthquakes and seismic noise. Yet, their detection and classification remain complicated and time-consuming.

Deep learning appears as a promising way to automatically denoise, detect and classify volcanic data.

Existing approaches (a non-exhaustive selection) are listed here.
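
To make the idea concrete, here is a minimal PyTorch sketch of a 1D convolutional classifier operating on fixed-length waveform windows; the class set, window length, and data are illustrative assumptions, not a prescribed architecture.

```python
import torch
import torch.nn as nn

class SeismicCNN(nn.Module):
    """Minimal 1D CNN classifying waveform windows into event classes
    (e.g. volcano-tectonic, long-period, noise)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):          # x: (batch, 1, n_samples)
        return self.classifier(self.features(x).squeeze(-1))

model = SeismicCNN()
windows = torch.randn(8, 1, 2048)  # stand-in waveform windows
logits = model(windows)            # (8, 3) class scores
```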

Prerequisite

  • Python

  • DNN

Contact persons

For more information please contact : corentin.caudron@ulb.be, olivier.debeir@ulb.be

Gas detection and quantification using ground vibrations acquired by fiber-optic cable data

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, corentin.caudron@ulb.be,

Research Unit : LISA

Description

Gas detection and quantification using ground vibrations acquired by fiber-optic cable data

Context

Fiber-optic cables interrogated by distributed acoustic sensing (DAS) appear to be sensitive to gas bubbles. However, their sampling frequency (~kHz) produces large amounts of data. Denoising, detection, and classification are desirable to clean, detect, and possibly cluster interesting signals in an automatic way.

Pre-processing of the data might be required to facilitate the application of denoising, detection and clustering methodologies.
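
As an example of such pre-processing, the sketch below converts a raw (synthetic) DAS-like trace into a log-power spectrogram with SciPy; this time-frequency image is a compact, standard input for denoising, detection, or clustering models.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0                                  # ~kHz DAS-like sampling rate
t = np.arange(0, 10, 1 / fs)
# Toy trace: an 80 Hz event appearing after t = 5 s, buried in noise.
trace = np.sin(2 * np.pi * 80 * t) * (t > 5) + 0.5 * np.random.randn(t.size)

f, seg_t, Sxx = spectrogram(trace, fs=fs, nperseg=256, noverlap=128)
log_power = 10 * np.log10(Sxx + 1e-12)       # (n_freqs, n_segments) image
```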

ref

Prerequisite

  • Python

  • DNN

Contact persons

For more information please contact : corentin.caudron@ulb.be, olivier.debeir@ulb.be

Towards quantifying gas fluxes using sonar data.

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, corentin.caudron@ulb.be,

Research Unit : LISA

Description

Towards quantifying gas fluxes using sonar data.

Context

Sonars make it possible to detect gas in the water column, potentially providing useful metrics to quantify gas fluxes.

Yet, they are also sensitive to fish as well as to the bottom of the sea or lake, and echosounding profiling generates large amounts of data (gigabytes to terabytes).

Automated approaches to detect and classify bubble sizes would facilitate future gas flux estimates.

ref

Prerequisite

  • Python

  • DNN

Contact persons

For more information please contact : corentin.caudron@ulb.be, olivier.debeir@ulb.be

Determining the impact of probe and operator-dependent factors on prostate visualization and overall quality of micro-ultrasound images

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, - , toshna.manohar.panjwani@ulb.be

Research Unit : LISA

Description

Determining the impact of probe and operator-dependent factors on prostate visualization and overall quality of micro-ultrasound images

Microultrasound (MicroUS) is a novel ultrasound-based imaging method that provides a 300% improvement in image resolution in comparison to multi-parametric MRI for prostate cancer screening. MicroUS scanners work at 29 MHz with an angled side-fire transducer that reduces wave penetration depth, while providing real-time transrectal imaging similar to transrectal ultrasound (TRUS) but with higher image quality, detecting focal lesions that generally go undetected with TRUS, while taking prostate size into consideration. MicroUS is standardized with the Prostate Risk Identification using Micro-Ultrasound (PRIMUS) system, which determines risk stratification, biopsy technique, and the course of treatment suitable for the patient. Targeted microUS biopsies are more accurate, safe, quick, and convenient for the physician as well as the patient than other biopsy methods. (Gurwin et al., 2022; Laurence Klotz, 2020; Panzone et al., 2022)

Like other ultrasound-based modalities, microUS uses a handheld transducer. This makes it easy to visualize the complete prostate, from the peripheral zone to the anterior, including the transition zone. During examination, the side-fire transducer is moved and angled to view the different zones and to identify any lesions in the ducts, which appear as hyper-echoic areas on the image. (Ghai et al., 2016; Luger, 2019)

This process is highly operator-dependent, especially in terms of skill and experience, transducer positioning, and the probe pressure applied during examination. This affects the reproducibility of the test and adds ambiguity. The aim is to understand changes in prostate visualization and overall image quality with respect to probe pressure and angle, using a phantom that mimics the prostate and its surrounding organs along with abnormalities such as lesions and calcifications of the ducts.

Context

The project is done in collaboration with the radiology department of CHIREC - Hôpital Delta.

Prerequisite

  • Python
  • Image Processing

Contact person

For more information please contact : toshna.manohar.panjwani@ulb.be , olivier.debeir@ulb.be


Endoscopic video sequence automatic annotation

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, pierre.eisendrath@stpierre-bru.be,

Research Unit : LISA

Description

Endoscopic video sequence automatic annotation

An esophagogastroduodenoscopy is a very common procedure: it consists in exploring the upper gastrointestinal tract of the patient with an endoscope.

The operator will visually assess the state of various sites of the upper tract to pose a diagnostic.

One problem is to assess the quality and the completeness of the exam performed.

The assessment is based on the visual quality of the images taken and on the validation that all the sites to be inspected have been visited.

The aim of the project is to continue a previous work that developed the hardware enabling video recording of the endoscopic exam, by adding temporal sequence analysis.
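
A simple per-frame quality measure can bootstrap this temporal analysis; the sketch below scores each recorded frame with the variance of the Laplacian, a classic no-reference blur metric (the file name is a placeholder).

```python
import cv2

def frame_sharpness(frame_bgr):
    """Variance of the Laplacian: low values flag blurry or
    uninformative endoscopic frames."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

cap = cv2.VideoCapture("exam.mp4")  # illustrative file name
scores = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    scores.append(frame_sharpness(frame))
cap.release()
# 'scores' over time gives a first temporal quality profile of the exam.
```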

Context

The project is done in collaboration with Dr Eisendrath of St-Pierre hospital.

Prerequisite

  • Python
  • Linux (acquisition system based on a Raspberry pi4)

Contact persons

For more information please contact : pierre.eisendrath@stpierre-bru.be, olivier.debeir@ulb.be




Cardiac fiber orientation extraction from high resolution CT-SCAN

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, nicolas.arribard@ulb.be,

Research Unit : LISA

Description

Cardiac fiber orientation extraction from high resolution CT-SCAN

An exceptional series of high resolution CT-scan of pathological hearts has been acquired.

The objective of the project is to build a 3D distribution of the cardiac muscle orientation (tractography), from these 3D high-resolution micro-CT volumes.

Tractography is usually done using MRI acquisitions; however, because the study material consists of anatomical specimens, 3D CT has been chosen. The work will therefore focus on developing the technique for this specific type of images.

Once the fiber tensor is identified, the anatomical fiber angle (as described in [Straatman21]) will be computed and compared between the different pathological groups.
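
A classic non-MRI starting point here is the 3D structure tensor: the eigenvector associated with the smallest eigenvalue of the locally averaged gradient outer product points along the fiber (direction of least intensity variation). A minimal NumPy/SciPy sketch, with illustrative smoothing scales and a toy volume:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fiber_orientation(volume, sigma_grad=1.0, sigma_avg=3.0):
    """Per-voxel fiber direction from the 3D structure tensor."""
    grads = np.gradient(gaussian_filter(volume, sigma_grad))
    # Tensor components, averaged over a local neighborhood.
    J = np.empty(volume.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma_avg)
    w, v = np.linalg.eigh(J)          # eigenvalues in ascending order
    return v[..., :, 0]               # unit fiber direction per voxel

vol = np.random.rand(32, 32, 32)      # stand-in for a micro-CT sub-volume
directions = fiber_orientation(vol)   # (32, 32, 32, 3) orientation field
```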

Context

The work is done in collaboration with Dr Arribard of the cardiology department of Erasme hospital.

Prerequisite

  • Python
  • Image processing

Contact person

For more information please contact : nicolas.arribard@ulb.be, olivier.debeir@ulb.be


references

Straatman21

Total body segmentation comparison

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, kevin.brouboni@hubruxelles.be, Thomas Giot

Research Unit : LISA

Description

Background

Medical image segmentation is a crucial step in many clinical applications, including body composition analysis (BCA) in radiology for prognosis purposes, and treatment planning in radiation therapy.

Deep learning (DL) has shown remarkable success in medical image segmentation due to its ability to learn complex features from large datasets while being fast and accurate. However, with the growing number of DL-based segmentation algorithms [1,2], it becomes imperative to evaluate and compare their performance to determine their effectiveness in different clinical scenarios.

Goals

  1. Conduct a comprehensive literature review on deep learning-based medical image segmentation algorithms.

  2. Implement/improve an in-house model and evaluate selected algorithms on a dedicated dataset, including CTs and the associated anatomical structures.

  3. Quantitatively compare the performance of different algorithms using metrics such as the Dice similarity coefficient, the Jaccard index, and BCA metrics (a minimal sketch of the overlap metrics follows below).
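
A minimal sketch of the two overlap metrics mentioned in goal 3, computed on binary masks (the toy masks are for illustration):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum() + 1e-9)

def jaccard(a, b):
    """Jaccard index (intersection over union)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return inter / (np.logical_or(a, b).sum() + 1e-9)

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
ref = np.zeros((64, 64), bool); ref[15:45, 15:45] = True
print(dice(pred, ref), jaccard(pred, ref))
```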

Prerequisites

  • Proficiency in programming languages such as Python and deep learning libraries such as TensorFlow or PyTorch.

  • Ability to critically analyze and interpret research papers related to medical image segmentation.

Contact person

For more information please contact : olivier.debeir@ulb.be, kevin.brouboni@hubruxelles.be


references

[1] Wasserthal, Jakob, et al. "TotalSegmentator: robust segmentation of 104 anatomical structures in CT images." arXiv preprint arXiv:2208.05868 (2022).

[2] Sundar, Lalith Kumar Shiyam, et al. "Fully Automated, Semantic Segmentation of Whole-Body 18F-FDG PET/CT Images Based on Data-Centric Artificial Intelligence." Journal of Nuclear Medicine 63.12 (2022): 1941-1948.

Frame-by-frame zonal mapping of prostate in micro-ultrasound images.

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, - , toshna.manohar.panjwani@ulb.be

Research Unit : LISA

Description

Frame-by-frame zonal mapping of prostate in micro-ultrasound images.

Context

Over the last few years, PCa screening has taken an ‘MRI-first’ approach. Multi-parametric MRI (mpMRI) with PI-RADS (Prostate Imaging Reporting and Data System) is routinely used by clinicians as a risk stratification tool in patients with elevated serum PSA. PI-RADS is a scoring system that focuses on the peripheral and transition zones of the prostate to predict the probability of clinically significant cancer based on T2-weighted (T2W), diffusion-weighted imaging (DWI), and dynamic contrast enhancement (DCE) findings. mpMRI images are also used for mapping lesions for transrectal ultrasound (TRUS)-based targeted biopsy and, more recently, with micro-ultrasound (microUS). (Gurwin et al., 2022; Panzone et al., 2022)

MicroUS is a novel ultrasound-based imaging method that provides a 300% improvement in image resolution in comparison to mpMRI for prostate cancer screening. The side-fire transducer of microUS works at 29 MHz and makes it possible to view the cellular and ductal anatomy from the peripheral zone to the anterior, including the transition zone. Focal lesions that generally go undetected with TRUS and mpMRI, even in enlarged prostates, can be seen as hyper-echoic areas on the image. MicroUS is standardized with the Prostate Risk Identification using Micro-Ultrasound (PRIMUS) system, which determines risk stratification, biopsy technique, and the course of treatment suitable for the patient. Overall, microUS with PRIMUS offers a safe, quick, reliable, and inexpensive system to screen for prostate cancer and also to perform targeted biopsies across all zones of the prostate, irrespective of its size, in a clinical setting. (Ghai et al., 2016; Klotz et al., 2021; Laurence Klotz, 2020; Luger, 2019)

Objective

The aim of the current study is to map the different zones of the prostate across a multi-frame microUS DICOM acquired during screening. Computer-aided detection (CADe) has been used with mpMRI before to help visualize lesions more accurately in the peripheral zone. A similar approach applied to microUS, using texture-based features and anatomical segmentation from MRI as an additional input to map and segment the zones of the prostate and the lesions in those zones, can ease the localization of targets for biopsy. (Ashouri et al., 2023; Greer et al., 2018; Liu et al., 2016)

Prerequisite

  • Python

  • Image processing

Contact person

For more information please contact : toshna.manohar.panjwani@ulb.be , olivier.debeir@ulb.be


Stress Recognition with Fusion of Multi-Modal Sensor Data

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, ayse.betul.oktay@ulb.be,

Research Unit : LISA

Description

Stress Recognition with Fusion of Multi-Modal Sensor Data

Background

Several studies have been performed on physiological signals collected with wearable devices (such as Empatica E4 wristband) [1,2].

Electrodermal activity, heart rate, skin temperature, and blood volume pulse are the most commonly used physiological data. There are also studies for stress recognition from facial images [3].

Objective

The objective of this project is to detect stress from the signals of the E4 sensor and from facial images. Several data fusion strategies will be investigated on an open dataset that includes data from 25 children playing exergames for physiotherapy.
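
For illustration, the sketch below contrasts the two simplest fusion strategies on stand-in features with scikit-learn; the feature dimensions, labels, and choice of classifier are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-in features: per-window statistics of E4 signals (EDA, HR, ...)
# and an embedding of the matching facial image; labels: stress or not.
e4_feats = rng.normal(size=(200, 8))
face_feats = rng.normal(size=(200, 32))
labels = rng.integers(0, 2, size=200)

# Early (feature-level) fusion: concatenate modalities, one classifier.
early = LogisticRegression(max_iter=1000).fit(
    np.hstack([e4_feats, face_feats]), labels)

# Late (decision-level) fusion: one classifier per modality, averaged scores.
clf_e4 = LogisticRegression(max_iter=1000).fit(e4_feats, labels)
clf_face = LogisticRegression(max_iter=1000).fit(face_feats, labels)
fused_prob = (clf_e4.predict_proba(e4_feats)[:, 1]
              + clf_face.predict_proba(face_feats)[:, 1]) / 2
```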

Prerequisites

  • Python

  • Signal and Image Processing

  • Deep and Machine Learning

Contact person

For more information please contact : ayse.betul.oktay@ulb.be and olivier.debeir@ulb.be


references

[1] Chandra, V., Priyarup, A., Sethia, D. (2021). Comparative Study of Physiological Signals from Empatica E4 Wristband for Stress Classification. In: Singh, M., Tyagi, V., Gupta, P.K., Flusser, J., Ören, T., Sonawane, V.R. (eds) Advances in Computing and Data Sciences. https://doi.org/10.1007/978-3-030-88244-0_21

[2] Kyamakya K, Al-Machot F, Haj Mosa A, Bouchachia H, Chedjou JC, Bagula A. Emotion and Stress Recognition Related Sensors and Machine Learning Technologies. Sensors (Basel). 2021 Mar 24;21(7):2273. doi: 10.3390/s21072273. PMID: 33804987; PMCID: PMC8037255.

[3] Jeon T, Bae HB, Lee Y, Jang S, Lee S. Deep-Learning-Based Stress Recognition with Spatial-Temporal Facial Information. Sensors. 2021; 21(22):7498. https://doi.org/10.3390/s21227498

DIR implementation for contour propagation in H&N adaptive RT

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, Nicolas.Pauly@ulb.be, younes.jourani@hubruxelles.be

Research Unit : LISA & MÉTROLOGIE NUCLÉAIRE - HUB

Description

DIR implementation for contour propagation in H&N adaptive RT

Background

The clinical workflow for patients in radiotherapy involves several steps, such as acquisition of multi-modal imaging (CT for treatment planning, MRI, and PET-CT), delineation of targets and organs at risk (OAR), treatment planning, and delivery. However, for some specific tumour sites, such as in head and neck (H&N) patients, the workflow can require some adjustments. This is due to anatomical changes during the treatment, which often require contour adaptation and replanning [1-4].

The delineation process in H&N is time-consuming due to the number of OAR close to the target. For this reason, many commercial software packages provide tools to help and speed up this step of the workflow. Deformable image registration (DIR) is widely applied for contour propagation between CTs [5, 6]. However, it remains the responsibility of the clinical physicists to implement and validate such workflows.

Deep learning is also an option for this subject, with the aim of comparing both approaches.

Goals

The purpose of this work is to implement and validate a clinical workflow for contour propagation with DIR in MIM software based on 24 H&N patients that underwent adaptive radiotherapy.

Prerequisite

  • Programming skills

Contact person

For more information please contact : olivier.debeir@ulb.be, nicolas.pauly@ulb.be, Sara Poeta (sara.poeta@bordet.be), Younes Jourani (younes.jourani@hubruxelles.be), Kevin Brou Boni (kevin.brouboni@hubruxelles.be)


references

[1] Schwartz D. et al, Journal of Oncology, 2011, « Adaptive Radiation Therapy for HN Cancer – Can an old goal evolve into new standard ? »

[2] Schwartz D. et al, IJROBP, 2011, « Adaptive Radiotherapy for HN cancer : initial clinical outcomes from a prospective trial »

[3] Castelli J. et al, Acta Oncologica, 2018, « Adaptive radiotherapy for HN cancer »

[4] Morgan H. E. et al, Cancers of the Head & Neck, 2020, « Adaptive radiotherapy for head and neck cancer »

[5] Brock C. et al, Medical Physics, 2017, « Use of image registration and fusion algorithms and techniques in radiotherapy : Report of the AAPM Radiation Therapy Committee Task Group No. 132 »

[6] Husein et al, BJR, 2021, « Clinical use, challenges, and barriers to implementation of deformable image registration in radiotherapy – the need for guidance and QA tools »

Sensitivity of the gamma index evaluation in the context of bone SBRT

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, Nicolas.Pauly@ulb.be, younes.jourani@hubruxelles.be

Research Unit : LISA & MÉTROLOGIE NUCLÉAIRE - HUB

Description

Sensitivity of the gamma index evaluation in the context of bone SBRT

Background

Stereotactic body radiation therapy (SBRT) is emerging as an effective technique to improve oncological outcomes in oligometastatic patients [1-3]. In bone metastasis management, the possibility to deliver a high dose per fraction with extreme precision shows excellent response in terms of local control [4-6]. There are international guidelines for target delineation based on patterns of failure in bone metastasis data and on expert consensus [7-8]. These recommendations often support the addition of a large part of the adjacent “normal appearing” bone spaces, leading to potential dose prescription compromises [9-11].

The delivery of these treatments is complex, with quality assurance measures in place to ensure they are delivered accurately. Patient-specific quality assurance (PSQA) is commonly used to examine the quality of intensity-modulated treatment plans, but its ability to detect clinically significant problems is unclear. External audits have found problems with delivered radiation doses despite internal PSQA giving the green light, raising questions about the sensitivity of clinical PSQA procedures [12].
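
For reference, the gamma index combines a dose-difference criterion with a distance-to-agreement criterion; the sketch below implements a simple 1D global gamma evaluation in NumPy (criteria of 3%/2 mm and the synthetic profiles are chosen for illustration).

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, x, dd=0.03, dta=2.0):
    """1D global gamma sketch. For each evaluated point, search all
    reference points for the minimum combined dose-difference /
    distance-to-agreement metric; gamma <= 1 means the point passes.
    dd: dose criterion (fraction of max reference dose); dta: mm."""
    dose_tol = dd * dose_ref.max()
    dx = x[None, :] - x[:, None]                    # pairwise distances
    ddose = dose_eval[None, :] - dose_ref[:, None]  # pairwise dose differences
    gamma_sq = (dx / dta) ** 2 + (ddose / dose_tol) ** 2
    return np.sqrt(gamma_sq.min(axis=0))            # per evaluated point

x = np.arange(0, 100, 1.0)                          # positions in mm
ref = np.exp(-((x - 50) / 15) ** 2)                 # reference profile
ev = np.exp(-((x - 51) / 15) ** 2) * 1.02           # shifted, rescaled "measurement"
passing_rate = (gamma_index(ref, ev, x) <= 1).mean()
print(passing_rate)
```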

Goals

The purpose of this work is to analyze the sensitivity of the PSQA procedure using the common gamma index evaluation in the context of bone SBRT. The work will contain:

  • Recalculation of already delivered plans with known modifications

  • Measurement of original and modified plans with standard PSQA tool

  • Comparison with advanced PSQA tool (if tool available at the time)

  • Correlation with local control and/or survival (if clinical data available at the time)

Contact person

For more information please contact : olivier.debeir@ulb.be, nicolas.pauly@ulb.be, Younes Jourani (younes.jourani@hubruxelles.be)


references

[1] Ost et al. Surveillance or metastasis-directed therapy for oligometastatic prostate cancer recurrence: a prospective, randomized, multicenter phase II trial. 2018, J Clin Oncol 36(5):446-453

[2] Palma et al. Stereotactic ablative radiotherapy for the comprehensive treatment of oligometastatic cancers: Long-term results of the SABR-COMET phase II randomized trial. 2020, J Clin Oncol 38(25):2830-2838

[3] Harrow et al. Stereotactic radiation for the comprehensive treatment of oligometastases (SABR-COMET): extended long-term outcomes. 2022, Int J Radiat Oncol Biol Phys 114(4):611-616

[4] Husain et al. Stereotactic body radiotherapy for de novo spinal metastases: systematic review. 2017, J Neurosurg Spine 27:295-302

[5] Spencer et al. Systematic review of the role of stereotactic radiotherapy for bone metastases. 2019, J Natl Cancer Inst 111(10):1023-1032

[6] Cao et al. An international pooled analysis of SBRT outcomes to oligometastatic spine and non-spine bone metastases. 2021, Radiother Oncol 164:98-103

[7] Cox et al. International spine radiosurgery consortium consensus guidelines for target volume definition in spinal stereotactic radiosurgery. 2012, Int J Radiat Oncol Biol Phys 83(5):e597-e602

[8] Nguyen et al. International multi-institutional patterns of contouring practice and clinical target volume recommendations for stereotactic body radiation therapy for non-spine bone metastases. 2022, Int J Radiat Oncol Biol Phys 112(2):351-360

Creation of an imaging phantom to simulate tumour heterogeneity

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, Bernardo.Innocenti@ulb.be, Zelda Paquier (zelda.paquier@hubruxelles.be), Fabian Kert (fabian.kert@hubruxelles.be)

Research Unit : LISA - BEAMS

Description

Background

Medical imaging scans are primarily diagnostic tools interpreted qualitatively by trained observers. However, it is evolving towards a more significant role in personalized healthcare, especially in oncology, by exploiting the quantitative information contained inside these digital images. Radiomics has emerged as a promising clinical tool in this context, as the extracted advanced quantitative features from the region of interest (ROI) could non-invasively provide information about the entire tumour region and the surrounding tissues.

Radiomics features, especially texture features that assess the heterogeneity of the ROI, are sensitive to imaging protocols, segmentation, image processing, etc. Testing those variations on patients is not always feasible (e.g. scanning the patient multiple times and in different centres), which explains the use of imaging phantoms instead. However, commercially available phantoms are usually homogeneous and not developed for texture analysis.
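
As an illustration of the texture features in question, the sketch below computes classic GLCM (gray-level co-occurrence matrix) features with scikit-image; a heterogeneity-mimicking phantom should yield values closer to patient data than a uniform one.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(roi_8bit):
    """Classic GLCM texture features over a region of interest, as used
    in radiomics heterogeneity analysis (one distance/angle for brevity)."""
    glcm = graycomatrix(roi_8bit, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return {name: float(graycoprops(glcm, name)[0, 0])
            for name in ("contrast", "homogeneity", "correlation", "energy")}

homogeneous = np.full((64, 64), 128, np.uint8)
heterogeneous = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
print(texture_features(homogeneous)["contrast"],    # ~0 for a uniform phantom
      texture_features(heterogeneous)["contrast"])  # high for a noisy one
```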

Goals

The purpose of this work is to create an imaging phantom that mimics tumour heterogeneity. First, the student will have to develop the phantom based on real tumours using 3D printing. This step includes choosing the materials and designing the phantom in a computer-aided design application. Then the student will compare the values of the texture features extracted from the phantom to those from real data.

Contact person

For more information please contact : olivier.debeir@ulb.be, bernardo.innocenti@ulb.be, Zelda Paquier (zelda.paquier@hubruxelles.be), Fabian Kert (fabian.kert@hubruxelles.be)


references

Lambin P, Leijenaar RTH, Deist TM, Peerlings J, De Jong EEC, Van Timmeren J, et al. Radiomics: The bridge between medical imaging and personalized medicine. Nat Rev Clin Oncol 2017;14:749–62. https://doi.org/10.1038/nrclinonc.2017.141.

Valladares A, Beyer T, Rausch I. Physical imaging phantoms for simulation of tumours heterogeneity in PET, CT and MRI: an overview of existing designs. Med Phys 2020. https://doi.org/10.1017/CBO9781107415324.004.

Li Y, Reyhan M, Zhang Y, Wang X, Zhou J, Zhang Y, et al. The impact of phantom design and material-dependence on repeatability and reproducibility of CT-based radiomics features. Med Phys 2022. https://doi.org/10.1002/mp.15491.

Development of an AI supported annotation tool for high resolution digital pathology slides

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, egor.zindy@ulb.be,

Research Unit : LISA-CMMI

Description

Development of an AI supported annotation tool for high resolution digital pathology slides

Recent developments in machine vision [Kirillov23] provide generic algorithms for image segmentation. Such a system is built on a general image corpus, but could be extended to domain-specific images.
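
Assuming [Kirillov23] refers to the Segment Anything Model (SAM), the sketch below wraps its public Python API into the basic point-prompted interaction of an annotation tool; the checkpoint path and click coordinates are placeholders, and whole-slide images would first need to be tiled around the click, since slides are far larger than SAM's input resolution.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor  # pip install segment-anything

# Load a pretrained SAM checkpoint (path is a placeholder).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

def annotate_click(image_rgb, x, y):
    """Return the best candidate mask for a single positive click at
    (x, y): the basic interaction of an AI-assisted annotation tool."""
    predictor.set_image(image_rgb)
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[x, y]]),
        point_labels=np.array([1]),   # 1 = foreground click
        multimask_output=True)
    return masks[np.argmax(scores)]   # best-scoring mask
```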

Context

The project is done in collaboration with researchers from the Center for Microscopy and Molecular Imaging (CMMI) of the ULB. The center is active in pre-clinical imaging from cell analysis to small animal imaging.

Objective

The aim of the research is to implement the state of the art in automatic image segmentation, to understand the limitations due to the change in image corpus and to specialise the algorithm in a specific domain.

The research will be carried out in direct collaboration with users, who will be able to provide their business expertise in return.

Prerequisite

Contact person

For more information please contact : olivier.debeir@ulb.be, egor.zindy@ulb.be


references

Kirillov23: Kirillov, A., et al. "Segment Anything." arXiv:2304.02643 (2023).

Study of the flow of encephalic venous sinuses in the presence of stenosis (vascular narrowing) by MRI.

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, thierry.metens@ulb.be,

Research Unit : LISA-HUB

Description

Study of the flow of encephalic venous sinuses in the presence of stenosis (vascular narrowing) by MRI.

Based on a map of velocity vectors at different times of the cardiac cycle measured by 3T MRI via a 4DPCA sequence, the objective would be to use a hydrodynamic model that would allow an evaluation of pressures before and after stenosis.

The work would therefore consist in programming code that calculates the pressures from the velocity vector field, based on a model adapted to non-laminar flows.
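
As a starting point, the sketch below evaluates the pressure gradient from the velocity field with the Navier-Stokes momentum equation, a standard model for relative pressure mapping in 4D flow MRI; note that this simple form assumes laminar flow, whereas the thesis targets a model adapted to non-laminar flows. The density, viscosity, and toy field are illustrative.

```python
import numpy as np

def pressure_gradient(v, v_prev, dt, dx, rho=1060.0, mu=3.5e-3):
    """grad(p) = -rho*(dv/dt + (v.grad)v) + mu*laplacian(v).
    v, v_prev: velocity fields (3, Nz, Ny, Nx) in m/s at two cardiac
    phases, components ordered (vz, vy, vx) to match the array axes;
    dt in s, dx isotropic voxel size in m. rho/mu: textbook blood values."""
    dvdt = (v - v_prev) / dt
    grads = [np.gradient(v[i], dx) for i in range(3)]  # grads[i][j] = d v_i / d x_j
    convective = np.stack([sum(v[j] * grads[i][j] for j in range(3))
                           for i in range(3)])
    laplacian = np.stack([sum(np.gradient(grads[i][j], dx, axis=j) for j in range(3))
                          for i in range(3)])
    return -rho * (dvdt + convective) + mu * laplacian  # (3, Nz, Ny, Nx), Pa/m

v_prev = 0.1 * np.random.rand(3, 16, 16, 16)
v = 0.1 * np.random.rand(3, 16, 16, 16)
gp = pressure_gradient(v, v_prev, dt=0.04, dx=1e-3)
# Integrating gp along a path through the sinus yields the relative
# pressure drop across the stenosis (up to an additive constant).
```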

Context

The project is done in collaboration with the radiology department of Erasme hospital.

Prerequisite

  • Python
  • image processing

Contact person

For more information please contact : thierry.metens@ulb.be, olivier.debeir@ulb.be

Data analysis of the first X patients treated on the MR LINAC

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, nicolas.pauly@ulb.be, Jennifer.dhont@bordet.be

Research Unit : LISA-HUB

Description

Data analysis of the first X patients treated on the MR LINAC

The project aims to solve an open issue in a certain domain of application.

Context

In radiotherapy, the goal is to deliver a high dose of ionizing radiation to the tumor while minimizing the dose to nearby healthy tissue. High geometrical treatment accuracy is therefore crucial. However, anatomical changes - of different orders of magnitude and over different time scales - can occur throughout the patient’s body. Some examples are motion caused by patient breathing, cardiac motion, bowel movement or bladder filling, but also tumor progression or regression. Clear visualization of the internal anatomy of the patient right before and during treatment is therefore of substantial interest. The MR LINAC combines an external beam radiotherapy system with an on-board magnetic resonance imaging device. The latter enables excellent soft-tissue visualization, without any additional radiation dose to the patient. Both real-time 2D images and sparse 3D images at high resolution are possible.

At the radiotherapy department of Institut Jules Bordet, such an MR LINAC is currently being implemented in the clinic, and will be only the second MR LINAC in clinical use in Belgium.

Objective

The purpose of this work is to perform an extensive evaluation of all the data generated during the treatment of the first X patients treated on the newly installed MR LINAC. This data will consist of medical images, motion data, clinical data, treatment workflow parameters, radiotherapy treatment plans, etc. The aim of this analysis is two-fold: (1) to validate and possibly optimize the clinical workflow and patient treatments, and (2) to generate hypotheses for future research projects.

Prerequisite

  • Strong interest in modern data science and programming is a must.

Contact person

For more information please contact : Jennifer.dhont@bordet.be, olivier.debeir@ulb.be, nicolas.pauly@ulb.be

3D respiratory pattern detection and monitoring

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, olivier.vanhove@hubruxelles.be,

Research Unit : LISA-HUB

Description

3D respiratory pattern detection and monitoring

The project aims to develop a 3D respiratory pattern recorder based on a 3D depth camera acquisition [VanHove23].

Context

The project is done in collaboration with the pneumology department of Erasme hospital.

Objective

  • validate the qualitative and quantitative measures acquired

  • develop a machine learning based system to detect specific respiratory patterns

  • complete the device with connection to external devices (EEG, ECG, force platform ...)

Several aspects are tackled, some related to image acquisition and processing, others to hardware development; depending on the candidate(s), different options can be taken.
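
On the image side, a first respiratory signal can be extracted very simply; the sketch below averages the depth over a chest ROI in each frame and detects breaths with SciPy (the ROI, frame rate, and synthetic frames are illustrative).

```python
import numpy as np
from scipy.signal import find_peaks

def respiratory_trace(depth_frames, roi):
    """Mean depth over a chest ROI per frame: chest motion modulates the
    camera-to-chest distance, giving a breathing signal."""
    y0, y1, x0, x1 = roi
    return np.array([f[y0:y1, x0:x1].mean() for f in depth_frames])

fps = 30
t = np.arange(0, 30, 1 / fps)
# Toy acquisition: 900 synthetic depth frames (mm) with 0.25 Hz breathing.
frames = [800 + 5 * np.sin(2 * np.pi * 0.25 * ti) + np.random.randn(120, 160)
          for ti in t]
trace = respiratory_trace(frames, roi=(40, 80, 60, 100))
peaks, _ = find_peaks(trace, distance=fps * 2)  # at most one peak per 2 s
rate_bpm = 60 * len(peaks) / t[-1]              # ~15 breaths per minute here
```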

Prerequisite

  • Python
  • image processing
  • Linux (the platform used is a Raspberry pi4)
  • C (opt. depending on the hardware development)

Contact person

For more information please contact : olivier.vanhove@hubruxelles.be, olivier.debeir@ulb.be


references

VanHove23



Development of a respiratory stress sensitivity test

Promotor, co-promotor, advisor : olivier.debeir@ulb.be, olivier.vanhove@hubruxelles.be,

Research Unit : LISA-HUB

Description

Project title

Development of a respiratory stress sensitivity test

Context

An asthmatic subject may be a hypo-perceiver and thus not take his or her treatment correctly. The development of such a test could make it possible to detect these subjects and thus facilitate their treatment.

Other applications are obviously possible (hyper-perceivers, athletes, COPD patients).

The project is done in collaboration with the Erasme pneumology department.

Objective

The thesis will consist in acquiring a series of physiological signals in a well-defined setting. Examples of constraints:

  • Random increase in constant inspiratory load over 5 steps

  • Load applied via a bi-directional valve (inspiratory/expiratory)

  • Washout mechanism of the device so that there is no CO2 stagnation

  • Measurement of exchanged volumes (sensor available)

  • CO2 measurement (sensor available)

  • Direct measurement of dyspnoea

  • Coupling with an EMG of the parasternal muscles (in contact with the FSM)

Prerequisite

  • C,
  • C++,
  • Python
  • hardware/software interfaces

Contact person

For more information please contact : olivier.vanhove@hubruxelles.be, olivier.debeir@ulb.be


RGBD input to immersive video display real-time processing

Promotor, co-promotor, advisor : gauthier.lafruit@ulb.be, Mehrdad Teratani, Eline Soetens, Daniele Bonatto

Research Unit : LISA-VR

Description

Objective

Immersive video displays are devices that support glasses-free visualization of 3D content by projecting parallax-aware images over various viewpoints. Typically, a couple of RGBD images are captured, out of which hundreds of parallax images are synthesized for projection onto the display. Reaching real-time performance is only possible with specialized acceleration for image processing, e.g. on GPU. The objective of the Master thesis is to start from a known processing pipeline in the field, adapting and accelerating it to the specific devices under test, eventually showing a proof-of-concept of real-time 3D visual communication.
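
The core of such a pipeline is depth-image-based rendering (DIBR); the sketch below shows the naive NumPy version, reprojecting one RGBD image to a virtual viewpoint. The intrinsics and pose are placeholders, and a real implementation would handle occlusions, filtering, and inpainting on GPU (e.g. CUDA or OpenGL).

```python
import numpy as np

def reproject(rgb, depth, K, R, t):
    """DIBR core: unproject each pixel with its depth, apply the target
    camera pose (R, t), and splat into the novel view. Forward point
    splatting only; occlusion handling and hole filling are omitted."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    pts = np.linalg.inv(K) @ pix * depth.reshape(-1)  # 3D points, source camera
    pts = R @ pts + t[:, None]                        # into the target frame
    proj = K @ pts
    uu = (proj[0] / proj[2]).round().astype(int)
    vv = (proj[1] / proj[2]).round().astype(int)
    out = np.zeros_like(rgb)
    ok = (uu >= 0) & (uu < w) & (vv >= 0) & (vv < h) & (pts[2] > 0)
    out[vv[ok], uu[ok]] = rgb.reshape(-1, 3)[ok]
    return out

K = np.array([[500, 0, 160], [0, 500, 120], [0, 0, 1.0]])  # toy intrinsics
rgb = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
depth = np.full((240, 320), 2.0)                           # metres
baseline = np.array([0.01, 0.0, 0.0])                      # 1 cm virtual shift
novel = reproject(rgb, depth, K, np.eye(3), baseline)
```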

Prerequisite

  • Strong background in multimedia signal processing
  • C/C++ programming (GPU CUDA/OpenCL and/or OpenGL programming is a plus)

Contact person

For more information please contact : gauthier.lafruit@ulb.be


Updated on April 13, 2023