ACS Research Group Colloquium
Fri. Sep. 26, 12:30 PM – 01:20 PM
Contact: Andrea Wiebe
Location: 3M63
Deep Learning & Medical Imaging
Students from the UWinnipeg Applied Computer Science department will jointly present their research on medical imaging applications of machine learning.
1. Surgery-aware Generative AI for Intraoperative MRI Generation: Mask Guidance and Attention Mechanisms
Intraoperative MRI (iMRI) often suffers from global quality degradation due to rapid, highly accelerated acquisitions, undermining its diagnostic utility during glioma resections. While generative AI methods can transfer the high-quality appearance of preoperative MRI (pMRI) to iMRI, they may hallucinate resected tumors near cavities, creating safety-critical false positives. To jointly address global enhancement and pathology preservation, we propose SurgAware-GAN, featuring: (1) green masks that downweight losses within resection/tumor regions (λ_g=0.3) to suppress false restoration; (2) yellow masks (a 5 mm peri-tumoral ring) that upweight losses to enforce anatomical clarity and boundary definition; and (3) CBAM-augmented generators that improve global and local feature representations. Trained on the ReMIND dataset (n=114), SurgAware-GAN attains strong global SSIM and near-0% tumor hallucination inside cavities, while providing practical runtime (~8.3 s/volume), enabling near–real-time intraoperative enhancement.
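The mask-weighted loss described above can be pictured as a simple per-pixel reweighting. Below is a minimal sketch, assuming an L1 reconstruction term and PyTorch tensors; the function name, the choice of L1, and the yellow-ring weight (the abstract only fixes λ_g = 0.3) are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def mask_weighted_l1(pred, target, green_mask, yellow_mask,
                     lambda_green=0.3, lambda_yellow=2.0):
    """Per-pixel L1 loss reweighted by surgical masks (illustrative sketch).

    green_mask  : 1 inside resection/tumor regions; the loss there is
                  downweighted to suppress false restoration of resected tissue.
    yellow_mask : 1 inside a peri-tumoral ring; the loss there is upweighted
                  to sharpen anatomy near the cavity boundary.
    lambda_yellow is an assumed value; only lambda_green = 0.3 is given.
    """
    weights = torch.ones_like(target)
    weights = torch.where(green_mask.bool(), lambda_green * weights, weights)
    weights = torch.where(yellow_mask.bool(), lambda_yellow * weights, weights)
    per_pixel = F.l1_loss(pred, target, reduction="none")
    return (weights * per_pixel).mean()
```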
BIO: I am a Mitacs Globalink Research intern currently conducting a three-month research placement at The University of Winnipeg under the supervision of Dr. Qian Liu. I am from Beijing University of Technology (BJUT), China. My research focuses on generative AI for medical imaging, with an emphasis on safe and faithful generation and enhancement of intraoperative MRI.
2. Towards Robust Alzheimer’s Disease Classification with Multimodal Fusion
Alzheimer's Disease (AD) poses a significant global burden, yet current unimodal diagnostic approaches using MRI alone miss critical complementary disease markers essential for accurate early detection. We developed a deep multimodal fusion framework that combines structural MRI with structured clinical data for enhanced AD diagnosis. Our approach employed FT-Transformer for tabular clinical variables and DeiT for brain MRI processing, integrating modalities through early concatenation and mid-fusion via modality-specific projections. Evaluation across five public AD datasets demonstrated that our mid-fusion approach consistently outperformed unimodal and early-fusion baselines, confirming that deep multimodal methods substantially enhance diagnostic accuracy. In this presentation, we will introduce our novel mid-fusion framework, discuss its performance across geographically diverse datasets, and highlight the interpretability aspects of our methodology.
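As a rough illustration of the mid-fusion step, the sketch below projects each modality's embedding into a shared space before a joint classifier. The embedding dimensions, layer sizes, and class count are assumptions, since the abstract does not specify them; the backbones themselves (DeiT for imaging, FT-Transformer for tabular data) are treated as given feature extractors.

```python
import torch
import torch.nn as nn

class MidFusionHead(nn.Module):
    """Mid-fusion of image and tabular embeddings via modality-specific
    projections, followed by a joint classifier (illustrative sketch).

    img_dim / tab_dim are the embedding sizes produced by the image backbone
    (e.g. DeiT) and the tabular backbone (e.g. FT-Transformer); the values
    below are placeholders.
    """
    def __init__(self, img_dim=768, tab_dim=192, proj_dim=256, n_classes=2):
        super().__init__()
        self.img_proj = nn.Sequential(nn.Linear(img_dim, proj_dim), nn.ReLU())
        self.tab_proj = nn.Sequential(nn.Linear(tab_dim, proj_dim), nn.ReLU())
        self.classifier = nn.Linear(2 * proj_dim, n_classes)

    def forward(self, img_emb, tab_emb):
        # Project each modality, concatenate, then classify jointly.
        z = torch.cat([self.img_proj(img_emb), self.tab_proj(tab_emb)], dim=-1)
        return self.classifier(z)
```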
BIO: Manjot Sran and Sujay Rittikar are thesis-based Master's students in Applied Computer Science at the University of Winnipeg, supervised by Dr. Sheela Ramanna. Both hold Bachelor's degrees in Computer Science and Engineering—Manjot from I.K. Gujral Punjab Technical University, India, and Sujay from Shivaji University, India. This research project was conducted under the guidance of Dr. Sheela Ramanna and Dr. Liu.
Manjot specializes in multimodal information processing, affective computing, and healthcare AI, with expertise in fusion strategies and soft-computing classification methods. Sujay focuses on language models, multilingualism, and multimodal healthcare AI, and is supported by the UW President's Scholarship for World Leaders. He brings industry experience from software companies in the real estate, finance, and compliance sectors. Inspired by this project, the two collaborate through their startup on solutions that support dementia caregivers.
3. Deep Learning for Fetal Head Segmentation: A Comparative Study
Ultrasound is frequently used during pregnancy checkups, helping clinicians gain insight into fetal growth and health. One measurement they rely on is head circumference, which reflects healthy brain growth in the newborn. Surprisingly, it is still commonly measured by hand. In this talk, we will show how we can teach computers to measure head circumference automatically from ultrasound images using different machine learning models, and what different insights we gain from each model's results. We compare three popular approaches – a U-Net-style convolutional network, an EfficientNet variant, and a Vision Transformer – on the HC18 dataset (which contains baby ultrasound pictures with expert-drawn head outlines), and we test each one in two ways: with the raw ultrasound images and with "cleaned up" (denoised) images.
Two surprises make the story fun: first, the classic convolutional approach with a ResNet backbone still wins overall; second, denoising, something we usually love, sometimes makes the answers worse! We will show why, how the results differ across the three stages of pregnancy, and why telling the model to draw a clear outline helps a lot.
If you’re curious about how optimization, image/signal processing, and geometry (ellipse fitting!) meet real medical imaging, or you’re thinking about projects that blend math with impact, this talk is for you. No specific background or domain knowledge required, just curiosity.
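For the curious, turning a predicted segmentation into a head circumference typically comes down to fitting an ellipse to the mask outline and approximating its perimeter. The sketch below is one way to do that with OpenCV and Ramanujan's approximation; the function name and the default pixel spacing are placeholders, and the exact post-processing used in our study may differ.

```python
import cv2
import numpy as np

def head_circumference_mm(mask, pixel_size_mm=1.0):
    """Estimate head circumference (mm) from a binary fetal-head mask.

    Fits an ellipse to the largest contour and approximates its perimeter
    with Ramanujan's formula. pixel_size_mm is a placeholder default;
    HC18 provides per-image pixel spacing.
    """
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)
    if len(contour) < 5:  # fitEllipse needs at least 5 points
        return None
    (_, _), (major_px, minor_px), _ = cv2.fitEllipse(contour)
    # Semi-axes in millimetres.
    a = 0.5 * major_px * pixel_size_mm
    b = 0.5 * minor_px * pixel_size_mm
    # Ramanujan's approximation of the ellipse perimeter.
    h = ((a - b) ** 2) / ((a + b) ** 2)
    return np.pi * (a + b) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h)))
```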
BIO: Sumaiya Sultana Dola is a thesis-based master’s student, and Mir Md Taosif Nur is a prospective thesis-based master’s student in Applied Computer Science, both supervised by Dr. Camilo Valderrama. Both hold a BSc in Computer Science and Engineering from BRAC University, Bangladesh. Inspired by the HC18 grand challenge, they carried out this research under the guidance of Dr. Camilo Valderrama and Dr. Qian Liu. Dola's research focuses on AI for maternal-infant health, medical imaging, data science, public health analytics, multimodal health AI, machine learning, and deep learning, and is backed by a Graduate Student Research Award. Taosif’s expertise lies in signal processing, natural language processing, image processing, pattern recognition, data analytics, machine learning, and deep learning; he is supported by a President’s Scholarship for World Leaders (Graduate), 2025, and brings one year of industry experience as a software developer.