segmentation | VALIANT /valiant Advanced Lab for Immersive AI Translation (VALIANT) Wed, 25 Feb 2026 02:26:30 +0000 en-US hourly 1 UNISELF: A unified network with instance normalization and self-ensembled lesion fusion for multiple sclerosis lesion segmentation /valiant/2026/02/25/uniself-a-unified-network-with-instance-normalization-and-self-ensembled-lesion-fusion-for-multiple-sclerosis-lesion-segmentation/ Wed, 25 Feb 2026 02:26:30 +0000 /valiant/?p=6064 Zhang, Jinwei; Zuo, Lianrui; Dewey, Blake E.; Remedios, Samuel W.; Liu, Yihao; Hays, Savannah P.; Pham, Dzung L.; Mowry, Ellen M.; Newsome, Scott Douglas; Calabresi, Peter Arthur; Saidha, Shiv; Carass, Aaron; & Prince, Jerry L. (2026). UNISELF: A unified network with instance normalization and self-ensembled lesion fusion for multiple sclerosis lesion segmentation. Medical Image Analysis, 109, 103954.

Multiple sclerosis (MS) causes lesions, or areas of damage, in the brain that can be seen on multicontrast magnetic resonance (MR) images. Automatically segmenting, or outlining, these lesions using deep learning (DL) can improve speed and consistency compared to manual tracing by experts. Although many DL methods perform well on data similar to what they were trained on, they often struggle when tested on new datasets from different hospitals or scanners, a problem known as poor out-of-domain generalization.

To address this issue, the researchers developed a new method called UNISELF. The goal of UNISELF is to achieve high segmentation accuracy within the original training domain while also performing well on data from different sources. UNISELF introduces a test-time self-ensembled lesion fusion strategy, which combines multiple predictions at test time to improve accuracy. It also uses test-time instance normalization (TTIN) of latent features, meaning it adjusts internal feature representations during testing to better handle domain shifts and missing input contrasts, such as when certain MR image types are unavailable.
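As a rough illustration of the test-time instance normalization idea (a minimal sketch, not the authors' implementation, which operates on latent network features), the code below renormalizes a feature map using statistics computed from the test image itself, so a global intensity shift from a different scanner is largely cancelled:

```python
import math

def instance_normalize(feature_map, eps=1e-5):
    """Normalize one channel's feature map using its own mean and variance,
    computed per test image (no stored training-domain statistics)."""
    values = [v for row in feature_map for v in row]
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    scale = 1.0 / math.sqrt(var + eps)
    return [[(v - mean) * scale for v in row] for row in feature_map]

# A feature map and a "domain-shifted" copy (global intensity scale + offset),
# mimicking the effect of a different scanner or protocol.
fmap = [[1.0, 2.0], [3.0, 4.0]]
shifted = [[2.0 * v + 10.0 for v in row] for row in fmap]

norm_a = instance_normalize(fmap)
norm_b = instance_normalize(shifted)

# After per-instance normalization the two versions are nearly identical,
# which is why instance statistics help under domain shift.
max_diff = max(abs(a - b) for ra, rb in zip(norm_a, norm_b)
               for a, b in zip(ra, rb))
```

Because the mean and scale come from the test instance, the affine intensity change is removed entirely (up to the small `eps` term), without any retraining.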

The model was trained using data from the ISBI 2015 longitudinal MS segmentation challenge. On the official test dataset, UNISELF ranked among the top-performing methods. Importantly, when evaluated on out-of-domain datasets with different scanners, imaging protocols, and missing contrasts—including the MICCAI 2016 dataset, the UMCL dataset, and a private multisite dataset—UNISELF outperformed other benchmark models trained on the same ISBI data. These results suggest that UNISELF is both accurate and robust to real-world variations in MR imaging, making it a promising tool for automated MS lesion segmentation across diverse clinical settings.

Fig. 1. An illustration of the spatial augmentation, network input, and network output during training in UNISELF.

Glo-In-One-v2: holistic identification of glomerular cells, tissues, and lesions in human and mouse histopathology /valiant/2026/01/28/glo-in-one-v2-holistic-identification-of-glomerular-cells-tissues-and-lesions-in-human-and-mouse-histopathology/ Wed, 28 Jan 2026 17:02:41 +0000 /valiant/?p=5701 Yu, Lining; Yin, Mengmeng; Deng, Ruining; Liu, Quan; Yao, Tianyuan; Cui, Can; Guo, Junlin; Wang, Yu; Wang, Yaohong; Zhao, Shilin; Yang, Haichun; & Huo, Yuankai. (2025). Glo-In-One-v2: holistic identification of glomerular cells, tissues, and lesions in human and mouse histopathology. Journal of Medical Imaging, 12(6), 61406.

Segmenting structures and lesions inside kidney glomeruli usually requires expert nephropathologists to carefully examine tissue morphology, a process that is time-consuming and can vary between observers. Building on their earlier Glo-In-One toolkit for detecting and segmenting glomeruli, the authors developed Glo-In-One-v2, which adds more detailed segmentation capabilities. They created a large annotated dataset containing 14 labels that cover tissue regions, cell types, and glomerular lesions across 23,529 glomeruli from both human and mouse kidney histopathology images, making it one of the largest datasets of its kind.

Using this dataset, they trained a single deep learning model with a dynamic head architecture to segment all 14 classes from partially labeled whole slide images. The model was trained on 368 annotated kidney slides and learned to identify five intraglomerular tissue types and nine lesion types. It achieved solid performance, with an average Dice similarity coefficient of 76.5 percent for glomerulus segmentation. In addition, transfer learning, where knowledge learned from mouse data is applied to human data, improved lesion segmentation accuracy by more than 3 percent across lesion types.

Overall, this work introduces a publicly available convolutional neural network that enables detailed, multiclass segmentation of glomerular tissue and lesions, helping reduce manual workload and variability in kidney pathology analysis.
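The partially labeled training setup can be illustrated with a toy loss: each training image carries ground truth for only one class, so only that class's output channel is penalized. The class names and the squared-error loss below are illustrative stand-ins, not the paper's actual objective or architecture:

```python
def partial_label_loss(pred, target, labeled_class):
    """Toy loss for partially labeled training: each image has ground truth
    for only one class, so the error is computed on that class's output
    channel and all other output channels are ignored."""
    channel = pred[labeled_class]
    return sum((p - t) ** 2 for p, t in zip(channel, target)) / len(target)

# One training image labeled only for podocytes: the mesangium channel
# (class names here are illustrative) contributes nothing to the loss.
pred = {"podocyte": [0.9, 0.1], "mesangium": [0.2, 0.8]}
loss = partial_label_loss(pred, target=[1, 0], labeled_class="podocyte")
```

This is what lets one network learn all 14 classes from slides that each annotate only a subset of them: unlabeled channels simply receive no gradient from that image.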

Fig. 1.

This figure presents fine-grained classes of intraglomerular tissue, including Bowman’s capsule (Cap), tuft (Tuft), mesangium (Mes), mesangial cells (Mec), and podocytes (Pod). It also highlights the glomerular lesions observed in rodents and humans: AH, adhesion; CD, capsular drop; GS, global sclerosis; HS, hyalinosis; ML, mesangial lysis; MA, microaneurysm; NS, nodular sclerosis; ME, mesangial expansion; SS, segmental sclerosis.

Towards fair decentralized benchmarking of healthcare AI algorithms with the Federated Tumor Segmentation (FeTS) challenge /valiant/2025/07/28/towards-fair-decentralized-benchmarking-of-healthcare-ai-algorithms-with-the-federated-tumor-segmentation-fets-challenge/ Mon, 28 Jul 2025 13:58:31 +0000 /valiant/?p=4769 Zenk, Maximilian, Baid, Ujjwal, Pati, Sarthak, Linardos, Akis, Edwards, Brandon, Sheller, Micah, Foley, Patrick, Aristizabal, Alejandro, Zimmerer, David, Gruzdev, Alexey, Martin, Jason, Shinohara, Russell T., Reinke, Annika, Isensee, Fabian, Parampottupadam, Santhosh, Parekh, Kaushal, Floca, Ralf, Kassem, Hasan, Baheti, Bhakti, Thakur, Siddhesh, Chung, Verena, Kushibar, Kaisar, Lekadir, Karim, Jiang, Meirui, Yin, Youtan, Yang, Hongzheng, Liu, Quande, Chen, Cheng, Dou, Qi, Heng, Pheng-Ann, Zhang, Xiaofan, Zhang, Shaoting, Khan, Muhammad Irfan, Azeem, Mohammad Ayyaz, Jafaritadi, Mojtaba, Alhoniemi, Esa, Kontio, Elina, Khan, Suleiman A., Mächler, Leon, Ezhov, Ivan, Kofler, Florian, Shit, Suprosanna, Paetzold, Johannes C., Loehr, Timo, Wiestler, Benedikt, Peiris, Himashi, Pawar, Kamlesh, Zhong, Shenjun, Chen, Zhaolin, Hayat, Munawar, Egan, Gary, Harandi, Mehrtash, Isik Polat, Ece, Polat, Gorkem, Kocyigit, Altan, Temizel, Alptekin, Tuladhar, Anup, Tyagi, Lakshay, Souza, Raissa, Forkert, Nils D., Mouches, Pauline, Wilms, Matthias, Shambhat, Vishruth, Maurya, Akansh, Danannavar, Shubham Subhas, Kalla, Rohit, Anand, Vikas Kumar, Krishnamurthi, Ganapathy, Nalawade, Sahil, Ganesh, Chandan, Wagner, Ben, Reddy, Divya, Das, Yudhajit, Yu, Fang F., Fei, Baowei, Madhuranthakam, Ananth J., Maldjian, Joseph, Singh, Gaurav, Ren, Jianxun, Zhang, Wei, An, Ning, Hu, Qingyu, Zhang, Youjia, Zhou, Ying, Siomos, Vasilis, Tarroni, Giacomo, Passerrat-Palmbach, Jonathan, Rawat, Ambrish, Zizzo, Giulio, Kadhe, Swanand Ravindra, Epperlein, Jonathan P., Braghin, Stefano, Wang, Yuan, Kanagavelu, Renuga, Wei, Qingsong, Yang, Yechao, Liu, Yong, Kotowski, Krzysztof, Adamski, Szymon,
Machura, Bartosz, Malara, Wojciech, Zarudzki, Lukasz, Nalepa, Jakub, Shi, Yaying, Gao, Hongjian, Avestimehr, Salman, Yan, Yonghong, Akbar, Agus S., Kondrateva, Ekaterina, Yang, Hua, Li, Zhaopei, Wu, Hung-Yu, Roth, Johannes, Saueressig, Camillo, Milesi, Alexandre, Nguyen, Quoc D., Gruenhagen, Nathan J., Huang, Tsung-Ming, Ma, Jun, Singh, Har Shwinder H., Pan, Nai-Yu, Zhang, Dingwen, Zeineldin, Ramy A., Futrega, Michal, Yuan, Yading, Conte, Gian Marco, Feng, Xue, Pham, Quan D., Xia, Yong, Jiang, Zhifan, Luu, Huan Minh, Dobko, Mariia, Carré, Alexandre, Tuchinov, Bair, Mohy-ud-Din, Hassan, Alam, Saruar, Singh, Anup, Shah, Nameeta, Wang, Weichung, Sako, Chiharu, Bilello, Michel, Ghodasara, Satyam, Mohan, Suyash, Davatzikos, Christos, Calabrese, Evan, Rudie, Jeffrey, Villanueva-Meyer, Javier, Cha, Soonmee, Hess, Christopher, Mongan, John, Ingalhalikar, Madhura, Jadhav, Manali, Pandey, Umang, Saini, Jitender, Huang, Raymond Y., Chang, Ken, To, Minh-Son, Bhardwaj, Sargam, Chong, Chee, Agzarian, Marc, Kozubek, Michal, Lux, Filip, Michálek, Jan, Matula, Petr, Kerškovský, Miloš, Kopřivová, Tereza, Dostál, Marek, Vybíhal, Václav, Pinho, Marco C., Holcomb, James, Metz, Marie, Jain, Rajan, Lee, Matthew D., Lui, Yvonne W., Tiwari, Pallavi, Verma, Ruchika, Bareja, Rohan, Yadav, Ipsa, Chen, Jonathan, Kumar, Neeraj, Gusev, Yuriy, Bhuvaneshwar, Krithika, Sayah, Anousheh, Bencheqroun, Camelia, Belouali, Anas, Madhavan, Subha, Colen, Rivka R., Kotrotsou, Aikaterini, Vollmuth, Philipp, Brugnara, Gianluca, Preetha, Chandrakanth J., Sahm, Felix, Bendszus, Martin, Wick, Wolfgang, Mahajan, Abhishek, Balaña, Carmen, Capellades, Jaume, Puig, Josep, Choi, Yoon Seong, Lee, Seung-Koo, Chang, Jong Hee, Ahn, Sung Soo, Shaykh, Hassan F., Herrera-Trujillo, Alejandro, Trujillo, Maria, Escobar, William, Abello, Ana, Bernal, Jose, Gómez, Jhon, LaMontagne, Pamela, Marcus, Daniel S., Milchenko, Mikhail, Nazeri, Arash, Landman, Bennett, Ramadass, Karthik, Xu, Kaiwen, Chotai, Silky, Chambless, Lola B., 
Mistry, Akshitkumar, Thompson, Reid C., Srinivasan, Ashok, Bapuraj, J. Rajiv, Rao, Arvind, Wang, Nicholas, Yoshiaki, Ota, Moritani, Toshio, Turk, Sevcan, Lee, Joonsang, Prabhudesai, Snehal, Garrett, John, Larson, Matthew, Jeraj, Robert, Li, Hongwei, Weiss, Tobias, Weller, Michael, Bink, Andrea, Pouymayou, Bertrand, Sharma, Sonam, Tseng, Tzu-Chi, Adabi, Saba, Xavier Falcão, Alexandre, Martins, Samuel B., Teixeira, Bernardo C. A., Sprenger, Flávia, Menotti, David, Lucio, Diego R., Niclou, Simone P., Keunen, Olivier, Hau, Ann-Christin, Pelaez, Enrique, Franco-Maldonado, Heydy, Loayza, Francis, Quevedo, Sebastian, McKinley, Richard, Slotboom, Johannes, Radojewski, Piotr, Meier, Raphael, Wiest, Roland, Trenkler, Johannes, Pichler, Josef, Necker, Georg, Haunschmidt, Andreas, Meckel, Stephan, Guevara, Pamela, Torche, Esteban, Mendoza, Cristobal, Vera, Franco, Ríos, Elvis, López, Eduardo, Velastin, Sergio A., Choi, Joseph, Baek, Stephen, Kim, Yusung, Ismael, Heba, Allen, Bryan, Buatti, John M., Zampakis, Peter, Panagiotopoulos, Vasileios, Tsiganos, Panagiotis, Alexiou, Sotiris, Haliassos, Ilias, Zacharaki, Evangelia I., Moustakas, Konstantinos, Kalogeropoulou, Christina, Kardamakis, Dimitrios M., Luo, Bing, Poisson, Laila M., Wen, Ning, Vallières, Martin, Loutfi, Mahdi Ait Lhaj, Fortin, David, Lepage, Martin, Morón, Fanny, Mandel, Jacob, Shukla, Gaurav, Liem, Spencer, Alexandre, Gregory S., Lombardo, Joseph, Palmer, Joshua D., Flanders, Adam E., Dicker, Adam P., Ogbole, Godwin, Oyekunle, Dotun, Odafe-Oyibotha, Olubunmi, Osobu, Babatunde, Shu’aibu Hikima, Mustapha, Soneye, Mayowa, Dako, Farouk, Dorcas, Adeleye, Murcia, Derrick, Fu, Eric, Haas, Rourke, Thompson, John A., Ormond, David Ryan, Currie, Stuart, Fatania, Kavi, Frood, Russell, Simpson, Amber L., Peoples, Jacob J., Hu, Ricky, Cutler, Danielle, Moraes, Fabio Y., Tran, Anh, Hamghalam, Mohammad, Boss, Michael A., Gimpel, James, Kattil Veettil, Deepak, Schmidt, Kendall, Cimino, Lisa, Price, Cynthia, Bialecki, Brian, 
Marella, Sailaja, Apgar, Charles, Jakab, Andras, Weber, Marc-André, Colak, Errol, Kleesiek, Jens, Freymann, John B., Kirby, Justin S., Maier-Hein, Lena, Albrecht, Jake, Mattson, Peter, Karargyris, Alexandros, Shah, Prashant, Menze, Bjoern, Maier-Hein, Klaus, & Bakas, Spyridon. (2025). Towards fair decentralized benchmarking of healthcare AI algorithms with the Federated Tumor Segmentation (FeTS) challenge. Nature Communications, 16(1), 6274.

Competitions are commonly used to test and compare computer algorithms for analyzing medical images. However, these competitions usually rely on small, carefully chosen datasets collected from only a few hospitals. This doesn’t reflect the reality of working with patient data from many different medical centers, which can vary a lot. The Federated Tumor Segmentation (FeTS) Challenge was designed to better reflect real-world conditions. It tests two things: (i) how well federated learning methods (where data stays at each hospital and only the model updates are shared) can combine information from different locations, and (ii) how well the latest tumor segmentation algorithms perform across a wide range of data sources.

The challenge used brain tumor data from many hospitals to simulate how federated learning would work in real life. The results showed that algorithms that could adaptively combine information from different sites did better, and selecting which hospitals (clients) to involve at each step helped save time and resources. When the best segmentation algorithms were tested using brain scan data from 32 institutions around the world, they generally performed well, but in some cases, they struggled due to differences in the data. This shows that using data from many sites is important for making sure healthcare AI tools actually work in the real world.
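To make the federated setup concrete, here is a toy sketch of weighted federated averaging combined with a simple form of selective client sampling. The weighting and selection rules are illustrative only, not any particular team's method from the challenge:

```python
def federated_average(client_updates, client_sizes):
    """Aggregate model parameters from several sites, weighting each
    client's update by its number of local training cases (FedAvg-style).
    Only parameters travel between sites; the patient data never moves."""
    total = sum(client_sizes)
    n_params = len(client_updates[0])
    return [sum(u[i] * s for u, s in zip(client_updates, client_sizes)) / total
            for i in range(n_params)]

def select_clients(client_sizes, k):
    """Pick the k largest sites for this round -- one simple form of
    selective sampling that reduces communication cost per round."""
    ranked = sorted(range(len(client_sizes)), key=lambda i: -client_sizes[i])
    return sorted(ranked[:k])

# Three hospitals holding different amounts of data; each reports its
# locally updated parameters (a single weight here, for clarity).
updates = [[1.0], [2.0], [10.0]]
sizes = [10, 30, 60]

chosen = select_clients(sizes, k=2)
agg = federated_average([updates[i] for i in chosen],
                        [sizes[i] for i in chosen])
```

Sampling only some clients per round trades a little aggregation fidelity for lower communication cost, which is one reason the selective strategies described above saved time and resources.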

Fig. 1: Concept and main findings of the Federated Tumor Segmentation (FeTS) Challenge.

The FeTS challenge is an international competition to benchmark brain tumor segmentation algorithms, involving data contributors, participants, and organizers across the globe. Test data hubs are geographically distributed while training data is centralized. Participants include those from the 2021 and 2022 challenges. Task 1 focused on simulated federated learning, and we consistently saw an increase in performance by teams utilizing variants of selective sampling in their federated aggregation. In Task 2, submissions are distributed among the test data hubs for evaluation. As a representative example, the top-ranked model shows good average segmentation performance (measured by the Dice Similarity Coefficient, DSC) but also failures for individual cases. Cases with empty tumor regions and data sites with fewer than 40 cases are not shown in the strip plot. Source data are provided as a Source Data file.

Adaptive Patching for High-resolution Image Segmentation with Transformers /valiant/2025/01/28/adaptive-patching-for-high-resolution-image-segmentation-with-transformers/ Tue, 28 Jan 2025 14:43:05 +0000 /valiant/?p=3745 Zhang, Enzhi; Lyngaas, Isaac; Chen, Peng; Wang, Xiao; Igarashi, Jun; Huo, Yuankai; Munetomo, Masaharu; & Wahib, Mohamed. (2024). Adaptive Patching for High-resolution Image Segmentation with Transformers. International Conference for High Performance Computing, Networking, Storage and Analysis (SC 2024).

Attention-based models are becoming increasingly popular for tasks like image analysis and segmentation, but working with high-resolution images, like those used in pathology, presents challenges. The typical method involves breaking the images into small patches and processing them in sequence, but this becomes very resource-intensive for high-resolution images, making it difficult to use these models efficiently. One solution has been to either use complex multi-resolution models or simplified attention methods, but these come with their own challenges.

In this study, we drew inspiration from a technique used in high-performance computing called Adaptive Mesh Refinement (AMR). Instead of dividing the entire image into patches upfront, we dynamically adjust the patches based on the details in the image, allowing us to significantly reduce the number of patches needed for processing. This approach adds minimal extra work and can be easily used with any attention-based model. Our method not only improved the quality of image segmentation compared to existing models, but it also increased processing speed by an average of 6.9 times, even for images with very high resolutions (up to 64K²), while running on up to 2,048 GPUs.
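The AMR-inspired idea can be sketched as quadtree refinement: a patch is split only where the image contains detail, so smooth regions are covered by a few large patches and the total patch count drops. This is an illustrative reimplementation of the principle, not the authors' code:

```python
def detail(image, r0, c0, size):
    """Simple detail measure: max minus min intensity inside the patch."""
    vals = [image[r][c] for r in range(r0, r0 + size)
                        for c in range(c0, c0 + size)]
    return max(vals) - min(vals)

def adaptive_patches(image, r0, c0, size, threshold, min_size):
    """Recursively split a patch into four quadrants while it contains
    enough detail, mimicking adaptive mesh refinement."""
    if size <= min_size or detail(image, r0, c0, size) <= threshold:
        return [(r0, c0, size)]
    half = size // 2
    patches = []
    for dr in (0, half):
        for dc in (0, half):
            patches += adaptive_patches(image, r0 + dr, c0 + dc,
                                        half, threshold, min_size)
    return patches

# An 8x8 image that is flat except for one bright corner: only the corner
# quadrant is refined, the flat quadrants stay as single large patches.
img = [[0] * 8 for _ in range(8)]
img[0][0] = 9
patches = adaptive_patches(img, 0, 0, 8, threshold=0, min_size=2)
```

In this toy example the adaptive scheme produces 7 patches where a uniform 2×2 tiling would need 16, which is the source of the speedup: the transformer's sequence length shrinks wherever the image is smooth.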

Beyond MR Image Harmonization: Resolution Matters Too /valiant/2024/11/21/beyond-mr-image-harmonization-resolution-matters-too/ Thu, 21 Nov 2024 16:59:02 +0000 /valiant/?p=3307 Hays, S.P.; Remedios, S.W.; Zuo, L.; Mowry, E.M.; Newsome, S.D.; Calabresi, P.A.; Carass, A.; Dewey, B.E.; & Prince, J.L. (2025). Beyond MR Image Harmonization: Resolution Matters Too. Lecture Notes in Computer Science, Volume 15187 LNCS, pp. 34-44.

Magnetic resonance (MR) imaging is widely used to monitor the body non-invasively, but the results can vary due to differences in scanner hardware, software, and imaging protocols. This variability can create challenges for processing algorithms, which may struggle to handle these differences consistently. To address this, image harmonization is used to reduce these variations and improve the accuracy of tasks like segmentation. However, most harmonization models focus on imaging parameters like inversion or repetition time and overlook the impact of image resolution.

This study evaluates how image resolution affects harmonization by using a pretrained harmonization algorithm. We simulated different 2D image resolutions by altering slice thickness and gaps in high-resolution 3D MR images and analyzed how the harmonization algorithm performs with these changes. Our findings show that low-resolution images cause issues for harmonization, as it doesn’t fully account for resolution and orientation differences. While super-resolution techniques can help address this, they are not always used in practice. This work highlights the importance of understanding the limits of harmonization algorithms and how resolution affects their reliability, offering guidance for preprocessing steps and ensuring trust in imaging results.
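The kind of degradation simulated in the study can be approximated by averaging groups of thin slices into thick ones. A real slice profile is smoother than this box average and the paper models the acquisition more carefully, so treat this as a simplified stand-in:

```python
def simulate_thick_slices(volume, factor):
    """Simulate a low through-plane resolution acquisition by averaging
    every `factor` consecutive slices of a high-resolution 3D volume
    (a box-profile stand-in for a true slice excitation profile)."""
    thick = []
    for start in range(0, len(volume), factor):
        group = volume[start:start + factor]
        rows, cols = len(group[0]), len(group[0][0])
        # voxel-wise average over the group of thin slices
        thick.append([[sum(s[r][c] for s in group) / len(group)
                       for c in range(cols)] for r in range(rows)])
    return thick

# A toy 4-slice volume with 2x2 slices; factor 2 halves the slice count,
# blurring structure along the through-plane axis.
vol = [[[1, 1], [1, 1]],
       [[3, 3], [3, 3]],
       [[0, 0], [0, 0]],
       [[6, 6], [6, 6]]]
low_res = simulate_thick_slices(vol, factor=2)
```

The averaging mixes intensities from neighboring anatomy into each thick slice, which is exactly the information loss that trips up harmonization models trained on high-resolution inputs.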

Fig. 1. Slice thickness occurrences for each MR image contrast: T1w, T2w, and T2w-FLAIR.

Towards an Accurate and Generalizable Multiple Sclerosis Lesion Segmentation Model Using Self-Ensembled Lesion Fusion /valiant/2024/09/22/towards-an-accurate-and-generalizable-multiple-sclerosis-lesion-segmentation-model-using-self-ensembled-lesion-fusion/ Sun, 22 Sep 2024 03:52:45 +0000 /valiant/?p=2991 Zhang, Jinwei, Zuo, Lianrui, Dewey, Blake E., Remedios, Samuel W., Pham, Dzung L., Carass, Aaron, & Prince, Jerry L. (2024). Towards an accurate and generalizable multiple sclerosis lesion segmentation model using self-ensembled lesion fusion. In Proceedings of the 21st IEEE International Symposium on Biomedical Imaging (ISBI 2024), Athens, Greece, May 27-30, 2024. https://doi.org/10.1109/ISBI56570.2024.10635877

This study focuses on improving the automatic detection and segmentation of multiple sclerosis (MS) lesions in MRI scans, a process that is crucial for efficient and consistent diagnosis but traditionally performed manually. Current automated methods often rely on complex modifications to U-Net-based architectures to improve performance. However, these modifications can limit how well the models generalize across different MRI datasets with varying contrasts or image quality.

The researchers aimed to address this by developing a segmentation model based on the standard U-Net architecture, without any additional modifications, to create a more accurate and generalizable tool for detecting MS lesions. They introduced a novel self-ensembling technique that enhances model performance during testing by combining multiple predictions into a final, refined segmentation. This approach not only achieved the highest performance in the widely recognized ISBI 2015 MS lesion segmentation challenge but also showed that it is robust across different settings of the self-ensemble parameters.
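The paper defines its own lesion-level fusion rule; as a simplified stand-in, the sketch below fuses several binarized predictions of the same scan by voxel-wise voting:

```python
def fuse_predictions(predictions, min_votes):
    """Fuse several binary lesion masks of the same scan: a voxel is kept
    as lesion only if at least `min_votes` predictions agree. This is a
    simple voting stand-in for the paper's lesion-level fusion rule."""
    n_voxels = len(predictions[0])
    votes = [sum(p[i] for p in predictions) for i in range(n_voxels)]
    return [1 if v >= min_votes else 0 for v in votes]

# Three predictions of a 5-voxel scan, e.g. from different test-time
# variants of the same input.
preds = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [1, 1, 0, 0, 0],
]
fused = fuse_predictions(preds, min_votes=2)
```

The `min_votes` threshold plays the role of the self-ensemble parameter: raising it makes the fused segmentation more conservative, and the robustness result above says performance is stable across a range of such settings.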

Additionally, the study found that using instance normalization, rather than the more common batch normalization, improved the model’s ability to generalize across clinical MRI data from various scanners, making it more versatile in real-world applications. This work demonstrates that sophisticated modifications to architectures are not always necessary to achieve high accuracy and that simple techniques, when applied thoughtfully, can result in highly generalizable and efficient models for MS lesion segmentation.

Illustration of the proposed self-ensembled lesion fusion (SELF) strategy.

 

Sex and age effects on gray matter volume trajectories in young children with prenatal alcohol exposure /valiant/2024/06/20/sex-and-age-effects-on-gray-matter-volume-trajectories-in-young-children-with-prenatal-alcohol-exposure/ Thu, 20 Jun 2024 14:49:20 +0000 /valiant/?p=2536 Madison Long, Preeti Kar, Nils D. Forkert, Bennett A. Landman, W. Ben Gibbard, Christina Tortorelli, Carly A. McMorris, Yuankai Huo, and Catherine A. Lebel. “” Frontiers in Human Neuroscience, vol. 18, 1379959, April 2024.Ìý

Prenatal alcohol exposure (PAE) affects about 11% of pregnancies in North America and is the leading known cause of neurodevelopmental disabilities, such as fetal alcohol spectrum disorder (FASD), which affects around 2-5% of the population. PAE has been linked to smaller gray matter volumes in individuals across different age groups. However, the specific developmental patterns in early childhood and potential sex differences have not been well studied.

This study used longitudinal T1-weighted MRI to examine gray matter volume development in young children aged 3-8 years with PAE (42 children, 84 scans) compared to unexposed children aged 2-8.5 years (127 children, 450 scans). The results showed that children with PAE had different gray matter development trajectories, with less increase and more decrease in gray matter volume compared to unexposed children. Notably, sex differences were more pronounced in the PAE group, with females showing the smallest gray matter volumes and the least change with age.

These findings suggest that children with PAE may have reduced brain plasticity or accelerated maturation, contributing to the cognitive and behavioral challenges they often face. The study also indicates that gray matter volume differences associated with PAE become more evident as children grow older, supporting previous research on older age groups.

Figure 3. Total and regional gray matter volumes were larger in the control group. Regional volume was up to 11% larger in controls than in the PAE sample. Average group differences in volume were calculated from the main-effects-only models.
Multi-scale Multi-site Renal Microvascular Structures Segmentation for Whole Slide Imaging in Renal Pathology /valiant/2024/06/20/multi-scale-multi-site-renal-microvascular-structures-segmentation-for-whole-slide-imaging-in-renal-pathology/ Thu, 20 Jun 2024 14:26:43 +0000 /valiant/?p=2516 Franklin Hu, Ruining Deng, Shunxing Bao, Haichun Yang, and Yuankai Huo. “.” Proceedings of SPIE Medical Imaging 2024: Digital and Computational Pathology, vol. 12933, 1293319, 2024, San Diego, California, United States.

Segmenting tiny blood vessels, like arterioles, venules, and capillaries, in kidney tissue images is crucial for kidney disease research but is currently very time-consuming when done manually. To automate this process, researchers developed Omni-Seg, a new method that uses deep learning to handle data from various sources and scales. Unlike most existing methods that are limited to specific data sets, Omni-Seg can learn from images that are only partially labeled, where each image has only one type of tissue labeled.

The researchers trained Omni-Seg on images from two different databases, HuBMAP and NEPTUNE, using multiple magnification levels (40x, 20x, 10x, and 5x). The results showed that Omni-Seg identified and segmented these blood vessels more accurately than existing methods, achieving higher scores on accuracy, the Dice Similarity Coefficient (DSC), and Intersection over Union (IoU). This method offers a powerful tool for renal pathologists, making it easier and faster to analyze kidney tissue images quantitatively.
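The two overlap metrics mentioned above have standard definitions, sketched here for a pair of binary masks:

```python
def dice_and_iou(pred, truth):
    """Compute the Dice similarity coefficient and Intersection over Union
    for two binary masks given as flat 0/1 lists."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2.0 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# A toy 6-voxel prediction overlapping the ground truth in 2 voxels.
pred  = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
dice, iou = dice_and_iou(pred, truth)
```

Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), so they rank methods the same way; papers often report both because Dice is more forgiving of small overlap errors.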

Figure 1: Datasets from NEPTUNE and HuBMAP feature diverse partially labeled structures. While NEPTUNE contains various non-microvascular components, our strategy leverages this data to enhance model robustness. With the multi-site and multi-scale nature of these datasets, spanning magnifications from 5× to 40×, our Omni-Seg method effectively addresses the challenge.
Evaluation of U-Nets for object segmentation in ultrasound images /valiant/2024/06/20/evaluation-of-u-nets-for-object-segmentation-in-ultrasound-images/ Thu, 20 Jun 2024 14:16:34 +0000 /valiant/?p=2507 Rui Wang, Katelyn Craft, Elisa Holtzman, Hannah Mason, Christopher Khan, Brett Byram, Jason Mitchell, and Jack H. Noble. “.” Proceedings of SPIE Medical Imaging 2024: Ultrasonic Imaging and Tomography, vol. 12932, 129321G, 2024, San Diego, California, United States.

Ultrasound imaging is widely used in medicine due to its safety and cost-effectiveness compared to other methods. However, its quality can vary depending on tissue properties and depth. In this study, researchers tested deep learning techniques to create 3D models of objects imaged with ultrasound. They used three versions of the 3D U-Net model, each trained with different scenarios. The models performed well on specific categories of objects they were trained on but struggled with new categories. Researchers also looked into dual-task autoencoding to improve performance across different object types. These findings set a foundation for further improving the U-Net model to handle a broader range of ultrasound imaging tasks, potentially enhancing visualization and accuracy in medical applications.
