Publications by Ettore Candeloro

Explore our research publications: papers, articles, and conference proceedings from AImageLab.

Multi-Structure Segmentation in CBCT Volumes: the ToothFairy2 Challenge

Authors: Bolelli, Federico; Lumetti, Luca; Van Nistelrooij, Niels; Vinayahalingam, Shankeeth; Di Bartolomeo, Mattia; Marchesini, Kevin; Pellacani, Arrigo; Candeloro, Ettore; Rosati, Gabriele; Xi, Tong; Isensee, Fabian; Kirchhoff, Yannick; Krämer, Lars; Rokuss, Maximilian; Ulrich, Constantin; Maier-Hein, Klaus; Jiang, Yuxian; Liu, Yusheng; Wang, Lisheng; Wang, Haoshen; Chen, Siyu; Cui, Zhiming; Shi, Pengcheng; Pan, Zhaohong; Liang, Xiaokun; Ma, Qi; Konukoglu, Ender; Wodzinski, Marek; Müller, Henning; Mai, Haipeng; Dang, Xiaobing; Bhandary, Shrajan; Grosu, Radu; Bergé, Stefaan; Anesi, Alexandre; Grana, Costantino

Published in: MEDICAL IMAGE ANALYSIS

Cone-beam computed tomography (CBCT) is widely used for dento-maxillofacial diagnostics and treatment planning, yet comprehensive multi-structure segmentation remains time-consuming, limiting large-scale, reproducible research. In this article, we present ToothFairy2, a MICCAI 2024 challenge on multi-structure segmentation in maxillofacial CBCT. The accompanying dataset comprises 530 CBCT volumes (480 public training, 50 hidden test) with expert 3D annotations of 42 classes, including the maxilla, mandible, crowns, bridges, implants, inferior alveolar canals, maxillary sinuses, pharynx, and teeth labeled with the International Tooth Numbering System (FDI). Twenty-six international teams participated in ToothFairy2, and their methods were run and evaluated for voxel-wise multi-class segmentation using a standardized protocol. This report extends the evaluation of teeth to also investigate the current capabilities of tooth detection and FDI numbering. Furthermore, ranking stability was analyzed to assess the robustness of the final challenge outcome. Overall, challenge participants achieved consistently high performance for large, high-contrast structures such as jawbones, the pharynx, and most teeth, while maxillary sinuses, dental restorations, and fine structures remain challenging due to class imbalance and metal artifacts. Analysis of tooth-related metrics further revealed that assigning correct FDI numbers was more challenging than delineating individual teeth. By releasing CBCT data, 3D annotations, baseline models, and evaluation code, ToothFairy2 establishes a long-term benchmark to drive the development of automated methods for robust, clinically meaningful multi-structure segmentation in maxillofacial CBCT.
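The voxel-wise multi-class evaluation mentioned above is commonly built on a per-class Dice coefficient. As an illustrative sketch only (the class IDs and toy volumes below are hypothetical, not the challenge's actual evaluation code), it can be computed like this:

```python
# Hypothetical sketch of per-class Dice for integer-labeled segmentation
# volumes; labels and arrays are illustrative, not the ToothFairy2 protocol.
import numpy as np

def per_class_dice(pred: np.ndarray, gt: np.ndarray, labels) -> dict:
    """Dice coefficient for each requested label of two labeled volumes."""
    scores = {}
    for c in labels:
        p, g = (pred == c), (gt == c)
        denom = p.sum() + g.sum()
        # Convention: a class absent from both volumes scores a perfect 1.0
        scores[c] = 1.0 if denom == 0 else 2.0 * np.logical_and(p, g).sum() / denom
    return scores

# Toy 4x4x4 volumes with two classes (e.g. 1 = mandible, 2 = one tooth)
gt = np.zeros((4, 4, 4), dtype=np.int64)
gt[:2] = 1          # 32 voxels of class 1
gt[3, 3] = 2        # 4 voxels of class 2
pred = gt.copy()
pred[0, 0, 0] = 0   # one mislabeled voxel of class 1
print(per_class_dice(pred, gt, labels=[1, 2]))
```

The released evaluation code handles additional details (hidden-test protocol, instance matching for FDI numbering) that this sketch omits.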

2026 Journal article

Investigating the ABCDE Rule in Convolutional Neural Networks

Authors: Bolelli, Federico; Lumetti, Luca; Marchesini, Kevin; Candeloro, Ettore; Grana, Costantino

Published in: LECTURE NOTES IN COMPUTER SCIENCE

Convolutional Neural Networks (CNNs) have been broadly employed in dermoscopic image analysis, mainly due to the large amount of data gathered by the International Skin Imaging Collaboration (ISIC). But where do neural networks look? Several authors have claimed that the ISIC dataset is affected by strong biases, i.e., spurious correlations between samples that machine learning models unfairly exploit while discarding the useful patterns they are expected to learn. These strong claims have been supported by showing that deep learning models maintain excellent performance even when "no information about the lesion remains" in the debiased input images. With this paper, we explore the interpretability of CNNs in dermoscopic image analysis by analyzing which characteristics are considered by autonomous classification algorithms. Starting from a standard setting, the experiments presented in this paper gradually conceal well-known crucial dermoscopic features and thoroughly investigate how CNN performance subsequently evolves. Experimental results obtained with two well-known CNNs, EfficientNet-B3 and ResNet-152, demonstrate that neural networks autonomously learn to extract features that are notoriously important for melanoma detection. Even when some of these features are removed, the others are still enough to achieve satisfactory classification performance. The results demonstrate that the literature's claims about bias are not supported by these experiments. Finally, to demonstrate the generalization capabilities of state-of-the-art CNN models for skin lesion classification, a large private dataset was employed as an additional test set.
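The feature-concealment protocol described above amounts to masking a region of the input image before classification and comparing the model's output with and without the mask. A minimal sketch of such an occlusion step, assuming a hypothetical circular region of interest (the paper's actual pipeline uses EfficientNet-B3 and ResNet-152 on ISIC images, which this sketch does not reproduce):

```python
# Hypothetical sketch of concealing a dermoscopic feature region before
# classification; the circular mask and random image are illustrative only.
import numpy as np

def conceal_region(image: np.ndarray, center, radius, fill=0.5) -> np.ndarray:
    """Replace a circular region (e.g. a pigment-network area) with a flat fill."""
    h, w = image.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    out = image.copy()
    out[mask] = fill  # broadcast the fill value over all channels in the disk
    return out

# Toy HxWxC image standing in for a dermoscopic photograph
img = np.random.default_rng(0).random((64, 64, 3))
occluded = conceal_region(img, center=(32, 32), radius=10)
# The occluded image would then be fed to the CNN, and the performance drop
# attributed to the concealed feature.
print(occluded[32, 32])
```

In the study, this kind of masking is applied to clinically meaningful ABCDE features rather than arbitrary disks, and the resulting accuracy curves are compared across models.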

2025 Paper in conference proceedings