Publications

Explore our research publications: papers, articles, and conference proceedings from AImageLab.

Modulation of Aerobic Glycolysis Genes During the Progression of Retinitis Pigmentosa

Authors: Adani, E.; Vasquez, S. S. V.; Lovino, M.; Bighinati, A.; Cappellino, L.; D'Alessandro, S.; Kalatzis, V.; Marigo, V.

Published in: INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE

PURPOSE. Photoreceptors are retinal cells with a high glucose metabolism, and retinal degeneration, specifically retinitis pigmentosa (RP), affects glycolysis. We aimed to evaluate changes in the expression of genes related to glucose metabolism in rod photoreceptors at different stages of retinal degeneration in murine models and human retinal organoids. METHODS. RNA sequencing (RNA-seq) analysis was performed on a photoreceptor-like cell line induced to undergo degeneration and validated by real-time qPCR analysis of retinas from two murine models and one human organoid model of RP. Bioinformatic analysis was performed on published RNA-seq datasets from three murine RP models. Real-time qPCR analysis was also performed on retinas treated with an adeno-associated virus type 2 vector carrying the neurotrophic H105A peptide, derived from the pigment epithelium-derived factor. RESULTS. The aerobic glycolysis genes Hk2, Pkm1, Pkm2, Ldha, and Slc6a6, together with other glucose metabolism genes, were found to be downregulated in the in vitro model of photoreceptor degeneration and in the in vivo RhoP23H/+, rd1, and rd10 models at early stages of the disease. The decreased expression of the aerobic glycolysis genes, except for PKM2, was confirmed in human organoids with mutations in the USH2A gene associated with RP. Expression was partially recovered in RhoP23H/+ retinas after treatment with the adeno-associated virus type 2 vector expressing the neurotrophic H105A peptide. CONCLUSIONS. Glucose metabolism gene expression was altered during the progression of RP in murine and human models of the disease, and was partially recovered as a molecular response to the treatment with the neurotrophic factor H105A.
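
As a point of reference for the qPCR validation described above, relative expression is commonly reported with the 2^-ΔΔCt (Livak) method. The sketch below illustrates that standard calculation; the Ct values and the down-regulation example are made up, and this is not the authors' analysis code.

```python
# Minimal sketch of the 2^-ddCt (Livak) fold-change calculation commonly
# used to report real-time qPCR results. Gene roles and Ct values are
# hypothetical and do not reproduce the study's data.

def fold_change(ct_target_sample: float, ct_ref_sample: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression of a target gene, normalized to a reference gene."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalize sample to reference gene
    d_ct_control = ct_target_control - ct_ref_control   # normalize control likewise
    dd_ct = d_ct_sample - d_ct_control                  # sample vs. control
    return 2 ** -dd_ct

# A downregulated gene yields a fold change below 1:
print(fold_change(26.0, 18.0, 24.0, 18.0))  # 0.25
```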

2026 Journal article

Multi-Structure Segmentation in CBCT Volumes: the ToothFairy2 Challenge

Authors: Bolelli, Federico; Lumetti, Luca; Van Nistelrooij, Niels; Vinayahalingam, Shankeeth; Di Bartolomeo, Mattia; Marchesini, Kevin; Pellacani, Arrigo; Candeloro, Ettore; Rosati, Gabriele; Xi, Tong; Isensee, Fabian; Kirchhoff, Yannick; Krämer, Lars; Rokuss, Maximilian; Ulrich, Constantin; Maier-Hein, Klaus; Jiang, Yuxian; Liu, Yusheng; Wang, Lisheng; Wang, Haoshen; Chen, Siyu; Cui, Zhiming; Shi, Pengcheng; Pan, Zhaohong; Liang, Xiaokun; Ma, Qi; Konukoglu, Ender; Wodzinski, Marek; Müller, Henning; Mai, Haipeng; Dang, Xiaobing; Bhandary, Shrajan; Grosu, Radu; Bergé, Stefaan; Anesi, Alexandre; Grana, Costantino

Published in: MEDICAL IMAGE ANALYSIS

Cone-beam computed tomography (CBCT) is widely used for dento-maxillofacial diagnostics and treatment planning, yet comprehensive multi-structure segmentation remains time-consuming, limiting large-scale, reproducible research. In this article, we present ToothFairy2, a MICCAI 2024 challenge on multi-structure segmentation in maxillofacial CBCT. The accompanying dataset comprises 530 CBCT volumes (480 public training, 50 hidden test) with expert 3D annotations of 42 classes, including maxilla, mandible, crowns, bridges, implants, inferior alveolar canals, maxillary sinuses, pharynx, and teeth labeled with the International Tooth Numbering System (FDI). Twenty-six international teams participated in ToothFairy2, and their methods were run and evaluated for voxel-wise multi-class segmentation using a standardized protocol. This report extends the evaluation of teeth to also investigate the current capabilities of tooth detection and FDI numbering. Furthermore, ranking stability was analyzed to assess the robustness of the final challenge outcome. Overall, challenge participants achieved consistently high performance for large, high-contrast structures such as jawbones, the pharynx, and most teeth, while maxillary sinuses, dental restorations, and fine structures remained challenging due to class imbalance and metal artifacts. Analysis of tooth-related metrics further revealed that assigning correct FDI numbers was more challenging than delineating individual teeth. By releasing CBCT data, 3D annotations, baseline models, and evaluation code, ToothFairy2 establishes a long-term benchmark to drive the development of automated methods for robust, clinically meaningful multi-structure segmentation in maxillofacial CBCT.
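
Voxel-wise multi-class segmentation of this kind is typically scored per class with the Dice coefficient. The sketch below shows a minimal version of that metric; it is illustrative only and not the challenge's released evaluation code.

```python
import numpy as np

def per_class_dice(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> dict:
    """Dice coefficient for each class of a voxel-wise multi-class segmentation.

    pred, gt: integer label volumes of identical shape (0 = background).
    Illustrative sketch, not the official ToothFairy2 evaluation code.
    """
    scores = {}
    for c in range(1, num_classes):        # skip background
        p, g = pred == c, gt == c
        denom = p.sum() + g.sum()
        if denom == 0:                     # class absent in both volumes: skip
            continue
        scores[c] = 2.0 * np.logical_and(p, g).sum() / denom
    return scores

# Toy 3D volumes with two foreground classes
pred = np.zeros((8, 8, 8), dtype=np.int32); pred[:4] = 1; pred[4:] = 2
gt   = np.zeros((8, 8, 8), dtype=np.int32); gt[:5]  = 1; gt[5:]  = 2
print(per_class_dice(pred, gt, num_classes=3))  # {1: ~0.889, 2: ~0.857}
```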

2026 Journal article

Multimodal Understanding through Retrieval-Augmentation: from Models to Evaluation

Authors: Sarto, Sara

In the field of Artificial Intelligence (AI), the introduction of the attention mechanism and the Transformer architecture has enabled models capable of processing multiple modalities at an unprecedented scale. This breakthrough stems from the flexibility of the attention operator and the adaptability of the architecture, which have given rise to a new generation of vision-and-language systems. Among the tasks at the intersection of Computer Vision, Natural Language Processing, and Multimedia, image captioning, i.e., the generation of natural-language descriptions of visual content, has played a central role. In the era of Multimodal Large Language Models (MLLMs), captioning remains fundamental, alongside multimodal tasks such as Visual Question Answering (VQA). To strengthen such models, retrieval augmentation has emerged as a key strategy. Enriching them with relevant external knowledge improves adaptability and enables more accurate, context-aware answers, especially in complex or specialized scenarios. This thesis traces the natural evolution of retrieval augmentation, from its early applications in image captioning to its integration into modern MLLMs. Each stage builds on the insights and challenges encountered along the way, addressing open problems related to evaluation and retrieval effectiveness. The first part of the thesis establishes the foundations of retrieval-augmented vision-and-language models. Classical cross-modal retrieval techniques are analyzed and extended to more complex scenarios, including multimodal queries and heterogeneous document collections. A central insight is that retrieval quality critically affects overall performance. In response, new multimodal retrievers, ReT and ReT-2, designed for such scenarios, are introduced. The thesis also investigates retrieval-augmented captioning architectures through the introduction of the RA-Transformer, in which external knowledge is integrated directly into the generation process, providing signals that help produce richer and more precise captions. The work then extends retrieval augmentation to MLLMs, motivated by the fact that even large-scale pretraining falls short on knowledge-intensive or domain-specific queries. In particular, WikiLLaVA introduces retrieval-augmented MLLM architectures for knowledge-based VQA, in which retrieval mechanisms enhance reasoning capabilities and adaptability to complex multimodal queries. Throughout this research, it emerges that progress in captioning models is limited by the lack of robust and reliable evaluation metrics. Traditional metrics, although widely used, often fail to capture semantic adequacy, factual grounding, and linguistic fluency. A further contribution of this thesis is therefore the design and analysis of new evaluation metrics for image captioning, namely PAC-S, BRIDGE, and an improved version of PAC-S. These metrics are designed to align with human judgment and to capture the quality of descriptions. The thesis also analyzes their application across different benchmarks and domains, including their ability to evaluate captions generated by MLLMs, reflecting the shift of captioning from a standalone task to a component of broader multimodal reasoning systems.
Overall, through new retrieval-augmented captioning architectures, multimodal retrievers, and evaluation metrics, this thesis provides methodologies, tools, and contributions that advance the state of the art in multimodal Artificial Intelligence.
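
The retrieval-augmentation loop that the thesis builds on can be summarized in a few lines: embed the query, rank an external memory by similarity, and hand the top matches to the generator as extra context. The sketch below is a generic illustration in which embed_image and generate are hypothetical stand-ins for a vision encoder and a captioning/VQA model; it is not the ReT or RA-Transformer implementation.

```python
import numpy as np

def retrieve_top_k(query_emb: np.ndarray, memory_embs: np.ndarray,
                   memory_texts: list[str], k: int = 3) -> list[str]:
    """Rank an external memory by cosine similarity and return the top-k entries."""
    q = query_emb / np.linalg.norm(query_emb)
    m = memory_embs / np.linalg.norm(memory_embs, axis=1, keepdims=True)
    scores = m @ q
    top = np.argsort(scores)[::-1][:k]
    return [memory_texts[i] for i in top]

# embed_image() and generate() are hypothetical stand-ins for a real
# vision encoder and a captioning / VQA model.
def retrieval_augmented_answer(image, question, memory_embs, memory_texts,
                               embed_image, generate):
    context = retrieve_top_k(embed_image(image), memory_embs, memory_texts)
    prompt = "Context:\n" + "\n".join(context) + f"\nQuestion: {question}"
    return generate(prompt, image)
```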

2026 PhD thesis

Multimodal-Conditioned Latent Diffusion Models for Fashion Image Editing

Authors: Baldrati, Alberto; Morelli, Davide; Cornia, Marcella; Bertini, Marco; Cucchiara, Rita

Published in: ACM TRANSACTIONS ON MULTIMEDIA COMPUTING, COMMUNICATIONS AND APPLICATIONS

Fashion illustration is a crucial medium for designers to convey their creative vision and transform design concepts into tangible representations that showcase the interplay between clothing and the human body. In the context of fashion design, computer vision techniques have the potential to enhance and streamline the design process. Departing from prior research primarily focused on virtual try-on, this paper tackles the task of multimodal-conditioned fashion image editing. Our approach aims to generate human-centric fashion images guided by multimodal prompts, including text, human body poses, garment sketches, and fabric textures. To address this problem, we propose extending latent diffusion models to incorporate these multiple modalities and modifying the structure of the denoising network, taking multimodal prompts as input. To condition the proposed architecture on fabric textures, we employ textual inversion techniques and let diverse cross-attention layers of the denoising network attend to textual and texture information, thus incorporating different granularity conditioning details. Given the lack of datasets for the task, we extend two existing fashion datasets, Dress Code and VITON-HD, with multimodal annotations. Experimental evaluations demonstrate the effectiveness of our proposed approach in terms of realism and coherence concerning the provided multimodal inputs.
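
Textual inversion, mentioned above for fabric textures, represents a new concept as a learnable pseudo-token embedding spliced into the text conditioning seen by the denoiser's cross-attention layers. The PyTorch sketch below illustrates the general idea; dimensions and names are assumptions, not the paper's architecture.

```python
import torch

# Sketch of textual-inversion-style conditioning: a fabric texture is
# represented by a learnable pseudo-token embedding appended to the
# text-prompt embeddings fed to the denoiser's cross-attention layers.
# Dimensions and names are illustrative, not the paper's implementation.

dim = 768
texture_token = torch.nn.Parameter(torch.randn(1, dim) * 0.02)  # optimized per texture

def build_condition(prompt_embs: torch.Tensor) -> torch.Tensor:
    """prompt_embs: (seq_len, dim) embeddings from a frozen text encoder.
    Returns the sequence with the texture pseudo-token appended, ready to
    serve as keys/values in the denoising network's cross-attention."""
    return torch.cat([prompt_embs, texture_token], dim=0)

cond = build_condition(torch.randn(77, dim))
print(cond.shape)  # torch.Size([78, 768])
```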

2026 Journal article

PopEYE - Infrared Ocular Image Dataset for Eye State and Gaze-Direction Classification

Authors: Gibertoni, Giovanni; Borghi, Guido; Rovati, Luigi

The PopEYE dataset is a specialized collection of 14,976 near-infrared (NIR) images of the human eye region, specifically designed to support the development and benchmarking of computer vision algorithms for eye-state detection and coarse gaze-direction classification. Each image is provided in a fixed resolution of 772 × 520 pixels in 8-bit grayscale PNG format. The acquisition was performed frontally using a custom-developed Maxwellian-view optical configuration, consisting of a board-level CMOS camera and a specialized lens system where the subject's eye is precisely positioned at the focal point. This setup ensures a high-contrast representation of the anterior segment, making the pupil, iris, limbus, and portions of the sclera and eyelids clearly distinguishable under stable 850 nm infrared illumination. The dataset is categorized into six mutually exclusive classes identified through manual annotation supported by fixed visual aids and an expert system algorithm. The classification includes a correct positioning class for eyes open and properly aligned for clinical measurements (8,160 images), a closed class representing full eye closures such as blinks or sustained lid closure (1,790 images), and four directional classes representing gaze shifts relative to the central optical axis, specifically up (1,379 images), down (1,015 images), left (1,296 images), and right (1,336 images). The data captures the natural anatomical variability of 22 subjects and incorporates common real-world artifacts such as specular reflections from NIR sources and partial pupil occlusions by eyelashes or eyelids. By providing standardized labels and high-resolution NIR imagery, PopEYE serves as a robust resource for training machine learning models intended for real-time patient monitoring during ophthalmic examinations.
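
A dataset organized this way is straightforward to consume programmatically. The loader below is a hypothetical sketch: the class-per-folder layout and folder names are assumptions, not the published directory structure.

```python
from pathlib import Path

import numpy as np
from PIL import Image

# Hypothetical loader for a class-per-folder layout; the actual PopEYE
# directory structure may differ. Class names follow the six labels above.
CLASSES = ["correct", "closed", "up", "down", "left", "right"]

def load_popeye(root: str) -> tuple[np.ndarray, np.ndarray]:
    images, labels = [], []
    for idx, name in enumerate(CLASSES):
        for png in sorted(Path(root, name).glob("*.png")):
            img = np.asarray(Image.open(png))  # 520 x 772 array, 8-bit grayscale
            images.append(img)
            labels.append(idx)
    return np.stack(images), np.array(labels)
```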

2026 Dataset

RaTA-Tool: Retrieval-based Tool Selection with Multimodal Large Language Models

Authors: Mattioli, Gabriele; Turri, Evelyn; Sarto, Sara; Baraldi, Lorenzo; Cornia, Marcella; Baraldi, Lorenzo; Cucchiara, Rita

Tool learning with foundation models aims to endow AI systems with the ability to invoke external resources — such as APIs, computational utilities, and specialized models — to solve complex tasks beyond the reach of standalone language generation. While recent advances in Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) have expanded their reasoning and perception capabilities, existing tool-use methods are predominantly limited to text-only inputs and closed-world settings. Consequently, they struggle to interpret multimodal user instructions and cannot generalize to tools unseen during training. In this work, we introduce RaTA-Tool, a novel framework for open-world multimodal tool selection. Rather than learning direct mappings from user queries to fixed tool identifiers, our approach enables an MLLM to convert a multimodal query into a structured task description and subsequently retrieve the most appropriate tool by matching this representation against semantically rich, machine-readable tool descriptions. This retrieval-based formulation naturally supports extensibility to new tools without retraining. To further improve alignment between task descriptions and tool selection, we incorporate a preference-based optimization stage using Direct Preference Optimization (DPO). To support research in this setting, we also introduce the first dataset for open-world multimodal tool use, featuring standardized tool descriptions derived from Hugging Face model cards. Extensive experiments demonstrate that our approach significantly improves tool-selection performance, particularly in open-world, multimodal scenarios.
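
The selection step described above reduces to nearest-neighbor search over tool-description embeddings. The sketch below is schematic: describe_task and embed are hypothetical stand-ins for the MLLM and text encoder, not RaTA-Tool's actual components.

```python
import numpy as np

def select_tool(query_image, query_text, tools: dict[str, str],
                describe_task, embed) -> str:
    """Open-world tool selection as retrieval.

    tools: mapping tool_name -> machine-readable tool description.
    describe_task: hypothetical MLLM call turning a multimodal query
                   into a structured task description (a string).
    embed: hypothetical text encoder returning a unit-norm 1-D vector.
    New tools are supported by simply adding entries to `tools`,
    with no retraining.
    """
    task_emb = embed(describe_task(query_image, query_text))
    names = list(tools)
    tool_embs = np.stack([embed(tools[n]) for n in names])
    return names[int(np.argmax(tool_embs @ task_emb))]
```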

2026 Conference paper

ReAG: Reasoning-Augmented Generation for Knowledge-based Visual Question Answering

Authors: Compagnoni, Alberto; Morini, Marco; Sarto, Sara; Cocchi, Federico; Caffagni, Davide; Cornia, Marcella; Baraldi, Lorenzo; Cucchiara, Rita

Multimodal Large Language Models (MLLMs) have shown impressive capabilities in jointly understanding text, images, and videos, often evaluated via Visual Question Answering (VQA). However, even state-of-the-art MLLMs struggle with domain-specific or knowledge-intensive queries, where relevant information is underrepresented in pre-training data. Knowledge-based VQA (KB-VQA) addresses this by retrieving external documents to condition answer generation, but current retrieval-augmented approaches suffer from low precision, noisy passages, and limited reasoning. To address this, we propose ReAG, a novel Reasoning-Augmented Multimodal RAG approach that combines coarse- and fine-grained retrieval with a critic model that filters irrelevant passages, ensuring high-quality additional context. The model follows a multi-stage training strategy leveraging reinforcement learning to enhance reasoning over retrieved content, while supervised fine-tuning serves only as a cold start. Extensive experiments on Encyclopedic-VQA and InfoSeek demonstrate that ReAG significantly outperforms prior methods, improving answer accuracy and providing interpretable reasoning grounded in retrieved evidence. Our source code is publicly available at: https://github.com/aimagelab/ReAG.
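
The coarse-to-fine retrieval with critic filtering can be expressed as a short pipeline. The sketch below is schematic; the retriever and critic callables are hypothetical stand-ins, and the actual implementation is the released code linked above.

```python
def reasoning_augmented_context(query, coarse_retrieve, fine_retrieve,
                                critic, k_coarse=20, k_fine=5, threshold=0.5):
    """Schematic coarse-to-fine retrieval with critic filtering.

    coarse_retrieve: fast, recall-oriented search returning candidate documents.
    fine_retrieve:   precision-oriented re-ranking into passages.
    critic:          model scoring passage relevance in [0, 1]; passages
                     below `threshold` are dropped before generation.
    All callables are hypothetical stand-ins for ReAG's components.
    """
    docs = coarse_retrieve(query, k=k_coarse)
    passages = fine_retrieve(query, docs, k=k_fine)
    return [p for p in passages if critic(query, p) >= threshold]
```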

2026 Conference paper

Scaling Artificial Intelligence for Oral and Dental Image Analysis

Authors: Lumetti, Luca

Cone Beam Computed Tomography (CBCT) is central to contemporary dental and maxillofacial practice, but progress in automated analysis has been held back by the scarcity of publicly available datasets. This thesis addresses that bottleneck by creating an open, extensible ecosystem that combines datasets, annotation tools, and algorithmic advances, and demonstrates how these elements interact cyclically to accelerate research and translation into clinical products. The Maxillo dataset was the first of its kind, providing 91 densely annotated volumes and 256 sparsely annotated scans for Inferior Alveolar Canal annotation. The ToothFairy series, to which this thesis contributed, built on these foundations: the first ToothFairy release increased the dense annotations to 156 volumes; ToothFairy2 expanded to 480 CBCT volumes, each with 42 semantic classes; and ToothFairy3 further extended the corpus to 532 volumes and 77 classes while improving annotation quality and scanner diversity. Complementing the CBCT data, the Bits2Bites dataset, also part of this thesis, provided 200 registered intra-oral scan pairs with multi-label occlusion annotations. All resources were released openly to enable reproducible benchmarking and follow-up development. To scale annotation without sacrificing clinical fidelity, I developed semi-automated annotation tools and a rigorous quality-control pipeline that combines predictive models with expert review. Crucially, dataset creation, tooling, and model development progressed in a cycle: additional data enabled better models; better models powered faster, more accurate annotation tools; and improved tools in turn produced larger, higher-quality datasets, constituting the central intellectual contribution of this work. On this data foundation, I improved volumetric segmentation methods: transformer-based modules that explicitly encode spatial relations between patches to preserve voxel-level detail while aggregating long-range context, and adaptations of the Mamba architecture for efficient, high-accuracy 3D segmentation. Finally, I introduced U-Net Transplant, a model-fusion framework that proposes novel techniques for updating and specializing clinical models without full retraining, reducing redeployment costs, storage, and data-exposure risks. Overall, this ecosystem delivered the largest open CBCT benchmark for maxillofacial segmentation to date, together with a coherent set of methods and tools that substantially improved the accuracy, efficiency, and lifecycle management of clinical AI, enabling faster, safer, and more reproducible dental-AI research and deployment.
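
Of the methods summarized above, the model-fusion idea is the simplest to sketch: task-vector-style weight merging adds the parameter deltas of fine-tuned specialists back onto a shared base checkpoint, avoiding full retraining. The snippet below is a generic illustration of that family of techniques, not U-Net Transplant itself.

```python
import torch

def merge_checkpoints(base: dict, specialists: list[dict],
                      alpha: float = 0.5) -> dict:
    """Generic task-vector weight merging: add the averaged parameter deltas
    of fine-tuned specialist models back onto a shared base checkpoint.
    Illustrative only; U-Net Transplant uses its own fusion techniques.
    """
    merged = {}
    for name, w in base.items():
        deltas = [s[name] - w for s in specialists]  # what each specialist learned
        merged[name] = w + alpha * torch.stack(deltas).mean(dim=0)
    return merged
```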

2026 PhD thesis

Searching for New Possible Peripheral Biomarkers of Cognitive Decline in Down Syndrome: The Role of IL-18 Pathway and its Interaction with TGF-β1 and TNF-α

Authors: Grasso, M.; Fidilio, A.; L'Episcopo, F.; Recupero, M.; Barone, C.; Lovino, M.; Alboni, S.; Bacalini, M. G.; Caruso, G.; Greco, D.; Buono, S.; De La Torre, R.; Tascedda, F.; Blom, J. M.; Benatti, C.; Caraci, F.

Published in: NEUROMOLECULAR MEDICINE

Down syndrome (DS) represents one of the most common genetic disorders, attributable to a partial or complete trisomy of chromosome 21, and affects about 1 in 700 individuals at birth. The diagnosis of Alzheimer's disease (AD)-related cognitive decline in this population requires new approaches and new biomarkers that comprehensively assess health status and early cognitive decline. In this observational study, we explored for the first time the relation between IL-18, a cytokine of the IL-1 family involved in both innate and acquired immune responses, and DS-associated cognitive decline. We observed that plasma total IL-18 in subjects with DS over 35 years of age, with and without AD-related cognitive decline, and plasma concentrations of its binding protein in subjects with DS aged 19-35 years correlated with lower plasma concentrations of transforming growth factor beta 1 (TGF-β1), which are linked to an increased rate of cognitive decline in adults with DS. In addition, we found a significant association between low baseline concentrations of free IL-18, the active form of the cytokine, and an increased rate of cognitive decline at 12 months, calculated as the delta of the Test for Severe Impairment (dTSI), in individuals with DS aged 19-35 years. Finally, we demonstrated a reduction of the free IL-18/TNF-α ratio, considered as a possible new composite biomarker, in both young and older adult DS subjects without AD-related cognitive decline (the area under the receiver operating characteristic curve (AUC) was 0.82 and 0.71, respectively), suggesting an advantage of composite biomarkers over single biomarkers in discriminating patients from healthy individuals.
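
The reported discriminative power corresponds to a standard ROC analysis of the free IL-18/TNF-α ratio. The sketch below shows how such an AUC is computed with scikit-learn; the values are invented for illustration and are not the study's measurements.

```python
from sklearn.metrics import roc_auc_score

# Minimal ROC-AUC sketch for a ratio biomarker. The numbers are invented
# for illustration; label 1 marks DS subjects without AD-related cognitive
# decline, 0 marks subjects with decline.
free_il18_tnf_ratio = [0.9, 1.4, 0.7, 0.6, 0.5, 1.2, 0.4, 1.6]
no_decline          = [1,   1,   0,   1,   0,   1,   0,   1]
print(roc_auc_score(no_decline, free_il18_tnf_ratio))  # ~0.93 on these toy values
```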

2026 Journal article

Sketch2Stitch: GANs for Abstract Sketch-Based Dress Synthesis

Authors: Farooq Khan, Faizan; Mohamed Bakr, Eslam; Morelli, Davide; Cornia, Marcella; Cucchiara, Rita; Elhoseiny, Mohamed

In the realm of creative expression, not everyone possesses the gift of effortlessly translating their imaginative visions into flawless sketches. More often than not, the outcome resembles an abstract, perhaps even slightly distorted representation. The art of producing impeccable sketches is not only challenging but also a time-consuming process. Our work is the first of its kind in transforming abstract, sometimes deformed garment sketches into photorealistic catalog images, empowering everyday individuals to become their own fashion designers. We create Sketch2Stitch, a dataset featuring over 65,000 abstract sketch images generated from garments in Dress Code and VITON-HD, two benchmark virtual try-on datasets. Sketch2Stitch is the first dataset in the literature to provide abstract sketches in the fashion domain. We propose a StyleGAN-based generative framework that bridges freehand sketching with photorealistic garment synthesis. We demonstrate that our framework allows users to sketch rough outlines and optionally provide color hints, producing realistic designs in seconds. Experimental results demonstrate, both quantitatively and qualitatively, that the proposed framework achieves superior performance against various baselines and existing methods on both subsets of our dataset. Our work highlights a pathway toward AI-assisted fashion design tools, democratizing garment ideation for students, independent designers, and casual creators.
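
The interface the framework exposes (a rough sketch plus optional color hints in, a garment image out) can be mimicked with a minimal conditional generator. The PyTorch toy below only illustrates that input/output contract; it is not the paper's StyleGAN-based model.

```python
import torch
import torch.nn as nn

class ToyConditionalGenerator(nn.Module):
    """Toy stand-in for a sketch-conditioned generator: a rough sketch
    (1 channel) and optional color hints (3 channels) go in, an RGB
    garment image comes out. Not the paper's StyleGAN-based model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),  # RGB in [-1, 1]
        )

    def forward(self, sketch: torch.Tensor, color_hints: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([sketch, color_hints], dim=1))

g = ToyConditionalGenerator()
out = g(torch.randn(1, 1, 256, 256), torch.zeros(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 3, 256, 256])
```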

2026 Conference paper
