Publications

Explore our research publications: papers, articles, and conference proceedings from AImageLab.


A Workflow for Cost- and Time-Aware Refueling Itinerary Optimization

Authors: Savarese, Marco; Zaccagnino, Carmine; De Blasi, Antonio; Salici, Giacomo; Cascianelli, Silvia; Vezzani, Roberto; Grazia, Carlo Augusto


This work presents the complete workflow of the RI-PIENO framework, a system for refueling itinerary optimization that extends the original PIENO design. While prior work introduced the conceptual modules of RI-PIENO, their operational pipeline was not described in detail. This study makes the workflow explicit, covering the end-to-end process from CAN Bus data acquisition and stop detection to the construction of daily trip graphs, refueling optimization, and mileage prediction. By clarifying the sequence of operations, the contribution provides a reproducible and extensible foundation for future research and development.

2026 Conference Proceedings Paper

An Investigation on Incremental Learning from Unbalanced Streamed Data

Authors: Borghi, Guido; Graffieti, Gabriele; Vezzani, Roberto

Published in: LECTURE NOTES IN COMPUTER SCIENCE

2026 Conference Proceedings Paper

CAMNet: Leveraging Cooperative Awareness Messages for Vehicle Trajectory Prediction

Authors: Grasselli, Mattia; Porrello, Angelo; Grazia, Carlo Augusto

2026 Conference Proceedings Paper

DOLFIN: Balancing Stability and Plasticity in Federated Continual Learning

Authors: Moussadek, Omayma; Salami, Riccardo; Calderara, Simone

Published in: LECTURE NOTES IN COMPUTER SCIENCE


Federated continual learning (FCL) enables models to learn new tasks across multiple distributed clients, preserving privacy without forgetting previously acquired knowledge. However, current methods face challenges in balancing performance, privacy preservation, and communication efficiency. We introduce DOLFIN (Distributed Online LoRA for Federated INcremental learning), a novel approach combining Vision Transformers with low-rank adapters designed to learn new tasks efficiently and stably in federated environments. Our method leverages LoRA for minimal communication overhead and incorporates Dual Gradient Projection Memory (DualGPM) to prevent forgetting. Evaluated on CIFAR-100, ImageNet-R, ImageNet-A, and CUB-200 under two Dirichlet heterogeneity settings, DOLFIN consistently surpasses six strong baselines in final average accuracy while matching their memory footprint. Orthogonal low-rank adapters offer an effective and scalable solution for privacy-preserving continual learning in federated settings.
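
DOLFIN's communication efficiency comes from exchanging only low-rank adapter factors instead of full weight matrices. A minimal sketch of a LoRA-style adapter on a single frozen linear layer follows; the layer sizes, rank, and variable names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 64, 4

W = rng.normal(size=(d_out, d_in))        # frozen pre-trained weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection, zero-init

def forward(x):
    # The adapter adds a rank-limited correction to the frozen layer;
    # with B zero-initialized, the model starts identical to pre-training.
    return x @ (W + B @ A).T

# Only A and B would be communicated between clients in a federated round.
full_params = W.size
adapter_params = A.size + B.size
```

Here the adapter costs 512 parameters per round versus 4096 for the full layer, which is the kind of saving that motivates LoRA in federated settings.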

2026 Conference Proceedings Paper

Enabling 8B Bitwise Autoregressive Image Generation on Edge GPUs

Authors: Vezzali, Enrico; Bolelli, Federico; Grana, Costantino; Benini, Luca; Li, Yawei


Visual Autoregressive (VAR) models face a severe "Memory Wall" on edge devices due to large model size and substantial KV-cache requirements. In this work, we analyze the Infinity VAR family (2B and 8B) and propose a compression pipeline for deployment on constrained NVIDIA Jetson systems. We diagnose critical bottlenecks: activation outliers reaching 353x the median and channel-skewed cache variance. To address this, we propose a hybrid pipeline combining SVDQuant—to structurally decouple weight outliers—and Asymmetric Per-Channel KV8 quantization. Our approach reduces the Infinity-8B footprint by 64% (37.1 GB → 13.3 GB), fitting it on the mid-range Orin NX with a 4.1x speedup over Flux.1-dev (W4A4), while achieving superior aesthetic alignment (ImageReward 1.13 vs 0.935). Crucially, we also unlock entry-level feasibility for the Infinity-2B, compressing it from 16.0 to 7.71 GB to enable deployment on the Orin Nano. These results establish a new efficiency standard for high-fidelity generative AI at the edge. The code is available at https://github.com/Henvezz95/deepcompressor.

2026 Conference Proceedings Paper

Evoluzione della Conoscenza nell’Intelligenza Artificiale: Verso Reti Neurali Profonde Robuste e Modulari

Authors: Capitani, Giacomo


Deep neural networks have become a cornerstone of modern Artificial Intelligence thanks to their remarkable effectiveness and versatility. However, their generalization capabilities typically rest on the assumption that data are independent and identically distributed, a condition rarely met in real-world, dynamic, and evolving scenarios. When data distributions shift, models tend to exploit shortcuts (including spurious and implicit biases), to suffer from catastrophic forgetting, and to exhibit limited compositional abilities. This thesis explores how neural models can be guided to adapt, preserve, transfer, and compose their capabilities beyond mere data fitting. The first part focuses on bias mitigation in the absence of explicit protected attributes. Latent clusters are exploited to form proxy semantic groups that steer optimization away from shortcut learning, thereby improving robustness. The analysis is then extended to continual learning, where rehearsal-based strategies can introduce or amplify spurious correlations if debiasing signals are not handled properly. To address this problem, balanced rehearsal mechanisms are proposed that maintain balance in terms of loss values and mitigate spurious correlations under distribution shifts. The second part investigates multimodal vision-language models, revealing that CLIP-like architectures exhibit implicit biases analogous to human ones. Lightweight prompt-steering techniques are introduced to reduce implicit biases in image retrieval and classification tasks.
Next, the parameter space is analyzed to determine when task vectors retain transferable knowledge between models trained on distinct datasets, and permutation-based alignment procedures are defined to enable knowledge transport across models. Finally, it is shown that geometric properties of the loss landscape, in particular its flatness, predict the compatibility of fine-tuned models derived from a common pre-training, with practical applications in 3D medical segmentation. Extensive experimental analyses across diverse datasets and learning paradigms support these findings. Overall, the contributions outline a four-axis framework of generalization in neural networks: (i) mitigating shortcut learning at the data and feature level; (ii) preventing spurious correlations in continual learning; (iii) semantic disambiguation in multimodal alignment; (iv) manipulating the geometry of the parameter space for knowledge transfer and model merging. Through this lens, the thesis proposes principles and methodologies for developing adaptive neural systems whose capabilities can be robustly maintained, transferred, and composed.

2026 Doctoral Thesis

FG-TRACER: Tracing Information Flow in Multimodal Large Language Models in Free-Form Generation

Authors: Saporita, Alessia; Pipoli, Vittorio; Bolelli, Federico; Baraldi, Lorenzo; Acquaviva, Andrea; Ficarra, Elisa


Multimodal Large Language Models (MLLMs) have achieved impressive performance across a variety of vision–language tasks. However, their internal working mechanisms remain largely underexplored. In this work, we introduce FG-TRACER, a framework designed to analyze the information flow between visual and textual modalities in MLLMs in free-form generation. Notably, our numerically stabilized computational method enables the first systematic analysis of multimodal information flow in underexplored domains such as image captioning and chain-of-thought (CoT) reasoning. We apply FG-TRACER to two state-of-the-art MLLMs—LLaMA 3.2-Vision and LLaVA 1.5—across three vision–language benchmarks—TextVQA, COCO 2014, and ChartQA—and we conduct a word-level analysis of multimodal integration. Our findings uncover distinct patterns of multimodal fusion across models and tasks, demonstrating that fusion dynamics are both model- and task-dependent. Overall, FG-TRACER offers a robust methodology for probing the internal mechanisms of MLLMs in free-form settings, providing new insights into their multimodal reasoning strategies. Our source code is publicly available at https://anonymous.4open.science/r/FG-TRACER-CB5A/.

2026 Conference Proceedings Paper

Generating Synthetic Data with Large Language Models for Low-Resource Sentence Retrieval

Authors: Caffagni, Davide; Cocchi, Federico; Mambelli, Anna; Tutrone, Fabio; Zanella, Marco; Cornia, Marcella; Cucchiara, Rita

Published in: LECTURE NOTES IN COMPUTER SCIENCE


Sentence similarity search is a fundamental task in information retrieval, enabling applications such as search engines, question answering, and textual analysis. However, retrieval systems often struggle when training data are scarce, as is the case for low-resource languages or specialized domains such as ancient texts. To address this challenge, we propose a novel paradigm for domain-specific sentence similarity search, where the embedding space is shaped by a combination of limited real data and a large amount of synthetic data generated by Large Language Models (LLMs). Specifically, we employ LLMs to generate domain-specific sentence pairs and fine-tune a sentence embedding model, effectively distilling knowledge from the LLM to the retrieval model. We validate our method through a case study on biblical intertextuality in Latin, demonstrating that synthetic data augmentation significantly improves retrieval effectiveness in a domain with scarce annotated resources. More broadly, our approach offers a scalable and adaptable framework for enhancing retrieval in domain-specific contexts. Source code and trained models are available at https://github.com/aimagelab/biblical-retrieval-synthesis.
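
The LLM generation step cannot be reproduced here, but once synthetic sentence pairs are embedded, a contrastive objective of roughly this form is what shapes the embedding space. This is a sketch under the assumption of an InfoNCE-style loss; the paper does not spell out its exact training objective:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.07):
    """Contrastive loss over a batch of embedded sentence pairs.
    anchors, positives: (batch, dim) arrays; row i of each is one pair,
    and all other rows in the batch serve as in-batch negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                    # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # true pairs on the diagonal
```

Minimizing this pulls each synthetic pair together while pushing it away from the other pairs in the batch, which is how the limited real data plus LLM-generated pairs can reshape a retriever's embedding space.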

2026 Conference Proceedings Paper

Gradient-sign Masking for Task Vector Transport Across Pre-Trained Models

Authors: Rinaldi, Filippo; Panariello, Aniello; Salici, Giacomo; Liu, Fengyuan; Ciccone, Marco; Porrello, Angelo; Calderara, Simone


When a new release of a foundation model is published, practitioners typically need to repeat fine-tuning, even if the same task was already tackled in the previous version. A promising alternative is to reuse the parameter changes (i.e., task vectors) that capture how a model adapts to a specific task. However, these vectors often fail to transfer across different pre-trained models because their parameter spaces are misaligned. In this work, we show that successful transfer depends strongly on the gradient-sign structure of the new model. Based on this insight, we propose GradFix, which approximates the ideal sign structure and leverages it to transfer knowledge using only a handful of labeled samples. Notably, this requires no additional fine-tuning: we only compute a few target-model gradients without parameter updates and mask the source task vector accordingly. This yields an update that is locally aligned with the target loss landscape, effectively rebasing the task vector onto the new pre-training. We provide a theoretical guarantee that our method ensures first-order descent. Empirically, we demonstrate significant performance gains on vision and language benchmarks, consistently outperforming naive task vector addition and few-shot fine-tuning. We further show that transporting task vectors improves multi-task and multi-source model merging. Code is available at https://github.com/fillo-rinaldi/GradFix.
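
Going only by the abstract's description, the sign-masking step might look like the following per-tensor sketch. The function name is hypothetical and the official implementation (linked above) may differ in detail:

```python
import numpy as np

def gradfix_mask(task_vector, target_grad):
    """Keep only the components of a source task vector whose sign agrees
    with the target model's descent direction (-gradient); zero the rest.
    A simplified per-tensor illustration inferred from the abstract."""
    keep = np.sign(task_vector) == np.sign(-target_grad)
    return np.where(keep, task_vector, 0.0)
```

The surviving components all have positive inner product with the descent direction, which is the intuition behind the first-order descent guarantee the abstract mentions.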

2026 Conference Proceedings Paper

GramSR: Visual Feature Conditioning for Diffusion-Based Super-Resolution

Authors: D'Oronzio, Fabio; Putamorsi, Federico; Zini, Leonardo; Cornia, Marcella; Baraldi, Lorenzo


Despite recent advances, single-image super-resolution (SR) remains challenging, especially in real-world scenarios with complex degradations. Diffusion-based SR methods, particularly those built on Stable Diffusion, leverage strong generative priors but commonly rely on text conditioning derived from semantic captioning. Such textual descriptions provide only high-level semantics and lack the spatially aligned visual information required for faithful restoration, leading to a representation gap between abstract semantics and spatially aligned visual details. To address this limitation, we propose GramSR, a one-step diffusion-based SR framework that replaces text conditioning with dense visual features extracted from the low-resolution input using a pre-trained DINOv3 encoder. GramSR adopts a three-stage LoRA architecture, where pixel-level, semantic-level, and texture-level LoRA modules are trained sequentially. The pixel-level module focuses on degradation removal using L2 loss, the semantic-level module enhances perceptual details via LPIPS and CSD losses, and the texture-level module enforces feature correlation consistency through a Gram matrix loss computed from DINOv3 features. At inference, independent guidance scales enable flexible control over degradation removal, semantic enhancement, and texture preservation. Extensive experiments on standard SR benchmarks demonstrate that GramSR consistently outperforms existing one-step diffusion-based methods, achieving superior structural fidelity and texture realism.
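
The Gram matrix loss used by the texture-level module can be illustrated with a short sketch, assuming flattened `(channels, positions)` feature maps from a frozen encoder; the normalization shown is one common choice, not necessarily the paper's:

```python
import numpy as np

def gram_matrix(feat):
    # feat: (channels, positions) flattened feature map.
    # The Gram matrix captures channel-by-channel feature correlations,
    # i.e. texture statistics independent of spatial position.
    return (feat @ feat.T) / feat.shape[1]

def gram_loss(feat_sr, feat_hr):
    # Penalize mismatched feature correlations between the super-resolved
    # output and the high-resolution reference.
    return float(np.mean((gram_matrix(feat_sr) - gram_matrix(feat_hr)) ** 2))
```

Because the Gram matrix sums over spatial positions, this loss enforces consistent texture statistics rather than pixel-aligned detail, complementing the pixel- and semantic-level stages.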

2026 Conference Proceedings Paper

Total publications: 1068