Publications

Explore our research publications: papers, articles, and conference proceedings from AImageLab.

Novel and Rare Fusion Transcripts Involving Transcription Factors and Tumor Suppressor Genes in Acute Myeloid Leukemia

Authors: Padella, Antonella; Simonetti, Giorgia; Paciello, Giulia; Giotopoulos, George; Baldazzi, Carmen; Righi, Simona; Ghetti, Martina; Stengel, Anna; Guadagnuolo, Viviana; De Tommaso, Rossella; Papayannidis, Cristina; Robustelli, Valentina; Franchini, Eugenia; Ghelli Luserna Di Rorà, Andrea; Ferrari, Anna; Fontana, Maria Chiara; Bruno, Samantha; Ottaviani, Emanuela; Soverini, Simona; Storlazzi, Clelia Tiziana; Haferlach, Claudia; Sabattini, Elena; Testoni, Nicoletta; Iacobucci, Ilaria; Huntly, Brian J. P.; Ficarra, Elisa; Martinelli, Giovanni

Published in: CANCERS

Approximately 18% of acute myeloid leukemia (AML) cases express a fusion transcript. However, few fusions are recurrent across AML and the identification of these rare chimeras is of interest to characterize AML patients. Here, we studied the transcriptome of 8 adult AML patients with poorly described chromosomal translocation(s), with the aim of identifying novel and rare fusion transcripts. We integrated RNA-sequencing data with multiple approaches including computational analysis, Sanger sequencing, fluorescence in situ hybridization and in vitro studies to assess the oncogenic potential of the ZEB2-BCL11B chimera. We detected 7 different fusions with partner genes involving transcription factors (OAZ-MAFK, ZEB2-BCL11B), tumor suppressors (SAV1-GYPB, PUF60-TYW1, CNOT2-WT1) and rearrangements associated with the loss of NF1 (CPD-PXT1, UTP6-CRLF3). Notably, ZEB2-BCL11B rearrangements co-occurred with FLT3 mutations and were associated with a poorly differentiated or mixed phenotype leukemia. Although the fusion alone did not transform murine c-Kit+ bone marrow cells, 45.4% of 14q32 non-rearranged AML cases were also BCL11B-positive, suggesting a more general and complex mechanism of leukemogenesis associated with BCL11B expression. Overall, by combining different approaches, we described rare fusion events contributing to the complexity of AML and we linked the expression of some chimeras to genomic alterations hitting known genes in AML.

2019 Journal article

OpenFACS: An Open Source FACS-Based 3D Face Animation System

Authors: Cuculo, V.; D'Amelio, A.

Published in: LECTURE NOTES IN COMPUTER SCIENCE

We present OpenFACS, an open source FACS-based 3D face animation system. OpenFACS is a software tool that allows the simulation of realistic facial expressions through the manipulation of specific action units, as defined in the Facial Action Coding System. OpenFACS has been developed together with an API suitable for generating real-time dynamic facial expressions for a three-dimensional character, and it can be easily embedded in existing systems without any prior experience in computer graphics. In this note, we discuss the adopted face model and the implemented architecture, and provide additional details of the model dynamics. Finally, a validation experiment is proposed to assess the effectiveness of the model.
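As an illustration only (this is not the real OpenFACS API; the function names and the chosen action units below are ours), the action-unit abstraction can be sketched as a map of AU intensities that a client blends over time to animate the character:

```python
# Hypothetical sketch of driving a face rig via FACS action-unit intensities.
# AU numbering follows the standard FACS coding (e.g. AU12 = lip corner puller).

def make_expression(**au_intensities):
    """Build an action-unit activation map; intensities are clamped to [0, 1]."""
    return {au: min(max(v, 0.0), 1.0) for au, v in au_intensities.items()}

def blend(expr_a, expr_b, t):
    """Linearly interpolate two expressions for smooth real-time animation."""
    aus = set(expr_a) | set(expr_b)
    return {au: (1 - t) * expr_a.get(au, 0.0) + t * expr_b.get(au, 0.0)
            for au in aus}

neutral = make_expression()
smile = make_expression(AU6=0.7, AU12=0.9)   # cheek raiser + lip corner puller
half_smile = blend(neutral, smile, 0.5)       # halfway through the animation
```

A real-time loop would call `blend` with an increasing `t` each frame to morph from one expression to the next.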

2019 Conference paper

Precision computation of wind turbine power upgrades: An aerodynamic and control optimization test case

Authors: Astolfi, D.; Castellani, F.; Fravolini, M. L.; Cascianelli, S.; Terzi, L.

Published in: JOURNAL OF ENERGY RESOURCES TECHNOLOGY

Wind turbine upgrades have recently been spreading in the wind energy industry as a means of optimizing the efficiency of wind kinetic energy conversion. These interventions have material and labor costs; it is therefore fundamental to estimate the production improvement realistically. Furthermore, retrofitting wind turbines sited in complex environments might exacerbate the stress conditions to which they are subjected and consequently might affect their residual life. In this work, a two-step upgrade of a multimegawatt wind turbine from a wind farm sited in complex terrain is considered. First, vortex generators and passive flow control devices were installed. Second, the management of the revolutions per minute was optimized. A general method is formulated for assessing wind turbine power upgrades using operational data. The method is based on the study of the residuals between the measured power output and a judicious model of the power output itself, before and after the upgrade; properly selecting the model is therefore fundamental. For this reason, an automatic feature selection algorithm based on stepwise multivariate regression is adopted. This allows identifying the most meaningful input variables for a multivariate linear model whose target is the power of the upgraded wind turbine. For the test case of interest, the adopted upgrade is estimated to increase the annual energy production by 2.6 ± 0.1%, with the aerodynamic and control upgrades contributing an estimated 1.8% and 0.8% of the production improvement, respectively.
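The residual-based assessment can be sketched as follows, with synthetic data and a plain forward stepwise selector standing in for the paper's stepwise multivariate regression (all variable names and numbers are illustrative):

```python
# Sketch: fit a linear power model on pre-upgrade data with forward stepwise
# feature selection, then read the upgrade gain from the residuals between
# post-upgrade measurements and the pre-upgrade model's predictions.
import numpy as np

def fit(X, y):
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

def forward_stepwise(X, y, k):
    """Greedily pick the k columns that most reduce the residual sum of squares."""
    chosen = []
    for _ in range(k):
        best, best_rss = None, np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = chosen + [j]
            r = y - predict(fit(X[:, cols], y), X[:, cols])
            rss = float(r @ r)
            if rss < best_rss:
                best, best_rss = j, rss
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
wind = rng.uniform(4, 12, 500)           # wind speed [m/s], synthetic
noise_feat = rng.normal(size=500)        # irrelevant candidate regressor
X = np.column_stack([wind, noise_feat])
power_pre = 50 * wind + rng.normal(0, 5, 500)             # pre-upgrade power
power_post = 1.026 * (50 * wind) + rng.normal(0, 5, 500)  # ~2.6% uplift

cols = forward_stepwise(X, power_pre, 1)     # keeps only the meaningful input
coef = fit(X[:, cols], power_pre)
gain = np.mean(power_post - predict(coef, X[:, cols])) / np.mean(power_pre)
```

On this synthetic test the selector discards the irrelevant regressor and the residual analysis recovers a gain close to the simulated 2.6%.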

2019 Journal article

Predicting the Driver's Focus of Attention: the DR(eye)VE Project

Authors: Palazzi, Andrea; Abati, Davide; Calderara, Simone; Solera, Francesco; Cucchiara, Rita

Published in: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE

In this work we aim to predict the driver's focus of attention. The goal is to estimate what a person would pay attention to while driving, and which part of the scene around the vehicle is more critical for the task. To this end we propose a new computer vision model based on a multi-branch deep architecture that integrates three sources of information: raw video, motion and scene semantics. We also introduce DR(eye)VE, the largest dataset of driving scenes for which eye-tracking annotations are available. This dataset features more than 500,000 registered frames, matching ego-centric views (from glasses worn by drivers) and car-centric views (from a roof-mounted camera), further enriched by other sensor measurements. Results highlight that several attention patterns are shared across drivers and can be reproduced to some extent. The indication of which elements in the scene are likely to capture the driver's attention may benefit several applications in the context of human-vehicle interaction and driver attention analysis.
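Purely as a schematic (the actual model is a deep multi-branch network; the maps and weights below are invented), the integration of the three information sources can be pictured as a weighted fusion of per-branch attention maps:

```python
# Toy fusion of three branch outputs (appearance / motion / semantics) into a
# single normalised fixation map. In the real architecture the branches are
# learned networks; here they are stand-in per-pixel maps.
import numpy as np

def fuse_branches(rgb_map, flow_map, sem_map, weights=(0.5, 0.3, 0.2)):
    """Combine per-branch attention maps into one map that sums to 1."""
    fused = weights[0] * rgb_map + weights[1] * flow_map + weights[2] * sem_map
    return fused / fused.sum()

h, w = 4, 4
rgb = np.ones((h, w))    # stand-in for the raw-video branch output
flow = np.ones((h, w))   # stand-in for the motion branch output
sem = np.ones((h, w))    # stand-in for the semantics branch output
out = fuse_branches(rgb, flow, sem)
```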

2019 Journal article

Predictive Sampling of Facial Expression Dynamics Driven by a Latent Action Space

Authors: Boccignone, G.; Bodini, M.; Cuculo, V.; Grossi, G.

We present a probabilistic generative model for tracking, by prediction, the dynamics of affective facial expressions in videos. The model relies on Bayesian filter sampling of facial landmarks conditioned on motor action parameter dynamics, namely trajectories shaped by an autoregressive Gaussian Process Latent Variable state space. The analysis-by-synthesis approach at the heart of the model allows for both inference and generation of affective expressions. Robustness of the method to occlusions and degradation of video quality has been assessed on a publicly available dataset.
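The Bayesian filter sampling step can be illustrated with a minimal bootstrap particle filter; this sketch replaces the paper's Gaussian Process latent variable dynamics with a simple AR(1) latent state driving a single observed landmark, purely for illustration:

```python
# Minimal bootstrap particle filter: propagate latent particles with assumed
# AR(1) dynamics, reweight by landmark likelihood, estimate, then resample.
import numpy as np

rng = np.random.default_rng(1)

def step_particles(particles, a=0.9, q=0.1):
    """Propagate latent action parameters with AR(1) dynamics."""
    return a * particles + q * rng.normal(size=particles.shape)

def reweight(particles, obs, r=0.2):
    """Weight particles by the landmark observation likelihood, normalised."""
    w = np.exp(-0.5 * ((obs - particles) / r) ** 2)
    return w / w.sum()

T, N = 50, 500
x = 0.0
particles = rng.normal(size=N)
errs = []
for _ in range(T):
    x = 0.9 * x + 0.1 * rng.normal()       # simulated true latent state
    obs = x + 0.2 * rng.normal()           # noisy landmark observation
    particles = step_particles(particles)  # predict
    w = reweight(particles, obs)           # update
    est = float(w @ particles)             # posterior mean estimate
    errs.append(abs(est - x))
    particles = particles[rng.choice(N, N, p=w)]  # resample

mean_err = float(np.mean(errs))
```

The filter's tracking error stays well below the observation noise level, which is the "tracking by prediction" behaviour the model builds on.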

2019 Conference paper

Problems with Saliency Maps

Authors: Boccignone, Giuseppe; Cuculo, Vittorio; D’Amelio, Alessandro

Published in: LECTURE NOTES IN COMPUTER SCIENCE

Despite the popularity that saliency models have gained in the computer vision community, they are most often conceived, exploited and benchmarked without taking heed of a number of problems and subtle issues they bring about. When saliency maps are used as proxies for the likelihood of fixating a location in a viewed scene, one such issue is the temporal dimension of visual attention deployment. Through a simple simulation it is shown how neglecting this dimension leads to results that at best cast shadows on the predictive performance of a model and its assessment via benchmarking procedures.
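The core issue can be reproduced with a toy simulation (saliency values are arbitrary units of our own choosing): an order-blind saliency score cannot tell apart two scanpaths with opposite temporal dynamics:

```python
# A static saliency map collapses the temporal order of fixations, so two
# very different scanpaths receive identical scores.

salience = {"face": 9, "text": 6, "background": 1}  # arbitrary units

def static_score(scanpath):
    """Mean saliency over fixated locations -- blind to fixation order."""
    return sum(salience[loc] for loc in scanpath) / len(scanpath)

early_face = ["face", "face", "text", "background"]
late_face = ["background", "text", "face", "face"]  # reversed dynamics

# Both scanpaths get the same score despite opposite attention deployment.
```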

2019 Conference paper

Recognizing social relationships from an egocentric vision perspective

Authors: Alletto, Stefano; Cornia, Marcella; Baraldi, Lorenzo; Serra, Giuseppe; Cucchiara, Rita

In this chapter we address the problem of partitioning social gatherings into interacting groups in egocentric scenarios. People in the scene are tracked, and their head pose and 3D location are estimated. Following the formalism of the f-formation, we use orientation and distance to define an inherently social pairwise feature capable of describing how two people stand in relation to one another. We present a Structural SVM based approach to learn how to weight each component of the feature vector depending on the social situation it is applied to. To better understand the social dynamics, we also estimate what we call the social relevance of each subject in a group using a saliency attentive model. Extensive tests on two publicly available datasets show that our solution achieves encouraging results when detecting social groups and their relevant subjects in challenging egocentric scenarios.
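The pairwise feature can be sketched as follows (function and variable names are ours, not the authors' code): for each pair it combines interpersonal distance with how far each person's head pose deviates from the line joining the two, in the spirit of the f-formation:

```python
# Sketch of an orientation-and-distance pairwise social feature.
import math

def pairwise_feature(p1, theta1, p2, theta2):
    """p = (x, y) position; theta = head-pose angle in radians.
    Returns (distance, deviation of person 1, deviation of person 2)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dist = math.hypot(dx, dy)
    # Angular deviation of each person's gaze from the line joining them:
    # zero means the person looks straight at the other.
    dev1 = abs(math.atan2(dy, dx) - theta1)
    dev2 = abs(math.atan2(-dy, -dx) - theta2)
    return (dist, dev1, dev2)

# Two people one metre apart, facing each other head-on:
d, a, b = pairwise_feature((0, 0), 0.0, (1, 0), math.pi)
```

Low deviations at small distance are the signature of an interacting pair; a learned model then weights these components per social situation.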

2019 Book chapter/Essay

Robust single-sample face recognition by sparsity-driven sub-dictionary learning using deep features

Authors: Cuculo, Vittorio; D'Amelio, Alessandro; Grossi, Giuliano; Lanzarotti, Raffaella; Lin, Jianyi

Published in: SENSORS

Face recognition using a single reference image per subject is challenging, above all when referring to a large gallery of subjects. Furthermore, the problem hardness seriously increases when the images are acquired in unconstrained conditions. In this paper we address the challenging Single Sample Per Person (SSPP) problem considering large datasets of images acquired in the wild, thus possibly featuring illumination, pose, facial expression, partial occlusion, and low-resolution hurdles. The proposed technique alternates a sparse dictionary learning technique based on the method of optimal directions and the iterative ℓ0-norm minimization algorithm called k-LIMAPS. It works on robust deep-learned features, provided that the image variability is extended by standard augmentation techniques. Experiments show the effectiveness of our method against the hurdles introduced above: first, we report extensive experiments on the unconstrained LFW dataset when referring to large galleries of up to 1680 subjects; second, we present experiments on very low-resolution test images down to 8 × 8 pixels; third, tests on the AR dataset are analyzed against specific disguises such as partial occlusions, facial expressions, and illumination problems. In all three scenarios our method outperforms the state-of-the-art approaches adopting similar configurations.
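A much-simplified sketch of the sub-dictionary idea (with plain least-squares residuals standing in for k-LIMAPS, and synthetic features in place of deep-learned ones): a probe is assigned to the subject whose sub-dictionary of augmented samples reconstructs it best:

```python
# Sub-dictionary classification sketch: one dictionary of augmented feature
# vectors per subject; classify a probe by smallest reconstruction residual.
import numpy as np

def classify(probe, sub_dicts):
    """Return the subject id whose dictionary yields the smallest residual."""
    best_id, best_res = None, np.inf
    for sid, D in sub_dicts.items():
        coef, *_ = np.linalg.lstsq(D, probe, rcond=None)
        res = float(np.linalg.norm(probe - D @ coef))
        if res < best_res:
            best_id, best_res = sid, res
    return best_id

rng = np.random.default_rng(2)
# Each subject: a single reference feature plus augmented variants (columns),
# mimicking the augmentation used to extend the single sample per person.
base = {s: rng.normal(size=64) for s in ("alice", "bob")}
sub_dicts = {s: np.column_stack([v + 0.05 * rng.normal(size=64)
                                 for _ in range(5)])
             for s, v in base.items()}
probe = base["alice"] + 0.1 * rng.normal(size=64)
pred = classify(probe, sub_dicts)
```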

2019 Journal article

Segmentation Guided Scoring of Pathological Lesions in Swine Through CNNs

Authors: Bergamini, L.; Trachtman, A. R.; Palazzi, A.; Negro, E. D.; Capobianco Dondona, A.; Marruchella, G.; Calderara, S.

Published in: LECTURE NOTES IN ARTIFICIAL INTELLIGENCE

The slaughterhouse is widely recognised as a useful checkpoint for assessing the health status of livestock. At the moment, this is implemented through the application of scoring systems by human experts. The automation of this process would be extremely helpful for veterinarians, enabling a systematic examination of all slaughtered livestock and positively influencing herd management. However, such systems are not yet available, mainly because of a critical lack of annotated data. In this work we: (i) introduce a large-scale dataset to enable the development and benchmarking of these systems, featuring more than 4000 high-resolution swine carcass images annotated by domain experts with pixel-level segmentation; (ii) exploit part of this annotation to train a deep learning model on the task of pleural lesion scoring. In this setting, we propose a segmentation-guided framework which stacks a fully convolutional neural network performing semantic segmentation together with a rule-based classifier integrating a priori veterinary knowledge. Thorough experimental analysis against state-of-the-art baselines proves our method to be superior both in terms of accuracy and model interpretability. Code and dataset are publicly available at: https://github.com/lucabergamini/swine-lesion-scoring.
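The rule-based stage can be sketched as a function of the segmentation masks; the thresholds below are invented for illustration and are not the veterinary scoring grid used in the paper:

```python
# Hypothetical rule-based scorer: given per-pixel masks for the pleura and
# for lesions, score severity from the fraction of pleural surface affected.

def lesion_score(pleura_mask, lesion_mask):
    """Masks are same-sized 2D lists of 0/1; returns a 0-3 severity score."""
    pleura = sum(map(sum, pleura_mask))
    lesion = sum(sum(l and p for l, p in zip(lr, pr))
                 for lr, pr in zip(lesion_mask, pleura_mask))
    ratio = lesion / pleura if pleura else 0.0
    if ratio == 0:
        return 0      # healthy
    if ratio < 0.1:
        return 1      # mild
    if ratio < 0.3:
        return 2      # moderate
    return 3          # severe
```

Keeping this stage as explicit rules on top of the network's segmentation is what gives the framework its interpretability: every score can be traced back to a measured lesion fraction.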

2019 Conference paper

Self Paced Deep Learning for Weakly Supervised Object Detection

Authors: Sangineto, E.; Nabi, M.; Culibrk, D.; Sebe, N.

Published in: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE

In a weakly-supervised scenario, object detectors need to be trained using image-level annotation alone. Since bounding-box-level ground truth is not available, most of the solutions proposed so far are based on an iterative Multiple Instance Learning framework in which the current classifier is used to select the highest-confidence boxes in each image, which are treated as pseudo-ground truth in the next training iteration. However, the errors of an immature classifier can make the process drift, usually introducing many false positives into the training dataset. To alleviate this problem, we propose a training protocol based on the self-paced learning paradigm. The main idea is to iteratively select a subset of images and boxes that are the most reliable and use them for training. While similar strategies have been adopted for SVMs and other classifiers in the past few years, we are the first to show that a self-paced approach can be used with deep-network-based classifiers in an end-to-end training pipeline. The method we propose is built on the fully-supervised Fast-RCNN architecture and can be applied to similar architectures which represent the input image as a bag of boxes. We show state-of-the-art results on Pascal VOC 2007, Pascal VOC 2010 and ILSVRC 2013. On ILSVRC 2013, our results based on a low-capacity AlexNet network outperform even those weakly-supervised approaches which are based on much higher-capacity networks.
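The self-paced selection rule can be sketched as follows (the scores and the keep-fraction schedule are illustrative, not the paper's actual hyperparameters):

```python
# Self-paced pseudo-label selection: at each iteration keep only the most
# confidently scored boxes as pseudo-ground-truth, growing the kept fraction
# as the detector matures, so early drift from false positives is limited.

def self_paced_select(scored_boxes, keep_frac):
    """scored_boxes: list of (confidence, box); keep the top fraction."""
    k = max(1, int(len(scored_boxes) * keep_frac))
    return sorted(scored_boxes, key=lambda sb: sb[0], reverse=True)[:k]

boxes = [(0.9, "b1"), (0.2, "b2"), (0.7, "b3"), (0.4, "b4")]
# Early iteration: train only on the single most reliable pseudo-label ...
first = self_paced_select(boxes, 0.25)
# ... later iterations admit progressively harder examples.
later = self_paced_select(boxes, 0.75)
```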

2019 Journal article

Page 49 of 109 • Total publications: 1084