Introducing Shape Priors in Siamese Networks for Image Classification

Published in Neurocomputing, 2023

Recommended citation: Hiba Alqasir, Damien Muselet, and Christophe Ducottet. "Introducing Shape Priors in Siamese Networks for Image Classification." Neurocomputing, Volume 568, 2024, 127034.

Deep neural networks keep getting more efficient, and so does the amount of annotated data required to train them. We propose a solution that improves the learning process of a classification network while requiring less labeled data. Our approach informs the classifier of the elements it should focus on to make its decision by supplying it with shape priors. These priors are expressed as binary masks that give a rough idea of the shape of the relevant elements for a given class. We resort to a Siamese architecture and feed it with image/mask pairs. By inserting shape priors, only the relevant features are retained, which provides the network with significant generalization power without requiring a specific domain adaptation step. The solution is evaluated on standard cross-domain digit classification tasks and on a real-world video surveillance application. Extensive tests show that our approach outperforms a classical classifier by producing a good latent space from less training data.
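The image/mask pairing described above can be sketched as a minimal Siamese setup. Everything below is an illustrative assumption, not the paper's actual architecture: the two branches are stand-in random linear projections (the paper uses CNNs), and the contrastive loss is a generic choice to show how matching image/mask pairs would be pulled together in the latent space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical branch weights: one projection per Siamese branch.
# (Whether and how weights are shared between branches is not shown here.)
W_img = rng.standard_normal((16 * 16, 8))
W_mask = rng.standard_normal((16 * 16, 8))

def embed(x, W):
    """Flatten a 16x16 input, project it, and L2-normalise the embedding."""
    z = x.reshape(-1) @ W
    return z / np.linalg.norm(z)

def contrastive_loss(z1, z2, same, margin=1.0):
    """Generic contrastive loss: pull matching image/mask pairs together,
    push non-matching pairs at least `margin` apart."""
    d = np.linalg.norm(z1 - z2)
    return d ** 2 if same else max(0.0, margin - d) ** 2

# A toy image and a rough binary mask acting as the shape prior.
image = rng.random((16, 16))
mask = np.zeros((16, 16))
mask[4:12, 4:12] = 1.0

z_i = embed(image, W_img)   # branch 1: the image
z_m = embed(mask, W_mask)   # branch 2: the shape prior
loss_pos = contrastive_loss(z_i, z_m, same=True)   # matching pair
loss_neg = contrastive_loss(z_i, z_m, same=False)  # treated as non-matching
print(loss_pos, loss_neg)
```

Training on such pairs would drive embeddings of images and their class-specific masks together, so only features consistent with the prior shape survive in the latent space.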

Download paper here

Access the article for free until January 13, 2024, via the Share Link. No sign-ups or fees required!