Contrastive Language-Image Pre-training for the Italian Language

Federico Bianchi*, Giuseppe Attanasio, Raphael Pisoni, Silvia Terragni, Gabriele Sarti, Sri Lakshmi

*Corresponding author for this work

Research output: Working paper › Preprint › Academic

Abstract

CLIP (Contrastive Language-Image Pre-training) is a recent multimodal model that jointly learns representations of images and texts. The model is trained on a massive amount of English data and shows impressive performance on zero-shot classification tasks. Training the same model for a different language is not trivial: data in other languages may be scarce, and the model requires high-quality translations of the texts to guarantee good performance. In this paper, we present the first CLIP model for the Italian language (CLIP-Italian), trained on more than 1.4 million image-text pairs. Results show that CLIP-Italian outperforms the multilingual CLIP model on the tasks of image retrieval and zero-shot classification.
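
As a concrete illustration of the zero-shot classification setup the abstract describes, below is a minimal sketch using the Hugging Face transformers CLIP API. The checkpoint name (openai/clip-vit-base-patch32), the image path, and the Italian candidate prompts are illustrative assumptions, not taken from the paper; the CLIP-Italian weights may be published under a different identifier and loaded through a different model class.

    # Minimal sketch of CLIP-style zero-shot image classification.
    # Checkpoint, image path, and labels are illustrative assumptions.
    from PIL import Image
    import torch
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("photo.jpg")  # any local image
    labels = ["una foto di un gatto", "una foto di un cane"]  # Italian prompts

    inputs = processor(text=labels, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)

    # logits_per_image holds image-text similarity scores; softmax turns
    # them into probabilities over the candidate labels.
    probs = outputs.logits_per_image.softmax(dim=-1)
    print(dict(zip(labels, probs[0].tolist())))

The class whose prompt receives the highest probability is taken as the prediction; no task-specific fine-tuning is needed, which is what makes the classification "zero-shot".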
Original language: English
Publisher: arXiv
Publication status: Submitted - 19-Aug-2021

Publication series

Name: ArXiv
Publisher: Cornell University Press
ISSN (Print): 2331-8422

Keywords

  • contrastive learning
  • zero-shot image classification
  • deep learning
  • natural language processing
  • Italian language
  • image retrieval

