Towards Identity-Aware Cross-Modal Retrieval: a Dataset and a Baseline

Authors: Nicola Messina, Lucia Vadicamo, Leo Maltese, Claudio Gennaro

Published: 2024-12-30 15:21:36+00:00

AI Summary

This paper introduces COCO Person FaceSwap (COCO-PFS), a new dataset for identity-aware cross-modal retrieval, addressing the lack of large-scale datasets for this task. It also proposes Identity-aware CLIP (Id-CLIP), a novel architecture that achieves competitive retrieval performance through targeted fine-tuning of the visual backbone.

Abstract

Recent advancements in deep learning have significantly enhanced content-based retrieval methods, notably through models like CLIP that map images and texts into a shared embedding space. However, these methods often struggle with domain-specific entities and long-tail concepts absent from their training data, particularly in identifying specific individuals. In this paper, we explore the task of identity-aware cross-modal retrieval, which aims to retrieve images of persons in specific contexts based on natural language queries. This task is critical in various scenarios, such as for searching and browsing personalized video collections or large audio-visual archives maintained by national broadcasters. We introduce a novel dataset, COCO Person FaceSwap (COCO-PFS), derived from the widely used COCO dataset and enriched with deepfake-generated faces from VGGFace2. This dataset addresses the lack of large-scale datasets needed for training and evaluating models for this task. Our experiments assess the performance of different CLIP variations repurposed for this task, including our architecture, Identity-aware CLIP (Id-CLIP), which achieves competitive retrieval performance through targeted fine-tuning. Our contributions lay the groundwork for more robust cross-modal retrieval systems capable of recognizing long-tail identities and contextual nuances. Data and code are available at https://github.com/mesnico/IdCLIP.
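The abstract describes CLIP-style retrieval: images and text are mapped into a shared embedding space, and a query retrieves the images whose embeddings are most similar. A minimal sketch of that ranking step, using random vectors as stand-ins for real CLIP encoder outputs (the function names are illustrative, not from the paper's code):

```python
import numpy as np

def l2_normalize(x):
    # Project embeddings onto the unit sphere so that dot products
    # equal cosine similarities, as in CLIP-style retrieval.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def rank_images(text_emb, image_embs):
    # Rank gallery images by cosine similarity to the text query.
    sims = l2_normalize(image_embs) @ l2_normalize(text_emb)
    return np.argsort(-sims), sims

# Toy stand-ins for CLIP embeddings (3 gallery images, 4-dim space).
rng = np.random.default_rng(0)
image_embs = rng.normal(size=(3, 4))
text_emb = image_embs[1] + 0.01 * rng.normal(size=4)  # query close to image 1

order, sims = rank_images(text_emb, image_embs)
print(order[0])  # image 1 should rank first
```

In an actual system, `text_emb` and `image_embs` would come from CLIP's text and visual encoders; the ranking logic itself is unchanged.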


Key findings
Id-CLIP significantly outperforms the CLIP and CLIP-PAD baselines on both entity-in-context and entity-only retrieval, demonstrating that visual prompt tuning is effective for identity-aware cross-modal retrieval. The COCO-PFS dataset provides a valuable benchmark for future research in this area.
Approach
The authors address identity-aware cross-modal retrieval by creating a new dataset, COCO-PFS, in which deepfake technology replaces faces in COCO images with those of public figures from VGGFace2. They then adapt CLIP into Identity-aware CLIP (Id-CLIP), which uses visual prompt tuning so that the fine-tuned visual backbone attends to person identities.
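Visual prompt tuning, the strategy named here, typically prepends a small set of learnable tokens to the patch-token sequence of a frozen vision transformer, so that only those tokens are updated during fine-tuning. A shape-level sketch under those assumptions (the dimensions and variable names are illustrative, not Id-CLIP's actual API):

```python
import numpy as np

# Hypothetical ViT-B/32-style backbone: 1 CLS token + 49 patch tokens,
# width 768, plus 8 learnable identity prompt tokens.
num_tokens, width, num_prompts = 50, 768, 8

rng = np.random.default_rng(0)
patch_tokens = rng.normal(size=(num_tokens, width))    # frozen backbone features
prompt_tokens = rng.normal(size=(num_prompts, width))  # learnable prompts

# Visual prompt tuning: insert the learnable tokens after the CLS token,
# before the sequence enters the transformer blocks. During fine-tuning,
# gradients flow only into prompt_tokens (and any unfrozen head).
augmented = np.concatenate(
    [patch_tokens[:1], prompt_tokens, patch_tokens[1:]], axis=0
)
print(augmented.shape)  # (58, 768)
```

The transformer then processes the longer sequence as usual; the extra tokens give the model capacity to encode identity cues without retraining the whole backbone.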
Datasets
COCO Person FaceSwap (COCO-PFS) dataset derived from COCO and VGGFace2; VGGFace2 is used as a face gallery.
Model(s)
CLIP, CLIP-PAD, Identity-aware CLIP (Id-CLIP)
Author countries
Italy