Contrastive Learning of Global-Local Video Representations

Authors: Shuang Ma, Zhaoyang Zeng, Daniel McDuff, Yale Song

Published: 2021-04-07 07:35:08+00:00

AI Summary

This paper proposes a contrastive learning approach for learning video representations that generalize to both global and local tasks. It achieves this by optimizing two contrastive objectives that encourage the model to learn global-local visual information from audio signals, significantly outperforming models trained with disjoint objectives.

Abstract

Contrastive learning has delivered impressive results for various tasks in the self-supervised regime. However, existing approaches optimize for learning representations specific to downstream scenarios, i.e., global representations suitable for tasks such as classification or local representations for tasks such as detection and localization. While they produce satisfactory results in the intended downstream scenarios, they often fail to generalize to tasks that they were not originally designed for. In this work, we propose to learn video representations that generalize to both the tasks which require global semantic information (e.g., classification) and the tasks that require local fine-grained spatio-temporal information (e.g., localization). We achieve this by optimizing two contrastive objectives that together encourage our model to learn global-local visual information given audio signals. We show that the two objectives mutually improve the generalizability of the learned global-local representations, significantly outperforming their disjointly learned counterparts. We demonstrate our approach on various tasks including action/sound classification, lip reading, deepfake detection, event and sound localization (https://github.com/yunyikristy/global_local).


Key findings
The proposed model significantly outperforms state-of-the-art methods on several downstream tasks, including lip reading, deepfake detection, and audio-visual event localization. Jointly optimizing both contrastive objectives improves representation learning in both subspaces. The learned audio-visual attention maps effectively localize sounding sources in videos.
Approach
The approach factorizes the spatio-temporal feature space into a spatially-local/temporally-global subspace and a spatially-global/temporally-local subspace. A cross-modal (audio-visual) contrastive objective is defined in each subspace: the first captures slowly changing patch-level information, the second fast-changing frame-level information. A spatial attention pooling mechanism uses the learned patch-level information to guide frame-level learning.
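
A minimal PyTorch-style sketch of this factorization, assuming 3D-ResNet visual features of shape (B, C, T, H, W) and a pooled 1D-ResNet audio embedding; the module names, projection sizes, and the pooled InfoNCE formulation below are illustrative simplifications, not the authors' exact objectives.

```python
# Hypothetical sketch: two cross-modal contrastive losses over a factorized
# visual feature map, with spatial attention pooling guiding the frame-level branch.
import torch
import torch.nn as nn
import torch.nn.functional as F


def info_nce(query, keys, temperature=0.07):
    """Standard InfoNCE loss; positives share the same batch index."""
    query = F.normalize(query, dim=-1)
    keys = F.normalize(keys, dim=-1)
    logits = query @ keys.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(query.size(0), device=query.device)
    return F.cross_entropy(logits, targets)


class GlobalLocalContrast(nn.Module):
    """Contrasts patch-level and frame-level visual embeddings against audio."""

    def __init__(self, vis_dim=512, aud_dim=512, proj_dim=128):
        super().__init__()
        # MLP projection heads (dimensions are assumptions)
        self.vis_patch_head = nn.Sequential(
            nn.Linear(vis_dim, proj_dim), nn.ReLU(), nn.Linear(proj_dim, proj_dim))
        self.vis_frame_head = nn.Sequential(
            nn.Linear(vis_dim, proj_dim), nn.ReLU(), nn.Linear(proj_dim, proj_dim))
        self.aud_head = nn.Sequential(
            nn.Linear(aud_dim, proj_dim), nn.ReLU(), nn.Linear(proj_dim, proj_dim))
        # Scores each spatial patch for the attention-pooling step
        self.attn_score = nn.Linear(vis_dim, 1)

    def forward(self, vis_feat, aud_feat):
        # vis_feat: (B, C, T, H, W) from a 3D-ResNet; aud_feat: (B, C_a) from a 1D-ResNet
        B, C, T, H, W = vis_feat.shape

        # Spatially-local / temporally-global: average over time, keep patches
        patch_feat = vis_feat.mean(dim=2).flatten(2).transpose(1, 2)     # (B, H*W, C)

        # Spatial attention pooling: patch-level scores guide frame-level pooling
        attn = self.attn_score(patch_feat).softmax(dim=1)                # (B, H*W, 1)
        frames = vis_feat.flatten(3).permute(0, 2, 3, 1)                 # (B, T, H*W, C)
        frame_feat = (frames * attn.unsqueeze(1)).sum(dim=2)             # (B, T, C)

        # Project each stream; tokens are mean-pooled here for simplicity,
        # whereas the paper contrasts individual patches/frames with audio.
        z_patch = self.vis_patch_head(patch_feat).mean(dim=1)            # (B, D)
        z_frame = self.vis_frame_head(frame_feat).mean(dim=1)            # (B, D)
        z_aud = self.aud_head(aud_feat)                                  # (B, D)

        # One contrastive objective per subspace, both against the audio embedding
        loss_local = info_nce(z_patch, z_aud)
        loss_global = info_nce(z_frame, z_aud)
        return loss_local + loss_global
```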
Datasets
Kinetics-400, AVSpeech, UCF101, HMDB51, ESC50, LRW, LRS2, DFDC, AVE, Kinetics-Sounds, AudioSet
Model(s)
3D-ResNet (visual encoders), 1D-ResNet (audio encoders), MLP (prediction heads)
Author countries
USA, China