Limited Data, Unlimited Potential: A Study on ViTs Augmented by Masked Autoencoders

Authors: Srijan Das, Tanmay Jain, Dominick Reilly, Pranav Balaji, Soumyajit Karmakar, Shyam Marjit, Xiang Li, Abhijit Das, Michael S. Ryoo

Published: 2023-10-31 17:59:07+00:00

AI Summary

This paper introduces Self-Supervised Auxiliary Task (SSAT), a method for training Vision Transformers (ViTs) on limited data. SSAT jointly optimizes the primary task (e.g., classification) with a self-supervised auxiliary task (e.g., masked image reconstruction), significantly improving ViT performance over the standard sequential pipeline of self-supervised pre-training followed by fine-tuning.
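
Concretely, both tasks share a single ViT encoder and are trained under one combined loss. A minimal sketch of the joint objective, where the auxiliary weight lambda is an assumed hyperparameter (this summary does not specify how the two losses are balanced):

    \mathcal{L}_{\mathrm{joint}} \;=\; \mathcal{L}_{\mathrm{primary}} \;+\; \lambda\,\mathcal{L}_{\mathrm{SSL}}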

Abstract

Vision Transformers (ViTs) have become ubiquitous in computer vision. Despite their success, ViTs lack inductive biases, which can make it difficult to train them with limited data. To address this challenge, prior studies suggest training ViTs with self-supervised learning (SSL) and fine-tuning sequentially. However, we observe that jointly optimizing ViTs for the primary task and a Self-Supervised Auxiliary Task (SSAT) is surprisingly beneficial when the amount of training data is limited. We explore the appropriate SSL tasks that can be optimized alongside the primary task, the training schemes for these tasks, and the data scale at which they can be most effective. Our findings reveal that SSAT is a powerful technique that enables ViTs to leverage the unique characteristics of both the self-supervised and primary tasks, achieving better performance than typical ViTs pre-trained with SSL and fine-tuned sequentially. Our experiments, conducted on 10 datasets, demonstrate that SSAT significantly improves ViT performance while reducing the carbon footprint of training. We also confirm the effectiveness of SSAT in the video domain for deepfake detection, showcasing its generalizability. Our code is available at https://github.com/dominickrei/Limited-data-vits.


Key findings
SSAT consistently outperforms sequential self-supervised learning and fine-tuning across multiple image and video datasets, especially when data is limited. SSAT improves robustness to data perturbations and achieves state-of-the-art results on several benchmark datasets. The approach also shows strong generalization to deepfake detection in the video domain.
Approach
The approach jointly optimizes a Vision Transformer (ViT) for a primary task (e.g., classification) and a self-supervised auxiliary task (SSAT), such as masked image reconstruction with a masked autoencoder. Sharing one encoder across both objectives lets the ViT draw on supervised and self-supervised signals simultaneously, improving performance especially when training data is limited.
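
Below is a minimal PyTorch sketch of one joint training step under this scheme. It pairs a shared ViT-style encoder with a classification head and a light reconstruction head, and masks patches SimMIM-style (mask tokens are passed through the encoder) rather than dropping them as MAE does; all module names, dimensions, and the loss weight are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class JointViT(nn.Module):
    """Shared ViT trunk with a primary (classification) head and an
    auxiliary (masked patch reconstruction) head. Hypothetical sketch."""
    def __init__(self, dim=192, depth=4, heads=3, num_classes=10,
                 patch=4, img=32, mask_ratio=0.75):
        super().__init__()
        self.patch, self.mask_ratio = patch, mask_ratio
        num_patches = (img // patch) ** 2
        self.embed = nn.Linear(3 * patch * patch, dim)        # patch embedding
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)    # shared trunk
        self.cls_head = nn.Linear(dim, num_classes)           # primary task
        self.rec_head = nn.Linear(dim, 3 * patch * patch)     # auxiliary task

    def patchify(self, x):
        b, c, h, w = x.shape
        p = self.patch
        x = x.unfold(2, p, p).unfold(3, p, p)                 # b,c,h/p,w/p,p,p
        return x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)

    def forward(self, imgs):
        patches = self.patchify(imgs)                         # b,n,3p^2
        tokens = self.embed(patches) + self.pos
        # Randomly replace a fraction of tokens with a learned mask token;
        # the auxiliary task is to reconstruct the pixels behind them.
        mask = torch.rand(tokens.shape[:2], device=tokens.device) < self.mask_ratio
        tokens = torch.where(mask[..., None], self.mask_token, tokens)
        feats = self.encoder(tokens)
        logits = self.cls_head(feats.mean(dim=1))             # primary output
        recon = self.rec_head(feats)                          # auxiliary output
        return logits, recon, patches, mask

model = JointViT()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
imgs = torch.randn(8, 3, 32, 32)                              # dummy batch
labels = torch.randint(0, 10, (8,))

logits, recon, patches, mask = model(imgs)
loss_primary = nn.functional.cross_entropy(logits, labels)
# MAE-style: reconstruction error is scored only on the masked patches.
loss_ssl = ((recon - patches) ** 2).mean(dim=-1)[mask].mean()
loss = loss_primary + 1.0 * loss_ssl                          # assumed weight of 1.0
opt.zero_grad()
loss.backward()
opt.step()

The point of the sketch is the single backward pass: gradients from both losses update the shared encoder at every step, which is what separates joint optimization from the sequential pre-train-then-fine-tune baseline the paper compares against.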
Datasets
CIFAR-10, CIFAR-100, Oxford Flowers102, SVHN, ImageNet-1K, Chaoyang, PMNIST, ClipArt, Infograph, Sketch, DFDC, FaceForensics++
Model(s)
ViT (various sizes: ViT-T, ViT-S, ViT-B), CvT-13, Swin-T, ResNet-50, VideoMAE
Author countries
USA, India