Visual Perturbation-aware Collaborative Learning for Overcoming the Language Prior Problem
by
Yudong Han, Liqiang Nie, Jianhua Yin, Jianlong Wu, Yan Yan
2022
Abstract
Several studies have recently pointed out that existing Visual Question Answering
(VQA) models suffer heavily from the language prior problem: they capture
superficial statistical correlations between the question type and the answer
while ignoring the image content. Numerous efforts have been dedicated to
strengthening the image dependency by designing delicate models or
introducing extra visual annotations. However, these methods cannot
sufficiently explore how the visual cues explicitly affect the learned answer
representation, which is vital for alleviating language reliance. Moreover,
they generally emphasize the class-level discrimination of the learned answer
representation, which overlooks the more fine-grained instance-level patterns
and demands further optimization. In this paper, we propose a novel
collaborative learning scheme from the viewpoint of visual perturbation
calibration, which can better investigate the fine-grained visual effects and
mitigate the language prior problem by learning the instance-level
characteristics. Specifically, we devise a visual controller to construct two
sorts of curated images with different perturbation extents, based on which the
collaborative learning of intra-instance invariance and inter-instance
discrimination is implemented by two well-designed discriminators. Besides, we
implement the information bottleneck modulator on latent space for further bias
alleviation and representation calibration. We apply our visual
perturbation-aware framework to three orthodox baselines, and the experimental
results on two diagnostic VQA-CP benchmark datasets clearly demonstrate its
effectiveness. In addition, we also validate its robustness on the balanced VQA
benchmark.
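The collaborative objective sketched in the abstract (intra-instance invariance plus inter-instance discrimination over two differently perturbed views of the visual features) resembles an InfoNCE-style contrastive loss. The sketch below is a minimal illustration under that assumption: the Gaussian-noise `perturb` controller, the perturbation strengths, and all function names are hypothetical stand-ins, not the paper's actual visual controller or discriminators.

```python
import numpy as np

def perturb(feats, strength, rng):
    # Hypothetical visual controller: additive Gaussian noise,
    # with `strength` playing the role of the perturbation extent.
    return feats + strength * rng.standard_normal(feats.shape)

def cosine_sim(a, b):
    # Pairwise cosine similarity between two sets of row vectors.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def collaborative_loss(feats, rng, tau=0.1):
    # Two curated views with different perturbation extents.
    weak = perturb(feats, 0.05, rng)
    strong = perturb(feats, 0.5, rng)
    sim = cosine_sim(weak, strong) / tau  # (N, N) similarity logits
    # Intra-instance invariance: diagonal pairs (same instance) should
    # agree; inter-instance discrimination: off-diagonal pairs (other
    # instances) are pushed apart. Row-wise softmax cross-entropy
    # against the diagonal realizes both at once (InfoNCE).
    logits = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

With random features the loss starts near log N and shrinks as the two views of each instance become more similar relative to other instances, which is the instance-level calibration the abstract describes.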
Archived Files and Locations
application/pdf, 5.0 MB — arxiv.org (repository), web.archive.org (webarchive)
arXiv:2207.11850v1