MMIU: Dataset for Visual Intent Understanding in Multimodal Assistants
by
Alkesh Patel, Joel Ruben Antony Moniz, Roman Nguyen, Nick Tzou, Hadas Kotek, Vincent Renkens
2021
Abstract
In a multimodal assistant, where vision is one of the input modalities,
identifying user intent becomes a challenging task, as the visual input
can influence the outcome. Current digital assistants take spoken input and try
to determine the user intent from conversational or device context.
Consequently, a dataset that includes visual input (i.e., images or videos) for
questions targeted at multimodal assistant use cases is not readily available.
Research in visual question answering (VQA) and visual question generation
(VQG) is a great step forward. However, these datasets do not capture the
questions that a visually-abled person would ask a multimodal assistant.
Moreover, the questions often do not seek information from external knowledge.
In this paper, we provide a new dataset, MMIU (MultiModal Intent
Understanding), that contains questions and corresponding intents provided by
human annotators while looking at images. We then use this dataset for the
intent classification task in a multimodal digital assistant. We also
experiment with various approaches for combining vision and language features,
including the use of a multimodal transformer to classify image-question pairs
into 14 intents. We provide benchmark results and discuss the role of visual
and text features in the intent classification task on our dataset.
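To make the classification setup concrete, below is a minimal sketch (not the
authors' code) of one simple way to combine vision and language features, as the
abstract describes: pooled image and question embeddings are concatenated and fed
to a small classifier over the 14 intents. The feature dimensions (2048 for the
image encoder, 768 for the text encoder), the hidden size, and the PyTorch
framing are all illustrative assumptions, not details from the paper.

import torch
import torch.nn as nn

class LateFusionIntentClassifier(nn.Module):
    """Concatenate pooled image and text features; classify into 14 intents."""
    def __init__(self, img_dim=2048, txt_dim=768, hidden=512, num_intents=14):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden, num_intents),
        )

    def forward(self, img_feat, txt_feat):
        # img_feat: (batch, img_dim) pooled visual features, e.g. from a CNN
        # txt_feat: (batch, txt_dim) pooled question features, e.g. from BERT
        fused = torch.cat([img_feat, txt_feat], dim=-1)
        return self.classifier(fused)

# Random tensors stand in for real encoder outputs in this sketch.
model = LateFusionIntentClassifier()
logits = model(torch.randn(4, 2048), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 14])

A multimodal transformer, as the paper also benchmarks, would instead attend
jointly over image regions and question tokens before classification; the
late-fusion baseline above is only the simplest point of comparison.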
Archived Files and Locations
application/pdf 6.9 MB
arxiv.org (repository) | web.archive.org (webarchive)
arXiv: 2110.06416v2
Access all versions, variants, and formats of this work (e.g., pre-prints)