MTUNet: Few-shot image classification with visual explanations
Bowen Wang, Liangzhi Li, Manisha Verma, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara
June 2021
Abstract
Few-shot learning (FSL) approaches, mostly neural network-based, assume that pre-trained knowledge can be obtained from base (seen) categories and transferred to novel (unseen) categories. However, the black-box nature of neural networks makes it difficult to understand what is actually transferred, which may hamper their application in some risk-sensitive areas. In this paper, we present a new way to perform explainable FSL for image classification, using discriminative patterns and pairwise matching. Experimental results show that the proposed method achieves satisfactory explainability on two mainstream datasets. Code is available*.
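The abstract describes classification via discriminative patterns and pairwise matching, but does not spell out the mechanics of MTUNet itself. The sketch below is only a generic illustration of that idea, not the paper's architecture: it assumes pattern features have already been extracted as vectors, and it scores a query against each class by matching its patterns to that class's support patterns with cosine similarity. All names, shapes, and the toy data are illustrative assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of a (m x d) and rows of b (n x d).
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def classify_by_pattern_matching(query_patterns, support_patterns_per_class):
    # Hypothetical scoring rule: for each class, match every query pattern to its
    # best-matching support pattern and average these best-match similarities.
    scores = []
    for class_patterns in support_patterns_per_class:
        sim = cosine_sim(query_patterns, class_patterns)  # (num_query_patterns x num_support_patterns)
        scores.append(sim.max(axis=1).mean())
    return int(np.argmax(scores)), scores

# Toy usage: 3 query patterns and two classes with 4 support patterns each,
# all in a 16-dimensional feature space (random placeholders).
rng = np.random.default_rng(0)
query = rng.normal(size=(3, 16))
support = [rng.normal(size=(4, 16)) for _ in range(2)]
pred, scores = classify_by_pattern_matching(query, support)
print(pred, [round(s, 3) for s in scores])
```

Because the per-pattern similarities are inspected individually, such a matching scheme lends itself to visual explanation: each query pattern can be traced back to the support pattern it matched most strongly.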
Publication
Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Specially-Appointed Researcher/Fellow
Guest Assistant Professor
His research interests lie in deep learning, computer vision, robotics, and medical image analysis.
Specially-Appointed Researcher/Fellow
Manisha's research interests broadly lie in computer vision and image processing. She is currently working on micro facial expression recognition using multi-modal deep learning frameworks.
Associate Professor
Yuta Nakashima is an associate professor with the Institute for Datability Science, Osaka University. His research interests include computer vision, pattern recognition, natural language processing, and their applications.
Professor
He is working on computer vision and pattern recognition. His main research interests lie in image/video recognition and understanding, as well as applications of natural language processing techniques.