Image Retrieval by Hierarchy-aware Deep Hashing Based on Multi-task Learning
Bowen Wang, Liangzhi Li, Yuta Nakashima, Takehiro Yamamoto, Hiroaki Ohshima, Yoshiyuki Shoji, Kenro Aihara, Noriko Kando
Deep hashing has been widely used for approximate nearest-neighbor search in image retrieval tasks. Most deep hashing methods are trained with image-label pairs without any inter-label relationships, which may not make full use of real-world data. This paper presents a deep hashing method, named HA2SH, that leverages the multiple types of hierarchically structured labels that an ethnological museum assigns to its artifacts. We experimentally show that HA2SH learns to generate hashes that yield better retrieval performance. Our code is available at https://github.com/wbw520/minpaku.
Proc. ACM International Conference on Multimedia Retrieval (ICMR)
Specially-Appointed Assistant Professor
His research interests lie in deep learning, computer vision, robotics, and medical image analysis.
Yuta Nakashima is an associate professor at the Institute for Datability Science, Osaka University. His research interests include computer vision, pattern recognition, natural language processing, and their applications.