Facial expression recognition with skip-connection to leverage low-level features
Manisha Verma, Hirokazu Kobori, Yuta Nakashima, Noriko Takemura, Hajime Nagahara
Deep convolutional neural networks (CNNs) have established themselves in computer vision and machine learning and are used in various applications. In this work, an attempt is made to learn a CNN for the task of facial expression recognition (FER). Our network has convolutional layers linked with an FC layer, with a skip-connection to the classification layer. The motivation behind this design is that the lower layers of a CNN are responsible for lower-level features, and facial expressions are mainly encoded in low-to-mid level features. Hence, in order to leverage the responses from lower layers, all convolutional layers are integrated via FC layers. Moreover, a network with shared parameters is used to extract landmark motion trajectory features. These visual and landmark features are fused to improve the performance. Our method is evaluated on the CK+ and Oulu-CASIA facial expression datasets.
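The core idea of integrating responses from every convolutional stage, not just the last one, can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: the stage shapes, pooling choice, and classifier weights are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def global_avg_pool(fmap):
    """Pool a (channels, H, W) feature map to a (channels,) vector."""
    return fmap.mean(axis=(1, 2))

# Stand-in feature maps from three conv stages (shapes are illustrative):
# low-, mid-, and high-level responses of a CNN.
stages = [
    rng.standard_normal((16, 32, 32)),  # low-level stage
    rng.standard_normal((32, 16, 16)),  # mid-level stage
    rng.standard_normal((64, 8, 8)),    # high-level stage
]

# Skip-connections to the FC layer: pool every stage and concatenate,
# so the classifier sees low-level responses directly instead of only
# the final stage's output.
fused = np.concatenate([global_avg_pool(f) for f in stages])  # (112,)

# Hypothetical FC classification layer over 7 expression classes.
W = rng.standard_normal((7, fused.size)) * 0.01
logits = W @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

In a trained network the concatenated vector would pass through learned FC layers, but the fusion step itself is just this concatenation of multi-level features.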
Proceedings - IEEE International Conference on Image Processing (ICIP)
Manisha’s research interests broadly lie in computer vision and image processing. Currently, she is working on micro facial expression recognition using multi-modal deep learning frameworks.
Yuta Nakashima is an associate professor with the Institute for Datability Science, Osaka University. His research interests include computer vision, pattern recognition, natural language processing, and their applications.
A guest associate professor, she is working on ambient intelligence and gait recognition using pattern recognition and machine learning.
He is working on computer vision and pattern recognition. His main research interests lie in image/video recognition and understanding, as well as applications of natural language processing techniques.