Interpretable and Accurate Fine-grained Recognition
via Region Grouping


Zixuan Huang 1
Yin Li 1,2
1Department of Computer Sciences
2Department of Biostatistics & Medical Informatics
University of Wisconsin-Madison


Code [GitHub]
CVPR 2020 (Oral) [Paper] [Slides]


Abstract

We present an interpretable deep model for fine-grained visual recognition. At the core of our method lies the integration of region-based part discovery and attribution within a deep neural network. Our model is trained using image-level object labels, and provides an interpretation of its results via the segmentation of object parts and the identification of their contributions towards classification. To facilitate the learning of object parts without direct supervision, we explore a simple prior on the occurrence of object parts. We demonstrate that this prior, when combined with our region-based part discovery and attribution, leads to an interpretable model that remains highly accurate. Our model is evaluated on major fine-grained recognition datasets, including CUB-200, CelebA and iNaturalist. Our results compare favorably to state-of-the-art methods on classification tasks, and outperform previous approaches on the localization of object parts.
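To make the occurrence prior concrete, below is a minimal sketch of one plausible instantiation, not the exact loss from the paper. It assumes the model produces a soft part-assignment map of shape (B, K, H, W) (as described in the next section), treats the spatial maximum of each part's assignment as that part's occurrence probability in an image, and pushes the batch-level occurrence rate of every part toward a target value. The function name `part_occurrence_loss` and the parameter `target_rate` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def part_occurrence_loss(assign, target_rate=0.5):
    """Hypothetical occurrence prior on discovered parts (a sketch,
    not the released implementation).

    assign: soft part-assignment map of shape (B, K, H, W),
            e.g. a softmax over K part channels at each pixel.
    """
    # Per-image occurrence of each part: spatial max of its assignment.
    occ = assign.flatten(2).max(dim=-1).values      # (B, K), values in [0, 1]
    # Empirical occurrence rate of each part across the batch.
    batch_occ = occ.mean(dim=0)                     # (K,)
    # Push every part's occurrence rate toward the assumed target.
    target = torch.full_like(batch_occ, target_rate)
    return F.binary_cross_entropy(batch_occ, target)
```

In this sketch, the loss discourages degenerate solutions where a part is assigned everywhere or nowhere, which is the role the paper ascribes to its occurrence prior.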


Interpretable Deep Model via Region Grouping

Why does a deep model recognize the bird as a Yellow-headed Blackbird, or consider the person to be smiling? We present an interpretable deep model for fine-grained recognition. Given an input image (left), our model segments object parts (middle) and identifies their contributions to the decision (right). All results shown come from a model trained using only image-level labels.


With only image-level labels, our model learns to group pixels into meaningful object-part regions and to attend to these part regions for fine-grained classification. Our key innovation is a novel regularization of part occurrence that facilitates part discovery during learning. Once learned, our model outputs (1) a part assignment map, (2) an attention map, and (3) the predicted label of the image. We demonstrate that this yields a deep model for fine-grained recognition that is both accurate and interpretable.
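The sketch below illustrates how the three outputs could fit together in a single forward pass. It is a minimal sketch under our own assumptions: the module names, the soft region pooling, and the single-layer attention head are simplifications for exposition, not the released code or the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionGroupingNet(nn.Module):
    """Minimal sketch: group pixels into K soft part regions, then
    attend over those regions for fine-grained classification."""

    def __init__(self, backbone, feat_dim=2048, num_parts=8, num_classes=200):
        super().__init__()
        self.backbone = backbone                # any CNN returning a feature map
        self.part_conv = nn.Conv2d(feat_dim, num_parts, kernel_size=1)
        self.attn_fc = nn.Linear(feat_dim, 1)   # scores each part region
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        feats = self.backbone(x)                            # (B, C, H, W)
        # (1) Part assignment map: softly assign each pixel to one of K parts.
        assign = F.softmax(self.part_conv(feats), dim=1)    # (B, K, H, W)
        # Pool backbone features within each soft region.
        flat_feats = feats.flatten(2)                       # (B, C, HW)
        flat_assign = assign.flatten(2)                     # (B, K, HW)
        region_feats = torch.einsum('bkn,bcn->bkc', flat_assign, flat_feats)
        region_feats = region_feats / (flat_assign.sum(-1, keepdim=True) + 1e-6)
        # (2) Attention map: how much each part contributes to the decision.
        attn = F.softmax(self.attn_fc(region_feats).squeeze(-1), dim=1)  # (B, K)
        # (3) Predicted label from the attention-weighted part features.
        pooled = (attn.unsqueeze(-1) * region_feats).sum(dim=1)          # (B, C)
        return self.classifier(pooled), assign, attn
```

In this reading, the assignment map provides the part segmentation shown in the figure above, while the attention weights indicate each region's contribution to the prediction; both fall out of the same forward pass used for classification.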


Acknowledgements

The authors acknowledge the support provided by the UW-Madison Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation.