Bringing Back the Context: Camera Trap Species Identification as Link Prediction on Multimodal Knowledge Graphs

1The Ohio State University, 2Rensselaer Polytechnic Institute,
3University of Wisconsin-Madison

Abstract

Camera traps are valuable tools in animal ecology for biodiversity monitoring and conservation. However, challenges such as poor generalization to new, unseen deployment locations limit their practical application. Camera trap images are naturally associated with heterogeneous forms of context, possibly in different modalities: for example, a photo of a wild animal may be accompanied by information about where and when it was taken, as well as structured biological knowledge about the species. While typically overlooked by existing work, bringing back such context offers several potential benefits for better image understanding, such as addressing data scarcity and enhancing generalization. However, effectively integrating this heterogeneous context into the visual domain is a challenging problem. In this work, we leverage the structured context associated with camera trap images to improve out-of-distribution generalization for species identification. Specifically, we propose a novel framework that reformulates species classification as link prediction in a multimodal knowledge graph (KG), seamlessly integrating various forms of multimodal context for visual recognition. We apply this framework to out-of-distribution species classification on the iWildCam2020-WILDS and Snapshot Mountain Zebra datasets and achieve competitive performance with state-of-the-art approaches. Furthermore, our framework successfully incorporates biological taxonomy for improved generalization and enhances sample efficiency for recognizing under-represented species.

Overview of COSMO framework

Overview of our framework COSMO. Left: Our multimodal knowledge graph for camera traps and wildlife. Photos from camera traps are jointly represented in the KG with contextual information such as time, location, and structured biology taxonomy. Right: In our formulation of species classification as link prediction, the plausibility score ψ(s, r, o) of each (subject, relation, object) triple is computed using a KGE model (e.g., DistMult), where the subject, relation, and object are all first embedded into a vector space.
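To make the scoring step concrete, here is a minimal NumPy sketch of DistMult scoring and of classification-as-link-prediction by ranking candidate species. All embedding values and species names below are made up for illustration; in the actual framework, image embeddings come from an image encoder (e.g., ResNet-50) and relation/species embeddings are learned.

```python
import numpy as np

def distmult_score(e_s, w_r, e_o):
    """DistMult plausibility score: the tri-linear product <e_s, w_r, e_o>."""
    return float(np.sum(e_s * w_r * e_o))

# Toy 4-d embeddings (made-up values, for illustration only).
e_image = np.array([0.9, 0.1, -0.3, 0.5])    # a camera-trap image node
w_depicts = np.array([1.0, 0.8, 1.0, 0.6])   # an "image depicts species" relation
species = {
    "zebra":  np.array([0.8, 0.2, -0.4, 0.6]),
    "impala": np.array([-0.5, 0.9, 0.3, -0.2]),
}

# Species classification as link prediction: rank candidate species
# (objects) by the plausibility of the (image, depicts, species) triple.
ranked = sorted(species,
                key=lambda s: distmult_score(e_image, w_depicts, species[s]),
                reverse=True)
# ranked[0] is the predicted species for this image
```

Note that DistMult is symmetric in subject and object; the framework can swap in other KGE scoring functions with the same triple-ranking interface.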

Overall Results


Key Takeaways: Adding one or more contexts improves over the no-context baseline in the vast majority of cases, and combining multiple contexts yields a further boost in most cases.
Species Classification results on iWildCam2020-WILDS (OOD) dataset

Species Classification results on the iWildCam2020-WILDS (OOD) dataset. The first row in the second section is the no-context baseline, which uses only image-species labels as KG edges. All models use a pre-trained ResNet-50 as the image encoder. Parentheses show the standard deviation across 3 random seeds. Missing values are denoted by –.

Species Classification results on Snapshot Mountain Zebra dataset



Spatiotemporal attributes give a prior for species distribution


(a): Species probabilities conditioned on day/night for the 10 most frequent species in the training set (iWildCam2020-WILDS). Animal species show distinct temporal preferences in their daily activities, as evidenced by the contrasting day and night probabilities.
(b): Each colored square shows the distance between the corresponding training hour slot (x-axis) and validation hour slot (y-axis). The correlation peaks for day-day and night-night hour slots.
(c): Location GPS coordinates for the training and validation splits (iWildCam2020-WILDS). The coordinates group into six clusters, and most overlap their respective cluster centroids at this visualization scale.
(d): Each colored square shows the distance between the corresponding validation cluster centroid (x-axis) and training cluster centroid (y-axis). The correlation peaks along the diagonal.
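The day/night prior in panel (a) is simply the empirical conditional distribution P(species | day/night) estimated from image timestamps. A minimal sketch, using hypothetical observations (the hours, species names, and day/night boundary below are illustrative assumptions; the real prior is computed from camera-trap timestamps in the dataset):

```python
from collections import Counter

# Hypothetical (hour-of-day, species) observations; real data would come
# from camera-trap image timestamps and labels.
observations = [(2, "civet"), (14, "impala"), (13, "impala"),
                (23, "civet"), (15, "zebra")]

def period(hour):
    """Bucket an hour into day/night (assumed boundary: 6:00-18:00 is day)."""
    return "day" if 6 <= hour < 18 else "night"

counts = Counter((period(h), sp) for h, sp in observations)
totals = Counter(period(h) for h, _ in observations)

# Empirical conditional probability P(species | day/night)
prior = {(p, sp): c / totals[p] for (p, sp), c in counts.items()}
```

With these toy observations, impala dominates the day slot and civet the night slot, mirroring the distinct diurnal/nocturnal preferences shown in the figure.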


Taxonomy-aware model results in more plausible predictions


(a): Comparison of the COSMO model with and without taxonomy edges. Taxonomy information helps the model avoid semantically implausible predictions.
(b): Quantitative evaluation of COSMO errors with and without taxonomy using a hierarchical distance metric (iWildCam2020-WILDS). The taxonomy-aware model achieves a better Avg. LCA height.
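The LCA-height metric in (b) measures how far up the taxonomy one must climb from the true label to reach the lowest common ancestor of the prediction and the ground truth; a lower value means errors are taxonomically closer to the true species. A minimal sketch over a made-up taxonomy fragment (the node names and edges below are illustrative, not the dataset's actual taxonomy):

```python
# Toy taxonomy as child -> parent edges (hypothetical labels).
parent = {
    "lion": "felidae", "leopard": "felidae",
    "jackal": "canidae",
    "felidae": "carnivora", "canidae": "carnivora",
    "impala": "bovidae", "bovidae": "artiodactyla",
    "carnivora": "mammalia", "artiodactyla": "mammalia",
}

def ancestors(node):
    """Chain from a node up to the taxonomy root, inclusive."""
    chain = [node]
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain

def lca_height(pred, true):
    """Height (in edges) from the true label up to the lowest common
    ancestor of the predicted and true labels."""
    pred_anc = set(ancestors(pred))
    for h, node in enumerate(ancestors(true)):
        if node in pred_anc:
            return h
    return len(ancestors(true))  # disjoint trees: maximal penalty
```

For example, confusing a lion with a leopard (same family) costs height 1, while confusing a lion with an impala (shared ancestor only at the class level here) costs height 3; averaging this over all errors gives the Avg. LCA height.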

Citation

@article{pahuja2023bringing,
  title={Bringing Back the Context: Camera Trap Species Identification as Link Prediction on Multimodal Knowledge Graphs},
  author={Pahuja, Vardaan and Luo, Weidi and Gu, Yu and Tu, Cheng-Hao and Chen, Hong-You and Berger-Wolf, Tanya and Stewart, Charles and Gao, Song and Chao, Wei-Lun and Su, Yu},
  journal={arXiv preprint arXiv:2401.00608},
  year={2023}
}