ICLR 2026

Automatic Image-Level Morphological Trait Annotation for Organismal Images

Vardaan Pahuja,  Samuel Stevens,  Alyson East,  Sydne Record,  Yu Su

The Ohio State University  ·  University of Maine


Abstract

Morphological traits are physical characteristics of biological organisms that provide vital clues about how organisms interact with their environment. Yet extracting these traits remains a slow, expert-driven process, limiting their use in large-scale ecological studies. A major bottleneck is the absence of high-quality datasets linking biological images to trait-level annotations.

In this work, we demonstrate that sparse autoencoders trained on foundation-model features yield monosemantic, spatially grounded neurons that consistently activate on meaningful morphological parts. Leveraging this property, we introduce a trait annotation pipeline that localizes salient regions and uses vision-language prompting to generate interpretable trait descriptions.

Using this approach, we construct Bioscan-Traits, a dataset of 80K trait annotations spanning 19K insect images from BIOSCAN-5M. Human evaluation confirms the biological plausibility of the generated morphological descriptions. We assess design sensitivity through a comprehensive ablation study. By annotating traits with a modular pipeline rather than through prohibitively expensive manual annotation, we offer a scalable way to inject biologically meaningful supervision into foundation models, enable large-scale morphological analyses, and bridge the gap between ecological relevance and machine-learning practicality.

80K trait annotations  ·  19K insect images  ·  DINOv2 vision backbone

Method

Figure 1. Given an input specimen image, we first compute dense visual representations using an off-the-shelf backbone (e.g., DINOv2). These features are passed through a pre-trained sparse autoencoder (SAE), which identifies high-activation latent units corresponding to semantically meaningful regions. We extract the spatial masks associated with these activations and overlay them on the original image to localize trait-relevant boxes. Finally, a multimodal language model (MLLM) is prompted with the annotated image to generate fine-grained morphological trait descriptions.
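The SAE-based localization step can be sketched in a few lines. The snippet below is a minimal toy illustration, not the released implementation: random patch features and a random encoder matrix stand in for DINOv2 features and the pre-trained SAE, and all dimensions (`H_patches`, `d_model`, `d_sae`) and the mean-plus-one-std threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real pipeline components.
H_patches, W_patches, d_model, d_sae = 16, 16, 64, 256
features = rng.normal(size=(H_patches * W_patches, d_model))  # dense patch features

# SAE encoder: latents = ReLU(x @ W_enc^T + b_enc). Weights are random placeholders
# here; in the pipeline they come from a pre-trained sparse autoencoder.
W_enc = rng.normal(size=(d_sae, d_model))
b_enc = np.zeros(d_sae)
latents = np.maximum(features @ W_enc.T + b_enc, 0.0)  # (n_patches, d_sae)

# Select the latent unit with the highest total activation over the image.
unit = int(latents.sum(axis=0).argmax())
act_map = latents[:, unit].reshape(H_patches, W_patches)

# Threshold the activation map into a binary mask and take its bounding box
# as a candidate trait-relevant region (in patch coordinates).
mask = act_map > act_map.mean() + act_map.std()
ys, xs = np.nonzero(mask)
box = (xs.min(), ys.min(), xs.max(), ys.max()) if xs.size else None
```

In the actual pipeline, the resulting box is scaled from patch to pixel coordinates, overlaid on the image, and passed to the MLLM for description.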

Dataset: Bioscan-Traits

We release Bioscan-Traits, a large-scale morphological trait dataset for insects, constructed automatically using our pipeline.

🤗 osunlp/bioscan-traits

Available on Hugging Face and built on top of the BIOSCAN-5M insect image collection, the dataset links each image region to an interpretable morphological trait description generated by our pipeline.

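Since each annotation ties a region to a trait description, a consumer of the dataset typically converts the region into pixel coordinates before cropping or overlaying. The sketch below assumes a hypothetical record schema (the field names `image_id`, `box`, `trait` and the normalized `(x0, y0, x1, y1)` convention are illustrative, not the dataset's actual format).

```python
# Hypothetical annotation record; see the Hugging Face dataset card for the
# real schema -- these field names and values are made up for illustration.
record = {
    "image_id": "BIOSCAN_000123",
    "box": [0.42, 0.10, 0.61, 0.35],  # assumed normalized (x0, y0, x1, y1)
    "trait": "example trait description",
}

def box_to_pixels(box, width, height):
    """Convert a normalized box to integer pixel coordinates, clamped to the image."""
    x0, y0, x1, y1 = box

    def px(v, size):
        return max(0, min(size, round(v * size)))

    return (px(x0, width), px(y0, height), px(x1, width), px(y1, height))

# For a 1024x768 image, the example box maps to (430, 77, 625, 269).
pixel_box = box_to_pixels(record["box"], 1024, 768)
```

Clamping keeps slightly out-of-range boxes valid, which is a common precaution when coordinates come from an automatic annotation pipeline.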

Results

To validate the utility of Bioscan-Traits, we fine-tune BioCLIP on our dataset and evaluate on a held-out insect classification benchmark. Fine-tuning with trait-annotated data yields a +5.1% improvement in accuracy over the BioCLIP baseline, demonstrating that our automatically generated morphological annotations provide meaningful biological supervision.

Figure 2. Fine-tuning BioCLIP on Bioscan-Traits improves classification accuracy from 34.8% to 39.9% on the BIOSCAN-5M insect benchmark.

Citation

If you find this work useful, please cite our paper:

@inproceedings{pahuja2026automatic,
  title     = {Automatic Image-Level Morphological Trait Annotation
               for Organismal Images},
  author    = {Pahuja, Vardaan and Stevens, Samuel and East, Alyson
               and Record, Sydne and Su, Yu},
  booktitle = {The Fourteenth International Conference on
               Learning Representations},
  year      = {2026},
  url       = {https://openreview.net/forum?id=oFRbiaib5Q}
}

Acknowledgments

We gratefully acknowledge the following projects and communities whose work made this research possible:

SAEV — sparse autoencoder training infrastructure
BioCLIP — downstream training and evaluation tooling
BIOSCAN-5M — large-scale insect image collection