Object counting methods typically rely on manually annotated datasets. The cost of creating such datasets has restricted the versatility of these networks to counting objects from specific classes (such as humans or penguins), and counting objects from diverse categories remains a challenge. The availability of robust text-to-image latent diffusion models (LDMs) raises the question of whether these models can be utilized to generate counting datasets. However, while LDMs struggle to create images containing an exact number of objects from text prompts alone, they can provide a dependable sorting signal by adding and removing objects within an image. Leveraging this signal, we first introduce an unsupervised sorting methodology to learn object-related features, which are subsequently refined and anchored for counting purposes using counting data generated by LDMs. We further present a density classifier-guided method for dividing an image into patches containing objects that can be reliably counted. Consequently, we can generate counting data for any type of object and count them in an unsupervised manner. AFreeCA outperforms other unsupervised and few-shot alternatives and is not restricted to specific object classes for which counting data is available.
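The sorting pre-training itself is not detailed in this README. As an illustration of the underlying idea only, one way to exploit the LDM sorting signal is a hinge-style pairwise ranking loss that pushes a model's predicted scores to respect the ordering count(remove) < count(src) < count(add) for each image triplet. The function name and margin below are assumptions for this sketch, not the repository's actual loss (the config path suggests RankSim is used in practice):

```python
import numpy as np

def triplet_ranking_loss(s_add, s_src, s_remove, margin=1.0):
    """Hinge-style pairwise ranking loss over a (add, src, remove) triplet.

    Encourages the predicted scores to satisfy
    s_remove + margin <= s_src and s_src + margin <= s_add,
    mirroring the ordering implied by adding/removing objects.
    """
    loss_hi = np.maximum(0.0, margin - (np.asarray(s_add) - np.asarray(s_src)))
    loss_lo = np.maximum(0.0, margin - (np.asarray(s_src) - np.asarray(s_remove)))
    return float(np.mean(loss_hi + loss_lo))
```

A correctly ordered triplet with scores separated by at least the margin incurs zero loss, while a reversed ordering is penalized in proportion to the violation.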
We are currently in the process of cleaning up the code and organizing it for release. It should all be released shortly.
- Setup project page & repo
- Push code for sorting pre-training
- Push code for finetuning steps
- Push code for inference step
- Push code for synthetic data generation
- Release model checkpoints
- Release demo
The source datasets can be downloaded from the following locations:
- ShanghaiTechA & ShanghaiTechB: download
- JHU-CROWD: download
- QNRF: download
- Penguins: download
- CARPK: download
To train a model, first ensure that your dataset has the following format:
```json
{
  "train": [
    ["train_add_1.png", "train_src_1.png", "train_remove_1.png"],
    ...,
    ["train_add_N.png", "train_src_N.png", "train_remove_N.png"]
  ],
  "val": [
    ["val_add_1.png", "val_src_1.png", "val_remove_1.png"],
    ...,
    ["val_add_N.png", "val_src_N.png", "val_remove_N.png"]
  ]
}
```
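Since a malformed dataset file is easy to produce by hand, it may help to sanity-check the JSON before training. The helper below is a sketch, not part of the repository; its name and error messages are assumptions:

```python
import json

def validate_triplets(path):
    """Check that a dataset JSON file follows the expected
    [add, src, remove] triplet layout for both splits."""
    with open(path) as f:
        data = json.load(f)
    for split in ("train", "val"):
        assert split in data, f"missing '{split}' split"
        for triplet in data[split]:
            # each entry must be a triplet of image filenames
            assert len(triplet) == 3, f"expected [add, src, remove], got {triplet}"
            assert all(isinstance(p, str) for p in triplet), f"non-string path in {triplet}"
    return True
```

Running this once on your dataset file before launching training can catch missing splits or incomplete triplets early.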
Then, run the following command:
```shell
python train_wrapper.py \
  --experiment ./training/sort/synthetic_ranksim \
  --index 0 \
  --dataset your_dataset \
  --data_dir /path/to/data \
  --experiment_name your_experiment_name \
  --params ./config/params.json
```
We will be updating this repository with finetuning code shortly.
We will be updating this repository with inference code shortly.
We will be updating this repository with a link to the pre-trained checkpoints shortly.
We will be updating this repository with a demo shortly.
If you use any part of this work in your projects or publications, please cite:
```bibtex
@article{d2024afreeca,
  title={AFreeCA: Annotation-Free Counting for All},
  author={D'Alessandro, Adriano and Mahdavi-Amiri, Ali and Hamarneh, Ghassan},
  journal={arXiv preprint arXiv:2403.04943},
  year={2024}
}
```
Crowd counting has legitimate use cases such as urban planning, event management, and retail analysis. However, it also enables human surveillance, which can be misused by bad actors. We should always be deeply skeptical of any human surveillance use cases downstream of our research. Given this, we release all of our source code under the Open RAIL-S license in an attempt to mitigate downstream misuse.