DDA-MLIC

This repository is the official PyTorch implementation of "Discriminator-free Unsupervised Domain Adaptation for Multi-label Image Classification", published at WACV 2024.


Citation

@inproceedings{singh2023discriminatorfree,
  title={Discriminator-free Unsupervised Domain Adaptation for Multi-label Image Classification},
  author={Singh, Indel Pal and Ghorbel, Enjie and Kacem, Anis and Rathinam, Arunkumar and Aouada, Djamila},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  year={2024}
}

Datasets

  1. AID
  2. UCM (Labels)
  3. DFC
  4. PASCAL VOC 2007
  5. Clipart1k
  6. Cityscapes
  7. Foggycityscapes

Pre-trained models

We provide a collection of models trained with the proposed GMM-based discrepancy on various multi-label domain adaptation datasets.

Source        Target             mAP
AID           UCM               63.2
UCM           AID               54.9
AID           DFC               62.1
UCM           DFC               70.6
VOC           Clipart           61.4
Clipart       VOC               77.0
Cityscapes    Foggycityscapes   62.3
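
For reference, below is a minimal, illustrative sketch of a GMM-based discrepancy in the spirit of the paper: it fits a two-component Gaussian mixture to the flattened sigmoid prediction scores of a source batch and a target batch, then sums the 1-D Fréchet distances between the matched components. The function names and the use of scikit-learn are assumptions made here for illustration; this is not the implementation used in this repository.

# Illustrative sketch only (NOT this repository's code):
# fit a two-component GMM to multi-label sigmoid scores from source and
# target batches, then compare matched Gaussian components with a 1-D
# Frechet distance as a discriminator-free domain-discrepancy signal.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_two_component_gmm(scores: np.ndarray) -> GaussianMixture:
    """Fit a 2-component GMM to flattened sigmoid prediction scores."""
    return GaussianMixture(n_components=2, covariance_type="diag",
                           random_state=0).fit(scores.reshape(-1, 1))

def frechet_1d(mu1, var1, mu2, var2):
    """Frechet distance between two 1-D Gaussians."""
    return (mu1 - mu2) ** 2 + (np.sqrt(var1) - np.sqrt(var2)) ** 2

def gmm_discrepancy(source_scores: np.ndarray, target_scores: np.ndarray) -> float:
    """Sum of Frechet distances between components matched by their means."""
    gmm_s = fit_two_component_gmm(source_scores)
    gmm_t = fit_two_component_gmm(target_scores)
    # Sort components by mean so the "negative" and "positive" modes align.
    order_s = np.argsort(gmm_s.means_.ravel())
    order_t = np.argsort(gmm_t.means_.ravel())
    dist = 0.0
    for i_s, i_t in zip(order_s, order_t):
        dist += frechet_1d(gmm_s.means_[i_s, 0], gmm_s.covariances_[i_s, 0],
                           gmm_t.means_[i_t, 0], gmm_t.covariances_[i_t, 0])
    return float(dist)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.beta(0.5, 0.5, size=(32, 17))  # fake sigmoid scores, 17 classes
    tgt = rng.beta(2.0, 2.0, size=(32, 17))
    print("GMM-based discrepancy:", gmm_discrepancy(src, tgt))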

Installation

Create a virtual environment

$ python3 -m venv dda_mlic

Activate your virtual environment

$ source dda_mlic/bin/activate

Upgrade pip to the latest version

$ pip install --upgrade pip

Install PyTorch and torchvision versions compatible with your CUDA installation.

$ pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
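
Optionally, verify that the CUDA-enabled build was picked up (a convenience check, not part of the original instructions):

$ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"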

Install other required packages from requirements.txt

$ pip install -r requirements.txt

Evaluation

Syntax: Source → Target

$ python main.py --phase test -s <name_of_source_dataset> -t <name_of_target_dataset> -s-dir <path_to_source_dataset_dir> -t-dir <path_to_target_dataset_dir> --model-path <path_to_pretrained_weights>

Example: AID → UCM

$ python main.py --phase test -s AID -t UCM -s-dir datasets/AID -t-dir datasets/UCM --model-path models/aid2ucm_best_63-2.pth
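
The mAP scores listed in the pre-trained models table are means over all classes of the per-class average precision. A minimal sketch of this metric using scikit-learn is shown below; the array names are placeholders and this is not necessarily how main.py computes its scores.

# Minimal multi-label mAP sketch (placeholder arrays, not main.py's code):
# mAP = mean over classes of average precision between binary targets
# and predicted sigmoid scores.
import numpy as np
from sklearn.metrics import average_precision_score

targets = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])                    # ground-truth labels
scores = np.array([[0.9, 0.2, 0.7], [0.1, 0.8, 0.6], [0.7, 0.4, 0.3]])   # sigmoid outputs

ap_per_class = [average_precision_score(targets[:, c], scores[:, c])
                for c in range(targets.shape[1])]
print("mAP: %.1f" % (100.0 * np.mean(ap_per_class)))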

Training

Download the ImageNet pre-trained weights for TResNet-M.

Syntax: Source → Target

$ python main.py -s <name_of_source_dataset> -t <name_of_target_dataset> -s-dir <path_to_source_dataset_dir> -t-dir <path_to_target_dataset_dir> --model-path <path_to_imagenet_pretrained_weights>

Example: AID → UCM

$ python main.py -s AID -t UCM -s-dir datasets/AID -t-dir datasets/UCM --model-path models/tresnet_m_miil_21k.pth

Acknowledgement

Our code is built upon the following repositories:

Transfer Learning Library

TResNet/ASL

ImageNet21k

Thanks to the authors.
