
[CVPR 2025] Official Repo of Paper "FOCUS: Knowledge-enhanced Adaptive Visual Compression for Few-shot Whole Slide Image Classification"


FOCUS

FOCUS: Knowledge-enhanced Adaptive Visual Compression for Few-shot Whole Slide Image Classification, CVPR 2025
Zhengrui Guo, Conghao Xiong, Jiabo Ma, Qichen Sun, Lishuang Feng, Jinzhuo Wang, Hao Chen

[ArXiv] [CVPR Proceedings]

Methodology of FOCUS

News

  • 2025.03.20: The model and training codes have been released!
  • 2025.02.27: Our paper is accepted by CVPR 2025! 🎉

1. Installation

Please refer to ViLa-MIL, CLAM, and CONCH.

2. Reproduce FOCUS

This repository provides the official PyTorch implementation of FOCUS.

We provide the model implementation and training code; detailed instructions are given below.

2.1 Dataset

We include three datasets in this study, namely TCGA-NSCLC, CAMELYON, and UBC-OCEAN. Download links for each dataset:

2.2 Preprocessing

For WSI preprocessing, please refer to CLAM; we set the patch size to 512 and the magnification to 40X.
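For intuition, non-overlapping 512×512 tiling amounts to sweeping a grid of patch corners over the slide. The sketch below is purely illustrative of that geometry and is not CLAM's API; the function name and example dimensions are made up.

```python
# Illustrative sketch of non-overlapping 512x512 tiling (not CLAM's API).
def patch_coords(slide_w, slide_h, patch_size=512):
    """Yield (x, y) top-left corners of full patches covering the slide."""
    for y in range(0, slide_h - patch_size + 1, patch_size):
        for x in range(0, slide_w - patch_size + 1, patch_size):
            yield (x, y)

# A toy 2048x1024 region yields a 4x2 grid of full 512x512 patches.
coords = list(patch_coords(2048, 1024))
```

In practice CLAM also segments tissue first, so only patches falling on tissue are kept; refer to the CLAM repository for the actual pipeline and arguments.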

To obtain patch feature embeddings, note that we use CONCH as the feature extractor in this study.
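Conceptually, each slide then becomes a "bag" of patch embeddings that a MIL model maps to a single slide-level label. The toy sketch below illustrates that bag structure with random vectors; the 512-d size is an assumption to check against the CONCH repo, and the mean-pooling is a deliberately simple stand-in, not the FOCUS model.

```python
import random

# Toy illustration only: one WSI -> a variable-length bag of patch embeddings.
# Embedding dimensionality (512) is an assumption; verify against CONCH.
def mean_pool(bag):
    """Collapse a bag of per-patch vectors into one slide-level vector."""
    dim = len(bag[0])
    return [sum(vec[d] for vec in bag) / len(bag) for d in range(dim)]

rng = random.Random(0)
bag = [[rng.random() for _ in range(512)] for _ in range(137)]  # 137 patches
slide_feature = mean_pool(bag)  # one 512-d vector for the whole slide
```

Real MIL models (including FOCUS) replace the mean-pooling with learned, attention-based aggregation, but the bag-in, slide-label-out interface is the same.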

For dataset splitting under few-shot settings, please refer to ViLa-MIL.

After the preprocessing steps above, the dataset is divided into 10 folds (we provide the splits for all three datasets used in this study in the splits folder).
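For readers unfamiliar with few-shot splitting, the idea is to sample only K labelled slides per class for training and evaluate on the rest. The sketch below is a hypothetical illustration in the spirit of ViLa-MIL's protocol; the slide IDs, class names, and function are all made up, and the provided splits folder should be used for actual reproduction.

```python
import random

# Hypothetical few-shot split (illustrative; use the provided splits folder
# to reproduce the paper's results).
def few_shot_split(slide_labels, k_shot, seed):
    """slide_labels: dict slide_id -> class. Returns (train_ids, test_ids)."""
    rng = random.Random(seed)
    by_class = {}
    for sid, cls in slide_labels.items():
        by_class.setdefault(cls, []).append(sid)
    train = []
    for cls, sids in sorted(by_class.items()):
        train.extend(rng.sample(sorted(sids), k_shot))  # K shots per class
    test = [sid for sid in slide_labels if sid not in set(train)]
    return train, test

# Toy 2-class example: 40 slides, 4 shots per class -> 8 train, 32 test.
labels = {f"slide_{i:03d}": ("LUAD" if i % 2 else "LUSC") for i in range(40)}
train_ids, test_ids = few_shot_split(labels, k_shot=4, seed=0)
```

Fixing the seed per fold keeps the K-shot sampling reproducible across runs, which is why released split files are preferable to re-sampling.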

2.3 Training

🌟 Before training the model, please download the conch.pth checkpoint from our provided HuggingFace Repo. After downloading, put it under the ckpts folder.

Please see LUAD_LUSC.sh, camelyon.sh, and UBC-OCEAN.sh. If you find any config confusing, please refer to ViLa-MIL for a detailed description.

Note that the data_folder_s argument is only used for models that require dual-scale WSI features (e.g., ViLa-MIL).

Acknowledgment

This codebase is based on ViLa-MIL and CLAM. Many thanks to the authors of these great projects!

Issues

  • Please open a new issue thread, or for urgent blockers, report directly to [email protected]
  • We may not be able to respond immediately to minor issues

Reference

If you find our work useful in your research, please consider citing our paper:

@inproceedings{guo2025focus,
  title={Focus: Knowledge-enhanced adaptive visual compression for few-shot whole slide image classification},
  author={Guo, Zhengrui and Xiong, Conghao and Ma, Jiabo and Sun, Qichen and Feng, Lishuang and Wang, Jinzhuo and Chen, Hao},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={15590--15600},
  year={2025}
}
