LeapLabTHU/CheXWorld

CheXWorld: Exploring Image World Modeling for Radiograph Representation Learning (CVPR 2025)

Authors: Yang Yue*, Yulin Wang*, Chenxin Tao, Pan Liu, Shiji Song, Gao Huang#.

*: Equal contribution, #: Corresponding author.

Overview

Figure 1: Overview of CheXWorld.

CheXWorld is a self-supervised world model for radiographic images, inspired by how humans develop internal models of the world to reason and predict outcomes. It learns key aspects of medical knowledge critical for radiologists, including: 1) Local anatomical structures (e.g., tissue shapes, textures), 2) Global anatomical layouts (e.g., organ and skeleton organization), and 3) Domain variations (e.g., differences in image quality across hospitals and devices). CheXWorld shows strong performance across eight medical imaging tasks, outperforming existing SSL methods and large-scale medical foundation models.

Resources

The pre-trained models and data splits of the downstream tasks can be found here.

Usage Guide
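A minimal sketch of how a pre-trained checkpoint is typically reused for downstream fine-tuning. All names here (the helper function, checkpoint keys such as `predictor.weight`, and the toy stand-in encoder) are illustrative assumptions, not the repository's actual API; CheXWorld's real encoder is a Vision Transformer, and pre-training-only modules (e.g. the predictor) are discarded before transfer.

```python
import torch
import torch.nn as nn

# Hypothetical sketch (names are assumptions, not from this repository):
# copy matching weights from a pre-trained checkpoint into the encoder,
# skipping predictor/head parameters that only exist during pre-training.
def load_encoder_weights(encoder: nn.Module, checkpoint: dict) -> int:
    own = encoder.state_dict()
    matched = {k: v for k, v in checkpoint.items()
               if k in own and v.shape == own[k].shape}
    encoder.load_state_dict(matched, strict=False)
    return len(matched)  # number of tensors actually transferred

# Toy stand-in encoder (the real model is a Vision Transformer).
encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32))

# Simulate a checkpoint that also carries a pre-training-only predictor weight.
ckpt = {**encoder.state_dict(), "predictor.weight": torch.zeros(4, 4)}
n = load_encoder_weights(encoder, ckpt)
print(n)  # the two Linear layers contribute 4 tensors (weight + bias each)
```

After loading, a task-specific linear head would be attached on top of the encoder and the whole model fine-tuned on the downstream dataset.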

Acknowledgement

This codebase is developed on top of MAE and I-JEPA.

Contact

If you have any questions or concerns, please send an email to [email protected].
