If you find our work helpful, please consider giving us a ⭐!
git clone https://github.com/byliutao/Cradle2Cane
conda create --name cradle2cane python=3.10 -y
conda activate cradle2cane
pip install -r config/requirement.txt
# Download models
pip install -U huggingface_hub
export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download --resume-download stabilityai/sdxl-turbo --local-dir models/sdxl-turbo/
huggingface-cli download --resume-download madebyollin/sdxl-vae-fp16-fix --local-dir models/sdxl-vae-fp16-fix/
huggingface-cli download --resume-download openai/clip-vit-large-patch14 --local-dir models/clip-vit-large-patch14/
huggingface-cli download --resume-download byliutao/Cradle2Cane --local-dir models/
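Optionally, you can sanity-check the downloaded weights by loading them from the local directories before running inference. A minimal sketch, assuming `diffusers` and `transformers` are installed (they should come with `config/requirement.txt`); the actual model wiring is done inside `infer.py`, so this only verifies the downloads:

```python
# Optional sanity check: load the locally downloaded weights.
import torch
from diffusers import AutoPipelineForText2Image, AutoencoderKL
from transformers import CLIPModel

# SDXL-Turbo backbone
pipe = AutoPipelineForText2Image.from_pretrained(
    "models/sdxl-turbo", torch_dtype=torch.float16
)
# Numerically stable fp16 VAE
vae = AutoencoderKL.from_pretrained(
    "models/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
# CLIP text/image encoder
clip = CLIPModel.from_pretrained("models/clip-vit-large-patch14")
print("all model directories load correctly")
```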
# infer
python infer.py --input_path asserts/23_male.png
# infer on an in-the-wild image
python infer.py --input_path asserts/25_male.png --one_threshold
# infer with attribute change
python infer.py --input_path asserts/20_female.png --addition_prompt "yellow hair"
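To process a whole folder of images, you can wrap the CLI in a small loop. A minimal sketch using only the documented `--input_path` flag (the `my_faces/` folder name is hypothetical):

```python
# Batch wrapper around the single-image CLI above.
import subprocess
from pathlib import Path

input_dir = Path("my_faces")  # hypothetical folder of input face images
for img in sorted(input_dir.glob("*.png")):
    subprocess.run(["python", "infer.py", "--input_path", str(img)], check=True)
```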
Download the FFHQ 512×512 dataset from the link and put the files under $REPOROOT/dataset.
Download the json directory from the link and put it under $REPOROOT/dataset.
Download the ffhq-dataset-v2.json file from the link and put it under $REPOROOT/dataset.
The directory structure should look like:
$REPOROOT
|-- dataset
| |-- ffhq512 # contains images: *.png
| |-- json # contains *.json files
| |-- ffhq-dataset-v2.json
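Before preprocessing, you can quickly verify the layout. A minimal sketch using only the paths listed above:

```python
# Sanity-check the FFHQ layout described above (stdlib only).
from pathlib import Path

root = Path("dataset")
print("ffhq512 pngs:", len(list((root / "ffhq512").glob("*.png"))))
print("json files:  ", len(list((root / "json").glob("*.json"))))
print("ffhq-dataset-v2.json exists:", (root / "ffhq-dataset-v2.json").exists())
```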
# preprocess ffhq512 dataset
python -m lib.utils.ffhq_process
# check your config in train.sh first
bash config/train.sh
Download the celeba-200 dataset from the link and unzip the folder to $REPOROOT/dataset/eval.
Download the agedb-400 dataset from the link and unzip the folder to $REPOROOT/dataset/eval. (Following Arc2Face, we use the AgeDB dataset for LPIPS evaluation.)
The directory structure should look like:
$REPOROOT
|-- dataset
| |-- eval
| | |-- celeba-200 # contains images: *.jpg
| | |-- agedb-400 # contains images: *.jpg
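The eval data can be checked the same way. A minimal sketch; the folder names suggest roughly 200 and 400 images respectively, which is an assumption based on the names:

```python
# Sanity-check the evaluation datasets described above (stdlib only).
from pathlib import Path

eval_root = Path("dataset/eval")
for name in ("celeba-200", "agedb-400"):
    n_jpg = len(list((eval_root / name).glob("*.jpg")))
    print(f"{name}: {n_jpg} .jpg images")
```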
# please set your API key and secret in eval.sh first
bash config/eval.sh
git clone https://github.com/mk-minchul/AdaFace.git
cd AdaFace
conda create --name adaface pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.2 -c pytorch
conda activate adaface
conda install scikit-image matplotlib pandas scikit-learn
pip install -r requirements.txt
- Download the labeled faces_webface_112x112 dataset from the link and unzip it to $REPOROOT/dataset/faces_webface_112x112_labeled.
- Download the faces_webface_112x112 dataset from the link and unzip it to $REPOROOT/dataset/faces_webface_112x112.
- Run:
python convert.py --rec_path $REPOROOT/dataset/faces_webface_112x112 --make_image_files --make_validation_memfiles
The directory structure should look like:
$REPOROOT
|-- dataset
| |-- faces_webface_112x112 # contains subdirs with images
| |-- faces_webface_112x112_labeled # contains images: *.jpg
cd $REPOROOT
conda activate cradle2cane
bash config/gen_fake_fast.sh
cd AdaFace
conda activate adaface
bash ../config/run_ir50_ms1mv2.sh
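As a side note, a common quick check of identity preservation between an input face and its aged output is the cosine similarity of their face embeddings. The sketch below is a generic illustration with random placeholder tensors, independent of what the AdaFace scripts above compute:

```python
# Identity-preservation metric in a nutshell: cosine similarity between
# two face embeddings. The tensors below are random placeholders.
import torch
import torch.nn.functional as F

emb_src = torch.randn(1, 512)   # embedding of the source face (placeholder)
emb_aged = torch.randn(1, 512)  # embedding of the age-edited face (placeholder)
print("ID cosine similarity:", F.cosine_similarity(emb_src, emb_aged, dim=1).item())
```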
| 24 Male | 35 Female |
|---|---|
| ![]() | ![]() |
If you find our paper or benchmark helpful for your research, please consider citing our paper and giving this repo a star ⭐. Thank you very much!
@inproceedings{
liu2025from,
title={From Cradle to Cane: A Two-Pass Framework for High-Fidelity Lifespan Face Aging},
author={Tao Liu and Dafeng Zhang and Gengchen Li and Shizhuo Liu and Yongqi Song and Senmao Li and Shiqi Yang and Boqian Li and Kai Wang and Yaxing Wang},
booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
year={2025},
url={https://openreview.net/forum?id=E1eVGJ5RYG}
}
Licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, for non-commercial use only. Any commercial use requires formal permission first.

