This repo contains the official code for the experiments on the defense effectiveness of selective encryption against simulated attacks described in the paper *FedML-HE: An Efficient Homomorphic-Encryption-Based Privacy-Preserving Federated Learning System*. For the implementation of homomorphic encryption in the FedML system, please refer to FedML-FHE-Example.
- Install the packages with
  ```shell
  conda env create --file environment.yml
  ```
  (Some conflicts may be resolved by changing `numpy` to 1.23.5, but we are not sure whether other conflicts exist.)
  Note: the `breaching` package used in this repo is a modified version and has been included, so you don't have to install `breaching` on your own.
- Make sure to have the same file structure as this repo by running `init.sh` before running any program if you want to obtain the experiment results, or you can specify the paths on your own.
  ```
  ├── breaching
  ├── classification
  │   ├── <scripts> (*.py)
  │   ├── <notebooks> (*.ipynb)
  │   ├── <models> (*.pt)
  │   ├── sensitivity (folder)
  │   │   ├── <sensitivity files> (*.pt)
  │   │   └── sensitivity-lm.ipynb
  │   ├── noised_grad (folder containing noised gradients for dp experiments)
  │   │   └── <noised gradients> (*.pt)
  │   └── results (folder for attack experiments)
  │       ├── result-image.ipynb
  │       ├── <user index> (folder)
  │       │   └── <results> (*.pt)
  │       └── dp (folder for dp experiments)
  │           ├── <results> (*.pt)
  │           └── result-image-dp.ipynb
  └── lm
      ├── tag (folder)
      │   ├── <scripts> (*.py)
      │   ├── <notebooks> (*.ipynb)
      │   ├── <models> (*.pt)
      │   └── noised_grad (folder containing noised gradients for dp experiments)
      │       └── <noised gradients> (*.pt)
      ├── results (folder)
      │   └── tag (folder for TAG attack)
      │       ├── result-lm.ipynb
      │       ├── <user index> (folder)
      │       │   └── <results> (*.pt)
      │       └── dp (folder for dp experiments)
      │           ├── <results> (*.pt)
      │           └── result-lm-dp.ipynb
      └── sensitivity (folder)
          ├── <sensitivity files> (*.pt)
          └── sensitivity-lm.ipynb
  ```
The following demos assume the files are organized using `./init.sh`.
- Activate the `breaching` environment:
  ```shell
  conda activate breaching
  ```
- Run the script you want, e.g.,
  ```shell
  python classification/ig-lenet-cifar-new.py
  python lm/tag/tag-trans3-wikitext-causal-new.py
  ```
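The scripts above simulate gradient-inversion attacks on shared plaintext updates. As a minimal, self-contained illustration of *why* plaintext gradients leak training data (this is not the repo's actual attack, which uses the `breaching` framework; all names below are ours), the input to a single linear layer can be recovered exactly from its gradients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-layer "model": y = W @ x + b, loss = 0.5 * ||y - t||^2.
x = rng.normal(size=4)   # private input the attacker wants to recover
t = rng.normal(size=3)   # label
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)

# Honest client computes gradients and shares them in plaintext.
y = W @ x + b
g = y - t                # dL/dy
dW = np.outer(g, x)      # dL/dW = g x^T
db = g                   # dL/db = g

# Attacker: each row i of dW equals db[i] * x, so any row with
# db[i] != 0 reveals x exactly.
i = int(np.argmax(np.abs(db)))
x_rec = dW[i] / db[i]

assert np.allclose(x_rec, x)  # perfect reconstruction of the input
```

Encrypting (even a subset of) the shared gradients is what breaks this kind of reconstruction, which is the defense the experiments in this repo evaluate.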
  For the sensitivity experiments:
  ```shell
  cd <lm or classification>/sensitivity
  python sensitivity.py --model_name <model_name> --num_user <num_user>
  ```
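Selective encryption rests on ranking parameters by sensitivity and encrypting only the most sensitive fraction. As a toy sketch of that idea (this is *not* the repo's actual sensitivity metric; `selective_mask` and the magnitude-based ranking are our illustrative assumptions), a top-k mask over a flattened gradient might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy flattened gradient vector standing in for a model update.
grad = rng.normal(size=1000)

def selective_mask(grad: np.ndarray, ratio: float) -> np.ndarray:
    """Mark the `ratio` fraction of entries with the largest magnitude.

    Entries where the mask is True would be encrypted; the rest would
    be sent in plaintext (hypothetical scheme for illustration only).
    """
    k = max(1, int(ratio * grad.size))
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    mask = np.zeros(grad.size, dtype=bool)
    mask[idx] = True
    return mask

mask = selective_mask(grad, ratio=0.1)
print(int(mask.sum()))  # 100 parameters selected for encryption
```

The attack scripts can then be run against updates where only the masked entries are protected, to measure how reconstruction quality degrades with the encryption ratio.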
  See `breaching/cases/models/model_preparation.py` for available models.
For some experiments, corresponding notebooks are available as a more visual demo.
Please cite our paper if you use this code in your work:
```bibtex
@article{jin2023fedml,
  title={FedML-HE: An efficient homomorphic-encryption-based privacy-preserving federated learning system},
  author={Jin, Weizhao and Yao, Yuhang and Han, Shanshan and Gu, Jiajun and Joe-Wong, Carlee and Ravi, Srivatsan and Avestimehr, Salman and He, Chaoyang},
  journal={arXiv preprint arXiv:2303.10837},
  year={2023}
}
```
This repo integrates code from https://github.com/JonasGeiping/breaching.