
FedAnil++: A Privacy-Preserving and Communication-Efficient Federated Deep Learning Model for Intelligent Enterprises

FedAnil++ is a Privacy-Preserving and Communication-Efficient Federated Deep Learning Model to address non-IID data, privacy concerns, and communication overhead. This repo hosts a simulation for FedAnil++ written in Python.

Introduction

With the volume of data growing in enterprises, the traditional learning paradigm based on machine learning (ML) has given way to an emerging paradigm called federated deep learning (FDL). In FDL, with the collaboration of local enterprises and the server, a model is trained without sending raw private data from local enterprises to a server. However, existing FDL-based approaches are vulnerable to attacks and violate privacy. Therefore, we propose FedAnil++, a novel Federated Deep Learning Model that includes three main phases to overcome this challenge. The goal of the first phase is to solve the Unbalanced and non-IID (Independent and Identically Distributed) data challenges. The Privacy-preserving challenge is addressed in the second phase. Finally, in the third phase, a communication-efficient approach is proposed to reduce communication costs.

For detailed explanations, please refer to the paper A Privacy-Preserving and Communication-Efficient Federated Deep Learning Model for Intelligent Enterprises.

FedAnil++ Installation

Requirements

OS

| Windows | Linux | MacOS |
| :-----: | :---: | :---: |
|   ✔️    |  ✔️   |  ✔️   |

Python

| 3.9 | 3.10 | 3.11 | 3.12 |
| :-: | :--: | :--: | :--: |
|     |  ✔️  |      |      |

PyTorch

| 2.1.1 | 2.1.2 | 2.2.0 | 2.2.1 |
| :---: | :---: | :---: | :---: |
|       |       |       |  ✔️   |

Step 1: Download the repo

git clone https://github.com/rezafotohi/FedAnilPlusPlus.git
cd FedAnilPlusPlus

Step 2: Create a new conda environment with Python 3.10

conda create -n FedAnil++ python=3.10
conda activate FedAnil++

Step 3: Install PyTorch and Jupyter

conda install pytorch torchvision torchaudio -c pytorch
conda install -c conda-forge jupyter jupyterlab

Step 4: Verify that PyTorch is installed correctly by entering the following commands in the terminal:

python3
import torch
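
If the import succeeds, a slightly fuller sanity check such as the one below can also be run (a minimal sketch; the printed version and the CUDA result depend on your installation):

```python
import torch

print(torch.__version__)           # installed PyTorch version
print(torch.cuda.is_available())   # True only if a usable CUDA GPU is present
print(torch.rand(2, 2))            # a small random tensor confirms basic ops work
```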

Step 5: Install Pycryptodome and Matplotlib

conda install pycryptodome
conda install matplotlib

Step 6: Install Scikit-Learn-Extra

pip3 install scikit-learn-extra
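
To confirm the package imports correctly, a quick check against its KMedoids estimator can be run (an illustrative installation check only, not part of the FedAnil++ pipeline itself):

```python
import numpy as np
from sklearn_extra.cluster import KMedoids

# Two tiny, well-separated clusters.
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])

labels = KMedoids(n_clusters=2, random_state=0).fit_predict(X)
print(labels)  # e.g. [0 0 1 1]; cluster ids may be swapped
```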

Step 7: Install Bitarray

pip3 install bitarray

Step 8: Install TenSEAL

pip3 install git+https://github.com/OpenMined/TenSEAL.git#egg=tenseal
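
A minimal CKKS round trip verifies that the TenSEAL build works; the encryption parameters below are illustrative defaults and not necessarily the ones FedAnil++ itself configures:

```python
import tenseal as ts

# CKKS context with commonly used illustrative parameters.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

# Encrypt a small vector, add it to itself homomorphically, and decrypt.
enc = ts.ckks_vector(context, [1.0, 2.0, 3.0])
print((enc + enc).decrypt())  # approximately [2.0, 4.0, 6.0]
```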

Step 9: Install Cmake

On Windows and Linux:

Download the latest CMake binary distribution here: https://cmake.org/download/

On MacBooks with M1 processor:

arch -arm64 brew install cmake

Step 10: Run FedAnil++ Simulation

python3 main.py -nd 100 -max_ncomm 50 -ha 80,10,10 -aio 1 -pow 0 -ko 5 -nm 3 -vh 0.08 -cs 0 -B 64 -mn OARF -iid 0 -lr 0.01 -dtx 1 -le 20

Arguments:

-nd 100: 100 Enterprises.

-max_ncomm 50: Maximum 50 communication rounds.

-ha 80,10,10: Role assignment hard-assigned to 80 workers, 10 validators, and 10 miners for each communication round. A * in -ha means the corresponding number of roles is not limited. e.g., -ha *,10,* means exactly 10 validators would be assigned in each communication round, and the rest of the enterprises are dynamically and randomly assigned to any role. -ha *,*,* means the role-assigning in each communication round is completely dynamic and random (see the example invocation after this list).

-aio 1: aio means "all in one network", namely, every enterprise in the simulation has every other enterprise in its peer list. This simulates FedAnil++ running on a permissioned (consortium) blockchain. If using -aio 0, the simulation lets an enterprise (registrant) randomly register with another enterprise (registrar) and copy the registrar's peer list.

-pow 0: The argument of -pow specifies the proof-of-work difficulty. When using 0, FedAnil++ runs with FedAnil++-PoS consensus to select the winning miner.

-ko 5: An enterprise is blacklisted after it is identified as malicious for 5 consecutive rounds as a worker.

-nm 3: Exactly 3 enterprises will be malicious nodes.

-vh 0.08: Validator-threshold is set to 0.08 for all communication rounds. This value may be adaptively learned by validators in a future version.

-cs 0: As the simulation does not include mechanisms to disturb the digital signature of the transactions, this argument turns off signature checking to speed up the execution.

Federated Learning arguments (inherited from https://github.com/WHDY/FedAvg)

-B 64: Batch size set to 64.

-mn OARF: Use the OARF dataset.

-iid 0: Shard the training dataset in a non-IID way.

-lr 0.01: Learning rate set to 0.01.

Other arguments

-dtx 1: See Issues.

Please see main.py for other argument options.
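
As an illustration of the dynamic role assignment described above, the run below is a hedged example assembled only from the flags documented in this section; the chosen values are arbitrary and meant for a smaller, quicker simulation:

python3 main.py -nd 20 -max_ncomm 10 -ha *,*,* -aio 1 -pow 0 -ko 5 -nm 2 -vh 0.08 -cs 0 -B 64 -mn OARF -iid 0 -lr 0.01 -dtx 1 -le 20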

Simulation Logs

Examining the Logs

While running, the program saves the simulation logs inside the log/ folder, organized by communication round. In the corresponding round folder, you may find the model accuracy evaluated by each enterprise using the global model at the end of that communication round, each worker's local training accuracy, the validation-accuracy-difference value of each validator, and the final stake rewarded to each enterprise in that round. The malicious-enterprise identification log is stored outside the round folders.
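
A small helper like the following can be used to skim what a run produced; it simply walks log/ and lists the files it finds, since the exact round-folder and file names depend on the concrete run (an illustrative sketch, not part of FedAnil++):

```python
from pathlib import Path

log_root = Path("log")  # written by main.py while the simulation runs

# Recursively list everything the simulation recorded.
for path in sorted(log_root.rglob("*")):
    if path.is_file():
        print(path.relative_to(log_root))
```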

Issues

If you use a GPU with less than 16 GB of RAM, you may encounter a CUDA out-of-memory error. This likely happens because the local model updates (i.e., neural network models) stored inside the blocks occupy CUDA memory and are not automatically released, so the memory used on the GPU keeps growing as the communication rounds progress. A few solutions have been tried without luck.

A temporary workaround is to specify -dtx 1. This argument makes the program delete the transactions stored inside the last block, releasing as much CUDA memory as possible. However, specifying -dtx 1 also turns off the chain-resyncing functionality, because resyncing requires enterprises to re-perform global model updates based on the transactions stored inside the resynced chain, which would now have empty transactions in each block. As a result, running on GPU should only be used to emulate the situation in which FedAnil++ runs under ideal conditions; that is, every available transaction is recorded inside the block of each round, as specified by the default arguments.
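
The idea behind that workaround can be sketched as follows; the names block and transactions are hypothetical stand-ins, not the exact identifiers used in main.py:

```python
import gc
import torch

def free_block_memory(block):
    """Hypothetical sketch of what -dtx 1 aims at: drop references to the
    local model updates held by a block so their CUDA tensors can be reclaimed."""
    block.transactions = []     # hypothetical attribute: forget the stored updates
    gc.collect()                # let Python release the now-unreferenced objects
    torch.cuda.empty_cache()    # hand cached CUDA memory back to the driver
```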

Use GitHub issues for tracking requests and bugs.

Citation

If you publish work that uses FedAnil++, please cite FedAnil++ as follows:

@article{2024FedAnil++,
  title = {A Privacy-Preserving and Communication-Efficient Federated Deep Learning Model for Intelligent Enterprises},
  author = {Reza Fotohi and Fereidoon Shams Aliee and Bahar Farahani},
  journal = {Under Review!},
  volume = {},
  pages = {},
  year = {2024},
  issn = {},
  doi = {},
  url = {},
}

Disclaimer

This model is a research work and is provided as is. We are not responsible for any user action or omission.

Contact

Please raise any other issues and concerns you may have. Thank you!

Email: Fotohi.reza@gmail.com

Linkedin: https://www.linkedin.com/in/reza-fotohi-b433a169/

Acknowledgments

(1) The code of the Blockchain Architecture used in FedAnil++ is inspired by Fully functional blockchain application implemented in Python from scratch by Satwik Kansal.

(2) The code of the Validation and Consensus scheme used in FedAnil++ is inspired by VBFL by Hang Chen.

(3) The code of FedAvg used in FedAnil++ is inspired by WHDY's FedAvg implementation.
