Title: SecureFL: A Privacy-Preserving Federated Learning Framework with Differential Privacy
Authors: Li Xiaoming, Wang Jing, Zhang Qiwei
Conference: NeurIPS 2024 (Neural Information Processing Systems)
Abstract: We propose SecureFL, a federated learning framework that provides strong privacy guarantees through differential privacy and secure aggregation. Our system enables organizations to collaboratively train models while keeping their data local and private.
Key Contributions:
- Comprehensive privacy protection and security mechanisms for federated learning.
- Formal differential privacy guarantees with a configurable privacy budget (ε, δ).
- Cryptographic secure-aggregation protocols that protect individual model updates.
- Robustness against malicious participants and data poisoning attacks.
- Data locality: training data never leaves the participant’s local environment.
- Reduced communication cost via compressed gradients and adaptive communication scheduling.
- Model accuracy preserved close to that of centralized training.
- Comprehensive logging for compliance and reproducibility.
Our framework provides formal (ε, δ)-differential-privacy guarantees. A randomized mechanism M satisfies (ε, δ)-differential privacy if, for any two datasets D and D′ differing in a single record and any set of outcomes S,
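$$\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta.$$

Smaller ε and δ correspond to stronger privacy; the table below reports accuracy at several budgets.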
| Dataset | Centralized Accuracy | Federated Accuracy (SecureFL) | Privacy Budget (ε) |
|---|---|---|---|
| CIFAR-10 | 94.2% | 92.8% | 1.0 |
| MNIST | 99.1% | 98.7% | 0.5 |
| IMDB | 89.3% | 87.9% | 0.1 |
| Medical Images | 91.7% | 90.2% | 0.05 |
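Privacy budgets like those above are typically realized with a clip-and-noise step on gradients (DP-SGD style). The NumPy sketch below illustrates that standard recipe; `clip_and_noise` and the fixed `noise_multiplier` are illustrative assumptions, and calibrating the noise scale from (ε, δ) via a privacy accountant is omitted:

```python
import numpy as np

def clip_and_noise(per_example_grads, clipping_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD style step: clip each per-example gradient to L2 norm
    `clipping_norm`, sum, then add Gaussian noise scaled to the sensitivity."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clipping_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    # The L2 sensitivity of the clipped sum is `clipping_norm`,
    # so the Gaussian noise scale is proportional to it.
    noise = np.random.normal(0.0, noise_multiplier * clipping_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```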
Our optimization techniques, built on compressed gradients and adaptive communication scheduling, significantly reduce per-round communication overhead; one common compression scheme is sketched below.
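The paper does not pin down the exact compression scheme, so as one illustration, top-k sparsification transmits only the largest-magnitude gradient entries. `topk_sparsify` and `densify` are hypothetical helpers, not part of the SecureFL API:

```python
import numpy as np

def topk_sparsify(grad, ratio=0.01):
    """Sender side: keep only the k largest-magnitude entries of the
    flattened gradient and transmit (indices, values) instead of a dense tensor."""
    flat = grad.ravel()
    k = max(1, int(ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]  # roughly `ratio` of the original payload

def densify(idx, values, shape):
    """Receiver side: rebuild a dense gradient from the sparse update."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = values
    return flat.reshape(shape)
```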
We also evaluate robustness extensively against a range of attack scenarios, including malicious (Byzantine) participants and data poisoning; a simple robust-aggregation sketch follows.
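Byzantine tolerance usually rests on a robust aggregation rule. The coordinate-wise median is one standard choice, shown here purely as an illustration rather than as SecureFL's exact rule:

```python
import numpy as np

def robust_aggregate(client_updates):
    """Coordinate-wise median of flattened client updates: a minority of
    Byzantine clients cannot pull any single coordinate arbitrarily far."""
    return np.median(np.stack(client_updates), axis=0)
```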
Get started with SecureFL in just a few lines of code:
```python
from securefl import FederatedTrainer, PrivacyConfig

# Configure privacy settings
privacy_config = PrivacyConfig(
    epsilon=1.0,       # Privacy budget
    delta=1e-5,        # Privacy delta
    clipping_norm=1.0  # Gradient clipping norm
)

# Initialize federated trainer
trainer = FederatedTrainer(
    model=your_model,
    privacy_config=privacy_config,
    aggregation='secagg',  # Secure aggregation
    byzantine_tolerance=True
)

# Start federated training
trainer.fit(
    participants=client_list,
    rounds=100,
    local_epochs=5
)
```
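`aggregation='secagg'` selects secure aggregation. The core idea (in the style of Bonawitz et al., 2017) is that pairs of clients add cancelling masks derived from shared seeds, so the server only learns the sum of the updates. Below is a toy NumPy sketch assuming every client stays online; `mask_update` and `pairwise_seeds` are hypothetical names, not SecureFL's API:

```python
import numpy as np

def mask_update(client_id, update, all_ids, pairwise_seeds):
    """Add pairwise masks that cancel when the server sums all clients'
    masked updates, so only the aggregate is revealed."""
    masked = update.astype(float)
    for peer in all_ids:
        if peer == client_id:
            continue
        # Both members of a pair derive the same mask from a shared seed.
        rng = np.random.default_rng(pairwise_seeds[frozenset((client_id, peer))])
        mask = rng.normal(size=update.shape)
        # The lower id adds the mask and the higher id subtracts it,
        # so every pairwise mask cancels in the sum over clients.
        masked += mask if client_id < peer else -mask
    return masked
```

Summing the masked updates over all clients yields exactly the sum of the plain updates while hiding each individual one; the production protocol additionally handles client dropouts with secret sharing.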
SecureFL supports multiple deployment scenarios:
Healthcare Collaboration
```python
# Multi-hospital medical imaging model
hospitals = ['hospital_a', 'hospital_b', 'hospital_c']
model = MedicalImageClassifier()
federated_model = trainer.train_medical_model(
    model=model,
    participants=hospitals,
    privacy_budget=0.1,  # Strict privacy for medical data
    compliance='HIPAA'
)
```
Financial Fraud Detection
```python
# Multi-bank fraud detection
banks = ['bank_1', 'bank_2', 'bank_3']
model = FraudDetectionModel()
fraud_model = trainer.train_fraud_detector(
    model=model,
    participants=banks,
    privacy_budget=0.5,
    regulatory_compliance='GDPR'
)
```