Hypernetworks can be leveraged to jointly learn client-specific model parameters and adaptive Differential Privacy (DP) noise scales, optimizing the privacy-utility trade-off dynamically per client.

Feasibility: 8 · Novelty: 8

Motivation

Current DP-FL approaches typically apply a uniform privacy budget, and hence a uniform noise scale, across all clients, or they require manual per-client tuning. However, data sensitivity and complexity vary across non-IID clients: a one-size-fits-all noise level either degrades utility for clients with complex data or provides insufficient privacy for those with sensitive data.

Proposed Method

Extend the FedHypeVAE architecture so that the hypernetwork outputs both the client-specific VAE weights and a scalar noise scale (σ) for each client. Modify the loss function to include a privacy-cost term, derived from Rényi Differential Privacy (RDP) accounting, that penalizes small σ values and is balanced against the reconstruction loss. Training the hypernetwork to minimize this combined global loss teaches it to inject the minimum noise necessary for each client's specific data distribution; a minimal sketch of the two-headed hypernetwork and the combined loss follows.
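
The sketch below illustrates the idea, not the published FedHypeVAE implementation: a hypernetwork with two heads, one emitting a flat vector of generated VAE parameters and one emitting a per-client log σ, plus a privacy-cost term based on the Gaussian-mechanism RDP bound ε(α) = α·Δ²/(2σ²). All names (`HyperNet`, `rdp_privacy_cost`, `client_loss`) and the hyperparameters α and λ are illustrative assumptions.

```python
# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn as nn

class HyperNet(nn.Module):
    """Maps a client embedding to (VAE weights, per-client noise scale)."""

    def __init__(self, embed_dim: int, target_param_count: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        # Head 1: flat vector of generated VAE parameters for this client.
        self.weight_head = nn.Linear(256, target_param_count)
        # Head 2: log(sigma), so the learned noise scale is always positive.
        self.log_sigma_head = nn.Linear(256, 1)

    def forward(self, client_embedding: torch.Tensor):
        h = self.backbone(client_embedding)
        flat_weights = self.weight_head(h)
        sigma = torch.exp(self.log_sigma_head(h)).squeeze(-1)
        return flat_weights, sigma

def rdp_privacy_cost(sigma: torch.Tensor, alpha: float = 2.0,
                     sensitivity: float = 1.0) -> torch.Tensor:
    # RDP of the Gaussian mechanism at order alpha:
    #   eps(alpha) = alpha * sensitivity**2 / (2 * sigma**2).
    # The cost grows as sigma shrinks, penalizing under-noising.
    return alpha * sensitivity ** 2 / (2.0 * sigma ** 2)

def client_loss(recon_loss: torch.Tensor, sigma: torch.Tensor,
                lam: float = 0.1) -> torch.Tensor:
    # Balance utility (reconstruction) against the privacy cost of small sigma.
    return recon_loss + lam * rdp_privacy_cost(sigma).mean()

# Usage: perturb the shared embedding z at the learned scale before release,
# e.g. z_noisy = z + sigma.view(-1, 1) * torch.randn_like(z)
```

In a full system, the per-round, per-client RDP costs would additionally be composed across training rounds and converted to an (ε, δ) guarantee by a standard accountant before any claim about the aggregate privacy budget is made.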

Expected Contribution

An automated, data-driven mechanism for per-client privacy budgeting in Federated Learning that improves overall model utility without violating aggregate privacy guarantees.

Required Resources

A standard FL simulation framework (e.g., Flower or PySyft), a GPU cluster for hypernetwork training, and datasets with varying per-class complexity (e.g., CIFAR-100) for constructing heterogeneous non-IID client partitions. A minimal simulation skeleton is sketched below.
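
As a rough starting point, here is a minimal Flower simulation skeleton. It is a sketch under stated assumptions: it follows Flower's 1.x `start_simulation` interface (which has changed across versions and requires the `flwr[simulation]` extra), and `SigmaAwareClient` is a hypothetical stub standing in for a client that would apply its hypernetwork-supplied noise scale during local training.

```python
# Minimal Flower simulation skeleton (Flower 1.x API; details vary by version).
import flwr as fl
import numpy as np

class SigmaAwareClient(fl.client.NumPyClient):
    # Hypothetical stub: a real client would train locally and add DP noise
    # at the scale supplied by the server-side hypernetwork each round.
    def get_parameters(self, config):
        return [np.zeros(10)]  # placeholder model parameters

    def fit(self, parameters, config):
        # Local DP training would go here; this stub returns params unchanged.
        return parameters, 1, {}

    def evaluate(self, parameters, config):
        return 0.0, 1, {}

def client_fn(cid: str):
    return SigmaAwareClient().to_client()

fl.simulation.start_simulation(
    client_fn=client_fn,
    num_clients=10,
    config=fl.server.ServerConfig(num_rounds=3),
    strategy=fl.server.strategy.FedAvg(),
)
```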

Source Paper

FedHypeVAE: Federated Learning with Hypernetwork Generated Conditional VAEs for Differentially Private Embedding Sharing
