
The generative nature of FedHypeVAE can mitigate catastrophic forgetting in Federated Class-Incremental Learning by acting as a privacy-preserving 'pseudo-replay' buffer.

Feasibility: 6 Novelty: 8

Motivation

In continual FL settings, clients acquire new classes over time and the global model forgets previously learned ones. Storing old data for replay conflicts with privacy and storage constraints. Since FedHypeVAE learns to generate embeddings, the server could potentially use the hypernetwork to generate 'historical' embeddings that refresh the global model.

Proposed Method

Implement a lifelong learning setup where tasks arrive sequentially. Before aggregating updates for Task T, the server queries the hypernetwork using context vectors from Tasks 1 to T-1 to generate synthetic embeddings (pseudo-replay data). These synthetic embeddings are distilled into the global model alongside the current client updates to preserve performance on previous tasks.
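Below is a minimal sketch of the server-side pseudo-replay step under stated assumptions: the hypernetwork maps a per-task context vector to the weights of a conditional decoder, and that decoder maps (noise, class label) to a synthetic embedding which is then used to distill the aggregated classifier head against a snapshot of the previous-round head. All module names (HyperNet, decode, distill_step), the linear decoder, the teacher choice, and the hyperparameters are illustrative placeholders, not the paper's actual API or the final method.

```python
# Hypothetical server-side pseudo-replay for Federated Class-Incremental Learning.
# Assumes PyTorch; every dimension and module below is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM, LATENT_DIM, CTX_DIM, N_CLASSES = 128, 32, 16, 100


class HyperNet(nn.Module):
    """Maps a task context vector to the parameters of a linear conditional decoder."""

    def __init__(self):
        super().__init__()
        in_dim = LATENT_DIM + N_CLASSES              # noise + one-hot class condition
        self.w_gen = nn.Linear(CTX_DIM, in_dim * EMB_DIM)
        self.b_gen = nn.Linear(CTX_DIM, EMB_DIM)

    def forward(self, ctx):
        w = self.w_gen(ctx).view(EMB_DIM, LATENT_DIM + N_CLASSES)
        b = self.b_gen(ctx)
        return w, b


def decode(w, b, z, labels):
    """Generate synthetic embeddings from noise plus class conditioning."""
    cond = F.one_hot(labels, N_CLASSES).float()
    return F.linear(torch.cat([z, cond], dim=1), w, b)


@torch.no_grad()
def pseudo_replay(hypernet, old_task_ctxs, old_task_classes, n_per_class=64):
    """Generate 'historical' embeddings for Tasks 1..T-1 on the server."""
    embs, labels = [], []
    for ctx, classes in zip(old_task_ctxs, old_task_classes):
        w, b = hypernet(ctx)
        for c in classes:
            z = torch.randn(n_per_class, LATENT_DIM)
            y = torch.full((n_per_class,), c, dtype=torch.long)
            embs.append(decode(w, b, z, y))
            labels.append(y)
    return torch.cat(embs), torch.cat(labels)


def distill_step(global_head, prev_head, synth_embs, synth_labels, opt,
                 temp=2.0, alpha=0.5):
    """One distillation step: keep the aggregated head consistent with the
    previous-round head on pseudo-replay embeddings (assumed loss mix)."""
    logits = global_head(synth_embs)
    with torch.no_grad():
        teacher_logits = prev_head(synth_embs)
    kd = F.kl_div(F.log_softmax(logits / temp, dim=1),
                  F.softmax(teacher_logits / temp, dim=1),
                  reduction="batchmean") * temp ** 2
    ce = F.cross_entropy(logits, synth_labels)
    loss = alpha * kd + (1 - alpha) * ce
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()


if __name__ == "__main__":
    hypernet = HyperNet()
    global_head = nn.Linear(EMB_DIM, N_CLASSES)      # head aggregated after Task T
    prev_head = nn.Linear(EMB_DIM, N_CLASSES)        # snapshot before Task T updates
    opt = torch.optim.SGD(global_head.parameters(), lr=0.01)

    # Context vectors and class lists for two previous tasks (illustrative only).
    old_ctxs = [torch.randn(CTX_DIM) for _ in range(2)]
    old_classes = [[0, 1, 2], [3, 4, 5]]

    embs, labels = pseudo_replay(hypernet, old_ctxs, old_classes)
    print("replay loss:", distill_step(global_head, prev_head, embs, labels, opt))
```

In this sketch the replay data never leaves the server and consists only of generated embeddings, which is what allows the approach to avoid client-side buffers; how the distillation loss is balanced against the current Task T client updates is left open and would need to be tuned.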

Expected Contribution

A method for Continual Federated Learning that mitigates catastrophic forgetting without storing any historical data or requiring clients to maintain local replay buffers.

Required Resources

High-performance compute for generative replay cycles, benchmark datasets for class-incremental learning (e.g., Split-CIFAR).

Source Paper

FedHypeVAE: Federated Learning with Hypernetwork Generated Conditional VAEs for Differentially Private Embedding Sharing
