
FedHypeVAE can be extended to handle 'disjoint modality' scenarios (e.g., Client A has MRI, Client B has CT) by using the hypernetwork to hallucinate embeddings for missing modalities in a shared latent space.

Feasibility: 7 · Novelty: 9

Motivation

Real-world healthcare federations often suffer from missing modalities, where different hospitals have different equipment. Existing imputation methods struggle under privacy constraints because they typically rely on centralized access to paired multi-modal data. FedHypeVAE's embedding generation capability offers a unique path to align these heterogeneous inputs into a unified representation without sharing raw data.

Proposed Method

Modify the hypernetwork input to accept a 'modality embedding' vector alongside the client embedding. Train the hypernetwork to generate modality-specific encoders that map input data to a shared, modality-agnostic latent space. Use Maximum Mean Discrepancy (MMD) constraints to force the embedding distributions of different modalities (e.g., text and image) to overlap in the latent space, enabling cross-modal retrieval or classification.
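The sketch below illustrates the two core pieces of this proposal under assumed details: a hypernetwork conditioned on both a client embedding and a modality embedding that emits the weights of a modality-specific encoder, and a Gaussian-kernel MMD penalty that pushes the resulting latent distributions (e.g., MRI vs. CT batches) to overlap. All module and function names (ModalityHypernetwork, encode, mmd_loss), dimensions, and the single-layer encoder are illustrative assumptions, not part of the FedHypeVAE implementation.

```python
# Illustrative sketch only; names, dimensions, and architecture are assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 64    # shared, modality-agnostic latent size (assumed)
INPUT_DIM = 512    # per-modality feature size after preprocessing (assumed)
HIDDEN_DIM = 256
EMBED_DIM = 32     # size of client and modality embeddings (assumed)

class ModalityHypernetwork(nn.Module):
    """Generates encoder weights conditioned on a client embedding and a modality embedding."""
    def __init__(self, num_clients: int, num_modalities: int):
        super().__init__()
        self.client_emb = nn.Embedding(num_clients, EMBED_DIM)
        self.modality_emb = nn.Embedding(num_modalities, EMBED_DIM)
        # Output is a flat parameter vector for a one-layer encoder (weights + bias).
        n_params = INPUT_DIM * LATENT_DIM + LATENT_DIM
        self.generator = nn.Sequential(
            nn.Linear(2 * EMBED_DIM, HIDDEN_DIM),
            nn.ReLU(),
            nn.Linear(HIDDEN_DIM, n_params),
        )

    def forward(self, client_id: torch.Tensor, modality_id: torch.Tensor):
        cond = torch.cat([self.client_emb(client_id), self.modality_emb(modality_id)], dim=-1)
        params = self.generator(cond).squeeze(0)
        W = params[: INPUT_DIM * LATENT_DIM].view(LATENT_DIM, INPUT_DIM)
        b = params[INPUT_DIM * LATENT_DIM:]
        return W, b

def encode(x: torch.Tensor, W: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Apply a generated modality-specific encoder, mapping inputs into the shared latent space."""
    return torch.tanh(x @ W.t() + b)

def mmd_loss(z_a: torch.Tensor, z_b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Gaussian-kernel MMD^2 between two batches of latent codes from different modalities."""
    def kernel(u, v):
        return torch.exp(-torch.cdist(u, v).pow(2) / (2 * sigma ** 2))
    return kernel(z_a, z_a).mean() + kernel(z_b, z_b).mean() - 2 * kernel(z_a, z_b).mean()

# Toy usage: two clients with disjoint modalities aligned in the shared latent space.
hyper = ModalityHypernetwork(num_clients=2, num_modalities=2)
x_mri = torch.randn(16, INPUT_DIM)   # Client A batch (MRI features)
x_ct = torch.randn(16, INPUT_DIM)    # Client B batch (CT features)
W_a, b_a = hyper(torch.tensor([0]), torch.tensor([0]))
W_b, b_b = hyper(torch.tensor([1]), torch.tensor([1]))
alignment = mmd_loss(encode(x_mri, W_a, b_a), encode(x_ct, W_b, b_b))
alignment.backward()  # gradient flows back into the hypernetwork, pulling the two modalities together
```

In a full system the MMD term would be added to the per-client VAE objective rather than optimized alone, and only the hypernetwork conditioning vectors and generated parameters, never raw data, would cross the federation boundary.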

Expected Contribution

A privacy-preserving framework for multi-modal Federated Learning that allows clients with different sensor capabilities to collaborate on a single shared model.

Required Resources

Multi-modal datasets (e.g., MIMIC-III or multimodal MNIST), expertise in domain adaptation and multi-view learning.

Source Paper

FedHypeVAE: Federated Learning with Hypernetwork Generated Conditional VAEs for Differentially Private Embedding Sharing
