FedSCAl: Leveraging Server and Client Alignment for Unsupervised Federated Source-Free Domain Adaptation

Visual Computing Lab, IISc Bangalore
*Indicates Equal Contribution

WACV 2026

(a) Federated source-Free Domain Adaptation (FFreeDA) Setup: Multiple clients (labeled as \(\text{C}_1\), \(\text{C}_2\), \(\text{C}_3\), and \(\text{C}_4\)) hold unlabeled data from the same or different domains, while the central server uses a model pre-trained on a labeled source dataset, which differs from the clients' data distributions and is unavailable during training.
(b) Performance of different methods on Office-Home dataset.

Abstract

We address the Federated source-Free Domain Adaptation (FFreeDA) problem, in which clients hold unlabeled data with significant inter-client domain gaps. The FFreeDA setup restricts access to the source dataset during the training rounds, constraining FL frameworks to rely only on a pre-trained server model. Moreover, this source domain dataset often has a distribution distinct from the clients' domains.

Under these constraints, adapting Source-Free Domain Adaptation (SFDA) methods to FL struggles with client-drift in real-world scenarios, as the extreme data heterogeneity caused by the aforementioned domain gaps leads to unreliable pseudo-labels. In this paper, we introduce \(\textbf{FedSCAl}\), an FL framework leveraging our proposed Server-Client Alignment (SCAl) mechanism to regularize client updates by aligning the clients' and server model's predictions.

We observe an improvement in the clients' pseudo-labeling accuracy after alignment, as the SCAl mechanism helps mitigate client-drift. Further, we present extensive experiments on benchmark vision datasets showing that FedSCAl consistently outperforms state-of-the-art FL methods in the FFreeDA setup for classification tasks.

Method

Method Overview

Overview of the proposed FedSCAl framework with the Server and Client Alignment (SCAl) mechanism. The server communicates the global model \( \mathbf{w}^r \) to the clients. Each client then trains its local model \( \mathbf{w}^r_k \) using a weak augmentation \( A_w \), which yields the augmented image \( A_w(x^i_k) \), and a strong augmentation \( A_s \), which yields \( A_s(x^i_k) \). The clients compute the client alignment loss \( L^{i}_{l,k} \) and the server alignment loss \( L^{i}_{g,k} \), and perform local training with the objective $$ l_k(\mathbf{w}) = L_k^{LoA} + L_k^{SCAl} $$ before communicating the updated local models \( \mathbf{w}^r_k \) back to the server for aggregation in the subsequent round.

▷ For more details, please refer to the method section of the main paper.
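The overall shape of the per-client objective can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the concrete forms of the local-adaptation and alignment terms are defined in the main paper, and the choices here (a weak/strong consistency loss for client alignment, a cross-entropy pull toward the frozen server model's predictions for server alignment, and the names `client_loss`, `softmax`, `cross_entropy`) are our assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(p, q, eps=1e-12):
    # Mean cross-entropy H(p, q) between two batches of distributions.
    return -(p * np.log(q + eps)).sum(axis=-1).mean()

def client_loss(local_weak_logits, local_strong_logits, server_weak_logits):
    """Hypothetical per-client objective: a local term enforcing consistency
    between weak- and strong-augmentation predictions of the local model,
    plus a server-alignment term pulling local predictions toward the
    (frozen) server model's predictions on the same weakly augmented batch."""
    p_w = softmax(local_weak_logits)    # local model on A_w(x)
    p_s = softmax(local_strong_logits)  # local model on A_s(x)
    p_g = softmax(server_weak_logits)   # server model on A_w(x)
    loss_local = cross_entropy(p_w, p_s)   # stands in for L_{l,k}
    loss_server = cross_entropy(p_g, p_w)  # stands in for L_{g,k}
    return loss_local + loss_server
```

Because cross-entropy \(H(p, q)\) is minimized when \(q = p\), perturbing the strong-augmentation predictions away from the weak ones strictly increases the local term, which is the regularizing behavior the alignment losses rely on.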

Results

Experimental Results

▷ For more results and ablation analysis, please refer to the main paper and supplementary.


Pseudo-label prediction accuracy difference \(\Delta_{pAcc}\) (refer to Eq. 13 of the main paper) improves with the proposed SCAl mechanism on the Office-Home dataset when training is initialized with the server model pre-trained on different source domains (Art, Clipart, Product & Real-World).
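Since Eq. 13 is not reproduced on this page, the sketch below assumes one plausible reading of \(\Delta_{pAcc}\): the argmax pseudo-label accuracy of the model trained with SCAl minus that of the unaligned baseline, measured against held-out ground truth. The function names are ours.

```python
import numpy as np

def pseudo_label_accuracy(logits, labels):
    # Fraction of argmax pseudo-labels that match the ground-truth labels.
    return (logits.argmax(axis=-1) == labels).mean()

def delta_pacc(logits_with_scal, logits_without_scal, labels):
    """Assumed form of Delta_pAcc: pseudo-label accuracy with SCAl
    alignment minus the accuracy of the unaligned baseline."""
    return (pseudo_label_accuracy(logits_with_scal, labels)
            - pseudo_label_accuracy(logits_without_scal, labels))
```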


Entropy density of the clients' predictions on the Office-Home dataset, calculated via Eq. 14 of the main paper, exhibiting skewness (\(\gamma\)): negative \(\gamma\) indicates higher entropy and lower prediction confidence, while positive \(\gamma\) reflects lower entropy and higher prediction confidence.
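This diagnostic can be sketched as follows, assuming (since Eq. 14 is not reproduced here) that the per-sample quantity is the Shannon entropy of each predictive distribution and that \(\gamma\) is the standard Fisher skewness of those entropies. Both assumptions and the function names are ours.

```python
import numpy as np

def prediction_entropy(probs, eps=1e-12):
    # Shannon entropy of each predictive distribution (one row per sample).
    return -(probs * np.log(probs + eps)).sum(axis=-1)

def entropy_skewness(probs):
    """Assumed form of the skewness diagnostic gamma: the third
    standardized moment (Fisher skewness) of the per-sample entropies."""
    h = prediction_entropy(probs)
    mu, sigma = h.mean(), h.std()
    return ((h - mu) ** 3).mean() / (sigma ** 3 + 1e-12)
```

With mostly confident (low-entropy) predictions and a few uncertain ones, the entropy distribution has a right tail, giving positive \(\gamma\); the reverse gives negative \(\gamma\), matching the reading in the caption above.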

Effect of varying the threshold \(\tau_{init}\) (refer to Eq. 16 of the main paper) on the average accuracy attained by all the clients when training is initialized with the server model pre-trained on Art, Clipart, Product, and Real-World.
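The role such a threshold typically plays can be sketched as below; Eq. 16 is not reproduced here, so this assumes \(\tau_{init}\) acts as a confidence cutoff on the maximum softmax probability when selecting pseudo-labeled samples. The filter shape and the name `select_pseudo_labels` are ours.

```python
import numpy as np

def select_pseudo_labels(probs, tau=0.9):
    """Hypothetical confidence filter: keep only samples whose maximum
    softmax probability is at least tau (the role we assume tau_init
    plays), returning the kept indices and their argmax pseudo-labels."""
    confidence = probs.max(axis=-1)
    keep = np.where(confidence >= tau)[0]
    return keep, probs[keep].argmax(axis=-1)
```

A higher \(\tau\) yields fewer but cleaner pseudo-labels, while a lower \(\tau\) admits more samples at the cost of label noise, which is the trade-off the ablation in the caption above varies.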

BibTeX

@InProceedings{yashwanth_2026_WACV,
    author    = {Yashwanth, M and Koti, Sampath and Singh, Arunabh and Marjit, Shyam and Chakraborty, Anirban},
    title     = {FedSCAl: Leveraging Server and Client Alignment for Unsupervised Federated Source-Free Domain Adaptation},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    year      = {2026},
}