
dc.contributor.author: Sun, Yuchang
dc.contributor.author: Kountouris, Marios
dc.contributor.author: Zhang, Jun
dc.date.accessioned: 2025-04-07T07:30:04Z
dc.date.available: 2025-04-07T07:30:04Z
dc.date.issued: 2024-11-28
dc.identifier.citation: Y. Sun, M. Kountouris, and J. Zhang, “How to collaborate: Towards maximizing the generalization performance in cross-silo federated learning,” accepted to IEEE Trans. Mobile Comput. https://doi.org/10.48550/arXiv.2401.13236
dc.identifier.uri: https://hdl.handle.net/10481/103477
dc.description: The work of J. Zhang was supported by the Hong Kong Research Grants Council under the Areas of Excellence scheme grant AoE/E-601/22-R and NSFC/RGC Collaborative Research Scheme grant CRS HKUST603/22. The work of M. Kountouris was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 Research and Innovation Programme (Grant agreement No. 101003431).
dc.description.abstract: Federated learning (FL) has attracted considerable attention as a privacy-preserving distributed learning framework. In this work, we focus on cross-silo FL, where clients become the model owners after training and are only concerned about the model’s generalization performance on their local data. Due to the data heterogeneity issue, asking all the clients to join a single FL training process may result in model performance degradation. To investigate the effectiveness of collaboration, we first derive a generalization bound for each client when collaborating with others or when training independently. We show that the generalization performance of a client can be improved by collaborating with other clients that have more training data and similar data distributions. Our analysis allows us to formulate a client utility maximization problem by partitioning clients into multiple collaborating groups. A hierarchical clustering-based collaborative training (HCCT) scheme is then proposed, which does not require the number of groups to be fixed in advance. We further analyze the convergence of HCCT for general non-convex loss functions, which reveals the effect of data similarity among clients. Extensive simulations show that HCCT achieves better generalization performance than baseline schemes, while it degenerates to independent training and conventional FL in specific scenarios.
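To make the grouping idea in the abstract concrete, the toy sketch below greedily merges clients into collaboration groups when a simple utility proxy improves: each client benefits from the data pooled in its group but pays a penalty for distribution mismatch. This is an illustrative assumption only; the function names (greedy_grouping, group_utility, tv_distance), the utility form, the weight lam, and the example numbers are made up here and are not the paper's HCCT algorithm or its utility definition.

import numpy as np

def tv_distance(p, q):
    """Total variation distance between two label distributions."""
    return 0.5 * float(np.abs(p - q).sum())

def group_utility(group, sizes, dists, lam):
    """Toy utility of one group: each member benefits from the pooled data,
    while pairwise distribution mismatch within the group is penalized."""
    pooled = sum(sizes[i] for i in group)
    mismatch = sum(tv_distance(dists[i], dists[j])
                   for i in group for j in group if i < j)
    return len(group) * pooled - lam * mismatch

def greedy_grouping(sizes, dists, lam=1000.0):
    """Start from singleton groups and repeatedly apply the best pairwise
    merge as long as it increases total utility, so the number of groups
    is not fixed in advance."""
    groups = [[i] for i in range(len(sizes))]
    while len(groups) > 1:
        base = sum(group_utility(g, sizes, dists, lam) for g in groups)
        best_gain, best_groups = 0.0, None
        for a in range(len(groups)):
            for b in range(a + 1, len(groups)):
                merged = [g for k, g in enumerate(groups) if k not in (a, b)]
                merged.append(groups[a] + groups[b])
                gain = sum(group_utility(g, sizes, dists, lam) for g in merged) - base
                if gain > best_gain:
                    best_gain, best_groups = gain, merged
        if best_groups is None:   # no merge improves utility: stop
            break
        groups = best_groups
    return groups

# Hypothetical example: 4 clients with local data sizes and 3-class label distributions.
sizes = [100, 120, 80, 500]
dists = [np.array([0.8, 0.1, 0.1]), np.array([0.7, 0.2, 0.1]),
         np.array([0.1, 0.1, 0.8]), np.array([0.1, 0.2, 0.7])]
print(greedy_grouping(sizes, dists))   # -> [[2, 3], [0, 1]] with these toy numbers

With these settings, clients with similar label distributions end up in the same group, while dissimilar clients are kept apart; setting lam to 0 would merge everyone (conventional FL), and a very large lam would keep every client alone (independent training).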
dc.description.sponsorship: Hong Kong Research Grants Council AoE/E-601/22-R
dc.description.sponsorship: NSFC/RGC Collaborative Research Scheme CRS HKUST603/22
dc.description.sponsorship: European Research Council (ERC)
dc.description.sponsorship: European Union’s Horizon 2020 No. 101003431
dc.language.iso: eng
dc.publisher: IEEE
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Federated learning
dc.subject: Generalization
dc.subject: Collaboration pattern
dc.subject: Hierarchical clustering
dc.title: How to Collaborate: Towards Maximizing the Generalization Performance in Cross-Silo Federated Learning
dc.type: journal article
dc.relation.projectID: info:eu-repo/grantAgreement/EC/H2020/101003431
dc.rights.accessRights: open access
dc.identifier.doi: 10.48550/arXiv.2401.13236
dc.type.hasVersion: SMUR

