An interpretable client decision tree aggregation process for federated learning
Metadata
Author
Argente-Garrido, Alberto; Zuheros, Cristina; Luzón García, María Victoria; Herrera Triguero, Francisco
Publisher
Elsevier
Subject
Federated Learning; Decision Trees; Interpretability; Aggregation Process; Data Privacy
Date
2024-04-03
Bibliographic reference
Published version: Argente-Garrido, Alberto et al. An interpretable client decision tree aggregation process for federated learning. Information Sciences 694 (March 2025), 121711. https://doi.org/10.1016/j.ins.2024.121711
Funding
National Institute of Cybersecurity (INCIBE) C074/23; University of Granada; European Union (Next Generation)
Abstract
Trustworthy Artificial Intelligence solutions are essential in today's data-driven applications, prioritizing principles such as robustness, safety, transparency, explainability, and privacy, among others. This has led to the emergence of Federated Learning as a solution for privacy-preserving, distributed machine learning. Decision trees, as self-explanatory models, are well suited to injecting interpretability into collaborative model training across multiple devices in resource-constrained settings such as federated learning environments. However, the structure of decision trees makes their aggregation in a federated learning environment non-trivial: it requires techniques that can merge their decision paths without introducing bias or overfitting, while keeping the aggregated decision trees robust and generalizable. In this paper, we propose an Interpretable Client Decision Tree Aggregation process for Federated Learning scenarios that preserves the interpretability and precision of the base decision trees used in the aggregation. The model aggregates multiple decision paths of the decision trees and can be applied to different decision tree types, such as ID3 and CART. We carry out experiments on four datasets, and the analysis shows that the tree built with the model improves on the local models and outperforms the state of the art.
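The abstract describes an aggregation built from the decision paths (root-to-leaf rules) of client-side trees. The sketch below is a minimal illustration of that ingredient, not the paper's algorithm: it trains a CART tree per simulated client with scikit-learn, extracts each root-to-leaf path as a rule, and naively pools the clients' rule sets. The function name `extract_paths`, the two-client split, and the pooling step are all illustrative assumptions.

```python
# Hedged sketch: extracting decision paths from per-client CART trees.
# NOT the paper's aggregation process; names and the pooling step are illustrative.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

def extract_paths(clf, feature_names):
    """Return each root-to-leaf path as (list of (feature, op, threshold), predicted class)."""
    t = clf.tree_
    paths = []

    def walk(node, conds):
        if t.children_left[node] == -1:  # leaf node
            label = int(t.value[node][0].argmax())
            paths.append((conds, label))
            return
        feat = feature_names[t.feature[node]]
        thr = float(t.threshold[node])
        walk(t.children_left[node], conds + [(feat, "<=", thr)])
        walk(t.children_right[node], conds + [(feat, ">", thr)])

    walk(0, [])
    return paths

data = load_iris()
X, y = data.data, data.target

# Two simulated "clients", each training a small CART tree on its own data slice.
client_paths = []
for sl in (slice(0, 75), slice(75, 150)):
    clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X[sl], y[sl])
    client_paths.append(extract_paths(clf, data.feature_names))

# Naive aggregation for illustration only: pool all client decision paths.
pooled = [p for paths in client_paths for p in paths]
print(f"{len(pooled)} decision paths pooled from {len(client_paths)} clients")
```

Each pooled entry is a human-readable rule (a conjunction of threshold conditions plus a class), which is what makes path-level aggregation attractive for interpretability compared with averaging opaque parameters.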





