Computation of Kullback–Leibler Divergence in Bayesian Networks
Metadata
Publisher
Universidad de Granada
Subject
Probabilistic graphical models; Machine learning algorithms; Kullback–Leibler divergence
Date
2021
Bibliographic reference
Moral, S.; Cano, A.; Gómez-Olmedo, M. Computation of Kullback–Leibler Divergence in Bayesian Networks. Entropy 2021, 23, 1122. https://doi.org/10.3390/e23091122
Sponsor
Spanish Ministry of Education and Science under project PID2019-106758GB-C31; European Regional Development Fund (FEDER)
Abstract
The Kullback–Leibler divergence KL(p, q) is the standard measure of error when a true probability distribution p is approximated by a probability distribution q. Its efficient computation is essential in many tasks, such as approximate inference or measuring the error when learning a probability distribution. For high-dimensional distributions, such as those associated with Bayesian networks, a direct computation can be infeasible. This paper considers the problem of efficiently computing the Kullback–Leibler divergence of two probability distributions, each coming from a different Bayesian network, possibly with different structures. The approach is based on an auxiliary deletion algorithm to compute the necessary marginal distributions, using a cache of operations with potentials in order to reuse past computations whenever possible. The algorithms are tested with Bayesian networks from the bnlearn repository. Python code is provided, built on pgmpy, a library for working with probabilistic graphical models.
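As a minimal illustration of the direct computation that the abstract describes as infeasible in high dimension, the following sketch enumerates the full joint distributions of two toy two-variable Bayesian networks and evaluates KL(p, q) = Σ_x p(x) log(p(x)/q(x)). The networks, structure A → B, and all parameters are invented for illustration only; they are not from the paper.

```python
import math
from itertools import product

def kl_divergence(p, q):
    """KL(p, q) = sum_x p(x) * log(p(x) / q(x)) over joint configurations."""
    return sum(px * math.log(px / q[x]) for x, px in p.items() if px > 0)

# Two hypothetical Bayesian networks over binary variables with structure A -> B,
# each given by P(A) and P(B | A); same graph, different parameters.
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
q_a = {0: 0.5, 1: 0.5}
q_b_given_a = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.3, 1: 0.7}}

def joint(pa, pb_given_a):
    # Enumerate the full joint distribution; this brute force is exactly what
    # stops scaling as the number of variables grows.
    return {(a, b): pa[a] * pb_given_a[a][b] for a, b in product((0, 1), repeat=2)}

p = joint(p_a, p_b_given_a)
q = joint(q_a, q_b_given_a)
print(kl_divergence(p, q))  # non-negative; zero only when p == q
```

The paper's contribution is to avoid this exhaustive enumeration by computing the needed marginals with a deletion algorithm and caching operations with potentials.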