OG-SGG: Ontology-Guided Scene Graph Generation, A Case Study in Transfer Learning for Telepresence Robotics
Metadata
Publisher
IEEE
Subject
Scene graph generation; Ontology; Computer vision; Telepresence robotics
Date
2022-12-19
Bibliographic reference
F. Amodeo et al., "OG-SGG: Ontology-Guided Scene Graph Generation—A Case Study in Transfer Learning for Telepresence Robotics," in IEEE Access, vol. 10, pp. 132564-132583, 2022, doi: 10.1109/ACCESS.2022.3230590.
Funding
Programa Operativo FEDER Andalucía; Consejería de Economía y Conocimiento (TELEPORTA) UPO-1264631; Consejería de Economía y Conocimiento (DeepBot) PY20_00817; European Union NextGenerationEU/PRTR PLEC2021-007868 MCIN/AEI/10.13039/501100011033; Spanish Government Juan de la Cierva Incorporación IJC2019-039152-I; Google Research Scholar Programme
Abstract
Scene graph generation from images is a task of great interest to applications such as robotics,
because graphs are the main way to represent knowledge about the world and to regulate human-robot
interaction in tasks such as Visual Question Answering (VQA). Unfortunately, the corresponding area of
machine learning is still relatively young, and currently available solutions do not specialize well to
concrete usage scenarios. Specifically, they do not take existing "expert" knowledge about the domain world
into account, which may be necessary to provide the level of reliability demanded by the
use case. In this paper, we propose an initial approximation to a framework called Ontology-Guided
Scene Graph Generation (OG-SGG), which can improve the performance of an existing machine-learning-based
scene graph generator using prior knowledge supplied in the form of an ontology (specifically, using
the axioms defined within it); and we present results evaluated on a specific scenario grounded in telepresence
robotics. These results show quantitative and qualitative improvements in the generated scene graphs.
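To make the idea concrete, the following minimal Python sketch illustrates one way ontology axioms could guide a scene graph generator. This is an assumption-laden illustration, not the paper's actual method: the predicate names, object classes, and domain/range axioms below are invented for the example, and the real OG-SGG framework operates on the axioms of a full ontology rather than a hand-written dictionary.

```python
# Hypothetical domain/range axioms, standing in for axioms that a real
# ontology (e.g. in OWL) would define for each relationship type.
AXIOMS = {
    "on": {"domain": {"cup", "book", "laptop"}, "range": {"table", "shelf"}},
    "holding": {"domain": {"person"}, "range": {"cup", "book", "laptop"}},
}

def filter_triples(candidates):
    """Keep only (subject, predicate, object, score) triples whose subject
    and object classes satisfy the predicate's domain/range axioms."""
    valid = []
    for subj, pred, obj, score in candidates:
        axiom = AXIOMS.get(pred)
        if axiom is None:
            continue  # predicate unknown to the ontology: discard
        if subj in axiom["domain"] and obj in axiom["range"]:
            valid.append((subj, pred, obj, score))
    return valid

# Raw output of a (hypothetical) learned scene graph generator.
candidates = [
    ("cup", "on", "table", 0.91),      # consistent with the axioms
    ("table", "on", "cup", 0.40),      # violates domain/range: dropped
    ("person", "holding", "cup", 0.87),
]
print(filter_triples(candidates))
```

Here the implausible triple ("table", "on", "cup") is rejected even though the learned model proposed it, which is the kind of qualitative improvement the abstract refers to.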