Show simple item record

dc.contributor.author: Vanitha, M.
dc.contributor.author: Niharika, N.
dc.contributor.author: Pranitha, L.
dc.contributor.author: Architha, K.
dc.date.accessioned: 2025-04-22T09:49:12Z
dc.date.available: 2025-04-22T09:49:12Z
dc.date.issued: 2024-12-31
dc.identifier.citation: M. Vanitha, N. Niharika, L. Pranitha, K. Architha (2024). Synergizing Human Gaze with Machine Vision for Location Mode Prediction, Vol. 15(5), 286-295. ISSN: 1989-9572
dc.identifier.issn: 1989-9572
dc.identifier.uri: https://hdl.handle.net/10481/103721
dc.description.abstract: Before the advent of machine learning and AI, systems for predicting human intent and movement relied heavily on sensor-based approaches such as inertial measurement units (IMUs), gyroscopes, and accelerometers, which primarily tracked physical movements. While effective at detecting motion, these systems lacked the nuanced understanding of human intent and environmental context that can be gained by integrating human gaze; they were less adaptable, often required manual calibration and expert interpretation, and could not accurately forecast a user's next movement or transition in real time, leading to less reliable and slower responses in applications such as wearable robotics. The title "Synergizing Human Gaze with Machine Vision for Location Mode Prediction" reflects the integration of human gaze data, which indicates where a person is looking (and hence their intent), with machine vision systems that process movement data (point clouds) to predict future locomotion modes or transitions. The proposed system, GT-NET, uses machine learning algorithms to combine human gaze data (images) with point cloud data (user movement) to predict human intent and locomotion. The system leverages deep learning models trained on a custom dataset with the aim of accurately forecasting the user's next movement. By integrating these data modalities, GT-NET enhances the ability of machines to anticipate human actions, particularly in dynamic environments.
dc.language.iso: eng
dc.publisher: Universidad de Granada
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Machine learning
dc.subject: Artificial Intelligence (AI)
dc.subject: Deep Learning Models
dc.subject: Cloud
dc.title: Synergizing Human Gaze with Machine Vision for Location Mode Prediction
dc.type: journal article
dc.rights.accessRights: open access
dc.type.hasVersion: VoR


File(s) in this item

[PDF]

This item appears in the following collection(s)


Attribution-NonCommercial-NoDerivatives 4.0 International
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International