Integration of Feature and Decision Fusion with Deep Learning Architectures for Video Classification
Identifiers
URI: https://hdl.handle.net/10481/89102
Publisher
IEEE
Subject
Computer vision; data fusion; deep neural networks; human action recognition; spatio-temporal features
Date
2024-02-01
Bibliographic reference
R. S. Kiziltepe, J. Q. Gan and J. J. Escobar, "Integration of Feature and Decision Fusion With Deep Learning Architectures for Video Classification," in IEEE Access, vol. 12, pp. 19432-19446, 2024, doi: 10.1109/ACCESS.2024.3360929
Sponsor
Ministry of National Education, Turkey
Abstract
Information fusion is frequently employed to integrate diverse inputs, including sensory data, features, or decisions, in order to exploit complementary relationships among features and classifiers. This paper presents a novel approach to video classification using deep learning architectures, including ConvLSTM and vision transformer based fusion architectures, that combines spatial and temporal features and applies decision fusion at multiple levels. The proposed vision transformer based method uses a 3D CNN to extract spatio-temporal information and attention mechanisms to focus on the features essential for action recognition, thereby learning spatio-temporal dependencies effectively. The effectiveness of the proposed methods is validated through empirical evaluations on two well-known video classification datasets, UCF-101 and KTH. The experimental findings indicate that the utilisation of both spatial and temporal features is essential, with the best performance achieved when temporal features serve as the primary feature source in conjunction with two distinct types of spatial features. The multi-level decision fusion approach proposed in this study produces results comparable to those of feature fusion methods while requiring less memory and computation. Among the fusion methods examined, the fusion of RGB, HOG, and optical flow representations performed best. The vision transformer based approaches also significantly outperformed the ConvLSTM based approaches. Furthermore, an ablation study compares vision transformer based feature fusion approaches for improving video classification performance.
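To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch, not the authors' published code: a 3D CNN front end feeding a transformer encoder (self-attention over spatio-temporal tokens), followed by decision-level fusion of RGB, HOG, and optical flow streams by averaging class probabilities. All layer sizes, module names, channel counts, and the averaging rule are illustrative assumptions; the paper's exact architectures and multi-level fusion schemes differ in detail.

    # Illustrative sketch only; hyperparameters and structure are assumptions.
    import torch
    import torch.nn as nn

    class SpatioTemporalEncoder(nn.Module):
        """3D CNN front end followed by a transformer encoder, mirroring the
        abstract's description of attention over spatio-temporal features."""
        def __init__(self, in_channels=3, embed_dim=256, num_classes=101):
            super().__init__()
            self.cnn3d = nn.Sequential(
                nn.Conv3d(in_channels, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(64, embed_dim, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d((8, 4, 4)),  # fixed-size token grid (assumed)
            )
            encoder_layer = nn.TransformerEncoderLayer(
                d_model=embed_dim, nhead=8, batch_first=True)
            self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
            self.head = nn.Linear(embed_dim, num_classes)

        def forward(self, clip):                       # clip: (B, C, T, H, W)
            feats = self.cnn3d(clip)                   # (B, D, 8, 4, 4)
            tokens = feats.flatten(2).transpose(1, 2)  # (B, 128, D) spatio-temporal tokens
            attended = self.transformer(tokens)        # self-attention over tokens
            return self.head(attended.mean(dim=1))     # class logits

    def decision_fusion(logits_per_stream):
        """Late (decision-level) fusion: average class probabilities across
        streams. Averaging is one simple rule; the paper evaluates fusion
        at multiple levels."""
        probs = [torch.softmax(l, dim=-1) for l in logits_per_stream]
        return torch.stack(probs).mean(dim=0)

    # Usage: one encoder per modality; HOG/flow channel counts are assumptions.
    rgb_net, hog_net, flow_net = (SpatioTemporalEncoder(c) for c in (3, 1, 2))
    clip_rgb = torch.randn(2, 3, 16, 64, 64)
    clip_hog = torch.randn(2, 1, 16, 64, 64)
    clip_flow = torch.randn(2, 2, 16, 64, 64)
    fused = decision_fusion([rgb_net(clip_rgb), hog_net(clip_hog), flow_net(clip_flow)])
    print(fused.shape)  # torch.Size([2, 101]) -- fused class probabilities

A design note consistent with the abstract: because decision fusion combines only per-stream probability vectors rather than concatenated feature maps, the fusion step itself adds negligible memory and compute, which is the trade-off the paper reports against feature-level fusion.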