Attention-Based Deep Learning Model for Predicting Collaborations Between Different Research Affiliations
Metadata
Author
Zhou, Hui; Sun, Jinqing; Zhao, Zhongying; Yang, Yonghao; Xie, Ailei; Chiclana Parrilla, Francisco
Publisher
IEEE
Subject
Relationship prediction; Collaboration analysis; Coauthor networks; Deep learning
Date
2019-08-21
Bibliographic reference
Zhou, H., Sun, J., Zhao, Z., Yang, Y., Xie, A., & Chiclana, F. (2019). Attention-Based Deep Learning Model for Predicting Collaborations Between Different Research Affiliations. IEEE Access, 7, 118068-118076.
Sponsorship
This work was supported in part by the Humanities and Social Science Research Project of the Ministry of Education in China under Grant 17YJCZH262 and Grant 18YJAZH136, in part by the National Natural Science Foundation of China under Grant 61303167, Grant 61702306, Grant 61433012, Grant U1435215, and Grant 71772107, in part by the Natural Science Foundation of Shandong Province under Grant ZR2018BF013 and Grant ZR2017BF015, in part by the Innovative Research Foundation of Qingdao under Grant 18-2-2-41-jch, in part by the Key Project of Industrial Transformation and Upgrading in China under Grant TC170A5SW, and in part by the Scientific Research Foundation of SDUST for Innovative Team under Grant 2015TDJH102.
Abstract
Predicting collaborations between different entities is challenging but important; in academia, for example, it would enable identifying and evaluating trends in scientific research collaboration and provide decision support for policy formulation and incentive measures. In this paper, we propose an attention-based Long Short-Term Memory Convolutional Neural Network (LSTM-CNN) model to predict collaborations between different research affiliations, which takes both the influence of research articles and time (year) relationships into consideration. The experimental results show that the proposed model outperforms the competing Support Vector Machine (SVM), CNN, and LSTM methods. It improves prediction precision by a minimum of 3.23 percentage points and up to 10.80 percentage points compared with these methods, while in terms of the F1-score the performance is improved by 13.48, 4.85, and 4.24 percentage points, respectively.
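For illustration, the following is a minimal sketch of an attention-augmented LSTM-CNN classifier of the kind the abstract describes, where a sequence of yearly features for a pair of affiliations is scored for future collaboration. The layer sizes, input shape, and placement of the attention layer are assumptions for illustration, not the configuration published in the paper.

# Minimal sketch (PyTorch) of an attention-based LSTM-CNN classifier.
# Hyperparameters and input shape are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionLSTMCNN(nn.Module):
    def __init__(self, feature_dim=64, lstm_hidden=128, cnn_channels=64, num_classes=2):
        super().__init__()
        # LSTM encodes the sequence of yearly affiliation-pair features.
        self.lstm = nn.LSTM(feature_dim, lstm_hidden, batch_first=True)
        # Additive attention scores each time step of the LSTM output.
        self.attn = nn.Linear(lstm_hidden, 1)
        # 1-D convolution captures local patterns across adjacent years.
        self.conv = nn.Conv1d(lstm_hidden, cnn_channels, kernel_size=3, padding=1)
        self.fc = nn.Linear(cnn_channels, num_classes)

    def forward(self, x):
        # x: (batch, years, feature_dim)
        h, _ = self.lstm(x)                           # (batch, years, lstm_hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # attention weight per year
        h = h * weights                               # attention-weighted sequence
        h = self.conv(h.transpose(1, 2))              # (batch, cnn_channels, years)
        h = F.relu(h).max(dim=2).values               # global max pooling over time
        return self.fc(h)                             # collaborate / not-collaborate logits

# Example: score 8 candidate affiliation pairs described by 5 years of features.
model = AttentionLSTMCNN()
scores = model(torch.randn(8, 5, 64))
print(scores.shape)  # torch.Size([8, 2])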