Journal of Data and Information Science  2019, Vol. 4 Issue (4): 26-41    DOI: 10.2478/jdis-2019-0019
Research Paper     
CiteOpinion: Evidence-based Evaluation Tool for Academic Contributions of Research Papers Based on Citing Sentences
Xiaoqiu Le1,(),Jingdan Chu2,Siyi Deng2,Qihang Jiao2,Jingjing Pei2,Liya Zhu1,Junliang Yao2
1National Science Library, Chinese Academy of Sciences, Beijing 100190, China
2University of Chinese Academy of Sciences, Beijing 100190, China


Purpose: To uncover peer-evaluation information on the academic contribution of a research paper from the content of the papers that cite it, and to provide an evidence-based tool for evaluating the academic value of cited papers.

Design/methodology/approach: CiteOpinion uses a deep learning model to automatically extract citing sentences from representative citing papers. Starting from an analysis of these citing sentences, it identifies the major academic contribution points of the cited paper, positive and negative evaluations from citing authors, and shifts in the research topics of subsequent citing authors, by means of move-category recognition (problem, method, conclusion, etc.), sentiment analysis, and topic clustering.
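The pipeline described above can be sketched as a minimal, illustrative stand-in. The actual system uses deep learning models; the keyword lists and function names below are hypothetical simplifications used only to show the shape of the move-recognition and sentiment-scoring steps.

```python
# Illustrative stand-in for CiteOpinion's move recognition and sentiment
# scoring of citing sentences. The real tool uses trained deep learning
# models; these cue-word lists are hypothetical.

MOVE_CUES = {
    "method": ("algorithm", "procedure", "approach", "technique"),
    "problem": ("problem", "challenge", "issue"),
    "conclusion": ("show", "demonstrate", "conclude", "find"),
}
POSITIVE_CUES = ("breakthrough", "significant", "popular", "important")
NEGATIVE_CUES = ("defect", "shortcoming", "mistake", "fail")

def classify_move(sentence: str) -> str:
    """Assign a citing sentence to a move category (problem/method/conclusion)."""
    s = sentence.lower()
    for move, cues in MOVE_CUES.items():
        if any(cue in s for cue in cues):
            return move
    return "other"

def sentiment_score(sentence: str) -> float:
    """Return a sentiment score E in [-1, 1] from cue-word counts."""
    s = sentence.lower()
    pos = sum(cue in s for cue in POSITIVE_CUES)
    neg = sum(cue in s for cue in NEGATIVE_CUES)
    if pos == neg:
        return 0.0
    return (pos - neg) / (pos + neg)
```

In the real system, the move classifier and sentiment model would be replaced by trained neural networks, and the clustering of sentences into contribution points would follow these per-sentence labels.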

Findings: Citing sentences in a citing paper contain substantial evidence useful for academic evaluation. Beyond simple citation statistics, they can objectively and authentically reveal the nature and degree of the cited paper's contribution as reflected in its citations.

Practical implications: The evidence-based evaluation tool CiteOpinion can provide an objective, in-depth basis for evaluating the academic value of the representative papers of individual researchers, research teams, and institutions.

Originality/value: No similar practical tool was found in the retrieved literature.

Research limitations: Acquiring the full text of citing papers is difficult. The calculation based on the sentiment scores of citing sentences needs refinement. Currently, the tool is used only for evaluating academic contributions; its value in policy studies, technical application, and the promotion of science has not yet been tested.

Key words: Cited paper; Citing paper; Citing sentence; Citation motive; Citation sentiment; Academic contribution; Evaluation
Received: 08 November 2019      Published: 19 December 2019
Corresponding Author: Xiaoqiu Le
Cite this article:

Xiaoqiu Le, Jingdan Chu, Siyi Deng, Qihang Jiao, Jingjing Pei, Liya Zhu, Junliang Yao. CiteOpinion: Evidence-based Evaluation Tool for Academic Contributions of Research Papers Based on Citing Sentences. Journal of Data and Information Science, 2019, 4(4): 26-41.


Figure 1. Diagram for Concept of Citation Content.
                 | Quantitative analysis tool            | CiteOpinion
Data object      | Metadata of paper                     | Citing sentences
Data granularity | Article                               | Sentence
Analysis focus   | Statistical indicators                | Text content mining
Result form      | Relationship diagrams and data sheets | Evaluation evidence text, relationship diagrams and data sheets
Table 1 Comparison between CiteOpinion and Conventional Quantitative Analysis Tools.
Sentiment category | Definition                                                  | Sentiment score range (E)
Positive           | Holding a commendatory, approving, or admiring attitude     | 0 < E < 1
Neutral            | Brief statement or rephrasing, without obvious sentiment    | E = 0
Negative           | Describing defects, shortcomings, or mistakes               | -1 < E < 0
Table 2 Sentiment categorization of citing sentences.
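The categorization in Table 2 amounts to a simple mapping from a sentiment score E to a label. The helper below is a minimal sketch of that mapping; the function name is an assumption, not part of the published tool.

```python
def sentiment_category(e: float) -> str:
    """Map a citing-sentence sentiment score E in [-1, 1] to a Table 2 category."""
    if not -1.0 <= e <= 1.0:
        raise ValueError("sentiment score must lie in [-1, 1]")
    if e > 0:
        return "Positive"
    if e < 0:
        return "Negative"
    return "Neutral"
```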
Figure 2. Measurement framework for academic contribution of representative paper.
Figure 3. CiteOpinion system framework.
Figure 4. Move structure category recognition model.
# | Author          | Representative paper                                              | Citing papers
1 | Hinton Geoffrey | Deep learning                                                     | 7,630
2 | Hinton Geoffrey | Reducing the dimensionality of data with neural networks          | 4,509
3 | Hinton Geoffrey | A fast learning algorithm for deep belief nets                    | 4,538
4 | Hinton Geoffrey | Learning representations by back propagating errors               | 6,598
5 | Hinton Geoffrey | Dropout: A Simple Way to Prevent Neural Networks from Overfitting | 4,142
Table 3 Representative papers of Geoffrey Hinton.
1. Citing sentence: "The steepest descent algorithm, also known as the error backpropagation (EBP) algorithm [8,9], dispersed the dark clouds on the field of artificial neural networks and could be regarded as one of the most significant breakthroughs for training neural networks."
   Citing paper: Application of Neural Networks to Automatic Load Frequency Control
2. Citing sentence: "In addition to the development of new ANN algorithms that were more neural-inspired (e.g. Hopfield networks), another major breakthrough that helped lead to a resurgence in neural network research was the rediscovery of the backpropagation technique (LeCun, 1985, Rumelhart et al., 1986, Werbos, 1990)."
   Citing paper: A historical survey of algorithms and hardware architectures for neural-inspired and neuromorphic computing applications
3. Citing sentence: "The next major breakthrough happened in late 80s with the invention of back-propagation and a gradient-based optimization algorithm to train a neural network with one or two hidden layers with any desired number of nodes (Rumelhart et al., 1986)."
   Citing paper: Meta-analysis of deep neural networks in remote sensing: A comparative study of mono-temporal classification to support vector machines
Table 4 Highlights of citing sentences praising the contribution point as a breakthrough.
Contribution point stated by the authors in the original representative work:
  "We describe a new learning procedure, back-propagation, for networks of neurone-like units."
Contribution points identified in citing sentences (with share of citing sentences):
  1. "In MBGD, the learning rate is very important to the convergence speed and quality in training. Many different schemes, e.g., momentum [6], averaging [15], AdaGrad [16], RMSProp [17], Adam [18], etc., have been proposed to optimize the learning rate in neural network training. Adam may be the most popular one among them." [43.84%]
  2. "As for the extrapolation, a smooth activation function that only acts on the hidden layer(s) is recommended. Back-propagation is the second part of the algorithm [37]. This is the central mechanism that allows neural network methods to 'learn.'" [42.36%]
  3. "The feedforward multilayer perceptron is one of the most popular types of ANNs; it was developed by Rumelhart et al. [23], and it is presented in Supplementary 1. This network also consists of an input layer, one or more hidden layers, and one output layer." [10.95%]
Table 5 Comparison of contribution points of representative work and citing sentences.
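The bracketed percentages in Table 5 are the shares of citing sentences falling into each clustered contribution point. Given cluster sizes from the topic-clustering step, the shares follow by simple normalization; the sketch below is illustrative (the function name and input format are assumptions).

```python
def contribution_shares(cluster_sizes: dict) -> dict:
    """Convert cluster sizes (number of citing sentences per contribution
    point) into percentage shares like those reported in Table 5."""
    total = sum(cluster_sizes.values())
    if total == 0:
        return {}
    return {point: round(100.0 * n / total, 2)
            for point, n in cluster_sizes.items()}
```

For example, two equally sized clusters each receive a 50.0% share; the shares always sum to roughly 100% up to rounding.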
Figure 5. Changes in major new research topics of representative paper’s citing papers.
Figure 6. Distribution of representative work’s citing sentences in different disciplines.