Journal of Data and Information Science  2016, Vol. 1 Issue (3): 6-26    DOI: 10.20309/jdis.201617
Research Paper     
The Power-weakness Ratios (PWR) as a Journal Indicator: Testing the “Tournaments” Metaphor in Citation Impact Studies
Loet Leydesdorff1, Wouter de Nooy1 & Lutz Bornmann2
1 Amsterdam School of Communication Research, University of Amsterdam, Amsterdam 1001 NG, The Netherlands;
2 Division for Science and Innovation Studies, Administrative Headquarters of the Max Planck Society, Munich 80539, Germany

Abstract  Purpose: Ramanujacharyulu developed the Power-weakness Ratio (PWR) for scoring tournaments. The PWR algorithm has been advocated (and used) for measuring the impact of journals. We show how such a newly proposed indicator can empirically be tested.
Design/methodology/approach: PWR values can be found by recursively multiplying the citation matrix by itself until convergence is reached in both the cited and citing dimensions; the quotient of these two values is defined as PWR. We study the effectiveness of PWR using journal ecosystems drawn from the Library and Information Science (LIS) set of the Web of Science (83 journals) as an example. Pajek is used to compute PWRs for the full set, and Excel for the two smaller sub-graphs: (1) JASIST plus the seven journals that cited JASIST more than 100 times in 2012; and (2) MIS Quarterly plus the nine journals citing that journal to the same extent.
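The recursive procedure described above can be sketched in a few lines. This is a minimal illustration, not the authors' own implementation: the function name `pwr` and the matrix convention (C[i, j] = citations from journal i to journal j) are our assumptions, and the iteration is the standard power-iteration reading of Ramanujacharyulu's scheme.

```python
import numpy as np

def pwr(C, tol=1e-9, max_iter=1000):
    """Power-weakness ratio via iterated matrix powers.

    C is a square citation matrix with C[i, j] = citations from
    journal i (citing) to journal j (cited). 'Power' is iterated on
    the cited dimension, 'weakness' on the citing dimension; both
    converge to dominant eigenvectors, and PWR is their element-wise
    quotient.
    """
    n = C.shape[0]
    power = np.ones(n)
    weak = np.ones(n)
    for _ in range(max_iter):
        new_p = C.T @ power   # citations received, weighted recursively
        new_w = C @ weak      # citations given, weighted recursively
        new_p /= new_p.sum()  # normalize to keep values bounded
        new_w /= new_w.sum()
        converged = np.allclose(new_p, power, atol=tol) and \
                    np.allclose(new_w, weak, atol=tol)
        power, weak = new_p, new_w
        if converged:
            break
    return power / weak
```

Note that a journal with a (near-)zero iterated "citing" value makes the denominator vanish, which is exactly the distortion flagged under the research limitations below.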
Findings: A test using the set of 83 journals converged, but did not provide interpretable results. Further decomposition of this set into homogeneous sub-graphs shows that—like most other journal indicators—PWR can perhaps be used within homogeneous sets, but not across citation communities. We conclude that PWR does not work as a journal impact indicator; journal impact is not a tournament.
Research limitations: Journals that are not represented on the “citing” dimension of the matrix—for example, because they no longer appear, but are still registered as “cited” (e.g. ARIST)—distort the PWR ranking because of zeros or very low values in the denominator.
Practical implications: The association of “cited” with “power” and “citing” with “weakness” can be considered as a metaphor. In our opinion, referencing is an actor category and can be analyzed in terms of behavior, whereas “citedness” is a property of a document with an expected dynamics very different from that of “citing.” From this perspective, the PWR model is not valid as a journal indicator.
Originality/value: Arguments for using PWR are: (1) its symmetrical handling of the rows and columns in the asymmetrical citation matrix, (2) its recursive algorithm, and (3) its mathematical elegance. In this study, PWR is discussed and critically assessed.

Keywords: Citation; Impact; Ranking; Power; Matrix; Homogeneity
Received: 10 June 2016      Published: 02 August 2016
Acknowledgements: The authors thank Gangan Prathap for discussing the PWR method with us in detail.
Corresponding Author: Loet Leydesdorff
Cite this article:

Loet Leydesdorff, Wouter de Nooy & Lutz Bornmann. The Power-weakness Ratios (PWR) as a Journal Indicator: Testing the “Tournaments” Metaphor in Citation Impact Studies. Journal of Data and Information Science, 2016, 1(3): 6-26.


Bergstrom, C. (2007). Eigenfactor: Measuring the value and prestige of scholarly journals. College & Research Libraries News, 68, 314.
Blondel, V.D., Guillaume, J.L., Lambiotte, R., & Lefebvre, E. (2008). Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 8(10), P10008.
Brin, S., & Page, L. (1998). The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1-7), 107-117.
De Nooy, W., Mrvar, A., & Batagelj, V. (2011). Exploratory social network analysis with Pajek: Revised and expanded second edition. Cambridge: Cambridge University Press.
De Visscher, A. (2010). An index to measure a scientist's specific impact. Journal of the American Society for Information Science and Technology, 61(2), 310-318.
De Visscher, A. (2011). What does the g-index really measure? Journal of the American Society for Information Science and Technology, 62(11), 2290-2293.
Dong, S.B. (1977). A Block-Stodola eigensolution technique for large algebraic systems with nonsymmetrical matrices. International Journal for Numerical Methods in Engineering, 11(2), 247-267.
Franceschet, M. (2011). PageRank: Standing on the shoulders of giants. Communications of the ACM, 54(6), 92-101.
Garfield, E., & Sher, I.H. (1963). New factors in the evaluation of scientific literature through citation indexing. American Documentation, 14, 195-201.
Gingras, Y., & Larivière, V. (2011). There are neither “king” nor “crown” in scientometrics: Comments on a supposed “alternative” method of normalization. Journal of Informetrics, 5(1), 226-227.
Guerrero-Bote, V.P., & Moya-Anegón, F. (2012). A further step forward in measuring journals' scientific prestige: The SJR2 indicator. Journal of Informetrics, 6(4), 674-688.
Kamada, T., & Kawai, S. (1989). An algorithm for drawing general undirected graphs. Information Processing Letters, 31(1), 7-15.
Kleinberg, J.M. (1999). Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5), 604-632.
Leydesdorff, L. (2006). Can scientific journals be classified in terms of aggregated journal-journal citation relations using the Journal Citation Reports? Journal of the American Society for Information Science & Technology, 57(5), 601-613.
Leydesdorff, L. (2009). How are New citation-based journal indicators adding to the bibliometric toolbox? Journal of the American Society for Information Science and Technology, 60(7), 1327-1336.
Leydesdorff, L., & Bornmann, L. (2012). Percentile ranks and the integrated impact indicator (I3). Journal of the American Society for Information Science and Technology, 63(9), 1901-1902.
Leydesdorff, L., & Bornmann, L. (2016). The operationalization of “fields” as WoS Subject Categories (WCs) in evaluative bibliometrics: The cases of “Library and Information Science” and “Science & Technology Studies”. Journal of the Association for Information Science and Technology, 67(3), 707-714.
Leydesdorff, L., Bornmann, L., Mutz, R., & Opthof, T. (2011). Turning the tables in citation analysis one more time: Principles for comparing sets of documents. Journal of the American Society for Information Science and Technology, 62(7), 1370-1381.
Milojević, S., & Leydesdorff, L. (2013). Information Metrics (iMetrics): A research specialty with a socio-cognitive identity? Scientometrics, 95(1), 141-157.
Moed, H.F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3), 265-277.
Narin, F. (1976). Evaluative bibliometrics: The use of publication and citation analysis in the evaluation of scientific activity. Washington, DC: National Science Foundation.
Nicolaisen, J., & Frandsen, T.F. (2008). The reference return ratio. Journal of Informetrics, 2(2), 128-135.
Opthof, T., & Leydesdorff, L. (2010). Caveats for the journal and field normalizations in the CWTS (“Leiden”) evaluations of research performance. Journal of Informetrics, 4(3), 423-430.
Pinski, G., & Narin, F. (1976). Citation influence for journal aggregates of scientific publications: Theory with application to the literature of physics. Information Processing and Management, 12(5), 297-312.
Prathap, G. (2014). The best team at IPL 2014 and EPL 2013-2014. Science Reporter, August, 44-47.
Prathap, G., & Nishy, P. (in preparation). A size-independent journal impact metric based on social-network analysis. Preprint available at
Prathap, G., Nishi, P., & Savithri, S. (in press). On the orthogonality of indicators of journal performance. Current Science.
Price, D.J. de Solla (1976). A general theory of bibliometric and other cumulative advantage processes. Journal of the American Society for Information Science, 27(5), 292-306.
Price, D.J. de Solla (1981). The analysis of square matrices of scientometric transactions. Scientometrics, 3(1), 55-63.
Rafols, I., Leydesdorff, L., O'Hare, A., Nightingale, P., & Stirling, A. (2012). How journal rankings can suppress interdisciplinary research: A comparison between innovation studies and business & management. Research Policy, 41(7), 1262-1282.
Ramanujacharyulu, C. (1964). Analysis of preferential experiments. Psychometrika, 29(3), 257-261.
Todeschini, R., Grisoni, F., & Nembri, S. (2015). Weighted power-weakness ratio for multi-criteria decision making. Chemometrics and Intelligent Laboratory Systems, 146, 329-336.
Waltman, L., Yan, E., & van Eck, N.J. (2011a). A recursive field-normalized bibliometric performance indicator: An application to the field of library and information science. Scientometrics, 89(1), 301-314.
Waltman, L., van Eck, N.J., van Leeuwen, T.N., Visser, M.S., & van Raan, A.F.J. (2011b). Towards a new crown indicator: Some theoretical considerations. Journal of Informetrics, 5(1), 37-47.
West, J.D., Bergstrom, T.C., & Bergstrom, C.T. (2010). The Eigenfactor metrics: A network approach to assessing scholarly journals. College and Research Libraries, 71(3), 236-244.
Wouters, P. (1999). The citation culture. Amsterdam: Unpublished Ph.D. Thesis, University of Amsterdam.
Yan, E., & Ding, Y. (2010). Weighted citation: An indicator of an article's prestige. Journal of the American Society for Information Science and Technology, 61(8), 1635-1643.
Yanovsky, V. (1981). Citation analysis significance of scientific journals. Scientometrics, 3(3), 223-233.
Zhirov, A., Zhirov, O., & Shepelyansky, D.L. (2010). Two-dimensional ranking of Wikipedia articles. The European Physical Journal B, 77(4), 523-531.
Related articles:

[1] Cinzia Daraio. A Framework for the Assessment of Research and Its Impacts[J]. Journal of Data and Information Science, 2017, 2(4): 7-42.
[2] Zhao Dangzhi, Cappello Alicia, Johnston Lucinda. Functions of Uni- and Multi-citations: Implications for Weighted Citation Analysis[J]. Journal of Data and Information Science, 2017, 2(1): 51-69.
[3] Liang Guoqiang, Hou Haiyan, Hu Zhigang, Huang Fu, Wang Yajie, Zhang Shanshan. Usage Count: A New Indicator to Detect Research Fronts[J]. Journal of Data and Information Science, 2017, 2(1): 89-104.
[4] van Raan, Anthony F.J. Patent Citations Analysis and Its Value in Research Evaluation: A Review and a New Approach to Map Technology-relevant Research[J]. Journal of Data and Information Science, 2017, 2(1): 13-50.
[5] Fred Y. Ye, Mu-Hsuan Huang & Dar-Zen Chen. Comparative Study of Trace Metrics between Bibliometrics and Patentometrics[J]. Journal of Data and Information Science, 2016, 1(2): 13-31.
[6] Jiao Li, Si Zheng, Hongyu Kang, Zhen Hou & Qing Qian. Identifying Scientific Project-generated Data Citation from Full-text Articles: An Investigation of TCGA Data Citation[J]. Journal of Data and Information Science, 2016, 1(2): 32-44.
[7] Jian Du & Yishan Wu. A Bibliometric Framework for Identifying “Princes” Who Wake up the “Sleeping Beauty” in Challenge-type Scientific Discoveries[J]. Journal of Data and Information Science, 2016, 1(1): 50-68.
[8] Huifang XU, Shushu CHEN, Yajing LIU, Ling LENG & Wenjiao GUO. A new approach to assessing the science and technology competitiveness of China's provincial academies of sciences[J]. Journal of Data and Information Science, 2015, 7(4): 18-31.
[9] Shiji CHEN & Xiaolin ZHANG. Research on overlapping structures and evolution properties of co-citation network[J]. Journal of Data and Information Science, 2013, 6(1): 1-13.
[10] Jian DU, Bin ZHANG, Yang LI, Xiaoli TANG & Peiyang XU. A causational analysis of scholars' years of active academic careers vis-à-vis their academic productivity and academic influence[J]. Journal of Data and Information Science, 2011, 4(2): 77-91.
[11] ZHANG Yang & ZHANG Jie. An Informetric analysis of web citation in Chinese journals of Library and Information Science in recent years[J]. Journal of Data and Information Science, 2010, 3(3): 46-62.
[12] LI Rui & MENG Liansheng. On the framing of patent citations and academic paper citations in reflecting knowledge linkage: A discussion of the discrepancy of their divergent value-orientations[J]. Journal of Data and Information Science, 2010, 3(3): 37-45.
[13] LIU Xiaomin & ZHANG Jianyong. Influences of digital resource acquisition on scientific research behaviors—The statistical analysis on the full-text downloading quantity and cited times[J]. Journal of Data and Information Science, 2009, 2(4): 71-78.
[14] MA Feng & WU Yishan. A survey study on motivations for citation: A case study on periodicals research and library and information science community in China[J]. Journal of Data and Information Science, 2009, 2(3): 28-43.