Journal of Data and Information Science, 2018, 3(1): 19-39
doi: 10.2478/jdis-2018-0002
Does Monetary Support Increase the Number of Scientific Papers? An Interrupted Time Series Analysis
Yaşar Tonta
Department of Information Management, Faculty of Letters, Hacettepe University, 06800 Beytepe, Ankara, Turkey

Abstract:

Purpose: One of the main indicators of scientific production is the number of papers published in scholarly journals. Turkey ranks 18th in the world in terms of the number of scholarly publications. The objective of this paper is to find out whether the monetary support program initiated in 1993 by the Turkish Scientific and Technological Research Council (TÜBİTAK) to incentivize researchers and increase the number, impact, and quality of international publications has been effective in doing so.

Design/methodology/approach: We analyzed some 390,000 publications with Turkish affiliations listed in the Web of Science (WoS) database between 1976 and 2015, along with about 157,000 supported ones between 1997 and 2015. We used the interrupted time series (ITS) analysis technique (also known as “quasi-experimental time series analysis” or “intervention analysis”) to test whether TÜBİTAK’s support program helped increase the number of publications. We defined an ARIMA (1,1,0) model for the ITS data and observed the impact of TÜBİTAK’s support program in 1994, 1997, and 2003 (one, four, and 10 years after its start, respectively). The majority of the supported publications (93%) were full papers (articles), which were used as the experimental group, while other types of contributions functioned as the control group. We also carried out a multiple regression analysis.

Findings: TÜBİTAK’s support program has had negligible effect on the increase of the number of papers with Turkish affiliations. Yet, the number of other types of contributions continued to increase even though they were not well supported, suggesting that TÜBİTAK’s support program is probably not the main factor causing the increase in the number of papers with Turkish affiliations.

Research limitations: Interrupted time series analysis shows whether the “intervention” has had a significant effect on the dependent variable, but it does not explain what caused the increase in the number of papers if the intervention did not. Moreover, no “event(s)” other than the “intervention” that might affect the time series data (e.g., an increase in the number of research personnel over the years) should occur during the period of analysis, a prerequisite that is beyond the control of the researcher.

Practical implications: TÜBİTAK’s “cash-for-publication” program did not seem to have a direct impact on the increase in the number of papers published by Turkish authors, suggesting that small payments are not much of an incentive for authors to publish more. It might be a better strategy to concentrate limited resources on a few high-impact projects rather than to disperse them to thousands of authors as “micropayments.”

Originality/value: Based on 25 years’ worth of payments data, this is perhaps one of the first large-scale studies showing that “cash-for-publication” policies or “piece rates” paid to researchers tend to have little or no effect on the increase of researchers’ productivity. The main finding of this paper has some implications for countries wherein publication subsidies are used as an incentive to increase the number and quality of papers published in international journals. They should be prepared to consider reviewing their existing support programs (based usually on bibliometric measures such as journal impact factors) and revising their reward policies.

Key words: Performance-based research funding systems; Publication subsidies; Publication support programs; Interrupted time series analysis
1 Introduction

The number of scholarly papers and citations thereto are indirect indicators of the level of scientific development of countries. The number of scholarly papers with Turkish affiliations listed in citation indexes has increased tremendously over the years and Turkey ranks 18th in the world in terms of number of publications. Over 36,000 papers were published in 2015 alone, although their scientific impact in terms of the number of citations they gather is well below the average of the world, the European Union (EU) and the OECD countries.

In 1993, the Turkish Scientific and Technological Research Council (TÜBİTAK) initiated a monetary support program (UBYT) to incentivize researchers and increase the number, impact, and quality of international publications authored by Turkish researchers. Considerable percentages of papers with Turkish affiliations were supported in the early years of this program, even though the rate of support has gradually decreased (to c. 30%) over the years due to the steep increase in the number of published papers with Turkish affiliations. As part of the program, some 157,000 publications (93% of which were papers/articles) were supported between 1997 and 2015. The amount of support paid for each paper has been determined on the basis of the impact factor of the journal in which it was published.

The total amount of support was about 124 million Turkish Liras (in 2015 current prices; equal to c. 35 million USD). The number of papers supported, the total number of publications, and the amount of support increased four-, 10- and 13-fold, respectively, during this period.

The support program has been in place for almost a quarter century. Yet, its impact has not been evaluated in the past. We have been asked by TÜBİTAK to evaluate the effectiveness of the program and given the payment records of 157,000 supported publications. They included, among others, journal information (name, year, its class based on Journal Citation Reports’ subject categories), type of contribution (e.g., article, review) and the amount of support.

Based on the payment records provided, the characteristics (i.e., impact factors) of journals in which supported papers with Turkish affiliations appeared have been analyzed, the functioning of the support algorithm has been studied, and the effectiveness of the overall support program has been evaluated. Findings indicate that the authors of mediocre papers published in journals with relatively low impact factors have mostly been supported due to the use of skewed distributions of journal impact factors in determining the amount of support. The existing support algorithm, on the other hand, does not seem to function as conceived.

This paper presents only the findings of the interrupted time series analysis with a view to find out if the support program has had any impact on the increase of the number of papers with Turkish affiliations. It is organized as follows: The Literature Review section briefly discusses the findings of relevant studies including those that provide some background on the Turkish case. The Data Sources and Method section describes the data used and provides information on interrupted time series analysis. The detailed findings are presented thereafter (Findings and Discussion) along with the limitations of the study. The paper ends with Conclusions.

2 Literature Review

Performance-based research funding systems (PRFSs) came into being in the 1980s. Based on rewarding outputs, the rationale of PRFSs is to provide more support to institutions (or individuals) with higher performance so that those with lower performance will strive to improve theirs in order to get more support (Herbst, 2007). Yet, it is not clear whether PRFSs based on outputs and competition increase scientific productivity and the impact of outputs. In a relatively recent study comparing the PRFSs and outputs of eight countries, countries with less competitive PRFSs such as Denmark turned out to be as effective as those with more competitive PRFSs such as the UK and Australia (Auranen & Nieminen, 2010). Some researchers drew attention to the potential “side effects” of PRFSs based on competition, as they tend to “homogenize” research outputs, discourage experiments with new approaches, and reward researchers playing it “safe” even though their contributions may not have any societal impact (Geuna & Martin, 2003). The idea of increasing productivity on the basis of outputs and competition seems more complicated than decision-makers initially thought (Auranen & Nieminen, 2010). For instance, there appears to be some evidence (albeit with relatively small effect sizes) that China’s “cash-for-publication” policy tends to increase researchers’ productivity (Heywood, Wei, & Ye, 2011). Yet, such cash incentives for publications, in effect in China, South Korea, and Turkey, seem to increase the number of submissions but are negatively correlated with acceptance rates (Franzoni, Scellato, & Stephan, 2011).¹ (¹ Based on the countries of the first authors of papers submitted to the journal Science between 2000 and 2009, some 6,228, 1,345, and 84 papers came from China, South Korea, and Turkey, respectively. Yet, only 93 papers from China (1.5%), 18 papers from South Korea (1.3%), and 3 papers from Turkey (3.6%) were accepted for publication during this period (Franzoni, Scellato, & Stephan, 2011); the numbers of submitted and accepted papers from the respective countries come from the Excel tables included in the Supporting Online Material of that article. We calculated the acceptance rates based on the figures provided.)

There are mainly two types of PRFSs in use: (1) those based on peer review, or on informed peer review supported with bibliometric measures; and (2) those based solely on bibliometric measures such as journal impact factors. The UK’s Research Excellence Framework (REF) is the largest research assessment system in the world (De Boer et al., 2015). Based on peer review, REF (and its predecessor, the Research Assessment Exercise) has been used since 1986 to distribute funds to research institutes and universities on the basis of their performance. Despite their shortcomings, PRFSs based solely on bibliometric measures are on the rise, as they are easier and less costly than peer review to apply as a “proxy” for assessing performance. Therefore, increasingly more countries have come to prefer them.

PRFSs and publication support systems based on bibliometric measures generally use the number of papers published in refereed journals and their impact in terms of citations as the main criteria to determine the research institutes and researchers to be supported. Impact factors (IF) and article influence scores (AIS) of journals are the two most commonly used metrics.

Journal IF was originally proposed by the late Eugene Garfield (1972) to help librarians in their selection of journals for subscription. It is an indicator of the quality of a journal in general and measures the citation impact of an “average” paper published therein. It does not say anything about the quality of an individual paper in that journal and how many citations, if any, it would gather in a certain period of time after its publication (e.g., two years).

Citation distributions used to calculate the IFs of journals are quite skewed, indicating that a few papers published in a given journal get cited much more frequently while the majority go unnoticed or are rarely cited (Marx & Bornmann, 2013). This is the case even for the most prestigious journals with the highest IFs, such as Nature (IF = 38) and Science (IF = 35). As many as 75% of articles published in these journals get cited fewer times than their journal IFs indicate (Larivière et al., 2016, Table 2). Journal IFs vary by scientific discipline, too, as the number of researchers in each field, publication types (i.e., journal articles as opposed to books), and scholarly communication patterns tend to differ. In general, some 9%-10% of all the articles listed in Web of Science collect 44% of the total number of citations (Albarrán et al., 2011). More importantly, there exists no positive relationship between the number of citations that an article gets and the IF of the journal in which it is published (Zhang, Rousseau, & Sivertsen, 2017), and a large body of literature detailing the shortcomings of the use of journal IFs as a performance measure is readily available (e.g., Casadevall & Fang, 2012; Glänzel & Moed, 2002; Marx & Bornmann, 2013; Seglen, 1997; van Raan, 2005; Wouters et al., 2015). Yet, rather than checking the number of citations to the papers of individual researchers, PRFSs based on bibliometric measures continue to use journal IFs to assess the performance of individuals, even though journal IFs are quite misleading in predicting the number of citations that any given article might get. What follows are a few examples of PRFSs using journal IFs as a research assessment tool.

PRFSs are reviewed by several researchers (e.g., De Boer et al., 2015; European Commission, 2010; Geuna & Martin, 2003; Hicks, 2012). Most EU countries, Norway, USA, Australia, New Zealand, and China have some PRFSs in place. We provide a few examples of PRFSs that either solely use journal IF or use it in combination with peer review (excluding the ones based only on peer review such as REF in the UK).

Italy uses a PRFS where an expert panel decides whether to use citation analysis or peer review (or both) for each publication. Universities are ranked on the basis of a quality score consisting of citations and other journal metrics, which determine the amount of support each university gets. Some 30% of the research funds are distributed according to the outcome of this evaluation (Abramo, D’Angelo, & Di Costa, 2011; Abramo & D’Angelo, 2011, 2016).

Similarly, Spain uses a mixed system, although researchers are encouraged to publish in journals that are listed in the top quarters of JCR’s subject categories. Researchers who publish in such journals receive monetary support that ranges somewhere between 3% and 15% of their monthly salaries (Osuna, Cruz-Castro, & Sanz-Menéndez, 2011).

A number of countries such as the Czech Republic, China, Finland, and Australia use the journal IF exclusively to support research institutes and individual researchers. Finland, for instance, linked the journal IF directly with research support by legislation (Adam, 2002). Similarly, Australia and the Czech Republic make a direct linkage between research evaluation and funding by counting scholarly outputs and assigning a score to each on the basis of bibliometric measures. These scores are then used to determine the amount of monetary support, and papers that appear in refereed journals or journals with relatively higher IFs get the highest scores (Butler, 2003; Butler, 2004; Good et al., 2015, Table 3). Norway has a similar system based on weighting journals according to various criteria and has created three different journal lists (Schneider, 2009). China, on the other hand, uses the journal IF most comprehensively: academic recruitments and promotions, university rankings (and the amount of research support universities get), and the support of Chinese journals listed in Chinese citation indexes all rely on journal IFs. The procedure seems to have been automated, as a researcher publishing in a journal with a certain IF knows how much support s/he would get. For instance, the author of a paper published in a journal with an IF higher than 15 receives 300,000 Yuan (c. 43,000 USD) (Shao & Shen, 2012)! However, the use and appropriateness of such formulaic approaches have been questioned lately, with a suggestion that China needs “to rethink its metrics- and citation-based research reward policies” (Teixeira da Silva, 2017).

Turkey is no exception: journal IFs are considered an indicator of quality and have been used as an important criterion in academic promotions since the early 1990s. In addition to individual universities, TÜBİTAK has initiated a nationwide monetary support system based exclusively on journal IFs. Journals classified under Q1, Q2, etc., in JCR’s subject categories have been used to determine the monetary compensation (Tonta, 2015). More recently (2016), the Turkish Higher Education Council (HEC) started a new support scheme based mostly on journal IFs, under which faculty whose scores are above a certain threshold in terms of the number of academic activities (mostly publications) during the previous year get an additional 10% to 15% on top of their regular monthly salaries throughout the year (Akademik, 2015).

It should be noted that performance-based research funding and publication support systems based on quantitative measures tend to have some adverse effects. Researchers seem to adjust to the requirements very easily and change their publication patterns and behaviors. Such systems are prone to “gaming,” too, and researchers become more “opportunistic” (e.g., publication “inflation”) and less ethical (e.g., “fake” citations) over time. Unintended consequences of PRFSs in several countries (e.g., Australia, the Czech Republic, and Spain) have been reported in the literature (Butler, 2003; Butler, 2004; Good et al., 2015; Osuna, Cruz-Castro, & Sanz-Menéndez, 2011; Tonta, 2014). For example, more papers tend to get published in journals with relatively lower IFs. A similar trend has also been observed in Turkey (Kamalski et al., 2017; Önder et al., 2008; Yurtsever et al., 2001, 2002). As Goodhart’s Law states, “When a measure becomes a target, it ceases to be a good measure.”

It should also be noted that the correlation between competitive PRFSs and research productivity is not clear-cut (Auranen & Nieminen, 2010). Excessive competition seems to reduce the time and energy that would otherwise be spent on research. In this paper, we test whether TÜBİTAK’s publication support system has had an impact on the increase in the number of publications with Turkish affiliations listed in citation indexes.

3 Data Sources and Method

We performed a search on Web of Science (WoS) (December 19, 2016) to identify all the publications with Turkish affiliations listed in Science Citation Index (SCI), Social Sciences Citation Index (SSCI) and Arts & Humanities Citation Index (A&HCI) between 1976 and 2015. More than 390,000 records were retrieved, 81% of which were full papers (articles) while the rest were other types of publications (e.g., reviews, notes, and letters to the editor).

TÜBİTAK provided the payment data for about 157,000 supported publications (93% of which were papers). These records were first cleaned, then coded as either “full papers” (articles) or “other” types of publications, classified under various criteria (e.g., year, class of journal, amount of support paid), ranked and combined, if necessary.

We used MS Excel and SPSS 23 for the detailed analysis of data and prepared both WoS and TÜBİTAK records for interrupted time series analysis outlined below (Interrupted, 2013). (See Appendix A for time series data prepared for interrupted time series analysis.)

The interrupted time series (ITS) analysis technique (also known as quasi-experimental time series analysis or intervention analysis) is used in this paper to measure the impact of TÜBİTAK’s support program. ITS analysis measures if an “event” occurring at any given stage has an immediate or delayed effect on the time series data. For instance, an unexpected political development in a given country may increase the exchange rates, or a terrorist attack may reduce the number of tourists. These “events” (called “interventions”) may be planned or not planned. As ITS analysis is a quasi-experimental method, it is possible (by means of using a control group) to verify if the change has occurred because of the intervention.

ITS analysis is based on the following statistical model:

Y_t = β_pre + (β_post − β_pre)·D_t + e_t, (1)

where Y_t represents the t-th observation in the time series, D_t is a dummy variable equal to 0 before the intervention and 1 afterwards, β_pre and β_post represent the levels of the series before and after the intervention, respectively, and e_t is the error associated with Y_t. The null hypothesis

H0: β_pre − β_post = 0, (2)

states that there is no statistically significant difference between the levels of the series before and after the intervention (i.e., the intervention has no impact on the dependent variable) (McDowall et al., 1980). It is assumed that the parameters of the time series model stay the same before and after the intervention and that no other events affecting the parameters take place. ITS analysis can be applied to both stationary and non-stationary (“ergodic”) time series. The ARIMA model is used for non-stationary series whose arithmetic means, variances, and covariances change over time. This model is expressed as ARIMA (p, d, q), where p, d, and q represent the orders of the autoregressive (AR), integrated (I), and moving average (MA) components, respectively. If the time series is not stationary, it is first differenced (d times) to make its mean and variance constant over the period studied.
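As a minimal numerical illustration of the level-change idea behind Eq. (1), the pre- and post-intervention levels can be estimated by least squares with a step dummy. The data below are entirely synthetic (not the study’s series); the sketch only shows the mechanics.

```python
import numpy as np

# Minimal sketch (synthetic data): estimating the pre- and post-intervention
# levels of Eq. (1) with a step dummy D_t (0 before, 1 after the intervention).
rng = np.random.default_rng(42)
n, t0 = 40, 17                             # 40 annual observations, intervention at t = 17
D = (np.arange(n) >= t0).astype(float)
y = 100 + 30 * D + rng.normal(0, 5, n)     # true pre-level 100, post-level 130

X = np.column_stack([np.ones(n), D])       # design matrix: intercept + step dummy
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_pre = beta[0]                         # estimated pre-intervention level
beta_post = beta[0] + beta[1]              # estimated post-intervention level
print(round(beta_pre, 1), round(beta_post, 1))
```

In practice the estimated level shift would be tested against its standard error; the paper’s ARIMA (1,1,0) specification additionally models autocorrelation in the errors, which this sketch omits.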

We have WoS data of publications with Turkish affiliations (1976-2015) and data of supported publications by TÜBİTAK (1997-2015). The program (“intervention”) started in 1993 and enough data points exist both before (1976-1992) and after (1993-2015) the intervention so as to be able to apply ITS analysis to time series data (Cochrane, 2002).

It is not always easy to determine exactly when a performance-based funding system is introduced in a given institution and how long it takes for the system to start to have an impact on the publication output of that institution (van den Besselaar, Heyman, & Sandstrom, 2017; Butler, 2017; Hicks, 2017). We took the date of the decision of TÜBİTAK’s Scientific Board to initiate the support program (June 12, 1993) as the starting date. As relatively few researchers benefited from the support program in its early years, we thought that the effect of the program might be observed with some delay (lag) and therefore measured its delayed effect one (1994), four (1997), and 10 years (2003) after its start.

We have no data on papers (full articles) whose authors were not supported. However, the relatively small group of authors of other types of contributions can function as a control group, as only 3% of the total amount of support on average was set aside for such contributions even though 19% of publications were of that nature. The authors of other types of contributions were paid half of what the authors of full papers were, and a mere 1% of the support budget was allocated to them in 2013, for example.² (² This percentage should ideally be 0 (zero) for the group to function as a true control group. Yet, we think it can be used as a control group with some caution, and generalizations should be interpreted accordingly.) In other words, we can find out whether TÜBİTAK’s support program has had any impact on the increase in the number of papers by comparing it with that of other types of contributions. If the number of other types of contributions, which were not well supported, did not increase but the number of supported papers did, we can deduce that the source of the impact was the support program. Conversely, if, despite the lack of support, the number of other types of contributions increased along with the number of papers receiving full monetary support, then the increase in the latter cannot be attributed to the program, suggesting that some factor(s) other than the support program may have played a role in this increase.

4 Findings and Discussion

The descriptive data about the number of papers and the total number of publications originating from Turkey are presented in Table 1 and Figure 1. The rate of increase is quite steep, especially from the 2000s onwards. This rate of increase made Turkey one of the fastest-growing countries in the world in those years in terms of the number of papers, and Turkey moved up the ladder very quickly, from 45th in 1983 to 25th in 1999 and 18th in 2008, contributing 1.56% of the world’s overall scientific production.

Table 1

Number of publications with Turkish affiliations (1976-2015).


Figure 1.

Number of papers and total number of publications with Turkish affiliations (1976-2015).

A considerable percentage of these publications were supported by TÜBİTAK’s support program when it was first initiated in 1993. However, the support program seems not to have kept up with the pace of increase of papers, and the percentage of papers supported went down from 70% in the early 2000s to below 30% in recent years (Table 2, Figure 2).

Table 2

Number of papers supported by TÜBİTAK (1997-2015).


Figure 2.

Number of papers listed in WoS with Turkish affiliations and supported by TÜBİTAK (1997-2015).

The detailed analysis of changes in TÜBİTAK’s support policies over the years is beyond the confines of this paper (see Tonta, 2017b). Instead, we concentrate on whether TÜBİTAK’s support program has actually played a role in the steep increase in the number of papers by Turkish researchers. The time path of the number of papers listed in the Web of Science (WoS) originating from Turkey between 1976 and 2015 is given below (Figure 3). The intervention point (1993) is marked on the graph. As there is an increasing trend in the number of papers both before and after the intervention, we took the first difference of the time series (d = 1) to make it stationary. Consequently, the autocorrelation function (ACF) and partial autocorrelation function (PACF) of the time series stayed within the confidence intervals (Figure 4).


Figure 3.

Time path of papers with Turkish affiliations (1976-2015).


Figure 4.

Correlograms of autocorrelation (ACF) and partial autocorrelations (PACF) functions.
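The effect of first-differencing on the autocorrelation structure can be illustrated with synthetic numbers (assumed, not the study’s annual counts): a trending series has a lag-1 autocorrelation near 1, and taking the first difference (d = 1) removes the trend.

```python
import numpy as np

def acf(x, lag=1):
    """Sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

# Synthetic trending series standing in for the annual paper counts.
rng = np.random.default_rng(0)
t = np.arange(40)
papers = 50 + 14 * t + rng.normal(0, 20, t.size)

diff = np.diff(papers)          # first difference (d = 1) removes the linear trend
print(acf(papers), acf(diff))   # near 1 for the raw series, much smaller after differencing
```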

We then defined an ARIMA (1, 1, 0) model for the interrupted time series data and examined the impact of TÜBİTAK’s support program in 1994, 1997, and 2003 (one, four, and 10 years after its start, respectively). The test statistic of the ARIMA model shows that the defined model is suitable for the time series data (Χ² = 23.531, DF = 17, p = .133) (Table 3). The parameters of the ARIMA model (estimates, SE, t- and p-values) are given in Table 4. The ARIMA model did not produce statistically significant results (coefficient = .153, SE = .170, t = .899, p = .375). The coefficient for “Time series” in Table 4 gives the slope of the regression line before the intervention (14.051), which is used to analyze the different time points by taking into account the existing trend in the data before calculating the effect of the intervention. The coefficient for “Before/after Support Program” represents the value on the y-axis when x equals 0 (zero) and is used to measure the effect of the intervention at later time points. The coefficient for “Effect” (29.091) gives the difference between the slopes before and after the intervention. Adding this difference to the pre-intervention slope (14.051) yields the post-intervention slope (43.142) (Interrupted, 2013).

Table 3

Test statistic (Ljung Box).

Table 4

ARIMA Model Parameters.
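The slope arithmetic behind Table 4 (post-intervention slope = pre-intervention slope + “Effect”, e.g., 14.051 + 29.091 = 43.142) corresponds to a segmented-trend design. A hypothetical sketch with made-up data and coefficients chosen near those values, fitted by ordinary least squares rather than the paper’s ARIMA model:

```python
import numpy as np

# Hypothetical segmented-trend sketch (synthetic data): "Effect" is the change
# in slope at the intervention, so post_slope = pre_slope + effect.
rng = np.random.default_rng(1)
n, t0 = 40, 17
t = np.arange(n, dtype=float)
D = (t >= t0).astype(float)
y = 200 + 14 * t + 29 * np.maximum(t - t0, 0) + rng.normal(0, 10, n)

# Design: intercept, pre-intervention trend, level step, post-intervention slope change.
X = np.column_stack([np.ones(n), t, D, np.maximum(t - t0, 0)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pre_slope, effect = coef[1], coef[3]
print(round(pre_slope, 1), round(pre_slope + effect, 1))  # pre- and post-intervention slopes
```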

In order to see the effect of the support program on the number of papers with Turkish affiliations, we continued with this model. The pre- and post-intervention slopes are the same for all analyses. It is possible to see the direct effect of the intervention on the number of papers with Turkish affiliations (Table 5). According to the model, an additional 564 papers were published in 1994 because of the support program. However, the effect of the support program is not statistically significant (p = .157). The delayed effect of the program did not materialize in later years, either, as the additional numbers of papers attributable to the program were limited (651 papers in 1997, and 826 in 2003) and the effects are not statistically significant (p > .05). As the effect of the program has been negligible, the formula for the effect of the intervention is not given.

Table 5

Values showing the delayed effect of TÜBİTAK’s support program.

Despite the fact that other types of contributions (non-papers) have been supported very little during the period of analysis, the rate of their increase is on a par with that of the generously supported papers (see Figure 5). The slopes of the linear regression lines for papers and non-papers follow closely parallel trends, with corresponding R² values (y = 738.01x − 1×10⁶, R² = 0.814 for papers; y = 173.78x − 344,912, R² = 0.766 for non-papers). As a control group, the continuous increase in other types of publications seems to confirm the results of the interrupted time series analysis. For instance, some 4,000-7,000 publications of other types have been published annually in recent years, of which only a few hundred were supported. Yet, the number of other publications continues to increase regardless of support, suggesting that TÜBİTAK’s support program is probably not the main factor behind the increase in the number of papers with Turkish affiliations. The main finding of this paper is, to some extent, in line with evidence that researchers with Turkish affiliations do not seem to attach much importance to TÜBİTAK’s “cash-for-publication” program (Yuret, 2017).


Figure 5.

Rate of increase of papers and non-papers.
Note: The scales of the left and right y-axes are different. The y-axis on the left represents the number of papers while the one on the right represents the number of non-papers.
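The comparison underlying Figure 5 amounts to fitting separate linear trends to the two series and comparing their growth. A sketch with synthetic stand-in data (the slopes 738.01 and 173.78 come from the text; the numbers generated here are made up):

```python
import numpy as np

def fit_line(x, y):
    """Least-squares slope and R^2 of y = a*x + b."""
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    return float(a), float(1 - resid.var() / y.var())

# Synthetic stand-ins for the two Figure 5 series (not the study's data).
rng = np.random.default_rng(7)
years = np.arange(1976, 2016, dtype=float)
papers = 738 * (years - 1976) + rng.normal(0, 2000, years.size)
others = 174 * (years - 1976) + rng.normal(0, 600, years.size)

slope_p, r2_p = fit_line(years, papers)
slope_o, r2_o = fit_line(years, others)
print(round(slope_p), round(slope_o))  # both groups keep growing despite unequal support
```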

5 Limitations of the Study

It should be noted, though, that interrupted time series analysis has some limitations. The assumption that no other “event” or “events” occurred during the period of analysis that might have affected the time series data is one of them. For example, the prerequisite of having papers published in journals listed in citation indexes for academic promotion may have triggered this increase, as more than 90% of research in Turkey is carried out in universities, and the number of academic personnel in universities has increased tremendously over the years. Moreover, in addition to the number of research personnel in universities, the number of papers may be increasing due to a number of other factors such as the number of researchers per 10,000 people and the share of R&D expenditures in the Gross Domestic Product (GDP). As indicated earlier, even though some positive correlation between PRFSs and the number of papers has been observed, this may not necessarily point to a strong causality between the two. As was the case in Spain (Osuna, Cruz-Castro, & Sanz-Menéndez, 2011), the number of papers with Turkish affiliations continues to increase perhaps not because of TÜBİTAK’s support program but because of other factors such as the growth and maturity of universities’ research systems, including academic personnel.

We should also note that interrupted time series analysis tells us whether the intervention has had a significant effect on the dependent variable, but it does not tell us what caused the increase in the number of papers if the intervention did not. To find this out, we carried out a multiple regression analysis and observed a fairly strong correlation between the number of papers with Turkish affiliations and the number of academic personnel, as well as the number of supported papers. However, we decided not to report the results of the multiple regression analysis, as the Durbin-Watson statistic was rather small (0.921), probably indicating serial autocorrelation and thereby making the results less reliable. This can to some extent be observed in Figure 2: the correlation between the number of papers with Turkish affiliations and the number of supported papers was positive and statistically significant between 1997 and 2006, whereas it was negative and not statistically significant between 2007 and 2015.
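The Durbin-Watson statistic referred to above is simple to compute from residuals: values near 2 indicate no first-order serial correlation, while values well below 2 (such as the reported 0.921) point to positive autocorrelation. A self-contained illustration with simulated residuals (assumed data, for demonstration only):

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum of squared successive differences / sum of squared residuals."""
    resid = np.asarray(resid, dtype=float)
    return float(np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2))

rng = np.random.default_rng(3)
iid = rng.normal(size=500)            # independent residuals: DW near 2

ar1 = np.empty(500)                   # positively autocorrelated residuals (rho = 0.7):
ar1[0] = rng.normal()                 # DW near 2 * (1 - rho) = 0.6
for i in range(1, 500):
    ar1[i] = 0.7 * ar1[i - 1] + rng.normal()

print(round(durbin_watson(iid), 2), round(durbin_watson(ar1), 2))
```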

For a more definitive answer to the question of whether TÜBİTAK’s support program has had any effect on the increase in the number of papers with Turkish affiliations, a true control group is needed. In other words, the rate of increase of papers supported by TÜBİTAK needs to be compared with that of non-supported ones. Even such a comparison may not be sufficient to reveal the causality, should there be any, between TÜBİTAK’s support program and the steep increase in the number of papers with Turkish affiliations. For this, individual-level data for both TÜBİTAK-supported and non-supported papers are needed to see if the increase is due to an increase in the productivity of: (1) the same researchers benefiting from TÜBİTAK’s support program; (2) more researchers responding to TÜBİTAK’s cash incentives; (3) researchers who have never sought TÜBİTAK support for their papers in the past; or (4) a combination of some or all of the above.

6 Conclusions

As part of TÜBİTAK’s support program, the authors of over 157,000 publications received more than 124 million Turkish Liras (in 2015 current prices, c. 35 million USD) as monetary support between 1997 and 2015. Yet, two thirds of all payments were less than 826 liras (c. 230 USD). These “micropayments” might be one of the reasons why, according to the results of the interrupted time series analysis, the program did not seem to have a direct impact on the increase in the number of papers published by Turkish authors. Such small payments were likely not much of an incentive for authors to publish more.

We should point out that the objective of the support program is not to increase the number of papers per se but to increase their impact and quality, as stated in the By-Law of TÜBİTAK’s support program (TÜBİTAK, 2016). Some authors may find the small payments satisfactory. Yet, if such small payments do not help achieve the program’s objectives, measures should be taken to correct this. The support program seems to have functioned as a mechanism for transferring small payments to authors without any considerable improvement in the impact and quality of their papers. The transaction costs of such small payments should be borne in mind, as well as the opportunity costs of forgoing increases in the impact and quality of papers. For instance, it might be a better strategy to concentrate limited resources on a few high-impact projects rather than to disperse them every year as “pocket money” to the authors of some 10,000 papers that appear mostly in journals with relatively low impact factors. The sustainability of the existing support program should also be considered, and its impact should be monitored more often.

Such support programs should function as leverage to speed up the scientific and economic development of countries. A thorough study of why the support program did not seem to function as intended should be carried out. After a comprehensive review of existing support programs, new policies should be instituted to increase the impact and quality of scientific papers originating from Turkey, and TÜBİTAK’s support program should be redesigned accordingly.

Based on 25 years’ worth of payments data, this is perhaps one of the first large-scale studies showing that “cash-for-publication” policies, or “piece rates” paid to researchers, tend to have little or no effect on researchers’ productivity. The main finding of this paper has implications for countries where publication subsidies are used as an incentive to increase the number and quality of papers published in international journals. They should be prepared to review their existing support programs (usually based on bibliometric measures such as journal impact factors) and revise their reward policies accordingly.

Appendix A. Time series data prepared for interrupted time series analysis (1976-2015)

Acknowledgments

This paper is based on a report (in Turkish) on the evaluation of TÜBİTAK’s Support Program of International Scientific Publications (UBYT), which has recently been published by TÜBİTAK as a monograph (Tonta, 2017b) (for a summary in English, see http://yunus.hacettepe.edu.tr/~tonta/yayinlar/tonta_UBYT_summary.pdf). Research was carried out during my sabbatical year (2016-2017) at the School of Library and Information Science of Humboldt University (HU-IBI) in Berlin, Germany. I am grateful to my family; my colleagues at the Hacettepe University Department of Information Management in Ankara, Turkey, who assumed some of my responsibilities during my absence; Mr. Mehmet Mirat Satoğlu of TÜBİTAK and his colleagues, who provided the payments data; Professor Michael Seadle of HU-IBI, who offered a pleasant work environment with access to information resources; and Dr. Umut Al of Hacettepe University and Ms. Müge Akbulut of Yıldırım Beyazıt University, who meticulously reviewed earlier versions of the report and offered help whenever I needed it. Views expressed in this paper with regard to TÜBİTAK’s support program may not necessarily reflect those of TÜBİTAK. Remaining errors are of course my own.

The author has declared that no competing interests exist.

References

[1]
Abramo G., & D’Angelo C.A. (2011). National-scale research performance assessment at the individual level. Scientometrics, 86(2), 347-364.
DOI:10.1007/s11192-010-0297-2
[2]
Abramo G., & D’Angelo C.A. (2016). Refrain from adopting the combination of citation and journal metrics to grade publications, as used in the Italian national research assessment exercise (VQR 2011-2014). Scientometrics, 109(3), 1-13.
DOI:10.1007/s11192-016-2008-0
[3]
Abramo G., D’Angelo C.A., & Di Costa F. (2011). National research assessment exercises: A comparison of peer review and bibliometrics rankings. Scientometrics, 89: 929. https://doi.org/10.1007/s11192-011-0459-x.
DOI:10.1007/s11192-011-0459-x
[4]
Adam D. (2002). Citation analysis: The counting house. Nature, 415, 726-729.
DOI:10.1038/415726a
[5]
Akademik Teşvik Ödeneği Yönetmeliği (By-law of Payment of Academic Incentive). (2015). Resmî Gazete.
[6]
Albarrán P., Crespo J.A., Ortuño I., & Ruiz-Castillo J. (2011). The skewness of science in 219 subfields and a number of aggregates. Scientometrics, 88(2), 385-397.
DOI:10.1007/s11192-011-0407-9
[7]
Auranen O., & Nieminen M. (2010). University research funding and publication performance—An international comparison. Research Policy, 39(6), 822-834.
DOI:10.1016/j.respol.2010.03.003
[8]
Butler L. (2003). Explaining Australia’s increased share of ISI publications—the effects of a funding formula based on publication counts. Research Policy, 32(1), 143-155.
DOI:10.1016/S0048-7333(02)00007-0
[9]
Butler L. (2004). What happens when funding is linked to publication counts? In H.F. Moed et al. (Eds.), Handbook of Quantitative Science and Technology Research: The Use of Publication and Patent Statistics in Studies of S&T Systems (pp. 389-405). Dordrecht: Kluwer.
[10]
Butler L. (2017). Response to van den Besselaar et al.: What happens when the Australian context is misunderstood. Journal of Informetrics, 11(3), 919-922.
DOI:10.1016/j.joi.2017.05.017
[11]
Casadevall A., & Fang F.C. (2012). Causes for the persistence of impact factor mania. mBio, 5(2). Retrieved on April 28, 2017.
[12]
Cochrane Effective Practice and Organisation of Care Review Group. (2002). Data Collection Checklist.
[13]
De Boer H., Jongbloed B.W.A., Benneworth S., Cremonini L., Kolster R., Kottmann A., … & Vossensteyn J.J. (2015). Performance-based Funding and Performance Agreements in Fourteen Higher Education Systems. Enschede: University of Twente.
[14]
European Commission (2010). Assessing Europe’s University-Based Research.
[15]
Franzoni C., Scellato G., & Stephan P. (2011). Changing incentives to publish. Science, 333(6043), 702-703.
[16]
Garfield E. (1972). Citation analysis as a tool in journal evaluation. Science, 178(4060), 471-479.
DOI:10.1126/science.178.4060.471
[17]
Geuna A., & Martin B. (2003). University research evaluation and funding: An international comparison. Minerva, 41(4), 277-304.
[18]
Good B., Vermeulen N., Tiefenthaler B., & Arnold E. (2015). Counting quality? The Czech performance-based research funding system. Research Evaluation, 24(2), 91-105.
DOI:10.1093/reseval/rvu035
[19]
Glänzel W., & Moed H.F. (2002). Journal impact measures in bibliometric research. Scientometrics, 53(2), 171-193.
DOI:10.1023/A:1014848323806
[20]
Herbst M. (2007). Financing public universities: The case of performance funding. Dordrecht: Springer.
DOI:10.1007/978-1-4020-5560-7
[21]
Heywood J.S., Wei X., & Ye G. (2011). Piece rates for professors. Economics Letters, 113(3), 285-287.
DOI:10.1016/j.econlet.2011.08.005
[22]
Hicks D. (2012). Performance-based university research funding systems. Research Policy, 41(2), 251-261.
DOI:10.1016/j.respol.2011.09.007
[23]
Hicks D. (2017). What year? Difficulties in identifying the effect of policy on university output. Journal of Informetrics, 11(3), 933-936.
DOI:10.1016/j.joi.2017.05.020
[24]
Interrupted time series analysis. (2013). Retrieved on April 28, 2017.
[25]
Kamalski J. et al. (2017). World of Research 2015: Revealing Patterns and Archetypes in Scientific Research. Elsevier Analytic Services.
[26]
Larivière V., Kiermer V., MacCallum C., … & Curry S. (2016). A simple proposal for the publication of journal citation distributions.
[27]
McDowall D., McCleary R., Meidinger E.E., & Hay R.A. (1980). Interrupted Time Series Analysis. Newbury Park: Sage.
[28]
Osuna C., Cruz-Castro L., & Sanz-Menéndez L. (2011). Overturning some assumptions about the effects of evaluation systems on publication performance. Scientometrics, 86(3), 575-592.
DOI:10.1007/s11192-010-0312-7
[29]
Önder C., Şevkli M., Altınok T., & Tavukçuoğlu C. (2008). Institutional change and scientific research: A preliminary bibliometric analysis of institutional influences on Turkey’s recent social science publications. Scientometrics, 76(3), 543-560.
DOI:10.1007/s11192-007-1878-6
[30]
Schneider J.W. (2009). An outline of the bibliometric indicator used for performance-based funding of research institutions in Norway. European Political Science, 8(3), 364-378.
DOI:10.1057/eps.2009.19
[31]
Seglen P.O. (1997, February 5). Why the impact factor of journals should not be used for evaluating research. British Medical Journal, 314(7079), 498-502.
[32]
Shao J., & Shen H. (2012). Research assessment: The overemphasized impact factor in China. Research Evaluation, 21(3), 199-203.
DOI:10.1093/reseval/rvs011
[33]
TÜBİTAK Türkiye Adresli Uluslararası Bilimsel Yayınları Teşvik (UBYT) Programı Uygulama Usul ve Esasları (Implementation Procedures and Principles of TÜBİTAK’s Support Program of International Scientific Publications with Turkish Affiliations). (2016).
[34]
Teixeira da Silva J.A. (2017). Does China need to rethink its metrics- and citations-based research reward policies? Scientometrics, 112(3), 1853-1857.
DOI:10.1007/s11192-017-2430-y
[35]
Tonta Y. (2014). Use and misuse of bibliometric measures for assessment of academic performance, tenure and publication support. In the 77th Annual Meeting of the Association for Information Science and Technology, October 31 - November 5, 2014, Seattle, WA.
[36]
Tonta Y. (2015). Support programs to increase the number of scientific publications using bibliometric measures: The Turkish case. In A.A. Salah et al. (Eds.), Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference, Istanbul, Turkey, 29 June to 4 July, 2015 (pp. 767-777). İstanbul: Boğaziçi University.
[37]
Tonta Y. (2017a). Does monetary support increase the number of scientific papers? An interrupted time series analysis. Paper presented at ISSI 2017: 16th International Scientometrics and Informetrics Conference, 16-20 October 2017, Wuhan University, Wuhan, China.
[38]
Tonta Y. (2017b). TÜBİTAK Türkiye Adresli Uluslararası Bilimsel Yayınları Teşvik (UBYT) Programının Değerlendirilmesi (Evaluation of TÜBİTAK’s Support Program of International Scientific Publications with Turkish Affiliations). Ankara: TÜBİTAK ULAKBİM.
[39]
van den Besselaar P., Heyman U., & Sandström U. (2017). Perverse effects of output-based research funding? Butler’s Australian case revisited. Journal of Informetrics, 11(3), 905-918.
[40]
van Raan A.F.J. (2005). Fatal attraction: Conceptual and methodological problems in the ranking of universities by bibliometric methods. Scientometrics, 62(1), 133-143.
DOI:10.1007/s11192-005-0008-6
[41]
Wouters P., Thelwall M., Kousha K., Waltman L., de Rijcke S., Rushforth A., & Franssen T. (2015). The metric tide: Literature review (Supplementary Report I to the Independent Review of the Role of Metrics in Research Assessment and Management).
[42]
Yuret T. (2017). Do researchers pay attention to publication subsidies? Journal of Informetrics, 11(2), 423-434.
DOI:10.1016/j.joi.2017.02.010
[43]
Yurtsever E., Gülgöz S., Yedekçioğlu Ö.A., & Tonta M. (2001). Sosyal Bilimler Atıf Dizini’nde (SSCI) Türkiye 1970-1999 (Turkey in Social Sciences Citation Index (SSCI): 1970-1999). Ankara: Türkiye Bilimler Akademisi.
[44]
Yurtsever E., Gülgöz S., Yedekçioğlu Ö.A., & Tonta M. (2002). Sağlık Bilimleri, Mühendislik ve Temel Bilimlerde Türkiye’nin Uluslararası Atıf Dizinindeki Yeri 1973-1999 (Turkey’s Place in the International Citation Index in Health Sciences, Engineering and Basic Sciences, 1973-1999). Ankara: Türkiye Bilimler Akademisi.
Keywords: Performance-based research funding systems; Publication subsidies; Publication support programs; Interrupted time series analysis