The project-based funding model: what impact on contemporary research? (1/2)


PART 1: HOW WORRIED SHOULD WE BE ABOUT COMPETITIVE MODELS IN RESEARCH FUNDING?

If we sketch a typology of research funding possibilities for an individual researcher or for a team, we can identify the following models:

  • Permanent funding allocated to a scientific team, structure or laboratory by a national research organization (e.g. the French CNRS);
  • Public funding from a public research council (regional, national or transnational) for the implementation of a given research project (e.g., the US National Science Foundation (NSF), the UK Research Councils, the European Research Council, the French Agence Nationale de la Recherche (ANR));
  • Private funding from investors (private companies seeking to invest in research and development, sponsors, non-governmental organizations, crowdfunders).

One might be tempted to assume that, unlike private research funding (from investors interested in concrete, short-term results directly connected to technology transfers with potential market benefits), public research funding is better suited to:

  • Long-term research projects, whose results are not expected for years or even decades (e.g. space-science projects such as Hubble or Rosetta, both of which spanned more than two decades);
  • Fundamental research, with no direct applications in terms of technology transfer, but leading to new concepts and discoveries that are crucial to the development of applied sciences and to the very possibility of innovation;
  • Research projects in “economically non-competitive areas” such as the Humanities or Social Sciences (e.g. linguistic research on endangered languages).

Yet, even in the case of public funding, a new model is gaining importance and seems to threaten this apparent predisposition of public funding for “blue-sky research”: a model in which research projects are funded on a competitive basis, i.e. through calls for projects in specialized areas, with guidelines imposed on a given research program 1 (see for instance the EU joint call for projects “European research projects on neurodegenerative diseases: risk and protective factors, longitudinal cohort approaches and advanced experimental models”, 2015 2). In this case, applicants must submit a scientific description relevant to the proposed axes, an anticipation of the impact of the research, a proposed methodology and schedule, the CVs of the research team members, a list of partners and a bibliography. The application is then examined by a committee of experts, and the selected project(s) are funded for an average duration of two to three years.

Of course, this model seems to match academic excellence criteria, since research is funded on the basis of objective, scientific evaluations of projects by recognized experts in the field, and since competition helps to reveal the most solid and fertile scientific approaches. Besides, this project-based model seems more stimulating than recurrent funding, since it encourages the regular proposal of new research axes.

Yet, this project-based and competitive funding model can also lead to:

1. Waste of time and energy: As this model gains importance, a huge share of research teams’ time and energy is devoted to writing applications and monitoring published calls for projects. Likewise, scientists and experts spend considerable time reviewing proposals, so that a researcher’s daily life is eventually reduced to writing proposals and then reviewing the proposals of others. Johan Bollen, David Crandall, Damion Junk, Ying Ding and Katy Börner thus write:

In 2012, for example, NSF convened more than 17,000 scientists to review 53,556 proposals. Reviewers generally spend a considerable time and effort to assess and rate proposals of which only a minority can eventually get funded. Of course, such a high rejection rate is also frustrating for the applicants. Overall, the scientific community invests an extraordinary amount of time, energy, and effort into the writing and reviewing of research proposals, most of which end up not getting funded at all. This time would be better invested in conducting the research in the first place 3.

Although this funding method, adopted by governmental policy, is meant to optimize the use of resources, it can result in a dramatic waste of public money: the activity of scientists and experts is lost in often pointless proposal writing and reviewing, and in the consequent bureaucratic and administrative work.

2. Loss of creativity – conformism – conservatism: In addition, research projects funded by this model risk being artificially shaped so as to match the axes and criteria imposed by the calls. This hinders the creativity of scientists and their capacity to invent genuinely new and innovative paths for science. In the worst case, scientists will merely write “ad hoc proposals”, designed simply to secure funding.

Funding agencies, acting as investors, will tend to privilege safe investments, and will therefore be reluctant to finance pioneering but risky research. One may for instance recall the case of Mario Capecchi, described by Pierre Azoulay, Joshua S. Graff Zivin and Gustavo Manso in their paper on “Incentives and creativity” 4. His bold and risky project to develop gene targeting in mammalian cells was almost rejected in 1980 by the National Institutes of Health (NIH). He decided to take the risk of continuing his research despite the funding committee’s negative assessment, and his work proved so successful that four years later the reviewers themselves admitted: “We are glad that you didn’t follow our advice.” This is clearly a case in which the prevalence of a safety criterion in research funding policies could have prevented a risky but pioneering project from succeeding.

Indeed, in order to be funded, a research project will have to be not only safe but also mainstream, and critical approaches or non-conformist hypotheses will tend to be rejected 5. This state of affairs reproduces the “lobbying effect” that sometimes affects private research funding, and discourages bold new research. Science then becomes a matter of marketing rather than of authentic discovery and research.

3. Arbitrariness in research funding: The idea that a competitive project-based funding model is a good way to promote transparency might be an illusion, for the criteria adopted in the selection of projects are not always clear and may sometimes be misguided. This has a discouraging effect on scientists, some of whose innovative and promising projects are rejected with no stated reason 6. In this respect, one can echo the criticisms addressed by the biologist Alain Trautmann to the French Agence Nationale de la Recherche (ANR) 7:

The opacity and arbitrariness of this model generate a feeling of disgust and helplessness that undermines many scientists, condemned to spend ever more time submitting new funding applications, producing projects rather than research.

The effective allocation of funding in this model is also highly dependent on the reputation of the project leader or of his/her home institution, which in turn encourages scientists to produce “quick and dirty” recycled papers in order to build an impressive publication record. In addition, this reputation factor once again fosters a self-reproducing, conservative system that leaves no room for the emergence of fresh early-career researchers and/or research institutions. This state of affairs is unfair, for highly reputed scientific teams usually benefit from comfortable funding from various sources, whereas young and promising teams may be left with none.

4. Encouragement of fraud: Just as increased competition in sports encourages cheating, the generalization of a competitive model in science funding can encourage dishonest practices such as scientific falsification or plagiarism. For instance, according to the 2012 annual report of the US Office of Research Integrity (ORI): “From all sources, ORI received 423 allegations in 2012, an increase of 56 percent over the 240 allegations handled in 2011, and well above the 1992-2007 average of 198” 8. One may indeed suggest that increased competition between researchers has played a part in this trend 9. Moreover, unfair practices might be adopted by the experts themselves, who will sometimes tend to formulate undue criticisms of their usual competitors and/or give excessively favorable feedback on the work of their usual partners in a given research field 10.

5. Impoverishment of fundamental or long-term research: The preference for this competitive project-based model may also affect the nature and orientation of funded research. Research councils or funding agencies may thus tend to promote exploitable research projects or research of public interest, which could disadvantage low-impact but conceptually pioneering research in fields such as literature or theoretical philosophy. Of course, this shift of attitude does not per se imply a decrease of funding in areas such as the Humanities, for governments that allocate grants and launch competitive calls for projects usually show concern for a balance between disciplinary fields and for the preservation of the social sciences and humanities. Yet one may suppose that even in those fields, research projects with an impact on society, such as projects on bioethics, gender studies or animal rights, will stand a better chance of being funded than merely speculative research on the history of ideas or strictly philological enquiries.

For the same reasons, this model could discourage fundamental research. It is often stressed that scientific or technological discoveries cannot be programmed, but result from a serendipitous attitude rooted in deep fundamental research: in this respect, applied sciences and technological innovations cannot exist without long-term and thorough blue-sky research, which paves the way for pioneering applications. If public research funding adopts the same criteria as industrial research funding, there is no reason why fundamental research with no direct economic or commercial impact should be preserved.

Lastly, a funding model planned over two or three years might prevent scientists from developing long-term research projects, and thus limit publicly funded research to short-term issues. In this respect, one may again quote Alain Trautmann’s verdict 11:

Research funding policy is progressively being withdrawn from research organizations such as the CNRS and entrusted to a set of structures (the ANR, the ANRS for AIDS research, the INCA for cancer research, the Alzheimer Foundation, etc.) that fund projects with the following characteristics: short-term projects, of two to three years, which discourages research requiring long-term investment.

6. Low funding-impact correlation: As a matter of fact, some studies show that there is no direct correlation between the impact or excellence of a research project and its capacity to receive adequate funding. For instance, Jean-Michel Fortin and David J. Currie, in their 2013 paper “Big Science vs. Little Science: How Scientific Impact Scales with Funding”, examined the correlation between actual funding (in particular from the Natural Sciences and Engineering Research Council of Canada (NSERC) or other sources) and the impact of the funded research. Their study was based on four indices of scientific impact: the number of articles published, the number of citations to those articles, the most cited article, and the number of highly cited articles, each measured over a four-year period. Their striking conclusion is that “Impact is positively, but only weakly, related to funding (…). Further, the impact of researchers who received increases in funding did not predictably increase” 12.

This poor correlation is visible in various charts, such as the one below, which examines the case of researchers in Animal Biology who held only an NSERC grant versus those who also held a grant from the CIHR (Canadian Institutes of Health Research), the CFI (Canada Foundation for Innovation) and/or the Fonds Québécois de Recherche – Nature et Technologies (FQRNT). The chart shows that “researchers who held grants other than NSERC are not significantly more productive than those who did not”:

Figure 1

Hence their recommendation: “We conclude that scientific impact (as reflected by publications) is only weakly limited by funding. We suggest that funding strategies that target diversity, rather than ‘excellence’, are likely to prove to be more productive”.

I have reviewed the reasons why many members of the scientific community reject the constraints imposed by competitive funding models, advocate a return to a model based on permanent funding of scientific structures by research organizations 13, or even propose new and alternative funding models for research 14.

But are all of these criticisms really well-founded? How can the concrete practice of scientific research adapt to this new funding reality? Can it bypass the model’s potential weaknesses and find new ways to develop innovative, personal, risky, disinterested and long-term research? The second part of this paper will try to show how long-term and fundamental research strategies remain possible in a world of project-based research funding.

*Note: This article gives the views of the author, and not the position of SIRIS Lab, nor of SIRIS Academic. Please review our Comments Policy if you have any concerns about posting a comment below.

About the author, Sabine Plaud: I have a background in philosophy, with a French PhD on the philosophy of Ludwig Wittgenstein. My theoretical interests concern the philosophy of language and, in particular, the link between pictures and discourse, with a focus on the notion of modelling. In my work at SIRIS Academic, I am especially interested in issues regarding research strategies and the management of teaching and education policies.

Information about Featured Image: “3D Emergency Fund” by Chris Potter, 30/03/2013. Available here.

Notes:

  1. Cf. Bianca Potì and Emanuela Reale: “Changing allocation models for public research funding: an empirical exploration based on project funding data”, Science and Public Policy, 34(6), July 2007, p. 417–430.
  2. http://www.agence-nationale-recherche.fr/fileadmin/aap/2015/aap-jpcofund-2015.pdf
  3. Johan Bollen, David Crandall, Damion Junk, Ying Ding & Katy Börner: “From funding agencies to scientific agency: Collective allocation of science funding as an alternative to peer review”, EMBO Reports, 15(2), 2014.
  4. Pierre Azoulay, Joshua S. Graff Zivin, Gustavo Manso: “Incentives and creativity: evidence from the academic life sciences”. The RAND Journal of Economics 42, 2010, p. 527–554.
  5. See for instance Thomas Heinze, Philip Shapira, Juan D. Rogers and Jacqueline M. Senker: “Organizational and institutional influences on creativity in scientific research”, Research Policy, 38(4), May 2009, p. 610–623.
  6. See for instance the data indicating that 90% of submissions had been rejected by the French ANR in its latest call for projects, reported in the article « L’ANR recale 90% des projets scientifiques », published on the blog Sciences2, August 26 2014:
    http://sciences.blogs.liberation.fr/home/2014/08/lanr-recale-80-des-projets-scientifiques.html
  7. Cf. « A. Trautmann dénonce le défaut de transparence de l’Anr et refuse la prime attachée à la médaille d’argent du Cnrs », published on the blog « Sciences2 », September 22 2010:
    http://sciences.blogs.liberation.fr/home/2010/09/alain-trautmann-d%C3%A9nonce-le-d%C3%A9faut-de-transparence-de-lanr.html
  8. http://ori.hhs.gov/images/ddblock/ori_annual_report_2012.pdf.
  9. This critique is somewhat analogous to the critique of the editorial policies of scientific journals, which tend to favor striking and spectacular results, leaving little space for the publication of fundamental research and encouraging practices such as scientific falsification, driven by the “publish or perish” principle.
  10. See Chris R. Triggle and David J. Triggle: “What is the future of peer review? Why is there fraud in science? Is plagiarism out of control? Why do scientists do bad things? Is it all a case of: ‘All that is necessary for the triumph of evil is that good men do nothing?’”, Vascular Health and Risk Management, 3(1), 2007, p. 39–53. See also Tinker Ready: “Plagiarize or perish?”, Nature Medicine, 12, 2006, p. 494.
  11. Art. Cit.
  12. Jean-Michel Fortin, David J. Currie: “Big Science vs. Little Science: How Scientific Impact Scales with Funding”, PLoS ONE 8(6), 2013.
  13. See for instance the press release published by the French Committee “Sciences en marche” on August 18 2014: http://sciencesenmarche.org/fr/financement-des-laboratoires
  14. See for instance the alternative model proposed in Johan Bollen, David Crandall, Damion Junk, Ying Ding & Katy Börner, art. cit.
