Funding science in Chile

If five months ago you had asked me to name the three things I knew Chile by, I would have said: the military coup and the camps, Isabel Allende, and wine. In that order!

Visiting a winery

Then, at the end of 2012 I was invited to take part in the evaluation of the research funding programmes of CONICYT: the closest thing Chile has to a research council and without doubt one of the main public funders of research in the country. To my delight there was also a ‘site visit’, which meant that I finally got to visit Chile – a dream I had harboured for many years.

This visit allowed me to expand my knowledge of Chile beyond the three facts mentioned above to include: annual economic growth of over 5% (a level currently beyond our wildest aspirations in Europe); a predominantly primary economy (fine lapis lazuli, silver, copper and other rare minerals); visible inequality; leading-edge astronomy; well-known mathematics; wonderful, warm people; safe streets; and amazing writing (after all, Chile has two Nobel Prizes in Literature).

Pablo Neruda’s house

More to the point, Chile is one of the few countries in the world that has been consistently increasing public funding for research and is looking to increase it even further. It is interesting to note, for instance, that the funding of FONDECYT (National Fund for Scientific and Technological Research), the main programme supporting the science base in Chile, more than doubled between 2007 and 2012 (from $71 million to over $160 million).

This is part of a concerted national effort to move away from an economy based on primary industry towards a knowledge-based society with thriving innovation and creativity. A considerable proportion of FONDECYT funding is invested in developing new generations of Chilean researchers; many young researchers are funded to continue their education abroad (at Masters and PhD level).

Further, FONDECYT is only one, albeit the largest, of the programmes of CONICYT; the council also supports FONDEF (Fund for the Promotion of Scientific and Technological Development); FONDAP (Fund for Research Centres of Excellence in Priority Areas); and PIA (Associative Research Programme).

If you are interested in more detail about the operating principles and pitfalls of these programmes you may read the evaluation report by following this link. I will only mention a number of things I noticed (in no order of importance):

  • As in many ‘catching up’ countries, education in Chile is seen as a means of social mobility and has become a ‘premium good’. Simply put, this means that education at all levels is very expensive, which exacerbates the social inequality visible in the country.
  • Chile is training many young scientists and academics at the moment; most of them are abroad. It is quite possible that the research infrastructure developed within the country will lag behind the capacity embodied in these people, which means that five years from now Chile may have a large number of highly qualified, under-employed scientists. As investments go, that is probably not a very good one!
  • I am always bowled over and intrigued by Catholic education; some of the top universities (and colleges) around the world are Catholic. Until I went to Chile, I didn’t realise that this means they are directly supported by the Vatican and that the chief officers of the university are appointed by the Pope.

I’ll leave you now with this example of mathematicians’ humour; the picture below was taken at the department of mathematics.

Most people misunderstand this equation

Oh, and when you read the report, please ignore the pictures – many of those reading this know that I am better looking and less serious than they suggest.

About assessing academic performance and the Research Excellence Framework (REF)

A couple of weeks ago, I received a message informing me that the outcome of the latest round of preparation at the University of Manchester for the upcoming Research Excellence Framework (REF) is online and that I can view my results in a specific, dedicated part of the IT system. It shouldn’t come as a surprise that this part of the IT system is called REF Preparation Exercise.

Looking at my scores made me consider again the way in which academic research and research outputs are assessed. Since credibility is important in doing this, let me assure you that I don’t see myself as a victim of the system; I am doing rather well within it. Any criticism voiced here is because I’d like to see the system working better, not a ‘sour grapes’ kind of complaint.

But let me set the scene for my readers outside the UK. British universities are in the midst of preparing for the REF, which will take place in 2014 and will assess ‘research that has taken place’ between 2008 and 2013. In fairness and honesty, British universities have been continuously preparing for one or another round of this exercise ever since its inception in the mid-1980s. The REF is the latest manifestation of the Research Assessment Exercise (RAE), which started in 1986 and through which a set of changes in the relationship between government (the state) and science was initiated. The exercise was to run every five years and assess the research performance of university units through a panel-based peer review system; the panels assign units a certain number of ‘stars’ (currently between 1 and 4). Baseline funding for research depends on this assessment, so universities see it as important to ‘do well’. In effect, the exercise is as much about reputation as it is about funding. For more on this you can see this article.

Universities have learned to play different games in light of these exercises, from going to great lengths to get their staff represented on the different panels to ensuring that members of faculty publish regularly in the ‘right’ journals and that their publications are of the ‘right’ quality. The University of Manchester is no exception.

We, for instance, have an annual REF preparation exercise in which members of staff select a number of their papers (all pre-entered in eScholar, a web-based system used to keep a record of our publications); these are read by senior members of staff (Senior Lecturers and Professors) and assigned a number of stars corresponding to those assigned by the REF panels (this, mind, was done even when we were not entirely clear about the rules and criteria of assessment). There have been plenty of criticisms levelled at this practice, mainly relating to whether peer review can work at the level of organisations and organisational units, the substantial costs involved, and the inevitably vague criteria such assessments employ. Apart from the criticisms directed at the nature of the assessment, there have been concerns regarding the use of the outcomes of the preparation exercise (in promotions, for instance), and the Union has called for a boycott of participation. Here, I don’t intend to discuss any of these.

What I would like to do is share my observations about some features of the scores my articles received, and what I believe to be the fundamental problem with this kind of assessment system: one that focuses on assessing published output.

My observations are two:

  1. Any paper published in the top journal of the broad field (Research Policy) received a 4* rating (the highest score). This is probably one more example that reputation matters.
  2. Output published in other outlets (not top-ranking broad-domain journals) received lower rankings, and it is likely that these pieces were read more carefully. In this case, since knowledge of the narrow field is essential, the timing of publication obviously mattered: earlier publications that had already accumulated citations were ranked much higher. Epistemic distance also likely played a large role: the greater the distance between the narrow research interests and approach of the assessor and those of the assessed, the lower the ranking. I suppose additional considerations also came into play.

This, irrespective of any other concerns, led to a couple of anomalies:

  1. A chapter in a book edited by Polish colleagues was rated 1* (the lowest score). This chapter, although probably not one of my best, was key to opening a new and progressive personal (and possibly collective) research line. It was solicited by the editors of the book and set out for the first time the notion of science as a relationship between research spaces and research fields. This notion informed the proposal for EURECIA, a research project funded by the European Research Council (ERC) at slightly less than €500K; it provided the intellectual foundation for a later Research Policy paper; and it is already being used by other colleagues.
  2. A co-authored article of mine (I am the lead author) in Science was rated 2* (and 1* by the assessor external to the unit). I do believe that anything published in Science warrants a higher score than that. This particular article was unique in that it managed to put forward a coherent social science argument (that European-level policy is moving from a ‘Science in Europe’ to a ‘European Science’ mode) in about 3,000 words. The article was reviewed by four peers and went through two rounds of review. Apart from that, it is clearly part of the budding research line mentioned above.

What is the problem?

I believe that the distortions in the ratings originate in the fact that what is being assessed is a set of discrete outputs (published articles, chapters and others); these are assessed as independent events rather than as part of the continuous research lines that we as researchers build, and of the way in which these research lines intersect with the development of the research field (or fields). Concentrating the assessment on discrete outputs has several problematic implications, but here I’d like to mention the following:

  • It can lead to inaccuracies that are far too consequential for individuals’ careers to be allowed;
  • It works against starting new individual research lines and taking risks in research; and
  • This in turn reduces epistemic variety in research fields and works against the possibility of intellectual innovation.