My story: Marie Hella Lindberg
I was fortunate enough to attend a short course in policy evaluation methods organised by the Centre for Microdata Methods and Practice (Cemmap) at University College London from the 14th to the 17th of January 2020. This was all thanks to a grant from Epinor.
Part of the reason I wanted to attend this course was that in August 2019 I went to my first conference as a PhD student: the Nordic Health Economics Association meeting, which took place in Reykjavik. At this conference, there were a lot of health economists using many fancy statistical methods to estimate the (causal) effects of different programmes, interventions, or events on some health outcome. I felt that I needed to widen (and deepen) my statistical toolkit to be able to speak the same language as these people, and hopefully one day use some of the same methods myself. These methods are relevant for epidemiologists too, I would say; my impression is that some of them are already used in epidemiology, often under different names.
The course covered so-called quasi-experimental methods and designs, such as randomised social experiments; natural experiments and instrumental variables; regression discontinuity design; various matching methods; before-after comparisons; difference-in-differences; and synthetic control methods. For each method or design, the tutor went through the theory and the formal framework before we worked on our own with Stata practicals. The course was cleverly structured around a specific paper from 1986 [1] and its data. This dataset was used as the example for each of the methods, and the tutor demonstrated how the data were "unfit" for this kind of analysis (estimating the effect of an employment programme on earnings) because the intervention and control groups were systematically different. If there is one lesson to draw from the course, it is that the researcher has to carefully identify the population of interest.
I found the structure of the course very cool because we saw, first, through a very clear example how important the data at hand are for the choice of method, and second, how statistical methods have since been developed to overcome the challenges faced by the author of the paper. The tutor was extremely good, although she spoke very fast (she is Italian)!
The fee for the course was quite high (again, thank you Epinor!), and I partly realised why when I arrived at the building where the course was held: we received a thick compendium with all the course material, a badge, and a notepad. In addition, there were pastries or biscuits, as well as tea, coffee and water at all the breaks, and a person organised all this for us throughout the week. It was nice, of course, but I have to say I wouldn't mind attending courses without all the extras if that made them accessible to more people.
I am very grateful that Epinor found my application "funding-worthy", and it was a really good experience that I learned a lot from. In addition, London is an exciting city, and I got really skilled at enjoying my own company: going to the theatre, eating out, book-shopping and exploring the city (which turned out to be very difficult without Google Maps after my phone had died). Thanks for giving me this opportunity, Epinor!
[1] LaLonde, R.J. (1986), "Evaluating the Econometric Evaluations of Training Programs with Experimental Data", The American Economic Review, 76, 604–620.
Last updated: 03.02.2020 10:53