QUALEVAL: A toolkit for qualitative evaluative research

Project coordinator:

Research assistant:


Project description:

As part of LIEPP’s cross-cutting activities, this project aims to improve collective skills in qualitative evaluation through a dialogue between the qualitative evaluation approaches and methods developed in academia and those developed in professional evaluation practice.

Evaluators specializing in quantitative methods, both inside and outside the academic world, tend to rely on a generally agreed-upon, common set of techniques: randomized controlled trials (RCTs), instrumental variables, difference-in-differences, matching, and regression discontinuity designs (Fougère and Jacquemet 2019). The perimeter of relevant methods for qualitative evaluation is less clear-cut, and the existence of common ground between academics and professional evaluators is less obvious.

Some qualitative methods historically developed in basic research, such as semi-structured interviews, focus groups, and case studies, have been fruitfully transferred to evaluation (Patton 1990). Yet the practice of evaluation has also led to the development of original qualitative approaches, such as program theory analysis (Weiss 1998) and contribution analysis (Mayne 2012), which are not necessarily shared by academic researchers. Classical qualitative methods have also been adapted to better suit the practice of evaluation, as in the case of realist interviews (Manzano 2016).

Moreover, as an interdisciplinary field of practice, evaluation has played a key role in developing thinking on mixed methods (Greene, Benjamin, and Goodyear 2001), in connection with a deepening of epistemological reflection on causality, arguably the founding question of the field of evaluation (Maxwell 2004).

In the case of qualitative evaluation, then, there seems to be less consensus on the list of relevant methods and their respective usefulness. This project aims to improve collective skills and contribute to this discussion through accessible reviews of key methodological publications on various qualitative evaluation techniques and approaches, including techniques that, while commonly used by evaluators, are less familiar to academics engaged in basic research. These article summaries, along with other relevant resources, will be made available online on a Hypotheses blog. This synthesis of the literature will feed into a joint publication project between LIEPP and France Stratégie on qualitative methods for impact evaluation.

QUALEVAL is meant to serve as a resource for both professional evaluators and academic researchers interested in evaluation, in France and beyond. Beyond its use for qualitative researchers and evaluators, the project also aims to make the basic principles of qualitative techniques accessible to those trained and practiced in quantitative methods, so as to foster interdisciplinary dialogue and the development of mixed-method projects, in line with LIEPP’s mission.


Cited references

Fougère, Denis, and Nicolas Jacquemet. 2019. "Causal inference and impact evaluation." Economie et Statistique (510-511-512): 181-200. doi: 10.24187/ecostat.2019.510t.1996.

Greene, Jennifer C., Lehn Benjamin, and Leslie Goodyear. 2001. "The merits of mixing methods in evaluation." Evaluation 7(1): 25-44.

Manzano, Ana. 2016. "The Craft of Interviewing in Realist Evaluation." Evaluation 22(3): 342-60. doi: 10.1177/1356389016638615.

Maxwell, Joseph A. 2004. "Using Qualitative Methods for Causal Explanation." Field Methods 16(3): 243-64. doi: 10.1177/1525822X04266831.

Mayne, John. 2012. "Contribution analysis: Coming of age?" Evaluation 18(3): 270-80. doi: 10.1177/1356389012451663.

Patton, Michael Quinn. 1990. Qualitative evaluation and research methods. London: Sage.

Weiss, Carol H. 1998. Evaluation: Methods for Studying Programs and Policies. Upper Saddle River, NJ: Prentice-Hall.
