Why Do Evaluations of eHealth Programs Fail? An Alternative Set of Guiding Principles
Healthcare Innovation and Policy Unit, Centre for Health Sciences, Barts and the London School of Medicine and Dentistry (Greenhalgh); Division of Medical Education, University College London (Russell)
"Much has been written about why electronic health (eHealth) initiatives fail....Less attention has been paid to why evaluations of such initiatives fail to deliver the insights expected of them..."
Published in PLoS Medicine (Volume 7, Issue 11), this article suggests that the assumptions, methods, and study designs of experimental science may be ill-suited to the particular challenges of evaluating eHealth programmes, especially in politicised situations where goals and criteria for success are contested. The authors offer an alternative set of guiding principles for eHealth evaluation based on traditions that view evaluation as social practice rather than as scientific testing, illustrating these with the example of the way they evaluated England's Summary Care Record (SCR) programme.
One approach to evaluation rests on what the authors describe as a set of assumptions that philosophers of science call "positivist". From this perspective, there is an external reality that can be objectively measured: phenomena such as "project goals", "outcomes", and "formative feedback" can be precisely and unambiguously defined; facts and values are clearly distinguishable; and generalisable statements about the relationship between input and output variables are possible.
The article then illustrates how this approach would have fallen short had it been used to evaluate the SCR, which was part of a larger national information technology (IT) programme developed by the English Department of Health. When early findings were released, "critics of the program interpreted missed milestones as evidence of 'failure'..." The authors wrote the final SCR report as "an extended narrative to capture the multiple conflicting framings and inherent tensions that neither we nor the program's architects could resolve." They note that "Collection and analysis of qualitative and quantitative data help illuminate these complexities rather than produce a single 'truth'....[T]ensions and ambiguities [are]...included as key findings, which may be preferable to expressing the 'main' findings as statistical relationships between variables and mentioning inconsistencies as a footnote or not at all."
Based on this experience, the authors offer "an alternative (and at this stage, provisional) set of principles,...which we invite others to critique, test, and refine." In sum, these principles are:
- Think about your own role in the evaluation. "Ask questions such as What am I investigating - and on whose behalf? How do I balance my obligations to the various institutions and individuals involved? Who owns the data I collect?"
- Put in place a governance process, "(including a broad-based advisory group with an independent chair) that formally recognises that there are multiple stakeholders and that power is unevenly distributed between them. Map out everyone's expectations of the program and the evaluation..."
- Provide the interpersonal and analytic space for effective dialogue, "(e.g., by offering to feed back anonymised data from one group of stakeholders to another)....Learning happens more through the processes of evaluation than from the final product of an evaluation report..."
- Take an emergent approach: "Build theory from emerging data, not the other way round (for example, instead of seeking to test a predefined 'causal chain of reasoning', explore such links by observing social practices)."
- Consider the "dynamic macro-level context (economic, political, demographic, technological) in which the eHealth innovation is being introduced."
- Consider the "different meso-level contexts (e.g., organisations, professional groups, networks), how action plays out in these settings (e.g., in terms of culture, strategic decisions, expectations of staff, incentives, rewards) and how this changes over time. Include reflections on the research process (e.g., gaining access) in this dataset."
- Consider the individuals through whom the eHealth innovation(s) will be adopted, deployed, and used. "Explore their backgrounds, identities and capabilities; what the technology means to them and what they think will happen if and when they use it."
- Consider "the eHealth technologies, the expectations and constraints inscribed in them (e.g., access controls, decision models) and how they 'work' or not in particular conditions of use. Expose conflicts and ambiguities (e.g., between professional codes of practice and the behaviours expected by technologies)."
- Use narrative as an analytic tool and to synthesise findings. "Analyse a sample of small-scale incidents in detail to unpack the complex ways in which macro- and meso-level influences impact on technology use at the front line."
- Consider critical events in relation to the evaluation itself. "Document systematically stakeholders' efforts to re-draw the boundaries of the evaluation, influence the methods, contest the findings, amend the language, modify the conclusions, and delay or suppress publication."
In conclusion, the authors stress that some eHealth initiatives will lend themselves to scientific evaluation based mainly or even entirely on positivist assumptions. However, others, particularly those - like SCR - that are large-scale, complex, politically driven, and differently framed by different stakeholders, may require that evaluators apply alternative criteria for rigour such as the 10 proposed above. "The precise balance between 'scientific' and 'alternative' approaches will depend on the nature and context of the program....An informed debate on ways of knowing in eHealth evaluation is urgently needed."
eHealth Intelligence Report, November 16, 2010.