Undertaking and using health service evaluations in the field
Posted on 20.02.2017 by Morello, Renata Teresa
There is a great need for decision makers in healthcare to use robust and reliable evidence to support clinical and health policy choices that aim to improve the quality of healthcare and support the efficient use of scarce resources. This evidence is, however, often lacking in quality, quantity and reliability. While many health interventions hold good face validity, their implementation and use in practice may not always produce the desired improvements in patient care. This gap suggests that challenges exist in the production of such evidence.
This thesis sought to respond to this challenge through the undertaking of two health service evaluations. Each evaluation represented an independent, discrete piece of work addressing the primary objective of this thesis: to evaluate the impact of specific complex health interventions, providing evidence for, or against, their ongoing use. These evaluation case studies then provided a platform for reflecting on the contextual issues of each evaluation, the lessons learnt and the challenges confronted, addressing the second objective of this thesis: to describe the methodological and practical challenges experienced when undertaking such health service evaluations.
Case study 1 was a retrospective evaluation of a Telephone Support Program for elderly people with chronic or complex care needs. The intervention was developed and implemented by a private healthcare provider and commissioned by a private health insurer, with the aim of reducing avoidable hospital admissions among elderly members living in the community. A non-randomised controlled study design was employed, using propensity score matching. Compared with matched controls, the intervention was not observed to reduce hospital use or healthcare utilisation costs. However, it was unclear whether the finding of no effect was due to poor implementation fidelity, issues with data quality and integrity, methodological limitations of the evaluation or an ineffective intervention.
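The matching step described above can be sketched in outline. The following is a minimal, hypothetical illustration of greedy 1:1 nearest-neighbour matching on propensity scores within a caliper; in practice the scores would first be estimated, typically from a logistic regression of program membership on baseline covariates. The function name, toy scores and caliper value are invented for illustration and are not drawn from the evaluation itself:

```python
def match_controls(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.

    treated, controls: dicts mapping unit id -> propensity score.
    Returns a list of (treated_id, control_id) pairs; each control is
    used at most once, and pairs further apart than the caliper are skipped.
    """
    available = dict(controls)
    pairs = []
    # Match treated units in descending score order (a common convention,
    # since high-score units have the fewest candidate controls).
    for t_id, t_score in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]  # matching without replacement
    return pairs

# Toy data: program members (treated) and non-members (controls).
treated = {"T1": 0.81, "T2": 0.42, "T3": 0.55}
controls = {"C1": 0.80, "C2": 0.40, "C3": 0.57, "C4": 0.10}
print(match_controls(treated, controls))
```

Matching without replacement and within a caliper, as sketched here, trades sample size for closer covariate balance between the program and comparison groups.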
Case study 2 was a comprehensive evaluation of the 6-PACK program, a falls prevention intervention developed specifically for acute hospital wards. It was undertaken as part of a rigorously designed cluster randomised controlled trial (RCT) involving six hospitals (24 acute wards) across Australia, and included an economic evaluation, a cost-of-falls study and an examination of implementation fidelity. The program was found to be ineffective at reducing falls (IRR 1.04; 95% CI 0.78 to 1.37; P=0.796) or fall injuries (IRR 0.96; 95% CI 0.72 to 1.27; P=0.766) beyond usual care; consequently, the planned economic evaluation was not undertaken. The cost-of-falls study found that patients who experienced an in-hospital fall had a 7-day longer hospital stay (p<0.001) and an additional AUD$5,395 in hospitalisation costs (p<0.001), compared with those without a recorded fall, while patients who experienced a fall-related injury had an 11-day longer hospital stay (p<0.001) and an additional AUD$9,917 in hospitalisation costs (p=0.003). Implementation fidelity during the cluster RCT was found to be reasonable; implementation failure therefore did not appear to be a key factor in the observed lack of effect in the 6-PACK trial.
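For reference, the incidence rate ratios (IRRs) reported above compare fall rates per unit of patient exposure between trial arms. A minimal sketch of the crude, unadjusted calculation with a standard large-sample Wald confidence interval is below; the trial itself used cluster-adjusted regression models, and the event counts and bed-day totals here are invented toy numbers, not the trial's data:

```python
import math

def incidence_rate_ratio(events_t, time_t, events_c, time_c, z=1.96):
    """Crude IRR with a Wald 95% CI computed on the log scale.

    events_t, events_c: event counts (e.g. falls) in each arm.
    time_t, time_c: person-time at risk (e.g. occupied bed-days) in each arm.
    Returns (irr, ci_lower, ci_upper).
    """
    irr = (events_t / time_t) / (events_c / time_c)
    # Large-sample standard error of log(IRR) for two independent Poisson counts.
    se_log = math.sqrt(1 / events_t + 1 / events_c)
    lower = math.exp(math.log(irr) - z * se_log)
    upper = math.exp(math.log(irr) + z * se_log)
    return irr, lower, upper

# Toy numbers (NOT the trial's data): 120 falls over 30,000 bed-days on
# intervention wards vs 115 falls over 29,800 bed-days on control wards.
irr, lo, hi = incidence_rate_ratio(120, 30_000, 115, 29_800)
print(f"IRR {irr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A confidence interval spanning 1.0, as in the trial's reported results, indicates that the data are consistent with no difference in fall rates between arms.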
These case studies highlighted some common challenges faced by evaluators when examining the impacts of an intervention in 'real-world' settings. Three key themes emerged from the two case studies: 1) the challenges of designing and undertaking rigorous evaluations in 'real-world' settings; 2) the use of secondary data sources, particularly for the measurement of outcome and confounding variables and the use of data across organisations and jurisdictions; and 3) the difficulty of defining and examining the implementation fidelity of complex health interventions.
When examining complex health interventions, determining the level of implementation fidelity is essential to the interpretation of study findings, particularly in an environment as dynamic and complicated as healthcare. In addition, to ensure a more rigorous approach to health service evaluation in the 'real-world' setting, resources need to be dedicated to enhancing existing data sources and systems so that they are more conducive to evaluation, strengthening outcome measures and making data linkage easier across sites and jurisdictions.