
THE "WHAT WORKS" (AND WHAT DOESN'T WORK) EVIDENCE FOR IMPROVING SERVICE DELIVERY CLIENT OUTCOMES WITHIN THE ALLIED HEALTH SECTOR.

There is an excellent electronic family game called "Fact or Fake", in which players seek to identify whether a piece of information presented to them is true (Fact) or false (Fake).

In the allied health sector, NZIPC believes that a similar discernment is needed: to ascertain whether any specific time commitment, investment, practice type, service delivery requirement, or training will actually produce better treatment outcomes for service delivery clients.

NZIPC believes (and the client outcome evidence affirms) that a number of artificial barriers are built into many funding agencies' service delivery access requirements, and that these barriers unnecessarily create long client waiting lists, put increasing pressure on service delivery providers, negatively affect agency staff retention, and impose illegitimate (and unwelcome) expenses on allied health practitioners.

The following list is a summary of the "what works" evidence. 

Prepare yourself - some of the allied health industry's most sacred cows are about to be smashed upon the rocks of evidence below, but it's for a good cause: improving allied health service delivery client outcomes, whilst increasing the efficiency and reach of current (not increased) funding.

NZIPC does not believe that the "more funding required" perspective is legitimate until and unless it can be established that current funding is being utilised well, as demonstrated by formally measured, practice-based evidence of outcomes.

Clinical Supervision: Fact, or Fake?

The Industry says:

"Fact: Practitioners must have a clinical supervisor, and must have attained a minimum number of practice hours under supervision in training, in order for service delivery clients to receive the best outcomes".

The Evidence says:

Fake.

There is no correlational data in the international outcome research revealing any impact of independent supervision on any other variable within allied health client service delivery, which means that independent supervision does not improve allied health practitioner competence or client outcomes. The various agencies, professional associations, and funding bodies that make independent supervision mandatory for allied health agency staff may wish to reflect on this evidential truth. See Watkins (2011), a 30-year meta-analysis of supervision outcome research, and Rousmaniere (2014) for more information. Using the Counselling profession as one example: in the absence of formal routine client outcome measurement, Counsellors attain peak competence at around 50 hours of training, yet are required to complete up to 200 hours of supervised training, absent any evidence that they need to do so. Assuming a standard 1:10 ratio of practice hours to supervision sessions, and a standard $100.00 cost per supervision session, the 150 excess hours equate to an additional (and evidentially unnecessary) expense of $1,500.00 per practitioner in training.
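
To make that arithmetic explicit, here is a minimal sketch of the calculation, using only the figures assumed in the paragraph above (the 200-hour requirement, the 50-hour competence point, the 1:10 ratio, and the $100.00 session cost); none of these numbers come from anywhere else:

```python
# A sketch of the supervision-cost arithmetic above. All figures are the
# assumptions stated in the paragraph, not empirical constants.
REQUIRED_HOURS = 200    # supervised practice hours the industry requires
COMPETENCE_HOURS = 50   # practice hours at which peak competence is attained
HOURS_PER_SUPERVISION_SESSION = 10  # the standard 1:10 ratio
SESSION_COST_NZD = 100.00           # the standard cost of a supervision session

excess_hours = REQUIRED_HOURS - COMPETENCE_HOURS                # 150 hours
excess_sessions = excess_hours / HOURS_PER_SUPERVISION_SESSION  # 15 sessions
excess_cost = excess_sessions * SESSION_COST_NZD                # $1,500.00

print(f"Evidentially unnecessary cost per practitioner: ${excess_cost:,.2f}")
```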

Belonging to a Professional Association & Continuing Education: Fact, or Fake?

The Industry says:

"Fact: 'If Practitioners are highly trained, and belong to a Professional Association, this means that they will be more competent Practitioners, and clients will receive  better service delivery outcomes".

The Evidence says:

Fake.

There is no correlational data in the international outcome research revealing any impact of Professional Association membership or Continuing Education on any other variable within allied health client service delivery, which means that belonging to a Professional Association and engaging in Continuing Education do not improve allied health practitioner competence or client outcomes. The various agencies, professional associations, and funding bodies that make Professional Association membership and Continuing Education mandatory for allied health agency staff may wish to reflect on this evidential truth. See Malouff (2012), which highlights that studies comparing service delivery by professionals and para-professionals showed either that para-professionals attained better client service delivery outcomes, or that there was no difference between the two groups, regardless of training or Professional Association membership.

Evidence-based, manualised Models of Practice: Fact, or Fake?

The Industry says:

"Fact: There are evidence-based, manualised models of practice that work better with service delivery clients than non-evidenced based, non-manualised models of practice".

The Evidence says:

Fake.

There is no correlational data in the international outcome research revealing any impact of using an evidence-based, manualised model of practice rather than a non-evidence-based, non-manualised model on any other variable within allied health client service delivery, which means that preferring the former does not improve allied health practitioner competence or client outcomes. The various agencies, professional associations, and funding bodies that favour an evidence-based model of practice over a non-evidence-based one, and then insist that allied health agency staff work within the favoured model, may wish to reflect on this evidential truth. See Truijens et al. (2018), which affirms that evidence-based, manualised models of practice attain the same client service delivery outcomes as non-evidence-based, non-manualised models.

Higher qualifications and longer industry experience = better client outcomes: Fact, or Fake?

The Industry says:

"Fact: Practitioners who have degree- level or above qualifications, who have longer periods of industry experience, who have been highly trained in a particular discipline, who belong to a professional association, who undergo personal supervision, and who have undergone some form of personal therapy or personal agency engagement themselves will achieve better client outcomes".

The Evidence says:

Fake.

Research shows that “who” provides the therapy is an important determinant of outcome. Numerous studies demonstrate that some clinicians are more effective than others (e.g., Brown, Lambert, Jones, & Minami, 2005; Luborsky et al., 1986; Wampold & Brown, 2005). “Better” therapists, it turns out, form better therapeutic relationships with a broader range of clients. In fact, 97% of the difference in outcome between therapists is accounted for by differences in forming therapeutic relationships (Baldwin, Wampold, & Imel, 2007). By contrast, other therapist qualities have little or no impact on outcome, including: age, gender, years of experience, professional discipline, degree, training, licensure, theoretical orientation, amount of supervision or personal therapy, and use of evidence-based methods. 

A myriad of studies have identified variables that have little or no correlation with the outcome of treatment, yet in New Zealand and around the world, most funding providers (and the agencies receiving funding) still insist on outdated and long-debunked sector service delivery practices. Variables shown to have little or no correlation with attaining positive client service delivery outcomes include:

• Consumer age, gender, diagnosis, and previous treatment history (Wampold & Brown, 2005).

• Clinician age, gender, years of experience, professional discipline, degree, training, licensure, theoretical orientation, amount of supervision, personal therapy, specific or general competence, and use of evidence-based practices (Beutler et al., 2004; Hubble et al., 2010; Nyman, Nafziger, & Smith, 2010; Miller, Hubble, & Duncan, 2007; Wampold & Brown, 2005).

• Model/technique of therapy (Benish, Imel, & Wampold, 2008; Imel et al., 2008; Miller, Wampold, & Varhely, 2008; Wampold et al., 1997; Wampold et al., 2002).

• Matching therapy to diagnosis (Wampold, 2001).

• Adherence/fidelity/competence to a particular treatment approach (Duncan & Miller, 2005; Webb, DeRubeis, & Barber, 2010).

Formally measuring client outcomes is unnecessary - Practitioners "just know" how their clients are progressing: Fact, or Fake?

The Industry says:

"Fact: Practitioners don't have to engage in formal client outcome measures, as clinicians intuitively "know" how their clients are progressing".

The Evidence says:

Fake.

The majority of Practitioners in various service provider disciplines have never measured and do not know how effective they are (Hansen, Lambert, & Forman, 2002; Sapyta, Riemer, & Bickman, 2005). Naturally, it is impossible for Practitioners to know if they are improving if they do not know their level of effectiveness. Additionally, Practitioners are not immune to a self-assessment bias when comparing their own skills with those of their colleagues and when estimating the improvement or deterioration rates likely to occur with their clients (Dew & Riemer, 2003; Lambert, 2010). Walfish, McAlister, O’Donnell, and Lambert (2010) found that Practitioners on average rated their overall clinical skills and effectiveness at the 80th percentile – a statistical impossibility. Even worse, less than 4% considered themselves average, and not a single person in the study rated his or her performance below average. Practitioners overestimating their personal effectiveness puts clients at risk of higher rates of dropout and negative outcomes.
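
The "statistical impossibility" is simple arithmetic: percentile rank measures relative standing within a group, so the mean percentile rank of any cohort is the 50th, whatever the underlying distribution of skill. A minimal simulation sketch (the simulated skill scores are an arbitrary illustration, not data from Walfish et al.):

```python
import random

# Whatever the distribution of "true skill", the mean percentile rank of a
# cohort is always ~50th -- so a cohort that rates itself at the 80th
# percentile on average must be overestimating.
random.seed(1)
skills = sorted(random.gauss(0, 1) for _ in range(10_000))  # arbitrary skill scores

# After sorting, practitioner i has exactly i colleagues scoring below them.
ranks = [100 * i / len(skills) for i in range(len(skills))]
print(f"Mean percentile rank: {sum(ranks) / len(ranks):.1f}")  # ~50.0
```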

During their careers, in the absence of routine client outcome measurement, Practitioners actually plateau in their effectiveness after about 50 hours of front-line practice.

Practitioners acclimatize to their settings, rely more on the specific methods and strategies in which they were trained or with which they are more comfortable, and become more confident in what they believe to be true about their clientele. Although these and other Practitioner factors may benefit specific clients in specific situations, they more often contribute to a plateauing of Practitioner effectiveness. Practitioners need to establish personal baselines of effectiveness and employ reliable and valid methods to monitor and track client feedback on outcomes and the alliance, in order to improve on those baselines.
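
As a sketch of what establishing a personal baseline can look like, the snippet below computes a practitioner's mean pre/post change on a routine outcome measure and flags clients whose change exceeds Jacobson and Truax's (1991) reliable change criterion. The scores, scale standard deviation, and reliability figure are hypothetical placeholders, not values from any named instrument:

```python
import math

# Hypothetical intake/discharge scores for one practitioner's caseload
# (higher = better wellbeing). Placeholders, not data from a real measure.
pre_scores  = [12.0, 15.0, 9.0, 14.0, 11.0, 13.0]
post_scores = [22.0, 18.0, 15.0, 25.0, 12.0, 21.0]

SCALE_SD = 7.0      # assumed normative standard deviation of the measure
RELIABILITY = 0.85  # assumed test-retest reliability of the measure

# Jacobson & Truax (1991): a pre-post difference larger than
# 1.96 * sqrt(2) * SE is unlikely (p < .05) to be measurement error.
se = SCALE_SD * math.sqrt(1 - RELIABILITY)
reliable_change = 1.96 * math.sqrt(2) * se

changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
baseline = sum(changes) / len(changes)
improved = sum(c > reliable_change for c in changes)

print(f"Personal baseline (mean change): {baseline:.1f}")
print(f"Reliable change threshold:       {reliable_change:.1f}")
print(f"Clients reliably improved:       {improved}/{len(changes)}")
```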

Miller (2011) summarized the impact of routinely monitoring and using outcome and alliance data from 13 RCTs involving 12,374 clinically, culturally, and economically diverse consumers and found:

• Routine outcome monitoring and feedback as much as doubles the "effect size" (reliable and clinically significant change);

• Decreases dropout rates by as much as half;

• Decreases deterioration by 33%;

• Reduces hospital stays and shortens length of stay by 66%;

• Significantly reduces the cost of care compared to non-feedback groups (which increased in cost).

Additional evidence indicates that regular, session-by-session feedback (as opposed to feedback at less frequent intervals, e.g., every third session, or pre- and post-service; Warren et al., 2010) is more effective in improving outcomes and reducing dropout.

Matching client service delivery to culture (e.g. by Maori, for Maori): Fact, or Fake?

The Industry says:

"Fact: In order for Maori, Pacific Island, Asian, or other cultures to gain the most from client service delivery, services should be established and delivered by people of their own culture".

The Evidence says:

Fake.

Despite years of effort, including scores of randomized trials and meta-analyses, experts conclude, "Current evidence does not offer a solution to the issue of which components of cultural adaptation are effective, for what population, and whether cultural adaptation works better than non-cultural adaptation" (1). The logic of this outcome is self-evident: the number of permutations and adaptations available would quickly become unmanageable for service providers, and too expensive for funders to support. Such initiatives as cultural service matching are simply an exercise in patronising virtue signalling, not evidence.
