Skills decline – a problem on the rise

Over the two-year course of our Osler journey, my business partner Jeff has told many people who in the hospital he'd want looking after him if he needed a procedure performed: the senior registrar.

 

As Jeff sees it, senior registrars are about as sharp skills-wise as they are ever going to get.  They do the most procedures, they learned the most recently and they are yet to be cloaked by an air of invincibility.

 

And he’s not far off the mark.

 

But what it highlights is an increasingly recognised phenomenon – skills attrition in consultants.

 

In many procedural specialties, there is an almost precipitous drop off in exposure to invasive procedures from the day you pass your fellowship exam.  There is a changing of the guard, where those who once did, now supervise.  Add to that the competition for access to increasingly rare opportunities and there is little doubt that emergency physicians, retrievalists, rural generalists and intensivists are starved of exposure.

 

In truth, the problem has probably been quietly suspected for some time, but as an industry we've been more inclined to turn a blind eye to it, because the solution presents an even bigger problem – if we're all diminishing in our skills capacity, what on earth are we going to do about it?

 

But the problem is now becoming too big to ignore.  Andrew Tagg, an emergency physician from Melbourne, wrote about this recently.  Access to opportunities to perform procedures is becoming so rare that inevitably we are all deskilling.

 

So what to do?

 

The first step in any quality assurance process is to measure.  Any textbook on clinical audit will tell you the three key areas that we can measure – activity, process and outcome.

 

The first should be easy.  Documenting our activity is an important first step in detecting gaps in our experience.  There is a fairly clear relationship between recency of performance and ability to execute, so it makes sense to track the volume and timing of our activity.
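
To make this concrete, here is a minimal sketch of what a personal log of procedural activity, tracking volume and recency, might look like. It is purely illustrative and not drawn from the Osler platform; the class, field names and time window are assumptions.

```python
from collections import defaultdict
from datetime import date, timedelta
from typing import Dict, List, Optional

class ProcedureLog:
    """Illustrative personal log of procedures performed, tracking volume and recency."""

    def __init__(self) -> None:
        self._entries: Dict[str, List[date]] = defaultdict(list)

    def record(self, procedure: str, performed_on: date) -> None:
        """Record one performance of a procedure on a given date."""
        self._entries[procedure].append(performed_on)

    def volume(self, procedure: str, window_days: int = 365) -> int:
        """How many times the procedure was performed within the last window_days."""
        cutoff = date.today() - timedelta(days=window_days)
        return sum(1 for d in self._entries[procedure] if d >= cutoff)

    def days_since_last(self, procedure: str) -> Optional[int]:
        """Days since the procedure was last performed, or None if never logged."""
        dates = self._entries[procedure]
        return (date.today() - max(dates)).days if dates else None

# Example with made-up data
log = ProcedureLog()
log.record("central venous catheter insertion", date(2016, 3, 14))
log.record("central venous catheter insertion", date(2016, 9, 2))
print(log.volume("central venous catheter insertion"),
      log.days_since_last("central venous catheter insertion"))
```

Even a record this simple is enough to flag a skill that has not been exercised for, say, twelve months and prompt a supervised refresher.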

 

The second examines our method.  Is it really too much to ask to submit ourselves to periodic review of our performance by our peers? Is there a better way to validate that my practice is consistent with modern standards?  While inevitably there are logistical challenges with this style of approach, the potential benefit in safety terms more than justifies applying it.

 

Finally, and most problematic of all, is measuring outcomes.  It's difficult for many reasons, not the least of which are standardising definitions, accurate data collection (particularly of delayed outcomes) and the relatively low incidence of complications for most things we do.
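
A rough worked example, using entirely hypothetical numbers, illustrates that last point: when complications are rare, an individual clinician's outcome data carries very wide statistical uncertainty. The sketch below computes a Wilson score interval around an observed complication rate.

```python
import math
from typing import Tuple

def wilson_interval(events: int, n: int, z: float = 1.96) -> Tuple[float, float]:
    """Approximate 95% confidence interval for a proportion (events out of n trials)."""
    if n == 0:
        return (0.0, 1.0)
    p = events / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return (max(0.0, centre - half_width), min(1.0, centre + half_width))

# Hypothetical: 1 complication in 50 procedures is an observed rate of 2%,
# but the interval spans roughly 0.4% to 10.5% - far too wide to judge performance.
print(wilson_interval(1, 50))
# With 20 complications in 1000 procedures the interval narrows to about 1.3%-3.1%.
print(wilson_interval(20, 1000))
```

In other words, meaningful outcome measurement for uncommon complications only becomes possible when data is pooled across many clinicians and sites, which is exactly why standardised definitions and consistent collection matter.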

 

We should not refuse to measure ourselves because we are afraid of what it might tell us.  The more mature response is to find out where our limitations lie, and find a solution.

 

We owe that much to our patients.

 

The old adage is that “Not all that is important can be measured, and not all that can be measured is important.” However, there is plenty that can be measured and is of value to us.

 

We owe it to our patients to try.

Osler’s novel clinical governance platform recognised again

Osler Technology is delighted to have been recognised by the industry-leading Learning Technologies Awards in London in November 2016.  The LTAs are long established as the peak awards within the online learning and training community, and Osler's clinical governance platform placed second in the Novel Use of Technology – International category.

Learning Technologies Awards has recognised Osler's clinical governance platform

The award reinforces the sound learning principles upon which the platform is based.  Ensuring learners have access to high-quality feedback, in a format they can easily reflect on, is critical for busy professionals.  The platform also incorporates an Entrustable Professional Activities model of assessment, designed to better identify learners who need more assistance.

Work-readiness was a key theme in healthcare clinical governance this year, with the release of a stream of reports and papers identifying healthcare error as a major source of patient morbidity.  Osler is an immediate solution to many common governance issues.

2016 was a big year for Osler, with the completion of our commercialisation phase, implementation for our first set of customers, successful applications for grant funding from Federal and State Governments, and recognition at international awards and conferences.  But it's only the beginning…

We look forward to working with all our supporters and customers in 2017.

Stephen Duckett’s report highlights need for new thinking on clinical governance

Background

The recently released Duckett report was commissioned following a cluster of perinatal deaths at Djerriwarrh Health Service in 2013-14, as a review of the role of the DHHS in detecting and managing critical safety risks and clinical governance across the system.

 

The report highlights that while Victorians have a right to assume that healthcare is generally of a high quality across the system, there continue to be significant deficiencies in the system's defences against avoidable patient injury.

 

The report cites a 2014-15 review of hospital-acquired diagnoses in the Victorian healthcare system, which concluded that “complications of care are far from rare in our hospitals”.  In fact, more than 300,000 patients per year suffer a complication in Victorian hospitals, at least 70,000 of which are potentially preventable, such as malnutrition or pressure ulcers. Many of these result in fatalities.

 

This issue, to say the least, is huge.

 

The Duckett report was commissioned to review the role of the DHHS in preventing these events.

 

Essentially, the report calls for the department to better support Victorian Health Services in providing a high level of local clinical governance on safety and quality, to monitor outcome data more closely, and to respond more effectively when things go wrong.

Major Findings

To this end, the report makes several important recommendations, among them:

 

  1. The department must set a clear example to the wider health service that this issue is its number one priority.
  2. The system should place less emphasis on meeting accreditation standards and greater emphasis on outcome monitoring.
  3. The department should better support the boards of health services by reviewing the appointment and training processes for board members, to ensure they can effectively oversee hospital governance.
  4. The performance of health services should be monitored more effectively, by making better use of available data and filling gaps where they exist.
  5. The use of data should be improved, so that the entire system can benefit and learnings are better shared.
  6. All hospitals should be open to periodic external review.
  7. Hospitals should be held to account for providing only care that falls within their scope of capability.
  8. Consumers and front-line clinicians must have a louder voice in the quality assurance process.

 

What does this all mean?

Promoting a culture of transparency, accountability and, most importantly, trust is an excellent start, and to this end the department is to be congratulated for the example it has set.

 

As Dr Duckett himself points out, the department acted immediately to support Djerriwarrh in protecting its patients, investigating the cluster of deaths and engaging in an open disclosure process.  It then sought prompt external review of its own role in the process and made the results public immediately.  It is a high-level demonstration of the transparent accountability required at clinician level.

 

The recommendations of the Duckett review create a better environment for patient safety across the wider health system, but this is only half the battle. Left unanswered is how health systems prevent patient injury in the first place.  Albeit beyond the scope of the review, here lies the bulk of the improvement opportunity – what are hospitals and health services, their clinical managers and individual clinicians supposed to do to improve patient care?  The report provides few answers.

 

If meaningful change is to be made, clinical staff will be the ones to make it, and they can only do so with the right tools and information at their disposal.

 

Without appropriate tools, processes and culture in place, no amount of oversight will achieve the department's lofty goal of zero preventable patient injury.  This is where ready-made systems like Osler play a role, taking the high-level principles identified in the Duckett review and applying them at the individual patient, clinical manager and clinician levels.

How can Osler contribute to improved clinical governance?

Osler provides an opportunity for hospitals to be proactive in their patient safety efforts.  As Dr Duckett points out, hospitals should be operating within their defined scope of practice.  The problem is, few clinical managers have sufficiently granular visibility of their activity to enable this to occur.  Using Osler to ensure all staff are adequately trained in invasive procedures, non-technical skills and basic equipment familiarity helps manage this clear and apparent risk.

 

By providing real-time, meaningful and comparative data on clinical proficiency, complication rates and currency of practice, Osler enables hospitals to identify and respond to limitations in service levels and patient care.

 

And by creating a collaborative environment for clinicians, Osler can distribute these essential learnings across the healthcare system so that Victorians, indeed all Australians, can be treated more safely.

 

 

“You have to be willing to acknowledge your problems before you can remedy them. If I were to characterise the state of public and private hospital care in the state of Victoria, I’d have to say that this first step is lacking. Both the public and private hospital systems and the government regulators who oversee them are in a state of denial with regard to the level of harm being caused to the public by inadequate attention to quality and safety deficiencies.”

Paul Levy

former president and CEO of Beth Israel Deaconess Medical Centre in Boston, Massachusetts

Deakin University, Thinker in Residence, 2016

Lambert Schuwirth: Where are we now with Competency-Based Training in Healthcare?

Like so many other disciplines, medical education has its vogues: topics, ideas and approaches that become popular so quickly and so widely that some refer to them as ‘hypes’.

 

I must confess that I too think that some developments in medical education are hypes, and I cannot escape the impression that some are solutions in search of a problem. I am not happy with such hypes, because they make it more difficult to defend medical education as a serious and scientific discipline, and they do my discipline harm.

 

Sometimes they even lead to very costly mistakes. The epitome of such a mistake was the use of long, branched scenario simulations for high-stakes, summative testing of clinical reasoning in the 1970s and 1980s. What seemed to be a good and valid approach turned out to be inherently flawed, but that was only discovered after millions of dollars had been spent on projects and numerous candidates had been failed on the basis of faulty assessments. This, by the way, had nothing to do with a lack of computers, virtual reality or imagination, but was due to a lack of understanding of what clinical reasoning actually is – in other words, a lack of fundamental research. But this is long behind us and medical education has progressed as a discipline.

 

So where do we put the development of competency-based education? Is it in the ‘hype’ basket or the ‘genuine step forward’ basket? I would argue that it is a step in the right direction, but I would also argue that we are not there yet. Let me explain why I think this way.

 

Traditional Model

A typical traditional model of medical competence describes it as the combination of knowledge, skills, problem-solving ability and attitudes, assuming that each of them could be separately taught and separately assessed. Each of these four pillars was more or less seen as a personality trait: relatively stable and relatively generic. So, for example, one could have good problem-solving ability and no knowledge, or vice versa. In addition, it was assumed that for each of them there was one single best assessment method; vivas, for example, would be best for clinical reasoning, OSCEs best for skills, and so on.

 

Unfortunately, robust research findings do not support this intuitive model. Scores, for example, generalise much better within the same content across formats than across content within the same format; one study showed that a knowledge test on skills correlated much better with an actual OSCE station on the same content domain than two different OSCE stations within the same examination correlated with each other. Numerous studies have demonstrated that the validity of an assessment is determined by what is asked, rather than by whether it is an open-ended or multiple-choice question. These and other findings meant that the traditional model of ‘knowledge’, ‘skills’, ‘problem-solving ability’ and ‘attitudes’ was no longer tenable, and a new, more integrative view of competence had to be developed: one based on the finding that content matters rather than format, one that seeks to combine information that is content-similar rather than format-similar.

 

In addition to this, medical curricula were often structured according to the way healthcare itself is organised. In this structure, a competent graduate is seen as someone who has successfully completed all the individual discipline-based examinations and successfully navigated all the discipline-based clerkships, but without integration between them.

 

Of course, patients are not naturally ‘structured’ the way healthcare is structured; they may have undifferentiated complaints, combinations of morbidity, polypharmacy, and so on.  So here, too, there is a need to reorganise our curricula, preferably with the patient as the integrating factor rather than the organisation of secondary and tertiary healthcare.

 

Competency-Based Learning

Competency-based education has been suggested as a way out of these dilemmas. It suggests that the structure of education and the definition of outcomes are better organised around content themes than around quasi-personality traits, putting patients and the management of patients at the heart of the process. Competencies, although they have been defined in numerous ways, can generally be seen as simple to complex tasks that a successful candidate must be able to handle, using the correct and relevant knowledge, skills, attitudes and meta-cognitions at the right time to manage the situation successfully.

 

Are we there yet? No, there is still considerable ground that has to be covered before we have truly competency-based education and some changes in thinking may need to take place.

 

The first is a change from a reductionist to a holistic view. Our education and assessment often deconstruct competence into little teachable and assessable pixels and then struggle to put them back together to see the whole picture. That is why arbitrary decisions have to be made about cut-off scores, about how to combine assessment results, and so on. In a more holistic approach the whole picture is described first, and when needed the individual pixels can be inspected. For assessment this means that competence is not a combination of measurements, but a narrative in which measurement information is used. This is not unlike a patient's condition, which is a narrative in which lab values can be used.

 

The second is a change from linear to more complex, dynamic views. Many essential aspects of competence cannot be modelled in a one-size-fits-all way. Feedback is not given in exactly the same way to all students in all situations; it is an adaptable and largely unpredictable process. The same applies to healthcare: our students should not be taught simple tricks. They are not expected to deal with every patient in every situation in exactly the same way, but need to be adaptable and agile so as to cater to every – often unexpected – situation. No doctor can predict exactly what will be said 2 minutes and 45 seconds into a consultation, but they can be pretty confident that their reaction will be correct and that they will not overstep boundaries. Our education and assessment approaches will increasingly need to reflect this complexity and uncertainty.

 

The third is to do away with our fear of qualitative information. In assessment, specifically, it is often thought that numbers are objective and therefore reliable. Yet even in research, the typical quantitative article still contains many more words than numbers, because the words are needed to make sense of the numbers.  Even the often-quoted Lord Kelvin, who argued that “numbers or measurements are better than words”, apparently needed words to make that claim (it was not a mathematical derivation but a rhetorical argument). I am not saying that we need to do away with numbers; quite the opposite. But we do need to understand that while the collection of data can happen objectively, any interpretation involves subjective human judgement, and we should not fear it. Is this a plea to be arbitrary and capricious with assessment and education? No, it is not: just as objectivity does not imply reliability, subjectivity does not imply unreliability, and much has been written about this.

 

So, in summary, education and assessment need to be of high quality, transparent, meaningful and valid, and the developments towards competency-based education and competency-based (programmatic) assessment are a step closer towards optimising the quality of the education of our future doctors.

 

 

About the author

Lambert Schuwirth is a Professor of Medical Education and Director of the Prideaux Centre for Research in Health Professions Education at Flinders University in Adelaide, Australia.

 

“How many of these have you done?”

A recent opinion piece in the Journal of the American Medical Association drew attention to the issue of procedural experience in healthcare.

 

Titled “How many have you done?”, the piece described the experience of a doctor who required a procedure herself, in this case, an amniocentesis.

 

Of course, what the patient was really asking is, “How can I be reassured you know what you are doing?”

 

The thrust of the piece was that the training doctor who performed the procedure had felt compelled to misrepresent his experience with the technique, deftly deflecting questions from the patient and her partner that probed his competence.  The author calls for a more honest response to these types of questions, while acknowledging that this is often difficult to do.

 

But is it any wonder a young doctor has trouble answering this question?

 

Healthcare continues to battle with the issue of competency.  It is still rare for doctors to be formally certified to perform specific procedures.  In fact, the industry still does not have a shared understanding of what competency actually is!

 

Furthermore, because it is uncommon for doctors to assiduously record their activity and outcome data, and rarer still for them to benchmark against their peers, most clinicians are simply oblivious to their own performance level.

 

So when patients are searching for reassurance that they will be cared for as best they can be, most of us struggle to be clear and meaningful in our response.  Because most of the time, we just don’t know.

 

Wouldn’t it be much better for the junior doctor to answer with authority?

 

“Well, I’ve completed a recognized pathway and been certified to practice after a period of supervision by experts.  Furthermore, I continuously review my performance results and feel comfortable that I’m doing well.”

Enough is enough

The June 2016 edition of Clinical Communique (a periodic report released by the Victorian Institute of Forensic Medicine) once again highlights the issues facing procedural healthcare.  It describes three recent coronial inquests into patients who succumbed to complications from central access devices: a fatal myonecrosis, a pericardial tamponade and a carotid placement resulting in a stroke.  Multiple issues were identified in the insertion and subsequent management of these devices.

 

My problem with this is that we've heard it all before.  There is nothing new in these recommendations, yet the incidents keep happening.  And it's far too simplistic to think of the clinical staff involved as “bad apples”.  The simple fact is, they are not.  They are hard-working, intelligent, dedicated, diligent and well-intentioned.  In fact, they'll no doubt be completely traumatised by the experience.  So why does this keep happening?

 

Off the top of my head, let's start with the following:
a) a lack of agreement on the best approach (even though there ARE existing guidelines)
b) failure to communicate guidelines effectively to those at the coalface
c) resistance by clinicians to embracing the best available evidence
d) a total lack of a structured accreditation process for the insertion of lines (and most other invasive procedures)
e) systemic failure to share learnings like these on a wide enough scale

 

Surely it's time our industry got its act together and did something meaningful to overcome these barriers.

 

If you're interested, here's the report.

 

About the author

Dr Todd Fraser is an intensivist and retrieval physician, and the co-founder of the Osler Clinical Performance Platform, dedicated to improving certification and training in acute healthcare.

Development update

The recent expansion of our Melbourne-based development team has allowed for significant progress in the past 6 months, culminating in the release of version 2.0 of our platform this month.

Version 2.0 includes important enhancements to key functionality such as procedure logging and the My Training section.

It also sees the release of our mobile-enabled Assessments platform, enabling workplace-based evaluation of clinical skills, procedural competence and equipment certification.  Structured assessments ensure that all staff are provided with objective and consistent feedback on their performance, improving skills and knowledge acquisition.


The Assessment framework supports many of our purpose-built training plans, such as:

  • Basic and Advanced Life Support skills
  • Basic procedures
  • Certification in key equipment
  • Falls risk assessment and prevention

Assessments can also be custom built to suit local needs.

Combined with our mobile procedure supervision and evaluation, Osler Clinical Performance can enable vastly improved clinical governance and provide a real-time solution to your credentialing and compliance requirements.

Osler Clinical Performance Version 2 is available for providers and institutions now.  If you'd like to trial Osler for 30 days, you can set up your own demo version via the AppExchange, or contact us at info@oslertechnology.com.

How do you set yourself apart in an increasingly competitive intern market?

There once was a time when medical students could look forward to a guaranteed intern position.

Those days appear over – at least, in many jurisdictions.

Competition for intern places is increasing dramatically, and in some regions a surplus of new graduates is leaving some unable to secure a position at all.

Highlighting the problem is information recently released by the Flinders and Adelaide Medical Student Societies, supported by the Australian Medical Students Association, demonstrating that current models suggest a shortfall of 87 intern positions in South Australia within just two years. Almost half of those affected are domestic graduates.

Figure from post issued by FMSS / AMSS / AMSA


In other areas, intern places are still guaranteed, but competition for the more coveted training sites remains intense. So much so, in fact, that many medical students are actively encouraged by their tutors and lecturers to build their curriculum vitae from the day they begin medical school.

As one hospital Chief Medical Officer recently stated, interns are increasingly “vanilla flavoured, and the challenge is to look like boysenberry.”

Of course, there are many ways to do so, including research, community service and other pursuits.

Another is developing a portfolio of skills that better illustrates the credentials a candidate brings to the table, demonstrating both a willingness to develop their abilities and evidence of their “work readiness”.

And with the concept of revalidation increasingly discussed among the world’s regulatory authorities such as the Medical Board of Australia, the routine documentation of activity, outcomes and complications may become an important habit to develop.


If you'd like to trial the Osler platform for individuals, you can register to become part of the Osler Community here.

 

Hospitals interested in trialling the Osler platform can do so here.

 

The 3rd leading cause of patient death might well be the hospital system itself

Another week, another report illustrating the harm that the healthcare industry inadvertently causes to its patients.

 

This week the British Medical Journal published a report by renowned patient safety champion Marty Makary that examines the role of healthcare error in mortality.

 

Healthcare error takes many forms, including:

  • Unintended acts (either commission or omission)
  • Execution errors
  • Interpretation and synthesis errors
  • Planning errors, and
  • Deviations from processes of care

 

The report highlights the lack of visibility surrounding healthcare error. Annual cause-of-death data is often compiled from death certificates and coding, based on classifications such as International Classification of Diseases (ICD) codes.  Such systems do not routinely account for healthcare error, so it does not feature on these annual lists.

 

The fact that healthcare error results in patient deaths comes as no surprise to most practicing clinicians, many of whom have either witnessed patient deaths related to management errors, or even been a part of the process themselves. And it’s not a pleasant experience.

 

What should be startling is that the issue has been recognised for so long, yet little progress seems to have been made. It is now over 15 years since the seminal work of Lucian Leape and colleagues (1) highlighted that as many as 98,000 American lives are lost each year to iatrogenic factors in hospitals, with countless more injured. Many of these deaths are thought to be potentially preventable.

 

Despite the furore that the To Err is Human report generated, little seems to have changed. Subsequent reviews (2-6) have estimated that between 200,000 and 400,000 US deaths can be connected to healthcare error annually.

 

Extrapolating from the published literature, Makary and colleagues suggest that, if these estimates are accurate, iatrogenic causes of death would rank third on the all-cause mortality table in the US (behind heart disease and cancer).

 

Were healthcare error viewed as a disease, it is hard to imagine that widespread public awareness campaigns, fundraisers and dedicated research would not follow.

 

No one who works in healthcare could possibly suggest that a zero patient death rate due to iatrogenesis is possible. The healthcare system is almost as complex as the humans it cares for. It’s clear that the vast majority of patients who traverse the system are well cared for by highly motivated and caring individuals, and the results are usually positive.

 

That notwithstanding, healthcare needs to re-evaluate its approach to safety. The external perception of the acute care industry is that of a High-Reliability Organisation, where safety is prioritised above all other factors. Industries such as oil and gas exploration, aviation, and nuclear power have demonstrated that this approach can reduce injury to almost zero.

 

The serial failure of our industry to embrace standard, risk-averse behaviours contributes greatly to the harm it generates:

 

  • Failure of orientation
  • Failure to validate procedural competence
  • Failure to ensure equipment familiarization
  • Failure to embed policy and procedural change
  • Failure to embrace literature and national standards
  • Failure to embrace technology that can improve and enhance safety standards
  • Failure to ensure safe working hours
  • Failure of process documentation and audit
  • Failure of communication
  • Failure to measure and report outcomes transparently
  • Failure to ensure critical incident learnings are widely distributed
  • Failure to report and investigate “near miss events”
  • Failure to create a no-blame culture

 

Applying these principles to the healthcare sector will inevitably create tensions and encounter barriers to implementation, but the first step is an acceptance that we can do better.

 

“I think doctors and nurses and other medical professionals are the heroes of the patient safety movement and come up with creative innovations to fix the problems,” he said. “But they need the support from the system to solve these problems and to help us help improve the quality of care.”

Marty Makary (source: CNN)

 

 

References

  1. Kohn LT, Corrigan JM, Donaldson MS. To err is human: building a safer health system. National Academies Press, 1999.
  2. Leape LL, Lawthers AG, Brennan TA, Johnson WG. Preventing medical injury. Qual Rev Bull 1993;19:144-9. PMID: 8332330.
  3. HealthGrades quality study: patient safety in American hospitals. 2004. http://www.providersedge.com/ehdocs/ehr_articles/Patient_Safety_in_American_Hospitals-2004.pdf
  4. Department of Health and Human Services. Adverse events in hospitals: national incidence among Medicare beneficiaries. 2010. http://oig.hhs.gov/oei/reports/oei-06-09-00090.pdf
  5. Classen D, Resar R, Griffin F, et al. Global “trigger tool” shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff 2011;30:581-9. doi:10.1377/hlthaff.2011.0190
  6. American Hospital Association. Fast facts on US hospitals. 2015. http://www.aha.org/research/rc/stat-studies/fast-facts.shtml

 

About the author

Dr Todd Fraser is a passionate campaigner for patient safety through better process.  He is an Intensivist and Retrieval Physician, and co-founder of Osler Technology.