DSM launches Sustell™, an intelligent sustainability service to drive improvements in the environmental footprint and profitability of animal protein production
Kaiseraugst, CH, 06 May 2021 12:00 CET. Royal DSM, a global science-based company active in Nutrition, Health and Sustainable Living, has launched Sustell™, a first-of-its-kind intelligent sustainability service that delivers accurate,…
On Data and Informatics For Value-Based Healthcare
By SALLY LEWIS, MD
Introduction
Value-based healthcare is gaining popularity as an approach to increase sustainability in healthcare. It has its critics, possibly because its roots are in a health system where part of the drive for a hospital to improve outcomes is to increase market share by being the best at what you do. This is not really a solution for improving population health and does not translate well to publicly-funded healthcare systems such as the NHS. However, when we put aside dogma about how we would wish to fund healthcare, value-based healthcare provides us with a very useful set of tools with which to tackle some of the fundamental problems of sustainability in delivering high quality care.
What is value?
As defined by Professor Michael Porter at Harvard Business School, value is a function of outcomes and costs. Therefore, to achieve high value we must deliver the best possible outcomes in the most efficient way, and they must be outcomes which matter from the perspective of the individual receiving healthcare, not provider process measures or targets. Sir Muir Gray expands on the idea of technical value (outcomes/costs) to specifically describe ‘personal value’ and ‘allocative value’, encouraging us to focus also on shared decision making, individual preferences for care and ensuring that resources are allocated for maximum value. This article seeks to demonstrate that the role of data and informatics in supporting value-based care goes much further than the collection and remote analysis of big datasets – in fact, the true benefit sits much closer to the interaction between clinician and patient.
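Porter’s definition is often written out as a simple ratio; a minimal rendering of the relationship described above (illustrative notation only, not taken from the original article) is:

```latex
\[
\text{Value} = \frac{\text{outcomes that matter to the patient}}{\text{cost of delivering those outcomes across the whole pathway}}
\]
```

Gray’s ‘personal’ and ‘allocative’ value then ask, respectively, whether the numerator reflects what the individual actually wants and whether the resources in the denominator are deployed where they yield the most value overall.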
Data collection – costing and outcomes
Costing

Costing of healthcare for value should be done for the whole patient journey. This is important as it is not possible for value to be created in a service alone – it has to be assessed in terms of the outcomes delivered relative to the investment in all possible interventions for a particular population, whether this is a finite episode of care such as a cataract pathway or the costs attached to a population living with a chronic disease such as Parkinson’s disease. In the latter case, we can take a ‘year of care’ approach to the costing, recognising that there are subpopulations with varying needs within the overall caseload. It is very important to identify these cohorts of patients so that their unmet needs can be characterised and quantified. All too often we adopt a ‘one size fits all’ service for all groups, resulting in no-one’s needs being entirely met and creating a really unsatisfactory experience for patients and their carers. Outcome data, including patient-reported outcomes, can inform this.
Improving value for patients through improving outcomes and containing costs can only be achieved through flexible approaches to meeting these needs, by avoiding over- and under-intervention, both of which contribute to poorer outcomes and experience of care.
Two main methods of costing are described and a blend of the two is probably required to gain a full picture. The first is patient-level costing defined by the HFMA as ‘allocating costs, where possible, to an individual patient. Assigning costs to individual patients provides opportunities for a much greater understanding of how costs are built up. The systems that gather this information are known as patient-level information and costing systems (PLICS).’
The second method is time-driven activity-based costing (TDABC), again defined by the HFMA as ‘a costing method used by some to improve the accuracy of cost estimates for processes and interventions. It requires organisations to estimate the staff, equipment and time for each step of a process, the total costs associated with the staff involved and the time a patient will spend at each step of the process.’
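To make the approach concrete, here is a minimal sketch (hypothetical step names, times and rates, not drawn from HFMA guidance or any real dataset) of how a time-driven activity-based estimate for one patient’s pathway might be built up step by step and then rolled up to a patient-level figure:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    cost_per_minute: float  # hypothetical capacity cost rate for the resource used (£/min)
    minutes: float          # estimated time the patient spends at this step

def step_cost(step: Step) -> float:
    """TDABC cost of one step: resource rate multiplied by time consumed."""
    return step.cost_per_minute * step.minutes

# A toy cataract-style pathway; every figure is invented for illustration.
pathway = [
    Step("pre-operative assessment", 1.20, 30),
    Step("surgery", 6.50, 25),
    Step("post-operative review", 1.20, 15),
]

patient_level_cost = sum(step_cost(s) for s in pathway)
print(f"Estimated cost of this patient's pathway: £{patient_level_cost:.2f}")
```

Summing such per-patient estimates across a defined cohort is, in spirit, what a blended PLICS/TDABC view aims at, as the next paragraph describes.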
A blended approach of PLICS, TDABC and cohorting is particularly helpful because, although we often talk about clinical activity in terms of disease pathways, people’s experience of healthcare is frequently non-linear and complex. We therefore need to obtain a feel for overall programme spend – this is challenging but essential if we are to allocate resource properly in order to improve outcomes for the long term. This is something we often fail to do, given that decision-making about resource allocation is often driven by targets rather than need, and it is one of the main reasons why we have failed to increase investment in primary care despite the rhetoric.
Outcomes
What is an outcome? Too often in healthcare, what we describe as an outcome simply isn’t, at least not from the perspective of a patient. ‘An outcome can be defined as a milestone, endpoint or consequence which matters to a person.’ Complete outcome datasets typically contain four domains:
1. Case mix variables
2. Treatment variables
3. Clinically reported outcomes
4. Patient-reported outcomes
Generally, all four domains must be brought together to achieve robust analysis of a big dataset that is properly risk-adjusted; the approach and methodology adopted by the International Consortium for Health Outcomes Measurement (ICHOM) is very useful here. However, big data analysis still comes with a health warning and a whole load of assumptions built in, as it does not tell us anything about the individual preferences and goals of those contributing to the data.
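As a rough illustration of how those four domains might sit side by side in a single record before risk adjustment (the field names and values below are invented for this sketch and are not ICHOM’s), consider:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class OutcomeRecord:
    # 1. Case mix variables (used for risk adjustment)
    age: int
    comorbidities: List[str]
    # 2. Treatment variables
    intervention: str
    # 3. Clinically reported outcomes
    clinical: Dict[str, float] = field(default_factory=dict)
    # 4. Patient-reported outcomes (e.g. a PROM score)
    patient_reported: Dict[str, float] = field(default_factory=dict)

record = OutcomeRecord(
    age=72,
    comorbidities=["type 2 diabetes"],
    intervention="cataract surgery",
    clinical={"post_op_visual_acuity": 0.8},
    patient_reported={"visual_function_score": 85.0},
)
```

Only when records like this are aggregated and risk-adjusted across many patients do the dataset-level analyses described above become meaningful, and even then they say nothing about any one person’s goals.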
Typically, aggregated PROMs data have tended to be used to look at clinical effectiveness, but this misses their usefulness in wider applications. A fuller appreciation of these applications is essential to inform the technology needed to support their capture and to maximise the opportunities for healthcare improvement, i.e. to give us all of the user stories. These applications of outcome data fall broadly into two groups:
a) Longitudinal tracking of chronic disease
b) Episodic care, particularly surgical pathways
Prioritisation of issues by the individual

One of the first things observed when capturing patient-reported outcomes in this context is that patients develop the ability to prioritise and rank the most important issues to be addressed in a particular consultation from their perspective. This is a useful aide memoire for the conversation and improves the patient experience, facilitating a two-way exchange of knowledge, expectations and goals. It also tends to facilitate the broaching of issues which are more sensitive and difficult to talk about.
Supporting shared decision making
Tracking PROM data can also be useful as a more objective assessment of the impact of an intervention such as a new medication.
As data accumulate, we will have the ability to use ‘real world’ outcome data to reflect back to patients information better tailored to their own context (rather than that of an idealised population studied in a randomised controlled trial). This contextualisation aids decision making, enabling people to make choices more likely to help them reach their goals and to think about the trade-offs between two different courses of action.
Supporting new models of care

As described in the section on costing, we frequently identify cohorts within a population living with a condition. Each cohort has varying levels of need. As we shall explore later in the informatics section, outcome measures (combined with the correct IT functionality) can form an important part of developing new approaches to more flexible models of care, e.g. virtual monitoring.
Triggers for key decision points

Longitudinal tracking of outcome data reveals trajectories of disease progression over time and can therefore act as a trigger or prompt for key clinical decisions, such as when to discuss anticipatory care planning or when to intervene to prevent hospital admission.
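A toy sketch of how a longitudinal PROM series could act as such a prompt is below; the scores, threshold and rule are invented for illustration, and nothing like this should be read as a validated clinical trigger:

```python
def flag_for_review(prom_scores: list[float], drop_threshold: float = 10.0) -> bool:
    """Flag a patient for clinical review if the latest PROM score has fallen
    more than drop_threshold points below their best recorded score.
    Purely illustrative; any real rule would need clinical validation."""
    if len(prom_scores) < 2:
        return False
    return max(prom_scores[:-1]) - prom_scores[-1] > drop_threshold

# A gradually declining trajectory that crosses the illustrative threshold
history = [78, 75, 74, 70, 64]
if flag_for_review(history):
    print("Prompt: consider an anticipatory care planning discussion")
```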
Needs assessment

At a service and programme level, aggregated outcome data allow for the identification of population needs. The characterisation and quantification of those unmet needs aids service planning through the identification of the previously mentioned cohorts and, crucially, the allocation of resource. Value for patients cannot usually be delivered by a single service and needs a system-wide approach, so the marrying up of costing and outcome data is important here. We can then tailor services more properly to need, e.g. by separating out those with new diagnoses, those who are stable on maintenance therapy, and those with complex and high-level needs.
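A minimal sketch of what that cohorting step might look like in practice, assuming a hypothetical Parkinson’s caseload with invented fields and thresholds:

```python
from collections import Counter

def assign_cohort(patient: dict) -> str:
    """Assign a patient to one of three illustrative cohorts; the rule is invented."""
    if patient["months_since_diagnosis"] < 12:
        return "newly diagnosed"
    if patient["complex_needs"]:
        return "complex / high-level needs"
    return "stable on maintenance therapy"

caseload = [
    {"id": 1, "months_since_diagnosis": 4, "complex_needs": False},
    {"id": 2, "months_since_diagnosis": 60, "complex_needs": True},
    {"id": 3, "months_since_diagnosis": 30, "complex_needs": False},
]

print(Counter(assign_cohort(p) for p in caseload))
```

Each cohort can then be costed (for example, with the year-of-care approach described earlier) and matched against the unmet needs its outcome data reveal.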
Larger datasets

Larger datasets can be further triangulated with costing and process data for benchmarking purposes to inform efficiency and effectiveness, which is also important for monitoring patient safety.
Episodic care
For a finite episode of care, outcome data can also inform shared decision making, which is particularly important in preference-sensitive clinical scenarios, e.g. where an invasive intervention may be undertaken for symptom control.
PROMs data are already in use to look at clinical variation and quality improvement. Aggregate data of this sort do come with a health warning – even when adequately risk-adjusted, they do not tell us about patient preferences or goals. In other words, a relatively poor outcome score in the data may actually have been the best thing in the world from the perspective of the individual receiving the treatment…if it helped them meet their limited goals.
Assessing value across a surgical pathway also assumes that this was ‘the right thing to do’, i.e. that the same outcome could not have been achieved through different management approaches. High quality does not necessarily equate to high value.
Informatics and IT
It is probably becoming obvious that the role of informatics in supporting value-based healthcare goes far beyond the simple collection of patient-reported outcome measures and the subsequent analysis of big datasets (though these are important features).
As ever, much of the success of the informatics comes down to early and intensive consideration of the needs of the users of the technology – patients and their clinicians in the main. What problems are we trying to solve? People need confidence that their data is safe and to understand how it will (and will not) be used. People need to feel that completing an outcome questionnaire sent to them at home is essential to their direct care and not spam communication! The way we develop this two-way communication between people and their clinical teams is absolutely critical to the development of new models of care and to giving people more flexible access to their clinicians and their medical records.
On the clinical side, there are now clear recommendations on the standards which should be applied to medical records and the linking of clinical information, as well as standards for information governance. Both of these topics are mentioned because of their importance, but are beyond the scope of this article.
Finally, there is an urgent need to develop and expand our analytical capability in healthcare so that we measure, combine, analyse, present and utilise data effectively. Otherwise we run the risk of ‘not being able to see the wood for the trees…’, and let’s not forget that this data does not belong to us. It belongs to our patients and needs to be treated with the utmost respect.
National Clinical Director for Value-based Healthcare. GP. Honorary Chair, Swansea Med School. @RslewsiSally
When Coaching Teachers Has Curiosity As Its Primary Goal
How can school leaders push for innovation when every year 15 to 20 percent of the teaching staff turns over, along with a similar number of students? High turnover rates make it difficult to hold on to institutional knowledge, and even worse, the rationale for systems can become murky. Schools end up continuing practices they’ve always used out of inertia; the person who implemented an idea, and who can defend its importance, may have even left. These conditions make for a difficult environment in which to lead change.
The British International School in Shanghai is by many measures a successful school. The 1,500 students represent 45 nationalities in preschool through a high school International Baccalaureate program and are often the children of expatriates or wealthy Chinese families. But because it is an international school it has significant turnover. Principal Neil Hopkin needed to find a way to continue pushing his teachers to improve within an environment that wasn’t naturally oriented towards change.
‘When we tried to boil down what we were looking for — it was helping our colleagues rediscover their curiosity.’ (Dr. Neil Hopkin, Principal of the British International School in Shanghai)
Hopkin and other school leaders wanted to get outside the realm of their own experience; if they relied too heavily on what they “knew,” nothing would change. Hopkin was looking for something that would bring a spirit of innovation into the teaching culture. He was looking for ways to push beyond the known. When a vice-president at a pharmaceutical company reached out to Hopkin offering to connect as a coach as part of his company’s local outreach, Hopkin leapt at the chance. Through the process he was struck by how effective the coaching felt, and how applicable it was to a teaching context. The experience helped him settle on working to improve teaching practice at the school through a coaching model.
“Our innovation was that we wanted to make teaching and learning better, but we wanted to be better at making teaching and learning better,” Hopkin said during a presentation on the coaching model his school uses at the Building Learning Communities conference in Boston. That statement is both completely mundane, and the essential struggle of schools everywhere. Every school leader wants a sustainable, effective way to help teachers improve, but finding that process is much more difficult than it sounds because each teacher is an individual, with specific strengths and weaknesses. And every teacher reacts to change differently.
The coaching model they devised is devoid of judgement. Coaches — those with an interest in coaching and a senior or middle leadership role — try to position themselves as thought partners for the teacher, starting before the lesson even happens. The coach and teacher set a goal, devise a plan together, execute the plan, and while it’s happening the coach tries to give in-the-moment observations.
“The key to this isn’t just the sense of community that coaches have, it’s the moment of intervention during practice,” said Victoria Soloway, director of teaching and learning at the school. That’s what makes this type of coaching different from the standard lesson observation leaders used to do. In a typical observation, the teacher doesn’t get any feedback until after the lesson is over, and then it usually feels evaluative. At the British International School of Shanghai, Hopkin and Soloway are trying to create an experience among the staff that helps teachers notice their own teaching moves as they happen in the classroom.
“We’re trying to get to a space where someone coming into your space isn’t the expert,” Hopkin said. “You as the teacher are still the expert with responsibility and control for your professional experience.” They didn’t want to follow a checklist of teaching practices or try to emulate the teaching approach of one star teacher. Instead, their goal was to help each teacher become a star in their own unique way.
“When we tried to boil down what we were looking for — it was helping our colleagues rediscover their curiosity,” Hopkin said. He wanted teachers asking themselves questions like: Why does my lesson go this way? Why don’t I like this kind of student? Why did this go so well? “We wanted them to see the world with awe and wonder,” he said.
Ironically, this focus on curiosity, learning mindsets, hands-on experiences, and reflection is exactly what teachers at this school offer to students. But they weren’t as comfortable engaging in the same process around their own professional learning. Hopkin hopes the coaching model he’s developing — one based around partnership and shared responsibility between coach and teacher for the fate of the lesson — will help teachers shift their mindsets about change.

WHAT DOES IT LOOK LIKE?
Hopkin and Soloway quickly found that they needed to be flexible with their colleagues, many of whom were uncomfortable with this new approach to teacher professional development. The two leaders wanted coaching sessions to be positive and supportive, personalized and teacher-centered, challenging and reflective, non-evaluative and retrievable. But they also knew many teachers had not experienced that type of coaching before and were wary of anything that seemed like an evaluation.
“What they’re getting is something that’s right for them not only in terms of what they should be working on, but also in how they like to learn and how they like to feel,” Hopkin said. He wants the experience to support teachers as they become even more independent and autonomous in their practice. To do that, he needed them to feel completely safe, so he made it very clear to the staff that nothing from coaching sessions would ever come up in annual reviews.
“No one will take a risk if they feel like they will be kicked in the teeth for landing on their face when trying something new,” Hopkin said.
All coaching sessions start with a pre-lesson meeting when the coach and teacher talk about goals for the lesson and try to anticipate questions or obstacles that might arise. Together they practice how the teacher might address those scenarios. The coach videotapes the lesson with a camera mounted on a Swivel to track the teacher as he or she moves about the classroom. Depending on the teacher’s comfort with in-class coaching, the coach may also be in the room quietly offering observations or questions at the point of practice. For teachers who hate that idea, Hopkin and Soloway offer asynchronous coaching based on the video footage. No matter which style of coaching happens, the coach and teacher meet again to reflect on what happened during the lesson.
When Soloway coaches, she likes to flag moments in the video with voiceover, pointing out strong moments or areas where she noticed something. Sometimes she’ll offer resources to the teacher to help further their thinking on the goal they’ve identified. Soloway says it’s important that if the lesson goes poorly the coach shares the blame, but if the lesson goes wonderfully, it’s best to give the teacher all the credit.
The pre-lesson materials, video and reflection make the experience a retrievable one. Teachers can revisit the video or the notes on their own time. Hopkin admitted it took teachers time to get used to this coaching model, and many were skeptical at first. Teachers were also at very different points in their professional learning. Some were already experimental, pushing beyond what felt comfortable regularly and wondering “what if” as a matter of practice. Others needed more coaching on the nuts and bolts, with the coach continually referring back to a schoolwide “Teaching and Learning Principles” document.
In those cases, Hopkin said “it’s an interplay between are we working on the basics or are we working on something more bespoke to you?” But because the coach is working to help teachers notice and correct course autonomously, he or she is often only asking questions and leaving space for the teacher to develop her own thought or next action.
“It’s painfully slow because you’re just going to ask questions that you want the teacher to ask themselves,” Hopkin said. This type of professional learning is akin to what many schools want teachers to provide students. Leaders at British International School of Shanghai learned that using the same teaching pedagogy with professional learners can be an effective way to shift instruction.
After one year of coaching in this way, Hopkin surveyed staff to see if they could substantiate how the coaching had improved their teaching. Teachers reported a 30 percent increase in their confidence, and they felt they were saving 40 percent of their time because they had to do less re-teaching. School leaders also looked at an array of measures like student grades and test scores, student interaction rates, questioning skill surveys, student attention rates, collaborative learning conversations, etc. to try and determine if the quality of learning had increased. Using information from teachers, students, and leaders they compiled those results and determined that the quality of learning increased by 70 percent.
A WHOLE SCHOOL INNOVATION EFFECT
In the process of helping individual teachers embrace small innovations to their teaching, the school as a whole has become more able to embrace change. Before the coaching program began, the International School of Shanghai already had positive things happening, and Hopkin wanted to retain those experiences. But he also wanted to inspire innovation and he’s aware that “efficiency suffocates new thinking. The better you are at something the less likely you are to be open to something different.”
So he framed the coaching sessions as little experiments, each in service of the broader school strategy. A teacher would make a hypothesis, experiment with it in the classroom, reflect on the insights garnered and how it connected to the larger community goals. The results of those experiments then became data points for broader decisions school leaders were considering.
“We wanted to have a pioneering spirit from people who are well seated in the more traditional paradigm,” Soloway said. They tried to allow teachers to move up and back along the innovation spectrum, with each person offering important insights to the learning community. They did not only celebrate the pioneering teachers, but also the pioneering spirit of very traditional ones.
“TeachMeets” are one way school leaders celebrate individual learning as a community. Any person who wants to share what they are working on and how it’s going can do so. The rest of the “audience” — other colleagues — move around to different speakers depending on what interests them. Multiple mini-presentations are going on at once, with the audience moving between them fluidly. This practice helped spread insights beyond grade-level or subject-specific teams. It creates a positive buzz in a staff meeting and individuals can follow up with questions afterwards.
“We’re looking for people to make that international statement of learning: aahh,” Hopkin said. If coaching can help stimulate curiosity in teachers to continue improving and trying new things, then it has done its job in his mind.
When Coaching Teachers Has Curiosity As Its Primary Goal published first on http://ift.tt/2y2Rir2
0 notes
perfectzablog · 6 years
Text
When Coaching Teachers Has Curiosity As Its Primary Goal
How can school leaders push for innovation when every year 15 to 20 percent of the teaching staff turns over, along with a similar number of students? High turnover rates make it difficult to hold on to institutional knowledge, and even worse, the rationale for systems can become murky. Schools end up continuing practices they’ve always used out of inertia; the person who implemented an idea, and who can defend its importance, may have even left. These conditions make for a difficult environment in which to lead change.
The British International School in Shanghai is by many measures a successful school. The 1,500 students represent 45 nationalities in preschool through a high school International Baccalaureate program and are often the children of expatriates or wealthy Chinese families. But because it is an international school it has significant turnover. Principal Neil Hopkin needed to find a way to continue pushing his teachers to improve within an environment that wasn’t naturally oriented towards change.
‘When we tried to boil down what we were looking for — it was helping our colleagues rediscover their curiosity.’Dr. Neil Hopkin, Principal of British International School in Shanghai
Hopkin and other school leaders wanted to get outside the realm of their own experience; if they relied too heavily on what they “knew,” nothing would change. Hopkin was looking for something that would bring a spirit of innovation into the teaching culture. He was looking for ways to push beyond the known. When a vice-president at a pharmaceutical company reached out to Hopkin offering to connect as a coach as part of his company’s local outreach Hopkin leapt at the chance. And through the process he was struck by how effective the coaching felt, and how applicable it was to a teaching context. The experience helped him settle on working to improve teaching practice at the school through a coaching model.
“Our innovation was that we wanted to make teaching and learning better, but we wanted to be better at making teaching and learning better,” Hopkin said during a presentation on the coaching model his school uses at the Building Learning Communities conference in Boston. That statement is both completely mundane, and the essential struggle of schools everywhere. Every school leader wants a sustainable, effective way to help teachers improve, but finding that process is much more difficult than it sounds because each teacher is an individual, with specific strengths and weaknesses. And every teacher reacts to change differently.
The coaching model they devised is devoid of judgement. Coaches — those with an interest in coaching and a senior or middle leadership role — try to position themselves as thought partners for the teacher, starting before the lesson even happens. The coach and teacher set a goal, devise a plan together, execute the plan, and while it’s happening the coach tries to give in-the-moment observations.
“The key to this isn’t just the sense of community that coaches have, it’s the moment of intervention during practice,” said Victoria Soloway, director of teaching and learning at the school. That’s what makes this type of coaching different from the standard lesson observation leaders used to do. In a typical observation, the teacher doesn’t get any feedback until after the lesson is over, and then it usually feels evaluative. At the British International School of Shanghai, Hopkin and Soloway are trying to create an experience among the staff that helps teachers notice their own teaching moves as they happen in the classroom.
“We’re trying to get to a space where someone coming into your space isn’t the expert,” Hopkin said. “You as the teacher are still the expert with responsibility and control for your professional experience.” They didn’t want to follow a checklist of teaching practices or try to emulate the teaching approach of one star teacher. Instead, their goal was to help each teacher become a star in their own unique way.
“When we tried to boil down what we were looking for — it was helping our colleagues rediscover their curiosity,�� Hopkin said. He wanted teachers asking themselves questions like: Why does my lesson go this way? Why don’t I like this kind of student? Why did this go so well? “We wanted them to see the world with awe and wonder,” he said.
Ironically, this focus on curiosity, learning mindsets, hands-on experiences, and reflection are exactly what teachers at this school offer to students. But they weren’t as comfortable engaging in the same process around their own professional learning. Hopkin hopes the coaching model he’s developing — one based around partnership and shared responsibility between coach and teacher for the fate of the lesson — will help teachers shift their mindsets about change. WHAT DOES IT LOOK LIKE
Hopkin and Soloway quickly found that they needed to be flexible with their colleagues, many of whom were uncomfortable with this new approach to teacher professional development. The two leaders wanted coaching sessions to be positive and supportive, personalized and teacher-centered, challenging and reflective, non-evaluative and retrievable. But they also knew many teachers had not experienced that type of coaching before and were wary of anything that seemed like an evaluation.
“What they’re getting is something that’s right for them not only in terms of what they should be working on, but also in how they like to learn and how they like to feel,” Hopkin said. He wants the experience to support teachers as they become even more independent and autonomous in their practice. To do that, he needed them to feel completely safe, so he made it very clear to the staff that nothing from coaching sessions would ever come up in annual reviews.
“No one will take a risk if they feel like they will be kicked in the teeth for landing on their face when trying something new,” Hopkin said.
All coaching sessions start with a pre-lesson meeting when the coach and teacher talk about goals for the lesson and try to anticipate questions or obstacles that might arise. Together they practice how the teacher might address those scenarios. The coach videotapes the lesson with a camera mounted on a Swivel to track the teacher as he or she moves about the classroom. Depending on the teacher’s comfort with in-class coaching, the coach may also be in the room quietly offering observations or questions at the point of practice. For teachers who hate that idea, Hopkin and Soloway offer asynchronous coaching based on the video footage. No matter which style of coaching happens, the coach and teacher meet again to reflect on what happened during the lesson.
When Soloway coaches, she likes to flag moments in the video with voiceover, pointing out strong moments or areas where she noticed something. Sometimes she’ll offer resources to the teacher to help further their thinking on the goal they’ve identified. Soloway says it’s important that if the lesson goes poorly the coach shares the blame, but if the lesson goes wonderfully, it’s best to give the teacher all the credit.
The pre-lesson materials, video and reflection make the experience a retrievable one. Teachers can revisit the video or the notes on their own time. Hopkin admitted it took teachers time to get used to this coaching model and many were skeptical at first. Teachers were also at very different points in their professional learning. Some were already experimental, pushing beyond what felt comfortable regularly and wondering “what if” as a matter of practice. Others, needed more coaching on the nuts and bolts, with the coach continually referring back to a schoolwide “Teaching and Learning Principles” document.
In those cases, Hopkin said “it’s an interplay between are we working on the basics or are we working on something more bespoke to you?” But because the coach is working to help teachers notice and correct course autonomously, he or she is often only asking questions and leaving space for the teacher to develop her own thought or next action.
“It’s painfully slow because you’re just going to ask questions that you want the teacher to ask themselves,” Hopkin said. This type of professional learning is akin to what many schools want teachers to provide students. Leaders at British International School of Shanghai learned that using the same teaching pedagogy with professional learners can be an effective way to shift instruction.
After one year of coaching in this way, Hopkin surveyed staff to see if they could substantiate how the coaching had improved their teaching. Teachers reported a 30 percent increase in their confidence, and they felt they were saving 40 percent of their time because they had to do less re-teaching. School leaders also looked at an array of measures like student grades and test scores, student interaction rates, questioning skill surveys, student attention rates, collaborative learning conversations, etc. to try and determine if the quality of learning had increased. Using information from teachers, students, and leaders they compiled those results and determined that the quality of learning increased by 70 percent.
A WHOLE SCHOOL INNOVATION EFFECT
In the process of helping individual teachers embrace small innovations to their teaching, the school as a whole has become more able to embrace change. Before the coaching program began, the International School of Shanghai already had positive things happening, and Hopkin wanted to retain those experiences. But he also wanted to inspire innovation and he’s aware that “efficiency suffocates new thinking. The better you are at something the less likely you are to be open to something different.”
So he framed the coaching sessions as little experiments, each in service of the broader school strategy. A teacher would make a hypothesis, experiment with it in the classroom, reflect on the insights garnered and how it connected to the larger community goals. The results of those experiments then became data points for broader decisions school leaders were considering.
“We wanted to have a pioneering spirit from people who are well seated in the more traditional paradigm,” Soloway said. They tried to allow teachers to move up and back along the innovation spectrum, with each person offering important insights to the learning community. They did not only celebrate the pioneering teachers, but also the pioneering spirit of very traditional ones.
“TeachMeets” are one way school leaders celebrate individual learning as a community. Any person who wants to share what they are working on and how it’s going can do so. The rest of the “audience” — other colleagues — move around to different speakers depending on what interests them. Multiple mini-presentations are going on at once, with the audience moving between them fluidly. This practice helped spread insights beyond grade-level or subject-specific teams. It creates a positive buzz in a staff meeting and individuals can follow up with questions afterwards.
“We’re looking for people to make that international statement of learning: aahh,” Hopkin said. If coaching can help stimulate curiosity in teachers to continue improving and trying new things, then it has done its job in his mind.
When Coaching Teachers Has Curiosity As Its Primary Goal published first on http://ift.tt/2xi3x5d
0 notes
kristinsimmons · 6 years
Text
On Data and Informatics For Value-Based Healthcare
By SALLY LEWIS, MD
Introduction
Value-based healthcare is gaining popularity as an approach to increase sustainability in healthcare. It has its critics, possibly because its roots are in a health system where part of the drive for a hospital to improve outcomes is to increase market share by being the best at what you do. This is not really a solution for improving population health and does not translate well to publicly-funded healthcare systems such as the NHS. However, when we put aside dogma about how we would wish to fund healthcare, value-based healthcare provides us with a very useful set of tools with which to tackle some of the fundamental problems of sustainability in delivering high quality care.
What is value?
Defined by Professor Michael Porter at Harvard Business School, value is defined as a function of outcomes and costs. Therefore to achieve high value we must deliver the best possible outcomes in the most efficient way, outcomes which matter from the perspective of the individual receiving healthcare and not provider process measures or targets. Sir Muir Gray expands on the idea of technical value (outcomes/costs) to specifically describe ‘personal value’ and ‘allocative value’, encouraging us to focus also on shared decision making, individual preferences for care and ensuring that resources are allocated for maximum value. This article seeks to demonstrate that the role of data and informatics in supporting value-based care goes much further than the collection and remote analysis of big datasets – in fact, the true benefit sits much closer to the interaction between clinician and patient.
Data collection – costing and outcomes
Costing Costing of healthcare for value should be done for the whole patient journey. This is important as it is not possible for value to be created in a service alone – it has to be assessed in terms of the outcomes delivered relative to the investment in all possible interventions for a particular population, whether this is a finite episode of care such as a cataract pathway or the costs attached to a population living with a chronic disease such as Parkinson’s disease. In the latter case, we can take a ‘year of care’ approach to the costing, recognising that there are subpopulations with varying needs within the overall caseload. It is very important to identify these cohorts of patients so that their unmet needs can be characterised and quantified. All too often we adopt a ‘one size fits all’ service for all groups resulting in no-one’s needs being entirely met and creating a really unsatisfactory experience for patients and their carers. Outcome data, including patient-reported outcomes, can inform this.
Improving value for patients through improving outcomes and containing costs can only be achieved through flexible approaches to meeting these needs, by avoiding over- and under- intervention , both of which contribute to poorer outcomes and experience of care.
Two main methods of costing are described and a blend of the two is probably required to gain a full picture. The first is patient-level costing defined by the HFMA as ‘allocating costs, where possible, to an individual patient. Assigning costs to individual patients provides opportunities for a much greater understanding of how costs are built up. The systems that gather this information are known as patient-level information and costing systems (PLICS).’
The second method is Time-driven activity-based costing (TDABC) again defined by the HFMA as ‘a costing method used by some to improve the accuracy of cost estimates for processes and interventions. It requires organisations to estimate the staff, equipment and time for each step of a process, the total costs associated with the staff involved and the time a patient will spend at each step of the process.’
A blended approach of PLICs , TDABC and cohorting is particularly helpful as, although we often talk about clinical activity in terms of disease pathways, peoples’ experience of healthcare is frequently non-linear and complex. We therefore need to obtain a feel for overall programme spend – this is challenging but essential if we are to allocate resource properly in order to improve outcomes for the long term. This is something we often fail to do given that decision-making about resource allocation is often driven by targets rather than need, and is one of the main reasons why we have failed to increase investment in primary care despite the rhetoric.
Outcomes
What is an outcome? Too often in healthcare what we describe as an outcome simply isn’t at least not from the perspective of a patient. ‘An outcome can be defined as a milestone, endpoint or consequence which matters to a person.’ Complete outcome datasets typically contain four domains:
1. Case mix variables 2. Treatment variables 3. Clinically reported outcomes 4. Patient-reported outcomes
Generally all four domains must be brought together to achieve robust analysis of a big dataset that is properly risk adjusted -the approach and methodology adopted by the International Consortium for Healthcare Outcomes Measurement (ICHOM) is very useful here. However, big data analysis still comes with a health warning and a whole load of assumptions built in, as it does not tell us anything about the individual preferences and goals of those individuals contributing to the data.
Typically, aggregated PROMS data have tended to be used to look at clinical effectiveness but this misses their usefulness in wider applications. A fuller appreciation of these applications is essential to inform the technology needed to support their capture and maximise the opportunities for healthcare improvement. ie to give us all of the user stories. These nuances of application of outcome data fall broadly into two groups:</d
a) Longitudinal tracking of chronic disease b) Episodic care, particularly surgical pathways c) Longitudinal tracking of chronic disease d). prioritisation of issues by the individual
One of the first things to be observed when capturing patient-reported outcomes in this context is the development of the ability for patients to prioritise and rank the most important issues to be addressed in a particular consultation from their perspective. This is a useful aide memoire for the conversation  and improves the patient experience, facilitating a two way exchange of knowledge, expectations and goals. This also tends to facilitate the broaching of issues which are more sensitive and difficult to talk about.
supporting shared decision making
Tracking PROM data can also be useful as a more objective assessment of the impact of an intervention such as a new medication.
As data accumulates we will have the ability to utilise ‘real world’ outcome data in reflecting back to patients information better tailored to their own context (rather than that of an idealised population studied in a randomised controlled trial). This contextualisation aids decision making, enabling people to make choices more likely to help then reach their goals and to think about the trade offs between two different courses of action.
support new models of care As described in the section on costing, we frequently identify cohorts within a population living with a condition. Each cohort has varying levels of need. As we shall explore later in the informatics section, outcome measures (combined with the correct IT functionality) can form an important part of developing new approaches to more flexible models of care eg virtual monitoring.
Triggers for key decision points

Longitudinal tracking of outcome data reveals trajectories of disease progression over time and can therefore act as a trigger or prompt for key clinical decisions, such as when to discuss anticipatory care planning or intervene to prevent hospital admission.
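As a rough illustration of how such a trigger might be automated, the sketch below flags a declining PROM trajectory. The drop threshold and window are invented values; clinically meaningful settings would need to be agreed per condition and per instrument.

```python
def flag_for_review(scores: list[float], drop_threshold: float = 15.0, window: int = 3) -> bool:
    """Flag a patient whose PROM trajectory has declined by at least
    `drop_threshold` points across the last `window` assessments.
    Thresholds here are illustrative assumptions only."""
    if len(scores) < window:
        return False
    recent = scores[-window:]
    return (recent[0] - recent[-1]) >= drop_threshold

# A declining trajectory might prompt an anticipatory care planning conversation.
print(flag_for_review([68, 66, 64, 47]))  # True: a 19-point fall over the last three scores
print(flag_for_review([68, 67, 66, 65]))  # False: trajectory is broadly stable
```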
Needs assessment

At a service and programme level, aggregated outcome data allows for the identification of population needs. The characterisation and quantification of those unmet needs aids service planning through the identification of the previously mentioned cohorts and, crucially, the allocation of resource. Value for patients cannot usually be delivered by a single service and needs a system-wide approach, so the marrying up of costing and outcome data is important here. We can then tailor services more properly to need, e.g. by separating out those with new diagnoses, those who are stable on maintenance therapy, and those with complex and high-level needs.
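A simple, purely illustrative sketch of how a caseload might be segmented into the cohorts described above is shown here; the criteria, thresholds and field names are assumptions rather than validated rules.

```python
def assign_cohort(months_since_diagnosis: int, stable: bool, needs_score: int) -> str:
    """Very simple illustrative segmentation of a chronic-disease caseload.
    Real criteria would be agreed clinically and informed by outcome data."""
    if months_since_diagnosis <= 12:
        return "new diagnosis"
    if stable and needs_score < 3:
        return "stable on maintenance therapy"
    return "complex / high-level needs"

caseload = [
    {"id": "A", "months_since_diagnosis": 6, "stable": True, "needs_score": 1},
    {"id": "B", "months_since_diagnosis": 40, "stable": True, "needs_score": 1},
    {"id": "C", "months_since_diagnosis": 40, "stable": False, "needs_score": 5},
]

for patient in caseload:
    cohort = assign_cohort(patient["months_since_diagnosis"], patient["stable"], patient["needs_score"])
    print(patient["id"], cohort)
```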
Larger datasets

Larger datasets can be further triangulated with costing and process data for benchmarking purposes to inform efficiency and effectiveness; this is also important for monitoring patient safety.
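A minimal sketch of that triangulation follows, assuming aggregated outcome and costing data are already available per service. The figures are invented, and 'PROM points gained per £1,000' is only one of many possible ways to express value (outcomes relative to costs) for benchmarking.

```python
# Benchmarking sketch: join aggregated outcome and costing data per service
# and express value as outcome gained per £1,000 spent. Figures are invented.
services = {
    "Service A": {"mean_prom_gain": 22.0, "mean_cost": 4200.0},
    "Service B": {"mean_prom_gain": 18.0, "mean_cost": 2600.0},
}

for name, data in services.items():
    value = data["mean_prom_gain"] / (data["mean_cost"] / 1000)
    print(f"{name}: {value:.1f} PROM points gained per £1,000")
```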
Episodic care
For a finite episode of care, outcome data can also inform shared decision making. This is particularly important in preference-sensitive clinical scenarios, e.g. where an invasive intervention may be undertaken for symptom control.
PROMs data is already in use to look at clinical variation and quality improvement. Aggregate data of this sort does come with a health warning – even when adequately risk-adjusted it does not tell us about patient preferences or goals. In other words, in data terms, a relatively poor outcome score may actually have been the best thing in the world from the perspective of the individual receiving the treatment… if it helped them meet their limited goals.
Assessing value across a surgical pathway also assumes that this was 'the right thing to do', i.e. that the same outcome could not have been achieved through different management approaches. High quality does not necessarily always equate to high value.
Informatics and IT
It is probably becoming obvious that the role of informatics in supporting value-based healthcare goes far beyond the simple collection of patient-reported outcome measures and the subsequent analysis of big datasets (though these are important features).
As ever, much of the success of the informatics comes down to early and intensive consideration of the needs of the users of the technology – patients and their clinicians in the main. What problems are we trying to solve? People need confidence that their data is safe and to understand how it will (and will not) be used. People need to feel that completing an outcome questionnaire sent to them at home is essential to their direct care and not spam communication! The way we develop this two-way communication between people and their clinical teams is absolutely critical to the development of new models of care and to enabling people more flexible access to their clinicians and their medical records.
On the clinical side, there are now clear recommendations on the standards which should be applied to medical records and the linking of clinical information, as well as standards for information governance. Both of these topics are mentioned because of their importance, but are beyond the scope of this article.
Finally, there is an urgent need to develop and expand our analytical capability in healthcare so that we measure, combine, analyse, present and utilise data effectively. Otherwise we run the risk of 'not being able to see the wood for the trees'… and let's not forget that this data does not belong to us. It belongs to our patients and needs to be treated with the utmost respect.
National Clinical Director for Value-based Healthcare. GP. Honorary Chair, Swansea Med School. @RslewsiSally