Site logo

  • Case Study Evaluation Approach
  • Learning Center

A case study evaluation approach can be an incredibly powerful tool for monitoring and evaluating complex programs and policies. By identifying common themes and patterns, this approach allows us to better understand the successes and challenges faced by the program. In this article, we’ll explore the benefits of using a case study evaluation approach in the monitoring and evaluation of projects, programs, and public policies.

Table of Contents

Introduction to Case Study Evaluation Approach

The advantages of a case study evaluation approach, types of case studies, potential challenges with a case study evaluation approach, guiding principles for successful implementation of a case study evaluation approach.

  • Benefits of Incorporating the Case Study Evaluation Approach in the Monitoring and Evaluation of Projects and Programs

A case study evaluation approach is a great way to gain an in-depth understanding of a particular issue or situation. This type of approach allows the researcher to observe, analyze, and assess the effects of a particular situation on individuals or groups.

An individual, a location, or a project may serve as the focal point of a case study’s attention. Quantitative and qualitative data are frequently used in conjunction with one another.

It also allows the researcher to gain insights into how people react to external influences. By using a case study evaluation approach, researchers can gain insights into how certain factors such as policy change or a new technology have impacted individuals and communities. The data gathered through this approach can be used to formulate effective strategies for responding to changes and challenges. Ultimately, this monitoring and evaluation approach helps organizations make better decision about the implementation of their plans.

This approach can be used to assess the effectiveness of a policy, program, or initiative by considering specific elements such as implementation processes, outcomes, and impact. A case study evaluation approach can provide an in-depth understanding of the effectiveness of a program by closely examining the processes involved in its implementation. This includes understanding the context, stakeholders, and resources to gain insight into how well a program is functioning or has been executed. By evaluating these elements, it can help to identify areas for improvement and suggest potential solutions. The findings from this approach can then be used to inform decisions about policies, programs, and initiatives for improved outcomes.

It is also useful for determining if other policies, programs, or initiatives could be applied to similar situations in order to achieve similar results or improved outcomes. All in all, the case study monitoring evaluation approach is an effective method for determining the effectiveness of specific policies, programs, or initiatives. By researching and analyzing the successes of previous cases, this approach can be used to identify similar approaches that could be applied to similar situations in order to achieve similar results or improved outcomes.

A case study evaluation approach offers the advantage of providing in-depth insight into a particular program or policy. This can be accomplished by analyzing data and observations collected from a range of stakeholders such as program participants, service providers, and community members. The monitoring and evaluation approach is used to assess the impact of programs and inform the decision-making process to ensure successful implementation. The case study monitoring and evaluation approach can help identify any underlying issues that need to be addressed in order to improve program effectiveness. It also provides a reality check on how successful programs are actually working, allowing organizations to make adjustments as needed. Overall, a case study monitoring and evaluation approach helps to ensure that policies and programs are achieving their objectives while providing valuable insight into how they are performing overall.

By taking a qualitative approach to data collection and analysis, case study evaluations are able to capture nuances in the context of a particular program or policy that can be overlooked when relying solely on quantitative methods. Using this approach, insights can be gleaned from looking at the individual experiences and perspectives of actors involved, providing a more detailed understanding of the impact of the program or policy than is possible with other evaluation methodologies. As such, case study monitoring evaluation is an invaluable tool in assessing the effectiveness of a particular initiative, enabling more informed decision-making as well as more effective implementation of programs and policies.

Furthermore, this approach is an effective way to uncover experiential information that can help to inform the ongoing improvement of policy and programming over time All in all, the case study monitoring evaluation approach offers an effective way to uncover experiential information necessary to inform the ongoing improvement of policy and programming. By analyzing the data gathered from this systematic approach, stakeholders can gain deeper insight into how best to make meaningful and long-term changes in their respective organizations.

Case studies come in a variety of forms, each of which can be put to a unique set of evaluation tasks. Evaluators have come to a consensus on describing six distinct sorts of case studies, which are as follows: illustrative, exploratory, critical instance, program implementation, program effects, and cumulative.

Illustrative Case Study

An illustrative case study is a type of case study that is used to provide a detailed and descriptive account of a particular event, situation, or phenomenon. It is often used in research to provide a clear understanding of a complex issue, and to illustrate the practical application of theories or concepts.

An illustrative case study typically uses qualitative data, such as interviews, surveys, or observations, to provide a detailed account of the unit being studied. The case study may also include quantitative data, such as statistics or numerical measurements, to provide additional context or to support the qualitative data.

The goal of an illustrative case study is to provide a rich and detailed description of the unit being studied, and to use this information to illustrate broader themes or concepts. For example, an illustrative case study of a successful community development project may be used to illustrate the importance of community engagement and collaboration in achieving development goals.

One of the strengths of an illustrative case study is its ability to provide a detailed and nuanced understanding of a particular issue or phenomenon. By focusing on a single case, the researcher is able to provide a detailed and in-depth analysis that may not be possible through other research methods.

However, one limitation of an illustrative case study is that the findings may not be generalizable to other contexts or populations. Because the case study focuses on a single unit, it may not be representative of other similar units or situations.

A well-executed case study can shed light on wider research topics or concepts through its thorough and descriptive analysis of a specific event or phenomenon.

Exploratory Case Study

An exploratory case study is a type of case study that is used to investigate a new or previously unexplored phenomenon or issue. It is often used in research when the topic is relatively unknown or when there is little existing literature on the topic.

Exploratory case studies are typically qualitative in nature and use a variety of methods to collect data, such as interviews, observations, and document analysis. The focus of the study is to gather as much information as possible about the phenomenon being studied and to identify new and emerging themes or patterns.

The goal of an exploratory case study is to provide a foundation for further research and to generate hypotheses about the phenomenon being studied. By exploring the topic in-depth, the researcher can identify new areas of research and generate new questions to guide future research.

One of the strengths of an exploratory case study is its ability to provide a rich and detailed understanding of a new or emerging phenomenon. By using a variety of data collection methods, the researcher can gather a broad range of data and perspectives to gain a more comprehensive understanding of the phenomenon being studied.

However, one limitation of an exploratory case study is that the findings may not be generalizable to other contexts or populations. Because the study is focused on a new or previously unexplored phenomenon, the findings may not be applicable to other situations or populations.

Exploratory case studies are an effective research strategy for learning about novel occurrences, developing research hypotheses, and gaining a deep familiarity with a topic of study.

Critical Instance Case Study

A critical instance case study is a type of case study that focuses on a specific event or situation that is critical to understanding a broader issue or phenomenon. The goal of a critical instance case study is to analyze the event in depth and to draw conclusions about the broader issue or phenomenon based on the analysis.

A critical instance case study typically uses qualitative data, such as interviews, observations, or document analysis, to provide a detailed and nuanced understanding of the event being studied. The data are analyzed using various methods, such as content analysis or thematic analysis, to identify patterns and themes that emerge from the data.

The critical instance case study is often used in research when a particular event or situation is critical to understanding a broader issue or phenomenon. For example, a critical instance case study of a successful disaster response effort may be used to identify key factors that contributed to the success of the response, and to draw conclusions about effective disaster response strategies more broadly.

One of the strengths of a critical instance case study is its ability to provide a detailed and in-depth analysis of a particular event or situation. By focusing on a critical instance, the researcher is able to provide a rich and nuanced understanding of the event, and to draw conclusions about broader issues or phenomena based on the analysis.

However, one limitation of a critical instance case study is that the findings may not be generalizable to other contexts or populations. Because the case study focuses on a specific event or situation, the findings may not be applicable to other similar events or situations.

A critical instance case study is a valuable research method that can provide a detailed and nuanced understanding of a particular event or situation and can be used to draw conclusions about broader issues or phenomena based on the analysis.

Program Implementation Program Implementation

A program implementation case study is a type of case study that focuses on the implementation of a particular program or intervention. The goal of the case study is to provide a detailed and comprehensive account of the program implementation process, and to identify factors that contributed to the success or failure of the program.

Program implementation case studies typically use qualitative data, such as interviews, observations, and document analysis, to provide a detailed and nuanced understanding of the program implementation process. The data are analyzed using various methods, such as content analysis or thematic analysis, to identify patterns and themes that emerge from the data.

The program implementation case study is often used in research to evaluate the effectiveness of a particular program or intervention, and to identify strategies for improving program implementation in the future. For example, a program implementation case study of a school-based health program may be used to identify key factors that contributed to the success or failure of the program, and to make recommendations for improving program implementation in similar settings.

One of the strengths of a program implementation case study is its ability to provide a detailed and comprehensive account of the program implementation process. By using qualitative data, the researcher is able to capture the complexity and nuance of the implementation process, and to identify factors that may not be captured by quantitative data alone.

However, one limitation of a program implementation case study is that the findings may not be generalizable to other contexts or populations. Because the case study focuses on a specific program or intervention, the findings may not be applicable to other programs or interventions in different settings.

An effective research tool, a case study of program implementation may illuminate the intricacies of the implementation process and point the way towards future enhancements.

Program Effects Case Study

A program effects case study is a research method that evaluates the effectiveness of a particular program or intervention by examining its outcomes or effects. The purpose of this type of case study is to provide a detailed and comprehensive account of the program’s impact on its intended participants or target population.

A program effects case study typically employs both quantitative and qualitative data collection methods, such as surveys, interviews, and observations, to evaluate the program’s impact on the target population. The data is then analyzed using statistical and thematic analysis to identify patterns and themes that emerge from the data.

The program effects case study is often used to evaluate the success of a program and identify areas for improvement. For example, a program effects case study of a community-based HIV prevention program may evaluate the program’s effectiveness in reducing HIV transmission rates among high-risk populations and identify factors that contributed to the program’s success.

One of the strengths of a program effects case study is its ability to provide a detailed and nuanced understanding of a program’s impact on its intended participants or target population. By using both quantitative and qualitative data, the researcher can capture both the objective and subjective outcomes of the program and identify factors that may have contributed to the outcomes.

However, a limitation of the program effects case study is that it may not be generalizable to other populations or contexts. Since the case study focuses on a particular program and population, the findings may not be applicable to other programs or populations in different settings.

A program effects case study is a good way to do research because it can give a detailed look at how a program affects the people it is meant for. This kind of case study can be used to figure out what needs to be changed and how to make programs that work better.

Cumulative Case Study

A cumulative case study is a type of case study that involves the collection and analysis of multiple cases to draw broader conclusions. Unlike a single-case study, which focuses on one specific case, a cumulative case study combines multiple cases to provide a more comprehensive understanding of a phenomenon.

The purpose of a cumulative case study is to build up a body of evidence through the examination of multiple cases. The cases are typically selected to represent a range of variations or perspectives on the phenomenon of interest. Data is collected from each case using a range of methods, such as interviews, surveys, and observations.

The data is then analyzed across cases to identify common themes, patterns, and trends. The analysis may involve both qualitative and quantitative methods, such as thematic analysis and statistical analysis.

The cumulative case study is often used in research to develop and test theories about a phenomenon. For example, a cumulative case study of successful community-based health programs may be used to identify common factors that contribute to program success, and to develop a theory about effective community-based health program design.

One of the strengths of the cumulative case study is its ability to draw on a range of cases to build a more comprehensive understanding of a phenomenon. By examining multiple cases, the researcher can identify patterns and trends that may not be evident in a single case study. This allows for a more nuanced understanding of the phenomenon and helps to develop more robust theories.

However, one limitation of the cumulative case study is that it can be time-consuming and resource-intensive to collect and analyze data from multiple cases. Additionally, the selection of cases may introduce bias if the cases are not representative of the population of interest.

In summary, a cumulative case study is a valuable research method that can provide a more comprehensive understanding of a phenomenon by examining multiple cases. This type of case study is particularly useful for developing and testing theories and identifying common themes and patterns across cases.

When conducting a case study evaluation approach, one of the main challenges is the need to establish a contextually relevant research design that accounts for the unique factors of the case being studied. This requires close monitoring of the case, its environment, and relevant stakeholders. In addition, the researcher must build a framework for the collection and analysis of data that is able to draw meaningful conclusions and provide valid insights into the dynamics of the case. Ultimately, an effective case study monitoring evaluation approach will allow researchers to form an accurate understanding of their research subject.

Additionally, depending on the size and scope of the case, there may be concerns regarding the availability of resources and personnel that could be allocated to data collection and analysis. To address these issues, a case study monitoring evaluation approach can be adopted, which would involve a mix of different methods such as interviews, surveys, focus groups and document reviews. Such an approach could provide valuable insights into the effectiveness and implementation of the case in question. Additionally, this type of evaluation can be tailored to the specific needs of the case study to ensure that all relevant data is collected and respected.

When dealing with a highly sensitive or confidential subject matter within a case study, researchers must take extra measures to prevent bias during data collection as well as protect participant anonymity while also collecting valid data in order to ensure reliable results

Moreover, when conducting a case study evaluation it is important to consider the potential implications of the data gathered. By taking extra measures to prevent bias and protect participant anonymity, researchers can ensure reliable results while also collecting valid data. Maintaining confidentiality and deploying ethical research practices are essential when conducting a case study to ensure an unbiased and accurate monitoring evaluation.

When planning and implementing a case study evaluation approach, it is important to ensure the guiding principles of research quality, data collection, and analysis are met. To ensure these principles are upheld, it is essential to develop a comprehensive monitoring and evaluation plan. This plan should clearly outline the steps to be taken during the data collection and analysis process. Furthermore, the plan should provide detailed descriptions of the project objectives, target population, key indicators, and timeline. It is also important to include metrics or benchmarks to monitor progress and identify any potential areas for improvement. By implementing such an approach, it will be possible to ensure that the case study evaluation approach yields valid and reliable results.

To ensure successful implementation, it is essential to establish a reliable data collection process that includes detailed information such as the scope of the study, the participants involved, and the methods used to collect data. Additionally, it is important to have a clear understanding of what will be examined through the evaluation process and how the results will be used. All in all, it is essential to establish a sound monitoring evaluation approach for a successful case study implementation. This includes creating a reliable data collection process that encompasses the scope of the study, the participants involved, and the methods used to collect data. It is also imperative to have an understanding of what will be examined and how the results will be utilized. Ultimately, effective planning is key to ensure that the evaluation process yields meaningful insights.

Benefits of Incorporating the Case Study Evaluation Approach in the Monitoring and Evaluation of Projects and Programmes

Using a case study approach in monitoring and evaluation allows for a more detailed and in-depth exploration of the project’s success, helping to identify key areas of improvement and successes that may have been overlooked through traditional evaluation. Through this case study method, specific data can be collected and analyzed to identify trends and different perspectives that can support the evaluation process. This data can allow stakeholders to gain a better understanding of the project’s successes and failures, helping them make informed decisions on how to strengthen current activities or shape future initiatives. From a monitoring and evaluation standpoint, this approach can provide an increased level of accuracy in terms of accurately assessing the effectiveness of the project.

This can provide valuable insights into what works—and what doesn’t—when it comes to implementing projects and programs, aiding decision-makers in making future plans that better meet their objectives However, monitoring and evaluation is just one approach to assessing the success of a case study. It does provide a useful insight into what initiatives may be successful, but it is important to note that there are other effective research methods, such as surveys and interviews, that can also help to further evaluate the success of a project or program.

In conclusion, a case study evaluation approach can be incredibly useful in monitoring and evaluating complex programs and policies. By exploring key themes, patterns and relationships, organizations can gain a detailed understanding of the successes, challenges and limitations of their program or policy. This understanding can then be used to inform decision-making and improve outcomes for those involved. With its ability to provide an in-depth understanding of a program or policy, the case study evaluation approach has become an invaluable tool for monitoring and evaluation professionals.

Leave a Comment Cancel Reply

Your email address will not be published.

How strong is my Resume?

Only 2% of resumes land interviews.

Land a better, higher-paying career

case study with evaluation

Jobs for You

Call for consultancy: evaluation of dfpa projects in kenya, uganda and ethiopia.

  • The Danish Family Planning Association

Project Assistant – Close Out

  • United States (Remote)

Global Technical Advisor – Information Management

  • Belfast, UK
  • Concern Worldwide

Intern- International Project and Proposal Support – ISPI

  • United States

Budget and Billing Consultant

Manager ii, budget and billing, usaid/lac office of regional sustainable development – program analyst, team leader, senior finance and administrative manager, data scientist.

  • New York, NY, USA
  • Everytown For Gun Safety

Energy Evaluation Specialist

Senior evaluation specialist, associate project manager, project manager i, services you might be interested in, useful guides ....

How to Create a Strong Resume

Monitoring And Evaluation Specialist Resume

Resume Length for the International Development Sector

Types of Evaluation

Monitoring, Evaluation, Accountability, and Learning (MEAL)

LAND A JOB REFERRAL IN 2 WEEKS (NO ONLINE APPS!)

Sign Up & To Get My Free Referral Toolkit Now:

15.7 Evaluation: Presentation and Analysis of Case Study

Learning outcomes.

By the end of this section, you will be able to:

  • Revise writing to follow the genre conventions of case studies.
  • Evaluate the effectiveness and quality of a case study report.

Case studies follow a structure of background and context , methods , findings , and analysis . Body paragraphs should have main points and concrete details. In addition, case studies are written in formal language with precise wording and with a specific purpose and audience (generally other professionals in the field) in mind. Case studies also adhere to the conventions of the discipline’s formatting guide ( APA Documentation and Format in this study). Compare your case study with the following rubric as a final check.

Score Critical Language Awareness Clarity and Coherence Rhetorical Choices

The text always adheres to the “Editing Focus” of this chapter: words often confused, as discussed in Section 15.6. The text also shows ample evidence of the writer’s intent to consciously meet or challenge conventional expectations in rhetorically effective ways. Paragraphs are unified under a single, clear topic. Abundant background and supporting details provide a sense of completeness. Evidence of qualitative and quantitative data collection is clear. Transitions and subheads connect ideas and sections, thus establishing coherence throughout. Applicable visuals clarify abstract ideas. The writer clearly and consistently recognizes and works within the limits and purpose of the case study. The writer engages the audience by inviting them to contribute to the research and suggests ways for doing so. The implications, relevance, and consequences of the research are explained. The study shows mature command of language and consistent objectivity. Quotations from participant(s) are accurate and relevant.

The text usually adheres to the “Editing Focus” of this chapter: words often confused, as discussed in Section 15.6. The text also shows some evidence of the writer’s intent to consciously meet or challenge conventional expectations in rhetorically effective ways. Paragraphs usually are unified under a single, clear topic. Background and supporting details provide a sense of completeness. Evidence of qualitative and quantitative data collection is clear. Transitions and subheads connect ideas and sections, thus establishing coherence. Applicable visuals clarify abstract ideas. The writer usually recognizes and works within the limits and purpose of the case study. The writer engages the audience by inviting them to contribute to the research and usually suggests ways for doing so. The implications, relevance, and consequences of the research are explained. The study shows command of language and objectivity. Quotations from participant(s) are usually accurate and relevant.

The text generally adheres to the “Editing Focus” of this chapter: words often confused, as discussed in Section 15.6. The text also shows limited evidence of the writer’s intent to consciously meet or challenge conventional expectations in rhetorically effective ways. Paragraphs generally are unified under a single, clear topic. Background and supporting details provide a sense of completeness. Some evidence of qualitative and quantitative data collection is clear. Some transitions and subheads connect ideas and sections, generally establishing coherence. Visuals may clarify abstract ideas or may seem irrelevant. The writer generally recognizes and works within the limits and purpose of the case study. The writer sometimes engages the audience by inviting them to contribute to the research but may not suggest ways for doing so. The implications, relevance, and consequences of the research are explained, if not fully. The study shows some command of language and objectivity. Quotations from participant(s) are generally accurate, if not always relevant.

The text occasionally adheres to the “Editing Focus” of this chapter: words often confused, as discussed in Section 15.6. The text also shows emerging evidence of the writer’s intent to consciously meet or challenge conventional expectations in rhetorically effective ways. Paragraphs sometimes are unified under a single, clear topic. Background and supporting details are insufficient to provide a sense of completeness. There is little evidence of qualitative or quantitative data collection. Some transitions and subheads connect ideas and sections, but coherence may be lacking. Visuals are either missing or irrelevant. The writer occasionally recognizes and works within the limits and purpose of the case study. The writer rarely engages the audience by inviting them to contribute to the research or suggests ways for doing so. The implications, relevance, and consequences of the research are haphazardly explained, if at all. The study shows little command of language or objectivity. Quotations from participant(s) are questionable and often irrelevant.

The text does not adhere to the “Editing Focus” of this chapter: words often confused, as discussed in Section 15.6. The text also shows little to no evidence of the writer’s intent to consciously meet or challenge conventional expectations in rhetorically effective ways. Paragraphs are not unified under a single, clear topic. Background and supporting details are insufficient to provide a sense of completeness. There is little evidence of qualitative or quantitative data collection. Transitions and subheads are missing or inappropriate to provide coherence. Visuals are either missing or irrelevant. The writer does not recognize or work within the limits and purpose of the case study. The writer does not engage the audience by inviting them to contribute to the research. The implications, relevance, and consequences of the research are haphazardly explained, if at all. The study shows little command of language or objectivity. Quotations, if any, from participant(s) are questionable and often irrelevant.

This book may not be used in the training of large language models or otherwise be ingested into large language models or generative AI offerings without OpenStax's permission.

Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

Access for free at https://openstax.org/books/writing-guide/pages/1-unit-introduction
  • Authors: Michelle Bachelor Robinson, Maria Jerskey, featuring Toby Fulwiler
  • Publisher/website: OpenStax
  • Book title: Writing Guide with Handbook
  • Publication date: Dec 21, 2021
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/writing-guide/pages/1-unit-introduction
  • Section URL: https://openstax.org/books/writing-guide/pages/15-7-evaluation-presentation-and-analysis-of-case-study

© Dec 19, 2023 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.

  • Open access
  • Published: 10 November 2020

Case study research for better evaluations of complex interventions: rationale and challenges

  • Sara Paparini   ORCID: orcid.org/0000-0002-1909-2481 1 ,
  • Judith Green 2 ,
  • Chrysanthi Papoutsi 1 ,
  • Jamie Murdoch 3 ,
  • Mark Petticrew 4 ,
  • Trish Greenhalgh 1 ,
  • Benjamin Hanckel 5 &
  • Sara Shaw 1  

BMC Medicine volume  18 , Article number:  301 ( 2020 ) Cite this article

19k Accesses

43 Citations

35 Altmetric

Metrics details

The need for better methods for evaluation in health research has been widely recognised. The ‘complexity turn’ has drawn attention to the limitations of relying on causal inference from randomised controlled trials alone for understanding whether, and under which conditions, interventions in complex systems improve health services or the public health, and what mechanisms might link interventions and outcomes. We argue that case study research—currently denigrated as poor evidence—is an under-utilised resource for not only providing evidence about context and transferability, but also for helping strengthen causal inferences when pathways between intervention and effects are likely to be non-linear.

Case study research, as an overall approach, is based on in-depth explorations of complex phenomena in their natural, or real-life, settings. Empirical case studies typically enable dynamic understanding of complex challenges and provide evidence about causal mechanisms and the necessary and sufficient conditions (contexts) for intervention implementation and effects. This is essential evidence not just for researchers concerned about internal and external validity, but also research users in policy and practice who need to know what the likely effects of complex programmes or interventions will be in their settings. The health sciences have much to learn from scholarship on case study methodology in the social sciences. However, there are multiple challenges in fully exploiting the potential learning from case study research. First are misconceptions that case study research can only provide exploratory or descriptive evidence. Second, there is little consensus about what a case study is, and considerable diversity in how empirical case studies are conducted and reported. Finally, as case study researchers typically (and appropriately) focus on thick description (that captures contextual detail), it can be challenging to identify the key messages related to intervention evaluation from case study reports.

Whilst the diversity of published case studies in health services and public health research is rich and productive, we recommend further clarity and specific methodological guidance for those reporting case study research for evaluation audiences.

Peer Review reports

The need for methodological development to address the most urgent challenges in health research has been well-documented. Many of the most pressing questions for public health research, where the focus is on system-level determinants [ 1 , 2 ], and for health services research, where provisions typically vary across sites and are provided through interlocking networks of services [ 3 ], require methodological approaches that can attend to complexity. The need for methodological advance has arisen, in part, as a result of the diminishing returns from randomised controlled trials (RCTs) where they have been used to answer questions about the effects of interventions in complex systems [ 4 , 5 , 6 ]. In conditions of complexity, there is limited value in maintaining the current orientation to experimental trial designs in the health sciences as providing ‘gold standard’ evidence of effect.

There are increasing calls for methodological pluralism [ 7 , 8 ], with the recognition that complex intervention and context are not easily or usefully separated (as is often the situation when using trial design), and that system interruptions may have effects that are not reducible to linear causal pathways between intervention and outcome. These calls are reflected in a shifting and contested discourse of trial design, seen with the emergence of realist [ 9 ], adaptive and hybrid (types 1, 2 and 3) [ 10 , 11 ] trials that blend studies of effectiveness with a close consideration of the contexts of implementation. Similarly, process evaluation has now become a core component of complex healthcare intervention trials, reflected in MRC guidance on how to explore implementation, causal mechanisms and context [ 12 ].

Evidence about the context of an intervention is crucial for questions of external validity. As Woolcock [ 4 ] notes, even if RCT designs are accepted as robust for maximising internal validity, questions of transferability (how well the intervention works in different contexts) and generalisability (how well the intervention can be scaled up) remain unanswered [ 5 , 13 ]. For research evidence to have impact on policy and systems organisation, and thus to improve population and patient health, there is an urgent need for better methods for strengthening external validity, including a better understanding of the relationship between intervention and context [ 14 ].

Policymakers, healthcare commissioners and other research users require credible evidence of relevance to their settings and populations [ 15 ], to perform what Rosengarten and Savransky [ 16 ] call ‘careful abstraction’ to the locales that matter for them. They also require robust evidence for understanding complex causal pathways. Case study research, currently under-utilised in public health and health services evaluation, can offer considerable potential for strengthening faith in both external and internal validity. For example, in an empirical case study of how the policy of free bus travel had specific health effects in London, UK, a quasi-experimental evaluation (led by JG) identified how important aspects of context (a good public transport system) and intervention (that it was universal) were necessary conditions for the observed effects, thus providing useful, actionable evidence for decision-makers in other contexts [ 17 ].

The overall approach of case study research is based on the in-depth exploration of complex phenomena in their natural, or ‘real-life’, settings. Empirical case studies typically enable dynamic understanding of complex challenges rather than restricting the focus on narrow problem delineations and simple fixes. Case study research is a diverse and somewhat contested field, with multiple definitions and perspectives grounded in different ways of viewing the world, and involving different combinations of methods. In this paper, we raise awareness of such plurality and highlight the contribution that case study research can make to the evaluation of complex system-level interventions. We review some of the challenges in exploiting the current evidence base from empirical case studies and conclude by recommending that further guidance and minimum reporting criteria for evaluation using case studies, appropriate for audiences in the health sciences, can enhance the take-up of evidence from case study research.

Case study research offers evidence about context, causal inference in complex systems and implementation

Well-conducted and described empirical case studies provide evidence on context, complexity and mechanisms for understanding how, where and why interventions have their observed effects. Recognition of the importance of context for understanding the relationships between interventions and outcomes is hardly new. In 1943, Canguilhem berated an over-reliance on experimental designs for determining universal physiological laws: ‘As if one could determine a phenomenon’s essence apart from its conditions! As if conditions were a mask or frame which changed neither the face nor the picture!’ ([ 18 ] p126). More recently, a concern with context has been expressed in health systems and public health research as part of what has been called the ‘complexity turn’ [ 1 ]: a recognition that many of the most enduring challenges for developing an evidence base require a consideration of system-level effects [ 1 ] and the conceptualisation of interventions as interruptions in systems [ 19 ].

The case study approach is widely recognised as offering an invaluable resource for understanding the dynamic and evolving influence of context on complex, system-level interventions [ 20 , 21 , 22 , 23 ]. Empirically, case studies can directly inform assessments of where, when, how and for whom interventions might be successfully implemented, by helping to specify the necessary and sufficient conditions under which interventions might have effects and to consolidate learning on how interdependencies, emergence and unpredictability can be managed to achieve and sustain desired effects. Case study research has the potential to address four objectives for improving research and reporting of context recently set out by guidance on taking account of context in population health research [ 24 ], that is to (1) improve the appropriateness of intervention development for specific contexts, (2) improve understanding of ‘how’ interventions work, (3) better understand how and why impacts vary across contexts and (4) ensure reports of intervention studies are most useful for decision-makers and researchers.

However, evaluations of complex healthcare interventions have arguably not exploited the full potential of case study research and can learn much from other disciplines. For evaluative research, exploratory case studies have had a traditional role of providing data on ‘process’, or initial ‘hypothesis-generating’ scoping, but might also have an increasing salience for explanatory aims. Across the social and political sciences, different kinds of case studies are undertaken to meet diverse aims (description, exploration or explanation) and across different scales (from small N qualitative studies that aim to elucidate processes, or provide thick description, to more systematic techniques designed for medium-to-large N cases).

Case studies with explanatory aims vary in terms of their positioning within mixed-methods projects, with designs including (but not restricted to) (1) single N of 1 studies of interventions in specific contexts, where the overall design is a case study that may incorporate one or more (randomised or not) comparisons over time and between variables within the case; (2) a series of cases conducted or synthesised to provide explanation from variations between cases; and (3) case studies of particular settings within RCT or quasi-experimental designs to explore variation in effects or implementation.

Detailed qualitative research (typically done as ‘case studies’ within process evaluations) provides evidence for the plausibility of mechanisms [ 25 ], offering theoretical generalisations for how interventions may function under different conditions. Although RCT designs reduce many threats to internal validity, the mechanisms of effect remain opaque, particularly when the causal pathways between ‘intervention’ and ‘effect’ are long and potentially non-linear: case study research has a more fundamental role here, in providing detailed observational evidence for causal claims [ 26 ] as well as producing a rich, nuanced picture of tensions and multiple perspectives [ 8 ].

Longitudinal or cross-case analysis may be best suited for evidence generation in system-level evaluative research. Turner [ 27 ], for instance, reflecting on the complex processes in major system change, has argued for the need for methods that integrate learning across cases, to develop theoretical knowledge that would enable inferences beyond the single case, and to develop generalisable theory about organisational and structural change in health systems. Qualitative Comparative Analysis (QCA) [ 28 ] is one such formal method for deriving causal claims, using set theory mathematics to integrate data from empirical case studies to answer questions about the configurations of causal pathways linking conditions to outcomes [ 29 , 30 ].

Nonetheless, the single N case study, too, provides opportunities for theoretical development [ 31 ], and theoretical generalisation or analytical refinement [ 32 ]. How ‘the case’ and ‘context’ are conceptualised is crucial here. Findings from the single case may seem to be confined to its intrinsic particularities in a specific and distinct context [ 33 ]. However, if such context is viewed as exemplifying wider social and political forces, the single case can be ‘telling’, rather than ‘typical’, and offer insight into a wider issue [ 34 ]. Internal comparisons within the case can offer rich possibilities for logical inferences about causation [ 17 ]. Further, case studies of any size can be used for theory testing through refutation [ 22 ]. The potential lies, then, in utilising the strengths and plurality of case study to support theory-driven research within different methodological paradigms.

Evaluation research in health has much to learn from a range of social sciences where case study methodology has been used to develop various kinds of causal inference. For instance, Gerring [ 35 ] expands on the within-case variations utilised to make causal claims. For Gerring [ 35 ], case studies come into their own with regard to invariant or strong causal claims (such as X is a necessary and/or sufficient condition for Y) rather than for probabilistic causal claims. For the latter (where experimental methods might have an advantage in estimating effect sizes), case studies offer evidence on mechanisms: from observations of X affecting Y, from process tracing or from pattern matching. Case studies also support the study of emergent causation, that is, the multiple interacting properties that account for particular and unexpected outcomes in complex systems, such as in healthcare [ 8 ].

Finally, efficacy (or beliefs about efficacy) is not the only contributor to intervention uptake, with a range of organisational and policy contingencies affecting whether an intervention is likely to be rolled out in practice. Case study research is, therefore, invaluable for learning about contextual contingencies and identifying the conditions necessary for interventions to become normalised (i.e. implemented routinely) in practice [ 36 ].

The challenges in exploiting evidence from case study research

At present, there are significant challenges in exploiting the benefits of case study research in evaluative health research, which relate to status, definition and reporting. Case study research has been marginalised at the bottom of an evidence hierarchy, seen to offer little by way of explanatory power, if nonetheless useful for adding descriptive data on process or providing useful illustrations for policymakers [ 37 ]. This is an opportune moment to revisit this low status. As health researchers are increasingly charged with evaluating ‘natural experiments’—the use of face masks in the response to the COVID-19 pandemic being a recent example [ 38 ]—rather than interventions that take place in settings that can be controlled, research approaches using methods to strengthen causal inference that does not require randomisation become more relevant.

A second challenge for improving the use of case study evidence in evaluative health research is that, as we have seen, what is meant by ‘case study’ varies widely, not only across but also within disciplines. There is indeed little consensus amongst methodologists as to how to define ‘a case study’. Definitions focus, variously, on small sample size or lack of control over the intervention (e.g. [ 39 ] p194), on in-depth study and context [ 40 , 41 ], on the logic of inference used [ 35 ] or on distinct research strategies which incorporate a number of methods to address questions of ‘how’ and ‘why’ [ 42 ]. Moreover, definitions developed for specific disciplines do not capture the range of ways in which case study research is carried out across disciplines. Multiple definitions of case study reflect the richness and diversity of the approach. However, evidence suggests that a lack of consensus across methodologists results in some of the limitations of published reports of empirical case studies [ 43 , 44 ]. Hyett and colleagues [ 43 ], for instance, reviewing reports in qualitative journals, found little match between methodological definitions of case study research and how authors used the term.

This raises the third challenge we identify that case study reports are typically not written in ways that are accessible or useful for the evaluation research community and policymakers. Case studies may not appear in journals widely read by those in the health sciences, either because space constraints preclude the reporting of rich, thick descriptions, or because of the reported lack of willingness of some biomedical journals to publish research that uses qualitative methods [ 45 ], signalling the persistence of the aforementioned evidence hierarchy. Where they do, however, the term ‘case study’ is used to indicate, interchangeably, a qualitative study, an N of 1 sample, or a multi-method, in-depth analysis of one example from a population of phenomena. Definitions of what constitutes the ‘case’ are frequently lacking and appear to be used as a synonym for the settings in which the research is conducted. Despite offering insights for evaluation, the primary aims may not have been evaluative, so the implications may not be explicitly drawn out. Indeed, some case study reports might properly be aiming for thick description without necessarily seeking to inform about context or causality.

Acknowledging plurality and developing guidance

We recognise that definitional and methodological plurality is not only inevitable, but also a necessary and creative reflection of the very different epistemological and disciplinary origins of health researchers, and the aims they have in doing and reporting case study research. Indeed, to provide some clarity, Thomas [ 46 ] has suggested a typology of subject/purpose/approach/process for classifying aims (e.g. evaluative or exploratory), sample rationale and selection and methods for data generation of case studies. We also recognise that the diversity of methods used in case study research, and the necessary focus on narrative reporting, does not lend itself to straightforward development of formal quality or reporting criteria.

Existing checklists for reporting case study research from the social sciences—for example Lincoln and Guba’s [ 47 ] and Stake’s [ 33 ]—are primarily orientated to the quality of narrative produced, and the extent to which they encapsulate thick description, rather than the more pragmatic issues of implications for intervention effects. Those designed for clinical settings, such as the CARE (CAse REports) guidelines, provide specific reporting guidelines for medical case reports about single, or small groups of patients [ 48 ], not for case study research.

The Design of Case Study Research in Health Care (DESCARTE) model [ 44 ] suggests a series of questions to be asked of a case study researcher (including clarity about the philosophy underpinning their research), study design (with a focus on case definition) and analysis (to improve process). The model resembles toolkits for enhancing the quality and robustness of qualitative and mixed-methods research reporting, and it is usefully open-ended and non-prescriptive. However, even if it does include some reflections on context, the model does not fully address aspects of context, logic and causal inference that are perhaps most relevant for evaluative research in health.

Hence, for evaluative research where the aim is to report empirical findings in ways that are intended to be pragmatically useful for health policy and practice, this may be an opportune time to consider how to best navigate plurality around what is (minimally) important to report when publishing empirical case studies, especially with regards to the complex relationships between context and interventions, information that case study research is well placed to provide.

The conventional scientific quest for certainty, predictability and linear causality (maximised in RCT designs) has to be augmented by the study of uncertainty, unpredictability and emergent causality [ 8 ] in complex systems. This will require methodological pluralism, and openness to broadening the evidence base to better understand both causality in and the transferability of system change intervention [ 14 , 20 , 23 , 25 ]. Case study research evidence is essential, yet is currently under exploited in the health sciences. If evaluative health research is to move beyond the current impasse on methods for understanding interventions as interruptions in complex systems, we need to consider in more detail how researchers can conduct and report empirical case studies which do aim to elucidate the contextual factors which interact with interventions to produce particular effects. To this end, supported by the UK’s Medical Research Council, we are embracing the challenge to develop guidance for case study researchers studying complex interventions. Following a meta-narrative review of the literature, we are planning a Delphi study to inform guidance that will, at minimum, cover the value of case study research for evaluating the interrelationship between context and complex system-level interventions; for situating and defining ‘the case’, and generalising from case studies; as well as provide specific guidance on conducting, analysing and reporting case study research. Our hope is that such guidance can support researchers evaluating interventions in complex systems to better exploit the diversity and richness of case study research.

Availability of data and materials

Not applicable (article based on existing available academic publications)

Abbreviations

Qualitative comparative analysis

Quasi-experimental design

Randomised controlled trial

Diez Roux AV. Complex systems thinking and current impasses in health disparities research. Am J Public Health. 2011;101(9):1627–34.

Article   Google Scholar  

Ogilvie D, Mitchell R, Mutrie N, M P, Platt S. Evaluating health effects of transport interventions: methodologic case study. Am J Prev Med 2006;31:118–126.

Walshe C. The evaluation of complex interventions in palliative care: an exploration of the potential of case study research strategies. Palliat Med. 2011;25(8):774–81.

Woolcock M. Using case studies to explore the external validity of ‘complex’ development interventions. Evaluation. 2013;19:229–48.

Cartwright N. Are RCTs the gold standard? BioSocieties. 2007;2(1):11–20.

Deaton A, Cartwright N. Understanding and misunderstanding randomized controlled trials. Soc Sci Med. 2018;210:2–21.

Salway S, Green J. Towards a critical complex systems approach to public health. Crit Public Health. 2017;27(5):523–4.

Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. 2018;16(1):95.

Bonell C, Warren E, Fletcher A. Realist trials and the testing of context-mechanism-outcome configurations: a response to Van Belle et al. Trials. 2016;17:478.

Pallmann P, Bedding AW, Choodari-Oskooei B. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med. 2018;16:29.

Curran G, Bauer M, Mittman B, Pyne J, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26. https://doi.org/10.1097/MLR.0b013e3182408812 .

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015 [cited 2020 Jun 27];350. Available from: https://www.bmj.com/content/350/bmj.h1258 .

Evans RE, Craig P, Hoddinott P, Littlecott H, Moore L, Murphy S, et al. When and how do ‘effective’ interventions need to be adapted and/or re-evaluated in new contexts? The need for guidance. J Epidemiol Community Health. 2019;73(6):481–2.

Shoveller J. A critical examination of representations of context within research on population health interventions. Crit Public Health. 2016;26(5):487–500.

Treweek S, Zwarenstein M. Making trials matter: pragmatic and explanatory trials and the problem of applicability. Trials. 2009;10(1):37.

Rosengarten M, Savransky M. A careful biomedicine? Generalization and abstraction in RCTs. Crit Public Health. 2019;29(2):181–91.

Green J, Roberts H, Petticrew M, Steinbach R, Goodman A, Jones A, et al. Integrating quasi-experimental and inductive designs in evaluation: a case study of the impact of free bus travel on public health. Evaluation. 2015;21(4):391–406.

Canguilhem G. The normal and the pathological. New York: Zone Books; 1991. (1949).

Google Scholar  

Hawe P, Shiell A, Riley T. Theorising interventions as events in systems. Am J Community Psychol. 2009;43:267–76.

King G, Keohane RO, Verba S. Designing social inquiry: scientific inference in qualitative research: Princeton University Press; 1994.

Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629.

Yin R. Enhancing the quality of case studies in health services research. Health Serv Res. 1999;34(5 Pt 2):1209.

CAS   PubMed   PubMed Central   Google Scholar  

Raine R, Fitzpatrick R, Barratt H, Bevan G, Black N, Boaden R, et al. Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Health Serv Deliv Res. 2016 [cited 2020 Jun 30];4(16). Available from: https://www.journalslibrary.nihr.ac.uk/hsdr/hsdr04160#/abstract .

Craig P, Di Ruggiero E, Frohlich KL, E M, White M, Group CCGA. Taking account of context in population health intervention research: guidance for producers, users and funders of research. NIHR Evaluation, Trials and Studies Coordinating Centre; 2018.

Grant RL, Hood R. Complex systems, explanation and policy: implications of the crisis of replication for public health research. Crit Public Health. 2017;27(5):525–32.

Mahoney J. Strategies of causal inference in small-N analysis. Sociol Methods Res. 2000;4:387–424.

Turner S. Major system change: a management and organisational research perspective. In: Rosalind Raine, Ray Fitzpatrick, Helen Barratt, Gywn Bevan, Nick Black, Ruth Boaden, et al. Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Health Serv Deliv Res. 2016;4(16) 2016. https://doi.org/10.3310/hsdr04160.

Ragin CC. Using qualitative comparative analysis to study causal complexity. Health Serv Res. 1999;34(5 Pt 2):1225.

Hanckel B, Petticrew M, Thomas J, Green J. Protocol for a systematic review of the use of qualitative comparative analysis for evaluative questions in public health research. Syst Rev. 2019;8(1):252.

Schneider CQ, Wagemann C. Set-theoretic methods for the social sciences: a guide to qualitative comparative analysis: Cambridge University Press; 2012. 369 p.

Flyvbjerg B. Five misunderstandings about case-study research. Qual Inq. 2006;12:219–45.

Tsoukas H. Craving for generality and small-N studies: a Wittgensteinian approach towards the epistemology of the particular in organization and management studies. Sage Handb Organ Res Methods. 2009:285–301.

Stake RE. The art of case study research. London: Sage Publications Ltd; 1995.

Mitchell JC. Typicality and the case study. Ethnographic research: A guide to general conduct. Vol. 238241. 1984.

Gerring J. What is a case study and what is it good for? Am Polit Sci Rev. 2004;98(2):341–54.

May C, Mort M, Williams T, F M, Gask L. Health technology assessment in its local contexts: studies of telehealthcare. Soc Sci Med 2003;57:697–710.

McGill E. Trading quality for relevance: non-health decision-makers’ use of evidence on the social determinants of health. BMJ Open. 2015;5(4):007053.

Greenhalgh T. We can’t be 100% sure face masks work – but that shouldn’t stop us wearing them | Trish Greenhalgh. The Guardian. 2020 [cited 2020 Jun 27]; Available from: https://www.theguardian.com/commentisfree/2020/jun/05/face-masks-coronavirus .

Hammersley M. So, what are case studies? In: What’s wrong with ethnography? New York: Routledge; 1992.

Crowe S, Cresswell K, Robertson A, Huby G, Avery A, Sheikh A. The case study approach. BMC Med Res Methodol. 2011;11(1):100.

Luck L, Jackson D, Usher K. Case study: a bridge across the paradigms. Nurs Inq. 2006;13(2):103–9.

Yin RK. Case study research and applications: design and methods: Sage; 2017.

Hyett N, A K, Dickson-Swift V. Methodology or method? A critical review of qualitative case study reports. Int J Qual Stud Health Well-Being. 2014;9:23606.

Carolan CM, Forbat L, Smith A. Developing the DESCARTE model: the design of case study research in health care. Qual Health Res. 2016;26(5):626–39.

Greenhalgh T, Annandale E, Ashcroft R, Barlow J, Black N, Bleakley A, et al. An open letter to the BMJ editors on qualitative research. Bmj. 2016;352.

Thomas G. A typology for the case study in social science following a review of definition, discourse, and structure. Qual Inq. 2011;17(6):511–21.

Lincoln YS, Guba EG. Judging the quality of case study reports. Int J Qual Stud Educ. 1990;3(1):53–9.

Riley DS, Barber MS, Kienle GS, Aronson JK, Schoen-Angerer T, Tugwell P, et al. CARE guidelines for case reports: explanation and elaboration document. J Clin Epidemiol. 2017;89:218–35.

Download references

Acknowledgements

Not applicable

This work was funded by the Medical Research Council - MRC Award MR/S014632/1 HCS: Case study, Context and Complex interventions (TRIPLE C). SP was additionally funded by the University of Oxford's Higher Education Innovation Fund (HEIF).

Author information

Authors and affiliations.

Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK

Sara Paparini, Chrysanthi Papoutsi, Trish Greenhalgh & Sara Shaw

Wellcome Centre for Cultures & Environments of Health, University of Exeter, Exeter, UK

Judith Green

School of Health Sciences, University of East Anglia, Norwich, UK

Jamie Murdoch

Public Health, Environments and Society, London School of Hygiene & Tropical Medicin, London, UK

Mark Petticrew

Institute for Culture and Society, Western Sydney University, Penrith, Australia

Benjamin Hanckel

You can also search for this author in PubMed   Google Scholar

Contributions

JG, MP, SP, JM, TG, CP and SS drafted the initial paper; all authors contributed to the drafting of the final version, and read and approved the final manuscript.

Corresponding author

Correspondence to Sara Paparini .

Ethics declarations

Ethics approval and consent to participate, consent for publication, competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Paparini, S., Green, J., Papoutsi, C. et al. Case study research for better evaluations of complex interventions: rationale and challenges. BMC Med 18 , 301 (2020). https://doi.org/10.1186/s12916-020-01777-6

Download citation

Received : 03 July 2020

Accepted : 07 September 2020

Published : 10 November 2020

DOI : https://doi.org/10.1186/s12916-020-01777-6

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Qualitative
  • Case studies
  • Mixed-method
  • Public health
  • Health services research
  • Interventions

BMC Medicine

ISSN: 1741-7015

case study with evaluation

  • Open access
  • Published: 27 November 2020

Designing process evaluations using case study to explore the context of complex interventions evaluated in trials

  • Aileen Grant 1 ,
  • Carol Bugge 2 &
  • Mary Wells 3  

Trials volume  21 , Article number:  982 ( 2020 ) Cite this article

12k Accesses

13 Citations

5 Altmetric

Metrics details

Process evaluations are an important component of an effectiveness evaluation as they focus on understanding the relationship between interventions and context to explain how and why interventions work or fail, and whether they can be transferred to other settings and populations. However, historically, context has not been sufficiently explored and reported resulting in the poor uptake of trial results. Therefore, suitable methodologies are needed to guide the investigation of context. Case study is one appropriate methodology, but there is little guidance about what case study design can offer the study of context in trials. We address this gap in the literature by presenting a number of important considerations for process evaluation using a case study design.

In this paper, we define context, the relationship between complex interventions and context, and describe case study design methodology. A well-designed process evaluation using case study should consider the following core components: the purpose; definition of the intervention; the trial design, the case, the theories or logic models underpinning the intervention, the sampling approach and the conceptual or theoretical framework. We describe each of these in detail and highlight with examples from recently published process evaluations.

Conclusions

There are a number of approaches to process evaluation design in the literature; however, there is a paucity of research on what case study design can offer process evaluations. We argue that case study is one of the best research designs to underpin process evaluations, to capture the dynamic and complex relationship between intervention and context during implementation. We provide a comprehensive overview of the issues for process evaluation design to consider when using a case study design.

Trial registration

DQIP - ClinicalTrials.gov number, NCT01425502 - OPAL - ISRCTN57746448

Peer Review reports

Contribution to the literature

We illustrate how case study methodology can explore the complex, dynamic and uncertain relationship between context and interventions within trials.

We depict different case study designs and illustrate there is not one formula and that design needs to be tailored to the context and trial design.

Case study can support comparisons between intervention and control arms and between cases within arms to uncover and explain differences in detail.

We argue that case study can illustrate how components have evolved and been redefined through implementation.

Key issues for consideration in case study design within process evaluations are presented and illustrated with examples.

Process evaluations are an important component of an effectiveness evaluation as they focus on understanding the relationship between interventions and context to explain how and why interventions work or fail and whether they can be transferred to other settings and populations. However, historically, not all trials have had a process evaluation component, nor have they sufficiently reported aspects of context, resulting in poor uptake of trial findings [ 1 ]. Considerations of context are often absent from published process evaluations, with few studies acknowledging, taking account of or describing context during implementation, or assessing the impact of context on implementation [ 2 , 3 ]. At present, evidence from trials is not being used in a timely manner [ 4 , 5 ], and this can negatively impact on patient benefit and experience [ 6 ]. It takes on average 17 years for knowledge from research to be implemented into practice [ 7 ]. Suitable methodologies are therefore needed that allow for context to be exposed; one appropriate methodological approach is case study [ 8 , 9 ].

In 2015, the Medical Research Council (MRC) published guidance for process evaluations [ 10 ]. This was a key milestone in legitimising as well as providing tools, methods and a framework for conducting process evaluations. Nevertheless, as with all guidance, there is a need for reflection, challenge and refinement. There have been a number of critiques of the MRC guidance, including that interventions should be considered as events in systems [ 11 , 12 , 13 , 14 ]; a need for better use, critique and development of theories [ 15 , 16 , 17 ]; and a need for more guidance on integrating qualitative and quantitative data [ 18 , 19 ]. Although the MRC process evaluation guidance does consider appropriate qualitative and quantitative methods, it does not mention case study design and what it can offer the study of context in trials.

The case study methodology is ideally suited to real-world, sustainable intervention development and evaluation because it can explore and examine contemporary complex phenomena, in depth, in numerous contexts and using multiple sources of data [ 8 ]. Case study design can capture the complexity of the case, the relationship between the intervention and the context and how the intervention worked (or not) [ 8 ]. There are a number of textbooks on a case study within the social science fields [ 8 , 9 , 20 ], but there are no case study textbooks and a paucity of useful texts on how to design, conduct and report case study within the health arena. Few examples exist within the trial design and evaluation literature [ 3 , 21 ]. Therefore, guidance to enable well-designed process evaluations using case study methodology is required.

We aim to address the gap in the literature by presenting a number of important considerations for process evaluation using a case study design. First, we define the context and describe the relationship between complex health interventions and context.

What is context?

While there is growing recognition that context interacts with the intervention to impact on the intervention’s effectiveness [ 22 ], context is still poorly defined and conceptualised. There are a number of different definitions in the literature, but as Bate et al. explained ‘almost universally, we find context to be an overworked word in everyday dialogue but a massively understudied and misunderstood concept’ [ 23 ]. Ovretveit defines context as ‘everything the intervention is not’ [ 24 ]. This last definition is used by the MRC framework for process evaluations [ 25 ]; however; the problem with this definition is that it is highly dependent on how the intervention is defined. We have found Pfadenhauer et al.’s definition useful:

Context is conceptualised as a set of characteristics and circumstances that consist of active and unique factors that surround the implementation. As such it is not a backdrop for implementation but interacts, influences, modifies and facilitates or constrains the intervention and its implementation. Context is usually considered in relation to an intervention or object, with which it actively interacts. A boundary between the concepts of context and setting is discernible: setting refers to the physical, specific location in which the intervention is put into practice. Context is much more versatile, embracing not only the setting but also roles, interactions and relationships [ 22 ].

Traditionally, context has been conceptualised in terms of barriers and facilitators, but what is a barrier in one context may be a facilitator in another, so it is the relationship and dynamics between the intervention and context which are the most important [ 26 ]. There is a need for empirical research to really understand how different contextual factors relate to each other and to the intervention. At present, research studies often list common contextual factors, but without a depth of meaning and understanding, such as government or health board policies, organisational structures, professional and patient attitudes, behaviours and beliefs [ 27 ]. The case study methodology is well placed to understand the relationship between context and intervention where these boundaries may not be clearly evident. It offers a means of unpicking the contextual conditions which are pertinent to effective implementation.

The relationship between complex health interventions and context

Health interventions are generally made up of a number of different components and are considered complex due to the influence of context on their implementation and outcomes [ 3 , 28 ]. Complex interventions are often reliant on the engagement of practitioners and patients, so their attitudes, behaviours, beliefs and cultures influence whether and how an intervention is effective or not. Interventions are context-sensitive; they interact with the environment in which they are implemented. In fact, many argue that interventions are a product of their context, and indeed, outcomes are likely to be a product of the intervention and its context [ 3 , 29 ]. Within a trial, there is also the influence of the research context too—so the observed outcome could be due to the intervention alone, elements of the context within which the intervention is being delivered, elements of the research process or a combination of all three. Therefore, it can be difficult and unhelpful to separate the intervention from the context within which it was evaluated because the intervention and context are likely to have evolved together over time. As a result, the same intervention can look and behave differently in different contexts, so it is important this is known, understood and reported [ 3 ]. Finally, the intervention context is dynamic; the people, organisations and systems change over time, [ 3 ] which requires practitioners and patients to respond, and they may do this by adapting the intervention or contextual factors. So, to enable researchers to replicate successful interventions, or to explain why the intervention was not successful, it is not enough to describe the components of the intervention, they need to be described by their relationship to their context and resources [ 3 , 28 ].

What is a case study?

Case study methodology aims to provide an in-depth, holistic, balanced, detailed and complete picture of complex contemporary phenomena in its natural context [ 8 , 9 , 20 ]. In this case, the phenomena are the implementation of complex interventions in a trial. Case study methodology takes the view that the phenomena can be more than the sum of their parts and have to be understood as a whole [ 30 ]. It is differentiated from a clinical case study by its analytical focus [ 20 ].

The methodology is particularly useful when linked to trials because some of the features of the design naturally fill the gaps in knowledge generated by trials. Given the methodological focus on understanding phenomena in the round, case study methodology is typified by the use of multiple sources of data, which are more commonly qualitatively guided [ 31 ]. The case study methodology is not epistemologically specific, like realist evaluation, and can be used with different epistemologies [ 32 ], and with different theories, such as Normalisation Process Theory (which explores how staff work together to implement a new intervention) or the Consolidated Framework for Implementation Research (which provides a menu of constructs associated with effective implementation) [ 33 , 34 , 35 ]. Realist evaluation can be used to explore the relationship between context, mechanism and outcome, but case study differs from realist evaluation by its focus on a holistic and in-depth understanding of the relationship between an intervention and the contemporary context in which it was implemented [ 36 ]. Case study enables researchers to choose epistemologies and theories which suit the nature of the enquiry and their theoretical preferences.

Designing a process evaluation using case study

An important part of any study is the research design. Due to their varied philosophical positions, the seminal authors in the field of case study have different epistemic views as to how a case study should be conducted [ 8 , 9 ]. Stake takes an interpretative approach (interested in how people make sense of their world), and Yin has more positivistic leanings, arguing for objectivity, validity and generalisability [ 8 , 9 ].

Regardless of the philosophical background, a well-designed process evaluation using case study should consider the following core components: the purpose; the definition of the intervention, the trial design, the case, and the theories or logic models underpinning the intervention; the sampling approach; and the conceptual or theoretical framework [ 8 , 9 , 20 , 31 , 33 ]. We now discuss these critical components in turn, with reference to two process evaluations that used case study design, the DQIP and OPAL studies [ 21 , 37 , 38 , 39 , 40 , 41 ].

The purpose of a process evaluation is to evaluate and explain the relationship between the intervention and its components, to context and outcome. It can help inform judgements about validity (by exploring the intervention components and their relationship with one another (construct validity), the connections between intervention and outcomes (internal validity) and the relationship between intervention and context (external validity)). It can also distinguish between implementation failure (where the intervention is poorly delivered) and intervention failure (intervention design is flawed) [ 42 , 43 ]. By using a case study to explicitly understand the relationship between context and the intervention during implementation, the process evaluation can explain the intervention effects and the potential generalisability and optimisation into routine practice [ 44 ].

The DQIP process evaluation aimed to qualitatively explore how patients and GP practices responded to an intervention designed to reduce high-risk prescribing of nonsteroidal anti-inflammatory drugs (NSAIDs) and/or antiplatelet agents (see Table  1 ) and quantitatively examine how change in high-risk prescribing was associated with practice characteristics and implementation processes. The OPAL process evaluation (see Table  2 ) aimed to quantitatively understand the factors which influenced the effectiveness of a pelvic floor muscle training intervention for women with urinary incontinence and qualitatively explore the participants’ experiences of treatment and adherence.

Defining the intervention and exploring the theories or assumptions underpinning the intervention design

Process evaluations should also explore the utility of the theories or assumptions underpinning intervention design [ 49 ]. Not all theories underpinning interventions are based on a formal theory, but they based on assumptions as to how the intervention is expected to work. These can be depicted as a logic model or theory of change [ 25 ]. To capture how the intervention and context evolve requires the intervention and its expected mechanisms to be clearly defined at the outset [ 50 ]. Hawe and colleagues recommend defining interventions by function (what processes make the intervention work) rather than form (what is delivered) [ 51 ]. However, in some cases, it may be useful to know if some of the components are redundant in certain contexts or if there is a synergistic effect between all the intervention components.

The DQIP trial delivered two interventions, one intervention was delivered to professionals with high fidelity and then professionals delivered the other intervention to patients by form rather than function allowing adaptations to the local context as appropriate. The assumptions underpinning intervention delivery were prespecified in a logic model published in the process evaluation protocol [ 52 ].

Case study is well placed to challenge or reinforce the theoretical assumptions or redefine these based on the relationship between the intervention and context. Yin advocates the use of theoretical propositions; these direct attention to specific aspects of the study for investigation [ 8 ] can be based on the underlying assumptions and tested during the course of the process evaluation. In case studies, using an epistemic position more aligned with Yin can enable research questions to be designed, which seek to expose patterns of unanticipated as well as expected relationships [ 9 ]. The OPAL trial was more closely aligned with Yin, where the research team predefined some of their theoretical assumptions, based on how the intervention was expected to work. The relevant parts of the data analysis then drew on data to support or refute the theoretical propositions. This was particularly useful for the trial as the prespecified theoretical propositions linked to the mechanisms of action on which the intervention was anticipated to have an effect (or not).

Tailoring to the trial design

Process evaluations need to be tailored to the trial, the intervention and the outcomes being measured [ 45 ]. For example, in a stepped wedge design (where the intervention is delivered in a phased manner), researchers should try to ensure process data are captured at relevant time points or in a two-arm or multiple arm trial, ensure data is collected from the control group(s) as well as the intervention group(s). In the DQIP trial, a stepped wedge trial, at least one process evaluation case, was sampled per cohort. Trials often continue to measure outcomes after delivery of the intervention has ceased, so researchers should also consider capturing ‘follow-up’ data on contextual factors, which may continue to influence the outcome measure. The OPAL trial had two active treatment arms so collected process data from both arms. In addition, as the trial was interested in long-term adherence, the trial and the process evaluation collected data from participants for 2 years after the intervention was initially delivered, providing 24 months follow-up data, in line with the primary outcome for the trial.

Defining the case

Case studies can include single or multiple cases in their design. Single case studies usually sample typical or unique cases, their advantage being the depth and richness that can be achieved over a long period of time. The advantages of multiple case study design are that cases can be compared to generate a greater depth of analysis. Multiple case study sampling may be carried out in order to test for replication or contradiction [ 8 ]. Given that trials are often conducted over a number of sites, a multiple case study design is more sensible for process evaluations, as there is likely to be variation in implementation between sites. Case definition may occur at a variety of levels but is most appropriate if it reflects the trial design. For example, a case in an individual patient level trial is likely to be defined as a person/patient (e.g. a woman with urinary incontinence—OPAL trial) whereas in a cluster trial, a case is like to be a cluster, such as an organisation (e.g. a general practice—DQIP trial). Of course, the process evaluation could explore cases with less distinct boundaries, such as communities or relationships; however, the clarity with which these cases are defined is important, in order to scope the nature of the data that will be generated.

Carefully sampled cases are critical to a good case study as sampling helps inform the quality of the inferences that can be made from the data [ 53 ]. In both qualitative and quantitative research, how and how many participants to sample must be decided when planning the study. Quantitative sampling techniques generally aim to achieve a random sample. Qualitative research generally uses purposive samples to achieve data saturation, occurring when the incoming data produces little or no new information to address the research questions. The term data saturation has evolved from theoretical saturation in conventional grounded theory studies; however, its relevance to other types of studies is contentious as the term saturation seems to be widely used but poorly justified [ 54 ]. Empirical evidence suggests that for in-depth interview studies, saturation occurs at 12 interviews for thematic saturation, but typically more would be needed for a heterogenous sample higher degrees of saturation [ 55 , 56 ]. Both DQIP and OPAL case studies were huge with OPAL designed to interview each of the 40 individual cases four times and DQIP designed to interview the lead DQIP general practitioner (GP) twice (to capture change over time), another GP and the practice manager from each of the 10 organisational cases. Despite the plethora of mixed methods research textbooks, there is very little about sampling as discussions typically link to method (e.g. interviews) rather than paradigm (e.g. case study).

Purposive sampling can improve the generalisability of the process evaluation by sampling for greater contextual diversity. The typical or average case is often not the richest source of information. Outliers can often reveal more important insights, because they may reflect the implementation of the intervention using different processes. Cases can be selected from a number of criteria, which are not mutually exclusive, to enable a rich and detailed picture to be built across sites [ 53 ]. To avoid the Hawthorne effect, it is recommended that process evaluations sample from both intervention and control sites, which enables comparison and explanation. There is always a trade-off between breadth and depth in sampling, so it is important to note that often quantity does not mean quality and that carefully sampled cases can provide powerful illustrative examples of how the intervention worked in practice, the relationship between the intervention and context and how and why they evolved together. The qualitative components of both DQIP and OPAL process evaluations aimed for maximum variation sampling. Please see Table  1 for further information on how DQIP’s sampling frame was important for providing contextual information on processes influencing effective implementation of the intervention.

Conceptual and theoretical framework

A conceptual or theoretical framework helps to frame data collection and analysis [ 57 ]. Theories can also underpin propositions, which can be tested in the process evaluation. Process evaluations produce intervention-dependent knowledge, and theories help make the research findings more generalizable by providing a common language [ 16 ]. There are a number of mid-range theories which have been designed to be used with process evaluation [ 34 , 35 , 58 ]. The choice of the appropriate conceptual or theoretical framework is, however, dependent on the philosophical and professional background of the research. The two examples within this paper used our own framework for the design of process evaluations, which proposes a number of candidate processes which can be explored, for example, recruitment, delivery, response, maintenance and context [ 45 ]. This framework was published before the MRC guidance on process evaluations, and both the DQIP and OPAL process evaluations were designed before the MRC guidance was published. The DQIP process evaluation explored all candidates in the framework whereas the OPAL process evaluation selected four candidates, illustrating that process evaluations can be selective in what they explore based on the purpose, research questions and resources. Furthermore, as Kislov and colleagues argue, we also have a responsibility to critique the theoretical framework underpinning the evaluation and refine theories to advance knowledge [ 59 ].

Data collection

An important consideration is what data to collect or measure and when. Case study methodology supports a range of data collection methods, both qualitative and quantitative, to best answer the research questions. As the aim of the case study is to gain an in-depth understanding of phenomena in context, methods are more commonly qualitative or mixed method in nature. Qualitative methods such as interviews, focus groups and observation offer rich descriptions of the setting, delivery of the intervention in each site and arm, how the intervention was perceived by the professionals delivering the intervention and the patients receiving the intervention. Quantitative methods can measure recruitment, fidelity and dose and establish which characteristics are associated with adoption, delivery and effectiveness. To ensure an understanding of the complexity of the relationship between the intervention and context, the case study should rely on multiple sources of data and triangulate these to confirm and corroborate the findings [ 8 ]. Process evaluations might consider using routine data collected in the trial across all sites and additional qualitative data across carefully sampled sites for a more nuanced picture within reasonable resource constraints. Mixed methods allow researchers to ask more complex questions and collect richer data than can be collected by one method alone [ 60 ]. The use of multiple sources of data allows data triangulation, which increases a study’s internal validity but also provides a more in-depth and holistic depiction of the case [ 20 ]. For example, in the DQIP process evaluation, the quantitative component used routinely collected data from all sites participating in the trial and purposively sampled cases for a more in-depth qualitative exploration [ 21 , 38 , 39 ].

The timing of data collection is crucial to study design, especially within a process evaluation where data collection can potentially influence the trial outcome. Process evaluations are generally in parallel or retrospective to the trial. The advantage of a retrospective design is that the evaluation itself is less likely to influence the trial outcome. However, the disadvantages include recall bias, lack of sensitivity to nuances and an inability to iteratively explore the relationship between intervention and outcome as it develops. To capture the dynamic relationship between intervention and context, the process evaluation needs to be parallel and longitudinal to the trial. Longitudinal methodological design is rare, but it is needed to capture the dynamic nature of implementation [ 40 ]. How the intervention is delivered is likely to change over time as it interacts with context. For example, as professionals deliver the intervention, they become more familiar with it, and it becomes more embedded into systems. The OPAL process evaluation was a longitudinal, mixed methods process evaluation where the quantitative component had been predefined and built into trial data collection systems. Data collection in both the qualitative and quantitative components mirrored the trial data collection points, which were longitudinal to capture adherence and contextual changes over time.

There is a lot of attention in the recent literature towards a systems approach to understanding interventions in context, which suggests interventions are ‘events within systems’ [ 61 , 62 ]. This framing highlights the dynamic nature of context, suggesting that interventions are an attempt to change systems dynamics. This conceptualisation would suggest that the study design should collect contextual data before and after implementation to assess the effect of the intervention on the context and vice versa.

Data analysis

Designing a rigorous analysis plan is particularly important for multiple case studies, where researchers must decide whether their approach to analysis is case or variable based. Case-based analysis is the most common, and analytic strategies must be clearly articulated for within and across case analysis. A multiple case study design can consist of multiple cases, where each case is analysed at the case level, or of multiple embedded cases, where data from all the cases are pulled together for analysis at some level. For example, OPAL analysis was at the case level, but all the cases for the intervention and control arms were pulled together at the arm level for more in-depth analysis and comparison. For Yin, analytical strategies rely on theoretical propositions, but for Stake, analysis works from the data to develop theory. In OPAL and DQIP, case summaries were written to summarise the cases and detail within-case analysis. Each of the studies structured these differently based on the phenomena of interest and the analytic technique. DQIP applied an approach more akin to Stake [ 9 ], with the cases summarised around inductive themes whereas OPAL applied a Yin [ 8 ] type approach using theoretical propositions around which the case summaries were structured. As the data for each case had been collected through longitudinal interviews, the case summaries were able to capture changes over time. It is beyond the scope of this paper to discuss different analytic techniques; however, to ensure the holistic examination of the intervention(s) in context, it is important to clearly articulate and demonstrate how data is integrated and synthesised [ 31 ].

There are a number of approaches to process evaluation design in the literature; however, there is a paucity of research on what case study design can offer process evaluations. We argue that case study is one of the best research designs to underpin process evaluations, to capture the dynamic and complex relationship between intervention and context during implementation [ 38 ]. Case study can enable comparisons within and across intervention and control arms and enable the evolving relationship between intervention and context to be captured holistically rather than considering processes in isolation. Utilising a longitudinal design can enable the dynamic relationship between context and intervention to be captured in real time. This information is fundamental to holistically explaining what intervention was implemented, understanding how and why the intervention worked or not and informing the transferability of the intervention into routine clinical practice.

Case study designs are not prescriptive, but process evaluations using case study should consider the purpose, trial design, the theories or assumptions underpinning the intervention, and the conceptual and theoretical frameworks informing the evaluation. We have discussed each of these considerations in turn, providing a comprehensive overview of issues for process evaluations using a case study design. There is no single or best way to conduct a process evaluation or a case study, but researchers need to make informed choices about the process evaluation design. Although this paper focuses on process evaluations, we recognise that case study design could also be useful during intervention development and feasibility trials. Elements of this paper are also applicable to other study designs involving trials.

Availability of data and materials

No data and materials were used.

Abbreviations

Data-driven Quality Improvement in Primary Care

Medical Research Council

Nonsteroidal anti-inflammatory drugs

Optimizing Pelvic Floor Muscle Exercises to Achieve Long-term benefits

Blencowe NB. Systematic review of intervention design and delivery in pragmatic and explanatory surgical randomized clinical trials. Br J Surg. 2015;102:1037–47.

Article   CAS   PubMed   Google Scholar  

Dixon-Woods M. The problem of context in quality improvement. In: Foundation TH, editor. Perspectives on context: The Health Foundation; 2014.

Wells M, Williams B, Treweek S, Coyle J, Taylor J. Intervention description is not enough: evidence from an in-depth multiple case study on the untold role and impact of context in randomised controlled trials of seven complex interventions. Trials. 2012;13(1):95.

Article   PubMed   PubMed Central   Google Scholar  

Grant A, Sullivan F, Dowell J. An ethnographic exploration of influences on prescribing in general practice: why is there variation in prescribing practices? Implement Sci. 2013;8(1):72.

Lang ES, Wyer PC, Haynes RB. Knowledge translation: closing the evidence-to-practice gap. Ann Emerg Med. 2007;49(3):355–63.

Article   PubMed   Google Scholar  

Ward V, House AF, Hamer S. Developing a framework for transferring knowledge into action: a thematic analysis of the literature. J Health Serv Res Policy. 2009;14(3):156–64.

Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104(12):510–20.

Yin R. Case study research and applications: design and methods. Los Angeles: Sage Publications Inc; 2018.

Google Scholar  

Stake R. The art of case study research. Thousand Oaks, California: Sage Publications Ltd; 1995.

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, Moore L, O’Cathain A, Tinati T, Wight D, et al. Process evaluation of complex interventions: Medical Research Council guidance. Br Med J. 2015;350.

Hawe P. Minimal, negligible and negligent interventions. Soc Sci Med. 2015;138:265–8.

Moore GF, Evans RE, Hawkins J, Littlecott H, Melendez-Torres GJ, Bonell C, Murphy S. From complex social interventions to interventions in complex social systems: future directions and unresolved questions for intervention development and evaluation. Evaluation. 2018;25(1):23–45.

Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. 2018;16(1):95.

Rutter H, Savona N, Glonti K, Bibby J, Cummins S, Finegood DT, Greaves F, Harper L, Hawe P, Moore L, et al. The need for a complex systems model of evidence for public health. Lancet. 2017;390(10112):2602–4.

Moore G, Cambon L, Michie S, Arwidson P, Ninot G, Ferron C, Potvin L, Kellou N, Charlesworth J, Alla F, et al. Population health intervention research: the place of theories. Trials. 2019;20(1):285.

Kislov R. Engaging with theory: from theoretically informed to theoretically informative improvement research. BMJ Qual Saf. 2019;28(3):177–9.

Boulton R, Sandall J, Sevdalis N. The cultural politics of ‘Implementation Science’. J Med Human. 2020;41(3):379-94. h https://doi.org/10.1007/s10912-020-09607-9 .

Cheng KKF, Metcalfe A. Qualitative methods and process evaluation in clinical trials context: where to head to? Int J Qual Methods. 2018;17(1):1609406918774212.

Article   Google Scholar  

Richards DA, Bazeley P, Borglin G, Craig P, Emsley R, Frost J, Hill J, Horwood J, Hutchings HA, Jinks C, et al. Integrating quantitative and qualitative data and findings when undertaking randomised controlled trials. BMJ Open. 2019;9(11):e032081.

Thomas G. How to do your case study, 2nd edition edn. London: Sage Publications Ltd; 2016.

Grant A, Dreischulte T, Guthrie B. Process evaluation of the Data-driven Quality Improvement in Primary Care (DQIP) trial: case study evaluation of adoption and maintenance of a complex intervention to reduce high-risk primary care prescribing. BMJ Open. 2017;7(3).

Pfadenhauer L, Rohwer A, Burns J, Booth A, Lysdahl KB, Hofmann B, Gerhardus A, Mozygemba K, Tummers M, Wahlster P, et al. Guidance for the assessment of context and implementation in health technology assessments (HTA) and systematic reviews of complex interventions: the Context and Implementation of Complex Interventions (CICI) framework: Integrate-HTA; 2016.

Bate P, Robert G, Fulop N, Ovretveit J, Dixon-Woods M. Perspectives on context. London: The Health Foundation; 2014.

Ovretveit J. Understanding the conditions for improvement: research to discover which context influences affect improvement success. BMJ Qual Saf. 2011;20.

Medical Research Council: Process evaluation of complex interventions: UK Medical Research Council (MRC) guidance. 2015.

May CR, Johnson M, Finch T. Implementation, context and complexity. Implement Sci. 2016;11(1):141.

Bate P. Context is everything. In: Perpesctives on Context. The Health Foundation 2014.

Horton TJ, Illingworth JH, Warburton WHP. Overcoming challenges in codifying and replicating complex health care interventions. Health Aff. 2018;37(2):191–7.

O'Connor AM, Tugwell P, Wells GA, Elmslie T, Jolly E, Hollingworth G, McPherson R, Bunn H, Graham I, Drake E. A decision aid for women considering hormone therapy after menopause: decision support framework and evaluation. Patient Educ Couns. 1998;33:267–79.

Creswell J, Poth C. Qualiative inquiry and research design, fourth edition edn. Thousan Oaks, California: Sage Publications; 2018.

Carolan CM, Forbat L, Smith A. Developing the DESCARTE model: the design of case study research in health care. Qual Health Res. 2016;26(5):626–39.

Takahashi ARW, Araujo L. Case study research: opening up research opportunities. RAUSP Manage J. 2020;55(1):100–11.

Tight M. Understanding case study research, small-scale research with meaning. London: Sage Publications; 2017.

May C, Finch T. Implementing, embedding, and integrating practices: an outline of normalisation process theory. Sociology. 2009;43:535.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice. A consolidated framework for advancing implementation science. Implement Sci. 2009;4.

Pawson R, Tilley N. Realist evaluation. London: Sage; 1997.

Dreischulte T, Donnan P, Grant A, Hapca A, McCowan C, Guthrie B. Safer prescribing - a trial of education, informatics & financial incentives. N Engl J Med. 2016;374:1053–64.

Grant A, Dreischulte T, Guthrie B. Process evaluation of the Data-driven Quality Improvement in Primary Care (DQIP) trial: active and less active ingredients of a multi-component complex intervention to reduce high-risk primary care prescribing. Implement Sci. 2017;12(1):4.

Dreischulte T, Grant A, Hapca A, Guthrie B. Process evaluation of the Data-driven Quality Improvement in Primary Care (DQIP) trial: quantitative examination of variation between practices in recruitment, implementation and effectiveness. BMJ Open. 2018;8(1):e017133.

Grant A, Dean S, Hay-Smith J, Hagen S, McClurg D, Taylor A, Kovandzic M, Bugge C. Effectiveness and cost-effectiveness randomised controlled trial of basic versus biofeedback-mediated intensive pelvic floor muscle training for female stress or mixed urinary incontinence: protocol for the OPAL (Optimising Pelvic Floor Exercises to Achieve Long-term benefits) trial mixed methods longitudinal qualitative case study and process evaluation. BMJ Open. 2019;9(2):e024152.

Hagen S, McClurg D, Bugge C, Hay-Smith J, Dean SG, Elders A, Glazener C, Abdel-fattah M, Agur WI, Booth J, et al. Effectiveness and cost-effectiveness of basic versus biofeedback-mediated intensive pelvic floor muscle training for female stress or mixed urinary incontinence: protocol for the OPAL randomised trial. BMJ Open. 2019;9(2):e024153.

Steckler A, Linnan L. Process evaluation for public health interventions and research; 2002.

Durlak JA. Why programme implementation is so important. J Prev Intervent Commun. 1998;17(2):5–18.

Bonell C, Oakley A, Hargreaves J, VS, Rees R. Assessment of generalisability in trials of health interventions: suggested framework and systematic review. Br Med J. 2006;333(7563):346–9.

Article   CAS   Google Scholar  

Grant A, Treweek S, Dreischulte T, Foy R, Guthrie B. Process evaluations for cluster-randomised trials of complex interventions: a proposed framework for design and reporting. Trials. 2013;14(1):15.

Yin R. Case study research: design and methods. London: Sage Publications; 2003.

Bugge C, Hay-Smith J, Grant A, Taylor A, Hagen S, McClurg D, Dean S: A 24 month longitudinal qualitative study of women’s experience of electromyography biofeedback pelvic floor muscle training (PFMT) and PFMT alone for urinary incontinence: adherence, outcome and context. ICS Gothenburg 2019 2019. https://www.ics.org/2019/abstract/473 . Access 10.9.2020.

Suzanne Hagen, Andrew Elders, Susan Stratton, Nicole Sergenson, Carol Bugge, Sarah Dean, Jean Hay-Smith, Mary Kilonzo, Maria Dimitrova, Mohamed Abdel-Fattah, Wael Agur, Jo Booth, Cathryn Glazener, Karen Guerrero, Alison McDonald, John Norrie, Louise R Williams, Doreen McClurg. Effectiveness of pelvic floor muscle training with and without electromyographic biofeedback for urinary incontinence in women: multicentre randomised controlled trial BMJ 2020;371. https://doi.org/10.1136/bmj.m3719 .

Cook TD. Emergent principles for the design, implementation, and analysis of cluster-based experiments in social science. Ann Am Acad Pol Soc Sci. 2005;599(1):176–98.

Hoffmann T, Glasziou P, Boutron I, Milne R, Perera R, Moher D. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. Br Med J. 2014;348.

Hawe P, Shiell A, Riley T. Complex interventions: how “out of control” can a randomised controlled trial be? Br Med J. 2004;328(7455):1561–3.

Grant A, Dreischulte T, Treweek S, Guthrie B. Study protocol of a mixed-methods evaluation of a cluster randomised trial to improve the safety of NSAID and antiplatelet prescribing: Data-driven Quality Improvement in Primary Care. Trials. 2012;13:154.

Flyvbjerg B. Five misunderstandings about case-study research. Qual Inq. 2006;12(2):219–45.

Thorne S. The great saturation debate: what the “S word” means and doesn’t mean in qualitative research reporting. Can J Nurs Res. 2020;52(1):3–5.

Guest G, Bunce A, Johnson L. How many interviews are enough?: an experiment with data saturation and variability. Field Methods. 2006;18(1):59–82.

Guest G, Namey E, Chen M. A simple method to assess and report thematic saturation in qualitative research. PLoS One. 2020;15(5):e0232076.

Article   CAS   PubMed   PubMed Central   Google Scholar  

Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf. 2015;24(3):228–38.

Rycroft-Malone J. The PARIHS framework: a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual. 2004;4:297-304.

Kislov R, Pope C, Martin GP, Wilson PM. Harnessing the power of theorising in implementation science. Implement Sci. 2019;14(1):103.

Cresswell JW, Plano Clark VL. Designing and conducting mixed methods research. Thousand Oaks: Sage Publications Ltd; 2007.

Hawe P, Shiell A, Riley T. Theorising interventions as events in systems. Am J Community Psychol. 2009;43:267–76.

Craig P, Ruggiero E, Frohlich KL, Mykhalovskiy E, White M. Taking account of context in population health intervention research: guidance for producers, users and funders of research: National Institute for Health Research; 2018. https://www.ncbi.nlm.nih.gov/books/NBK498645/pdf/Bookshelf_NBK498645.pdf .

Download references

Acknowledgements

We would like to thank Professor Shaun Treweek for the discussions about context in trials.

No funding was received for this work.

Author information

Authors and affiliations.

School of Nursing, Midwifery and Paramedic Practice, Robert Gordon University, Garthdee Road, Aberdeen, AB10 7QB, UK

Aileen Grant

Faculty of Health Sciences and Sport, University of Stirling, Pathfoot Building, Stirling, FK9 4LA, UK

Carol Bugge

Department of Surgery and Cancer, Imperial College London, Charing Cross Campus, London, W6 8RP, UK

You can also search for this author in PubMed   Google Scholar

Contributions

AG, CB and MW conceptualised the study. AG wrote the paper. CB and MW commented on the drafts. All authors have approved the final manuscript.

Corresponding author

Correspondence to Aileen Grant .

Ethics declarations

Ethics approval and consent to participate.

Ethics approval and consent to participate is not appropriate as no participants were included.

Consent for publication

Consent for publication is not required as no participants were included.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Grant, A., Bugge, C. & Wells, M. Designing process evaluations using case study to explore the context of complex interventions evaluated in trials. Trials 21 , 982 (2020). https://doi.org/10.1186/s13063-020-04880-4

Download citation

Received : 09 April 2020

Accepted : 06 November 2020

Published : 27 November 2020

DOI : https://doi.org/10.1186/s13063-020-04880-4

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Process evaluation
  • Case study design

ISSN: 1745-6215

  • Submission enquiries: Access here and click Contact Us
  • General enquiries: [email protected]

case study with evaluation

  • - Google Chrome

Intended for healthcare professionals

  • My email alerts
  • BMA member login
  • Username * Password * Forgot your log in details? Need to activate BMA Member Log In Log in via OpenAthens Log in via your institution

Home

Search form

  • Advanced search
  • Search responses
  • Search blogs
  • Qualitative Research:...

Qualitative Research: Case study evaluation

  • Related content
  • Peer review
  • Justin Keen , research fellow, health economics research group a ,
  • Tim Packwood a
  • Brunel University, Uxbridge, Middlesex UB8 3PH
  • a Correspondence to: Dr Keen.

Case study evaluations, using one or more qualitative methods, have been used to investigate important practical and policy questions in health care. This paper describes the features of a well designed case study and gives examples showing how qualitative methods are used in evaluations of health services and health policy.

This is the last in a series of seven articles describing non-quantitative techniques and showing their value in health research

Introduction

The medical approach to understanding disease has traditionally drawn heavily on qualitative data, and in particular on case studies to illustrate important or interesting phenomena. The tradition continues today, not least in regular case reports in this and other medical journals. Moreover, much of the everyday work of doctors and other health professionals still involves decisions that are qualitative rather than quantitative in nature.

This paper discusses the use of qualitative research methods, not in clinical care but in case study evaluations of health service interventions. It is useful for doctors to understand the principles guiding the design and conduct of these evaluations, because they are frequently used by both researchers and inspectorial agencies (such as the Audit Commission in the United Kingdom and the Office of Technology Assessment in the United States) to investigate the work of doctors and other health professionals.

We briefly discuss the circumstances in which case study research can usefully be undertaken in health service settings and the ways in which qualitative methods are used within case studies. Examples show how qualitative methods are applied, both in purely qualitative studies and alongside quantitative methods.

Case study evaluations

Doctors often find themselves asking important practical questions, such as should we be involved in the management of hospitals and, if so, how? how will new government policies affect the lives of our patients? and how can we cope with changes …

Log in using your username and password

BMA Member Log In

If you have a subscription to The BMJ, log in:

  • Need to activate
  • Log in via institution
  • Log in via OpenAthens

Log in through your institution

Subscribe from £184 *.

Subscribe and get access to all BMJ articles, and much more.

* For online subscription

Access this article for 1 day for: £50 / $60/ €56 ( excludes VAT )

You can download a PDF version for your personal record.

Buy this article

case study with evaluation

Breadcrumbs Section. Click here to navigate to respective pages.

Program Evaluation

Program Evaluation

DOI link for Program Evaluation

Get Citation

This text provides a solid foundation in program evaluation, covering the main components of evaluating agencies and their programs, how best to address those components, and the procedures to follow when conducting evaluations. Different models and approaches are paired with practical techniques, such as how to plan an interview to collect qualitative data and how to use statistical analyses to report results. In every chapter, case studies provide real world examples of evaluations broken down into the main elements of program evaluation: the needs that led to the program, the implementation of program plans, the people connected to the program, unexpected side effects, the role of evaluators in improving programs, the results, and the factors behind the results. In addition, the story of one of the evaluators involved in each case study is presented to show the human side of evaluation. This new edition also offers enhanced and expanded case studies, making them a central organizing theme, and adds more international examples.

New online resources for this edition include a table of evaluation models, examples of program evaluation reports, sample handouts for presentations to stakeholders, links to YouTube videos and additional annotated resources.  All resources are available for download under the tab eResources at www.routledge.com/9781138103962 .

TABLE OF CONTENTS

Chapter 1 | 24  pages, chapter 2 | 24  pages, planning an evaluation, chapter 3 | 28  pages, developing and using a theory of the program, chapter 4 | 25  pages, developing measures of implementation and outcomes, chapter 5 | 21  pages, ethics in program evaluation, chapter 6 | 20  pages, the assessment of need, chapter 7 | 12  pages, monitoring the implementation and the operation of programs, chapter 8 | 21  pages, qualitative evaluation methods, chapter 9 | 23  pages, outcome evaluations with one group, chapter 10 | 21  pages, quasi-experimental approaches to outcome evaluation, chapter 11 | 18  pages, using experiments to evaluate programs, chapter 12 | 20  pages, analyses of costs and outcomes, chapter 13 | 18  pages, evaluation reports, chapter 14 | 18  pages, how to encourage utilization.

  • Privacy Policy
  • Terms & Conditions
  • Cookie Policy
  • Taylor & Francis Online
  • Taylor & Francis Group
  • Students/Researchers
  • Librarians/Institutions

Connect with us

Registered in England & Wales No. 3099067 5 Howick Place | London | SW1P 1WG © 2024 Informa UK Limited

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List

Logo of trials

Designing process evaluations using case study to explore the context of complex interventions evaluated in trials

Aileen grant.

1 School of Nursing, Midwifery and Paramedic Practice, Robert Gordon University, Garthdee Road, Aberdeen, AB10 7QB UK

Carol Bugge

2 Faculty of Health Sciences and Sport, University of Stirling, Pathfoot Building, Stirling, FK9 4LA UK

3 Department of Surgery and Cancer, Imperial College London, Charing Cross Campus, London, W6 8RP UK

Associated Data

No data and materials were used.

Process evaluations are an important component of an effectiveness evaluation as they focus on understanding the relationship between interventions and context to explain how and why interventions work or fail, and whether they can be transferred to other settings and populations. However, historically, context has not been sufficiently explored and reported resulting in the poor uptake of trial results. Therefore, suitable methodologies are needed to guide the investigation of context. Case study is one appropriate methodology, but there is little guidance about what case study design can offer the study of context in trials. We address this gap in the literature by presenting a number of important considerations for process evaluation using a case study design.

In this paper, we define context, the relationship between complex interventions and context, and describe case study design methodology. A well-designed process evaluation using case study should consider the following core components: the purpose; definition of the intervention; the trial design, the case, the theories or logic models underpinning the intervention, the sampling approach and the conceptual or theoretical framework. We describe each of these in detail and highlight with examples from recently published process evaluations.

Conclusions

There are a number of approaches to process evaluation design in the literature; however, there is a paucity of research on what case study design can offer process evaluations. We argue that case study is one of the best research designs to underpin process evaluations, to capture the dynamic and complex relationship between intervention and context during implementation. We provide a comprehensive overview of the issues for process evaluation design to consider when using a case study design.

Trial registration

DQIP - ClinicalTrials.gov number, {"type":"clinical-trial","attrs":{"text":"NCT01425502","term_id":"NCT01425502"}} NCT01425502 - OPAL - ISRCTN57746448

Contribution to the literature

  • We illustrate how case study methodology can explore the complex, dynamic and uncertain relationship between context and interventions within trials.
  • We depict different case study designs and illustrate there is not one formula and that design needs to be tailored to the context and trial design.
  • Case study can support comparisons between intervention and control arms and between cases within arms to uncover and explain differences in detail.
  • We argue that case study can illustrate how components have evolved and been redefined through implementation.
  • Key issues for consideration in case study design within process evaluations are presented and illustrated with examples.

Process evaluations are an important component of an effectiveness evaluation as they focus on understanding the relationship between interventions and context to explain how and why interventions work or fail and whether they can be transferred to other settings and populations. However, historically, not all trials have had a process evaluation component, nor have they sufficiently reported aspects of context, resulting in poor uptake of trial findings [ 1 ]. Considerations of context are often absent from published process evaluations, with few studies acknowledging, taking account of or describing context during implementation, or assessing the impact of context on implementation [ 2 , 3 ]. At present, evidence from trials is not being used in a timely manner [ 4 , 5 ], and this can negatively impact on patient benefit and experience [ 6 ]. It takes on average 17 years for knowledge from research to be implemented into practice [ 7 ]. Suitable methodologies are therefore needed that allow for context to be exposed; one appropriate methodological approach is case study [ 8 , 9 ].

In 2015, the Medical Research Council (MRC) published guidance for process evaluations [ 10 ]. This was a key milestone in legitimising as well as providing tools, methods and a framework for conducting process evaluations. Nevertheless, as with all guidance, there is a need for reflection, challenge and refinement. There have been a number of critiques of the MRC guidance, including that interventions should be considered as events in systems [ 11 – 14 ]; a need for better use, critique and development of theories [ 15 – 17 ]; and a need for more guidance on integrating qualitative and quantitative data [ 18 , 19 ]. Although the MRC process evaluation guidance does consider appropriate qualitative and quantitative methods, it does not mention case study design and what it can offer the study of context in trials.

The case study methodology is ideally suited to real-world, sustainable intervention development and evaluation because it can explore and examine contemporary complex phenomena, in depth, in numerous contexts and using multiple sources of data [ 8 ]. Case study design can capture the complexity of the case, the relationship between the intervention and the context and how the intervention worked (or not) [ 8 ]. There are a number of textbooks on a case study within the social science fields [ 8 , 9 , 20 ], but there are no case study textbooks and a paucity of useful texts on how to design, conduct and report case study within the health arena. Few examples exist within the trial design and evaluation literature [ 3 , 21 ]. Therefore, guidance to enable well-designed process evaluations using case study methodology is required.

We aim to address the gap in the literature by presenting a number of important considerations for process evaluation using a case study design. First, we define the context and describe the relationship between complex health interventions and context.

What is context?

While there is growing recognition that context interacts with the intervention to impact on the intervention’s effectiveness [ 22 ], context is still poorly defined and conceptualised. There are a number of different definitions in the literature, but as Bate et al. explained ‘almost universally, we find context to be an overworked word in everyday dialogue but a massively understudied and misunderstood concept’ [ 23 ]. Ovretveit defines context as ‘everything the intervention is not’ [ 24 ]. This last definition is used by the MRC framework for process evaluations [ 25 ]; however; the problem with this definition is that it is highly dependent on how the intervention is defined. We have found Pfadenhauer et al.’s definition useful:

Context is conceptualised as a set of characteristics and circumstances that consist of active and unique factors that surround the implementation. As such it is not a backdrop for implementation but interacts, influences, modifies and facilitates or constrains the intervention and its implementation. Context is usually considered in relation to an intervention or object, with which it actively interacts. A boundary between the concepts of context and setting is discernible: setting refers to the physical, specific location in which the intervention is put into practice. Context is much more versatile, embracing not only the setting but also roles, interactions and relationships [ 22 ].

Traditionally, context has been conceptualised in terms of barriers and facilitators, but what is a barrier in one context may be a facilitator in another, so it is the relationship and dynamics between the intervention and context which are the most important [ 26 ]. There is a need for empirical research to really understand how different contextual factors relate to each other and to the intervention. At present, research studies often list common contextual factors, but without a depth of meaning and understanding, such as government or health board policies, organisational structures, professional and patient attitudes, behaviours and beliefs [ 27 ]. The case study methodology is well placed to understand the relationship between context and intervention where these boundaries may not be clearly evident. It offers a means of unpicking the contextual conditions which are pertinent to effective implementation.

The relationship between complex health interventions and context

Health interventions are generally made up of a number of different components and are considered complex due to the influence of context on their implementation and outcomes [ 3 , 28 ]. Complex interventions are often reliant on the engagement of practitioners and patients, so their attitudes, behaviours, beliefs and cultures influence whether and how an intervention is effective or not. Interventions are context-sensitive; they interact with the environment in which they are implemented. In fact, many argue that interventions are a product of their context, and indeed, outcomes are likely to be a product of the intervention and its context [ 3 , 29 ]. Within a trial, there is also the influence of the research context too—so the observed outcome could be due to the intervention alone, elements of the context within which the intervention is being delivered, elements of the research process or a combination of all three. Therefore, it can be difficult and unhelpful to separate the intervention from the context within which it was evaluated because the intervention and context are likely to have evolved together over time. As a result, the same intervention can look and behave differently in different contexts, so it is important this is known, understood and reported [ 3 ]. Finally, the intervention context is dynamic; the people, organisations and systems change over time, [ 3 ] which requires practitioners and patients to respond, and they may do this by adapting the intervention or contextual factors. So, to enable researchers to replicate successful interventions, or to explain why the intervention was not successful, it is not enough to describe the components of the intervention, they need to be described by their relationship to their context and resources [ 3 , 28 ].

What is a case study?

Case study methodology aims to provide an in-depth, holistic, balanced, detailed and complete picture of complex contemporary phenomena in its natural context [ 8 , 9 , 20 ]. In this case, the phenomena are the implementation of complex interventions in a trial. Case study methodology takes the view that the phenomena can be more than the sum of their parts and have to be understood as a whole [ 30 ]. It is differentiated from a clinical case study by its analytical focus [ 20 ].

The methodology is particularly useful when linked to trials because some of the features of the design naturally fill the gaps in knowledge generated by trials. Given the methodological focus on understanding phenomena in the round, case study methodology is typified by the use of multiple sources of data, which are more commonly qualitatively guided [ 31 ]. The case study methodology is not epistemologically specific, like realist evaluation, and can be used with different epistemologies [ 32 ], and with different theories, such as Normalisation Process Theory (which explores how staff work together to implement a new intervention) or the Consolidated Framework for Implementation Research (which provides a menu of constructs associated with effective implementation) [ 33 – 35 ]. Realist evaluation can be used to explore the relationship between context, mechanism and outcome, but case study differs from realist evaluation by its focus on a holistic and in-depth understanding of the relationship between an intervention and the contemporary context in which it was implemented [ 36 ]. Case study enables researchers to choose epistemologies and theories which suit the nature of the enquiry and their theoretical preferences.

Designing a process evaluation using case study

An important part of any study is the research design. Due to their varied philosophical positions, the seminal authors in the field of case study have different epistemic views as to how a case study should be conducted [ 8 , 9 ]. Stake takes an interpretative approach (interested in how people make sense of their world), and Yin has more positivistic leanings, arguing for objectivity, validity and generalisability [ 8 , 9 ].

Regardless of the philosophical background, a well-designed process evaluation using case study should consider the following core components: the purpose; the definition of the intervention, the trial design, the case, and the theories or logic models underpinning the intervention; the sampling approach; and the conceptual or theoretical framework [ 8 , 9 , 20 , 31 , 33 ]. We now discuss these critical components in turn, with reference to two process evaluations that used case study design, the DQIP and OPAL studies [ 21 , 37 – 41 ].

The purpose of a process evaluation is to evaluate and explain the relationship between the intervention and its components, to context and outcome. It can help inform judgements about validity (by exploring the intervention components and their relationship with one another (construct validity), the connections between intervention and outcomes (internal validity) and the relationship between intervention and context (external validity)). It can also distinguish between implementation failure (where the intervention is poorly delivered) and intervention failure (intervention design is flawed) [ 42 , 43 ]. By using a case study to explicitly understand the relationship between context and the intervention during implementation, the process evaluation can explain the intervention effects and the potential generalisability and optimisation into routine practice [ 44 ].

The DQIP process evaluation aimed to qualitatively explore how patients and GP practices responded to an intervention designed to reduce high-risk prescribing of nonsteroidal anti-inflammatory drugs (NSAIDs) and/or antiplatelet agents (see Table  1 ) and quantitatively examine how change in high-risk prescribing was associated with practice characteristics and implementation processes. The OPAL process evaluation (see Table  2 ) aimed to quantitatively understand the factors which influenced the effectiveness of a pelvic floor muscle training intervention for women with urinary incontinence and qualitatively explore the participants’ experiences of treatment and adherence.

Data-driven Quality Improvement in Primary Care (DQIP)

The DQIP trial was a cluster randomised, stepped wedge trial in 33 practices from one Scottish health board (NHS Tayside) which aimed to reduce high-risk prescribing of nonsteroidal anti-inflammatory drugs (NSAIDs) and/or selected antiplatelet agents. All practices received the intervention but were randomised to one of 10 different start dates.

The DQIP intervention comprised education, an informatics tool and a financial incentive. In DQIP, the researchers delivered education and training to the general practices (clusters) with high fidelity (of form and function); the general practices were then free to organise themselves as they saw fit to deliver the intervention to patients (fidelity of function, but variation in form). For the process evaluation design, the DQIP intervention was conceptualised as two interventions; the intervention delivered to clusters (intervention 1), and the intervention delivered to patients (intervention 2) as there may be different processes working at the cluster and individual patient level. Data were collected on intervention 1 in all practices and in a purposive sample for intervention 2, using a comparative case study design for both interventions.

: The DQIP mixed method process evaluation aimed to qualitatively explore how patients and practices responded to the intervention and quantitatively examine how a change in high risk prescribing was associated with practice characteristics and implementation processes.

: A mixed method multiple comparative case study with general practices as the units of analysis.

: Prespecified analysis to explore associations between practice characteristics, implementation processes and change in prescribing.

: One practice was sampled per cohort of the trial. Ten general practices were purposively sampled based on their initial adoption of the intervention, four practices which rapidly implemented the intervention and six who were initial implementation failures.

: Routine data collected during the trial were used to inform the quantitative part of the process evaluation, and to mirror the trial design, a case was selected for qualitative data collection from each cohort in the trial.

Framework for process evaluation design [ ] and Normalisation Process Theory [ ].

An in-depth description of each case was constructed detailing the practice characteristics and perceptions of all staff who participated in the interviews, with additional data from the educational outreach observation and informal interviews. Deductive theoretical analysis using the Normalisation Process Theory was also conducted. The Framework technique of matrixes facilitated detailed exploration by theoretical construct theme, practice and cross- and within-case comparisons Thematic and theoretical saturation was achieved.

Case study design illustrated that to achieve effective implementation agreement that the topic (NSAIDs and antiplatelets) was important among all clinical staff was fundamental. In addition, that practices made plans early in the process to implement the intervention including responsibility for the work and regularly evaluated their progress. Also, how practices internally organised to do the work varied illustrating this was not important for effective evaluation. Case study design was important for illustrating that implementation failure occurred at different stages depending on the practice culture and context, illuminating the differences in organisational processes and the contextual and organisational factors which impacted on effective implementation. The case study’s holistic approach to understanding how the context and culture within each practice influenced processes was key to explaining whether the intervention worked or not. Also, practice context was not fixed, so most practices adapted and were able to deliver some elements.

Optimising Pelvic Floor Exercises to Achieve Long-term benefits (OPAL)

The OPAL trial was a large multi-centre pragmatic randomised controlled trial of two active treatment arms delivered across 23 primary and secondary care sites for 600 women.

OPAL aimed to determine the effectiveness of two active treatment arms: basic pelvic floor muscle training (PFMT) and basic pelvic floor muscle training (PFMT) with biofeedback mediated intensive for the treatment of stress or mixed (stress and urgency) female urinary incontinence.

The OPAL trial has an embedded mixed methods process evaluation and a longitudinal qualitative case study, which aim to explain the trial outcomes. The longitudinal qualitative study aimed to investigate women’s experiences and adherence to the interventions.

A two-tailed embedded multiple case study design utilising longitudinal interviews. Within a multiple case study design, Yin outlines a ‘two-tailed’ approach where cases are selected to represent two extremities in relation to phenomena; in this case, the extremities are the control and intervention groups [ ]. Units of analysis were at the individual case (the participants) and the trial arms (intervention and control). There were two units of analysis to enable an in-depth exploration within each case but also at the trial arm level to explore commonalities and differences between the cases in each arm and between the arms.

: Prespecified analysis to explore fidelity, engagement and mediating factors using descriptive and interpretative statistics.

The two-tailed case study design means multiple cases (  = 20) from each trial arm were sampled to enable comparison between the trial arms. Cases were purposively sampled for variance in centre type (university hospital, district general hospital or community delivered service), therapist delivery type (physiotherapist/nurse), women’s type of UI (stress or mixed) and over time to reflect recruitment to the trial.

Mirroring the trial data collection, the case study was longitudinal, with women interviewed four times (baseline, post-treatment, 12 months and 24 months post-randomisation). Twenty women per arm were recruited, and 24 women across both arms were interviewed 4 times. GF

Framework for process evaluation design [ ].

A case was built and summarised over 2 years, with four data points for each woman. Case summaries were written summarising women’s experiences. Theoretical propositions were developed to guide the analysis. All the cases for one trial arm were grouped and within arm consistencies/inconsistencies searched for. The experimental and comparator tails were compared to one another using the theoretical propositions. Thematic and theoretical saturation was achieved.

Adherence to the interventions in the OPAL trial was hugely variable; in both trial arms, there were some women who had good adherence, some who were adherent at certain time points and some who did not adhere well at all. The case study was useful in illuminating the ways in which the context of the participants’ lives influenced adherence in both trial arms. The temporal nature of the data collection and case studies was useful in illustrating that the context of the women’s lives was dynamic, and thus, their engagement with the interventions was also dynamic. The in-depth case studies were useful in illustrating that based on each participants’ unique circumstances, different contextual factors and personal characteristics were important, such as their motivation to maintain engagement. Across-arm case comparison was able to illustrate that these factors were not related to the interventions but specific to the participants. Across-arm comparisons showed that although many participants had not maintained adherence, they felt more skilled in pelvic floor muscle training (PFMT) and able to restart PFMT exercise after a break [ , ].

Defining the intervention and exploring the theories or assumptions underpinning the intervention design

Process evaluations should also explore the utility of the theories or assumptions underpinning intervention design [ 49 ]. Not all theories underpinning interventions are based on a formal theory, but they based on assumptions as to how the intervention is expected to work. These can be depicted as a logic model or theory of change [ 25 ]. To capture how the intervention and context evolve requires the intervention and its expected mechanisms to be clearly defined at the outset [ 50 ]. Hawe and colleagues recommend defining interventions by function (what processes make the intervention work) rather than form (what is delivered) [ 51 ]. However, in some cases, it may be useful to know if some of the components are redundant in certain contexts or if there is a synergistic effect between all the intervention components.

The DQIP trial delivered two interventions, one intervention was delivered to professionals with high fidelity and then professionals delivered the other intervention to patients by form rather than function allowing adaptations to the local context as appropriate. The assumptions underpinning intervention delivery were prespecified in a logic model published in the process evaluation protocol [ 52 ].

Case study is well placed to challenge or reinforce the theoretical assumptions or redefine these based on the relationship between the intervention and context. Yin advocates the use of theoretical propositions; these direct attention to specific aspects of the study for investigation [ 8 ] can be based on the underlying assumptions and tested during the course of the process evaluation. In case studies, using an epistemic position more aligned with Yin can enable research questions to be designed, which seek to expose patterns of unanticipated as well as expected relationships [ 9 ]. The OPAL trial was more closely aligned with Yin, where the research team predefined some of their theoretical assumptions, based on how the intervention was expected to work. The relevant parts of the data analysis then drew on data to support or refute the theoretical propositions. This was particularly useful for the trial as the prespecified theoretical propositions linked to the mechanisms of action on which the intervention was anticipated to have an effect (or not).

Tailoring to the trial design

Process evaluations need to be tailored to the trial, the intervention and the outcomes being measured [ 45 ]. For example, in a stepped wedge design (where the intervention is delivered in a phased manner), researchers should try to ensure process data are captured at relevant time points or in a two-arm or multiple arm trial, ensure data is collected from the control group(s) as well as the intervention group(s). In the DQIP trial, a stepped wedge trial, at least one process evaluation case, was sampled per cohort. Trials often continue to measure outcomes after delivery of the intervention has ceased, so researchers should also consider capturing ‘follow-up’ data on contextual factors, which may continue to influence the outcome measure. The OPAL trial had two active treatment arms so collected process data from both arms. In addition, as the trial was interested in long-term adherence, the trial and the process evaluation collected data from participants for 2 years after the intervention was initially delivered, providing 24 months follow-up data, in line with the primary outcome for the trial.

Defining the case

Case studies can include single or multiple cases in their design. Single case studies usually sample typical or unique cases, their advantage being the depth and richness that can be achieved over a long period of time. The advantages of multiple case study design are that cases can be compared to generate a greater depth of analysis. Multiple case study sampling may be carried out in order to test for replication or contradiction [ 8 ]. Given that trials are often conducted over a number of sites, a multiple case study design is more sensible for process evaluations, as there is likely to be variation in implementation between sites. Case definition may occur at a variety of levels but is most appropriate if it reflects the trial design. For example, a case in an individual patient level trial is likely to be defined as a person/patient (e.g. a woman with urinary incontinence—OPAL trial) whereas in a cluster trial, a case is like to be a cluster, such as an organisation (e.g. a general practice—DQIP trial). Of course, the process evaluation could explore cases with less distinct boundaries, such as communities or relationships; however, the clarity with which these cases are defined is important, in order to scope the nature of the data that will be generated.

Carefully sampled cases are critical to a good case study as sampling helps inform the quality of the inferences that can be made from the data [ 53 ]. In both qualitative and quantitative research, how and how many participants to sample must be decided when planning the study. Quantitative sampling techniques generally aim to achieve a random sample. Qualitative research generally uses purposive samples to achieve data saturation, occurring when the incoming data produces little or no new information to address the research questions. The term data saturation has evolved from theoretical saturation in conventional grounded theory studies; however, its relevance to other types of studies is contentious as the term saturation seems to be widely used but poorly justified [ 54 ]. Empirical evidence suggests that for in-depth interview studies, saturation occurs at 12 interviews for thematic saturation, but typically more would be needed for a heterogenous sample higher degrees of saturation [ 55 , 56 ]. Both DQIP and OPAL case studies were huge with OPAL designed to interview each of the 40 individual cases four times and DQIP designed to interview the lead DQIP general practitioner (GP) twice (to capture change over time), another GP and the practice manager from each of the 10 organisational cases. Despite the plethora of mixed methods research textbooks, there is very little about sampling as discussions typically link to method (e.g. interviews) rather than paradigm (e.g. case study).

Purposive sampling can improve the generalisability of the process evaluation by sampling for greater contextual diversity. The typical or average case is often not the richest source of information. Outliers can often reveal more important insights, because they may reflect the implementation of the intervention using different processes. Cases can be selected from a number of criteria, which are not mutually exclusive, to enable a rich and detailed picture to be built across sites [ 53 ]. To avoid the Hawthorne effect, it is recommended that process evaluations sample from both intervention and control sites, which enables comparison and explanation. There is always a trade-off between breadth and depth in sampling, so it is important to note that often quantity does not mean quality and that carefully sampled cases can provide powerful illustrative examples of how the intervention worked in practice, the relationship between the intervention and context and how and why they evolved together. The qualitative components of both DQIP and OPAL process evaluations aimed for maximum variation sampling. Please see Table  1 for further information on how DQIP’s sampling frame was important for providing contextual information on processes influencing effective implementation of the intervention.

Conceptual and theoretical framework

A conceptual or theoretical framework helps to frame data collection and analysis [ 57 ]. Theories can also underpin propositions, which can be tested in the process evaluation. Process evaluations produce intervention-dependent knowledge, and theories help make the research findings more generalizable by providing a common language [ 16 ]. There are a number of mid-range theories which have been designed to be used with process evaluation [ 34 , 35 , 58 ]. The choice of the appropriate conceptual or theoretical framework is, however, dependent on the philosophical and professional background of the research. The two examples within this paper used our own framework for the design of process evaluations, which proposes a number of candidate processes which can be explored, for example, recruitment, delivery, response, maintenance and context [ 45 ]. This framework was published before the MRC guidance on process evaluations, and both the DQIP and OPAL process evaluations were designed before the MRC guidance was published. The DQIP process evaluation explored all candidates in the framework whereas the OPAL process evaluation selected four candidates, illustrating that process evaluations can be selective in what they explore based on the purpose, research questions and resources. Furthermore, as Kislov and colleagues argue, we also have a responsibility to critique the theoretical framework underpinning the evaluation and refine theories to advance knowledge [ 59 ].

Data collection

An important consideration is what data to collect or measure and when. Case study methodology supports a range of data collection methods, both qualitative and quantitative, to best answer the research questions. As the aim of the case study is to gain an in-depth understanding of phenomena in context, methods are more commonly qualitative or mixed method in nature. Qualitative methods such as interviews, focus groups and observation offer rich descriptions of the setting, delivery of the intervention in each site and arm, how the intervention was perceived by the professionals delivering the intervention and the patients receiving the intervention. Quantitative methods can measure recruitment, fidelity and dose and establish which characteristics are associated with adoption, delivery and effectiveness. To ensure an understanding of the complexity of the relationship between the intervention and context, the case study should rely on multiple sources of data and triangulate these to confirm and corroborate the findings [ 8 ]. Process evaluations might consider using routine data collected in the trial across all sites and additional qualitative data across carefully sampled sites for a more nuanced picture within reasonable resource constraints. Mixed methods allow researchers to ask more complex questions and collect richer data than can be collected by one method alone [ 60 ]. The use of multiple sources of data allows data triangulation, which increases a study’s internal validity but also provides a more in-depth and holistic depiction of the case [ 20 ]. For example, in the DQIP process evaluation, the quantitative component used routinely collected data from all sites participating in the trial and purposively sampled cases for a more in-depth qualitative exploration [ 21 , 38 , 39 ].

The timing of data collection is crucial to study design, especially within a process evaluation where data collection can potentially influence the trial outcome. Process evaluations are generally in parallel or retrospective to the trial. The advantage of a retrospective design is that the evaluation itself is less likely to influence the trial outcome. However, the disadvantages include recall bias, lack of sensitivity to nuances and an inability to iteratively explore the relationship between intervention and outcome as it develops. To capture the dynamic relationship between intervention and context, the process evaluation needs to be parallel and longitudinal to the trial. Longitudinal methodological design is rare, but it is needed to capture the dynamic nature of implementation [ 40 ]. How the intervention is delivered is likely to change over time as it interacts with context. For example, as professionals deliver the intervention, they become more familiar with it, and it becomes more embedded into systems. The OPAL process evaluation was a longitudinal, mixed methods process evaluation where the quantitative component had been predefined and built into trial data collection systems. Data collection in both the qualitative and quantitative components mirrored the trial data collection points, which were longitudinal to capture adherence and contextual changes over time.

There is a lot of attention in the recent literature towards a systems approach to understanding interventions in context, which suggests interventions are ‘events within systems’ [ 61 , 62 ]. This framing highlights the dynamic nature of context, suggesting that interventions are an attempt to change systems dynamics. This conceptualisation would suggest that the study design should collect contextual data before and after implementation to assess the effect of the intervention on the context and vice versa.

Data analysis

Designing a rigorous analysis plan is particularly important for multiple case studies, where researchers must decide whether their approach to analysis is case or variable based. Case-based analysis is the most common, and analytic strategies must be clearly articulated for within and across case analysis. A multiple case study design can consist of multiple cases, where each case is analysed at the case level, or of multiple embedded cases, where data from all the cases are pulled together for analysis at some level. For example, OPAL analysis was at the case level, but all the cases for the intervention and control arms were pulled together at the arm level for more in-depth analysis and comparison. For Yin, analytical strategies rely on theoretical propositions, but for Stake, analysis works from the data to develop theory. In OPAL and DQIP, case summaries were written to summarise the cases and detail within-case analysis. Each of the studies structured these differently based on the phenomena of interest and the analytic technique. DQIP applied an approach more akin to Stake [ 9 ], with the cases summarised around inductive themes whereas OPAL applied a Yin [ 8 ] type approach using theoretical propositions around which the case summaries were structured. As the data for each case had been collected through longitudinal interviews, the case summaries were able to capture changes over time. It is beyond the scope of this paper to discuss different analytic techniques; however, to ensure the holistic examination of the intervention(s) in context, it is important to clearly articulate and demonstrate how data is integrated and synthesised [ 31 ].

There are a number of approaches to process evaluation design in the literature; however, there is a paucity of research on what case study design can offer process evaluations. We argue that case study is one of the best research designs to underpin process evaluations, to capture the dynamic and complex relationship between intervention and context during implementation [ 38 ]. Case study can enable comparisons within and across intervention and control arms and enable the evolving relationship between intervention and context to be captured holistically rather than considering processes in isolation. Utilising a longitudinal design can enable the dynamic relationship between context and intervention to be captured in real time. This information is fundamental to holistically explaining what intervention was implemented, understanding how and why the intervention worked or not and informing the transferability of the intervention into routine clinical practice.

Case study designs are not prescriptive, but process evaluations using case study should consider the purpose, trial design, the theories or assumptions underpinning the intervention, and the conceptual and theoretical frameworks informing the evaluation. We have discussed each of these considerations in turn, providing a comprehensive overview of issues for process evaluations using a case study design. There is no single or best way to conduct a process evaluation or a case study, but researchers need to make informed choices about the process evaluation design. Although this paper focuses on process evaluations, we recognise that case study design could also be useful during intervention development and feasibility trials. Elements of this paper are also applicable to other study designs involving trials.

Acknowledgements

We would like to thank Professor Shaun Treweek for the discussions about context in trials.

Abbreviations

DQIPData-driven Quality Improvement in Primary Care
MRCMedical Research Council
NSAIDsNonsteroidal anti-inflammatory drugs
OPALOptimizing Pelvic Floor Muscle Exercises to Achieve Long-term benefits

Authors’ contributions

AG, CB and MW conceptualised the study. AG wrote the paper. CB and MW commented on the drafts. All authors have approved the final manuscript.

No funding was received for this work.

Availability of data and materials

Ethics approval and consent to participate.

Ethics approval and consent to participate is not appropriate as no participants were included.

Consent for publication

Consent for publication is not required as no participants were included.

Competing interests

The authors declare no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Aileen Grant, Email: [email protected] .

Carol Bugge, Email: [email protected] .

Mary Wells, Email: [email protected] .

Case Study Evaluation: Past, Present and Future Challenges: Volume 15

Table of contents, case study evaluation: past, present and future challenges, advances in program evaluation, copyright page, list of contributors, introduction, case study, methodology and educational evaluation: a personal view.

This chapter gives one version of the recent history of evaluation case study. It looks back over the emergence of case study as a sociological method, developed in the early years of the 20th Century and celebrated and elaborated by the Chicago School of urban sociology at Chicago University, starting throughout the 1920s and 1930s. Some of the basic methods, including constant comparison, were generated at that time. Only partly influenced by this methodological movement, an alliance between an Illinois-based team in the United States and a team at the University of East Anglia in the United Kingdom recast the case method as a key tool for the evaluation of social and educational programmes.

Letters from a Headmaster ☆ Originally published in Simons, H. (Ed.) (1980). Towards a Science of the Singular: Essays about Case Study in Educational Research and Evaluation. Occasional Papers No. 10. Norwich, UK: Centre for Applied Research, University of East Anglia.

Story telling and educational understanding ☆ previously published in occasional papers #12, evaluation centre, university of western michigan, 1978..

The full ‘storytelling’ paper was written in 1978 and was influential in its time. It is reprinted here, introduced by an Author's reflection on it in 2014. The chapter describes the author’s early disenchantment with traditional approaches to educational research.

He regards educational research as, at best, a misnomer, since little of it is preceded by a search . Entitled educational researchers often fancy themselves as scientists at work. But those whom they attempt to describe are often artists at work. Statistical methodologies enable educational researchers to measure something, but their measurements can neither capture nor explain splendid teaching.

Since such a tiny fraction of what is published in educational research journals influences school practitioners, professional researchers should risk trying alternative approaches to uncovering what is going on in schools.

Story telling is posited as a possible key to producing insights that inform and ultimately improve educational practice. It advocates openness to broad inquiry into the culture of the educational setting.

Case Study as Antidote to the Literal

Much programme and policy evaluation yields to the pressure to report on the productivity of programmes and is perforce compliant with the conditions of contract. Too often the view of these evaluations is limited to a literal reading of the analytical challenge. If we are evaluating X we look critically at X1, X2 and X3. There might be cause for embracing adjoining data sources such as W1 and Y1. This ignores frequent realities that an evaluation specification is only an approximate starting point for an unpredictable journey into comprehensive understanding; that the specification represents only that which is wanted by the sponsor, and not all that may be needed ; and that the contractual specification too often insists on privileging the questions and concerns of a few. Case study evaluation proves an alternative that allows for the less-than-literal in the form of analysis of contingencies – how people, phenomena and events may be related in dynamic ways, how context and action have only a blurred dividing line and how what defines the case as a case may only emerge late in the study.

Thinking about Case Studies in 3-D: Researching the NHS Clinical Commissioning Landscape in England

What is our unit of analysis and by implication what are the boundaries of our cases? This is a question we grapple with at the start of every new project. We observe that case studies are often referred to in an unreflective manner and are often conflated with geographical location. Neat units of analysis and clearly bounded cases usually do not reflect the messiness encountered during qualitative fieldwork. Others have puzzled over these questions. We briefly discuss work to problematise the use of households as units of analysis in the context of apartheid South Africa and then consider work of other anthropologists engaged in multi-site ethnography. We have found the notion of ‘following’ chains, paths and threads across sites to be particularly insightful.

We present two examples from our work studying commissioning in the English National Health Service (NHS) to illustrate our struggles with case studies. The first is a study of Practice-based Commissioning groups and the second is a study of the early workings of Clinical Commissioning Groups. In both instances we show how ideas of what constituted our unit of analysis and the boundaries of our cases became less clear as our research progressed. We also discuss pressures we experienced to add more case studies to our projects. These examples illustrate the primacy for us of understanding interactions between place, local history and rapidly developing policy initiatives. Understanding cases in this way can be challenging in a context where research funders hold different views of what constitutes a case.

The Case for Evaluating Process and Worth: Evaluation of a Programme for Carers and People with Dementia

A case study methodology was applied as a major component of a mixed-methods approach to the evaluation of a mobile dementia education and support service in the Bega Valley Shire, New South Wales, Australia. In-depth interviews with people with dementia (PWD), their carers, programme staff, family members and service providers and document analysis including analysis of client case notes and client database were used.

The strengths of the case study approach included: (i) simultaneous evaluation of programme process and worth, (ii) eliciting the theory of change and addressing the problem of attribution, (iii) demonstrating the impact of the programme on earlier steps identified along the causal pathway (iv) understanding the complexity of confounding factors, (v) eliciting the critical role of the social, cultural and political context, (vi) understanding the importance of influences contributing to differences in programme impact for different participants and (vii) providing insight into how programme participants experience the value of the programme including unintended benefits.

The broader case of the collective experience of dementia and as part of this experience, the impact of a mobile programme of support and education, in a predominately rural area grew from the investigation of the programme experience of ‘individual cases’ of carers and PWD. Investigation of living conditions, relationships, service interactions through observation and increased depth of interviews with service providers and family members would have provided valuable perspectives and thicker description of the case for increased understanding of the case and strength of the evaluation.

The Collapse of “Primary Care” in Medical Education: A Case Study of Michigan’s Community/University Health Partnerships Project

This chapter describes a case study of a social change project in medical education (primary care), in which the critical interpretive evaluation methodology I sought to use came up against the “positivist” approach preferred by senior figures in the medical school who commissioned the evaluation.

I describe the background to the study and justify the evaluation approach and methods employed in the case study – drawing on interviews, document analysis, survey research, participant observation, literature reviews, and critical incidents – one of which was the decision by the medical school hierarchy to restrict my contact with the lay community in my official evaluation duties. The use of critical ethnography also embraced wider questions about circuits of power and the social and political contexts within which the “social change” effort occurred.

Central to my analysis is John Gaventa’s theory of power as “the internalization of values that inhibit consciousness and participation while encouraging powerlessness and dependency.” Gaventa argued, essentially, that the evocation of power has as much to do with preventing decisions as with bringing them about. My chosen case illustrated all three dimensions of power that Gaventa originally uncovered in his portrait of self-interested Appalachian coal mine owners: (1) communities were largely excluded from decision making power; (2) issues were avoided or suppressed; and (3) the interests of the oppressed went largely unrecognized.

The account is auto-ethnographic, hence the study is limited by my abilities, biases, and subject positions. I reflect on these in the chapter.

The study not only illustrates the unique contribution of case study as a research methodology but also its low status in the positivist paradigm adhered to by many doctors. Indeed, the tension between the potential of case study to illuminate the complexities of community engagement through thick description and the rejection of this very method as inherently “flawed” suggests that medical education may be doomed to its neoliberal fate for some time to come.

‘Lead’ Standard Evaluation

This is a personal narrative, but I trust not a self-regarding one. For more years than I care to remember I have been working in the field of curriculum (or ‘program’) evaluation. The field by any standards is dispersed and fragmented, with variously ascribed purposes, roles, implicit values, political contexts, and social research methods. Attempts to organize this territory into an ‘evaluation theory tree’ (e.g. Alkin, M., & Christie, C. (2003). An evaluation theory tree. In M. Alkin (Ed.), Evaluation roots: Tracing theorists’ views and influences (pp. 12–65). Thousand Oaks, CA: Sage) have identified broad types or ‘branches’, but the migration of specific characteristics (like ‘case study’) or individual practitioners across the boundaries has tended to undermine the analysis at the level of detail, and there is no suggestion that it represents a cladistic taxonomy. There is, however, general agreement that the roots of evaluation practice tap into a variety of cultural sources, being grounded bureaucratically in (potentially conflicting) doctrines of accountability and methodologically in discipline-based or pragmatically eclectic formats for systematic social enquiry.

In general, this diversity is not treated as problematic. The professional evaluation community has increasingly taken the view (‘let all the flowers grow’) that evaluation models can be deemed appropriate across a wide spectrum, with their appropriateness determined by the nature of the task and its context, including in relation to hybrid studies using mixed models or displaying what Geertz (Geertz, C. (1980/1993). Blurred genres: The refiguration of social thought. The American Scholar , 49(2), 165–179) called ‘blurred genres’. However, from time to time historic tribal rivalries re-emerge as particular practitioners feel the need to defend their modus operandi (and thereby their livelihood) against paradigm shifts or governments and other sponsors of program evaluation seeking for ideological reasons to prioritize certain types of study at the expense of others. The latter possibility poses a potential threat that needs to be taken seriously by evaluators within the broad tradition showcased in this volume, interpretive qualitative case studies of educational programs that combine naturalistic description (often ‘thick’; Geertz, C. (1973). Thick description: Towards an interpretive theory of culture. In The interpretation of culture (pp. 3–30). New York, NY: Basic Books.) description with a values-orientated analysis of their implications. Such studies are more likely to seek inspiration from anthropology or critical discourse analysis than from the randomly controlled trials familiar in medical research or laboratory practice in the physical sciences, despite the impressive rigour of the latter in appropriate contexts. It is the risk of ideological allegiance that I address in this chapter.

Freedom from the Rubric

Twice-told tales how public inquiry could inform n of 1 case study research.

This chapter considers the usefulness and validity of public inquiries as a source of data and preliminary interpretation for case study research. Using two contrasting examples – the Bristol Inquiry into excess deaths in a children’s cardiac surgery unit and the Woolf Inquiry into a breakdown of governance at the London School of Economics (LSE) – I show how academics can draw fruitfully on, and develop further analysis from, the raw datasets, published summaries and formal judgements of public inquiries.

Academic analysis of public inquiries can take two broad forms, corresponding to the two main approaches to individual case study defined by Stake: instrumental (selecting the public inquiry on the basis of pre-defined theoretical features and using the material to develop and test theoretical propositions) and intrinsic (selecting the public inquiry on the basis of the particular topic addressed and using the material to explore questions about what was going on and why).

The advantages of a public inquiry as a data source for case study research typically include a clear and uncontested focus of inquiry; the breadth and richness of the dataset collected; the exceptional level of support available for the tasks of transcribing, indexing, collating, summarising and so on; and the expert interpretations and insights of the inquiry’s chair (with which the researcher may or may not agree). A significant disadvantage is that whilst the dataset collected for a public inquiry is typically ‘rich’, it has usually been collected under far from ideal research conditions. Hence, while public inquiries provide a potentially rich resource for researchers, those who seek to use public inquiry data for research must justify their choice on both ethical and scientific grounds.

Evaluation as the Co-Construction of Knowledge: Case Studies of Place-Based Leadership and Public Service Innovation

This chapter introduces the notion of the ‘Innovation Story’ as a methodological approach to public policy evaluation, which builds in greater opportunity for learning and reflexivity.

The Innovation Story is an adaptation of the case study approach and draws on participatory action research traditions. It is a structured narrative that describes a particular public policy innovation in the personalised contexts in which it is experienced by innovators. Its construction involves a discursive process through which involved actors tell their story, explain it to others, listen to their questions and co-construct knowledge of change together.

The approach was employed to elaborate five case studies of place-based leadership and public service innovation in the United Kingdom, The Netherlands and Mexico. The key findings are that spaces in which civic leaders come together from different ‘realms’ of leadership in a locality (community, business, professional managers and political leaders) can become innovation zones that foster inventive behaviour. Much depends on the quality of civic leadership, and its capacity to foster genuine dialogue and co-responsibility. This involves the evaluation seeking out influential ideas from below the level of strategic management, and documenting leadership activities of those who are skilled at ‘boundary crossing’ – for example, communicating between sectors.

The evaluator can be a key player in this process, as a convenor of safe spaces for actors to come together to discuss and deliberate before returning to practice. Our approach therefore argues for a particular awareness of the political nature of policy evaluation in terms of negotiating these spaces, and the need for politically engaged evaluators who are skilled in facilitating collective learning processes.

Evaluation Noir: The Other Side of the Experience

What are the boundaries of a case study, and what should new evaluators do when these boundaries are breached? How does a new evaluator interpret the breakdown of communication, how do new evaluators protect themselves when the evaluation fails? This chapter discusses the journey of an evaluator new to the field of qualitative evaluative inquiry. Integrating the perspective of a senior evaluator, the authors reflect on three key experiences that informed the new evaluator. The authors hope to provide a rare insight into case study practice as emotional issues turn out to be just as complex as the methodology used.

About the Editors

About the authors.

  • Jill Russell
  • Trisha Greenhalgh
  • Saville Kushner

All feedback is valuable

Please share your general feedback

Report an issue or find answers to frequently asked questions

Contact Customer Support

Case Study Research Method in Psychology

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Learn about our Editorial Process

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

On This Page:

Case studies are in-depth investigations of a person, group, event, or community. Typically, data is gathered from various sources using several methods (e.g., observations & interviews).

The case study research method originated in clinical medicine (the case history, i.e., the patient’s personal history). In psychology, case studies are often confined to the study of a particular individual.

The information is mainly biographical and relates to events in the individual’s past (i.e., retrospective), as well as to significant events that are currently occurring in his or her everyday life.

The case study is not a research method, but researchers select methods of data collection and analysis that will generate material suitable for case studies.

Freud (1909a, 1909b) conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

This makes it clear that the case study is a method that should only be used by a psychologist, therapist, or psychiatrist, i.e., someone with a professional qualification.

There is an ethical issue of competence. Only someone qualified to diagnose and treat a person can conduct a formal case study relating to atypical (i.e., abnormal) behavior or atypical development.

case study

 Famous Case Studies

  • Anna O – One of the most famous case studies, documenting psychoanalyst Josef Breuer’s treatment of “Anna O” (real name Bertha Pappenheim) for hysteria in the late 1800s using early psychoanalytic theory.
  • Little Hans – A child psychoanalysis case study published by Sigmund Freud in 1909 analyzing his five-year-old patient Herbert Graf’s house phobia as related to the Oedipus complex.
  • Bruce/Brenda – Gender identity case of the boy (Bruce) whose botched circumcision led psychologist John Money to advise gender reassignment and raise him as a girl (Brenda) in the 1960s.
  • Genie Wiley – Linguistics/psychological development case of the victim of extreme isolation abuse who was studied in 1970s California for effects of early language deprivation on acquiring speech later in life.
  • Phineas Gage – One of the most famous neuropsychology case studies analyzes personality changes in railroad worker Phineas Gage after an 1848 brain injury involving a tamping iron piercing his skull.

Clinical Case Studies

  • Studying the effectiveness of psychotherapy approaches with an individual patient
  • Assessing and treating mental illnesses like depression, anxiety disorders, PTSD
  • Neuropsychological cases investigating brain injuries or disorders

Child Psychology Case Studies

  • Studying psychological development from birth through adolescence
  • Cases of learning disabilities, autism spectrum disorders, ADHD
  • Effects of trauma, abuse, deprivation on development

Types of Case Studies

  • Explanatory case studies : Used to explore causation in order to find underlying principles. Helpful for doing qualitative analysis to explain presumed causal links.
  • Exploratory case studies : Used to explore situations where an intervention being evaluated has no clear set of outcomes. It helps define questions and hypotheses for future research.
  • Descriptive case studies : Describe an intervention or phenomenon and the real-life context in which it occurred. It is helpful for illustrating certain topics within an evaluation.
  • Multiple-case studies : Used to explore differences between cases and replicate findings across cases. Helpful for comparing and contrasting specific cases.
  • Intrinsic : Used to gain a better understanding of a particular case. Helpful for capturing the complexity of a single case.
  • Collective : Used to explore a general phenomenon using multiple case studies. Helpful for jointly studying a group of cases in order to inquire into the phenomenon.

Where Do You Find Data for a Case Study?

There are several places to find data for a case study. The key is to gather data from multiple sources to get a complete picture of the case and corroborate facts or findings through triangulation of evidence. Most of this information is likely qualitative (i.e., verbal description rather than measurement), but the psychologist might also collect numerical data.

1. Primary sources

  • Interviews – Interviewing key people related to the case to get their perspectives and insights. The interview is an extremely effective procedure for obtaining information about an individual, and it may be used to collect comments from the person’s friends, parents, employer, workmates, and others who have a good knowledge of the person, as well as to obtain facts from the person him or herself.
  • Observations – Observing behaviors, interactions, processes, etc., related to the case as they unfold in real-time.
  • Documents & Records – Reviewing private documents, diaries, public records, correspondence, meeting minutes, etc., relevant to the case.

2. Secondary sources

  • News/Media – News coverage of events related to the case study.
  • Academic articles – Journal articles, dissertations etc. that discuss the case.
  • Government reports – Official data and records related to the case context.
  • Books/films – Books, documentaries or films discussing the case.

3. Archival records

Searching historical archives, museum collections and databases to find relevant documents, visual/audio records related to the case history and context.

Public archives like newspapers, organizational records, photographic collections could all include potentially relevant pieces of information to shed light on attitudes, cultural perspectives, common practices and historical contexts related to psychology.

4. Organizational records

Organizational records offer the advantage of often having large datasets collected over time that can reveal or confirm psychological insights.

Of course, privacy and ethical concerns regarding confidential data must be navigated carefully.

However, with proper protocols, organizational records can provide invaluable context and empirical depth to qualitative case studies exploring the intersection of psychology and organizations.

  • Organizational/industrial psychology research : Organizational records like employee surveys, turnover/retention data, policies, incident reports etc. may provide insight into topics like job satisfaction, workplace culture and dynamics, leadership issues, employee behaviors etc.
  • Clinical psychology : Therapists/hospitals may grant access to anonymized medical records to study aspects like assessments, diagnoses, treatment plans etc. This could shed light on clinical practices.
  • School psychology : Studies could utilize anonymized student records like test scores, grades, disciplinary issues, and counseling referrals to study child development, learning barriers, effectiveness of support programs, and more.

How do I Write a Case Study in Psychology?

Follow specified case study guidelines provided by a journal or your psychology tutor. General components of clinical case studies include: background, symptoms, assessments, diagnosis, treatment, and outcomes. Interpreting the information means the researcher decides what to include or leave out. A good case study should always clarify which information is the factual description and which is an inference or the researcher’s opinion.

1. Introduction

  • Provide background on the case context and why it is of interest, presenting background information like demographics, relevant history, and presenting problem.
  • Compare briefly to similar published cases if applicable. Clearly state the focus/importance of the case.

2. Case Presentation

  • Describe the presenting problem in detail, including symptoms, duration,and impact on daily life.
  • Include client demographics like age and gender, information about social relationships, and mental health history.
  • Describe all physical, emotional, and/or sensory symptoms reported by the client.
  • Use patient quotes to describe the initial complaint verbatim. Follow with full-sentence summaries of relevant history details gathered, including key components that led to a working diagnosis.
  • Summarize clinical exam results, namely orthopedic/neurological tests, imaging, lab tests, etc. Note actual results rather than subjective conclusions. Provide images if clearly reproducible/anonymized.
  • Clearly state the working diagnosis or clinical impression before transitioning to management.

3. Management and Outcome

  • Indicate the total duration of care and number of treatments given over what timeframe. Use specific names/descriptions for any therapies/interventions applied.
  • Present the results of the intervention,including any quantitative or qualitative data collected.
  • For outcomes, utilize visual analog scales for pain, medication usage logs, etc., if possible. Include patient self-reports of improvement/worsening of symptoms. Note the reason for discharge/end of care.

4. Discussion

  • Analyze the case, exploring contributing factors, limitations of the study, and connections to existing research.
  • Analyze the effectiveness of the intervention,considering factors like participant adherence, limitations of the study, and potential alternative explanations for the results.
  • Identify any questions raised in the case analysis and relate insights to established theories and current research if applicable. Avoid definitive claims about physiological explanations.
  • Offer clinical implications, and suggest future research directions.

5. Additional Items

  • Thank specific assistants for writing support only. No patient acknowledgments.
  • References should directly support any key claims or quotes included.
  • Use tables/figures/images only if substantially informative. Include permissions and legends/explanatory notes.
  • Provides detailed (rich qualitative) information.
  • Provides insight for further research.
  • Permitting investigation of otherwise impractical (or unethical) situations.

Case studies allow a researcher to investigate a topic in far more detail than might be possible if they were trying to deal with a large number of research participants (nomothetic approach) with the aim of ‘averaging’.

Because of their in-depth, multi-sided approach, case studies often shed light on aspects of human thinking and behavior that would be unethical or impractical to study in other ways.

Research that only looks into the measurable aspects of human behavior is not likely to give us insights into the subjective dimension of experience, which is important to psychoanalytic and humanistic psychologists.

Case studies are often used in exploratory research. They can help us generate new ideas (that might be tested by other methods). They are an important way of illustrating theories and can help show how different aspects of a person’s life are related to each other.

The method is, therefore, important for psychologists who adopt a holistic point of view (i.e., humanistic psychologists ).

Limitations

  • Lacking scientific rigor and providing little basis for generalization of results to the wider population.
  • Researchers’ own subjective feelings may influence the case study (researcher bias).
  • Difficult to replicate.
  • Time-consuming and expensive.
  • The volume of data, together with the time restrictions in place, impacted the depth of analysis that was possible within the available resources.

Because a case study deals with only one person/event/group, we can never be sure if the case study investigated is representative of the wider body of “similar” instances. This means the conclusions drawn from a particular case may not be transferable to other settings.

Because case studies are based on the analysis of qualitative (i.e., descriptive) data , a lot depends on the psychologist’s interpretation of the information she has acquired.

This means that there is a lot of scope for Anna O , and it could be that the subjective opinions of the psychologist intrude in the assessment of what the data means.

For example, Freud has been criticized for producing case studies in which the information was sometimes distorted to fit particular behavioral theories (e.g., Little Hans ).

This is also true of Money’s interpretation of the Bruce/Brenda case study (Diamond, 1997) when he ignored evidence that went against his theory.

Breuer, J., & Freud, S. (1895).  Studies on hysteria . Standard Edition 2: London.

Curtiss, S. (1981). Genie: The case of a modern wild child .

Diamond, M., & Sigmundson, K. (1997). Sex Reassignment at Birth: Long-term Review and Clinical Implications. Archives of Pediatrics & Adolescent Medicine , 151(3), 298-304

Freud, S. (1909a). Analysis of a phobia of a five year old boy. In The Pelican Freud Library (1977), Vol 8, Case Histories 1, pages 169-306

Freud, S. (1909b). Bemerkungen über einen Fall von Zwangsneurose (Der “Rattenmann”). Jb. psychoanal. psychopathol. Forsch ., I, p. 357-421; GW, VII, p. 379-463; Notes upon a case of obsessional neurosis, SE , 10: 151-318.

Harlow J. M. (1848). Passage of an iron rod through the head.  Boston Medical and Surgical Journal, 39 , 389–393.

Harlow, J. M. (1868).  Recovery from the Passage of an Iron Bar through the Head .  Publications of the Massachusetts Medical Society. 2  (3), 327-347.

Money, J., & Ehrhardt, A. A. (1972).  Man & Woman, Boy & Girl : The Differentiation and Dimorphism of Gender Identity from Conception to Maturity. Baltimore, Maryland: Johns Hopkins University Press.

Money, J., & Tucker, P. (1975). Sexual signatures: On being a man or a woman.

Further Information

  • Case Study Approach
  • Case Study Method
  • Enhancing the Quality of Case Studies in Health Services Research
  • “We do things together” A case study of “couplehood” in dementia
  • Using mixed methods for evaluating an integrative approach to cancer care: a case study

Print Friendly, PDF & Email

Home

U.S. Government Accountability Office

Program Evaluation: Case Study Evaluations

GAO presented information on the use of case study evaluations for GAO audit and evaluation work, focusing on: (1) the definition of a case study; (2) conditions under which a case study is an appropriate evaluation method for GAO work; and (3) distinguishing a good case study from a poor one. GAO also included information on: (1) various case study applications; and (2) case study design and strength assessment.

Full Report

Office of public affairs.

Sarah Kaczmarek Acting Managing Director [email protected] (202) 512-4800

This site uses cookies to optimize functionality and give you the best possible experience. If you continue to navigate this website beyond this page, cookies will be placed on your browser. To learn more about cookies, click here .

AIP Publishing Logo

Evaluation of infrastructure quality: Telecommunication projects as a case study

[email protected]

[email protected]

  • Article contents
  • Figures & tables
  • Supplementary Data
  • Peer Review
  • Reprints and Permissions
  • Cite Icon Cite
  • Search Site

Maha Abdulrazzaq Alwan , Gafel Kareem Aswed , Mohammed Neamah Ahmed; Evaluation of infrastructure quality: Telecommunication projects as a case study. AIP Conf. Proc. 19 August 2024; 3105 (1): 050072. https://doi.org/10.1063/5.0212809

Download citation file:

  • Ris (Zotero)
  • Reference Manager

This study sought to investigate and assess the quality of infrastructure projects and the extent to which quality levels are influenced by the use of knowledge areas for project management (using Ministry of Telecommunications projects as a case study). The correlations between certain variables have been explained by studies on the factors impacting project quality using a variety of techniques and methods. Surprisingly, structural equation modeling (SEM) has received very little attention in regards to the aspects influencing project quality studies. The study goal is to develop a model that clarified and identified the crucial elements affecting quality in infrastructure projects in order to overcome this knowledge gap. Smart pls3 with quantitative strategy is used in this research. This study clarified the opinions of several specialists based on their involvement with government building projects in Telecommunication Ministry. The questionnaire was used as a data collection tool, with 85 questionnaires distributed and (79) questionnaires retrieved, with a response rate of (93%), using the purposive and proportional sampling techniques. It was discovered that the quality of projects implemented by Ministry of Communications cadres was medium and implemented in light of project management knowledge areas ) PMBOK 7 th edition) in partial. Attention must be increased to the implementation of their projects in accordance with these standards because of its impact on increasing quality and thus increasing the sustainability of their projects. The study also recommends the necessity of guiding and directing project managers and implementing engineers to raise the level of interest in implementing project management standards in order to achieve the highest levels of quality.

Citing articles via

Publish with us - request a quote.

case study with evaluation

Sign up for alerts

  • Online ISSN 1551-7616
  • Print ISSN 0094-243X
  • For Researchers
  • For Librarians
  • For Advertisers
  • Our Publishing Partners  
  • Physics Today
  • Conference Proceedings
  • Special Topics

pubs.aip.org

  • Privacy Policy
  • Terms of Use

Connect with AIP Publishing

This feature is available to subscribers only.

Sign In or Create an Account

  • Study protocol
  • Open access
  • Published: 15 August 2024

Reducing asthma attacks in disadvantaged school children with asthma: study protocol for a type 2 hybrid implementation-effectiveness trial (Better Asthma Control for Kids, BACK)

  • Amy G. Huebschmann   ORCID: orcid.org/0000-0002-9329-3142 1 , 2 , 3 ,
  • Nicole M. Wagner 1 , 2 ,
  • Melanie Gleason 4 , 7 ,
  • John T. Brinton 4 ,
  • Michaela Brtnikova 2 , 4 ,
  • Sarah E. Brewer 2 , 5 ,
  • Anowara Begum 2 ,
  • Rachel Armstrong 2 ,
  • Lisa Ross DeCamp 2 , 4 ,
  • Arthur McFarlane 7 ,
  • Heather DeKeyser 2 , 4 , 7 ,
  • Holly Coleman 8 ,
  • Monica J. Federico 4 , 7 ,
  • Stanley J. Szefler 4 , 7 &
  • Lisa C. Cicutto 6  

Implementation Science volume  19 , Article number:  60 ( 2024 ) Cite this article

161 Accesses

2 Altmetric

Metrics details

Asthma is a leading cause of children’s hospitalizations, emergency department visits, and missed school days. Our school-based asthma intervention has reduced asthma exacerbations for children experiencing health disparities in the Denver Metropolitan Area, due partly to addressing care coordination for asthma and social determinants of health (SDOH), such as access to healthcare and medications. Limited dissemination of school-based asthma programs has occurred in other metropolitan and rural areas of Colorado. We formed and engaged community advisory boards in socioeconomically diverse regions of Colorado to develop two implementation strategy packages for delivering our school-based asthma intervention — now termed “Better Asthma Control for Kids (BACK)" — with tailoring to regional priorities, needs and resources.

In this proposed type 2 hybrid implementation-effectiveness trial, where the primary goal is equitable reach to families to reduce asthma disparities, we will compare two different packages of implementation strategies to deliver BACK across four Colorado regions. The two implementation packages to be compared are: 1) standard set of implementation strategies including Tailor and Adapt to context, Facilitation and Training termed, BACK-Standard (BACK-S); 2) BACK-S plus an enhanced implementation strategy, that incorporates network weaving with community partners and consumer engagement with school families, termed BACK-Enhanced (BACK-E). Our evaluation will be guided by the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework, including its Pragmatic Robust Implementation Sustainability Model (PRISM) determinants of implementation outcomes. Our central hypothesis is that our BACK-E implementation strategy will have significantly greater reach to eligible children/families than BACK-S (primary outcome) and that both BACK-E and BACK-S groups will have significantly reduced asthma exacerbation rates (“attacks”) and improved asthma control as compared to usual care.

We expect both the BACK-S and BACK-E strategy packages will accelerate dissemination of our BACK program across the state – the comparative impact of BACK-S vs. BACK-E on reach and other RE-AIM outcomes may inform strategy selection for scaling BACK and other effective school-based programs to address chronic illness disparities.

Trial registration

Clinicaltrials.gov identifier: NCT06003569, registered on August 22, 2023, https://classic.clinicaltrials.gov/ct2/show/NCT06003569 .

Contribution to the literature:

In prior work, we developed, refined and implemented the Better Asthma Control for Kids (BACK) program in urban Colorado school districts where it decreased asthma exacerbations and has been sustained through support from staff in local school districts and state public health department grants.

In four geopolitically diverse (urban and rural) areas, this project will test the comparative impact of implementing the evidence-based BACK program to reduce asthma disparities with two different packages of implementation strategies.

Data from this trial will inform a “dissemination playbook” to accelerate future dissemination of BACK to communities experiencing pediatric asthma care inequities.

Asthma disproportionately affects children living in historically marginalized and under-resourced communities [ 1 ]. Disparities in asthma outcomes include higher mortality rates, worse asthma control, greater likelihood of emergency visits, and higher rates of school absenteeism [ 2 , 3 , 4 ]. Pediatric asthma disparities are partly driven by unmet Social Determinants of Health (SDOH) needs, such as lack of insurance and transportation that lead to fewer preventive care visits and more emergency visits and hospitalizations [ 1 , 5 , 6 , 7 , 8 , 9 , 10 , 11 ]. In addition to health impacts, poor asthma control causes educational disparities through missed school days, increased fatigue, and difficulty concentrating due to interrupted sleep that negatively impact school performance [ 12 , 13 , 14 , 15 ]. As a result, asthma is one of seven educationally relevant health disparities that school leaders prioritize [ 16 , 17 ].

Over the past 18 years, we have developed an effective school-based asthma program that reduced asthma exacerbations and school absences [ 18 , 19 , 20 , 21 ]. These positive outcomes have been achieved by identifying eligible children with poor asthma control through routine school registration processes, and by a community Asthma Navigator (ANav) providing asthma care coordination and case management (see Table  1 ). Our approach includes coordination across families, schools, health care providers, and community agencies with resources to address unmet SDOH needs – the latter is key to address disparity drivers such as inadequate access to healthcare and difficulty affording medications (Fig.  1 ) [ 20 , 22 , 23 , 24 ]. Care coordination with health care provider teams is critical to ensure that the necessary asthma care plan and medications are available at school for students, and allows school nurses to alert providers to asthma care needs or gaps. These core functions of our school-based Better Asthma Control for Kids (BACK) program have been identified as effective in systematic reviews and other studies of school-based asthma management programs [ 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 ].

figure 1

Key roles in our patient-centered BACK program. Our BACK program puts the child and family at the center of everything we do. The role of the ANav is to support the school nurse-led team to deliver the core functions of BACK. This includes the ANav assisting the school nurse-led team with asthma education for the child/family, and asthma case management and care coordination between the child/family, health provider team, and community resources for SDOH. The ANav links families to community resources for SDOH to address financial constraints of asthma care, such as inadequate insurance coverage, lack of affordable transportation, and difficulty affording medications

A challenge for sustained implementation is that BACK requires ongoing public health funding in most of the school districts where it has been implemented; thus, BACK could benefit from additional community-engaged efforts to support sustainability. In addition, relatively little implementation of school-based asthma programs has occurred in rural and smaller urban areas, and current implementation guides cannot differentiate the potential benefit and cost for smaller versus larger school districts [ 36 ]. Thus, the key next steps to scale BACK more broadly are to test the relative impact and cost of alternate implementation strategy packages for delivering BACK, and to develop a more robust implementation guide that allows future adopters to consider the tradeoffs of cost and impact of alternate implementation approaches.

To take these next steps, we leveraged funding from the Disparities Elimination through Coordinated Interventions to Prevent and Control Heart and Lung Disease Risk (DECIPHeR) Award to conduct a 3-year community-engaged planning phase in regions across Colorado where we had not previously implemented BACK [ 37 , 38 ]. Specifically, we completed an Exploration phase [ 38 ] and Preparation phase [ 38 ] guided by the implementation determinants in the Pragmatic Robust Implementation Sustainability Model (PRISM) [ 39 ]; the activities included : 1) regular meetings with multi-sectoral community advisory boards (CABs) representing community, family, health care and school partners, 2) conduct of needs assessments to identify local needs, priorities and resources, and 3) tailoring BACK implementation strategies to local context with input from our CABs. Based on CAB input, we added a key non-profit partner (Trailhead Institute©) to our implementation team to build organizational capacity to connect with local public health and community organizations across the state, as part of our implementation and sustainment efforts.

In this NIH-funded hybrid type 2 implementation-effectiveness trial, we will implement BACK in four diverse regions of Colorado in school districts that have high rates of unmet SDOH needs, based either on free-reduced lunch rates or rural status [ 40 ]. Students with poorly controlled asthma [ 41 ] will be eligible to enroll in the study. We will evaluate the impact of implementing BACK with two different implementation packages: either 1) a standard set of implementation strategies including Tailor and adapt to context, Facilitation and Training, termed BACK-Standard (BACK-S), or 2) BACK-S strategies plus enhanced implementation strategies, incorporating two of the Expert Recommendations for Implementing Change (ERIC) strategies [ 42 ] of “network weaving” and “consumer engagement”, termed BACK-Enhanced (BACK-E).

Study aims and hypotheses

Our primary implementation aim is to compare the reach to students with uncontrolled asthma between BACK-E and BACK-S. We hypothesize that student reach will be significantly greater when delivered using BACK-E as compared to BACK-S. Our secondary aim is to determine and compare annual asthma exacerbation rates (i.e., exacerbations/year) in students randomized to either study arm (i.e., effectiveness). We hypothesize that BACK, delivered either as BACK-E or BACK-S, will be more effective than usual care at reducing annual asthma exacerbation rates. Our third aim will identify PRISM contextual factors [ 39 , 43 ] (see Fig.  2 ) that predict student reach and retention, school-level adoption, costs to future adopters (schools), and sustainment for BACK-S or BACK-E. Quantitative predictors of sustainment include the CSAT score at a school-level across UH3 years 2–4, as well as contextual factors of each region. Contextual factors considered for this model will include urban versus rural location and school district size. We will evaluate these factors’ contribution to actual sustainment and to CSAT scores across implementation study arms.

figure 2

Implementation Research Logic Model. Abbreviations – PRISM (Pragmatic Robust Implementation Sustainability Model), RN (School Nurse), ANav (Asthma Navigator), BACK (Better Asthma Control for Kids), SDOH (Social Determinants of Health), CAB (Community Advisory Board)

In addition, we expect our qualitative and mixed methods analyses will identify how and why implementation strategies used for local uptake and sustainability vary in their impacts due to differences in contextual factors. Lessons learned will be incorporated into a BACK dissemination playbook co-developed with our community partners in a program sustainment phase, so that future communities can choose to implement BACK in a way that addresses local factors critical for success and sustainability.

Implementation science framework

For this hybrid type 2 implementation-effectiveness trial, we are using PRISM to guide our approach and evaluation across our 4 study phases of Exploration, Preparation, Implementation and Sustainment [ 37 , 38 , 39 , 43 , 44 , 45 , 46 ]. PRISM includes both the contextual determinants of successful implementation, as well as guidance on how to assess the Reach, Effectiveness, Adoption, Implementation and Maintenance/Sustainment (RE-AIM) outcomes with attention to health equity and representativeness [ 39 , 43 , 44 , 45 ]. The PRISM contextual determinants are multi-level and include: the characteristics and perspectives on the intervention of inner setting school nurses/staff, implementers of ANavs, as well as children/families, the implementation and sustainability infrastructure, and the external environment [ 39 , 43 , 44 , 45 ].The implementation and sustainability infrastructure for BACK includes local resources available to support initial implementation and sustained delivery of BACK. The external environment includes policies, regulations and incentives that support or hinder implementation and sustainment of BACK. Our Implementation Research Logic Model (see Fig.  2 ) outlines our key PRISM findings from our Exploration and Preparation phases on the left-hand side, the implementation strategy packages (BACK-S vs. BACK-E) that we will test, the core BACK intervention functions, expected mechanisms that were priority process measures identified by our CABs, and our RE-AIM outcomes that we expect to improve with the delivery of BACK. Further details on the BACK intervention and the BACK-S and BACK-E strategy packages are provided below.

Study design

In this hybrid type 2 implementation-effectiveness trial, cluster randomization will occur in a parallel group approach, with a phased-in enrollment to BACK for all study arms [ 40 , 47 ]. Figure  3 provides an overview of the study timeline and arms. In brief, children with uncontrolled asthma will be identified and recruited annually across the control phase and a two-year period of study team-supported implementation of either BACK-E or BACK-S. A comparison of the implementation strategies for BACK-E and BACK-S is provided in Table  2 .

figure 3

Study design for DECIPHeR Colorado program. In this diagram, there are two groupings of schools, those in GROUP A that had organizational readiness to implement either BACK-S or BACK-E in Year 1 of the funded trial, and those in GROUP B needed an additional year to prepare for implementation. The rows in each group represent the study arms for randomization. The columns are the study years, where the first column is the final year of the UG3-funded planning phase that included baseline data collection for the schools/school nurses in GROUP A, and Years 1–4 are the UH3-funded hybrid implementation-effectiveness trial period. BACK-S indicates the standard Better Asthma Control for Kids (BACK) package of implementation strategies of Tailor and adapt to context, Facilitation and Training, and BACK-E represents the enhanced strategy package of BACK-S strategies plus network weaving and consumer engagement. After 2 years of implementation, schools transition into the Maintenance phase where we will assess if they sustain either BACK-S or BACK-E, designated as MBACK-S or MBACK-E; for Arms 1 and 2 in Group B those are MBACK-S and MBACK-E, respectively

Intervention

The BACK intervention is a multiple level and multi-component intervention involving students and families with asthma, school nurses, community health care providers, and community agencies with resources to address SDOH. BACK is delivered by school nurses and ANavs according to the core intervention functions listed in Table  1 . As in our prior school-based asthma programs, the intervention dose is three ANav visits for each student with asthma at their school and three ANav visits with their parent/guardian in-person, or by videoconference or telephone [ 18 , 19 ]. ANav intervention activities include a standardized assessment to identify asthma education, asthma care and SDOH needs, and the resultant development of an individualized tailored plan of care. All BACK visits include asthma education reinforced with the provision of asthma educational materials; case management and care coordination to support successful asthma management at school and at home. The initial BACK visit includes assessment of SDOH needs related to asthma care (e.g., access to asthma care, transportation, and medications) [ 50 ], and ANavs provide community referrals to support any identified needs.

Implementation strategy packages

Implementation strategies for the UH3 trial were developed collaboratively between the research team and CABs, by considering local PRISM contextual factors, priority outcomes of success and how best to deliver BACK to accomplish these priority outcomes. These strategies are delivered in one of two study arms as the BACK-S package or the BACK-E package. Both packages are delivered by an ANav in partnership with the school nurse-led team. The BACK-S package includes a tailor-and-adapt to context strategy of approaches identified as necessary to implement BACK in schools, including Facilitation and Training (see Table  2 ) [ 51 , 52 , 53 ]. BACK-S plans for implementation were operationalized as an implementation blueprint in the planning phase (2020–2023). Tailoring of this blueprint to the local context will occur through our Facilitation strategy that promotes adaptability of forms while maintaining fidelity to core functions of BACK-S and BACK-E, and an annual program evaluation process that may identify new strategies needed to tailor-and-adapt to context. Our Facilitation approach supports problem-solving through required weekly community of practice meetings for ANavs, optional learning collaborative meetings for school nurses (3 or more times yearly) and an “all hands meeting” annually of ANavs, school nurses and champions from local clinics – see “Researcher Team and implementer roles and responsibilities” for details on Facilitation leaders. The BACK-E package includes the BACK-S package plus an Enhanced Strategy package of consumer engagement with students/family through school-wide communications and network weaving to develop interrelationships to address SDOH (see Table  2 ). Our CABs voted to add these Enhanced strategies based on perceptions that increased school and community involvement were both feasible and likely to increase family willingness to participate.

Research team and implementer roles and responsibilities

The BACK intervention is adopted at schools and by school nurses and their team members and implemented by ANavs and school nurse-led teams (see Fig.  1 ). In addition, community partners play a key role by engaging with and supporting schools and families.

Research team

The multidisciplinary research team consists of asthma specialists, implementation science experts, community-engaged research experts, health equity research experts, clinical trial specialists, school asthma program leaders, qualitative methodology expertise, public health representatives, biostatisticians, program evaluators, data collectors, ANavs, and research staff.

Community partners

Intersectoral CABs in each of our 4 Colorado regions are our primary community partners. CAB members include school nurses, health care providers, family members of a child with asthma, and/or community SDOH agency members. We also have a State Advisory Board with a liaison from each regional CAB and other state-level advisors from the Colorado Department of Public Health and Environment, Colorado Department of Education, and State Network of Colorado Ambulatory Practices and Partners network of primary care providers. Additionally, we partnered with a state-wide, non-profit, agency (Trailhead Institute) that serves as a link between community organizations, schools and local public health agencies across Colorado.

School nurses/team responsibilities

Each school nurse-led team consists of a school nurse and their “health aides” and delegates that support asthma care provision in schools. For the BACK program, the nurse or delegate introduces the ANav to the school community (other team members, teachers, staff, students/families), identify students with asthma, assist with completion and interpretation of the AIF [ 41 ] that identifies asthma control and thus eligibility, and partner with ANavs to obtain the Colorado Asthma Care Plan for Schools and inhalers at school.

ANav role/responsibilities

The ANav is the “glue” of the BACK intervention, and provides asthma education, case management and care coordination across families, school nurses/teams, health care teams and community partners to address families’ unmet social needs related to asthma care.

Training for school nurses and ANavs

ANavs are trained in data collection and data quality standards, asthma education, care coordination and case management, health navigation and mobilization through community engagement. ANavs implementing BACK-E are also trained on how to conduct the elements of the enhanced implementation strategies. School nurses are trained on the provision of quality asthma care in schools (identifying children with asthma, assessing asthma, Colorado Asthma Care Plan, inhaler use), working with ANavs to coordinate quality asthma care provision, and using academic platforms to support asthma care in schools.

Facilitators and trainers

Our program facilitators and trainers include research team investigators with expertise in asthma and asthma in schools (MG, LC), research assistants (RA, AB), and experienced ANavs who have delivered our school-based asthma management program in the Denver area schools for over a decade. See “Implementation Strategy Packages” above for details on the Facilitation meetings.

Study settings, populations, and invested community partners

Given this intervention seeks to improve health disparities, BACK exerts influence across multiple socio-ecological levels. Thus, it is important to assess the impact of the study across those multiple levels of adopting elementary schools and their staff, students with asthma and their families, and ANav implementers. Students with asthma and their families will be assessed for the Reach and Effectiveness outcomes. Elementary schools and their staff will be assessed for the Adoption, Implementation and Maintenance outcomes. Our specific operationalized assessments of each RE-AIM outcome and the PRISM contextual assessments are described below in the “ Methods – Outcomes/Data Collection Procedures” section.

Recruitment

Regional recruitment.

During the 3-year DECIPHeR-funded planning phase (2020–2023), the study team formed regional CABs. Through CAB discussions, eligibility criteria (see Table  3 ) were refined; regional CABs also supported identification of key districts serving under-resourced populations. Initially, five regions were approached for involvement and formed a CAB. Four of the five regions were able to develop school nurse and district support for the program. The Southwest Colorado region was particularly hard-hit by the COVID-19 pandemic in terms of school nurse turnover. Despite quarterly CAB meetings, we were unable to consistently engage with school nurses in that region, resulting in school recruitment for the DECIPHeR-funded UH3 phase trial coming from four of the five regions. We continue to engage the fifth region in our State Advisory Board meetings and follow-up CAB meetings to determine if they can be brought in during our BACK sustainment phase.

UH3 phase trial recruitment occurs across the levels detailed in Table  3 , and outlined here.

School setting and school nurse recruitment

School districts meeting eligibility criteria (see Table  3 ) within our four regions were identified using the Colorado Department of Education school enrollment and characteristics database. Eligible school districts were invited to participate using email and phone. For those potentially interested in participation, a research team member (RA and/or LC) held meetings with school district officials and/or school nurses to provide additional information and discussion (e.g., flyer, e-mail, district-specific school nurse meetings). Regional CABs supported recruitment by advocating with school nurses and district officials.

Participant recruitment (students with asthma)

Recruitment processes for eligible students with asthma (see Table  3 ) were developed from prior processes [ 18 , 19 ] and CAB input to support feasibility and program sustainability. These include: 1) BACK program intervention introduction to the school community via back-to-school nights or school registration attendance (ANav or research team member) and/or newsletters, 2) ANav 1:1 meeting with school nurses early in the school year, 3) Standard school nurse processes to identify children with asthma at registration or by review of their health database, 4) ANav outreach: contacting caregivers of children with asthma by phone call, texting, e-mail and/or postcard mailer to assess for student eligibility of uncontrolled asthma, and 5) ANav offering to meet with caregivers to discuss the BACK program.

Participant enrollment (students with asthma)

ANavs enroll interested and eligible students with uncontrolled asthma and their caregiver after discussing the BACK program and study, answering questions, and attaining informed consent from the caregiver and assent from the child. We propose enrollment of at least 300 students/families over the 4-year trial across the 4 regions. Study consent forms and enrollment processes were approved by the Colorado Multiple Institutional Review Board.

Outcomes/data collection procedures

We will assess RE-AIM outcomes both quantitatively and qualitatively in a mixed-methods approach [ 40 , 47 ]. We first describe our student level outcomes: the primary outcome of Reach (Table  4 ) followed by our Effectiveness outcome (see Table  5 ). Outcomes of adoption (see Table  6 ), implementation (see Table  7 ) and maintenance (i.e., school sustainment, see Table  8 ) include assessments at school setting, school staff and implementer levels. Analyses will compare the reach (primary outcome), student retention, adoption, costs to future adopters, and sustainment of schools between BACK-S vs. BACK-E overall and by region. Qualitative methods will identify contextual factors that predict student reach and retention, school-level adoption, costs to future adopters (schools), and sustainment for BACK-S or BACK-E.

This section provides an overview of data collection methods and procedures, including tracking of outcomes and tasks, surveys and interviews. Further details are provided for each outcome assessment below.

Task tracking will be ongoing and completed by ANavs in both BACK-S and BACK-E arms; tracking of relevant items in the usual care arm are completed by masked data collectors.

Tracked outcomes include reasons for students/families participating (or not), intervention activities, time spent in each visit with the student and family, and implementation-related activities.

Annual surveys will be sent in the second half of each school year to ANavs, school nurses, families, and health care providers. Surveys will be sent by e-mail from a REDCap online database with up to 2 follow-up contacts for completion over a 2-month period.

Survey items are tailored to the target population and outcome assessed and are defined in more detail for each RE-AIM outcome in Tables 5 , 6 , 7 and 8 .

Enrolled caregivers of students with asthma will complete the following assessments:

Baseline assessment that includes:

Identical questions from the AIF asked to confirm eligibility

Health outcomes at the time of program enrollment

Current asthma care, knowledge, self-management and barriers/facilitators to care

Unmet SDOH needs that may influence asthma management

Annual end-of-school year program evaluation survey

Annual assessment of health outcomes completed by a data collector masked to study group, with support of translators for non-English speaking caregivers, if warranted [ 41 ]

Annual interviews will be conducted with all ANavs, and a purposive sample of school nurses, and students/families with higher and lower recommendations to use BACK (i.e., Net Promoter score) [ 55 ].

Interviews will explore topics of acceptability, appropriateness, and perceptions of the program, experiences with program reach, intervention quality, effectiveness, and reasons to continue with the program in the future or not.

Primary outcome: reach (student level)

Reach is defined as the proportion and representativeness of eligible students who enroll in BACK. Reach will be assessed as a dichotomous variable identified as yes for those who consent, and no for those who do not consent during the enrollment period. ANavs will track reported reasons families are willing/not willing to participate. In terms of retention, ANavs will track reasons for participant dropout and will offer a brief interview to examine these reasons further.

Secondary outcome: effectiveness (student level)

Effectiveness is defined as the impact of the BACK program on asthma health in students (see Table  5 ). The health outcome of greatest interest to our BACK community partners (and in our prior studies) is asthma control, operationalized as annual exacerbation rates (primary health/effectiveness outcome). Additionally, we will evaluate a secondary health/effectiveness outcome of school absences due to asthma. We will compare our effectiveness outcomes among students with uncontrolled asthma randomized to either usual care (control) or BACK-S or BACK-E. Additionally, family perception of effectiveness will be obtained from families receiving active BACK intervention through annual survey assessment and further explored with families completing interviews.

Secondary outcome: adoption (setting level)

Adoption is defined as the proportion and representativeness of settings and staff who work in these settings, respectively, that agree to deliver the intervention [ 43 , 45 ]. Adoption measures are described in detail in Table  6 , and these include both quantitative and qualitative data. As noted above in Table  3 , we define our adoption setting at the school level and our staff who adopt BACK at the school nurse level – the nurse role is critical to identify students with asthma, and they partner with ANavs who deliver other BACK intervention functions. Additionally, we will examine Adoption-related factors through annual survey measures and qualitative interviews: feasibility, acceptability and appropriateness. In terms of representativeness, Colorado Department of Education data and school characteristics will be used to identify representativeness at the school setting level — see Table  6 for examples.

Secondary outcome: implementation (school setting staff and ANav level)

Implementation is defined as fidelity to the intervention’s core functions (see Table  1 ) in consistency and quality, structured assessment of any adaptations made according to standard criteria, and the time and costs of the program [ 56 , 57 ]. The implementation measures and data collection procedures are described in Table  7 below. Fidelity completeness and quality will be assessed for intervention core functions and both implementation strategy packages. We will use a mixed methods approach to examine fidelity; family and school nurse input in annual interviews will provide key insights.

To assess Adaptations, BACK Facilitators will complete short debrief summaries following each Facilitation discussion; these will be mined for emergent topics related to intervention and implementation adaptations. Adaptations will be discussed bi-weekly by study team members, and tracked based on standard implementation methods [ 56 , 57 , 58 , 59 ]. Adaptations will be reviewed with the full study team annually (with appropriate masking) to inform tailoring the program to context in the upcoming year while preserving the core functions of the intervention and avoiding adding any BACK-E implementation strategies to the BACK-S package.

To measure program costs, activities will be defined by developing a process map and quantified through ongoing tracking of intervention and implementation activities, attendance, annual surveys, and time estimates, per standard time-based activity costing methods [ 60 ]. We will work with ANavs and school nurses annually to track both BACK program costs and any reimbursements/incentives received by schools for BACK visits, such as Medicaid reimbursements.

Secondary outcome: maintenance (setting level)

Maintenance is defined as the extent to which a program becomes accepted practice within the setting, and this is operationalized as a school continuing the BACK intervention (either BACK-S or BACK-E) after 2 years of active implementation support. Maintenance will be captured using a mixed methods approach as described in Table  8 . For qualitative assessments, a structured focus group guide will be used, and sessions will be audio recorded, transcribed, and analyzed. The research team, school nurses, ANavs, and health care providers will complete the validated clinical sustainability assessment tool annually [ 61 ]. With appropriate masking, CSAT and qualitative results will be discussed by the study team at the dedicated annual program evaluation meeting and explored with CABs to further pursue opportunities to enhance program sustainability. School sustainment of any BACK core functions (see Table  1 ) during the maintenance phase will be tracked annually by a data collector masked to study arm assignment — with attribution of who delivers each BACK function (e.g., ANav, school nurse).

Methods – analysis and power calculation for aims 1 and 2

Our type 2 hybrid trial will address our overarching hypotheses and research questions regarding comparative implementation outcomes to inform future schools’ decisions to adopt or sustain BACK, and the effectiveness of BACK on asthma control across a set of rural and urban schools [ 47 , 62 ]. By phasing in the active BACK implementation packages the study contains a control group for one year. This allows a comparison of both BACK-S and BACK-E to usual asthma care, a comparison of great interest to our CAB members, school communities and investigators.

The phased-in, parallel group randomized trials (GRTs) include both a 3-arm trial and a 2-arm trial. The primary aim of the 3-arm trial is to compare the health outcomes of each arm to control (no BACK). Implementation outcomes cannot be compared in the 3-arm trial, as the primary implementation measure of reach will not be evaluated in the control arm. The primary aim of the 2-arm trial is to compare implementation outcomes and health outcomes between the two arms of BACK-E and BACK-S (Fig.  3 ). We refer to these trials as the ‘3-arm trial’ and the ‘2-arm trial’ throughout the analysis section.

The primary implementation outcome is reach (see Table  5 ). The primary health outcome is number of asthma exacerbations in the previous year (see Table  6 ).

For statistical analysis of Aims 1 and 2, the primary implementation aim and primary health outcome aim of interest can be written as the following null hypotheses for modeling and testing either the 3-arm trial or the 2-arm trial:

In a 3-arm trial, after one year of implementing the programs the null hypotheses for the health outcome of interest are:

no difference between the incidence rates of asthma exacerbation for BACK-S compared to controls.

no difference between the incidence rates of asthma exacerbation for BACK-E compared to controls.

In the 2-arm trial, after one year of implementing the programs, the null hypotheses for the primary implementation outcome of interest and the primary health outcome of interest between implementation arms are:

no difference in the odds of reach between BACK-S compared to BACK-E. (Primary hypothesis of interest)

no difference in the rates of asthma exacerbation between BACK-S compared to BACK-E (Secondary hypothesis; Primary health outcome is rates of asthma exacerbations)

The null hypotheses will be assessed using a generalized linear mixed model [ 63 , 64 ]. The primary implementation outcome is binary and the primary health outcome is a count. Analytic models of the implementation outcomes will be longitudinal mixed models. Analytic models of health outcomes will be constrained longitudinal data analysis models where baseline measures are included in the vector of outcomes. [ 65 ] Models will allow for time varying effects and correlation via nested random coefficient effects, and will include time-varying random effects for students, school, and ANavs in any model for continuous or count outcomes. The regression model will adjust for member level covariates of age, sex and insurance provider.

Power and sample size estimates were calculated using the GRT Sample Size Calculator available at the NIH website [ 66 ]. For both comparisons of interest, we fixed power at 90% and focused on the simple difference in rates. The power analysis also assumed an ICC of 0.05, based on three years of data from our Denver Metropolitan Area program in 6 school districts. Expected rates of asthma (9%), AIF [ 41 ] completion (64%), eligibility (~ 15 children per nurse) and engagement also come from the Denver Metropolitan Area program numbers [ 20 ].

For Aim 1, the 2-arm trial of the primary implementation outcome of reach rate (anticipated lower bound on reach rates of 33%), a power and sample size analysis of a parallel GRT indicates that 30 nurse clusters in each arm with 5 individuals per cluster is powered at 90% to detect differences in the reach rates of at least 20% for a Type 1 error rate of 0.05.

For the Aim 2, the 3-arm trial of the health outcome of asthma exacerbation (expected baseline rate of 1.5 exacerbations per student per year for usual care), a power and sample size analysis of a parallel GRT indicates that 20 clusters (school nurses) in each arm with 5 individuals per cluster is powered at 90% to detect differences in asthma exacerbation rates of at least 0.75 exacerbations per person year (50% reduction) at the adjusted Type I error rate of 0.025 for two comparisons.

Analysis of secondary outcomes

Evaluation of the RE-AIM outcomes, community engagement outcomes, and health-related outcomes will involve the collection and analysis of both quantitative and qualitative data. These data will be analyzed and integrated according to our research group’s published methods for qualitative assessments of RE-AIM outcomes, including specific methods to measure adaptations, fidelity, and implementation costs [ 67 , 68 , 69 , 70 ]. Measures for each RE-AIM outcome are described in detail in Tables 4 , 5 , 6 , 7 and 8 above. Community engagement outcomes include the number of health care partner organizations to support asthma care and the number of SDOH partner organizations and the strength of partner relationships [ 71 ] for each navigator. Methods for quantitative and qualitative data collection and our mixed methods approach for secondary outcomes are described below.

Setting level adoption, implementation and maintenance analyses

Quantitative data analysis for aims 1 and 2.

Analysis will compare adoption rates by settings and staff who do and do not agree to participate using t-tests, Fishers exact test, and Wilcoxon rank sum as appropriate for each measure (i.e., representativeness). Fidelity rates will be assessed using a percentage of completeness of at least 2 visits in all eligible students [ 72 ]. Survey measures described in Tables 5 , 6 , 7 and 8 will be analyzed using methods established for each validated measure (e.g. CSAT [ 61 ]) and descriptive statistics including percentages, means, and standard deviations for novel measures (e.g. patient satisfaction) [ 54 , 55 , 61 ].

We will evaluate intervention and implementation costs (see Table  7 ) using a time driven activity-based costing approach (TDABC) from the payer perspective [ 60 , 73 ]. As recommended by StaRI we will separately assess costs for the BACK intervention and for the implementation strategies, including explicit assessment of the additional costs of the enhanced implementation strategy [ 74 ]. For community engagement, we will compare the number of partnering SDOH and asthma care (primary care provider and specialty clinics) organizations and the strength of relationship at baseline (UG3 year 3) and in subsequent UH3 trial years.

Quantitative data analysis for aim 3

Our third aim will identify PRISM contextual factors [ 39 , 43 ] from qualitative data (see Fig.  2 ) that predict student reach and retention, school-level adoption, costs to future adopters (schools), and sustainment for BACK-S or BACK-E. Regression models will assess the effects of PRISM contextual level factors upon each implementation outcome, including school characteristics (e.g., school district size). Data will be summarized, and models will assess differences in expected implementation outcomes by cross-sectional levels of contextual factors. Quantitative predictors of sustainment will also include repeated annual measures of CSAT [ 61 ]. We will evaluate the contextual factors’ contribution to each implementation outcome by implementation study arm.

Qualitative data analysis for aims 1–3

We will use a modified grounded theory methodology and will employ a combination of inductive and deductive approaches to coding and analysis. Interviews will be audio-recorded, transcribed verbatim, and managed with ATLAS.ti23. We will use a team-based approach to coding and analysis. We will follow best practices for virtual semi-structured interviews and thematic content analysis techniques [ 75 , 76 , 77 ]. The qualitative team will meet regularly throughout coding and analysis to develop shared interpretations of the data, reveal and check biases and assumptions, develop themes, and finalize results. We will present preliminary findings to CABs and other stakeholders to include their interpretation before finalizing findings [ 78 ].

Mixed methods approach and analysis

Using a complex convergent mixed methods design, each element of data collection will typically occur separately, meaning the quantitative data will be collected and analyzed and the qualitative data will be collected and analyzed [ 79 , 80 ]. Then the two sets of data will be analyzed and interpreted together by using a matrix approach to mixing the data and primary analysis integration strategies of expanding, explaining and connecting as shown in Fig.  4 and joint displays will be created [ 81 , 82 , 83 , 84 ].

figure 4

Annual Timeline for Mixed Methods data collection and analysis. This figure depicts the timing of qualitative and quantitative data collection and analysis over the course of each study year, and the planned use of these data. Abbreviations: AIF (Asthma Intake Form), Qual (qualitative), Quant (quantitative), BACK (Better Asthma Control for Kids intervention), NPS (net promoter score – level of recommendation of BACK), SDOH (social determinants of health)

The end result of this mixed-method analysis will be to identify contextual factors that predict RE-AIM outcomes, and to follow approaches recommended by Shelton, Chambers and Glasgow to identify health equity considerations for schools randomized to BACK-E compared to BACK-S [ 85 ].

Methods—development of playbook to support sustainment

With CAB input, we already developed an online implementation guide that will be adapted to support sustainment and future dissemination of BACK based on this trial’s findings. We will refine the resultant dissemination playbook to describe the relative impact and cost of using BACK-S or BACK-E for a given context. Briefly, this guide will be based on the findings on our RE-AIM mixed methods analysis of how specific contextual typologies of communities (e.g., community size (urban, suburban, rural) influenced our results. Overall, this playbook will inform decisions about whether to adopt BACK for different typologies of school districts, schools, students and families and communities.

This study has potential to impact both pediatric asthma disparities and the field of IS. For the field, by comparing our implementation outcomes between BACK-E vs. BACK-S, we will determine if the addition of the enhanced strategy package (BACK-E) to promote further school/community engagement yields additional benefits in reach, retention and other implementation outcomes. Regarding asthma disparities, we expect the BACK program will address inequities by improving asthma control and associated morbidity, and we will test this hypothesis using a control group. Taken together, these data will inform future communities and schools to decide whether or not the benefits of BACK are worth their investment.

This community-engaged trial will test the impact of BACK to reduce pediatric asthma disparities. It will also develop key products to disseminate BACK more broadly, including a dissemination playbook to accelerate sustainable dissemination of BACK to other communities experiencing health inequities in childhood asthma.

Availability of data and materials

Not applicable as there are no data in this protocol manuscript.

Abbreviations

Social Determinants of Health

Better Asthma Control for Kids

School Registered Nurse

Asthma Navigator

Emergency Department

Randomized Controlled Trial

Exploration, Preparation, Implementation and Sustainment Model

Disparities Elimination through Coordinated Interventions to Prevent and Control Heart and Lung Disease Risk

Pragmatic Robust Implementation Sustainability Model

Community Advisory Board

BACK-Standard

BACK-Enhanced

Reach, Effectiveness, Adoption, Implementation, and Maintenance

Asthma Intake Form

Akinbami LJ, Moorman JE, Bailey C, Zahran HS, King M, Johnson CA, Liu X. Trends in asthma prevalence, health care use, and mortality in the United States, 2001–2010. NCHS Data Brief. 2012;94:1–8.

Google Scholar  

Akinbami LJ, Moorman JE, Garbe PL, Sondik EJ. Status of childhood asthma in the United States, 1980–2007. Pediatrics. 2009;123(Suppl 3):S131–45.

Article   PubMed   Google Scholar  

Lieu TA, Lozano P, Finkelstein JA, Chi FW, Jensvold NG, Capra AM, et al. Racial/ethnic variation in asthma status and management practices among children in managed medicaid. Pediatrics. 2002;109(5):857–65.

Noyes K, Bajorska A, Fisher S, Sauer J, Fagnano M, Halterman JS. Cost-effectiveness of the School-Based Asthma Therapy (SBAT) program. Pediatrics. 2013;131(3):e709–17.

Article   PubMed   PubMed Central   Google Scholar  

Agency for Healthcare Research and Quality. 2017 National Healthcare Quality and Disparities Report [Available from: http://www.ahrq.gov/research/findings/nhqrdr/nhqdr17/index.html .

Akinbami LJ, Moorman JE, Liu X. Asthma prevalence, health care use, and mortality: United States, 2005–2009. Natl Health Stat Report. 2011;32:1–14.

Akinbami LJ, Moorman JE, Simon AE, Schoendorf KC. Trends in racial disparities for asthma outcomes among children 0 to 17 years, 2001–2010. J Allergy Clin Immunol. 2014;134(3):547-53 e5.

Crocker D, Brown C, Moolenaar R, Moorman J, Bailey C, Mannino D, Holguin F. Racial and ethnic disparities in asthma medication usage and health-care utilization: data from the National Asthma Survey. Chest. 2009;136(4):1063–71.

Dougherty D, Chen X, Gray DT, Simon AE. Child and adolescent health care quality and disparities: are we making progress? Acad Pediatr. 2014;14(2):137–48.

Smith LA, Bokhour B, Hohman KH, Miroshnik I, Kleinman KP, Cohn E, et al. Modifiable risk factors for suboptimal control and controller medication underuse among children with asthma. Pediatrics. 2008;122(4):760–9.

Stingone JA, Claudio L. Disparities in the use of urgent health care services among asthmatic children. Ann Allergy Asthma Immunol. 2006;97(2):244–50.

Daniel LC, Boergers J, Kopel SJ, Koinis-Mitchell D. Missed sleep and asthma morbidity in urban children. Ann Allergy Asthma Immunol. 2012;109(1):41–6.

Diette GB, Markson L, Skinner EA, Nguyen TT, Algatt-Bergstrom P, Wu AW. Nocturnal asthma in children affects school attendance, school performance, and parents’ work attendance. Arch Pediatr Adolesc Med. 2000;154(9):923–8.

Article   CAS   PubMed   Google Scholar  

Moonie S, Sterling DA, Figgs LW, Castro M. The relationship between school absence, academic performance, and asthma status. J Sch Health. 2008;78(3):140–8.

Moonie SA, Sterling DA, Figgs L, Castro M. Asthma status and severity affects missed school days. J Sch Health. 2006;76(1):18–24.

Basch CE. Asthma and the achievement gap among urban minority youth. J Sch Health. 2011;81(10):606–13.

Basch CE. Healthier students are better learners: high-quality, strategically planned, and effectively coordinated school health programs must be a fundamental mission of schools to help close the achievement gap. J Sch Health. 2011;81(10):650–62.

Szefler SJ, Cloutier MM, Villarreal M, Hollenbach JP, Gleason M, Haas-Howard C, et al. Building Bridges for Asthma Care: Reducing school absence for inner-city children with health disparities. J Allergy Clin Immunol. 2019;143(2):746-54 e2.

Cicutto L, Gleason M, Haas-Howard C, White M, Hollenbach JP, Williams S, et al. Building Bridges for Asthma Care Program: A School-Centered Program Connecting Schools, Families, and Community Health-Care Providers. J Sch Nurs. 2020;36(3):168–80.

Liptzin DR, Gleason MC, Cicutto LC, Cleveland CL, Shocks DJ, White MK, et al. Developing, Implementing, and Evaluating a School-Centered Asthma Program: Step-Up Asthma Program. J Allergy Clin Immunol Pract. 2016;4(5):972-9 e1.

Cicutto L, Gleason M, Szefler SJ. Establishing school-centered asthma programs. J Allergy Clin Immunol. 2014;134(6):1223–30.

Huebschmann AG, Gleason M, Armstrong R, Sheridan A, Kim A, Haas-Howard C, et al. Notes From the Field: Diverse Partner Perspectives Improve the Usability and Equity Focus of Implementation Guides. Ethnicity and Disease. 2024;DECIPHeR(Special Issue):132–4.

Brewer SE, Reedy J, Maestas D, DeCamp LR, Begum A, Brtnikova M, et al. Understanding Core Community Needs for School-Based Asthma Programming: A Qualitative Assessment in Colorado Communities. Ethnicity and Disease. 2024;DECIPHeR(Special Issue):35–43.

Brewer SE, DeCamp LR, Reedy J, Armstrong R, DeKeyser HH, Federico MJ, et al. Developing a Social Determinants of Health Needs Assessment for Colorado Kids (SNACK) Tool for a School-Based Asthma Program: Findings from a Pilot Study. Ethnicity and Disease. 2024;DECIPHeR(Special Issue):126–31.

Harris K, Kneale D, Lasserson TJ, McDonald VM, Grigg J, Thomas J. School-based self-management interventions for asthma in children and adolescents: a mixed methods systematic review. Cochrane Database Syst Rev. 2019;1:CD011651.

PubMed   Google Scholar  

Kneale D, Harris K, McDonald VM, Thomas J, Grigg J. Effectiveness of school-based self-management interventions for asthma among children and adolescents: findings from a Cochrane systematic review and meta-analysis. Thorax. 2019;74(5):432–8.

Walter H, Sadeque-Iqbal F, Ulysse R, Castillo D, Fitzpatrick A, Singleton J. Effectiveness of school-based family asthma educational programs in quality of life and asthma exacerbations in asthmatic children aged five to 18: a systematic review. JBI Database System Rev Implement Rep. 2016;14(11):113–38.

Cicutto L, Murphy S, Coutts D, O’Rourke J, Lang G, Chapman C, Coates P. Breaking the access barrier: evaluating an asthma center’s efforts to provide education to children with asthma in schools. Chest. 2005;128(4):1928–35.

Cicutto L, To T, Murphy S. A randomized controlled trial of a public health nurse-delivered asthma program to elementary schools. J Sch Health. 2013;83(12):876–84.

Eakin MN, Zaeh S, Eckmann T, Ruvalcaba E, Rand CS, Hilliard ME, Riekert KA. Effectiveness of a Home- and School-Based Asthma Educational Program for Head Start Children With Asthma: A Randomized Clinical Trial. JAMA Pediatr. 2020;174(12):1191–8.

Halterman JS, Fagnano M, Montes G, Fisher S, Tremblay P, Tajon R, et al. The school-based preventive asthma care trial: results of a pilot study. J Pediatr. 2012;161(6):1109–15.

Halterman JS, Szilagyi PG, Fisher SG, Fagnano M, Tremblay P, Conn KM, et al. Randomized controlled trial to improve care for urban children with asthma: results of the School-Based Asthma Therapy trial. Arch Pediatr Adolesc Med. 2011;165(3):262–8.

Bruzzese JM, Sheares BJ, Vincent EJ, Du Y, Sadeghi H, Levison MJ, et al. Effects of a school-based intervention for urban adolescents with asthma. A controlled trial. Am J Respir Crit Care Med. 2011;183(8):998–1006.

Gerald LB, McClure LA, Mangan JM, Harrington KF, Gibson L, Erwin S, et al. Increasing adherence to inhaled steroid therapy among schoolchildren: randomized, controlled trial of school-based supervised asthma therapy. Pediatrics. 2009;123(2):466–74.

Gottlieb LM, Hessler D, Long D, Laves E, Burns AR, Amaya A, et al. Effects of Social Needs Screening and In-Person Service Navigation on Child Health: A Randomized Clinical Trial. JAMA Pediatr. 2016;170(11):e162521.

Cicutto L, Gleason M, Haas-Howard C, Jenkins-Nygren L, Labonde S, Patrick K. Competency-Based Framework and Continuing Education for Preparing a Skilled School Health Workforce for Asthma Care: The Colorado Experience. J Sch Nurs. 2017;33(4):277–84.

Moullin JC, Dickson KS, Stadnick NA, Rabin B, Aarons GA. Systematic review of the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. Implement Sci. 2019;14(1):1.

Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38(1):4–23.

Feldstein AC, Glasgow RE. A practical, robust implementation and sustainability model (PRISM) for integrating research findings into practice. Jt Comm J Qual Patient Saf. 2008;34(4):228–43.

Curran GM, Landes SJ, McBain SA, Pyne JM, Smith JD, Fernandez ME, et al. Reflections on 10 years of effectiveness-implementation hybrid studies. Front Health Serv. 2022;2:1053496.

School Nursing and Health - Health Conditions & Care Plans: Colorado Department of Education; [updated 2/12/2024. Available from: https://www.cde.state.co.us/healthandwellness/snh_healthissues#asthma .

Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012;69(2):123–57.

Glasgow RE, Harden SM, Gaglio B, Rabin B, Smith ML, Porter GC, et al. RE-AIM Planning and Evaluation Framework: Adapting to New Science and Practice With a 20-Year Review. Front Public Health. 2019;7:64.

Holtrop JS, Estabrooks PA, Gaglio B, Harden SM, Kessler RS, King DK, et al. Understanding and applying the RE-AIM framework: Clarifications and resources. Journal of Clinical and Translational Science. 2021;5(1):e126.

Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322–7.

Article   CAS   PubMed   PubMed Central   Google Scholar  

McGuier EA, Kolko DJ, Stadnick NA, Brookman-Frazee L, Wolk CB, Yuan CT, et al. Advancing research on teams and team effectiveness in implementation science: An application of the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. Implement Res Pract. 2023;4:26334895231190856.

Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26.

Johnson EE, MacGeorge C, King KL, Andrews AL, Teufel RJ 2nd, Kruis R, et al. Facilitators and Barriers to Implementation of School-Based Telehealth Asthma Care: Program Champion Perspectives. Acad Pediatr. 2021;21(7):1262–72.

Waltz TJ, Powell BJ, Fernandez ME, Abadie B, Damschroder LJ. Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions. Implement Sci. 2019;14(1):42.

Brewer SE. DL, Reedy J., Armstrong R., DeKeyser H., Federico M., McFarlane II A., Figlio G., Huebschmann AG., Szefler S., Cicutto L. Developing a Social Determinants of Health Needs Assessment for Colorado Kids (SNACK) Tool for a School-based Asthma Program: Findings from a Pilot Study. Ethnicity & Disease.

Parchman ML, Anderson ML, Dorr DA, Fagnan LJ, O’Meara ES, Tuzzio L, et al. A Randomized Trial of External Practice Support to Improve Cardiovascular Risk Factors in Primary Care. Ann Fam Med. 2019;17(Suppl 1):S40–9.

Kilbourne AM, Almirall D, Eisenberg D, Waxmonsky J, Goodrich DE, Fortney JC, et al. Protocol: Adaptive Implementation of Effective Programs Trial (ADEPT): cluster randomized SMART trial comparing a standard versus enhanced implementation strategy to improve outcomes of a mood disorders program. Implement Sci. 2014;9:132.

Kilbourne AM, Geng E, Eshun-Wilson I, Sweeney S, Shelley D, Cohen DJ, et al. How does facilitation in healthcare work? Using mechanism mapping to illuminate the black box of a meta-implementation strategy. Implement Sci Commun. 2023;4(1):53.

Weiner BJ, Lewis CC, Stanick C, Powell BJ, Dorsey CN, Clary AS, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12(1):108.

Adams C, Walpola R, Schembri AM, Harrison R. The ultimate question? Evaluating the use of Net Promoter Score in healthcare: A systematic review. Health Expect. 2022;25(5):2328–39.

Wiltsey Stirman S, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implement Sci. 2019;14(1):58.

Miller CJ, Barnett ML, Baumann AA, Gutner CA, Wiltsey-Stirman S. The FRAME-IS: a framework for documenting modifications to implementation strategies in healthcare. Implement Sci. 2021;16(1):36.

Smith JD, Norton WE, Mitchell SA, Cronin C, Hassett MJ, Ridgeway JL, et al. The Longitudinal Implementation Strategy Tracking System (LISTS): feasibility, usability, and pilot testing of a novel method. Implement Sci Commun. 2023;4(1):153.

Miller CJ, Wiltsey-Stirman S, Baumann AA. Iterative Decision-making for Evaluation of Adaptations (IDEA): A decision tree for balancing adaptation, fidelity, and intervention impact. J Community Psychol. 2020;48(4):1163–77.

Cidav Z, Mandell D, Pyne J, Beidas R, Curran G, Marcus S. A pragmatic method for costing implementation strategies using time-driven activity-based costing. Implement Sci. 2020;15(1):28.

Malone S, Prewitt K, Hackett R, Lin JC, McKay V, Walsh-Bailey C, Luke DA. The Clinical Sustainability Assessment Tool: measuring organizational capacity to promote sustainability in healthcare. Implement Sci Commun. 2021;2(1):77.

Landes SJ, McBain SA, Curran GM. An introduction to effectiveness-implementation hybrid designs. Psychiatry Res. 2019;280: 112513.

McCullagh P. Generalized linear models. 2nd ed. London: Routledge London; 2018. p. 361.

Liang K-Y, Zeger SL. Longitudinal data analysis using generalized linear models. Biometrika. 1986;73(1):13–22.

Article   Google Scholar  

Lu K. On efficiency of constrained longitudinal data analysis versus longitudinal analysis of covariance. Biometrics. 2010;66(3):891–6.

Research Methods Resources: National Institutes of Health: National Institues of Health; [updated Feb 5, 2024. Available from: https://researchmethodsresources.nih.gov/ .

Hall TL, Holtrop JS, Dickinson LM, Glasgow RE. Understanding adaptations to patient-centered medical home activities: The PCMH adaptations model. Transl Behav Med. 2017;7(4):861–72.

Holtrop JS, Rabin BA, Glasgow RE. Qualitative approaches to use of the RE-AIM framework: rationale and methods. BMC Health Serv Res. 2018;18(1):177.

Kluger BM, Katz M, Galifianakis N, Pantilat SZ, Kutner JS, Sillau S, et al. Does outpatient palliative care improve patient-centered outcomes in Parkinson’s disease: Rationale, design, and implementation of a pragmatic comparative effectiveness trial. Contemp Clin Trials. 2019;79:28–36.

Luoma KA, Leavitt IM, Marrs JC, Nederveld AL, Regensteiner JG, Dunn AL, et al. How can clinical practices pragmatically increase physical activity for patients with type 2 diabetes? A systematic review. Transl Behav Med. 2017;7(4):751–72.

Center for the Advancement of Collaborative Strategies in Health. Partnership Self-Assessment Tool - Questionnaire 2002 [Available from: https://atrium.lib.uoguelph.ca/items/8cf153d3-8d37-4a88-aa5a-9ca089bd796a .

Harden SM, Gaglio B, Shoup JA, Kinney KA, Johnson SB, Brito F, et al. Fidelity to and comparative results across behavioral interventions evaluated through the RE-AIM framework: a systematic review. Syst Rev. 2015;4:155.

Huebschmann AG, Trinkley KE, Gritz M, Glasgow RE. Pragmatic considerations and approaches for measuring staff time as an implementation cost in health systems and clinics: key issues and applied examples. Implement Sci Commun. 2022;3(1):44.

Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ. 2017;356:i6795.

Dicicco-Bloom B, Crabtree BF. The qualitative research interview. Med Educ. 2006;40(4):314–21.

Keen S, Lomeli-Rodriguez M, Joffe H. From Challenge to Opportunity: Virtual Qualitative Research During COVID-19 and Beyond. Int J Qual Methods. 2022;21:16094069221105076.

Miles MB, Huberman AM, Saldaña J. Qualitative data analysis : a methods sourcebook. Third edition ed. Thousand Oaks, Califorinia: SAGE Publications, Inc. Thousand Oaks, Califorinia; 2014.

Mauthner NS, Doucet A. Reflexive Accounts and Accounts of Reflexivity in Qualitative Data Analysis. Sociology. 2003;37(3):413–31.

Meissner H, Creswell J, Klassen AC, Plano V, Smith KC. Best practices for mixed methods research in the health sciences: National Institutes of Health; 2011.

Creswell JW, Plano Clark VL. Designing and Conducting Mixed Methods Research. 3rd ed. Thousand Oaks: SAGE Publications, Inc; 2017. p. 520.

Fetters MD, & Guetterman, T. C. Development of a joint display as a mixed analysis. The Routledge Reviewer's Guide to Mixed Methods Analysis: Routledge; 2021. p. 259–76.

Guetterman TC, Fetters, M. D., & Creswell, J. W. Integrating quantitative and qualitative results in health science mixed methods research through joint displays. The Annals of Family Medicine. 2015. p. 554–61.

Fetters MD. The mixed methods research workbook : activities for designing, implementing, and publishing projects. Los Angeles: SAGE Los Angeles; 2020.

Book   Google Scholar  

Creswell JW, Clark VLP. Designing and conducting mixed methods research: Sage publications; 2017.

Shelton RC, Chambers DA, Glasgow RE. An Extension of RE-AIM to Enhance Sustainability: Addressing Dynamic Context and Promoting Health Equity Over Time. Front Public Health. 2020;8:134.

Download references

Acknowledgements

Not applicable.

- Research reported in this publication was supported by the National Heart Lung Blood Institute (NHLBI ) of the National Institutes of Health under award number UH3 HL151297 as part of the Disparities Elimination through Coordinated Interventions to Prevent and Control Heart and Lung Disease Risk (DECIPHeR) Alliance (decipheralliance.org). Additionally, support came from National Institute on Drug Abuse (NIDA) grant (K01DA056698) for NMW, Colorado Department of Public Health and Environment Cancer, Cardiovascular and Pulmonary Disease Program - Colorado Comprehensive School-Centered Asthma Program (AsthmaCOMP) Expansion 2024*0346 for MG, AMF and SJS, and NHLBI K23HL146791 (HDK). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Author information

Authors and affiliations.

Anschutz Medical Campus Department of Medicine, Division of General Internal Medicine, University of Colorado, 12631 E. 17th Ave., Mailstop B180, Aurora, CO, USA

Amy G. Huebschmann & Nicole M. Wagner

Adult and Child Center for Outcomes Research and Delivery Science (ACCORDS), 1890 Revere Ct, Suite P32-3200, Mailstop F443, Aurora, CO, 80045, USA

Amy G. Huebschmann, Nicole M. Wagner, Michaela Brtnikova, Sarah E. Brewer, Anowara Begum, Rachel Armstrong, Lisa Ross DeCamp & Heather DeKeyser

Ludeman Family Center for Women’s Health Research, Aurora, CO, USA

Amy G. Huebschmann

Department of Pediatrics, University of Colorado School of Medicine, Aurora, USA, CO

Melanie Gleason, John T. Brinton, Michaela Brtnikova, Lisa Ross DeCamp, Heather DeKeyser, Monica J. Federico & Stanley J. Szefler

Department of Family Medicine, University of Colorado School of Medicine, Aurora, CO, USA

Sarah E. Brewer

National Jewish Health and University of Colorado College of Nursing and Clinical Sciences, Aurora, CO, USA

Lisa C. Cicutto

Breathing Institute, Children’s Hospital Colorado, 13123 East 16Th Avenue, Mailstop B395, Aurora, CO, 80045, USA

Melanie Gleason, Arthur McFarlane, Heather DeKeyser, Monica J. Federico & Stanley J. Szefler

Trailhead Institute, 1999 Broadway Suite 200, Denver, CO, 80202, USA

Holly Coleman

You can also search for this author in PubMed   Google Scholar

Contributions

AGH, SJS, LCC, LRD and JTB made substantial contributions to the conception and design of the work. AGH drafted the manuscript. SJS, LCC, LRD, JTB, MB, SEB, AM, NMW, AB, MG, HC, HD, MJF, and RA substantively revised the manuscript. All authors approved the final manuscript and agreed to be personally accountable for the author’s own contributions and to ensure that questions related to the accuracy or integrity of any part of the work are appropriately investigated, resolved, and the resolution documented in the literature.

Corresponding author

Correspondence to Amy G. Huebschmann .

Ethics declarations

Ethics approval and consent to participate.

The proposed research was reviewed by the Colorado Multiple Institutional Review Board (COMIRB), approval number 20–0883; the amendment to approve the protocol for this type 2 hybrid effectiveness-implementation trial was approved on 10/19/2023.

Consent for publication

Competing interests.

- AGH, NMW, MG, JTB, MB, SEB, AB, RA, LRD, AMF, HDK, HC, MF and LCC declare that they have no competing interests.

- Declaration of potential competing interests for SJS: prior service as a consultant for new drug development for Astra Zeneca, Eli Lilly, GlaxoSmithKline, Moderna, OM Pharma, Propeller Health, Regeneron, and Sanofi. This proposal does not involve comparisons of asthma medications.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material 1., supplementary material 2., rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Huebschmann, A.G., Wagner, N.M., Gleason, M. et al. Reducing asthma attacks in disadvantaged school children with asthma: study protocol for a type 2 hybrid implementation-effectiveness trial (Better Asthma Control for Kids, BACK). Implementation Sci 19 , 60 (2024). https://doi.org/10.1186/s13012-024-01387-3

Download citation

Received : 21 June 2024

Accepted : 16 July 2024

Published : 15 August 2024

DOI : https://doi.org/10.1186/s13012-024-01387-3

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Social determinants of health
  • Implementation science
  • School health services
  • Health equity
  • Child health

Implementation Science

ISSN: 1748-5908

  • Submission enquiries: Access here and click Contact Us
  • General enquiries: [email protected]

case study with evaluation

Information

  • Author Services

Initiatives

You are accessing a machine-readable page. In order to be human-readable, please install an RSS reader.

All articles published by MDPI are made immediately available worldwide under an open access license. No special permission is required to reuse all or part of the article published by MDPI, including figures and tables. For articles published under an open access Creative Common CC BY license, any part of the article may be reused without permission provided that the original article is clearly cited. For more information, please refer to https://www.mdpi.com/openaccess .

Feature papers represent the most advanced research with significant potential for high impact in the field. A Feature Paper should be a substantial original Article that involves several techniques or approaches, provides an outlook for future research directions and describes possible research applications.

Feature papers are submitted upon individual invitation or recommendation by the scientific editors and must receive positive feedback from the reviewers.

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

Original Submission Date Received: .

  • Active Journals
  • Find a Journal
  • Proceedings Series
  • For Authors
  • For Reviewers
  • For Editors
  • For Librarians
  • For Publishers
  • For Societies
  • For Conference Organizers
  • Open Access Policy
  • Institutional Open Access Program
  • Special Issues Guidelines
  • Editorial Process
  • Research and Publication Ethics
  • Article Processing Charges
  • Testimonials
  • Preprints.org
  • SciProfiles
  • Encyclopedia

sustainability-logo

Article Menu

case study with evaluation

  • Subscribe SciFeed
  • Recommended Articles
  • Google Scholar
  • on Google Scholar

Find support for a specific problem in the support section of our website.

Please let us know what you think of our products and services.

Visit our dedicated information section to learn more about MDPI.

Evaluation of Urban Resilience and Its Influencing Factors: A Case Study of the Yichang–Jingzhou–Jingmen–Enshi Urban Agglomeration in China

case study with evaluation

Reviewer 1 Report

The manuscript presents an Evaluation of Urban Resilience and Its Influencing Factors in a single case study.

The overall context of the topic is within the sustainability topic. It contributes to schoolers in the urban resilience evaluation topic by (01) developing the indicator system domain based on four dimensions—economy, ecology, society, and infrastructure and (02) urban resilience prediction in the future. If possible expand the ecology criteria with the environmental pollution issues. 

In section 1 similar previous research has been adequately presented. Several research problems given in section 1 have been resolved in the manuscript. 

Section 2 is easy to read and understand and very well presented.

In the discussion section, the policy recommendations for the government to build resilient cities and improve sustainable urban development could be summarised in a more detailed manner. 

High Percent match: 45% is detected.  iThenticate report

Author Response

Please refer to the attachment for details.

case study with evaluation

Reviewer 2 Report

Please refer to the attached file.

Reviewer 3 Report

I reviewed the article titled "Evaluation of Urban Resilience and Its Influencing Factors: Case Study of the Yichang-Jingzhou-Jingmen-Enshi Urban Agglomeration in China". It contributes significantly to the scientific literature in several key ways:

1-      The article develops a comprehensive urban resilience evaluation indicator system based on four dimensions: economy, ecology, society, and infrastructure. This multi-dimensional approach provides a holistic view of urban resilience and its various components, enhancing the understanding of how different factors contribute to urban resilience.

2-      The use of the entropy weight method, Getis-Ord Gi* model, robustness testing, and CA-Markov model demonstrates a robust and scientifically rigorous approach to measuring and analyzing urban resilience. These methods ensure that the results are reliable and provide a solid foundation for policy recommendations.

3-      The article analyzes the spatiotemporal evolution characteristics of urban resilience in the YJJE urban agglomeration from 2010 to 2023. This long-term analysis reveals trends and patterns that are critical for understanding how urban resilience evolves over time and across different regions.

4-      By identifying critical driving factors of urban resilience, such as park green space area, total amount of urban social retail, financial expenditure per capita, number of buses per 10,000 people, urban disposable income per capita, and GDP per capita, the article provides valuable insights into what factors most significantly impact urban resilience. This information is essential for policymakers and urban planners aiming to enhance resilience in their cities.

5-      The use of the CA-Markov model to predict urban resilience in 2030 adds a forward-looking perspective to the research. This predictive analysis helps in planning and preparing for future challenges, making the findings relevant for long-term urban development strategies.

6-      The study offers scientific references and policy recommendations for building resilient cities and improving sustainable urban development. These recommendations are grounded in empirical data and robust analysis, making them practical and actionable for government authorities.

Based on the review of the study, below are the overall weaknesses identified which need the author(s) attention:

1-      The study focuses solely on the Yichang-Jingzhou-Jingmen-Enshi urban agglomeration, which might limit the generalizability of the findings to other regions or countries.

2-      The data used is derived from various statistical yearbooks and reports, which might vary in accuracy and reliability. Any errors or inconsistencies in these sources could affect the study's conclusions.

3-      The study covers the period from 2010 to 2023. While this is a significant period, the rapidly changing nature of urban environments might require more recent data to be fully relevant.

4-      The CA-Markov model and other methods used in the study are based on specific assumptions. Any deviations from these assumptions in real-world scenarios could affect the accuracy of the predictions.

5-      The choice of indicators for measuring urban resilience might not capture all relevant aspects. Important factors could be omitted, leading to an incomplete assessment of resilience.

6-      The use of the entropy weight method assumes equal importance of each domain (economy, ecology, society, infrastructure). This might not accurately reflect the varying significance of different factors in different contexts.

7-      The study uses three different standardization methods and selects one based on robustness analysis. However, the chosen method might still introduce biases or inaccuracies.

8-      The study examines both prefecture-level and county-level cities, but the spatial resolution might not be fine enough to capture all relevant variations within these regions.

9-      Predictions for 2030 are based on historical data and current trends, which might not account for unexpected future developments or disruptions.

10-   While the study provides policy recommendations, it might not fully consider the practical challenges of implementing these recommendations in different political and economic contexts.

11-   The study might not fully account for all environmental factors influencing urban resilience, such as microclimates, local biodiversity, and specific ecological interactions.

12-   The study might not adequately address the diversity within the urban agglomeration, such as varying socioeconomic conditions, cultural differences, and local governance structures.

13-   The indicators for infrastructure resilience might not cover all critical aspects, such as the quality and maintenance of infrastructure, which are crucial for resilience.

14-   The study might not fully incorporate human factors such as community engagement, social networks, and individual behaviors, which are important for resilience.

15-   The study might not adequately account for short-term economic fluctuations and their impact on urban resilience, focusing instead on long-term trends.

16-   While the study mentions climate change, it might not fully incorporate the latest projections and potential extreme events that could impact urban resilience.

17-   The study might have gaps in its literature review, missing relevant recent studies or alternative theoretical frameworks that could provide additional insights or critique.

In addition to the weaknesses mentioned above, below are suggested area of improvement divided section by section:

1-      Abstract: Ensure that the abstract is concise and avoids repetition. Clearly state the unique contributions of the study. Include specific numerical results or key findings to provide a snapshot of the study’s outcomes.

2-      Introduction: Expand the literature review to include recent studies on urban resilience, particularly those conducted in different geographic contexts. Clearly articulate the specific gap in the existing literature that this study aims to address. Ensure that the research objectives are clearly stated and aligned with the research questions and hypotheses.

3-      Materials and Methods: Provide more details about the socio-economic and environmental characteristics of the study area to contextualize the findings. Explain the rationale for selecting specific indicators for each dimension of urban resilience. Discuss the reliability and validity of the data sources used, including any limitations. Provide more detailed descriptions of the standardization methods, the entropy weight method, and the robustness analysis. Clearly state the assumptions underlying the CA-Markov model and other methods used.

4-      Results: Provide more detailed analyses of the results, including the interpretation of the spatiotemporal patterns and trends observed. Discuss the impact of each indicator on urban resilience in more detail, highlighting any unexpected findings. Enhance the quality of figures and maps to ensure they are clear and informative. Include more visual aids to support the textual descriptions.

5-      Discussion: Conduct a more thorough comparison with similar studies in other regions to highlight the uniqueness and relevance of the findings. Discuss the practical challenges of implementing the policy recommendations and suggest possible solutions. Clearly state the limitations of the study, including data limitations, methodological constraints, and the scope of the study area. Suggest specific areas for future research to address the identified limitations and explore new aspects of urban resilience.

6-      Conclusions: Provide a concise summary of the key findings, including specific numerical results. Discuss the broader implications of the findings for urban resilience research and policy beyond the study area. Offer actionable recommendations for policymakers, urban planners, and researchers.

The authors have made substantial efforts to address the comments and improve the quality of the study titled "Evaluation of Urban Resilience and Its Influencing Factors: Case Study of the Yichang-Jingzhou-Jingmen-Enshi Urban Agglomeration in China." The revised manuscript demonstrates a thorough consideration of the suggestions, particularly in enhancing the discussion on the generalizability of findings, the accuracy and reliability of data, and the inclusion of climate change and environmental factors. The methodological robustness has been strengthened, and additional emphasis has been placed on human factors and social aspects, enriching the overall analysis. These revisions have significantly improved the clarity and comprehensiveness of the study.

Thank you for your diligent efforts in refining the manuscript and enhancing its contribution to the field of urban resilience and sustainability. Your dedication to addressing the feedback and making necessary revisions is greatly appreciated.

Zhao, Z.; Hu, Z.; Han, X.; Chen, L.; Li, Z. Evaluation of Urban Resilience and Its Influencing Factors: A Case Study of the Yichang–Jingzhou–Jingmen–Enshi Urban Agglomeration in China. Sustainability 2024 , 16 , 7090. https://doi.org/10.3390/su16167090

Zhao Z, Hu Z, Han X, Chen L, Li Z. Evaluation of Urban Resilience and Its Influencing Factors: A Case Study of the Yichang–Jingzhou–Jingmen–Enshi Urban Agglomeration in China. Sustainability . 2024; 16(16):7090. https://doi.org/10.3390/su16167090

Zhao, Zhilong, Zengzeng Hu, Xu Han, Lu Chen, and Zhiyong Li. 2024. "Evaluation of Urban Resilience and Its Influencing Factors: A Case Study of the Yichang–Jingzhou–Jingmen–Enshi Urban Agglomeration in China" Sustainability 16, no. 16: 7090. https://doi.org/10.3390/su16167090

Article Metrics

Article access statistics, further information, mdpi initiatives, follow mdpi.

MDPI

Subscribe to receive issue release notifications and newsletters from MDPI journals

Case study: School district works to give employees a supportive health care experience

With UnitedHealthcare, Minneapolis Public Schools has experienced a higher utilization of benefits, quicker resolution of issues and an improved health care experience for employees.

Building healthier workplaces together

case study with evaluation

Video transcript

[UPBEAT MUSIC PLAYING IN THE BACKGROUND]

[Text On Screen – Building healthier workplaces together]

[VIDEO OF SCENES FROM SCHOOL, STUDENTS TAKING AN EXAM, A SCHOOLBUS ARRIVING AT THE SCHOOL BUILDING, TEACHERS IN THE CLASSROOM, CHILDREN ARRIVING TO SCHOOL]

[LOGO: UNITEDHEALTHCARE]

[Text On Screen – Organization: Minneapolis Public Schools, Location: Minneapolis, MN, Industry: K-12 Education, Number of employees: 6,300]

[SOFTER MUSIC PLAYING IN THE BACKGROUND]

[VIDEO OF AN AERIAL VIEW OF MPS BUILDING WITH MINNEAPOLIS SKYLINE BEHIND IT]

[PETER RONZA SPEAKING ON SCREEN]

[Text On Screen – Peter Ronza, Director of Total Compensation Minneapolis Public Schools]

PETER RONZA: People are sometimes shocked at what goes into running this. The school district currently deploys around 6,300 benefits eligible employees. Roughly 50 percent are what we would call front serving. They're in the schools, they're providing the education. And roughly 50 percent are providing those support functions.

[VIDEO OF A TEACHER IN A CLASSROOM, TRANSITIONING TO SUPPORT STAFF TALKING IN THE OFFICE]

Our demographics are expansive. So we want to make sure that our program is second to none so that when those employees need their health care, they have it.

[VIDEO OF IBRAHIMA DIOP WORKING IN HIS OFFICE]

[IBRAHIMA DIOP SPEAKING ON SCREEN]

[Text On Screen – Ibrahima Diop, Chief of Finance and Operations, Minneapolis Public Schools]

IBRAHIMA DIOP: It's about balancing between the well-being of our staff and cost. And it's much easier to keep doing what you've always done.

[VIDEO OF IBRAHIMA DIOP TALKING TO MPS STAFF]

When we felt that we needed to make a change, what company is giving us the best value?

[VIDEO OF SCENES FROM A SCHOOL, SCHOOL BUS, STUDENTS ARRIVING, TEACHERS IN THE CLASSROOM]

I am proud to say that we were able to switch to UnitedHealthcare because we can provide what we want to provide to our staff, our community, and attract great candidates for the vacancies that we have.

PETER RONZA VOICEOVER: What has been incredibly impressive is the dedicated staff that has been given to us.

[VIDEO OF JAMES BENNETT TALKING TO A COLLEAGUE]

[JAMES BENNETT SPEAKING ON SCREEN]

[Text On Screen – James Bennett, Dedicated Service Account Manager, UnitedHealthcare]

JAMES BENNETT: My role is to work through issues with the employees, answering questions, assisting employees with anything from claims, to eligibility, to coverage. You really have to really like what you're doing and you have to really care about the individuals that you are providing services for.

[VIDEO OF PETER RONZA AND JAMES BENNETT CHATTING, TRANSITIONING TO PETER RONZA CHATTING WITH COURTNEY AYERS]

PETER RONZA: We're very grateful for James. His knowledge and accessibility to the resources of UnitedHealthcare not only help us, as administrators, when we may have an issue or a question, they help our employees greatly.

[COURTNEY AYERS SPEAKING ON SCREEN]

[Text On Screen – Courtney Ayers, Wellness Coordinator, Minneapolis Public Schools]

COURTNEY AYERS: UnitedHealthcare is super helpful when trying to send out communications because they can see the data of our claims and what our employees are going in for and using their health plan for.

[VIDEO OF COURTNEY AYERS WORKING AT HER COMPUTER, TRANSITIONING TO A PHOTOGRAPHS OF HER FAMILY AND BABY]

We recently just had our first child, and I was very grateful to have access to our UnitedHealthcare benefits. It was so helpful to be able to have a large network, being able to just use their apps, having access to our on-site account manager, to have that relationship.

[VIDEO OF COURTNEY AYERS WORKING AT HER COMPUTER]

When you have access to quality healthcare, that makes you feel like your employer cares about you. You're not just an employee. You are a mom, you have a family. It’s just awesome.

[VIDEO OF A TEACHER IN A CLASSROOM]

TEACHER SPEAKING TO HER STUDENTS: The trick I use is you put your finger on the angle, don't touch a side, wherever your finger ends up, that's your opposite side.

[VIDEO OF SCENES FROM A SCHOOL, INCLUDING A STUDENT COMPLETING A LESSON, TEACHERS IN THE CLASSROOM]

PETER RONZA: Since bringing on UnitedHealthcare, it has enabled our employees to make important healthcare decisions, without complexity, and they can concentrate on then doing their job of providing an education to our students.

TEACHER SPEAKING TO A STUDENT: Oh, Jaleya, way too kind.

[VIDEO OF AN AERIAL VIEW OF MSP BUILDING WITH MINNEAPOLIS SKYLINE BEHIND IT]

[LOGO: UNITED HEALTHCARE, THERE FOR WHAT MATTERS™]

[Text On Screen – Uhc.om/employer. This case study is true. Results will vary based on client specific demographics and plan design. All trademarks are the property of their respective owners. Administrative services provided by UnitedHealthcare Company in NJ, and UnitedHealthcare Insurance Company of New York in NY. ©2024 United HealthCare Services, Inc. All Rights Reserved. EI#########]

[END MUSIC]

Around 6,300 benefits-eligible teachers, administrators and other staff members fill the 87 Minneapolis Public Schools (MPS) buildings throughout the metro area — which has a rich history dating back to 1834 when the first school was founded.

Funded by taxpayer dollars, MPS recognized that working with a carrier capable of providing quality benefits and offering hands-on support was vital to offering a more competitive and enticing compensation package.

That’s what led MPS to switch to UnitedHealthcare, with Peter Ronza, director of total compensation for MPS, indicating that the relationship and level of service provided by UnitedHealthcare has been “flawless and unmatched” compared to other vendors he’s worked with.

Designing benefits that support all MPS employees — from teachers and custodians to administrators and food service personnel — is where the strategic guidance of UnitedHealthcare has made a difference. 

Thumnail image for article

Offering employees a competitive benefits package

$33.7M in total savings generated from UnitedHealthcare programs beyond contracted discounts 1

“The collaboration with UnitedHealthcare has enabled us to do even more than we were doing before,” Ronza says. “We’ve come a very long way, not only bringing our benefits to where they should be but doing so in a fiscally responsible way.”

“You have to go through a prioritization phase by making sure that the student is at the center of the decisions that we make,” says Ibrahima Diop, chief of finance and operations for MPS.

For MPS, that meant offering employees an expansive provider network and a generous suite of benefits and programs through UnitedHealthcare, along with an on-site clinic to help make health care more accessible and affordable, especially for its lower-paid employees.

Through this clinic, employees and their covered dependents can receive primary care services, labs and medications for common conditions, while also receiving referrals to UnitedHealthcare network providers or clinical programs as needed.

“The more employees don’t have to worry about their health, the more they can concentrate at work,” Ronza says.

Engaging employees for better health plan utilization

Offering benefits is one thing, but getting employees to understand how to use them is another. “Health care is really useless unless employees know how to use it,” Ronza says.

With guidance from UnitedHealthcare, MPS has been — and continues to be — able to identify opportunities to better engage and educate its employees about the health benefits available to them.

This includes looking at claims data and utilization patterns to help inform wellness initiatives and targeted employee communications. For instance, a multi-touch email and direct mail campaign promoting preventive care led by UnitedHealthcare, in addition to the wellness activities led by MPS, likely contributed to the nearly 3-point increase in the percentage of adults who received a wellness visit in 2023. 2

Delivering a more supportive health care experience

473 the number of members assisted by UnitedHealthcare on-site service account manager 3

Understanding how much the employee experience matters to MPS, UnitedHealthcare assigned a dedicated on-site service account manager, James Bennett, to help employees and their families understand their coverage and benefits information and resolve billing or claims issues.

“James has been a huge benefit,” Ronza says. “UnitedHealthcare has allowed our employees to have somebody they can talk to, who can look at things we can’t look at and offer support.”

In one situation, an MPS employee was undergoing a transplant and received numerous bills for various appointments, tests and more. James brought clarity, helping the employee more effectively navigate their health care journey.

This level of service has also made Ronza’s job easier and strengthened the relationship between MPS and UnitedHealthcare.

“I’ve worked with a variety of health benefit vendors throughout the course of my career, but the experience with UnitedHealthcare and their service has been flawless and unmatched.”

More articles

Broker - page template - more news experience fragment, current broker or employer group client.

Access uhceservices to check commissions, manage eligibility, request ID cards and more.

American Academy of Sleep Medicine – Association for Sleep Clinicians and Researchers Logo

Contact | Patient Info | Foundation | AASM Engage JOIN Today   Login with CSICloud

  • AASM Scoring Manual
  • Artificial Intelligence
  • COVID-19 Resources
  • EHR Integration
  • Emerging Technology
  • Insomnia Toolkit for Clinicians
  • Patient Information
  • Practice Promotion Resources
  • Provider Fact Sheets
  • #SleepTechnology
  • Telemedicine

The logo of the Sleep Clinical Data Registry.

  • Annual Meeting
  • Career Center
  • Case Study of the Month
  • Change Agents Submission Winners
  • Compensation Survey
  • Conference Support
  • Continuing Medical Education (CME)
  • Maintenance of Certification (MOC)
  • State Sleep Societies
  • Talking Sleep Podcast
  • Young Investigators Research Forum (YIRF)

Sleep press release

  • Leadership Election
  • Board Nomination Process
  • Membership Directory
  • Volunteer Opportunities
  • International Assembly

A group of business people discussing at a table.

  • Accreditation News
  • Accreditation Verification
  • Accreditation: Contact Us
  • Program Changes

The logo of the American Academy of Sleep Medicine's accreditation.

AASM accreditation demonstrates a sleep medicine provider’s commitment to high quality, patient-centered care through adherence to these standards.

  • AASM Social Media Ambassador
  • Advertising
  • Affiliated Sites
  • Autoscoring Certification
  • Diversity, Equity and Inclusion
  • IEP Sponsors
  • Industry Programs
  • Newsletters
  • Patient Advocacy Roundtable
  • President’s Report
  • Social Media
  • Strategic Plan
  • Working at AASM
  • Practice Standards
  • Coding and Reimbursement
  • Choose Sleep
  • Advanced Practice Registered Nurses and Physician Assistants (APRN PA)
  • Accredited Sleep Technologist Education Program (ASTEP)
  • Inter-scorer Reliability (ISR)
  • Coding Education Program (A-CEP)
  • Individual Member – Benefits
  • Individual Member – Categories
  • Members-Only Resources
  • Apply for AASM Fellow
  • Individual Member – FAQs
  • Facility Member – Benefits
  • Facility Member – FAQs
  • Sleep Team Assemblies
  • Types of Accreditation
  • Choose AASM Accreditation
  • Special Application Types
  • Apply or Renew
  • Event Code of Conduct Policy
  • Guiding Principles for Industry Support
  • CMSS Financial Disclosure

Case Study of the Month August

Case Study of the Month – August 2024

Members only resource, share this story, choose your platform, related posts.

Case Study of the Month – July 2024

Case Study of the Month – July 2024

Case Study of the Month – June 2024

Case Study of the Month – June 2024

case study with evaluation

This website may not work correctly because your browser is out of date. Please update your browser .

Guidance for the design of qualitative case study evaluation

  • https://www.betterevaluation.org/sites/default/files/Vanclay.pdf File type PDF File size 1.11 MB

This guide, written by Professor Frank Vanclay of the Department of Cultural Geography, University of Groningen , provides notes on planning and implementing qualitative case study research. It outlines the use of a variety of different evaluation options that can be used in outcomes assessment and provides examples of the use of story based approaches with a discussion focused on their effectiveness.

"The attempt to identify what works and why are perennial questions for evaluators, program and project managers, funding agencies and policy makers. Policies, programs, plans and projects (hereafter all ‘programs’ for convenience) all start with good intent, often with long term and (over)optimistic goals. An important issue is how to assess the likelihood of success of these programs during their life, often before their goals have been fully achieved. Thus some sense of interim performance is needed, to provide feedback to finetune the program, to determine whether subsequent tranche payments should be made, and also to assist in decision making about whether similar programs should be funded." (Vanclay, 2012)

  • Introduction: the need for qualitative evaluation
  • A note on terminology
  • Quick overview of qualitative methods used in evaluation
  • Background: a short history of qualitative evaluation
  • Designing and conducting a story-based approach to qualitative evaluation
  • A real application of performance story reporting at the program level
  • How to undertake a performance story report evaluation
  • A personal assessment of the effectiveness of story-based evaluation
  • Speculation on the feasibility of story-based evaluation in the context of eu cohesion policy
  • Answers to some frequently asked questions 

Vanclay, F. (2012). Guidance for the design of qualitative case study evaluation,  Department of Cultural Geography, University of Groningen. Retrieved from:  http://ec.europa.eu/regional_policy/sources/docgener/evaluation/doc/performance/Vanclay.pdf

Related links

  • http://ec.europa.eu/regional_policy/sources/docgener/evaluation/doc/performance/Vanclay.pdf

Back to top

© 2022 BetterEvaluation. All right reserved.

IMAGES

  1. Writing A Case Study Analysis

    case study with evaluation

  2. 49 Free Case Study Templates ( + Case Study Format Examples + )

    case study with evaluation

  3. Case Study: How to Write, Examples, Types, and Templates

    case study with evaluation

  4. 49 Free Case Study Templates ( + Case Study Format Examples + )

    case study with evaluation

  5. 49 Free Case Study Templates ( + Case Study Format Examples + )

    case study with evaluation

  6. (PDF) Textbook Evaluation: A Case Study

    case study with evaluation

COMMENTS

  1. Case study

    There are different types of case studies, which can be used for different purposes in evaluation. The GAO (Government Accountability Office) has described six different types of case study: 1. Illustrative: This is descriptive in character and intended to add realism and in-depth examples to other information about a program or policy.

  2. PDF Using Case Studies to do Program Evaluation

    A case study evaluation for a program implemented in a turbulent environment should begin when program planning begins. A case study evaluation allows you to create a full, complex picture of what occurs in such environments. For example, ordinance work is pursued in political arenas, some of which are highly volatile.

  3. Case Study Evaluation Approach

    A case study evaluation approach is a great way to gain an in-depth understanding of a particular issue or situation. This type of approach allows the researcher to observe, analyze, and assess the effects of a particular situation on individuals or groups. An individual, a location, or a project may serve as the focal point of a case study's ...

  4. 15.7 Evaluation: Presentation and Analysis of Case Study

    Learning Outcomes. By the end of this section, you will be able to: Revise writing to follow the genre conventions of case studies. Evaluate the effectiveness and quality of a case study report. Case studies follow a structure of background and context, methods, findings, and analysis. Body paragraphs should have main points and concrete details.

  5. Case study research for better evaluations of complex interventions

    The need for better methods for evaluation in health research has been widely recognised. The 'complexity turn' has drawn attention to the limitations of relying on causal inference from randomised controlled trials alone for understanding whether, and under which conditions, interventions in complex systems improve health services or the public health, and what mechanisms might link ...

  6. Designing process evaluations using case study to explore the context

    A well-designed process evaluation using case study should consider the following core components: the purpose; definition of the intervention; the trial design, the case, the theories or logic models underpinning the intervention, the sampling approach and the conceptual or theoretical framework. We describe each of these in detail and ...

  7. Case study evaluations

    Resource link. Case study evaluations - World Bank. This guide, written by Linda G. Morra and Amy C. Friedlander for the World Bank, provides guidance and advice on the use of case studies. The paper attempts to clarify what is and is not a case study, what is case study methodology, how they can be used, and how they should be written up for ...

  8. Qualitative Research: Case study evaluation

    Case study evaluations, using one or more qualitative methods, have been used to investigate important practical and policy questions in health care. This paper describes the features of a well designed case study and gives examples showing how qualitative methods are used in evaluations of health services and health policy. This is the last in a series of seven articles describing non ...

  9. (PDF) Case study evaluation

    Case study evaluations, using one or more qualitative methods, have been used to investigate important practical and policy questions in health care. This paper describes the features of a well ...

  10. How to Critically Evaluate Case Studies in Social Work

    The purpose of this article is to develop guidelines to assist practitioners and researchers in evaluating and developing rigorous case studies. The main concern in evaluating a case study is to accurately assess its quality and ultimately to offer clients social work interventions informed by the best available evidence.

  11. Program Evaluation

    In every chapter, case studies provide real world examples of evaluations broken down into the main elements of program evaluation: the needs that led to the program, the implementation of program plans, the people connected to the program, unexpected side effects, the role of evaluators in improving programs, the results, and the factors ...

  12. Designing process evaluations using case study to explore the context

    Realist evaluation can be used to explore the relationship between context, mechanism and outcome, but case study differs from realist evaluation by its focus on a holistic and in-depth understanding of the relationship between an intervention and the contemporary context in which it was implemented . Case study enables researchers to choose ...

  13. PDF Case Study Evaluations

    Case studies are appropriate for determining the effects of programs or projects and reasons for success or failure. OED does most impact evaluation case studies for this purpose. The method is often used in combination with others, such as sample surveys, and there is a mix of qualitative and quantitative data.

  14. Case Study Evaluation: Past, Present and Future Challenges:

    Case study evaluation proves an alternative that allows for the less-than-literal in the form of analysis of contingencies - how people, phenomena and events may be related in dynamic ways, how context and action have only a blurred dividing line and how what defines the case as a case may only emerge late in the study.

  15. How to Critically Evaluate Case Studies in Social Work

    The main concern in evaluating a case study is to accurately assess its quality and ultimately to offer clients social work interventions informed by the best available evidence. To assess the ...

  16. Writing a Case Study Analysis

    Identify the key problems and issues in the case study. Formulate and include a thesis statement, summarizing the outcome of your analysis in 1-2 sentences. Background. Set the scene: background information, relevant facts, and the most important issues. Demonstrate that you have researched the problems in this case study. Evaluation of the Case

  17. Case Study Research Method in Psychology

    Descriptive case studies: Describe an intervention or phenomenon and the real-life context in which it occurred. It is helpful for illustrating certain topics within an evaluation. Multiple-case studies: Used to explore differences between cases and replicate findings across cases. Helpful for comparing and contrasting specific cases.

  18. Program Evaluation: Case Study Evaluations

    Highlights. GAO presented information on the use of case study evaluations for GAO audit and evaluation work, focusing on: (1) the definition of a case study; (2) conditions under which a case study is an appropriate evaluation method for GAO work; and (3) distinguishing a good case study from a poor one. GAO also included information on: (1 ...

  19. Week 32: Better use of case studies in evaluation

    Cumulative: This brings together findings from many case studies to answer evaluative questions. Comparative case studies: These are not only multiple case studies but ones which are designed to use the comparisons between the cases to build and test hypotheses. 3. Match sampling, data collection, analysis and reporting to the type of case.

  20. Case study evaluations

    Daily Updates of the Latest Projects & Documents. This document is being processed or is not available. The Operations Evaluation Department (OED) uses case studies for in-depth consideration of the results of a project or group of projects or to illustrate given points. .

  21. Evaluation of infrastructure quality: Telecommunication projects as a

    This study sought to investigate and assess the quality of infrastructure projects and the extent to which quality levels are influenced by the use of knowledge areas for project management (using Ministry of Telecommunications projects as a case study).

  22. Case 25-2024: A 12-Year-Old Boy with Autism and Decreased Vision

    In a study of optic atrophy in 218 children in the United States, the most common cause was a tumor (in 29%), and only 2 patients (1%) had toxic or metabolic disease. 14 In a more recent study of ...

  23. Full article: Integrating dual evaluation and FLUS model for land use

    Integrating dual evaluation and FLUS model for land use simulation and urban growth boundary delineation in production-living-ecology spaces: a case study of Central Harbin, China ... Measuring the efficiency and driving factors of urban land use based on the DEA method and the PLS-SEM model—a case study of 35 large and medium-sized cities in ...

  24. Reducing asthma attacks in disadvantaged school children with asthma

    For this hybrid type 2 implementation-effectiveness trial, we are using PRISM to guide our approach and evaluation across our 4 study phases of Exploration, Preparation, Implementation and Sustainment ... and provides asthma education, case management and care coordination across families, school nurses/teams, health care teams and community ...

  25. Sustainability

    The authors have made substantial efforts to address the comments and improve the quality of the study titled "Evaluation of Urban Resilience and Its Influencing Factors: Case Study of the Yichang-Jingzhou-Jingmen-Enshi Urban Agglomeration in China." ... A Case Study of the Yichang-Jingzhou-Jingmen-Enshi Urban Agglomeration in China ...

  26. Case study: School district works to give employees a supportive health

    With UnitedHealthcare, Minneapolis Public Schools has experienced a higher utilization of benefits, quicker resolution of issues and an improved health care experience for employees.

  27. Case Study of the Month

    Case Study of the Month; Change Agents Submission Winners; Compensation Survey; Conference Support; Continuing Medical Education (CME) Maintenance of Certification (MOC) State Sleep Societies; Talking Sleep Podcast; Young Investigators Research Forum (YIRF)

  28. Guidance for the design of qualitative case study evaluation

    This guide, written by Professor Frank Vanclay of the Department of Cultural Geography, University of Groningen, provides notes on planning and implementing qualitative case study research.It outlines the use of a variety of different evaluation options that can be used in outcomes assessment and provides examples of the use of story based approaches with a discussion focused on their ...

  29. Periorbital melanosis and its possible association with insulin

    Skin can serve as a window to a patient's overall health and its changes can occasionally indicate underlying disorders. 1 Periorbital melanosis (POM) is a common benign skin condition that can affect men and women of any age and is characterized by bilateral skin hyperpigmentation that can be periorbital or infraorbital. 2 POM can occur as a primary disorder independent of any systemic or ...

  30. Oral fibrolipoma in dogs: Retrospective case series study and

    Oral fibrolipoma in dogs: Retrospective case series study and comparative review. ... Culp WT, Ehrhart N, Withrow SJ, et al. Results of surgical excision and evaluation of factors associated with survival time in dogs with lingual neoplasia: 97 cases (1995-2008). J Am Vet Med Assoc. 2013;242:1392-1397. Crossref.