The case study approach

Sarah Crowe

1 Division of Primary Care, The University of Nottingham, Nottingham, UK

Kathrin Cresswell

2 Centre for Population Health Sciences, The University of Edinburgh, Edinburgh, UK

Ann Robertson

3 School of Health in Social Science, The University of Edinburgh, Edinburgh, UK

Anthony Avery

Aziz Sheikh

Abstract

The case study approach allows in-depth, multi-faceted explorations of complex issues in their real-life settings. The value of the case study approach is well recognised in the fields of business, law and policy, but somewhat less so in health services research. Based on our experiences of conducting several health-related case studies, we reflect on the different types of case study design, the specific research questions this approach can help answer, the data sources that tend to be used, and the particular advantages and disadvantages of employing this methodological approach. The paper concludes with key pointers to aid those designing and appraising proposals for conducting case study research, and a checklist to help readers assess the quality of case study reports.

Introduction

The case study approach is particularly useful to employ when there is a need to obtain an in-depth appreciation of an issue, event or phenomenon of interest, in its natural real-life context. Our aim in writing this piece is to provide insights into when to consider employing this approach and an overview of key methodological considerations in relation to the design, planning, analysis, interpretation and reporting of case studies.

The illustrative 'grand round', 'case report' and 'case series' have a long tradition in clinical practice and research. Presenting detailed critiques, typically of one or more patients, aims to provide insights into aspects of the clinical case and, in doing so, illustrate broader lessons that may be learnt. In research, the conceptually-related case study approach can be used, for example, to describe in detail a patient's episode of care, explore professional attitudes to and experiences of a new policy initiative or service development, or more generally to 'investigate contemporary phenomena within its real-life context'[1]. Based on our experiences of conducting a range of case studies, we reflect on when to consider using this approach, discuss the key steps involved and illustrate, with examples, some of the practical challenges of attaining an in-depth understanding of a 'case' as an integrated whole. In keeping with previously published work, we acknowledge the importance of theory to underpin the design, selection, conduct and interpretation of case studies[2]. In so doing, we make passing reference to the different epistemological approaches used in case study research by key theoreticians and methodologists in this field of enquiry.

This paper is structured around the following main questions: What is a case study? What are case studies used for? How are case studies conducted? What are the potential pitfalls and how can these be avoided? We draw in particular on four of our own recently published examples of case studies (see Tables 1, 2, 3 and 4) and those of others to illustrate our discussion[3-7].

Table 1. Example of a case study investigating the reasons for differences in recruitment rates of minority ethnic people in asthma research[3]

Context: Minority ethnic people experience considerably greater morbidity from asthma than the White majority population. Research has shown, however, that these minority ethnic populations are likely to be under-represented in research undertaken in the UK; there is comparatively less marginalisation in the US.
Objective: To investigate approaches to bolster recruitment of South Asians into UK asthma studies through qualitative research with US and UK researchers, and UK community leaders.
Design: Single intrinsic case study.
Case: Centred on the issue of recruitment of South Asian people with asthma.
Data collection: In-depth interviews were conducted with asthma researchers from the UK and US. A supplementary questionnaire was also provided to researchers.
Analysis: Framework approach.
Key findings: Barriers to ethnic minority recruitment were found to centre around:
 1. The attitudes of the researchers towards inclusion: the majority of UK researchers interviewed were generally supportive of the idea of recruiting ethnically diverse participants but expressed major concerns about the practicalities of achieving this; in contrast, the US researchers appeared much more committed to the policy of inclusion.
 2. Stereotypes and prejudices: we found that some of the UK researchers' perceptions of ethnic minorities may have influenced their decisions on whether to approach individuals from particular ethnic groups. These stereotypes centred on issues to do with, amongst others, language barriers and lack of altruism.
 3. Demographic, political and socioeconomic contexts of the two countries: researchers suggested that the demographic profile of ethnic minorities, their political engagement and the different configuration of the health services in the UK and the US may have contributed to differential recruitment rates.
 4. Above all, however, it appeared that the overriding importance of the US National Institutes of Health's policy of mandating the inclusion of minority ethnic people (and women) had a major impact on shaping the attitudes, and in turn the experiences, of US researchers; the absence of any similar mandate in the UK meant that UK-based researchers had not been forced to challenge their existing practices and were hence unable to overcome stereotypical/prejudicial attitudes through experiential learning.

Table 2. Example of a case study investigating the process of planning and implementing a service in Primary Care Organisations[4]

Context: Health workforces globally need to reorganise and reconfigure in order to meet, in an efficient and sustainable manner, the challenges posed by the increased numbers of people living with long-term conditions. Through studying the introduction of General Practitioners with a Special Interest in respiratory disorders, this study aimed to provide insights into this important issue by focusing on community respiratory service development.
Objective: To understand and compare the process of workforce change in respiratory services and the impact on patient experience (specifically in relation to the role of general practitioners with special interests) in a theoretically selected sample of Primary Care Organisations (PCOs), in order to derive models of good practice in planning and the implementation of a broad range of workforce issues.
Design: Multiple-case design of respiratory services in health regions in England and Wales.
Cases: Four PCOs.
Data collection: Face-to-face and telephone interviews, e-mail discussions, local documents, patient diaries, news items identified from local and national websites, and a national workshop.
Analysis: Reading, coding and comparison progressed iteratively.
Key findings:
 1. In the screening phase of this study (which involved semi-structured telephone interviews with the person responsible for driving the reconfiguration of respiratory services in 30 PCOs), the barriers of financial deficit, organisational uncertainty, disengaged clinicians and contradictory policies proved insurmountable for many PCOs seeking to develop sustainable services. A key rationale for PCO reorganisation in 2006 was to strengthen the commissioning function of PCOs and of clinicians through Practice-Based Commissioning. However, the turbulence which surrounded reorganisation was found to have had the opposite of the desired effect.
 2. Implementing workforce reconfiguration was strongly influenced by the negotiation and contest among local clinicians and managers about "ownership" of work and income.
 3. Despite the intention to make the commissioning system more transparent, personal relationships based on common professional interests, past work history, friendships and collegiality remained key drivers for sustainable innovation in service development.
Limitations: It was only possible to undertake in-depth work in a selective number of PCOs and, even within these selected PCOs, it was not possible to interview all informants of potential interest and/or obtain all relevant documents. This work was conducted in the early stages of a major NHS reorganisation in England and Wales and thus events are likely to have continued to evolve beyond the study period; we therefore cannot claim to have seen any of the stories through to their conclusion.

Table 3. Example of a case study investigating the introduction of electronic health records[5]

Context: Healthcare systems globally are moving from paper-based record systems to electronic health record systems. In 2002, the NHS in England embarked on the most ambitious and expensive IT-based transformation in healthcare in history, seeking to introduce electronic health records into all hospitals in England by 2010.
Objective: To describe and evaluate the implementation and adoption of detailed electronic health records in secondary care in England and thereby provide formative feedback for local and national rollout of the NHS Care Records Service.
Design: A mixed methods, longitudinal, multi-site, socio-technical collective case study.
Cases: Five NHS acute hospital and mental health Trusts that have been the focus of early implementation efforts.
Data collection: Semi-structured interviews, documentary data and field notes, observations and quantitative data.
Analysis: Qualitative data were analysed thematically using a socio-technical coding matrix, combined with additional themes that emerged from the data.
Key findings:
 1. Hospital electronic health record systems have developed and been implemented far more slowly than was originally envisioned.
 2. The top-down, government-led standardised approach needed to evolve to admit more variation and greater local choice for hospitals in order to support local service delivery.
 3. A range of adverse consequences were associated with the centrally negotiated contracts, which excluded the hospitals in question.
 4. The unrealistic, politically driven timeline (implementation over 10 years) was found to be a major source of frustration for developers, implementers and healthcare managers and professionals alike.
Limitations: We were unable to access details of the contracts between government departments and the Local Service Providers responsible for delivering and implementing the software systems. This, in turn, made it difficult to develop a holistic understanding of some key issues impacting on the overall slow roll-out of the NHS Care Records Service. Early adopters may also have differed in important ways from NHS hospitals that planned to join the National Programme for Information Technology and implement the NHS Care Records Service at a later point in time.

Table 4. Example of a case study investigating the formal and informal ways students learn about patient safety[6]

Context: There is a need to reduce the disease burden associated with iatrogenic harm and, considering that healthcare education represents perhaps the most sustained patient safety initiative ever undertaken, it is important to develop a better appreciation of the ways in which undergraduate and newly qualified professionals receive and make sense of the education they receive.
Objective: To investigate the formal and informal ways pre-registration students from a range of healthcare professions (medicine, nursing, physiotherapy and pharmacy) learn about patient safety in order to become safe practitioners.
Design: Multi-site, mixed method collective case study.
Cases: Eight case studies (two for each professional group) were carried out in educational provider sites, considering different programmes, practice environments and models of teaching and learning.
Data collection: Structured in phases relevant to the three knowledge contexts:
 • Documentary evidence (including undergraduate curricula, handbooks and module outlines), complemented with a range of views (from course leads, tutors and students) and observations in a range of academic settings.
 • Policy and management views of patient safety and influences on patient safety education and practice. NHS policies included, for example, implementation of National Patient Safety Agency guidance, which encourages organisations to develop an organisational safety culture in which staff members feel comfortable identifying dangers and reporting hazards.
 • The cultures to which students are exposed, i.e. patient safety in relation to day-to-day working. NHS initiatives included, for example, a hand washing initiative or the introduction of infection control measures.
Key findings:
 1. Practical, informal learning opportunities were valued by students. On the whole, however, students were not exposed to nor engaged with important NHS initiatives such as risk management activities and incident reporting schemes.
 2. NHS policy appeared to have been taken seriously by course leaders. Patient safety materials were incorporated into both formal and informal curricula, albeit largely implicitly rather than explicitly.
 3. Resource issues and peer pressure were found to influence safe practice. Variations were also found to exist in students' experiences and the quality of the supervision available.
Limitations: The curriculum and organisational documents collected differed between sites, which possibly reflected gatekeeper influences at each site. The recruitment of participants for focus group discussions proved difficult, so interviews or paired discussions were used as a substitute.

What is a case study?

A case study is a research approach that is used to generate an in-depth, multi-faceted understanding of a complex issue in its real-life context. It is an established research design that is used extensively in a wide variety of disciplines, particularly in the social sciences. A case study can be defined in a variety of ways (Table 5), the central tenet being the need to explore an event or phenomenon in depth and in its natural context. It is for this reason sometimes referred to as a "naturalistic" design; this is in contrast to an "experimental" design (such as a randomised controlled trial) in which the investigator seeks to exert control over and manipulate the variable(s) of interest.

Table 5. Definitions of a case study

[Table of definitions drawn from Stake[8] (p. 237); Yin[1,27,28] (Yin 1994, p. 13; Yin 1999, p. 1211; Yin 2009, p. 18); Miles and Huberman[23] (p. 25); Green and Thorogood[29] (p. 284); and George and Bennett[12] (p. 17).]

Stake's work has been particularly influential in defining the case study approach to scientific enquiry. He has helpfully characterised three main types of case study: intrinsic, instrumental and collective[8]. An intrinsic case study is typically undertaken to learn about a unique phenomenon. The researcher should define the uniqueness of the phenomenon, which distinguishes it from all others. In contrast, the instrumental case study uses a particular case (some of which may be better than others) to gain a broader appreciation of an issue or phenomenon. The collective case study involves studying multiple cases simultaneously or sequentially in an attempt to generate a still broader appreciation of a particular issue.

These are, however, not necessarily mutually exclusive categories. In the first of our examples (Table 1), we undertook an intrinsic case study to investigate the issue of recruitment of minority ethnic people into the specific context of asthma research studies, but it developed into an instrumental case study through seeking to understand the issue of recruitment of these marginalised populations more generally, generating a number of findings that are potentially transferable to other disease contexts[3]. In contrast, the other three examples (see Tables 2, 3 and 4) employed collective case study designs to study the introduction of workforce reconfiguration in primary care, the implementation of electronic health records into hospitals, and the ways in which healthcare students learn about patient safety considerations[4-6]. Although our study focusing on the introduction of General Practitioners with Specialist Interests (Table 2) was explicitly collective in design (four contrasting primary care organisations were studied), it was also instrumental in that this particular professional group was studied as an exemplar of the more general phenomenon of workforce redesign[4].

What are case studies used for?

According to Yin, case studies can be used to explain, describe or explore events or phenomena in the everyday contexts in which they occur[1]. These can, for example, help to understand and explain causal links and pathways resulting from a new policy initiative or service development (see, for example, Tables 2 and 3)[1]. In contrast to experimental designs, which seek to test a specific hypothesis through deliberately manipulating the environment (for example, in a randomised controlled trial giving a new drug to randomly selected individuals and then comparing outcomes with controls)[9], the case study approach lends itself well to capturing information on more explanatory 'how', 'what' and 'why' questions, such as 'how is the intervention being implemented and received on the ground?'. The case study approach can offer additional insights into what gaps exist in its delivery or why one implementation strategy might be chosen over another. This in turn can help develop or refine theory, as shown in our study of the teaching of patient safety in undergraduate curricula (Table 4)[6,10]. The key questions to consider when selecting the most appropriate study design are whether it is desirable, or indeed possible, to undertake a formal experimental investigation in which individuals and/or organisations are allocated to an intervention or control arm, or whether the wish is to obtain a more naturalistic understanding of an issue. The former is ideally studied using a controlled experimental design, whereas the latter is more appropriately studied using a case study design.

Case studies may be approached in different ways depending on the epistemological standpoint of the researcher, that is, whether they take a critical (questioning one's own and others' assumptions), interpretivist (trying to understand individual and shared social meanings) or positivist approach (orientating towards the criteria of natural sciences, such as focusing on generalisability considerations) (Table 6). Whilst such a schema can be conceptually helpful, it may be appropriate to draw on more than one approach in any case study, particularly in the context of conducting health services research. Doolin has, for example, noted that in the context of undertaking interpretative case studies, researchers can usefully draw on a critical, reflective perspective which seeks to take into account the wider social and political environment that has shaped the case[11].

Table 6. Example of epistemological approaches that may be used in case study research

Critical: Involves questioning one's own assumptions, taking into account the wider political and social environment, and interprets the limiting conditions in relation to power and control that are thought to influence behaviour. Criticisms: it can possibly neglect other factors by focusing only on power relationships, and may give the researcher a position that is too privileged. Key references: Howcroft and Trauth[30], Blaikie[31], Doolin[11,32], Bloomfield and Best[33].

Interpretivist: Involves understanding meanings/contexts and processes as perceived from different perspectives, trying to understand individual and shared social meanings; the focus is on theory building. Criticisms: it is often difficult to explain unintended consequences, and surrounding historical contexts can be neglected. Key references: Stake[8], Doolin[11].

Positivist: Involves establishing in advance which variables one wishes to study and seeing whether they fit with the findings; the focus is often on testing and refining theory on the basis of case study findings. Criticisms: it does not take into account the role of the researcher in influencing findings. Key references: Yin[1,27,28], Shanks and Parr[34].

How are case studies conducted?

Here, we focus on the main stages of research activity when planning and undertaking a case study; the crucial stages are: defining the case; selecting the case(s); collecting and analysing the data; interpreting data; and reporting the findings.

Defining the case

Carefully formulated research question(s), informed by the existing literature and a prior appreciation of the theoretical issues and setting(s), are all important in appropriately and succinctly defining the case[8,12]. Crucially, each case should have a pre-defined boundary which clarifies the nature and time period covered by the case study (i.e. its scope, beginning and end), the relevant social group, organisation or geographical area of interest to the investigator, the types of evidence to be collected, and the priorities for data collection and analysis (see Table 7)[1]. A theory-driven approach to defining the case may help generate knowledge that is potentially transferable to a range of clinical contexts and behaviours; using theory is also likely to result in a more informed appreciation of, for example, how and why interventions have succeeded or failed[13].

Table 7. Example of a checklist for rating a case study proposal[8]

Clarity: Does the proposal read well?
Integrity: Do its pieces fit together?
Attractiveness: Does it pique the reader's interest?
The case: Is the case adequately defined?
The issues: Are major research questions identified?
Data Resource: Are sufficient data sources identified?
Case Selection: Is the selection plan reasonable?
Data Gathering: Are data-gathering activities outlined?
Validation: Is the need and opportunity for triangulation indicated?
Access: Are arrangements for start-up anticipated?
Confidentiality: Is there sensitivity to the protection of people?
Cost: Are time and resource estimates reasonable?

For example, in our evaluation of the introduction of electronic health records in English hospitals (Table 3), we defined our cases as the NHS Trusts that were receiving the new technology[5]. Our focus was on how the technology was being implemented. However, if the primary research interest had been on the social and organisational dimensions of implementation, we might have defined our case differently, as a grouping of healthcare professionals (e.g. doctors and/or nurses). The precise beginning and end of the case may however prove difficult to define. Pursuing this same example, when does the process of implementation and adoption of an electronic health record system really begin or end? Such judgements will inevitably be influenced by a range of factors, including the research question, the theory of interest, the scope and richness of the gathered data and the resources available to the research team.

Selecting the case(s)

The decision on how to select the case(s) to study is a very important one that merits some reflection. In an intrinsic case study, the case is selected on its own merits[8]. The case is selected not because it is representative of other cases, but because of its uniqueness, which is of genuine interest to the researchers. This was, for example, the case in our study of the recruitment of minority ethnic participants into asthma research (Table 1), as our earlier work had demonstrated the marginalisation of minority ethnic people with asthma, despite evidence of disproportionate asthma morbidity[14,15]. In another example of an intrinsic case study, Hellström et al.[16] studied an elderly married couple living with dementia to explore how dementia had impacted on their understanding of home, their everyday life and their relationships.

For an instrumental case study, selecting a "typical" case can work well[8]. In contrast to the intrinsic case study, the particular case which is chosen is of less importance than selecting a case that allows the researcher to investigate an issue or phenomenon. For example, in order to gain an understanding of doctors' responses to health policy initiatives, Som undertook an instrumental case study interviewing clinicians who had a range of responsibilities for clinical governance in one NHS acute hospital trust[17]. Sampling a "deviant" or "atypical" case may however prove even more informative, potentially enabling the researcher to identify causal processes, generate hypotheses and develop theory.

In collective or multiple case studies, a number of cases are carefully selected. This offers the advantage of allowing comparisons to be made across several cases and/or replication. Choosing a "typical" case may enable the findings to be generalised to theory (i.e. analytical generalisation) or to test theory by replicating the findings in a second or even a third case (i.e. replication logic)[1]. Yin suggests two or three literal replications (i.e. predicting similar results) if the theory is straightforward, and five or more if the theory is more subtle. However, critics might argue that selecting 'cases' in this way is insufficiently reflexive and ill-suited to the complexities of contemporary healthcare organisations.

The selected case study site(s) should allow the research team access to the group of individuals, the organisation, the processes or whatever else constitutes the chosen unit of analysis for the study. Access is therefore a central consideration; the researcher needs to come to know the case study site(s) well and to work cooperatively with them. Selected cases need to be not only interesting but also hospitable to the inquiry[8] if they are to be informative and answer the research question(s). Case study sites may also be pre-selected for the researcher, with decisions being influenced by key stakeholders. For example, our selection of case study sites in the evaluation of the implementation and adoption of electronic health record systems (see Table 3) was heavily influenced by NHS Connecting for Health, the government agency that was responsible for overseeing the National Programme for Information Technology (NPfIT)[5]. This prominent stakeholder had already selected the NHS sites (through a competitive bidding process) to be early adopters of the electronic health record systems and had negotiated contracts that detailed the deployment timelines.

It is also important to consider in advance the likely burden and risks associated with participation for those who (or the site(s) which) comprise the case study. Of particular importance is the obligation for the researcher to think through the ethical implications of the study (e.g. the risk of inadvertently breaching anonymity or confidentiality) and to ensure that potential participants/participating sites are provided with sufficient information to make an informed choice about joining the study. The outcome of providing this information might be that the emotive burden associated with participation, or the organisational disruption associated with supporting the fieldwork, is considered so high that the individuals or sites decide against participation.

In our example of evaluating implementations of electronic health record systems, given the restricted number of early adopter sites available to us, we sought purposively to select a diverse range of implementation cases among those that were available[5]. We chose a mixture of teaching, non-teaching and Foundation Trust hospitals, and examples of each of the three electronic health record systems procured centrally by the NPfIT. At one recruited site, it quickly became apparent that access was problematic because of competing demands on that organisation. Recognising the importance of full access and co-operative working for generating rich data, the research team decided not to pursue work at that site and instead to focus on other recruited sites.

Collecting the data

In order to develop a thorough understanding of the case, the case study approach usually involves the collection of multiple sources of evidence, using a range of quantitative (e.g. questionnaires, audits and analysis of routinely collected healthcare data) and, more commonly, qualitative techniques (e.g. interviews, focus groups and observations). The use of multiple sources of data (data triangulation) has been advocated as a way of increasing the internal validity of a study (i.e. the extent to which the method is appropriate to answer the research question)[8,18-21]. An underlying assumption is that data collected in different ways should lead to similar conclusions, and approaching the same issue from different angles can help develop a holistic picture of the phenomenon (Table 2)[4].

Brazier and colleagues used a mixed-methods case study approach to investigate the impact of a cancer care programme[22]. Here, quantitative measures were collected with questionnaires before, and five months after, the start of the intervention; these did not yield any statistically significant results. Qualitative interviews with patients, however, helped provide an insight into potentially beneficial process-related aspects of the programme, such as greater perceived patient involvement in care. The authors reported how this case study approach identified a number of contextual factors likely to influence the effectiveness of the intervention, which were not likely to have been captured by quantitative methods alone.

In collective or multiple case studies, data collection needs to be flexible enough to allow a detailed description of each individual case to be developed (e.g. the nature of different cancer care programmes), before considering the emerging similarities and differences in cross-case comparisons (e.g. to explore why one programme is more effective than another). It is important that data sources from different cases are, where possible, broadly comparable for this purpose even though they may vary in nature and depth.

Analysing, interpreting and reporting case studies

Making sense of, and offering a coherent interpretation of, the typically disparate sources of data (whether qualitative alone or together with quantitative) is far from straightforward. Repeated reviewing and sorting of the voluminous and detail-rich data are integral to the process of analysis. In collective case studies, it is helpful to analyse data relating to the individual component cases first, before making comparisons across cases. Attention needs to be paid to variations within each case and, where relevant, the relationship between different causes, effects and outcomes[23]. Data will need to be organised and coded to allow the key issues, both derived from the literature and emerging from the dataset, to be easily retrieved at a later stage. An initial coding frame can help capture these issues and can be applied systematically to the whole dataset with the aid of a qualitative data analysis software package.
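
To make the idea of an initial coding frame concrete, here is a minimal, hypothetical sketch in Python: the codes, keywords, extracts and the keyword-matching rule are all invented for illustration. Real coding is an interpretive judgement, usually supported by a dedicated qualitative data analysis package rather than a script, but the sketch shows the basic bookkeeping of applying a frame systematically to a dataset.

```python
# Illustrative sketch only: applying an initial coding frame to interview
# extracts. The codes, keywords and extracts below are hypothetical.
from collections import defaultdict

# Initial coding frame: code name -> indicative keywords.
# Real coding is interpretive; keyword matching is only a crude first pass.
coding_frame = {
    "access_barriers": ["access", "gatekeeper", "permission"],
    "staff_attitudes": ["attitude", "supportive", "reluctant"],
    "resource_constraints": ["funding", "time", "staffing"],
}

# Hypothetical interview extracts, keyed by participant ID.
extracts = {
    "P01": "Gaining permission from the gatekeeper took months.",
    "P02": "Staff were broadly supportive but short of time.",
}

def index_extracts(frame, data):
    """Return a mapping of code -> participant IDs whose extracts
    contain any of that code's indicative keywords."""
    index = defaultdict(list)
    for pid, text in data.items():
        lowered = text.lower()
        for code, keywords in frame.items():
            if any(kw in lowered for kw in keywords):
                index[code].append(pid)
    return dict(index)

print(index_extracts(coding_frame, extracts))
# {'access_barriers': ['P01'], 'staff_attitudes': ['P02'],
#  'resource_constraints': ['P02']}
```

Retrieval then amounts to looking up a code and revisiting the indexed extracts, which is essentially what packages such as NVivo automate at scale.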

The Framework approach is a practical approach to managing and analysing large datasets, comprising five stages (familiarisation; identifying a thematic framework; indexing; charting; and mapping and interpretation); it is particularly useful if time is limited, as was the case in our study of the recruitment of South Asians into asthma research (Table 1)[3,24]. Theoretical frameworks may also play an important role in integrating different sources of data and examining emerging themes. For example, we drew on a socio-technical framework to help explain the connections between different elements - technology, people, and the organisational settings within which they worked - in our study of the introduction of electronic health record systems (Table 3)[5]. Our study of patient safety in undergraduate curricula drew on an evaluation-based approach to design and analysis, which emphasised the importance of the academic, organisational and practice contexts through which students learn (Table 4)[6].
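
As a rough sketch of the charting stage of the Framework approach, the snippet below lays summarised data out as a theme-by-case matrix; the themes, cases and cell summaries are hypothetical stand-ins, and in practice such charts are assembled judgementally rather than generated by code.

```python
# Rough sketch of a Framework-style chart: rows are themes from the
# thematic framework, columns are cases, and cells hold distilled
# summaries. All labels and summaries here are invented for illustration.

themes = ["Recruitment barriers", "Researcher attitudes", "Policy context"]
cases = ["Case A", "Case B"]

# Summaries would be distilled from indexed transcripts during charting.
chart = {
    ("Recruitment barriers", "Case A"): "Language support lacking",
    ("Recruitment barriers", "Case B"): "Travel costs cited",
    ("Researcher attitudes", "Case A"): "Supportive in principle",
    ("Policy context", "Case B"): "No inclusion mandate",
}

# Print the matrix so within-case (columns) and cross-case (rows)
# patterns can be scanned at a glance.
col_width = 26
print("Theme".ljust(col_width) + "".join(c.ljust(col_width) for c in cases))
for theme in themes:
    row = theme.ljust(col_width)
    for case in cases:
        row += chart.get((theme, case), "-").ljust(col_width)
    print(row)
```

Reading along a row supports cross-case comparison for a theme, while reading down a column keeps the integrity of each individual case in view.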

Case study findings can have implications both for theory development and theory testing. They may establish, strengthen or weaken historical explanations of a case and, in certain circumstances, allow theoretical (as opposed to statistical) generalisation beyond the particular cases studied[12]. These theoretical lenses should not, however, constitute a strait-jacket, and the cases should not be "forced to fit" the particular theoretical framework that is being employed.

When reporting findings, it is important to provide the reader with enough contextual information to understand the processes that were followed and how the conclusions were reached. In a collective case study, researchers may choose to present the findings from individual cases separately before amalgamating across cases. Care must be taken to ensure the anonymity of both case sites and individual participants (if agreed in advance) by allocating appropriate codes or withholding descriptors. In the example given in Table 3, we decided against providing detailed information on the NHS sites and individual participants in order to avoid the risk of inadvertent disclosure of identities[5,25].
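
As a minimal illustration of allocating anonymising codes to sites and participants (all names below are fictitious, and protecting anonymity in practice also means guarding against re-identification from contextual detail), a coding key might be maintained like this, stored securely and separately from the research data:

```python
# Minimal sketch of allocating anonymising codes to case sites and
# participants. All names are fictitious; a real code-key must be kept
# securely and separately from the research data.
import itertools

def build_code_key(names, prefix):
    """Assign sequential codes (e.g. Site01, P01) to a list of names."""
    counter = itertools.count(1)
    return {name: f"{prefix}{next(counter):02d}" for name in names}

site_key = build_code_key(["Northtown Hospital", "Westvale Trust"], "Site")
participant_key = build_code_key(["Dr A. Example", "Nurse B. Sample"], "P")

def anonymise(text, *keys):
    """Replace every known name in the text with its allocated code."""
    for key in keys:
        for name, code in key.items():
            text = text.replace(name, code)
    return text

quote = "Dr A. Example described delays at Northtown Hospital."
print(anonymise(quote, site_key, participant_key))
# -> "P01 described delays at Site01."
```

Withholding descriptors (for example, reporting a site only as 'a teaching hospital in England') works the same way in prose: the key linking codes back to identities is the sensitive artefact to protect.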

What are the potential pitfalls and how can these be avoided?

The case study approach is, as with all research, not without its limitations. When investigating the formal and informal ways undergraduate students learn about patient safety (Table 4), for example, we rapidly accumulated a large quantity of data. The volume of data, together with the time restrictions in place, impacted on the depth of analysis that was possible within the available resources. This highlights a more general point: the temptation to collect as much data as possible should be resisted, and adequate time needs to be set aside for data analysis and interpretation of what are often highly complex datasets.

Case study research has sometimes been criticised for lacking scientific rigour and providing little basis for generalisation (i.e. producing findings that may be transferable to other settings)[1]. There are several ways to address these concerns, including: the use of theoretical sampling (i.e. drawing on a particular conceptual framework); respondent validation (i.e. participants checking emerging findings and the researcher's interpretation, and providing an opinion as to whether they feel these are accurate); and transparency throughout the research process (see Table 8)[8,18-21,23,26]. Transparency can be achieved by describing in detail the steps involved in case selection, data collection, the reasons for the particular methods chosen, and the researcher's background and level of involvement (i.e. being explicit about how the researcher has influenced data collection and interpretation). Seeking potential alternative explanations, and being explicit about how interpretations and conclusions were reached, help readers to judge the trustworthiness of the case study report. Stake provides a critique checklist for a case study report (Table 9)[8].

Table 8. Potential pitfalls and mitigating actions when undertaking case study research

  • Potential pitfall: Selecting/conceptualising the wrong case(s), resulting in a lack of theoretical generalisation. Mitigating action: Develop in-depth knowledge of the theoretical and empirical literature; justify the choices made.
  • Potential pitfall: Collecting large volumes of data that are not relevant to the case, or too little data to be of any value. Mitigating action: Focus data collection in line with the research questions, whilst being flexible and allowing different paths to be explored.
  • Potential pitfall: Defining/bounding the case. Mitigating action: Focus on related components (either by time and/or space); be clear about what is outside the scope of the case.
  • Potential pitfall: Lack of rigour. Mitigating action: Triangulation, respondent validation, the use of theoretical sampling, and transparency throughout the research process.
  • Potential pitfall: Ethical issues. Mitigating action: Anonymise appropriately, as cases are often easily identifiable to insiders; obtain the informed consent of participants.
  • Potential pitfall: Integration with the theoretical framework. Mitigating action: Allow for unexpected issues to emerge and do not force a fit; test out preliminary explanations; be clear about epistemological positions in advance.

Table 9. Stake's checklist for assessing the quality of a case study report[8]

1. Is this report easy to read?
2. Does it fit together, each sentence contributing to the whole?
3. Does this report have a conceptual structure (i.e. themes or issues)?
4. Are its issues developed in a serious and scholarly way?
5. Is the case adequately defined?
6. Is there a sense of story to the presentation?
7. Is the reader provided some vicarious experience?
8. Have quotations been used effectively?
9. Are headings, figures, artefacts, appendices, indexes effectively used?
10. Was it edited well, then again with a last minute polish?
11. Has the writer made sound assertions, neither over- nor under-interpreting?
12. Has adequate attention been paid to various contexts?
13. Were sufficient raw data presented?
14. Were data sources well chosen and in sufficient number?
15. Do observations and interpretations appear to have been triangulated?
16. Is the role and point of view of the researcher nicely apparent?
17. Is the nature of the intended audience apparent?
18. Is empathy shown for all sides?
19. Are personal intentions examined?
20. Does it appear individuals were put at risk?

Conclusions

The case study approach allows, amongst other things, critical events, interventions, policy developments and programme-based service reforms to be studied in detail in a real-life context. It should therefore be considered when an experimental design is either inappropriate to answer the research questions posed or impossible to undertake. Considering the frequency with which implementations of innovations are now taking place in healthcare settings and how well the case study approach lends itself to in-depth, complex health service research, we believe this approach should be more widely considered by researchers. Though inherently challenging, the research case study can, if carefully conceptualised and thoughtfully undertaken and reported, yield powerful insights into many important aspects of health and healthcare delivery.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

AS conceived this article. SC, KC and AR wrote this paper with GH, AA and AS all commenting on various drafts. SC and AS are guarantors.

Pre-publication history

The pre-publication history for this paper can be accessed here:

http://www.biomedcentral.com/1471-2288/11/100/prepub

Acknowledgements

We are grateful to the participants and colleagues who contributed to the individual case studies that we have drawn on. This work received no direct funding, but it has been informed by projects funded by Asthma UK, the NHS Service Delivery Organisation, NHS Connecting for Health Evaluation Programme, and Patient Safety Research Portfolio. We would also like to thank the expert reviewers for their insightful and constructive feedback. Our thanks are also due to Dr. Allison Worth who commented on an earlier draft of this manuscript.

References

1. Yin RK. Case study research: design and methods. 4th ed. London: Sage Publications Ltd; 2009.
2. Keen J, Packwood T. Qualitative research: case study evaluation. BMJ. 1995;311:444–446.
3. Sheikh A, Halani L, Bhopal R, Netuveli G, Partridge M, Car J, et al. Facilitating the recruitment of minority ethnic people into research: qualitative case study of South Asians and asthma. PLoS Med. 2009;6(10):1–11.
4. Pinnock H, Huby G, Powell A, Kielmann T, Price D, Williams S. The process of planning, development and implementation of a General Practitioner with a Special Interest service in Primary Care Organisations in England and Wales: a comparative prospective case study. Report for the National Co-ordinating Centre for NHS Service Delivery and Organisation R&D (NCCSDO); 2008. http://www.sdo.nihr.ac.uk/files/project/99-final-report.pdf
5. Robertson A, Cresswell K, Takian A, Petrakaki D, Crowe S, Cornford T, et al. Prospective evaluation of the implementation and adoption of NHS Connecting for Health's national electronic health record in secondary care in England: interim findings. BMJ. 2010;341:c4564.
6. Pearson P, Steven A, Howe A, Sheikh A, Ashcroft D, Smith P; the Patient Safety Education Study Group. Learning about patient safety: organisational context and culture in the education of healthcare professionals. J Health Serv Res Policy. 2010;15:4–10. doi:10.1258/jhsrp.2009.009052.
7. van Harten WH, Casparie TF, Fisscher OA. The evaluation of the introduction of a quality management system: a process-oriented case study in a large rehabilitation hospital. Health Policy. 2002;60(1):17–37. doi:10.1016/S0168-8510(01)00187-7.
8. Stake RE. The art of case study research. London: Sage Publications Ltd; 1995.
9. Sheikh A, Smeeth L, Ashcroft R. Randomised controlled trials in primary care: scope and application. Br J Gen Pract. 2002;52(482):746–751.
10. King G, Keohane R, Verba S. Designing social inquiry. Princeton: Princeton University Press; 1996.
11. Doolin B. Information technology as disciplinary technology: being critical in interpretative research on information systems. Journal of Information Technology. 1998;13:301–311. doi:10.1057/jit.1998.8.
12. George AL, Bennett A. Case studies and theory development in the social sciences. Cambridge, MA: MIT Press; 2005.
13. Eccles M; the Improved Clinical Effectiveness through Behavioural Research Group (ICEBeRG). Designing theoretically-informed implementation interventions. Implementation Science. 2006;1:1–8. doi:10.1186/1748-5908-1-1.
14. Netuveli G, Hurwitz B, Levy M, Fletcher M, Barnes G, Durham SR, Sheikh A. Ethnic variations in UK asthma frequency, morbidity, and health-service use: a systematic review and meta-analysis. Lancet. 2005;365(9456):312–317.
15. Sheikh A, Panesar SS, Lasserson T, Netuveli G. Recruitment of ethnic minorities to asthma studies. Thorax. 2004;59(7):634.
16. Hellström I, Nolan M, Lundh U. 'We do things together': a case study of 'couplehood' in dementia. Dementia. 2005;4:7–22. doi:10.1177/1471301205049188.
17. Som CV. Nothing seems to have changed, nothing seems to be changing and perhaps nothing will change in the NHS: doctors' response to clinical governance. International Journal of Public Sector Management. 2005;18:463–477. doi:10.1108/09513550510608903.
18. Lincoln Y, Guba E. Naturalistic inquiry. Newbury Park: Sage Publications; 1985.
19. Barbour RS. Checklists for improving rigour in qualitative research: a case of the tail wagging the dog? BMJ. 2001;322:1115–1117. doi:10.1136/bmj.322.7294.1115.
20. Mays N, Pope C. Qualitative research in health care: assessing quality in qualitative research. BMJ. 2000;320:50–52. doi:10.1136/bmj.320.7226.50.
21. Mason J. Qualitative researching. London: Sage; 2002.
22. Brazier A, Cooke K, Moravan V. Using mixed methods for evaluating an integrative approach to cancer care: a case study. Integr Cancer Ther. 2008;7:5–17. doi:10.1177/1534735407313395.
23. Miles MB, Huberman AM. Qualitative data analysis: an expanded sourcebook. 2nd ed. CA: Sage Publications Inc; 1994.
24. Pope C, Ziebland S, Mays N. Analysing qualitative data. Qualitative research in health care. BMJ. 2000;320:114–116. doi:10.1136/bmj.320.7227.114.
25. Cresswell KM, Worth A, Sheikh A. Actor-Network Theory and its role in understanding the implementation of information technology developments in healthcare. BMC Med Inform Decis Mak. 2010;10(1):67. doi:10.1186/1472-6947-10-67.
26. Malterud K. Qualitative research: standards, challenges, and guidelines. Lancet. 2001;358:483–488. doi:10.1016/S0140-6736(01)05627-6.
27. Yin R. Case study research: design and methods. 2nd ed. Thousand Oaks, CA: Sage Publishing; 1994.
28. Yin R. Enhancing the quality of case studies in health services research. Health Serv Res. 1999;34:1209–1224.
29. Green J, Thorogood N. Qualitative methods for health research. 2nd ed. Los Angeles: Sage; 2009.
30. Howcroft D, Trauth E. Handbook of critical information systems research: theory and application. Cheltenham, UK and Northampton, MA: Edward Elgar; 2005.
31. Blaikie N. Approaches to social enquiry. Cambridge: Polity Press; 1993.
32. Doolin B. Power and resistance in the implementation of a medical management information system. Info Systems J. 2004;14:343–362. doi:10.1111/j.1365-2575.2004.00176.x.
33. Bloomfield BP, Best A. Management consultants: systems development, power and the translation of problems. Sociological Review. 1992;40:533–560.
34. Shanks G, Parr A. Positivist, single case study research in information systems: a critical analysis. In: Proceedings of the European Conference on Information Systems, Naples; 2003.


Case Study | Definition, Examples & Methods

Published on 5 May 2022 by Shona McCombes. Revised on 30 January 2023.

A case study is a detailed study of a specific subject, such as a person, group, place, event, organisation, or phenomenon. Case studies are commonly used in social, educational, clinical, and business research.

A case study research design usually involves qualitative methods, but quantitative methods are sometimes also used. Case studies are good for describing, comparing, evaluating, and understanding different aspects of a research problem.

Table of contents

  • When to do a case study
  • Step 1: Select a case
  • Step 2: Build a theoretical framework
  • Step 3: Collect your data
  • Step 4: Describe and analyse the case

When to do a case study

A case study is an appropriate research design when you want to gain concrete, contextual, in-depth knowledge about a specific real-world subject. It allows you to explore the key characteristics, meanings, and implications of the case.

Case studies are often a good choice in a thesis or dissertation. They keep your project focused and manageable when you don't have the time or resources to do large-scale research.

You might use just one complex case study where you explore a single subject in depth, or conduct multiple case studies to compare and illuminate different aspects of your research problem.

Case study examples

  • Research question: What are the ecological effects of wolf reintroduction? Case study: Wolf reintroduction in Yellowstone National Park in the US.
  • Research question: How do populist politicians use narratives about history to gain support? Case studies: Hungarian prime minister Viktor Orbán and US president Donald Trump.
  • Research question: How can teachers implement active learning strategies in mixed-level classrooms? Case study: A local school that promotes active learning.
  • Research question: What are the main advantages and disadvantages of wind farms for rural communities? Case studies: Three rural wind farm development projects in different parts of the country.
  • Research question: How are viral marketing strategies changing the relationship between companies and consumers? Case study: The iPhone X marketing campaign.
  • Research question: How do experiences of work in the gig economy differ by gender, race, and age? Case studies: Deliveroo and Uber drivers in London.


Step 1: Select a case

Once you have developed your problem statement and research questions, you should be ready to choose the specific case that you want to focus on. A good case study should have the potential to:

  • Provide new or unexpected insights into the subject
  • Challenge or complicate existing assumptions and theories
  • Propose practical courses of action to resolve a problem
  • Open up new directions for future research

Unlike quantitative or experimental research, a strong case study does not require a random or representative sample. In fact, case studies often deliberately focus on unusual, neglected, or outlying cases which may shed new light on the research problem.

If you find yourself aiming to simultaneously investigate and solve an issue, consider conducting action research. As its name suggests, action research conducts research and takes action at the same time, and is highly iterative and flexible.

However, you can also choose a more common or representative case to exemplify a particular category, experience, or phenomenon.

Step 2: Build a theoretical framework

While case studies focus more on concrete details than general theories, they should usually have some connection with theory in the field. This way the case study is not just an isolated description, but is integrated into existing knowledge about the topic. It might aim to:

  • Exemplify a theory by showing how it explains the case under investigation
  • Expand on a theory by uncovering new concepts and ideas that need to be incorporated
  • Challenge a theory by exploring an outlier case that doesn’t fit with established assumptions

To ensure that your analysis of the case has a solid academic grounding, you should conduct a literature review of sources related to the topic and develop a theoretical framework. This means identifying key concepts and theories to guide your analysis and interpretation.

Step 3: Collect your data

There are many different research methods you can use to collect data on your subject. Case studies tend to focus on qualitative data using methods such as interviews, observations, and analysis of primary and secondary sources (e.g., newspaper articles, photographs, official records). Sometimes a case study will also collect quantitative data.

The aim is to gain as thorough an understanding as possible of the case and its context.

Step 4: Describe and analyse the case

In writing up the case study, you need to bring together all the relevant aspects to give as complete a picture as possible of the subject.

How you report your findings depends on the type of research you are doing. Some case studies are structured like a standard scientific paper or thesis, with separate sections or chapters for the methods, results, and discussion.

Others are written in a more narrative style, aiming to explore the case from various angles and analyse its meanings and implications (for example, by using textual analysis or discourse analysis).

In all cases, though, make sure to give contextual details about the case, connect it back to the literature and theory, and discuss how it fits into wider patterns or debates.

Cite this Scribbr article


McCombes, S. (2023, January 30). Case Study | Definition, Examples & Methods. Scribbr. Retrieved 12 August 2024, from https://www.scribbr.co.uk/research-methods/case-studies/


Case Study Research

  • First Online: 29 September 2022


  • Robert E. White, ORCID: orcid.org/0000-0002-8045-164X
  • Karyn Cooper


As a footnote to the previous chapter, there is such a beast known as the ethnographic case study. Ethnographic case study has found its way into this chapter rather than into the previous one because of grammatical considerations. Simply put, the “case study” part of the phrase is the noun (with “case” as an adjective defining what kind of study it is), while the “ethnographic” part of the phrase is an adjective defining the type of case study that is being conducted. As such, the case study becomes the methodology, while the ethnography part refers to a method, mode or approach relating to the development of the study.

The experiential account that we get from a case study or qualitative research of a similar vein is just so necessary. How things happen over time and the degree to which they are subject to personality and how they are only gradually perceived as tolerable or intolerable by the communities and the groups that are involved is so important. Robert Stake, University of Illinois, Urbana-Champaign


Author information

Authors and Affiliations

Faculty of Education, St. Francis Xavier University, Antigonish, NS, Canada

Robert E. White

OISE, University of Toronto, Toronto, ON, Canada

Karyn Cooper

Corresponding author

Correspondence to Robert E. White.

A Case in Case Study Methodology

Christine Benedichte Meyer

Norwegian School of Economics and Business Administration

Meyer, C. B. (2001). A Case in Case Study Methodology. Field Methods 13 (4), 329-352.

The purpose of this article is to provide a comprehensive view of the case study process from the researcher’s perspective, emphasizing methodological considerations. As opposed to other qualitative or quantitative research strategies, such as grounded theory or surveys, there are virtually no specific requirements guiding case research. This is both the strength and the weakness of this approach. It is a strength because it allows tailoring the design and data collection procedures to the research questions. On the other hand, this approach has resulted in many poor case studies, leaving it open to criticism, especially from the quantitative field of research. This article argues that there is a particular need in case studies to be explicit about the methodological choices one makes. This implies discussing the wide range of decisions concerned with design requirements, data collection procedures, data analysis, and validity and reliability. The approach here is to illustrate these decisions through a particular case study of two mergers in the financial industry in Norway.

In the past few years, a number of books have been published that give useful guidance in conducting qualitative studies (Gummesson 1988; Cassell & Symon 1994; Miles & Huberman 1994; Creswell 1998; Flick 1998; Rossman & Rallis 1998; Bryman & Burgess 1999; Marshall & Rossman 1999; Denzin & Lincoln 2000). One approach often mentioned is the case study (Yin 1989). Case studies are widely used in organizational studies in the social science disciplines of sociology, industrial relations, and anthropology (Hartley 1994). Such a study consists of detailed investigation of one or more organizations, or groups within organizations, with a view to providing an analysis of the context and processes involved in the phenomenon under study.

As opposed to other qualitative or quantitative research strategies, such as grounded theory (Glaser and Strauss 1967) or surveys (Nachmias & Nachmias 1981), there are virtually no specific requirements guiding case research. Yin (1989) and Eisenhardt (1989) give useful insights into the case study as a research strategy, but leave most of the design decisions on the table. This is both the strength and the weakness of this approach. It is a strength because it allows tailoring the design and data collection procedures to the research questions. On the other hand, this approach has resulted in many poor case studies, leaving it open to criticism, especially from the quantitative field of research (Cook and Campbell 1979). The fact that the case study is a rather loose design implies that there are a number of choices that need to be addressed in a principled way.

Although case studies have become a common research strategy, the scope of methodology sections in articles published in journals is far too limited to give the readers a detailed and comprehensive view of the decisions taken in the particular studies, and, given the format of methodology sections, will remain so. The few books (Yin 1989, 1993; Hamel, Dufour, & Fortin 1993; Stake 1995) and book chapters on case studies (Hartley 1994; Silverman 2000) are, on the other hand, mainly normative and span a broad range of different kinds of case studies. One exception is Pettigrew (1990, 1992), who places the case study in the context of a research tradition (the Warwick process research).

Given the contextual nature of the case study and its strength in addressing contemporary phenomena in real-life contexts, I believe that there is a need for articles that provide a comprehensive overview of the case study process from the researcher’s perspective, emphasizing methodological considerations. This implies addressing the whole range of choices concerning specific design requirements, data collection procedures, data analysis, and validity and reliability.

WHY A CASE STUDY?

Case studies are tailor-made for exploring new processes or behaviors or ones that are little understood (Hartley 1994). Hence, the approach is particularly useful for responding to how and why questions about a contemporary set of events (Leonard-Barton 1990). Moreover, researchers have argued that certain kinds of information can be difficult or even impossible to tackle by means other than qualitative approaches such as the case study (Sykes 1990). Gummesson (1988:76) argues that an important advantage of case study research is the opportunity for a holistic view of the process: “The detailed observations entailed in the case study method enable us to study many different aspects, examine them in relation to each other, view the process within its total environment and also use the researchers’ capacity for ‘verstehen.’ ”

The contextual nature of the case study is illustrated in Yin’s (1993:59) definition of a case study as an empirical inquiry that “investigates a contemporary phenomenon within its real-life context and addresses a situation in which the boundaries between phenomenon and context are not clearly evident.”

The key difference between the case study and other qualitative designs such as grounded theory and ethnography (Glaser & Strauss 1967; Strauss & Corbin 1990; Gioia & Chittipeddi 1991) is that the case study is open to the use of theory or conceptual categories that guide the research and analysis of data. In contrast, grounded theory or ethnography presupposes that theoretical perspectives are grounded in and emerge from firsthand data. Hartley (1994) argues that without a theoretical framework, the researcher is in severe danger of providing description without meaning. Gummesson (1988) says that a lack of preunderstanding will cause the researcher to spend considerable time gathering basic information. This preunderstanding may arise from general knowledge such as theories, models, and concepts or from specific knowledge of institutional conditions and social patterns. According to Gummesson, the key is to require researchers to have not split but dual personalities: “Those who are able to balance on a razor’s edge using their pre-understanding without being its slave” (p. 58).

DESCRIPTION OF THE ILLUSTRATIVE STUDY

The study that will be used for illustrative purposes is a comparative and longitudinal case study of organizational integration in mergers and acquisitions taking place in Norway. The study had two purposes: (1) to identify contextual factors and features of integration that facilitated or impeded organizational integration, and (2) to study how the three dimensions of organizational integration (integration of tasks, unification of power, and integration of cultures and identities) interrelated and evolved over time. Examples of contextual factors were relative power, degree of friendliness, and economic climate. Integration features included factors such as participation, communication, and allocation of positions and functions.

Mergers and acquisitions are inherently complex. Researchers in the field have suggested that managers continuously underestimate the task of integrating the merging organizations in the postintegration process (Haspeslagh & Jemison 1991). The process of organizational integration can lead to sharp interorganizational conflict as the different top management styles, organizational and work unit cultures, systems, and other aspects of organizational life come into contact (Blake & Mouton 1985; Schweiger & Walsh 1990; Cartwright & Cooper 1993). Furthermore, cultural change in mergers and acquisitions is compounded by additional uncertainties, ambiguities, and stress inherent in the combination process (Buono & Bowditch 1989).

I focused on two combinations: one merger and one acquisition. The first case was a merger between two major Norwegian banks, Bergen Bank and DnC (to be named DnB), that started in the late 1980s. The second case was a study of a major acquisition in the insurance industry (i.e., Gjensidige’s acquisition of Forenede) that started in the early 1990s. Both combinations aimed to realize operational synergies through merging the two organizations into one entity. This implied disruption of organizational boundaries and a threat to the existing power distribution and organizational cultures.

The study of integration processes in mergers and acquisitions illustrates the need to find a design that allows for the exploration of sensitive issues, such as power struggles between the two merging organizations. Furthermore, the inherent complexity of the integration process, involving integration of tasks, unification of power, and cultural integration, stressed the need for in-depth study of the phenomenon over time. To understand the cultural integration process, the design also had to be linked to the history of the two organizations.

DESIGN DECISIONS

In the introduction, I stressed that the case study is a rather loose design that requires a number of design choices to be made. In this section, I go through the most important choices I faced in the study of organizational integration in mergers and acquisitions. These include: (1) selection of cases; (2) sampling time; (3) choosing business areas, divisions, and sites; and (4) selection of and choices regarding data collection procedures, interviews, documents, and observation.

Selection of Cases

There are several choices involved in selecting cases. First, there is the question of how many cases to include. Second, one must sample cases and decide on a unit of analysis. I will explore these issues subsequently.

Single or Multiple Cases

Case studies can involve single or multiple cases. The problem with single cases is their limited generalizability and their susceptibility to several information-processing biases (Eisenhardt 1989).

One way to respond to these biases is by applying a multi-case approach (Leonard-Barton 1990). Multiple cases augment external validity and help guard against observer biases. Moreover, multi-case sampling adds confidence to findings. By looking at a range of similar and contrasting cases, we can understand a single-case finding, grounding it by specifying how and where and, if possible, why it behaves as it does. (Miles & Huberman 1994)

Given these limitations of the single case study, it is desirable to include more than one case. However, the desire for depth and a pluralist perspective, and for tracking the cases over time, implies that the number of cases must be fairly few. I chose two cases, which clearly does not support generalizability any more than one case does, but which allows for comparison and contrast between the cases as well as a deeper and richer look at each case.

Originally, I planned to include a third case in the study. Due to changes in management during the initial integration process, my access to that case was limited, and I abandoned it entirely. A positive side effect, however, was that this allowed a deeper investigation of the two original cases, and in hindsight it turned out to be a good decision.

Sampling Cases

The logic of sampling cases is fundamentally different from statistical sampling. The logic in case studies involves theoretical sampling, in which the goal is to choose cases that are likely to replicate or extend the emergent theory, or to fill theoretical categories and provide examples of polar types (Eisenhardt 1989). Hence, whereas quantitative sampling concerns itself with representativeness, qualitative sampling seeks information richness and selects the cases purposefully rather than randomly (Crabtree and Miller 1992).

The choice of cases was guided by George’s (1979) and Pettigrew’s (1990) recommendations. The aim was to find cases that matched the three dimensions in the dependent variable and provided variation in the contextual factors, thus representing polar cases.

To match the choice of outcome variable, organizational integration, I chose cases in which the purpose was to fully consolidate the merging parties’ operations. A full consolidation would imply considerable disruption in the organizational boundaries and would be expected to affect the task-related, political, and cultural features of the organizations. As for the contextual factors, the two cases varied in contextual factors such as relative power, friendliness, and economic climate. The DnB merger was a friendly combination between two equal partners in an unfriendly economic climate. Gjensidige’s acquisition of Forenede was, in contrast, an unfriendly and unbalanced acquisition in a friendly economic climate.

Unit of Analysis

Another way to respond to researchers’ and respondents’ biases is to have more than one unit of analysis in each case (Yin 1993). This implies that, in addition to developing contrasts between the cases, researchers can focus on contrasts within the cases (Hartley 1994). In case studies, there is a choice of a holistic or embedded design (Yin 1989). A holistic design examines the global nature of the phenomenon, whereas an embedded design also pays attention to subunit(s).

I used an embedded design to analyze the cases (i.e., within each case, I also gave attention to subunits and subprocesses). In both cases, I compared the combination processes in the various divisions and local networks. Moreover, I compared three distinct change processes in DnB: before the merger, during the initial combination, and two years after the merger. The overall and most important unit of analysis in the two cases was, however, the integration process.

Sampling Time

According to Pettigrew (1990), time sets a reference for what changes can be seen and how those changes are explained. When conducting a case study, there are several important issues to decide when sampling time. The first regards how many times data should be collected, while the second concerns when to enter the organizations. There is also a need to decide whether to collect data on a continuous basis or in distinct periods.

Number of data collections. I studied the process by collecting real-time and retrospective data at two points in time, with one-and-a-half- and two-year intervals in the two cases. Collecting data twice had some interesting implications for the interpretations of the data. During the first data collection in the DnB study, for example, I collected retrospective data about the premerger and initial combination phase and real-time data about the second step in the combination process.

Although I gained a picture of how the employees experienced the second stage of the combination process, it was too early to assess the effects of this process at that stage. I entered the organization two years later and found interesting effects that I had not anticipated the first time. Moreover, it was interesting to observe how people’s attitudes toward the merger processes changed over time to be more positive and less emotional.

When to enter the organizations. It would be desirable to have had the opportunity to collect data in the precombination processes. However, researchers are rarely given access in this period due to secrecy. The emphasis in this study was to focus on the postcombination process. As such, the precombination events were classified as contextual factors. This implied that it was most important to collect real-time data after the parties had been given government approval to merge or acquire. What would have been desirable was to gain access earlier in the postcombination process. This was not possible because access had to be negotiated. Due to the change of CEO in the middle of the merger process and the need for renegotiating access, this took longer than expected.

Regarding the second case, I was restricted by the time frame of the study. In essence, I had to choose between entering the combination process as soon as governmental approval was given and entering the organization at a later stage. In light of previous studies in the field that have failed to go beyond the initial two years, and given the need to collect data about the cultural integration process, I chose the latter strategy. I also decided to enter the organizations at two distinct periods of time rather than on a continuous basis.

There were several reasons for this approach, some methodological and some practical. First, data collection on a continuous basis would have required extensive observation, to which I did not have access; getting access to two data collections in DnB was difficult in itself. Second, I had a stay abroad between the first and second data collection in Gjensidige. Collecting data on a continuous basis would probably have allowed better mapping of the ongoing integration process, but the contrasts between the two stages in the integration process that I wanted to elaborate would probably have been more difficult to detect. In Table 1, I have listed the periods of time in which I collected data in the two combinations.

Sampling Business Areas, Divisions, and Sites

Even when the cases for a study have been chosen, it is often necessary to make further choices within each case to make the cases researchable. The most important criteria that set the boundaries for the study are importance or criticality, relevance, and representativeness. At the time of the data collection, my criteria for making these decisions were not as conscious as they may appear here. Rather, being restricted by time and my own capacity as a researcher, I had to limit the sites and act instinctively. In both cases, I decided to concentrate on the core businesses (criticality criterion) and left out the business units that were only mildly affected by the integration process (relevance criterion). In the choice of regional offices, I used the representativeness criterion as the number of offices widely exceeded the number of sites possible to study. In making these choices, I relied on key informants in the organizations.

SELECTION OF DATA COLLECTION PROCEDURES

The choice of data collection procedures should be guided by the research question and the choice of design. The case study approach typically combines data collection methods such as archives, interviews, questionnaires, and observations (Yin 1989). This triangulated methodology provides stronger substantiation of constructs and hypotheses. However, the choice of data collection methods is also subject to constraints in time, financial resources, and access.

I chose a combination of interviews, archives, and observation, with the main emphasis on the first two. Conducting a survey was inappropriate due to the lack of established concepts and indicators. The limited use of observation, on the other hand, was due to problems in obtaining access early in the study, as well as time and resource constraints. In addition to choosing among several different data collection methods, there are a number of choices to be made for each individual method.

When relying on interviews as the primary data collection method, the issue of building trust between the researcher and the interviewees becomes very important. I addressed this issue by several means. First, I established a procedure for approaching the interviewees. In most cases, I called them first, then sent out a letter explaining the key features of the project and outlining the broad issues to be addressed in the interview. In this letter, the support of the institution’s top management was also communicated. In most cases, top management’s support of the project was an important prerequisite for the respondents’ input. Some interviewees did, however, fear that their input would be passed on to top management without the information source being disguised. Hence, it became important to communicate how I intended to use and store the information.

To establish trust, I also actively used my preunderstanding of the context in the first case and the phenomenon in the second case. As I built up an understanding of the cases, I used this information to gain confidence. The active use of my preunderstanding did, however, pose important challenges in not revealing too much of the research hypotheses and in balancing between asking open-ended questions and appearing knowledgeable.

There are two sets of choices involved in conducting interviews. The first concerns the sampling of interviewees. The second concerns decisions about the structure of the interviews, the use of a tape recorder, and the involvement of other researchers.

Sampling Interviewees

Following the desire for detailed knowledge of each case and for grasping different participants’ views, the aim was, in line with Pettigrew (1990), to apply a pluralist view by describing and analyzing competing versions of reality as seen by actors in the combination processes.

I used four criteria for sampling informants. First, I drew informants from populations representing multiple perspectives. The first data collection in DnB was primarily focused on the top management level. Moreover, most middle managers in the first data collection were employed at the head offices, either in Bergen or Oslo. In the second data collection, I compensated for this skew by including eight local middle managers in the sample. The difference between the number of employees interviewed in DnB and Gjensidige was primarily due to the fact that Gjensidige has three unions, whereas DnB only has one. The distribution of interviewees is outlined in Table 2 .

The second criterion was to use multiple informants. According to Glick et al. (1990), an important advantage of using multiple informants is that the validity of information provided by one informant can be checked against that provided by other informants. Moreover, the validity of the data used by the researcher can be enhanced by resolving the discrepancies among different informants’ reports. Hence, I selected multiple respondents from each perspective.

Third, I focused on key informants who were expected to be knowledgeable about the combination process. These people included top management members, managers, and employees involved in the integration project. To validate the information from these informants, I also used a fourth criterion by selecting managers and employees who had been affected by the process but who were not involved in the project groups.

Structured versus unstructured. In line with the explorative nature of the study, the goal of the interviews was to see the research topic from the perspective of the interviewee, and to understand why he or she came to have this particular perspective. To meet this goal, King (1994:15) recommends that one have “a low degree of structure imposed on the interviewer, a preponderance of open questions, a focus on specific situations and action sequences in the world of the interviewee rather than abstractions and general opinions.” In line with these recommendations, the collection of primary data in this study consists of unstructured interviews.

Using tape recorders and involving other researchers. The majority of the interviews were tape-recorded, and I could thus concentrate fully on asking questions and responding to the interviewees’ answers. In the few interviews that were not tape-recorded, most of which were conducted in the first phase of the DnB study, two researchers were present. This was useful, as we were both able to discuss the interviews afterward and give each other feedback on the interviewer role.

In hindsight, however, I wish that these interviews had been tape-recorded to maintain the level of accuracy and richness of data. Hence, in the next phases of data collection, I tape-recorded all interviews, with two exceptions (people who strongly opposed the use of this device). All interviews that were tape-recorded were transcribed by me in full, which gave me closeness and a good grasp of the data.

Documents

When organizations merge or make acquisitions, there are often a vast number of documents to choose from to build up an understanding of what has happened and to use in the analyses. Furthermore, when firms make acquisitions or merge, they often hire external consultants, each of whom produces more documents. Due to time constraints, it is seldom possible to collect and analyze all these documents, and thus the researcher has to make a selection.

The choice of documentation was guided by my previous experience with merger and acquisition processes and the research question. Hence, obtaining information on the postintegration process was more important than gaining access to the due-diligence analysis. As I learned about the process, I obtained more documents on specific issues. I did not, however, gain access to all the documents I asked for, and, in some cases, documents had been lost or shredded.

The documents were helpful in a number of ways. First, and most important, they were used as inputs to the interview guide and saved me time, because I did not have to ask for facts in the interviews. Second, they were useful for tracing the history of the organizations and statements made by key people in the organizations. Third, the documents were helpful in counteracting the biases of the interviews. A list of the documents used in writing the cases is shown in Table 3.

Observation

The major strength of direct observation is that it is unobtrusive and does not require direct interaction with participants (Adler and Adler 1994). Observation produces rigor when it is combined with other methods. When the researcher has access to group processes, direct observation can illuminate the discrepancies between what people said in the interviews and casual conversations and what they actually do (Pettigrew 1990).

As with interviews, there are a number of choices involved in conducting observations. Although I did some observations in the study, I used interviews as the key data collection source. Discussion in this article about observations will thus be somewhat limited. Nevertheless, I faced a number of choices in conducting observations, including type of observation, when to enter, how much observation to conduct, and which groups to observe.

There are four ways in which an observer may gather data: (1) the complete participant, who operates covertly, concealing any intention to observe the setting; (2) the participant-as-observer, who forms relationships and participates in activities but makes no secret of his or her intentions to observe events; (3) the observer-as-participant, who maintains only superficial contact with the people being studied; and (4) the complete observer, who merely stands back and eavesdrops on the proceedings (Waddington 1994).

In this study, I used the second and third ways of observing. The use of the participant-as-observer mode, on which much ethnographic research is based, was rather limited in the study. There were two reasons for this. First, I had limited time available for collecting data, and in my view interviews made more effective use of this limited time than extensive participant observation. Second, people were rather reluctant to let me observe these political and sensitive processes until they knew me better and felt I could be trusted. Indeed, I was dependent on starting the data collection before having built sufficient trust to observe key groups in the integration process. Nevertheless, Gjensidige allowed me to study two employee seminars to acquaint me with the organization. Here I admitted my role as an observer but participated fully in the activities. To achieve variation, I chose two seminars representing polar groups of employees.

As observer-as-participant, I attended a top management meeting at the end of the first data collection in Gjensidige and observed the respondents during interviews and in more informal meetings, such as lunches. All these observations gave me an opportunity to validate the data from the interviews. Observing the top management group was by far the most interesting and rewarding in terms of input.

Both DnB and Gjensidige started to open up for more extensive observation when I was about to finish the data collection. By then, I had built up the trust needed to undertake this approach. Unfortunately, this came a little late for me to take advantage of it.

DATA ANALYSIS

Published studies generally describe research sites and data-collection methods, but give little space to discuss the analysis (Eisenhardt 1989). Thus, one cannot follow how a researcher arrives at the final conclusions from a large volume of field notes (Miles and Huberman 1994).

In this study, I went through the stages by which the data were reduced and analyzed. This involved establishing the chronology, coding, writing up the data according to phases and themes, introducing organizational integration into the analysis, comparing the cases, and applying the theory. I will discuss each of these steps in turn.

The first step in the analysis was to establish the chronology of the cases. To do this, I used internal and external documents. I wrote up the chronologies and included them as appendices in the final report.

The next step was to code the data into phases and themes reflecting the contextual factors and features of integration. For the interviews, this implied marking the text with a specific phase and a theme, and grouping the paragraphs on the same theme and phase together. I followed the same procedure in organizing the documents.

I then wrote up the cases using phases and themes to structure them. Before starting to write up the cases, I scanned the information on each theme, built up the facts, and filled in perceptions and reactions that were illustrative and representative of the data.
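The grouping step described above is mechanical enough to sketch in a few lines of Python. The sketch below is purely illustrative: the phase and theme labels and the excerpt texts are invented, and nothing here reproduces the study’s actual data or tools. It tags each interview excerpt with a phase and a theme and then collects everything that shares a phase/theme pair, which is essentially what marking the text and grouping the paragraphs accomplishes.

    from collections import defaultdict

    # (phase, theme, text) tuples -- invented toy data for illustration only.
    excerpts = [
        ("initial combination", "communication",
         "We first heard about the merger through the press."),
        ("initial combination", "participation",
         "Only the project groups were consulted."),
        ("two years after", "communication",
         "The newsletters kept us informed."),
    ]

    # Group all excerpts sharing a phase/theme pair so that everything on
    # one theme within one phase can be read together when writing up.
    grouped = defaultdict(list)
    for phase, theme, text in excerpts:
        grouped[(phase, theme)].append(text)

    for (phase, theme), texts in sorted(grouped.items()):
        print(f"{phase} / {theme}: {len(texts)} excerpt(s)")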

The documents were primarily useful in establishing the facts, but they also provided me with some perceptions and reactions that were validated in the interviews. The documents used included internal letters and newsletters as well as articles from the press. The interviews were less factual, as intended, and gave me input to assess perceptions and reactions. The limited observation was useful to validate the data from the interviews. The result of this step was two descriptive cases.

To make each case more analytical, I introduced the three dimensions of organizational integration—integration of tasks, unification of power, and cultural integration—into the analysis. This helped to focus the case and to develop a framework that could be used to compare the cases. The cases were thus structured according to phases, organizational integration, and themes reflecting the factors and features in the study.

I took all these steps to become more familiar with each case as an individual entity. According to Eisenhardt (1989:540), this is a process that “allows the unique patterns of each case to emerge before the investigators push to generalise patterns across cases. In addition it gives investigators a rich familiarity with each case which, in turn, accelerates cross-case comparison.”

The comparison between the cases constituted the next step in the analysis. Here, I used the categories from the case chapters, filled in the features and factors, and compared and contrasted the findings. The idea behind cross-case searching tactics is to force investigators to go beyond initial impressions, especially through the use of structural and diverse lenses on the data. These tactics improve the likelihood of accurate and reliable theory, that is, theory with a close fit to the data (Eisenhardt 1989).
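The cross-case comparison described here lends itself to a simple display in the spirit of Miles and Huberman’s (1994) matrices: factors as rows, cases as columns, with each cell holding a summarized finding so that contrasts are forced into view. The Python sketch below is only an illustration; the cell values restate the contextual contrasts already reported for the two cases earlier in this article, and the layout is not a reproduction of the study’s actual tables.

    # Cross-case display: rows are contextual factors, columns are cases.
    # Cell values restate the contrasts described in the text.
    cases = ["DnB merger", "Gjensidige acquisition"]
    factors = ["relative power", "friendliness", "economic climate"]

    matrix = {
        ("relative power", "DnB merger"): "equal partners",
        ("relative power", "Gjensidige acquisition"): "unbalanced",
        ("friendliness", "DnB merger"): "friendly",
        ("friendliness", "Gjensidige acquisition"): "unfriendly",
        ("economic climate", "DnB merger"): "unfriendly",
        ("economic climate", "Gjensidige acquisition"): "friendly",
    }

    print(f"{'factor':<18} | " + " | ".join(cases))
    for factor in factors:
        row = [matrix[(factor, case)] for case in cases]
        print(f"{factor:<18} | " + " | ".join(row))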

As a result, I had a number of overall themes, concepts, and relationships that had emerged from the within-case analysis and cross-case comparisons. The next step was to compare these emergent findings with theory from the organizational field of mergers and acquisitions, as well as other relevant perspectives.

This method of generalization is known as analytical generalization. In this approach, a previously developed theory is used as a template with which to compare the empirical results of the case study (Yin 1989). This comparison of emergent concepts, theory, or hypotheses with the extant literature involves asking what it is similar to, what it contradicts, and why. The key to this process is to consider a broad range of theory (Eisenhardt 1989). On the whole, linking emergent theory to existent literature enhances the internal validity, generalizability, and theoretical level of theory-building from case research.

According to Eisenhardt (1989), examining literature that conflicts with the emergent theory is important for two reasons. First, the chance of neglecting conflicting findings is reduced. Second, “conflicting results forces researchers into a more creative, frame-breaking mode of thinking than they might otherwise be able to achieve” (p. 544). Similarly, Eisenhardt (1989) claims that literature discussing similar findings is important because it ties together underlying similarities in phenomena not normally associated with each other. The result is often a theory with stronger internal validity, wider generalizability, and a higher conceptual level.

The analytical generalization in the study included exploring and developing the concepts and examining the relationships between the constructs. In carrying out this analytical generalization, I acted on Eisenhardt’s (1989) recommendation to use a broad range of theory. First, I compared and contrasted the findings with the organizational stream on mergers and acquisition literature. Then I discussed other relevant literatures, including strategic change, power and politics, social justice, and social identity theory to explore how these perspectives could contribute to the understanding of the findings. Finally, I discussed the findings that could not be explained either by the merger and acquisition literature or the four theoretical perspectives.

In every scientific study, questions are raised about whether the study is valid and reliable. The issues of validity and reliability in case studies are just as important as for more deductive designs, but the application is fundamentally different.

VALIDITY AND RELIABILITY

The problems of validity in qualitative studies are related to the fact that most qualitative researchers work alone in the field, they focus on the findings rather than describe how the results were reached, and they are limited in processing information (Miles and Huberman 1994).

Researchers writing about qualitative methods have questioned whether the same criteria can be used for qualitative and quantitative studies (Kirk & Miller 1986; Sykes 1990; Maxwell 1992). The problem with the validity criteria suggested in qualitative research is that there is little consistency across the articles as each author suggests a new set of criteria.

One approach in examining validity and reliability is to apply the criteria used in quantitative research. Hence, the criteria to be examined here are objectivity/intersubjectivity, construct validity, internal validity, external validity, and reliability.

Objectivity/Intersubjectivity

The basic issue of objectivity can be framed as one of relative neutrality and reasonable freedom from unacknowledged research biases (Miles & Huberman 1994). In a real-time longitudinal study, the researcher is in danger of losing objectivity and of becoming too involved with the organization, the people, and the process. Hence, Leonard-Barton (1990) claims that one may be perceived as, and may even become, an advocate rather than an observer.

According to King (1994), however, qualitative research, in seeking to describe and make sense of the world, does not require researchers to strive for objectivity and distance themselves from research participants. Indeed, to do so would make good qualitative research impossible, as the interviewer’s sensitivity to subjective aspects of his or her relationship with the interviewee is an essential part of the research process (King 1994:31).

This does not imply, however, that the issue of possible research bias can be ignored. It is just as important as in a structured quantitative interview that the findings are not simply the product of the researcher’s prejudices and prior experience. One way to guard against this bias is for the researcher to explicitly recognize his or her presuppositions and to make a conscious effort to set these aside in the analysis (Gummesson 1988). Furthermore, rival conclusions should be considered (Miles & Huberman 1994).

My experience from the first phase of the DnB study was that it was difficult to focus the questions and the analysis of the data when the research questions were too vague and broad. As such, developing a framework before collecting the data for the study was useful in guiding the collection and analysis of data. Nevertheless, it was important to be open-minded and receptive to new and surprising data. In the DnB study, for example, the positive effect of the reorganization process on the integration of cultures came as a complete surprise to me and thus needed further elaboration.

I also consciously searched for negative evidence and problems by interviewing outliers (Miles & Huberman 1994) and asking problem-oriented questions. In Gjensidige, the first interviews with the top management revealed a much more positive perception of the cultural integration process than I had expected. To explore whether this was a result of overreliance on elite informants, I continued posing problem-oriented questions to outliers and people at lower levels in the organization. Moreover, I told them about the DnB study to be explicit about my presuppositions.

Another important issue when assessing objectivity is whether other researchers can trace the interpretations made in the case studies, or what is called intersubjectivity. To deal with this issue, Miles & Huberman (1994) suggest that: (1) the study’s general methods and procedures should be described in detail, (2) one should be able to follow the process of analysis, (3) conclusions should be explicitly linked with exhibits of displayed data, and (4) the data from the study should be made available for reanalysis by others.

In response to these requirements, I described the study’s data collection procedures and processing in detail. Then, the primary data were displayed in the written report in the form of quotations and extracts from documents to support and illustrate the interpretations of the data. Because the study was written up in English, I included the Norwegian text in a separate appendix. Finally, all the primary data from the study were made accessible to a small group of distinguished researchers.

Construct Validity

Construct validity refers to whether there is substantial evidence that the theoretical paradigm correctly corresponds to observation (Kirk & Miller 1986). In this form of validity, the issue is the legitimacy of the application of a given concept or theory to established facts.

The strength of qualitative research lies in the flexible and responsive interaction between the interviewer and the respondents (Sykes 1990). Thus, meaning can be probed, topics covered easily from a number of angles, and questions made clear for respondents. This is an advantage for exploring the concepts (construct or theoretical validity) and the relationships between them (internal validity). Similarly, Hakim (1987) says the great strength of qualitative research is the validity of data obtained because individuals are interviewed in sufficient detail for the results to be taken as true, correct, and believable reports of their views and experiences.

Construct validity can be strengthened by applying a longitudinal multicase approach, triangulation, and use of feedback loops. The advantage of applying a longitudinal approach is that one gets the opportunity to test sensitivity of construct measures to the passage of time. Leonard-Barton (1990), for example, found that one of her main constructs, communicability, varied across time and relative to different groups of users. Thus, the longitudinal study aided in defining the construct more precisely. By using more than one case study, one can validate stability of construct across situations (Leonard-Barton 1990). Since my study only consists of two case studies, the opportunity to test stability of constructs across cases is somewhat limited. However, the use of more than one unit of analysis helps to overcome this limitation.

Construct validity is strengthened by the use of multiple sources of evidence to build construct measures, which define the construct and distinguish it from other constructs. These multiple sources of evidence can include multiple viewpoints within and across the data sources. My study responds to these requirements in its sampling of interviewees and uses of multiple data sources.

Use of feedback loops implies returning to interviewees with interpretations and developing theory and actively seeking contradictions in data (Crabtree & Miller 1992; King 1994). In DnB, the written report had to be approved by the bank’s top management after the first data collection. Apart from one minor correction, the bank had no objections to the established facts. In their comments on my analysis, some of the top managers expressed the view that the political process had been overemphasized, and that the CEO’s role in initiating a strategic process was undervalued. Hence, an important objective in the second data collection was to explore these comments further. Moreover, the report was not as positive as the management had hoped for, and negotiations had to be conducted to publish the report. The result of these negotiations was that publication of the report was postponed one-and-a-half years.

The experiences from the first data collection in DnB had some consequences. I was more cautious and brought up the problems of confidentiality and the need to publish at the outset of the Gjensidige study. Also, I had to struggle to get access to the DnB case for the second data collection, and some of the information I asked for was not released. At Gjensidige, I sent a preliminary draft of the case chapter to the corporation’s top management for comments, in addition to having second interviews with a small number of people. Besides testing out the factual description, these sessions gave me the opportunity to test out the theoretical categories established as a result of the within-case analysis.

Internal Validity

Internal validity concerns the validity of the postulated relationships among the concepts. The main problem of internal validity as a criterion in qualitative research is that it is often not open to scrutiny. According to Sykes (1990), the researcher can always provide a plausible account and, with careful editing, may ensure its coherence. Recognition of this problem has led to calls for better documentation of the processes of data collection, the data itself, and the interpretative contribution of the researcher. The discussion of how I met these requirements was outlined in the section on objectivity/subjectivity above.

However, there are some advantages in using qualitative methods, too. First, the flexible and responsive methods of data collection allow cross-checking and amplification of information from individual units as it is generated. Respondents’ opinions and understandings can be thoroughly explored. The internal validity results from strategies that eliminate ambiguity and contradiction, filling in detail and establishing strong connections in data.

Second, the longitudinal study enables one to track cause and effect. Moreover, it can make one aware of intervening variables (Leonard-Barton 1990). Eisenhardt (1989:542) states, “Just as in hypothesis-testing research, an apparent relationship may simply be a spurious correlation or may reflect the impact of some third variable on each of the other two. Therefore, it is important to discover the underlying reasons for why the relationship exists.”

Generalizability

According to Mitchell (1983), case studies are not based on statistical inference. Quite the contrary, the inferring process turns exclusively on the theoretically necessary links among the features in the case study. The validity of the extrapolation depends not on the typicality or representativeness of the case but on the cogency of the theoretical reasoning. Hartley (1994:225) claims, “The detailed knowledge of the organization and especially the knowledge about the processes underlying the behaviour and its context can help to specify the conditions under which behaviour can be expected to occur. In other words, the generalisation is about theoretical propositions not about populations.”

Generalizability is normally based on the assumption that this theory may be useful in making sense of similar persons or situations (Maxwell 1992). One way to increase the generalizability is to apply a multicase approach (Leonard-Barton 1990). The advantage of this approach is that one can replicate the findings from one case study to another. This replication logic is similar to that used in multiple experiments (Yin 1993).

Given the choice of two case studies, the generalizability criterion is not supported in this study. Through the discussion of my choices, I have tried to show that I had to strike a balance between the need for depth and mapping changes over time and the number of cases. In doing so, I deliberately chose to provide a deeper and richer look at each case, allowing the reader to make judgments about the applicability rather than making a case for generalizability.

Reliability

Reliability focuses on whether the process of the study is consistent and reasonably stable over time and across researchers and methods (Miles & Huberman 1994). In the context of qualitative research, reliability is concerned with two questions (Sykes 1990): Could the same study carried out by two researchers produce the same findings? and Could a study be repeated using the same researcher and respondents to yield the same findings?

The problem of reliability in qualitative research is that differences between replicated studies using different researchers are to be expected. However, while it may not be surprising that different researchers generate different findings and reach different conclusions, controlling for reliability may still be relevant. Kirk and Miller’s (1986:311) definition takes into account the particular relationship between the researcher’s orientation, the generation of data, and its interpretation:

For reliability to be calculated, it is incumbent on the scientific investigator to document his or her procedure. This must be accomplished at such a level of abstraction that the loci of decisions internal to the project are made apparent. The curious public deserves to know how the qualitative researcher prepares him or herself for the endeavour, and how the data is collected and analysed.

The study addresses these requirements by discussing my point of departure regarding experience and framework, the sampling and data collection procedures, and data analysis.

Case studies often lack academic rigor and are, as such, regarded as inferior to more rigorous methods where there are more specific guidelines for collecting and analyzing data. These criticisms stress that there is a need to be very explicit about the choices one makes and the need to justify them.

One reason why case studies are criticized may be that researchers disagree about the definition and the purpose of carrying out case studies. Case studies have been regarded as a design (Cook and Campbell 1979), as a qualitative methodology (Cassell and Symon 1994), as a particular data collection procedure (Andersen 1997), and as a research strategy (Yin 1989). Furthermore, the purpose for carrying out case studies is unclear. Some regard case studies as supplements to more rigorous qualitative studies to be carried out in the early stage of the research process; others claim that it can be used for multiple purposes and as a research strategy in its own right (Gummesson 1988; Yin 1989). Given this unclear status, researchers need to be very clear about their interpretation of the case study and the purpose of carrying out the study.

This article has taken Yin’s (1989) definition of the case study as a research strategy as a starting point and argued that the choice of the case study should be guided by the research question(s). In the illustrative study, I used a case study strategy because of a need to explore sensitive, ill-defined concepts in depth, over time, taking into account the context and history of the mergers and the existing knowledge about the phenomenon. However, the choice of a case study strategy extended rather than limited the number of decisions to be made. In Schramm’s (1971, cited in Yin 1989:22–23) words, “The essence of a case study, the central tendency among all types of case study, is that it tries to illuminate a decision or set of decisions, why they were taken, how they were implemented, and with what result.”

Hence, the purpose of this article has been to illustrate the wide range of decisions that need to be made in the context of a particular case study and to discuss the methodological considerations linked to these decisions. I argue that there is a particular need in case studies to be explicit about the methodological choices one makes and that these choices can be best illustrated through a case study of the case study strategy.

As in all case studies, however, there are limitations to the generalizability of using one particular case study for illustrative purposes. As such, the strength of linking the methodological considerations to a specific context and phenomenon also becomes a weakness. However, I would argue that the questions raised in this article are applicable to many case studies, but that the answers are very likely to vary. The design choices are shown in Table 4 . Hence, researchers choosing a longitudinal, comparative case study need to address the same set of questions with regard to design, data collection procedures, and analysis, but they are likely to come up with other conclusions, given their different research questions.

Adler, P. A., and P. Adler. 1994. Observational techniques. In Handbook of qualitative research, edited by N. K. Denzin and Y. S. Lincoln, 377–92. London: Sage.

Andersen, S. S. 1997. Case-studier og generalisering: Forskningsstrategi og design (Case studies and generalization: Research strategy and design). Bergen, Norway: Fagbokforlaget.

Blake, R. R., and J. S. Mouton. 1985. How to achieve integration on the human side of the merger. Organizational Dynamics 13 (3): 41–56.

Bryman, A., and R. G. Burgess. 1999. Qualitative research. London: Sage.

Buono, A. F., and J. L. Bowditch. 1989. The human side of mergers and acquisitions. San Francisco: Jossey-Bass.

Cartwright, S., and C. L. Cooper. 1993. The psychological impact of mergers and acquisitions on the individual: A study of building society managers. Human Relations 46 (3): 327–47.

Cassell, C., and G. Symon, eds. 1994. Qualitative methods in organizational research: A practical guide. London: Sage.

Cook, T. D., and D. T. Campbell. 1979. Quasi experimentation: Design & analysis issues for field settings. Boston: Houghton Mifflin.

Crabtree, B. F., and W. L. Miller. 1992. Primary care research: A multimethod typology and qualitative road map. In Doing qualitative research: Methods for primary care, edited by B. F. Crabtree and W. L. Miller, 3–28. Vol. 3. Thousand Oaks, CA: Sage.

Creswell, J. W. 1998. Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, CA: Sage.

Denzin, N. K., and Y. S. Lincoln. 2000. Handbook of qualitative research. London: Sage.

Eisenhardt, K. M. 1989. Building theories from case study research. Academy of Management Review 14 (4): 532–50.

Flick, U. 1998. An introduction to qualitative research. London: Sage.

George, A. L. 1979. Case studies and theory development: The method of structured, focused comparison. In Diplomacy: New approaches in history, theory, and policy, edited by P. G. Lauren, 43–68. New York: Free Press.

Gioia, D. A., and K. Chittipeddi. 1991. Sensemaking and sensegiving in strategic change initiation. Strategic Management Journal 12:433–48.

Glaser, B. G., and A. L. Strauss. 1967. The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine.

Glick, W. H., G. P. Huber, C. C. Miller, D. H. Doty, and K. M. Sutcliffe. 1990. Studying changes in organizational design and effectiveness: Retrospective event histories and periodic assessments. Organization Science 1 (3): 293–312.

Gummesson, E. 1988. Qualitative methods in management research. Lund, Sweden: Studentlitteratur, Chartwell-Bratt.

Hakim, C. 1987. Research design. Strategies and choices in the design of social research. Boston: Unwin Hyman.

Hamel, J., S. Dufour, and D. Fortin. 1993. Case study methods. London: Sage.

Hartley, J. F. 1994. Case studies in organizational research. In Qualitative methods in organizational research: A practical guide, edited by C. Cassell and G. Symon, 209–29. London: Sage.

Haspeslagh, P., and D. B. Jemison. 1991. The challenge of renewal through acquisitions. Planning Review 19 (2): 27–32.

King, N. 1994. The qualitative research interview. In Qualitative methods in organizational research: A practical guide, edited by C. Cassell and G. Symon, 14–36. London: Sage.

Kirk, J., and M. L. Miller. 1986. Reliability and validity in qualitative research. Qualitative Research Methods Series 1. London: Sage.

Leonard-Barton, D. 1990.Adual methodology for case studies: Synergistic use of a longitudinal single site with replicated multiple sites. Organization Science 1 (3): 248–66.

Marshall, C., and G. B. Rossman. 1999. Designing qualitative research. London: Sage.

Maxwell, J. A. 1992. Understanding and validity in qualitative research. Harvard Educational Review 62 (3): 279–99.

Miles, M. B., and A. M. Huberman. 1994. Qualitative data analysis. 2d ed. London: Sage.

Mitchell, J. C. 1983. Case and situation analysis. Sociology Review 51 (2): 187–211.

Nachmias, C., and D. Nachmias. 1981. Research methods in the social sciences. London: Edward Arnhold.

Pettigrew, A. M. 1990. Longitudinal field research on change: Theory and practice. Organization Science 1 (3): 267–92.

___. (1992). The character and significance of strategic process research. Strategic Management Journal 13:5–16.

Rossman, G. B., and S. F. Rallis. 1998. Learning in the field: An introduction to qualitative research. Thousand Oaks, CA: Sage.

Schramm, W. 1971. Notes on case studies for instructional media projects. Working paper for Academy of Educational Development, Washington DC.

Schweiger, D. M., and J. P. Walsh. 1990. Mergers and acquisitions: An interdisciplinary view. In Research in personnel and human resource management, edited by G. R. Ferris and K. M. Rowland, 41–107. Greenwich, CT: JAI.

Silverman, D. 2000. Doing qualitative research: A practical handbook. London: Sage.

Stake, R. E. 1995. The art of case study research. London: Sage.

Strauss, A. L., and J. Corbin. 1990. Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage.

Sykes, W. 1990. Validity and reliability in qualitative market research: A review of the literature. Journal of the Market Research Society 32 (3): 289–328.

Waddington, D. 1994. Participant observation. In Qualitative methods in organizational research, edited by C. Cassell and G. Symon, 107–22. London: Sage.

Yin, R. K. 1989. Case study research: Design and methods. Applied Social Research Series, Vol. 5. London: Sage.

___. 1993. Applications of case study research. Applied Social Research Series, Vol. 34. London: Sage.

Christine Benedichte Meyer is an associate professor in the Department of Strategy and Management in the Norwegian School of Economics and Business Administration, Bergen-Sandviken, Norway. Her research interests are mergers and acquisitions, strategic change, and qualitative research. Recent publications include: “Allocation Processes in Mergers and Acquisitions: An Organisational Justice Perspective” (British Journal of Management 2001) and “Motives for Acquisitions in the Norwegian Financial Industry” (CEMS Business Review 1997).


Experimental Research: What it is + Types of designs


Any research conducted under scientifically acceptable conditions uses experimental methods. The success of experimental studies hinges on researchers confirming that the change in a variable is based solely on the manipulated variable. The research should establish a notable cause-and-effect relationship.

What is Experimental Research?

Experimental research is a study conducted with a scientific approach using two sets of variables. The first set acts as a constant, against which you measure the differences in the second set. Experimental methods are, for example, a form of quantitative research.

If you don’t have enough data to support your decisions, you must first determine the facts. This research gathers the data necessary to help you make better decisions.

You can conduct experimental research in the following situations:

  • Time is a vital factor in establishing a relationship between cause and effect.
  • The behavior between cause and effect is invariable.
  • You wish to understand the importance of cause and effect.

Experimental Research Design Types

The classic experimental design definition is: “The methods used to collect data in experimental studies.”

There are three primary types of experimental design:

  • Pre-experimental research design
  • True experimental research design
  • Quasi-experimental research design

The way you classify research subjects based on conditions or groups determines the type of research design you should use.

1. Pre-Experimental Design

A group, or various groups, are kept under observation after implementing cause and effect factors. You’ll conduct this research to understand whether further investigation is necessary for these particular groups.

You can break down pre-experimental research further into three types:

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Design

It relies on statistical analysis to prove or disprove a hypothesis, making it the most accurate form of research. Of the types of experimental design, only true design can establish a cause-effect relationship within a group. In a true experiment, three factors need to be satisfied:

  • There is a Control Group, which won’t be subject to changes, and an Experimental Group, which will experience the changed variables.
  • A variable that can be manipulated by the researcher
  • Random distribution

This experimental research method commonly occurs in the physical sciences.
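To make these three factors concrete, here is a minimal Python sketch, not from the article itself, that randomly distributes a pool of hypothetical subjects into control and experimental groups and then compares invented outcome scores with a two-sample t-test; every ID and score below is assumed for illustration.

```python
import random
from scipy import stats

# Hypothetical pool of 20 subject IDs; a real study would recruit participants.
subjects = list(range(20))
random.shuffle(subjects)  # random distribution of subjects to groups
control_ids, experimental_ids = subjects[:10], subjects[10:]

# Placeholder outcome scores measured after treatment (invented values).
control_scores = [52, 48, 50, 47, 51, 49, 53, 50, 46, 52]
treated_scores = [58, 61, 55, 60, 57, 62, 59, 56, 60, 58]

# Two-sample t-test: is the difference between the groups larger than chance?
t_stat, p_value = stats.ttest_ind(treated_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Random assignment is what licenses the causal reading of the t-test here; without it, the same arithmetic would only describe a group difference.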

3. Quasi-Experimental Design

The word “Quasi” indicates similarity. A quasi-experimental design is similar to an experimental one, but it is not the same. The difference between the two is the assignment of a control group. In this research, an independent variable is manipulated, but the participants of a group are not randomly assigned. Quasi-experimental research is used in field settings where random assignment is either irrelevant or not required.
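Because random assignment is absent, quasi-experimental analyses often compare changes rather than raw outcomes. Below is a minimal sketch, assuming two intact groups with invented pre- and post-intervention means, of a difference-in-differences calculation; it illustrates the general idea rather than a method prescribed by this article.

```python
# Hypothetical pre/post mean scores for two intact (non-randomized) groups.
treated_pre, treated_post = 61.0, 70.0  # group that received the intervention
control_pre, control_post = 60.0, 64.0  # comparison group left as-is

# Difference-in-differences: the treated group's change minus the control
# group's change approximates the effect of the intervention.
did = (treated_post - treated_pre) - (control_post - control_pre)
print(f"Estimated intervention effect: {did:.1f} points")
```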

Importance of Experimental Design

Experimental research is a powerful tool for understanding cause-and-effect relationships. It allows us to manipulate variables and observe the effects, which is crucial for understanding how different factors influence the outcome of a study.

But the importance of experimental research goes beyond that. It’s a critical method for many scientific and academic studies. It allows us to test theories, develop new products, and make groundbreaking discoveries.

For example, this research is essential for developing new drugs and medical treatments. Researchers can understand how a new drug works by manipulating dosage and administration variables and identifying potential side effects.

Similarly, experimental research is used in the field of psychology to test theories and understand human behavior. By manipulating variables such as stimuli, researchers can gain insights into how the brain works and identify new treatment options for mental health disorders.

It is also widely used in the field of education. It allows educators to test new teaching methods and identify what works best. By manipulating variables such as class size, teaching style, and curriculum, researchers can understand how students learn and identify new ways to improve educational outcomes.

In addition, experimental research is a powerful tool for businesses and organizations. By manipulating variables such as marketing strategies, product design, and customer service, companies can understand what works best and identify new opportunities for growth.

Advantages of Experimental Research

When talking about this research, we can think of human life. Babies do their own rudimentary experiments (such as putting objects in their mouths) to learn about the world around them, while older children and teens do experiments at school to learn more about science.

Early scientists used this research to prove that their hypotheses were correct. For example, Galileo Galilei and Antoine Lavoisier conducted various experiments to discover key concepts in physics and chemistry. The same is true of modern experts, who use this scientific method to see if new drugs are effective, discover treatments for diseases, and create new electronic devices (among others).

It’s vital to test new ideas or theories. Why put time, effort, and funding into something that may not work?

This research allows you to test your idea in a controlled environment before marketing. It also provides the best method to test your theory thanks to the following advantages:


  • Researchers have a stronger hold over variables to obtain desired results.
  • The subject or industry does not impact the effectiveness of experimental research. Any industry can implement it for research purposes.
  • The results are specific.
  • After analyzing the results, you can apply your findings to similar ideas or situations.
  • You can identify the cause and effect of a hypothesis. Researchers can further analyze this relationship to determine more in-depth ideas.
  • Experimental research makes an ideal starting point. The data you collect is a foundation for building more ideas and conducting more action research .

Whether you want to know how the public will react to a new product or if a certain food increases the chance of disease, experimental research is the best place to start. Begin your research by finding subjects using  QuestionPro Audience  and other tools today.



Case Study – Methods, Examples and Guide


Case Study Research

A case study is a research method that involves an in-depth examination and analysis of a particular phenomenon or case, such as an individual, organization, community, event, or situation.

It is a qualitative research approach that aims to provide a detailed and comprehensive understanding of the case being studied. Case studies typically involve multiple sources of data, including interviews, observations, documents, and artifacts, which are analyzed using various techniques, such as content analysis, thematic analysis, and grounded theory. The findings of a case study are often used to develop theories, inform policy or practice, or generate new research questions.

Types of Case Study

Types and Methods of Case Study are as follows:

Single-Case Study

A single-case study is an in-depth analysis of a single case. This type of case study is useful when the researcher wants to understand a specific phenomenon in detail.

For example, a researcher might conduct a single-case study on a particular individual to understand their experiences with a particular health condition, or on a specific organization to explore its management practices. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as content analysis or thematic analysis. The findings of a single-case study are often used to generate new research questions, develop theories, or inform policy or practice.

Multiple-Case Study

A multiple-case study involves the analysis of several cases that are similar in nature. This type of case study is useful when the researcher wants to identify similarities and differences between the cases.

For example, a researcher might conduct a multiple-case study on several companies to explore the factors that contribute to their success or failure. The researcher collects data from each case, compares and contrasts the findings, and uses various techniques to analyze the data, such as comparative analysis or pattern-matching. The findings of a multiple-case study can be used to develop theories, inform policy or practice, or generate new research questions.

Exploratory Case Study

An exploratory case study is used to explore a new or understudied phenomenon. This type of case study is useful when the researcher wants to generate hypotheses or theories about the phenomenon.

For example, a researcher might conduct an exploratory case study on a new technology to understand its potential impact on society. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as grounded theory or content analysis. The findings of an exploratory case study can be used to generate new research questions, develop theories, or inform policy or practice.

Descriptive Case Study

A descriptive case study is used to describe a particular phenomenon in detail. This type of case study is useful when the researcher wants to provide a comprehensive account of the phenomenon.

For example, a researcher might conduct a descriptive case study on a particular community to understand its social and economic characteristics. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as content analysis or thematic analysis. The findings of a descriptive case study can be used to inform policy or practice or generate new research questions.

Instrumental Case Study

An instrumental case study is used to understand a particular phenomenon that is instrumental in achieving a particular goal. This type of case study is useful when the researcher wants to understand the role of the phenomenon in achieving the goal.

For example, a researcher might conduct an instrumental case study on a particular policy to understand its impact on achieving a particular goal, such as reducing poverty. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as content analysis or thematic analysis. The findings of an instrumental case study can be used to inform policy or practice or generate new research questions.

Case Study Data Collection Methods

Here are some common data collection methods for case studies:

Interviews

Interviews involve asking questions to individuals who have knowledge or experience relevant to the case study. Interviews can be structured (where the same questions are asked to all participants) or unstructured (where the interviewer follows up on the responses with further questions). Interviews can be conducted in person, over the phone, or through video conferencing.

Observations

Observations involve watching and recording the behavior and activities of individuals or groups relevant to the case study. Observations can be participant (where the researcher actively participates in the activities) or non-participant (where the researcher observes from a distance). Observations can be recorded using notes, audio or video recordings, or photographs.

Documents

Documents can be used as a source of information for case studies. Documents can include reports, memos, emails, letters, and other written materials related to the case study. Documents can be collected from the case study participants or from public sources.

Surveys

Surveys involve asking a set of questions to a sample of individuals relevant to the case study. Surveys can be administered in person, over the phone, through mail or email, or online. Surveys can be used to gather information on attitudes, opinions, or behaviors related to the case study.

Artifacts

Artifacts are physical objects relevant to the case study. Artifacts can include tools, equipment, products, or other objects that provide insights into the case study phenomenon.

How to conduct Case Study Research

Conducting a case study research involves several steps that need to be followed to ensure the quality and rigor of the study. Here are the steps to conduct case study research:

  • Define the research questions: The first step in conducting a case study research is to define the research questions. The research questions should be specific, measurable, and relevant to the case study phenomenon under investigation.
  • Select the case: The next step is to select the case or cases to be studied. The case should be relevant to the research questions and should provide rich and diverse data that can be used to answer the research questions.
  • Collect data: Data can be collected using various methods, such as interviews, observations, documents, surveys, and artifacts. The data collection method should be selected based on the research questions and the nature of the case study phenomenon.
  • Analyze the data: The data collected from the case study should be analyzed using various techniques, such as content analysis, thematic analysis, or grounded theory (a minimal keyword-coding sketch follows this list). The analysis should be guided by the research questions and should aim to provide insights and conclusions relevant to the research questions.
  • Draw conclusions: The conclusions drawn from the case study should be based on the data analysis and should be relevant to the research questions. The conclusions should be supported by evidence and should be clearly stated.
  • Validate the findings: The findings of the case study should be validated by reviewing the data and the analysis with participants or other experts in the field. This helps to ensure the validity and reliability of the findings.
  • Write the report: The final step is to write the report of the case study research. The report should provide a clear description of the case study phenomenon, the research questions, the data collection methods, the data analysis, the findings, and the conclusions. The report should be written in a clear and concise manner and should follow the guidelines for academic writing.
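As a toy illustration of the analysis step, the Python sketch below counts keyword-based theme frequencies across two invented interview excerpts. The transcripts and codebook are hypothetical, and real content or thematic analysis is a far richer, iterative process; this shows only the mechanical core of frequency coding.

```python
from collections import Counter

# Hypothetical interview excerpts; real transcripts would be far longer.
transcripts = [
    "Staff felt the new policy improved communication but added workload.",
    "Workload concerns dominated, although communication with managers improved.",
]

# Hypothetical codebook mapping each theme to its indicator keywords.
codebook = {
    "communication": ["communication"],
    "workload": ["workload"],
}

theme_counts = Counter()
for text in transcripts:
    lowered = text.lower()
    for theme, keywords in codebook.items():
        theme_counts[theme] += sum(lowered.count(word) for word in keywords)

print(theme_counts)  # Counter({'communication': 2, 'workload': 2})
```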

Examples of Case Study

Here are some examples of case study research:

  • The Hawthorne Studies: Conducted between 1924 and 1932, the Hawthorne Studies were a series of case studies conducted by Elton Mayo and his colleagues to examine the impact of work environment on employee productivity. The studies were conducted at the Hawthorne Works plant of the Western Electric Company near Chicago and included interviews, observations, and experiments.
  • The Stanford Prison Experiment: Conducted in 1971, the Stanford Prison Experiment was a case study conducted by Philip Zimbardo to examine the psychological effects of power and authority. The study involved simulating a prison environment and assigning participants to the role of guards or prisoners. The study was controversial due to the ethical issues it raised.
  • The Challenger Disaster: The Challenger Disaster was a case study conducted to examine the causes of the Space Shuttle Challenger explosion in 1986. The study included interviews, observations, and analysis of data to identify the technical, organizational, and cultural factors that contributed to the disaster.
  • The Enron Scandal: The Enron Scandal was a case study conducted to examine the causes of the Enron Corporation’s bankruptcy in 2001. The study included interviews, analysis of financial data, and review of documents to identify the accounting practices, corporate culture, and ethical issues that led to the company’s downfall.
  • The Fukushima Nuclear Disaster : The Fukushima Nuclear Disaster was a case study conducted to examine the causes of the nuclear accident that occurred at the Fukushima Daiichi Nuclear Power Plant in Japan in 2011. The study included interviews, analysis of data, and review of documents to identify the technical, organizational, and cultural factors that contributed to the disaster.

Application of Case Study

Case studies have a wide range of applications across various fields and industries. Here are some examples:

Business and Management

Case studies are widely used in business and management to examine real-life situations and develop problem-solving skills. Case studies can help students and professionals to develop a deep understanding of business concepts, theories, and best practices.

Healthcare

Case studies are used in healthcare to examine patient care, treatment options, and outcomes. Case studies can help healthcare professionals to develop critical thinking skills, diagnose complex medical conditions, and develop effective treatment plans.

Education

Case studies are used in education to examine teaching and learning practices. Case studies can help educators to develop effective teaching strategies, evaluate student progress, and identify areas for improvement.

Social Sciences

Case studies are widely used in social sciences to examine human behavior, social phenomena, and cultural practices. Case studies can help researchers to develop theories, test hypotheses, and gain insights into complex social issues.

Law and Ethics

Case studies are used in law and ethics to examine legal and ethical dilemmas. Case studies can help lawyers, policymakers, and ethical professionals to develop critical thinking skills, analyze complex cases, and make informed decisions.

Purpose of Case Study

The purpose of a case study is to provide a detailed analysis of a specific phenomenon, issue, or problem in its real-life context. A case study is a qualitative research method that involves the in-depth exploration and analysis of a particular case, which can be an individual, group, organization, event, or community.

The primary purpose of a case study is to generate a comprehensive and nuanced understanding of the case, including its history, context, and dynamics. Case studies can help researchers to identify and examine the underlying factors, processes, and mechanisms that contribute to the case and its outcomes. This can help to develop a more accurate and detailed understanding of the case, which can inform future research, practice, or policy.

Case studies can also serve other purposes, including:

  • Illustrating a theory or concept: Case studies can be used to illustrate and explain theoretical concepts and frameworks, providing concrete examples of how they can be applied in real-life situations.
  • Developing hypotheses: Case studies can help to generate hypotheses about the causal relationships between different factors and outcomes, which can be tested through further research.
  • Providing insight into complex issues: Case studies can provide insights into complex and multifaceted issues, which may be difficult to understand through other research methods.
  • Informing practice or policy: Case studies can be used to inform practice or policy by identifying best practices, lessons learned, or areas for improvement.

Advantages of Case Study Research

There are several advantages of case study research, including:

  • In-depth exploration: Case study research allows for a detailed exploration and analysis of a specific phenomenon, issue, or problem in its real-life context. This can provide a comprehensive understanding of the case and its dynamics, which may not be possible through other research methods.
  • Rich data: Case study research can generate rich and detailed data, including qualitative data such as interviews, observations, and documents. This can provide a nuanced understanding of the case and its complexity.
  • Holistic perspective: Case study research allows for a holistic perspective of the case, taking into account the various factors, processes, and mechanisms that contribute to the case and its outcomes. This can help to develop a more accurate and comprehensive understanding of the case.
  • Theory development: Case study research can help to develop and refine theories and concepts by providing empirical evidence and concrete examples of how they can be applied in real-life situations.
  • Practical application: Case study research can inform practice or policy by identifying best practices, lessons learned, or areas for improvement.
  • Contextualization: Case study research takes into account the specific context in which the case is situated, which can help to understand how the case is influenced by the social, cultural, and historical factors of its environment.

Limitations of Case Study Research

There are several limitations of case study research, including:

  • Limited generalizability: Case studies are typically focused on a single case or a small number of cases, which limits the generalizability of the findings. The unique characteristics of the case may not be applicable to other contexts or populations, which may limit the external validity of the research.
  • Biased sampling: Case studies may rely on purposive or convenience sampling, which can introduce bias into the sample selection process. This may limit the representativeness of the sample and the generalizability of the findings.
  • Subjectivity: Case studies rely on the interpretation of the researcher, which can introduce subjectivity into the analysis. The researcher’s own biases, assumptions, and perspectives may influence the findings, which may limit the objectivity of the research.
  • Limited control: Case studies are typically conducted in naturalistic settings, which limits the control that the researcher has over the environment and the variables being studied. This may limit the ability to establish causal relationships between variables.
  • Time-consuming: Case studies can be time-consuming to conduct, as they typically involve a detailed exploration and analysis of a specific case. This may limit the feasibility of conducting multiple case studies or conducting case studies in a timely manner.
  • Resource-intensive: Case studies may require significant resources, including time, funding, and expertise. This may limit the ability of researchers to conduct case studies in resource-constrained settings.


Experimental Research Designs: Types, Examples & Methods

Experimental research is the most familiar type of research design for individuals in the physical sciences and a host of other fields. This is mainly because experimental research is a classical scientific experiment, similar to those performed in high school science classes.

Imagine taking 2 samples of the same plant and exposing one of them to sunlight, while the other is kept away from sunlight. Let the plant exposed to sunlight be called sample A, while the latter is called sample B.

If, after the duration of the research, we find that sample A grows while sample B dies, even though both are regularly watered and otherwise given the same treatment, we can conclude that sunlight aids growth in all similar plants.

What is Experimental Research?

Experimental research is a scientific approach to research, where one or more independent variables are manipulated and applied to one or more dependent variables to measure their effect on the latter. The effect of the independent variables on the dependent variables is usually observed and recorded over some time, to aid researchers in drawing a reasonable conclusion regarding the relationship between these 2 variable types.

The experimental research method is widely used in physical and social sciences, psychology, and education. It is based on the comparison between two or more groups with a straightforward logic, which may, however, be difficult to execute.

Mostly related to a laboratory test procedure, experimental research designs involve collecting quantitative data and performing statistical analysis on them during research, which makes this an example of a quantitative research method.

What are The Types of Experimental Research Design?

The types of experimental research design are determined by the way the researcher assigns subjects to different conditions and groups. They are of 3 types, namely: pre-experimental, quasi-experimental, and true experimental research.

Pre-experimental Research Design

In pre-experimental research design, either a group or various dependent groups are observed for the effect of the application of an independent variable which is presumed to cause change. It is the simplest form of experimental research design and is treated with no control group.

Although very practical, pre-experimental research falls short of several true-experimental criteria. The pre-experimental research design is further divided into three types:

  • One-shot Case Study Research Design

In this type of experimental study, only one dependent group or variable is considered. The study is carried out after some treatment which was presumed to cause change, making it a posttest study.

  • One-group Pretest-posttest Research Design: 

This research design combines pretest and posttest studies by carrying out a test on a single group both before the treatment is administered and after it is administered.

  • Static-group Comparison: 

In a static-group comparison study, 2 or more groups are placed under observation, where only one of the groups is subjected to some treatment while the other groups are held static. All the groups are post-tested, and the observed differences between the groups are assumed to be a result of the treatment.

Quasi-experimental Research Design

The word “quasi” means partial, half, or pseudo. Quasi-experimental research therefore bears a resemblance to true experimental research, but it is not the same. In quasi-experiments, the participants are not randomly assigned, and as such, quasi-experiments are used in settings where randomization is difficult or impossible.

 This is very common in educational research, where administrators are unwilling to allow the random selection of students for experimental samples.

Some examples of quasi-experimental research design include the time series, the nonequivalent control group design, and the counterbalanced design.

True Experimental Research Design

The true experimental research design relies on statistical analysis to prove or disprove a hypothesis. It is the most accurate type of experimental design and may be carried out with or without a pretest on at least 2 randomly assigned groups of subjects.

The true experimental research design must contain a control group, a variable that can be manipulated by the researcher, and the distribution must be random. The classification of true experimental design include:

  • The posttest-only Control Group Design: In this design, subjects are randomly selected and assigned to the 2 groups (control and experimental), and only the experimental group is treated. After close observation, both groups are post-tested, and a conclusion is drawn from the difference between these groups.
  • The pretest-posttest Control Group Design: For this control group design, subjects are randomly assigned to the 2 groups, both are pretested, but only the experimental group is treated. After close observation, both groups are post-tested to measure the degree of change in each group.
  • Solomon four-group Design: This is the combination of the posttest-only and the pretest-posttest control group designs. In this case, the randomly selected subjects are placed into 4 groups.

The first two of these groups are tested using the posttest-only method, while the other two are tested using the pretest-posttest method.
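The group-splitting logic of the Solomon four-group design can be sketched in a few lines of Python. The participant IDs below are hypothetical, and the group labels are descriptive shorthand rather than terms from this article.

```python
import random

# Hypothetical pool of 40 participant IDs, assumed for illustration only.
participants = list(range(40))
random.shuffle(participants)  # random placement into the four groups

groups = {
    "pretest + treatment + posttest": participants[0:10],
    "pretest + no treatment + posttest": participants[10:20],
    "treatment + posttest only": participants[20:30],
    "no treatment + posttest only": participants[30:40],
}

for name, members in groups.items():
    print(f"{name}: {len(members)} subjects")
```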

Examples of Experimental Research

Experimental research examples are different, depending on the type of experimental research design that is being considered. The most basic example of experimental research is laboratory experiments, which may differ in nature depending on the subject of research.

Administering Exams After The End of Semester

During the semester, students in a class are lectured on particular courses and an exam is administered at the end of the semester. In this case, the students are the subjects of the research, while the lectures are the treatment (the independent variable) applied to them.

Only one group of carefully selected subjects is considered in this research, making it a pre-experimental research design example. We also notice that tests are only carried out at the end of the semester, not at the beginning, which makes it easy to conclude that this is a one-shot case study design.

Employee Skill Evaluation

Before employing a job seeker, organizations conduct tests that are used to screen out less qualified candidates from the pool of qualified applicants. This way, organizations can determine an employee’s skill set at the point of employment.

In the course of employment, organizations also carry out employee training to improve employee productivity and generally grow the organization. Further evaluation is carried out at the end of each training to test the impact of the training on employee skills, and test for improvement.

Here, the subject is the employee, while the treatment is the training conducted. This is a pretest-posttest control group experimental research example.
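A before-and-after comparison like this is commonly analyzed with a paired test. The sketch below uses invented scores for eight hypothetical employees and checks only the within-group change; a full pretest-posttest control group design would also include an untrained comparison group.

```python
from scipy import stats

# Hypothetical skill scores for the same 8 employees, before and after training.
pretest = [62, 70, 58, 75, 66, 73, 61, 68]
posttest = [71, 74, 65, 80, 70, 79, 66, 75]

# Paired t-test: did scores change significantly within the same subjects?
t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```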

Evaluation of Teaching Method

Let us consider an academic institution that wants to evaluate the teaching methods of 2 teachers to determine which is better. Imagine a case where the students assigned to each teacher are carefully selected, perhaps at the request of parents or on the basis of behavior and ability.

This is a nonequivalent group design example because the samples are not equivalent. By evaluating the effectiveness of each teacher's method this way, we may draw conclusions after a post-test has been carried out.

However, the results may be influenced by factors like a student's natural aptitude. For example, a very smart student will grasp the material more easily than his or her peers, irrespective of the teaching method.

What are the Characteristics of Experimental Research?  

Experimental research contains dependent, independent and extraneous variables. The dependent variables are the outcomes that are measured on the subjects of the research.

The independent variables are the experimental treatments exerted on those subjects. Extraneous variables, on the other hand, are other factors affecting the experiment that may also contribute to the change.

The setting is where the experiment is carried out. Many experiments are carried out in the laboratory, where control can be exerted on the extraneous variables, thereby eliminating them.

Other experiments are carried out in a less controllable setting. The choice of setting used in research depends on the nature of the experiment being carried out.

  • Multivariable

Experimental research may include multiple independent variables, e.g. time, skills, test scores, etc.

Why Use Experimental Research Design?  

Experimental research design can be majorly used in physical sciences, social sciences, education, and psychology. It is used to make predictions and draw conclusions on a subject matter. 

Some uses of experimental research design are highlighted below.

  • Medicine: Experimental research is used to provide the proper treatment for diseases. In most cases, rather than directly using patients as the research subject, researchers take a sample of bacteria from the patient's body and treat it with the developed antibacterial agent.

The changes observed during this period are recorded and evaluated to determine its effectiveness. This process can be carried out using different experimental research methods.

  • Education: Aside from science subjects like Chemistry and Physics, which involve teaching students how to perform experimental research, it can also be used to improve the standards of an academic institution. This includes testing students' knowledge on different topics, coming up with better teaching methods, and implementing other programs that aid student learning.
  • Human Behavior: Social scientists are the ones who mostly use experimental research to test human behaviour. For example, consider 2 people randomly chosen to be the subjects of social interaction research, where one person is placed in a room without human interaction for 1 year.

The other person is placed in a room with a few other people, enjoying human interaction. There will be a difference in their behaviour at the end of the experiment.

  • UI/UX: During the product development phase, one of the major aims of the product team is to create a great user experience with the product. Therefore, before launching the final product design, potential users are brought in to interact with the product.

For example, when it is difficult to choose how to position a button or feature on the app interface, a random sample of product testers is allowed to test the 2 designs, and how the button positioning influences user interaction is recorded.

What are the Disadvantages of Experimental Research?  

  • It is highly prone to human error due to its dependency on variable control, which may not be properly implemented. These errors could undermine the validity of the experiment and the research being conducted.
  • Exerting control over extraneous variables may create unrealistic situations. Eliminating real-life variables will result in inaccurate conclusions. This may also result in researchers controlling the variables to suit their personal preferences.
  • It is a time-consuming process. Much time is spent manipulating the independent variables and waiting for their effect on the dependent variables to manifest.
  • It is expensive.
  • It is very risky and may have ethical complications that cannot be ignored. This is common in medical research, where failed trials may lead to a patient's death or a deteriorating health condition.
  • Experimental research results are not descriptive.
  • Response bias can also be introduced by the research subjects.
  • Human responses in experimental research can be difficult to measure.

What are the Data Collection Methods in Experimental Research?  

Data collection methods in experimental research are the different ways in which data can be collected for experimental research. They are used in different cases, depending on the type of research being carried out.

1. Observational Study

This type of study is carried out over a long period. It measures and observes the variables of interest without changing existing conditions.

When researching the effect of social interaction on human behavior, the subjects placed in 2 different environments are observed throughout the research. No matter what behavior a subject exhibits during this period, its conditions will not be changed.

This may be a very risky thing to do in medical cases because it may lead to death or worse medical conditions.

2. Simulations

This procedure uses mathematical, physical, or computer models to replicate a real-life process or situation. It is frequently used when the actual situation is too expensive, dangerous, or impractical to replicate in real life.

This method is commonly used in engineering and operational research for learning purposes and sometimes as a tool to estimate possible outcomes of real research. Some common simulation software packages are Simulink, MATLAB, and Simul8.

Not all kinds of experimental research can be carried out using simulation as a data collection tool. It is very impractical for a lot of laboratory-based research that involves chemical processes.
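Setting dedicated packages aside, the core idea of simulation fits in plain Python. The sketch below is a minimal Monte Carlo estimate of a failure probability under an assumed exponential lifetime model; the mean lifetime and the 1,000-hour threshold are invented parameters. Under that model the true value is 1 - exp(-1000/1500), roughly 0.49, so the simulated estimate should land nearby.

```python
import random

# Monte Carlo sketch: estimate the chance a component fails within 1,000 hours,
# assuming (hypothetically) exponentially distributed lifetimes.
MEAN_LIFETIME_HOURS = 1500  # assumed model parameter, not measured data
TRIALS = 100_000

failures = sum(
    random.expovariate(1 / MEAN_LIFETIME_HOURS) < 1000 for _ in range(TRIALS)
)
print(f"Estimated failure probability: {failures / TRIALS:.3f}")
```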

3. Surveys

A survey is a tool used to gather relevant data about the characteristics of a population and is one of the most common data collection tools. A survey consists of a group of questions prepared by the researcher, to be answered by the research subject.

Surveys can be shared with the respondents both physically and electronically. When collecting data through surveys, the kind of data collected depends on the respondent, and researchers have limited control over it.

Formplus is the best tool for collecting experimental data using surveys. It has relevant features that will aid the data collection process and can also be used in other aspects of experimental research.

Differences between Experimental and Non-Experimental Research 

1. In experimental research, the researcher can control and manipulate the environment of the research, including the predictor variable which can be changed. On the other hand, non-experimental research cannot be controlled or manipulated by the researcher at will.

This is because it takes place in a real-life setting, where extraneous variables cannot be eliminated. Therefore, it is more difficult to draw conclusions from non-experimental studies, even though they are much more flexible and allow for a greater range of study fields.

2. The relationship between cause and effect cannot be established in non-experimental research, while it can be established in experimental research. This may be because many extraneous variables also influence the changes in the research subject, making it difficult to point at a particular variable as the cause of a particular change.

3. Independent variables are not introduced, withdrawn, or manipulated in non-experimental designs, but the same may not be said about experimental research.

Experimental Research vs. Alternatives and When to Use Them

1. Experimental Research vs Causal-Comparative Research

Experimental research enables you to control variables and identify how the independent variable affects the dependent variable. Causal-comparative research finds the cause-and-effect relationship between variables by comparing already existing groups that are affected differently by the independent variable.

For example, consider a study of how K-12 education affects child and teenage development. An experiment would split children into groups, where some receive formal K-12 education while others do not. This is not ethical, because every child has a right to education. So, what we do instead is compare already existing groups of children who are receiving formal education with those who, due to their circumstances, cannot.

Pros and Cons of Experimental vs Causal-Comparative Research

  • Causal-Comparative:   Strengths:  More realistic than experiments, can be conducted in real-world settings.  Weaknesses:  Establishing causality can be weaker due to the lack of manipulation.

2. Experimental Research vs Correlational Research

When experimenting, you are trying to establish a cause-and-effect relationship between different variables. For example, to establish the effect of heat on water, you keep changing the temperature (the independent variable) and observe how it affects the water (the dependent variable).

For correlational research, you are not necessarily interested in the why or the cause-and-effect relationship between the variables, you are focusing on the relationship. Using the same water and temperature example, you are only interested in the fact that they change, you are not investigating which of the variables or other variables causes them to change.
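Sticking with the water-and-temperature example, the sketch below computes a Pearson correlation over invented paired measurements. It quantifies the strength of the relationship without manipulating anything, which is exactly the contrast with experimentation drawn above.

```python
import numpy as np

# Hypothetical paired observations: water temperature and evaporation rate.
temperature = np.array([10, 15, 20, 25, 30, 35])
evaporation = np.array([1.1, 1.6, 2.2, 2.9, 3.3, 4.0])

# Pearson's r describes the strength of the association only; unlike an
# experiment, it says nothing about which variable (if either) is the cause.
r = np.corrcoef(temperature, evaporation)[0, 1]
print(f"r = {r:.2f}")
```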

Pros and Cons of Experimental vs Correlational Research

3. Experimental Research vs Descriptive Research

With experimental research, you alter the independent variable to see how it affects the dependent variable, but with descriptive research you are simply studying the characteristics of the variable you are studying.

So, in an experiment to see how blown glass reacts to temperature, experimental research would keep altering the temperature to varying levels of high and low to see how it affects the dependent variable (glass). But descriptive research would investigate the glass properties.

Pros and Cons of Experimental vs Descriptive Research

4. Experimental Research vs Action Research

Experimental research tests for causal relationships by focusing on one independent variable vs the dependent variable and keeps other variables constant. So, you are testing hypotheses and using the information from the research to contribute to knowledge.

However, with action research, you are using a real-world setting which means you are not controlling variables. You are also performing the research to solve actual problems and improve already established practices.

For example, suppose you are testing how long commutes affect workers' productivity. With experimental research, you would vary the length of the commute to see how the time affects work. But with action research, you would account for other factors such as weather, commute route, nutrition, etc. Also, experimental research helps you understand the relationship between commute time and productivity, while action research helps you look for ways to improve it.

Pros and Cons of Experimental vs Action Research

Conclusion

Experimental research designs are often considered to be the standard in research designs. This is partly due to the common misconception that research is equivalent to scientific experiments—a component of experimental research design.

In this research design, one or more groups of subjects are randomly assigned to different treatments (i.e., independent variables manipulated by the researcher) and the results are observed in order to draw conclusions. One unique strength of experimental research is its ability to control the effect of extraneous variables.

Experimental research is suitable for research whose goal is to examine cause-effect relationships, e.g. explanatory research. It can be conducted in the laboratory or field settings, depending on the aim of the research that is being carried out. 


Experimental Research Design — 6 mistakes you should never make!

From their school days, students perform scientific experiments that provide results defining and proving the laws and theorems of science. These experiments rest on a strong foundation of experimental research design.

An experimental research design helps researchers execute their research objectives with more clarity and transparency.

In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.


What Is Experimental Research Design?

Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach using two sets of variables. Herein, the first set of variables acts as a constant, used to measure the differences of the second set. The best example of experimental research methods is quantitative research.

Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.

When Can a Researcher Conduct Experimental Research?

A researcher can conduct experimental research in the following situations —

  • When time is an important factor in establishing a relationship between the cause and effect.
  • When there is an invariable or never-changing behavior between the cause and effect.
  • Finally, when the researcher wishes to understand the importance of the cause and effect.

Importance of Experimental Research Design

To publish significant results, choosing a quality research design forms the foundation on which to build the research study. Moreover, an effective research design helps establish quality decision-making procedures, structures the research for easier data analysis, and addresses the main research question. Therefore, it is essential to devote undivided attention and time to creating the experimental research design before beginning the practical experiment.

By creating a research design, a researcher is also giving oneself time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, one could also avoid inconclusive results. If any part of the research design is flawed, it will reflect on the quality of the results derived.

Types of Experimental Research Designs

Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:

1. Pre-experimental Research Design

A research study could conduct pre-experimental research design when a group or many groups are under observation after implementing factors of cause and effect of the research. The pre-experimental design will help researchers understand whether further investigation is necessary for the groups under observation.

Pre-experimental research is of three types —

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Research Design

A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. However, in a true experiment, a researcher must satisfy these three factors —

  • There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
  • A variable that can be manipulated by the researcher
  • Random distribution of the variables

This type of experimental research is commonly observed in the physical sciences.

3. Quasi-experimental Research Design

The word “Quasi” means similarity. A quasi-experimental design is similar to a true experimental design. However, the difference between the two is the assignment of the control group. In this research design, an independent variable is manipulated, but the participants of a group are not randomly assigned. This type of research design is used in field settings where random assignment is either irrelevant or not required.

The classification of the research subjects, conditions, or groups determines the type of research design to be used.


Advantages of Experimental Research

Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:

  • Researchers have firm control over variables to obtain results.
  • The subject does not impact the effectiveness of experimental research. Anyone can implement it for research purposes.
  • The results are specific.
  • Post results analysis, research findings from the same dataset can be repurposed for similar research ideas.
  • Researchers can identify the cause and effect of the hypothesis and further analyze this relationship to determine in-depth ideas.
  • Experimental research makes an ideal starting point. The collected data could be used as a foundation to build new research ideas for further studies.

6 Mistakes to Avoid While Designing Your Research

There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.

1. Invalid Theoretical Framework

Usually, researchers miss out on checking whether their hypothesis is logical enough to be tested. If your research design does not have basic assumptions or postulates, then it is fundamentally flawed and you need to rework your research framework.

2. Inadequate Literature Study

Without a comprehensive research literature review , it is difficult to identify and fill the knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the research field, either by adding value to the pertinent literature or challenging previous findings and assumptions.

3. Insufficient or Incorrect Statistical Analysis

Statistical results are among the most trusted forms of scientific evidence. The ultimate goal of a research experiment is to gain valid and sustainable evidence. Therefore, incorrect statistical analysis could affect the quality of any quantitative research.
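One concrete piece of statistical planning that is often skipped is a power analysis before data collection. As a minimal sketch, assuming a two-group comparison with conventional thresholds (all the values below are illustrative assumptions, not recommendations from this article), the statsmodels snippet estimates the required sample size per group.

```python
from statsmodels.stats.power import TTestIndPower

# Sample size needed to detect a medium effect (Cohen's d = 0.5)
# with 80% power at alpha = 0.05; all values are assumed for illustration.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Approximately {n_per_group:.0f} subjects per group")  # about 64
```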

4. Undefined Research Problem

This is one of the most basic aspects of research design. The research problem statement must be clear and to do that, you must set the framework for the development of research questions that address the core problems.

5. Research Limitations

Every study has some type of limitations. You should anticipate and incorporate those limitations into your conclusion, as well as the basic research design. Include a statement in your manuscript about any perceived limitations and how you accounted for them while designing your experiment and drawing your conclusions.

6. Ethical Implications

Ethics is one of the most important yet least discussed topics. Your research design must include ways to minimize any risk to your participants while still addressing the research problem or question at hand. If your study does not uphold ethical norms, its objectives and validity can be questioned.

Experimental Research Design Example

In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.).

By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.
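A minimal Python sketch of this design might look as follows; the outcome numbers are invented purely for illustration, and the script assumes SciPy is available:

```python
# Hypothetical sketch of the plant experiment described above.
import random
from scipy import stats

samples = list(range(20))    # 20 plant samples
random.shuffle(samples)      # random assignment controls for the other variables
sunlight_group, dark_group = samples[:10], samples[10:]

# Invented biochemical outcomes (e.g., chlorophyll content) per sample:
sunlight_scores = [8.1, 7.9, 8.4, 8.0, 7.7, 8.3, 8.2, 7.8, 8.5, 8.0]
dark_scores = [5.2, 5.5, 4.9, 5.1, 5.4, 5.0, 5.3, 4.8, 5.2, 5.1]

# A two-sample t-test checks whether the group difference exceeds chance.
t, p = stats.ttest_ind(sunlight_scores, dark_scores)
print(f"t = {t:.2f}, p = {p:.4f}")
```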

Experimental research is often the final stage of the research process and is considered to provide conclusive, specific results. But it is not suited to every research question: it demands substantial resources, time, and money, and it is difficult to conduct without a foundation of prior research. Even so, it is widely used in research institutes and commercial industries because it yields the most conclusive results within the scientific approach.

Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!

Frequently Asked Questions

Why is randomization important in experimental research?
Randomization is important in experimental research because it helps ensure unbiased results. It also allows the cause-effect relationship to be measured within the group of interest.

Why does experimental research design matter?
An experimental research design lays the foundation of a study and structures the research to support sound decision-making.

How many types of experimental research designs are there?
There are three types of experimental research designs: pre-experimental, true experimental, and quasi-experimental.

What is the difference between an experimental and a quasi-experimental design?
1. In quasi-experimental research, assignment to the control group is non-random, unlike in a true experimental design, where it is random. 2. An experimental design always includes a control group, whereas a quasi-experimental design may not.

How does experimental research differ from descriptive research?
Experimental research establishes a cause-effect relationship by testing a theory or hypothesis using experimental and control groups. In contrast, descriptive research describes a topic by defining its variables and answering questions about them.



What Is a Research Design | Types, Guide & Examples

Published on June 7, 2021 by Shona McCombes. Revised on November 20, 2023 by Pritha Bhandari.

A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:

  • Your overall research objectives and approach
  • Whether you’ll rely on primary research or secondary research
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research objectives and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Other interesting articles
  • Frequently asked questions about research design


Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities—start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative approach: explore ideas and understand experiences in depth; data are collected in words and analyzed flexibly.
Quantitative approach: measure variables and describe frequencies, averages, and correlations; test hypotheses about relationships between variables.

Qualitative research designs tend to be more flexible and inductive, allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive, with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed-methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics.

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval?

At each stage of the research design process, make sure that your choices are practically feasible.


Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types.

  • Experimental and quasi-experimental designs allow you to test cause-and-effect relationships.
  • Descriptive and correlational designs allow you to measure variables and describe relationships between them.

Experimental: tests cause-and-effect relationships by manipulating an independent variable and measuring its effect on a dependent variable, with subjects randomly assigned to groups.
Quasi-experimental: also tests cause-and-effect relationships, but without random assignment (often using pre-existing groups).
Correlational: measures two or more variables and describes the relationship between them, without manipulation.
Descriptive: describes the characteristics, frequencies, or trends of variables, without testing relationships.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analyzing the data.

Grounded theory: builds a theory inductively (“from the ground up”) out of systematically collected and analyzed data.
Phenomenology: investigates a phenomenon by describing participants’ lived experience of it.

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study—plants, animals, organizations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling. The sampling method you use affects how confidently you can generalize your results to the population as a whole.

Probability sampling: every member of the population has a known chance of being selected, using some form of random selection.
Non-probability sampling: individuals are selected by non-random criteria, such as convenience or voluntary response.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
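As an illustration (ours, not the original article’s), the following Python sketch contrasts the two approaches on an invented population of students:

```python
# Hypothetical sketch: probability vs. non-probability (convenience) sampling.
import random

population = [f"student_{i}" for i in range(1, 501)]  # 500 students

# Probability sampling: every member has a known, equal chance of selection.
random_sample = random.sample(population, 50)

# Convenience sampling: e.g., the first 50 who happen to be available --
# easier to collect, but more likely to be unrepresentative.
convenience_sample = population[:50]
```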

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalize to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviors, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Questionnaires: respondents answer a fixed set of written questions, on paper or online.
Interviews: a researcher asks questions and records the responses, in person or remotely.

Observation methods

Observational studies allow you to collect data unobtrusively, observing characteristics, behaviors or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.


Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

Field Examples of data collection methods
Media & communication Collecting a sample of texts (e.g., speeches, articles, or social media posts) for data on cultural norms and narratives
Psychology Using technologies like neuroimaging, eye-tracking, or computer-based tasks to collect data on things like attention, emotional response, or reaction time
Education Using tests or assignments to collect data on knowledge and skills
Physical sciences Using scientific instruments to collect data on things like weight, blood pressure, or chemical composition

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what kinds of data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected—for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.


As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are high in reliability and validity.

Operationalization

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalization means turning these fuzzy ideas into measurable indicators.

If you’re using observations, which events or actions will you count?

If you’re using surveys, which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in—for example, questionnaires or inventories whose reliability and validity have already been established.

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.


For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.
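For instance, a pilot reliability check for a new multi-item questionnaire often computes Cronbach’s alpha, a standard internal-consistency statistic. The sketch below is our illustration, and the response matrix is invented:

```python
# Hypothetical pilot data: 5 respondents x 4 questionnaire items (1-5 ratings).
import numpy as np

responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
])

k = responses.shape[1]                          # number of items
item_vars = responses.var(axis=0, ddof=1)       # variance of each item
total_var = responses.sum(axis=1).var(ddof=1)   # variance of the total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")  # values around 0.8+ suggest good consistency
```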

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample—by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid research bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organizing and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymize and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well-organized will save time when it comes to analyzing it. It can also help other researchers validate and add to your findings (high replicability).

On its own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyze the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis. With statistics, you can summarize your sample data, make estimates, and test hypotheses.

Using descriptive statistics, you can summarize your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
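As a quick illustration of the three summaries listed above (our sketch, using invented test scores), Python’s standard library is enough:

```python
# Hypothetical test scores for a small sample.
import statistics
from collections import Counter

scores = [62, 70, 74, 74, 75, 78, 81, 85, 90]

print(Counter(scores))           # distribution: frequency of each score
print(statistics.mean(scores))   # central tendency: the mean score
print(statistics.stdev(scores))  # variability: the standard deviation
```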

Using inferential statistics, you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
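For example, a minimal sketch of both kinds of test (our illustration, on invented data, assuming SciPy is installed):

```python
from scipy import stats

# Comparison test: do two groups differ on an outcome?
group_a = [5.1, 6.0, 5.5, 5.8, 6.2, 5.4]
group_b = [4.2, 4.8, 4.5, 5.0, 4.4, 4.6]
t, p = stats.ttest_ind(group_a, group_b)

# Correlation test: are two variables associated?
hours = [1, 2, 3, 4, 5, 6]
grades = [52, 58, 61, 70, 74, 79]
r, p_corr = stats.pearsonr(hours, grades)

print(f"t-test: t = {t:.2f}, p = {p:.3f}")
print(f"correlation: r = {r:.2f}, p = {p_corr:.3f}")
```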

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis.

Thematic analysis: coding the data and identifying patterns of meaning (themes) across it.
Discourse analysis: examining how language is used in context, with a focus on communication and meaning.

There are many other ways of analyzing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

Frequently asked questions about research design

A research design is a strategy for answering your research question. It defines your overall approach and determines how you will collect and analyze data.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data, and that you use the right kind of analysis to answer your questions. This allows you to draw valid, trustworthy conclusions.

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships.

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study, ethnography, and grounded theory designs.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative)
  • The type of design you’re using (e.g., a survey, experiment, or case study)
  • Your data collection methods (e.g., questionnaires, observations)
  • Your data collection procedures (e.g., operationalization, timing, and data management)
  • Your data analysis methods (e.g., statistical tests or thematic analysis)

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it’s important to consider how you will operationalize the variables that you want to measure.

A research project is an academic, scientific, or professional undertaking to answer a research question. Research projects can take many forms, such as qualitative or quantitative, descriptive, longitudinal, experimental, or correlational. What kind of research approach you choose will depend on your topic.



Chapter 3. Psychological Science

3.2 Psychologists Use Descriptive, Correlational, and Experimental Research Designs to Understand Behaviour

Learning Objectives

  • Differentiate the goals of descriptive, correlational, and experimental research designs and explain the advantages and disadvantages of each.
  • Explain the goals of descriptive research and the statistical techniques used to interpret it.
  • Summarize the uses of correlational research and describe why correlational research cannot be used to infer causality.
  • Review the procedures of experimental research and explain how it can be used to draw causal inferences.

Psychologists agree that if their ideas and theories about human behaviour are to be taken seriously, they must be backed up by data. However, the research of different psychologists is designed with different goals in mind, and the different goals require different approaches. These varying approaches, summarized in Table 3.2, are known as research designs. A research design is the specific method a researcher uses to collect, analyze, and interpret data. Psychologists use three major types of research designs in their research, and each provides an essential avenue for scientific investigation. Descriptive research is research designed to provide a snapshot of the current state of affairs. Correlational research is research designed to discover relationships among variables and to allow the prediction of future events from present knowledge. Experimental research is research in which initial equivalence among research participants in more than one group is created, followed by a manipulation of a given experience for these groups and a measurement of the influence of the manipulation. Each of the three research designs varies according to its strengths and limitations, and it is important to understand how each differs.

Table 3.2 Characteristics of the Three Research Designs
Research design Goal Advantages Disadvantages
Descriptive To create a snapshot of the current state of affairs Provides a relatively complete picture of what is occurring at a given time. Allows the development of questions for further study. Does not assess relationships among variables. May be unethical if participants do not know they are being observed.
Correlational To assess the relationships between and among two or more variables Allows testing of expected relationships between and among variables and the making of predictions. Can assess these relationships in everyday life events. Cannot be used to draw inferences about the causal relationships between and among the variables.
Experimental To assess the causal impact of one or more experimental manipulations on a dependent variable Allows drawing of conclusions about the causal relationships among variables. Cannot experimentally manipulate many important variables. May be expensive and time consuming.
Source: Stangor, 2011.

Descriptive Research: Assessing the Current State of Affairs

Descriptive research is designed to create a snapshot of the current thoughts, feelings, or behaviour of individuals. This section reviews three types of descriptive research: case studies, surveys, and naturalistic observation (Figure 3.4).

Sometimes the data in a descriptive research project are based on only a small set of individuals, often only one person or a single small group. These research designs are known as case studies — descriptive records of one or more individuals’ experiences and behaviour. Sometimes case studies involve ordinary individuals, as when developmental psychologist Jean Piaget used his observation of his own children to develop his stage theory of cognitive development. More frequently, case studies are conducted on individuals who have unusual or abnormal experiences or characteristics or who find themselves in particularly difficult or stressful situations. The assumption is that by carefully studying individuals who are socially marginal, who are experiencing unusual situations, or who are going through a difficult phase in their lives, we can learn something about human nature.

Sigmund Freud was a master of using the psychological difficulties of individuals to draw conclusions about basic psychological processes. Freud wrote case studies of some of his most interesting patients and used these careful examinations to develop his important theories of personality. One classic example is Freud’s description of “Little Hans,” a child whose fear of horses the psychoanalyst interpreted in terms of repressed sexual impulses and the Oedipus complex (Freud, 1909/1964).

Another well-known case study is Phineas Gage, a man whose thoughts and emotions were extensively studied by cognitive psychologists after an iron tamping rod was blasted through his skull in an accident. Although there are questions about the interpretation of this case study (Kotowicz, 2007), it did provide early evidence that the brain’s frontal lobe is involved in emotion and morality (Damasio et al., 2005). An interesting example of a case study in clinical psychology is described by Rokeach (1964), who investigated in detail the beliefs of and interactions among three patients with schizophrenia, all of whom were convinced they were Jesus Christ.

In other cases, the data from descriptive research projects come in the form of a survey — a measure administered through either an interview or a written questionnaire to get a picture of the beliefs or behaviours of a sample of people of interest. The people chosen to participate in the research (known as the sample) are selected to be representative of all the people that the researcher wishes to know about (the population). In election polls, for instance, a sample is taken from the population of all “likely voters” in the upcoming elections.

The results of surveys may sometimes be rather mundane, such as “Nine out of 10 doctors prefer Tymenocin” or “The median income in the city of Hamilton is $46,712.” Yet other times (particularly in discussions of social behaviour), the results can be shocking: “More than 40,000 people are killed by gunfire in the United States every year” or “More than 60% of women between the ages of 50 and 60 suffer from depression.” Descriptive research is frequently used by psychologists to get an estimate of the prevalence (or incidence) of psychological disorders.

A final type of descriptive research — known as naturalistic observation — is research based on the observation of everyday events. For instance, a developmental psychologist who watches children on a playground and describes what they say to each other while they play is conducting descriptive research, as is a biopsychologist who observes animals in their natural habitats. One example of observational research involves a systematic procedure known as the strange situation, used to get a picture of how adults and young children interact. The data that are collected in the strange situation are systematically coded in a coding sheet such as that shown in Table 3.3.

Table 3.3 Sample Coding Form Used to Assess Child’s and Mother’s Behaviour in the Strange Situation
Coder name:
This table represents a sample coding sheet from an episode of the “strange situation,” in which an infant (usually about one year old) is observed playing in a room with two adults — the child’s mother and a stranger. Each of the four coding categories is scored by the coder from 1 (the baby makes no effort to engage in the behaviour) to 7 (the baby makes a significant effort to engage in the behaviour). More information about the meaning of the coding can be found in Ainsworth, Blehar, Waters, and Wall (1978).
Coding categories explained
Proximity The baby moves toward, grasps, or climbs on the adult.
Maintaining contact The baby resists being put down by the adult by crying or trying to climb back up.
Resistance The baby pushes, hits, or squirms to be put down from the adult’s arms.
Avoidance The baby turns away or moves away from the adult.
Episode Coding categories
Proximity Contact Resistance Avoidance
Mother and baby play alone 1 1 1 1
Mother puts baby down 4 1 1 1
Stranger enters room 1 2 3 1
Mother leaves room; stranger plays with baby 1 3 1 1
Mother re-enters, greets and may comfort baby, then leaves again 4 2 1 2
Stranger tries to play with baby 1 3 1 1
Mother re-enters and picks up baby 6 6 1 2
Source: Stangor, 2011.

The results of descriptive research projects are analyzed using descriptive statistics — numbers that summarize the distribution of scores on a measured variable. Most variables have distributions similar to that shown in Figure 3.5, where most of the scores are located near the centre of the distribution, and the distribution is symmetrical and bell-shaped. A data distribution that is shaped like a bell is known as a normal distribution.

A distribution can be described in terms of its central tendency — that is, the point in the distribution around which the data are centred — and its dispersion, or spread. The arithmetic average, or arithmetic mean, symbolized by the letter M, is the most commonly used measure of central tendency. It is computed by calculating the sum of all the scores of the variable and dividing this sum by the number of participants in the distribution (denoted by the letter N). In the data presented in Figure 3.5, the mean height of the students is 67.12 inches (170.5 cm).
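In symbols (a standard formula, shown here for reference, where each $x_i$ is an individual score):

$$ M = \frac{1}{N}\sum_{i=1}^{N} x_i $$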

In some cases, however, the data distribution is not symmetrical. This occurs when there are one or more extreme scores (known as outliers) at one end of the distribution. Consider, for instance, the variable of family income (see Figure 3.6), which includes an outlier (a value of $3,800,000). In this case the mean is not a good measure of central tendency. Although it appears from Figure 3.6 that the central tendency of the family income variable should be around $70,000, the mean family income is actually $223,960. The single very extreme income has a disproportionate impact on the mean, resulting in a value that does not well represent the central tendency.

The median is used as an alternative measure of central tendency when distributions are not symmetrical. The median is the score in the centre of the distribution, meaning that 50% of the scores are greater than the median and 50% of the scores are less than the median. In our case, the median household income ($73,000) is a much better indication of central tendency than is the mean household income ($223,960).

A final measure of central tendency, known as the mode, represents the value that occurs most frequently in the distribution. You can see from Figure 3.6 that the mode for the family income variable is $93,000 (it occurs four times).
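A short Python sketch (ours; the incomes are invented but mimic the pattern in Figure 3.6, with 24 typical incomes plus one extreme outlier) makes the contrast among the three measures concrete:

```python
import statistics

# 24 "typical" family incomes plus one extreme outlier, as in Figure 3.6.
incomes = [44_000, 48_000, 51_000, 55_000, 58_000, 60_000, 62_000, 64_000,
           66_000, 68_000, 70_000, 71_000, 73_000, 74_000, 75_000, 78_000,
           81_000, 84_000, 87_000, 93_000, 93_000, 93_000, 93_000, 104_000,
           3_800_000]

print(statistics.mean(incomes))    # pulled far upward by the single outlier
print(statistics.median(incomes))  # 73,000 -- better summary of the typical family
print(statistics.mode(incomes))    # 93,000 -- the most frequent value
```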

In addition to summarizing the central tendency of a distribution, descriptive statistics convey information about how the scores of the variable are spread around the central tendency. Dispersion refers to the extent to which the scores are all tightly clustered around the central tendency, as seen in Figure 3.7.

Or they may be more spread out away from it, as seen in Figure 3.8.

One simple measure of dispersion is to find the largest (the maximum) and the smallest (the minimum) observed values of the variable and to compute the range of the variable as the maximum observed score minus the minimum observed score. You can check that the range of the height variable in Figure 3.5 is 72 – 62 = 10. The standard deviation, symbolized as s, is the most commonly used measure of dispersion. Distributions with a larger standard deviation have more spread. The standard deviation of the height variable is s = 2.74, and the standard deviation of the family income variable is s = $745,337.
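In symbols (standard formulas, added here for reference, using the sample standard deviation with N – 1 in the denominator):

$$ \text{range} = x_{\max} - x_{\min}, \qquad s = \sqrt{\frac{\sum_{i=1}^{N}(x_i - M)^2}{N - 1}} $$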

An advantage of descriptive research is that it attempts to capture the complexity of everyday behaviour. Case studies provide detailed information about a single person or a small group of people, surveys capture the thoughts or reported behaviours of a large population of people, and naturalistic observation objectively records the behaviour of people or animals as it occurs naturally. Thus descriptive research is used to provide a relatively complete understanding of what is currently happening.

Despite these advantages, descriptive research has a distinct disadvantage in that, although it allows us to get an idea of what is currently happening, it is usually limited to static pictures. Although descriptions of particular experiences may be interesting, they are not always transferable to other individuals in other situations, nor do they tell us exactly why specific behaviours or events occurred. For instance, descriptions of individuals who have suffered a stressful event, such as a war or an earthquake, can be used to understand the individuals’ reactions to the event but cannot tell us anything about the long-term effects of the stress. And because there is no comparison group that did not experience the stressful situation, we cannot know what these individuals would be like if they hadn’t had the stressful experience.

Correlational Research: Seeking Relationships among Variables

In contrast to descriptive research, which is designed primarily to provide static pictures, correlational research involves the measurement of two or more relevant variables and an assessment of the relationship between or among those variables. For instance, the variables of height and weight are systematically related (correlated) because taller people generally weigh more than shorter people. In the same way, study time and memory errors are also related, because the more time a person is given to study a list of words, the fewer errors he or she will make. When there are two variables in the research design, one of them is called the predictor variable and the other the outcome variable. The research design can be visualized as shown in Figure 3.9, where the curved arrow represents the expected correlation between these two variables.

One way of organizing the data from a correlational study with two variables is to graph the values of each of the measured variables using a scatter plot. As you can see in Figure 3.10, a scatter plot is a visual image of the relationship between two variables. A point is plotted for each individual at the intersection of his or her scores for the two variables. When the association between the variables on the scatter plot can be easily approximated with a straight line, as in parts (a) and (b) of Figure 3.10, the variables are said to have a linear relationship.

When the straight line indicates that individuals who have above-average values for one variable also tend to have above-average values for the other variable, as in part (a), the relationship is said to be positive linear. Examples of positive linear relationships include those between height and weight, between education and income, and between age and mathematical abilities in children. In each case, people who score higher on one of the variables also tend to score higher on the other variable. Negative linear relationships, in contrast, as shown in part (b), occur when above-average values for one variable tend to be associated with below-average values for the other variable. Examples of negative linear relationships include those between the age of a child and the number of diapers the child uses, and between practice on and errors made on a learning task. In these cases, people who score higher on one of the variables tend to score lower on the other variable.

Relationships between variables that cannot be described with a straight line are known as nonlinear relationships. Part (c) of Figure 3.10 shows a common pattern in which the distribution of the points is essentially random. In this case there is no relationship at all between the two variables, and they are said to be independent. Parts (d) and (e) of Figure 3.10 show patterns of association in which, although there is an association, the points are not well described by a single straight line. For instance, part (d) shows the type of relationship that frequently occurs between anxiety and performance. Increases in anxiety from low to moderate levels are associated with performance increases, whereas increases in anxiety from moderate to high levels are associated with decreases in performance. Relationships that change in direction and thus are not described by a single straight line are called curvilinear relationships.

The most common statistical measure of the strength of linear relationships among variables is the Pearson correlation coefficient, which is symbolized by the letter r. The value of the correlation coefficient ranges from r = –1.00 to r = +1.00. The direction of the linear relationship is indicated by the sign of the correlation coefficient. Positive values of r (such as r = .54 or r = .67) indicate that the relationship is positive linear (i.e., the pattern of the dots on the scatter plot runs from the lower left to the upper right), whereas negative values of r (such as r = –.30 or r = –.72) indicate negative linear relationships (i.e., the dots run from the upper left to the lower right). The strength of the linear relationship is indexed by the distance of the correlation coefficient from zero (its absolute value). For instance, r = –.54 is a stronger relationship than r = .30, and r = .72 is a stronger relationship than r = –.57. Because the Pearson correlation coefficient only measures linear relationships, variables that have curvilinear relationships are not well described by r, and the observed correlation will be close to zero.
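In practice, r is rarely computed by hand. A minimal Python sketch (ours, with invented height and weight data) shows the calculation:

```python
import numpy as np

# Hypothetical height (inches) and weight (pounds) for seven people.
height = [62, 64, 66, 67, 69, 71, 72]
weight = [120, 135, 140, 150, 160, 172, 180]

r = np.corrcoef(height, weight)[0, 1]  # off-diagonal entry of the 2x2 matrix
print(f"r = {r:.2f}")  # close to +1.00: a strong positive linear relationship
```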

It is also possible to study relationships among more than two measures at the same time. A research design in which more than one predictor variable is used to predict a single outcome variable is analyzed through multiple regression (Aiken & West, 1991). Multiple regression is a statistical technique, based on correlation coefficients among variables, that allows predicting a single outcome variable from more than one predictor variable. For instance, Figure 3.11 shows a multiple regression analysis in which three predictor variables (salary, job satisfaction, and years employed) are used to predict a single outcome (job performance). The use of multiple regression analysis shows an important advantage of correlational research designs — they can be used to make predictions about a person’s likely score on an outcome variable (e.g., job performance) based on knowledge of other variables.
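A minimal sketch of this idea (ours, on invented data for six employees) fits the regression with NumPy’s least-squares solver:

```python
import numpy as np

# Hypothetical predictors for six employees.
salary = [40, 45, 50, 55, 60, 65]        # in $1,000s
satisfaction = [3, 4, 3, 5, 4, 5]        # 1-5 rating
years = [1, 2, 4, 5, 7, 9]               # years employed
performance = [55, 62, 64, 75, 74, 83]   # outcome: job performance score

# Stack an intercept column with the three predictors and solve for the weights.
X = np.column_stack([np.ones(6), salary, satisfaction, years])
coefs, *_ = np.linalg.lstsq(X, np.array(performance), rcond=None)

predicted = X @ coefs  # predicted performance for each employee
print(coefs)
```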

An important limitation of correlational research designs is that they cannot be used to draw conclusions about the causal relationships among the measured variables. Consider, for instance, a researcher who has hypothesized that viewing violent behaviour will cause increased aggressive play in children. He has collected, from a sample of Grade 4 children, a measure of how many violent television shows each child views during the week, as well as a measure of how aggressively each child plays on the school playground. From his collected data, the researcher discovers a positive correlation between the two measured variables.

Although this positive correlation appears to support the researcher’s hypothesis, it cannot be taken to indicate that viewing violent television causes aggressive behaviour. Although the researcher is tempted to assume that viewing violent television causes aggressive play, there are other possibilities. One alternative possibility is that the causal direction is exactly opposite from what has been hypothesized. Perhaps children who have behaved aggressively at school develop residual excitement that leads them to want to watch violent television shows at home (Figure 3.13):

Although this possibility may seem less likely, there is no way to rule out the possibility of such reverse causation on the basis of this observed correlation. It is also possible that both causal directions are operating and that the two variables cause each other (Figure 3.14).

Still another possible explanation for the observed correlation is that it has been produced by the presence of a common-causal variable (also known as a third variable). A common-causal variable is a variable that is not part of the research hypothesis but that causes both the predictor and the outcome variable and thus produces the observed correlation between them. In our example, a potential common-causal variable is the discipline style of the children’s parents. Parents who use a harsh and punitive discipline style may produce children who like to watch violent television and who also behave aggressively in comparison to children whose parents use less harsh discipline (Figure 3.15).

In this case, television viewing and aggressive play would be positively correlated (as indicated by the curved arrow between them), even though neither one caused the other but they were both caused by the discipline style of the parents (the straight arrows). When the predictor and outcome variables are both caused by a common-causal variable, the observed relationship between them is said to be spurious. A spurious relationship is a relationship between two variables in which a common-causal variable produces and “explains away” the relationship. If the effects of the common-causal variable were taken away, or controlled for, the relationship between the predictor and outcome variables would disappear. In the example, the relationship between aggression and television viewing might be spurious because, by controlling for the effect of the parents’ disciplining style, the relationship between television viewing and aggressive behaviour might go away.

Common-causal variables in correlational research designs can be thought of as mystery variables because, as they have not been measured, their presence and identity are usually unknown to the researcher. Since it is not possible to measure every variable that could cause both the predictor and outcome variables, the existence of an unknown common-causal variable is always a possibility. For this reason, we are left with the basic limitation of correlational research: correlation does not demonstrate causation. It is important that when you read about correlational research projects, you keep in mind the possibility of spurious relationships, and be sure to interpret the findings appropriately. Although correlational research is sometimes reported as demonstrating causality without any mention being made of the possibility of reverse causation or common-causal variables, informed consumers of research, like you, are aware of these interpretational problems.

In sum, correlational research designs have both strengths and limitations. One strength is that they can be used when experimental research is not possible because the predictor variables cannot be manipulated. Correlational designs also have the advantage of allowing the researcher to study behaviour as it occurs in everyday life. And we can also use correlational designs to make predictions — for instance, to predict the success of job trainees during a training session from their scores on a battery of tests. But we cannot use such correlational information to determine whether the training caused better job performance. For that, researchers rely on experiments.

Experimental Research: Understanding the Causes of Behaviour

The goal of experimental research design is to provide more definitive conclusions about the causal relationships among the variables in the research hypothesis than is available from correlational designs. In an experimental research design, the variables of interest are called the independent variable (or variables) and the dependent variable. The independent variable in an experiment is the causing variable that is created (manipulated) by the experimenter. The dependent variable in an experiment is a measured variable that is expected to be influenced by the experimental manipulation. The research hypothesis suggests that the manipulated independent variable or variables will cause changes in the measured dependent variables. We can diagram the research hypothesis by using an arrow that points in one direction. This demonstrates the expected direction of causality (Figure 3.16):

Research Focus: Video Games and Aggression

Consider an experiment conducted by Anderson and Dill (2000). The study was designed to test the hypothesis that playing violent video games would increase aggressive behaviour. In this research, male and female undergraduates from Iowa State University were given a chance to play either a violent video game (Wolfenstein 3D) or a nonviolent video game (Myst). During the experimental session, the participants played their assigned video games for 15 minutes. Then, after the play, each participant played a competitive game with an opponent in which the participant could deliver blasts of white noise through the earphones of the opponent. The operational definition of the dependent variable (aggressive behaviour) was the level and duration of noise delivered to the opponent. The design of the experiment is shown in Figure 3.17.

Two advantages of the experimental research design are (a) the assurance that the independent variable (also known as the experimental manipulation) occurs prior to the measured dependent variable, and (b) the creation of initial equivalence between the conditions of the experiment (in this case by using random assignment to conditions).

Experimental designs have two very nice features. For one, they guarantee that the independent variable occurs prior to the measurement of the dependent variable. This eliminates the possibility of reverse causation. Second, the influence of common-causal variables is controlled, and thus eliminated, by creating initial equivalence among the participants in each of the experimental conditions before the manipulation occurs.

The most common method of creating equivalence among the experimental conditions is through random assignment to conditions, a procedure in which the condition that each participant is assigned to is determined through a random process, such as drawing numbers out of an envelope or using a random number table. Anderson and Dill first randomly assigned about 100 participants to each of their two groups (Group A and Group B). Because they used random assignment to conditions, they could be confident that, before the experimental manipulation occurred, the students in Group A were, on average, equivalent to the students in Group B on every possible variable, including variables that are likely to be related to aggression, such as parental discipline style, peer relationships, hormone levels, diet — and in fact everything else.
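In code, random assignment is essentially a one-line shuffle. Here is a minimal Python sketch (ours; the participant IDs are hypothetical, and the group sizes follow the description above):

```python
import random

participants = [f"participant_{i}" for i in range(1, 201)]
random.shuffle(participants)  # the random process that creates equivalence

group_a = participants[:100]  # plays the violent game (Wolfenstein 3D)
group_b = participants[100:]  # plays the nonviolent game (Myst)
# Because assignment is random, the two groups should be equivalent, on
# average, on every other variable before the manipulation occurs.
```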

Then, after they had created initial equivalence, Anderson and Dill created the experimental manipulation — they had the participants in Group A play the violent game and the participants in Group B play the nonviolent game. Then they compared the dependent variable (the white noise blasts) between the two groups, finding that the students who had played the violent video game gave significantly longer noise blasts than did the students who had played the nonviolent game.

Anderson and Dill had from the outset created initial equivalence between the groups. This initial equivalence allowed them to observe differences in the white noise levels between the two groups after the experimental manipulation, leading to the conclusion that it was the independent variable (and not some other variable) that caused these differences. The idea is that the only thing that was different between the students in the two groups was the video game they had played.

Despite the advantage of determining causation, experiments do have limitations. One is that they are often conducted in laboratory situations rather than in the everyday lives of people. Therefore, we do not know whether results that we find in a laboratory setting will necessarily hold up in everyday life. Second, and more important, is that some of the most interesting and key social variables cannot be experimentally manipulated. If we want to study the influence of the size of a mob on the destructiveness of its behaviour, or to compare the personality characteristics of people who join suicide cults with those of people who do not join such cults, these relationships must be assessed using correlational designs, because it is simply not possible to experimentally manipulate these variables.

Key Takeaways

  • Descriptive, correlational, and experimental research designs are used to collect and analyze data.
  • Descriptive designs include case studies, surveys, and naturalistic observation. The goal of these designs is to get a picture of the current thoughts, feelings, or behaviours in a given group of people. Descriptive research is summarized using descriptive statistics.
  • Correlational research designs measure two or more relevant variables and assess a relationship between or among them. The variables may be presented on a scatter plot to visually show the relationships. The Pearson correlation coefficient (r) is a measure of the strength of the linear relationship between two variables.
  • Common-causal variables may cause both the predictor and outcome variable in a correlational design, producing a spurious relationship. The possibility of common-causal variables makes it impossible to draw causal conclusions from correlational research designs.
  • Experimental research involves the manipulation of an independent variable and the measurement of a dependent variable. Random assignment to conditions is normally used to create initial equivalence between the groups, allowing researchers to draw causal conclusions.

Exercises and Critical Thinking

  • There is a negative correlation between the row in which a student sits in a large class (when the rows are numbered from front to back) and his or her final grade in the class. Do you think this represents a causal relationship or a spurious relationship, and why?
  • Think of two variables (other than those mentioned in this book) that are likely to be correlated, but in which the correlation is probably spurious. What is the likely common-causal variable that is producing the relationship?
  • Imagine a researcher wants to test the hypothesis that participating in psychotherapy will cause a decrease in reported anxiety. Describe the type of research design the investigator might use to draw this conclusion. What would be the independent and dependent variables in the research?

Image Attributions

Figure 3.4: “ Reading newspaper ” by Alaskan Dude (http://commons.wikimedia.org/wiki/File:Reading_newspaper.jpg) is licensed under CC BY 2.0

Aiken, L., & West, S. (1991).  Multiple regression: Testing and interpreting interactions . Newbury Park, CA: Sage.

Ainsworth, M. S., Blehar, M. C., Waters, E., & Wall, S. (1978).  Patterns of attachment: A psychological study of the strange situation . Hillsdale, NJ: Lawrence Erlbaum Associates.

Anderson, C. A., & Dill, K. E. (2000). Video games and aggressive thoughts, feelings, and behavior in the laboratory and in life.  Journal of Personality and Social Psychology, 78 (4), 772–790.

Damasio, H., Grabowski, T., Frank, R., Galaburda, A. M., Damasio, A. R., Cacioppo, J. T., & Berntson, G. G. (2005). The return of Phineas Gage: Clues about the brain from the skull of a famous patient. In  Social neuroscience: Key readings.  (pp. 21–28). New York, NY: Psychology Press.

Freud, S. (1909/1964). Analysis of phobia in a five-year-old boy. In E. A. Southwell & M. Merbaum (Eds.),  Personality: Readings in theory and research  (pp. 3–32). Belmont, CA: Wadsworth. (Original work published 1909).

Kotowicz, Z. (2007). The strange case of Phineas Gage.  History of the Human Sciences, 20 (1), 115–131.

Rokeach, M. (1964).  The three Christs of Ypsilanti: A psychological study . New York, NY: Knopf.

Stangor, C. (2011). Research methods for the behavioural sciences (4th ed.). Mountain View, CA: Cengage.

Long Descriptions

Figure 3.6 long description: There are 25 families. 24 families have an income between $44,000 and $111,000 and one family has an income of $3,800,000. The mean income is $223,960 while the median income is $73,000. [Return to Figure 3.6]

Figure 3.10 long description: Types of scatter plots.

  • Positive linear, r = +.82. The plots on the graph form a rough line that runs from lower left to upper right.
  • Negative linear, r = −.70. The plots on the graph form a rough line that runs from upper left to lower right.
  • Independent, r = 0.00. The plots on the graph are spread out around the centre.
  • Curvilinear, r = 0.00. The plots on the graph form a rough line that goes up and then down like a hill.
  • Curvilinear, r = 0.00. The plots on the graph form a rough line that goes down and then up like a ditch.

Introduction to Psychology - 1st Canadian Edition Copyright © 2014 by Jennifer Walinga and Charles Stangor is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Experimental and Quasi-Experimental Research


You approach a stainless-steel wall, separated vertically along its middle where two halves meet. After looking to the left, you see two buttons on the wall to the right. You press the top button and it lights up. A soft tone sounds and the two halves of the wall slide apart to reveal a small room. You step into the room. Looking to the left, then to the right, you see a panel of more buttons. You know that you seek a room marked with the numbers 1-0-1-2, so you press the button marked "10." The halves slide shut and enclose you within the cubicle, which jolts upward. Soon, the soft tone sounds again. The door opens again. On the far wall, a sign silently proclaims, "10th floor."

You have engaged in a series of experiments. A ride in an elevator may not seem like an experiment, but it, and each step taken towards its ultimate outcome, are common examples of a search for a causal relationship, which is what experimentation is all about.

You started with the hypothesis that this is in fact an elevator. You proved that you were correct. You then hypothesized that the button to summon the elevator was on the left, which was incorrect, so then you hypothesized it was on the right, and you were correct. You hypothesized that pressing the button marked with the up arrow would not only bring an elevator to you, but that it would be an elevator heading in the up direction. You were right.

As this guide explains, the deliberate process of testing hypotheses and reaching conclusions is an extension of commonplace testing of cause and effect relationships.

Basic Concepts of Experimental and Quasi-Experimental Research

Discovering causal relationships is the key to experimental research. In abstract terms, this means establishing that a certain action, X, and it alone, creates the effect Y. For example, turning the volume knob on your stereo clockwise causes the sound to get louder. In addition, you could observe that turning the knob clockwise alone, and nothing else, caused the sound level to increase. You could further conclude that a causal relationship exists between turning the knob clockwise and an increase in volume; not simply because one event followed the other, but because you are certain that nothing else caused the effect.

Independent and Dependent Variables

Beyond discovering causal relationships, experimental research further seeks out how much cause will produce how much effect; in technical terms, how the independent variable will affect the dependent variable. You know that turning the knob clockwise will produce a louder noise, but by varying how much you turn it, you see how much sound is produced. On the other hand, you might find that although you turn the knob a great deal, sound doesn't increase dramatically. Or, you might find that turning the knob just a little adds more sound than expected. The amount that you turned the knob is the independent variable, the variable that the researcher controls, and the amount of sound that resulted from turning it is the dependent variable, the change that is caused by the independent variable.

Experimental research also looks into the effects of removing something. For example, if you remove a loud noise from the room, will the person next to you be able to hear you? Or how much noise needs to be removed before that person can hear you?

Treatment and Hypothesis

The term treatment refers to either removing or adding a stimulus in order to measure an effect (such as turning the knob a little or a lot, or reducing the noise level a little or a lot). Experimental researchers want to know how varying levels of treatment will affect what they are studying. As such, researchers often have an idea, or hypothesis, about what effect will occur when they cause something. Few experiments are performed where there is no idea of what will happen. From past experiences in life or from the knowledge we possess in our specific field of study, we know how some actions cause other reactions. Experiments confirm, or fail to confirm, these expectations.

Experimentation becomes more complex when the causal relationships researchers seek aren't as clear as in the stereo knob-turning example. Questions like "Will olestra cause cancer?" or "Will this new fertilizer help this plant grow better?" present more to consider. For example, any number of things could affect the growth rate of a plant: the temperature, how much water or sun it receives, or how much carbon dioxide is in the air. These variables can affect an experiment's results. An experimenter who wants to show that adding a certain fertilizer will help a plant grow better must ensure that it is the fertilizer, and nothing else, affecting the growth patterns of the plant. To do this, as many of these variables as possible must be controlled.

Matching and Randomization

In the example used in this guide (you'll find the example below), we discuss an experiment that focuses on three groups of plants: one treated with a fertilizer named MegaGro, another treated with a fertilizer named Plant!, and a third that is not treated with fertilizer (this last group serves as a "control" group). In this example, even though the designers of the experiment have tried to remove all extraneous variables, results may appear merely coincidental. Since the goal of the experiment is to prove a causal relationship in which a single variable is responsible for the effect produced, the experiment would produce stronger proof if the results were replicated in larger treatment and control groups.

Selecting groups entails assigning subjects in the groups of an experiment in such a way that treatment and control groups are comparable in all respects except the application of the treatment. Groups can be created in two ways: matching and randomization. In the MegaGro experiment discussed below, the plants might be matched according to characteristics such as age, weight and whether they are blooming. This involves distributing these plants so that each plant in one group exactly matches characteristics of plants in the other groups. Matching may be problematic, though, because it "can promote a false sense of security by leading [the experimenter] to believe that [the] experimental and control groups were really equated at the outset, when in fact they were not equated on a host of variables" (Jones, 291). In other words, you may have flowers for your MegaGro experiment that you matched and distributed among groups, but other variables are unaccounted for. It would be difficult to have equal groupings.

Randomization, then, is preferred to matching. This method is based on the statistical principle of normal distribution. Theoretically, any arbitrarily selected group of adequate size will reflect normal distribution. Differences between groups will average out and become more comparable. The principle of normal distribution states that in a population most individuals will fall within the middle range of values for a given characteristic, with increasingly fewer toward either extreme (graphically represented as the ubiquitous "bell curve").
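As an illustration of the principle just described, here is a minimal Python sketch of random assignment using the MegaGro experiment discussed below. The plant labels and the total of 30 plants are invented for illustration.

```python
# Minimal sketch of random assignment: chance, not the experimenter, decides
# which plants receive which treatment. Labels and group sizes are invented.
import random

plants = [f"plant_{i:02d}" for i in range(30)]
random.shuffle(plants)  # random ordering; group differences should average out

groups = {
    "MegaGro": plants[0:10],   # treatment group 1
    "Plant!": plants[10:20],   # treatment group 2
    "control": plants[20:30],  # no fertilizer
}
for name, members in groups.items():
    print(f"{name}: {len(members)} plants")
```

With adequately large groups, this procedure lets pre-existing differences among plants average out rather than pile up in any one group.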

Differences between Quasi-Experimental and Experimental Research

Thus far, we have explained that for experimental research we need:

  • a hypothesis for a causal relationship;
  • a control group and a treatment group;
  • to eliminate confounding variables that might mess up the experiment and prevent displaying the causal relationship; and
  • to have larger groups with a carefully sorted constituency; preferably randomized, in order to keep accidental differences from fouling things up.

But what if we don't have all of those? Do we still have an experiment? Not a true experiment in the strictest scientific sense of the term, but we can have a quasi-experiment, an attempt to uncover a causal relationship, even though the researcher cannot control all the factors that might affect the outcome.

A quasi-experimenter treats a given situation as an experiment even though it is not wholly by design. The independent variable may not be manipulated by the researcher, treatment and control groups may not be randomized or matched, or there may be no control group. The researcher is limited in what he or she can say conclusively.

The significant element of both experiments and quasi-experiments is the measure of the dependent variable, which allows for comparison between groups. Some data are quite straightforward, but other measures, such as level of self-confidence in writing ability or increase in creativity or reading comprehension, are inescapably subjective. In such cases, quasi-experimentation often involves a number of strategies to compare subjectivity, such as rating data, testing, surveying, and content analysis.

Rating essentially means developing a rating scale to evaluate data. In testing, experimenters and quasi-experimenters use ANOVA (Analysis of Variance) and ANCOVA (Analysis of Covariance) tests to measure differences between control and experimental groups, as well as correlations between groups.
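As a sketch of what such a test looks like in practice, the following Python fragment runs a one-way ANOVA on invented essay ratings for a control group and a treatment group; scipy is assumed to be available, and with only two groups the test is equivalent to a t-test.

```python
# Sketch: one-way ANOVA comparing rated essay quality between a control group
# and a treatment group. The ratings are invented for illustration.
from scipy import stats

control = [3, 4, 2, 5, 3, 4, 3, 2]    # essay ratings without the treatment
treatment = [5, 4, 6, 5, 7, 4, 6, 5]  # essay ratings with the treatment

f_stat, p_value = stats.f_oneway(control, treatment)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the difference between the groups is unlikely
# to be due to random chance alone.
```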

Since we're mentioning the subject of statistics, note that experimental or quasi-experimental research cannot state beyond a shadow of a doubt that a single cause will always produce any one effect. It can do no more than show a probability that one thing causes another. The probability that a result is due to random chance is an important measure of statistical analysis in experimental research.

Example: Causality

Let's say you want to determine that your new fertilizer, MegaGro, will increase the growth rate of plants. You begin by getting a plant to go with your fertilizer. Since the experiment is concerned with proving that MegaGro works, you need another plant, on which you use no fertilizer at all, to compare against the change your fertilized plant displays. This is what is known as a control group.

Set up with a control group, which will receive no treatment, and an experimental group, which will get MegaGro, you must then address those variables that could invalidate your experiment. This can be an extensive and exhaustive process. You must ensure that you use the same kind of plant; that both groups are put in the same kind of soil; that they receive equal amounts of water and sun; that they receive the same amount of exposure to carbon-dioxide-exhaling researchers, and so on. In short, any other variable that might affect the growth of those plants, other than the fertilizer, must be the same for both plants. Otherwise, you can't prove absolutely that MegaGro is the only explanation for the increased growth of one of those plants.

Such an experiment can be done on more than two groups. You may not only want to show that MegaGro is an effective fertilizer, but that it is better than its competitor brand of fertilizer, Plant! All you need to do, then, is have one experimental group receiving MegaGro, one receiving Plant! and the other (the control group) receiving no fertilizer. Those are the only variables that can be different between the three groups; all other variables must be the same for the experiment to be valid.

Controlling variables allows the researcher to identify conditions that may affect the experiment's outcome. This may lead to alternative explanations that the researcher is willing to entertain in order to isolate only variables judged significant. In the MegaGro experiment, you may be concerned with how fertile the soil is, but not with the plants' relative position in the window, as you don't think that the amount of shade they get will affect their growth rate. But what if it did? You would have to go about eliminating variables in order to determine which is the key factor. What if one receives more shade than the other and the MegaGro plant, which received more shade, died? This might prompt you to formulate a plausible alternative explanation, which is a way of accounting for a result that differs from what you expected. You would then want to redo the study with equal amounts of sunlight.

Methods: Five Steps

Experimental research can be roughly divided into five phases:

Identifying a research problem

The process starts by clearly identifying the problem you want to study and considering what possible methods will affect a solution. Then you choose the method you want to test, and formulate a hypothesis to predict the outcome of the test.

For example, you may want to improve student essays, but you don't believe that teacher feedback is enough. You hypothesize that some possible methods for writing improvement include peer workshopping, or reading more example essays. Favoring the former, your experiment would try to determine if peer workshopping improves writing in high school seniors. You state your hypothesis: peer workshopping prior to turning in a final draft will improve the quality of the student's essay.

Planning an experimental research study

The next step is to devise an experiment to test your hypothesis. In doing so, you must consider several factors. For example, how generalizable do you want your end results to be? Do you want to generalize about the entire population of high school seniors everywhere, or just the particular population of seniors at your specific school? This will determine how simple or complex the experiment will be. The amount of time and funding you have will also determine the size of your experiment.

Continuing the example from step one, you may want a small study at one school involving three teachers, each teaching two sections of the same course. The treatment in this experiment is peer workshopping. Each of the three teachers will assign the same essay assignment to both classes; the treatment group will participate in peer workshopping, while the control group will receive only teacher comments on their drafts.

Conducting the experiment

At the start of an experiment, the control and treatment groups must be selected. Whereas the "hard" sciences have the luxury of attempting to create truly equal groups, educators often find themselves forced to conduct their experiments based on self-selected groups, rather than on randomization. As was highlighted in the Basic Concepts section, this makes the study a quasi-experiment, since the researchers cannot control all of the variables.

For the peer workshopping experiment, let's say that it involves six classes and three teachers with a sample of students randomly selected from all the classes. Each teacher will have a class for a control group and a class for a treatment group. The essay assignment is given and the teachers are briefed not to change any of their teaching methods other than the use of peer workshopping. You may see here that this is an effort to control a possible variable: teaching style variance.

Analyzing the data

The fourth step is to collect and analyze the data. This is not solely a step where you collect the papers, read them, and say your methods were a success. You must show how successful. You must devise a scale by which you will evaluate the data you receive; therefore, you must decide which indicators will, and will not, be important.

Continuing our example, the teachers' grades are first recorded, then the essays are evaluated for a change in sentence complexity, syntactical and grammatical errors, and overall length. Any statistical analysis is done at this time if you choose to do any. Notice here that the researcher has made judgments on what signals improved writing. It is not simply a matter of improved teacher grades, but a matter of what the researcher believes constitutes improved use of the language.
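As a rough sketch of how such indicators might be computed, the fragment below derives word count, sentence count, and average sentence length from an essay's raw text in Python. The parsing rules and the sample sentence are invented; a real study would define indicators such as syntactic and grammatical error counts far more carefully.

```python
# Rough sketch: computing simple writing indicators from an essay's text.
# The parsing is deliberately simplistic and for illustration only.
import re

def essay_metrics(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }

draft = "Peer review helped. My sentences grew longer and more varied."
print(essay_metrics(draft))
```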

Writing the paper/presentation describing the findings

Once you have completed the experiment, you will want to share your findings by publishing an academic paper or giving presentations. These papers usually have the following format, but it is not necessary to follow it strictly. Sections can be combined or omitted, depending on the structure of the experiment and the journal to which you submit your paper.

  • Abstract : Summarize the project: its aims, participants, basic methodology, results, and a brief interpretation.
  • Introduction : Set the context of the experiment.
  • Review of Literature : Provide a review of the literature in the specific area of study to show what work has been done. Should lead directly to the author's purpose for the study.
  • Statement of Purpose : Present the problem to be studied.
  • Participants : Describe in detail participants involved in the study; e.g., how many, etc. Provide as much information as possible.
  • Materials and Procedures : Clearly describe materials and procedures. Provide enough information so that the experiment can be replicated, but not so much information that it becomes unreadable. Include how participants were chosen, the tasks assigned them, how they were conducted, how data were evaluated, etc.
  • Results : Present the data in an organized fashion. If it is quantifiable, it is analyzed through statistical means. Avoid interpretation at this time.
  • Discussion : After presenting the results, interpret what has happened in the experiment. Base the discussion only on the data collected and as objective an interpretation as possible. Hypothesizing is possible here.
  • Limitations : Discuss factors that affect the results. Here, you can speculate how much generalization, or more likely, transferability, is possible based on results. This section is important for quasi-experimentation, since a quasi-experiment cannot control all of the variables that might affect the outcome of a study. You would discuss what variables you could not control.
  • Conclusion : Synthesize all of the above sections.
  • References : Document works cited in the correct format for the field.

Experimental and Quasi-Experimental Research: Issues and Commentary

Several issues are addressed in this section, including the use of experimental and quasi-experimental research in educational settings, the relevance of the methods to English studies, and ethical concerns regarding the methods.

Using Experimental and Quasi-Experimental Research in Educational Settings

Charting Causal Relationships in Human Settings

Any time a human population is involved, prediction of causal relationships becomes cloudy and, some say, impossible. Many reasons exist for this; for example,

  • researchers in classrooms add a disturbing presence, causing students to act abnormally, consciously or unconsciously;
  • subjects try to please the researcher, just because of an apparent interest in them (known as the Hawthorne Effect); or, perhaps
  • the teacher as researcher is restricted by bias and time pressures.

But such confounding variables don't stop researchers from trying to identify causal relationships in education. Educators naturally experiment anyway, comparing groups, assessing the attributes of each, and making predictions based on an evaluation of alternatives. They look to research to support their intuitive practices, experimenting whenever they try to decide which instruction method will best encourage student improvement.

Combining Theory, Research, and Practice

The goal of educational research lies in combining theory, research, and practice. Educational researchers attempt to establish models of teaching practice, learning styles, curriculum development, and countless other educational issues. The aim is to "try to improve our understanding of education and to strive to find ways to have understanding contribute to the improvement of practice," one writer asserts (Floden 1996, p. 197).

In quasi-experimentation, researchers try to develop models by involving teachers as researchers, employing observational research techniques. Although results of this kind of research are context-dependent and difficult to generalize, they can act as a starting point for further study. The "educational researcher . . . provides guidelines and interpretive material intended to liberate the teacher's intelligence so that whatever artistry in teaching the teacher can achieve will be employed" (Eisner 1992, p. 8).

Bias and Rigor

Critics contend that the educational researcher is inherently biased, sample selection is arbitrary, and replication is impossible. The key to combating such criticism has to do with rigor. Rigor is established through close, proper attention to randomizing groups, time spent on a study, and questioning techniques. This allows more effective application of standards of quantitative research to qualitative research.

Often, teachers cannot wait for piles of experimental data to be analyzed before using the teaching methods (Lauer and Asher 1988). They ultimately must assess whether the results of a study in a distant classroom are applicable in their own classrooms. And they must continuously test the effectiveness of their methods by using experimental and qualitative research simultaneously. In addition to statistics (quantitative), researchers may perform case studies or observational research (qualitative) in conjunction with, or prior to, experimentation.

Relevance to English Studies

Situations in English Studies That Might Encourage Use of Experimental Methods

Whenever a researcher would like to see if a causal relationship exists between groups, experimental and quasi-experimental research can be a viable research tool. Researchers in English Studies might use experimentation when they believe a relationship exists between two variables, and they want to show that these two variables have a significant correlation (or causal relationship).

A benefit of experimentation is the ability to control variables, such as the amount of treatment, when it is given, to whom and so forth. Controlling variables allows researchers to gain insight into the relationships they believe exist. For example, a researcher has an idea that writing under pseudonyms encourages student participation in newsgroups. Researchers can control which students write under pseudonyms and which do not, then measure the outcomes. Researchers can then analyze results and determine if this particular variable alone causes increased participation.
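A minimal sketch of that pseudonym study, with invented student labels and a random stand-in for the post counts a real study would actually collect, might look like this:

```python
# Sketch of the pseudonym experiment: randomly assign students to conditions,
# then compare mean participation. All data here are invented placeholders.
import random
import statistics

students = [f"student_{i:02d}" for i in range(20)]
random.shuffle(students)
pseudonym, real_name = students[:10], students[10:]

# Stand-in for the measured outcome (newsgroup posts per student); a real
# study would count actual posts rather than generate random numbers.
posts = {s: random.randint(0, 15) for s in students}

mean_pseudonym = statistics.mean(posts[s] for s in pseudonym)
mean_real = statistics.mean(posts[s] for s in real_name)
print(f"pseudonym condition: {mean_pseudonym:.1f} posts on average")
print(f"real-name condition: {mean_real:.1f} posts on average")
```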

Transferability: Applying Results

Experimentation and quasi-experimentation allow researchers to generate transferable results, with acceptance of those results depending on experimental rigor. Transferability is an effective alternative to generalizability, which is difficult to rely upon in educational research. English scholars, reading the results of experiments with a critical eye, ultimately decide if and how those results will be implemented. They may even extend existing research by replicating experiments in the interest of generating new results and benefiting from multiple perspectives. These results will either strengthen the original study or discredit its findings.

Concerns English Scholars Express about Experiments

Researchers should carefully consider if a particular method is feasible in humanities studies, and whether it will yield the desired information. Some researchers recommend addressing pertinent issues combining several research methods, such as survey, interview, ethnography, case study, content analysis, and experimentation (Lauer and Asher, 1988).

Advantages and Disadvantages of Experimental Research: Discussion

In educational research, experimentation is a way to gain insight into methods of instruction. Although teaching is context specific, results can provide a starting point for further study. Often, a teacher/researcher will have a "gut" feeling about an issue, which can be explored through experimentation and the study of causal relationships. Through research, intuition can shape practice.

A preconception exists that information obtained through scientific method is free of human inconsistencies. But, since scientific method is a matter of human construction, it is subject to human error. The researcher's personal bias may intrude upon the experiment as well. For example, certain preconceptions may dictate the course of the research and affect the behavior of the subjects. The issue may be compounded when, although many researchers are aware of the effect that their personal bias exerts on their own research, they are pressured to produce research that is accepted in their field of study as "legitimate" experimental research.

The researcher does bring bias to experimentation, but bias does not limit an ability to be reflective. An ethical researcher thinks critically about results and reports those results after careful reflection. Concerns over bias can be leveled against any research method.

Often, the sample may not be representative of a population, because the researcher does not have an opportunity to ensure a representative sample. For example, subjects could be limited to one location, limited in number, studied under constrained conditions and for too short a time.

Despite such inconsistencies in educational research, the researcher has control over the variables, increasing the possibility of more precisely determining the individual effects of each variable. It also becomes more possible to determine interactions between variables.

Even so, experiments may produce artificial results. It can be argued that variables are manipulated so the experiment measures what researchers want to examine; therefore, the results are merely contrived products and have no bearing in material reality. Artificial results are difficult to apply in practical situations, making generalizing from the results of a controlled study questionable. Experimental research essentially first decontextualizes a single question from a "real world" scenario, studies it under controlled conditions, and then tries to recontextualize the results back on the "real world" scenario. Results may be difficult to replicate.

Further, groups in an experiment may not be comparable. Quasi-experimentation in educational research is widespread because not only are many researchers also teachers, but many subjects are also students. With the classroom as laboratory, it is difficult to implement randomizing or matching strategies. Often, students self-select into certain sections of a course on the basis of their own agendas and scheduling needs. Thus when, as often happens, one class is treated and the other used as a control, the groups may not actually be comparable. As one might imagine, people who register for a class which meets three times a week at eleven o'clock in the morning (young, no full-time job, night people) differ significantly from those who register for one on Monday evenings from seven to ten p.m. (older, full-time job, possibly more highly motivated). Each situation presents different variables, and your group might be completely different from that in the study. Long-term studies are expensive and hard to reproduce. And although often the same hypotheses are tested by different researchers, various factors complicate attempts to compare or synthesize them. It is nearly impossible to be as rigorous as the natural sciences model dictates.

Even when randomization of students is possible, problems arise. First, depending on the class size and the number of classes, the sample may be too small for the extraneous variables to cancel out. Second, the study population is not strictly a sample, because the population of students registered for a given class at a particular university is obviously not representative of the population of all students at large. For example, students at a suburban private liberal-arts college are typically young, white, and upper-middle class. In contrast, students at an urban community college tend to be older, poorer, and members of a racial minority. The differences can be construed as confounding variables: the first group may have fewer demands on its time, have less self-discipline, and benefit from superior secondary education. The second may have more demands, including a job and/or children, have more self-discipline, but an inferior secondary education. Selecting a population of subjects which is representative of the average of all post-secondary students is also a flawed solution, because the outcome of a treatment involving this group is not necessarily transferable to either the students at a community college or the students at the private college, nor are they universally generalizable.

When a human population is involved, experimental research must ask whether behavior can be predicted or studied with validity. Human response can be difficult to measure. Human behavior depends on individual responses, and rationalizing behavior through experimentation does not account for the process of thought, making conclusions about that process fallible (Eisenberg, 1996).

Nevertheless, we perform experiments daily anyway. When we brush our teeth every morning, we are experimenting to see if this behavior will result in fewer cavities. We are relying on previous experimentation and we are transferring the experimentation to our daily lives.

Moreover, experimentation can be combined with other research methods to ensure rigor. Other qualitative methods such as case study, ethnography, observational research and interviews can function as preconditions for experimentation or be conducted simultaneously to add validity to a study.

We have few alternatives to experimentation. Mere anecdotal research, for example, is unscientific, unreplicable, and easily manipulated. Should we rely on Ed walking into a faculty meeting and telling the story of Sally? Sally screamed, "I love writing!" ten times before she wrote her essay and produced a quality paper. Should all the other faculty members hear this anecdote and conclude that every other student should employ the same technique?

One final disadvantage: frequently, political pressure drives experimentation and forces unreliable results. Specific funding and support may drive the outcomes of experimentation and cause the results to be skewed. The reader of these results may not be aware of these biases and should approach experimentation with a critical eye.

Advantages and Disadvantages of Experimental Research: Quick Reference List

Experimental and quasi-experimental research can be summarized in terms of their advantages and disadvantages. This section combines and elaborates upon many points mentioned previously in this guide.

Advantages:

  • gain insight into methods of instruction
  • intuitive practice shaped by research
  • teachers have bias but can be reflective
  • researcher can have control over variables
  • humans perform experiments anyway
  • can be combined with other research methods for rigor
  • can be used to determine what is best for a population
  • provides for greater transferability than anecdotal research

Disadvantages:

  • subject to human error
  • personal bias of researcher may intrude
  • sample may not be representative
  • can produce artificial results
  • results may only apply to one situation and may be difficult to replicate
  • groups may not be comparable
  • human response can be difficult to measure
  • political pressure may skew results

Ethical Concerns

Experimental research may be manipulated on both ends of the spectrum: by researcher and by reader. Researchers who report on experimental research to naive readers face ethical concerns. While creating an experiment, certain objectives and intended uses of the results might drive and skew it. Looking for specific results, or seeking support for a particular plan, researchers may ask questions and examine data that support only the desired conclusions, ignoring or dismissing conflicting findings.

Editors and journals do not publish only trouble-free material. And as readers of experiments, members of the press might report selected and isolated parts of a study to the public, essentially transferring that data to the general population in ways the researcher never intended. Take, for example, oat bran. A few years ago, the press reported that oat bran reduces high blood pressure by reducing cholesterol. But that bit of information was taken out of context. The actual study found that when people ate more oat bran, they reduced their intake of saturated fats high in cholesterol. People started eating oat bran muffins by the ton, assuming a causal relationship when in actuality a number of confounding variables might influence the causal link.

Ultimately, ethical use and reportage of experimentation should be addressed by researchers, reporters and readers alike.

Reporters of experimental research often seek to recognize their audience's level of knowledge and try not to mislead readers, and readers must rely on the author's skill and integrity to point out errors and limitations. The relationship between researcher and reader may not sound like a problem, but a researcher who has spent months or years on a project that produces no significant results may be tempted to manipulate the data to show significance in order to jockey for grants and tenure.

Meanwhile, the reader may uncritically accept results that receive validity by being published in a journal. However, research that lacks credibility often is not published; consequently, researchers who fail to publish run the risk of being denied grants, promotions, jobs, and tenure. While few researchers are anything but earnest in their attempts to conduct well-designed experiments and present the results in good faith, rhetorical considerations often dictate a certain minimization of methodological flaws.

Concerns arise if researchers do not report all results, or otherwise alter them. This phenomenon is counterbalanced, however, in that professionals are also rewarded for publishing critiques of others' work. Because the author of an experimental study is in essence making an argument for the existence of a causal relationship, he or she must be concerned not only with its integrity, but also with its presentation. Achieving persuasiveness in any kind of writing involves several elements: choosing a topic of interest, providing convincing evidence for one's argument, using tone and voice to project credibility, and organizing the material in a way that meets expectations for a logical sequence. Of course, what is regarded as pertinent, accepted as evidence, required for credibility, and understood as logical varies according to context. If experimental researchers hope to make an impact on the community of professionals in their field, they must attend to the standards and orthodoxies of that audience.

Related Links

Contrasts: Traditional and computer-supported writing classrooms. This Web site presents a discussion of the Transitions Study, a year-long exploration of teachers and students in computer-supported and traditional writing classrooms. It includes a description of the study, the rationale for conducting it, and its results and implications.

http://kairos.technorhetoric.net/2.2/features/reflections/page1.htm

Annotated Bibliography

A cozy world of trivial pursuits? (1996, June 28). The Times Educational Supplement, 4174, pp. 14-15.

A critique discounting the current methods Great Britain employs to fund and disseminate educational research. The belief is that research is performed for fellow researchers, not the teaching public, and that implications for day-to-day practice are never addressed.

Anderson, J. A. (1979, Nov. 10-13). Research as argument: the experimental form. Paper presented at the annual meeting of the Speech Communication Association, San Antonio, TX.

In this paper, the scientist who uses the experimental form does so in order to explain that which is verified through prediction.

Anderson, L. M. (1979). Classroom-based experimental studies of teaching effectiveness in elementary schools (Technical Report UTR&D-R-4102). Austin: Research and Development Center for Teacher Education, University of Texas.

Three recent large-scale experimental studies have built on a database established through several correlational studies of teaching effectiveness in elementary school.

Asher, J. W. (1976). Educational research and evaluation methods. Boston: Little, Brown.

Abstract unavailable by press time.

Babbie, E. R. (1979). The practice of social research. Belmont, CA: Wadsworth.

A textbook containing discussions of several research methodologies used in social science research.

Bangert-Drowns, R.L. (1993). The word processor as instructional tool: a meta-analysis of word processing in writing instruction. Review of Educational Research, 63 (1), 69-93.

Beach, R. (1993). The effects of between-draft teacher evaluation versus student self-evaluation on high school students' revising of rough drafts. Research in the Teaching of English, 13, 111-119.

The question of whether teacher evaluation or guided self-evaluation of rough drafts results in increased revision was addressed in Beach's study. Differences in the effects of teacher evaluation, guided self-evaluation (using prepared guidelines), and no evaluation of rough drafts were examined. The final drafts of students (10th, 11th, and 12th graders) were compared with their rough drafts and rated by judges according to degree of change.

Beishuizen, J. & Moonen, J. (1992). Research in technology enriched schools: a case for cooperation between teachers and researchers . (ERIC Technical Report ED351006).

This paper describes the research strategies employed in the Dutch Technology Enriched Schools project to encourage extensive and intensive use of computers in a small number of secondary schools, and to study the effects of computer use on the classroom, the curriculum, and school administration and management.

Borg, W. P. (1989). Educational research: An introduction (5th ed.). New York: Longman.

An overview of educational research methodology, including literature review and discussion of approaches to research, experimental design, statistical analysis, ethics, and rhetorical presentation of research findings.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Boston: Houghton Mifflin.

A classic overview of research designs.

Campbell, D. T. (1988). Methodology and epistemology for social science: Selected papers (E. S. Overman, Ed.). Chicago: University of Chicago Press.

This is an overview of Campbell's 40-year career and his work. It covers in seven parts measurement, experimental design, applied social experimentation, interpretive social science, epistemology and sociology of science. Includes an extensive bibliography.

Caporaso, J. A., & Roos, L. L., Jr. (Eds.). Quasi-experimental approaches: Testing theory and evaluating policy. Evanston, IL: Northwestern University Press.

A collection of articles concerned with explicating the underlying assumptions of quasi-experimentation and relating these to true experimentation. With an emphasis on design. Includes a glossary of terms.

Collier, R. Writing and the word processor: How wary of the gift-giver should we be? Unpublished manuscript.

Unpublished typescript. Charts the developments to date in computers and composition and speculates about the future within the framework of Willie Sypher's model of the evolution of creative discovery.

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston: Houghton Mifflin.

The authors write that this book "presents some quasi-experimental designs and design features that can be used in many social research settings. The designs serve to probe causal hypotheses about a wide variety of substantive issues in both basic and applied research."

Cutler, A. (1970). An experimental method for semantic field study. Linguistic Communication, 2, N. pag.

This paper emphasizes the need for empirical research and objective discovery procedures in semantics, and illustrates a method by which these goals may be obtained.

Daniels, L. B. (1996, Summer). Eisenberg's Heisenberg: The indeterminancies of rationality. Curriculum Inquiry, 26, 181-92.

Places Eisenberg's theories in relation to the death of foundationalism by showing that he distorts rational studies into a form of relativism. The article examines Eisenberg's ideas on indeterminacy, methods, and evidence; what he is against; and what we should make of what he says.

Danziger, K. (1990). Constructing the subject: Historical origins of psychological research. Cambridge: Cambridge University Press.

Danziger stresses the importance of being aware of the framework in which research operates and of the essentially social nature of scientific activity.

Diener, E., et al. (1972, December). Leakage of experimental information to potential future subjects by debriefed subjects. Journal of Experimental Research in Personality , 264-67.

Research regarding research: an investigation of the effects on the outcome of an experiment in which information about the experiment had been leaked to subjects. The study concludes that such leakage is not a significant problem.

Dudley-Marling, C., & Rhodes, L. K. (1989). Reflecting on a close encounter with experimental research. Canadian Journal of English Language Arts, 12, 24-28.

Researchers Dudley-Marling and Rhodes address some problems they met in their experimental approach to a study of reading comprehension. This article discusses the limitations of experimental research and presents an alternative to experimental or quantitative research.

Edgington, E. S. (1985). Random assignment and experimental research. Educational Administration Quarterly, 21, N. pag.

Edgington explores ways in which random assignment can be a part of field studies. The author discusses both non-experimental and experimental research and the need for using random assignment.

Eisenberg, J. (1996, Summer). Response to critiques by R. Floden, J. Zeuli, and L. Daniels. Curriculum Inquiry, 26, 199-201.

A response to critiques of his argument that rational educational research methods are at best suspect and at worst futile. He believes indeterminacy controls this method and worries that chaotic research is failing students.

Eisner, E. (1992, July). Are all causal claims positivistic? A reply to Francis Schrag. Educational Researcher, 21(5), 8-9.

Eisner responds to Schrag, who claimed that critics like Eisner cannot escape a positivistic paradigm whatever attempts they make to do so. Eisner argues that Schrag essentially misses the point by trying to argue for the paradigm solely on the basis of cause and effect without including the rest of positivistic philosophy. This weakens Schrag's argument against multiple modal methods, which Eisner argues provide opportunities to apply the appropriate research design where it is most applicable.

Floden, R. E. (1996, Summer). Educational research: limited, but worthwhile and maybe a bargain (response to J. A. Eisenberg). Curriculum Inquiry, 26, 193-7.

Responds to John Eisenberg's critique of educational research by asserting the connection between improvement of practice and research results. He places high value on teacher discretion and on the knowledge that research informs practice.

Fortune, J. C., & Hutson, B. A. (1994, March/April). Selecting models for measuring change when true experimental conditions do not exist. Journal of Educational Research, 197-206.

This article reviews methods for minimizing the effects of nonideal experimental conditions by optimally organizing models for the measurement of change.

Fox, R. F. (1980). Treatment of writing apprehension and its effects on composition. Research in the Teaching of English, 14, 39-49.

The main purpose of Fox's study was to investigate the effects of two methods of teaching writing on writing apprehension among entry-level composition students. A conventional teaching procedure was used with a control group, while a workshop method was employed with the treatment group.

Gadamer, H-G. (1976). Philosophical hermeneutics . (D. E. Linge, Trans.). Berkeley, CA: University of California Press.

A collection of essays with the common themes of the mediation of experience through language, the impossibility of objectivity, and the importance of context in interpretation.

Gaise, S. J. (1981). Experimental vs. non-experimental research on classroom second language learning. Bilingual Education Paper Series, 5, N. pag.

Aims of classroom-centered research on second language learning and teaching are considered and contrasted with the experimental approach.

Giordano, G. (1983). Commentary: Is experimental research snowing us? Journal of Reading, 27, 5-7.

Do educational research findings actually benefit teachers and students? Giordano states his opinion that research may be helpful to teaching, but is not essential and often is unnecessary.

Goldenson, D. R. (1978, March). An alternative view about the role of the secondary school in political socialization: A field-experimental study of theory and research in social education. Theory and Research in Social Education , 44-72.

This study concludes that when political discussion among experimental groups of secondary school students is led by a teacher, the degree to which the students' views were impacted is proportional to the credibility of the teacher.

Grossman, J., & Tierney, J. P. (1993, October). The fallibility of comparison groups. Evaluation Review, 556-71.

Grossman and Tierney present evidence to suggest that comparison groups are not the same as nontreatment groups.

Harnisch, D. L. (1992). Human judgment and the logic of evidence: A critical examination of research methods in special education transition literature. In D. L. Harnisch et al. (Eds.), Selected readings in transition.

This chapter describes several common types of research studies in special education transition literature and the threats to their validity.

Hawisher, G. E. (1989). Research and recommendations for computers and composition. In G. Hawisher and C. Selfe (Eds.), Critical perspectives on computers and composition instruction (pp. 44-69). New York: Teacher's College Press.

An overview of research in computers and composition to date. Includes a synthesis grid of experimental research.

Hillocks, G., Jr. (1982). The interaction of instruction, teacher comment, and revision in teaching the composing process. Research in the Teaching of English, 16, 261-278.

Hillocks conducted a study using three treatments: observational or data-collecting activities prior to writing, use or absence of revisions, and either brief or lengthy teacher comments, to identify effective methods of teaching composition to seventh and eighth graders.

Jenkinson, J. C. (1989). Research design in the experimental study of intellectual disability. International Journal of Disability, Development, and Education, 69-84.

This article catalogues the difficulties of conducting experimental research where the subjects are intellectually disabled and suggests alternative research strategies.

Jones, R. A. (1985). Research methods in the social and behavioral sciences. Sunderland, MA: Sinauer Associates.

A textbook designed to provide an overview of research strategies in the social sciences, including survey, content analysis, ethnographic approaches, and experimentation. The author emphasizes the importance of applying strategies appropriately and in variety.

Kamil, M. L., Langer, J. A., & Shanahan, T. (1985). Understanding research in reading and writing. Newton, MA: Allyn and Bacon.

Examines a wide variety of problems in reading and writing, with a broad range of techniques, from different perspectives.

Kennedy, J. L. (1985). An introduction to the design and analysis of experiments in behavioral research. Lanham, MD: University Press of America.

An introductory textbook of psychological and educational research.

Keppel, G. (1991). Design and analysis: A researcher's handbook. Englewood Cliffs, NJ: Prentice Hall.

This updates Keppel's earlier book subtitled "a student's handbook." Focuses on extensive information about analytical research and gives a basic picture of research in psychology. Covers a range of statistical topics. Includes a subject and name index, as well as a glossary.

Knowles, G., Elija, R., & Broadwater, K. (1996, Spring/Summer). Teacher research: enhancing the preparation of teachers? Teaching Education, 8, 123-31.

Researchers looked at one teacher candidate who participated in a class in which students designed their own research projects around a question they wanted answered about the teaching world. The goal of the study was to see if preservice teachers developed reflective practice by researching appropriate classroom contexts.

Lace, J., & De Corte, E. (1986, April 16-20). Research on media in western Europe: A myth of Sisyphus? Paper presented at the annual meeting of the American Educational Research Association, San Francisco.

Identifies main trends in media research in western Europe, with emphasis on three successive stages since 1960: tools technology, systems technology, and reflective technology.

Latta, A. (1996, Spring/Summer). Teacher as researcher: selected resources. Teaching Education, 8, 155-60.

An annotated bibliography on educational research including milestones of thought, practical applications, successful outcomes, seminal works, and immediate practical applications.

Lauer, J. M., & Asher, J. W. (1988). Composition research: Empirical designs. New York: Oxford University Press.

Approaching experimentation from a humanist's perspective, the authors focus on major research designs: case studies, ethnographies, sampling and surveys, quantitative descriptive studies, measurement, true experiments, quasi-experiments, meta-analyses, and program evaluations. The book takes on the challenge of bridging the language of social science with that of the humanist. Includes name and subject indexes, as well as a glossary and a glossary of symbols.

Mishler, E. G. (1979). Meaning in context: Is there any other kind? Harvard Educational Review, 49, 1-19.

Contextual importance has been largely ignored by traditional research approaches in the social/behavioral sciences and in their application to the education field. Developmental and social psychologists have increasingly noted the inadequacies of this approach. Drawing examples from phenomenology, sociolinguistics, and ethnomethodology, the author proposes alternative approaches for studying meaning in context.

Mitroff, I., & Bonoma, T. V. (1978, May). Psychological assumptions, experimentations, and real world problems: A critique and an alternate approach to evaluation. Evaluation Quarterly, 235-60.

The authors advance the notion of dialectic as a means to clarify and examine the underlying assumptions of experimental research methodology, both in highly controlled situations and in social evaluation.

Muller, E. W. (1985). Application of experimental and quasi-experimental research designs to educational software evaluation. Educational Technology, 25, 27-31.

Muller proposes a set of guidelines for the use of experimental and quasi-experimental methods of research in evaluating educational software. By obtaining empirical evidence of student performance, it is possible to evaluate whether programs are producing the desired learning effect.

Murray, S., et al. (1979, April 8-12). Technical issues as threats to internal validity of experimental and quasi-experimental designs. San Francisco: University of California.

The article reviews three evaluation models and analyzes the flaws common to them. Remedies are suggested.

Muter, P., & Maurutto, P. (1991). Reading and skimming from computer screens and books: The paperless office revisited? Behavior and Information Technology, 10(4), 257-66.

The researchers test for reading and skimming effectiveness, defined as accuracy combined with speed, for written text compared to text on a computer monitor. They conclude that, given optimal on-line conditions, both are equally effective.

O'Donnell, A., et al. (1992). The impact of cooperative writing. In J. R. Hayes, et al. (Eds.), Reading empirical research studies: The rhetoric of research (pp. 371-84). Hillsdale, NJ: Lawrence Erlbaum Associates.

A model of experimental design. The authors investigate the efficacy of cooperative writing strategies, as well as the transferability of skills learned to other, individual writing situations.

Palmer, D. (1988). Looking at philosophy. Mountain View, CA: Mayfield Publishing.

An introductory text with incisive but understandable discussions of the major movements and thinkers in philosophy from the Pre-Socratics through Sartre. With illustrations by the author. Includes a glossary.

Phelps-Gunn, T., & Phelps-Terasaki, D. (1982). Written language instruction: Theory and remediation. London: Aspen Systems Corporation.

The lack of research in written expression is addressed, and an application of the Total Writing Process Model is presented.

Poetter, T. (1996, Spring/Summer). From resistance to excitement: becoming qualitative researchers and reflective practitioners. Teaching Education, 8, 109-19.

An education professor reveals his own problematic research when he attempted to institute an educational research component in a teacher preparation program. He encountered dissent from students and cooperating professionals, but ultimately was rewarded with excitement toward research and a recognized connection to practice.

Purves, A. C. (1992). Reflections on research and assessment in written composition. Research in the Teaching of English, 26.

Three issues concerning research and assessment in writing are discussed: 1) school writing is a matter of products, not process; 2) school writing is an ill-defined domain; 3) the quality of school writing is what observers report they see. Purves discusses these issues while looking at data collected in a ten-year study of achievement in written composition in fourteen countries.

Rathus, S. A. (1987). Psychology (3rd ed.). Poughkeepsie, NY: Holt, Rinehart, and Winston.

An introductory psychology textbook. Includes overviews of the major movements in psychology, discussions of prominent examples of experimental research, and a basic explanation of relevant physiological factors. With chapter summaries.

Reiser, R. A. (1982). Improving the research skills of instructional designers. Educational Technology, 22, 19-21.

In his paper, Reiser starts by stating the importance of research in advancing the field of education, and points out that graduate students in instructional design lack the proper skills to conduct research. The paper then goes on to outline the practicum in the Instructional Systems Program at Florida State University which includes: 1) Planning and conducting an experimental research study; 2) writing the manuscript describing the study; 3) giving an oral presentation in which they describe their research findings.

Report on education research (Journal). Washington, DC: Capitol Publication, Education News Services Division.

This is an independent bi-weekly newsletter on research in education and learning. It has been published since September 1969.

Rossell, C. H. (1986). Why is bilingual education research so bad?: Critique of the Walsh and Carballo study of Massachusetts bilingual education programs . Boston: Center for Applied Social Science, Boston University. (ERIC Working Paper 86-5).

The Walsh and Carballo evaluation of the effectiveness of transitional bilingual education programs in five Massachusetts communities has five flaws, which are discussed in detail.

Rubin, D. L., & Greene, K. (1992). Gender-typical style in written language. Research in the Teaching of English, 26.

This study was designed to find out whether the writing styles of men and women differ. Rubin and Greene discuss the presupposition that women are better writers than men.

Sawin, E. (1992). Reaction: Experimental research in the context of other methods. School of Education Review, 4 , 18-21.

Sawin responds to Gage's article on methodologies and issues in educational research. He agrees with most of the article but suggests that the concept of "scientific" should not be regarded in absolute terms, and recommends more emphasis on scientific method. He also questions the value of experiments over other types of research.

Schoonmaker, W. E. (1984). Improving classroom instruction: A model for experimental research. The Technology Teacher, 44, 24-25.

The model outlined in this article tries to bridge the gap between classroom practice and laboratory research, using what Schoonmaker calls active research. Research is conducted in the classroom with the students and is used to determine which of two teacher-chosen methods of classroom instruction is more effective.

Schrag, F. (1992). In defense of positivist research paradigms. Educational Researcher, 21, (5), 5-8.

A controversial defense of the use of positivistic research methods to evaluate educational strategies; the author takes on Eisner, Erickson, and Popkewitz.

Smith, J. (1997). The stories educational researchers tell about themselves. Educational Researcher, 33 (3), 4-11.

Recapitulates the main features of an ongoing debate between advocates for using the vocabularies of traditional language arts and whole language in educational research. An "impasse" exists where advocates "do not share a theoretical disposition concerning both language instruction and the nature of research," Smith writes (p. 6). He includes a very comprehensive history of the debate between traditional research methodology and qualitative methods and vocabularies. Definitely worth a read by graduate students.

Smith, N. L. (1980). The feasibility and desirability of experimental methods in evaluation. Evaluation and Program Planning: An International Journal , 251-55.

Smith identifies the conditions under which experimental research is most desirable. Includes a review of current thinking and controversies.

Stewart, N. R., & Johnson, R. G. (1986, March 16-20). An evaluation of experimental methodology in counseling and counselor education research. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.

The purpose of this study was to evaluate the quality of experimental research in counseling and counselor education published from 1976 through 1984.

Spector, P. E. (1990). Research designs. Newbury Park, CA: Sage Publications.

In this book, Spector introduces the basic principles of experimental and nonexperimental design in the social sciences.

Tait, P. E. (1984). Do-it-yourself evaluation of experimental research. Journal of Visual Impairment and Blindness, 78, 356-363.

Tait's goal is to provide the reader who is unfamiliar with experimental research or statistics with the basic skills necessary for the evaluation of research studies.

Walsh, S. M. (1990). The current conflict between case study and experimental research: A breakthrough study derives benefits from both . (ERIC Document Number ED339721).

This paper describes a study that was not experimentally designed, but its major findings were generalizable to the overall population of writers in college freshman composition classes. The study was not a case study, but it provided insights into the attitudes and feelings of small clusters of student writers.

Waters, G. R. (1976). Experimental designs in communication research. Journal of Business Communication, 14 .

The paper presents a series of discussions on the general elements of experimental design and the scientific process and relates these elements to the field of communication.

Welch, W. W. (1969, March). The selection of a national random sample of teachers for experimental curriculum evaluation. Scholastic Science and Math, 210-216.

Members of the evaluation section of Harvard Project Physics describe what is said to be the first attempt to select a national random sample of teachers, and list six steps for doing so. Cost and comparison with a volunteer group are also discussed.

Winer, B.J. (1971). Statistical principles in experimental design , (2nd ed.). New York: McGraw-Hill.

Combines theory and application discussions to give readers a better understanding of the logic behind statistical aspects of experimental design. Introduces the broad topic of design, then goes into considerable detail. Not for light reading. Bring your aspirin if you like statistics; bring morphine if you're a humanist.

Winn, B. (1986, January 16-21). Emerging trends in educational technology research. Paper presented at the Annual Convention of the Association for Educational Communication Technology.

This examination of the topic of research in educational technology addresses four major areas: (1) why research is conducted in this area and the characteristics of that research; (2) the types of research questions that should or should not be addressed; (3) the most appropriate methodologies for finding answers to research questions; and (4) the characteristics of a research report that make it good and ultimately suitable for publication.

Citation Information

Luann Barnes, Jennifer Hauser, Luana Heikes, Anthony J. Hernandez, Paul Tim Richard, Katherine Ross, Guo Hua Yang, and Mike Palmquist. (1994-2024). Experimental and Quasi-Experimental Research. The WAC Clearinghouse. Colorado State University. Available at https://wac.colostate.edu/repository/writing/guides/.


15 Famous Experiments and Case Studies in Psychology

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.


Psychology has seen thousands upon thousands of research studies over the years. Most of these studies have helped shape our current understanding of human thoughts, behavior, and feelings.

The psychology case studies in this list are considered classic examples of psychological case studies and experiments, which are still being taught in introductory psychology courses up to this day.

Some studies, however, were so shocking and controversial that you’d probably wonder why they were ever conducted. Imagine participating in an experiment for a small reward or extra class credit, only to be left scarred for life. These kinds of studies, however, paved the way for a more ethical approach to studying psychology and the implementation of research standards such as the use of debriefing in psychology research.

Case Study vs. Experiment

Before we dive into the list of the most famous studies in psychology, let us first review the difference between case studies and experiments.

A case study:

  • It is an in-depth study and analysis of an individual, group, community, or phenomenon. The results of a case study cannot be applied to the whole population, but they can provide insights for further studies.
  • It often uses qualitative research methods such as observations, surveys, and interviews.
  • It is often conducted in real-life settings rather than in controlled environments.

An experiment:

  • It is a type of study done on a sample or group of random participants, the results of which can be generalized to the whole population.
  • It often uses quantitative research methods that rely on numbers and statistics.
  • It is conducted in controlled environments, wherein some things or situations are manipulated.

See Also: Experimental vs Observational Studies

Famous Experiments in Psychology

1. The Marshmallow Experiment

Psychologist Walter Mischel conducted the marshmallow experiment at Stanford University in the 1960s to early 1970s. It was a simple test that aimed to define the connection between delayed gratification and success in life.

The instructions were fairly straightforward: children ages 4-6 were presented with a marshmallow on a table and told that they would receive a second one if they could wait 15 minutes without eating the first.

About one-third of the 600 participants succeeded in delaying gratification to receive the second marshmallow. Mischel and his team followed up on these participants in the 1990s, learning that those who had the willpower to wait for a larger reward experienced more success in life in terms of SAT scores and other metrics.

This experiment also supported self-control theory, a theory in criminology that holds that people with greater self-control are less likely to end up in trouble with the law!

The classic marshmallow experiment, however, was debunked in a 2018 replication study done by Tyler Watts and colleagues.

This more recent experiment had a larger group of participants (900) and a better representation of the general population when it comes to race and ethnicity. In this study, the researchers found out that the ability to wait for a second marshmallow does not depend on willpower alone but more so on the economic background and social status of the participants.

2. The Bystander Effect

In 1964, Kitty Genovese was murdered in the neighborhood of Kew Gardens, New York. It was reported that there were up to 38 witnesses and onlookers in the vicinity of the crime scene, but nobody did anything to stop the murder or call for help.

Such tragedy was the catalyst that inspired social psychologists Bibb Latane and John Darley to formulate the phenomenon called bystander effect or bystander apathy .

Subsequent investigations showed that this story was exaggerated and inaccurate, as there were actually only about a dozen witnesses, at least two of whom called the police. But the case of Kitty Genovese led to various studies that aim to shed light on the bystander phenomenon.

Latane and Darley tested bystander intervention in an experimental study . Participants were asked to answer a questionnaire inside a room, and they would either be alone or with two other participants (who were actually actors or confederates in the study). Smoke would then come out from under the door. The reaction time of participants was tested — how long would it take them to report the smoke to the authorities or the experimenters?

The results showed that participants who were alone in the room reported the smoke faster than participants who were with two passive others. The study suggests that the more onlookers are present in an emergency situation, the less likely someone would step up to help, a social phenomenon now popularly called the bystander effect.

3. Asch Conformity Study

Have you ever made a decision against your better judgment just to fit in with your friends or family? The Asch Conformity Studies will help you understand this kind of situation better.

In this experiment, groups of participants were shown a target line alongside three comparison lines of different lengths and asked to identify which comparison line matched the target. However, only one true participant was present in each group; the rest were actors who deliberately gave the wrong answer on most trials.

Results showed that participants often went along with the group’s wrong answer, even though they could clearly see the correct one. When the participants were asked why they gave the wrong answer, they said that they didn’t want to be branded as strange or peculiar.

This study goes to show that there are situations in life when people prefer fitting in than being right. It also tells that there is power in numbers — a group’s decision can overwhelm a person and make them doubt their judgment.

4. The Bobo Doll Experiment

The Bobo Doll Experiment was conducted by Dr. Albert Bandura, the proponent of social learning theory .

Back in the 1960s, the Nature vs. Nurture debate was a popular topic among psychologists. Bandura contributed to this discussion by proposing that human behavior is mostly influenced by environmental rather than genetic factors.

In the Bobo Doll Experiment, children were divided into three groups: one group was shown a video in which an adult acted aggressively toward the Bobo Doll, the second group was shown a video in which an adult played with the Bobo Doll, and the third group served as the control group where no video was shown.

The children were then led to a room with different kinds of toys, including the Bobo Doll they had seen in the video. Results showed that the children tended to imitate the adults in the video: those who had watched the aggressive model acted aggressively toward the Bobo Doll, while those who had watched the passive model showed less aggression.

While the Bobo Doll Experiment can no longer be replicated because of ethical concerns, it has laid out the foundations of social learning theory and helped us understand the degree of influence adult behavior has on children.

5. Blue Eye / Brown Eye Experiment

Following the assassination of Martin Luther King Jr. in 1968, third-grade teacher Jane Elliott conducted an experiment in her class. Although not a formal experiment in controlled settings, A Class Divided is a good example of a social experiment to help children understand the concept of racism and discrimination.

The class was divided into two groups: blue-eyed children and brown-eyed children. For one day, Elliott gave preferential treatment to her blue-eyed students, giving them more attention and pampering them with rewards. The next day, it was the brown-eyed students’ turn to receive extra favors and privileges.

As a result, whichever group of students was given preferential treatment performed exceptionally well in class, had higher quiz scores, and recited more frequently; students who were discriminated against felt humiliated, answered poorly in tests, and became uncertain with their answers in class.

This study is now widely taught in sociocultural psychology classes.

6. Stanford Prison Experiment

One of the most controversial and widely-cited studies in psychology is the Stanford Prison Experiment, conducted by Philip Zimbardo in the basement of the Stanford psychology building in 1971. The hypothesis was that abusive behavior in prisons is influenced by the personality traits of the prisoners and prison guards.

The participants in the experiment were college students who were randomly assigned as either a prisoner or a prison guard. The prison guards were then told to run the simulated prison for two weeks. However, the experiment had to be stopped in just 6 days.

The prison guards abused their authority and harassed the prisoners through verbal and physical means. The prisoners, on the other hand, showed submissive behavior. Zimbardo decided to stop the experiment because the prisoners were showing signs of emotional and physical breakdown.

Although the experiment wasn’t completed, the results strongly showed that people can easily get into a social role when others expect them to, especially when it’s highly stereotyped .

7. The Halo Effect

Have you ever wondered why toothpastes and other dental products are endorsed in advertisements by celebrities more often than dentists? The Halo Effect is one of the reasons!

The Halo Effect shows how one favorable attribute of a person can gain them positive perceptions in other attributes. In the case of product advertisements, attractive celebrities are also perceived as intelligent and knowledgeable of a certain subject matter even though they’re not technically experts.

The Halo Effect originated in a classic study done by Edward Thorndike in the early 1900s. He asked military commanding officers to rate their subordinates based on different qualities, such as physical appearance, leadership, dependability, and intelligence.

The results showed that high ratings of a particular quality influence the ratings of other qualities, producing a halo effect of overall high ratings. The opposite also applied: a negative rating in one quality correlated with negative ratings in other qualities.

Experiments on the Halo Effect came in various formats as well, supporting Thorndike’s original theory. This phenomenon suggests that our perception of other people’s overall personality is hugely influenced by a quality that we focus on.

8. Cognitive Dissonance

There are experiences in our lives when our beliefs and behaviors do not align with each other and we try to justify them in our minds. This is cognitive dissonance , which was studied in an experiment by Leon Festinger and James Carlsmith back in 1959.

In this experiment, participants had to go through a series of boring and repetitive tasks, such as spending an hour turning pegs on a wooden board. After completing the tasks, they were then paid either $1 or $20 to tell the next participants that the tasks were extremely fun and enjoyable. Afterwards, participants were asked to rate the experiment. Those who were given $1 rated the experiment as more interesting and fun than those who received $20.

The results showed that those who received a smaller incentive to lie experienced cognitive dissonance — $1 wasn’t enough incentive for that one hour of painstakingly boring activity, so the participants had to justify that they had fun anyway.

Famous Case Studies in Psychology

9. Little Albert

In 1920, behaviorist theorists John Watson and Rosalie Rayner experimented on a 9-month-old baby to test the effects of classical conditioning in instilling fear in humans.

The study was so controversial that it remains a fixture of psychology textbooks and syllabi as a classic example of unethical research done in the name of science.

In one of the experiments, Little Albert was presented with a harmless stimulus, a white rat, which he wasn’t scared of at first. But every time Little Albert saw the white rat, the researchers would make a frightening sound by striking a steel bar with a hammer. After about six pairings, Little Albert learned to fear the rat even without the scary sound.

Little Albert developed fears of various objects presented to him through classical conditioning. He even generalized his fear to similar stimuli, such as other furry objects, that had never been paired with the frightening noise.

10. Phineas Gage

Phineas Gage is such a celebrity in Psych 101 classes, even though the way he rose to popularity began with a tragic accident. He was a resident of Central Vermont and worked in the construction of a new railway line in the mid-1800s. One day, an explosive went off prematurely, sending a tamping iron straight into his face and through his brain.

Gage survived the accident, fortunately, something that is considered a feat even today. He managed to find work as a stagecoach driver after the accident. However, his family and friends reported that his personality changed so much that “he was no longer Gage” (Harlow, 1868).

New evidence on the case of Phineas Gage has since come to light, thanks to modern scientific studies and medical tests. However, there are still plenty of mysteries revolving around his brain damage and subsequent recovery.

11. Anna O.

Anna O., a social worker and feminist of German Jewish descent, was one of the first patients to receive psychoanalytic treatment.

Her real name was Bertha Pappenheim and she inspired much of Sigmund Freud’s works and books on psychoanalytic theory, although they hadn’t met in person. Their connection was through Joseph Breuer, Freud’s mentor when he was still starting his clinical practice.

Anna O. suffered from paralysis, personality changes, hallucinations, and rambling speech, but her doctors could not find the cause. Joseph Breuer was then called to her house for an intervention, and he treated her with an early form of the psychoanalytic “talking cure”.

Breuer would tell Anna O. to say anything that came to her mind, such as her thoughts, feelings, and childhood experiences. It was noted that her symptoms subsided by talking things out.

However, Breuer later referred Anna O. to the Bellevue Sanatorium, where she recovered and went on to become a renowned writer and advocate for women and children.

12. Patient HM

H.M., or Henry Gustav Molaison, was a severe amnesiac who had been the subject of countless psychological and neurological studies.

Henry was 27 when he underwent brain surgery to treat the epilepsy he had been experiencing since childhood. In an unfortunate turn of events, the surgery left him with severe amnesia: his brain became unable to store new long-term memories.

He was then regarded as someone living solely in the present, forgetting an experience as soon as it happened and only remembering bits and pieces of his past. Over the years, his amnesia and the structure of his brain had helped neuropsychologists learn more about cognitive functions .

Suzanne Corkin, a researcher, writer, and good friend of H.M., published a book about his life. Entitled Permanent Present Tense, this book is both a memoir and a case study following the struggles and joys of Henry Gustav Molaison.

13. Chris Sizemore

Chris Sizemore gained celebrity status in the psychology community when she was diagnosed with multiple personality disorder, now known as dissociative identity disorder.

Sizemore had several alter egos, which included Eve Black, Eve White, and Jane. Various papers about her stated that these alter egos were formed as a coping mechanism against the traumatic experiences she underwent in her childhood.

Sizemore said that although she succeeded in unifying her alter egos into one dominant personality, there were periods in her past that had been experienced by only one of them. For example, her husband had married her Eve White alter ego, not her unified self.

Her story inspired her psychiatrists to write a book about her, entitled The Three Faces of Eve , which was then turned into a 1957 movie of the same title.

14. David Reimer

When David was just 8 months old, he lost his penis because of a botched circumcision operation.

Psychologist John Money then advised Reimer’s parents to raise him as a girl instead, naming him Brenda. His gender reassignment was supported by subsequent surgery and hormonal therapy.

Money described Reimer’s gender reassignment as a success, but problems started to arise as Reimer was growing up. His boyishness was not completely subdued by the hormonal therapy. When he was 14 years old, he learned about the secrets of his past and he underwent gender reassignment to become male again.

Reimer became an advocate for children going through the same difficult situation he had experienced. His life ended at 38, when he took his own life.

15. Kim Peek

Kim Peek was the inspiration behind Rain Man , an Oscar-winning movie about an autistic savant character played by Dustin Hoffman.

The movie was released in 1988, a time when autism wasn’t widely known and acknowledged yet. So it was an eye-opener for many people who watched the film.

In reality, Kim Peek was a non-autistic savant. He was exceptionally intelligent despite the brain abnormalities he was born with. He was like a walking encyclopedia, knowledgeable about travel routes, US zip codes, historical facts, and classical music. He also read and memorized approximately 12,000 books in his lifetime.

This list of experiments and case studies in psychology is just the tip of the iceberg! There are still countless interesting psychology studies that you can explore if you want to learn more about human behavior and dynamics.

You can also conduct your own mini-experiment or participate in a study conducted in your school or neighborhood. Just remember that there are ethical standards to follow so as not to repeat the lasting physical and emotional harm done to Little Albert or the Stanford Prison Experiment participants.

Asch, S. E. (1956). Studies of independence and conformity: I. A minority of one against a unanimous majority. Psychological Monographs: General and Applied, 70 (9), 1–70. https://doi.org/10.1037/h0093718

Bandura, A., Ross, D., & Ross, S. A. (1961). Transmission of aggression through imitation of aggressive models. The Journal of Abnormal and Social Psychology, 63 (3), 575–582. https://doi.org/10.1037/h0045925

Elliott, J., Yale University., WGBH (Television station : Boston, Mass.), & PBS DVD (Firm). (2003). A class divided. New Haven, Conn.: Yale University Films.

Festinger, L., & Carlsmith, J. M. (1959). Cognitive consequences of forced compliance. The Journal of Abnormal and Social Psychology, 58 (2), 203–210. https://doi.org/10.1037/h0041593

Haney, C., Banks, W. C., & Zimbardo, P. G. (1973). A study of prisoners and guards in a simulated prison. Naval Research Review , 30 , 4-17.

Latane, B., & Darley, J. M. (1968). Group inhibition of bystander intervention in emergencies. Journal of Personality and Social Psychology, 10 (3), 215–221. https://doi.org/10.1037/h0026570

Mischel, W. (2014). The Marshmallow Test: Mastering self-control. Little, Brown and Co.

Thorndike, E. (1920) A Constant Error in Psychological Ratings. Journal of Applied Psychology , 4 , 25-29. http://dx.doi.org/10.1037/h0071663

Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3 (1), 1.



Case Study vs. Experiment

What's the Difference?

Case studies and experiments are both research methods used in various fields to gather data and draw conclusions. However, they differ in their approach and purpose. A case study involves in-depth analysis of a particular individual, group, or situation, aiming to provide a detailed understanding of a specific phenomenon. On the other hand, an experiment involves manipulating variables and observing the effects on a sample population, aiming to establish cause-and-effect relationships. While case studies provide rich qualitative data, experiments provide quantitative data that can be statistically analyzed. Ultimately, the choice between these methods depends on the research question and the desired outcomes.

Attribute | Case Study | Experiment
Research Method | Qualitative | Quantitative
Objective | Descriptive | Causal
Sample Size | Small | Large
Controlled Variables | Less controlled | Highly controlled
Manipulation of Variables | Not manipulated | Manipulated
Data Collection | Observations, interviews, surveys | Measurements, surveys, experiments
Data Analysis | Qualitative analysis | Statistical analysis
Generalizability | Less generalizable | More generalizable
Timeframe | Longer | Shorter

Further Detail

Introduction

When conducting research, there are various methods available to gather data and analyze phenomena. Two commonly used approaches are case study and experiment. While both methods aim to provide insights and answers to research questions, they differ in their design, implementation, and the type of data they generate. In this article, we will explore the attributes of case study and experiment, highlighting their strengths and limitations.

Case Study

A case study is an in-depth investigation of a particular individual, group, or phenomenon. It involves collecting and analyzing detailed information from multiple sources, such as interviews, observations, documents, and archival records. Case studies are often used in social sciences, psychology, and business research to gain a deep understanding of complex and unique situations.

One of the key attributes of a case study is its ability to provide rich and detailed data. Researchers can gather a wide range of information, allowing for a comprehensive analysis of the case. This depth of data enables researchers to explore complex relationships, identify patterns, and generate new hypotheses.

Furthermore, case studies are particularly useful when studying rare or unique phenomena. Since they focus on specific cases, they can provide valuable insights into situations that are not easily replicated or observed in controlled experiments. This attribute makes case studies highly relevant in fields where generalizability is not the primary goal.

However, it is important to note that case studies have limitations. Due to their qualitative nature, the findings may lack generalizability to broader populations or contexts. The small sample size and the subjective interpretation of data can also introduce bias. Additionally, case studies are time-consuming and resource-intensive, requiring extensive data collection and analysis.

Experiment

An experiment is a research method that involves manipulating variables and measuring their effects on outcomes. It aims to establish cause-and-effect relationships by controlling and manipulating independent variables while keeping other factors constant. Experiments are commonly used in natural sciences, psychology, and medicine to test hypotheses and determine the impact of specific interventions or treatments.

One of the key attributes of an experiment is its ability to establish causal relationships. By controlling variables and randomly assigning participants to different conditions, researchers can confidently attribute any observed effects to the manipulated variables. This attribute allows for strong internal validity, making experiments a powerful tool for drawing causal conclusions.

Moreover, experiments often provide quantitative data, allowing for statistical analysis and objective comparisons. This attribute enhances the precision and replicability of findings, enabling researchers to draw more robust conclusions. The ability to replicate experiments also contributes to the cumulative nature of scientific knowledge.

However, experiments also have limitations. They are often conducted in controlled laboratory settings, which may limit the generalizability of findings to real-world contexts. Ethical considerations may also restrict the manipulation of certain variables or the use of certain interventions. Additionally, experiments can be time-consuming and costly, especially when involving large sample sizes or long-term follow-ups.

While case studies and experiments have distinct attributes, they can complement each other in research. Case studies provide in-depth insights and a rich understanding of complex phenomena, while experiments offer controlled conditions and the ability to establish causal relationships. By combining these methods, researchers can gain a more comprehensive understanding of the research question at hand.

When deciding between case study and experiment, researchers should consider the nature of their research question, the available resources, and the desired level of control and generalizability. Case studies are particularly suitable when exploring unique or rare phenomena, aiming for depth rather than breadth, and when resources allow for extensive data collection and analysis. On the other hand, experiments are ideal for establishing causal relationships, testing specific hypotheses, and when control over variables is crucial.

In conclusion, case study and experiment are two valuable research methods with their own attributes and limitations. Both approaches contribute to the advancement of knowledge in various fields, and their selection depends on the research question, available resources, and desired outcomes. By understanding the strengths and weaknesses of each method, researchers can make informed decisions and conduct rigorous and impactful research.



Part 3: Using quantitative methods

13. Experimental design

Chapter outline.

  • What is an experiment and when should you use one? (8 minute read)
  • True experimental designs (7 minute read)
  • Quasi-experimental designs (8 minute read)
  • Non-experimental designs (5 minute read)
  • Critical, ethical, and cultural considerations (5 minute read)

Content warning: examples in this chapter contain references to non-consensual research in Western history, including experiments conducted during the Holocaust and on African Americans (section 13.6).

13.1 What is an experiment and when should you use one?

Learning objectives.

Learners will be able to…

  • Identify the characteristics of a basic experiment
  • Describe causality in experimental design
  • Discuss the relationship between dependent and independent variables in experiments
  • Explain the links between experiments and generalizability of results
  • Describe advantages and disadvantages of experimental designs

The basics of experiments

The first experiment I can remember using was for my fourth grade science fair. I wondered if latex- or oil-based paint would hold up to sunlight better. So, I went to the hardware store and got a few small cans of paint and two sets of wooden paint sticks. I painted one with oil-based paint and the other with latex-based paint of different colors and put them in a sunny spot in the back yard. My hypothesis was that the oil-based paint would fade the most and that more fading would happen the longer I left the paint sticks out. (I know, it’s obvious, but I was only 10.)

I checked in on the paint sticks every few days for a month and wrote down my observations. The first part of my hypothesis ended up being wrong—it was actually the latex-based paint that faded the most. But the second part was right, and the paint faded more and more over time. This is a simple example, of course—experiments get a heck of a lot more complex than this when we’re talking about real research.

Merriam-Webster defines an experiment   as “an operation or procedure carried out under controlled conditions in order to discover an unknown effect or law, to test or establish a hypothesis, or to illustrate a known law.” Each of these three components of the definition will come in handy as we go through the different types of experimental design in this chapter. Most of us probably think of the physical sciences when we think of experiments, and for good reason—these experiments can be pretty flashy! But social science and psychological research follow the same scientific methods, as we’ve discussed in this book.

Experiments can be used in the social sciences just as they can in the physical sciences. It makes sense to use an experiment when you want to determine the cause of a phenomenon with as much accuracy as possible. Some types of experimental designs do this more precisely than others, as we’ll see throughout the chapter. If you’ll remember back to Chapter 11 and the discussion of validity, experiments are the best way to ensure internal validity, or the extent to which a change in your independent variable causes a change in your dependent variable.

Experimental designs for research projects are most appropriate when trying to uncover or test a hypothesis about the cause of a phenomenon, so they are best for explanatory research questions. As we’ll learn throughout this chapter, different circumstances are appropriate for different types of experimental designs. Each type of experimental design has advantages and disadvantages, and some are better at controlling the effect of extraneous variables —those variables and characteristics that have an effect on your dependent variable, but aren’t the primary variable whose influence you’re interested in testing. For example, in a study that tries to determine whether aspirin lowers a person’s risk of a fatal heart attack, a person’s race would likely be an extraneous variable because you primarily want to know the effect of aspirin.

In practice, many types of experimental designs can be logistically challenging and resource-intensive. As practitioners, the likelihood that we will be involved in some of the types of experimental designs discussed in this chapter is fairly low. However, it’s important to learn about these methods, even if we might not ever use them, so that we can be thoughtful consumers of research that uses experimental designs.

While we might not use all of these types of experimental designs, many of us will engage in evidence-based practice during our time as social workers. A lot of research developing evidence-based practice, which has a strong emphasis on generalizability, will use experimental designs. You’ve undoubtedly seen one or two in your literature search so far.

The logic of experimental design

How do we know that one phenomenon causes another? The complexity of the social world in which we practice and conduct research means that causes of social problems are rarely cut and dry. Uncovering explanations for social problems is key to helping clients address them, and experimental research designs are one road to finding answers.

As you read about in Chapter 8 (and as we’ll discuss again in Chapter 15 ), just because two phenomena are related in some way doesn’t mean that one causes the other. Ice cream sales increase in the summer, and so does the rate of violent crime; does that mean that eating ice cream is going to make me murder someone? Obviously not, because ice cream is great. The reality of that relationship is far more complex—it could be that hot weather makes people more irritable and, at times, violent, while also making people want ice cream. More likely, though, there are other social factors not accounted for in the way we just described this relationship.

Experimental designs can help clear up at least some of this fog by allowing researchers to isolate the effect of interventions on dependent variables by controlling extraneous variables. In true experimental design (discussed in the next section) and some quasi-experimental designs, researchers accomplish this with the control group and the experimental group. (The experimental group is sometimes called the “treatment group,” but we will call it the experimental group in this chapter.) The control group does not receive the intervention you are testing (they may receive no intervention or what is known as “treatment as usual”), while the experimental group does. (You will hopefully remember our earlier discussion of control variables in Chapter 8—conceptually, the use of the word “control” here is the same.)


In a well-designed experiment, your control group should look almost identical to your experimental group in terms of demographics and other relevant factors. What if we want to know the effect of CBT on social anxiety, but we have learned in prior research that men tend to have a more difficult time overcoming social anxiety? We would want our control and experimental groups to have a similar gender mix because it would limit the effect of gender on our results, since ostensibly, both groups’ results would be affected by gender in the same way. If your control group has 5 women, 6 men, and 4 non-binary people, then your experimental group should be made up of roughly the same gender balance to help control for the influence of gender on the outcome of your intervention. (In reality, the groups should be similar along other dimensions, as well, and your group will likely be much larger.) The researcher will use the same outcome measures for both groups and compare them, and assuming the experiment was designed correctly, get a pretty good answer about whether the intervention had an effect on social anxiety.

You will also hear people talk about comparison groups , which are similar to control groups. The primary difference between the two is that a control group is populated using random assignment, but a comparison group is not. Random assignment entails using a random process to decide which participants are put into the control or experimental group (which participants receive an intervention and which do not). By randomly assigning participants to a group, you can reduce the effect of extraneous variables on your research because there won’t be a systematic difference between the groups.

Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population, and is rarely used in psychological research. Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other related fields. Random sampling also helps a great deal with generalizability , whereas random assignment increases internal validity .
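Random assignment is simple enough to express in a few lines of code. Below is a minimal Python sketch of the idea; the participant names, pool size, and seed are invented for illustration and are not from any actual study. It shows how a pool of volunteers might be split into control and experimental groups so that no systematic difference determines who receives the intervention.

```python
import random

def randomly_assign(participants, seed=42):
    """Shuffle the participant pool, then split it in half.

    Returns a (control, experimental) pair. The fixed seed makes the
    assignment reproducible for record-keeping; remove it for a fresh
    random split on each run.
    """
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)                 # random order removes selection bias
    midpoint = len(pool) // 2
    return pool[:midpoint], pool[midpoint:]

# Hypothetical pool of 500 volunteers, echoing the CBT example
participants = [f"participant_{i}" for i in range(1, 501)]
control, experimental = randomly_assign(participants)
print(len(control), len(experimental))  # 250 250
```

Because the split is random, any extraneous characteristic (gender, age, symptom severity) should end up roughly balanced across the two groups on average, which is exactly what the text above describes.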

We have already learned about internal validity in Chapter 11 . The use of an experimental design will bolster internal validity since it works to isolate causal relationships. As we will see in the coming sections, some types of experimental design do this more effectively than others. It’s also worth considering that true experiments, which most effectively show causality , are often difficult and expensive to implement. Although other experimental designs aren’t perfect, they still produce useful, valid evidence and may be more feasible to carry out.

Key Takeaways

  • Experimental designs are useful for establishing causality, but some types of experimental design do this better than others.
  • Experiments help researchers isolate the effect of the independent variable on the dependent variable by controlling for the effect of extraneous variables .
  • Experiments use a control/comparison group and an experimental group to test the effects of interventions. These groups should be as similar to each other as possible in terms of demographics and other relevant factors.
  • True experiments have control groups with randomly assigned participants, while other types of experiments have comparison groups to which participants are not randomly assigned.
  • Think about the research project you’ve been designing so far. How might you use a basic experiment to answer your question? If your question isn’t explanatory, try to formulate a new explanatory question and consider the usefulness of an experiment.
  • Why is establishing a simple relationship between two variables not indicative of one causing the other?

13.2 True experimental design

  • Describe a true experimental design in social work research
  • Understand the different types of true experimental designs
  • Determine what kinds of research questions true experimental designs are suited for
  • Discuss advantages and disadvantages of true experimental designs

True experimental design, often considered to be the “gold standard” in research designs, is thought of as one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity and its ability to establish causality through treatment manipulation, while controlling for the effects of extraneous variables. Sometimes the treatment level is no treatment, while other times it is simply a different treatment than that which we are trying to evaluate. For example, we might have a control group that is made up of people who will not receive any treatment for a particular condition. Or, a control group could consist of people who consent to treatment with DBT when we are testing the effectiveness of CBT.

As we discussed in the previous section, a true experiment has a control group with participants randomly assigned, and an experimental group. This is the most basic element of a true experiment. The next decision a researcher must make is when they need to gather data during their experiment. Do they take a baseline measurement and then a measurement after treatment, or just a measurement after treatment, or do they handle measurement another way? Below, we’ll discuss the three main types of true experimental designs. There are sub-types of each of these designs, but here, we just want to get you started with some of the basics.

Using a true experiment in social work research is often pretty difficult, since as I mentioned earlier, true experiments can be quite resource intensive. True experiments work best with relatively large sample sizes, and random assignment, a key criterion for a true experimental design, is hard (and unethical) to execute in practice when you have people in dire need of an intervention. Nonetheless, some of the strongest evidence bases are built on true experiments.

For the purposes of this section, let’s bring back the example of CBT for the treatment of social anxiety. We have a group of 500 individuals who have agreed to participate in our study, and we have randomly assigned them to the control and experimental groups. The folks in the experimental group will receive CBT, while the folks in the control group will receive more unstructured, basic talk therapy. These designs, as we talked about above, are best suited for explanatory research questions.

Before we get started, take a look at the table below. When explaining experimental research designs, we often use diagrams with abbreviations to visually represent the experiment. Table 13.1 starts us off by laying out what each of the abbreviations mean.

Table 13.1 Experimental research design notations

R = Randomly assigned group (control/comparison or experimental)
O = Observation/measurement taken of dependent variable
X = Intervention or treatment
Xe = Experimental or new intervention
Xi = Typical intervention/treatment as usual
A, B, C, etc. = Denotes different groups (control/comparison and experimental)

Pretest and post-test control group design

In pretest and post-test control group design , participants are given a pretest of some kind to measure their baseline state before their participation in an intervention. In our social anxiety experiment, we would have participants in both the experimental and control groups complete some measure of social anxiety—most likely an established scale and/or a structured interview—before they start their treatment. As part of the experiment, we would have a defined time period during which the treatment would take place (let’s say 12 weeks, just for illustration). At the end of 12 weeks, we would give both groups the same measure as a post-test .

[Diagram: pretest and post-test control group design]

In the diagram, RA (random assignment group A) is the experimental group and RB is the control group. O1 denotes the pretest, Xe denotes the experimental intervention, and O2 denotes the post-test. Let’s look at this diagram another way, using the example of CBT for social anxiety that we’ve been talking about.

[Diagram: the same design illustrated with the CBT for social anxiety example]

In a situation where the control group received treatment as usual instead of no intervention, the diagram would look this way, with Xi denoting treatment as usual (Figure 13.3).

[Figure 13.3: pretest and post-test control group design with treatment as usual]

Hopefully, these diagrams provide you a visualization of how this type of experiment establishes time order , a key component of a causal relationship. Did the change occur after the intervention? Assuming there is a change in the scores between the pretest and post-test, we would be able to say that yes, the change did occur after the intervention. Causality can’t exist if the change happened before the intervention—this would mean that something else led to the change, not our intervention.
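To make the logic of the pretest/post-test comparison concrete, here is a small Python sketch. All scores and effect sizes are simulated and purely illustrative (they are not from any real trial); the point is only to show how each participant's change score (post-test minus pretest) is computed and then compared across the two randomly assigned groups.

```python
import random
from statistics import mean

random.seed(0)

# Simulated social anxiety scores (higher = more anxious); numbers are illustrative only.
pre_exp  = [random.gauss(60, 8) for _ in range(250)]    # experimental group pretest
pre_ctl  = [random.gauss(60, 8) for _ in range(250)]    # control group pretest
post_exp = [s - random.gauss(12, 4) for s in pre_exp]   # assumed larger drop after CBT
post_ctl = [s - random.gauss(4, 4) for s in pre_ctl]    # assumed smaller drop after treatment as usual

# Change score for each participant: post-test minus pretest
change_exp = [post - pre for pre, post in zip(pre_exp, post_exp)]
change_ctl = [post - pre for pre, post in zip(pre_ctl, post_ctl)]

print(f"mean change, experimental: {mean(change_exp):+.1f}")
print(f"mean change, control:      {mean(change_ctl):+.1f}")
# A notably larger drop in the experimental group is the pattern we would
# then subject to a significance test before claiming the intervention worked.
```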

Post-test only control group design

Post-test only control group design involves only giving participants a post-test, just like it sounds (Figure 13.4).

[Figure 13.4: post-test only control group design]

But why would you use this design instead of using a pretest/post-test design? One reason could be the testing effect that can happen when research participants take a pretest. In research, the testing effect refers to “measurement error related to how a test is given; the conditions of the testing, including environmental conditions; and acclimation to the test itself” (Engel & Schutt, 2017, p. 444). [1] (When we say “measurement error,” all we mean is the accuracy of the way we measure the dependent variable.) Figure 13.4 is a visualization of this type of experiment. The testing effect isn’t always bad in practice—our initial assessments might help clients identify or put into words feelings or experiences they are having when they haven’t been able to do that before. In research, however, we might want to control its effects to isolate a cleaner causal relationship between intervention and outcome.

Going back to our CBT for social anxiety example, we might be concerned that participants would learn about social anxiety symptoms by virtue of taking a pretest. They might then identify that they have those symptoms on the post-test, even though they are not new symptoms for them. That could make our intervention look less effective than it actually is.

However, without a baseline measurement, establishing causality can be more difficult. If we don’t know someone’s state of mind before our intervention, how do we know our intervention did anything at all? Establishing time order is thus a little more difficult. You must balance this consideration against the benefits of this type of design.

Solomon four group design

One way we can possibly measure how much the testing effect might change the results of the experiment is with the Solomon four group design. Basically, as part of this experiment, you have two control groups and two experimental groups. The first pair of groups receives both a pretest and a post-test. The other pair of groups receives only a post-test (Figure 13.5). This design helps address the problem of establishing time order in post-test only control group designs.

[Figure 13.5: Solomon four group design]

For our CBT project, we would randomly assign people to four different groups instead of just two. Groups A and B would take our pretest measures and our post-test measures, and groups C and D would take only our post-test measures. We could then compare the results among these groups and see if they’re significantly different between the folks in A and B, and C and D. If they are, we may have identified some kind of testing effect, which enables us to put our results into full context. We don’t want to draw a strong causal conclusion about our intervention when we have major concerns about testing effects without trying to determine the extent of those effects.

Solomon four group designs are less common in social work research, primarily because of the logistics and resource needs involved. Nonetheless, this is an important experimental design to consider when we want to address major concerns about testing effects.
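As a rough illustration of how the four groups are compared, here is a Python sketch with invented post-test numbers (they are not data from any actual study). Comparing each pretested group to its un-pretested counterpart gives a crude estimate of the testing effect.

```python
from statistics import mean

# Invented post-test anxiety scores for the four Solomon groups:
# A = pretest + experimental, B = pretest + control,
# C = no pretest + experimental, D = no pretest + control
posttests = {
    "A": [48, 51, 46, 50, 49],
    "B": [56, 58, 55, 57, 59],
    "C": [50, 52, 49, 51, 50],
    "D": [57, 59, 58, 56, 58],
}

# If taking the pretest itself changes post-test scores, the pretested
# groups (A and B) will differ systematically from their un-pretested
# counterparts (C and D).
testing_effect_exp = mean(posttests["A"]) - mean(posttests["C"])
testing_effect_ctl = mean(posttests["B"]) - mean(posttests["D"])

print(f"apparent testing effect, experimental groups: {testing_effect_exp:+.1f}")
print(f"apparent testing effect, control groups:      {testing_effect_ctl:+.1f}")
```

Differences near zero would suggest the pretest did little to the post-test scores; large differences would put our causal conclusions about the intervention into the fuller context the text describes.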

  • True experimental design is best suited for explanatory research questions.
  • True experiments require random assignment of participants to control and experimental groups.
  • Pretest/post-test research design involves two points of measurement—one pre-intervention and one post-intervention.
  • Post-test only research design involves only one point of measurement—post-intervention. It is a useful design to minimize the effect of testing effects on our results.
  • Solomon four group research design involves both of the above types of designs, using 2 pairs of control and experimental groups. One group receives both a pretest and a post-test, while the other receives only a post-test. This can help uncover the influence of testing effects.
  • Think about a true experiment you might conduct for your research project. Which design would be best for your research, and why?
  • What challenges or limitations might make it unrealistic (or at least very complicated!) for you to carry out your true experimental design in the real world as a student researcher?
  • What hypothesis(es) would you test using this true experiment?

13.3 Quasi-experimental designs

  • Describe a quasi-experimental design in social work research
  • Understand the different types of quasi-experimental designs
  • Determine what kinds of research questions quasi-experimental designs are suited for
  • Discuss advantages and disadvantages of quasi-experimental designs

Quasi-experimental designs are a lot more common in social work research than true experimental designs. Although quasi-experiments don’t do as good a job of giving us robust proof of causality , they still allow us to establish time order , which is a key element of causality. The prefix quasi means “resembling,” so quasi-experimental research is research that resembles experimental research, but is not true experimental research. Nonetheless, given proper research design, quasi-experiments can still provide extremely rigorous and useful results.

There are a few key differences between true experimental and quasi-experimental research. The primary difference between quasi-experimental research and true experimental research is that quasi-experimental research does not involve random assignment to control and experimental groups. Instead, we talk about comparison groups in quasi-experimental research instead. As a result, these types of experiments don’t control the effect of extraneous variables as well as a true experiment.

Quasi-experiments are most likely to be conducted in field settings in which random assignment is difficult or impossible. They are often conducted to evaluate the effectiveness of a treatment—perhaps a type of psychotherapy or an educational intervention. We’re able to eliminate some threats to internal validity, but we can’t do this as effectively as we can with a true experiment. Realistically, our CBT-social anxiety project is likely to be a quasi-experiment, based on the resources and participant pool we’re likely to have available.

It’s important to note that not all quasi-experimental designs have a comparison group.  There are many different kinds of quasi-experiments, but we will discuss the three main types below: nonequivalent comparison group designs, time series designs, and ex post facto comparison group designs.

Nonequivalent comparison group design

You will notice that this type of design looks extremely similar to the pretest/post-test design that we discussed in section 13.2. But instead of random assignment to control and experimental groups, researchers use other methods to construct their comparison and experimental groups. A diagram of this design will also look very similar to pretest/post-test design, but you’ll notice we’ve removed the “R” from our groups, since they are not randomly assigned (Figure 13.6).

[Figure 13.6: nonequivalent comparison group design]

Researchers using this design select a comparison group that’s as close as possible based on relevant factors to their experimental group. Engel and Schutt (2017) [2] identify two different selection methods:

  • Individual matching : Researchers take the time to match individual cases in the experimental group to similar cases in the comparison group. It can be difficult, however, to match participants on all the variables you want to control for.
  • Aggregate matching : Instead of trying to match individual participants to each other, researchers try to match the population profile of the comparison and experimental groups. For example, researchers would try to match the groups on average age, gender balance, or median income. This is a less resource-intensive matching method, but researchers have to ensure that participants aren’t choosing which group (comparison or experimental) they are a part of.

As we’ve already talked about, this kind of design provides weaker evidence that the intervention itself leads to a change in outcome. Nonetheless, we are still able to establish time order using this method, and can thereby show an association between the intervention and the outcome. Like true experimental designs, this type of quasi-experimental design is useful for explanatory research questions.

What might this look like in a practice setting? Let’s say you’re working at an agency that provides CBT and other types of interventions, and you have identified a group of clients who are seeking help for social anxiety, as in our earlier example. Once you’ve obtained consent from your clients, you can create a comparison group using one of the matching methods we just discussed. If the group is small, you might match using individual matching, but if it’s larger, you’ll probably sort people by demographics to try to get similar population profiles. (You can do aggregate matching more easily when your agency has some kind of electronic records or database, but it’s still possible to do manually.)
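If you're curious what individual matching might look like concretely, here is a minimal sketch in Python. Everything in it is hypothetical: the client records, the fields we match on (age and gender), and the "same gender, closest age" rule are illustrative choices, not the only way to construct a comparison group.

# Minimal sketch of individual matching for a nonequivalent comparison group.
# All records and the matching rule are hypothetical and for illustration only.

treated = [
    {"id": 1, "age": 24, "gender": "F"},
    {"id": 2, "age": 31, "gender": "M"},
]
candidates = [
    {"id": 101, "age": 25, "gender": "F"},
    {"id": 102, "age": 40, "gender": "M"},
    {"id": 103, "age": 30, "gender": "M"},
]

def closest_match(client, pool):
    """Return the unmatched candidate of the same gender with the closest age."""
    same_gender = [c for c in pool if c["gender"] == client["gender"]]
    return min(same_gender, key=lambda c: abs(c["age"] - client["age"]))

comparison_group = []
pool = list(candidates)
for client in treated:
    match = closest_match(client, pool)
    comparison_group.append(match)
    pool.remove(match)  # each candidate can be matched only once

print([c["id"] for c in comparison_group])  # -> [101, 103]

Aggregate matching would instead compare summary statistics (for example, checking that the two groups have similar average ages and gender balances) rather than pairing people one to one.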

Time series design

Another type of quasi-experimental design is a time series design. Unlike other types of experimental design, time series designs do not have a comparison group. A time series is a set of measurements taken at intervals over a period of time (Figure 13.7). A proper time series design should include at least three pre-intervention and three post-intervention measurement points. While there are a few types of time series designs, we're going to focus on the most common: the interrupted time series design.

[Figure 13.7: Diagram of an interrupted time series design]

But why use this method? Here's an example. Let's think about elementary student behavior throughout the school year. As anyone with children or who is a teacher knows, kids get very excited and animated around holidays, days off, or even just on a Friday afternoon. This might mean that around those times of year, there are more reports of disruptive behavior in classrooms. What if we took our one and only baseline measurement in mid-December? It's possible we'd see a higher-than-average rate of disruptive behavior reports, which could bias our results if our next measurement comes at a time of year when students are in a different, less excitable frame of mind. When we take multiple measurements throughout the first half of the school year, we can establish a more accurate baseline for the rate of these reports by looking at the trend over time.

We may want to test the effect of extended recess times in elementary school on reports of disruptive behavior in classrooms. When students come back after the winter break, the school extends recess by 10 minutes each day (the intervention), and the researchers start tracking the monthly reports of disruptive behavior again. These reports could be subject to the same fluctuations as the pre-intervention reports, and so we once again take multiple measurements over time to try to control for those fluctuations.

This method improves the extent to which we can establish causality because we are accounting for a major extraneous variable in the equation—the passage of time. On its own, it does not allow us to account for other extraneous variables, but it does establish time order and association between the intervention and the trend in reports of disruptive behavior. Finding a stable condition before the treatment that changes after the treatment is evidence for causality between treatment and outcome.
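To make that logic concrete, here is a minimal sketch of an interrupted time series check in Python. The monthly report counts are invented, and real analyses typically use more careful models (segmented regression, for instance); this just shows the core idea of projecting the pre-intervention trend forward and comparing it with what was actually observed.

# Minimal interrupted time series sketch. The monthly counts of
# disruptive-behavior reports below are hypothetical.

pre  = [41, 44, 43, 47, 45, 48]  # six monthly counts before the intervention
post = [38, 36, 37, 34, 35, 33]  # six monthly counts after extended recess begins

def fit_line(ys):
    """Ordinary least-squares slope/intercept for y measured at x = 0, 1, 2, ..."""
    n = len(ys)
    x_mean = (n - 1) / 2
    y_mean = sum(ys) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in enumerate(ys))
             / sum((x - x_mean) ** 2 for x in range(n)))
    return slope, y_mean - slope * x_mean

slope, intercept = fit_line(pre)
# Project the pre-intervention trend into the post period and compare it
# with what was actually observed.
projected = [slope * (len(pre) + i) + intercept for i in range(len(post))]
gap = sum(o - p for o, p in zip(post, projected)) / len(post)
print(f"pre-intervention trend: {slope:+.2f} reports per month")
print(f"average observed-minus-projected gap after intervention: {gap:+.1f}")

A clearly negative gap here would suggest that reports fell below where the pre-existing trend was headed, which is exactly the kind of before/after pattern this design is built to detect.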

Ex post facto comparison group design

Ex post facto (Latin for “after the fact”) designs are extremely similar to nonequivalent comparison group designs. There are still comparison and experimental groups, pretest and post-test measurements, and an intervention. But in ex post facto designs, participants are assigned to the comparison and experimental groups once the intervention has already happened. This type of design often occurs when interventions are already up and running at an agency and the agency wants to assess effectiveness based on people who have already completed treatment.

In most clinical agency environments, social workers conduct both initial and exit assessments, so some kind of pretest and post-test measure is usually available. We also typically collect demographic information about our clients, which could allow us to use some kind of matching to construct comparison and experimental groups.

In terms of internal validity and establishing causality, ex post facto designs are a bit of a mixed bag. The ability to establish causality depends partially on the ability to construct comparison and experimental groups that are demographically similar so we can control for these extraneous variables.

Quasi-experimental designs are common in social work intervention research because, when designed correctly, they balance the intense resource needs of true experiments with the realities of research in practice. They still offer researchers tools to gather robust evidence about whether interventions are having positive effects for clients.

  • Quasi-experimental designs are similar to true experiments, but do not require random assignment to experimental and control groups.
  • In quasi-experimental projects, the group not receiving the treatment is called the comparison group, not the control group.
  • Nonequivalent comparison group design is nearly identical to the pretest/post-test experimental design, but participants are not randomly assigned to the experimental and comparison groups. As a result, this design provides slightly less robust evidence for causality.
  • Nonequivalent groups can be constructed by individual matching or aggregate matching.
  • Time series design does not have a control or experimental group, and instead compares the condition of participants before and after the intervention by measuring relevant factors at multiple points in time. This allows researchers to mitigate the error introduced by the passage of time.
  • Ex post facto comparison group designs are also similar to true experiments, but experimental and comparison groups are constructed after the intervention is over. This makes it more difficult to control for the effect of extraneous variables, but still provides useful evidence for causality because it maintains the time order of the experiment.
  • Think back to the experiment you considered for your research project in Section 13.3. Now that you know more about quasi-experimental designs, do you still think it’s a true experiment? Why or why not?
  • What should you consider when deciding whether an experimental or quasi-experimental design would be more feasible or fit your research question better?

13.5 Non-experimental designs

  • Describe non-experimental designs in social work research
  • Discuss how non-experimental research differs from true and quasi-experimental research
  • Demonstrate an understanding the different types of non-experimental designs
  • Determine what kinds of research questions non-experimental designs are suited for
  • Discuss advantages and disadvantages of non-experimental designs

The previous sections have laid out the basics of some rigorous approaches to establish that an intervention is responsible for changes we observe in research participants. This type of evidence is extremely important for building an evidence base for social work interventions, but it's not the only type of evidence to consider. We will discuss qualitative methods, which provide us with rich, contextual information, in Part 4 of this text. The designs we'll talk about in this section are sometimes used in qualitative research, but in keeping with our discussion of experimental design so far, we're going to stay in the quantitative realm for now. Non-experimental research is also often a stepping stone to more rigorous experimental designs, as it can help test the feasibility of your research.

In general, non-experimental designs do not strongly support causality and don't address threats to internal validity. However, that's not really what they're intended for. Non-experimental designs are useful for a few different types of research, including exploratory and descriptive questions in program evaluation. Certain types of non-experimental design are also helpful for researchers when they are trying to develop a new assessment or scale. Other times, researchers or agency staff did not get a chance to gather any assessment information before an intervention began, so a pretest/post-test design is not possible.


A significant benefit of these types of designs is that they’re pretty easy to execute in a practice or agency setting. They don’t require a comparison or control group, and as Engel and Schutt (2017) [3] point out, they “flow from a typical practice model of assessment, intervention, and evaluating the impact of the intervention” (p. 177). Thus, these designs are fairly intuitive for social workers, even when they aren’t expert researchers. Below, we will go into some detail about the different types of non-experimental design.

One group pretest/post-test design

Also known as a before-after one-group design, this type of research design does not have a comparison group and everyone who participates in the research receives the intervention (Figure 13.8). This is a common type of design in program evaluation in the practice world. Controlling for extraneous variables is difficult or impossible in this design, but given that it is still possible to establish some measure of time order, it does provide weak support for causality.

[Figure 13.8: Diagram of the one-group pretest/post-test design]

Imagine, for example, a researcher who is interested in the effectiveness of an anti-drug education program on elementary school students' attitudes toward illegal drugs. The researcher could assess students' attitudes about illegal drugs (O1), implement the anti-drug program (X), and then immediately after the program ends, the researcher could once again measure students' attitudes toward illegal drugs (O2). You can see how this would be relatively simple to do in practice, and have probably been involved in this type of research design yourself, even if informally. But hopefully, you can also see that this design would not provide us with much evidence for causality because we have no way of controlling for the effect of extraneous variables. A lot of things could have affected any change in students' attitudes—maybe girls already had different attitudes about illegal drugs than children of other genders, and when we look at the class's results as a whole, we couldn't account for that influence using this design.
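Here is a minimal sketch of how you might summarize results from this O1 X O2 design in Python, using hypothetical attitude scores. The code can show whether scores changed after the program; it cannot tell us why they changed.

# Minimal one-group pretest/post-test sketch (O1  X  O2).
# Scores are hypothetical; higher = more negative attitude toward illegal drugs.
from statistics import mean, stdev

pretest  = [3.1, 2.8, 3.5, 3.0, 2.6, 3.3]  # O1, measured before the program
posttest = [3.6, 3.0, 3.9, 3.4, 2.9, 3.8]  # O2, measured after the program

changes = [o2 - o1 for o1, o2 in zip(pretest, posttest)]
print(f"mean pre-to-post change: {mean(changes):+.2f} (sd {stdev(changes):.2f})")
# A positive mean change establishes time order (attitudes shifted after the
# program), but without a comparison group we can't rule out other explanations.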

All of that doesn’t mean these results aren’t useful, however. If we find that children’s attitudes didn’t change at all after the drug education program, then we need to think seriously about how to make it more effective or whether we should be using it at all. (This immediate, practical application of our results highlights a key difference between program evaluation and research, which we will discuss in Chapter 23 .)

After-only design

As the name suggests, this type of non-experimental design involves measurement only after an intervention. There is no comparison or control group, and everyone receives the intervention. I have seen this design repeatedly in my time as a program evaluation consultant for nonprofit organizations, because often these organizations realize too late that they would like to or need to have some sort of measure of what effect their programs are having.

Because there is no pretest and no comparison group, this design is not useful for supporting causality since we can’t establish the time order and we can’t control for extraneous variables. However, that doesn’t mean it’s not useful at all! Sometimes, agencies need to gather information about how their programs are functioning. A classic example of this design is satisfaction surveys—realistically, these can only be administered after a program or intervention. Questions regarding satisfaction, ease of use or engagement, or other questions that don’t involve comparisons are best suited for this type of design.
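As a concrete example, here is a minimal sketch of summarizing a hypothetical five-point satisfaction survey in Python. The ratings are invented, and treating 4 or 5 as "satisfied" is an illustrative cutoff.

# Minimal after-only sketch: a satisfaction survey administered once, after
# the program ends (no pretest, no comparison group). Ratings are hypothetical.
from collections import Counter

responses = [5, 4, 4, 5, 3, 4, 5, 2, 4, 5]  # 1-5 satisfaction ratings

counts = Counter(responses)
for rating in sorted(counts, reverse=True):
    print(f"{rating}: {'#' * counts[rating]} ({counts[rating]})")
satisfied = sum(r >= 4 for r in responses) / len(responses)
print(f"share satisfied (4 or 5): {satisfied:.0%}")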

Static-group design

A final type of non-experimental research is the static-group design. In this type of research, there are both comparison and experimental groups, which are not randomly assigned. There is no pretest, only a post-test, and the comparison group has to be constructed by the researcher. Sometimes, researchers will use matching techniques to construct the groups, but often, the groups are constructed by convenience of who is being served at the agency.
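Here is a minimal sketch of what analyzing a static-group design might look like, again with invented post-test scores. We can describe a difference between the groups, but nothing in the design tells us whether the groups were comparable before the intervention.

# Minimal static-group sketch: post-test-only scores from two non-randomly
# formed groups. All values are hypothetical.
from statistics import mean

served     = [6.1, 5.8, 6.4, 6.0]  # clients who received the intervention
comparison = [5.2, 5.6, 5.1, 5.5]  # convenience comparison group

print(f"post-test group difference: {mean(served) - mean(comparison):+.2f}")
# A positive difference is consistent with the intervention helping, but it
# could just as easily reflect pre-existing differences between the groups.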

Non-experimental research designs are easy to execute in practice, but we must be cautious about drawing causal conclusions from the results. A positive result may still suggest that we should continue using a particular intervention (and no result or a negative result should make us reconsider whether we should use that intervention at all). You have likely seen non-experimental research in your daily life or at your agency, and knowing the basics of how to structure such a project will help you ensure you are providing clients with the best care possible.

  • Non-experimental designs are useful for describing phenomena, but cannot demonstrate causality.
  • After-only designs are often used in agency and practice settings because practitioners are often not able to set up pre-test/post-test designs.
  • Non-experimental designs are useful for exploratory and descriptive questions in program evaluation and are helpful for researchers when they are trying to develop a new assessment or scale.
  • Non-experimental designs are well-suited to qualitative methods.
  • If you were to use a non-experimental design for your research project, which would you choose? Why?
  • Have you conducted non-experimental research in your practice or professional life? Which type of non-experimental design was it?

13.6 Critical, ethical, and cultural considerations

  • Describe critiques of experimental design
  • Identify ethical issues in the design and execution of experiments
  • Identify cultural considerations in experimental design

As I said at the outset, experiments, and especially true experiments, have long been seen as the gold standard for gathering scientific evidence. When it comes to research in biomedicine and the physical sciences, true experiments are subject to far less nuance than experiments in the social world. This doesn't mean they are easier, just subject to different forces. However, as a society, we have placed the most value on quantitative evidence obtained through empirical observation and especially experimentation.

Major critiques of experimental designs tend to focus on true experiments, especially randomized controlled trials (RCTs), but many of these critiques can be applied to quasi-experimental designs, too. Some researchers, even in the biomedical sciences, question the view that RCTs are inherently superior to other types of quantitative research designs. RCTs are far less flexible and have much more stringent requirements than other types of research. One seemingly small issue, like incorrect information about a research participant, can derail an entire RCT. RCTs also cost a great deal of money to implement and don’t reflect “real world” conditions. The cost of true experimental research or RCTs also means that some communities are unlikely to ever have access to these research methods. It is then easy for people to dismiss their research findings because their methods are seen as “not rigorous.”

Obviously, controlling outside influences is important for researchers to draw strong conclusions, but what if those outside influences are actually important to how an intervention works? Are we missing really important information by focusing solely on control in our research? Is a treatment going to work the same for white women as it does for Indigenous women? Given the myriad effects of our societal structures, you should be very careful about ever assuming this will be the case. This doesn't mean that cultural differences will negate the effect of an intervention; instead, it means that you should remember to practice cultural humility when implementing all interventions, even when we "know" they work.

How we build evidence through experimental research reveals a lot about our values and biases. Historically, much experimental research has been conducted on white people, and especially white men. [4] This makes sense when we consider the extent to which the sciences and academia have historically been dominated by white patriarchy. This is especially important for marginalized groups that have long been ignored in the research literature, meaning they have also been ignored in the development of interventions and treatments that are accepted as "effective." There are also examples of marginalized groups being experimented on without their consent, like the Tuskegee Experiment or Nazi experiments on Jewish people during World War II. We cannot ignore the collective consciousness that situations like these can create around experimental research for marginalized groups.

None of this is to say that experimental research is inherently bad or that you shouldn't use it. Quite the opposite: use it when you can, because there are a lot of benefits, as we learned throughout this chapter. As a social work researcher, you are uniquely positioned to conduct experimental research while applying social work values and ethics to the process, and to be a leader for others conducting research in the same framework. We can come into conflict with our professional ethics, especially respect for persons and beneficence, if we do not engage in experimental research with our eyes wide open. We also have the benefit of a great deal of practice knowledge that researchers in other fields have not had the opportunity to gain. As with all your research, always be sure you are fully exploring the limitations of the research.

  • While true experimental research gathers strong evidence, it can also be inflexible, expensive, and overly simplistic in its treatment of the important social forces that affect the results.
  • Marginalized communities’ past experiences with experimental research can affect how they respond to research participation.
  • Social work researchers should use both their values and ethics, and their practice experiences, to inform research and push other researchers to do the same.
  • Think back to the true experiment you sketched out in the exercises for Section 13.3. Are there cultural or historical considerations you hadn’t thought of with your participant group? What are they? Does this change the type of experiment you would want to do?
  • How can you as a social work researcher encourage researchers in other fields to consider social work ethics and values in their experimental research?

Media Attributions

  • Being kinder to yourself © Evgenia Makarova is licensed under a CC BY-NC-ND (Attribution NonCommercial NoDerivatives) license
  • Original by author is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • Original by author is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • Original by author is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • therapist © Zackary Drucker is licensed under a CC BY-NC-ND (Attribution NonCommercial NoDerivatives) license
  • nonexper-pretest-posttest is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • Engel, R., & Schutt, R. (2016). The practice of research in social work. Thousand Oaks, CA: SAGE Publications, Inc.
  • Sullivan, G. M. (2011). Getting off the "gold standard": Randomized controlled trials and education research. Journal of Graduate Medical Education, 3(3), 285-289.

Glossary

  • Experiment: an operation or procedure carried out under controlled conditions in order to discover an unknown effect or law, to test or establish a hypothesis, or to illustrate a known law.
  • Explanatory research: explains why particular phenomena work in the way that they do; answers "why" questions.
  • Extraneous variables: variables and characteristics that have an effect on your outcome, but aren't the primary variable whose influence you're interested in testing.
  • Control group: the group of participants in our study who do not receive the intervention we are researching, in experiments with random assignment.
  • Experimental group: in experimental design, the group of participants in our study who do receive the intervention we are researching.
  • Comparison group: the group of participants in our study who do not receive the intervention we are researching, in experiments without random assignment.
  • Random assignment: using a random process to decide which participants are tested in which conditions.
  • Generalizability: the ability to apply research findings beyond the study sample to some broader population.
  • Internal validity: the ability to say that one variable "causes" something to happen to another variable; very important to assess when thinking about studies that examine causation, such as experimental or quasi-experimental designs.
  • Causality: the idea that one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief.
  • True experiment: an experimental design in which one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed.
  • Pretest/post-test control group design: a type of experimental design in which participants are randomly assigned to control and experimental groups, one group receives an intervention, and both groups receive pre- and post-test assessments.
  • Pretest: a measure of a participant's condition before they receive an intervention or treatment.
  • Post-test: a measure of a participant's condition after an intervention or, if they are part of the control/comparison group, at the end of an experiment.
  • Time order: a demonstration that a change occurred after an intervention; an important criterion for establishing causality.
  • Post-test-only control group design: an experimental design in which participants are randomly assigned to control and treatment groups, one group receives an intervention, and both groups receive only a post-test assessment.
  • Testing effects: the measurement error related to how a test is given; the conditions of the testing, including environmental conditions; and acclimation to the test itself.
  • Quasi-experimental design: a subtype of experimental design that is similar to a true experiment, but does not have randomly assigned control and treatment groups.
  • Individual matching: in nonequivalent comparison group designs, the process by which researchers match individual cases in the experimental group to similar cases in the comparison group.
  • Aggregate matching: in nonequivalent comparison group designs, the process in which researchers match the population profile of the comparison and experimental groups.
  • Time series: a set of measurements taken at intervals over a period of time.
  • Qualitative research: research that involves the use of data that represents human expression through words, pictures, movies, performance and other artifacts.

Graduate research methods in social work Copyright © 2021 by Matthew DeCarlo, Cory Cummings, Kate Agnelli is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.
