Scientific Rigor and the Quest for Truth

By Cheryl Sisk, PhD

Featured in: Rigor and Reproducibility


Scientific rigor means implementing the highest standards and best practices of the scientific method and applying those to one’s research. It is all about discovering the truth.

Scientific rigor involves minimizing bias in subject selection and data analysis. It means determining an appropriate sample size for your study so that you have sufficient statistical power and can be more confident that you are neither reporting false positives nor overlooking real effects as false negatives. It means conducting research that has a good chance of being replicated in your own lab and in other laboratories.
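As a concrete illustration of the power calculation alluded to above (a minimal sketch, not something from the article), here is how a required sample size is often estimated, assuming Python with the statsmodels library and a hypothetical medium effect size:

    # A minimal sketch of an a priori power analysis; assumes the statsmodels
    # package is installed. The effect size is a hypothetical planning value,
    # not a number taken from the article.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.5,  # hypothesized standardized group difference (Cohen's d)
        alpha=0.05,       # tolerated false-positive rate
        power=0.80,       # desired probability of detecting a true effect
    )
    print(f"Subjects needed per group: {n_per_group:.0f}")  # about 64 here

Under these planning values roughly 64 subjects per group are needed; halving the expected effect size roughly quadruples that requirement.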

The idea is to figure out the range of conditions under which a particular outcome is generated — the wider the range, the higher the likelihood that the outcome reflects a scientific truth. If results can be replicated in one’s own lab but cannot be reproduced in another lab, do not generalize to other situations, or do not hold up over time, the outcome might not be that important.

What’s at Stake

We like to think that our research is generalizable and robust. If the research is not conducted with scientific rigor, it will be neither.

And if neuroscience research is not conducted with scientific rigor, then we waste time, money, and energy pursuing outcomes that are not real. We also risk missing great insights because we did not conduct the research in a way that would allow us to detect them.

Some of the issues around blind analysis and randomization of subjects are particularly important for clinical applications, because lives will depend on the outcomes.

We also want science to be objective and empirical. But we have known for a long time that when an experimenter knows which treatment group a patient, rat, or mouse is in, that knowledge can influence the way they look at or analyze the data. It is not intentional, but it is experimenter bias. The same is true for patients: if they know they are in one group and not the other, bias is introduced, which is one reason it is critical to know how large the placebo effect is. Ultimately we are after the truth. If people — either the experimenter or the subject — are not blind to their treatment group, that introduces bias that will cloud the truth.
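One way to make such blinding mechanical rather than a matter of discipline is to randomize assignments in code and hide the group labels behind opaque codes that are unsealed only at the end. A minimal sketch under assumed conditions (hypothetical subject IDs and file names; Python standard library only):

    # A minimal sketch of randomized, blinded group assignment. Group names are
    # hidden behind opaque codes; the unblinding key goes to a separate file that
    # the analyst does not open until the analysis is locked. Assumes an even
    # number of subjects.
    import csv
    import random
    import secrets

    subjects = [f"rat_{i:02d}" for i in range(1, 21)]  # hypothetical subject IDs
    groups = ["treatment", "control"]

    random.shuffle(subjects)
    half = len(subjects) // 2
    assignment = {s: groups[i // half] for i, s in enumerate(subjects)}

    # Replace group names with opaque codes so the data files reveal nothing.
    codes = {g: secrets.token_hex(4) for g in groups}

    with open("blinded_assignments.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["subject", "group_code"])
        for s, g in assignment.items():
            writer.writerow([s, codes[g]])

    with open("unblinding_key.csv", "w", newline="") as f:  # kept sealed
        writer = csv.writer(f)
        writer.writerow(["group", "code"])
        for g, c in codes.items():
            writer.writerow([g, c])

The analyst works only from blinded_assignments.csv; ideally the key file stays with a third party until the analysis plan has been executed.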

The challenges mainly concern the time it takes to be careful and rigorous in research. PIs face many responsibilities, from teaching and administrative or committee work to administering grants, keeping the lab funded, and publishing papers. They are pushed and pulled in all directions. Ensuring scientific rigor takes time, however, and implementing its best practices will almost certainly lengthen the time to publication of one’s results.

My Lab’s Approach

Before any experiment is conducted, the student or postdoc has to write a plan of study so they think purposefully about their study in advance. The plan of study includes, among other points: rationale for the study, an answer to whether it is discovery science or hypothesis-driven (both have their place), sample size, determination of statistical power, detailed methods and procedures, and how data will be analyzed.

We also periodically read and discuss three papers that span more than a century of science. One is John Platt’s 1964 Science paper, “Strong Inference,” which discusses hypothesis testing and the importance of having alternative hypotheses and an experimental design intended to falsify (not support) your hypothesis. The second, written in 1897 by Thomas Crowder Chamberlin, discusses the method of multiple working hypotheses. More recently, a 2014 paper by Douglas Fudge, “50 Years of Platt’s Strong Inference,” revisits the principles outlined in the 1964 Platt paper.

Students find designing experiments that could falsify their hypothesis, given a particular outcome, a challenging exercise — and so do I. We have a lot of fun doing this.

Forward Look

Through the Training Modules to Enhance Data Reproducibility grant (I am a co-PI), SfN seeks to raise awareness in the neuroscience community of the importance of scientific rigor and its application to our discipline. That rigor is essential if we are to make strides that stand the test of time.

The next generation of neuroscientists needs to be aware of these points. Awareness will make the discipline stronger, but we also need to provide the training necessary to conduct research rigorously.

In addition, the relationship between young investigators and trainees and their mentors is really important in learning about scientific rigor. It is my responsibility as a mentor and trainer to raise young neuroscientists in a way that makes scientific rigor and strong inference habits of mind and practice.

It is also important for young researchers just learning the ropes to seek training in this area. This can teach them to question both their own assumptions and those of others in their lab, and to do what they can as budding scientists to enhance the scientific rigor and credibility of their research. 


Chapter 3. Psychological Science

3.1 Psychologists Use the Scientific Method to Guide Their Research

Learning Objectives

  • Describe the principles of the scientific method and explain its importance in conducting and interpreting research.
  • Differentiate laws from theories and explain how research hypotheses are developed and tested.
  • Discuss the procedures that researchers use to ensure that their research with humans and with animals is ethical.

Psychologists aren’t the only people who seek to understand human behaviour and solve social problems. Philosophers, religious leaders, and politicians, among others, also strive to provide explanations for human behaviour. But psychologists believe that research is the best tool for understanding human beings and their relationships with others. Rather than accepting the claim of a philosopher that people do (or do not) have free will, a psychologist would collect data to empirically test whether or not people are able to actively control their own behaviour. Rather than accepting a politician’s contention that creating (or abandoning) a new centre for mental health will improve the lives of individuals in the inner city, a psychologist would empirically assess the effects of receiving mental health treatment on the quality of life of the recipients. The statements made by psychologists are empirical, which means they are based on systematic collection and analysis of data.

The Scientific Method

All scientists (whether they are physicists, chemists, biologists, sociologists, or psychologists) are engaged in the basic processes of collecting data and drawing conclusions about those data. The methods used by scientists have developed over many years and provide a common framework for developing, organizing, and sharing information. The scientific method is the set of assumptions, rules, and procedures scientists use to conduct research.

In addition to requiring that science be empirical, the scientific method demands that the procedures used be objective, or free from the personal bias or emotions of the scientist. The scientific method prescribes how scientists collect and analyze data, how they draw conclusions from data, and how they share data with others. These rules increase objectivity by placing data under the scrutiny of other scientists and even the public at large. Because data are reported objectively, other scientists know exactly how the scientist collected and analyzed the data. This means that they do not have to rely only on the scientist’s own interpretation of the data; they may draw their own, potentially different, conclusions.

Most new research is designed to replicate — that is, to repeat, add to, or modify — previous research findings. The scientific method therefore results in an accumulation of scientific knowledge through the reporting of research and the addition to and modification of these reported findings by other scientists.

Laws and Theories as Organizing Principles

One goal of research is to organize information into meaningful statements that can be applied in many situations. Principles that are so general as to apply to all situations in a given domain of inquiry are known as laws. There are well-known laws in the physical sciences, such as the law of gravity and the laws of thermodynamics, and there are some universally accepted laws in psychology, such as the law of effect and Weber’s law. But because laws are very general principles and their validity has already been well established, they are themselves rarely directly subjected to scientific test.

The next step down from laws in the hierarchy of organizing principles is theory. A theory is an integrated set of principles that explains and predicts many, but not all, observed relationships within a given domain of inquiry. One example of an important theory in psychology is the stage theory of cognitive development proposed by the Swiss psychologist Jean Piaget. The theory states that children pass through a series of cognitive stages as they grow, each of which must be mastered in succession before movement to the next cognitive stage can occur. This is an extremely useful theory in human development because it can be applied to many different content areas and can be tested in many different ways.

Good theories have four important characteristics. First, good theories are general, meaning they summarize many different outcomes. Second, they are parsimonious, meaning they provide the simplest possible account of those outcomes. The stage theory of cognitive development meets both of these requirements. It can account for developmental changes in behaviour across a wide variety of domains, and yet it does so parsimoniously — by hypothesizing a simple set of cognitive stages. Third, good theories provide ideas for future research. The stage theory of cognitive development has been applied not only to learning about cognitive skills, but also to the study of children’s moral (Kohlberg, 1966) and gender (Ruble & Martin, 1998) development.

Finally, good theories are falsifiable (Popper, 1959), which means the variables of interest can be adequately measured and the relationships between the variables that are predicted by the theory can be shown through research to be incorrect. The stage theory of cognitive development is falsifiable because the stages of cognitive reasoning can be measured and because if research discovers, for instance, that children learn new tasks before they have reached the cognitive stage hypothesized to be required for that task, then the theory will be shown to be incorrect.

No single theory is able to account for all behaviour in all cases. Rather, theories are each limited in that they make accurate predictions in some situations or for some people but not in other situations or for other people. As a result, there is a constant exchange between theory and data: existing theories are modified on the basis of collected data, and the new modified theories then make new predictions that are tested by new data, and so forth. When a better theory is found, it will replace the old one. This is part of the accumulation of scientific knowledge.

The Research Hypothesis

Theories are usually framed too broadly to be tested in a single experiment. Therefore, scientists use a more precise statement of the presumed relationship between specific parts of a theory — a research hypothesis — as the basis for their research. A research hypothesis is a specific and falsifiable prediction about the relationship between or among two or more variables, where a variable is any attribute that can assume different values among different people or across different times or places. The research hypothesis states the existence of a relationship between the variables of interest and the specific direction of that relationship. For instance, the research hypothesis “Using marijuana will reduce learning” predicts that there is a relationship between one variable, “using marijuana,” and another variable called “learning.” Similarly, in the research hypothesis “Participating in psychotherapy will reduce anxiety,” the variables that are expected to be related are “participating in psychotherapy” and “level of anxiety.”

When stated in an abstract manner, the ideas that form the basis of a research hypothesis are known as conceptual variables. Conceptual variables are abstract ideas that form the basis of research hypotheses. Sometimes the conceptual variables are rather simple — for instance, age, gender, or weight. In other cases the conceptual variables represent more complex ideas, such as anxiety, cognitive development, learning, self-esteem, or sexism.

The first step in testing a research hypothesis involves turning the conceptual variables into measured variables, which are variables consisting of numbers that represent the conceptual variables. For instance, the conceptual variable “participating in psychotherapy” could be represented as the measured variable “number of psychotherapy hours the patient has accrued,” and the conceptual variable “using marijuana” could be assessed by having the research participants rate, on a scale from 1 to 10, how often they use marijuana or by administering a blood test that measures the presence of the chemicals in marijuana.

Psychologists use the term operational definition to refer to a precise statement of how a conceptual variable is turned into a measured variable. The relationship between conceptual and measured variables in a research hypothesis is diagrammed in Figure 3.1. The conceptual variables are represented in circles at the top of the figure (psychotherapy and anxiety), and the measured variables are represented in squares at the bottom (number of hours the patient has spent in psychotherapy and anxiety concerns as reported by the patient). The two vertical arrows, which lead from the conceptual variables to the measured variables, represent the operational definitions of the two variables. The arrows indicate the expectation that changes in the conceptual variables (psychotherapy and anxiety) will cause changes in the corresponding measured variables (number of hours in psychotherapy and reported anxiety concerns). The measured variables are then used to draw inferences about the conceptual variables.

Table 3.1 lists some potential operational definitions of conceptual variables that have been used in psychological research. As you read through this list, note that in contrast to the abstract conceptual variables, the measured variables are very specific. This specificity is important for two reasons. First, more specific definitions mean that there is less danger that the collected data will be misunderstood by others. Second, specific definitions will enable future researchers to replicate the research.

Table 3.1 Examples of the Operational Definitions of Conceptual Variables that Have Been Used in Psychological Research

[Table: conceptual variables — aggression, interpersonal attraction, employee satisfaction, decision-making skills, and depression — each paired with example operational definitions.]

Conducting Ethical Research

One of the questions that all scientists must address concerns the ethics of their research. Physicists are concerned about the potentially harmful outcomes of their experiments with nuclear materials. Biologists worry about the potential outcomes of creating genetically engineered human babies. Medical researchers agonize over the ethics of withholding potentially beneficial drugs from control groups in clinical trials. Likewise, psychologists are continually considering the ethics of their research.

Research in psychology may cause some stress, harm, or inconvenience for the people who participate in that research. For instance, researchers may require introductory psychology students to participate in research projects and then deceive these students, at least temporarily, about the nature of the research. Psychologists may induce stress, anxiety, or negative moods in their participants, expose them to weak electrical shocks, or convince them to behave in ways that violate their moral standards. And researchers may sometimes use animals in their research, potentially harming them in the process.

Decisions about whether research is ethical are made using established ethical codes developed by scientific organizations, such as the Canadian Psychological Association, and by federal governments. In Canada, the federal agencies Health Canada and the Canadian Institutes of Health Research provide the guidelines for ethical standards in research. Some research, such as the research conducted by the Nazis on prisoners during World War II, is perceived as immoral by almost everyone. Other procedures, such as the use of animals in research testing the effectiveness of drugs, are more controversial.

Scientific research has provided information that has improved the lives of many people. Therefore, it is unreasonable to argue that because scientific research has costs, no research should be conducted. This argument fails to consider the fact that there are significant costs to not doing research and that these costs may be greater than the potential costs of conducting the research (Rosenthal, 1994). In each case, before beginning to conduct the research, scientists have attempted to determine the potential risks and benefits of the research and have come to the conclusion that the potential benefits of conducting the research outweigh the potential costs to the research participants.

Characteristics of an Ethical Research Project Using Human Participants

  • Trust and positive rapport are created between the researcher and the participant.
  • The rights of both the experimenter and participant are considered, and the relationship between them is mutually beneficial.
  • The experimenter treats the participant with concern and respect and attempts to make the research experience a pleasant and informative one.
  • Before the research begins, the participant is given all information relevant to his or her decision to participate, including any possibilities of physical danger or psychological stress.
  • The participant is given a chance to have questions about the procedure answered, thus guaranteeing his or her free choice about participating.
  • After the experiment is over, any deception that has been used is made public, and the necessity for it is explained.
  • The experimenter carefully debriefs the participant, explaining the underlying research hypothesis and the purpose of the experimental procedure in detail and answering any questions.
  • The experimenter provides information about how he or she can be contacted and offers to provide information about the results of the research if the participant is interested in receiving it. (Stangor, 2011)

This list presents some of the most important factors that psychologists take into consideration when designing their research. The most direct ethical concern of the scientist is to prevent harm to the research participants. One example is the well-known research of Stanley Milgram (1974) investigating obedience to authority. In these studies, participants were induced by an experimenter to administer electric shocks to another person so that Milgram could study the extent to which they would obey the demands of an authority figure. Most participants evidenced high levels of stress resulting from the psychological conflict they experienced between engaging in aggressive and dangerous behaviour and following the instructions of the experimenter. Studies such as those by Milgram are no longer conducted because the scientific community is now much more sensitized to the potential of such procedures to create emotional discomfort or harm.

Another goal of ethical research is to guarantee that participants have free choice regarding whether they wish to participate in research. Students in psychology classes may be allowed, or even required, to participate in research, but they are also always given an option to choose a different study to be in, or to perform other activities instead. And once an experiment begins, the research participant is always free to leave the experiment if he or she wishes to. Concerns with free choice also occur in institutional settings, such as in schools, hospitals, corporations, and prisons, when individuals are required by the institutions to take certain tests, or when employees are told or asked to participate in research.

Researchers must also protect the privacy of the research participants. In some cases data can be kept anonymous by not having the respondents put any identifying information on their questionnaires. In other cases the data cannot be anonymous because the researcher needs to keep track of which respondent contributed the data. In this case, one technique is to have each participant use a unique code number to identify his or her data, such as the last four digits of the student ID number. In this way the researcher can keep track of which person completed which questionnaire, but no one will be able to connect the data with the individual who contributed them.
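A minimal sketch of that bookkeeping, swapping the last-four-digits convention for a keyed hash so the code cannot be traced back to the ID (hypothetical IDs and key; Python standard library only):

    # A minimal sketch of pseudonymous participant codes. The secret key stays
    # with the researcher, apart from the data; data files carry only the codes,
    # so questionnaires can be linked across sessions without exposing identity.
    import hashlib
    import hmac

    SECRET_KEY = b"keep-this-offline"  # hypothetical key, stored apart from the data

    def participant_code(student_id: str) -> str:
        """Return a stable, non-reversible code for one participant."""
        digest = hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256)
        return digest.hexdigest()[:8]

    print(participant_code("20231234"))  # the same ID always yields the same code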

Perhaps the most widespread ethical concern to the participants in behavioural research is the extent to which researchers employ deception. Deception occurs whenever research participants are not completely and fully informed about the nature of the research project before participating in it. Deception may occur in an active way, such as when the researcher tells the participants that he or she is studying learning when in fact the experiment really concerns obedience to authority. In other cases the deception is more passive, such as when participants are not told about the hypothesis being studied or the potential use of the data being collected.

Some researchers have argued that no deception should ever be used in any research (Baumrind, 1985). They argue that participants should always be told the complete truth about the nature of the research they are in, and that when participants are deceived there will be negative consequences, such as the possibility that participants may arrive at other studies already expecting to be deceived. Other psychologists defend the use of deception on the grounds that it is needed to get participants to act naturally and to enable the study of psychological phenomena that might not otherwise get investigated. They argue that it would be impossible to study topics such as altruism, aggression, obedience, and stereotyping without using deception because if participants were informed ahead of time what the study involved, this knowledge would certainly change their behaviour. The codes of ethics of the Canadian Psychological Association and the Tri-Council Policy Statement of Canada’s three federal research agencies (the Canadian Institutes of Health Research [CIHR], the Natural Sciences and Engineering Research Council of Canada [NSERC], and the Social Sciences and Humanities Research Council of Canada [SSHRC] or “the Agencies”) allow researchers to use deception, but these codes also require them to explicitly consider how their research might be conducted without the use of deception.

Ensuring that Research Is Ethical

Making decisions about the ethics of research involves weighing the costs and benefits of conducting versus not conducting a given research project. The costs involve potential harm to the research participants and to the field, whereas the benefits include the potential for advancing knowledge about human behaviour and offering various advantages, some educational, to the individual participants. Most generally, the ethics of a given research project are determined through a cost-benefit analysis, in which the costs are compared with the benefits. If the potential costs of the research appear to outweigh any potential benefits that might come from it, then the research should not proceed.

Arriving at a cost-benefit ratio is not simple. For one thing, there is no way to know ahead of time what the effects of a given procedure will be on every person or animal who participates, or what benefit to society the research is likely to produce. In addition, what is ethical is defined by the current state of thinking within society, and thus perceived costs and benefits change over time. In Canada, the Tri-Council regulations require that all universities receiving funds from the Agencies set up a Research Ethics Board (REB) to determine whether proposed research meets the regulations. The REB is a committee of at least five members whose goal is to determine the cost-benefit ratio of research conducted within an institution. The REB must approve the procedures of all the research conducted at the institution before the research can begin. The board may suggest modifications to the procedures, or (in rare cases) it may inform the scientist that the research violates the Tri-Council Policy Statement and thus cannot be conducted at all.

One important tool for ensuring that research is ethical is the use of informed consent. A sample informed consent form is shown in Figure 3.2. Informed consent, conducted before a participant begins a research session, is designed to explain the research procedures and inform the participant of his or her rights during the investigation. The informed consent explains as much as possible about the true nature of the study, particularly everything that might be expected to influence willingness to participate, but it may in some cases withhold some information that allows the study to work.

The informed consent form explains the research procedures and informs the participant of his or her rights during the investigation. Informed consent should address the following issues:

  • A very general statement about the purpose of the study
  • A brief description of what the participants will be asked to do
  • A brief description of the risks, if any, and what the researcher will do to restore the participant
  • A statement informing participants that they may refuse to participate or withdraw at any time without being penalized
  • A statement regarding how the participant’s confidentiality will be protected
  • Encouragement to ask questions about participation
  • Instructions regarding whom to contact if there are concerns
  • Information regarding where the subjects may be informed about the study’s findings

Because participating in research has the potential for producing long-term changes in the research participants, all participants should be fully debriefed immediately after their participation. The debriefing is a procedure designed to fully explain the purposes and procedures of the research and remove any harmful after-effects of participation.

Research with Animals

Because animals make up an important part of the natural world, and because some research cannot be conducted using humans, animals are also participants in psychological research (Figure 3.3). Most psychological research using animals is now conducted with rats, mice, and birds, and the use of other animals in research is declining (Thomas & Blackman, 1992). As with ethical decisions involving human participants, a set of basic principles has been developed that helps researchers make informed decisions about such research; a summary is shown below.

Canadian Psychological Association Guidelines on Humane Care and Use of Animals in Research

The following are some of the most important ethical principles from the Canadian Psychological Association’s (CPA) guidelines on research with animals.

  • II.45 Not use animals in their research unless there is a reasonable expectation that the research will increase understanding of the structures and processes underlying behaviour, or increase understanding of the particular animal species used in the study, or result eventually in benefits to the health and welfare of humans or other animals.
  • II.46 Use a procedure subjecting animals to pain, stress, or privation only if an alternative procedure is unavailable and the goal is justified by its prospective scientific, educational, or applied value.
  • II.47 Make every effort to minimize the discomfort, illness, and pain of animals. This would include performing surgical procedures only under appropriate anaesthesia, using techniques to avoid infection and minimize pain during and after surgery and, if disposing of experimental animals is carried out at the termination of the study, doing so in a humane way. (Canadian Code of Ethics for Psychologists)
  • II.48 Use animals in classroom demonstrations only if the instructional objectives cannot be achieved through the use of video-tapes, films, or other methods, and if the type of demonstration is warranted by the anticipated instructional gain (Canadian Psychological Association, 2000).

Because the use of animals in research involves a personal value, people naturally disagree about this practice. Although many people accept the value of such research (Plous, 1996), a minority of people, including animal-rights activists, believe that it is ethically wrong to conduct research on animals. This argument is based on the assumption that because animals are living creatures just as humans are, no harm should ever be done to them.

Most scientists, however, reject this view. They argue that such beliefs ignore the potential benefits that have come, and continue to come, from research with animals. For instance, drugs that can reduce the incidence of cancer or AIDS may first be tested on animals, and surgery that can save human lives may first be practised on animals. Research on animals has also led to a better understanding of the physiological causes of depression, phobias, and stress, among other illnesses. In contrast to animal-rights activists, then, scientists believe that because there are many benefits that accrue from animal research, such research can and should continue as long as the humane treatment of the animals used in the research is guaranteed.

Key Takeaways

  • Psychologists use the scientific method to generate, accumulate, and report scientific knowledge.
  • Basic research, which answers questions about behaviour, and applied research, which finds solutions to everyday problems, inform each other and work together to advance science.
  • Research reports describing scientific studies are published in scientific journals so that other scientists and laypersons may review the empirical findings.
  • Organizing principles, including laws, theories, and research hypotheses, give structure and uniformity to scientific methods.
  • Concerns for conducting ethical research are paramount. Researchers ensure that participants are given free choice to participate and that their privacy is protected. Informed consent and debriefing help provide humane treatment of participants.
  • A cost-benefit analysis is used to determine what research should and should not be allowed to proceed.

Exercises and Critical Thinking

  • Give an example from personal experience of how you or someone you know has benefited from the results of scientific research.
  • Find and discuss a research project that in your opinion has ethical concerns. Explain why you find these concerns to be troubling.
  • Indicate your personal feelings about the use of animals in research. When should animals be used, and when should they not be? What principles have you used to come to these conclusions?

Image Attributions

Figure 3.3: “Wistar rat” by Janet Stephens (http://en.wikipedia.org/wiki/File:Wistar_rat.jpg) is in the public domain.

References

Baumrind, D. (1985). Research using intentional deception: Ethical issues revisited. American Psychologist, 40, 165–174.

Canadian Psychological Association. (2000). Canadian code of ethics for psychologists (3rd ed.) [PDF]. Retrieved July 2014 from http://www.cpa.ca/cpasite/userfiles/Documents/Practice_Page/Ethics_Code_Psych.pdf

Kohlberg, L. (1966). A cognitive-developmental analysis of children’s sex-role concepts and attitudes. In E. E. Maccoby (Ed.), The development of sex differences. Stanford, CA: Stanford University Press.

Milgram, S. (1974). Obedience to authority: An experimental view. New York, NY: Harper and Row.

Plous, S. (1996). Attitudes toward the use of animals in psychological research and education. Psychological Science, 7, 352–358.

Popper, K. R. (1959). The logic of scientific discovery. New York, NY: Basic Books.

Rosenthal, R. (1994). Science and ethics in conducting, analyzing, and reporting psychological research. Psychological Science, 5, 127–134.

Ruble, D., & Martin, C. (1998). Gender development. In W. Damon (Ed.), Handbook of child psychology (5th ed., pp. 933–1016). New York, NY: John Wiley & Sons.

Stangor, C. (2011). Research methods for the behavioral sciences (4th ed.). Mountain View, CA: Cengage.

Thomas, G., & Blackman, D. (1992). The future of animal studies in psychology. American Psychologist, 47, 1678.

Long Descriptions

Figure 3.2 long description: Sample research consent form.

My name is [insert your name], and this research project is part of the requirement for a [insert your degree program] at [blank] University. My credentials with [blank] university can be established by telephoning [insert name and number of supervisor].

This document constitutes an agreement to participate in my research project, the objective of which is to [insert research objectives and the sponsoring organization here].

The research will consist of [insert your methodology] and is foreseen to last [insert amount of time]. The foreseen questions will refer to [insert summary of foreseen questions]. In addition to submitting my final report to [blank] University in partial fulfillment for a [insert your degree program], I will also be sharing my research findings with [insert your sponsoring organization]. [Disclose all the purposes to which the research data is going to be put, e.g., journal articles, books, etc.]

Information will be recorded in hand-written format (or taped/videotaped, etc.) and, where appropriate, summarized in anonymous format in the body of the final report. At no time will any specific comments be attributed to any individual unless specific agreement has been obtained beforehand. All documentation will be kept strictly confidential.

A copy of the final report will be published. A copy will be housed at [blank] university, available online through [blank] and will be publicly accessible. Access and distribution will be unrestricted.

[Disclose any and all conflicts of interest and how those will be managed.]

You are not compelled to participate in this research project. If you do choose to participate, you are free to withdraw at any time without prejudice. Similarly, if you choose not to participate in this research project, this information will also be maintained in confidence.

By signing this letter, you give free and informed consent to participate in this project.

Name (Please print), Signed: Date: [Return to Figure 3.2]

Introduction to Psychology - 1st Canadian Edition Copyright © 2014 by Jennifer Walinga and Charles Stangor is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.


What is Research?: The Truth about Research


Research isn't what you think it is.  It's not just dusty books written by men long since gone from this world.  No...research is something we all do almost every day of our lives.  You do it when you ask your friend if the movie they saw was any good.  You do it when you go on Yelp to find a place to eat.  You even do it when you read the comments section of an article on your favorite site.  The key is knowing how to do it well.  Doing it well doesn't mean putting the same amount of research into every problem--it means knowing the level of research needed, as well as how to reach that level.

Different Topics (and Different Needs) Require Different Expertise

For example, Wikipedia is fine if I want to know what happened on my favorite tv show last season, but I can't cite Wikipedia on my term paper.  I also wouldn't go to a lawyer to see if I need to have my appendix taken out.  So when doing research, it is always important to take the author's experience and credentials into consideration.  Just because I trust Rachael Ray to provide good recipes for game day doesn't mean I should take financial advice from her.  So always consider the source.

All Information is Created for a Reason

The person creating the information you're reading, hearing, watching, etc., is doing it for a reason.  Maybe it's to inform or educate you; maybe it's to entertain or persuade you; maybe it's to get back at his roommate for not washing the dishes.  The point is that every piece of information you read was created with a specific purpose.

Knowledge is Power

All information has value.  The amount of that value depends on the person who created that information and on the person who is receiving it.  For instance, if your 8-year-old nephew tells you that a meteorite is about to hit New Orleans, you are not going to value that information the same way you would if Neil deGrasse Tyson said it.  Likewise, you probably wouldn't value details of the latest kids' movie as much as your nephew would.

Research is Answering a Question

Any time you do research, you are simply trying to answer a question.  In the process of answering that question, you may find that even more questions pop up.  For instance, you may wonder why the sky is blue.  You learn that it has to do with the way light scatters.  You may wonder why light scatters, which would lead you to discover that light acts like a wave and a particle.  And it goes on and on.

Articles and Books are Just Really Public Conversations

Most scholarly articles and books started as questions that the author had about a topic.  From there, the author either agreed or disagreed with what he/she read.  Tons of research later, the author publishes their opinion.  Think of it as a really public, really slow, really long conversation between scholars on a topic.

Searching for Information is an Exploration You Should Plan for

It's easy to get lost in the din of information.  Have a plan.  It's okay to venture from the plan from time to time.  But having a plan makes it easier to find your way back to what you're trying to find out.  As you go along, you may decide that you'd prefer to go a slightly (or totally) different direction with your research.  That's fine.  That's normal.  Just don't go off in a direction because you got distracted.  Think Wikipedia--you start out looking up the plot of Game of Thrones and end up reading about cheetahs in the Serengeti.


Science News

Is redoing scientific research the best way to find truth?

During replication attempts, too many studies fail to pass muster


REPEAT PERFORMANCE  By some accounts, science is facing a crisis of confidence because the results of many studies aren’t confirmed when researchers attempt to replicate the experiments. 

Harry Campbell


By Tina Hesman Saey

January 13, 2015 at 2:23 pm

R. Allan Mufson remembers the alarming letters from physicians. They were testing a drug intended to help cancer patients by boosting levels of oxygen-carrying hemoglobin in their blood.

In animal studies and early clinical trials, the drug known as Epo (for erythropoietin) appeared to counteract anemia caused by radiation and chemotherapy. It had the potential to spare patients from the need for blood transfusions. Researchers also had evidence that Epo might increase radiation’s tumor-killing power.

But when doctors started giving Epo or related drugs, called erythropoietic-stimulating agents, to large numbers of cancer patients in clinical trials, it looked like deaths increased. Physicians were concerned, and some stopped their studies early.

At the same time, laboratory researchers were collecting evidence that Epo might be feeding rather than fighting tumors. When other scientists, particularly researchers who worked for the company that made the drug, tried to replicate the original findings, they couldn’t.


This article is part one of a two-part series. In the Feb. 7 issue, writer Tina Hesman Saey will explore how a move toward massive datasets creates opportunities for chaos and errors to multiply .

Scientists should be able to say whether Epo is good or bad for cancer patients, but seven years later, they still can’t. The Epo debate highlights deeper trouble in the life sciences and social sciences, two fields where it appears particularly hard to replicate research findings. Replicability is a cornerstone of science, but too many studies are failing the test.

“There’s a community sense that this is a growing problem,” says Lawrence Tabak, deputy director of the National Institutes of Health. Early last year, NIH joined the chorus of researchers drawing attention to the problem, and the agency issued a plan and a call to action.

Unprecedented funding challenges have put scientists under extreme pressure to publish quickly and often. Those pressures may lead researchers to publish results before proper vetting or to keep hush about experiments that didn’t pan out. At the same time, journals have pared down the section in a published paper devoted to describing a study’s methods: “In some journals it’s really a methods tweet,” Tabak says. Scientists are less certain than ever that what they read in journals is true.


Many people say one solution to the problem is to have independent labs replicate key studies to validate their findings. The hope is to identify where and why things go wrong. Armed with that knowledge, the replicators think they can improve the reliability of published reports.

Others call that quest futile, saying it’s difficult — if not impossible — to redo a study exactly, especially when working with highly variable subjects, such as people, animals or cells. Repeating published work wastes time and money, the critics say, and it does nothing to advance knowledge. They’d prefer to see questions approached with a variety of different methods. It’s the general patterns and basic principles — the reproducibility of a finding, not the precise replication of a specific experiment — that really matter.

It seems that everyone has an opinion about the underlying causes leading to irreproducibility, and many have offered solutions. But no one really knows entirely what is wrong or if any of the proffered fixes will work.

Much of the controversy has centered on the types of statistical analyses used in most scientific studies, and hardly anyone disputes that the math is a major tripping point. An influential 2005 paper looking at the statistical weakness of scientific studies generated much of the self-reflection taking place within the medical community over the last decade. While those issues still exist, especially as more complex analyses are applied to big data studies, there remain deeper problems that may be harder to fix.

Taking sides

Epo researchers weren’t the first to find discrepancies in their results, but their experience set the stage for much of the current controversy.

Wrong answers

[Infographic] The Bayer pharmaceutical company tried to repeat studies in three research fields, mostly cancer studies. Almost two-thirds of the redos produced results inconsistent with the original findings.

Source: F. Prinz et al/Nature Reviews Drug Discovery 2011

Mufson, head of the National Cancer Institute’s Cancer Immunology and Hematology Etiology Branch, organized a two-day workshop in 2007 where academic, government and pharmaceutical company scientists, clinicians and patient advocates discussed the Epo findings.

A divide quickly emerged between pharmaceutical researchers and scientists from academic labs, says Charles Bennett, an oncologist at the University of South Carolina.

Bennett was part of a team that had reported in 2005 that erythropoietin reduced the need for blood transfusions and possibly improved survival among cancer patients. But he came to the meeting armed with very different data. He and colleagues found that erythropoietin and darbepoetin used to treat anemia in cancer patients raised the risk of blood clots by 57 percent and the risk of dying by about 10 percent. Others found that people with breast or head and neck cancers died sooner than other cancer patients if they took Epo.

Those who argued that Epo was harmful to patients cited cellular mechanisms: tumor cells make more Epo receptors than other cells. More receptors, the researchers feared, meant the drug was stimulating growth of the cancer cells, a finding that might explain why patients were dying.

Company scientists from Amgen, which makes Epo drugs, charged that they had tried and could not replicate the results published by the academic researchers. After listening to the researchers hash through data for two days, Bennett could see why there was conflict. The company and academic scientists couldn’t even agree on what constituted growth of tumor cells, or on the correct tools for detecting Epo receptors on tumor cells, he says. That disconnect meant neither side would be able to confirm the other’s findings, nor could they completely discount the results. The meeting ended with a list of concerns and direction for future studies, but little consensus.

“I went in thinking it was black and white,” Bennett says. “Now, I’m very much convinced it’s a gray answer and everybody’s right.”

From there, pressure continued to build. In 2012, Amgen caused shock waves by reporting that it could independently confirm only six of 53 “landmark” papers on preclinical cancer studies. Replicating results is one of the first steps companies take before investing in further development of a drug. Amgen will not disclose how it conducted the replication experiments or even which studies it tried to replicate. Bennett suspects the controversial Epo experiments were among the chosen studies, perhaps tinting the results.

We’re always in a gray area between perfect truth and complete falsehood.

— Giovanni Parmigiani

Amgen’s revelation came on the heels of a similar report from the pharmaceutical company Bayer. In 2011, three Bayer researchers reported in Nature Reviews Drug Discovery that company scientists could fully replicate only about 20 to 25 percent of published preclinical cancer, cardiovascular and women’s health studies. Like Amgen, Bayer did not say which studies it attempted to replicate. But those inconsistencies could mean the company would have to drop projects or expend more resources to validate the original reports.

Scientists were already uneasy because of a well-known 2005 essay by epidemiologist John Ioannidis, now at Stanford University. He had used statistical arguments to contend that most research findings are false. Faulty statistics often indicate a finding is true when it is not. Those falsely positive results usually don’t replicate.
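The heart of that statistical argument fits in a few lines. A minimal sketch with illustrative inputs (a 10 percent prior that a tested hypothesis is true, 80 percent power, and a 5 percent false-positive threshold; these numbers are assumptions, not figures from the essay):

    # A minimal sketch of the positive-predictive-value argument.
    # All three inputs are illustrative assumptions.
    prior = 0.10   # fraction of tested hypotheses that are actually true
    power = 0.80   # chance a true effect is detected
    alpha = 0.05   # chance a null effect is (falsely) declared significant

    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    ppv = true_positives / (true_positives + false_positives)

    print(f"Share of 'significant' findings that are real: {ppv:.0%}")  # 64%

With those inputs, only about 64 percent of nominally significant findings are true, before any bias enters; lower the prior or the power and the share of false findings grows quickly.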

Academic scientists have had no easier time than drug companies in replicating others’ results. Researchers at MD Anderson Cancer Center in Houston surveyed their colleagues about whether they had ever had difficulty replicating findings from published papers. More than half, 54.6 percent, of the 434 respondents said that they had, the survey team reported in PLOS ONE in 2013. Only a third of those people were able to correct the discrepancy or explain why they got different answers.

“Those kinds of studies are sort of shocking and worrying,” says Elizabeth Iorns, a biologist at the University of Miami in Florida and chief executive officer for Science Exchange, a network of labs that attempt to independently validate research results.

Over the long term, science is a self-correcting process and will sort itself out, Tabak and NIH director Francis Collins wrote last January in Nature. “In the shorter term, however, the checks and balances that once ensured scientific fidelity have been hobbled. This has compromised the ability of today’s researchers to reproduce others’ findings,” Tabak and Collins wrote. Myriad reasons for the failure to reproduce have been given, many involving the culture of science. Fixing the problem is going to require a more sophisticated understanding of what’s actually wrong, Ioannidis and others argue.

Two schools of thought

Researchers don’t even agree on whether it is necessary to duplicate studies exactly, or to validate the underlying principles, says Giovanni Parmigiani, a statistician at the Dana-Farber Cancer Institute in Boston. Scientists have two schools of thought about verifying someone else’s results: replication and reproducibility. The replication school teaches that researchers should retrace all of the steps in a study from data generation through the final analysis to see if the same answer emerges. If a study is true and right, it should.

Proponents of the other school, reproducibility, contend that complete duplication only demonstrates whether a phenomenon occurs under the exact conditions of the experiment. Obtaining consistent results across studies using different methods or groups of people or animals is a more reliable gauge of biological meaningfulness, the reproducibility school teaches. To add to the confusion, some scientists reverse the labels.

Timothy Wilson, a social psychologist at the University of Virginia in Charlottesville, is in the reproducibility camp. He would prefer that studies extend the original findings, perhaps modifying variables to learn more about the underlying principles. “Let’s try to discover something,” he says. “This is the way science marches forward. It’s slow and messy, but it works.”

But Iorns and Brian Nosek, a psychologist and one of Wilson’s colleagues at the University of Virginia, are among those who think exact duplication can move research in the right direction.

In 2013, Nosek and his former student Jeffrey Spies cofounded the Center for Open Science, with the lofty goal “to increase openness, integrity and reproducibility of scientific research.” Their approach was twofold: provide infrastructure to allow scientists to more easily and openly share data and conduct research projects to repeat studies in various disciplines in science.

Soon, Nosek and Iorns’ Science Exchange teamed up to replicate 50 of the most important (defined as highly cited) cancer studies published between 2010 and 2012. On December 10, 2014, the Reproducibility Project: Cancer Biology kicked off when three groups announced in the journal eLife their intention to replicate key experiments from previous studies and shared their plans for how to do it.

Iorns, Nosek and their collaborators hope the effort will give scientists a better idea of the reliability of these studies. If the replication efforts fail, the researchers want to know why. It’s possible that the underlying biology is sound, but that some technical glitch prevents successful replication of the results. Or the researchers may have been barking up the wrong tree. Most likely the real answer is somewhere in the middle.

Neuroscience researchers realized the value of duplicating studies with therapeutic potential early on. In 2003, the National Institute of Neurological Disorders and Stroke contracted labs to redo some important spinal cord injury studies that showed promise for helping patients. Neuroscientist Oswald Steward of the University of California, Irvine School of Medicine heads one of the contract labs.

Survey says …

[Table] Academic researchers have trouble duplicating other researchers’ published results, a survey from MD Anderson Cancer Center suggests. Researchers, especially junior faculty members and trainees, often don’t resolve discrepancies and tend not to publish conflicting reports.

Source: A. Mobley et al/PLOS ONE 2013

Of the 12 studies Steward and colleagues tried to copy, they could fully replicate only one, and only after the researchers determined that the ability of a drug to limit hemorrhaging and nerve degeneration near an injury depended upon the exact mechanism that produced the injury. Half the studies could not be replicated at all, and the rest were partially replicated or produced mixed or less robust results than the originals, according to a 2012 report in Experimental Neurology.

Notably, the researchers cited 11 reasons that might account for why previous studies failed to replicate; only one was that the original study was wrong. Exact duplications of original studies are impossible, Steward and his colleagues contend.

Acts of aggression

Before looking at cancer studies, Nosek investigated his own field with a large collaborative research project. In a special issue of Social Psychology published last April, he and other researchers reported the results of 15 replication studies testing 26 psychological phenomena. Of the 26 original observations tested, they could replicate only 10. That doesn’t mean the rest failed entirely; several of the replication studies got results similar to, or mixed relative to, the originals, but they couldn’t qualify as successes because they didn’t pass statistical tests.

Simone Schnall conducted one of the studies that other researchers claimed they could not replicate. Schnall, a social psychologist at the University of Cambridge, studies how emotions affect judgment.

She has found that making people sit at a sticky, filthy desk or showing them revolting movie scenes not only disgusts them, it makes their moral judgment harsher. In 2008, Schnall and colleagues examined disgust’s flip side, cleanliness, and found that hand washing made people’s moral judgments less harsh.

M. Brent Donnellan, one of the researchers who attempted to replicate Schnall’s 2008 findings, blogged before the replication study was published that his group made two unsuccessful attempts to duplicate Schnall’s original findings. “We gave it our best shot and pretty much encountered an epic fail as my 10-year-old would say,” he wrote. When Schnall and others complained that the comments were unprofessional and pointed out several possible reasons the study failed to replicate, Donnellan, a psychologist at Texas A&M University in College Station, apologized for the remark, calling it “ill-advised.”

Schnall’s criticism set off a flurry of negative remarks from some researchers, while others leapt to her defense. The most vociferous of her champions have called replicators “bullies,” “second-stringers” and worse. The experience, Schnall said, has damaged her reputation and affected her ability to get funding; when decision makers hear about the failed replication they suspect she did something wrong.

“Somehow failure to replicate is viewed as more informative than the original studies,” says Wilson. In Schnall’s case, “For all we know it was an epic fail on the replicators’ part.”

The scientific community needs to realize that it is difficult to replicate a study, says Ioannidis. “People should not be shamed,” he says. Every geneticist, himself included, has published studies purporting to find genetic causes of disease that turned out to be wrong, he says.

Iorns is not out to stigmatize anyone, she says. “We don’t want people to feel like we’re policing them or coming after them.” She aims to improve the quality of science and scientists. Researchers should be rewarded for producing consistently reproducible results, she says. “Ultimately it should be the major criteria by which scientists are assessed. What could be more important?”

Variable soup

Much of the discussion of replicability has centered on social and cultural factors that contribute to publication of irreplicable results, but no one has really been discussing the mechanisms that may lead replication efforts to fail, says Benjamin Djulbegovic, a clinical researcher at the University of South Florida in Tampa. He and his long-time collaborator, mathematician Iztok Hozo of Indiana University Northwest in Gary, have been mulling over the question for years, Djulbegovic says.

They were inspired by the “butterfly effect,” an illustration from chaos theory of how one small action can have major repercussions later. The classic example holds that a butterfly flapping its wings in Brazil can brew a tornado in Texas. Djulbegovic hit on the idea that there’s chaos at work in most biology and psychology studies as well.

Changing even a few of the original conditions of an experiment can have a butterfly effect on the outcome of replication attempts, he and Hozo reported in June in Acta Informatica Medica. The two researchers considered a simplified case in which 12 factors may affect a doctor’s decision on how to treat a patient. The researchers focused on clinical decision making, but the concept is applicable to other areas of science, Djulbegovic says. Most of the factors, such as the decision maker’s time pressure (yes or no) or cultural factors (present or not important) have two possible starting places. The doctor’s individual characteristics — age (old or young), gender (male or female) — could have four combinations and the decision maker’s personality had five different modes. Altogether, those dozen initial factors make up 20,480 combinations that could represent the initial conditions of the experiment.
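The arithmetic implied by those figures (a reconstruction, since the paper’s exact factor breakdown isn’t spelled out here) is ten two-way factors, one four-way factor for the doctor’s individual characteristics, and one five-way factor for personality:

$$2^{10} \times 4 \times 5 = 1{,}024 \times 20 = 20{,}480$$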

That didn’t even include variables about where exams took place, the conditions affecting study participants (Were they tired? Had they recently fought with a loved one or indulged in alcohol?), or the handling of biological samples that might affect the diagnostic test results. Researchers have good and bad days too. “You may interview people in the morning, but nobody controls how well you slept last night or how long it took you to drive to work,” Djulbegovic says. Those invisible variables may very well influence the medical decisions made that day and therefore affect the study’s outcome.

Djulbegovic and Hozo varied some of the initial conditions in computer simulations. If initial conditions varied between experiments by 2.5 conditions or fewer, the results were highly consistent, or replicable. But changing 3.5 to 4 initial factors gave answers all over the map, indicating that very slight changes in initial conditions can render experiments irreproducible.

The study is not rigorous mathematical proof, Djulbegovic says. “We just sort of put some thoughts in writing.”
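To make that sensitivity concrete, here is a toy simulation in the spirit of, but not reproducing, Djulbegovic and Hozo’s computer experiments. The 12 binary factors and the outcome function are invented for illustration:

```python
import random

random.seed(1)

N_FACTORS = 12  # binary initial conditions; a simplification of the paper's setup

def outcome(conditions):
    # A toy nonlinear "decision" that stands in for whatever the study measures.
    return sum(c * (i + 1) for i, c in enumerate(conditions)) % 7 < 3

def replication_agreement(n_changed, trials=10_000):
    """Fraction of replications whose outcome matches the original
    when n_changed of the initial conditions are flipped."""
    matches = 0
    for _ in range(trials):
        original = [random.randint(0, 1) for _ in range(N_FACTORS)]
        replica = original[:]
        for i in random.sample(range(N_FACTORS), n_changed):
            replica[i] ^= 1  # flip one initial condition
        matches += outcome(original) == outcome(replica)
    return matches / trials

for k in range(5):
    print(f"{k} conditions changed: agreement = {replication_agreement(k):.2f}")
```

Even this crude sketch shows agreement dropping once initial conditions differ between the original experiment and the replication.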

Balancing act

Failed replications may offer scientists some valuable insights, Steward says. “We need to recognize that many results won’t make it through the translational grist mill.” In other words, a therapy that shows promise under specific conditions, but can’t be replicated in other labs, is not ready to be tried in humans.

“In many cases there’s a biological story there, but it’s a fragile one,” Steward says. “If it’s fragile, it’s not translatable.”

That doesn’t negate the original findings, he says. “It changes our perspective on what it takes to get a translatable result.”

Nosek, proponent of replication that he is, admits that scientists need room for error. Requiring absolute replicability could discourage researchers from ever taking a chance, producing only the tiniest of incremental advances, he says. Science needs both crazy ideas and careful research to succeed.

“It’s totally OK that you have this outrageous attempt that fails,” Nosek says. After all, “Einstein was wrong about a lot of stuff. Newton. Thomas Edison. They had plenty of failures, too.” But science can’t survive on bold audacity alone, either. “We need a balance of innovation and verification,” Nosek says.

How best to achieve that balance is anybody’s guess. In their January 2014 paper, Collins and Tabak reviewed NIH’s plan, which includes training modules for teaching early-career scientists the proper way to do research, standards for committees reviewing research proposals and an emphasis on data sharing. But the funding agency can’t change things alone.

In November, in response to the NIH call to action, more than 30 major journals announced that they had adopted a set of guidelines for reporting results of preclinical studies. Those guidelines include calls for more rigorous statistical analyses, detailed reporting on how the studies were done, and a strong recommendation that all datasets be made available upon request.

Ioannidis offered his own suggestions in the October PLOS Medicine. “We need better science on the way science is done,” he says. He helped start the Meta-Research Innovation Center at Stanford to conduct research on research and figure out how to improve it.

In the decade since he published his claim that most research findings are false, Ioannidis has seen change. “We’re doing better, but the challenges are even bigger than they were 10 years ago,” he says.

He is reluctant to put a number on science’s reliability as a whole, though. “If I said 55 to 65 percent [of results] are not replicable, it would not do justice to the fact that some types of scientific results are 99 percent likely to be true.”

Science is not irrevocably broken, he asserts. It just needs some improvements.

“Despite the fact that I’ve published papers with pretty depressive titles, I’m actually an optimist,” Ioannidis says. “I find no other investment of a society that is better placed than science.”

This article appeared in the January 24, 2015 issue of Science News with the headline “Repeat Performance.”



What is Research? – Purpose of Research


  • By DiscoverPhDs
  • September 10, 2020


The purpose of research is to enhance society by advancing knowledge through the development of scientific theories, concepts and ideas. A research purpose is met through forming hypotheses, collecting data, analysing results, forming conclusions, implementing findings into real-life applications and forming new research questions.

What is Research

Simply put, research is the process of discovering new knowledge. This knowledge can be either the development of new concepts or the advancement of existing knowledge and theories, leading to a new understanding that was not previously known.

As a more formal definition of research, the following has been extracted from the Code of Federal Regulations:

“a systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge.”

While research can be carried out by anyone and in any field, most research is usually done to broaden knowledge in the physical, biological, and social worlds. This can range from learning why certain materials behave the way they do, to asking why certain people are more resilient than others when faced with the same challenges.

The use of ‘systematic investigation’ in the formal definition represents how research is normally conducted – a hypothesis is formed, appropriate research methods are designed, data is collected and analysed, and research results are summarised into one or more ‘research conclusions’. These research conclusions are then shared with the rest of the scientific community to add to the existing knowledge and serve as evidence to form additional questions that can be investigated. It is this cyclical process that enables scientific research to make continuous progress over the years; the true purpose of research.

What is the Purpose of Research

From weather forecasts to the discovery of antibiotics, researchers are constantly trying to find new ways to understand the world and how things work – with the ultimate goal of improving our lives.

The purpose of research is therefore to find out what is known, what is not and what we can develop further. In this way, scientists can develop new theories, ideas and products that shape our society and our everyday lives.

Although research can take many forms, there are three main purposes of research:

  • Exploratory: Exploratory research is the first research to be conducted around a problem that has not yet been clearly defined. Exploratory research therefore aims to gain a better understanding of the exact nature of the problem and not to provide a conclusive answer to the problem itself. This enables us to conduct more in-depth research later on.
  • Descriptive: Descriptive research expands knowledge of a research problem or phenomenon by describing it according to its characteristics and population. Descriptive research focuses on the ‘how’ and ‘what’, but not on the ‘why’.
  • Explanatory: Explanatory research, also referred to as causal research, is conducted to determine how variables interact, i.e. to identify cause-and-effect relationships. Explanatory research deals with the ‘why’ of research questions and is therefore often based on experiments.

Characteristics of Research

There are 8 core characteristics that all research projects should have. These are:

  • Empirical  – based on proven scientific methods derived from real-life observations and experiments.
  • Logical  – follows sequential procedures based on valid principles.
  • Cyclic  – research begins with a question and ends with a question, i.e. research should lead to a new line of questioning.
  • Controlled  – rigorous measures put into place to keep all variables constant, except those under investigation.
  • Hypothesis-based  – the research design generates data that sufficiently meets the research objectives and can prove or disprove the hypothesis. It makes the research study repeatable and gives credibility to the results.
  • Analytical  – data is generated, recorded and analysed using proven techniques to ensure high accuracy and repeatability while minimising potential errors and anomalies.
  • Objective  – sound judgement is used by the researcher to ensure that the research findings are valid.
  • Statistical treatment  – statistical treatment is used to transform the available data into something more meaningful from which knowledge can be gained.


Types of Research

Research can be divided into two main types: basic research (also known as pure research) and applied research.

Basic Research

Basic research, also known as pure research, is an original investigation into the reasons behind a process, phenomenon or particular event. It focuses on generating knowledge around existing basic principles.

Basic research is generally considered ‘non-commercial research’ because it does not focus on solving practical problems, and has no immediate benefit or ways it can be applied.

While basic research may not have direct applications, it usually provides new insights that can later be used in applied research.

Applied Research

Applied research investigates well-known theories and principles in order to enhance knowledge around a practical aim. Because of this, applied research focuses on solving real-life problems by deriving knowledge which has an immediate application.

Methods of Research

Research methods for data collection fall into one of two categories: inductive methods or deductive methods.

Inductive research methods are used to generate new theories from observations and are usually associated with qualitative research. Deductive research methods are used to test existing theories against observations and are typically associated with quantitative research.


Qualitative Research

Qualitative research is a method that enables non-numerical data collection through open-ended methods such as interviews, case studies and focus groups.

It enables researchers to collect data on personal experiences, feelings or behaviours, as well as the reasons behind them. Because of this, qualitative research is often used in fields such as social science, psychology and philosophy and other areas where it is useful to know the connection between what has occurred and why it has occurred.

Quantitative Research

Quantitative research is a method that collects and analyses numerical data through statistical analysis.

It allows us to quantify variables, uncover relationships, and make generalisations across a larger population. As a result, quantitative research is often used in the natural and physical sciences such as engineering, biology, chemistry, physics, computer science, finance, and medical research. A minimal sketch of this kind of analysis follows below.
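As a toy illustration of quantifying a relationship, the snippet below computes a Pearson correlation between two invented variables (hours of sleep and test scores, a hypothetical pairing); it uses only the Python standard library:

```python
import statistics

# Invented paired measurements for ten students: hours of sleep and test scores.
sleep = [5, 6, 6, 7, 7, 8, 8, 8, 9, 9]
score = [62, 65, 70, 71, 75, 78, 80, 83, 85, 88]

r = statistics.correlation(sleep, score)  # Pearson's r (requires Python 3.10+)
print(f"Pearson correlation: {r:.2f}")
```

A value of r near 1 would indicate a strong positive relationship; whether it generalises to a larger population is a separate, inferential question.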

What does Research Involve?

Research often follows a systematic approach known as the scientific method, commonly depicted as an hourglass model: it begins broad, narrows to focused data collection and analysis, and then broadens out again as results are interpreted and shared.

A research project first starts with a problem statement, or rather, the research purpose for engaging in the study. This can take the form of the ‘scope of the study’ or ‘aims and objectives’ of your research topic.

Subsequently, a literature review is carried out and a hypothesis is formed. The researcher then creates a research methodology and collects the data.

The data is then analysed using various statistical methods and the null hypothesis is either rejected or not rejected.

In both cases, the study and its conclusion are officially written up as a report or research paper, and the researcher may also recommend lines of further questioning. The report or research paper is then shared with the wider research community, and the cycle begins all over again.

Although these steps outline the overall research process, keep in mind that research projects are highly dynamic and are therefore considered an iterative process with continued refinements and not a series of fixed stages.



Department of Health & Human Services

Module 1: Introduction: What is Research?


Learning Objectives

By the end of this module, you will be able to:

  • Explain how the scientific method is used to develop new knowledge
  • Describe why it is important to follow a research plan

The Scientific Method

The Scientific Method consists of observing the world around you and creating a hypothesis about relationships in the world. A hypothesis is an informed and educated prediction or explanation about something. Part of the research process involves testing the hypothesis, and then examining the results of these tests as they relate to both the hypothesis and the world around you. When a researcher forms a hypothesis, this acts like a map through the research study. It tells the researcher which factors are important to study and how they might be related to each other or caused by a manipulation that the researcher introduces (e.g. a program, treatment or change in the environment). With this map, the researcher can interpret the information they collect and can draw sound conclusions about the results.

Research can be done with human beings, animals, plants, other organisms and inorganic matter. When research is done with human beings and animals, it must follow specific rules about the treatment of humans and animals that have been created by the U.S. Federal Government. This ensures that humans and animals are treated with dignity and respect, and that the research causes minimal harm.

No matter what topic is being studied, the value of the research depends on how well it is designed and done. Therefore, one of the most important considerations in doing good research is to follow the design or plan that is developed by an experienced researcher who is called the  Principal Investigator  (PI). The PI is in charge of all aspects of the research and creates what is called a  protocol  (the research plan) that all people doing the research must follow. By doing so, the PI and the public can be sure that the results of the research are real and useful to other scientists.

Module 1: Discussion Questions

  • How is a hypothesis like a road map?
  • Who is ultimately responsible for the design and conduct of a research study?
  • How does following the research protocol contribute to informing public health practices?


Scientific Method Steps in Psychology Research

Steps, Uses, and Key Terms


How do researchers investigate psychological phenomena? They utilize a process known as the scientific method to study different aspects of how people think and behave.

When conducting research, the scientific method steps to follow are:

  • Observe what you want to investigate
  • Ask a research question and make predictions
  • Test the hypothesis and collect data
  • Examine the results and draw conclusions
  • Report and share the results 

This process not only allows scientists to investigate and understand different psychological phenomena but also provides researchers and others a way to share and discuss the results of their studies.

Generally, there are five main steps in the scientific method, although some may break down this process into six or seven steps. An additional step in the process can also include developing new research questions based on your findings.

What Is the Scientific Method?

What is the scientific method and how is it used in psychology?

The scientific method consists of five steps. It is essentially a step-by-step process that researchers can follow to determine if there is some type of relationship between two or more variables.

By knowing the steps of the scientific method, you can better understand the process researchers go through to arrive at conclusions about human behavior.

Scientific Method Steps

While research studies can vary, these are the basic steps that psychologists and scientists use when investigating human behavior.

The following are the scientific method steps:

Step 1. Make an Observation

Before a researcher can begin, they must choose a topic to study. Once an area of interest has been chosen, the researchers must then conduct a thorough review of the existing literature on the subject. This review will provide valuable information about what has already been learned about the topic and what questions remain to be answered.

A literature review might involve looking at a considerable amount of written material from both books and academic journals dating back decades.

The relevant information collected by the researcher will be presented in the introduction section of the final published study results. This background material will also help the researcher with the first major step in conducting a psychology study: formulating a hypothesis.

Step 2. Ask a Question

Once a researcher has observed something and gained some background information on the topic, the next step is to ask a question. The researcher will form a hypothesis, which is an educated guess about the relationship between two or more variables.

For example, a researcher might ask a question about the relationship between sleep and academic performance: Do students who get more sleep perform better on tests at school?

In order to formulate a good hypothesis, it is important to think about different questions you might have about a particular topic.

You should also consider how you could investigate the causes. Falsifiability is an important part of any valid hypothesis. In other words, if a hypothesis is false, there must be a way for scientists to demonstrate that it is false.

Step 3. Test Your Hypothesis and Collect Data

Once you have a solid hypothesis, the next step of the scientific method is to put this hunch to the test by collecting data. The exact methods used to investigate a hypothesis depend on exactly what is being studied. There are two basic forms of research that a psychologist might utilize: descriptive research or experimental research.

Descriptive research is typically used when it would be difficult or even impossible to manipulate the variables in question. Examples of descriptive research include case studies, naturalistic observation, and correlation studies. Phone surveys that are often used by marketers are one example of descriptive research.

Correlational studies are quite common in psychology research. While they do not allow researchers to determine cause-and-effect, they do make it possible to spot relationships between different variables and to measure the strength of those relationships. 

Experimental research is used to explore cause-and-effect relationships between two or more variables. This type of research involves systematically manipulating an independent variable and then measuring the effect that it has on a defined dependent variable .

One of the major advantages of this method is that it allows researchers to actually determine if changes in one variable actually cause changes in another.

While psychology experiments are often quite complex, even a simple experiment allows researchers to determine cause-and-effect relationships between variables. Most simple experiments use a control group (those who do not receive the treatment) and an experimental group (those who do receive the treatment).

Step 4. Examine the Results and Draw Conclusions

Once a researcher has designed the study and collected the data, it is time to examine this information and draw conclusions about what has been found. Using statistics, researchers can summarize the data, analyze the results, and draw conclusions based on this evidence.

So how does a researcher decide what the results of a study mean? Not only can statistical analysis support (or refute) the researcher’s hypothesis; it can also be used to determine if the findings are statistically significant.

When results are said to be statistically significant, it means that the results would be unlikely to arise by chance alone if there were no real effect.

Based on these observations, researchers must then determine what the results mean. In some cases, an experiment will support a hypothesis, but in other cases, it will fail to support the hypothesis.

So what happens if the results of a psychology experiment do not support the researcher's hypothesis? Does this mean that the study was worthless?

Just because the findings fail to support the hypothesis does not mean that the research is not useful or informative. In fact, such research plays an important role in helping scientists develop new questions and hypotheses to explore in the future.

After conclusions have been drawn, the next step is to share the results with the rest of the scientific community. This is an important part of the process because it contributes to the overall knowledge base and can help other scientists find new research avenues to explore.

Step 5. Report the Results

The final step in a psychology study is to report the findings. This is often done by writing up a description of the study and publishing the article in an academic or professional journal. The results of psychological studies can be seen in peer-reviewed journals such as Psychological Bulletin, the Journal of Social Psychology, Developmental Psychology, and many others.

The structure of a journal article follows a specified format that has been outlined by the American Psychological Association (APA). In these articles, researchers:

  • Provide a brief history and background on previous research
  • Present their hypothesis
  • Identify who participated in the study and how they were selected
  • Provide operational definitions for each variable
  • Describe the measures and procedures that were used to collect data
  • Explain how the information collected was analyzed
  • Discuss what the results mean

Why is such a detailed record of a psychological study so important? By clearly explaining the steps and procedures used throughout the study, other researchers can then replicate the results. The editorial process employed by academic and professional journals ensures that each article that is submitted undergoes a thorough peer review, which helps ensure that the study is scientifically sound.

Once published, the study becomes another piece of the existing puzzle of our knowledge base on that topic.

Here is a review of some key terms and definitions that you should be familiar with when working through the scientific method steps:

  • Falsifiable: The variables can be measured so that if a hypothesis is false, it can be proven false
  • Hypothesis: An educated guess about the possible relationship between two or more variables
  • Variable: A factor or element that can change in observable and measurable ways
  • Operational definition: A full description of exactly how variables are defined, how they will be manipulated, and how they will be measured

Uses for the Scientific Method

The goals of psychological studies are to describe, explain, predict and perhaps influence mental processes or behaviors. In order to do this, psychologists utilize the scientific method to conduct psychological research. The scientific method is a set of principles and procedures that are used by researchers to develop questions, collect data, and reach conclusions.

Goals of Scientific Research in Psychology

Researchers seek not only to describe behaviors and explain why these behaviors occur; they also strive to create research that can be used to predict and even change human behavior.

Psychologists and other social scientists regularly propose explanations for human behavior. On a more informal level, people make judgments about the intentions, motivations, and actions of others on a daily basis.

While the everyday judgments we make about human behavior are subjective and anecdotal, researchers use the scientific method to study psychology in an objective and systematic way. The results of these studies are often reported in popular media, which leads many to wonder just how or why researchers arrived at the conclusions they did.

Examples of the Scientific Method

Now that you're familiar with the scientific method steps, it's useful to see how each step could work with a real-life example.

Say, for instance, that researchers set out to discover what the relationship is between psychotherapy and anxiety.

  • Step 1. Make an observation: The researchers choose to focus their study on adults ages 25 to 40 with generalized anxiety disorder.
  • Step 2. Ask a question: The question they want to answer in their study is: Do weekly psychotherapy sessions reduce symptoms in adults ages 25 to 40 with generalized anxiety disorder?
  • Step 3. Test your hypothesis: Researchers collect data on participants’ anxiety symptoms. They work with therapists to create a consistent program that all participants undergo. Group 1 may attend therapy once per week, whereas group 2 does not attend therapy.
  • Step 4. Examine the results: Participants record their symptoms and any changes over a period of three months. After this period, people in group 1 report significant improvements in their anxiety symptoms, whereas those in group 2 report no significant changes.
  • Step 5. Report the results: Researchers write a report that includes their hypothesis, information on participants, variables, procedure, and conclusions drawn from the study. In this case, they say that "Weekly therapy sessions are shown to reduce anxiety symptoms in adults ages 25 to 40."

Of course, there are many details that go into planning and executing a study such as this. But this general outline gives you an idea of how an idea is formulated and tested, and how researchers arrive at results using the scientific method.
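For Step 4, the comparison between the two groups might be run as an independent-samples t-test. The sketch below is purely illustrative: the anxiety scores are fabricated, and SciPy is assumed to be available:

```python
from scipy import stats

# Hypothetical post-study anxiety scores (lower = fewer symptoms).
therapy_group = [38, 42, 35, 40, 37, 33, 41, 36, 39, 34]   # weekly psychotherapy
control_group = [52, 49, 55, 47, 50, 53, 48, 51, 54, 46]   # no therapy

# Independent-samples t-test: could the two group means plausibly be equal?
t_stat, p_value = stats.ttest_ind(therapy_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A small p-value (conventionally below 0.05) is what researchers report as a statistically significant difference between the groups.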


By Kendra Cherry, MSEd Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


S.3 Hypothesis Testing

In reviewing hypothesis tests, we start first with the general idea. Then, we keep returning to the basic procedures of hypothesis testing, each time adding a little more detail.

The general idea of hypothesis testing involves:

  • Making an initial assumption.
  • Collecting evidence (data).
  • Based on the available evidence (data), deciding whether to reject or not reject the initial assumption.

Every hypothesis test — regardless of the population parameter involved — requires the above three steps.

Example S.3.1

Is Normal Body Temperature Really 98.6 Degrees F?

Consider the population of many, many adults. A researcher hypothesized that the average adult body temperature is lower than the often-advertised 98.6 degrees F. That is, the researcher wants an answer to the question: "Is the average adult body temperature 98.6 degrees? Or is it lower?" To answer his research question, the researcher starts by assuming that the average adult body temperature is 98.6 degrees F.

Then, the researcher goes out and tries to find evidence that refutes his initial assumption. In doing so, he selects a random sample of 130 adults. The average body temperature of the 130 sampled adults is 98.25 degrees.

Then, the researcher uses the data he collected to make a decision about his initial assumption. It is either likely or unlikely that the researcher would collect the evidence he did given his initial assumption that the average adult body temperature is 98.6 degrees:

  • If it is likely, then the researcher does not reject his initial assumption that the average adult body temperature is 98.6 degrees. There is not enough evidence to do otherwise.
  • If it is unlikely, then one of two things must be true: either the researcher's initial assumption is correct and he experienced a very unusual event, or the researcher's initial assumption is incorrect.

In statistics, we generally don't make claims that require us to believe that a very unusual event happened. That is, in the practice of statistics, if the evidence (data) we collected is unlikely in light of the initial assumption, then we reject our initial assumption.
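For this example, "likely or unlikely" can be quantified with a one-sample t-test. In the sketch below, the sample standard deviation is an assumption (the text gives only the sample mean and size), and SciPy is assumed to be available:

```python
import math
from scipy import stats

mu0 = 98.6    # the initial assumption (null hypothesis)
xbar = 98.25  # observed sample mean, from the example
n = 130       # sample size, from the example
s = 0.73      # sample standard deviation: NOT given in the text, assumed here

t = (xbar - mu0) / (s / math.sqrt(n))   # one-sample t statistic
p = stats.t.cdf(t, df=n - 1)            # one-sided P-value for "lower than 98.6"
print(f"t = {t:.2f}, one-sided P = {p:.2g}")
```

On these assumed numbers, the observed mean would be extremely unlikely under the initial assumption, so the researcher would reject it.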

Example S.3.2

Criminal Trial Analogy

One place where you can consistently see the general idea of hypothesis testing in action is in criminal trials held in the United States. Our criminal justice system assumes "the defendant is innocent until proven guilty." That is, our initial assumption is that the defendant is innocent.

In the practice of statistics, we make our initial assumption when we state our two competing hypotheses -- the null hypothesis ( H 0 ) and the alternative hypothesis ( H A ). Here, our hypotheses are:

  • H 0 : Defendant is not guilty (innocent)
  • H A : Defendant is guilty

In statistics, we always assume the null hypothesis is true. That is, the null hypothesis is always our initial assumption.

The prosecution team then collects evidence — such as finger prints, blood spots, hair samples, carpet fibers, shoe prints, ransom notes, and handwriting samples — with the hopes of finding "sufficient evidence" to make the assumption of innocence refutable.

In statistics, the data are the evidence.

The jury then makes a decision based on the available evidence:

  • If the jury finds sufficient evidence — beyond a reasonable doubt — to make the assumption of innocence refutable, the jury rejects the null hypothesis and deems the defendant guilty. We behave as if the defendant is guilty.
  • If there is insufficient evidence, then the jury does not reject the null hypothesis. We behave as if the defendant is innocent.

In statistics, we always make one of two decisions. We either "reject the null hypothesis" or we "fail to reject the null hypothesis."

Errors in Hypothesis Testing

Did you notice the use of the phrase "behave as if" in the previous discussion? We "behave as if" the defendant is guilty; we do not "prove" that the defendant is guilty. And, we "behave as if" the defendant is innocent; we do not "prove" that the defendant is innocent.

This is a very important distinction! We make our decision based on evidence, not on 100% guaranteed proof. Again:

  • If we reject the null hypothesis, we do not prove that the alternative hypothesis is true.
  • If we do not reject the null hypothesis, we do not prove that the null hypothesis is true.

We merely state that there is enough evidence to behave one way or the other. This is always true in statistics! Because of this, whatever the decision, there is always a chance that we made an error.

Let's review the two types of errors that can be made in criminal trials:

Table S.3.1
  Jury Decision    Truth: Not Guilty    Truth: Guilty
  Not Guilty       OK                   ERROR
  Guilty           ERROR                OK

Table S.3.2 shows how this corresponds to the two types of errors in hypothesis testing.

Table S.3.2
  Decision             Truth: Null Hypothesis    Truth: Alternative Hypothesis
  Do not Reject Null   OK                        Type II Error
  Reject Null          Type I Error              OK

Note that, in statistics, we call the two types of errors by two different names -- one is called a "Type I error," and the other is called a "Type II error." Here are the formal definitions of the two types of errors:

  • Type I error: The null hypothesis is rejected when it is true.
  • Type II error: The null hypothesis is not rejected when the alternative hypothesis is true.

There is always a chance of making one of these errors. But, a good scientific study will minimize the chance of doing so!
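A small simulation can make the two error types concrete. Everything below (the z-test, the sample size, and the effect size) is invented for illustration:

```python
import random
import statistics

random.seed(0)

def z_test_rejects(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test of H0: mu = mu0 with known sigma; True means reject H0."""
    n = len(sample)
    z = (statistics.mean(sample) - mu0) / (sigma / n ** 0.5)
    return abs(z) > 1.96  # critical value for a significance level of 0.05

def rejection_rate(true_mu, trials=20_000, n=30):
    rejections = sum(
        z_test_rejects([random.gauss(true_mu, 1.0) for _ in range(n)])
        for _ in range(trials)
    )
    return rejections / trials

# When H0 is true, rejections are Type I errors; the rate should be near 0.05.
print(f"Type I error rate (true mu = 0): {rejection_rate(0.0):.3f}")
# When H0 is false, non-rejections are Type II errors.
print(f"Power when true mu = 0.5:        {rejection_rate(0.5):.3f}")
```

The first rate hovers near the chosen significance level; one minus the second rate is the Type II error rate for that particular effect size.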

Making the Decision

Recall that it is either likely or unlikely that we would observe the evidence we did given our initial assumption. If it is likely, we do not reject the null hypothesis. If it is unlikely, then we reject the null hypothesis in favor of the alternative hypothesis. Effectively, then, making the decision reduces to determining "likely" or "unlikely."

In statistics, there are two ways to determine whether the evidence is likely or unlikely given the initial assumption:

  • We could take the "critical value approach" (favored in many of the older textbooks).
  • Or, we could take the "P-value approach" (what is used most often in research, journal articles, and statistical software).

In the next two sections, we review the procedures behind each of these two approaches. To make our review concrete, let's imagine that μ is the average grade point average of all American students who major in mathematics. We first review the critical value approach for conducting each of the following three hypothesis tests about the population mean μ:

  • H 0 : μ = 3 versus H A : μ > 3
  • H 0 : μ = 3 versus H A : μ < 3
  • H 0 : μ = 3 versus H A : μ ≠ 3

In Practice

  • We would want to conduct the first hypothesis test if we were interested in concluding that the average grade point average of the group is more than 3.
  • We would want to conduct the second hypothesis test if we were interested in concluding that the average grade point average of the group is less than 3.
  • And, we would want to conduct the third hypothesis test if we were only interested in concluding that the average grade point average of the group differs from 3 (without caring whether it is more or less than 3).

Upon completing the review of the critical value approach, we review the P-value approach for conducting each of the above three hypothesis tests about the population mean μ. The procedures that we review here for both approaches easily extend to hypothesis tests about any other population parameter. The sketch below previews both approaches on the grade point average example.
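Here is a sketch of the two approaches side by side; the summary statistics are invented, and SciPy is assumed to be available:

```python
import math
from scipy import stats

# Invented summary data for H0: mu = 3 versus HA: mu > 3.
xbar, s, n = 3.12, 0.40, 64   # hypothetical sample mean, SD, and size
alpha = 0.05

t = (xbar - 3.0) / (s / math.sqrt(n))   # one-sample t statistic

# Critical value approach: reject H0 if t exceeds the cutoff.
t_crit = stats.t.ppf(1 - alpha, df=n - 1)
print(f"t = {t:.2f}, critical value = {t_crit:.2f}, reject H0: {t > t_crit}")

# P-value approach: reject H0 if the P-value falls below alpha.
p = 1 - stats.t.cdf(t, df=n - 1)
print(f"P-value = {p:.4f}, reject H0: {p < alpha}")
```

The two approaches always agree on the decision at a given significance level; they differ only in whether the evidence is summarized as a cutoff on the test statistic or as a probability.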


The Four Types of Research Paradigms: A Comprehensive Guide


  • 22nd January 2023

In this guide, you’ll learn all about the four research paradigms and how to choose the right one for your research.

Introduction to Research Paradigms

A paradigm is a system of beliefs, ideas, values, or habits that form the basis for a way of thinking about the world. Therefore, a research paradigm is an approach, model, or framework from which to conduct research. The research paradigm helps you to form a research philosophy, which in turn informs your research methodology.

Your research methodology is essentially the “how” of your research – how you design your study to not only accomplish your research’s aims and objectives but also to ensure your results are reliable and valid. Choosing the correct research paradigm is crucial because it provides a logical structure for conducting your research and improves the quality of your work, assuming it’s followed correctly.

Three Pillars: Ontology, Epistemology, and Methodology

Before we jump into the four types of research paradigms, we need to consider the three pillars of a research paradigm.

Ontology addresses the question, “What is reality?” It’s the study of being. This pillar is about finding out what you seek to research. What do you aim to examine?

Epistemology is the study of knowledge. It asks, “How is knowledge gathered and from what sources?”

Methodology involves the system in which you choose to investigate, measure, and analyze your research’s aims and objectives. It answers the “how” questions.

Let’s now take a look at the different research paradigms.

1.   Positivist Research Paradigm

The positivist research paradigm assumes that there is one objective reality, and people can know this reality and accurately describe and explain it. Positivists rely on their observations through their senses to gain knowledge of their surroundings.

In this singular objective reality, researchers can compare their claims and ascertain the truth. This means researchers are limited to data collection and interpretations from an objective viewpoint. As a result, positivists usually use quantitative methodologies in their research (e.g., statistics, social surveys, and structured questionnaires).

This research paradigm is mostly used in natural sciences, physical sciences, or whenever large sample sizes are being used.

2.   Interpretivist Research Paradigm

Interpretivists believe that different people in society experience and understand reality in different ways – while there may be only “one” reality, everyone interprets it according to their own view. They also believe that all research is influenced and shaped by researchers’ worldviews and theories.

As a result, interpretivists use qualitative methods and techniques to conduct their research. This includes interviews, focus groups, observations of a phenomenon, or collecting documentation on a phenomenon (e.g., newspaper articles, reports, or information from websites).

3.   Critical Theory Research Paradigm

The critical theory paradigm asserts that social science can never be 100% objective or value-free. This paradigm is focused on enacting social change through scientific investigation. Critical theorists question knowledge and procedures and acknowledge how power is used (or abused) in the phenomena or systems they’re investigating.


Researchers using this paradigm are more often than not aiming to create a more just, egalitarian society in which individual and collective freedoms are secure. Both quantitative and qualitative methods can be used with this paradigm.

4.   Constructivist Research Paradigm

Constructivism asserts that reality is a construct of our minds; therefore, reality is subjective. Constructivists believe that all knowledge comes from our experiences and reflections on those experiences and oppose the idea that there is a single methodology to generate knowledge.

This paradigm is mostly associated with qualitative research approaches due to its focus on experiences and subjectivity. The researcher focuses on participants’ experiences as well as their own.

Choosing the Right Research Paradigm for Your Study

Once you have a comprehensive understanding of each paradigm, you’re faced with a big question: which paradigm should you choose? The answer to this will set the course of your research and determine its success, findings, and results.

To start, you need to identify your research problem, research objectives, and hypothesis. This will help you to establish what you want to accomplish or understand from your research and the path you need to take to achieve this.

You can begin this process by asking yourself some questions:

  • What is the nature of your research problem (i.e., quantitative or qualitative)?
  • How can you acquire the knowledge you need and communicate it to others? For example, is this knowledge already available in other forms (e.g., documents) and do you need to gain it by gathering or observing other people’s experiences or by experiencing it personally?
  • What is the nature of the reality that you want to study? Is it objective or subjective?

Depending on the problem and objective, other questions may arise during this process that lead you to a suitable paradigm. Ultimately, you must be able to state, explain, and justify the research paradigm you select for your research and be prepared to include this in your dissertation’s methodology and design section.

Using Two Paradigms

If the nature of your research problem and objectives involves both quantitative and qualitative aspects, then you might consider using two paradigms or a mixed methods approach. In this case, one paradigm is used to frame the qualitative aspects of the study and another for the quantitative aspects. This is acceptable, although you will be tasked with explaining your rationale for using both of these paradigms in your research.

Choosing the right research paradigm for your research can seem like an insurmountable task. It requires you to:

  • Have a comprehensive understanding of the paradigms,

  • Identify your research problem, objectives, and hypothesis, and

  • Be able to state, explain, and justify the paradigm you select in your methodology and design section.


The potential of working hypotheses for deductive exploratory research

  • Open access
  • Published: 08 December 2020
  • Volume 55, pages 1703–1725 (2021)


  • Mattia Casula, ORCID: orcid.org/0000-0002-7081-8153
  • Nandhini Rangarajan
  • Patricia Shields, ORCID: orcid.org/0000-0002-0960-4869


While hypotheses frame explanatory studies and provide guidance for measurement and statistical tests, deductive, exploratory research does not have a framing device like the hypothesis. To that end, this article examines the landscape of deductive, exploratory research and offers the working hypothesis as a flexible, useful framework that can guide and bring coherence across the steps in the research process. The working hypothesis conceptual framework is introduced, placed in a philosophical context, defined, and applied to public administration and comparative public policy. In doing so, this article explains: the philosophical underpinning of exploratory, deductive research; how the working hypothesis informs the methodologies and evidence collection of deductive, explorative research; the nature of micro-conceptual frameworks for deductive exploratory research; and how the working hypothesis informs data analysis when exploratory research is deductive.


1 Introduction

Exploratory research is generally considered to be inductive and qualitative (Stebbins 2001 ). Exploratory qualitative studies adopting an inductive approach do not lend themselves to a priori theorizing and building upon prior bodies of knowledge (Reiter 2013 ; Bryman 2004 as cited in Pearse 2019 ). Juxtaposed against quantitative studies that employ deductive confirmatory approaches, exploratory qualitative research is often criticized for lack of methodological rigor and tentativeness in results (Thomas and Magilvy 2011 ). This paper focuses on the neglected topic of deductive, exploratory research and proposes working hypotheses as a useful framework for these studies.

To emphasize that certain types of applied research lend themselves more easily to deductive approaches, to address the downsides of exploratory qualitative research, and to ensure qualitative rigor in exploratory research, a significant body of work on deductive qualitative approaches has emerged (see for example, Gilgun 2005 , 2015 ; Hyde 2000 ; Pearse 2019 ). According to Gilgun ( 2015 , p. 3) the use of conceptual frameworks derived from comprehensive reviews of literature and a priori theorizing were common practices in qualitative research prior to the publication of Glaser and Strauss’s ( 1967 ) The Discovery of Grounded Theory . Gilgun ( 2015 ) coined the term Deductive Qualitative Analysis (DQA) to arrive at some sort of “middle-ground” such that the benefits of a priori theorizing (structure) and allowing room for new theory to emerge (flexibility) are reaped simultaneously. According to Gilgun ( 2015 , p. 14) “in DQA, the initial conceptual framework and hypotheses are preliminary. The purpose of DQA is to come up with a better theory than researchers had constructed at the outset (Gilgun 2005 , 2009 ). Indeed, the production of new, more useful hypotheses is the goal of DQA”.

DQA provides greater level of structure for both the experienced and novice qualitative researcher (see for example Pearse 2019 ; Gilgun 2005 ). According to Gilgun ( 2015 , p. 4) “conceptual frameworks are the sources of hypotheses and sensitizing concepts”. Sensitizing concepts frame the exploratory research process and guide the researcher’s data collection and reporting efforts. Pearse ( 2019 ) discusses the usefulness for deductive thematic analysis and pattern matching to help guide DQA in business research. Gilgun ( 2005 ) discusses the usefulness of DQA for family research.

Given these rationales for DQA in exploratory research, the overarching purpose of this paper is to contribute to that growing corpus of work on deductive qualitative research. This paper is specifically aimed at guiding novice researchers and student scholars to the working hypothesis as a useful a priori framing tool. The applicability of the working hypothesis as a tool that provides more structure during the design and implementation phases of exploratory research is discussed in detail. Examples of research projects in public administration that use the working hypothesis as a framing tool for deductive exploratory research are provided.

In the next section, we introduce the three types of research purposes. Second, we examine the nature of the exploratory research purpose. Third, we provide a definition of working hypothesis. Fourth, we explore the philosophical roots of methodology to see where exploratory research fits. Fifth, we connect the discussion to the dominant research approaches (quantitative, qualitative and mixed methods) to see where deductive exploratory research fits. Sixth, we examine the nature of theory and the role of the hypothesis in theory. We contrast formal hypotheses and working hypotheses. Seventh, we provide examples of student and scholarly work that illustrates how working hypotheses are developed and operationalized. Lastly, this paper synthesizes previous discussion with concluding remarks.

2 Three types of research purposes

The literature identifies three basic types of research purposes—explanation, description and exploration (Babbie 2007 ; Adler and Clark 2008 ; Strydom 2013 ; Shields and Whetsell 2017 ). Research purposes are similar to research questions; however, they focus on project goals or aims instead of questions.

Explanatory research answers the “why” question (Babbie 2007 , pp. 89–90), by explaining “why things are the way they are”, and by looking “for causes and reasons” (Adler and Clark 2008 , p. 14). Explanatory research is closely tied to hypothesis testing. Theory is tested using deductive reasoning, which goes from the general to the specific (Hyde 2000 , p. 83). Hypotheses provide a frame for explanatory research connecting the research purpose to other parts of the research process (variable construction, choice of data, statistical tests). They help provide alignment or coherence across stages in the research process and provide ways to critique the strengths and weakness of the study. For example, were the hypotheses grounded in the appropriate arguments and evidence in the literature? Are the concepts imbedded in the hypotheses appropriately measured? Was the best statistical test used? When the analysis is complete (hypothesis is tested), the results generally answer the research question (the evidence supported or failed to support the hypothesis) (Shields and Rangarajan 2013 ).

Descriptive research addresses the “What” question and is not primarily concerned with causes (Strydom 2013 ; Shields and Tajalli 2006 ). It lies at the “midpoint of the knowledge continuum” (Grinnell 2001 , p. 248) between exploration and explanation. Descriptive research is used in both quantitative and qualitative research. A field researcher might want to “have a more highly developed idea of social phenomena” (Strydom 2013 , p. 154) and develop thick descriptions using inductive logic. In science, categorization and classification systems such as the periodic table of chemistry or the taxonomies of biology inform descriptive research. These baseline classification systems are a type of theorizing and allow researchers to answer questions like “what kind” of plants and animals inhabit a forest. The answer to this question would usually be displayed in graphs and frequency distributions. This is also the data presentation system used in the social sciences (Ritchie and Lewis 2003 ; Strydom 2013 ). For example, a scholar might ask: what are the needs of homeless people? A quantitative approach would include a survey that incorporated a “needs” classification system (preferably based on a literature review). The data would be displayed as frequency distributions or as charts.

Description can also be guided by inductive reasoning, which draws “inferences from specific observable phenomena to general rules or knowledge expansion” (Worster 2013 , p. 448). Theory and hypotheses are generated using inductive reasoning, which begins with data and the intention of making sense of it by theorizing. Inductive descriptive approaches would use a qualitative, naturalistic design (open ended interview questions with the homeless population). The data could provide a thick description of the homeless context. For deductive descriptive research, categories serve a purpose similar to hypotheses for explanatory research. If developed with thought and a connection to the literature, categories can serve as a framework that informs measurement and links to data collection mechanisms and to data analysis. Like hypotheses, they can provide horizontal coherence across the steps in the research process.

Table 1 demonstrates these connections for deductive descriptive and explanatory research. The arrow at the top highlights the horizontal, across-the-research-process view we emphasize. This article makes the case that the working hypothesis can serve the same purpose for deductive exploratory research that the hypothesis serves for deductive explanatory research and categories serve for deductive descriptive research. The cells for exploratory research are filled in with question marks.

The remainder of this paper focuses on exploratory research and answers the questions found in the table:

What is the philosophical underpinning of exploratory, deductive research?

What is the micro-conceptual framework for deductive exploratory research? [As the article title makes clear, we introduce the working hypothesis as the answer.]

How does the working hypothesis inform the methodologies and evidence collection of deductive exploratory research?

How does the working hypothesis inform data analysis of deductive exploratory research?

3 The nature of exploratory research purpose

Explorers enter the unknown to discover something new. The process can be fraught with struggle and surprises. Effective explorers creatively resolve unexpected problems. While we typically think of explorers as pioneers or mountain climbers, exploration is very much linked to the experience and intention of the explorer: babies explore as they take their first steps. The exploratory purpose resonates with these insights. Exploratory research, like reconnaissance, is a type of inquiry in its preliminary or early stages (Babbie 2007). It is associated with discovery, creativity and serendipity (Stebbins 2001). But the person doing the discovering also defines the activity or claims the act of exploration. It “typically occurs when a researcher examines a new interest or when the subject of study itself is relatively new” (Babbie 2007, p. 88). Hence, exploration has an open character that emphasizes “flexibility, pragmatism, and the particular, biographically specific interests of an investigator” (Maanen et al. 2001, p. v). The three purposes form a type of hierarchy: an area of inquiry is initially explored; this early work lays the groundwork for description, which in turn becomes the basis for explanation. Quantitative, explanatory studies dominate contemporary high-impact journals (Twining et al. 2017).

Stebbins (2001) makes the point that exploration is often seen as a poor stepsister to confirmatory or hypothesis-testing research. He objects to this view because we live in a changing world: what is settled today will very likely be unsettled in the near future and in need of exploration. Further, exploratory research “generates initial insights into the nature of an issue and develops questions to be investigated by more extensive studies” (Marlow 2005, p. 334). Exploration is widely applicable because all research topics were once “new.” Further, all research topics have the possibility of “innovation” or ongoing “newness.” Exploratory research may also be appropriate to establish whether a phenomenon exists (Strydom 2013). The point, of course, is that the exploratory purpose is far from trivial.

Stebbins’s Exploratory Research in the Social Sciences (2001) is the only book devoted to the nature of exploratory research as a form of social science inquiry. He views it as a “broad-ranging, purposive, systematic prearranged undertaking designed to maximize the discovery of generalizations leading to description and understanding of an area of social or psychological life” (p. 3). It is science conducted in a way distinct from confirmation. According to Stebbins (2001, p. 6), the goal is the discovery of potential generalizations, which can become future hypotheses and eventually theories that emerge from the data. He focuses on inductive logic (which stimulates creativity) and qualitative methods. He does not want exploratory research limited to the restrictive formulas and models he finds in confirmatory research. He links exploratory research to Glaser and Strauss’s (1967) flexible, immersive Grounded Theory. Strydom’s (2013) analysis of contemporary social work research methods books echoes Stebbins’s (2001) position. Stebbins’s book is an important contribution, but it limits the potential scope of this flexible and versatile research purpose. If we accepted his conclusion, we would delete the “Exploratory” row from Table 1.

Note that explanatory research can yield new questions, which lead to exploration. Inquiry is a process in which inductive and deductive activities can occur simultaneously or in a back-and-forth manner, particularly as the literature is reviewed and the research design emerges. Footnote 1 Strict typologies such as explanation/description/exploration or inductive/deductive can obscure these larger connections and processes. We draw insight from Dewey’s (1896) vision of inquiry as depicted in his seminal “Reflex Arc” article. He notes that “stimulus” and “response,” like other dualities (inductive/deductive), exist within a larger unifying system. Yet the terms have value. “We need not abandon terms like stimulus and response, so long as we remember that they are attached to events based upon their function in a wider dynamic context, one that includes interests and aims” (Hildebrand 2008, p. 16). So too, in methodology, typologies such as deductive/inductive capture useful distinctions with practical value and are widely used in the methodology literature.

We argue that there is a role for exploratory research that is deductive, even confirmatory. We maintain that all types of research logics and methods should be in the toolbox of exploratory research. First, as stated above, it makes no sense on its face to identify an extremely flexible purpose that is idiosyncratic to the researcher and then restrict its use to qualitative, inductive, non-confirmatory methods. Second, Stebbins’s (2001) work focused on social science, ignoring the policy sciences. Exploratory research can be ideal for the immediate practical problems faced by policy makers, who could find a framework of some kind useful. Third, deductive exploratory research is more intentionally connected to previous research: some kind of initial framing device is located or designed using the literature. This may be very important for new scholars who are developing research skills and exploring their field and profession; Stebbins’s insights are most pertinent for experienced scholars. Fourth, frameworks and deductive logic are useful for comparative work because some degree of consistency across cases is built into the design.

As we have seen, the hypotheses of explanatory research and the categories of descriptive research are the dominant frames of social science and policy science. We concur that neither of these frames makes much sense for exploratory research; they would tend to tie it down. We see the problem as a missing framework, a missing way to frame deductive exploratory research in the methodology literature. Inductive exploratory research would not work for many case studies that try to use evidence to make an argument. What exploratory deductive case studies need is a framework that incorporates flexibility. This is even more true for comparative case studies. A framework of this sort could be usefully applied to policy research (Casula 2020a), particularly evaluative policy research, and to applied research generally. We propose the working hypothesis as a flexible conceptual framework and a useful tool for doing exploratory studies. It can be used as an evaluative criterion, particularly for process evaluation, and it is useful for student research because students can develop theorizing skills using the literature.

Table 1 includes a column specifying the philosophical basis for each research purpose. Shifting to the philosophical underpinnings of methodology provides useful additional context for examining deductive exploratory research.

4 What is a working hypothesis?

The working hypothesis is first and foremost a hypothesis: a statement of expectation that is tested in action. The term “working” suggests that these hypotheses are subject to change and provisional, and that the possibility of finding contradictory evidence is real. In addition, a “working” hypothesis is active; it is a tool in an ongoing process of inquiry. If one begins with a research question, the working hypothesis can be viewed as a statement or group of statements that answer the question. It “works” to move purposeful inquiry forward. “Working” also implies some sort of community; mostly, we work together in relationship to achieve some goal.

“Working hypothesis” is a term found in earlier literature. Indeed, both pioneering pragmatists John Dewey and George Herbert Mead used the term in important nineteenth-century works. For both Dewey and Mead, the notion of a working hypothesis has a self-evident quality, and it is applied in a big-picture context. Footnote 2

Most notably, Dewey (1896), in one of his most pivotal early works (“Reflex Arc”), used “working hypothesis” to describe a key concept in psychology: “The idea of the reflex arc has upon the whole come nearer to meeting this demand for a general working hypothesis than any other single concept (italics added)” (p. 357). The notion was developed more fully 42 years later in Logic: The Theory of Inquiry, where Dewey developed a working hypothesis that operated on a smaller scale. He defines working hypotheses as a “provisional, working means of advancing investigation” (Dewey 1938, p. 142). Dewey’s definition suggests that working hypotheses would be useful toward the beginning of a research project (e.g., exploratory research).

Mead (1899) used “working hypothesis” in the title of an American Journal of Sociology article, “The Working Hypothesis in Social Reform” (italics added). He notes that a scientist’s foresight goes beyond testing a hypothesis.

Given its success, he may restate his world from this standpoint and get the basis for further investigation that again always takes the form of a problem. The solution of this problem is found over again in the possibility of fitting his hypothetical proposition into the whole within which it arises. And he must recognize that this statement is only a working hypothesis at the best, i.e., he knows that further investigation will show that the former statement of his world is only provisionally true, and must be false from the standpoint of a larger knowledge, as every partial truth is necessarily false over against the fuller knowledge which he will gain later (Mead 1899 , p. 370).

Cronbach (1975) developed a notion of the working hypothesis consistent with inductive reasoning, but for him the working hypothesis is a product or result of naturalistic inquiry. He makes the case that naturalistic inquiry is highly context dependent; therefore, any results or seeming generalizations that come from a study should be viewed as “working hypotheses,” which “are tentative both for the situation in which they first uncovered and for other situations” (as cited in Gobo 2008, p. 196).

A quick Google Scholar search using the term “working hypothesis” shows that it is widely used in twentieth- and twenty-first-century science, particularly in titles. In these articles, the working hypothesis is treated as a conceptual tool that furthers investigation in its early or transitional phases. We could find no explicit links to exploratory research; rather, the exploratory nature of the problem is expressed implicitly. Terms such as “speculative” (Habib 2000, p. 2391) or “rapidly evolving field” (Prater et al. 2007, p. 1141) capture the exploratory nature of a study. The authors might describe how a topic is “new” or reference “change”: “As a working hypothesis, the picture is only new, however, in its interpretation” (Milnes 1974, p. 1731). In a study of soil genesis, Arnold (1965, p. 718) notes “Sequential models, formulated as working hypotheses, are subject to further investigation and change”. And any 2020 article dealing with COVID-19 and respiratory distress would be preliminary almost by definition (Ciceri et al. 2020).

5 Philosophical roots of methodology

According to Kaplan (1964, p. 23), “the aim of methodology is to help us understand, in the broadest sense not the products of scientific inquiry but the process itself”. Methods contain philosophical principles that distinguish them from other “human enterprises and interests” (Kaplan 1964, p. 23). Contemporary research methodology is generally classified as quantitative, qualitative and mixed methods. Leading scholars of methodology have associated each with a philosophical underpinning—positivism (or post-positivism), interpretivism (or constructivism) and pragmatism, respectively (Guba 1987; Guba and Lincoln 1981; Schrag 1992; Stebbins 2001; Mackenzie and Knipe 2006; Atieno 2009; Levers 2013; Morgan 2007; O’Connor et al. 2008; Johnson and Onwuegbuzie 2004; Twining et al. 2017). This section summarizes how these philosophies are often described in, and inform, the contemporary methodology literature.

Positivism, and its more contemporary version post-positivism, maintains an objectivist ontology: it assumes an objective reality, which can be uncovered (Levers 2013; Twining et al. 2017). Footnote 3 Time- and context-free generalizations are possible, and “real causes of social scientific outcomes can be determined reliably and validly” (Johnson and Onwuegbuzie 2004, p. 14). Further, “explanation of the social world is possible through a logical reduction of social phenomena to physical terms”. Positivism uses an empiricist epistemology, which “implies testability against observation, experimentation, or comparison” (Whetsell and Shields 2015, pp. 420–421). Correspondence theory, a tenet of positivism, asserts that “to each concept there corresponds a set of operations involved in its scientific use” (Kaplan 1964, p. 40).

The interpretivist, constructivist or post-modernist approach is a reaction to positivism. It uses a relativist ontology and a subjectivist epistemology (Levers 2013). In this world of multiple realities, context-free generalizations are impossible, as is the separation of facts and values. Causality, explanation, prediction and experimentation depend on assumptions about the correspondence between concepts and reality; in the absence of an objective reality, that correspondence is impossible. Empirical research can yield “contextualized emergent understanding rather than the creation of testable theoretical structures” (O’Connor et al. 2008, p. 30). The distinctively different world views of positivist/post-positivist and interpretivist philosophy are at the core of many controversies in the methodology, social science and policy science literature (Casula 2020b).

With its focus on dissolving dualisms, pragmatism steps outside the objective/subjective debate. Instead, it asks “what difference would it make to us if the statement were true” (Kaplan 1964, p. 42). Its epistemology is connected to purposeful inquiry. Pragmatism has a “transformative, experimental notion of inquiry” anchored in pluralism and a focus on constructing conceptual and practical tools to resolve “problematic situations” (Shields 1998; Shields and Rangarajan 2013). Exploration and working hypotheses are most comfortably situated within the pragmatic philosophical perspective.

6 Research approaches

Empirical investigation relies on three types of methodology—quantitative, qualitative and mixed methods.

6.1 Quantitative methods

Quantitative methods use deductive logic and formal hypotheses or models to explain, predict, and eventually establish causation (Hyde 2000; Kaplan 1964; Johnson and Onwuegbuzie 2004; Morgan 2007). Footnote 4 The correspondence between the conceptual and empirical worlds makes measurement possible. Measurement assigns numbers to objects, events or situations; it allows for standardization and subtle discrimination, and it lets researchers draw on the power of mathematics and statistics (Kaplan 1964, pp. 172–174). Using the power of inferential statistics, quantitative research employs research designs that eliminate competing hypotheses. It is high in external validity, or the ability to generalize to the whole. The research results are relatively independent of the researcher (Johnson and Onwuegbuzie 2004).

Quantitative methods depend on the quality of measurement, a priori conceptualization, and adherence to the underlying assumptions of inferential statistics. Critics charge that hypotheses and frameworks needlessly constrain inquiry (Johnson and Onwuegbuzie 2004, p. 19). Hypothesis-testing quantitative methods support the explanatory purpose.

6.2 Qualitative methods

Qualitative researchers who embrace the post-modern, interpretivist view Footnote 5 question everything about the nature of quantitative methods (Willis et al. 2007). Rejecting the possibility of objectivity, the correspondence between ideas and measures, and the constraints of a priori theorizing, they focus on “unique impressions and understandings of events rather than to generalize the findings” (Kolb 2012, p. 85). Characteristics of traditional qualitative research include “induction, discovery, exploration, theory/hypothesis generation and the researcher as the primary ‘instrument’ of data collection” (Johnson and Onwuegbuzie 2004, p. 18). The data of qualitative methods are generated via interviews, direct observation, focus groups, and analysis of written records or artifacts.

Qualitative methods provide for understanding and “description of people’s personal experiences of phenomena”. They enable descriptions of “phenomena as they are situated and embedded in local contexts”. Researchers use naturalistic settings to “study dynamic processes” and to explore how participants interpret experiences. Qualitative methods have an inherent flexibility, allowing researchers to respond to changes in the research setting. They are particularly good at narrowing to the particular and, on the flip side, have limited external validity (Johnson and Onwuegbuzie 2004, p. 20). Instead of specifying a suitable sample size to draw conclusions, qualitative research uses the notion of saturation (Morse 1995).

Saturation is used in grounded theory, a widely used and respected form of interpretivist qualitative research. Introduced by Glaser and Strauss (1967), this “grounded on observation” (Patten and Newhart 2000, p. 27) methodology focuses on “the creation of emergent understanding” (O’Connor et al. 2008, p. 30). It uses the constant comparative method, whereby researchers develop theory from data as they code and analyze at the same time. Data collection, coding and analysis, along with theoretical sampling, are systematically combined to generate theory (Kolb 2012, p. 83). The qualitative methods discussed here support exploratory research.
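Saturation is ordinarily a qualitative judgment, but its stopping logic can be sketched in a few lines. The toy example below is our illustration of that rule, not an implementation of grounded theory; the coded interviews and the streak threshold are invented.

```python
# Toy illustration of the saturation stopping rule: stop collecting
# data once several consecutive interviews yield no new codes.
# Transcripts, codes, and the threshold are invented for illustration.

# Each interview has already been coded into a set of themes.
coded_interviews = [
    {"housing", "safety"},
    {"housing", "health"},
    {"safety", "transport"},
    {"health"},      # nothing new
    {"housing"},     # nothing new
    {"transport"},   # nothing new -> saturation
]

seen_codes = set()
no_new_streak = 0
STREAK_NEEDED = 3  # an arbitrary, illustrative threshold

for i, codes in enumerate(coded_interviews, start=1):
    new_codes = codes - seen_codes
    seen_codes |= codes
    no_new_streak = 0 if new_codes else no_new_streak + 1
    print(f"Interview {i}: {len(new_codes)} new code(s)")
    if no_new_streak >= STREAK_NEEDED:
        print(f"Saturation reached after interview {i}")
        break
```

In practice the judgment is richer than a counter, but the sketch shows why saturation substitutes for a pre-specified sample size: the stopping point depends on what the data keep (or stop) contributing.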

A close look at the two philosophies and assumptions underlying quantitative and qualitative research suggests two contradictory world views. The literature has labeled these contradictory views the Incompatibility Theory, which sets up a quantitative-versus-qualitative tension similar to the seeming separation of art and science or of facts and values (Smith 1983a, b; Guba 1987; Smith and Heshusius 1986; Howe 1988). The Incompatibility Theory does not hold up in practice. Yin (1981, 1992, 2011, 2017), a prominent case study scholar, showcases a deductive research methodology that crosses boundaries, using both quantitative and qualitative evidence when appropriate.

6.3 Mixed methods

Turning the “Incompatibility Theory” on its head, mixed methods research “combines elements of qualitative and quantitative research approaches … for the broad purposes of breadth and depth of understanding and corroboration” (Johnson et al. 2007, p. 123). It does this by partnering with philosophical pragmatism. Footnote 6 Pragmatism is productive because “it offers an immediate and useful middle position philosophically and methodologically; it offers a practical and outcome-oriented method of inquiry that is based on action and leads, iteratively, to further action and the elimination of doubt; it offers a method for selecting methodological mixes that can help researchers better answer many of their research questions” (Johnson and Onwuegbuzie 2004, p. 17). As for what theory is for the pragmatist: “any theoretical model is for the pragmatist, nothing more than a framework through which problems are perceived and subsequently organized” (Hothersall 2019, p. 5).

Brendel (2009) constructed a simple framework to capture the core elements of pragmatism. Brendel’s four “p”s (practical, pluralism, participatory and provisional) help to show the relevance of pragmatism to mixed methods. Pragmatism is purposeful and concerned with practical consequences. The pluralism of pragmatism overcomes the quantitative/qualitative dualism: it allows for multiple perspectives (including positivism and interpretivism) and thus gets around the incompatibility problem. Inquiry should be participatory, or inclusive of the many views of participants; hence, it is consistent with multiple realities and is tied to the common concern of a problematic situation. Finally, all inquiry is provisional. This is compatible with experimental methods and hypothesis testing, and consistent with the back and forth of inductive and deductive reasoning. Mixed methods support exploratory research.

Advocates of mixed methods research note that it employs the strengths and overcomes the weaknesses of quantitative and qualitative methods. Quantitative methods provide precision; the pictures and narratives of qualitative techniques add meaning to the numbers. Quantitative analysis can provide a big picture and establish relationships, and its results have great generalizability. On the other hand, the “why” behind an explanation is often missing and can be filled in through in-depth interviews, making a deeper and more satisfying explanation possible. Mixed methods bring the benefits of triangulation, or multiple sources of evidence that converge to support a conclusion. They can entertain a “broader and more complete range of research questions” (Johnson and Onwuegbuzie 2004, p. 21) and can move between inductive and deductive methods. Case studies use multiple forms of evidence and are a natural context for mixed methods.

One thing that seems to be missing from the mixed methods literature and its explicit designs is a place for conceptual frameworks. For example, Heyvaert et al. (2013) examined nine mixed methods studies and found an explicit framework in only two (transformative and pragmatic) (p. 663).

7 Theory and hypotheses: where is and what is theory?

Theory is key to deductive research; in essence, empirical deductive methods test theory. Hence, we shift our attention to theory and to the role and function of hypotheses within theory. Oppenheim and Putnam (1958) note that “by a ‘theory’ (in the widest sense) we mean any hypothesis, generalization or law (whether deterministic or statistical) or any conjunction of these” (p. 25). Van Evera (1997) uses a similar, more complex definition: “theories are general statements that describe and explain the causes of effects of classes of phenomena. They are composed of causal laws or hypotheses, explanations, and antecedent conditions” (p. 8). Sutton and Staw (1995, p. 376), in the highly cited article “What Theory is Not,” assert that hypotheses should contain logical arguments for “why” the hypothesis is expected; hypotheses need an underlying causal argument before they can be considered theory. The point of this discussion is not to define theory but to establish the importance of hypotheses in theory.

Explanatory research is implicitly relational (A explains B), and the hypotheses of explanatory research lay bare these relationships. Popular definitions of the hypothesis capture this relational component. For example, the Cambridge Dictionary defines a hypothesis as “an idea or explanation for something that is based on known facts but has not yet been proven”. Vocabulary.com’s definition emphasizes explanation: a hypothesis is “an idea or explanation that you then test through study and experimentation”. According to Wikipedia, a hypothesis is “a proposed explanation for a phenomenon”. Other definitions remove the relational or explanatory reference. The Oxford English Dictionary defines a hypothesis as a “supposition or conjecture put forth to account for known facts”. Science Buddies defines a hypothesis as a “tentative, testable answer to a scientific question”. According to the Longman Dictionary, a hypothesis is “an idea that can be tested to see if it is true or not”. The Urban Dictionary states that a hypothesis is “a prediction or educated-guess based on current evidence that is yet to be tested”. We argue that the hypotheses of exploratory research—working hypotheses—are not bound by relational expectations. It is this flexibility that distinguishes the working hypothesis.

Sutton and Staw (1995) maintain that hypotheses “serve as crucial bridges between theory and data, making explicit how the variables and relationships that follow from a logical argument will be operationalized” (p. 376, italics added). Writing in the highly rated journal Computers and Education, Twining et al. (2017) created guidelines for qualitative research as a way to improve its soundness and rigor. They identified a lack of alignment between theoretical stance and methodology as a common problem in qualitative research, along with a lack of alignment among methodology, design, instruments of data collection, and analysis. The authors created a guidance summary that emphasized the need to enhance coherence throughout the elements of research design (Twining et al. 2017, p. 12). Perhaps the bridging function of the hypothesis mentioned by Sutton and Staw (1995) is obscured and often missing in qualitative methods. Working hypotheses can be a tool to overcome this problem.

For reasons similar to those used by mixed methods scholars, we look to classical pragmatism and the ideas of John Dewey to inform our discussion of theory and working hypotheses. Dewey (1938) treats theory as a tool of empirical inquiry and uses a map metaphor (p. 136). Theory is like a map that helps a traveler navigate the terrain, and it should be judged by its usefulness. “There is no expectation that a map is a true representation of reality. Rather, it is a representation that allows a traveler to reach a destination (achieve a purpose). Hence, theories should be judged by how well they help resolve the problem or achieve a purpose” (Shields and Rangarajan 2013, p. 23). Note that we explicitly link theory to the research purpose. Theory is never treated as an unimpeachable Truth; rather, it is a helpful tool that organizes inquiry, connecting data and problem. Dewey’s approach also expands the definition of theory to include abstractions (categories) outside of causation and explanation. The micro-conceptual frameworks Footnote 7 introduced in Table 1 are a type of theory. We define conceptual frameworks as “the way the ideas are organized to achieve the project’s purpose” (Shields and Rangarajan 2013, p. 24). Micro-conceptual frameworks do this at a level of analysis very close to the data: they can direct operationalization and ways to assess measurement or evidence at the level of the individual research study. Again, the research purpose plays a pivotal role in the functioning of theory (Shields and Tajalli 2006).

8 Working hypothesis: methods and data analysis

We move on to answer the remaining questions in Table 1. We have established that exploratory research is extremely flexible and idiosyncratic. Given this, we proceed with a few examples and draw out lessons for developing an exploratory purpose, building a framework and, from there, identifying data collection techniques and the logic of hypothesis testing and analysis. Early on we noted the value of the working hypothesis framework for student empirical research and applied research. The next section uses a master’s student’s work to illustrate the usefulness of working hypotheses as a way to incorporate the literature and structure inquiry. This graduate student was also a mature professional whose research question emerged from his job; his project is thus also an example of applied research.

Master of Public Administration student Swift (2010) worked for a public agency and was responsible for that agency’s sexual harassment training. The agency needed to evaluate its training but had never done so before, and Swift had never attempted a significant empirical research project. Both of these conditions suggest exploration as a possible approach. He was interested in evaluating the training program, so the project also had a normative sense. Given his job, he already knew a lot about the problem of sexual harassment and sexual harassment training. What he did not know much about was doing empirical research, reviewing the literature, or building a framework to evaluate the training (working hypotheses). He wanted a framework that was flexible and comprehensive. In his research, he discovered Lundvall’s (2006) knowledge taxonomy, summarized in four simple ways of knowing (know-what, know-how, know-why, know-who), and asked whether his agency’s training provided participants with these kinds of knowledge. Lundvall’s categories of knowing became the basis of his working hypotheses. The taxonomy is well suited for working hypotheses because it is simple, easy to understand intuitively, and can be tailored to the unique problematic situation of the researcher. Swift (2010, pp. 38–39) developed four basic working hypotheses:

WH1: Capital Metro provides adequate know-what knowledge in its sexual harassment training.

WH2: Capital Metro provides adequate know-how knowledge in its sexual harassment training.

WH3: Capital Metro provides adequate know-why knowledge in its sexual harassment training.

WH4: Capital Metro provides adequate know-who knowledge in its sexual harassment training.

From here he needed to determine what would constitute the different kinds of knowledge. For example, what constitutes “know-what” knowledge for sexual harassment training? This is where his knowledge and experience working in the field, as well as the literature, came into play. According to Lundvall et al. (1988, p. 12), “know-what” knowledge is about facts and raw information. Swift (2010) learned through the literature that laws and rules were the basis for the mandated sexual harassment training. He read about specific anti-discrimination laws and the subsequent rules and regulations derived from them. These laws and rules used specific definitions and were enacted within a historical context. Laws, rules, definitions and history became the “facts” of know-what knowledge for his working hypothesis. To make this clear, he created sub-hypotheses that explicitly took these into account, as shown below (Swift 2010, p. 38). Each sub-hypothesis was defended using material from the literature (Swift 2010, pp. 22–26). The sub-hypotheses can also be easily tied to evidence; for example, he could document that the training covered anti-discrimination laws.

WH1: Capital Metro provides adequate know-what knowledge in its sexual harassment training.

WH1a: The sexual harassment training includes information on anti-discrimination laws (Title VII).

WH1b: The sexual harassment training includes information on key definitions.

WH1c: The sexual harassment training includes information on Capital Metro’s Equal Employment Opportunity and Harassment policy.

WH1d: Capital Metro provides training on sexual harassment history.

Know-how knowledge refers to the ability to do something and involves skills (Lundvall and Johnson 1994, p. 12). It is a kind of expertise in action. The literature and his experience allowed Swift to identify skills such as how to file a claim or how to document incidents of sexual harassment as important “know-how” knowledge that should be included in sexual harassment training. Again, these were depicted as sub-hypotheses.

WH2: Capital Metro provides adequate know-how knowledge in its sexual harassment training.

WH2a: Training is provided on how to file and report a claim of harassment.

WH2b: Training is provided on how to document sexual harassment situations.

WH2c: Training is provided on how to investigate sexual harassment complaints.

WH2d: Training is provided on how to follow additional harassment policy procedures and protocols.

Note that the working hypotheses do not specify a relationship; rather, they are simple declarative sentences. If “know-how” knowledge was included in the sexual harassment training, Swift would be able to find evidence that participants learned how to file a claim (WH2a). The working hypothesis provides the bridge between theory and data that Sutton and Staw (1995) found missing in exploratory work. The sub-hypotheses are designed to be refined enough that researchers know what to look for and can tailor their hunt for evidence. Figure 1 captures the generic sub-hypothesis design.

Figure 1. A common structure used in the development of working hypotheses.

When expected evidence is linked to the sub-hypotheses, the data, framework and research purpose are aligned. This can be laid out in a planning document that operationalizes the data collection, something akin to an architect’s blueprint. This is where the scholar explicitly develops the alignment between purpose, framework and method (Shields and Rangarajan 2013; Shields et al. 2019b).

Table 2 operationalizes Swift’s working hypotheses (and sub-hypotheses). The table provides clues as to what kind of evidence is needed to determine whether the hypotheses are supported. In this case, Swift used interviews with participants and trainers as well as a review of program documents. Column one repeats the sub-hypothesis, column two specifies the data collection method (here, interviews with participants/managers and review of program documents) and column three specifies the unique questions that focus the investigation (the interview questions, for example, are provided). In the less precise world of qualitative data, evidence supporting a hypothesis can have varying degrees of strength; this too can be specified.
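The alignment that Table 2 lays out can also be pictured as a simple data structure in which each sub-hypothesis carries its data collection method, its guiding questions and the evidence gathered against it. The sketch below is our reconstruction of that layout, not code from the study; the question wording and the decision rule are invented for illustration.

```python
# Sketch of an operationalization table linking each sub-hypothesis
# to its data-collection method, guiding questions, and evidence.
# Question wording and the decision rule are illustrative inventions.
from dataclasses import dataclass, field

@dataclass
class SubHypothesis:
    label: str             # e.g. "WH1a"
    statement: str         # the declarative expectation
    method: str            # data-collection method
    questions: list[str]   # focused interview/document questions
    evidence: list[str] = field(default_factory=list)

    def supported(self) -> bool:
        # Crude decision rule (an assumption): any recorded evidence
        # counts as support. A real study would grade evidence strength.
        return bool(self.evidence)

wh1a = SubHypothesis(
    label="WH1a",
    statement="Training includes information on anti-discrimination laws.",
    method="document review; participant interviews",
    questions=["Do the training materials cover Title VII?"],
)

wh1a.evidence.append("Slide deck section 2 summarizes Title VII.")
print(wh1a.label, "supported:", wh1a.supported())
```

Walking such a structure, a researcher (or a reader auditing the study) can see at a glance which sub-hypotheses have evidence behind them and which do not, which is exactly the alignment function the planning document serves.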

In Swift’s example, neither the statistics of explanatory research nor the open-ended questions of interpretivist, inductive exploratory research are used. The deductive logic of inquiry here is somewhat intuitive, similar to that of a detective (Ulriksen and Dadalauri 2016). It is also a logic used in international law (Worster 2013). It should be noted that the working hypothesis and the corresponding data collection protocol do not stop inquiry and fieldwork outside the framework. The interviews could reveal an unexpected problem with Swift’s training program. The framework provides a loose and perhaps useful way to identify and make sense of data that do not fit expectations. Researchers using working hypotheses should be sensitive to interesting findings that fall outside their framework. These could be used in future studies, to refine theory, or, in this case, to provide suggestions to improve the sexual harassment training. The sensitizing concepts mentioned by Gilgun (2015) are free to emerge and should be encouraged.

Something akin to working hypotheses is hidden in plain sight in the professional literature. Take, for example, Kerry Crawford’s (2017) book Wartime Sexual Violence. Here she explores basic changes in the way “advocates and decision makers think about and discuss conflict-related sexual violence” (p. 2) and focuses on the subsequent shift from silence to action. The shift occurred as wartime sexual violence was reframed as a “weapon of war”. The new frame captured the attention of powerful members of the security community who demanded, initiated, and paid for institutional and policy change. Crawford (2017) examines the legacy of this key reframing. She develops a six-stage model of potential international responses to incidents of wartime sexual violence. This model is fairly easily converted to working hypotheses and sub-hypotheses; Table 3 shows her model as a set of (non-relational) working hypotheses. She applied the model as a way to gather evidence across cases (e.g., the US response to sexual violence in the Democratic Republic of the Congo) to show the official level of response to sexual violence. Each case study chapter examined evidence to establish whether the case fit the pattern formalized in the working hypotheses. The framework was very useful in her comparative context, allowing for consistent analysis across cases. Her analysis of the three cases went well beyond the material covered in the framework: she freely incorporated useful inductively informed data in her analysis and discussion. The framework, however, allowed for alignment within and across cases.

9 Conclusion

In this article we argued that exploratory research is also well suited to deductive approaches. By examining the landscape of deductive exploratory research, we proposed the working hypothesis as a flexible conceptual framework and a useful tool for doing exploratory studies. It has the potential to guide and bring coherence across the steps of the research process. After presenting the nature of the exploratory research purpose and how it differs from the two other research purposes identified in the literature (explanation and description), we focused on answering four questions in order to show the link between micro-conceptual frameworks and research purposes in a deductive setting. The answers to the four questions are summarized in Table 4.

First, we argued that the working hypothesis and exploration are situated within the pragmatic philosophical perspective. Pragmatism allows for pluralism in theory and data collection techniques, which is compatible with the flexible exploratory purpose. Second, after introducing and discussing the four core elements of pragmatism (practical, pluralism, participatory, and provisional), we explained how the working hypothesis informs the methodologies and evidence collection of deductive exploratory research, including the benefits of triangulation provided by mixed methods research. Third, as is clear from the article title, we introduced the working hypothesis as the micro-conceptual framework for deductive exploratory research. We argued that the hypotheses of exploratory research, which we call working hypotheses, are distinguished from those of explanatory research in that they do not require a relational component and are not bound by relational expectations. A working hypothesis is extremely flexible and idiosyncratic; depending on the research question, it can be viewed as a statement or group of statements of expectations tested in action. Using examples, we concluded by explaining how working hypotheses inform data collection and analysis for deductive exploratory research.

Crawford’s (2017) example showed how the structure of working hypotheses provides a framework for comparative case studies. Her criteria for analysis were specified ahead of time and used to frame each case, so her comparisons were systematized across cases. Further, the framework ensured a connection between the data analysis and the literature review. Yet the flexible, working nature of the hypotheses allowed unexpected findings to be discovered.

The evidence required to test working hypotheses is directed by the research purpose and potentially includes both quantitative and qualitative sources. Thus, all types of evidence, including quantitative methods, should be part of the toolbox of deductive exploratory research. We showed how the working hypothesis, as a flexible exploratory framework, resolves many of the seeming dualisms pervasive in the research methods literature.

To conclude, this article has provided an in-depth examination of working hypotheses, taking into account philosophical questions and the larger formal research methods literature. By discussing working hypotheses as applied theoretical tools, we demonstrated that they fill a unique niche in the methods literature: they provide a way to enhance alignment in deductive exploratory studies.

Notes

1. In practice, quantitative scholars often run multivariate analyses on databases to find out whether there are correlations. Hypotheses are tested because the statistical software does the math, not because the scholar has an a priori, relational expectation (hypothesis) well grounded in the literature and supported by cogent arguments. Hunches are just fine. This is clearly an inductive approach to research and part of the larger process of inquiry.
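The practice this note describes, letting the software scan for correlations before any hypothesis exists, looks roughly like the following sketch. The data frame and its column names are invented; the point is only that the “hypotheses” arrive after the math.

```python
# Sketch of the inductive "scan for correlations" practice described
# above: no a priori hypothesis, just compute and see what turns up.
# The data frame and its columns are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "budget":   [1.2, 3.4, 2.1, 5.6, 4.3, 2.8],
    "staff":    [10, 25, 15, 40, 33, 20],
    "outcomes": [55, 70, 60, 88, 80, 66],
})

corr = df.corr()  # pairwise correlations, no theory required
print(corr.round(2))

# Flag strong pairs; any theorizing happens only after this step.
pairs = corr.abs().stack()
print(pairs[(pairs > 0.8) & (pairs < 1.0)])
```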

2. In 1958, the philosophers of science Oppenheim and Putnam used the notion of the working hypothesis in their title “Unity of Science as a Working Hypothesis.” They, too, used it as a big-picture concept: “unity of science in this sense, can be fully realized constitutes an over-arching meta-scientific hypothesis, which enables one to see a unity in scientific activities that might otherwise appear disconnected or unrelated” (p. 4).

3. It should be noted that the positivism described in the research methods literature does not resemble philosophical positivism as developed by philosophers like Comte (Whetsell and Shields 2015). In the research methods literature, “positivism means different things to different people…. The term has long been emptied of any precise denotation … and is sometimes affixed to positions actually opposed to those espoused by the philosophers from whom the name derives” (Schrag 1992, p. 5). For the purposes of this paper, we capture a few essential ways positivism is presented in the research methods literature. This helps us position the “working hypothesis” and “exploratory” research within the larger context of contemporary research methods. We are not arguing that the positivism presented here is anything more. The incompatibility theory discussed earlier is an outgrowth of this research methods literature.

4. It should be noted that quantitative researchers often use inductive reasoning as well. They do this with existing data sets when they run correlations or regression analyses as a way to find relationships. They ask: what do the data tell us?

5. Qualitative researchers are also associated with phenomenology, hermeneutics, naturalistic inquiry and constructivism.

6. See Feilzer (2010), Howe (1988), Johnson and Onwuegbuzie (2004), Morgan (2007), Onwuegbuzie and Leech (2005), and Biddle and Schafft (2015).

7. The term conceptual framework is applicable in a broad context (see Ravitch and Riggan 2012). The micro-conceptual framework narrows to the specific study and informs data collection (Shields and Rangarajan 2013; Shields et al. 2019a).

References

Adler, E., Clark, R.: How It’s Done: An Invitation to Social Research, 3rd edn. Thompson-Wadsworth, Belmont (2008)


Arnold, R.W.: Multiple working hypothesis in soil genesis. Soil Sci. Soc. Am. J. 29 (6), 717–724 (1965)


Atieno, O.: An analysis of the strengths and limitation of qualitative and quantitative research paradigms. Probl. Educ. 21st Century 13 , 13–18 (2009)

Babbie, E.: The Practice of Social Research, 11th edn. Thompson-Wadsworth, Belmont (2007)

Biddle, C., Schafft, K.A.: Axiology and anomaly in the practice of mixed methods work: pragmatism, valuation, and the transformative paradigm. J. Mixed Methods Res. 9 (4), 320–334 (2015)

Brendel, D.H.: Healing Psychiatry: Bridging the Science/Humanism Divide. MIT Press, Cambridge (2009)

Bryman, A.: Qualitative research on leadership: a critical but appreciative review. Leadersh. Q. 15 (6), 729–769 (2004)

Casula, M.: Under which conditions is cohesion policy effective: proposing an Hirschmanian approach to EU structural funds. Reg. Fed. Stud. (2020a). https://doi.org/10.1080/13597566.2020.1713110

Casula, M.: Economic Growth and Cohesion Policy Implementation in Italy and Spain. Palgrave Macmillan, Cham (2020b)

Ciceri, F., et al.: Microvascular COVID-19 lung vessels obstructive thromboinflammatory syndrome (MicroCLOTS): an atypical acute respiratory distress syndrome working hypothesis. Crit. Care Resusc. 15 , 1–3 (2020)

Crawford, K.F.: Wartime sexual violence: From silence to condemnation of a weapon of war. Georgetown University Press (2017)

Cronbach, L.: Beyond the two disciplines of scientific psychology. Am. Psychol. 30, 116–127 (1975)

Dewey, J.: The reflex arc concept in psychology. Psychol. Rev. 3 (4), 357 (1896)

Dewey, J.: Logic: The Theory of Inquiry. Henry Holt & Co, New York (1938)

Feilzer, Y.: Doing mixed methods research pragmatically: implications for the rediscovery of pragmatism as a research paradigm. J. Mixed Methods Res. 4 (1), 6–16 (2010)

Gilgun, J.F.: Qualitative research and family psychology. J. Fam. Psychol. 19 (1), 40–50 (2005)

Gilgun, J.F.: Methods for enhancing theory and knowledge about problems, policies, and practice. In: Briar, K., Orme, J., Ruckdeschel, R., Shaw, I. (eds.) The Sage Handbook of Social Work Research, pp. 281–297. Sage, Thousand Oaks (2009)

Gilgun, J.F.: Deductive Qualitative Analysis as Middle Ground: Theory-Guided Qualitative Research. Amazon Digital Services LLC, Seattle (2015)

Glaser, B.G., Strauss, A.L.: The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine, Chicago (1967)

Gobo, G.: Re-Conceptualizing Generalization: Old Issues in a New Frame. In: Alasuutari, P., Bickman, L., Brannen, J. (eds.) The Sage Handbook of Social Research Methods, pp. 193–213. Sage, Los Angeles (2008)


Grinnell, R.M.: Social Work Research and Evaluation: Quantitative and Qualitative Approaches. F.E. Peacock Publishers, New York (2001)

Guba, E.G.: What have we learned about naturalistic evaluation? Eval. Pract. 8 (1), 23–43 (1987)

Guba, E., Lincoln, Y.: Effective Evaluation: Improving the Usefulness of Evaluation Results Through Responsive and Naturalistic Approaches. Jossey-Bass Publishers, San Francisco (1981)

Habib, M.: The neurological basis of developmental dyslexia: an overview and working hypothesis. Brain 123 (12), 2373–2399 (2000)

Heyvaert, M., Maes, B., Onghena, P.: Mixed methods research synthesis: definition, framework, and potential. Qual. Quant. 47 (2), 659–676 (2013)

Hildebrand, D.: Dewey: A Beginner’s Guide. Oneworld, Oxford (2008)

Howe, K.R.: Against the quantitative-qualitative incompatibility thesis or dogmas die hard. Educ. Res. 17(8), 10–16 (1988)

Hothersall, S.J.: Epistemology and social work: enhancing the integration of theory, practice and research through philosophical pragmatism. Eur. J. Social Work 22 (5), 860–870 (2019)

Hyde, K.F.: Recognising deductive processes in qualitative research. Qual. Market Res. Int. J. 3 (2), 82–90 (2000)

Johnson, R.B., Onwuegbuzie, A.J.: Mixed methods research: a research paradigm whose time has come. Educ. Res. 33 (7), 14–26 (2004)

Johnson, R.B., Onwuegbuzie, A.J., Turner, L.A.: Toward a definition of mixed methods research. J. Mixed Methods Res. 1 (2), 112–133 (2007)

Kaplan, A.: The Conduct of Inquiry. Chandler, Scranton (1964)

Kolb, S.M.: Grounded theory and the constant comparative method: valid research strategies for educators. J. Emerg. Trends Educ. Res. Policy Stud. 3 (1), 83–86 (2012)

Levers, M.J.D.: Philosophical paradigms, grounded theory, and perspectives on emergence. Sage Open 3 (4), 2158244013517243 (2013)

Lundvall, B.-Å.: Knowledge management in the learning economy. Danish Research Unit for Industrial Dynamics Working Paper, vol. 6, pp. 3–5 (2006)

Lundvall, B.-Å., Johnson, B.: Knowledge management in the learning economy. J. Ind. Stud. 1 (2), 23–42 (1994)

Lundvall, B.-Å., Jenson, M.B., Johnson, B., Lorenz, E.: Forms of Knowledge and Modes of Innovation—From User-Producer Interaction to the National System of Innovation. In: Dosi, G., et al. (eds.) Technical Change and Economic Theory. Pinter Publishers, London (1988)

Maanen, J., Manning, P., Miller, M.: Series editors’ introduction. In: Stebbins, R. (ed.) Exploratory Research in the Social Sciences, pp. v–vi. Sage, Thousand Oaks (2001)

Mackenzie, N., Knipe, S.: Research dilemmas: paradigms, methods and methodology. Issues Educ. Res. 16 (2), 193–205 (2006)

Marlow, C.R.: Research Methods for Generalist Social Work. Thomson Brooks/Cole, New York (2005)

Mead, G.H.: The working hypothesis in social reform. Am. J. Sociol. 5 (3), 367–371 (1899)

Milnes, A.G.: Structure of the Pennine Zone (Central Alps): a new working hypothesis. Geol. Soc. Am. Bull. 85 (11), 1727–1732 (1974)

Morgan, D.L.: Paradigms lost and pragmatism regained: methodological implications of combining qualitative and quantitative methods. J. Mixed Methods Res. 1 (1), 48–76 (2007)

Morse, J.: The significance of saturation. Qual. Health Res. 5 (2), 147–149 (1995)

O’Connor, M.K., Netting, F.E., Thomas, M.L.: Grounded theory: managing the challenge for those facing institutional review board oversight. Qual. Inq. 14 (1), 28–45 (2008)

Onwuegbuzie, A.J., Leech, N.L.: On becoming a pragmatic researcher: The importance of combining quantitative and qualitative research methodologies. Int. J. Soc. Res. Methodol. 8 (5), 375–387 (2005)

Oppenheim, P., Putnam, H.: Unity of science as a working hypothesis. In: Minnesota Studies in the Philosophy of Science, vol. II, pp. 3–36 (1958)

Patten, M.L., Newhart, M.: Understanding Research Methods: An Overview of the Essentials, 2nd edn. Routledge, New York (2000)

Pearse, N.: An illustration of deductive analysis in qualitative research. In: European Conference on Research Methodology for Business and Management Studies, pp. 264–VII. Academic Conferences International Limited (2019)

Prater, D.N., Case, J., Ingram, D.A., Yoder, M.C.: Working hypothesis to redefine endothelial progenitor cells. Leukemia 21 (6), 1141–1149 (2007)

Ravitch, S.M., Riggan, M.: Reason and Rigor: How Conceptual Frameworks Guide Research. Sage, Thousand Oaks (2012)

Reiter, B.: The epistemology and methodology of exploratory social science research: Crossing Popper with Marcuse. In: Government and International Affairs Faculty Publications. Paper 99. http://scholarcommons.usf.edu/gia_facpub/99 (2013)

Ritchie, J., Lewis, J.: Qualitative Research Practice: A Guide for Social Science Students and Researchers. Sage, London (2003)

Schrag, F.: In defense of positivist research paradigms. Educ. Res. 21 (5), 5–8 (1992)

Shields, P.M.: Pragmatism as a philosophy of science: a tool for public administration. Res. Public Adm. 4, 195–225 (1998)

Shields, P.M., Rangarajan, N.: A Playbook for Research Methods: Integrating Conceptual Frameworks and Project Management. New Forums Press (2013)

Shields, P.M., Tajalli, H.: Intermediate theory: the missing link in successful student scholarship. J. Public Aff. Educ. 12 (3), 313–334 (2006)

Shields, P., Whetsell, T.: Public administration methodology: a pragmatic perspective. In: Raadschelders, J., Stillman, R. (eds.) Foundations of Public Administration, pp. 75–92. Melvin and Leigh, New York (2017)

Shields, P., Rangarajan, N., Casula, M.: It is a Working Hypothesis: Searching for Truth in a Post-Truth World (part I). Sotsiologicheskie issledovaniya 10 , 39–47 (2019a)

Shields, P., Rangarajan, N., Casula, M.: It is a Working Hypothesis: Searching for Truth in a Post-Truth World (part 2). Sotsiologicheskie issledovaniya 11 , 40–51 (2019b)

Smith, J.K.: Quantitative versus qualitative research: an attempt to clarify the issue. Educ. Res. 12 (3), 6–13 (1983a)

Smith, J.K.: Quantitative versus interpretive: the problem of conducting social inquiry. In: House, E. (ed.) Philosophy of Evaluation, pp. 27–52. Jossey-Bass, San Francisco (1983b)

Smith, J.K., Heshusius, L.: Closing down the conversation: the end of the quantitative-qualitative debate among educational inquirers. Educ. Res. 15 (1), 4–12 (1986)

Stebbins, R.A.: Exploratory Research in the Social Sciences. Sage, Thousand Oaks (2001)


Strydom, H.: An evaluation of the purposes of research in social work. Soc. Work/Maatskaplike Werk 49 (2), 149–164 (2013)

Sutton, R.I., Staw, B.M.: What theory is not. Adm. Sci. Q. 40(3), 371–384 (1995)

Swift, J., III: Exploring Capital Metro’s Sexual Harassment Training Using Dr. Bengt-Åke Lundvall’s Taxonomy of Knowledge Principles. Applied Research Project, Texas State University (2010). https://digital.library.txstate.edu/handle/10877/3671

Thomas, E., Magilvy, J.K.: Qualitative rigor or research validity in qualitative research. J. Spec. Pediatric Nurs. 16 (2), 151–155 (2011)

Twining, P., Heller, R.S., Nussbaum, M., Tsai, C.C.: Some guidance on conducting and reporting qualitative studies. Comput. Educ. 107 , A1–A9 (2017)

Ulriksen, M., Dadalauri, N.: Single case studies and theory-testing: the knots and dots of the process-tracing method. Int. J. Soc. Res. Methodol. 19 (2), 223–239 (2016)

Van Evera, S.: Guide to Methods for Students of Political Science. Cornell University Press, Ithaca (1997)

Whetsell, T.A., Shields, P.M.: The dynamics of positivism in the study of public administration: a brief intellectual history and reappraisal. Adm. Soc. 47 (4), 416–446 (2015)

Willis, J.W., Jost, M., Nilakanta, R.: Foundations of Qualitative Research: Interpretive and Critical Approaches. Sage, Beverly Hills (2007)

Worster, W.T.: The inductive and deductive methods in customary international law analysis: traditional and modern approaches. Georget. J. Int. Law 45 , 445 (2013)

Yin, R.K.: The case study as a serious research strategy. Knowledge 3 (1), 97–114 (1981)

Yin, R.K.: The case study method as a tool for doing evaluation. Curr. Sociol. 40 (1), 121–137 (1992)

Yin, R.K.: Applications of Case Study Research. Sage, Beverly Hills (2011)

Yin, R.K.: Case Study Research and Applications: Design and Methods. Sage, Beverly Hills (2017)


Acknowledgements

The authors contributed equally to this work. The authors would like to thank Quality & Quantity’s editors and the anonymous reviewers for their valuable advice and comments on previous versions of this paper.

Open access funding provided by Alma Mater Studiorum - Università di Bologna within the CRUI-CARE Agreement. There are no funders to report for this submission.

Author information

Authors and Affiliations

Department of Political and Social Sciences, University of Bologna, Strada Maggiore 45, 40125, Bologna, Italy

Mattia Casula

Texas State University, San Marcos, TX, USA

Nandhini Rangarajan & Patricia Shields


Corresponding author

Correspondence to Mattia Casula.

Ethics declarations

Conflict of interest

No potential conflict of interest was reported by the authors.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article

Casula, M., Rangarajan, N. & Shields, P. The potential of working hypotheses for deductive exploratory research. Qual. Quant. 55, 1703–1725 (2021). https://doi.org/10.1007/s11135-020-01072-9


Accepted: 05 November 2020

Published: 08 December 2020

Issue Date: October 2021

DOI: https://doi.org/10.1007/s11135-020-01072-9


Keywords

  • Exploratory research
  • Working hypothesis
  • Deductive qualitative research

Frequently asked questions

What is hypothesis testing?

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
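To make this concrete, here is a minimal sketch of a hypothesis test in Python using SciPy's independent-samples t-test; the reaction-time data and the 0.05 significance level are illustrative assumptions, not part of the original answer.

```python
# A minimal hypothesis test with SciPy: an independent-samples t-test on
# hypothetical reaction-time data (milliseconds). The 0.05 alpha level is
# a conventional, illustrative choice.
from scipy import stats

control = [310, 295, 280, 305, 290, 315, 300, 285]
treatment = [270, 265, 285, 260, 275, 280, 255, 268]

# H0: the group means are equal; H1: the group means differ.
t_stat, p_value = stats.ttest_ind(control, treatment)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A p-value below alpha means a difference this large would rarely arise
# by chance alone if H0 were true, so we reject H0.
alpha = 0.05
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```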

Frequently asked questions: Methodology

Attrition refers to participants leaving a study. It always happens to some extent—for example, in randomized controlled trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group . As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased .

Action research is conducted in order to solve a particular issue immediately, while case studies are often conducted over a longer period of time and focus more on observing and analyzing a particular ongoing phenomenon.

Action research is focused on solving a problem or informing individual and community-based knowledge in a way that impacts teaching, learning, and other related processes. It is less focused on contributing theoretical input, instead producing actionable input.

Action research is particularly popular with educators as a form of systematic inquiry because it prioritizes reflection and bridges the gap between theory and practice. Educators are able to simultaneously investigate an issue as they solve it, and the method is very iterative and flexible.

A cycle of inquiry is another name for action research . It is usually visualized in a spiral shape following a series of steps, such as “planning → acting → observing → reflecting.”

To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.

Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity .

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.


Content validity shows you how accurately a test or other measurement method taps into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analyzing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Snowball sampling is a non-probability sampling method . Unlike probability sampling (which involves some form of random selection ), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Snowball sampling is a non-probability sampling method , where there is not an equal chance for every member of the population to be included in the sample .

This means that you cannot use inferential statistics and make generalizations —often the goal of quantitative research . As such, a snowball sample is not representative of the target population and is usually a better fit for qualitative research .

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .
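As an illustration of how referral chains drive selection, here is a toy Python sketch of snowball sampling; the referral network is entirely hypothetical.

```python
# Toy snowball sampling: seed participants recruit further participants
# through referrals. The referral lists are entirely hypothetical.
referrals = {
    "A": ["B", "C"], "B": ["D"], "C": ["D", "E"],
    "D": [], "E": ["F"], "F": [],
}

def snowball(seeds, target_size):
    sample, queue = [], list(seeds)
    while queue and len(sample) < target_size:
        person = queue.pop(0)
        if person not in sample:
            sample.append(person)
            queue.extend(referrals[person])   # each recruit refers acquaintances
    return sample

# Inclusion depends entirely on referral chains, so selection is non-random.
print(snowball(seeds=["A"], target_size=4))   # ['A', 'B', 'C', 'D']
```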

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extramarital affairs)

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Reproducibility and replicability are related terms.

  • Reproducing research entails reanalyzing the existing data in the same manner.
  • Replicating (or repeating ) the research entails reconducting the entire analysis, including the collection of new data . 
  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup ( probability sampling ). In quota sampling you select a predetermined number or proportion of units, in a non-random manner ( non-probability sampling ).

Purposive and convenience sampling are both sampling methods that are typically used in qualitative data collection.

A convenience sample is drawn from a source that is conveniently accessible to the researcher. Convenience sampling does not distinguish characteristics among the participants. On the other hand, purposive sampling focuses on selecting participants possessing characteristics associated with the research study.

The findings of studies based on either convenience or purposive sampling can only be generalized to the (sub)population from which the sample is drawn, and not to the entire population.

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling simply recruits whoever happens to be available, which means that not everyone has an equal chance of being selected; who is included depends on the place, time, or day you are collecting your data.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection, using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.
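A minimal Python sketch of that quota-filling logic, with hypothetical subgroups and volunteers:

```python
# Quota sampling sketch: recruit conveniently available people until each
# subgroup's quota is met. The quotas and volunteer stream are hypothetical.
quotas = {"urban": 3, "rural": 2}                 # target counts per subgroup
volunteers = [("p1", "urban"), ("p2", "urban"), ("p3", "rural"),
              ("p4", "urban"), ("p5", "urban"), ("p6", "rural")]

sample, counts = [], {group: 0 for group in quotas}
for person, group in volunteers:                  # convenience order, not random
    if counts[group] < quotas[group]:             # accept only while quota is open
        sample.append(person)
        counts[group] += 1

print(sample)   # ['p1', 'p2', 'p3', 'p4', 'p6'] -- p5 is turned away, quota full
```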

A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous , so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous , as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

An observational study is a great choice for you if your research question is based purely on observations. If there are ethical, logistical, or practical concerns that prevent you from conducting a traditional experiment , an observational study may be a good choice. In an observational study, there is no interference or manipulation of the research subjects, as well as no control or treatment groups .

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity ,  because it covers all of the other types. You need to have face validity , content validity , and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity , which includes construct validity, face validity , and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity : The extent to which your measure is unrelated or negatively related to measures of distinct constructs

Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.

Naturalistic observation is a qualitative research method where you record the behaviors of your research subjects in real world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as “people watching” with a purpose.

A dependent variable is what changes as a result of the independent variable manipulation in experiments . It’s what you’re interested in measuring, and it “depends” on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation).

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups. Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with “yes” or “no” (questions that start with “why” or “how” are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. Structured interviews are often quantitative in nature, and they are best used when:

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, so you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data quickly and efficiently.
  • Your research question depends on strong parity between participants, with environmental conditions held constant.

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by carefully writing high-quality interview questions.

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions.
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualize your initial thoughts and hypotheses.
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts.

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order. 
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .

In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalization : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalization: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Triangulation can help:

  • Reduce research bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labor-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

There are four main types of triangulation :

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analyzing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor.
  • Next, the editor reviews the manuscript and decides whether to reject it and send it back to the author, or send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Exploratory research is often one of the first stages in the research process, serving as a jumping-off point for explanatory research, which investigates how or why a phenomenon occurs.

Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.

After data collection, you can use data standardization and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.

Every dataset requires different techniques to clean dirty data , but you need to address these issues in a systematic way. You focus on finding and resolving data points that don’t agree or fit with the rest of your dataset.

These data might be missing values, outliers, duplicate values, incorrectly formatted, or irrelevant. You’ll start with screening and diagnosing your data. Then, you’ll often standardize and accept or remove data to make your dataset consistent and valid.
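For illustration, here is a minimal data-cleaning sketch using pandas; the column names, the duplicate and missing values, and the plausible-range rule are all hypothetical assumptions.

```python
# A minimal data-cleaning sketch with pandas: remove duplicates, drop
# missing values, and screen out values outside a plausible range.
# The column names, values, and 30-200 kg validity rule are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2, 2, 3, 4, 5],
    "weight_kg": [70.2, 68.5, 68.5, None, 71.0, 700.0],  # 700.0 is a likely entry error
})

df = df.drop_duplicates()                    # duplicate rows (id 2) removed
df = df.dropna(subset=["weight_kg"])         # rows with missing values dropped
df = df[df["weight_kg"].between(30, 200)]    # out-of-range outliers screened out
print(df)                                    # ids 1, 2, and 4 remain
```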

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimize or resolve these.

Without data cleaning, you could end up with a Type I or II error in your conclusion. These types of erroneous conclusions can be practically significant with important consequences, because they lead to misplaced investments or missed opportunities.

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyze, detect, modify, or remove “dirty” data to make your dataset “clean.” Data cleaning is also called data cleansing or data scrubbing.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

In multistage sampling , you can use probability or non-probability sampling methods .

For a probability sample, you have to conduct probability sampling at every stage.

You can mix it up by using simple random sampling , systematic sampling , or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analyzed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualize your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analyzed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.
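A toy Python sketch of drawing from smaller and smaller hierarchical groups; the state-city-household hierarchy is hypothetical.

```python
# Toy sketch of multistage sampling: random selection at each hierarchical
# level (state -> city -> household). The hierarchy is hypothetical.
import random

hierarchy = {
    "State1": {"CityA": ["hh1", "hh2", "hh3"], "CityB": ["hh4", "hh5"]},
    "State2": {"CityC": ["hh6", "hh7"], "CityD": ["hh8", "hh9", "hh10"]},
}

random.seed(2)
state = random.choice(list(hierarchy))                    # stage 1: pick a state
city = random.choice(list(hierarchy[state]))              # stage 2: pick a city
households = random.sample(hierarchy[state][city], k=2)   # stage 3: pick households
print(state, city, households)
```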

No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.

To find the slope of the line, you’ll need to perform a regression analysis .

Correlation coefficients always range between -1 and 1.

The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.

The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.
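The following Python sketch (using SciPy) illustrates both points: the sign and magnitude of the coefficient, and the earlier point that two perfectly correlated datasets can have very different slopes. The data are hypothetical.

```python
# Two hypothetical datasets with an identical (perfect) correlation but very
# different slopes: r measures how closely points fit a line, not steepness.
from scipy import stats

x = [1, 2, 3, 4, 5]
y_shallow = [2.0, 2.1, 2.2, 2.3, 2.4]   # rises slowly (slope 0.1)
y_steep = [2, 12, 22, 32, 42]           # rises quickly (slope 10)

for y in (y_shallow, y_steep):
    r, _ = stats.pearsonr(x, y)         # correlation coefficient
    fit = stats.linregress(x, y)        # regression gives the slope
    print(f"r = {r:.2f}, slope = {fit.slope:.2f}")
# Both lines print r = 1.00, yet the slopes are 0.10 and 10.00.
```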

These are the assumptions your data must meet if you want to use Pearson’s r :

  • Both variables are on an interval or ratio level of measurement
  • Data from both variables follow normal distributions
  • Your data have no outliers
  • Your data are from a random or representative sample
  • You expect a linear relationship between the two variables

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data, and that you use the right kind of analysis, drawing on credible sources, to answer your questions. This allows you to draw valid , trustworthy conclusions.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

A research design is a strategy for answering your   research question . It defines your overall approach and determines how you will collect and analyze data.

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomization can minimize the bias from order effects.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

Correlation describes an association between variables : when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables.

Causation means that changes in one variable bring about changes in the other (i.e., there is a cause-and-effect relationship between variables). The two variables are correlated with each other, and there’s also a causal link between them.

While causation and correlation can exist simultaneously, correlation does not imply causation. In other words, correlation is simply a relationship where A relates to B—but A doesn’t necessarily cause B to happen (or vice versa). Mistaking correlation for causation is a common error and can lead to the false cause fallacy .

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

Random error  is almost always present in scientific studies, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables .

You can avoid systematic error through careful design of your sampling , data collection , and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment ; and apply masking (blinding) where possible.

Systematic error is generally a bigger problem in research.

With random error, multiple measurements will tend to cluster around the true value. When you’re collecting data from a large sample , the errors in different directions will cancel each other out.

Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions ( Type I and II errors ) about the relationship between the variables you’re studying.

Random and systematic error are two types of measurement error.

Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).

Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).
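A small simulation sketch in Python makes the difference visible; the true value, noise level, and +2 kg calibration bias are hypothetical.

```python
# Simulation sketch: random error cancels out across many measurements,
# while systematic error shifts every measurement the same way.
# The true value (70 kg), noise level, and +2 kg bias are hypothetical.
import random

random.seed(42)
true_value = 70.0
n = 10_000

random_only = [true_value + random.gauss(0, 1.5) for _ in range(n)]
with_bias = [true_value + 2.0 + random.gauss(0, 1.5) for _ in range(n)]  # miscalibrated scale

print(sum(random_only) / n)  # ~70.0: random errors average out
print(sum(with_bias) / n)    # ~72.0: the bias persists regardless of sample size
```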

On graphs, the explanatory variable is conventionally placed on the x-axis, while the response variable is placed on the y-axis.

  • If you have quantitative variables, use a scatterplot or a line graph.
  • If your explanatory variable is categorical, use a bar graph.

The term “ explanatory variable ” is sometimes preferred over “ independent variable ” because, in real world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so “explanatory variables” is a more appropriate term.

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment.
  • Random assignment of participants to ensure the groups are equivalent.

Depending on your study topic, there are various other methods of controlling variables .

There are 4 main types of extraneous variables :

  • Demand characteristics : environmental cues that encourage participants to conform to researchers’ expectations.
  • Experimenter effects : unintentional actions by researchers that influence study outcomes.
  • Situational variables : environmental variables that alter participants’ behaviors.
  • Participant variables : any characteristic or aspect of a participant’s background that could affect study results.

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.
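A minimal Python sketch of crossing two hypothetical independent variables into factorial conditions:

```python
# Crossing the levels of two hypothetical independent variables into a
# full set of factorial conditions.
from itertools import product

dose = ["low", "high"]            # independent variable 1
schedule = ["daily", "weekly"]    # independent variable 2

conditions = list(product(dose, schedule))
print(conditions)
# [('low', 'daily'), ('low', 'weekly'), ('high', 'daily'), ('high', 'weekly')]
# A 2 x 2 factorial design therefore yields four conditions.
```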

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affect the outcomes

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue.
  • Shorter study duration.

Disadvantages:

  • Needs larger samples for high power.
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results.

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
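Here is a minimal Python sketch of that procedure; the sample size of 20 and the 50/50 split are illustrative assumptions.

```python
# A sketch of random assignment: give each sample member a unique number,
# shuffle, and split into groups. The sample size of 20 is hypothetical.
import random

participants = list(range(1, 21))   # unique numbers for 20 sample members
random.seed(7)                      # fixed seed for a reproducible illustration
random.shuffle(participants)

control = participants[:10]         # first half -> control group
experimental = participants[10:]    # second half -> experimental group
print("Control:", sorted(control))
print("Experimental:", sorted(experimental))
```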

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

“Controlling for a variable” means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.
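As an illustration, here is a sketch of statistical control using an ordinary least squares regression in Python (statsmodels); the variable names and simulated data are hypothetical.

```python
# Sketch: "controlling for a variable" by including it in a regression.
# Variable names and the simulated data are hypothetical; assumes the
# statsmodels, pandas, and numpy packages are installed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
age = rng.uniform(20, 60, n)                                   # control variable
exercise = rng.uniform(0, 10, n)                               # independent variable
health = 50 + 2 * exercise - 0.3 * age + rng.normal(0, 5, n)   # dependent variable

df = pd.DataFrame({"health": health, "exercise": exercise, "age": age})
model = smf.ols("health ~ exercise + age", data=df).fit()
print(model.params)   # the exercise coefficient is now adjusted for age
```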

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

If something is a mediating variable :

  • It's caused by the independent variable .
  • It influences the dependent variable.
  • When it's statistically taken into account, the correlation between the independent and dependent variables weakens or disappears, because the mediator accounts for part (or all) of the relationship.

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

There are three key steps in systematic sampling :

  • Define and list your population, ensuring that it is not ordered in a cyclical or periodic pattern.
  • Decide on your sample size and calculate your interval, k , by dividing your population size by your target sample size.
  • Choose every k th member of the population as your sample.
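A minimal Python sketch of these three steps, assuming a hypothetical listed population of 100 and a target sample of 10:

```python
# The three steps above, assuming a hypothetical listed population of 100
# and a target sample size of 10.
import random

population = [f"person_{i}" for i in range(1, 101)]   # step 1: a listed, non-cyclical population
target_n = 10
k = len(population) // target_n                       # step 2: interval k = 100 // 10 = 10

random.seed(3)
start = random.randrange(k)                           # random starting point within the first interval
sample = population[start::k]                         # step 3: every kth member
print(len(sample), sample[:3])                        # 10 members selected
```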

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 x 5 = 15 subgroups.

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method.
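For illustration, a stratified-sampling sketch using pandas; the strata and data are hypothetical, and groupby().sample() assumes pandas 1.1 or later.

```python
# Stratified sampling with pandas: divide into strata, then draw a simple
# random sample within each stratum. The data are hypothetical, and
# groupby().sample() assumes pandas 1.1 or later.
import pandas as pd

df = pd.DataFrame({
    "id": range(12),
    "education": ["HS", "BA", "MA"] * 4,   # the stratifying characteristic
})

sample = (
    df.groupby("education", group_keys=False)
      .sample(n=2, random_state=1)         # random draw from each stratum
)
print(sample)                              # 2 members from each of the 3 strata
```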

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.
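A minimal Python sketch of single-stage cluster sampling, with hypothetical schools as clusters:

```python
# Single-stage cluster sampling: randomly select whole clusters, then keep
# every unit inside the chosen clusters. The schools are hypothetical.
import random

clusters = {
    "school_A": ["a1", "a2", "a3"],
    "school_B": ["b1", "b2"],
    "school_C": ["c1", "c2", "c3", "c4"],
    "school_D": ["d1", "d2"],
}

random.seed(5)
chosen = random.sample(list(clusters), k=2)              # randomly select 2 clusters
sample = [unit for c in chosen for unit in clusters[c]]  # include all units in each
print(chosen, sample)
```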

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity . However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.

The American Community Survey is an example of simple random sampling . In order to collect detailed data on the population of the US, the Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data is then collected from as large a percentage as possible of this random subset.
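A minimal Python sketch; the population of 500 and sample of 50 are hypothetical.

```python
# Simple random sampling: every member of a (hypothetical) population of
# 500 has an equal chance of ending up in the sample of 50.
import random

population = [f"participant_{i}" for i in range(1, 501)]
random.seed(11)
sample = random.sample(population, k=50)   # 50 members, all equally likely
print(len(sample), sample[:3])
```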

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

Blinding is important to reduce research bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behavior in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

  • In a single-blind study , only the participants are blinded.
  • In a double-blind study , both participants and experimenters are blinded.
  • In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analyzing the data.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyze your data.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.
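For illustration, a tiny Python sketch that combines one respondent's hypothetical item responses into an overall scale score:

```python
# Combining one respondent's hypothetical answers to four 5-point Likert
# items into an overall scale score (sometimes treated as interval data).
items = [4, 5, 3, 4]              # individual items: ordinal responses

scale_score = sum(items)          # overall score: 16 out of a possible 20
mean_score = scale_score / len(items)
print(scale_score, mean_score)    # 16 4.0
```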

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalization .

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .
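As a highly simplified illustration of steps 3 through 5, here is a toy Python sketch that assigns keyword-based codes to hypothetical interview snippets and tallies recurring themes; real qualitative coding is interpretive, not purely mechanical.

```python
# A highly simplified sketch of steps 3-5: assign keyword-based codes to
# hypothetical interview snippets, then tally them to spot recurring themes.
# Real qualitative coding is interpretive, not purely mechanical.
from collections import Counter

codes = {"time": ["deadline", "schedule"], "support": ["mentor", "help"]}
snippets = [
    "My mentor gave me help with the analysis",
    "The deadline ruined my schedule",
    "I needed help meeting the deadline",
]

assigned = []
for text in snippets:
    for code, keywords in codes.items():
        if any(word in text.lower() for word in keywords):
            assigned.append(code)   # step 4: assign codes to the data

print(Counter(assigned))            # step 5: Counter({'support': 2, 'time': 2})
```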

There are five common approaches to qualitative research:

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it’s important to consider how you will operationalize the variables that you want to measure.
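One hypothetical way to record an operationalization before data collection is as a small data structure, shown below using the social anxiety example; the class and field names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Operationalization:
    concept: str                 # the abstract idea being studied
    variable: str                # a measurable property of the concept
    indicators: list = field(default_factory=list)  # how it is quantified

social_anxiety = Operationalization(
    concept="social anxiety",
    variable="anxiety in social situations",
    indicators=[
        "self-rating score on a validated anxiety scale",
        "count of avoided crowded-place visits per week",
        "heart rate change when entering a social setting",
    ],
)
print(social_anxiety)
```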

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g. understanding the needs of your consumers or user testing your website)
  • You can control and standardize the process for high reliability and validity (e.g. choosing appropriate measurements and sampling methods)

However, there are also some drawbacks: data collection can be time-consuming, labor-intensive and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control and randomization.

In restriction, you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching, you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable.

In statistical control, you include potential confounders as variables in your regression.

In randomization, you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.
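The simulated sketch below contrasts a naive regression with one that statistically controls for a confounder. The data-generating numbers (a true treatment effect of 2.0, a confounder that drives both exposure and outcome) are assumptions chosen to make the bias visible, and statsmodels is assumed available.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
confounder = rng.normal(size=n)                # e.g., age
treatment = confounder + rng.normal(size=n)    # confounder influences exposure
outcome = 2.0 * treatment + 3.0 * confounder + rng.normal(size=n)

# Naive model (omits the confounder): biased estimate of the treatment effect
naive = sm.OLS(outcome, sm.add_constant(treatment)).fit()

# Adjusted model: confounder included as a control variable
X = sm.add_constant(np.column_stack([treatment, confounder]))
adjusted = sm.OLS(outcome, X).fit()

print("naive effect:   ", round(naive.params[1], 2))     # inflated, well above 2
print("adjusted effect:", round(adjusted.params[1], 2))  # close to the true 2.0
```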

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause, while the dependent variable is the supposed effect. A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables, or even find a causal relationship where none exists.

Yes, a study can include multiple independent or dependent variables, but including more than one of either type requires multiple research questions.

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable.

To ensure the internal validity of an experiment, you should only change one independent variable at a time.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both!

You want to find out how blood sugar levels are affected by drinking diet soda and regular soda, so you conduct an experiment.

  • The type of soda – diet or regular – is the independent variable.
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of soda.

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.
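A minimal simulation of the soda example follows, with an assumed true effect baked in, showing the manipulated independent variable and the measured dependent variable; all numbers are invented.

```python
import random
import statistics

random.seed(3)
blood_sugar = {"diet": [], "regular": []}
for _ in range(30):
    soda = random.choice(["diet", "regular"])   # manipulated independent variable
    baseline = random.gauss(90, 5)              # assumed baseline blood sugar
    effect = 25 if soda == "regular" else 0     # assumed true effect of sugar
    blood_sugar[soda].append(baseline + effect + random.gauss(0, 5))

for soda, values in blood_sugar.items():
    # dependent variable measured per group
    print(soda, round(statistics.mean(values), 1))
```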

In non-probability sampling, the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling, voluntary response sampling, purposive sampling, snowball sampling, and quota sampling.

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling, systematic sampling, stratified sampling, and cluster sampling.
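A minimal sketch of two of these methods over an invented student population is shown below; the strata (study year) and the sample size of 100 are arbitrary assumptions.

```python
import random

random.seed(1)
population = [{"id": i, "year": random.choice([1, 2, 3, 4])} for i in range(1000)]

# Simple random sampling: every member has an equal chance of selection
srs = random.sample(population, k=100)

# Stratified sampling: sample proportionally within each stratum (study year)
strata = {}
for person in population:
    strata.setdefault(person["year"], []).append(person)

stratified = []
for year, members in strata.items():
    k = round(100 * len(members) / len(population))  # proportional allocation;
    stratified.extend(random.sample(members, k))     # totals may be off by one

print(len(srs), len(stratified))
```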

Using careful research design and sampling procedures can help you avoid sampling bias. Oversampling can be used to correct undercoverage bias.

Some common types of sampling bias include self-selection bias, nonresponse bias, undercoverage bias, survivorship bias, pre-screening or advertising bias, and healthy user bias.

Sampling bias is a threat to external validity – it limits the generalizability of your findings to a broader group of people.

A sampling error is the difference between a population parameter and a sample statistic.

A statistic refers to measures about the sample, while a parameter refers to measures about the population.

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

Samples are used to make inferences about populations. Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.
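The following sketch (simulated heights, numpy assumed available) makes the distinction concrete: it computes a population parameter, a sample statistic, and the sampling error between them.

```python
import numpy as np

rng = np.random.default_rng(7)
population = rng.normal(loc=170, scale=10, size=100_000)  # e.g., heights in cm

parameter = population.mean()     # usually unknown in practice
sample = rng.choice(population, size=100, replace=False)
statistic = sample.mean()         # what we actually observe

print(f"population parameter: {parameter:.2f}")
print(f"sample statistic:     {statistic:.2f}")
print(f"sampling error:       {statistic - parameter:.2f}")
```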

There are seven threats to external validity: selection bias, history, the experimenter effect, the Hawthorne effect, the testing effect, aptitude-treatment interaction, and the situation effect.

The two types of external validity are population validity (whether you can generalize to other groups of people) and ecological validity (whether you can generalize to other situations and settings).

The external validity of a study is the extent to which you can generalize your findings to different groups of people, situations, and measures.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyze behavior over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study.

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data is available for analysis; other times your research question may only require a cross-sectional study to answer it.

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

The 1970 British Cohort Study, which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study.

Longitudinal studies are better for establishing the correct sequence of events, identifying changes over time, and providing insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

Longitudinal studies and cross-sectional studies are two different types of research design. In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

Longitudinal study:

  • Repeated observations over time
  • Observes the same sample multiple times
  • Follows changes in participants over time

Cross-sectional study:

  • Observations at a single point in time
  • Observes different samples (a “cross-section”) of the population
  • Provides a snapshot of society at a given point

There are eight threats to internal validity: history, maturation, instrumentation, testing, selection bias, regression to the mean, social interaction, and attrition.

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
  • If you want to analyze a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

A confounding variable, also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design, it’s important to identify potential confounding variables and plan how you will reduce their impact.

Discrete and continuous variables are two types of quantitative variables:

  • Discrete variables represent counts (e.g. the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g. water volume or weight).

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results.
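A small hedged sketch of checking variable types before choosing a test is shown below, using pandas; the column names and data are invented, and a column’s dtype is only a first hint, not a substitute for knowing how the data were generated.

```python
import pandas as pd

df = pd.DataFrame({
    "cereal_brand": ["A", "B", "A", "C"],    # categorical: classification
    "race_position": [1, 2, 3, 4],           # categorical: rank (ordinal)
    "num_children": [0, 2, 1, 3],            # quantitative: discrete count
    "weight_kg": [61.2, 74.8, 68.0, 80.5],   # quantitative: continuous amount
})
# Mark group labels explicitly as categorical rather than free-form strings
df["cereal_brand"] = df["cereal_brand"].astype("category")

print(df.dtypes)
```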

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause, while a dependent variable is the effect.

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The independent variable is the amount of nutrients added to the crop field.
  • The dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design.

Experimental design means planning a set of procedures to investigate a relationship between variables. To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.
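On the question of how many subjects to include, a common approach is an a priori power analysis. The sketch below assumes a two-group comparison, a medium standardized effect size (Cohen’s d = 0.5), a 5% false-positive rate, and 80% power; statsmodels is assumed available, and the assumed effect size is the load-bearing guess.

```python
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # assumed standardized difference between groups
    alpha=0.05,       # tolerated false-positive rate
    power=0.8,        # desired probability of detecting a real effect
)
print(f"subjects needed per group: {n_per_group:.0f}")  # ~64 under these assumptions
```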

Internal validity is the degree of confidence that the causal relationship you are testing is not influenced by other factors or variables.

External validity is the extent to which your results can be generalized to other contexts.

The validity of your experiment depends on your experimental design.

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.
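As one concrete reliability check, the sketch below computes Cronbach’s alpha, a common internal-consistency measure, for an invented set of item scores. This illustrates one facet of reliability only, not a full reliability assessment.

```python
import numpy as np

# rows = respondents, columns = items belonging to the same scale
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1).sum()
total_var = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")  # closer to 1 = more internally consistent
```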

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.

Methodology refers to the overarching strategy and rationale of your research project. It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys, and statistical tests).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section.

In a longer or more complex research project, such as a thesis or dissertation, you will probably include a methodology section, where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.



Validity, reliability, and generalizability in qualitative research

Lawrence Leung

1 Department of Family Medicine, Queen's University, Kingston, Ontario, Canada

2 Centre of Studies in Primary Care, Queen's University, Kingston, Ontario, Canada

In general practice, qualitative research contributes as significantly as quantitative research, in particular regarding psycho-social aspects of patient care, health services provision, policy setting, and health administration. In contrast to quantitative research, qualitative research as a whole has been constantly critiqued, if not disparaged, for the lack of consensus on assessing its quality and robustness. This article illustrates with five published studies how qualitative research can impact and reshape the discipline of primary care, spiraling out from clinic-based health screening to community-based disease monitoring, evaluation of out-of-hours triage services, a provincial psychiatric care pathways model and, finally, national legislation of core measures for children’s healthcare insurance. Fundamental concepts of validity, reliability, and generalizability as applicable to qualitative research are then addressed with an update on the current views and controversies.

Nature of Qualitative Research versus Quantitative Research

The essence of qualitative research is to make sense of and recognize patterns among words in order to build up a meaningful picture without compromising its richness and dimensionality. Like quantitative research, qualitative research aims to seek answers to questions of “how, where, when, who and why” with a perspective to build a theory or refute an existing one. Unlike quantitative research, which deals primarily with numerical data and their statistical interpretations under a reductionist, logical and strictly objective paradigm, qualitative research handles nonnumerical information and its phenomenological interpretation, which inextricably tie in with human senses and subjectivity. While human emotions and perspectives from both subjects and researchers are considered undesirable biases confounding results in quantitative research, the same elements are considered essential and inevitable, if not treasurable, in qualitative research, as they invariably add extra dimensions and colors to enrich the corpus of findings. However, the issue of subjectivity and contextual ramifications has fueled incessant controversies regarding yardsticks for the quality and trustworthiness of qualitative research results for healthcare.

Impact of Qualitative Research upon Primary Care

In many ways, qualitative research contributes significantly, if not more so than quantitative research, to the field of primary care at various levels. Five qualitative studies are chosen to illustrate how various methodologies of qualitative research helped in advancing primary healthcare, from novel monitoring of chronic obstructive pulmonary disease (COPD) via mobile-health technology,[1] informed decision-making for colorectal cancer screening,[2] triaging out-of-hours GP services,[3] evaluating care pathways for community psychiatry[4] and finally prioritization of healthcare initiatives for legislation purposes at the national level.[5] With the recent advances in information technology and mobile connected devices, self-monitoring and management of chronic diseases via tele-health technology may seem beneficial to both the patient and healthcare provider. Recruiting COPD patients who were given tele-health devices that monitored lung functions, Williams et al.[1] conducted phone interviews, analyzed the transcripts via a grounded theory approach, and identified themes which enabled them to conclude that such a mobile-health setup helped to engage patients with better adherence to treatment and an overall improvement in mood. Such positive findings were in contrast to previous studies, which opined that elderly patients were often challenged by operating computer tablets[6] or conversing with the tele-health software.[7] To explore the content of recommendations for colorectal cancer screening given out by family physicians, Wackerbarth et al.[2] conducted semi-structured interviews with subsequent content analysis and found that most physicians delivered information to enrich patient knowledge with little regard to patients’ true understanding, ideas, and preferences in the matter. These findings suggested room for improvement for family physicians to better engage their patients in recommending preventative care. Faced with various models of out-of-hours triage services for GP consultations, Egbunike et al.[3] conducted thematic analysis on semi-structured telephone interviews with patients and doctors in various urban, rural and mixed settings. They found that the efficiency of triage services remained a prime concern for both users and providers, among issues of access to doctors and unfulfilled/mismatched expectations from users, which could arouse dissatisfaction and legal implications. In the UK, a care pathways model for community psychiatry had been introduced, but its benefits were unclear. Khandaker et al.[4] hence conducted a qualitative study using semi-structured interviews with medical staff and other stakeholders; adopting a grounded-theory approach, major themes emerged which included improved equality of access, more focused logistics, increased work throughput and better accountability for community psychiatry provided under the care pathway model. Finally, at the US national level, Mangione-Smith et al.[5] employed a modified Delphi method to gather consensus from a panel of nominators who were recognized experts and stakeholders in their disciplines, and identified a core set of quality measures for children’s healthcare under the Medicaid and Children’s Health Insurance Program. These core measures were made transparent for public opinion and later passed on for full legislation, hence illustrating the impact of qualitative research upon social welfare and policy improvement.

Overall Criteria for Quality in Qualitative Research

Given the diverse genera and forms of qualitative research, there is no consensus on how to assess any piece of qualitative research work. Various approaches have been suggested, the two leading schools of thought being that of Dixon-Woods et al.,[8] which emphasizes methodology, and that of Lincoln et al.,[9] which stresses the rigor of interpretation of results. By identifying commonalities of qualitative research, Dixon-Woods produced a checklist of questions for assessing the clarity and appropriateness of the research question; the description of, and appropriateness of, sampling, data collection and data analysis; levels of support and evidence for claims; coherence between data, interpretation and conclusions; and finally the level of contribution of the paper. These criteria inform the 10 questions of the Critical Appraisal Skills Programme checklist for qualitative studies.[10] However, these methodology-weighted criteria may not do justice to qualitative studies that differ in epistemological and philosophical paradigms,[11,12] one classic example being positivist versus interpretivist.[13] Equally, without a robust methodological layout, the rigorous interpretation of results advocated by Lincoln et al.[9] will not be good either. Meyrick[14] argued from a different angle and proposed fulfillment of the dual core criteria of “transparency” and “systematicity” for good quality qualitative research. In brief, every step of the research logistics (from theory formation, design of study, sampling, data acquisition and analysis to results and conclusions) has to be validated as sufficiently transparent and systematic. In this manner, both the research process and results can be assured of high rigor and robustness.[14] Finally, Kitto et al.[15] epitomized six criteria for assessing the overall quality of qualitative research: (i) clarification and justification, (ii) procedural rigor, (iii) sample representativeness, (iv) interpretative rigor, (v) reflexive and evaluative rigor and (vi) transferability/generalizability, which also double as evaluative landmarks for manuscript review at the Medical Journal of Australia. As with quantitative research, the quality of qualitative research can be assessed in terms of validity, reliability, and generalizability.

Validity

Validity in qualitative research means “appropriateness” of the tools, processes, and data: whether the research question is valid for the desired outcome, the choice of methodology is appropriate for answering the research question, the design is valid for the methodology, the sampling and data analysis are appropriate, and finally the results and conclusions are valid for the sample and context. In assessing the validity of qualitative research, the challenge can start from the ontology and epistemology of the issue being studied, e.g. the concept of “individual” is seen differently by humanistic and positive psychologists due to differing philosophical perspectives:[16] where humanistic psychologists believe the “individual” is a product of existential awareness and social interaction, positive psychologists think the “individual” exists side-by-side with the formation of any human being. Setting off on different pathways, qualitative research regarding the individual’s wellbeing will be concluded with varying validity. The choice of methodology must enable detection of findings/phenomena in the appropriate context for it to be valid, with due regard to cultural and contextual variability. For sampling, procedures and methods must be appropriate for the research paradigm and be distinguished between systematic,[17] purposeful[18] and theoretical (adaptive) sampling,[19,20] where systematic sampling has no a priori theory, purposeful sampling often has a certain aim or framework, and theoretical sampling is molded by the ongoing process of data collection and theory in evolution. For data extraction and analysis, several methods have been adopted to enhance validity, including first-tier triangulation (of researchers) and second-tier triangulation (of resources and theories),[17,21] a well-documented audit trail of materials and processes,[22,23,24] multidimensional analysis as concept- or case-orientated[25,26] and respondent verification.[21,27]

Reliability

In quantitative research, reliability refers to the exact replicability of the processes and the results. In qualitative research with its diverse paradigms, such a definition of reliability is challenging and epistemologically counter-intuitive. Hence, the essence of reliability for qualitative research lies in consistency.[24,28] A margin of variability for results is tolerated in qualitative research provided the methodology and epistemological logistics consistently yield data that are ontologically similar but may differ in richness and ambience within similar dimensions. Silverman[29] proposed five approaches to enhancing the reliability of process and results: refutational analysis, constant data comparison, comprehensive data use, inclusion of the deviant case, and use of tables. As data are extracted from the original sources, researchers must verify their accuracy in terms of form and context with constant comparison,[27] either alone or with peers (a form of triangulation).[30] The scope and analysis of the data included should be as comprehensive and inclusive as possible, with reference to quantitative aspects where applicable.[30] Adopting the Popperian dictum of falsifiability as the essence of truth and science, attempted refutation of the qualitative data and analyses should be performed to assess reliability.[31]

Generalizability

Most qualitative research studies, if not all, are meant to study a specific issue or phenomenon in a certain population or ethnic group, of a focused locality in a particular context; hence, generalizability of qualitative research findings is usually not an expected attribute. However, with the rising trend of knowledge synthesis from qualitative research via meta-synthesis, meta-narrative or meta-ethnography, evaluation of generalizability becomes pertinent. A pragmatic approach to assessing generalizability for qualitative studies is to adopt the same criteria as for validity: that is, use of systematic sampling, triangulation and constant comparison, proper audit and documentation, and multi-dimensional theory.[17] However, some researchers espouse the approach of analytical generalization,[32] where one judges the extent to which the findings in one study can be generalized to another under a similar theoretical frame, and the proximal similarity model, where the generalizability of one study to another is judged by similarities between the time, place, people and other social contexts.[33] That said, Zimmer[34] questioned the suitability of meta-synthesis in view of the basic tenets of grounded theory,[35] phenomenology[36] and ethnography.[37] He concluded that any valid meta-synthesis must retain the other two goals of theory development and higher-level abstraction while in search of generalizability, and must be executed as a third-level interpretation using Gadamer’s concepts of the hermeneutic circle,[38,39] dialogic process[38] and fusion of horizons.[39] Finally, Toye et al.[40] reported the practicality of using “conceptual clarity” and “interpretative rigor” as intuitive criteria for assessing quality in meta-ethnography, which somewhat echoes Rolfe’s controversial aesthetic theory of research reports.[41]

Food for Thought

Despite various measures to enhance or ensure the quality of qualitative studies, some researchers have opined from a purist ontological and epistemological angle that qualitative research is not a unified field but an ipso facto diverse one,[8] hence any attempt to synthesize or appraise different studies under one system is impossible and conceptually wrong. Barbour argued from a philosophical angle that these special measures or “technical fixes” (like purposive sampling, multiple coding, triangulation, and respondent validation) can never confer the rigor as conceived.[11] In extremis, Rolfe et al. opined, from the field of nursing research, that any set of formal criteria used to judge the quality of qualitative research is futile and without validity, and suggested that any qualitative report should be judged by the form it is written in (aesthetic) and not by its contents (epistemic).[41] Rolfe’s novel view is rebutted by Porter,[42] who argued via logical premises that two of Rolfe’s fundamental statements were flawed: (i) “the content of research reports is determined by their forms” may not be a fact, and (ii) research appraisal being “subject to individual judgment based on insight and experience” would mean that those without sufficient experience of performing research would be unable to judge adequately – hence an elitist principle. From a realist standpoint, Porter then proposes multiple and open approaches to validity in qualitative research that incorporate parallel perspectives[43,44] and diversification of meanings.[44] Any work of qualitative research, when read, is always a two-way interactive process, such that validity and quality have to be judged by the receiving end too, and not by the researcher end alone.

In summary, the three gold criteria of validity, reliability and generalizability apply in principle to assessing the quality of both quantitative and qualitative research; what differs is the nature and type of processes that ontologically and epistemologically distinguish the two.

Source of Support: Nil.

Conflict of Interest: None declared.
