Observer-expectancy effect

Author: World Heritage Encyclopedia
Language: English
Subject: Cognitive Biases, Between-group design, Backmasking, Expectancy effect
Collection: Cognitive Biases, Cognitive Inertia, Design of Experiments
Publisher: World Heritage Encyclopedia

Observer-expectancy effect

The observer-expectancy effect (also called the experimenter-expectancy effect, expectancy bias, observer effect, or experimenter effect) is a form of reactivity in which a researcher's cognitive bias causes them to unconsciously influence the participants of an experiment. Confirmation bias can lead to the experimenter interpreting results incorrectly because of the tendency to look for information that conforms to their hypothesis, and overlook information that argues against it.[1] It is a significant threat to a study's internal validity, and is therefore typically controlled using a double-blind experimental design.

An example of the observer-expectancy effect is demonstrated in music backmasking, in which hidden verbal messages are said to be audible when a recording is played backwards. Some people expect to hear hidden messages when reversing songs, and therefore hear the messages; to others, the reversed recording sounds like nothing more than random noise. Often when a song is played backwards, a listener will fail to notice the "hidden" lyrics until they are explicitly pointed out, after which they seem obvious. Other prominent examples include facilitated communication and dowsing.

In research, experimenter bias occurs when experimenter expectancies regarding study results bias the research outcome.[2] Examples of experimenter bias include conscious or unconscious influences on subject behavior including creation of demand characteristics that influence subjects, and altered or selective recording of experimental results themselves.[3]

Contents

  • Observer-expectancy effect
  • Where bias can emerge
  • Classification
  • Prevention
  • Examples
    • In medical sciences
    • In physical sciences
    • In forensic sciences
  • In social science
  • See also
  • References
  • External links

Observer-expectancy effect

The experimenter may introduce cognitive bias into a study in several ways. In what is called the observer-expectancy effect, the experimenter may subtly communicate their expectations for the outcome of the study to the participants, causing them to alter their behavior to conform to those expectations. Such observer-bias effects are near-universal wherever human beings interpret data under expectation, and wherever the cultural and methodological norms that promote or enforce objectivity are imperfect.[4]

The classic example of experimenter bias is that of "Clever Hans" (in German, der Kluge Hans), an Orlov Trotter horse claimed by his owner, Wilhelm von Osten, to understand arithmetic. Prompted by the large public interest in Clever Hans, the philosopher and psychologist Carl Stumpf, along with his assistant Oskar Pfungst, investigated these claims. Ruling out simple fraud, Pfungst determined that the horse could answer correctly even when von Osten did not ask the questions. However, the horse was unable to answer correctly when it could not see the questioner, or when the questioner was unaware of the correct answer: when von Osten knew the answers to the questions, Hans answered correctly 89 percent of the time, but when von Osten did not know the answers, Hans answered only six percent of the questions correctly.

Pfungst then examined the behaviour of the questioner in detail, and showed that as the horse's taps approached the right answer, the questioner's posture and facial expression changed in ways consistent with increasing tension, which was released when the horse made the final, correct tap. This change provided a signal that the horse had learned, through reinforcement, to use as its cue to stop tapping.

Experimenter bias also influences human subjects. As an example, researchers compared the performance of two groups given the same task (rating portrait pictures and estimating how successful each individual was on a scale of −10 to 10), but with different experimenter expectations.

In one group (A), experimenters were told to expect positive ratings, while in group B, experimenters were told to expect negative ratings. Ratings collected by group A were significantly and substantially more optimistic than those collected by group B. The researchers suggested that the experimenters had given subtle but clear cues with which the subjects complied.[5]

Where bias can emerge

A review of bias in clinical studies concluded that bias can occur at any or all of the seven stages of research.[2] These include:

  1. Selective background reading
  2. Specifying and selecting the study sample
  3. Executing the experimental manoeuvre (or exposure)
  4. Measuring exposures and outcomes
  5. Data analysis
  6. Interpretation and discussion of results
  7. Publishing the results (or not)

The ultimate source of bias lies in a lack of objectivity. It may occur more often in sociological and medical studies, perhaps because of the incentives involved. Experimenter bias can also be found in the physical sciences, for instance when an experimenter selectively rounds off measurements. Double-blind techniques may be employed to combat such bias.

Classification

Modern electronic or computerized data-acquisition techniques have greatly reduced the likelihood of such bias, but it can still be introduced by a poorly designed analysis technique. Experimenter bias was not well recognized until the 1950s and 1960s, and then primarily in medical experiments and studies. Sackett (1979) catalogued 56 biases that can arise in sampling and measurement in clinical research, across the first six stages of research listed above.[2] These are as follows:

  In reading-up the field:
  1. the biases of rhetoric
  2. the "all's well" literature bias
  3. one-sided reference bias
  4. positive results bias
  5. hot stuff bias

  In specifying and selecting the study sample:
  6. popularity bias
  7. centripetal bias
  8. referral filter bias
  9. diagnostic access bias
  10. diagnostic suspicion bias
  11. unmasking (detection signal) bias
  12. mimicry bias
  13. previous opinion bias
  14. wrong sample size bias
  15. admission rate (Berkson) bias
  16. prevalence-incidence (Neyman) bias
  17. diagnostic vogue bias
  18. diagnostic purity bias
  19. procedure selection bias
  20. missing clinical data bias
  21. non-contemporaneous control bias
  22. starting time bias
  23. unacceptable disease bias
  24. migrator bias
  25. membership bias
  26. non-respondent bias
  27. volunteer bias

  In executing the experimental manoeuvre (or exposure):
  28. contamination bias
  29. withdrawal bias
  30. compliance bias
  31. therapeutic personality bias
  32. bogus control bias

  In measuring exposures and outcomes:
  33. insensitive measure bias
  34. underlying cause bias (rumination bias)
  35. end-digit preference bias
  36. apprehension bias
  37. unacceptability bias
  38. obsequiousness bias
  39. expectation bias
  40. substitution game
  41. family information bias
  42. exposure suspicion bias
  43. recall bias
  44. attention bias
  45. instrument bias

  In analyzing the data:
  46. post-hoc significance bias
  47. data dredging bias (looking for the pony)
  48. scale degradation bias
  49. tidying-up bias
  50. repeated peeks bias

  In interpreting the analysis:
  51. mistaken identity bias
  52. cognitive dissonance bias
  53. magnitude bias
  54. significance bias
  55. correlation bias
  56. under-exhaustion bias

Prevention

Double-blind techniques may be employed to combat bias by keeping both the experimenter and the subjects ignorant of which condition a given subject's data come from.
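As a minimal sketch (in Python, with hypothetical subject IDs and condition names), a double-blind assignment can be organized so that the experimenter and subjects see only opaque codes, while the key mapping codes back to conditions is held by a third party until data collection is complete:

```python
import random

def blind_assign(subject_ids, conditions=("treatment", "control"), seed=None):
    """Randomly assign subjects to conditions behind opaque codes.

    The experimenter works only with `codes`; the `key` that unblinds
    the codes is held by a third party until the study ends.
    """
    rng = random.Random(seed)
    codes = {}  # subject -> opaque code (visible during the study)
    key = {}    # opaque code -> true condition (sealed until analysis)
    for sid in subject_ids:
        code = f"C{rng.randrange(10**6):06d}"
        codes[sid] = code
        key[code] = rng.choice(conditions)
    return codes, key

codes, key = blind_assign(["s1", "s2", "s3", "s4"], seed=42)
# During data collection, only `codes` is consulted; `key` is opened
# afterwards to attach each subject's data to its condition.
```

A real trial would additionally use balanced (blocked) randomization rather than the independent coin flips shown here, so that group sizes come out equal.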

It might be thought that, by the central limit theorem of statistics, collecting more measurements will improve the precision of estimates and thus decrease bias. However, this assumes that the measurements are statistically independent. In the case of experimenter bias, the measurements share a correlated bias: simply averaging such data will not yield a better statistic, because averaging reduces only the independent random error, not the systematic error that the measurements have in common.
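A small simulation makes this concrete. The numbers below (true value, size of the shared bias, noise level) are arbitrary illustrative choices: averaging ten thousand measurements all but eliminates the independent noise, yet the shared systematic bias survives in full.

```python
import random
import statistics

random.seed(0)
true_value = 10.0
shared_bias = 0.5   # systematic experimenter bias common to every measurement
n = 10_000

# Each measurement = true value + the same shared bias + independent noise.
measurements = [true_value + shared_bias + random.gauss(0.0, 1.0)
                for _ in range(n)]

mean = statistics.fmean(measurements)
# The mean converges to true_value + shared_bias (about 10.5), not to
# true_value: averaging shrinks the random noise but not the shared bias.
print(round(mean, 2))
```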

Examples

In medical sciences

In the medical sciences, the complexity of living systems and ethical constraints may limit researchers' ability to perform controlled experiments. In such circumstances, scientific knowledge about the phenomenon under study, together with the systematic elimination of probable causes of bias by detecting confounding factors, is the only way to isolate true cause-and-effect relationships. Experimenter bias in epidemiology has been studied better than in other sciences.

A number of studies into spiritual healing illustrate how the design of a study can introduce experimenter bias into the results. A comparison of two studies shows how a subtle difference in design, in this case whether the intended outcome was framed as positive versus negative or as positive versus neutral, can affect the conclusions. A 1995 paper by Hodges & Scofield on spiritual healing used the growth rate of cress seeds as its outcome measure in order to eliminate a placebo response or participant bias.[6] The study reported positive results: the outcome for each sample was consistent with the healer's intention that healing should or should not occur. However, the healer involved in the experiment was a personal acquaintance of the study authors, raising the distinct possibility of experimenter bias.

A randomized clinical trial published in 2001 investigated the efficacy of spiritual healing (both at a distance and face-to-face) in the treatment of chronic pain in 120 patients.[7] Healers were observed by "simulated healers", who then mimicked the healers' movements with a control group while silently counting backwards in fives, a neutral intention rather than a "should not heal" one. The study found a decrease in pain in all patient groups, but "no statistically significant differences between healing and control groups ... it was concluded that a specific effect of face-to-face or distant healing on chronic pain could not be demonstrated."

In physical sciences

When a signal under study is smaller than the rounding error of a measurement and the data are over-averaged, a positive result may be found where none exists (i.e. a more precise experimental apparatus would conclusively show no signal). For instance, in a study of variation with sidereal time, rounding of measurements by a human who is aware of the measured value may lead to selective rounding, effectively generating a false signal. In such cases a single-blind experimental protocol is required: if the human observer does not know the sidereal time of the measurements, then even though the round-off is non-random it cannot introduce a spurious sidereal variation.
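A toy simulation illustrates the point (the noise level, rounding rule, and "sidereal" sampling scheme here are invented for the sketch, not taken from any real experiment). An observer who, when a reading is ambiguous, rounds in the direction of the expected sinusoidal signal manufactures that signal out of pure noise; a rounding rule that ignores the expectation does not.

```python
import math
import random

random.seed(1)

def biased_round(x, expected):
    # An observer who knows the expected sign of the signal rounds
    # ambiguous readings in that direction instead of to the nearest unit.
    return math.ceil(x) if expected > 0 else math.floor(x)

# Hourly readings over many days; the data are pure noise (no real signal),
# with noise comparable in size to the rounding resolution of one unit.
phases = [2 * math.pi * (t % 24) / 24 for t in range(24 * 500)]
raw = [random.gauss(0.0, 2.0) for _ in phases]

def sidereal_correlation(values):
    # Average projection of the recorded data onto the expected sinusoid.
    return sum(v * math.sin(p) for v, p in zip(values, phases)) / len(values)

biased = [biased_round(x, math.sin(p)) for x, p in zip(raw, phases)]
blind = [round(x) for x in raw]  # rounding rule blind to the expectation

# Expectation-driven rounding produces a clear spurious "sidereal" signal;
# blind rounding leaves only statistical fluctuation around zero.
print(sidereal_correlation(biased), sidereal_correlation(blind))
```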

In forensic sciences

Results of a scientific test may be distorted when the underlying data are ambiguous and the scientist is exposed to domain-irrelevant cues which engage emotion.[8] For instance, forensic DNA results are often ambiguous, and resolving these ambiguities may introduce bias, particularly when interpreting difficult evidence samples such as those that contain mixtures of DNA from two or more individuals, degraded or inhibited DNA, or limited quantities of DNA template. The full potential of forensic DNA testing can only be realized if observer effects are minimized.[9]

In social science

After the data are collected, bias may be introduced during data interpretation and analysis. For example, in deciding which variables to control in analysis, social scientists often face a trade-off between omitted-variable bias and post-treatment bias.[10]

See also

References

  1. ^ Goldstein, Bruce. "Cognitive Psychology". Wadsworth, Cengage Learning, 2011, p. 374
  2. ^ a b Sackett, D. L. (1979). "Bias in analytic research". Journal of Chronic Diseases 32 (1–2): 51–63.  
  3. ^ Barry H. Kantowitz; Henry L. Roediger, III; David G. Elmes (2009). Experimental Psychology. Cengage Learning. p. 371.  
  4. ^ Rosenthal, R. (1966). Experimenter Effects in Behavioral Research. New York: Appleton-Century-Crofts.
  5. ^ Rosenthal, R. (1966). Experimenter Effects in Behavioral Research. New York: Appleton-Century-Crofts. 464 p.
  6. ^ Hodges, RD and Scofield, AM (1995). "Is spiritual healing a valid and effective therapy?". Journal of the Royal Society of Medicine 88 (4): 203–207.  
  7. ^ Abbot, NC, Harkness, EF, Stevinson, C, Marshall, FP, Conn, DA and Ernst, E. (2001). "Spiritual healing as a therapy for chronic pain: a randomized, clinical trial". Pain 91 (1–2): 79–89.  
  8. ^ Risinger, D. M.; Saks, M. J.; Thompson, W. C.; Rosenthal, R. (2002). "The Daubert/Kumho Implications of Observer Effects in Forensic Science: Hidden Problems of Expectation and Suggestion".  
  9. ^ D. Krane, S. Ford, J. Gilder, K. Inman, A. Jamieson, R. Koppl, I. Kornfield, D. Risinger, N. Rudin, M. Taylor, W.C. Thompson (2008). "Sequential unmasking: A means of minimizing observer effects in forensic DNA interpretation".  
  10. ^ King, Gary. "Post-Treatment Bias in Big Social Science Questions", accessed February 7, 2011.

External links

  • Skeptic's Dictionary on the Experimenter Effect
  • Article discussing expectancy effects in paranormal investigation
  • Article by Rupert Sheldrake
This article was sourced from World Heritage Encyclopedia under the Creative Commons Attribution-ShareAlike License; additional terms may apply.