Rather than present a stimulus and measure a subject's response, Bem measured the subject's response before the stimulus was presented. In some earlier experiments by other psi researchers, participants were hooked up to physiological measuring equipment, similar to a lie detector, that measured emotional arousal. They sat before a computer and watched randomly selected images; some were erotic, and some were very negative images ("like the bloody photos you see on CSI").
"Your physiology jumps when you see one of those pictures after watching a series of landscapes or neutral pictures," Bem said. "But the remarkable finding is that your physiology jumps before the provocative picture actually appears on the screen -- even before the computer decides which picture to show you. What it shows is that your physiology can anticipate an upcoming event even though your conscious self might not."
Bem's nine experiments demonstrated similar unconscious influences from future events. For example, in one experiment, participants saw a list of words and were then given a test in which they tried to retype as many of the words as they could remember. Next, a computer randomly selected some of the words from the list and gave the participants practice exercises on them. When their earlier memory test results were checked, it was found that they had remembered more of the words they were to practice later than words they were not going to practice. In other words, the practice exercises had reached back in time to help them on the earlier test.
All but one of the nine experiments confirmed the hypothesis that psi exists. The odds against the combined results being due to chance or statistical flukes are about 74 billion to 1, according to Bem.
The complete database comprises 90 experiments conducted between 2001 and 2013. These originated in 33 different laboratories located in 14 countries and involved 12,406 participants. The full database with corresponding effect sizes, standard errors, and category assignments is presented in Table S1 along with a forest plot of the individual effect sizes and their 95% confidence intervals.
The first question addressed by the meta-analysis is whether the database provides overall evidence for the anomalous anticipation of random future events. As shown in the first and second rows of Table 1, the answer is yes: The overall effect size (Hedges’ g) is 0.09, combined z = 6.33, p = 1.2 × 10⁻¹⁰. The Bayesian BF value is 5.1 × 10⁹, greatly exceeding the criterion value of 100 that is considered to constitute “decisive evidence” for the experimental hypothesis (Jeffreys, 1998). Moreover, the BF value is robust across a wide range of the scaling factor r, ranging from a high value of 5.1 × 10⁹ when we set r = 0.1 to a low value of 2.0 × 10⁹ when r = 1.0.
The second question is whether independent investigators can successfully replicate Bem’s original experiments. As shown in the third and fourth rows of Table 1, the answer is again yes: When Bem’s experiments are excluded, the combined effect size for attempted replications by other investigators is 0.06, z = 4.16, p = 1.1 × 10⁻⁵, and the BF value is 3,853, which again greatly exceeds the criterion value of 100 for “decisive evidence.”
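For readers unfamiliar with how a "combined effect size" and z like the ones quoted above are produced, here is a minimal sketch of fixed-effect (inverse-variance) pooling. The per-study effect sizes and standard errors below are hypothetical placeholders, not the values from the paper's Table S1:

```python
import math

# Hypothetical per-study effect sizes (Hedges' g) and standard errors;
# the real values are tabulated in the meta-analysis's Table S1.
effects = [0.12, 0.05, 0.20, -0.03, 0.09]
ses     = [0.08, 0.06, 0.10,  0.07, 0.05]

# Inverse-variance weighting: studies with smaller standard errors
# (more precise estimates) count for more in the pooled estimate.
weights = [1 / se**2 for se in ses]
g_pooled = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

# Combined z and its two-sided p-value under the normal distribution.
z = g_pooled / se_pooled
p = math.erfc(abs(z) / math.sqrt(2))
print(g_pooled, z, p)
```

The pooled estimate lands near the middle of the individual studies but with a much smaller standard error, which is how many small, individually marginal effects can combine into a very small p-value.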
A study published last year in a scientific journal claimed to have found strong evidence for the existence of psychic powers such as ESP. The paper, written by Cornell professor Daryl J. Bem, was published in The Journal of Personality and Social Psychology and quickly made headlines around the world for its implication: that psychic powers had been scientifically proven.
Bem’s experiments suggested that college students could accurately predict random events, like whether a computer will flash a photograph on the left or right side of its screen. However, scientists and skeptics soon questioned Bem’s study and methodology. Bem stood by his findings and invited other researchers to repeat his studies.
Replication is, of course, the hallmark of valid scientific research: if the findings are true and accurate, other researchers should be able to repeat them. Otherwise the results may simply be due to normal and expected statistical variations and errors. If other experimenters cannot get the same result using the same techniques, it’s usually a sign that the original study was flawed in one or more ways.
Last year a group of British researchers tried and failed to replicate Bem’s experiments. A team of researchers including Professor Chris French, Stuart Ritchie and Professor Richard Wiseman collaborated to accurately replicate Bem’s final experiment, and found no evidence for precognition. Their results were published in the online journal PLoS ONE.
originally posted by: SuperFrog
A failed experiment is - A failed experiment...
originally posted by: PublicOpinion
Except there were problems with their lousy attempts to reproduce said findings. I had a big debate regarding this topic in another forum a while ago, but sadly can't remember the details.
Anyway, just writing to subscribe, taking a look into the new study now.
TY, OP! S&F.
Bem’s experiments have been extensively debated and critiqued. The first published critique appeared in the same issue of the journal as Bem’s original article (Wagenmakers et al., 2011). These authors argued that a Bayesian analysis of Bem’s results did not support his psi-positive conclusions and recommended that all research psychologists abandon frequentist analyses in favor of Bayesian ones.
In his own critique, Francis (2012) remarks that “perhaps the most striking characteristic of [Bem’s] study is that [it meets] the current standards of experimental psychology. ... Bem has put empirical psychologists in a difficult position: forced to consider either revising beliefs about the fundamental nature of time and causality or revising beliefs about the soundness of MRP” (p. 371).
originally posted by: PublicOpinion
a reply to: Agartha
Except there were problems with their lousy attempts to reproduce said findings.
The one exception was a replication failure conducted by Wagenmakers et al. (2012), which yielded a non-significant effect in the unpredicted direction, ES = -0.02, t(99) = -0.22, ns. These investigators wrote their own version of the software and used a set of erotic photographs that were much less sexually explicit than those used in Bem’s experiment and its exact replications.
So all of the independent replication attempts were "lousy"? Care to specify why?
This has nothing to do with science, as it is actually debunked. Sure, Dr. Bem has impressive bios, but his research is not worth anything as long as it is not repeatable.
Anything 'new' from him is just not worth the time or space here.
Of the seven experiments that we conducted, four were conducted online. It is not immediately clear why precognition would not be observed online (i.e., the theoretical development of the construct does not specify whether this should moderate the effect), but we thought that it was reasonable to give the online environment additional consideration. One possible concern might be that, if people are taking the test at some remote location, their surroundings might be sufficiently distracting to make them less attentive
At that point the program, using a pseudo-random number generator, randomly assigned 24 words to be practiced; six words were randomly chosen from each of the four groups of 12 words.
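The assignment step described above is straightforward to sketch in code. This is an illustrative reconstruction, not the experimenters' actual program; the category names and word lists are placeholders:

```python
import random

# Placeholder word pool: four categories of 12 words each, mirroring
# the 4 x 12 design described in the quote above.
categories = {
    "catA": [f"catA_word{i}" for i in range(12)],
    "catB": [f"catB_word{i}" for i in range(12)],
    "catC": [f"catC_word{i}" for i in range(12)],
    "catD": [f"catD_word{i}" for i in range(12)],
}

# Randomly choose six practice words from each group (24 in total),
# leaving the other six per group as unpracticed controls.
practice = []
for words in categories.values():
    practice.extend(random.sample(words, 6))

print(len(practice))  # 24 practice words
```

Balancing the draw within each category (rather than sampling 24 from the full pool of 48) ensures the practiced and unpracticed sets are matched on word type, so any recall difference can't be attributed to one category being easier.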
originally posted by: theantediluvian
I wholly disagree. As long as his experiments are competently designed, conducted in a careful and controlled manner and the results are reported in good faith with conclusions drawn using commonly accepted methods of analysis, then his experiments are just as scientific as any other.
originally posted by: theantediluvian
That doesn't mean that his hypothesis isn't wrong or that there aren't unintended flaws in his experiments or the analyses of the data but that's not the same as saying, "This has nothing to do with science" or "Anything 'new' from him is just not worth the time or space here" which in this case are more statements of your own opinions on the subject of his research than valid criticism of his professionalism or methodologies.
originally posted by: theantediluvian
I don't believe your criticism could be more off base. Bem has been completely open and forthcoming and has not only encouraged other scientists to replicate his experiments, he's gone to extraordinary means to facilitate these efforts.
originally posted by: theantediluvian
As to this statement: "it is actually debunked"
Is it? What are the criteria for "debunked"? One experiment that fails to replicate the observed results? Two? Three? Ten? How versed are you on the specifics of the experiments you're citing? Well enough to rule out differences in conformity to the designs of the original experiments? I noticed some differences reading Correcting the Past: Failures to Replicate Psi that are of some potential concern:
Conducted online? They're admitting that at the very least, 4 out of 7 of their experiments were conducted in a far less controlled environment. I'm not a scientist but I am an IT professional and this leaves me with a number of questions about external influences:
- How was the program structured? Were the images or words selected by a server and the selections transmitted to client software, or were they generated on the participants' computers? Were the images stored on a server and transmitted to the client, or were they stored on the participant's PC?
- What about network related factors like latency? How about hardware performance factors? Perhaps the effects Bem measured in his experiments exist at intervals short enough to be adversely affected by any of these uncontrolled environmental factors.
- What PRNG did they use? IIRC, Bem (who importantly here has a background in physics) used a hardware RNG.
...and so on and so forth.
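The PRNG point in the list above is worth unpacking: a software pseudo-random number generator is a deterministic algorithm, fully fixed by its seed, whereas a hardware RNG draws on a physical noise source. A minimal illustration in Python (the `secrets` call here taps the operating system's entropy pool, which is still not the hardware quantum RNG sometimes attributed to Bem's setup):

```python
import random
import secrets

# A seeded software PRNG is deterministic: the same seed always
# reproduces the same "random" sequence.
rng_a = random.Random(42)
rng_b = random.Random(42)
seq_a = [rng_a.randint(0, 1) for _ in range(10)]
seq_b = [rng_b.randint(0, 1) for _ in range(10)]
assert seq_a == seq_b  # identical sequences from identical seeds

# secrets draws from OS entropy instead, so the outcome is not
# reproducible from any seed under the program's control.
flip = secrets.randbelow(2)  # 0 or 1
print(flip)
```

Whether that distinction could matter for a precognition experiment is exactly the kind of design detail the poster is asking about: with a PRNG, the "future" selection is in principle already determined by the seed at the time the participant responds.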
I've read a number of articles, blog posts, etc that are critical of Bem's research and analysis and I noticed an apparent regard for the man even among those who disagreed with him. I think perhaps in your own skeptical zeal you've gone beyond what could be reasonably justified.
You can see the psi assumption at work in this assessment. The real challenge, as I see it, is to prove that these statistical deviations from chance are not due to statistical flukes; faulty equipment; fine equipment affected by temperature, humidity, altitude, electro-magnetic interference from nearby equipment or personal items carried by subjects or researchers, etc.; errors in data recording, collection, collating, and in calculations from the data.
There is also the problem of cheating and sloppiness. Zealous psi researchers, depending on very small deviations from chance (Bem's subjects scored 1.7 to 3 percent above chance overall) to get the statistic they want, can't be assumed to always be honest and careful in the running of their labs.
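As an aside on the "1.7 to 3 percent above chance" figure: small deviations like these can still clear conventional significance thresholds once enough trials accumulate, which is why the debate centers on systematic error rather than sample size. A sketch using a normal approximation to the binomial test, with hypothetical trial counts:

```python
import math

# Hypothetical numbers: many trials with a hit rate 3 percentage
# points above a 50% chance baseline.
n = 3600
hit_rate = 0.53
hits = round(n * hit_rate)

# Normal approximation to the binomial test against p = 0.5.
mean = n * 0.5
sd = math.sqrt(n * 0.5 * 0.5)
z = (hits - mean) / sd
p = math.erfc(z / math.sqrt(2))  # two-sided p-value
print(z, p)
```

With these illustrative numbers the z-statistic comes out around 3.6, comfortably "significant" despite the tiny absolute edge, which is precisely why the skeptic's list of mundane error sources matters: any of them could plausibly produce a bias of a few percent.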
In addition to the researchers mentioned above who failed to replicate Bem's experiment, Jeff Galak of Carnegie Mellon University and Leif D. Nelson of University of California, Berkeley, also failed in their attempt at replicating one of Bem's nine experiments (number 8, the one involving "recall"). The conclusion stated in the abstract of this other study reads: