Is controversial research into telepathy and other seeming ‘super-powers’ of the mind starting to be more accepted by orthodox science? In its latest issue, American Psychologist – the official peer-reviewed academic journal of the American Psychological Association – has published a paper that reviews the research so far into parapsychological (‘psi’) abilities, and concludes that the “evidence provides cumulative support for the reality of psi, which cannot be readily explained away by the quality of the studies, fraud, selective reporting, experimental or analytical incompetence, or other frequent criticisms.”
The new paper – “The experimental evidence for parapsychological phenomena: A review”, by Etzel Cardeña of Lund University – also discusses recent theories from physics and psychology “that present psi phenomena as at least plausible”, and concludes with recommendations for further progress in the field.
Cardeña also notes that, despite its current, controversial reputation, the field of psi research has a long history of introducing methods later integrated into psychology (e.g. the first use of randomization, along with systematic use of masking procedures; the first comprehensive use of meta-analysis; study preregistration; pioneering contributions to the psychology of hallucinations, eyewitness reports, and dissociative and hypnotic phenomena). And some of psychology’s most respected names, historically, have also shared an interest in parapsychology, including William James, Hans Berger (inventor of the EEG), Sigmund Freud, and former American Psychological Association (APA) president Gardner Murphy.
Lack of evidence is still lack of evidence, though.
originally posted by: CreationBro
a reply to: neoholographic
Excellent thread neo.
The points you've touched on are very important to consider, at the very least.
Outright denial and logical fallacies such as claiming that a lack of evidence is evidence of absence have no place in true empiricism. Needless to say, some of us are certain of various evidences.
I've been down this road. Put the most hardened skeptic, cookie-cutter, by-the-books psych doctor around me for even a single in-person conversation, and they'll realize they were, quite simply, short-sighted. What can I say... I bleed high strangeness, as many ATS'rs do.
I really don't mean that to give offense; I mean it as a blunt fact, and many, many folks out there are right on that level of awareness.
The paper begins by noting the reason for presenting an overview and discussion of the topic: “Most psychologists could reasonably be described as uninformed skeptics — a minority could reasonably be described as prejudiced bigots — where the paranormal is concerned”. Indeed, it quotes one cognitive scientist as stating that the acceptance of psi phenomena would “send all of science as we know it crashing to the ground”.
Results of the GCP (Global Consciousness Project) studies have been published on many occasions over the past 16 years, but have never been widely noted by the general media. Now may be the time to start paying attention.
Why? For one thing, the statistical certainty has mounted to the point that it's hard to ignore. Toward the end of 1998, the odds against chance started exceeding one in 20, an acceptable level in many disciplines. Then, with added studies, the level of certainty began to zoom. By the year 2000, the odds against chance exceeded one in 1,000; and in 2006, they broke through the one in a million level; they're now more than one in a trillion with no upper limit in sight.
This far exceeds the bar for statistical significance used in many fields, such as medicine and weather forecasting, where odds against chance of 20-to-one to 100-to-one are commonly considered sufficient. The bar is set unusually high for the Higgs boson: data validating its existence were considered acceptable only once the odds against chance exceeded one in 3.5 million (the “five sigma” standard). The GCP figure of one in a trillion is more than 285,000 times more stringent than that.
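For readers who want to sanity-check these equivalences, here is a minimal Python sketch (an illustration assuming the one-tailed convention used for the particle-physics “five sigma” threshold; none of these numbers come from the GCP data itself) that converts “odds against chance” into p-values and sigma levels:

```python
# Convert "odds against chance" (1 in N) into one-tailed p-values and the
# equivalent number of standard deviations (sigma) of a normal test statistic.
# Assumes the one-tailed convention used for the "five sigma" discovery threshold.
from scipy.stats import norm

odds_against_chance = {
    "1 in 20 (p = 0.05)": 20,
    "1 in 1,000": 1_000,
    "1 in a million": 1_000_000,
    "1 in 3.5 million (Higgs threshold)": 3_500_000,
    "1 in a trillion": 1_000_000_000_000,
}

for label, n in odds_against_chance.items():
    p = 1.0 / n              # probability of a result this extreme by chance
    sigma = norm.isf(p)      # z-score whose one-tailed tail area equals p
    print(f"{label:38s} p = {p:.2e}  ~ {sigma:.2f} sigma")
```

Run as written, this shows that “one in 3.5 million” corresponds to roughly 5 sigma and “one in a trillion” to roughly 7 sigma.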
originally posted by: Phage
a reply to: Woodcarver
You, you, you evidencist you!
Psi seems to fit into the category of gods. Not really testable.
When an experiment is shown to be statistically equal to chance, it's because the "vibes" got messed up by the experiment. How very quantum-like. The thing is, there is math behind quantum phenomena which predicts exactly that.
Where's the psi math?
originally posted by: Woodcarver
a reply to: neoholographic
It says that people who don’t believe are prejudiced bigots.
The paper begins by noting the reason for presenting an overview and discussion of the topic: “Most psychologists could reasonably be described as uninformed skeptics — a minority could reasonably be described as prejudiced bigots — where the paranormal is concerned”. Indeed, it quotes one cognitive scientist as stating that the acceptance of psi phenomena would “send all of science as we know it crashing to the ground”.
But we don’t believe because the “evidence” is not there.
originally posted by: Phage
a reply to: surfer_soul
That's why science says "ick". Science likes to measure things. If you get my meaning.
You can't blame them for that. It's what they do. Can't measure God. Nope. Can't measure psi. Nope.
Continue doing your praying. Continue with your psi stuff.
Why do you require validation from those you disdain?
We included studies that used an experimental protocol to test cigarette pack warnings and reported data on both pictorial and text-only conditions. 37 studies with data on 48 independent samples (N = 33,613) met criteria.
Pictorial warnings were more effective than text-only warnings for 12 of 17 effectiveness outcomes (all p < 0.05). Relative to text-only warnings, pictorial warnings (1) attracted and held attention better; (2) garnered stronger cognitive and emotional reactions; (3) elicited more negative pack attitudes and negative smoking attitudes; and (4) more effectively increased intentions not to start smoking and to quit smoking. Participants also perceived pictorial warnings as being more effective than text-only warnings across all 8 perceived effectiveness outcomes.
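As a rough illustration of the kind of per-outcome comparison summarized above, here is a minimal two-proportion z-test in Python; the group sizes and “intend to quit” counts are hypothetical and are not taken from the review:

```python
# Minimal two-proportion z-test: does the pictorial-warning group report
# intentions to quit more often than the text-only group?
# NOTE: the counts below are hypothetical, for illustration only.
from math import sqrt
from scipy.stats import norm

quit_pictorial, n_pictorial = 220, 500   # hypothetical "intend to quit" counts
quit_text_only, n_text_only = 180, 500

p1 = quit_pictorial / n_pictorial
p2 = quit_text_only / n_text_only
p_pooled = (quit_pictorial + quit_text_only) / (n_pictorial + n_text_only)

se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_pictorial + 1 / n_text_only))
z = (p1 - p2) / se
p_value = norm.sf(z)  # one-tailed: pictorial > text-only

print(f"p1 = {p1:.3f}, p2 = {p2:.3f}, z = {z:.2f}, one-tailed p = {p_value:.4f}")
```

A full meta-analysis converts each such comparison into an effect size and pools it across the 48 samples, but the per-outcome building block looks essentially like this.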
Modeling dose-response relationships of drugs is essential to understanding their effect on patient outcomes under realistic circumstances. While intention-to-treat analyses of clinical trials provide the effect of assignment to a particular drug and dose, they do not capture observed exposure after factoring in non-adherence and dropout. We develop Bayesian methods to flexibly model dose-response relationships of binary outcomes with continuous treatment, allowing for treatment effect heterogeneity and a non-linear response surface. We use a hierarchical framework for meta-analysis with the explicit goal of combining information from multiple trials while accounting for heterogeneity. In an application, we examine the risk of excessive weight gain for patients with schizophrenia treated with the second-generation antipsychotics paliperidone, risperidone, or olanzapine in 14 clinical trials. Averaging over the sample population, we found that olanzapine contributed to a 15.6% (95% CrI: 6.7, 27.1) excess risk of weight gain at a 500 mg cumulative dose. Paliperidone conferred a 3.2% (95% CrI: 1.5, 5.2) and risperidone a 14.9% (95% CrI: 0.0, 38.7) excess risk at 500 mg olanzapine-equivalent cumulative doses. Blacks had an additional 6.8% (95% CrI: 1.0, 12.4) risk of weight gain over non-blacks at 1000 mg olanzapine-equivalent cumulative doses of paliperidone.
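The abstract describes a hierarchical Bayesian dose-response model for a binary outcome with a continuous (cumulative-dose) treatment. The sketch below is a much-simplified, non-hierarchical stand-in: a two-parameter logistic dose-response whose posterior is evaluated on a grid, applied to simulated data. The dose range, priors, and simulated effect are assumptions for illustration, not the authors' model.

```python
# A much-simplified Bayesian logistic dose-response fit on simulated data.
# Model: P(weight gain | dose) = sigmoid(alpha + beta * dose_in_grams).
# The posterior over (alpha, beta) is evaluated on a grid under Normal priors.
# All numbers here are simulated assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 400 patients with cumulative doses of 0-1000 mg (stored in grams).
dose_g = rng.uniform(0.0, 1.0, size=400)
true_alpha, true_beta = -2.0, 1.5                  # assumed "true" parameters
p_true = 1.0 / (1.0 + np.exp(-(true_alpha + true_beta * dose_g)))
gained_weight = rng.binomial(1, p_true)            # binary outcome per patient

# Grid posterior under independent Normal(0, 5^2) priors on alpha and beta.
alphas = np.linspace(-5.0, 2.0, 100)
betas = np.linspace(-2.0, 5.0, 100)
A, B = np.meshgrid(alphas, betas, indexing="ij")

logit = A[..., None] + B[..., None] * dose_g       # shape (100, 100, 400)
loglik = np.where(gained_weight == 1,
                  -np.log1p(np.exp(-logit)),       # log P(y = 1)
                  -np.log1p(np.exp(logit)))        # log P(y = 0)
loglik = loglik.sum(axis=-1)
logprior = -(A**2 + B**2) / (2 * 5.0**2)
logpost = loglik + logprior
post = np.exp(logpost - logpost.max())
post /= post.sum()                                 # normalized grid posterior

# Posterior mean excess risk of weight gain at a 500 mg dose versus no drug.
risk_500 = 1.0 / (1.0 + np.exp(-(A + B * 0.5)))
risk_0 = 1.0 / (1.0 + np.exp(-A))
excess = float(((risk_500 - risk_0) * post).sum())
print(f"posterior mean excess risk at 500 mg vs 0 mg: {excess:.1%}")
```

The paper's actual model is hierarchical across the 14 trials, allows a non-linear response surface, and accounts for non-adherence and dropout; the grid sketch only illustrates the basic machinery of turning binary outcomes plus a dose covariate into a posterior excess-risk estimate.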
In 2011, one of the authors (DJB) published a report of nine experiments in the Journal of Personality and Social Psychology purporting to demonstrate that an individual’s cognitive and affective responses can be influenced by randomly selected stimulus events that do not occur until after his or her responses have already been made and recorded, a generalized variant of the phenomenon traditionally denoted by the term precognition. To encourage replications, all materials needed to conduct them were made available on request. We here report a meta-analysis of 90 experiments from 33 laboratories in 14 countries which yielded an overall effect greater than 6 sigma, z = 6.40, p = 1.2 × 10⁻¹⁰ with an effect size (Hedges’ g) of 0.09. A Bayesian analysis yielded a Bayes Factor of 1.4 × 10⁹, greatly exceeding the criterion value of 100 for “decisive evidence” in support of the experimental hypothesis. When DJB’s original experiments are excluded, the combined effect size for replications by independent investigators is 0.06, z = 4.16, p = 1.1 × 10⁻⁵, and the BF value is 3,853, again exceeding the criterion for “decisive evidence.” The number of potentially unretrieved experiments required to reduce the overall effect size of the complete database to a trivial value of 0.01 is 544, and seven of eight additional statistical tests support the conclusion that the database is not significantly compromised by either selection bias or by “p-hacking”—the selective suppression of findings or analyses that failed to yield statistical significance. P-curve analysis, a recently introduced statistical technique, estimates the true effect size of our database to be 0.20, virtually identical to the effect size of DJB’s original experiments (0.22) and the closely related “presentiment” experiments (0.21). We discuss the controversial status of precognition and other anomalous effects collectively known as psi.
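The headline numbers in this abstract (a combined z and its one-tailed p) come from pooling evidence across many experiments. One standard way to do that is Stouffer's method, sketched below on made-up per-study z-scores; the values are hypothetical, and only the combining formula mirrors the kind of pooling reported.

```python
# Stouffer's method: combine one-tailed z-scores from k independent studies
# into a single z, then convert that to a one-tailed p-value.
# The per-study z-scores below are hypothetical, for illustration only.
import numpy as np
from scipy.stats import norm

study_z = np.array([1.2, 0.4, 2.1, -0.3, 1.7, 0.9, 1.5, 0.2, 1.1, 0.8])

combined_z = study_z.sum() / np.sqrt(len(study_z))   # equal-weight Stouffer z
combined_p = norm.sf(combined_z)                     # one-tailed p-value

print(f"combined z = {combined_z:.2f}, one-tailed p = {combined_p:.2e}")
```

A full meta-analysis like the one described would use weighted variants and pool Hedges' g with a random-effects model, but the equal-weight version shows the core idea: many individually modest results can combine into a very large z.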
originally posted by: neoholographic
Indeed, it quotes one cognitive scientist as stating that the acceptance of psi phenomena would “send all of science as we know it crashing to the ground”.