originally posted by: HarbingerOfShadows
The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.
www.plosbiology.org...
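To make the abstract's claims concrete, here is a minimal illustrative sketch, in Python, of the kind of rank-correlation comparison it describes: agreement between two assessors scoring the same papers, and between one assessor's scores and citation counts. The scores and citation figures below are invented for illustration; this is not the study's data or analysis code.

# Illustrative only: made-up assessor scores and citation counts.
from scipy.stats import spearmanr

assessor_a = [4, 6, 5, 8, 3, 7, 6, 5]          # hypothetical merit scores from assessor A
assessor_b = [5, 5, 6, 7, 4, 6, 7, 4]          # hypothetical merit scores from assessor B
citations  = [12, 30, 18, 95, 7, 40, 22, 15]   # hypothetical citation counts for the same papers

# Rank correlation between the two assessors' scores for the same papers
rho_ab, p_ab = spearmanr(assessor_a, assessor_b)

# Rank correlation between assessor A's scores and the citations the papers accrued
rho_ac, p_ac = spearmanr(assessor_a, citations)

print(f"assessor vs assessor:  rho={rho_ab:.2f} (p={p_ab:.3f})")
print(f"assessor vs citations: rho={rho_ac:.2f} (p={p_ac:.3f})")

A "moderate" correlation of this kind still leaves most of the variation in scores unexplained, which is the sense in which the authors describe subjective assessments as error-prone.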
originally posted by: HarbingerOfShadows
originally posted by: Mary Rose
Your link is to "Scientists 'bad at judging peers' published work,' says new study," on m.phys.org. Evidently m.phys.org is Phys.Org Mobile.
Prof. Eyre-Walker and Dr Nina Stoletzki studied three methods of assessing published scientific papers, using two sets of peer-reviewed articles. The three assessment methods the researchers looked at were:
• Peer review: subjective post-publication peer review where other scientists give their opinion of a published work;
• Number of citations: the number of times a paper is referenced as a recognised source of information in another publication;
• Impact factor: a measure of a journal's importance, determined by the average number of times papers in a journal are cited by other scientific papers.
The findings, say the authors, show that scientists are unreliable judges of the importance of a scientific publication: they rarely agree on the importance of a particular paper and are strongly influenced by where the paper is published, over-rating science published in high-profile scientific journals. Furthermore, the authors show that the number of times a paper is subsequently referred to by other scientists bears little relation to the underlying merit of the science.
As Eyre-Walker puts it: "The three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased and expensive method by which to assess merit. While the impact factor may be the most satisfactory of the methods considered, since it is a form of prepublication review, it is likely to be a poor measure of merit, since it depends on subjective assessment."
m.phys.org...
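For context on the third bullet in the article above, the standard two-year impact factor is simple arithmetic: citations received in a given year to items the journal published in the previous two years, divided by the number of citable items it published in those two years. A minimal sketch in Python, with invented figures rather than any real journal's data:

# Illustrative only: the figures are invented, not any real journal's data.
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    # Two-year impact factor: citations this year to papers from the previous
    # two years, divided by the number of citable papers published in those years.
    return citations_to_prev_two_years / citable_items_prev_two_years

# e.g. 2,400 citations in 2014 to papers published in 2012-2013,
# against 800 citable papers published in those two years:
print(impact_factor(2400, 800))   # 3.0

Note that this is a property of the journal as a whole, averaged over all of its papers, not of any individual article.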
originally posted by: Mary Rose
The third quote emphasized what is at stake when a reviewer conducts a peer review. Maybe that could be called “peer review conflict of interest.”
But it all amounts to peer review tyranny if the end result is a procedure whose absolute power fails to safeguard scientific progress.
originally posted by: MagoSA
. . . Peer review is a non-objective review of a scholar's paper, data, methodology, and conclusions that is submitted to multiple other scholars in the same discipline as the submitters in order to verify and authenticate the material contained.
As anthropologists can attest, the peer review does not ensure honesty and objective consideration of data and results. Often, peer review will reject cutting-edge material for a number of reasons, the most common of them being that they do not match what the reviewer has invested in his/her own research and conclusions. . . .
www.abovetopsecret.com...
originally posted by: biffcartright
they hired their own peer review team to keep their data inside
originally posted by: Mary Rose
originally posted by: krash661
but also nor is anything accurate or correct.
When you say that, it could be as compared to official mainstream science dogma, including theory which masquerades as fact.