
Faith, Fantasy, And The Protean Peer Review Process


posted on Feb, 19 2013 @ 02:55 PM
reply to post by Klassified
 
I'm hesitant to even post my opinions as at least two highly respectable, and respected, academics have already posted their views from the inside of that process. Each has expressed different views and each one is accurate.

I guess therein lie the flaws and the benefits of peer review? It's a human construct and prone to subjective biases. The thing is, how could the peer review system be improved? Or, more critically, for those who have distrust or contempt for the process, what could replace it that would be an improvement? Tough call, huh?

Someone who Byrd might know in a professional capacity is the archaeologist Dr Michael E Smith. I've been reading his blog (Publishing Archaeology) for a long time now. He's described some of the issues surrounding peer review in articles like Rejected by Science!

It was one of his subsequent postings that has stayed in my mind and features a pretty decent quote that relates to the value of peer review.


“Inquiry of a scientific nature, I stipulate, aims to be cumulative, evidence-based (empirical), falsifiable, generalizing, nonsubjective, replicable, rigorous, skeptical, systematic, transparent, and grounded in rational argument. There are differences of opinion over whether, or to what extent, science lives up to these high ideals. Even so, these are the ideals to which natural and social scientists generally aspire, and they help to define the enterprise in a general way and to demarcate it from other realms.” (Gerring 2012:11).
Science Type 1 versus Science Type 2

I think that these views really express the spirit of peer review even though it might sometimes fall short.

In your examples of purported failures of peer review, my eye was caught by the Virginia Steen-McIntyre reference. That's a can of worms! It remains far more complex than a simple refusal of science and archaeology to accept extraordinary evidence of the early peopling of the Americas. I know Byrd is intimately familiar with this subject and will be far more insightful than I can be. Nevertheless, it's still part of an ongoing process of peer review, with new evidence constantly being examined. A large problem is that a human population can't simply be claimed to have existed ~250ky before the accepted arrival of people in the Americas.

It's a popular topic in conspiracy culture, but I've yet to see any conspiracy theorists actually discussing the nitty-gritty of conclusively dating diatoms at the site. There's been a long-standing discussion at Hall of Maat about problems surrounding the dating of the site. If you use their search engine and look for 'Sam VanLandingham,' you'll see what I mean.



posted on Feb, 19 2013 @ 04:13 PM
reply to post by Kandinsky
 

Thanks for taking the time to add your thoughts, K. I knew this was a sensitive topic at both ends of the spectrum when I took it on, but I felt it was something that needed to be addressed, especially since we seem to have both extremes represented on ATS: those who see it as infallible, and those who have inordinate disdain for it.

So far, I think one of the best things to come out of this thread is that the system is aware of its own shortcomings, and at least endeavors to be self-policing at some level. As I told Byrd, I don't have all the answers either. I have a few ideas. The important thing is that those inside the system have a desire to address the issues. And it seems that both of the esteemed members you mentioned do. That says much to me.

My hope for this thread was just to shine a more realistic light on the workings of the peer review process. Whether or not I accomplished that, I don't know. But I most certainly appreciate the input I've had. And I will be looking over the links you supplied for further reading.


In your examples of purported failures of peer review, my eye was caught by the Virginia Steen-McIntyre reference. That's a can of worms!

Yes. I knew it would be, and I questioned using it at first. But then, it is a good example of the debate that sometimes rages even among scientists.

This was supposed to be a precursor to another thread about my brief personal involvement with the Russell Burrows Cave story. But I'm still in the process of deciding how I'm going to lay that out without naming people who would prefer I not send attention their way on a public forum. At this point, I'm wondering if I'd be opening my own can of worms that I, and others, will wish I hadn't. We'll see.



posted on Feb, 19 2013 @ 08:54 PM
reply to post by Klassified
 


Have you heard of the Olmec?

www.ipoaa.com...



posted on Feb, 19 2013 @ 10:42 PM

Originally posted by Kashai
reply to post by Klassified
 


Have you heard of the Olmec?

www.ipoaa.com...


Yes I have, and I shall be giving that page a read through.


But for now, it's time to go to bed.



posted on Feb, 20 2013 @ 12:16 PM

Originally posted by Klassified
I have not been through the process. Although the last time I had a hypothesis examined by a "peer", she found all kinds of errors.

Did you rework the hypothesis after that? That's what we have to do. I'm currently doing my PhD dissertation, and it's a hair-puller. I gave the committee MY research design (and grand ideas) and they came after it with chainsaws and hatchets. I moaned and wailed because they whacked down my Grand Idea into something that I thought was trivial -- BUT -- when I took their approach, I found that what they were pointing me at is a nice little foundational concept.

Then they told me that this would be one of the foundations of my "Life's Work." (oh sure. No pressure THERE, right??)


But when you're looking at days' worth of reviewing time, I can only imagine someone thinking "how can I weed out some of this, and lessen this mountain staring me in the face?"

What'll kill a paper first is Bad Research Design (or a design that isn't explained well). After that, it's Bad Statistical Analysis.

Research Design transparency is CRITICAL. If we can't tell how you did the research, we can't prove that you really have something good, and we can't confirm what you did by replicating your research. In a recent paper presented to the City of Dallas by the Centers for Disease Control (their assessment of the West Nile Virus aerial spraying), the CDC went over how they gathered and measured data in very great detail before they gave their conclusions. While the "science-y stuff" probably put everyone to sleep, it told me that the CDC didn't just show up and blow smoke at everyone. It confirmed some of the data I had gathered independently and highlighted the real problems cities have in dealing with mosquitoes (they don't know how to collect data or what to do with it when they get it.)


The reviewer doesn't need to know the author's name, or their background.

When a paper is passed for review, the names of the authors and their affiliations NEVER show up anywhere on the review copies.


Reviewers should have guidelines established, and personal opinions should be noted as such. I would like to say reviewers should be certified by a board.

Magazines do have rubrics in place, yes, and where appropriate, most reviewers are board certified. Some disciplines don't have boards, though.


The problem is, so many of us who are not scientists or academics, like myself (I'm self-employed; I consult, and I fix computer and audio-visual problems for consumers and businesses), rely on peer review as something akin to the "Good Housekeeping" seal of approval. However, the more I read, and the more I talk to a few folks I know, the more I have found that isn't what it is at all.

Perhaps it's just that the failures are so loudly talked about, and the millions (literally) of good papers are ignored?


Too much has been made of peer review to the public, and not enough measures have been taken within academia to ensure that good theories don't fall through the cracks because of the faults and limitations of the review process.

How do you know they are "good theories"?

Seriously.

Look around the board at all the theories you see here. None of them would get into any journals (including PLOS, and that has VERY low standards of "peer review"; I hesitate to call it that, frankly). There's no methodology, there's no background, it's just "wow! I got a great idea!"

My "great ideas" (I haz them) have to be well documented and explained before they can get into a paper. When I don't have enough data (example: the "heat island effect" on ponds) then I will write an article for a magazine about the topic (because I'm retired, and all the science I do is funded solely by my not-very-glorious pension and by social security (I'm older than dirt.))

I have seen a lot of self-made mathematicians on the Internet who explain how they've solved all sorts of grand mathematical things. I'm married to a mathematician, so I've gotten Math By Osmosis, and the people complaining that "academics are ignoring me" or "the government is suppressing this" would fail your average senior-level college math course. Most of them would fail a junior-level math course, and some of them couldn't pass a high school algebra course... yet they're telling us they've solved everything with math.

So... how do you know good ideas are suppressed?


Evidently, those on ATS who see peer-review as the gospel, don't read those letters, or go to conferences.


True. We talk about it, but it's not as exciting as mentioning Kim Kardashian's newest boyfriend or bathing suit. There are a zillion conferences with exciting stuff, and journals and all, but the latest moral outrage gets a lot more attention.
edit on 20-2-2013 by Byrd because: (no reason given)



posted on Feb, 20 2013 @ 05:35 PM
reply to post by Byrd
 



Did you rework the hypothesis after that?

Yes, I did, and I found out she was right about some things. In my searches, I came across other things I hadn't before. It was well worth the second time through, since I really was fighting a losing battle to start with.



I'm currently doing my PhD dissertation, and it's a hair-puller. I gave the committee MY research design (and grand ideas) and they came after it with chainsaws and hatchets. I moaned and wailed because they whacked down my Grand Idea into something that I thought was trivial -- BUT -- when I took their approach, I found that what they were pointing me at is a nice little foundational concept.

This sounds almost identical to what my brother-in-law has said. Except for the "chainsaws and hatchets".
But the hair-pulling? Yeah. His is for theology. Boy, have we had some discussions.
Am I right, though, in assuming that the review process for dissertation ideas and the one for journal publication are two different processes, albeit with some similarities? And are all reviewers board certified at universities? They obviously are not all certified for journals.



What'll kill a paper first is Bad Research Design (or a design that isn't explained well). After that, it's Bad Statistical Analysis.

Research Design transparency is CRITICAL. If we can't tell how you did the research, we can't prove that you really have something good, and we can't confirm what you did by replicating your research. In a recent paper presented to the City of Dallas by the Centers for Disease Control (their assessment of the West Nile Virus aerial spraying), the CDC went over how they gathered and measured data in very great detail before they gave their conclusions. While the "science-y stuff" probably put everyone to sleep, it told me that the CDC didn't just show up and blow smoke at everyone. It confirmed some of the data I had gathered independently and highlighted the real problems cities have in dealing with mosquitoes (they don't know how to collect data or what to do with it when they get it.)

All of this makes perfect sense to me. This is what I would think a reviewer would be looking for, among other things.



When a paper is passed for review, the names of the authors and their affiliations NEVER show up anywhere on the review copies.

I think that's great. But the same rule doesn't apply when an editor calls up a buddy or two and has them go through it (per the article I quoted from).

"The editor looks at the title of the paper and sends it to two friends whom the editor thinks know something about the subject. If both advise publication the editor sends it to the printers. If both advise against publication the editor rejects the paper. If the reviewers disagree the editor sends it to a third reviewer and does whatever he or she advises. This pastiche—which is not far from systems I have seen used—is little better than tossing a coin..."

Also, as you said, not all disciplines have boards, so it may not hold true in many cases.



Perhaps it's just that the failures are so loudly talked about, and the millions (literally) of good papers are ignored?

This I can see as being true to some extent.



How do you know they are "good theories"?

I don't. I'm not the reviewer. But I know history shows that many of yesterday's bad theories, crackpots, and charlatans are now accepted science. At least until the next crackpot comes along, dies, and everyone figures out he or she was right.



Look around the board at all the theories you see here.

No argument there. Or with what you say after that. But on here, we are allowed the luxury of speculation. None of us are submitting a paper for review. Although I'm reminded of a quote by Einstein: "If at first the idea is not absurd, then there is no hope for it." Of course, that absurdity must be followed by some substance.



So... how do you know good ideas are suppressed?

Suppressed? This is something I tried to steer away from in my OP. I didn't want a bunch of unfounded accusations, even though I most certainly believe it happens, partly because I am intimately familiar with the underhanded and unethical things that go on in the technology industries. Overlooked? Ridiculed without proper and thorough review? Way outside the consensus? Yes. And history shows it. The relationship between Edison, Westinghouse, and Tesla is an example that comes to mind. Also see the links in my OP.

Continued...
edit on 2/20/2013 by Klassified because: redaction



posted on Feb, 20 2013 @ 06:08 PM
reply to post by Byrd
 

...Continued.
If I can digress for a moment: as I've said too many times on ATS, I spent a good portion of my life in the Christian ranks. I have been a deacon and a board member, as well as a preacher and teacher, none of which I am proud of now. I had the chance to talk to scholars and professors, as well as others in the upper echelons of the church. One thing I know with certainty is that things which go against consensus, no matter how sound, get thrown out quickly with no remorse. I've been told straightforwardly by a few that they didn't even want to hear anything that would change the status quo.
This attitude is by no means unique to the religious world of academics. However, I bring that up not to indict the academics of this world, but just to note that it does exist. It is not the main point of my OP.

Speaking of which, my main point is simply what I have already stated:


My hope for this thread was just to shine a more realistic light on the workings of the peer review process.

It's not a perfect system, and after reading your responses and other articles, I'm convinced it doesn't need to be. It is what it is. As long as those reading "peer-reviewed" publications have a realistic understanding and expectation of what it is and its limitations, it plays a vital role in our sciences. It's when it is subjected to either extreme that it becomes unrealistic and is presented as something it isn't.
edit on 2/20/2013 by Klassified because: clarity



posted on Feb, 20 2013 @ 10:22 PM
link   
reply to post by Byrd
 


Lol, you are a bit gobbledygooky there, mate.
Seems some reverse osmosis, or what we pilots call an ILS approach, is at play.
Hope your osmosis has gone to sufficient depth to figure out equations from a given metric.
But yes, peer review is a reasonable process, since the journals have no other way of judging an article without it.
But regardless of how good an article is, you will not get past the established status quo guarded by the so-called TPTB science watchdogs.


