Originally posted by ArchAngel
The cam CAN be used to find life, but it is not simple, and you must know what you are looking for first...
Thanks for finally coming out and admitting what I've been saying all along... that your ORIGINAL contention about the CAM being life-blind is quite
simply false. That's progress, and a hopeful sign.
If your original thread title, and subsequent arguments had been "Is NASA trying to avoid looking for life in their picture releases?", you would
have heard little (if any) objection from me... because you would be properly framing your argument as usage of the tool, not the tool being
broken.
I understand what you are saying.
For someone who "understands" what I'm saying, it's remarkable how little that "understanding" impinges on the argument that you are offering.
I'd expect exactly the same failure to address the substantive issues from someone who DOESN'T understand it.
If your responses are indistinguishable from someone who doesn't understand, why would anyone believe that you actually DO understand?
You do not understand what I am saying.
I'd wager that I understand the thrust of your point a heck of a lot better than you think. I simply believe that the "argument" you offer, even
when understood, is effectively meaningless.
To summarize, your objection is that the Pancam can be used to facilitate lying. It can be selectively used to create false impressions.
That makes it no different than any other data-gathering tool in existence, and makes it no different than the English language for that matter.
Just because a tool can be used for evil purposes does not make the tool evil, nor does it make it broken.
The tool can also be used for good purposes, and despite the fact that it is designed specifically for geological purposes, it can be used effectively
to search for a myriad of different life forms.
It is a tool that can be used for good, or for evil. It can be used effectively, or deliberately used ineffectively... for instance, if someone
wanted to hide something.
The problem is, you are acting as if the Pancam's situation is somehow different from that of any other filter set that might be used in front of
any CCD camera in existence.
It's not. Any tool, and any set of filters, can be used to deceive... and some are far less effective than others at DISTINGUISHING THE CAUSE of a
particular color signal. That act of distinguishing is what reveals truth, instead of creating more deception.
Let's take the two non-Pancam filter sets you included in the initial graphic you posted. (Astronomik and IDAS/Hutech Type III RGB)
Assume that either of those filter sets had been the set that was sent up instead of the existing ones. Here's your task:
Assume that the Pancam is looking at a uniform concentration of chlorophyll B laden material, lit with sunlight at high noon. Compute the (relative)
pre-normalized CCD counts for any given pixel in the image (assume all pixels give identical values), for each of the filter types. Show all work.
It's not necessary to come up with exact values, merely the relative signal sizes from each filter.
(Hint: the signal is proportional to the "area under the curve" you get when you multiply the filter transmission curve by the spectral response
curve of the underlying CCD camera (which is already published) and by the material's reflectance. You'll need the REFLECTANCE data for
chlorophyll B... not the absorption curves.)
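As a rough illustration of the computation being asked for, here is a minimal Python sketch. Every number in it (filter transmissions, CCD quantum efficiency, chlorophyll-B reflectance) is an invented placeholder, NOT published Pancam, Astronomik, or IDAS data, and the sunlight is assumed spectrally flat for simplicity:

```python
def relative_signal(wavelengths, filter_t, ccd_qe, reflectance):
    """Relative CCD count: trapezoidal integral over wavelength of
    filter transmission * CCD quantum efficiency * surface reflectance.
    (Illumination is assumed spectrally flat, so it drops out as a
    constant factor -- we only want RELATIVE signal sizes.)"""
    integrand = [f * q * r for f, q, r in zip(filter_t, ccd_qe, reflectance)]
    total = 0.0
    for i in range(len(wavelengths) - 1):
        dw = wavelengths[i + 1] - wavelengths[i]
        total += 0.5 * (integrand[i] + integrand[i + 1]) * dw
    return total

# Coarse 400-700 nm grid, 50 nm steps (all values illustrative only).
wl      = [400, 450, 500, 550, 600, 650, 700]
ccd_qe  = [0.30, 0.45, 0.55, 0.60, 0.55, 0.45, 0.30]
# Chlorophyll-laden material: reflectance peak in the green, strong
# absorption in the blue and red (shape only, not measured data).
chl_refl = [0.05, 0.08, 0.25, 0.40, 0.20, 0.06, 0.30]

# A broad "green" RGB-style filter vs. a narrow bandpass near 530 nm.
broad_green = [0.05, 0.40, 0.90, 0.90, 0.40, 0.05, 0.00]
narrow_530  = [0.00, 0.00, 0.95, 0.10, 0.00, 0.00, 0.00]

for name, filt in [("broad green", broad_green), ("narrow 530nm", narrow_530)]:
    print(name, round(relative_signal(wl, filt, ccd_qe, chl_refl), 2))
```

The actual exercise is the same loop, just with the real published transmission and quantum-efficiency curves substituted for the placeholders, and a real solar spectrum folded into the integrand.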
Based on that finding, for each of those filter sets, roughly what "color" (computer monitor RGB value set) would the hypothetical field of
chlorophyll B register as on your screen? How would that result differ from looking at it with your own eyes?
Most importantly, given raw image data from those filter sets and the combined color signal of an unknown image which showed a "greenish" tint, how
would you go about distinguishing whether the color value in the picture was a result of chlorophyll B being present, versus some kind of greenish
mineral like olivine?
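To make that ambiguity concrete, here is a toy Python example. The reflectance numbers are invented, but the underlying feature, the chlorophyll "red edge" (a sharp rise in reflectance just past ~700 nm that green minerals lack), is real, and it is exactly the kind of feature a pair of narrow bandpass filters can isolate where broad RGB filters cannot:

```python
# Two hypothetical reflectance spectra (invented numbers): "plant-like"
# rises sharply past ~700 nm; "mineral-like" stays roughly flat. Both
# could render as a similar "greenish" tint through broad RGB filters.
plant   = {550: 0.30, 650: 0.08, 680: 0.05, 750: 0.60}
mineral = {550: 0.28, 650: 0.25, 680: 0.24, 750: 0.26}

def band_ratio(spectrum):
    """Red-edge index: reflectance in a narrow band at 750 nm divided
    by reflectance in a narrow band at 680 nm."""
    return spectrum[750] / spectrum[680]

print("plant  :", round(band_ratio(plant), 2))    # ratio well above 1
print("mineral:", round(band_ratio(mineral), 2))  # ratio near 1
```

With only three broad overlapping curves, both spectra collapse to nearly the same RGB triple; with narrow bands straddling the red edge, the two are unambiguous.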
When you can answer those questions, you should know why it is technically much BETTER to have multiple non-overlapping narrow bandpass filters,
instead of three broad overlapping curves.
The RGB curves you referenced are nice for producing images that mimic human vision reasonably well... but you should recognize that human vision is a
GROSSLY POOR TOOL for distinguishing WHY something is colored the way it is. Human vision basically can't tell the (color) difference between plant
life and paint that is of the same basic shade. Sure, it can give hints that something might be useful to examine in more detail, but so can a wide
variety of other filter data sets.
Since this is a science mission, it is far more useful to gather data that can be used to identify WHY a signal is present, than it is to simply mimic
what a human would see.
That's why so much of today's hyperspectral orbital analysis is done with a large number of narrow-bandpass, non-overlapping filters... because it
gives good answers for solving problems and answering questions, instead of just making pretty pictures.
[Edited on 2-16-2004 by BarryKearns]