EXCLUSIVE: NASA Is Not Altering Mars Colors.

posted on Jan, 18 2004 @ 09:34 AM
This article is a brief summarised explanation of how the PanCam on the Mars Spirit Rover operates, in relation to the strange appearance of the calibration sundial in some pictures. The question was first raised by ATS member AArchAngel, and has been discussed at length in this AboveTopSecret forum thread and ATSNN story:
thread
 

Mars Spirit Rover Picture analysis.

In this thread I will attempt to summarise my posts to the larger thread.

What are you talking about?

Ok, the initial alarm was raised after it was noticed that the color-calibration sundial mounted on the rover looked markedly different in the Mars panorama shots compared to its regular appearance.



Immediately, wide-ranging theories began to pop up. At this stage I knew very little of the particulars of the PanCam, so I decided to go and see what the horse's mouth had to say. I sent out a swag of emails: to the NASA Mars rover team, to the Athena Instrument team at Cornell University, and, as a long shot, to Assoc. Professor James Bell, who is the Pancam Payload Element Lead for the mission.

Now, after getting no response from the Athena team and only an automated response from the NASA team, I was amazed and delighted to see that Dr. Bell had indeed taken time out of his busy schedule to help explain this quirk in the panorama pictures. His email response is below:

Thanks for writing. The answer is that the color chips on the sundial have different colors in the near-infrared range of Pancam filters. For example, the blue chip is dark near 600 nm, where humans see red light, but is especially bright at 750 nm, which is used as "red" for many Pancam images. So it appears pink in RGB composites. We chose the pigments for the chips on purpose this way, so they could provide different patterns of brightnesses regardless of which filters we used. The details of the colors of the pigments are published in a paper I wrote in the December issue of the Journal of Geophysical Research (Planets), in case you want more details...


All of us tired folks on the team are really happy that so many people around the world are following the mission and sending their support and encouragement...


Thanks,


Jim Bell
Cornell U.


Now, as for the pink tab appearing where the blue one should be, that email is in fact the complete answer, but it's not easily understandable to the layman. Below I will attempt to explain why this occurs.





[Edited on 10-3-2004 by SkepticOverlord]



posted on Jan, 18 2004 @ 09:35 AM
Digital Cameras

Firstly, we need to understand how the PanCam, and indeed digital photography in general, works.

Luckily for us we have our good friends at www.howstuffworks.com... to turn to.

How Digital Cameras Work

It would be worthwhile to read the entire article on howstuffworks for a fuller understanding of the processes at work, but because I know you are all busy (lazy?) I will summarise.

Basically, the heart of a digital camera is the charge-coupled device, or CCD. The CCD converts light hitting it into electrical impulses: the brighter the light, the stronger the impulse. Now, CCDs are color-blind; all they do is signal how bright the light hitting them is. That is all well and good for black-and-white photography, but for color we need to do more. To get a color picture, we need to record images through the CCD using a series of three filters: a red filter, a green filter, and a blue filter. These are then recombined afterwards to give a color representation of the scene. (Note: cheaper options like the Bayer filter pattern are often used in commercial digital cameras, but they use interpolation and are consequently less accurate than three-filter methods.)
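To make that concrete, here is a minimal sketch of the idea in Python (using the NumPy and Pillow libraries, which are my choice for illustration, not anything the rover uses); the filenames are placeholders for any three same-sized grayscale exposures:

# Minimal sketch: stack three single-filter grayscale exposures into one RGB composite.
# Filenames are placeholders; any three same-sized grayscale images will do.
import numpy as np
from PIL import Image

red_plate   = np.array(Image.open("red_filter.png").convert("L"))
green_plate = np.array(Image.open("green_filter.png").convert("L"))
blue_plate  = np.array(Image.open("blue_filter.png").convert("L"))

# Each plate only records brightness; colour appears when the three are combined.
composite = np.dstack([red_plate, green_plate, blue_plate])
Image.fromarray(composite, mode="RGB").save("composite.png")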

Never True Color

Quite a big deal has been made of NASA not sending 'True Color' images back from Mars. The problem with this argument is the fact that no digital images are ever 'True Color'. They are all composites. We cannot at present make a digital camera that sees images as the human eye does. The human eye also has three types of color receptor, but, being biological, each receptor responds over a range of wavelengths.

science.howstuffworks.com...

From howstuffworks.com


In the diagram above, the wavelengths of the three types of cones (red, green and blue) are shown. The peak absorbancy of blue-sensitive pigment is 445 nanometers, for green-sensitive pigment it is 535 nanometers, and for red-sensitive pigment it is 570 nanometers.


Now, the PanCam

Some reading about the PanCam:
athena.cornell.edu...

From this document, we can find the wavelength values for the different filters on the PanCam:


LEFT CAMERA..............RIGHT CAMERA

L1. EMPTY................R1. 430 (SP) *
L2. 750 (20).............R2. 750 (20)
L3. 670 (20).............R3. 800 (20)
L4. 600 (20).............R4. 860 (25)
L5. 530 (20).............R5. 900 (25)
L6. 480 (25).............R6. 930 (30)
L7. 430 (SP)*............R7. 980 (LP)*
L8. 440 Solar ND.........R8. 880 Solar ND

*SP indicates short-pass filter; LP indicates long-pass filter

Table 2.1.2-1: Pancam Multispectral Filter Set: Wavelength (and Bandpass) in nm


Typical RGB wavelengths for recording and display are red 600 nm, green 530 nm and blue 480 nm. As we can see, these coincide with the L4, L5 and L6 filters on the PanCam. The difference is that in this panorama image, and in most images taken by the rover, the L2 filter is used for the red channel instead of L4. L2 sits at 750 nm, right at the extreme end of the visible spectrum in the near-infrared range. This increases the range of the spectrum that can be recorded by the PanCam, allowing more detail to be recorded and making it easier to see into the shadows and so forth.
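For reference, the relevant left-camera filters can be jotted down in code. This is just the wavelength data from the table above, plus the two channel combinations discussed here (a sketch, nothing more):

# Left-camera Pancam filter centres in nm, copied from the table above.
PANCAM_LEFT = {"L2": 750, "L3": 670, "L4": 600, "L5": 530, "L6": 480}

# Approximately-true-colour composite: red / green / blue channels.
TRUE_COLOUR_SET = ("L4", "L5", "L6")   # 600 / 530 / 480 nm

# Combination used for most panorama frames: near-IR standing in for red.
NEAR_IR_RED_SET = ("L2", "L5", "L6")   # 750 / 530 / 480 nm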

Color-Chip Pigments

As Dr. Bell explained in his email, and as can be seen by viewing the raw images hosted by NASA, the color chips are not as simple as they appear. The pigments are designed to have different brightnesses at a variety of wavelengths, not just at RGB values, so as to "provide different patterns of brightnesses regardless of which filters we used". The blue pigment is very bright in the near-IR range, so the L2 plate records the blue pigment as very bright.



posted on Jan, 18 2004 @ 09:37 AM
Here's a quick way to re-create the effect yourself.

The filename of each image in the raw images directory

marsrovers.jpl.nasa.gov...

shows which filter the image was taken with.

For example, 2P126644567ESF0200P2095L2M1.JPG was taken with the L2 filter, which we know is at 750 nm.

All the raw images seem to follow this format. Here is a way to re-create the effect of shifting the red point. (That's all that's been done, and it actually makes the surface seem less red.)

Photoshop-only explanation here.
Download these 2 sets of 3 images.
Series 1.
marsrovers.jpl.nasa.gov...
marsrovers.jpl.nasa.gov...
marsrovers.jpl.nasa.gov...

Series 2.
marsrovers.jpl.nasa.gov...
marsrovers.jpl.nasa.gov...
marsrovers.jpl.nasa.gov...

Now, in the first series the red component is from filter L4 (600 nm), and in the second it is from L2 (750 nm). The green and blue filters are the same for both: L5 (530 nm) and L6 (480 nm) respectively.

To combine these, we will start with the first series. Open the L4-filtered image in Photoshop first; this will be the background. Then open the L5 image and copy/paste it as a layer over the L4 (layer 1), then copy/paste the L6 image as a layer over both (layer 2). Now right-click the L6 layer, go to Blending Options, then Advanced Blending, and make sure only the blue channel is selected (deselect the other two); this makes that layer the blue channel. Do the same for layer 1 (L5), but select the green channel. You don't have to do the same for the background layer: provided you haven't changed the opacity, red is the only channel that can show through from it.

You should now have a regular, true-colour image of the sundial, as those are the wavelengths RGB typically corresponds to.

Now, if we repeat the process with the second series of images, using the L2 plate as the background (and therefore the red channel), we get a completely different-looking sundial.

Try it for yourself.
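If you don't have Photoshop, the same combination can be done programmatically. Here is a rough Python equivalent (Pillow and NumPy again; the filenames are placeholders for the plates downloaded above, and it assumes all three plates are the same size):

# Rough Python equivalent of the Photoshop layer/channel trick described above.
import numpy as np
from PIL import Image

def combine_plates(red_name, green_name, blue_name, out_name):
    # Each plate is a grayscale image; stack them as the R, G and B channels.
    r = np.array(Image.open(red_name).convert("L"))
    g = np.array(Image.open(green_name).convert("L"))
    b = np.array(Image.open(blue_name).convert("L"))
    Image.fromarray(np.dstack([r, g, b]), mode="RGB").save(out_name)

# Series 1: L4 (600 nm) as red -> close to a conventional RGB composite.
combine_plates("L4_plate.jpg", "L5_plate.jpg", "L6_plate.jpg", "series1.png")

# Series 2: L2 (750 nm) as red -> the strange-looking sundial.
combine_plates("L2_plate.jpg", "L5_plate.jpg", "L6_plate.jpg", "series2.png")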

Here are the two processed images:

Series 1:


Series 2:


All this from a little shift of the red point by 150 nm. You'll also notice it doesn't really change the look of much apart from the extreme colours. I will make a diagram to show how the color space is transposed.

NOTE: This simple combination method is only appropriate in a few instances; an explanation follows a little later.



posted on Jan, 18 2004 @ 09:38 AM
But Why?

Ok, here is an explanation of why the color-chip pigments look so strange in the L2/L5/L6 filtered image.

Firstly we have the full visible spectrum.



We can then map our RGB colorspace onto it:



The curved grey region is the entire visible spectrum. The white triangle is the region of colours displayable by RGB. (The L4, L5 and L6 filters correspond to the points R, G and B.)

This is the space recorded by replacing the L4 filter with the L2 filter (i.e. shifting the red point by 150 nm to the very edge of the infra-red).



Notice there is a region recorded that is outside the visible spectrum. (The bottom right corner of the RGB triangle).

Now, when we display the composite RGB image back on our monitors, the colorspace recorded by the PanCam (with regions outside the visible) is transposed onto the displayable region shown in the first image. Thus a small region of infra-red is added to the end of the red channel as it is squashed into the displayable region.

Now, this means anything that is very reflective in the near infra-red spectrum (for example the blue pigment) gets a massive boost in the red channel when transposed. By comparing the L2 and L4 images for the green chip, we can see the green chip is also quite a bit more reflective at L2 than at L4. Thus the blue pigment appears pink and the green a kind of beige.
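A crude worked example may help (the reflectance numbers below are invented purely for illustration; they are not the published values):

# Invented reflectance values for the blue chip at the four filter wavelengths,
# just to illustrate the mechanism: dark at 600 nm, bright at 750 nm.
blue_chip = {480: 0.70, 530: 0.25, 600: 0.10, 750: 0.60}

rgb_with_L4 = (blue_chip[600], blue_chip[530], blue_chip[480])  # (0.10, 0.25, 0.70) -> reads as blue
rgb_with_L2 = (blue_chip[750], blue_chip[530], blue_chip[480])  # (0.60, 0.25, 0.70) -> red + blue reads as pink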

Also, you'll notice that the transposition would actually make the environment look less red, as anything in the true red range (600 nm) would be shifted toward a slightly shorter apparent wavelength and appear more orange.



posted on Jan, 18 2004 @ 09:40 AM
That's It?

In a simplified way, yes, that is the explanation for the blue pigment showing as pink. There is a lot more to this story, though. Firstly, we have to remember what Spirit's mission is: it is a geological one, not a sightseeing one. More than half of the filters on the Pancam are outside the visible spectrum. The way the filters and sundial are set up is intended to decrease the discoloration of the surface by the atmosphere, as it is better for the geological mission to see the colors of the rocks and ground as they would appear when white-lit, not with a pink/red cast over everything.

To most of us though, color pictures from Mars are much more satisfying than any data regarding the planet's geological history. Athena and NASA have purpose-built some high-end image processing software to re-create the images as close as they can get to what the scene would actually look like from the surface.



posted on Jan, 18 2004 @ 09:41 AM
Not Quite It.

That is obviously a shortened explanation of the reasons behind the blue pigment appearing pink. As shown earlier, we can re-create this effect ourselves.

I also mentioned in that post that the simple equal mix of the RGB color plates (from the Spirit raw images hosted by NASA) is only accurate for some pictures.

Why not all?

To explain this we need to look a little more at the PanCam and how it transmits the data. From our Pancam Technical brief, we discover that the onboard computer on the rover (which controls PanCam) has the ability to perform a limited set of image-processing tasks, one of which is:

(4) rudimentary automatic exposure control capability to maximize the SNR of downlinked data while preventing data saturation


Channels Normalized
This means that the brightness of each of the three color plates has been amplified to give the widest range of brightness for that plate. I don't know the graphics term for it, but an equivalent audio term would be something like hard limiting, so I'll use that.

Basically, in each of the three filter pics, the exposure has been set so that the brightest part of the picture from each filter corresponds to the absolute maximum brightness for that channel. For example, the brightest part of the red channel is FF0000, green is 00FF00, and blue is 0000FF. (Obviously they all come down as black-and-white pics, so in each black-and-white plate there is a full range from 000000 (absolute black) to FFFFFF (absolute white).)

You can test this by opening one of the black-and-white plates (Photoshop again, sorry). Select either 000000 or FFFFFF as the working color, then go to the Select menu and choose Color Range. Set fuzziness to zero and click OK. For each extreme you will find at least a few matching pixels.
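The same check can be done without Photoshop. A small sketch in Python (Pillow and NumPy; the filename is a placeholder for any of the raw plates):

# Report whether a raw plate spans the full brightness range, which is what
# you would expect if the exposure has been stretched ('hard limited') on board.
import numpy as np
from PIL import Image

plate = np.array(Image.open("raw_plate.jpg").convert("L"))
print("darkest pixel:", int(plate.min()), "brightest pixel:", int(plate.max()))
print("pixels at 0:", int((plate == 0).sum()), "pixels at 255:", int((plate == 255).sum()))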

On Earth
You can test the counterpart to this with a photo taken on Earth. Choose any photo taken on Earth (a good one to try is that autumn-road-looking one that comes with Windows XP). Open it in Photoshop and set its blending options so only the blue channel is showing. It's very dark, and there are no 0000FF pixels at all; in fact there are only a few 0000AA pixels, and they are in the whitish parts. You can try this with any picture taken on Earth. Try to avoid pictures with solid black and white in them, however, or something silly like a rainbow: white requires bright amounts of all of R, G and B to show, and the rainbow is self-explanatory.

Reason
By sending each plate with its brightness spread across the full range, you gain the maximum amount of data from each plate. Once you know the calibration information, it is easy to scale each channel back down to its correct level and get the images looking as they should. If you were to send the images at equal exposure levels, the signal-to-noise ratio would be lower, and any slight error in the blue or green channels would be more noticeable.
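As a sketch of what "scaling each channel back down" would look like (the gain value below is invented for illustration; the real calibration data isn't public yet, as discussed further down):

# Hypothetical undo of the on-board stretch: divide a channel back down by the
# gain it was boosted by. The gain figure here is invented for illustration only.
import numpy as np

def unstretch(channel, gain):
    # channel: float array scaled 0..1 as downlinked; gain: how much it was boosted.
    return np.clip(channel / gain, 0.0, 1.0)

# e.g. if the blue plate had been boosted about 3x to fill its range:
# blue_corrected = unstretch(blue_plate / 255.0, gain=3.0)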

Now, this only throws out the color balance on images where the original plates were not already almost even. Unfortunately that covers most of the pics where the rover isn't visible. Remember, each plate is hard limited when transmitted back, so for this to not change the look of the simply-combined image, the original plates would have to be almost hard limited already. There are a few where this is the case.

Now, a way to test this is to get these images:
marsrovers.jpl.nasa.gov...
marsrovers.jpl.nasa.gov...
marsrovers.jpl.nasa.gov...

You will have to shrink the first one from 1024 to 512. The 'EFF' is a prefix for 1024x1024 images and 'EDN' is for 512x512. I don't know why; that's just the pattern I've noticed. These are the three plates that make up the top of the little silver pole and the corner of the sat-dish visible in the panorama.

marsrovers.jpl.nasa.gov...

Now, this pole has very bright, almost white areas in its reflections, so all plates should be fairly even in exposure levels.

Combining them in Photoshop (in the manner mentioned before), we get:


This is extremely close to the colors in the panorama, with slightly less of a red tint.

Yet when we use other 3-plate series from Sol 05 (which largely makes up the panorama), such as these ones:
marsrovers.jpl.nasa.gov...
marsrovers.jpl.nasa.gov...
marsrovers.jpl.nasa.gov...

We get:


A completely different look, even though they are combined in the exact same manner. This is the effect of having all channels hard limited. You can re-create this effect by choosing Auto Levels in Photoshop. While this is often handy for brightening up images and so forth, it does not work well when you are dealing with images that are predominantly one color and whose brightest and darkest points are not shades of grey.
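For those without Photoshop, Auto Levels here boils down to stretching each colour channel independently to fill the 0-255 range. A rough sketch (Python, Pillow and NumPy; the filename is a placeholder):

# Rough equivalent of Photoshop's Auto Levels: stretch each channel so its own
# darkest pixel becomes 0 and its brightest becomes 255. Good for contrast,
# bad for colour fidelity when the scene is dominated by one colour.
import numpy as np
from PIL import Image

img = np.array(Image.open("combined.png").convert("RGB")).astype(np.float32)
for c in range(3):
    lo, hi = img[:, :, c].min(), img[:, :, c].max()
    if hi > lo:
        img[:, :, c] = (img[:, :, c] - lo) * 255.0 / (hi - lo)
Image.fromarray(img.astype(np.uint8), mode="RGB").save("auto_levelled.png")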

How does NASA do it?
Well, clearly high-end, purpose-made image processing software is a big part of it. They also have all the relevant calibration and exposure information from the rover.

Are we boned?
Not at all. Any picture with white and black, or bright red, green and blue in it, will look almost exact when mixed evenly. What is the one thing we know has these? The sundial.

So we can be fairly sure any photo of the sundial (such as the ones shown earlier in the thread) will be accurate when mixed evenly. The convenient thing with the sundial is that it has mirrors on it to show the Martian sky, so with any series of L4, L5 and L6 filtered plates we can see a close approximation of the Martian sky.

For example:


We can see the sky color in the little mirrors at the edges of the sundial.

Now, the one flaw with all this is that a slight, constant hue of any sort would be removed by the equalisation of the channels. So if anything, all these pics would in reality carry a slight red/orange tint that the equalisation has removed.

Among the multitude of images from Spirit, a nice pair for comparison are these.

There is a series from sol 8 that looks like a test of almost all the filters aimed at one hill. This is good news, as it allows us to compare the difference between using the L2 filter as the red channel and using the L4 filter.

The results are below. REMEMBER these are normalized color images, not real color.


The slide on the left looks less red than the one on the right. Obviously the channels are normalised, so the colours are not true, but it is a good visual example of the idea that selecting the near-infrared filter for the red channel actually gives the appearance of less red than using the L4 filter for the red plate.



posted on Jan, 18 2004 @ 09:42 AM
Earth-Bound examples

Ok, so here is a further explanation of why normalizing the colors is not a way to find the 'true colors' of any given pics.

We have all seen those images from Mars where people have used Photoshop's 'auto-levels' function and 'proven' that Mars has a blue sky. This is so wrong as to be stupid. That function merely maximises each color channel; it cannot know the circumstances of each picture and 'fix them up'.

Here are a few examples of pictures taken on Earth. Try it for yourself.

Firstly, this one, taken during the Canberra bushfires in January 2003: abc.net.au...



In this one the color change is extreme because the original is largely red-tinted, with no white, blue or green coloring.

Further to this, I grabbed my trusty Canon A70, went out the back and took some photos, then came in and auto-levelled them. You can see the difference.


In all cases, the picture on the left is the original, and on the right is the image where all channels have been equalized.

Tomato Plants.


The pool I am too lazy to clean.


The bush.


Now, the ones on the right seem to have more definition (as the full brightness range is covered by each channel), but the colors are simply wrong. You can try this yourself at home: use Photoshop's 'auto-levels' to equalize the color channels. Remember, pictures where the brightest part is a shade of grey or white will not be changed very much, and neither will pictures that have all three primary colors visible.



posted on Jan, 18 2004 @ 09:44 AM
Conclusions

Now, after all that ranting, I suppose we need to summarise.

Basically, there is no way for us to recombine all the 'Raw' images to show the final images. We'd need to know more about the exposure levels and calibration settings to do that.

But do not despair! There are still a few from which we can get a very close approximation of the actual colors, for example any picture that has the sundial or that pole (or any other white part of the rover) visible. We can be sure these images are close to true color, with the only difference being that any overall red tint will be lost.

So from the images we already have, we can independently check on the color of the ground and sky.

The sundial picture shows the reflected sky, and the picture with the pole and sat-dish (I really should find out what that little pole is at some stage) shows the ground.





So all we are able to do so far is show that the sky and ground colors we have been seeing in the released NASA images are accurate, as you would expect really.

Another relevant point: the 'raw' data on the NASA webservers is not technically raw, as NASA uses its own image compression to transmit data back from Spirit and then converts it to .jpg when hosting it online, to save bandwidth and because it's a much more palatable format.

Why don't they tell us this?

Some people have posed the question, 'Why doesn't NASA tell us all this on their site?'. The simple response is, 'Why would they?'. The images shown by NASA are as close to the actual appearance from the surface as they can get. The colors are as true as fifteen million dollars' worth of camera and image-processing software can get them, as accurate as any digital image can be.

There is simply no point in adding a note on their site saying "caution: these images are not 100% precisely actual colors" when no digital image is really 'actual colors'. It would just give the conspiracy types more things to panic about.

We can already see for ourselves that the color of the ground and sky shown in the released panoramas is correct. There is no suggestion whatsoever that any modification has been made to the data coming in from Mars.

[Edited on 20-1-2004 by Kano]



posted on Jan, 18 2004 @ 09:45 AM
Resources and Additional Reading

A brief list of related and useful sites regarding this matter:

Mars Rover Home at NASA:
marsrovers.jpl.nasa.gov...

An Overview of the Mission of the Two Rovers:
marsrovers.jpl.nasa.gov...

All Raw images from the Spirit Rover:
marsrovers.jpl.nasa.gov...

Homepage of the Athena Instrument Team at Cornell University:
athena.cornell.edu...

Specifically the PanCam:
athena.cornell.edu...

PanCam Technical Briefing:
athena.cornell.edu...

HowStuffWorks Page on Digital Cameras:
electronics.howstuffworks.com...

HowStuffWorks Page on the Eye and Vision:
science.howstuffworks.com...

Georgia State University's Hyperphysics pages:
hyperphysics.phy-astr.gsu.edu...

Specifically Light and Vision:
hyperphysics.phy-astr.gsu.edu...

Including Color Vision:
hyperphysics.phy-astr.gsu.edu...


EDIT: * These are the papers submitted to JGR (Planets) by Dr. Bell and associates.

The Mars Exploration Rover Athena Panoramic Camera (Pancam) Investigation*
europa.la.asu.edu:8585...

Mars Exploration Rover Engineering Cameras*
robotics.jpl.nasa.gov...

[Edited on 4-2-2004 by Kano]

EDIT: Some articles at JPL about the challenges of recreating what a human would see on Mars.

Revealing Mars? True Colors: Part One
marsrovers.jpl.nasa.gov...

Revealing Mars? True Colors: Part Two
marsrovers.jpl.nasa.gov...

[Edited on 7-2-2004 by Kano]

EDIT: JPL has also now put up a page explaining the image filenames.

JPL: How to decode the image filenames
marsrovers.jpl.nasa.gov...

[Edited on 22-2-2004 by Kano]

EDIT: Excellent discussion on this topic over at BadAstronomy's BBAB.
www.badastronomy.com...

[Edited on 21-3-2004 by Kano]



posted on Jan, 18 2004 @ 09:48 AM
Created a thread for members to discuss and ask questions about this thread.

www.abovetopsecret.com...



posted on Jan, 18 2004 @ 01:39 PM
What an amazing analysis you have provided, Kano! Thank you for uncovering the truth of this mystery.





posted on Jan, 19 2004 @ 02:55 PM
They took the red/infrareds at 1024x1024 and the blue/greens at 512x512 simply because most of the detail can be found in the most prominent color. It's a way to reduce data transmission requirements.

In these cases, where the green/blue images are 512x512 and the red/infrared images are 1024x1024, the correct procedure is to scale the green and blue images up to 1024x1024 and superimpose them with either the red or the infrared image.
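In code terms, that procedure is just an upscale before the combine (a sketch in Python with Pillow and NumPy; the filenames are placeholders):

# Sketch of the procedure described above: upscale the 512x512 green and blue
# plates to the size of the 1024x1024 red/near-IR plate before combining.
import numpy as np
from PIL import Image

red_1024 = Image.open("red_1024.jpg").convert("L")
green_up = Image.open("green_512.jpg").convert("L").resize(red_1024.size, Image.BILINEAR)
blue_up  = Image.open("blue_512.jpg").convert("L").resize(red_1024.size, Image.BILINEAR)

composite = np.dstack([np.array(red_1024), np.array(green_up), np.array(blue_up)])
Image.fromarray(composite, mode="RGB").save("combined_1024.png")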

It's also why most consumer digital camera CCDs usually have two green pixels for every red pixel and every blue pixel. Green is the brightest of the three primary colors, and thus it is more important to get the best resolution from green than from blue or red.

It's also the same sort of principle by which luma information on a DVD is 720 pixels wide (the black-and-white information) while chroma information is only 360 pixels wide (the color information). Microsoft DirectShow programmers will understand this well from YUY2 and YVYU frame buffers, where the luma (mono info) has double the resolution of each chroma component (color info, either the U or the V component). Here U corresponds to Pb and V corresponds to Pr in a component color connection (Y Pb Pr).

Thanks,
Mark Rejhon
www.marky.com

[Edited on 19-1-2004 by mdrejhon]



posted on Jan, 21 2004 @ 10:53 AM
Just a further extension of this article, with some handy additional information.

Dr. Bell has sent an email clarifying a few things, and we have had a visit from Dr. Mark Adler (Spirit Mission Manager). Dr. Adler's post is here:
www.abovetopsecret.com...

Ok, now for the interesting stuff.

Spectral data from the Pigments

From Dr. Bell

There's details and a figure published in that JGR paper I mentioned (Figure 20), and I attach it here for reference. On the left are spectra of the cal target materials measured in a lab at NASA/JSC. On the right are the same materials measured by Pancam in a lab at JPL before launch. We're working on compiling a version of this from Pancam measurements on Mars, but basically (thankfully!) it looks very much like the panels on the right. Look at how whopping bright that blue chip is at 750 nm, for example!


I chopped the original figure into four parts to avoid blowing out the page width or shrinking it so much as to be unreadable.

Spectra of the color chip pigments:


As measured by PanCam:


Spectra of the white, gray and black rings on the sundial.


The same measured by PanCam


Now, this shows us why the blue chip shows up so amazingly bright in the L2-filtered images, and also in the longer-wavelength filters, which make up almost all of the filters on the right-hand lens.

For those interested there is a lot more data on the spectra of many materials available online.

USGS Digital Spectral Library
pubs.usgs.gov...

Thermo Galactic's Spectra Online (you have to become a member to view the full images; it's free though).
spectra.galactic.com...

Or you could use their list of other spectra databases
spectra.galactic.com...


That Little Pole

I suppose we can now start referring to 'the little pole' as the low-gain antenna, even though it's nowhere near as cute a name. Both Dr. Bell and Dr. Adler were able to clear this up, as was Phil Karn in the discussion thread for this article.

From Dr. Adler

The little silver pole is the low-gain antenna. It is used for low data rate communication, a few hundred bits per second, directly with Earth using X-band frequencies (around 8 GHz). It works over a wide range of angles, and so doesn't have to be pointed like the high-gain dish antenna. We use the low-gain antenna to send commands to the rover at low rates, around 30 bits per second, when the rover is awake but not using the high-gain to send data to us at the time. The low-gain is also a backup in case the high-gain pointing isn't working for some reason. We can work through the low-gain to fix it.


So goodbye little pole, hello low-gain antenna.

Raw Filenames

As mentioned in the original thread, I was attempting to figure out how the filenames are assigned to the images in the raw image directory at JPL. Luckily Dr. Adler was able to help out with this, as it was proving to be quite difficult.

From Dr. Adler's post:

Now the secret decoder ring for the image file names. Taking one example file name from your article:

2P126644567ESF0200P2095L2M1.JPG

The breakdown is:

"2" for Spirit. "1" is Opportunity. (Don't ask.)

"P" is Pancam. Other choices are N - navcam, F - front hazcam, R - rear hazcam, M - microscopic imager, and E - EDL camera.

The next nine digits are the time the image was taken in seconds since noon UTC on January 1st, 2000.

The "ESF" is the product identifier, meaning a raw sub-framed image. There are many three-letter identifiers. Some common ones: EFF - raw full frame, FFL - full frame linearized, SFL - sub-frame linearized, EDN - downsampled raw image, DNL - linearized down-sampled, ETH - raw thumbnail, THN - thumbnail linearized (doesn't quite follow the convention). Linearized means that geometric optical distortions have been corrected. There are others for various levels of processing of the images.

"0200" is site 2 and position 0. We increment those counters when driving. Position is automatically incremented for each piece of a drive. We decide when we want to declare a new "site" to help distinguish the images.

"P2095" is the identifier of the command sequence that produced the image. This makes it easy, for example, for the person who wrote the sequence to find the images that were taken by their sequence.

"L" is the left eye. It can also be R - right, B - both, M - microscopic, or N - not an image.

"2" is the filter position, in the range 0..8 where 0 is no filter or not applicable.

"M" is the product creator, in this case the MIPL automatic image processing that is part of the MER downlink system. Other choices are A - Arizona State University, C - Cornell, F - USGS at Flagstaff, J - Johannes Gutenburg University, N - NASA Ames, P - Max Planck Institute, S - science operations team at JPL, U - University of Arizona, V - visualization team at JPL, or X - other.

"1" is the version identifier.


Now I imagine this is going to prove to be very handy knowledge to have, as more and more images become available.

For those playing the home game, I had gotten as far as knowing the filter positions, that P referred to Pancam, and that EFF meant 1024x1024 and EDN was 512x512. So I had a little way to go.
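Based on Dr. Adler's breakdown, here is a quick parsing sketch in Python (the field names are my own informal labels, and it only handles a well-formed name like the example above):

# Quick parser for the raw-image filenames, following Dr. Adler's breakdown above.
from datetime import datetime, timedelta, timezone

def parse_mer_name(name):
    stem = name.split(".")[0]
    fields = {
        "rover":    {"1": "Opportunity", "2": "Spirit"}[stem[0]],
        "camera":   stem[1],           # P = Pancam, N = navcam, etc.
        "seconds":  int(stem[2:11]),   # seconds since noon UTC, 1 Jan 2000
        "product":  stem[11:14],       # e.g. ESF, EFF, EDN
        "site_pos": stem[14:18],       # site and position counters
        "sequence": stem[18:23],       # command sequence identifier
        "eye":      stem[23],          # L, R, B, M or N
        "filter":   stem[24],          # 0..8
        "creator":  stem[25],          # M = MIPL automatic processing, etc.
        "version":  stem[26],
    }
    epoch = datetime(2000, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
    fields["taken"] = epoch + timedelta(seconds=fields["seconds"])
    return fields

print(parse_mer_name("2P126644567ESF0200P2095L2M1.JPG"))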



More 'Why didn't they tell us this?'

This one direct from 'they' to you. Dr. Bell included a response to this in his email.

One additional reason to add to your "Why don't they tell us this" section is that we are simply so incredibly tired and overworked trying to keep up with this great stream of data that we don't have time to stop and spell it all out.


Much more Juicy Data to come

As we figured, Spirit sends a lot more data along with each image to assist in the calibration and correction of the images. It is not available at present but will be in the future.

From Dr. Adler

Yes it does send back a bunch of information about the image, like the exposure time, camera pointing, temperature, etc. However as far as I know, that data is not available online. Once this data is archived in a few months, all of that data will be included and documented. All of the mission data will be available at the cost of duplication.


Dr. Bell also touched on this issue.

Ultimately you are right that people will need the calibrated data to do the color balancing correctly. We are working on doing that and will eventually get all those images out to the public using the NASA/JPL "Planetary Image Atlas" web site. It will take several months or more to get the work done, however. In the meantime, we thought it would be best to get *something* out there, and so that's why we opted to get the raw data out fast, even though it's still raw. The team has taken some criticism for this within the planetary science community because not many past missions have adopted such an open-data policy.


The last quote from Dr. Bell's latest email is, in my opinion, the best, on the topic of the team taking some criticism for giving the public access to such raw images.

Let them whine, I say. People want and deserve to see the pictures as soon as we do.


I think all of you, especially our regular ATS members, understand what an important and respectful decision the team has made in doing this, and we thank them for it.



Ok, I am going to keep this thread for further updates and information that comes to hand. Discussion and questions about this thread can be directed here:
www.abovetopsecret.com...

[Edited on 21-1-2004 by Kano]


