Perhaps they DID rotate the station so that when they took the supposed zenith shots, they were still in the Cupola but looking sideways, not out! I wouldn't put it past NASA at all!
I use Stellarium. A very simple and fairly quick way to find where the Moon was relative to the ISS is to plug the "nadir coordinates" into the Stellarium location window and set the same time and date the photo was taken (just make sure it's in GMT).
originally posted by: GaryN
The reason for NASA using the ISO 400 setting and the wide lens is then explained. At lower ISO, it is possible that the stars may have shown up even with such short exposure times, raising questions
Again, you're showing that you don't understand basic photography. A lower ISO would make the sensor less sensitive to light, not more.
originally posted by: GaryN
The reason for NASA using the ISO 400 setting and the wide lens is then explained. At lower ISO, it is possible that the stars may have shown up even with such short exposure times, raising questions. The wide lens, and thus small Moon, will make the colour changes of the Moon less noticeable on quick examination, but if one were to enlarge the Moon and examine the colour profiles of each image with the appropriate software, great variation would be noted. The bracketed images are just as important as the 0 correction images, to someone who knows how to interpret the data.
originally posted by: GaryN
a reply to: nataylor
originally posted by: GaryN
The reason for NASA using the ISO 400 setting and the wide lens is then explained. At lower ISO, it is possible that the stars may have shown up even with such short exposure times, raising questions
Again, you're showing that you don't understand basic photography. A lower ISO would make the sensor less sensitive to light, not more.
That was what I was trying to say. If they had used ISO 100, the exposure times would have been longer, and stars, if they are brighter from certain altitudes as described by the Voskhod 2 crew (and perhaps why Ed Mitchell said the stars were 10 times brighter), may have shown up at, say, 1/30 sec from the ISS. You can't image stars from Earth with a 1/30 sec exposure at any ISO. Well, maybe with the A7S, but that's another story. Those of us with older cameras might be looking at exposures of 10 or 15 seconds at ISO 400 to 1600, so stars visible with 1/30 sec at ISO 100 from the ISS would really have raised some eyebrows. If anyone ever noticed.
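The ISO/shutter trade-off in the post follows standard exposure reciprocity: quartering the ISO (400 down to 100) requires four times the exposure time at the same aperture and scene brightness. A minimal sketch of the arithmetic (the function name is mine, not from the thread):

```python
def equivalent_shutter(t_seconds, iso_from, iso_to):
    """Shutter time needed at iso_to to match the exposure given by
    t_seconds at iso_from (same aperture, same scene brightness).
    Required exposure time scales inversely with ISO sensitivity."""
    return t_seconds * iso_from / iso_to

# 1/30 s at ISO 400 needs 4x the time at ISO 100:
print(equivalent_shutter(1 / 30, 400, 100))  # 4/30 s, about 0.133 s
```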
originally posted by: GaryN
That was what I was trying to say. If they had used ISO 100, the exposure times would have been longer, and stars, if they are brighter from certain altitudes as described by the Voskhod 2 crew (and perhaps why Ed Mitchell said the stars were 10 times brighter), may have shown up at, say, 1/30 sec from the ISS.
So now there are plasma bubbles that make the invisible light visible... Nice story bro.
What's so special about the visible part of the EM spectrum?
Did the universe pull a nasty one on us humans and make itself invisible to us whenever we go into deep space? If an animal that can see in UV or infrared (such as a bee) were sent to space, would it see the stars and everything else?
originally posted by: GaryN
This is why there are no other visible light telescopes in space
Two 2.2 megapixel CCD HDTV cameras, one wide-angle and one telephoto, were also on board
originally posted by: GaryN
a reply to: wildespace
Seeing as this is a conspiracy site, let's have some. The visible portion of the spectrum carries very low energy, so unless you can focus all that energy on a CCD pixel, you can't get an electron out of it.
This is why there were UV and X-ray telescopes in orbit from the 60s (though they weren't using CCDs then), but no visible light telescope until Hubble, when they had finally managed to figure out the still classified optics that are needed to put enough of the wavefront onto a CCD pixel. This is why there are no other visible light telescopes in space: nobody else has the technology to do it
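For context on the energy claim above, the energy of a single photon is given by the standard formula E = hc/λ. A quick back-of-envelope check (constants rounded; the ~1.1 eV figure is silicon's band gap, the threshold for freeing an electron in a CCD):

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_m):
    """Energy of one photon at the given wavelength, in eV."""
    return H * C / wavelength_m / EV

# Green light, mid-visible band:
print(photon_energy_ev(550e-9))  # ~2.25 eV, vs silicon's ~1.1 eV band gap
```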
originally posted by: GaryN
a reply to: wildespace
It's clear that you don't understand how any of these space-based instruments work. If another country with space capability, such as India or Japan, wanted a space-based, visible-wavelength telescope, they could have used something like a 32-inch Ritchey-Chretien, like this one:
www.mistisoftware.com...
Theoretically, this would see almost as well as Hubble, and for a lot less money. Nobody has ever put a conventional telescope in orbit, because it would see nothing.
Hubble's optical system is a straightforward design known as Ritchey-Chretien Cassegrain, in which two special mirrors form focused images over the largest possible field of view.
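The "almost as well as Hubble" comparison can be sanity-checked with the Rayleigh diffraction limit, θ ≈ 1.22 λ/D. A rough sketch, assuming a 2.4 m aperture for Hubble and 32 in ≈ 0.81 m for the telescope mentioned, at 550 nm:

```python
import math

def rayleigh_limit_arcsec(aperture_m, wavelength_m=550e-9):
    """Diffraction-limited angular resolution for a circular
    aperture, in arcseconds (Rayleigh criterion)."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600

hubble = rayleigh_limit_arcsec(2.4)     # ~0.058 arcsec
rc32 = rayleigh_limit_arcsec(0.8128)    # ~0.17 arcsec
print(hubble, rc32, rc32 / hubble)      # the 32-inch is ~3x coarser
```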
originally posted by: GaryN
a reply to: onebigmonkey
No, it's you who don't understand. The telescope can collect the light OK, but the wavefront is not the same as the wavefront generated in the ionosphere/plasmasphere that Earth-based telescopes see. You folk who think you know everything really annoy those of us who do.