That's where I believe you are wrong: what you are explaining is the conventional PTM methodology. I am saying that you can have similar results using other variables with the "light" variable.
funkster4
I am saying that you can have similar results using other variables with the "light" variable.
Originally posted by SkepticOverlord
funkster4
The image I used shows a very visible smudge, contrary to the image you refer to. Again, I am not aware of NASA disclaiming the image with the smudge:
You've provided no evidence that the image you used has a provenance traced back to NASA or the Navy.
Images that do have such provenance lack the smudge.
It's time to either say you've been fooled, or your deception failed.
funkster4
...again, last I heard of, NASA's explanation was that the smudge was due to "data loss"...
I resent the fact that you imply I am trying to deceive anyone here. I really do.
That's just not what I expected, given the recommendations I was given regarding this forum...
...again, last I heard of, NASA's explanation was that the smudge was due to "data loss"...
Originally posted by SkepticOverlord
funkster4
I am saying that you can have similar results using other variables with the "light" variable.
Explain the process whereby you achieve those results.
funkster4
*interpolate the new iteration with the source image: you are now getting a new (second) iteration which contains both the information of the source and the data content of the first iteration. This second iteration (the third entry in the database, if you count the source image) can now be interpolated with either the original image or the first iteration, and so on, enriching the reference database ad infinitum. This goes on and on, ultimately sorting out objective data from noise by mere iteration. I think I explained the use of frequency as a filter.
Originally posted by funkster4
Originally posted by SkepticOverlord
funkster4
I am saying that you can have similar results using other variables with the "light" variable.
Explain the process whereby you achieve those results.
...quite simple:
*choose a source image
*apply any conventional settings to it (lighting, sharpness, contour, contrast, etc.)
*save the resulting iteration
*interpolate the new iteration with the source image: you are now getting a new (second) iteration which contains both the information of the source and the data content of the first iteration.
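For anyone trying to follow what funkster4 is describing, here is a minimal sketch of those four steps in Python using Pillow. Everything specific in it (the file names, the contrast and sharpness factors, the 50/50 blend weight, the three passes) is an illustrative assumption, not something stated in the thread; the "interpolation" step is rendered as a plain pixel blend of each adjusted result with the source.

```python
# Minimal sketch of the procedure described above, using Pillow.
# All concrete values (file names, enhancement factors, blend weight,
# number of passes) are arbitrary illustrative choices.
from PIL import Image, ImageEnhance

def adjust(img):
    """Apply conventional settings (contrast, sharpness) to an image."""
    img = ImageEnhance.Contrast(img).enhance(1.3)
    img = ImageEnhance.Sharpness(img).enhance(2.0)
    return img

source = Image.open("source.png").convert("RGB")    # hypothetical input file
current = source

for i in range(3):
    iteration = adjust(current)                     # apply settings, save the result
    iteration.save(f"iteration_{i + 1}.png")
    # "Interpolate" the new iteration with the source: a simple 50/50 blend.
    current = Image.blend(source, iteration, alpha=0.5)
    current.save(f"blend_{i + 1}.png")
```

Note that each pass only recombines pixel values already present in the source and its adjusted copies, which is the objection raised further down the thread about no new data being created.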
This is starting to sound like an intervention or something... "We are all here for you, funkster, but all you are doing is creating fake data... it's not real. The fake data makes you see things that aren't really there and it makes you think everything is OK, but it's not. The lying and the abuse have to stop... you are staying out at all hours of the night and nobody knows where you are or what you are doing. The fake data is destroying you... it's destroying... us... please..."
Originally posted by raymundoko
reply to post by funkster4
You are getting a new image, yes, but no new data. All you are doing is creating fake data.
Originally posted by Phage
reply to post by ZetaRediculian
Can you recommend an appropriate facility?
We can all chip in for the enrollment costs.
Originally posted by Deaf Alien
reply to post by funkster4
That's where I believe you are wrong: what you are explaining is the conventional PTM methodology. I am saying that you can have similar results using other variables with the "light" variable.
That makes no sense.
That's like saying there's software that will make a 3D image of the Mona Lisa from a single picture of that painting.
Although the technology works better than any other has so far, Ng said, it is not perfect. The software is at its best with landscapes and scenery rather than close-ups of individual objects. Also, he and Saxena hope to improve it by introducing object recognition. The idea is that if the software can recognize a human form in a photo it can make more accurate distance judgments based on the size of the person in the photo.
Extracting 3-D information from still images is an emerging class of technology. In the past, some researchers have synthesized 3-D models by analyzing multiple images of a scene.