reply to post by vze2xjjk
I have thought about it, and I will try to explain why your idea would not work the way you think.
Suppose you have two images to send, image1 and image2.
[Image1]
[Image2]
Each image has only two colours. I have put the numbers in different places to make it easier to see what would happen if the images were
overlapped.
Now, if you overlap the images you will get something like this.
As you can see, there is no way of knowing which image had the "1" and which image had the "2"; the only way to keep that information is to encode
some extra change in the image that results from the overlapping.
As this was an oversimplified case, imagine that the images are these two.
[Image1b]
[Image2b]
In this case, if we overlap the images we must provide a way of retrieving the original images, so we need a more complex algorithm to mix them than
the previous one. If we just add the two values for each colour, we have no way of knowing which values gave that result; for example, 4
could be 3+1 or 1+3.
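A tiny sketch of that ambiguity (the values here are my own invention, not taken from the actual images in the post): adding the colour values is not invertible, while keeping them as ordered pairs is, but then each "pixel" carries the data of both images.

```python
# Two hypothetical 4-pixel "images", each pixel a colour value.
image1 = [3, 1, 2, 0]
image2 = [1, 3, 0, 2]

# Mixing by simple addition loses information: different input pairs
# collapse to the same sum, so the originals cannot be recovered.
summed = [a + b for a, b in zip(image1, image2)]
print(summed)  # [4, 4, 2, 2] -- the first 4 is 3+1, the second is 1+3

# Keeping the values as ordered pairs IS invertible...
paired = list(zip(image1, image2))
restored1 = [p[0] for p in paired]
restored2 = [p[1] for p in paired]
assert restored1 == image1 and restored2 == image2
# ...but each combined pixel now holds the data of both images,
# so nothing is saved.
```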
The best way of doing it is to use a palette that assigns a different colour to each combination of colours from the two images.
That would be something like this, where the top row shows the original colours of Image 1, the left column the original colours of Image 2 and the
grid the result of all possible combinations of both images. In this simplified case, two colours are common to both images, so the last combination
never happens.
Using this palette, the resulting image, the one that would be transmitted, would look like
this.
So, to use the palette, we would need four(*) colours to transmit the two overlapping images, meaning we are doubling the size of the data: the
"window of opportunity" must be twice the size of the one needed to transmit just one image.
No gain there.
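The palette idea above can be sketched in a few lines of code (my own construction, assuming both images use the same n-colour palette): each combined pixel is the pair of source indices, encoded as a single index into a combined palette of n*n entries.

```python
import math

n = 2                              # colours per source image
image1 = [0, 1, 1, 0]              # palette indices, one per pixel
image2 = [1, 1, 0, 0]

# Encode each pixel pair (c1, c2) as one index into an n*n palette.
combined = [c1 * n + c2 for c1, c2 in zip(image1, image2)]

# Decoding recovers both images exactly.
decoded1 = [c // n for c in combined]
decoded2 = [c % n for c in combined]
assert decoded1 == image1 and decoded2 == image2

# The combined palette needs n*n colours, so each combined pixel needs
# twice the bits of a source pixel: overlapping saves no bandwidth.
print(math.ceil(math.log2(n * n)))  # 2 bits, vs 1 bit per source pixel
```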
Now imagine that there is a failure during the transmission: for some reason, some information was lost, and the received image looks like this.
It would be impossible to know what was on either image. If, instead, the images were transmitted sequentially, data would be lost from both images
only if the loss happened to affect the end of the first image and the beginning of the second. Considering that the centre of the image is usually
where the most important data is, a loss of data at the beginning or end would not be as bad as a loss in the middle of the data block.
A data loss in the middle of one data block would affect only one image; the other image would be received in perfect condition.
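A toy simulation of that difference (my own construction): drop a run of symbols from the middle of each kind of stream and see which images are damaged.

```python
image1 = list(range(10))        # 10 "pixels" of the first image
image2 = list(range(10, 20))    # 10 "pixels" of the second image

# Sequential transmission: all of image1, then all of image2.
sequential = image1 + image2
# Overlapped transmission: one combined value per pixel position.
overlapped = list(zip(image1, image2))

def lose(stream, start, length):
    """Simulate transmission loss by blanking a run of symbols."""
    out = list(stream)
    for i in range(start, start + length):
        out[i] = None
    return out

# Sequential: a loss in the middle of image2's block (positions 13-16)...
seq = lose(sequential, 13, 4)
print(None in seq[:10])   # False -> image1 arrives intact
print(None in seq[10:])   # True  -> only image2 is damaged

# Overlapped: the same 4-symbol loss destroys those pixel positions in
# BOTH images at once.
ovl = lose(overlapped, 3, 4)
damaged = [i for i, p in enumerate(ovl) if p is None]
print(damaged)            # [3, 4, 5, 6] -- lost from image1 AND image2
```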
I hope that you (and everybody else) understand what I was trying to explain; I sometimes make my explanations more complicated than the things they
are meant to explain.
* In this case, as I used two colours in each image, it may look like we only need to double the number of colours, but that is specific to this case.
If each image had 4 colours, the colours for the resulting image would fill a grid of 4 rows and 4 columns, for a total of 16 colours; the total is
not the sum of the possible colours in both images, it's their product. So, for two images with a maximum of 256 possible shades of grey we would need
256*256 = 65536 colours. As a result, the data transmitted must be at least double; it only looks like exactly double in this oversimplified example.
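Checking the footnote's arithmetic in bits per pixel (a sketch; note that with fixed block sizes and error-correction overhead the practical cost can be higher than this ideal figure):

```python
import math

shades = 256
combined_colours = shades * shades
print(combined_colours)                      # 65536

bits_single = math.log2(shades)              # 8 bits per pixel, one image
bits_combined = math.log2(combined_colours)  # 16 bits per combined pixel
print(bits_combined == 2 * bits_single)      # True -> double per pixel
```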
Also, I ignored error-correction data and assumed a transmission where the values for each point are sent sequentially; in reality they are probably
transmitted in blocks with error-correction information.
Edit: Why that completely off-topic image?
[edit on 13/8/2008 by ArMaP]