posted on Jun, 27 2009 @ 01:51 PM
reply to post by helen670
Google caches copies of pages from other sites, and when a site removes a page Google eventually drops its copy as well; it does not archive every version of a site the way the Wayback Machine does.
As for the scrambled results, what type of file do they show? Most sites are either plain HTML or pages created on the fly by some code working in the background, like what happens with ATS, where the posts are stored in a database and each page is built only when it is requested (see the sketch below).
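Just to give an idea of what I mean by "built only when it is requested", here is a rough sketch in Python; the database file, table and column names are invented, since ATS does not publish its code:

import sqlite3

# invented example: fetch one post from a database
db = sqlite3.connect("forum.db")
title, body = db.execute(
    "SELECT title, body FROM posts WHERE id = ?", (42,)
).fetchone()

# the HTML only exists at the moment someone asks for the page;
# a search engine can only cache whatever was generated at crawl time
page = "<html><body><h1>%s</h1><p>%s</p></body></html>" % (title, body)
print(page)

So when Google shows you a cached copy of a page like that, it is showing the HTML that was generated on the day the crawler visited, not what the database holds now.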
But if it is, for example, an RSS feed, then it will be XML, and although XML is human-readable, it does not display well in the browser (sometimes it does not even show correctly; it depends on the browser).
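To show what I mean, this is roughly what a feed looks like if you open the raw file; the addresses and titles here are invented:

<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Feed</title>
    <item>
      <title>Some thread title</title>
      <link>http://www.example.com/thread/1</link>
      <description>The first lines of the post...</description>
    </item>
  </channel>
</rss>

A browser that does not know how to render feeds will just dump those tags on the screen, which can easily look "scrambled" to someone expecting a normal page.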
Other file types have the same problem: if you find a DOC file but do not have any software capable of reading it, it will look garbled in the browser, because the browser dumps the raw binary bytes as if they were text (see the small example below).
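As a small illustration, this is one way to look at the raw bytes the browser is trying (and failing) to show as text; the file name is invented:

import binascii

# invented file name; old binary DOC files start with the
# OLE2 "magic" bytes d0 cf 11 e0 a1 b1 1a e1
with open("document.doc", "rb") as f:
    head = f.read(8)
print(binascii.hexlify(head))

None of those bytes map to readable characters, so the browser just prints junk.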
I hope I have understood your problem now, but I am not sure yet.