posted on Mar, 29 2005 @ 09:19 PM
Well, y'all finally pulled a longtime lurker out of hiding with this one. This is my first post.
I aced real analysis during my math studies, but it's been a while, so sorry if I make a mistake.
.999... = 1 by definition. An infinite decimal sequence is defined to represent its limit. You can think of .999... as the sequence 9/10,
99/100, 999/1000, ..., which has a limit of 1. This isn't really a flaw in the real number set; you just end up with a dual representation of every
finite decimal sequence. If you were so inclined, you could simply disallow expansions ending in repeating nines. In my studies in mathematics, though, I
have come to believe the idea of infinite decimal expansions is flawed. I'll have to give the fast explanation, but I'll elaborate more if y'all
are interested.
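To make that limit concrete, here's a quick sketch (my own illustration; the exact-arithmetic Fraction type is just to avoid floating point noise):

```python
from fractions import Fraction

# The nth partial sum of .999... is 9/10 + 9/100 + ... + 9/10^n,
# which works out to exactly 1 - 1/10^n, so the gap to 1 shrinks to zero.
def partial(n):
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 2, 5, 10):
    print(n, partial(n), float(1 - partial(n)))

# The gap is exactly 1/10^n -- never zero for any finite n, but the limit is 1.
assert all(1 - partial(n) == Fraction(1, 10**n) for n in range(1, 20))
```

So no finite partial sum ever reaches 1, which is exactly why the equality has to be a statement about the limit.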
Someone mentioned countable infinities versus uncountable infinities. I think it was Cantor who thought this up, and it did make him a little nutty,
by the way. A set can be finite, countably infinite, or uncountably infinite. The integers are a countable set, and any set that has a bijection with
the integers is countable. That means you could make an infinitely long list of the elements of the set, assigning each one an integer, or natural
number. It turns out the reals are uncountable. Cantor came up with a brand new method of proof for it: diagonalization. It's slightly easier to
prove that (0,1) is uncountable, and (0,1) can be put into a bijection with the reals, so they have the same cardinality.
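As an aside, one standard bijection from (0,1) onto all the reals (a textbook example, not something from this thread) just stretches the interval with the tangent function:

```python
import math

# f(x) = tan(pi * (x - 1/2)) maps (0,1) one-to-one onto all of R:
# the midpoint goes to 0, and the endpoints blow up to -infinity/+infinity.
def f(x):
    return math.tan(math.pi * (x - 0.5))

print(f(0.5))    # 0.0
print(f(0.999))  # a large positive number
print(f(0.001))  # a large negative number
```

Since a bijection preserves cardinality, proving (0,1) is uncountable proves the reals are too.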
Suppose all the numbers in (0,1) could be put into a bijection with the natural numbers, then consider the list:
.a_0,a_1,a_2,...
.b_0,b_1,b_2,...
.c_0,c_1,c_2,...
.
.
.
Now, create a number by going down the diagonal: look at a_0, then b_1, then c_2, and so on, and make the nth digit of the new number 5, unless the
diagonal digit is already 5, in which case make it 6. The new number differs from the nth number in the list at the nth digit, so it isn't anywhere
in the list. That contradicts the assumption, so (0,1) is uncountable.
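Here's a toy version of the diagonal trick on a finite list of digit sequences (my own sketch; obviously the real proof runs down an infinite list):

```python
def diagonal(digit_lists):
    """Build a digit sequence that differs from the nth sequence
    in the list at the nth digit."""
    new_digits = []
    for n, digits in enumerate(digit_lists):
        # Change the diagonal digit: use 5, unless it's already 5, then use 6.
        new_digits.append(6 if digits[n] == 5 else 5)
    return new_digits

rows = [
    [1, 4, 1, 5],
    [2, 7, 1, 8],
    [5, 5, 5, 5],
    [3, 3, 3, 3],
]
d = diagonal(rows)
print(d)  # [5, 5, 6, 5] -- disagrees with row n at position n
```

No matter what list you start with, the constructed number can't be any row of it, which is the whole punchline.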
This is a really interesting fact. Notice that if you tried the above proof in binary, it wouldn't work: the list could be arranged to have
all 0s on the diagonal, and to make your number different at every digit you'd get .111..., which didn't need to be in the list to begin with since
.111... = 1 in binary.
Another interesting fact is that in a real analysis book, the only time decimal expansions are used as a representation of the reals is for the above
proof.
The above was pretty much fact (except for some sloppiness), the following is sort of my opinion on math philosophy:
I think it is a flaw to include all possible infinite decimal expansions in the real numbers, because some of them cannot be "chosen". A number
isn't much use to anybody if you can't represent it in some sort of finite form. Don't confuse expressible in finite form with having a finite
decimal expression: 1/3 is a perfectly valid representation of a number, even though it has an infinite decimal representation.
A guy named Alan Turing is the father of computer science (I'm sure many of you here know this, but humor me). He came up with the idea of another
kind of number: a "computable" number. A computable number is any number that can be the output of a computer program in some sort of decimal form,
given that the program is allowed infinite time to run. For instance, Pi is computable, because we can write a program that will keep spitting out
digits of Pi as long as the computer runs. That program can be taken as a finite representation of Pi. It turns out there are only countably many
computable numbers.
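Just to make that concrete, here's a sketch of such a program. This is a port of Gibbons's "unbounded spigot" algorithm, not anything Turing himself wrote; the point is that the finite program text stands in for the infinite expansion:

```python
from itertools import islice

def pi_digits():
    """Yield the decimal digits of pi one at a time, forever
    (a port of Gibbons's unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            # The next digit is pinned down; emit it and rescale.
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            # Not enough information yet; consume another term of the series.
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

Run it for as long as you like and it keeps producing digits, so this short program is a perfectly good finite handle on Pi.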
This leads me to wonder what the heck the rest of the numbers are. There are uncountably many real numbers that cannot be expressed by a computer
program. These numbers don't seem useful at all. If there exists no process for creating the number, how would you express it in any finite form?
The problem seems to lie in an overgenerous definition of convergence. It makes sense to let .999... equal 1, since 1 is the limit of .9, .99,
.999, ..., but real analysis allows any decimal expansion to be valid, because it defines the reals in terms of Cauchy convergence, where a sequence
counts as convergent without having to converge to any previously known number. It suffices to be able to create an infinite sequence of
monotonically decreasing bounds on the original sequence. By allowing any infinite decimal expansion into the real numbers, we have let in
uncountably many wacky numbers that aren't any good to anybody.
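For reference, here's the standard Cauchy criterion I'm talking about (stated from memory, so check a textbook):

```latex
% A sequence (x_n) is Cauchy when its terms eventually crowd together,
% with no limit point mentioned anywhere in the definition:
\forall \varepsilon > 0 \;\; \exists N \in \mathbb{N} \;\;
\forall m, n \ge N : \; |x_m - x_n| < \varepsilon
% Compare ordinary convergence, which has to name a limit L up front:
% \forall \varepsilon > 0 \;\; \exists N \;\; \forall n \ge N : |x_n - L| < \varepsilon
```

Notice the first definition never points at a number the sequence is heading toward, which is exactly the feature I'm complaining about.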
Wow, that was a long post. Hope I don't get in trouble for it. There's information available on the net about all this and I would love to discuss
it further if anybody is interested.