You are sort of catching my interest. I'm not real advanced, but do like to dabble.
Question: Is there any connection between chaos theory and the butterfly effect?
The "butterfly effect", I think, was coined in a popularization of chaos theory. It doesn't really have any formal definition, but in a
general sense it means "sensitive dependence on initial conditions", or is at least an analogy for understanding sensitive dependence on initial
conditions. Here's my best try at an explanation:
Say we're in the army and we're doing ballistics calculations -- trying to figure out how to aim our cannon and how much powder to load,
like in that "Scorched Earth" videogame, if you're familiar. Let's also say we're on a totally flat field, so we can ignore the shape
of the ground, and we can also (for now) ignore the wind. Say we set as our "default" configuration a cannon angle of 45 degrees and 100 grams of
powder, and that the cannonball lands 100 feet away if we fire it with those initial conditions.
If we experiment a bit, we'll see that changing the angle a bit changes the location a bit -- say, something like
42 degrees -> 94 ft
43 degrees -> 98 ft
44 degrees -> 99 ft
45 degrees -> 100 ft
46 degrees -> 99 ft
47 degrees -> 98 ft
48 degrees -> 94 ft,
etc., and similarly changing the amount of powder (but keeping the angle at 45) makes something like the following table:
98 grams -> 95 ft
99 grams -> 98 ft
100 grams -> 100 ft
101 grams -> 102 ft
102 grams -> 105 ft
And now let's say we have a tub of exactly 100 grams of powder and we want to predict where our soldiers are going to be able to hit. Our soldiers
are good, but they're not perfect, so they could be off by a degree either way when they aim...so if they think they've set the angle to 45, it
could be either 44 or 46 (it could also be 45, we're just considering the total range it could fall under). That level of accuracy would mean that
we'd have to predict that the cannonball's going to travel somewhere between 99 and 100 ft -- just look at the table -- so with one degree of
possible error in our measurement of the angle we wind up with at most 1 ft of possible error in our calculation.
That's not that bad for battlefield work -- although we'd be pretty sorry soldiers if we're having to calculate how to hit something 100 ft away --
but what if we're trying to hit something tiny, say only an inch across (and let's say our cannonball's small, too, so we can't just get close and
assume the ball's big enough to crush it)? With our toy cannon the solution's pretty obvious -- if we just train the troops so they're only going
to get the angle wrong by at most 0.001 of a degree (or however precise they need to be), we can rest assured the cannonball's going to fall
within an inch of where we need it to go.
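If you want to dabble with this on a computer, here's a quick Python sketch of the well-behaved cannon. It's my own toy model using the standard no-drag projectile formula, and the muzzle velocity is made up (picked so the maximum range comes out near 100), so the numbers won't match the tables above exactly:

```python
import math

G = 9.8    # gravity
V = 31.3   # made-up muzzle velocity, chosen so max range is roughly 100

def cannon_range(angle_deg):
    # idealized flat-ground, no-wind range: v^2 * sin(2*theta) / g
    return V**2 * math.sin(2 * math.radians(angle_deg)) / G

# the better our aim, the better our prediction of where the ball lands
for err in (1.0, 0.1, 0.001):
    spread = abs(cannon_range(45) - cannon_range(45 + err))
    print(f"aim off by {err} deg -> landing spot off by {spread:.8f}")
```

Because 45 degrees sits right at the maximum range, the landing error here actually shrinks even faster than the aiming error does -- exactly the friendly, intuitive behavior that chaotic systems refuse to give you.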
So far so good -- this toy cannon is pretty intuitive, and as the accuracy of your setup increases so does the accuracy of your predictions -- but what
"chaos theory" studies, in a nutshell, is systems where this intuition breaks down. In a chaotic system, there's no longer a nice relationship
between your measurement accuracy and your prediction accuracy. In ordinary life -- and for most simple systems and objects -- the usual intuition
holds, and if your ability to set things up accurately (or measure your variables accurately) improves, your ability to predict what happens will
improve as well; in a chaotic system your ability to predict what will happen won't necessarily get any better with better information -- sometimes
it does, but most often it doesn't.
As an example, take the function
f(x) = 4x(1-x)
from my earlier post; it's usually called the logistic map. It's a pretty simple function, but you can use it as a toy
model of much more complex chaotic systems. For points from 0 to 1 (I'll write [0,1] to mean all points between 0 and 1, inclusive, from here on out)
it's very easy to tell where a point goes under the map:
if x is my point, f takes x to 4x(1-x), which is just the definition of the function. Similarly, if I apply f twice (call this f2 for now), f2(x) =
f(f(x)) = 4(4x(1-x))(1 - 4x(1-x)), which, while more complicated, is still pretty easy to calculate: given any particular x in [0,1], all we have to
do is run it through that expression to find f2(x).
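If it helps to see that concretely, here's a tiny Python sketch (just my own illustration) checking that applying f twice agrees with the expanded f2 expression:

```python
def f(x):
    return 4 * x * (1 - x)

def f2_expanded(x):
    # f(f(x)) written out by hand: 4 * (4x(1-x)) * (1 - 4x(1-x))
    return 4 * (4 * x * (1 - x)) * (1 - 4 * x * (1 - x))

x = 0.2
print(f(f(x)))         # apply f twice
print(f2_expanded(x))  # same number via the expanded formula
```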
So in theory our ability to calculate fn(x) (f applied n times) for any x in [0,1] and any positive integer n (1, 2, 3, ...) is straightforward, and
so in theory our predictive ability for this system is perfect -- if we want to know fn(x), all we have to do is calculate it. But what about the sequence
(x, f1(x), f2(x), f3(x), ...),
which for a particular x you can think of as the "trajectory" of x under f? How well can we predict the trajectory of x?
Again in theory we should be perfectly able to do so -- just run the calculations -- but let's take a look at an example (here's where a calculator
comes in handy):
if x = 1/2, then f(x) = 1, f(1) = 0, and f(0) = 0, so the whole "trajectory" is (1/2, 1, 0, 0, 0, ...) -- after two steps it settles at 0 and stays
there. But if x = 1/2 + 0.00001, say, x is going to go all over the place... try a few examples if you've got a nice calculator, but otherwise take
my word for it; 0.50001 isn't going to have a trajectory that looks at all like the one for x = 0.5.
If this were a cannon (or a cannon-like system) you could find a small enough number -- call it delta -- so that as long as x was within delta of 0.5
(i.e., if delta = 0.0000001, then as long as x is in [0.4999999, 0.5000001]) the trajectory would be "close enough" to the trajectory for
x = 0.5. But this is actually impossible for the logistic map; you may be able to find individual points near x = 0.5 whose trajectories are similar
to the one for x = 0.5, but you won't ever be able to find a range so small that every point within that range behaves similarly to x = 0.5, if you
see what I'm saying.
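If you don't have a nice calculator handy, here's a quick Python sketch (my own, nothing official) comparing the two trajectories; 0.5 falls into 0 after two steps, while its neighbor 0.50001 ends up wandering all over [0,1]:

```python
def f(x):
    return 4 * x * (1 - x)

x, y = 0.5, 0.50001   # two starting points a hair apart
max_gap = 0.0
for step in range(30):
    x, y = f(x), f(y)
    max_gap = max(max_gap, abs(x - y))

print(x)        # 0.5's trajectory got stuck at 0 after two steps
print(y)        # 0.50001 is still bouncing around somewhere in [0,1]
print(max_gap)  # at some point the two were over half the interval apart
```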
That's what it means to be chaotic, in a nutshell: although in principle one could run all the necessary calculations to find out what's going to
happen, in the real world you run into two problems:
a) your measurements won't ever be perfect
b) your ability to do calculations is limited
Non-chaotic systems are those for which a) isn't that big a problem, because it's possible to figure out "how good" your measurements have to be
and as long as they're that good or better you know how good your predictions will be; in a chaotic system no matter how accurate you are you're
never accurate enough, because even a tiny error in measurement can lead to a huge difference in outcome.
b) is also a problem: obviously for x = 0.5 there's a pattern to its trajectory (f(0.5) = 1, f(1) = 0, and 0 just maps to itself, so after two
steps it's stuck at 0), and so if someone asked you "where will x = 0.5 be after 25 applications of f?" you can take a shortcut and just say
"it'll be 0" -- but for x = 0.50001 you'll have to actually do the calculation yourself. For our simple function f this isn't that big a deal, but
a lot of very complicated systems are only computable if you can figure out some simplifications or shortcuts. For chaotic systems it's generally
not possible to find accurate approximations to most of the system, because each simplification builds in a little error, which causes part a) to
blow up on you.
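Here's that shortcut idea as one more little Python sketch (mine, not anything standard): for x = 0.5 the trajectory is stuck at 0 from step two onward, so "where is it after 25 steps?" takes no work at all, while for a nearby point like 0.50001 the only option is to grind out all 25 iterations:

```python
def f(x):
    return 4 * x * (1 - x)

def iterate(x, n):
    # apply f to x, n times in a row: f^n(x)
    for _ in range(n):
        x = f(x)
    return x

# shortcut: 0.5 -> 1 -> 0 -> 0 -> ... so the answer is just 0, no work needed
print(iterate(0.5, 25))    # 0.0

# no shortcut: the only way to learn f^25(0.50001) is to do all 25 steps
print(iterate(0.50001, 25))
```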
That's basically the butterfly effect right there -- for systems where even a small initial error can lead to enormous differences in outcome, the
butterfly flapping its wings is supposed to be the tiny error (i.e., the 0.00001 in our example) that can lead to a dramatic difference in outcome
(say, the difference between rain and sunshine).
What this means in general is that the ability to predict the behavior of complicated systems accurately is surprisingly limited if one wishes to
predict for the long-term, even when the short-term can be predicted very accurately indeed. You can see this with the "trajectories" from the
logistic map above:
if you limit yourself to a small window -- say, x,f1,....f5 or so -- you can see that if you pick two numbers really close to each other (like,
0.50001, 0.500011, say) in that window their behavior's pretty similar, but the further outside that window you get the more the two paths diverge
(which I'm guessing is what happens in the movie that started this thread -- slightly different beginnings leading to vastly different
endings over time). So, even though at each step of the way you can predict with perfect accuracy what comes next, if you want to step back a bit
it's hard to predict things accurately in the long term, which is what I think you're getting at with
think I remember reading that for some systems mathematicians could create predictive models of future events, but the probability of accuracy
was a function on top of it. Like at 10:00am tomorrow event x has a 40% chance of happening and at 10:34am tomorrow event y has a 70% chance of
happening, with perhaps mutual independence of x and y. And some periods of time were quite unpredictable.
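You can watch that prediction window close with a final Python sketch (again just my own illustration): the two nearby starting points from above track each other almost exactly for the first several steps, and then the gap between them blows up:

```python
def f(x):
    return 4 * x * (1 - x)

x, y = 0.50001, 0.500011   # starting points 0.000001 apart
for step in range(1, 21):
    x, y = f(x), f(y)
    # tiny gap for the first handful of steps, then it explodes
    print(step, "gap:", abs(x - y))
```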
The basics of dynamical systems are really straightforward and accessible if you can do algebra, and especially if you know basic calculus... I
really do recommend the book I linked to earlier; it's at least worth checking out at a bookstore and/or library. But the best thing to do is just
to play around with this a bit...