I came across this interesting piece of information while reading today. I believe it is important when considering the science vs. religion debate. The two main researchers I'm going to discuss are Stephen Wolfram, who researches cellular automata (Rule 110), and Edward Fredkin, who researched his theory of digital physics. Bear with me as I explain this; it will be a long post, but surely worth it if you find this type of thing interesting.
Stephen Wolfram provides extensive evidence to show how increasing complexity can originate from a universe that at its core is a deterministic,
algorithmic system (a system based on fixed rules with a predetermined outcome).
First we must understand what a cellular automaton is.
From Wikipedia:
"A cellular automaton is a discrete model studied in computability theory, mathematics, physics, complexity science, theoretical biology and microstructure modeling. It consists of a regular grid of cells, each in one of a finite number of states, such as 'On' and 'Off'."
"A simple computation mechanism that, for example, changed the color of each cell on a grid based on the color of adjacent or nearby cells according
to a transformation rule... The process involves repetitive application of a very simple rule. From such a repetitive and deterministic process, one
would expect repetitive and predictable behavior... There are two surprising results here"
It's basically a way of modeling how structures like plants build themselves.
There are 4 classes of cellular automata.
Class 1 - produces basic checkerboard patterns
Class 2 - produces arbitrarily spaced streaks
Class 3 - starts to become more interesting, as recognizable features such as triangles appear in the pattern in random order
Class 4 - the most famous example being Rule 110. This one produced the "aha experience" and resulted in Wolfram dedicating over a decade to this topic. Class 4 rules produce surprisingly complex patterns that do not repeat themselves. We see in them many different types of artifacts; however, the pattern is neither regular nor completely random. It appears to have some order but is never predictable.
Example of Rule 110
[atsimg]http://files.abovetopsecret.com/images/member/2bcee454a2ac.gif[/atsimg]
Why is this important? Keep in mind we began with the simplest starting point: a single black cell. The process involves repetitive application of a very simple rule. From such a basic rule we see complex and interesting features that show some order and apparent intelligence. Localized structures appear and interact in various complicated-looking ways.
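To make the mechanism concrete, here is a minimal sketch (in Python, my own illustration rather than Wolfram's code) of running an elementary cellular automaton from a single black cell; the rule number, grid width, and step count are just example parameters.
[code]
# Minimal elementary cellular automaton sketch (illustrative; not Wolfram's own code).
# Each new row is computed from the previous one by looking at every cell's
# left neighbour, the cell itself, and its right neighbour, then applying the rule table.

def run_elementary_ca(rule=110, width=63, steps=31):
    # Unpack the rule number into a lookup table for the 8 possible neighbourhoods.
    table = {(a, b, c): (rule >> (a * 4 + b * 2 + c)) & 1
             for a in (0, 1) for b in (0, 1) for c in (0, 1)}

    # Simplest starting point: a single black (1) cell in a row of white (0) cells.
    row = [0] * width
    row[width // 2] = 1

    history = [row]
    for _ in range(steps):
        row = [table[(row[i - 1], row[i], row[(i + 1) % width])]
               for i in range(width)]
        history.append(row)
    return history

if __name__ == "__main__":
    for row in run_elementary_ca():
        print("".join("#" if cell else "." for cell in row))
[/code]
Printing each row as # and . characters gives the same kind of triangle-filled, never-quite-repeating texture as the image above, even though every step is fully deterministic.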
Wolfram makes the following point repeatedly: "Whenever a phenomenon is encountered that seems complex, it is taken almost for granted that the phenomenon must be the result of some underlying mechanism that is itself complex. My discovery that simple programs can produce great complexity makes it clear that this is not in fact correct." Furthermore, the idea that a completely deterministic process can produce results that are completely unpredictable is of great importance, as it provides an explanation for how the world can be inherently unpredictable while still based on fully deterministic rules. We also see these same principles at work in fractals, chaos and complexity theory, and self-organizing systems such as neural nets, which start with a simple network but organize themselves to produce apparently intelligent behavior. At a different level we see it in the human brain itself, which starts with only about 30-100 million bytes of specification in the compressed genome yet ends up with a complexity that is about a billion times greater.
To put this in perspective, the number of bits (DNA) needed to create the brain is LESS than the number of bits it takes to describe your Microsoft Word program!
In 2000, Matthew Cook verified a 1985 conjecture by Stephen Wolfram by proving that Rule 110 is Turing complete, i.e., capable of universal computation. Among the 256 possible elementary cellular automata, Rule 110 is the only one for which this has been proven.
And that's why this is important.
"In computability theory, a collection of data-manipulation rules (an instruction set, programming language, or cellular automaton) is said to be
Turing Complete when the rules followed in sequence on arbitrary data can produce the result of any calculation. A device with a Turing complete
instruction set is the definition of a
universal computer. To be Turing complete, it is enough to have conditional branching (an "if" and
"goto" statement), and the ability to change memory."
However, Rule 110 by itself isn't enough to explain insects, humans, and Chopin, for example. Rule 110 can help explain how living things build networked structures, though.
But what happens when we add another simple concept - an evolutionary or genetic algorithm?
A genetic algorithm can start with randomly generated potential solutions to a problem, which are encoded in digital genetic code. We then have the solutions compete with one another in simulated evolutionary battles. The better solutions survive and procreate in a simulated sexual reproduction in which offspring solutions are created, drawing their genetic code (encoded solutions) from two parents. We can also add in other variables, such as the rate of genetic mutation or the rate of offspring; these are called "god parameters", and it is the job of the engineer designing the evolutionary algorithm to set them at reasonably optimal values. The process is run for many thousands of generations of simulated evolution, and at the end of the process we will find solutions that are of a distinctly higher order than the starting ones.
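Here is a minimal sketch of that loop (my own illustration; the target string, population size, and mutation rate are arbitrary example values standing in for the "god parameters" mentioned above). It evolves random bit strings toward a target pattern through selection, crossover, and mutation:
[code]
import random

# Toy genetic algorithm: evolve random bit strings toward a target pattern.
# The target, population size, and mutation rate are illustrative "god parameters".
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
POP_SIZE, MUTATION_RATE, GENERATIONS = 60, 0.02, 200

def fitness(genome):
    # How many bits match the target (higher is better).
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(mom, dad):
    # Offspring draws its genetic code from two parents, split at a random point.
    cut = random.randrange(1, len(TARGET))
    return mom[:cut] + dad[cut:]

def mutate(genome):
    # Occasionally flip a bit, controlled by the mutation-rate parameter.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break  # a perfect solution has evolved
    survivors = population[: POP_SIZE // 2]  # the better solutions survive
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

population.sort(key=fitness, reverse=True)
print("generation", gen, "best", population[0], "fitness", fitness(population[0]))
[/code]
Run it a few times and the population typically climbs from roughly half-matching strings to a perfect match within a few dozen generations, which is the "distinctly higher order" result described above.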
In 1948, Norbert Wiener heralded a fundamental change in the focus of science from energy to information with his book Cybernetics. He suggested that the transformation of information, not energy, was the fundamental building block