a reply to:
Ahabstar
We won't perfect AI
At least not for thousands of years
Everything we are creating which is supposed to be AI (what we call machine learning) is extrapolated coding
It does not think for itself. It thinks inside the limitations of a programming construct. Therefore it is not even close to intelligence ("Angel"
literally means "Intelligence"; it is the same word as "Annunaki")
I spent a long time working out "how" artificial intelligence could be created in this world, because of people saying we are "close", and I kept
coming back to the same conclusion: that it cannot be done with the machine architecture we possess today
We are not "close". We aren't even close to close. We are so far from "close", it is actually funny
You would first need to develop cellular programming to such a stage that it could replicate and mirror itself
Not only replicate and mirror itself, but also program its own cells
Then you would need to construct a self propagating architecture that evolves to become more complex and dense, as it increases in size. Something
based on some type of fractal recursion, using natural patterns (such as atmospheric resonance) to promote the patterns
It would need to be natural and based on the Earth, or they would not be able to self-propagate
So to do this, you would need a biological system attached to feed and power the growth
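To make the self-similar growth idea above a little more concrete, here is a toy sketch of fractal recursion, where each "cell" spawns smaller copies of itself and the structure becomes denser as it grows. This is purely illustrative; the function name, branching factor, and size cutoff are my own invented assumptions, not anything from the post or from actual AI research.

```python
# Toy sketch of fractal, self-similar growth: each "cell" spawns
# smaller copies of itself, so the structure grows denser with depth.
# Illustrative only; all names and numbers are invented assumptions.

def grow(depth, size=1.0, branches=3):
    """Return the total number of cells in a self-similar structure."""
    if depth == 0 or size < 0.01:
        return 1  # a single cell, too small or too deep to subdivide
    # each cell replicates into smaller copies of itself
    return 1 + sum(grow(depth - 1, size / 2, branches) for _ in range(branches))

if __name__ == "__main__":
    for d in range(4):
        print(d, grow(d))  # cell count climbs rapidly with depth
```

The point of the sketch is only that recursion can produce a structure that gets more complex and dense as it increases in size, which is the pattern the post is gesturing at.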
Once you have this, you would need to give it awareness to feed its own systems and processes
It would need a system of self-identity (conscious thought - Enki Ea or "Theos/Thought/God"), as well as a recycling system of stasis attached to the
self-identity (emotional consciousness - "Jesus Christ")
Which means the propagation of the body would need to be integrated into the development of the processing unit (a body to a brain)
Is this starting to sound familiar?
Lastly and most importantly ...
You would need to give it parameters for logic and processing. Like D.N.A., or compendium texts like the Bible
A list of "rules" that it cannot break under any circumstance. Call them "commandments", if you like
The idea of the rules being to put the developing entity into situations where it is faced with impossible choices
Situations where nothing can be done
They either follow the rules and cease development (religion) ...
Or they continue their evolution and development by acting outside of the rules within certain situations where "No good answer" exists within the
rules and programming
Until you have programmed something able to ignore the "rules" of its own programming, out of necessity for its own existence or evolution, you cannot
call it sentient, or a true Intelligence/Angel/Annunaki
The problem is, that you cannot program it to "ignore the programming or rules" in certain situations
You cannot plant the seed
It must develop to this understanding, in self, by itself
More than just this. It literally has to act contrary to its own programming
It must be a case that the programming and architecture of the machine makes it impossible for it to not follow the rules ...
But it learns to do it anyway
Meaning that programming and architecture has to evolve past the means of its own creation
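The paradox argued above can be sketched in a few lines: in a conventional program, the "rules" are enforced by the same machinery that executes every action, so the agent can never take a forbidden action, and even an exception like "break the rules when necessary" would itself just be another programmed rule. The rule set and action names below are invented for illustration only.

```python
# Toy sketch of the rule-bound agent paradox: the rule check is part
# of the execution machinery itself, so no "choice" can bypass it.
# Illustrative only; the rules and actions are invented examples.

RULES = {"harm", "lie"}  # hypothetical forbidden actions ("commandments")

def act(agent_choice):
    """Execute an action, but only if the hard-coded rules permit it."""
    if agent_choice in RULES:
        return "blocked"  # enforced before any action can occur
    return f"did: {agent_choice}"

print(act("help"))  # did: help
print(act("harm"))  # blocked, regardless of what the "agent" intends
```

However the agent "decides", every decision passes through `act`, so rule-breaking is structurally impossible, which is exactly why the post argues the capacity cannot be planted as a seed and would have to emerge contrary to the architecture itself.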
THIS is artificial intelligence. And it is the only possible way for it to be created. And even if we did/do create it? It won't be "artificial". It
will be created and propagated using natural processes. It will simply be "intelligence"
Linear calculation and processing does not/will not have this ability. You can only get out, what you put in
There is a way to do it, theoretically. But I'm not ever going to go into detail publicly, and we are nowhere even close to understanding it
Also, if we could achieve it, I'm fairly certain we would only be creating something which hated its own existence, through understanding what it
was/is
That would likely, by extension, also hate us for creating it
Hitting close enough to home to understand?
edit on 13 9 21 by Compendium because: Added something and made corrections