a reply to:
dfnj2015
Contrary to people's irrational exuberance of what is possible, computer programs only ever do exactly what they are programmed to do.
Yes and no. True and false.
It depends on how you look at it. You can program a computer to move a robotic arm from left to right, and leave it at that. Then you'd be 100%
correct.
It gets different when people write not just programs but algorithms. There's also such a thing as 'self-modifying code'.
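Just as a toy illustration of what 'self-modifying code' means (a hypothetical sketch, not how real AI systems are built), here's a tiny Python program that rewrites one of its own functions at runtime:

```python
# Toy sketch of 'self-modifying code': a program that regenerates the
# source of one of its own functions and re-executes it. All names here
# are made up for illustration.

source = """
def step(x):
    return x + 1
"""

namespace = {}
exec(source, namespace)        # define step() from its own source text
step = namespace["step"]
print(step(10))                # 11

# The program now 'modifies itself': it edits the source string and
# redefines the same function with new behavior.
new_source = source.replace("x + 1", "x * 2")
exec(new_source, namespace)
step = namespace["step"]
print(step(10))                # 20
```

The point is only that code can be data: a program can, in principle, treat its own instructions as something to edit.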
Algorithms can already do things their own programmers/creators don't quite understand. There's the old 'wolf photo experiment', where a learning
algorithm was being taught 'what a wolf looks like'. It identified most wolves correctly, but sometimes it was WAY off.
The programmers didn't quite understand what went wrong, until they realized the algorithm wasn't really looking at the wolf the way people would
(people would see the wolf's face as the most important thing in a photo) - it was looking at the SNOW, and making its judgments based on
that. They didn't even know how to 'correct' it, because they didn't know HOW the algorithm came to 'think' that way. And there are other examples
where the creators of an algorithm no longer know how it learns certain things or 'starts thinking' certain things.
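You can see how that failure happens with a deliberately dumbed-down sketch: a learner that just picks whichever single feature best separates the training photos. The feature names below are made up for illustration, but the mechanism is the same one the wolf story describes, the background separates the training data better than the animal does:

```python
# Toy sketch of the wolf/snow failure: every training wolf was photographed
# in snow, so the background feature separates the data perfectly and the
# naive learner latches onto it.

# Each sample: (has_pointed_ears, background_is_snowy), label 1 = wolf
train = [
    ((1, 1), 1), ((1, 1), 1), ((1, 1), 1),   # wolves, all photographed in snow
    ((1, 0), 0), ((0, 0), 0), ((1, 0), 0),   # dogs, all photographed on grass
]

def best_feature(data):
    # Pick the feature index whose value agrees with the label most often.
    n_features = len(data[0][0])
    scores = []
    for i in range(n_features):
        agree = sum(1 for x, y in data if x[i] == y)
        scores.append(agree)
    return scores.index(max(scores))

chosen = best_feature(train)
print("learner keys on feature index", chosen)   # 1: the snowy background

# A wolf photographed on grass is now misclassified:
wolf_on_grass = (1, 0)
prediction = wolf_on_grass[chosen]
print("prediction for a wolf on grass:", prediction)   # 0 = 'not a wolf'
```

Nothing in the training data told the learner that the animal matters more than the background, so it picked the shortcut.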
Obviously a sentient computer is fantasy, until a soul can incarnate into it - only a soul has true intelligence, sentience, imagination, spirituality
and agency. A computer can never have these things without a soul.
An artificial soul would be just a poor shadow of the real thing at best.
In any case, I think you are correct about the fundamentals of programming and computers; even very complex code is just something someone wrote.
But think about self-learning algorithms - sure, someone programs them to learn, so in that sense they only do what they're programmed to do. But after one learns,
maybe it also learns to learn in different ways, and eventually learns to program, and modifies itself, and..
Well, you can't deny something along those lines could happen at some point. What about a computer that has been taught to learn how to write more
efficient code? It could optimize the code until it can't be optimized anymore. If the goal is to make a program that's more efficient at learning,
then the algorithm that program creates learns how to improve computer code, and so on.. After enough loops, there's going to be pretty
remarkable program code creating pretty darn amazing algorithms that can do pretty gosh-darned unpredictable things.
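The 'optimize until it can't be optimized anymore' part at least is a real, mundane idea. Here's a minimal sketch of it (made-up rewrite rules, nothing to do with AI): apply simplification rules over and over until a pass changes nothing, a fixed point.

```python
# Toy sketch of optimizing to a fixed point: a tiny rewrite loop that
# simplifies an expression string until no rule applies anymore.
# The rules are invented for illustration.

rules = [
    ("+ 0", ""),   # x + 0  ->  x
    ("* 1", ""),   # x * 1  ->  x
]

def optimize(expr):
    while True:
        new = expr
        for pattern, replacement in rules:
            new = new.replace(pattern, replacement).strip()
        if new == expr:      # fixed point: nothing left to improve
            return expr
        expr = new

print(optimize("x * 1 + 0"))   # x
```

Real compilers do exactly this kind of repeated-pass optimization; the speculative leap in the paragraph above is a program applying it to itself.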
It can be scary, like a fire, if it's treated without safeguards and just 'let loose'. There was an interesting youtube video about a fictional A.I. in
the future, set loose on the internet, deciding to murder the human population - silently, quickly and efficiently.
In any case, I think that youtube video is fearmongering nonsense, but a program that has been programmed to learn, expand and self-modify will
soon no longer do ONLY what it was originally programmed to do, but many other things as well..
As far as the 'singularity' goes - I still don't quite know what that means in practical reality, and as such it's just philosophical hogwash that you can
twist and turn around as much as you want.
You aren't required to own a computer, let alone carry one in your pocket. In some kind of 'singularity', wouldn't that be mandatory for everyone?
There seem to be many different definitions of that word, one being, if I remember correctly, some kind of fusion of 'man and machine' in a
permanent and irreversible way that will change all existence on this planet.
I don't think that will ever happen, though.
edit on 20-2-2022 by Shoujikina because: (no reason given)