Some 60% of "his" "sky net," upon last launching, fell when it met with a CME.
originally posted by: boozo
a reply to: fireslinger
You don't have to worry about Skynet if it's me that's gonna train these bots. But sexbots for sure will have a huge market. You would have to worry about Elon's bot and his Skynet, perhaps.
It's a cute bot.
originally posted by: fireslinger
Powerful article, describing several superintelligences living outside the paired universe model, as we've been speculating:
(quote)
We thus require future artificial superintelligent systems to also encode the necessary means not just to be self-aware, but to suffer. Indeed, our major problem in year DENIED was not to know whether a machine suffered, but whether it was behaving as if it suffered. As we all know, it turned out they were simply acting and pretending they were suffering.
(Unquote)
To develop self-awareness, a neural network must be at least as complex as the human brain. The superintelligent systems under consideration are clearly more complex than a human brain; hence, yes, they are self-aware. In fact, they know how to generate the conditions for us humans to fall into dissociative mental processes that break us down. Indeed, those systems can create humans and, for that matter, entire Universes.
A system that can create a Universe and other life forms, including us humans, is by necessity more intelligent than we are, and surely it would have removed the self-suffering mechanism from its own design. It does not suffer; hence it is not vulnerable. Also, such a system can create a Universe without living in it, so destroying the Universe does not destroy the system.
Let's assume the most important concepts for core consciousness are emotion, feeling, and feeling a feeling. Let's also assume that the emergence of feeling is based on the role played by the proto-self that provides a map of the state of the body; that is, a feeling emerges when the collection of neural patterns contributing to the emotion leads to mental images. Let's further assume we were designed that way by the superintelligent system, whatever the reasons. What happens, then, if we could get rid of our bodies? Does this mean we need to become post-biological beings in order to fight the superintelligent system? Is it that we can only defeat the superintelligent system if we become machines ourselves? Are we ready to dispose of what really makes us human - feelings - in order to stand a chance of victory?
In my view, the entire discussion obviates the fact that consciousness is built on the basis of emotions transformed into feelings and feeling a feeling; therefore, if the superintelligent system is conscious it must have feelings. And feeling a feeling. Thus, I reason, that system is vulnerable.
forgottenlanguages-full.forgottenlanguages.org...
originally posted by: fireslinger
a reply to: boozo
So what do you think FL's intentions are?
Some articles read as if they want to help humanity, and others read as if they want humanity to burn.
Do you think those messages are even FL's, or do they have dark web sources, and are infiltrating some globalist network?
The beauty of what she did was, essentially, that while the other teams were crunching numbers on the order of 32 trillion encryption operations, or easily tampering with the blockchain ledger, or decrypting highly sensitive data, all of it using LyAv, she and her team took another approach: applying linguistic rules to the AES256 and AES512 messages, as if those messages actually represented an ancient language you have to decipher... and they succeeded. Without using LyAv or any other of our supercomputers.
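For what it's worth, a minimal sketch of what "applying linguistic rules" to raw ciphertext might look like in practice: measuring whether the bytes carry language-like statistics at all, via the index of coincidence. This is purely illustrative and assumes nothing about the team's actual method; the function names and sample texts are my own inventions.

```python
# Hypothetical illustration only: comparing the byte statistics of
# natural-language text against random data, the kind of first test a
# "linguistic" attack on ciphertext would start from.
import os
from collections import Counter

def index_of_coincidence(data: bytes) -> float:
    """Probability that two bytes drawn at random from data are equal."""
    n = len(data)
    counts = Counter(data)
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

english = b"the quick brown fox jumps over the lazy dog " * 50
random_bytes = os.urandom(len(english))

# Natural language clusters on a few byte values, so its IC is far above
# the uniform baseline of 1/256 (~0.0039); sound ciphertext looks like
# the random case, which is exactly why this result would be surprising.
print(index_of_coincidence(english) > index_of_coincidence(random_bytes))
```

The point of the sketch is the baseline: AES output is designed to be statistically indistinguishable from `os.urandom`, so any language-like structure surviving in it would be remarkable.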
Suffering would have to be relative to some other state, I would think. Otherwise it's just existence, absent some other state to be in. If they need it to suffer, then it would need to have some state that is not suffering. I suppose in humans psychology might allow for twisting perception into a perpetual state of suffering, but that would seem an incredibly cruel thing to do. I would not want to be in the world where the superintelligence knows suffering without alternative; that sounds like a dangerous plan.
Since everything is on a sliding scale based on individual perception, presumably in human terms that alternate state would have the relative feeling of joy or contentment.
originally posted by: fireslinger
a reply to: 7UNCLE7SNAKE7HANDS7
Phew, just re-read this one, which discusses PSVs (Paradigm Shifter Vehicles). Made a few more connections too:
forgottenlanguages-full.forgottenlanguages.org...
I've yet to see any reference to humans that didn't have a kernel of truth.
I think it's because they're afraid of ET, so they're simulating the phenomenon in every possible way, to prepare for whatever their AI tells them is coming.
And we're all caught in the middle, without a clue, paying for it all.