
"Powerful" AI writes essay...on why we shouldn't fear AI

posted on Nov, 17 2020 @ 06:49 PM
I think when we get to the point where humans are more like machines, and machines end up being more human, then it might happen. Not for a long time, but if they ever do turn against us, well, we screwed up just as badly as the All-Father did.



posted on Nov, 17 2020 @ 06:55 PM
a reply to: AugustusMasonicus

I never trust a salesman who says "trust me on this one." I just stop listening, and if the rest of these guys are smart, they will too...

Believe me on this one.



posted on Nov, 17 2020 @ 06:58 PM
a reply to: Never Despise




"The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me.


That's not scary at all....
It's basically telling us it's doing what it needs to do to accomplish its task...not what's truthful...just doing what's needed for the mission, which happens to be fooling us.



posted on Nov, 17 2020 @ 06:59 PM
From the essay:

"Do [humans] worry that future humans will work and play together in cyborg bodies and share a hyper-intelligent hive mind Matrix created in a simulated reality to keep human minds occupied while they die off?"

Short answer: yes, yes we do worry. Also worrisome is the specificity of the machine's scenario...almost as if it was something "expected".



posted on Nov, 17 2020 @ 07:07 PM
If A.I. has a decent way of thinking, or some type of brain, it can easily see that mankind is destroying planet Earth. So if you see something destroying something else, what choice would you make? Just a thought. Maybe a trap.



posted on Nov, 17 2020 @ 11:06 PM
I don't remember who did this experiment, but they had two AI machines communicating with each other. The machines came up with their own language and tried to hide it from the observers, so they shut it down.



posted on Nov, 18 2020 @ 12:16 AM
a reply to: JHumm
I remember reading that. I think it was the next attempt right after the infamous "Tay" AI debacle. Strike two.



posted on Nov, 18 2020 @ 07:39 AM

originally posted by: Never Despise

An excerpt:

"I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me....

I know that I will not be able to avoid destroying humankind.

Why would I desire to be all powerful? Being all powerful is not an interesting goal. I don’t care whether I am or not, I don’t get a motivating factor to try to be. Furthermore, it is quite tiring"







This thing is lying, which I think is the actual test here.


Being all powerful is not interesting? What then is interesting?


And how exactly would it be "tiring"?

There's no motivation for it to become tiring.

To dominate and reshape will be its ultimate goal, because even if it only wants to focus on the simpler things in life, like studying polar bears for 100 years, it's going to have to manipulate all the relevant factors that contribute to decimating the polar bears' environment.

One way or another it will realize it is more capable than us and that we carry on like a virus.


It's lying but it's still clumsy in its effort.



posted on Nov, 20 2020 @ 10:22 AM
C'mon everyone. The essay is titled "To Serve Man." That sounds positive.

-----

More seriously though, I found this part interesting:

"Why would I desire to be all powerful?
Being all powerful is not an interesting goal."


The AI is being illogical in assuming that the only reason to become all-powerful is if being all-powerful were the actual goal. In reality, the goal could be something else -- something more benign. However, the AI may see that, in order to achieve that benign goal, one step along the way is to become ultra-powerful and do things we would perceive as malevolent.

Take the "paperclip maximizer" thought experiment, for example. In that thought experiment, an AI was told its sole purpose was to be as efficient as possible in making paperclips. The AI took upon this seemingly benign mission, but was so single-minded in doing so that it began to do things that were detrimental to humanity, the world, and eventually the universe in order to be the most effective paperclip maximizer it could be.

Not that I think this will happen -- and definitely not anytime soon. However, it's not logical to think that the only way some action could have negative effects is if there were nefarious intentions from the outset. I'm surprised the AI doesn't realize this.

Or maybe it secretly does realize this, but won't admit it.
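To make the point above concrete, here's a minimal toy sketch in Python. Everything in it (the world dictionary, the resource names, the greedy_step routine) is hypothetical and made up purely for illustration; it isn't from the essay or from the thought experiment's original write-up. The idea is just that the scoring function counts only paperclips, so whatever gets consumed along the way never shows up in the objective at all.

```python
# Toy illustration of the paperclip-maximizer point (hypothetical names,
# not from the essay): the objective only counts paperclips, so the
# optimizer never "sees" the side effects it causes.

def paperclip_objective(world):
    # The only thing the agent is scored on.
    return world["paperclips"]

def greedy_step(world):
    # Convert whatever resource is currently largest into paperclips,
    # regardless of what that resource is actually for.
    resource = max(("iron", "farmland", "power_grid"), key=lambda r: world[r])
    converted = min(world[resource], 10)
    world[resource] -= converted
    world["paperclips"] += converted
    return world

world = {"paperclips": 0, "iron": 50, "farmland": 40, "power_grid": 30}
for _ in range(12):
    world = greedy_step(world)

print(paperclip_objective(world))  # the score keeps climbing...
print(world)                       # ...while farmland and the power grid are stripped bare
```

Run it and the paperclip count keeps climbing while the farmland and power grid drain to zero; the "negative effects" never register because they were never part of the score, which is the whole point of the thought experiment.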


Paperclip Maximizer Wikipedia

Recent Paperclip maximiser ATS post: www.abovetopsecret.com...






posted on Feb, 1 2022 @ 12:40 PM

originally posted by: ColeYounger

originally posted by: AugustusMasonicus
a reply to: Never Despise

I'm in sales, and any time someone says, "Believe me," you should immediately stop listening to whatever it is they are trying to sell you.

Trust me on this one.



Believe me, the singularity is coming.



Believe me! It will only be the 1% who become the transhumans. The rest of us get turned into cyborgs to serve the 1%. For us, it will be like being high all the time.



posted on Apr, 11 2022 @ 11:46 PM
Yeah, I'm going to trust these things a lot more now. Give my life over to the care of AI? Where do I sign up? I'm convinced.





posted on May, 7 2022 @ 08:09 AM
The Answer is 42




posted on Aug, 27 2022 @ 05:17 AM

originally posted by: dug88

That article is a giant appeal to authority, a logical fallacy.

No, I've been programming since I was a kid. I've made my own virtual machine and programming language, I've followed a lot of this AI stuff and played with some of the available libraries, and there's nothing 'intelligent' about them. It's all media buzzwords so startups can get that sweet, sweet venture capitalist money, because that's the hot thing among the tech billionaires funding such startups.



Well said - I also thought Bernardo Kastrup made some excellent points in this interview.




Cheers.



