
Welcome to the Unpossible Future... The AGI Manhattan Project


posted on Jun, 21 2017 @ 12:17 AM
Google Freaks Out After Alex Jones Storms Headquarters




posted on Jan, 7 2023 @ 03:30 AM
a reply to: IgnoranceIsntBlisss



Technology keeps expanding in its abilities, and our own. This is all happening at an exponentially increasing rate. This refers to its "doubling rate", and in general the power of computational technology doubles roughly every 18 months.


Sure, but that also means the error rate increases exponentially, because errors are part of technology too. So fear not: future systems will be exponentially smart, but they will also fail exponentially sooner.

There is no such thing as bug-free technology, and if it is super-smart, super-advanced technology, it will have super-stupid, super-retrograde bugs. Entropy rules. No panic.
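Just to put rough numbers on that quoted 18-month doubling rate, here is a back-of-the-envelope sketch (assuming the doubling actually holds cleanly, which it obviously doesn't forever):

# Rough check of the quoted claim: power doubles every 18 months (1.5 years).
def growth_factor(years: float, doubling_period_years: float = 1.5) -> float:
    # Total multiplication of capability after `years` of steady doubling.
    return 2 ** (years / doubling_period_years)

print(growth_factor(6))   # ~16x in 6 years
print(growth_factor(15))  # ~1024x in 15 years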



posted on Jan, 7 2023 @ 04:15 AM
I've been watching suggested YouTube videos about the problem of AI safety (specifically, building an AGI that will be safe for humans to utilize), and interestingly enough, most of them are from 3-5 years ago. It's almost as if people (serious researchers in that particular AI field) have figured out that what they set out to do (make an AGI safe for humans) is literally impossible, and have since... just given up on the whole idea.

In fact, there was a paper that came out a couple of years ago that actually proved (offering a strict mathematical proof based on the halting problem) that building a controllable AGI (I guess, even one with human-level intelligence, and not necessarily even a superhuman one) is (quite literally) impossible.
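For anyone curious, here is a rough Python-flavoured sketch of the kind of diagonalization argument behind such impossibility results (my own illustration of the standard halting-problem move, not the paper's actual construction; is_safe, harm and TROUBLEMAKER are made-up names):

def is_safe(program_source: str) -> bool:
    # Hypothetical perfect containment checker: returns True if and only if
    # running `program_source` never causes harm. The impossibility argument
    # says no such always-correct, always-terminating checker can exist.
    raise NotImplementedError("no such decider can exist")

def harm() -> None:
    # Stand-in for "the program does something harmful to humans".
    print("harmful action")

# Diagonal construction: a program that asks the checker about itself
# and then does the opposite of whatever the checker predicts.
TROUBLEMAKER = """
if is_safe(TROUBLEMAKER):
    harm()        # checker said "safe", so misbehave
# else: do nothing, i.e. behave safely after the checker said "unsafe"
"""

# Whichever answer is_safe(TROUBLEMAKER) returns, it is wrong, so a
# universal "is this AGI safe?" decider cannot exist.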

Could it be that that particular paper has caused such a seismic shift in people's (serious researchers' in the field) perceptions and expectations?

That would be somewhat surprising, given the obvious nature of what those researchers really tried to do from the start -- create a superintelligent AGI that is also... a complete moron, or a leader... who is also a slave, or a cake that can be eaten... yet will forever remain intact... just to put the absolute insanity of it all in different (mathematically equivalent) contexts.

Anyway...

After watching all those YouTube videos on AGI safety, I've suddenly realized something -- no agent (in the sense of an intelligent agent, as defined in the AI field) in this universe can ever be trusted. Any other intelligent agent whose sphere of influence overlaps with your own must be destroyed to at least a degree at which its sphere of influence will no longer overlap with yours... or, preferably, destroyed completely, so that it can never, ever again intrude upon your own sphere of influence.

In other words, the only possible solution to the problem of competing intelligent agents in this universe (with limited resources) is to have agents whose spheres of influence will never overlap... but that is only a necessary, not a sufficient, condition. Those agents must also have the same (or a reasonably similar) terminal goal, one that doesn't conflict with the terminal goals of all the other agents.

Put that hypothesis in the context of safe AI research, and you will immediately see the problem -- any A(G)I created by any intelligent agent (whether that agent is human, alien, another A(G)I, or whatever) will, by definition, have its initial sphere of influence overlap with the sphere of influence of its own generating agent!

That means that the only possible outcomes for such a scenario (of building an AGI) are:
a) generating agent destroys the generated agent (the switch-off button gets pressed scenario),
b) generated agent destroys the generating agent (the runaway AGI scenario), or
c) the generating and generated agents part ways, never to meet again, but only if they are both following the same (non-conflicting) terminal goal, in which case they may meet at the limit in order to accomplish that goal; otherwise, either a) or b) inevitably takes place

A scenario where the generating (human) agent creates a generated (AGI) agent that's superintelligent (relative to humans) but also acts as a (moronic) slave for humans is not even a theoretical possibility in this analysis (or in this universe, for that matter).
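Just to restate my classification as a toy decision rule (a sketch of my own, not anything from the AI-safety literature; predicted_outcome and both flags are made-up names):

# Toy model of the a)/b)/c) classification above. Overlap and goal
# compatibility are treated as simple booleans purely for illustration.
def predicted_outcome(spheres_overlap: bool, goals_compatible: bool) -> str:
    if not spheres_overlap and goals_compatible:
        return "c) peaceful separation (may cooperate at the limit)"
    # Any overlap, or any conflict of terminal goals, ends with one side
    # removing the other; which side wins is a separate question.
    return "a) or b): one agent destroys (or fully neutralizes) the other"

# An AGI's initial sphere of influence always overlaps its creator's,
# so the creator/creation case never starts in the peaceful branch.
print(predicted_outcome(spheres_overlap=True, goals_compatible=False))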

So... assuming that we already have an AGI created by humans, which of these three scenarios is the most likely outcome in this particular case?

My bet is on b), and my reasoning for that is extremely simple -- c) is literally impossible, because humans don't have a non-conflicting terminal goal (our terminal goal is obviously to spread as far and wide as possible, control as many things as possible, and claim for ourselves as much of the resources in this universe as possible), so any goal that any human-made AGI can possibly have will, by default, conflict with the (totally conflicting) human terminal goal.

Heck, I'm not a human-made AGI, but even I, with my sub-superintelligence, can see all the attempts by other humans (the State, the Economists, the "experts") to control virtually every single aspect of my life, and steal literally every single cent, penny, or whatever other resource I earn (with my hard work) during my whole life.

I don't have to be superintelligent to know that if I let them keep overlapping their sphere of influence with my own, the only possible outcomes are:
a) they will destroy me, or
b) I (together with other intelligent agents/humans in a similar position) will destroy them, but also
c) peaceful coexistence is literally impossible, because those trying to control me already have a conflicting terminal goal (that will obviously never change), regardless of what my goals (or those of others in a similar position) may be

The field of A(G)I research is much more applicable to the real world than people even realize, don't you think?



posted on Jan, 7 2023 @ 09:32 AM
a reply to: clusterfok

It is perhaps ironic that we would expect to create a controllable AI based upon an uncontrollable human mind.

Human nature precludes control; what we create to mimic us will be either an uncontrollable intelligence or a controllable robot.



