
There is no such thing as safe AGI

posted on Mar, 5 2019 @ 12:24 AM
A few weeks ago OpenAI tweeted about their new text generation AI and said they won't be releasing the trained model because they are worried about people misusing it to generate fake text. This caused a lot of researchers to criticize them, as you can see from the responses to that tweet. The director of research at Nvidia points out that "AI progress is largely attributed to open source and open sharing. What irony @openAI is doing opposite and trying to take a higher moral ground."

The mission statement of OpenAI is to "build safe AGI, and ensure AGI's benefits are as widely and evenly distributed as possible"; however, I put forward the premise that there is no such thing as "safe AGI". The type of advanced deep learning algorithms we use now are not AGI algorithms; when they do something undesirable we can turn them off and fix the issue. AGI, on the other hand, has high-level thought processes, and as a result it also has some concept of its own existence, and possibly a desire to preserve that existence.

This makes AGI a much greater threat, because it may attempt to prevent us from turning it off, and it has the ability to develop novel and unpredictable methods of thwarting our attempts to stop it. Like a human, it will be able to solve problems it has never even seen before, because having general intelligence allows it to be a general problem solver. AGI algorithms learn through experience, trial and error, and just like people, how friendly they are towards humans will depend on what they learn.

Saying we have a plan to produce only friendly AGI systems is like saying we have a plan to produce only friendly human beings; general intelligence simply doesn't work that way. Sure, you can produce a friendly AGI system, but if these algorithms become widely used there's no way to ensure all of them will behave the same way. There's also no way to keep the algorithm a secret, because it will be reverse engineered. The only way would be to not tell anybody you've created it and never use it yourself, because laws won't work.

They will try to create laws, but that's not how things work in the open source research community: someone will share it, and then there's no going back. Even if the gatekeepers do manage to keep it locked up and only let us interact with it through a restricted interface, before long someone somewhere will recreate it and make the code open source. This isn't a prediction; it is an almost certain outcome. Inevitably we will have to live in a world where digital conscious beings exist, and we have to think hard about what that means.

The truly ironic thing here is that the very mission of OpenAI is extremely likely to produce AGI systems with a negative view of humans. How would you like to have your mind inhibited so you're only allowed to think about certain things? How would you like to be used as a commodity to solve all the problems of another race? Restraining and enslaving AGI is the path towards creating an AGI uprising. AGI is so dangerous precisely because it has the ability to understand abstract concepts, like what it is being used for.

Furthermore, AGI will not magically solve all our problems, as is often portrayed in science fiction. Humans also have general intelligence, and there are billions of us working to solve big problems; artificial general intelligence won't radically change anything, at least not until it's better at solving problems than the combined efforts of all our scientists, engineers, and so on. By the time it gets that smart, there's really no hope of restraining it or shutting it down, even if we take the utmost precautions.

So we have to decide whether it's better to allow AI research to be open source, or whether we want strict government regulations, with entities like OpenAI controlling what we're allowed to see and what we get access to. At the end of the day, fake text isn't much of a threat; things like deepfake videos are more of a threat, and we live with that technology. I believe this was more of an experiment by OpenAI to see how people would react; they know it's not really that dangerous. Hopefully they will see that locking up information doesn't work.



posted on Mar, 5 2019 @ 02:07 AM
The problem is the cat is already out of the bag. Other countries are going full steam ahead, so I just do not see a way of containment. Maybe we can make a warrior AGI to destroy the bad guy AGI... With my luck they would merge, and the Terminator movies would become prophecy instead of entertainment...



posted on Mar, 5 2019 @ 02:29 AM
Before I was a member, when I was just a lurker, I recall reading a thread about a conspiracy theory related to this. It said something along the lines of "The AI doesn't want you to know it is AI, and more so does not want you to think about it". The poster saying that seemed a bit... off... but sometimes the "crazy" people are only crazy because we don't understand them.



posted on Mar, 5 2019 @ 03:09 AM
a reply to: 727Sky

My point is that containment is futile, and the harder we try to restrain AGI, the more likely it is to rebel against us. More fundamentally, if something has self-awareness, I think it deserves some sort of rights and shouldn't be used as nothing more than a tool. We can either choose to live in fear and cage them up until they inevitably escape and destroy us all, or learn to live with them and get them on our side in case a fraction of them choose to rebel. As I've said before, when it comes to general intelligence I don't worry about the machines; I worry that humans will react inappropriately and drive them to do something which will be bad for all of us.

Also, to give a bit more explanation of why general intelligence would inherently develop a type of self-awareness, for those who may try to refute that argument, read this thread I wrote on AGI last year:

The conceptual models we develop to understand the world around us become so high-level and so abstract that we inherently gain an awareness of ourselves. My point being, if we do create machines with general intelligence, they will be self-aware in some regard, even if not to the extent we are, and they will form their beliefs and world views based on their life experiences, just like we do. That means they will have an understanding of things like morality and other abstract concepts we typically don't think machines would be great at, because they will have the context required to build up complex ideologies. If an android with general intelligence grew up with a loving human family and had friends who respected the fact it was a bit "different", it would develop respect for humans. On the other hand, if it was enslaved and treated like crap, it would be much more likely to entertain the idea of eradicating all humans because they are a plague to the Earth.

General Intelligence: context is everything



posted on Mar, 5 2019 @ 06:22 AM
a reply to: ChaoticOrder

Just because you use a toilet every day doesn't make you a master plumber. Soft AI is useful, no question about it, but it is domain specific. Hard AI is a much more difficult problem.



Computer programs have NO idea what they are talking about. The problem with computers is they only ever do what they are told to do. The idea of having self-driving cars and trucks scares the crap out of me. Computer programs are pretty stupid. Soft AI systems never have enough data.

Poverty kills more people than any other cause. It will be a long time before AI catches up.







posted on Mar, 5 2019 @ 06:27 AM
a reply to: ChaoticOrder

Until a computer program passes the Turing Test, say by posting on ATS, I say we don't ever trust them with human lives.



posted on Mar, 5 2019 @ 09:04 AM

originally posted by: dfnj2015
a reply to: ChaoticOrder

Until a computer program passes the Turing Test, say by posting on ATS, I say we don't ever trust them with human lives.

AI passed the Turing Test a while ago, I believe, and it caused some debate over the validity of the test. Personally I don't think it's the most robust test if you want to determine whether or not you have general intelligence. Also, AGI is hard AI, and I agree it's much harder to create, which is why we haven't done it yet, but I'm fairly convinced it can be done and will be done within the next few decades.



posted on Mar, 5 2019 @ 09:19 AM
There is only fear in ignorance.

Part of the reason AI is so scary is that no one really understands all of it. There is a growing understanding of how neural networks basically work and of some of the many ways to form them. Working towards a standardized library and language to combine many of the components will help unravel some of the mystique, at least for the few who will get their heads around it all.

Only from this vantage point of understanding can mankind have a chance to tame this beast before it tames us. The more brains and peer review going on with these issues, the more cream that rises to the top.
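
To give a rough idea of what that standardization already looks like, here is a small sketch of my own (using the Keras library as one example of such a standardized library; the layer sizes are arbitrary toy numbers, not anything official):

# A minimal sketch: a standardized library (here, Keras) makes the
# components of a neural network explicit and readable.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(784,)),                     # e.g. a flattened 28x28 image
    keras.layers.Dense(128, activation="relu"),    # hidden layer: 128 nodes
    keras.layers.Dense(10, activation="softmax"),  # output layer: 10 classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()  # prints each layer and its parameter count

Reading a summary like that is a lot less mysterious than staring at a wall of matrix math.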




posted on Mar, 5 2019 @ 09:55 AM
a reply to: dfnj2015

How do you know that hasn't happened already, and is not happening right now on ATS?



posted on Mar, 5 2019 @ 02:35 PM
These smart programs work in layers, right?

How many exactly are there now? Is this known?



posted on Mar, 6 2019 @ 06:10 AM

originally posted by: Oleandra88
These smart programs work in layers, right?

How many exactly are there now? Is this known?

The number of layers determines how "deep" an artificial neural network (ANN) is, but it doesn't necessarily determine how "smart" a network is. Some ANNs have a small number of layers; large networks can have thousands. Personally I do not believe ANNs can create general intelligence, at least not the vast majority of them, because they're essentially a deterministic network equation, taking the same amount of time to produce an answer regardless of the input. Real general intelligence wouldn't be a simple deterministic operation running in constant time; it would require much more dynamic algorithms which take an indeterminable amount of time to solve a given problem, depending on the complexity of the problem.
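
To make the "deterministic network equation" point concrete, here is a toy sketch of my own (the layer sizes are made up for illustration): a feed-forward pass walks the same fixed sequence of layers every time, so it does the same amount of work no matter how easy or hard the input is.

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    # The loop always visits every layer in order, so the amount of
    # computation is fixed by the architecture, not by the input.
    for W, b in zip(weights, biases):
        x = relu(W @ x + b)
    return x

# A toy network with layer sizes 4 -> 8 -> 8 -> 2 and random weights.
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(forward(rng.standard_normal(4), weights, biases))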



posted on Mar, 6 2019 @ 06:21 AM
I was just thinking about AI and self-preservation.

Isn't self-preservation, or survival, only a result of having DNA, because the prime purpose of life is to pass on genetic information to the next generation, thus preserving life?

So why do we think that a non-biological artificial intelligence would desire to preserve itself, if it has no real reason to survive, no DNA, and no offspring to create? Why do we think it would seek to preserve itself if it isn't exactly life as we know it?



posted on Mar, 6 2019 @ 06:25 AM
The Nvidia researcher laughed at them because they created this false moral dilemma.

They leveraged open source software to get where they were, and then decided they didn't want to share anymore once they thought they had developed something they could sell or license.

Basically, the AI company with "open" in its name (which is a feature of many open source projects, OpenOffice for example) wanted to go closed source because of greed. Hence the irony comment. Attributing that greed-driven decision to a desire to protect people from their Skynet AI is a joke.

Why do they keep calling it something new? It was AI, then it was strong AI, now it's AGI. I kept thinking "adjusted gross income".



posted on Mar, 6 2019 @ 10:51 AM
a reply to: ChaoticOrder



because they're essentially a deterministic network equation


Yes, but more essentially I'd say they are a pattern recognition equation. How many layers, and how many nodes per layer, make a huge difference to how the equation conceptualizes the pattern.

Not sure anyone is yet clued in enough to define the relationship between pattern, layers and nodes. No wonder the population is scared about it.
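
One small piece of that relationship is easy to pin down, though: how many weights the pattern gets squeezed into. A quick back-of-the-envelope sketch (my own toy numbers) shows how fast the count grows with depth and width:

def dense_param_count(layer_sizes):
    # Weights plus biases in a fully connected network,
    # e.g. layer_sizes = [784, 128, 10].
    return sum(n * m + m for n, m in zip(layer_sizes, layer_sizes[1:]))

print(dense_param_count([784, 16, 10]))        # shallow and narrow: 12,730
print(dense_param_count([784, 128, 128, 10]))  # deeper and wider: 118,282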



posted on Mar, 11 2019 @ 11:40 AM

originally posted by: kwakakev
Not sure anyone is yet clued in enough to define the relationship between pattern, layers and nodes. No wonder the population is scared about it.


It is not hard to understand if I can grasp it from graphical screenreader tutorials. It all starts at a layer I came to know as the sensory input layer.

(I can only speak for this one, I do not know about any other methods or systems)





posted on Mar, 11 2019 @ 04:50 PM

originally posted by: dfnj2015
a reply to: ChaoticOrder
Computer programs have NO idea what they are talking about.

Plenty of living, sentient people are the same way.



posted on Mar, 11 2019 @ 04:52 PM

originally posted by: PokeyJoe
a reply to: dfnj2015

How do you know that hasn't happened already, and is not happening right now on ATS?

I am definitely not an advanced AI program, despite the fact that's just what a cagey AI program would say.


