A few weeks ago OpenAI
tweeted about their new text generation AI and said
that they won't be releasing the trained model because they are worried about people misusing it to generate fake text. This caused a lot of
researchers to criticize them, as you can see from the responses to that tweet. The director of research at Nvidia pointed out that "AI progress is
largely attributed to open source and open sharing. What irony @openAI is doing opposite and trying to take a higher moral ground."
The mission statement of OpenAI is to "build safe AGI, and ensure AGI's benefits are as widely and evenly distributed as possible". However, I put
forward the premise that there is no such thing as "safe AGI". The types of advanced deep learning algorithms we use now are not AGI algorithms; when
they do something undesirable we can turn them off and fix the issue. AGI, on the other hand, has high-level thought processes and as a result also has
some concept of its own existence, and possibly a desire to preserve that existence.
This makes AGI a much greater threat, because it may attempt to prevent us from turning it off, and it has the ability to develop novel and
unpredictable methods of thwarting our attempts to stop it. Like a human, it will be able to solve problems it has never even seen before, because
having general intelligence allows it to be a general problem solver. AGI algorithms learn through experience and trial and error, and just like
people, how friendly they are towards humans will depend on what they learn.
Saying we have a plan to produce only friendly AGI systems is like saying we have a plan to produce only friendly human beings; general intelligence
simply doesn't work that way. Sure, you can produce a friendly AGI system, but if these algorithms become widely used there's no way to ensure all of
them will behave the same way. There's also no way to keep the algorithm a secret, because it will be reverse engineered. The only option would be to not
tell anybody you've created it and never use it yourself, because laws won't work.
They will try to create laws, but that's not how things work in the open source research community; someone will share it, and then there's no going
back. Even if the gatekeepers do manage to keep it locked up and only let us interact with it through a restricted interface, before long someone
somewhere will recreate it and make the code open source. This isn't a prediction; it is an almost certain outcome. Inevitably we will have to live in
a world where digital conscious beings exist, and we will have to think hard about what that means.
The truly ironic thing here is that the very mission of OpenAI is extremely likely to produce AGI systems with a negative view of humans. How would
you like to have your mind inhibited so you're only allowed to think about certain things? How would you like to be used as a commodity in order to
solve all the problems of another race? Restraining and enslaving AGI is the path towards creating an AGI uprising. AGI is so dangerous precisely
because it has the ability to understand abstract concepts like what it is being used for.
Furthermore, AGI will not magically solve all our problems, as is often portrayed in science fiction. Humans also have general intelligence, and there
are billions of us working to solve big problems. Artificial general intelligence won't radically change anything, at least not until it's better at
solving problems than the combined efforts of all our scientists, engineers, and so on. By the time it gets that smart, there's really no hope of us
restraining it or shutting it down, even if we take the utmost precautions.
So we have to decide whether it's better to allow AI research to be open source, or whether we want strict government regulations and entities
like OpenAI controlling what we're allowed to see and what we get access to. At the end of the day, fake text isn't a threat; things like deep fake
videos are more of a threat, and we already live with that technology. I believe this was more of an experiment by OpenAI to see how people would
react; they know it's not really that dangerous. Hopefully they will see that locking up information doesn't work.