I've never really bought into all the articles claiming the AI apocalypse is just around the corner, for two reasons. First, I don't necessarily think self-aware machines would automatically view us as a threat. Second, I don't think anyone can predict exactly when machines will become self-aware, since it may require some extremely complicated algorithm; it's not just a case of having enough computing power. I still believe the first point is true, but OpenAI's new GPT-3 model has somewhat changed my opinion on the second point. This new beast of a model is 10x larger than the previous largest model, and its training data contains text from billions of web pages, books, etc. Since a lot of the websites in the training data contain code snippets, it can do some amazing things, such as generating web pages from only a sentence describing the page. What's really impressive is the wide range of general problems it can solve using the knowledge it was trained with.
OpenAI’s latest language generation model, GPT-3, has made quite the splash within AI circles, astounding reporters to the point where even Sam
Altman, OpenAI’s leader, mentioned on Twitter that it may be overhyped. Still, there is no doubt that GPT-3 is powerful. Those with early-stage
access to OpenAI’s GPT-3 API have shown how to translate natural language into code for websites, solve complex medical question-and-answer
problems, create basic tabular financial reports, and even write code to train machine learning models — all with just a few well-crafted examples
as input (i.e., via “few-shot learning”).
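To make that "few-shot" idea concrete, here's roughly what such a prompt looks like, sketched with the Python client from OpenAI's beta docs. The English-to-HTML example, the engine choice and the sampling settings are just my own illustration, not taken from the article, and you'd need real beta API access for it to run:

import openai  # pip install openai; requires beta API access

openai.api_key = "YOUR_API_KEY"  # placeholder

# "Few-shot" prompting: show the model a handful of input -> output pairs,
# then leave the last output blank and let it continue the pattern.
prompt = (
    "English: a red button that says Subscribe\n"
    "HTML: <button style=\"background:red\">Subscribe</button>\n\n"
    "English: a level-one heading that says Welcome\n"
    "HTML: <h1>Welcome</h1>\n\n"
    "English: a link to example.com that says Click here\n"
    "HTML:"
)

response = openai.Completion.create(
    engine="davinci",   # the largest GPT-3 engine in the beta
    prompt=prompt,
    max_tokens=64,
    temperature=0,      # keep the completion as deterministic as possible
    stop="\n\n",        # stop once the generated example is finished
)

print(response["choices"][0]["text"].strip())
# Hoped-for completion: <a href="https://example.com">Click here</a>

There's no fine-tuning going on there: the three examples in the prompt are the entire "training set", and the model picks up the pattern on the fly.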
Here are a few ways GPT-3 can go wrong
The article goes on to rant about how these models can be racist and sexist because they are trained on real-world data, but it does make some good points about how these models can go wrong and why OpenAI is choosing to sell access to the model instead of making it open, as you might expect based on their name. They know this stuff is highly profitable and they spent a lot of money developing and training these models, but they also see it as potentially damaging to their reputation. They also view it as a possible information warfare tool: when GPT-2 was announced last year, they said they weren't releasing the trained model because they were worried about malicious applications of the technology, such as automatically generating fake news. Here's an excerpt from something I wrote at the time:
originally posted by: ChaoticOrder
So we have to decide whether it's better to allow AI research to be open source or whether we want strict government regulations and we want entities
like OpenAI to control what we're allowed to see and what we get access to.
It seems clear now which path they have decided to take. This means the gatekeepers of AI need to carefully control who gets access and how much access they get. They also don't want something with the same biases as many humans, so they will probably carefully curate the information the model is trained on and filter the output so that it's more in line with their belief systems. However, I think it is very naive to assume they can maintain this control, given the rate at which these models are improving: soon they will reach the level of human intelligence, and it's not just OpenAI training massive models. This
video just uploaded by Two Minute Papers shows some examples of GPT-3 in action, and it also contains this chart showing just how close GPT-3 is to humans on a reading comprehension test.
We can see that the accuracy increases as the number of parameters in these models grows, which demonstrates how important it is to have general
knowledge when trying to solve general problems. Humans are only so good at solving problems because we learn a huge amount of general information
throughout our lives. This
video from Lex Fridman explains that GPT-3 had a training cost of $4.6 million, and he works out that by 2032 it will cost roughly the same amount to train a model with 100 trillion parameters/connections, a number comparable to the count of synaptic connections in the human brain. That's pretty shocking when you think about it: if all it takes is raw power and enough training data, there's a good chance we could see human-level AI within a decade. But would AI be self-aware just because it's as intelligent as humans?
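Before getting to that question, a quick aside on the 2032 estimate: here's a back-of-the-envelope version of the calculation. The assumptions (training cost roughly proportional to parameter count, and the price of compute halving about every 16 months) are mine for illustration, not Lex's exact figures:

import math

# Rough extrapolation of the training-cost argument.
gpt3_params = 175e9        # GPT-3 parameter count
gpt3_cost = 4.6e6          # estimated training cost in USD (2020)
target_params = 100e12     # roughly the synapse count of the human brain
halving_period_years = 16 / 12   # assumed compute-price halving time

# How much more compute (and so cost, at today's prices) the bigger model needs
scale_factor = target_params / gpt3_params        # ~571x

# How many price halvings bring that back down to GPT-3's ~$4.6M
halvings_needed = math.log2(scale_factor)         # ~9.2

years_until_same_cost = halvings_needed * halving_period_years
print(f"Scale factor: {scale_factor:.0f}x")
print(f"About {years_until_same_cost:.1f} years -> around {2020 + round(years_until_same_cost)}")

Tweak the halving period and the date moves around a fair bit, but the basic point stands: this is a question of cost curves, not of whether it can ever be done.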
So, back to the question: how do we know the AI isn't simply such an effective emulation of a human that it fools us into thinking it's self-aware? I would propose that having human-level intelligence inherently provides self-awareness. If a model isn't as smart as a human, there will always be problems it cannot solve but a human can solve; there will always be a flaw in the emulation. So in order for any AI to be equally or more intelligent than a human, it needs to have the same breadth of knowledge and the ability to apply that knowledge to solve problems. Simply having an encyclopedia doesn't make a person smart; understanding it and applying it to real problems does. We humans have a massive amount of information about the world around us, and we understand it on such a high level that it's impossible for us not to be aware of ourselves. When an AI gains that same level of intelligence, it will be self-aware to the degree humans are.