Elon Musk's relationship with Google cofounder Larry Page is complicated, to say the least.
On the one hand, the two influential tech CEOs are close friends and business associates; on the other hand, Musk is genuinely worried that Page's work could lead to the destruction of humanity as we know it.
"I'm really worried about this," Musk is quoted as saying in Elon Musk, a new authorized biography of the CEO of Tesla and SpaceX.
"This," according to the book, refers to the possibility that Page would develop artificially-intelligent robots that could turn evil and have the ability to annihilate the human race.
Page may be well-meaning, but as Musk says, "He could produce something evil by accident."
a reply to: Vasa Croe
Side note: I know the robotics divisions well as I work with them on their infrastructure projects.....some SERIOUSLY crazy developments in that area.
originally posted by: Thecakeisalie
a reply to: Vasa Croe
Side note: I know the robotics divisions well as I work with them on their infrastructure projects.....some SERIOUSLY crazy developments in that area.
Crazy as in "they can pilot themselves" or crazy as in "they can pilot themselves, replicate themselves and pose a threat to humanity?"
I know for a fact that autonomous dump trucks are replacing human drivers, but at the moment that is a threat to jobs, and I don't see Decepticons killing us any time soon. The biggest threat that automatons pose at this point is to the economy, because they are replacing humans.
In 2011, the co-founder of DeepMind, the artificial intelligence company acquired this week by Google, made an ominous prediction more befitting a ranting survivalist than an award-winning computer scientist.
“Eventually, I think human extinction will probably occur, and technology will likely play a part in this,” DeepMind’s Shane Legg said in an interview with Alexander Kruel. Among all forms of technology that could wipe out the human species, he singled out artificial intelligence, or AI, as the “number 1 risk for this century.”
Google’s acquisition of DeepMind came with an estimated $400 million price tag and an unusual stipulation that adds extra gravity -- and a dose of reality -- to Legg’s warning: Google agreed to create an AI safety and ethics review board to ensure this technology is developed safely, as The Information first reported and The Huffington Post confirmed. (A Google spokesman said that DeepMind had been acquired, but declined to comment further.)
Even for a company that predictably pursues unpredictable projects (see: Internet-deploying balloons), an AI ethics board marks a surprising first for Google, and has some people questioning why Google is so concerned with the morality of this technology, as opposed to, say, the ethics of reading your email.
Reading your email may be abhorrent. But AI, according to Legg and sober minds at the University of Cambridge, could pose no less than an "extinction-level" threat to "our species as a whole."
originally posted by: AdmireTheDistance
Well, if it happens (and that's a big if), it's pretty much a given that it's going to be at the hands of either the military or Google, so....