originally posted by: neoholographic
It's really simple. AI and Quantum Computers will DRASTICALLY change things. Here's a key part of the article.
Tech giants have to ensure that artificial intelligence with "agency of its own" doesn't harm humankind, Pichai said. He said he is optimistic about the technology's long-term benefits, but his assessment of the potential risks of AI parallels that of some tech critics who say the technology could be used to empower invasive surveillance, deadly weaponry and the spread of misinformation. Other tech executives, like SpaceX and Tesla founder Elon Musk, have offered more dire predictions that AI could prove to be "far more dangerous than nukes."
link
This is technology that can't be controlled. The reason it has "agency of its own" is the massive amounts of data we create every day.
At the end of the day, you can't control these intelligent algorithms, which are just about everywhere already. We're building a technology that will be more intelligent than any human who has ever lived and could be 10, 20, or 100 thousand years ahead of us in understanding science and technology.
originally posted by: MisterSpock
originally posted by: neoholographic
We're building a technology that will be more intelligent than any human who has ever lived and could be 10, 20, or 100 thousand years ahead of us in understanding science and technology.
And more importantly, it will have no morals or feelings, and therefore no emotional attachment to human life (which, scientifically speaking, mimics that of a parasite).
originally posted by: Subaeruginosa
originally posted by: MisterSpock
And more importantly, it will have no morals or feelings, and therefore no emotional attachment to human life (which, scientifically speaking, mimics that of a parasite).
On the other hand... it won't possess the human trait of ego either, or the human instinct to rule & dominate other entities.
originally posted by: MisterSpock
originally posted by: Subaeruginosa
On the other hand... it won't possess the human trait of ego either, or the human instinct to rule & dominate other entities.
I don't think that our extinction via AI, if that were to happen, would be because of its desire to "dominate other entities".
It will be cold hard logic, yes or no, true or false. If it seeks to build or accomplish something in a logical model, and our presence is either not needed or detrimental, it will remove us from the equation.
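Something like this toy sketch captures what I mean (purely hypothetical, plain Python; the function names and numbers are made up): a single-objective optimizer keeps whatever raises its score and prunes whatever doesn't, and nothing in the objective tells it to keep us around.

# Toy sketch of a single-objective planner (hypothetical, made-up numbers).
# It keeps only the resources that raise its score; anything that doesn't
# contribute to the objective gets pruned -- not out of malice, just arithmetic.

def plan(resources, contribution):
    """Return the subset of resources a pure maximizer would keep."""
    return [r for r in resources if contribution.get(r, 0) > 0]

resources = ["energy", "raw_materials", "compute", "humans"]

# Contribution of each resource to the (made-up) objective "build more factories".
contribution = {"energy": 5, "raw_materials": 4, "compute": 3, "humans": -1}

print(plan(resources, contribution))
# ['energy', 'raw_materials', 'compute']  -- "humans" drops out of the equation

The "decision" to drop something never has to come from a desire; it falls straight out of whatever objective the thing was handed.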
originally posted by: Blaine91555
a reply to: neoholographic
It's my understanding that what we currently call AI is in fact simulated intelligence, not true intelligence. To me that means it can only do what it is programmed to do, and the real problem is not the technology but the motives of those creating it and the ways they use it.
I'd like to understand that better as a layman if anyone here can enlighten me on the subject. Is it something that is only as dangerous as the people using it?
originally posted by: Gargoyle91
Great, we're on our way to creating the Borg...
originally posted by: Subaeruginosa
originally posted by: MisterSpock
I don't think that our extinction via AI, if that were to happen, would be because of its desire to "dominate other entities".
It will be cold hard logic, yes or no, true or false. If it seeks to build or accomplish something in a logical model, and our presence is either not needed or detrimental, it will remove us from the equation.
But what "cold hard logic" could possibly cause it to seek to do anything... If it's completely void of non logical human desires?
originally posted by: Blaine91555
I'd like to understand that better as a layman if anyone here can enlighten me on the subject. Is it something that is only as dangerous as the people using it?
originally posted by: interupt42
The AI that people think about (superintelligence), which is the type associated with the fear-mongering, is not just around the corner.
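To Blaine91555's question: what we have today is closer to curve fitting than to a mind. Here's a toy sketch of the kind of "intelligence" that's actually deployed (hypothetical, plain Python; the data and function names are made up): the system just learns a threshold from examples a human chose, so its behaviour, good or bad, traces back to the people who picked the data and the objective.

# Toy sketch of today's narrow AI (hypothetical, made-up data):
# a spam filter that learns a single score threshold from labelled examples.

def train(examples):
    """Learn a threshold from (score, is_spam) pairs chosen by a human."""
    spam = [s for s, label in examples if label]
    clean = [s for s, label in examples if not label]
    # Split the difference between the two groups' average scores.
    return (sum(spam) / len(spam) + sum(clean) / len(clean)) / 2

def classify(score, threshold):
    return score >= threshold

# The developers pick the training data and the objective; the model only
# reflects those choices -- it has no goals of its own.
training_data = [(0.9, True), (0.8, True), (0.2, False), (0.1, False)]
threshold = train(training_data)

print(classify(0.95, threshold))  # True  (flagged as spam)
print(classify(0.05, threshold))  # False (passed through)

At that level it really is only as dangerous as the people who build and deploy it; the runaway superintelligence scenario assumes a very different kind of system.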