originally posted by: andy06shake
a reply to: IndieA
Buddy, I'm thinking AI may very well build us a better Zoo than "they" do, to be honest.
I'm just now learning about mind-blowing technology that was developed in the '90s, and you know they say the military is decades ahead when it comes to technology.
It's crucial for society to establish robust ethical guidelines and regulatory frameworks to ensure the responsible development and deployment of AGI.
I tend to believe commercial technology and research are ahead of the military in a lot of respects.
For the most part military technology is robust, tried and tested, so it does not fail.
It's built from components and systems that are already established and known to work well, meaning it's generally a generation behind the cutting edge.
AI would be developed to serve humanity at best, or itself or an evil master at worst, but in any case it would most likely seek to keep society running productively, at least until enough robots are built.
What kind of person thinks it's an allegory or a parable?
Top artificial intelligence executives including OpenAI CEO Sam Altman on Tuesday joined experts and professors in raising the "risk of extinction from AI", which they urged policymakers to equate at par with risks posed by pandemics and nuclear war.
Not an allegory or parable, but a possible future reality that warrants concern and caution.
People who read the news and pay attention to what's going on in the world.
Top AI CEOs, experts raise 'risk of extinction' from AI
Mankind has created nuclear weapons and deadly diseases; is it so hard to believe that we could create something else that threatens life as we know it?
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," more than 350 signatories wrote in a letter published, opens new tab by the nonprofit Center for AI Safety (CAIS).
REALLY?!?!?!??!?!? I didn't know that, thank you for that information.
If you are too blind to see the possibility that this could become reality, there really is no helping you.
Keep saying we are the ones who are mistaken and "damaged".
BTW, glad you can reply to an almost year-old thread of mine but can't respond to a reply I gave you last week.