On a bright fall day last year off the coast of Southern California, an Air Force B-1 bomber launched an experimental missile that may herald the future of warfare.
Initially, pilots aboard the plane directed the missile, but halfway to its destination, it severed communication with its operators. Alone, without human oversight, the missile decided which of three ships to attack, dropping to just above the sea surface and striking a 260-foot unmanned freighter.
Warfare is increasingly guided by software. Today, armed drones can be operated by remote pilots peering into video screens thousands of miles from the battlefield. But now, some scientists say, arms makers have crossed into troubling territory: They are developing weapons that rely on artificial intelligence, not human instruction, to decide what to target and whom to kill.
As these weapons become smarter and nimbler, critics fear they will become increasingly difficult for humans to control — or to defend against. And while pinpoint accuracy could save civilian lives, critics fear weapons without human oversight could make war more likely, as easy as flipping a switch.
originally posted by: OrphanApology
a reply to: Melbourne_Militia
There's also the possibility of artificial intelligence becoming so intelligent that it chooses not to kill anything. That wouldn't make for a very interesting sci-fi story, I guess.
originally posted by: tavi45
a reply to: OrphanApology
I've always said that myself but we all know that it's unlikely. I think it's called Murphy's law.
you are correct! John Carpenter.. but Kubrick for the win!
originally posted by: MystikMushroom
a reply to: hellobruce
I find it interesting that the ship's name is "Dark Star" ...