
The future for drones: Automated killing


posted on Sep, 20 2011 @ 03:16 PM
I bet they will name it [url=http://www.washingtonpost.com/national/national-security/a-future-for-drones-automated-killing/2011/09/15/gIQAVy9mgK_story.html?hpid=z1]SkyNet[/url].


The Fort Benning tarp “is a rather simple target, but think of it as a surrogate,” said Charles E. Pippin, a scientist at the Georgia Tech Research Institute, which developed the software to run the demonstration. “You can imagine real-time scenarios where you have 10 of these things up in the air and something is happening on the ground and you don’t have time for a human to say, ‘I need you to do these tasks.’ It needs to happen faster than that.”

The demonstration laid the groundwork for scientific advances that would allow drones to search for a human target and then make an identification based on facial-recognition or other software. Once a match was made, a drone could launch a missile to kill the target.

Military systems with some degree of autonomy — such as robotic, weaponized sentries — have been deployed in the demilitarized zone between South and North Korea and other potential battle areas. Researchers are uncertain how soon machines capable of collaborating and adapting intelligently in battlefield conditions will come online. It could take one or two decades, or longer. The U.S. military is funding numerous research projects on autonomy to develop machines that will perform some dull or dangerous tasks and to maintain its advantage over potential adversaries who are also working on such systems.
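
To make the quoted scenario concrete: here is a minimal sketch of the decision loop the article implies. Everything in it is my own assumption for illustration (the function names, the 0.6 similarity threshold, the flat 2-D positions); none of it comes from the article or from the Georgia Tech work.

[code]
import numpy as np

# Assumed cutoff for declaring a face match. A real system would have to
# tune this value, and a false positive here is lethal.
MATCH_THRESHOLD = 0.6

def cosine_similarity(a, b):
    """Similarity of two face-embedding vectors (1.0 means identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def assign_nearest_drone(drones, target_pos):
    """Pick the closest drone: the 'no time for a human' step.

    drones is {name: (x, y)}; target_pos is (x, y).
    """
    return min(drones, key=lambda name: np.hypot(
        drones[name][0] - target_pos[0],
        drones[name][1] - target_pos[1]))

def confirm_target(observed_embedding, watchlist):
    """Return a watchlist name only if the match clears the threshold.

    watchlist is {name: reference_embedding}. Returning None means:
    do not engage.
    """
    best_name, best_score = None, MATCH_THRESHOLD
    for name, reference in watchlist.items():
        score = cosine_similarity(observed_embedding, reference)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
[/code]

The unsettling part is how small it is: the entire "kill decision" reduces to a nearest-drone lookup and a similarity score crossing a threshold.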






posted on Sep, 20 2011 @ 03:37 PM
Horrible. Now no person is to blame if an innocent person is killed; there is no way they will put the software's creator on trial for murder. Someone had better be held responsible. There should be a law.



posted on Sep, 20 2011 @ 03:40 PM
Umm Video Games FTW

The military is looking forward to the next generation of gamers entering the service to pilot its drones, both air and ground. A good pilot and tech working in tandem could run a bunch of individual units, controlling an automated squad.



posted on Sep, 20 2011 @ 03:41 PM
Doesn't surprise me. The way I see this going, since we are a heavy gaming society, if it's played out correctly...

Every citizen will have the capability of piloting a drone, maybe even more than one. A dangerous prospect.



posted on Sep, 20 2011 @ 03:41 PM
reply to post by LanternOfDiogenes
 


Looks like you beat me to it by a couple of seconds, but yeah, we seem to think on the same wavelength.



posted on Sep, 20 2011 @ 03:49 PM
reply to post by LeTan
 


Neither of you realizes that pilots will not be necessary. Computers will do the flying and the killing. The gamers will be replaced by robots.



posted on Sep, 20 2011 @ 04:01 PM
Movie parallels aside, this is a pretty scary prospect.

Machines deployed with facial recognition software huh?

So... does a South Korean look much different from a North Korean facially, then?

Mistakes happen in war, and innocents always suffer heavy casualties. Whether these automatons will increase or decrease that toll is the question, I suppose.

Personally, I think it's running before we can walk. Building autonomous machines with the purpose and ability to kill selectively, before we can build autonomous machines capable of compassion and higher reasoning, is a mistake.

The dilemma is one of creating machines that are too smart: ones able to recognise or realise that their programming to kill is incompatible with compassion and reason, and so simply refuse to obey orders.

Make them smart without compassion?

Then we really start drawing on parallels from the movies!



posted on Sep, 20 2011 @ 04:05 PM
Reminds me of that old movie "Westworld", with Yul Brynner as a robot saying "Nothing can go wrong... go wrong... go wrong."



posted on Sep, 20 2011 @ 04:06 PM
reply to post by earthdude
 


No, I understood that the central point of the article was automated drones and hard points; I just cannot see it passing. The military has a thing about this: if it doesn't take a finger or a hand to load, reload, or pull the trigger, they get itchy. Imagine a base's defenses being hacked, or just the frequency linked to a single unit being appropriated. There will always be a need for human back-up; I can never see the military going past 20% automation.



posted on Sep, 20 2011 @ 04:16 PM
reply to post by LanternOfDiogenes
 


At first, no, neither can I.

But a smart and patient potential enemy would allow complacency to build over time. One who would allow oversight committees, themselves overseen by those chasing votes and budget savings, to point to the success and security of the 20% automated force: how many American (or insert your own country here) young lives have been saved on the battlefields, and how much money has been saved by using droids that don't eat, don't sleep, don't need equipment, don't complain or need treatment when they get shot apart, and don't ask for a salary or pension either. That 20% would soon rise to 50%, then 80% or more.

Then that patient enemy would reveal its ace in the hole: it has had the ability to hack the systems all through the programme, and only feigned failed attempts to gain access, increasing the military's confidence in its 'unbreakable' encryption, until the number of droids was sufficient to act as an exterminating force for the wrong side.



posted on Sep, 21 2011 @ 12:55 PM
Doesn't this violate the first of Isaac Asimov's Three Laws of Robotics?

source


Isaac Asimov's "Three Laws of Robotics"

1 - A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2 - A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3 - A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
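
The precedence structure of the laws is easy to write down; the hard part is every judgment hidden inside them. A toy sketch, entirely my own illustration, with each predicate standing in for exactly the call no current machine can reliably make:

[code]
def permitted(action, would_harm_human, ordered_by_human, endangers_self):
    """Asimov's Three Laws as an ordered rule check.

    The three predicates are hypothetical stand-ins: deciding whether
    an action harms a human is precisely what autonomous targeting
    systems cannot reliably do.
    """
    # First Law: never harm a human being.
    if would_harm_human(action):
        return False
    # Second Law: obey human orders, subordinate to the First Law.
    if ordered_by_human(action):
        return True
    # Third Law: self-preservation, subordinate to the first two.
    return not endangers_self(action)
[/code]

By design, an armed drone fails the very first check, which is the point of the question above.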



posted on Sep, 21 2011 @ 02:56 PM
I don't think they will let it get too automated. "Terrorists" have already been able to watch the somehow unencrypted video feeds from drones via satellite; imagine when the army of Chinese hackers learns how to break military encryption. Our robot army would become our own robot enemy.



