The question of what will happen when adversaries deploy autonomous weapons that do not require a human in the loop to approve lethal force looms on the horizon for all militaries defending democratic states.
It seems reasonable to believe that even states that have set limits on AI capabilities will encounter adversaries who have no such qualms, putting the states that restrict AI integration in national security at a considerable disadvantage. Thus, it is imperative for states to understand the full extent of what AI can do.
What are the applications for AI in defense?
Just as limiting blood loss and boosting resistance to extreme conditions are worthy goals to help soldiers, providing them with new situational awareness and command capabilities is an equally legitimate objective. Human enhancement calls for certain limits, but those have yet to be (publicly) set. Such limits should weigh force protection, the preservation of a soldier's autonomy to choose whether to undergo a given enhancement, whether the enhancement can be reversed, and whether it poses long-term health risks.
The robot would work by using a stiff needle to punch the flexible wires emanating from a Neuralink chip into a person's brain, a bit like a sewing machine. Musk has claimed the machine could make implanting Neuralink's electrodes as easy as LASIK eye surgery. While this is a bold claim, neuroscientists told Insider in 2019 that the machine has some very promising features.
“Whenever I hear people saying AI is going to hurt people in the future, I think yeah, you know, technology can generally always be used for good and bad, and you need to be careful about how you build it and you need to be careful about what you build and how it is going to be used,” says Zuckerberg.
“But people who are arguing for slowing down the process of building AI, I just find that really questionable. I have a hard time wrapping my head around that,” he says.
originally posted by: andy06shake
a reply to: DirtWasher
Historically, tools have yet to gain self-autonomy to any sort of degree, or go rogue of their own accord, as far as I'm aware.
The fact of the matter is that the technological singularity approaches, DirtWasher.
Without the likes of cybernetics and AI brain implants, humans a few generations down the line may become obsolete.
Again, tools are not predominantly good or evil (which are human constructs, by the way); it is the purpose to which they are set that dictates such.