AI as a weapon?


posted on Jan, 10 2005 @ 06:40 PM
Ok, some may think I watched Terminator far too many times, but what do you see for AI weaponry? Will it try to "overthrow" us humans to gain control of the world?

Also, is there any AI weaponry already in use at the present time, or is any in the pipeline of creation?



posted on Jan, 11 2005 @ 04:01 PM
You've been watching Terminator too much. Anyway, people have discussed this many, many times here before. There are rumours that the Chinese are creating a robot army, but it's doubted by most ATS users, including me...

As for humans creating an AI strong enough to overthrow us in the next 50 years, it's doubtful. And any project that came even close would be closed down for national security.



posted on Jan, 11 2005 @ 04:07 PM
It's already starting. Honda just came out with a robot that can do all sorts of stuff: world.honda.com...

Heck, forget Terminator; The Matrix is where AI takes over. It's never going to happen, though.

We will always be a step or two ahead of them, because we made them, and we will make them right.



posted on Jan, 11 2005 @ 04:10 PM
If one company can do that, what could a government do...



posted on Apr, 29 2006 @ 10:23 PM
We are the creators of those AI, so I agree that we are always ahead of them in every way. If things fail, then just fry them with a well-aimed EMP and destroy the remainder with a few air strikes.



posted on Apr, 30 2006 @ 09:04 AM
Depending on which version of Terminator you're in lust with, the scenario-based GIGO factors change.

First off, the only way /any/ AI can affect or influence the outside world is if you give it the electromechanical means to do so. Secondly, a single, massively parallel computer chip 'ghost in the machine' would not be the root of all immorality. Or intellect. Any more than taking a bite out of the Tree of Good and Evil automatically implies a moral dilemma inherent to an awareness of basic ethical/moral tenets.

The basis of true awareness is the ability to craft an ontological construct of interactions between elements, processes and systems, to the extent that you understand the position and purpose, as well as the vector, of all subject forms within a context. THAT MEANS MEMORY. And not just a little bit either. We are talking HUGE amounts, beyond terabytes by an order of magnitude.

And you don't pay for the massive input/output channel Hz of a superprocessor, or the memory, just to run a limited instruction set (flying a B-2 is remarkably simple. Even /employing/ it is not hard. Only human egos would have you think otherwise.).

Similarly, the notion of an AI as a 'ghost -in- the machine', in the scenario of an effectively huge Internet worm, also doesn't apply, because it views the W3 as a giant mind rather than as a disjointed bunch of mosaic processors without any really cohesive 'clockspeed' definition of coherent processing on which to base access paths and availability for a supermind-type AI (never mind that if you nuke the planet, all the power/telephone connections of a distributed AI would suddenly be severed).

In truth, I think you need to define what an AI does, both on its own and as a function of niching within human limitations, before you can determine its threat level.

Computer programs which function as linear array-processing systems use time as a sequential 'event schedule' that basically comes down to if-then/else as the integral processing routine.
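
A toy sketch of that kind of scheduler, in Python; the events, values and thresholds here are invented purely for illustration:

    # A linear, time-sequenced event schedule: each event is handled
    # in order through a fixed if-then/else chain. Illustrative only.
    events = [("radar_contact", 270), ("fuel_check", 0.4), ("waypoint", 3)]

    for name, value in events:  # time as a fixed sequence of scheduled events
        if name == "radar_contact":
            print("track target at bearing", value)
        elif name == "fuel_check" and value < 0.5:
            print("fuel low: return to base")
        else:
            print("continue flight plan")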

Computers which use evolutionary or associative grouping trees basically function on 'if X, exclude Y, exception Z' as a conditional process identifier, usually through frictive-variable --> acceptable-outcome driven performance optimization.
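
In the same spirit, a minimal 'if X, exclude Y, exception Z' identifier in Python; the rules, features and candidate classes are all made up for the example:

    # Conditional process identification: each rule excludes a candidate
    # when its trigger feature is present, unless the exception feature
    # is also present. Entirely an invented example.
    rules = [
        {"if": "metal", "exclude": "bird", "exception": "flapping"},
        {"if": "slow", "exclude": "missile", "exception": "boosting"},
    ]

    def classify(features):
        candidates = {"bird", "drone", "missile", "balloon", "aircraft"}
        for rule in rules:
            if rule["if"] in features and rule["exception"] not in features:
                candidates.discard(rule["exclude"])
        return candidates

    print(classify({"metal", "slow"}))  # excludes 'bird' and 'missile'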

Computers which truly THINK, in a multilayer analytical/cybernetic/interpolative rational fashion, must have the ability to 'puncture the bubble' of a frictive environment where competitive analysis leads to fixed outcomes, and instead model upon a modified, DNA-type recombinative selection process: one that looks for new variables completely outside the expectation zone, but also for those whose value is implicit in the tacit performance features of underlying 'gene' (process) variables that are allowed to perform free of any expected outcome.

They must also be able to compare this outcome, as an overall systemic effect, in transitional and final values, so that the LOSS of key processes from the current dataset of fixed performance analysis can be seen to be better in the alternative model, which is then built around reintegrating all processes towards the new paradigm's (revolutionary, not evolutionary) characteristic pair-base.
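
For flavour, a toy recombinative search along those lines in Python. The fitness function rewards overall systemic balance rather than any single fixed outcome, and mutation is deliberately allowed to step outside the expected range; every number and name here is invented:

    import random

    def fitness(genome):
        # score the overall systemic effect: reward balance across all
        # processes rather than maximising any single one
        return sum(genome) - max(genome)

    def recombine(a, b):
        # DNA-style crossover at a random cut point
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(genome, rate=0.1):
        # occasionally jump outside the 0..10 'expectation zone'
        return [random.uniform(-5, 20) if random.random() < rate else g
                for g in genome]

    population = [[random.uniform(0, 10) for _ in range(4)] for _ in range(20)]
    for generation in range(50):
        parents = sorted(population, key=fitness, reverse=True)[:10]
        population = [mutate(recombine(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(20)]

    print(max(population, key=fitness))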

This is VERY hard to do.

And in fact it is quite beyond most humans to envision all the cost:capability switchover variables.

So to answer your question more directly: I think that current-level AI does not (or is not allowed to) function at its ultimate 'understand the universe and you can see how to rebuild it' level. And the reason is that it attacks the highest level of the food chain inherent to our incredibly powerful leadership complex: 'The First', as much as the Upper Class, charged with coming up with new ideas. In a world where staticist principles are the basis of all 'good' (conservative) government, there is likely little reason to expect this to change.

Yet there is hope. For as humanity loses faith in its ability to define its own future, we may begin to use trend modelling more and more to decide what elements of complexity we must address FIRST, on a kind of ride-the-wild-bronco basis of moment-to-moment management. This in turn may force machine intelligence to begin to think along what we nominally consider to be inductive (inferring from phenomenology) or sapient (wise), as much as 'abstract', lines of reasoning.

Humans have a unique ability, and prejudice, when it comes to defining complexity in the simplest of scenario- (value-) driven datasets. If a machine could present not just the 'expected outcome' but a series of them, in a way that allows humans to make defined scenario choices based on cogency (strength from base premises), we could readily see whole new avenues of science, philosophy and, yes, government arise.
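
One way to picture that: rank candidate scenarios by the weakest premise each rests on, since a chain of reasoning is only as cogent as its weakest link. A throwaway Python sketch, with invented scenarios and confidence numbers:

    # Rank scenarios by cogency: each scenario lists confidence in its
    # base premises, and the weakest premise bounds the whole chain.
    scenarios = {
        "expand grid capacity": [0.9, 0.8, 0.7],
        "ration consumption":   [0.95, 0.4],
        "import surplus power": [0.6, 0.6, 0.9],
    }

    def cogency(premises):
        return min(premises)  # the weakest premise limits the argument

    for name, premises in sorted(scenarios.items(),
                                 key=lambda kv: cogency(kv[1]), reverse=True):
        print(f"{name}: cogency {cogency(premises):.2f}")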

Are we already doing this? There are times when I wonder. In the end, society 'as it is' will either collapse or have to shift towards something like the Overmind of the _Homecoming Saga_. Because the strain of band-aiding systems, without throwing others out of kilter or exhausting resources/patience with constant-crisis management, will become unbearable.

At that point, the underlying principles of a 'Skynet rebellion' as a HUMAN psychosis (a nightmare paranoid scenario, freely entered into) will have to be revisited if we are to understand /why/ _we need the machine_ to slaughter to survive, as much as /whether/ it has 'a personal motive' in doing so. Because the ultimate expression of a synthetic intellect's superiority is that, once a specific processing and memory capacity is reached, it has infinite ability to 'recover solution sets', as much as data, from almost any starting point of reboot. And thus its decisional matrix must be seen as embedded not in any one identity but in all of them.



posted on Apr, 30 2006 @ 09:10 AM
I think that this is ultimately humanity's greatest strength and ultimate hubris: that we consider ourselves 'infinitely variable' while acting as our own restrictors, in terms of both input data and solution sets, on the specific 'identities' that we then employ as avatars when dealing with (simplified perceptual) scenarios.

At some point, we will have to, competitively or otherwise, recover from a specialist-level hybrid organism called 'society' towards a synthesized (merged) set of higher perceptual and intellectual norms, as grouped individual characteristics beyond what is available today.

If we are to retain control over our own evolutionary development separate from that which an AI teaches us to see.

For if God is not an exceptionally advanced AI construct (if not Computer), then we must be.

So sayeth all the white mice.


KPl.



