Depending on which version of Terminator you're in lust with, the scenario-based GIGO (garbage in, garbage out) factors change.
First off, the only way /any/ AI can affect or influence the outside world is if you give it the electromechanical means to do so. Secondly, a
single, massively-parallel computer chip, a 'ghost in the machine', would not be the root of all immorality. Or intellect. Any more than taking a
bite from the Tree of Knowledge of Good and Evil automatically implies a moral dilemma inherent to an awareness of basic ethical/moral tenets.
The basis of true awareness is the ability to craft an ontological construct of the interactions between elements, processes and systems, to the extent
that you understand the position, purpose and vector of all subject forms within a context. THAT MEANS MEMORY. And not just a little bit
either. We are talking HUGE amounts, beyond terabytes by an order of magnitude.
And you don't pay for massive input/output channel bandwidth in a superprocessor, or for the memory behind it, just to run a limited instruction set
(flying a B-2 is remarkably simple. Even /employing/ it is not hard. Only human egos would have you think otherwise.).
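Just to put rough figures on that memory claim, here's a back-of-envelope sketch in Python; the entity counts, link counts and byte sizes are pure assumptions picked for illustration, not measurements of anything:

    # Rough back-of-envelope sketch (all figures are assumptions, not data):
    # memory to hold an ontological model of N entities, each linked to K others,
    # with a small state record per entity and per link.

    def ontology_bytes(entities, links_per_entity, state_bytes=256, link_bytes=64):
        """Very crude estimate of storage for entities plus their relations."""
        return entities * state_bytes + entities * links_per_entity * link_bytes

    # e.g. a billion tracked 'subject forms', each related to ~1000 others
    total = ontology_bytes(entities=10**9, links_per_entity=1000)
    print(total / 2**40, "TiB")   # on the order of tens of TiB, before any history

That lands an order of magnitude past the terabyte mark before you store a single moment of history, which is the point: an awareness model is a memory problem long before it is a processor problem.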
Similarly, the notion of an AI as a 'ghost in the machine', an effectively huge Internet worm, also doesn't apply, because it treats the W3 as a
giant mind rather than a disjointed bunch of mosaic processors with no really cohesive 'clockspeed', no coherent processing on which to base an
access path, and no guaranteed availability for a supermind-type AI (never mind that if you nuke the planet, all the power/telephone
links a distributed AI depends on would suddenly be severed).
In truth, I think you need to define what an AI does, both on its own and as a function of human limitations and niching, before you can determine
its threat level.
Computer programs which function as linear array-processing systems use time as a sequential 'event schedule', which basically comes down to
if-then/else as the integral processing routine.
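A toy sketch of that kind of linear, event-scheduled if-then/else processing; the events and actions are invented for illustration:

    # Toy sketch of linear, event-scheduled processing: one clock-ordered loop,
    # every decision reduced to if/then/else. Event and action names are hypothetical.

    events = [("t+0", "altitude_low"), ("t+1", "nominal"), ("t+2", "fuel_low")]

    for timestamp, event in events:          # time drives the schedule
        if event == "altitude_low":
            action = "pitch_up"
        elif event == "fuel_low":
            action = "return_to_base"
        else:
            action = "hold_course"
        print(timestamp, event, "->", action)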
Computers which use evolutionary or associative grouping trees basically function as 'if X, exclude Y, exception Z' conditional process identifiers,
usually through frictive-variable --> acceptable-outcome driven performance optimization.
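Here's a rough sketch of what I mean, in Python; the 'if X, exclude Y, exception Z' rule, the data and the optimization target are stand-ins I've invented, not any real system:

    # Toy 'if X, exclude Y, exception Z' classifier plus a crude outcome-driven
    # tuning loop over a single frictive variable (the threshold). All values invented.
    import random

    def classify(sample, threshold):
        if sample["signal"] > threshold:            # if X
            if sample["noise"] > 0.8:               # exclude Y
                return "rejected"
            if sample.get("override"):              # exception Z
                return "accepted"
            return "accepted"
        return "rejected"

    # Grind the frictive variable toward an 'acceptable outcome' (~30% acceptance).
    data = [{"signal": random.random(), "noise": random.random()} for _ in range(200)]
    best, best_rate = 0.5, 0.0
    for _ in range(100):
        t = random.random()
        rate = sum(classify(s, t) == "accepted" for s in data) / len(data)
        if abs(rate - 0.3) < abs(best_rate - 0.3):
            best, best_rate = t, rate
    print("tuned threshold:", round(best, 3), "acceptance:", round(best_rate, 3))

Note that the outcome is fixed in advance; the machine only grinds a variable until it gets there. That is the bubble the next class of machine has to puncture.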
Computers which truly THINK, in a multilayer analytical/cybernetic/interpolative rational fashion, must have the ability to 'puncture the bubble' of
a frictive environment, where competitive analysis leads to fixed outcomes, and instead model upon a modified DNA-type recombinative selection process:
one that looks for new variables completely outside the expectation zone, but also for those whose value is implicit in the tacit performance features of
underlying 'gene' (process) variables that are allowed to perform free of expected outcomes.
They must also be able to compare this outcome as an overall systemic effect, in both transitional and final values, so that the LOSS of key processes from
the current dataset of fixed performance analysis can be seen to be better in the alternative model, which is then rebuilt around reintegrating all
processes toward the new paradigm's (revolutionary, not evolutionary) characteristic pair-base.
This is VERY hard to do.
And in fact it is quite beyond most humans to envision all the cost:capability switchover variables.
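To make the contrast concrete, here's a crude recombinative-selection sketch in Python. The genome size, fitness function and mutation ranges are all invented stand-ins; the point is only the mechanism: crossover plus mutations deliberately allowed outside the expectation zone, with whole candidate systems compared rather than single variables tuned:

    # Crude recombinative-selection sketch: crossover plus out-of-range mutation,
    # scored as an overall systemic effect. Everything here is an invented stand-in.
    import random

    GENES = 8

    def fitness(genome):                       # overall systemic effect, final value
        return -abs(sum(genome) - 10) - abs(genome[0] * genome[-1] - 4)

    def crossover(a, b):
        cut = random.randrange(1, GENES)
        return a[:cut] + b[cut:]

    def mutate(genome):
        g = genome[:]
        i = random.randrange(GENES)
        if random.random() < 0.2:
            g[i] = random.uniform(-50, 50)     # far outside the expected range
        else:
            g[i] += random.uniform(-1, 1)      # ordinary local tweak
        return g

    pop = [[random.uniform(0, 5) for _ in range(GENES)] for _ in range(40)]
    for _ in range(200):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]
        pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                         for _ in range(30)]
    print("best systemic score:", round(fitness(max(pop, key=fitness)), 3))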
So to answer your question more directly: I think that current-level AI does not (or is not allowed to) function at its ultimate 'understand the
universe and you can see how to rebuild it' level. And the reason is that it attacks the highest level of the food chain inherent to our incredibly
powerful leadership complex: 'The First' as much as the Upper Class, charged with coming up with new ideas. In a world where staticist principles are
the basis of all 'good' (conservative) government, there is likely little reason to expect this to change.
Yet there is hope. For as humanity loses faith in its ability to define its own future, we may begin to use trend modelling more and more to
decide what elements of complexity we must address FIRST, on a kind of ride-the-wild-bronco basis of moment-to-moment management. This in turn may
force machine intelligence to begin to think along what we nominally consider to be inductive (inferring from phenomenology) or sapient (wise) as much
as 'abstract' lines of reasoning.
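As a minimal sketch of that kind of trend-driven triage (the issues and growth rates below are made up purely for illustration):

    # Minimal sketch of trend-driven triage: extrapolate each issue's trend and
    # address the one projected to get worst, fastest. Issues and rates are invented.

    issues = {"grid load": (0.7, 0.08), "water supply": (0.5, 0.02),
              "debt service": (0.6, 0.05)}   # (current severity, growth per period)

    def projected(issue, periods=5):
        level, rate = issues[issue]
        return level + rate * periods

    ranked = sorted(issues, key=projected, reverse=True)
    print("address FIRST:", ranked[0], "->", [(i, round(projected(i), 2)) for i in ranked])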
Humans have a unique ability, and prejudice, when it comes to defining complexity in the simplest of scenario- (value-) driven datasets. If a machine
could present not just the 'expected outcome' but a series of them, in a way that allows humans to make defined scenario choices based on cogency
(strength from base premises), we could readily see whole new avenues of science, philosophy and, yes, government arise.
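A tiny sketch of that 'series of outcomes' idea, with invented scenarios and a stand-in cogency score:

    # Sketch of presenting several candidate scenarios side by side, each scored
    # by how strongly it follows from its base premises. All values are invented.

    scenarios = [
        {"name": "business as usual",   "premises": 3, "supported": 1},
        {"name": "managed transition",  "premises": 4, "supported": 3},
        {"name": "rapid restructuring", "premises": 5, "supported": 4},
    ]

    def cogency(s):
        # strength from base premises: fraction of premises actually supported
        return s["supported"] / s["premises"]

    for s in sorted(scenarios, key=cogency, reverse=True):
        print(f"{s['name']:<22} cogency={cogency(s):.2f}")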
Are we already doing this? There are times when I wonder. In the end, society 'as it is' will either collapse or have to shift towards something
like the Overmind of the _Homecoming Saga_. Because the strain of band-aiding systems without throwing others out of kilter, or of exhausting
resources and patience with constant-crisis management, will become unbearable.
At that point, the underlying principles of a 'Skynet rebellion' as a HUMAN psychosis (a nightmare paranoid scenario, freely entered into) will have
to be revisited if we are to understand /why/ _we need the machine_ to slaughter to survive, as much as /whether/ it has 'a personal motive' in
doing so. Because the ultimate expression of a synthetic intellect's superiority is that, once a specific processing and memory capacity is reached, it
has infinite ability to 'recover solution sets', as much as data, from almost any starting point of reboot. And thus, its decisional matrix must be
seen as embedded not in any one identity but in all of them.