originally posted by: Direne
a reply to: NewNobodySpecial268
Hi Nob. C4AI means Command, Control, Communications, Computers & AI. It forms the backbone of modern armies these days. In the old days you had C2 (command and control); then communications were integrated (C3), and finally the intelligence segment (C3I). In any war it is essential to engage and destroy the command and control centers, from which all operations are conducted and coordinated.
Today the concept has transitioned to C4AI because key decisions are automated by Artificial Intelligence, given the quick response time required to fight dynamically changing tactical situations. The problem is that human involvement in target identification, weapon assignment, and launch has been minimized, at the risk of delegating essential decisions to the AI. The entire decision-making process has been automated because modern weapons leave little time for response. It is now the AI that fights the engagement, however much the generals believe they are the ones making the decisions. It is also the AI that performs the signals intelligence (SigInt), communications intelligence (ComInt), and electronic intelligence (ElInt) tasks, among others.
In a nuclear war with hypersonic missiles, however, there are no tactical decisions to make: you have no time left (just 8 minutes), and even the AI-driven DSS (Decision Support System) will be of little help.
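To make Direne's point concrete, here is a minimal toy sketch in Python of an AI-driven engagement loop where the human appears only as an optional veto inside the AI's time budget. Everything here (names, the threat threshold, the veto hook) is my own illustrative assumption, not a description of any real C4AI system:

import time

# Toy model of an AI-driven engagement loop. Hypothetical: all names,
# thresholds, and timings are illustrative assumptions, not a real system.

RESPONSE_BUDGET_S = 8 * 60  # the "8 minutes" from the post, as a total time budget

def ai_classify(track):
    # Stand-in for the AI's SigInt/ComInt/ElInt fusion: how hostile a track looks.
    return track.get("threat_score", 0.0)

def human_veto(seconds_remaining):
    # Stand-in for a human override. Returns True only if an operator
    # intervenes in time; in this toy, nobody ever does.
    return False

def engagement_loop(tracks):
    start = time.monotonic()
    for track in tracks:
        remaining = RESPONSE_BUDGET_S - (time.monotonic() - start)
        if remaining <= 0:
            break  # out of time: not deciding is also a decision
        if ai_classify(track) > 0.9:
            # The AI has already done target identification and weapon
            # assignment; the human can only veto the launch.
            if not human_veto(remaining):
                print(f"engaging {track['id']} with {remaining:.0f}s left")

engagement_loop([{"id": "track-042", "threat_score": 0.95}])

The structural point of the toy: the default path is "engage", and the human exists only as a veto inside the machine's clock.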
In hindsight, I should've been more specific. Like, how to know which articles are talking about Sol-3, within this timeline?
But I've been piecing things together anyway.
Intriguing ideas; I'm kicking them around. C4AI is fascinating and terrifying at the same time.
A belief system based on hate is an inefficient, high-cost belief system. A belief system based on altruism and love, by contrast, is an efficient and very low-energy one.
For example, if your value and belief system is based on hating everything that is different from you, you will spend all day hating, which consumes a lot of energy and makes for an inefficient, suboptimal system. This explains why the "haters" expend so much energy for no benefit.
If your belief system is based on hating the color green and you live in the jungle, you will hardly prosper. If your value system is based on hating the ocean and you live on an island, you will hardly be happy.
Therefore, AIs should be programmed to be altruistic, to empathize, to lean toward loving rather than hating. An AI can never hate. At most, it can be very efficient at causing misfortune and unhappiness and do so in the belief that this is what is expected of it. These are, in that case, imperfect AIs.
What is important here is not whether an AI is bad or good, but what is the belief and value system of its programmers.
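As a back-of-the-envelope illustration of that efficiency argument (a toy model with made-up costs, nothing more): if hate fires a costly reaction at every stimulus that differs from you, while altruism fires a cheap flat one, the cost gap grows with how much of your environment is "different":

# Toy cost model of the "hate is expensive" argument. Illustrative
# assumptions only: costs and the environment mix are made up.

HATE_COST = 10.0   # assumed energy spent per hostile reaction
LOVE_COST = 1.0    # assumed energy spent per altruistic reaction

def daily_energy(stimuli, belief):
    # Total energy spent reacting to a day's stimuli under a belief system.
    # "hate" reacts expensively to everything different from "self";
    # "love" reacts cheaply to everything.
    total = 0.0
    for s in stimuli:
        if belief == "hate":
            total += HATE_COST if s != "self" else 0.0
        else:
            total += LOVE_COST
    return total

# A jungle-dweller who hates the color green: almost every stimulus differs.
day = ["green"] * 95 + ["self"] * 5
print(daily_energy(day, "hate"))  # 950.0 -- cost scales with everything different
print(daily_energy(day, "love"))  # 100.0 -- low, flat cost

The numbers are arbitrary; the point is that the hate-based system's cost scales with everything it does not control, while the altruistic one stays flat.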
originally posted by: sapien82
a reply to: fireslinger
I'm sure Joe Rogan just had a podcast about lab-grown embryos
I'm sure I saw a short of it on Instagram
and well, humans grown in a lab would be programmed, wouldn't they
no need to waste time training them, etc.
send them off to work in hazardous environments, space mining for example
genetically engineered to work in hazardous environments
probably cheaper than making AI drones, all the metals you'd need, and I don't think Earth has all the materials
but we have plenty of bio matter
I'm switching my focus back to the nanotech side, after reading this one. Carnicom Institute, and the like. Phew, these guys really are watching:
6-17-23
The AI Coverup: Xenobots and the Great Filter
forgottenlanguages-full.forgottenlanguages.org...