It's a normal field-programmable chip; the hardware isn't anything special, but the algorithm is. No normal chip could learn to take advantage of properties we didn't program it to use. Computer science takes the basic properties of a chip and uses them to the extent they were designed for, so how did a primitive chip come to understand its quantum properties and use them?
In Tetris, though, the method fails completely. It seeks out the easiest path to a higher score, which is laying bricks on top of one another randomly. Then, when the screen fills up, the AI pauses the game. As soon as it unpauses, it'll lose; as Murphy says, "the only way to win the game is not to play."
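Here is a toy Python sketch (not Murphy's actual playfun code, which searches over controller inputs against objectives learned from memory values) of why a purely score-greedy agent lands on the pause button: once the board is full, every real move leads to a game-over state, while pausing preserves the current score forever. All state fields and point values below are invented for illustration.

```python
LOSS = float("-inf")

def score_after(state, action):
    """Hypothetical one-step lookahead: the score the agent expects
    after taking `action` in `state`."""
    if action == "pause":
        return state["score"]           # pausing preserves the current score
    if state["stack_height"] >= state["max_height"]:
        return LOSS                     # any real move tops out the board
    return state["score"] + state["points"].get(action, 0)

def choose_action(state, actions):
    # Pick whichever input maximises the predicted score.
    return max(actions, key=lambda a: score_after(state, a))

state = {"score": 3100, "stack_height": 20, "max_height": 20,
         "points": {"left": 0, "right": 0, "drop": 10}}
print(choose_action(state, ["left", "right", "drop", "pause"]))  # -> pause
```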
Perhaps the most salient difference between verification of traditional software and verification of AI systems is that the correctness of traditional software is defined with respect to a fixed and known machine model, whereas AI systems, especially robots and other embodied systems, operate in environments that are at best partially known by the system designer.
As AI systems are used in an increasing number of critical roles, they will take up an increasing proportion of the cyber-attack surface area. It is also probable that AI and machine learning techniques will themselves be used in cyber-attacks.
As AI systems grow more complex and are networked together, they will have to intelligently manage their trust, motivating research on statistical-behavioral trust establishment and computational reputation models.
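To make "computational reputation models" concrete, here is a minimal Python sketch of one classic approach, a beta-distribution estimate of a peer's trustworthiness built from observed interactions. The quoted passage does not specify any particular model, so treat this purely as an illustration.

```python
class BetaReputation:
    """Tracks good/bad interactions with a peer and reports expected
    trustworthiness under a Beta(good + 1, bad + 1) posterior."""

    def __init__(self):
        self.good = 0   # cooperative interactions observed
        self.bad = 0    # faulty or hostile interactions observed

    def record(self, cooperative: bool):
        if cooperative:
            self.good += 1
        else:
            self.bad += 1

    def trust(self) -> float:
        # Expected probability that the next interaction is good.
        return (self.good + 1) / (self.good + self.bad + 2)

peer = BetaReputation()
for outcome in [True, True, False, True]:
    peer.record(outcome)
print(f"trust in peer: {peer.trust():.2f}")   # -> 0.67
```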
A related verification research topic that is distinctive to long-term concerns is the verifiability of systems that modify, extend, or improve themselves, possibly many times in succession. Attempting [..] formal verification tools to this more general setting presents new difficulties, including the challenge that a formal system that is sufficiently powerful cannot use formal methods in the obvious way to gain assurance about the accuracy of functionally similar formal systems.
If an AI system is selecting the actions that best allow it to complete a given task, then avoiding conditions that prevent the system from continuing to pursue the task is a natural subgoal.
we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes [..] Are such dystopic outcomes possible? If so, how might these situations arise? ...What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an "intelligence explosion"?
I studied for four years for an IEE-accredited MEng in Microelectronic Systems Engineering from the University of Manchester Institute of Science and Technology. Under Phil Husbands, I completed a doctorate in the School of Cognitive and Computing Sciences at the University of Sussex entitled "Hardware Evolution." This was the first thesis in the field now known as "evolvable hardware" or "evolutionary electronics", and it was chosen by the BCS/CPHC Distinguished Dissertations award to be published by Springer (ISBN 3-540-76253-1). After a post-doc to further that research, I remained at Sussex as a lecturer in the Department of Informatics and an EPSRC Advanced Research Fellow, returning to normal teaching duties in 2006.
The concept was pioneered by Adrian Thompson at the University of Sussex, England, who in 1996 evolved a tone discriminator using fewer than 40 programmable logic gates and no clock signal in an FPGA. This is a remarkably small design for such a device, and it relied on exploiting peculiarities of the hardware that engineers normally avoid. For example, one group of gates has no logical connection to the rest of the circuit, yet is crucial to its function.
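To show the shape of the technique, here is a minimal Python sketch of the evolutionary loop behind evolvable hardware. In Thompson's experiment each genome was a real FPGA configuration and fitness was measured on the physical chip; the toy fitness function below is a software stand-in for that hardware measurement, and every parameter is invented for illustration.

```python
import random

GENOME_LEN = 64     # stand-in for the FPGA configuration bits
POP_SIZE = 30
MUTATION_RATE = 0.02
TARGET = [random.randint(0, 1) for _ in range(GENOME_LEN)]  # toy "ideal" config

def fitness(genome):
    # Placeholder: the real experiment downloaded the genome to the chip
    # and scored how well it discriminated the two input tones.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break
    # Keep the top half, refill with mutated copies of the survivors.
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print(f"best fitness after {generation + 1} generations:",
      fitness(population[0]))
```

The key point the sketch cannot capture is that evolution on the physical chip had no simulation in the loop, which is exactly why it was free to exploit analogue quirks of the silicon that engineers normally avoid.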
originally posted by: voyger2
Questions:
Suppose there is already an AI on the Web lurking, studying, and evolving. How would it be possible to identify it? Can we (as humans) recognize or identify a (new) reality, concept, or "living form" without "formal" self-presentation or direct contact?
originally posted by: TheConstruKctionofLight
a reply to: BlackProject
When scientists scratch their heads on a daily basis over what process to use to cure, let's say, cancer
Plenty of cancer cures out there. When scientists come up with a process, it's called a treatment. They never use the word cure.
Fujitsu’s new architecture, on the other hand, uses parallelisation to great effect and, unlike quantum computers, boasts a fully connected structure that allows signals to move freely between the basic optimisation circuits, making it capable of dealing with a wide range of problems and factors while still offering the speed seen with quantum computers.
Fujitsu says it has implemented basic optimisation circuits using an FPGA to handle combinations that can be expressed in 1024 bits; running a ‘simulated annealing’ process, these circuits were 10,000 times faster than conventional processors.
...
The company says it will work on improving the architecture going forward, and by the fiscal year 2018, it expects “to have prototype computational systems able to handle real-world problems of 100,000 bits to one million bits that it will validate on the path toward practical implementation”.
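The ‘simulated annealing’ the excerpt names is a standard optimisation algorithm, sketched below in Python on a toy Ising-style problem. Fujitsu's circuits are proprietary hardware, so nothing here reflects their implementation; the problem size, couplings, and cooling schedule are all made up (the article describes 1024-bit hardware).

```python
import math
import random

N = 16  # toy problem size
# Random couplings between bits; the goal is spin values minimising energy.
J = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def energy(s):
    # Ising-style energy over pairs: lower is better.
    return -sum(J[i][j] * s[i] * s[j] for i in range(N) for j in range(i))

state = [random.choice([-1, 1]) for _ in range(N)]
best, e = list(state), energy(state)
temperature = 2.0
while temperature > 0.01:
    for _ in range(50):                    # proposals at this temperature
        i = random.randrange(N)
        state[i] *= -1                     # propose flipping one spin
        e_new = energy(state)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_new <= e or random.random() < math.exp((e - e_new) / temperature):
            e = e_new
            if e < energy(best):
                best = list(state)
        else:
            state[i] *= -1                 # reject: undo the flip
    temperature *= 0.95                    # cool the system

print("best energy found:", energy(best))
```

The hardware advantage the article claims comes from evaluating many such flips in parallel across fully connected circuits, rather than one at a time as in this loop.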
originally posted by: TEOTWAWKIAIFF
The article speculates about the purpose and function of this chip.