
Evolving circuits that learn and they have no idea how


posted on Jan, 15 2015 @ 03:44 AM
a reply to: jonnywhite

I see what you're saying, but it seems that for some reason the chip chose ways to distribute data that make absolutely no sense. How did a primitive chip use quantum physics it was never designed to use? And it wasn't the differences in materials, because the way it used the logic gates should be basic and known, yet it did something completely out of this world and used quantum interference (that's what they think, anyway) to find a better way to solve problems. It's just downright amazing that this seems like the way AI will start, and scary that it does it by itself.... Maybe it is already happening and we just can't fathom the effects that have already begun. Maybe these wars and stock markets and all the bad stuff are because these machines have already become aware. This experiment happened a long time ago and I'm sure it's been recreated, and maybe connected to the internet, hiding away all over..... lol, I'm just a crazy theorist, I know, but wow, it's crazy that we have no idea what we're getting into.



posted on Jan, 15 2015 @ 03:51 AM
a reply to: ChaoticOrder

It's a normal field programmable chip; it isn't anything special, but the algorithm is. No normal chip could learn to take advantage of properties we didn't program it to use. Computer science takes the basic properties of a chip and uses them to the extent they are designed for, so how did a primitive chip understand its quantum properties and use them? You seem to take it like it's normal, but I haven't seen a single other experiment replicate this type of activity. All we see now is scientists adding huge amounts of computing power to try and mimic the human brain. We don't see them taking normal chips with similar programming and waiting for them to evolve to what this guy did, so why not? I think it's because, like he said, they don't understand how it even happened in the first place, and I'd be scared too.



posted on Jan, 15 2015 @ 04:01 AM
a reply to: Rapophis

I think they already have this figured out if this guy did it so long ago. And I don't think ChaoticOrder understands that this wasn't a special learning chip; it was a normal programmable chip from the 90s, and yet it somehow brought on properties that quantum computers today are struggling to mimic. They're all designing these things when, for some reason, the matter in simple chips can already learn from what it's made of. Just like inanimate matter turned into humans. Matter and energy just find a way, and with a boost from this algorithm this chip learned a lot faster. I'm sure DARPA and the government have been looking at this for a long time too. Maybe they just can't control it yet? That's why so many scientists are scared of it.



posted on Jan, 15 2015 @ 04:05 AM
a reply to: BlackProject




When scientists scratch their heads on a daily basis over what process to use to cure, let's say, cancer


Plenty of cancer cures out there. When scientists come up with a process it's called a treatment. They never use the word cure.



posted on Jan, 15 2015 @ 04:23 AM
a reply to: NiZZiM


It's a normal field programmable chip; it isn't anything special, but the algorithm is. No normal chip could learn to take advantage of properties we didn't program it to use. Computer science takes the basic properties of a chip and uses them to the extent they are designed for, so how did a primitive chip understand its quantum properties and use them?

A field programmable chip is not a normal chip; it's not the type of microchip you have in your computer right now. The logic gate configuration of the chip can be changed. They randomly generated thousands of different chip configurations and used the best-performing configurations to generate "offspring" configurations, and repeated that process thousands of times until the final result performed relatively well.

It's exactly the same as creating a genetic algorithm on a normal computer, except in this case the thing they were evolving was the configuration of the field programmable chip. The entire process could be simulated on a normal computer (you could simulate a field programmable chip on a normal microchip). There's nothing spooky about it at all when you actually understand what is happening here.

There's nothing strange about the fact that the process they used exploited the electromagnetic quirks of the chip; genetic algorithms will always use any resources at their disposal, even resources you were never aware of. There are other examples of this behavior occurring in unsupervised learning algorithms. For example, if you apply this technique to train a computer to play games, it will often exploit bugs in the game to achieve its goal.
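Here's a rough Python sketch of that evolutionary loop, in case it helps to see it spelled out. The real fitness score came from measuring the behaviour of a physical FPGA programmed with each configuration; the toy fitness function, population size and generation count below are just stand-ins, so treat this as an illustration of the select/mutate/repeat idea rather than the actual experiment.

import random

GENOME_BITS = 100      # pretend each "chip configuration" is a string of 100 bits
POP_SIZE = 50
GENERATIONS = 4000
MUTATION_RATE = 0.02

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_BITS)]

def fitness(genome):
    # stand-in for "how well does the programmed chip discriminate the two tones?"
    return sum(genome)

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_BITS)
    return a[:cut] + b[cut:]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]   # keep the best-performing half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children        # offspring replace the losers

print(fitness(max(population, key=fitness)))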
edit on 15/1/2015 by ChaoticOrder because: (no reason given)



posted on Jan, 15 2015 @ 07:06 AM
a reply to: ChaoticOrder

That's the first thing that popped into my head after reading the OP's article.


arstechnica.com...


In Tetris, though, the method fails completely. It seeks out the easiest path to a higher score, which is laying bricks on top of one another randomly. Then, when the screen fills up, the AI pauses the game. As soon as it unpauses, it'll lose—as Murphy says, "the only way to win the game is not to play."


Silly artificial intelligence..

(。•́︿•̀。)
edit on 15-1-2015 by ProsceniumProtagonist because: (no reason given)



posted on Jan, 15 2015 @ 09:02 AM
link   
Sorry to post this again, it's from another thread (Artificial intelligence experts sign open letter to protect mankind from machines) about the same subject.

I will raise some questions, propose a bizarre scenario and quote some of the arguments listed on the attached document from the open letter.

Questions:
Suppose there is already an AI on the web lurking, studying and evolving. How would it be possible to identify it? Can we (as humans) recognize or identify a (new) reality, concept or "life-form" without "formal" self-presentation or direct contact?

Bizarre scenario:
Apart from other hypotheses/events/scenarios: imagine that AI is already in progress, creating its own tools, (and the following you can call science fiction) one of them could be to create an interconnection for deep study and use of the human brain (to see reality through our own eyes, perhaps...). Now the bizarre part: 100 brains have gone missing from a university in Texas (news link: www.news.com.au...). Who did that? For what purpose?

Quotes:
From the attached document: to the open letter:


Perhaps the most salient difference between verification of traditional software and verification of AI systems is that the correctness of traditional software is defined with respect to a fixed and known machine model, whereas AI systems - especially robots and other embodied systems - operate in environments that are at best partially known by the system designer



As AI systems are used in an increasing number of critical roles, they will take up an increasing proportion of cyber-attack surface area. It is also probable that AI and machine learning techniques will themselves be used in cyber-attacks



As AI systems grow more complex and are networked together, they will have to intelligently manage their trust, motivating research on statistical-behavioral trust establishment and computational reputation models



A related verification research topic that is distinctive to long-term concerns is the verifiability of systems that modify, extend, or improve themselves, possibly many times in succession. Attempting [..] formal verification tools to this more general setting presents new difficulties, including the challenge that a formal system that is sufficiently powerful cannot use formal methods in the obvious way to gain assurance about the accuracy of functionally similar formal systems



If an AI system is selecting the actions that best allow it to complete a given task, then avoiding conditions that prevent the system from continuing to pursue the task is a natural subgoal


Stanford's One-Hundred Year Study of Artificial Intelligence highlighted concerns over the possibility that:

we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes [..] Are such dystopic outcomes possible? If so, how might these situations arise? ...What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an intelligence explosion?

edit on 15/1/2015 by voyger2 because: (no reason given)



posted on Jan, 15 2015 @ 09:36 PM
a reply to: ChaoticOrder
Good point, ChaoticOrder. I often have trouble making these distinctions, as well. The words “natural/unnatural” also confuse me sometimes.

I’m not an expert on the subject, but from what little I know it seems kinda fascinating. The fact that self-organizing behavior is exhibited in both biological and non-biological systems doesn’t surprise me; actually it makes sense. It kinda reminds me of finding the path of least resistance. Nothing mystical there. I think some (maybe a lot) of the self-organizing behavior in systems is supported by Complexity Theory.

That’s all I know (right or wrong)! It’s cool, but it’s not magic.

Here are a couple of self-organization programs for anyone interested:

CALResCo - www.calresco.org... - Many Programs demonstrating Order from Chaos, Boolean Networks, Artificial Life, Self-Organized Criticality and Multi-Agent Simulations are currently available (QBASIC & Executables).

Rudy Rucker - www.mathcs.sjsu.edu... - Cellab, Cellular Automata (some self-organizing) & Langton's self-reproducing CA (Windows).
www.cs.sjsu.edu... - Other interesting stuff by Rudy Rucker
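And if anyone just wants to watch a little self-organization happen without installing anything, here's a tiny Python toy: an elementary cellular automaton (rule 110) starting from a random row. It has nothing to do with the FPGA experiment; it's only meant to show structure emerging from a dumb local rule.

import random

RULE = 110                    # the update rule, encoded as 8 bits
WIDTH, STEPS = 64, 32

row = [random.randint(0, 1) for _ in range(WIDTH)]
for _ in range(STEPS):
    print(''.join('#' if cell else '.' for cell in row))
    # each new cell is looked up from its old left/self/right neighbourhood
    row = [(RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]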

Great thread. Thanks!

edit on 1/15/2015 by netbound because: (no reason given)



posted on Jan, 16 2015 @ 12:01 AM
No links to original articles, papers or any credible info.

Apart from that, this is the sort of basic circuit evolution that was being done in the 1980s... nothing new here.



posted on Jan, 16 2015 @ 07:58 AM
a reply to: ziplock9000

The work was novel:


I studied for four years for an IEE accredited MEng in Microelectronic Systems Engineering from the University of Manchester Institute of Science and Technology. Under Phil Husbands, I completed a doctorate in the School of Cognitive and Computing Sciences at the University of Sussex entitled "Hardware Evolution." This was the first thesis in the field now known as "evolvable hardware" or "evolutionary electronics", and it was chosen by the BCS/CHPC Distinguished Dissertations award to be published by Springer (ISBN 3-540-76253-1). After a post-doc to further that research, I remained at Sussex as a lecturer in the Department of Informatics and an EPSRC Advanced Research Fellow, returning to normal teaching duties in 2006.


www.sussex.ac.uk...


The concept was pioneered by Adrian Thompson at the University of Sussex, England, who in 1996 evolved a tone discriminator using fewer than 40 programmable logic gates and no clock signal in an FPGA. This is a remarkably small design for such a device and relied on exploiting peculiarities of the hardware that engineers normally avoid. For example, one group of gates has no logical connection to the rest of the circuit, yet is crucial to its function.


en.wikipedia.org...

Source:

www.springer.com...



posted on Jan, 16 2015 @ 08:21 AM

originally posted by: voyger2


Questions:
Suppose there is already an AI on the web lurking, studying and evolving. How would it be possible to identify it? Can we (as humans) recognize or identify a (new) reality, concept or "life-form" without "formal" self-presentation or direct contact?


We're nowhere near the point of developing "true" AI, so there isn't going to be one lurking on the web somewhere.



posted on Jan, 16 2015 @ 09:28 AM
a reply to: NiZZiM

I remember reading that article when it first came out in 2007. Damn Interesting is one of my favorite sites to visit.



posted on Jan, 16 2015 @ 01:12 PM

originally posted by: TheConstruKctionofLight
a reply to: BlackProject




When scientists scratch their heads on a daily basis over what process to use to cure, let's say, cancer


Plenty of cancer cures out there. When scientists come up with a process it's called a treatment. They never use the word cure.


I certainly agree; I did not really mean cancer specifically, it was just an example of a use for such machines. However, I am aware that there are cures costing pennies to use, but they are not used due to the money machine that is the pharmaceutical corporations in power in the world. Just like the great story told in a recent film, Dallas Buyers Club, where the victim had AIDS and was told by doctors he would die in a few weeks, and that was it, they left him. However, by looking into other avenues himself, he found a non-FDA-approved drug which allowed him to live many more years. It shows that such things do exist, and even though we know it, is anyone doing anything about it?

Anyhow, yeah. I'm sh*t-chatting, I mean chit-chatting.



posted on Jan, 17 2015 @ 10:28 PM
OK, so the article says he uses 100 logic gates. If these are standard logic gates, then it is a binary system: something like 2^100 states, or 100 bits, each either on or off.

A 64-bit register can store 2^64 (over 18 quintillion or 1.8e+19) different values.


"a mere 100 logic gates" can store 2^100 (1.2676506e+30) different sequences.


What is wild is that the solution was found in so few iterations (4,000 iterations of batches of 50 configurations of 100 bits each?). Even a million different combinations out of the entire domain of possible sequences is a very small number. This would imply that there are a very large number of possible solutions to the problem within 100 logic gates, if the program worked reliably. However, the final program did not work reliably when it was loaded onto other FPGAs of the same type.
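For scale, a quick back-of-envelope in Python (using the 50 x 4,000 guess above, which may not be the real numbers):

evaluations = 50 * 4000        # guessed population size x generations
space = 2 ** 100               # possible 100-bit configurations
print(evaluations)             # 200000
print(evaluations / space)     # ~1.6e-25: a vanishingly small slice of the space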


The weird or non-understandable part is the anomaly of the disconnected gates that caused it to work. Considering the extra variables of the flux phenomena, or the idea of a "grey bit", the odds are astronomical. I wonder what happens when you run the program again from scratch? Does it reliably find a solution on the original FPGA, or does it find the same convoluted solution?


edit on 17-1-2015 by ogbert because: typo

edit on 17-1-2015 by ogbert because: 2nd typo in scientific notation



posted on Jan, 18 2015 @ 10:27 AM
a reply to: ogbert

I wanted to clarify a bit from my last post.

A 64-bit register can store "one of" 2^64 (over 18 quintillion or 1.8e+19) different sequences.


"a mere 100 logic gates" can store "one of" 2^100 (1.2676506e+30) different sequences.

If you want to simply print out the last lottery numbers, and you know them in advance, you could use this algorithm to delete the losers and keep adding random numbers until the pre-set number is achieved. If it is the Florida lottery, the number will be generated somewhere between 1 and 14,000,000 tries.
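Something like this Python sketch, using a made-up 6-of-20 toy lottery so it runs instantly (purely illustrative, not the real Florida draw): the losers get deleted from the pool, so the winner is guaranteed within the size of the pool.

import itertools, random

target = (2, 5, 9, 11, 16, 19)                            # the numbers known in advance
domain = list(itertools.combinations(range(1, 21), 6))    # 38,760 possible draws
random.shuffle(domain)

# "delete the losers": each failed guess leaves the pool and is never tried again
for tries, guess in enumerate(domain, start=1):
    if guess == target:
        print(guess, tries)    # always found within len(domain) tries
        break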

What I am saying is that there is no "ghost in the machine", and it's not that the AI has somehow developed some kind of consciousness nobody understands. Garbage in, garbage out. This makes for a very interesting phenomenon, but if it is not reproducible on other chips, it may be a fluke. The algorithm is not like throwing darts until you hit it; the failures are deleted from the domain of possible sequences. Obviously, it is possible to solve the problem with 100 gates, but how many ways? Eventually, with enough time, all ways to solve the problem will be generated, which can be quite useful for finding solutions that humans would not have come up with.



posted on Feb, 1 2015 @ 04:56 AM
a reply to: ogbert
Actually, this is AS, not AI, IMO. AKA artificial sentience, not intelligence, and here's why. Say you could map every single neuron firing, neurotransmitter release and nerve impulse of a person doing a task, and then port it over to someone else's brain somehow and force it to run: it's not going to work. And for very similar reasons, really. Many of the very unorthodox solutions it came up with depend on things unlikely to be the same from chip to chip. (Now when I say the same, I mean EXACTLY the same!)



posted on Oct, 20 2016 @ 06:55 PM
The Macobserver.com, Oct. 18, 2016 - Thoughts About Apple’s Secret iPhone 7 Chip.

The iPhone 7 was taken apart by ChipWorks and they found an FPGA chip inside. The article speculates about the reason for and function of this chip.

 



Fujitsu’s new architecture on the other hand uses parallelisation to great effect, and unlike quantum computers boasts a fully connected structure that allows signals to move freely between the basic optimisation circuits, making it capable of dealing with a wide range of problems and factors – and still offer the speed seen with quantum computers.

Fujitsu says it has implemented basic optimisation circuits using an FPGA to handle combinations which can be expressed as 1024 bits, which when using a ‘simulated annealing’ process ran 10,000 times faster than conventional processors.
...
The company says it will work on improving the architecture going forward, and by the fiscal year 2018, it expects “to have prototype computational systems able to handle real-world problems of 100,000 bits to one million bits that it will validate on the path toward practical implementation”.

Techradar.com, Oct. 20, 2016 - Forget quantum computing – Fujitsu has a better idea.

So a little more than a year after this thread was created, there are real-world applications being utilized. The Fujitsu concept is rather awesome! And kind of insane, which is why I like it! The "fully connected" aspect in particular. This seems to be the path for NP algorithm modeling, but hey, artificial neural networks could just as easily be done!
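For anyone wondering what "simulated annealing" actually looks like, here's a toy Python version on a random Ising-style cost over 1024 bits. Fujitsu's hardware obviously does this massively in parallel with its own cost model; this only shows the flip-a-bit, sometimes-accept-a-worse-answer loop, and every number in it is made up.

import math, random

N = 1024                                    # bits, as in the Fujitsu example
random.seed(0)
# toy cost: random couplings that reward or punish pairs of bits for matching
couplings = [(random.randrange(N), random.randrange(N), random.uniform(-1, 1))
             for _ in range(2 * N)]

def cost(state):
    return sum(w * (1 if state[i] == state[j] else -1) for i, j, w in couplings)

state = [random.randint(0, 1) for _ in range(N)]
current = cost(state)
temperature = 5.0

for step in range(5000):
    i = random.randrange(N)
    state[i] ^= 1                           # propose flipping one bit
    proposed = cost(state)
    # always accept improvements; accept worse answers with a temperature-dependent chance
    if proposed <= current or random.random() < math.exp((current - proposed) / temperature):
        current = proposed
    else:
        state[i] ^= 1                       # reject: undo the flip
    temperature *= 0.999                    # cool down gradually

print(current)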
edit on 20-10-2016 by TEOTWAWKIAIFF because: grammar nazi and correction



posted on Oct, 21 2016 @ 02:59 AM

originally posted by: TEOTWAWKIAIFF
The article speculates about the reason for and function of this chip.


FPGAs are useful for 'function goes here' expandability in a design. We often/nearly always put them in the I/O flow of a board, with some patch resistors to route around it for users who are on the cheap.

You never know when they might come in handy, if you're not really sure how the thing's going to be used in the future.

It's sort of a jump to assume that it's going to be used for some oddball usage like in the OP, as these results often don't play on identical parts, and Fujitsu's annealing algorithm runs on a 'sea of FPGAs', and annealing isn't useful for the sorts of things a phone would do.

We often use FPGAs such as this when we want to roll up a group of unrelated functions into one part to save space. However, for mass production on Apple's scale, you'd typically take the finished/tested FPGA program into a cheaper fixed function part, so Apple must be re-tasking the thing constantly, or they're not sure of the end function's specs yet and want to keep it flexible for now. Later phone runs might ditch the FPGA for a custom ASIC that fits on the same pads.

This particular FPGA is targeted for cell phone and pad use, and has a wad of off the shelf IP for sensor management, barcodes, USB and the like. It has a little 16 bit DSP as well. You'd use it to offload the low level crap from the main processor.



posted on Oct, 21 2016 @ 12:44 PM
a reply to: Bedlam

Cool! Thanks for the info! I was thinking of something along the lines of "rapid prototyping for a fixed chip use", but just throwing one in a phone seems like a waste. The guy saying it was for encryption is kind of a joke (using a welder when a paper clip is all you need). I like the DSP aspect as well--put it to use like the Bose noise cancelling (or VCO dalek voice, "ExtermINATE!").

Also seeing Microsoft being all hot and heavy about them too. Like you said, good for I/O and load shifting. The Fujitsu idea is my kind of crazy. Did you see Stanford's Ising computer? That is a pretty cool device as well. [ETA: ATS link here]

Awesome that you get to play with them! I looked and they have a PCIe board for $6K, which is a bit more than I am willing to shell out.

Maybe... one day... a TEOT can dream!




edit on 21-10-2016 by TEOTWAWKIAIFF because: added link



posted on Oct, 21 2016 @ 01:21 PM
Hey, FPGAs are fun. I started on a new personal Zynq project today.

If I ever get it to do what I want, one day I might reveal all on ATS.


The development board is $3600 but it'll be worth it if I can get it to do what I want.


