The recent discovery that magnetic resonance imaging can be used to map changes in brain hemodynamics that correspond to mental operations extends traditional anatomical imaging to include maps of human brain function. The ability to observe both the structures themselves and which structures participate in specific functions is due to a new technique called functional magnetic resonance imaging (fMRI).
Japanese automaker Honda has developed technology that uses brain signals to control a robot's moves, hoping to someday link a person's thoughts with machines in everyday life.
Procedures used to train laboratory animals often incorporate operant learning paradigms in which the animals are taught to produce particular responses to external cues (such as aural tones) in order to obtain rewards (such as food). Here we show that by removing the physical constraints associated with the delivery of cues and rewards, learning paradigms based on brain microstimulation enable conditioning approaches to be used that help to transcend traditional boundaries in animal learning. We have used this paradigm to develop a behavioural model in which an experimenter can guide distant animals in a way similar to that used to control 'intelligent' robots.
"The Under Secretary of the Navy (UNSECNAV) is the Approval Authority for research involving: (a) Severe or unusual intrusions, either physical or psychological, on human subjects (such as consciousness-altering drugs or mind-control techniques). (b) Prisoners. (c) Potentially or inherently controversial topics (such as those likely to attract significant media coverage or that might invite challenge by interest groups). The UNSECNAV forwards to the Director, Defense Research and Engineering (DDR&E) for final determination: (a) All proposed research involving exposure of human subjects to the effects of nuclear, biological or chemical warfare agents or weapons, as required by reference (a)."
Moral judgments often have less to do with outcome and more to do with intention. Take murder, for instance: The U.S. legal system makes distinctions between a crime committed in the heat of the moment and one that is planned ahead of time. But moral judgments may not be as sacrosanct as we believe: MIT scientists have shown that they can alter our moral judgments simply by magnetically interfering with a certain part of the brain.
By the year 2020, your $1,000 personal computer will have the processing power of the human brain: 20 million billion calculations per second (100 billion neurons times 1,000 connections per neuron times 200 calculations per second per connection). By 2030, it will take a village of human brains to match a $1,000 computer. By 2050, $1,000 worth of computing will equal the processing power of all human brains on earth.
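The arithmetic behind the "20 million billion" figure can be checked directly; this sketch just multiplies out the three factors the quote names:

```python
# Back-of-the-envelope check of the brain-capacity estimate quoted above.
neurons = 100e9                 # 100 billion neurons
connections_per_neuron = 1_000  # 1,000 synaptic connections each
calcs_per_conn_per_sec = 200    # 200 calculations per second per connection

total = neurons * connections_per_neuron * calcs_per_conn_per_sec
print(f"{total:.0e} calc/s")  # 2e+16 calc/s, i.e. 20 million billion
```

The product is 2 × 10^16 operations per second, matching the figure in the text.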
Of course, achieving the processing power of the human brain is necessary but not sufficient for creating human level intelligence in a machine. But by 2030, we’ll have the means to scan the human brain and re-create its design electronically.
Most people don’t realize the revolutionary impact of that. The development of computers that match and vastly exceed the capabilities of the human brain will be no less important than the evolution of human intelligence itself some thousands of generations ago. Current predictions overlook the imminence of a world in which machines become more like humans, programmed with replicated brain synapses that re-create the ability to respond appropriately to human emotion, and humans become more like machines, our biological bodies and brains enhanced with billions of “nanobots,” swarms of microscopic robots transporting us in and out of virtual reality. We have already started down this road: Human and machine have already begun to meld.
We are conducting basic neuroscientific and signal-processing research on imagined speech production and on intended direction. When thinking to oneself silently, one can often hear imagined words in one's head. We use non-invasive brain-imaging techniques like EEG, MEG and fMRI to learn more about how the brain produces imagined speech when one thinks. We aim to process EEG and MEG signals to determine what words a person is thinking and to whom or what location the message should be sent.
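The pipeline described above (record signals, extract features, classify which word was imagined) can be illustrated with a purely synthetic toy. This is not the research group's actual method: the "epochs" here are sine waves standing in for EEG, and the band edges, sampling rate, and nearest-centroid classifier are all assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 256  # assumed sampling rate, Hz

def band_power(epoch, lo, hi):
    """Mean spectral power of a 1-second epoch in the band [lo, hi) Hz."""
    spec = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), 1 / fs)
    return spec[(freqs >= lo) & (freqs < hi)].mean()

def fake_epoch(freq):
    """Synthetic stand-in for an EEG epoch: one dominant sine plus noise."""
    t = np.arange(fs) / fs
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(fs)

# Pretend each imagined word produces a different dominant rhythm.
train = {"yes": [fake_epoch(10) for _ in range(5)],
         "no":  [fake_epoch(20) for _ in range(5)]}

def features(epoch):
    # Two hand-picked bands (roughly alpha and beta) as a 2-D feature.
    return np.array([band_power(epoch, 8, 13), band_power(epoch, 18, 25)])

centroids = {word: np.mean([features(e) for e in eps], axis=0)
             for word, eps in train.items()}

def classify(epoch):
    """Nearest-centroid decision: which trained word is this epoch closest to?"""
    f = features(epoch)
    return min(centroids, key=lambda w: np.linalg.norm(f - centroids[w]))

print(classify(fake_epoch(10)))  # "yes"
```

Real imagined-speech decoding uses far richer features and models; the point of the sketch is only the shape of the task: labelled training epochs, a feature map, and a classifier.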
Tan Le's astonishing new computer interface reads its user's brainwaves, making it possible to control virtual objects, and even physical electronics, with mere thoughts (and a little concentration). She demos the headset, and talks about its far-reaching applications.
A brain–computer interface (BCI), sometimes called a direct neural interface or a brain–machine interface, is a direct communication pathway between a brain and an external device. BCIs are often aimed at assisting, augmenting or repairing human cognitive or sensory-motor functions.
In 1998, when Kevin Warwick, researcher and Professor of Cybernetics at the University of Reading, England, implanted a silicon chip transponder into his left arm and connected it to his nervous system, he became the world's first cyborg: a man-machine hybrid. Some call Kevin Warwick a pioneer in the field of neuro-surgical implantation; others think he is a dangerous scientist who has gone crazy and wants to change mankind's evolution by creating a superior race: the cyborgs. In this video interview we talk about ultra-sonic senses, brain-to-brain telepathic communication, the therapeutic benefits of his experiments and why he thinks he won't be the only cyborg on this planet in the future.
They’re trying to develop now a beam of light that would be projected onto your forehead. It would go a couple of millimeters into your frontal cortex, and then receptors would get the reflection of that light. And there are some studies that suggest that we could use that as a lie detection device, or perhaps even a thought detection device, without you even knowing it was happening.
Future Attribute Screening Technology (FAST)[1] is a program created by the Department of Homeland Security. It was originally titled Project Hostile Intent. The purpose is to detect "hostile thoughts" by screening people at border posts. The DHS science spokesman John Verrico stated in September 2008 that they were at 78% accuracy on mal-intent detection, and 80% on deception.[2] In a meeting held on July 24, 2008, the DHS Under Secretary Jay Cohen stated that the goal is to create a new technology that would work in real time, as opposed to after a crime is already committed.[3] The new screening technology measures pulse rate, skin temperature, breathing, facial expression, body movement, pupil dilation, and additional cues to see if you are a terrorist or have intentions of causing harm. The technology would mostly be used at airports and special events.
The next step is to get more information out of the brain. More electrodes will be used on the brain and participants will be asked to say the same word with a different inflection so that researchers can monitor the different neuropatterns that are evoked.
Computers will be able to read your thoughts by 2020, according to a senior executive from Intel.
Originally posted by MemoryShock
Excellent thread and an incredible accumulation of resources...if I may, I would like to add one more thread to the list...Dream Subliminals.
Not necessarily a shameless plug as there are some very valid considerations here with regards to some of the applications of this technology...
Thanks for posting!!
The connectome is the complete description of the structural connectivity (the physical wiring) of an organism’s nervous system. The field of science dealing with the assembly, mapping and analysis of data on neural connections is called connectomics.
So let's return from the heights of metaphor and return to science. Suppose our technologies for finding connectomes actually work. How will we go about testing the hypothesis "I am my connectome"? Well, I propose a direct test. Let us attempt to read out memories from connectomes. Consider the memory of long temporal sequences of movements, like a pianist playing a Beethoven sonata. According to a theory that dates back to the 19th century, such memories are stored as chains of synaptic connections inside your brain. Because, if the first neurons in the chain are activated, through their synapses they send messages to the second neurons, which are activated, and so on down the line, like a chain of falling dominoes. And this sequence of neural activation is hypothesized to be the neural basis of that sequence of movements.
So one way of trying to test the theory is to look for such chains inside connectomes. But it won't be easy, because they're not going to look like this. They're going to be scrambled up. So we'll have to use our computers to try to unscramble the chain. And if we can do that, the sequence of the neurons we recover from that unscrambling will be a prediction of the pattern of neural activity that is replayed in the brain during memory recall. And if that were successful, that would be the first example of reading a memory from a connectome.
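The "unscrambling" step described above can be sketched as a tiny graph problem. This toy assumes, purely for illustration, that the synapses we have extracted form a single simple chain (real connectome data is vastly messier, and neuron names here are hypothetical):

```python
def recover_chain(edges):
    """Recover a linear activation chain from an unordered set of
    directed synaptic connections (pre -> post), assuming the edges
    together form one simple chain with no branches or cycles."""
    succ = dict(edges)                 # pre-synaptic -> post-synaptic
    targets = set(succ.values())
    # The first neuron in the chain is the only one never on the
    # receiving end of a synapse.
    start = next(n for n in succ if n not in targets)
    chain = [start]
    while chain[-1] in succ:           # follow the dominoes forward
        chain.append(succ[chain[-1]])
    return chain

# Scrambled synapses from a hypothetical 5-neuron chain A->B->C->D->E
edges = [("C", "D"), ("A", "B"), ("D", "E"), ("B", "C")]
print(recover_chain(edges))  # ['A', 'B', 'C', 'D', 'E']
```

The recovered ordering is exactly the "prediction of the pattern of neural activity" the passage talks about: given only the wiring, it predicts the order in which the neurons would fire during recall.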