Robotics, Transhumanism and Mind Control
Mind Reading Revisited
January 21, 2009, Singularity Hub - Just recently we posted about a story in which researchers from Carnegie Mellon were able to read people’s actual thoughts with a machine. The machine uses a technique called fMRI to noninvasively monitor a person’s brain activation patterns as they think about different objects. Today we would like to follow up on this story with new information about how it works. Science Central has a decent article, but more importantly, to our delight we have discovered that the actual paper published by the Carnegie Mellon researchers is freely available online.
So how does fMRI work anyway?
It turns out that when we think about an object, only a small subset of the neurons in our brains is actively firing to produce the thought. These firing neurons require more energy, and they need it quickly, so they receive more blood flow and their relative oxygen level changes. This change in oxygen level can be detected magnetically, and hence regions of the brain with firing neurons give off a different magnetic signal than regions with neurons at rest.
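To make the idea a bit more concrete, here is a rough Python sketch (not the researchers’ code) of how fMRI analyses typically model this: the on-and-off firing of neurons is convolved with a hemodynamic response function describing the delayed blood-oxygenation change the scanner actually measures. The timing values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gamma

TR = 1.0                       # seconds between scans (assumed)
t = np.arange(0, 30, TR)       # 30-second response window

# Canonical double-gamma hemodynamic response: an oxygenation peak a few
# seconds after the neurons fire, followed by a small undershoot.
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)
hrf /= hrf.sum()

# Hypothetical neural activity: the subject thinks about an object during
# seconds 10-12 and 40-42 of a 60-second run.
neural = np.zeros(60)
neural[10:13] = 1.0
neural[40:43] = 1.0

# The measured BOLD signal is the slow, blurred echo of that activity.
bold = np.convolve(neural, hrf)[:len(neural)]
print(np.round(bold, 3))
```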
Current technology does not allow us to magnetically monitor the oxygen level of every single one of the billions of neurons in the brain. Instead the brain is logically divided into several thousand small cubes of tissue, each about 45 mm³ in volume, called voxels. When you are thinking about an object, a certain subset of these thousands of voxels “lights up”, representing your thought. Machine learning algorithms are then used to learn a mapping from voxel activation patterns to thoughts about individual objects.
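Here is a toy sketch of that kind of pattern classification, using made-up voxel data rather than anything from the actual study: each trial is a vector of voxel activations, and a standard classifier learns which object the pattern corresponds to.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_voxels = 500
objects = ["hammer", "house", "apple"]

# Give each object an invented "signature" pattern across the voxels, then
# add noise to simulate individual trials of a person thinking about it.
signatures = {obj: rng.normal(0, 1, n_voxels) for obj in objects}
X, y = [], []
for obj in objects:
    for _ in range(20):                        # 20 noisy trials per object
        X.append(signatures[obj] + rng.normal(0, 1.5, n_voxels))
        y.append(obj)
X, y = np.array(X), np.array(y)

# Score the classifier only on trials it has never seen.
clf = GaussianNB()
print("mean decoding accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```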
The most exciting revelation from this research is the discovery that the brain activity of a person thinking about an object, such as a hammer, is very similar to the brain activity of a completely different person who is also thinking of a hammer. People grow up in different places and have completely different experiences, yet thoughts of common objects, such as hammers, seem to be held within our brains in a representation that is fairly consistent among all of us.
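A quick numerical sketch of that similarity claim, again with synthetic stand-in data: if two people share a common neural code for objects, one person’s average “hammer” pattern should correlate best with the other person’s “hammer” pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 500
objects = ["hammer", "house", "apple"]
shared = {obj: rng.normal(0, 1, n_voxels) for obj in objects}  # common code

def mean_pattern(obj, noise=1.0, trials=20):
    """Average of noisy trials for one simulated person."""
    return np.mean([shared[obj] + rng.normal(0, noise, n_voxels)
                    for _ in range(trials)], axis=0)

person_a_hammer = mean_pattern("hammer")
for obj in objects:
    r = np.corrcoef(person_a_hammer, mean_pattern(obj))[0, 1]
    print(f"A's hammer vs B's {obj}: r = {r:.2f}")
```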
So how far can we go with this technology? Will we really be able to read people’s minds beyond the level of simple objects? The Carnegie Mellon researchers seem pretty optimistic that they can do much better in the coming years. At least in terms of vision (rather than thinking), it appears that they can interpret almost exactly what you are seeing with fMRI. The following quote from the Science Central article is quite telling:
The researchers excluded the vision area of the brain from the scans “because it’s almost too easy a target,” explains Just. “The visual cortex really contains a very faithful, accurate representation of a shape that you’re looking at -- whatever is on your retina gets translated to your visual cortex at the back of your brain. And if you look for that pattern, that’s a lot easier, so we can be very accurate there.”
Reading Your Mind to Tag Images (and Work With Computers)
January 10, 2010, Singularity Hub - The most valuable machine you own may be between your ears. Work done at Microsoft Research is using electroencephalograph (EEG) measurements to “read” minds in order to help tag images.
When someone looks at an image, different areas of their brain show different levels of activity. This activity can be measured, and from it scientists can reasonably determine what kind of image the person is looking at. It only takes about half a second to read the brain activity associated with each image, making the EEG process much faster than traditional manual tagging.
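For a rough sense of how half-second tagging might work (this is not Microsoft’s pipeline, just an illustrative sketch with simulated signals): each image presentation yields a short EEG epoch, and a classifier decides whether the viewer’s response marks the image as belonging to a target category.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
fs = 256                                  # sampling rate in Hz (assumed)
epoch_len = 128                           # 0.5 s of samples per image

def epoch(is_target):
    """Simulate one EEG channel for a single half-second image presentation."""
    x = rng.normal(0, 5, epoch_len)       # ongoing background activity
    if is_target:
        # crude stand-in for a stimulus-evoked deflection ~300 ms after onset
        x[int(0.3 * fs):int(0.4 * fs)] += 4
    return x

labels = rng.integers(0, 2, 400)          # 400 images, about half are "targets"
X = np.array([epoch(bool(lab)) for lab in labels])

X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("held-out tagging accuracy:", clf.score(X_test, y_test))
```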
The “mind-reading” technique may be the first step towards a hybrid system of computer and human analysis for images and many other forms of data.
Whenever an image is entered into a database, it is typically tagged with labels manually by humans. This work is tedious and repetitive, so companies have to come up with interesting ways to get it done on the cheap.
Amazon’s Mechanical Turk offers very small payments to those who wish to tag images online. Google Image Labeler has turned the process into a game by pairing taggers with partners they can work with. Because EEG image tagging requires no conscious effort, workers may be able to perform other tasks during the process.
Eventually EEG readings, or the fMRI techniques that some hope to adopt for security checks, could be used to harness the brain as a valuable analytical tool. Human and computer visual processing have complementary strengths. While computers can recognize shapes and movements very well (as seen with computers learning sign language), they have a harder time categorizing objects in human terms. Brains and computers working in conjunction could one day provide rapid identification and decision making, even without conscious human effort. This could have a big impact on security surveillance and robotic warfare. Most other work we’ve seen with EEGs or “mind reading” is aimed either at discerning what someone is thinking for security purposes, or at controlling computers and machines.
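One hedged sketch of what such a hybrid could look like in code: combine the confidence of a computer-vision model with a confidence score derived from the viewer’s EEG, weighting each by an assumed reliability. The function name, weights, and scores below are purely illustrative, not from the research.

```python
def fuse(p_vision, p_eeg, w_vision=0.6, w_eeg=0.4):
    """Weighted combination of machine confidence and brain-derived confidence."""
    return w_vision * p_vision + w_eeg * p_eeg

# Example: the vision model is unsure (0.55) but the viewer's EEG response
# was strong (0.90); together they push the image over a tagging threshold.
combined = fuse(0.55, 0.90)
print(combined, combined > 0.65)
```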
The work at Microsoft Research was headed by Desney Tan and published over the past few years at IEEE (pdf) and the Computer Human Interaction Conference (pdf). The EEG image tagging process is just one of many projects that Tan and his team hope to explore in the realm of human-computer interfaces. We’ve heard from Tan before: he was one of the researchers developing muscle-sensing input devices ...
The future could see human brains and computers cooperating in new ways. We may not even have to be paying attention to work.
The work at Microsoft Research may provide a better means of tagging images. I can just picture the “tagging farms” now: vast rows of people sitting at computer screens looking at images while they work on other jobs. Yet the long-term implications are much broader.
Using Light and Genes to Probe the Brain
Optogenetics emerges as a potent tool to study the brain's inner workings
January 2010
Scientific American Magazine - In 1979 Francis Crick, famed co-discoverer of DNA’s structure, published an article in Scientific American that set out a wish list of techniques needed to fundamentally improve understanding of the way the brain processes information. High on his wish list was a method of gaining control over specific classes of neurons while, he wrote, “leaving the others more or less unaltered.”
Over the past few years Crick’s vision for targeting neurons has begun to materialize thanks to a sophisticated combination of fiber optics and genetic engineering. The advent of what is known as optogenetics has even captured popular attention because of its ability to alter animal behavior: one research group demonstrated how light piped into a mouse’s brain can drive it to turn endlessly in circles. Such feats have inspired much public comment, including a joke made by comedian Jay Leno in 2006 about the prospect of an optogenetically controlled fly pestering George W. Bush.
Controlling a subordinate or a spouse with a souped-up laser pointer may be the stuff of science-fiction dystopia and late-night humor, but in reality optogenetics has emerged as the most important new technology for providing insight into the numbingly complex circuitry of the mammalian brain. It has already furnished clues as to how neural miswiring underlies neurological and mental disorders, including Parkinson’s disease and schizophrenia.
... Deisseroth’s lab has attached laser diodes to tiny fiber-optic cables that reach the brain’s innermost structures. Along with the optical fibers, electrodes are implanted that record when neurons fire.
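As a simplified illustration of the kind of experiment described (with invented timing and spike data, not Deisseroth’s), one can generate a train of light pulses and check whether the recorded spikes follow them at a consistent short latency:

```python
import numpy as np

rng = np.random.default_rng(4)
duration = 10.0                                   # seconds of recording
pulse_hz = 20.0                                   # light pulses per second
pulse_times = np.arange(0, duration, 1.0 / pulse_hz)

# Hypothetical spike times from the electrode: some background firing plus a
# spike ~5 ms after most light pulses, as if the cell is being driven by light.
background = rng.uniform(0, duration, 50)
driven = pulse_times + 0.005 + rng.normal(0, 0.001, len(pulse_times))
spikes = np.sort(np.concatenate([background, driven]))

# Latency of each spike relative to the most recent light pulse; a sharp peak
# near 5 ms indicates light-evoked firing rather than chance.
latencies = [s - pulse_times[pulse_times <= s][-1] for s in spikes]
counts, edges = np.histogram(latencies, bins=np.arange(0, 0.05, 0.005))
for left, count in zip(edges[:-1], counts):
    print(f"{left * 1000:4.1f} ms bin: {count} spikes")
```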
... Last spring Deisseroth’s group published an optogenetics study that helped to elucidate the workings of deep-brain stimulation, which uses electrodes implanted deep in the brain to alleviate the abnormal movements of Parkinson’s disease.
The experiment called into question the leading theory of how the technology works: activation of an area called the subthalamic nucleus. Instead the electrodes appear to exert their effects on nerve fibers that reach the subthalamic nucleus from the motor cortex and perhaps other areas. The finding has already led to a better understanding of how to deploy deep-brain electrodes. Given its fine-tuned specificity, optogenetics might eventually replace deep-brain stimulation.
Although optogenetic control of human behavior may be years away, Deisseroth comments that the longer-range implications of the technology must be considered:
“I’m not writing ethics papers, but I think about these issues every day, what it might mean to gain understanding and control over what is a desire, what is a need, what is hope.”