A Brain Scan One Step Closer To Reading Minds
Our brains play a large part in making us who we are. We have neurons that interact with the outside world and neurons that interact within our bodies. These neurons fire action potentials, and those action potentials underlie our thoughts. We interact with the world around us by thinking, speaking, and acting. Our thought processes seem quite distinct from one another’s: no two people appear to think exactly alike, nor can most people (setting aside the claims of seers and the like) flawlessly read others’ thoughts. New, controversial technology has now done what seemed impossible: predict people’s intentions. It is scientific mind reading, with a scan to show the results. As large a step as this is toward advancing knowledge of the brain, it is important to consider just how complete this study really is, and how dangerous it could potentially be.
John-Dylan Haynes, of the Max Planck Institute for Human Cognitive and Brain Sciences in Germany, led the study on reading intentions with a brain scan, together with colleagues at University College London and Oxford University (1). First, the volunteers in this study were scanned using a technique called functional magnetic resonance imaging. The researchers then asked the volunteers to decide whether to add or subtract two numbers shown on a screen. With 70% accuracy, the analysis was able to “predict” whether a participant intended to add or to subtract. The researchers found signatures of activity in a marble-sized part of the brain called the medial prefrontal cortex that differed between the intention to add and the intention to subtract.
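The decoding step described above can be sketched, very loosely, as pattern classification: a classifier is trained on brain-activity patterns labeled “add” or “subtract” and scored on held-out trials. The snippet below is a toy illustration using synthetic numbers as a stand-in for fMRI voxel data; the trial counts, effect sizes, and the nearest-centroid approach are all my assumptions, not the study’s actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for fMRI voxel patterns: 20 "add" trials and
# 20 "subtract" trials, 50 voxels each. A handful of voxels carry a
# weak class difference, mimicking a distributed intention signature.
n_per_class, n_voxels = 20, 50
signal = np.zeros(n_voxels)
signal[:5] = 0.8  # only the first 5 voxels are informative (assumed)

X_add = rng.normal(0.0, 1.0, (n_per_class, n_voxels)) + signal
X_sub = rng.normal(0.0, 1.0, (n_per_class, n_voxels))
X = np.vstack([X_add, X_sub])
y = np.array([1] * n_per_class + [0] * n_per_class)

def loo_nearest_centroid(X, y):
    """Leave-one-out accuracy of a nearest-centroid pattern classifier."""
    correct = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i
        c1 = X[train & (y == 1)].mean(axis=0)  # mean "add" pattern
        c0 = X[train & (y == 0)].mean(axis=0)  # mean "subtract" pattern
        pred = 1 if np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0) else 0
        correct += int(pred == y[i])
    return correct / len(y)

acc = loo_nearest_centroid(X, y)
print(f"decoding accuracy: {acc:.0%}")
```

Even a crude setup like this tends to land above the 50% chance level but well short of perfection, which is the regime the reported 70% sits in.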
This study has several holes. The findings do not seem to be as large a breakthrough as the media claims. The brain is being mapped and taken apart little by little, and we are slowly learning which part is responsible for what, but if this counts as reading intentions, then so should many things that have already been discovered. A link has been suggested between increased activity in the amygdala and decreased trustworthiness (2). Does this mean that if someone the person being scanned didn’t trust came into the room, and the amygdala lit up on the scan, the scan had just predicted an intention? Or is a lack of trust not the kind of intention the researchers were referring to? Perhaps the researchers meant a kind of active intention, an action to be carried out, distinguished from feelings that may merely have led to an action.
Defining an intention seems to warrant more than a look in a dictionary. Direct definitions of the word “intention” range from “a determination to act in a certain way” and “an intended goal” to a kind of meaning or significance (3). If I decide that I want to get up and get food, my intention is exactly that: to stand up, walk over to the refrigerator, open it, and get food. But am I leaving something out? Is the fact that I’m hungry, or bored, part of the intention, maybe even the beginning of it? Does my intention need an emotional or physical reason to begin, or is the intention simply the end point, the last thing I want to do, the thing I’m focused on? I would think the reason I want to get up and get food in the first place matters a great deal, for had it not been for that initial craving, I would never have started the process. If someone I don’t trust is about to ask me a question, and the brain scan shows heavy activity in my amygdala, is it predicting my intention not to tell that person the whole truth, by responding to the activity produced by my initial feeling? Perhaps, and if so, Haynes’ study doesn’t seem to be breaking as many boundaries as he may think.
It might be more useful if Haynes looked to discover what made a person choose whether to add or subtract: whether there was some biological reason, perhaps one a brain scan could still pick up, why some people chose one way over the other. Between these two choices, maybe there was a pattern, something in these participants’ brains, something from their past, preparing them for the moment they were asked to add or subtract, an ultimate decision built into their brains. Perhaps Haynes’ study captures only the second part of an intention, if feelings or emotions come before the actual action or the direct decision to act. As in the earlier example, if a person I didn’t trust came into the room and I thought to myself, “I don’t trust this person, and therefore will not tell the whole truth,” it seems that this is the point where Haynes found activity in the brain, rather than at the initial feeling of distrust. Digging deeper, toward the initial thoughts and feelings of the participants before they made their decision, which may well have been a mass of unconscious thinking, could make huge advances in this study and strengthen the argument for mind reading and intentions.
The 70% accuracy is also a potential problem, as is the need to “hone the technique to distinguish between passing thoughts and genuine intentions” (1), briefly mentioned in the last sentence of the article. If this scan is ever to be used as a kind of lie detector, or something of that nature, 70% accuracy isn’t nearly good enough, and distinguishing passing thoughts from true intentions is a tremendous problem. If asked at a trial where we were on a certain night, we would probably think to ourselves, “where was I the night of the crime?”, and those passing thoughts might suggest, on a scan, that we somehow had something to do with the crime. The 70% accuracy may be largely due to this confusion between passing thoughts and intentions, and that raises an interesting question. If the scan misread roughly 30% of participants because passing thoughts threw it off, does that mean the other 70% chose quickly, even instinctively, whether to add or subtract? Or could it mean the scan read nothing useful in close to 60% of participants, on whom a guess between the two choices would be right only half the time, while it read true intentions in the remaining 40% (since 0.4 × 100% + 0.6 × 50% = 70%)? If that were the case, intention correctly read by the scan could account for well under half of the participants.
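The back-of-envelope reasoning above can be made explicit. Assume a simple mixture model (my assumption, not anything the researchers report): some fraction of trials is decoded perfectly and the rest are 50/50 guesses between the two choices.

```python
# Toy mixture model (an assumption, not the study's analysis):
# a fraction p of trials is read perfectly, the rest are coin flips.

def overall_accuracy(p_decoded: float) -> float:
    """Accuracy when a fraction p_decoded is decoded perfectly and the
    remainder is a 50/50 guess between the two choices."""
    return p_decoded + (1.0 - p_decoded) * 0.5

def decoded_fraction(accuracy: float) -> float:
    """Invert the model: what fraction must be truly decoded to
    produce a given overall two-choice accuracy?"""
    return 2.0 * accuracy - 1.0

# If 40% of trials are truly decoded and 60% are guessed,
# overall accuracy works out to 70%; inverting 70% gives back 40%.
print(overall_accuracy(0.4))
print(decoded_fraction(0.7))
```

Under this model, a reported 70% means the scan may have genuinely decoded only about 40% of trials, which is the point the paragraph above is making.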
The ethical considerations of this study are important as well. If these studies progress, if we eventually can read minds, intentions, and thoughts, there is a danger involved. If scanners fall into the hands of criminals, and our security personnel are captured by them, every secret and every fact is at the criminals’ disposal. Crime aside, being able to read other people’s minds seems a bit surreal, and maybe that isn’t something we should be exploring. As fascinating as it would be, as much as we could learn, heal medically, and advance educationally, it would make our lives public. Privacy would become a thing of the past; we would all become, more or less, the same. Our thoughts would be transmitted by machines, making us, effectively, redundant. Our brains could run the world, raising the question of whether there is more to us than just brains, whether there is a soul, or something more, that no machine could “figure out.” It is a hard problem: knowing how far we can go to advance and learn, but also when to step back and admit that maybe this isn’t somewhere we should be going.
As far as the study goes, it is controversial, but quite exciting, especially to the science world. It is currently the leading study claiming to read intentions with a brain scan, and much is being explored and built on top of it. The scientists are seeing, and telling the public, all the benefits this sort of work could bring, from potentially stopping criminals before they commit crimes to controlling artificial limbs with thoughts, and the world is responding with a great deal of interest and attention. The ethical problems offer some discouragement to advancing this research, but at the moment support seems to lean toward all the good these remarkable technological advances could bring. Many neurobiologists express interest in the study, in seeing how much we can find out about our brains as they are taken apart little by little, focusing more on the present than on the ethical dilemmas the future could bring.
If our thoughts could be predicted, if a machine could read our minds, our world would be a very different place. It would be interesting, since half the time I barely know what I am thinking myself, let alone could a machine read my thoughts in a clear and translatable form. As exciting as it is that technology is advancing, I still think we have a long, long way to go. Reading thoughts is probably a lot harder, and a lot further off, than anyone wants to say. Having thoughts translated by a machine from one person to the next may even be impossible, simply because of how different we all are: one thought may mean one thing to one person and something completely different to the next. It is also dangerous, since our minds are the ultimate storage compartment for our past and present, and having anyone besides ourselves able to see into that is a scary thought. For now, though, while actual mind reading, beyond an adding or subtracting choice, remains fairly far in the future, we can marvel at how far we have come. We can proceed with caution, weighing the benefits and risks of mind reading and brain research, and figuring out just how far we can, and should, go in learning what our brains do and how much we should know about ourselves and others’ thoughts.