This is a guest blog post by Professor Tim Bayne, University of Manchester. Tim is the Principal Investigator for AHRC Science in Culture Theme Research Network ‘Mind Reading: from Functional Neuro-imaging to the Philosophy of the Mind’. Tim is the author of Thought: A Very Short Introduction (OUP, 2013).

The prospect of being able to identify a person’s thoughts by interrogating their brain activity has long fascinated philosophers and scientists. Writing in the 18th century, Gottfried Leibniz wondered whether it would be possible to tell what a person was thinking by enlarging their brain and inspecting its workings as one might inspect the workings of a mill. Leibniz thought it obvious that no such deciphering of thought would be possible, and concluded that thought should not be identified with neural activity, ‘however organized it may be’.


Advances in neuroscience have given a new twist to ‘Leibniz’s mill’, for neuroscientists have developed methods that enable them to decode a person’s thoughts (albeit in a highly limited manner) by measuring their neural activity. Rather than enlarging the brain in the manner of Leibniz’s mill, these methods, known as multi-voxel pattern analysis (MVPA), involve the use of algorithms that track the correlations between patterns of brain activity on the one hand and mental states on the other. At Berkeley, Jack Gallant and his colleagues have used MVPA to decode the visual experiences of individuals. In one study, neuroimaging data were used to construct a recognizable film clip of the stimuli that subjects were viewing. At the Bernstein Centre for Computational Neuroscience in Berlin, John-Dylan Haynes and colleagues presented their subjects with two numbers and told them to decide whether to add or subtract them. Using MVPA, Haynes and colleagues were able to identify their subjects’ intentions with up to 70% accuracy.
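The core idea behind pattern-based decoding can be illustrated with a toy sketch. This is emphatically not the pipeline Gallant or Haynes actually used (real MVPA works on thousands of voxels from fMRI scans and typically uses machine-learning classifiers); it is a minimal, self-contained illustration, with made-up "voxel patterns", of how a decoder learns the activity pattern associated with each mental state and then labels new activity by its resemblance to those learned patterns:

```python
import random

random.seed(0)

# Hypothetical setup: each "scan" is a list of six voxel activations.
# Two mental states ("add" vs "subtract") are assumed to have slightly
# different characteristic activity patterns, plus random noise.
PATTERN_A = [1.0, 0.8, 0.2, 0.1, 0.5, 0.3]   # "add"
PATTERN_B = [0.2, 0.3, 0.9, 1.0, 0.4, 0.6]   # "subtract"

def simulate_scan(pattern, noise=0.3):
    """Produce one noisy observation of an underlying pattern."""
    return [v + random.gauss(0, noise) for v in pattern]

# Labelled training trials, as if recorded while the subject's
# intention was known.
train = ([(simulate_scan(PATTERN_A), "add") for _ in range(50)] +
         [(simulate_scan(PATTERN_B), "subtract") for _ in range(50)])

def centroid(scans):
    """Average activity pattern across a set of scans."""
    n = len(scans)
    return [sum(vals) / n for vals in zip(*scans)]

# "Training" here is just averaging: one prototype pattern per state.
centroids = {
    label: centroid([s for s, l in train if l == label])
    for label in ("add", "subtract")
}

def decode(scan):
    """Label a new scan by its nearest prototype (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(scan, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Evaluate on fresh simulated trials the decoder has never seen.
test = ([(simulate_scan(PATTERN_A), "add") for _ in range(50)] +
        [(simulate_scan(PATTERN_B), "subtract") for _ in range(50)])
accuracy = sum(decode(s) == l for s, l in test) / len(test)
print(f"decoding accuracy: {accuracy:.0%}")
```

The sketch also makes the decoder's key limitation vivid: it can only ever choose between the states it was trained on, which is precisely why constraining subjects to "add or subtract" makes decoding tractable.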

Although these results are striking, it is important to recognize that we do not yet have a “mind-reading device.” For one thing, current methods often fail to generalize from one individual to another, and the data that would enable someone to decode my mental states may not enable them to decode your mental states. Furthermore, the capacity to decode thoughts with any degree of precision requires artificially constraining the subjects’ range of mental states. Haynes and colleagues were able to identify the intentions of their subjects only because they had told them to either add the presented numbers together or subtract one number from the other.


Using neuroimaging to probe mental states in a vegetative state patient. Images courtesy of Lorina Naci and Adrian Owen (University of Western Ontario)

But although our mind-reading capacities are currently rudimentary, they are likely to improve. The prospect of such advances raises many philosophical questions. Some of those questions are ethical. Do we as a society want the capacity to read the thoughts of others? As with many scientific advances, advances in brain decoding techniques will bring with them the potential for both benefits and harms. Some of the benefits are already beginning to be realized. For example, the neuroscientist Adrian Owen and his colleagues have used brain-decoding techniques to identify signs of mental activity in a small number of “vegetative state” patients; in one instance, these techniques allowed a patient to answer a number of questions that had been put to him. But the capacity to decode thoughts also threatens to undermine certain values that we cherish. At present, those who desire to know what we’re thinking must rely on the clues provided by our behaviour, such as what we say, where we look, and how we move. Although such clues can reveal a great deal about a person’s mental life, their reach is limited, and we are typically able to keep many of our thoughts to ourselves. The development of an all-purpose brain decoder could undermine that capacity.

Whether or not we should be worried about this threat depends on whether our rudimentary capacity to decode mental states will ‘scale up’. Will we be able to decode mental states in real-world contexts, where the range of thoughts that subjects can entertain (“I wish it would stop snowing”, “Did I leave the oven on?”, “Oh – so that’s what she meant!”) is unconstrained? Will we be able to identify correlations between neural activity and mental states that apply not just to specific individuals but to populations of individuals? And will we be able to develop ways of brain reading that enable us to identify novel thoughts—thoughts that are not already included in our database of correlations?

My AHRC project, “Mindreading: From Functional Neuroimaging to the Philosophy of Mind”, aims to answer some of these questions. Together with Colin Klein (Macquarie University) and Michael Anderson (Franklin & Marshall), I am exploring ways in which philosophical accounts of the nature of thought might inform—and in turn be informed by—developments in the science of mental state decoding. Although today’s brain-scanners don’t look much like the mill that Leibniz imagined, he would have felt entirely at home with the philosophical questions that their use raises.

This is one of a series of guest blog posts written by AHRC Science in Culture Theme Award Holders. The Science in Culture Theme is a key area of AHRC Funding and supports projects committed to developing reciprocal relationships between scientists and arts and humanities researchers. More information about Professor Tim Bayne’s Research Network ‘Mind Reading: from Functional Neuro-imaging to the Philosophy of the Mind’ is available here.

Follow us on Twitter at @AHRCSciCulture for updates from the theme.