
Saturday 15 March 2014

123. Probing the Human Brain


If we want to reverse-engineer the human brain, the most important thing to do first is to probe its structure and function with experimental tools that have the highest possible spatial and temporal resolution. In this post and the next I give a very brief historical account of the progress made towards this objective. My information is based largely on the work of Kurzweil (2005, 2012).



1. At the beginning of the 20th century, crude tools were developed for examining the physical processes inside the brain. In 1928 E. D. Adrian measured the electrical output of nerve cells, thus demonstrating that there are electrical processes occurring inside the brain. To quote Adrian: 'I had arranged electrodes on the optic nerve of a toad in connection with some experiments on the retina. The room was nearly dark and I was puzzled to hear repeated noises in the loudspeaker attached to the amplifier, noises indicating that a great deal of impulse activity was going on. It was not until I compared the noises with my own movements around the room that I realized I was in the field of vision of the toad's eye and that it was signalling what I was doing'.

As Kurzweil (2005) remarks, 'Adrian's key insight from this experiment remains a cornerstone of neuroscience today: the frequency of the impulses from the sensory nerve is proportional to the intensity of the sensory phenomena being measured. For example, the higher the intensity of light, the higher the frequency (pulses per second) of the neural impulses from the retina to the brain'.
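
To make this rate-coding idea concrete, here is a minimal sketch (my own illustration, not from Adrian or Kurzweil) that generates spike trains whose average firing rate is proportional to stimulus intensity; the gain value and the Poisson-like spiking assumption are purely illustrative.

```python
import numpy as np

def rate_coded_spikes(intensity, duration_s=1.0, dt=0.001, gain_hz=50.0, rng=None):
    """Generate a spike train whose mean firing rate is proportional to stimulus intensity.

    intensity : stimulus strength in arbitrary units (e.g. relative light level)
    gain_hz   : illustrative proportionality constant (spikes per second per unit intensity)
    """
    rng = np.random.default_rng() if rng is None else rng
    rate = gain_hz * intensity                    # firing rate proportional to intensity
    n_steps = int(duration_s / dt)
    spikes = rng.random(n_steps) < rate * dt      # Bernoulli approximation of Poisson spiking
    return spikes

# Brighter stimuli yield more spikes per second, as in Adrian's observation.
for intensity in (0.2, 0.5, 1.0):
    spikes = rate_coded_spikes(intensity, rng=np.random.default_rng(0))
    print(f"intensity {intensity}: {spikes.sum()} spikes in one second")
```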

2. Horace Barlow, a student of Adrian, provided another crucial insight, namely 'trigger features' in neurons. He discovered that the retinas of frogs and rabbits have single neurons that trigger on 'seeing' specific shapes, directions, or velocities. This meant that perception involves a series of stages, with each layer of neurons recognizing more sophisticated features of the image.

Even today, electroencephalography (EEG) is a common investigative and diagnostic tool that records the electrical activity occurring along the scalp. It measures the voltage fluctuations resulting from the ionic currents flowing within the neurons (see below). The spectral content of an EEG can reveal, for example, epileptic activity in a patient's brain. This technique can provide millisecond-level temporal resolution.
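
As a rough illustration of what 'spectral content' means here, the following sketch (my own example, using a synthetic signal rather than a real EEG recording) estimates the power in the standard EEG frequency bands from a sampled voltage trace; the sampling rate and band edges are illustrative assumptions.

```python
import numpy as np

def band_power(signal, fs, band):
    """Estimate the power of `signal` (sampled at fs Hz) within a frequency band (lo, hi) in Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    lo, hi = band
    return psd[(freqs >= lo) & (freqs < hi)].sum()

fs = 256                                    # assumed sampling rate in Hz
t = np.arange(0, 10, 1.0 / fs)
# Synthetic trace: a 10 Hz (alpha-band) rhythm plus noise, standing in for a real recording.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).standard_normal(len(t))

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
for name, (lo, hi) in bands.items():
    print(f"{name:5s} ({lo}-{hi} Hz): {band_power(eeg, fs, (lo, hi)):.1f}")
```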



3. In 1939 A. L. Hodgkin and A. F. Huxley began developing an idea of how neurons function, namely by accumulating their inputs and then producing a spike in membrane conductance: a sudden increase in the ability of the neuron's membrane to conduct a signal, with a corresponding voltage change along the axon of the neuron. Hodgkin and Huxley described this as the axon's 'action potential' (voltage). They actually measured the action potential on an animal neuron in 1952, choosing squid neurons for this because of their large size and accessibility.
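
The full Hodgkin-Huxley model tracks several ionic conductances; the much simpler 'leaky integrate-and-fire' sketch below (my own simplification, with arbitrary parameter values not fitted to squid-axon data) captures just the behaviour described above: inputs accumulate on the membrane until a threshold is crossed, at which point the neuron spikes and resets.

```python
import numpy as np

def integrate_and_fire(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                       v_thresh=-50.0, v_reset=-70.0):
    """Leaky integrate-and-fire neuron: accumulate input, spike at threshold, then reset.

    input_current : array of injected current values (arbitrary units), one per time step
    All parameters are illustrative, not fitted to real neuronal data.
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + i_in) / tau          # leak toward rest plus input drive
        v += dv * dt
        if v >= v_thresh:                          # threshold crossed: emit a spike
            spike_times.append(step * dt)
            v = v_reset                            # membrane potential resets after the spike
    return spike_times

current = np.concatenate([np.zeros(200), 20.0 * np.ones(800)])  # step input switched on at t = 20 ms
print(integrate_and_fire(current))                              # spike times in ms
```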

4. Building on the work of Hodgkin and Huxley, W. S. McCulloch and W. Pitts worked out in 1943 a simple model of neurons and neural nets. I described their model in Part 74, under the title 'Artificial Neural Networks'. This model was further refined by Hodgkin and Huxley in 1952. This very basic model for neural nets, whether in the brain or in a computer simulation, introduces the idea of a neural 'weight' which represents the 'strength' of the neural connection (synapse), and also a nonlinearity (firing threshold) in the neural cell body (the soma).
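
Here is a minimal sketch of the McCulloch-Pitts abstraction (my own rendering, not their original notation): weighted inputs are summed in the cell body and passed through a hard firing threshold, the nonlinearity mentioned above.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of the inputs reaches the threshold, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, the unit computes the logical AND of two binary inputs.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcculloch_pitts((a, b), (1, 1), threshold=2))
```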

5. As I described in Part 74, another breakthrough idea was put forward in 1949 by Donald Hebb. His theory of neural learning (the 'Hebbian response theory') said that if a synapse is stimulated repeatedly, it becomes stronger. Over time this conditioning produces a learning response. Such 'connectionist' ideas flourished during the 1950s and 1960s, and led to much research on artificial neural nets.
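
The Hebbian rule can be written as a one-line weight update: a synaptic weight grows in proportion to the coincidence of pre- and postsynaptic activity. The sketch below is my own illustration; the learning rate and activity values are arbitrary.

```python
def hebbian_update(weight, pre, post, learning_rate=0.1):
    """Strengthen a synapse in proportion to coincident pre- and postsynaptic activity."""
    return weight + learning_rate * pre * post

w = 0.1
for trial in range(5):                      # repeated co-activation strengthens the synapse
    w = hebbian_update(w, pre=1.0, post=1.0)
    print(f"trial {trial + 1}: weight = {w:.2f}")
```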



6. There is another form of Hebbian learning, namely a loop in which the excitation of a neuron feeds back on itself, causing reverberation (a continued reexcitation of the neurons in the loop). Hebb suggested that this type of reverberation could result in short-term memory: 'Let us assume that the persistence or repetition of a reverberatory activity (or 'trace') tends to induce lasting cellular changes that add to its stability. . .  When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased'. This form of Hebbian learning is well captured by the popular phrase 'cells that fire together wire together'. Brain assemblies can create new connections and strengthen them, based on their own activity. The actual development of such connections by neurons has been seen in brain scans.

In Hebb's theory the central assumption is that the basic unit of learning in the neocortex is a single neuron. But the current theory of how the brain functions, described by Kurzweil (2012) (I shall present it in a future post), is based not on the neuron itself but rather on an assembly of neurons. This basic unit of learning comprises ~100 neurons. According to Kurzweil (2012) 'the wiring and synaptic strengths within each unit are relatively stable and determined genetically . . . . Learning takes place in the creation of connections between these units, not within them, and probably in the synaptic strengths of those interunit connections'. As we shall see in the next post, experimental evidence has indeed been obtained for the existence of modules of roughly 100 neurons as the basic units of learning.

7. The connectionist movement suffered a temporary setback in 1969 when Marvin Minsky and Seymour Papert published the book Perceptrons. This book included a theorem demonstrating that the most common neural net used at that time (namely Rosenblatt's Perceptron) was unable to determine whether or not a line drawing was fully connected.
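
For context, Rosenblatt's Perceptron learns a linear decision boundary with a simple error-correcting rule. The sketch below (my own minimal version) learns the linearly separable AND function; as Minsky and Papert showed, no such single-layer perceptron can compute predicates like connectedness (or even XOR) from its raw inputs.

```python
import numpy as np

def train_perceptron(samples, labels, epochs=10, lr=0.1):
    """Rosenblatt-style perceptron: adjust the weights whenever the prediction is wrong."""
    w = np.zeros(samples.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if np.dot(w, x) + b > 0 else 0
            w += lr * (y - pred) * x          # error-correcting weight update
            b += lr * (y - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])                 # linearly separable, so the perceptron can learn it
w, b = train_perceptron(X, y_and)
print([1 if np.dot(w, x) + b > 0 else 0 for x in X])
```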

8. But the neural-net movement staged a resurgence in the 1980s when the 'back propagation' method was invented. In this method, the strength of each simulated synapse is governed by a learning algorithm that adjusts the synaptic weight (the strength of the output) of each artificial neuron after each training trial, thus enabling the net to match the right answer more closely. This type of self-organization has helped solve a whole range of pattern-recognition problems. But back propagation is not a feasible model of how training occurs in real mammalian biological neural nets.
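
The sketch below (my own minimal example, not a biologically realistic one) illustrates the idea: a tiny two-layer network adjusts its weights after each pass by propagating the output error backwards, here learning the XOR function that a single-layer perceptron cannot.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny two-layer network: 2 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr = 0.5

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through the layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Adjust each synaptic weight down the gradient of the error
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]]
```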

9. Spectacular progress continues to be made in developing experimental techniques for peering into the brain. According to Kurzweil (2005) the resolution of noninvasive brain-scanning devices has been doubling every 12 months or so (per unit volume). There is a comparable improvement in the speed of brain-scan image reconstruction.

A commonly used brain-scanning technique is fMRI (functional magnetic resonance imaging). This technique is based on the fact that cerebral blood flow and neuronal activation are coupled: when an area of the brain is in use, blood flow to that region increases. fMRI provides a spatial resolution of ~1 mm and a temporal resolution of ~1 second (or 0.1 second for a thin brain slice). It measures blood-oxygen levels, and is thus an indirect technique for recording neuronal activity. Another such indirect technique is PET (positron emission tomography), which measures regional cerebral blood flow (rCBF).

Both fMRI and PET reflect local synaptic activity, rather than the spiking of neurons. They are particularly reliable for recording the relative changes in the state of the brain, for example when a particular task is being carried out by the subject.

10. Another brain-scanning technique is MEG (magnetoencephalography). It measures the magnetic fields outside the skull, coming mainly from the pyramidal neurons of the neocortex. It can achieve millisecond-level temporal resolution, but has a very poor spatial resolution (~1 cm).

11. 'Optical imaging' is an invasive technique capable of providing high spatial and temporal resolution. It involves removing a part of the skull, staining the brain tissue with a dye that fluoresces during neural activity, and imaging the emitted light.

12. When it is feasible to destroy a brain for the purpose of scanning it, immensely high spatial resolutions become possible. It has been possible to scan the nervous system of the brain and body of a mouse with a resolution better than 200 nm.

More on probing the brain in the next post.
