On the brain

Raffaelespinoni
8 min read · Jul 16, 2022

How much do we know, and how do we know it?

Photo by Milad Fakurian on Unsplash

We can identify three main areas of research on the brain:

  • Connectomics: creating a map of brain connections, the so-called “connectome”.
  • Cartography of brain activity and of the information exchanged.
  • Simulation of the whole brain in a virtual environment, an ambitious plan pursued by several research projects (BRAIN Initiative, Human Brain Project, …).

Understanding the most complex organ of our body is a means of fighting neurodegenerative diseases such as Parkinson’s or Alzheimer’s. It is also an opportunity to understand our mental activity, and perhaps even to increase our cognitive abilities.

A bit of history

The study of the brain was long held back because nobody could see clearly how it was made. That changed in 1873, when Camillo Golgi invented a way of staining brain cells that allowed scientists to see the structure of the nervous system. This made it clear that the brain is not a uniform tissue like other parts of the body, but is composed of distinct cells in close contact with one another.

Drawing by Golgi

At the beginning of the 20th century, the development of electronic instruments such as the oscilloscope made it possible to study the signals exchanged between neurons. Scientists discovered that neurons communicate through electrical impulses carried by ions rather than electrons, since the signal travels far too slowly compared with the speed of light. Further study clarified how the signal propagates within a single neuron: through the depolarization and repolarization of successive parts of the neuron’s membrane, charges are pushed along the axon. The electrical signal is all-or-nothing; what varies is not its amplitude but its frequency. The intensity of what we feel corresponds not to a stronger impulse but to a higher firing rate.

How the signal passes between different neurons was still under debate: some claimed that the discontinuity between neurons made a direct electrical connection impossible, while others argued that the speed of reflexes (like the knee-jerk reflex) was too high to be explained by a discontinuous mechanism.
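This all-or-nothing, frequency-coded behavior can be sketched in a few lines of code. The snippet below is a toy rate-coding model, not a biophysical simulation; the function name, rates, and durations are illustrative assumptions:

```python
import random

def spike_train(stimulus_intensity, duration_ms=1000, max_rate_hz=100):
    """Toy rate coding: a stronger stimulus does not produce BIGGER spikes,
    it produces MORE spikes. Every spike is identical (all-or-nothing)."""
    rate_hz = max_rate_hz * min(stimulus_intensity, 1.0)  # firing frequency
    p_spike_per_ms = rate_hz / 1000.0  # chance of a spike in each millisecond
    return [1 if random.random() < p_spike_per_ms else 0
            for _ in range(duration_ms)]

weak = spike_train(0.2)
strong = spike_train(0.9)
# The spikes themselves are indistinguishable (always amplitude 1);
# only their count per second differs between weak and strong stimuli.
print(sum(weak), sum(strong))
```

On average the strong stimulus yields several times more spikes per second than the weak one, which is how the intensity of a sensation is conveyed.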

The German physiologist Otto Loewi settled the debate by discovering neurotransmitters at the junctions between neurons, thanks to experiments on frogs’ hearts. We know today that the brain uses many different neurotransmitters, which constitute the alphabet of the neural code.

Different neurotransmitters and their structures

How, then, do we explain the speed of the reflexes? Since the signal must cross a chemical synapse between every pair of cells, automatic reflexes should be slower than they are. The explanation is the presence of multiple, redundant pathways in the nervous system, some of which are more direct and have evolved to respond rapidly to external stimuli.

In the same century, the work of David Hubel and Torsten Wiesel shed light on how vision works. Knowing that the signal produced in the retina and carried by the optic nerve has a frequency that depends on the combination of light hitting the photoreceptors, they studied the path those signals follow. They found that some neurons were activated only when specific areas of the visual field were illuminated, and that the activation of certain neurons depended on the movement of the light source. They uncovered a hierarchical structure of neurons responding to signals of increasing complexity. This reminds me a lot of the features learned by Convolutional Neural Networks. Their work also showed how widely distributed a single mechanism can be in the brain, and that the brain is not made of isolated boxes, each responsible for a single aspect of its activity. Moreover, we now know that neural circuits change over time, and that this plasticity is essential for learning and memory.
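The parallel with CNN features can be made concrete with a toy “simple cell”: a neuron whose activation is the correlation between a fixed filter and one patch of the visual field. Everything here (filter, patches, values) is illustrative, not a model of real retinal input:

```python
def response(patch, kernel):
    """Activation of one 'neuron': correlation of its filter with an image patch."""
    return sum(p * k for row_p, row_k in zip(patch, kernel)
                     for p, k in zip(row_p, row_k))

# A filter that fires for vertical dark-to-light edges, analogous to the
# orientation-selective cells Hubel and Wiesel found, and to the filters
# in the first layer of a Convolutional Neural Network.
vertical_edge = [[-1, 1],
                 [-1, 1]]

edge_patch    = [[0, 1],
                 [0, 1]]   # a vertical edge: strong activation
uniform_patch = [[1, 1],
                 [1, 1]]   # uniform illumination: no activation

print(response(edge_patch, vertical_edge))     # 2
print(response(uniform_patch, vertical_edge))  # 0
```

Stacking layers of such detectors, each reading the outputs of the previous layer, yields exactly the kind of hierarchy of increasingly complex responses that Hubel and Wiesel described.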

The neurobiologist Eric Kandel contributed substantially to the study of memory, earning the Nobel Prize in 2000. He showed that there are two types of memory, short-term and long-term, distinguished by how the brain stores information: short-term memory works through changes in the synapses between neurons, while long-term memory produces a more substantial change inside the neuron itself, engaging the DNA to synthesize new proteins and modify the neuron’s structure.

The Cartography of the brain

During the 1990s, the development of Magnetic Resonance Imaging (MRI) made it possible to see the brain’s activity in real time. For the first time, scientists could observe the brain’s structure and activity non-invasively, on a scale of millimeters and seconds. Meanwhile, the electron microscope allowed scientists to see on scales smaller than ever before. These tools allowed researchers to analyze and reconstruct the whole connectome of Caenorhabditis elegans, a worm whose nervous system contains around 300 neurons and 7,000 connections (numbers on a completely different scale from the human brain). Even so, the task took ten years of work, done by cutting the worm into slices a few microns thick and analyzing them with an electron microscope. This approach is definitely not suited to the human brain, where a single cubic millimeter contains on the order of 100,000,000 synapses. It is possible to automate the task with advanced Artificial Intelligence in order to identify the synapses, but identifying the correct type of each neuron and synapse remains a challenge.

During the second half of the 1990s, researchers created an advanced multi-electrode system, the multielectrode array (MEA): a device with hundreds of electrodes able to register the activity of a single neuron. Thanks to this technique we discovered the mechanism of mirror neurons, which activate both when an individual performs an action and when they watch somebody else perform the same action.
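A connectome is essentially a directed graph: neurons as nodes, synapses as weighted edges. A toy fragment makes the counting concrete (the neuron names and weights below are made up; the real C. elegans connectome has roughly 300 neurons and 7,000 connections):

```python
# Adjacency-map representation of a tiny, made-up connectome fragment.
# Each edge weight counts the synapses between a pair of neurons.
connectome = {
    "sensory_1": {"inter_1": 3, "inter_2": 1},
    "inter_1":   {"motor_1": 5},
    "inter_2":   {"motor_1": 2, "inter_1": 1},
    "motor_1":   {},
}

n_neurons = len(connectome)
n_connections = sum(len(targets) for targets in connectome.values())
n_synapses = sum(w for targets in connectome.values() for w in targets.values())

print(n_neurons, n_connections, n_synapses)  # 4 5 12
```

Reconstructing a connectome means recovering exactly this structure, node by node and edge by edge, from microscope images of tissue slices; the difficulty scales with the number of edges, which is why the human brain is out of reach for now.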

MEA

Thanks to the MEA we can track neural activity down to a single neuron, but only in a small region of the brain; MRI, on the other hand, tracks the whole brain’s activity with far less precision. At the current state of research there is still a gap between these two levels: what is missing for a complete cartography of the brain is a new technique able to connect, or incorporate, the local and the global points of view.

Neurotechnological Revolution

What does the future hold for us?

Some futurists and historians claim that in the future we will be able to manipulate and augment human capabilities through close interaction between computers and human brains. Is this Sci-Fi?

Photo by Possessed Photography on Unsplash

The first medical approaches that tried to stimulate the brain with electricity were introduced in the 20th century under the name of electrotherapy. The technique has been refined over the years, leading to ever more targeted and precise electrical signals. Nowadays the same kind of stimulation is performed with two techniques: Transcranial Electrical Stimulation (tES) and Transcranial Magnetic Stimulation (TMS), the latter inducing electric fields indirectly through a magnetic field. These techniques can overstimulate some regions while reducing the activity of others; applied frequently enough, they produce persistent results thanks to the plasticity of the brain. Their main problem is that we don’t fully understand the effects they deliver.

Transcranial Magnetic Stimulation (magnetic field in red, electric field in green).

The main limitation of these techniques is that the only areas they can target lie on the cortex; to reach deeper regions of the brain, we need to enter the organ invasively. Deep Brain Stimulation (DBS) works through a neurostimulator implanted in the patient’s brain; it is used today to treat patients affected by Parkinson’s disease and is able to reach deeper regions of the organ.

These processes are in any case too coarse to excite single, specific neurons; for such a precise objective we need another technique: optogenetics. The general idea behind optogenetics came from Francis Crick, who suggested that specific neurons could be excited with light: since neurons are generally not sensitive to light, one would just need to make some of them photosensitive. Over the last decade, researchers have found a way to do exactly that, using a virus that produces the desired effect in the infected neurons. In a popular experiment, MIT researchers used optogenetics to activate and deactivate the regions of a mouse’s brain responsible for memory, leading the animal to hurt itself repeatedly even when no light was present.

There exist today computers able to partially supply the brain with the signals it needs in order to compensate for diseases or lesions; some people have regained the ability to see, hear, or walk thanks to these devices. These interactions between brain and machine are still experimental, mainly because scientists do not yet know precisely which neurons to target, or how. It is not hard to imagine a future in which the interaction between brain and computer is far deeper and faster. Once we accept that the complete set of psychological activities can be detected and modified by a computer, we can imagine endless scenarios and endless possibilities: augmented human beings interacting with machines so closely that the two could no longer be considered distinct entities.


Raffaelespinoni

Hi, I’m a fullstack dev. I enjoy reading, and I’m here on Medium to share my readings with you!