Memory capacity of the human brain in bytes?

Is there an estimate out there for how much memory (in bytes) an average human brain can have? Is there a physical limit?


'Bombshell' Study Shows Our Brains Are Even More Awesome Than We Knew

In a finding they're calling a "real bombshell in the field of neuroscience," researchers have uncovered evidence that the human brain's memory capacity is an order of magnitude greater than previously thought.

"Our new measurements of the brain's memory capacity increase conservative estimates by a factor of 10 to at least a petabyte," Dr. Terry Sejnowski, a professor at the Salk Institute in La Jolla, California, and co-senior author of a paper describing the research, said in a written statement.

In other words, the human brain may be able to store one petabyte of data, which is 1 quadrillion bytes. That's enough memory to store 13.3 years of high-definition video.
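As a quick sanity check (not part of the original reporting), the video comparison only holds for an assumed bitrate; the sketch below uses roughly 19 megabits per second, about the quality of broadcast or Blu-ray HD video, and lands in the same ballpark as the quoted 13.3 years.

```python
# Rough sanity check of the "13.3 years of HD video" comparison.
# Assumption (not from the article): HD video at ~19 Mbit/s.
PETABYTE_BYTES = 1e15          # 1 quadrillion bytes
HD_BITRATE_BPS = 19e6          # assumed bits per second

seconds_of_video = PETABYTE_BYTES * 8 / HD_BITRATE_BPS
years_of_video = seconds_of_video / (60 * 60 * 24 * 365)
print(f"{years_of_video:.1f} years of HD video")  # roughly 13 years, close to the quoted 13.3
```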

The finding, published recently in the journal eLife, is considered preliminary and must be confirmed by future research. But it constitutes a significant advance in our understanding of neuroanatomy and could prove to be a step toward the creation of a complete "wiring diagram" of the human brain, Sejnowski told The Huffington Post.

In addition, the finding could point the way to a new generation of computers that combine enormous processing power with low energy consumption. Such "probabilistic" computing devices -- so called because, like synapses in the brain, they transmit and process information probabilistically rather than deterministically -- are considered a game-changer for applications ranging from translation to machine vision.

Sejnowski and his collaborators at Salk and the University of Texas at Austin made the discovery as part of a detailed anatomical examination and subsequent 3D computer reconstruction of the cells within a tiny portion of tissue from the brain of a rat.

The reconstruction showed that the variation in the sizes of the synapses within the sample -- the tiny gaps between brain cells that are known to be key to memory formation and storage -- was far smaller than previous research had suggested. In fact, the synapses varied in size by only about 8 percent. (Synapses in the rat brain are believed to be similar to those in the human brain.)

"No one thought it would be such a small difference," Dr. Tom Bartol, a staff scientist at the institute and one of the researchers, said in the statement. "This was a curveball from nature."

When the researchers plugged the 8-percent figure into their computer model of the brain, they determined that there must be more than two dozen discrete sizes of synapse rather than just a few. That bigger number, in turn, meant that the synapses must be able to store far more information than anyone knew.

Having more "bits" per synapse is a little like a high-definition TV having more bits per pixel than a conventional TV, Sejnowski said, adding that, "We think the brain is high-resolution now."

Or, offering up another metaphor, he said, "We have to think of the brain not as an old grandfather clock but as a high-precision watch."

Some scientists think the brain's true memory capacity may be greater still -- as much as 3 to 5 petabytes, Dr. Paul Reber, director of the Brain, Behavior, & Cognition program in the psychology department at Northwestern University, told the San Diego Union-Tribune.


The Human Brain's Memory Could Store the Entire Internet

The human brain may be able to hold as much information in its memory as is contained on the entire Internet, new research suggests.

Researchers discovered that, unlike a classical computer that codes information as 0s and 1s, a brain cell uses 26 different ways to code its "bits." They calculated that the brain could store 1 petabyte (or a quadrillion bytes) of information.

"This is a real bombshell in the field of neuroscience," Terry Sejnowski, a biologist at the Salk Institute in La Jolla, California, said in a statement. "Our new measurements of the brain&rsquos memory capacity increase conservative estimates by a factor of 10."

Amazing computer

What's more, the human brain can store this mind-boggling amount of information while sipping just enough power to run a dim light bulb.

By contrast, a computer with the same memory and processing power would require 1 gigawatt of power, or "basically a whole nuclear power station to run one computer that does what our 'computer' does with 20 watts," said study co-author Tom Bartol, a neuroscientist at the Salk Institute.
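Taken at face value, the quoted figures reduce to a simple ratio, which is where the "50 million times" comparison later in this article comes from; the numbers below are just the ones in the quote.

```python
# The quoted comparison as a ratio: a 1-gigawatt computer vs. a 20-watt brain.
computer_power_w = 1e9   # 1 gigawatt, per the quote
brain_power_w = 20       # ~20 watts, per the quote

ratio = computer_power_w / brain_power_w
print(f"Computer uses {ratio:.0e}x the brain's power")  # 5e+07, i.e. ~50 million times
```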

In particular, the team wanted to take a closer look at the hippocampus, a brain region that plays a key role in learning and short-term memory.

To untangle the mysteries of the mind, the research team took a teensy slice of a rat's hippocampus, placed it in embalming fluid, then sliced it thinly with an extremely sharp diamond knife, a process akin to "slicing an orange," Bartol said. (Though a rat's brain is not identical to a human brain, the basic anatomical features and function of synapses are very similar across all mammals.) The team then embedded the thin tissue into plastic, looked at it under a microscope and created digital images.

Next, researchers spent one year tracing, with pen and paper, every type of cell they saw. After all that effort, the team had traced all the cells in the sample, a staggeringly tiny volume of tissue.

"You could fit 20 of these samples across the width of a single human hair," Bartol told Live Science.

Size distribution

Next, the team counted up all the complete neurons, or brain cells, in the tissue, which totaled 450. Of that number, 287 had the complete structures the researchers were interested in.

Neurons look a bit like swollen, misshapen balloons, with long tendrils called axons and dendrites snaking out from the cell body. Axons act as the brain cell's output wire, sending out a flurry of molecules called neurotransmitters, while tiny spines on dendrites receive the chemical messages sent by the axon across a narrow gap, called the synapse. (The specific spot on the dendrite at which these chemical messages are transmitted across the synapse is called the dendritic spine.) The receiving brain cell can then fire out its own cache of neurotransmitters to relay that message to other neurons, though most often, it does nothing in response.

Past work had shown that the biggest synapses dwarf the smallest ones by a factor of 60. That size difference reflects the strength of the underlying connection -- while the average neuron relays incoming signals about 20 percent of the time, that percentage can increase over time. The more a brain circuit gets a workout (that is, the more one network of neurons is activated), the higher the odds are that one neuron in that circuit will fire when another sends it a signal. The process of strengthening these neural networks seems to enlarge the physical point of contact at the synapses, increasing the amount of neurotransmitters they can release, Bartol said.

If neurons are essentially chattering to each other across a synapse, then a brain cell communicating across a bigger synapse has a louder voice than one communicating across a smaller synapse, Bartol said.

But scientists hadn't understood much about how many distinct sizes of synapses there were or how they change in response to signals.

Then Bartol, Sejnowski and their colleagues noticed something funny in their hippocampal slice. About 10 percent of the time, a single axon snaked out and connected to the same dendrite at two different dendritic spines. These oddball axons were sending exactly the same input to each of the spots on the dendrite, yet the sizes of the synapses, where axons "talk" to dendrites, varied by an average of 8 percent. That meant that the natural variability in how much a given message alters the underlying synapse is about 8 percent.

So the team then asked: If synapses can differ in size by a factor of 60, and the size of a synapse varies by about 8 percent due to pure chance, how many different types of synaptic sizes could fit within that size range and be detected as different by the brain?

By combining that data with signal-detection theory, which dictates how different two signals must be before the brain can detect a difference between them, the researchers found that synapses could come in 26 different size ranges. This, in essence, revealed how many different volumes of "voices" neurons use to chatter with each other. Previously, researchers thought that synapses came in just a few sizes.

From there, they could calculate exactly how much information could be transmitted between any two neurons. Computers store data as bits, which can have two potential values -- 0 or 1. But that binary message from a neuron (to fire or not) can produce 26 different sizes of synapse. So they used basic information theory to calculate just how many bits of data each synapse can hold.

"To convert the number 26 into units of bits we simply say 2 raised to the n power equals 26 and solve for n. In this case n equals 4.7 bits," Bartol said.

That storage capacity translates to about 10 times what was previously believed, the researchers reported online in the journal eLife.

Incredibly efficient

The new findings also shed light on how the brain stores information while using so little energy. The fact that most neurons don't fire in response to incoming signals, yet the brain is highly precise in translating those signals into physical structures, explains in part why the brain is more efficient than a computer: most of its heavy lifters are not doing anything most of the time.

However, even if the average brain cell is inactive 80 percent of the time, that still doesn't explain why a computer requires 50 million times more energy to do the same tasks as a human brain.

"The other part of the story might have to do with how biochemistry works compared to how electrons work in a computer. Computers are using electrons to do the calculations and electrons flowing in a wire make a lot of heat, and that heat is wasted energy," Bartol said. Biochemical pathways may simply be much more efficient, he added.


How much can our brain store? GB..TB… or more?

According to Paul Reber, professor of psychology, Northwestern University, the human brain consists of about one billion neurons. Each neuron forms about 1,000 connections to other neurons, amounting to more than a trillion connections. If each neuron could only help store a single memory, running out of space would be a problem. You might have only a few gigabytes of storage space, similar to the space in an iPod or a USB flash drive. Yet neurons combine so that each one helps with many memories at a time, exponentially increasing the brain’s memory storage capacity to something closer to around 2.5 petabytes (2.5 million gigabytes). For comparison, if your brain worked like a digital video recorder in a television, 2.5 petabytes would be enough to hold three million hours of TV shows. You would have to leave the TV running continuously for more than 300 years to use up all that storage.
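For readers who want to check the arithmetic behind Reber's comparison, the sketch below reruns it with the numbers quoted above; the implied per-hour video size is an inference for illustration, not something Reber states.

```python
# Internal arithmetic of Reber's estimate (numbers from the quote above).
neurons = 1e9                  # "about one billion neurons"
connections_per_neuron = 1000  # "about 1,000 connections to other neurons"
total_connections = neurons * connections_per_neuron
print(f"{total_connections:.0e} connections")        # 1e+12, "more than a trillion"

capacity_bytes = 2.5e15        # 2.5 petabytes
tv_hours = 3e6                 # "three million hours of TV shows"
years_of_tv = tv_hours / (24 * 365)
implied_mb_per_hour = capacity_bytes / tv_hours / 1e6
print(f"{years_of_tv:.0f} years of continuous TV")   # ~342 years ("more than 300 years")
print(f"~{implied_mb_per_hour:.0f} MB per hour of video implied")  # ~833 MB/hour
```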

Hollywood by Stephen Wiltshire

The human brain has always been one of the most intriguing mysteries on earth. Meet Stephen Wiltshire, also known as the human camera. When he was 11, he drew a perfect aerial view of London after a helicopter ride.

Watch this living camera in action in this amazing video, where he draws a panoramic view of the city of Rome from memory after a single hour-long helicopter ride. Prodigies such as Stephen have been given several names throughout history. But it is only today that scientists are beginning to be able to watch the brain as it thinks, and to unravel the mysteries it holds.


Memory capacity of brain is 10 times more than previously thought

Salk researchers and collaborators have achieved critical insight into the size of neural connections, putting the memory capacity of the brain far higher than common estimates. The new work also answers a longstanding question as to how the brain is so energy efficient and could help engineers build computers that are incredibly powerful but also conserve energy.

"This is a real bombshell in the field of neuroscience," says Terry Sejnowski, Salk professor and co-senior author of the paper, which was published in eLife. "We discovered the key to unlocking the design principle for how hippocampal neurons function with low energy but high computation power. Our new measurements of the brain's memory capacity increase conservative estimates by a factor of 10 to at least a petabyte, in the same ballpark as the World Wide Web."

Our memories and thoughts are the result of patterns of electrical and chemical activity in the brain. A key part of the activity happens when branches of neurons, much like electrical wire, interact at certain junctions, known as synapses. An output 'wire' (an axon) from one neuron connects to an input 'wire' (a dendrite) of a second neuron. Signals travel across the synapse as chemicals called neurotransmitters to tell the receiving neuron whether to convey an electrical signal to other neurons. Each neuron can have thousands of these synapses with thousands of other neurons.

"When we first reconstructed every dendrite, axon, glial process, and synapse from a volume of hippocampus the size of a single red blood cell, we were somewhat bewildered by the complexity and diversity amongst the synapses," says Kristen Harris, co-senior author of the work and professor of neuroscience at the University of Texas, Austin. "While I had hoped to learn fundamental principles about how the brain is organized from these detailed reconstructions, I have been truly amazed at the precision obtained in the analyses of this report."

Synapses are still a mystery, though their dysfunction can cause a range of neurological diseases. Larger synapses--with more surface area and vesicles of neurotransmitters--are stronger, making them more likely to activate their surrounding neurons than medium or small synapses.

The Salk team, while building a 3D reconstruction of rat hippocampus tissue (the memory center of the brain), noticed something unusual. In some cases, a single axon from one neuron formed two synapses reaching out to a single dendrite of a second neuron, signifying that the first neuron seemed to be sending a duplicate message to the receiving neuron.

At first, the researchers didn't think much of this duplication, which occurs about 10 percent of the time in the hippocampus. But Tom Bartol, a Salk staff scientist, had an idea: if they could measure the difference between two very similar synapses such as these, they might glean insight into synaptic sizes, which so far had only been classified in the field as small, medium and large.

To do this, researchers used advanced microscopy and computational algorithms they had developed to image rat brains and reconstruct the connectivity, shapes, volumes and surface area of the brain tissue down to a nanomolecular level.

The scientists expected the synapses would be roughly similar in size, but were surprised to discover the synapses were nearly identical.

"We were amazed to find that the difference in the sizes of the pairs of synapses were very small, on average, only about eight percent different in size. No one thought it would be such a small difference. This was a curveball from nature," says Bartol.

Because the memory capacity of neurons is dependent upon synapse size, this eight percent difference turned out to be a key number the team could then plug into their algorithmic models of the brain to measure how much information could potentially be stored in synaptic connections.

It was known before that the range in sizes between the smallest and largest synapses was a factor of 60 and that most are small.

But armed with the knowledge that synapses of all sizes could vary in increments as little as eight percent between sizes within a factor of 60, the team determined there could be about 26 categories of sizes of synapses, rather than just a few.
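The paper's actual analysis rests on signal-detection theory, but a crude back-of-envelope (entirely an illustration here, with an assumed discriminability criterion of roughly two 8 percent steps between adjacent sizes) shows why the answer lands in the mid-twenties rather than the hundreds.

```python
import math

# Back-of-envelope, NOT the paper's signal-detection analysis:
# assume adjacent distinguishable synapse sizes must differ by roughly
# two 8% steps (an assumed discriminability criterion).
size_range_factor = 60   # largest synapse / smallest synapse
variability = 0.08       # ~8% size difference between paired synapses
step_factor = 1 + 2 * variability

n_distinguishable_sizes = math.log(size_range_factor) / math.log(step_factor)
print(f"~{n_distinguishable_sizes:.0f} distinguishable sizes")  # ~28, same ballpark as the reported 26
```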

"Our data suggests there are 10 times more discrete sizes of synapses than previously thought," says Bartol. In computer terms, 26 sizes of synapses correspond to about 4.7 "bits" of information. Previously, it was thought that the brain was capable of just one to two bits for short and long memory storage in the hippocampus.

"This is roughly an order of magnitude of precision more than anyone has ever imagined," says Sejnowski.

What makes this precision puzzling is that hippocampal synapses are notoriously unreliable. When a signal travels from one neuron to another, it typically activates that second neuron only 10 to 20 percent of the time.

"We had often wondered how the remarkable precision of the brain can come out of such unreliable synapses," says Bartol. One answer, it seems, is in the constant adjustment of synapses, averaging out their success and failure rates over time. The team used their new data and a statistical model to find out how many signals it would take a pair of synapses to get to that eight percent difference.

The researchers calculated that for the smallest synapses, about 1,500 signaling events (roughly 20 minutes' worth) are needed to cause a change in their size and strength, while for the largest synapses, only a couple hundred signaling events (1 to 2 minutes) cause a change.

"This means that every 2 or 20 minutes, your synapses are going up or down to the next size. The synapses are adjusting themselves according to the signals they receive," says Bartol.

"Our prior work had hinted at the possibility that spines and axons that synapse together would be similar in size, but the reality of the precision is truly remarkable and lays the foundation for whole new ways to think about brains and computers," says Harris. "The work resulting from this collaboration has opened a new chapter in the search for learning and memory mechanisms." Harris adds that the findings suggest more questions to explore, for example, if similar rules apply for synapses in other regions of the brain and how those rules differ during development and as synapses change during the initial stages of learning.

"The implications of what we found are far-reaching," adds Sejnowski. "Hidden under the apparent chaos and messiness of the brain is an underlying precision to the size and shapes of synapses that was hidden from us."

The findings also offer a valuable explanation for the brain's surprising efficiency. The waking adult brain generates only about 20 watts of continuous power--as much as a very dim light bulb. The Salk discovery could help computer scientists build ultraprecise, but energy-efficient, computers, particularly ones that employ "deep learning" and artificial neural nets--techniques capable of sophisticated learning and analysis, such as speech, object recognition and translation.

"This trick of the brain absolutely points to a way to design better computers," says Sejnowski. "Using probabilistic transmission turns out to be as accurate and require much less energy for both computers and brains."


THE HUMAN MEMORY

Since time immemorial, humans have tried to understand what memory is, how it works and why it goes wrong. It is an important part of what makes us truly human, and yet it is one of the most elusive and misunderstood of human attributes.

The popular image of memory is as a kind of tiny filing cabinet full of individual memory folders in which information is stored away, or perhaps as a neural super-computer of huge capacity and speed. However, in the light of modern biological and psychological knowledge, these metaphors may not be entirely useful and, today, experts believe that memory is in fact far more complex and subtle than that.

It seems that our memory is located not in one particular place in the brain, but is instead a brain-wide process in which several different areas of the brain act in conjunction with one another (sometimes referred to as distributed processing). For example, the simple act of riding a bike is actively and seamlessly reconstructed by the brain from many different areas: the memory of how to operate the bike comes from one area, the memory of how to get from here to the end of the block comes from another, the memory of biking safety rules from another, and that nervous feeling when a car veers dangerously close comes from still another. Each element of a memory (sights, sounds, words, emotions) is encoded in the same part of the brain that originally created that fragment (visual cortex, motor cortex, language area, etc), and recall of a memory effectively reactivates the neural patterns generated during the original encoding. Thus, a better image might be that of a complex web, in which the threads symbolize the various elements of a memory, that join at nodes or intersection points to form a whole rounded memory of a person, object or event. This kind of distributed memory ensures that even if part of the brain is damaged, some parts of an experience may still remain. Neurologists are only beginning to understand how the parts are reassembled into a coherent whole.

The human brain, one of the most complex living structures in the universe, is the seat of memory

Nor is memory a single unitary process; there are different types of memory. Our short-term and long-term memories are encoded and stored in different ways and in different parts of the brain, for reasons that we are only beginning to guess at. Years of case studies of patients suffering from accidents and brain-related diseases and other disorders (especially in elderly persons) have begun to indicate some of the complexities of the memory processes, and great strides have been made in neuroscience and cognitive psychology, but many of the exact mechanisms involved remain elusive.

This website, written by a layman for the layman, attempts to piece together some of what we DO know about the enigma that is… The Human Memory.


Memory capacity of the human brain in bytes? - Biology

Unlike digital cameras with full memory cards that cannot snap any more pictures, our brains never seem to run out of room. Yet it defies logic that one adult human brain – a "blood-soaked sponge," in writer Kurt Vonnegut's words – should be able to limitlessly record new facts and experiences.

Neuroscientists have long tried to measure our maximum mental volume. However, what scrambles any simple reckoning of memory capacity is the astounding cognitive feats achieved by dedicated individuals, and people with atypical brains.

Many of us struggle to commit a phone number to memory. How about 67,980 digits? That's how many digits of pi Chao Lu of China, a 24-year-old graduate student at the time, recited in 2005. Chao uttered the string of numbers during a 24-hour stretch without so much as a bathroom break, breaking the world record.

Savants have pulled off arguably even more amazing feats of recall, from names and dates to the details of complex visual scenes. And in rare instances, injuries to previously healthy people have seemingly triggered "acquired savant syndrome." When Orlando Serrell was 10 years old, for example, he was struck by a baseball on the left side of his head. He suddenly found he could recall countless licence plates and perform complex calendrical calculations, such as working out what day of the week a date from decades ago fell on.

How is it that these people's noodles put the average brain's memory to shame? And what do the abilities of pi reciters and savants say about the true capacity of the human brain?

Brain bytes

On a quantifiable level, our memory capacity must have some basis in the physiology of the brain. A crude, but perhaps useful metric in this regard: the approximately 100 billion neurons that compose our brains. Only around a billion, however, play a role in long-term memory storage – they’re called pyramidal cells.

If you were to assume that a neuron could merely hold a single "unit" of memory, then our brains would fill to the brim. "If you could have as many memories as neurons, that's not a very big number," says Paul Reber, a professor of psychology at Northwestern University. "You'd run out of space in your brain pretty fast."

Instead, researchers believe memories form in the connections between neurons and across neural networks. Each neuron sprouts extensions like train lines from a commuter hub, looping in about a thousand other nerve cells. This architecture, it is thought, makes the elements of memories available across the whole tangled web. As such, the concept of a blue sky, say, can show up in countless, notionally discrete memories of outdoor scenes.

Reber calls this effect "exponential storage," and with it the brain's memory capacity "goes through the roof."

"Under any reasonable guess, it gets into the several petabyte range," says Reber. One petabyte equates to 2,000 years-worth of MP3 song files. We don’t yet know exactly how many connections a single memory needs, of course – or even if its storage can be compared to a digital computer at all – so such comparisons should perhaps be taken with a pinch of salt. Suffice to say, according to Reber, "you have tonnes and tonnes of space."

More up top?

Could people endowed with super-memories, then, have exceptional brains?

The short answer: no. Pi record holders like Lu, as well as most winners of memory championships, swear they are just regular people who have dedicated themselves to training their brains for holding and retrieving selected pieces of information.

Nelson Dellis, a USA Memory Championship winner, says that his memory was actually awful before he became a competitive mental athlete. Practice made all the difference. "Within weeks of training, maybe even less, you're doing something that seems almost impossible to the normal person," says Dellis. "We all have this skill within us."

Several years ago, when Dellis first started his cerebral workouts, it took him 20 minutes to memorise a deck of cards. Nowadays, he can commit to memory all 52 cards in under 30 seconds – in other words, in a single pass. Dellis trained up to five hours daily on card memorisation and other memory competition events ahead of his successful title defence at the 2015 USA Memory Championship on 29 March in New York City.


How does the human brain compare to a computer?

We live in a world where computers can outperform humans at chess, Go, and even Jeopardy. Artificial intelligence and machine learning are creating new breakthroughs all the time, leaving us wondering whether we’ll soon be living in a technological utopia or battling for survival against a cyborg Arnold Schwarzenegger.

But do computers outperform the human brain overall? Let’s find out.

For the purpose of this article, let’s define a computer as a personal desktop for non-professional use (i.e. not a server running 24/7).

And to keep things simple, we'll limit the comparisons to a few key areas:

Storage

For day-to-day usage, most computer users will get by with 500GB of storage. Creatives, gamers, and other data-heavy users will often rely on additional storage on the cloud or on a portable SSD. For the sake of argument, we’ll give the computer an average of 1TB of storage space.

What about the brain’s storage capacity? Well, it’s complicated.

Estimates vary on how many nerve cells, or neurons, exist in a typical brain. Many studies rely on 100 billion neurons, while a Stanford University study estimates that the brain actually has 200 billion neurons.

You might be thinking, “Wait, the computer has bytes and the brain has neurons. How do we compare the two?”

One marked difference between the human brain and computer flash memory is the ability of neurons to combine with one another to assist with the creation and storage of memories. Each neuron has roughly a thousand connections to other neurons. With over a trillion connections in an average human brain, this overlap effect creates an exponentially larger storage capacity.

Based on our understanding of neurons today, which is very limited, we would estimate the brain’s storage capacity at 1 petabyte, which would be the equivalent of over a thousand 1TB SSDs.

Advantage: Human Brain.

Memory

So far, it’s an even contest. The human brain has significantly more storage than an average computer. And a computer can process information exponentially faster than a human brain.

How about accessing memory? Can a human recall information better than a computer?

Well, it depends on what kinds of information we’re talking about.

For basic facts, the answer is unequivocally no. If a computer “knows” that the capital of Nevada is Carson City, that fact will always be accessible. A human, on the other hand, may get confused or forget that fact over time, particularly after a long weekend in Vegas.

Where computers lag behind humans is the ability to assign qualitative rankings to information. For a computer, all information is exactly the same. Humans, on the other hand, have many different types of memories and prioritize memories based on their importance. You will undoubtedly remember numerous details about your wedding day, but you probably forgot what you had for lunch last Thursday. (It was a tuna sandwich on rye, in case you were wondering.)

Humans also relate memories to one another, so your memory of New Year’s Eve will tie to all of your other New Year celebrations over the course of your life. A computer lacks this ability, at least for now.

Energy Efficiency

The contest is still a toss-up. Computers are faster and more precise, while humans have more storage capacity and nuance in accessing memories.

What about energy efficiency? Here is where it gets really fun.

A typical computer runs on about 100 watts of power. A human brain, on the other hand, requires roughly 10 watts. That’s right, your brain is ten times more energy-efficient than a computer. The brain requires less power than a lightbulb.

We may not be the brightest bulbs in the box, but then again, we don’t have to be.

Advantage: Human Brain

Conclusion

Ultimately, there is no clear winner overall. Human beings and computers have their own advantages, depending on the category. If you want precision and raw processing speed, a computer is the clear choice. If you want creativity, energy efficiency, and prioritization, a human is your best bet.

The good news is that we don’t have to choose. It doesn’t have to be a contest of humans against computers. We can work together and enjoy the best of both worlds. That is, until Skynet becomes self-aware.


Memory-related brain lateralisation in birds and humans

Visual imprinting in chicks and song learning in songbirds are prominent model systems for the study of the neural mechanisms of memory. In both systems, neural lateralisation has been found to be involved in memory formation. Although many processes in the human brain are lateralised--spatial memory and musical processing involve mostly right hemisphere dominance, whilst language is mostly left hemisphere dominant--it is unclear what the function of lateralisation is. It might enhance brain capacity, make processing more efficient, or prevent the occurrence of conflicting signals. In both avian paradigms we find memory-related lateralisation. We will discuss avian lateralisation findings and propose that birds provide a strong model for studying the neural mechanisms of memory-related lateralisation.

Keywords: Auditory-vocal learning; Avian brain; Domestic chick; Hemispheric dominance; Human language lateralisation; Imprinting; Lateralisation; Learning; Memory; Memory consolidation; Memory formation; Sensory learning; Song learning; Songbirds.


Conclusion and Perspectives

The MeshCODE theory presented here provides an original concept for the molecular basis of memory storage. I propose that memory is biochemical in nature, written in the form of different protein conformations in each of the trillions of synapses. This concept is based on the discovery of a complex network of mechanical switches in proteins like talin (Yao et al., 2016; Goult et al., 2018; Figure 2) that are built into the scaffolds of every synapse (Park and Goda, 2016; Lilja and Ivaska, 2018; Figure 3). These binary switches can be operated by the force-generation machinery of the cell's cytoskeleton, offering a new view of the brain as a mechanical computer.

The capacity for storage of data in a binary form in each synapse identifies an addressable read-write memory mechanism, clearly pointing to a way, in which the brain might carry information forward in time and perform computation. Data written in binary, symbolic form would provide a basis for how the brain might function as an input-output system, in which its computation and data processing systems are founded on physical and mathematical principles (Gallistel and King, 2009). Remarkably, humankind’s efforts to produce optimal computation in silico may have led to architectures that bear a striking similarity to what nature might already have arrived at in vivo.
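To get a feel for the scale such a mechanism could imply, here is a toy estimate; both numbers are illustrative assumptions for this sketch, not values taken from the MeshCODE paper.

```python
# Toy capacity estimate for the MeshCODE idea: binary switch states per synapse.
# Both numbers below are illustrative assumptions, not values from the theory paper.
switches_per_synapse = 40   # assumed count of mechanical binary switches per synaptic adhesion
synapses = 2e14             # assumed total synapse count ("trillions of synapses")

total_bits = switches_per_synapse * synapses
total_petabytes = total_bits / 8 / 1e15
print(f"~{total_petabytes:.1f} PB of switch-state storage")  # 1.0 PB under these assumptions
```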

Sensory inputs are processed by the brain and trigger the appropriate motor responses as outputs allowing the animal to interact with the world. Action potential spike trains are well established as an organism's way of sending information over long distances (Perkel et al., 1967; Strong et al., 1998; Fellous et al., 2004), similar to how electrical pulses carry information in electronic systems, yet quite how these voltage spikes that travel down axons carry information is not yet fully understood. In the MeshCODE framework proposed here, these spikes transfer information by altering the mechanical coding of both the sender and receiver cell. Diverse input signals, including visual, auditory, olfactory, temporal cues, self-movement (idiothetic), among others, are converted into electrical signals in the form of spike trains, and the precise patterns of these spikes trigger exact changes to the neurons. These changes include cytoskeletal alterations (Yao et al., 2006; Cingolani and Goda, 2008) which in the MeshCODE framework would update the switch patterns, such that the information the spike trains carry is integrated into the organism's binary coding. This complex mechanical coding amounts to a machine code that is constantly running in all animals. From an initial state at birth, the life experiences and environmental conditions of the animal would be written into the code, creating a constantly updating, mathematical representation of the animal's unique life. It is possible that consciousness is simply an emergent property arising from the interconnectedness of electrical signals connecting all these MeshCODEs, forming a complete mathematical representation of the world that gives rise to precise electrical signals that coordinate an entire biochemical organism in the context of its world.

The key to biochemical data storage would be the precise conformations of each mechanical switch in each and every synaptic adhesion. These conformations are mostly unmeasurable with existing technologies; under the microscope, the talin visible in adhesions will not appear to change, even as the conformation of each switch alters during memory formation. However, as the size and composition of each synaptic adhesion complex will change in response to these altered patterns, observation of the adhesions themselves, identification of the ligands that engage them, and correlation of these with the synapse's activity should provide a readout of the process. Visualising these complexes is further complicated because any perturbation of the system will result in altered MeshCODE arrangements. However, the technical capabilities to observe protein states and the forces acting on proteins in cells are advancing rapidly (Kumar et al., 2016; Ringer et al., 2017; Lemke et al., 2019) and, used in conjunction with super-resolution microscopy techniques (Leterrier et al., 2017; Schnitzbauer et al., 2017; Jacquemet et al., 2020), optogenetic techniques (Liu et al., 2012), and the well-established strategies for studying neurotransmission (reviewed in Kandel et al., 2014), such conformational changes during memory formation should be detectable. Further, a number of talin-binding compounds have recently been identified (Yang et al., 2017; Bryce et al., 2019) and the effect of such compounds on learning and memory in animal systems might provide opportunities to pharmaceutically modulate these processes.

As a final comment, physical storage of memory would have significant potential future implications, not least that it might make the stuff of science fiction possible. If memory and consciousness are biochemical in nature, it is possible that one day we will fully decipher how the MeshCODE stores and computes information to form a mathematical representation of the world. In doing so we may not only understand the computations of the human mind, but also allow the transfer of the human mind from neural networks onto silicon chips running the human Operating System. A biochemical basis of memory storage also raises the possibility of reading the memory not only of the living but also of the dead. Although short-term memory might be accessible only transiently after death, long-term MeshCODEs that are "write protected" might be possible to read for as long as the brain remains intact.

