Algorithm of the Mind

Contents:

1. Discreteness and Continuity of the Mind.

The chapter takes the essence of the definition of the Mind as a signal transduction process and shows where this point of view leads. Any signal from the environment is a continuous energy flow, even if we measure it in pieces and at discrete time intervals. The line between discreteness and continuity depends on the resolution characteristics of the perceiving system and its elements. Each discrete measurement takes time (the window of perception) as the duration of the basic operation, which determines the largest possible ‘piece’ of the signal that can pass through during this period. It then remains to unite these parts into a single stream: the model of a continuous reality must itself be continuous. Normally, the Mind perceives the surrounding world and its internal state as a continuous process, although everything is split into discrete components during the primary processing of signals.

This is the core of the issue: the Mind transduces signals from continuous waves into a discrete code and back into continuous representations of those signals. The integrity and smoothness of the Mind’s algorithm is a condition for successful adaptation. The chapter takes the visual modality of perception as an example and shows how this marvel of biotechnology performs continuous-discrete-continuous transformations. Our vision does the job so well that to this day neuroscientists are divided into camps arguing whether perception is discrete or continuous. The secret is simple: instead of ‘or’ we should put ‘and.’
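
The continuous-discrete-continuous loop can be caricatured in a few lines of code. The sketch below is purely illustrative and not the book’s model: it assumes a sinusoidal input, a fixed ‘window of perception,’ one measurement per window, and linear interpolation for the reconstruction step.

```python
# Toy illustration (not the book's model): a continuous signal is measured in
# discrete windows and then re-assembled into a continuous estimate.
import numpy as np

def sample_signal(signal, t_start, t_end, window):
    """One measurement per 'window of perception' (here: the value at the window centre)."""
    centres = np.arange(t_start + window / 2, t_end, window)
    return centres, signal(centres)

def reconstruct(sample_times, sample_values, t_dense):
    """Re-unite the discrete pieces into a continuous stream (linear interpolation)."""
    return np.interp(t_dense, sample_times, sample_values)

signal = lambda t: np.sin(2 * np.pi * 2.0 * t)                 # a continuous 2 Hz input
t_dense = np.linspace(0.0, 1.0, 1000)                          # 'true' continuous time axis
times, values = sample_signal(signal, 0.0, 1.0, window=0.05)   # 50 ms perception window
model = reconstruct(times, values, t_dense)                    # continuous representation
print("mean absolute reconstruction error:", np.mean(np.abs(model - signal(t_dense))))
```

In this toy, shrinking the window makes the reconstructed stream track the input more closely, at the cost of more measurements per unit of time.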

The reason for the debate is that we normally perceive the world as a continuous picture, and it is hard to believe that the brain performs a transduction so fast that it seems instant to us. In fact, only failures reveal how complex and multi-stage the process is. It immediately becomes clear how difficult it is for our brain to create such a diverse and, at the same time, coherent picture of the world. The chapter proposes hypotheses on how the brain manages to do it and what algorithm and mechanism it uses.

2. Principles of Signal Processing.

Any signal is a change in the physical state of the phenomena of this world, which are energy processes. A living system can measure this change only indirectly, by assessing the states of its channels of perception. Signal processing is about converting the flow of energy into patterns of the system’s internal code. Processing a constant stream of signals is not easy at all. The only way to do it is to create optimal algorithms. The process itself can be complicated, but the underlying algorithm should be simple enough: any unnecessary complication may reduce efficiency.

The chapter looks at the basic principles of signal processing and shows how the Mind uses them in the analysis and subsequent synthesis of signals to compile an ongoing picture of the world in the range of modalities available to a given living system. Obviously, there are various modalities and their level of development can differ significantly, but the general principles are universal.

The chapter contains a graphical representation of the Mind’s algorithm and details each step of the signal filtering process. It offers an intuitive visual and mathematical model of the functions various brain filters perform in sampling, quantizing, modulating, evaluating and integrating signals of the outer and inner world. This approach to the elements of the nervous system offers the outline of a new brain map focused on the technological chain of signal transduction.
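
To make the stages named above concrete, here is a hedged sketch of such a filter chain in code. Only the stage names (sampling, quantizing, modulating, integrating) come from the text; the specific operations, parameters and the toy input are illustrative assumptions, not the chapter’s actual model.

```python
# A toy filter chain: sample -> quantize -> modulate -> integrate.
import numpy as np

def sample(x, step):            # keep every step-th value of the incoming stream
    return x[::step]

def quantize(x, levels):        # map each sample to one of a few discrete levels
    edges = np.linspace(x.min(), x.max(), levels)
    return edges[np.argmin(np.abs(x[:, None] - edges[None, :]), axis=1)]

def modulate(x, gain, phase):   # re-weight and shift the coded samples
    return gain * np.roll(x, phase)

def integrate(x, window):       # smooth the stream back into a continuous estimate
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

raw = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.1 * np.random.randn(400)
coded = quantize(sample(raw, step=4), levels=8)
output = integrate(modulate(coded, gain=1.5, phase=1), window=5)
print(output[:5])
```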

3. Space and Time of Living Systems.

The chapter is devoted to two fundamental concepts that have been at the center of philosophy and physics for ages. It looks at them from the perspective of the definition of the Mind as a process of measurement and representation of signals. The proposed model joins the camp of relationists, who consider space and time to be concepts by which we define the measurement process, as opposed to absolutists, who regard space and time as independent entities.

Is there such a thing as Space, a container in which everything is placed? Is there such a thing as Time, separate from other things yet ruling their changes and movements? Is there an omnipresent and omnipotent super-entity, the ‘fabric of Space-Time’? Absolutists say ‘yes’ and explain things by these entities’ superpowers. Relationists say ‘no,’ but they have to explain things some other way, as there is no help from phantoms of intangible entities.

TTT avoids the objectification error and proceeds from the notion that space and time are not transcendent objects but simply words by which we define a process of measuring the relationships and dynamics of signals. But it takes the next step by showing how this measuring is done by the Mind. It is not about using external tools like a ruler or a clock, which are just a continuation of the Mind’s internal tools; other animals do not use such external tools yet are quite good at orienting themselves in the signals of the world. The chapter is about the inner workings of the Mind, and it continues to reveal the secrets of its technology. Moreover, it demonstrates the true relativity of space and time. Being measurements, they depend upon the parameters of the system that samples the signals of the environment. The chapter contains examples of how space and time perception differs for systems with dissimilar parameters and varies for one system when its parameters change.
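
One simple way to put this relativity in numbers: the same external oscillation is reported differently by two observers whose sampling rates differ. The example below is an assumption-laden toy (a sine input and a Fourier readout), not the chapter’s own demonstration.

```python
# Two 'observers' with different sampling rates measure the same 10 Hz oscillation.
import numpy as np

def apparent_period(true_freq_hz, sample_rate_hz, duration_s=2.0):
    t = np.arange(0, duration_s, 1.0 / sample_rate_hz)
    samples = np.sin(2 * np.pi * true_freq_hz * t)
    # Estimate the dominant frequency this observer would report from its samples.
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    return 1.0 / freqs[np.argmax(spectrum[1:]) + 1]

print(apparent_period(10.0, sample_rate_hz=100.0))  # fast sampler: ~0.1 s period
print(apparent_period(10.0, sample_rate_hz=12.0))   # slow sampler: aliased, ~0.5 s period
```

The slow sampler undersamples the oscillation and reports an aliased, much longer period, which is one concrete reading of the claim that measured time depends on the parameters of the measuring system.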

4. Encoder and Decoder Rolled into One.

In any system of interacting elements, the question arises of creating and transmitting information, of the language and means of communication. The basic scheme in artificial communication systems goes from source to receiver. At the input, there is a source that sends a coded signal. At the output, there is a receiver of the encoded signal. For them to understand each other, the code must be the same for both elements of the circuit. In an artificial system, a programmer defines this language. So, the linear scheme works only with an external encoder.
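
A minimal sketch of this linear scheme, with the shared code fixed by an outside party (the codebook and message words below are, of course, invented for the example):

```python
# Sender and receiver understand each other only because a third party
# (the 'programmer') fixed the shared codebook in advance.
CODEBOOK = {"bright": "00", "dim": "01", "dark": "10"}          # defined externally
DECODEBOOK = {v: k for k, v in CODEBOOK.items()}

def encode(message):
    return "".join(CODEBOOK[word] for word in message)

def decode(bits):
    chunks = [bits[i:i + 2] for i in range(0, len(bits), 2)]
    return [DECODEBOOK[c] for c in chunks]

sent = encode(["bright", "dark", "dim"])
print(sent, "->", decode(sent))   # works only while both sides hold the same table
```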

When describing a living system’s operation, this is not just a technical subtlety but a philosophical topic: is there an external programmer, and is one needed at all? The chapter proceeds from the hypothesis within TTT about the algorithm of the Mind and shows how living systems solve the internal coding problem without any external programmer. It is an elegant solution to the philosophical dilemma. But it leads to the inevitable question about the internal code. If the encoder and decoder are combined in one operationally closed loop, what code does this system have? This question has haunted neuroscience for a century.

There are many proposed versions of the neuronal code, but they have major flaws: internal contradictions and discrepancies with empirical evidence. Some say that there are multiple codes in the brain. Some say there is no code at all and that it exists only in the models of neuroscientists.

TTT proceeds from the notion that the brain converts environmental signals into information flows that make sense for all elements of the system. Information is the encoded signal. The chapter contains hypotheses about the neural code and takes a new look at its nature.

5. Hybrid Analog-Digital Brain.

Mainstream neuroscience models of the neural code consider it to consist of identical action potentials and merely provide different versions of ‘spike train’ counting (average firing rate, spike count rate, time-dependent firing rate, temporal coding, phase-of-firing code, correlation coding, independent spike coding, etc.). Thus, the spikes are treated as discrete symbols of a ‘digital’ neural code. TTT takes a different view of the Mind as a signal encoding process and of the brain as the encoding device. The chapter considers the advantages and limitations of both digital and analog computing. It proposes the hypothesis that the brain exploits both ways of coding and is in essence an analog-digital device. Step by step, it reveals the hybrid signal transduction paradigm and gives clear examples of technological solutions used by the brain in different perception modalities.
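
For readers unfamiliar with these readouts, the toy below computes two of them, an average firing rate and the inter-spike-interval pattern used by temporal-coding accounts, from an invented spike train. It illustrates the standard approaches, not TTT’s alternative.

```python
# Two mainstream readouts of one (made-up) spike train.
import numpy as np

spike_times = np.array([0.012, 0.055, 0.081, 0.120, 0.290, 0.305, 0.330])  # seconds
duration = 0.5                                                              # seconds observed

firing_rate = len(spike_times) / duration      # rate code: spikes per second
isis = np.diff(spike_times)                    # temporal code: inter-spike-interval pattern
print(f"average firing rate: {firing_rate:.1f} Hz")
print("inter-spike intervals (ms):", np.round(isis * 1000, 1))
```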

6. Symphonic Neural Code Hypothesis.

Some mainstream models of the neural code are technologically absurd and contradict the realities of brain efficiency and speed. Others cover just part of the observed phenomena and fail to explain the rest. The chapter explains these shortcomings and gives concrete examples from empirical research. As a way out of a conceptual impasse that has lasted for decades, the chapter develops the hypothesis of the Symphonic Neural Code, based on the idea of the hybrid nature of neural computing. This hypothesis unravels the mystery of the high performance, speed and efficiency of the brain, which cannot be provided by coding with an average spike tempo (firing rate theory) or with just the temporal structure of a spike sequence (temporal code theory).
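
The limitation the chapter starts from can be shown with a toy comparison. This is only an illustration of the shortcoming of rate-only readouts, not the Symphonic Neural Code hypothesis itself: two spike trains with identical average rates can carry very different temporal structure.

```python
# Same rate, different temporal structure: a rate-only readout cannot tell them apart.
import numpy as np

regular = np.arange(0.05, 0.5, 0.05)                       # 9 evenly spaced spikes in 0.5 s
bursty  = np.array([0.05, 0.06, 0.07, 0.08, 0.30, 0.31, 0.32, 0.33, 0.34])  # 9 spikes, bursts

for name, train in [("regular", regular), ("bursty", bursty)]:
    rate = len(train) / 0.5                                 # identical for both trains
    isi_var = np.var(np.diff(train))                        # differs between the trains
    print(f"{name}: rate = {rate:.0f} Hz, ISI variance = {isi_var:.5f} s^2")
```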

7. Brain Logistics.

This chapter begins the journey into the intricacies of intra- and interneural communication.

8. Brain Logic.

This chapter uncovers the logic of an algorithm that produces its result as a structured energy flow, thus creating meaning (information).

9. Evolution of Brain Information Technology.

Here the reader is taken on a trip through the billions of years of evolution during which living systems have been improving their signal processing technologies: internal communication channels and methods of encoding, transmitting, decoding, storing and reproducing information. For living systems, solving this engineering problem is a condition for survival.

10. Neuron as a Signal Converter.

If we proceed from the primary hypothesis that neurons are the main actors in the play that we call the Mind, we need to find out how they perform the act. The chapter looks at a neuron from physiological, physical, technological and informational points of view, showing how this element of the system performs the encoding-decoding function. It also contains a graphical representation and detailed description of the algorithm that the brain uses at the level of a single cell and even at the level of a single ion channel of the cell membrane.
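
As a point of reference, the sketch below implements a standard leaky integrate-and-fire neuron (a textbook model, not the book’s own description of the cell): a continuous input current is integrated and converted into discrete spike events, which is the encoding step in miniature. All constants are conventional illustrative values.

```python
# Leaky integrate-and-fire: analog input current -> discrete spike times.
import numpy as np

def lif(input_current, dt=0.001, tau=0.02, v_rest=-0.070,
        v_thresh=-0.054, v_reset=-0.070, r=1e8):
    v, spikes = v_rest, []
    for step, i_in in enumerate(input_current):
        v += (-(v - v_rest) + r * i_in) * dt / tau   # leaky integration of the input
        if v >= v_thresh:                            # threshold crossing -> discrete spike
            spikes.append(step * dt)
            v = v_reset
    return spikes

current = np.full(1000, 2.0e-10)                     # 1 s of constant analog input (amperes)
print("spike times (s):", np.round(lif(current), 3))
```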

11. Bridges of the Mind.

Here the model begins to explore how the brain manages to bridge the discrete world of samples of incoming signals with the continuous world of the final product (representations of signals and the reality model in general). How does the brain interpolate and combine the samples? How does it reduce interpolation error and the inevitable distortions? How does it walk the tightrope between oversampling and undersampling? The chapter contains hypotheses that look at the issue from the physical perspective (what mechanisms are used) and the technological one (what algorithms are used). It offers intuitive verbal and graphic descriptions and mathematical modeling of the process.
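
A small numerical illustration of the oversampling/undersampling trade-off (the signal, sample counts and linear-interpolation choice are assumptions made for the example, not claims about the brain):

```python
# How interpolation error behaves as the sampling step shrinks or grows.
import numpy as np

truth = lambda t: np.sin(2 * np.pi * 3.0 * t)             # the 'real' continuous signal
t_fine = np.linspace(0, 1, 2000)

for n_samples in (7, 15, 60, 240):                         # from under- to over-sampled
    t_s = np.linspace(0, 1, n_samples)
    estimate = np.interp(t_fine, t_s, truth(t_s))          # linear interpolation of samples
    err = np.max(np.abs(estimate - truth(t_fine)))
    print(f"{n_samples:4d} samples -> max interpolation error {err:.3f}")
```

With too few samples the reconstruction misses the signal almost entirely; adding samples shrinks the interpolation error, but with diminishing returns.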

12. Brain Tunings.

This chapter is devoted mainly to the part of the brain’s technological chain that stands in the middle of signal processing, between sampling and integrating: the modulating part of the brain filters. The chapter shows how the brain performs amplitude, frequency, width and phase modulation of external and internal signals. After describing the whole technological chain, the study returns to the general definition of the Mind given in the previous part and offers a new version that gives a more detailed picture of the process.
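
For orientation, the sketch below generates the four modulation types named in the text for a toy carrier and message. It is ordinary signal-processing code, offered only as a reminder of what amplitude, frequency, pulse-width and phase modulation mean, not as a model of the corresponding brain filters.

```python
# Four classic modulation schemes applied to a toy carrier.
import numpy as np

t = np.linspace(0, 1, 1000)                                 # 1 s at ~1 kHz
carrier_hz = 20.0
message = 0.5 * (1 + np.sin(2 * np.pi * 2.0 * t))           # slow signal in [0, 1]

am  = message * np.sin(2 * np.pi * carrier_hz * t)                      # amplitude modulation
fm  = np.sin(2 * np.pi * (carrier_hz + 10 * message).cumsum() / 1000)   # frequency modulation
pm  = np.sin(2 * np.pi * carrier_hz * t + np.pi * message)              # phase modulation
pwm = (np.mod(t * carrier_hz, 1.0) < message).astype(float)             # pulse-width modulation
print(am[:3], fm[:3], pm[:3], pwm[:3])
```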

13. Butterfly Algorithm.

The Mind balances between tolerance and intolerance to information entropy (uncertainty). A living system must be ready for surprises (the difference between expectation and result). Still, at the same time, it strives to reduce surprises and create a model with high explanatory and predictive power. Complete certainty is impossible due to the dynamism and potential infinity of signals of the environment. A hypothetical zero information entropy would mean a stop to the reality testing process. But the absence of a stable model of reality is also maladaptive. The paradox is that low information entropy (certainty) is as deadly as high entropy (uncertainty).
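
The two dead ends can be put in numbers with the standard Shannon entropy formula. This is a general information-theoretic aside, not a TTT-specific calculation; the distributions are invented for the example.

```python
# Shannon entropy of a belief distribution over four possible outcomes.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

print(entropy([1.0, 0.0, 0.0, 0.0]))        # 0.0 bits  -- total certainty, "I know everything"
print(entropy([0.25, 0.25, 0.25, 0.25]))    # 2.0 bits  -- total uncertainty, "I know nothing"
print(entropy([0.7, 0.1, 0.1, 0.1]))        # ~1.36 bits -- a workable middle ground
```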

The Mind has to reduce the discrepancy between the model and data. But the living system cannot shut off from the world and totally exclude external signals. Moreover, it cannot shut off from itself and exclude internal bodily signals. So, the Mind has to decrease surprise (prediction error) by expanding the information field, the range of search and comparison. But its expansion is possible only through a collision with the new, with the uncertain.

How does the brain manage to strike the balance? The chapter takes us through the details of algorithm configuration and settings of the brain filters that allow living systems to escape from the dead-ends of “I know nothing” and “I know everything.”

14. The Amazing Self-Learning Machine.

An engineer or machine learning programmer creating an artificial self-learning machine has a direct engineering task: to build a working system. A neuroscientist has a reverse engineering task: to understand how an existing self-learning brain works. Perhaps we should pay attention to the external signal processing technologies our Mind creates in order to understand how it works inside while performing the same job. The hypothesis about the essence of the Mind helps us get over the old problem of ‘computer analogies.’ We are not like computers, but computers are like us in many respects.

As the book is about the algorithm of the Mind, the chapter considers only this side of the analogy. Engineers of artificial systems solve their direct problem by creating algorithms that allow a machine to learn and make predictions about data parameters, overcoming the limitations of an a priori program and creating the conditions for decisions and actions based on data, past and present. This is what living systems do too.

In artificial technologies, algorithms are divided into supervised learning and unsupervised learning. But even in the unsupervised version, an initial input is required; it is filled with incoming data and refined in the process of self-learning. Do living systems have something like that? Obviously, we are not a tabula rasa but have genetically inherited ‘programs.’ Living systems are amazing self-learning machines that constantly update their ‘software’ and change their model of reality whenever needed. Otherwise, they would not stay alive in this dynamic world.

The chapter shows how the algorithm proposed within the TTT model allows the brain to solve the problems of survival and adaptation based on an accumulated database and generated predictions about trends in environmental signals. These predictions are constantly evaluated against incoming data, with an assessment of the difference and its effect on the state of the system. Thus, the reality model is constantly tested for efficiency, adequacy and adaptability. The algorithm also allows the system to pass between the ‘Scylla and Charybdis’ of a computational ‘explosion’ (data overload) and the loss of vital data.
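
A deliberately tiny sketch of this evaluate-the-difference loop is given below. It uses a standard delta-rule (exponential-moving-average) update as an analogy for prediction-error-driven learning; it is not the TTT algorithm, and all numbers are invented.

```python
# Prediction updated in proportion to the surprise (prediction error).
import numpy as np

np.random.seed(0)
true_signal = np.concatenate([np.full(50, 2.0), np.full(50, 5.0)])   # the world changes mid-way
observations = true_signal + 0.3 * np.random.randn(100)              # noisy incoming data

prediction, learning_rate = 0.0, 0.1
for obs in observations:
    surprise = obs - prediction              # difference between expectation and data
    prediction += learning_rate * surprise   # model updated in proportion to the error
print(f"final prediction: {prediction:.2f} (true value is 5.0)")
```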

The brain is a self-learning system (autoencoder), and the Mind’s algorithm does not require an external encoder. This way, the philosophical problem is technologically resolved: a well-functioning self-learning algorithm is necessary and sufficient for free will. The Teleological Transduction Theory is based on the hypothesis that such an algorithm is universal, and its main principle is embedded in all levels from an individual cell to the entire system.
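
Since the text borrows the term ‘autoencoder,’ a minimal example may help readers unfamiliar with it. The sketch below is the machine-learning construct only (a tiny linear autoencoder trained on random data), not a model of the brain: the point it illustrates is that one system learns both the encoder and the decoder, with no external party defining the code.

```python
# A minimal linear autoencoder: the internal code is learned, not imposed from outside.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # toy 8-dimensional 'signals'
W_enc = rng.normal(scale=0.1, size=(8, 3))    # encoder: 8 -> 3 (the internal code)
W_dec = rng.normal(scale=0.1, size=(3, 8))    # decoder: 3 -> 8 (the reconstruction)
lr = 0.01

for _ in range(2000):
    code = X @ W_enc                          # encode
    recon = code @ W_dec                      # decode
    err = recon - X                           # reconstruction error drives learning
    W_dec -= lr * code.T @ err / len(X)       # gradient step on the decoder
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)  # gradient step on the encoder

print("mean squared reconstruction error:", np.mean(((X @ W_enc @ W_dec) - X) ** 2))
```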

However, a good algorithm is only part of the story. To understand how the brain processes different types of data, we need a model of the technological solutions within the chains of this algorithm, and a model of the physical mechanism that allows the algorithm and all its chains to be implemented. We need a unified concept of the physics, physiology and technology of the process that we call the Mind. It will be developed further in subsequent volumes of the series.