Harmonious Solution to the Binding Problem

Abstract:

How does the brain create a coherent model of reality while maintaining each signal’s identity and, at the same time, preventing the world picture from falling apart into separate pieces? In neuroscience and the philosophy of mind, this question is known as the binding problem. The article examines various aspects of the problem and offers a solution based on the Teleological Transduction Theory.

Keywords: binding problem, representation, reality model, signal processing, symphonic neural code, synchronization, frequency and phase coupling.

We perceive the world as a diverse but coherent structure and take this for granted. Yet the number of signals arriving at the different sensory systems and processed by the brain to create their representations is enormous, and those signals are constantly changing. Still, the normally working brain manages to construct an adaptive model of reality. It is like a puzzle full of details that fit together into a unified picture. The same applies to the model of Self created by the brain: our internal feeling of a unified personality, a single ‘I’, results from combining the representations of internal signals and serves as a steady reference frame for the model of the outside world.

In certain pathologies, or under the influence of psychoactive substances that disrupt brain functioning, the integrated state of the reality model deteriorates and the unified Self collapses. These maladaptive states show that creating a coherent model is a special function of the brain and a key to survival.

The binding problem has two aspects:

1.    The segregation problem (BP1) concerns the mechanisms that allow the brain to differentiate the various environmental signals received by the sensors of our perceptual modalities.

2.    The combination problem (BP2) concerns the mechanism that integrates the representations of the signals of the outer and inner world into a coherent model of reality.

Although BP1 and BP2 are two different problems, this does not mean that the brain uses two mechanisms to solve them. The seemingly opposite functions of integrating and differentiating representations may be performed by a single mechanism.

The term “combination problem” was coined in the nineteenth century by William James, who considered how the unity of consciousness might be explained by a known physical mechanism and found no satisfactory answer (James, 1890). With the development of physiological knowledge about the brain, the focus shifted to the functional-anatomical aspects of the problem. Trying to explain segregation and combination by anatomical structure has its grounds in the obvious spatial aspects: the brain has areas that specialize in processing modality-specific signals and various aspects of these signals, while areas further up the hierarchy are engaged in associative processing and integration. The connections between these areas, from the sensors up to the higher levels of the neocortex, start as specialized labeled lines, then converge in some regions, then diverge and converge again. The network topology is so complex that there is no way to attribute the binding mechanism to the ‘wiring’ scheme alone. We are left with the questions: how do signals keep their identity while neural pathways converge, and how does the overall picture stay integrated when there are so many diverging channels? Connections may support the functioning of a binding mechanism, but they are not a mechanism by themselves.

Also, spatial structure alone cannot explain how we get the picture of the world, with all its combined and distinct details, in a constant mode of almost instantaneous production and update. Time has to be included in any model of the mechanism and algorithm behind such a marvel of natural technology. Moreover, some researchers correctly note that it is inappropriate to analyse binding in perception without taking into account how features are bound in memory and how the brain pre-conceives things (Zimmer et al., 2006). So, any model solving the binding problem should include aspects of the past (memory), the present (perception) and the future (prediction).

The absence of a physical solution to the problem led to the idea that the problem does not exist. Philosopher Daniel Dennett has proposed that our sense of unified experience is illusory and that, instead, at any one time there are “multiple drafts” of experience at multiple sites (Dennett, 1981). Some neuroscientists argue that there is in fact a “disunity of consciousness” (Zeki, 2003). This idea is based on experiments demonstrating “perceptual asynchrony,” in which colour is perceived before the orientation of lines and before motion by 40 and 80 ms, respectively (Moutoussis, Zeki, 1997).

But ‘disunity’ in the sense that different cell populations process various signals at non-coinciding times does not make the binding problem non-existent. On the contrary, it underscores the existence of a binding mechanism, since under normal conditions we do perceive the world as a whole and not as a rotating kaleidoscope of “multiple drafts.” The model of reality breaks down into pieces only in pathological states, which simply shows that some binding mechanism has malfunctioned. We cannot solve the problem by declaring that it does not exist.

Some modern theories are in a contradictory state: on the one hand, they claim that the problem does not exist; on the other, they claim that the brain somehow solves it. For example, the author of the Thousand Brains Theory (TBT), Jeff Hawkins, states: “The binding problem is based on the assumption that the neocortex has a single model for each object in the world. The Thousand Brains Theory flips this around and says that there are thousands of models of every object. The varied inputs to the brain aren’t bound or combined into a single model.” (Hawkins, 2021). So, nothing is combined — no binding problem.

But even the name of the theory points to the problem: how do all those ‘thousand brains’ integrate into one brain? The author attempts to answer: “Voting among cortical columns solves the binding problem. It allows the brain to unite numerous types of sensory input into a single representation of what is being sensed.” (Ibid). What mechanism does TBT offer? “The basic idea of how columns can vote is not complicated. Using its long-range connections, a column broadcasts what it thinks it is observing. Often a column will be uncertain, in which case its neurons will send multiple possibilities at the same time. Simultaneously, the column receives projections from other columns representing their guesses. The most common guesses suppress the least common ones until the entire network settles on one answer … The voting mechanism of the Thousand Brains Theory explains why we have a singular, non-distorted perception” (Ibid).

So, good guessing and voting result in undistorted perception; bad guessing and voting result in distortion. It really is not complicated. But these fine words are about everything and nothing in particular. It sounds as if there were thousands of homunculi inside the head that “think” and “vote.” Yet the question of the binding mechanism remains open, despite the author’s claim to have closed it. Using the voting metaphor proposed by the author, we can formulate it as follows: how do votes remain individual for counting when placed in a common ballot box? That is the essence of the binding problem, which has two sides: combination and segregation. They have to be solved simultaneously, and there has to be a physical mechanism for that. Without an idea of the mechanism, the author has to acknowledge: “This is how the entire world is learned: as a complex hierarchy of objects located relative to other objects. Exactly how the neocortex does this is still unclear” (Hawkins, 2021).

Many theories approach the problem from the functional-anatomical perspective. Global Workspace Theory (GWT) suggests that certain signals, encoding what we experience, enter a specific workspace from which they spread to many sites in the cortex for parallel processing (Baars, 1997). There are detailed neuroanatomical versions of such a workspace (Dehaene et al., 2003). They rely on the physiological fact that many cortical regions send and receive numerous projections to and from a broad variety of distant brain regions, allowing them to integrate information over space and time. Multiple sensory data can therefore converge onto a single coherent interpretation, which is broadcast back to the global workspace, creating the conditions for the emergence of a single state of consciousness, at once differentiated and integrated. However, GWT does not tackle the mechanism that performs differentiation and integration; it only postulates the existence of a place where the function is located. Functional physiological schemes alone cannot explain the physics and technology that solve the binding problem.

Integrated Information Theory (IIT) introduces a time- and state-dependent measure of integrated information, φ (Balduzzi, Tononi, 2008). High φ values can be obtained by architectures that conjoin functional specialization with functional integration. The authors consider φ a useful metric for characterizing the capacity of any physical system to integrate information. The theory has a mathematical formalization, but “the integration measure proposed by IIT is computationally infeasible to evaluate for large systems, growing super-exponentially with the system’s information content” (Tegmark, 2016). These computational challenges, combined with the already difficult task of reliably and accurately assessing consciousness under experimental conditions, make many of the theory’s predictions hard to test. Researchers have attempted to use measures of information integration and differentiation to assess levels of consciousness in a variety of subjects. For instance, one study reliably discriminated between levels of consciousness in wakeful, sleeping, anesthetized, and comatose individuals (Casali et al., 2013). But the correlation of φ with different states may only reflect the level of efficient network interactions.

The theory has been criticized for failing to answer the basic questions required of a theory of consciousness. Philosopher Adam Pautz wrote: “As long as proponents of IIT do not address these questions, they have not put a clear theory on the table that can be evaluated as true or false.” (Pautz, 2019). One of the basic questions the theory fails to address is the physical and computational mechanism underlying the integrated information measure. Thus, it does not explain how the binding problem is solved; it only confirms that the brain solves it with varying degrees of success.

There is a long-standing hypothesis that the brain solves the binding problem via the synchronous activity of different neurons in the cortex. Approaches within this paradigm can be called the Binding-by-Synchrony Theory (BST). Originally, the idea was simple: neurons fire simultaneously and thereby bind the things that their individual firing encodes. Later, it developed into models of integration within a certain frequency range. For example, the authors of the article “The neuronal basis for consciousness” wrote: “The system would function on the basis of temporal coherence. Such coherence would be embodied by the simultaneity of neuronal firing … In this fashion the time-coherent activity of the specific and non-specific oscillatory inputs, by summing distal and proximal activity in given dendritic elements, would enhance de facto 40 Hz cortical coherence by their multimodal character and in this way would provide one mechanism for global binding” (Llinas et al., 1998).

The unified experience of consciousness is reduced to coincidence in time at a specific frequency, and the mechanism of this coincidence is a kind of summation. It is a working hypothesis, and the empirical evidence seemed to confirm it. Walter Freeman III observed activity in the 40 Hz range in the olfactory system in the 1970s (Freeman, 1978); in the 1980s, the same was observed in the visual system (Gray, Singer, 1989). These observations led to the idea that the gamma range plays the leading role in achieving temporal coherence, that is, in solving the binding problem. Since then, data confirming that many brain processes do occur in that particular frequency range have accumulated.

An article by one of the authors of the idea, Wolf Singer, was titled “Neuronal Synchrony: A Versatile Code for the Definition of Relations?” (Singer, 1999). The author treats the firing rate and the simultaneity of firing as two codes and insists that “there is evidence for such coexistence of synchronization and rate codes” (Ibid). But neither the average tempo nor the simultaneity of discrete impulses can be a “versatile code” in principle. Even if the rate of simultaneous firing changes over time in a pattern, such a pattern cannot encode multiple signal parameters at the observed speed of brain functioning. The average rate code is simply not fast and efficient enough.

Let’s take the visual modality as an example. Studies have shown that the response to a stimulus in the associative zones of the cortex arrives within about 0.1 s (Thorpe, 1990). Now let’s count the synaptic connections on the way from the retina to the temporal lobe: two in the retina, one in the lateral geniculate nucleus of the thalamus, two in each of areas V1, V2 and V3 of the visual cortex, and one in the inferior temporal cortex. A signal has to pass through 10 synapses in 0.1 s, which leaves about 0.01 s for each.

So, each processing stage, including the one that produces a representation in the temporal lobe, has about 10 ms at its disposal. In the frequency range around 40 Hz, a neuron fires once every 25 ms, so this frequency is obviously too slow to account for the observed speed of functioning. But that is not the biggest problem. Even at a higher frequency, a neuron has only one or two spikes at its disposal per stage, leaving no time to generate an average-firing-rate code: the average of one spike is just one spike. How can a representation, with all the nuances of its parameters, be formed within one spike if the code is the average firing rate? Neurons are not as fast as average-firing-rate theorists need them to be, yet the brain is fast enough to create the picture of the world and control the body at a temporal resolution of milliseconds. If a model of the neural code does not correspond to the reality of how the brain works, something is wrong with the model.
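
As a sanity check on this arithmetic, a few lines of Python reproduce the numbers used above (the latency and synapse counts are those cited in the text, not new data):

# Timing arithmetic for the rate-code argument (figures from the text).
total_response_s = 0.1            # cortical response latency (Thorpe, 1990)
n_synapses = 2 + 1 + 2 * 3 + 1    # retina, LGN, V1/V2/V3, temporal lobe
per_stage_ms = total_response_s / n_synapses * 1e3
print(per_stage_ms)               # 10.0 ms per synaptic stage

gamma_period_ms = 1e3 / 40.0      # one firing cycle at 40 Hz
print(gamma_period_ms)            # 25.0 ms

# Spikes available per 10 ms stage at 40 Hz: less than one.
print(per_stage_ms / gamma_period_ms)  # 0.4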

Singer admits that many experiments in various modalities of perception confirm “millisecond precision from trial to trial” (Singer, 1999). What is the way out? He does not propose any change in the coding model. Instead, he offers the following explanation: “One possible way to achieve such high temporal precision in neuronal signaling despite “slow” neurons is synchronization of discharges across parallel channels, a special form of population coding. In the proposed models, this is achieved by cross-coupling parallel channels through diverging and converging axon collaterals” (Ibid).

It is obvious that combining channels increases a system’s capacity. But we are back to the question of the code: how can an average rate code, which is slow by definition (it requires accumulating many spikes over time), become fast when we combine many neurons that code by their average spiking tempo? The logic is faulty: summing slow things cannot speed them up. Even if we combine billions of neurons into parallel channels, each of them will still need to produce at least some set of spikes for an average-rate pattern to occur, if that is their code.

But Singer insists: “Cortical networks should be able to operate with the required temporal resolution, because otherwise they would not be able to maintain synchrony at 40 Hz to begin with” (Ibid). They should operate fast, and they do. The problem lies in the initial hypothesis of an average-rate code and in the attempt to link it to synchrony in a narrow frequency range. Such a model does not correspond to the brain’s real, high temporal resolution.

We are back to the question of the information richness of the code combined with the almost instantaneous formation and transmission of representations. It is closely connected with the binding question: how do all these representations avoid mixing into one mess? Here lies another problem with the binding-by-synchrony hypothesis: it does not solve the issue of differentiation (BP1). How can simultaneous spikes create a model of reality, or even a single representation, that combines all the parameters of the signals while giving each parameter and each signal its unique place? Because of the apparent incompatibility of using synchrony to both segregate and unify, one of the initial proponents of the theory, Christoph von der Malsburg, suggested that segregation should be supported by other means (von der Malsburg, 1999).

Singer explains segregation by labeled-line coding, in which “the responses of a given unit have a fixed label attached to them” (Singer, 1999). Labeling the responses of simultaneously firing neurons should thus produce a “dynamic selection and grouping of responses” (Ibid). Whatever labeling mechanism the brain uses, we are back to the question: how can simultaneously firing neurons produce a multiparameter representation within milliseconds if the code is an average firing rate? It is a vicious circle. To get out of it, we should change both models: binding-by-synchrony and the firing-rate code.

Singer wrote: “In conclusion, I believe that the theoretical implications of the synchronization hypothesis and the data available to date are of sufficient interest to motivate further examination. The application of the new methods required to test the hypothesis will undoubtedly provide new insights into the dynamics of neuronal interactions. If it then turns out that the hypothesis falls short of the real complexity — which is bound to be the case — we will have learned something about the role of time in neuronal processing that we would not have learned otherwise” (Ibid).

This hypothesis really does fall short of the true complexity of the binding problem and does not reflect the reality of brain functioning. Moreover, empirical studies have shown no direct correlation between neural synchrony as simultaneous firing and perceptual binding (Thiele, Stoner, 2003; Dong et al., 2008). On the other hand, “numerous studies in both animals and humans have shown that synchronized oscillatory activity in various frequency bands is related to a large set of cognitive and sensorimotor functions” (Senkowski et al., 2008). So, the complexity of brain functioning lies not in simultaneous firing (synchrony) but in the interaction of various frequencies (synchronization).

We face a choice: either we stick with the old models, which contradict reality, are internally inconsistent, and contradict each other, or we take the normal scientific route of finding a solution that corresponds to reality.

The authors of a review article called “Synchrony Unbound” summed up the problems with the binding-by-synchrony hypothesis: “The theory is incomplete in that it describes the signature of binding without detailing how binding is computed. Moreover, while the theory is proposed for early stages of cortical processing, both neurological evidence and the perceptual facts of binding suggest that it must be a high-level computation … Nonetheless, the theory has sparked renewed interest in the problem of binding and has provoked a great deal of important research. It has also highlighted the crucial question of neural timing and the role of time in nervous system function. The problems that gave rise to the theory are still important problems that remain to be solved, and it is certain that the efforts of the theory’s proponents and opponents will advance our knowledge both of higher visual functions and of the algorithms used by that most enigmatic of computers, the cerebral cortex” (Shadlen, Movshon, 1999).

It is true that most previous theories were concerned only with the spatial aspects of the brain’s functional-anatomical structure. But space and time are conjugate variables, and we cannot ignore one or the other. Attempts to explain the binding problem solely by the architecture of neural pathways or by coincidence in time (synchrony) are bound to be incomplete. They will also fail if they ignore the algorithms and computational processes that create the information flow that needs to be bound.

We should not miss any part of the full technological chain: information encoding, transmission, storage, retrieval, comparison of incoming perceptual data with accumulated representations, and update of the reality model. It is about introjecting signals (perception), storing representations (memory), and projecting the model (predictive function). This is a constantly flowing iterative algorithm, and all its stages happen in the same operational space and time of the brain. The technology is founded on physical mechanisms that manifest themselves in physiological processes. Computational and physical phenomenology cannot contradict each other, since the brain is a physical device that encodes the signals of the world and combines the encoded representations into a model of reality that has to be unified and differentiated at the same time. Any model of the brain has to deal with all these aspects.

There is a theory that attempts to solve the functional, physical, physiological, computational and technological issues in one internally consistent model — Teleological Transduction Theory (Tregub, 2021a). It is based on the assumption that the mind is a process of signal transduction aimed at creating representations as elements of an adaptive reality model, a model adequate both to these signals and to the survival needs of the living system. This initial simple hypothesis covers the teleological and functional issues: what does the brain do, and for what purpose? Hence the name — Teleological Transduction Theory (TTT).

If creating a coherent reality model for the purpose of adapting to reality is the function of the brain, then the brain has to be equipped with the corresponding functional modules, algorithms of their interaction, and technological solutions based on physical mechanisms. TTT goes back to the dilemma William James faced in the nineteenth century when he contemplated the unity of consciousness: how can it be explained by a physical mechanism? This means we first need to explain the physical mechanism of representations, and only then can we speak about the mechanism and technology of their integration. Thus, the binding problem entails a set of major issues and cannot be solved in isolation.

Answering the questions about the physical nature of representations (qualia, phenomenal experience) would close the oldest problems in the philosophy of mind: the “explanatory gap” (Levine, 1983) between the mental and the physical, and the “hard problem of consciousness” (Chalmers, 1995). Addressed from the physical and technological point of view, the hard problem becomes not an easy but a solvable task (Tregub, 2022).

The signals of the world are waves of energy vibrations with a wide range of parameters. Processing these signals requires analysis and subsequent synthesis. Analysis is the decomposition of a signal into amplitude-frequency and phase components and the determination of each component’s contribution to the signal. Synthesis is the reverse operation: transforming the decomposed discrete measurements of the various parameters back into a continuous (wave) representation of the original signal. This is the formulation of BP1 and BP2 in physical and technological terms given within TTT: they are not only two aspects of the same problem but two steps of the same algorithmic chain.
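
In standard signal-processing terms, this analysis/synthesis pair is what a Fourier transform and its inverse do. The following minimal NumPy sketch illustrates the two operations in the abstract; it is a generic engineering illustration, not the theory’s neural implementation:

import numpy as np

fs = 1000                                     # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
# A composite signal: two components with distinct frequency, amplitude, phase.
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t + np.pi / 4)

# Analysis (BP1): decompose into amplitude-frequency and phase components.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
amplitudes = 2 * np.abs(spectrum) / len(signal)
phases = np.angle(spectrum)

# Synthesis (BP2): recombine the components into the continuous waveform;
# nothing is lost and nothing is mixed up.
reconstructed = np.fft.irfft(spectrum, n=len(signal))
assert np.allclose(signal, reconstructed)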

If the brain is a signal-processing device, then technologically its circuits are signal-processing filters, from the primary sensors acting as converters, through intermediary modulators, up to the higher integrators responsible for the final product: the representation of the signal. This hypothesis is a fundamentally new approach to the nervous system and provides the basis for a new brain map as a filter configuration, which is offered within TTT. These filters constitute a signal-transduction chain that encodes incoming analog signals by sampling and quantization, transmits the encoded patterns, stores them in the settings of neural impulse responses, and reproduces them as analog representations in wave patterns of neural activity. Technologically speaking, it is an analog-discrete-analog conversion “hybrid circuit” (Sarpeshkar, 1998). Computationally, it is the analysis and synthesis that solve BP1 and BP2. Teleological Transduction Theory describes the process and the algorithm from the systemic level down to the intracellular details (Tregub, 2021a).
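
The analog-discrete-analog chain can be sketched, again generically, as sampling and quantization followed by reconstruction. The code below is a toy model of such a hybrid conversion, with nothing neuron-specific about it:

import numpy as np

fs = 200                                  # sampling rate, Hz
t = np.arange(0, 0.5, 1 / fs)
analog_in = np.sin(2 * np.pi * 8 * t)     # incoming analog signal

# Encoding: sample (implicit in t) and quantize to discrete levels.
n_levels = 16
codes = np.round((analog_in + 1) / 2 * (n_levels - 1)).astype(int)

# Transmission and storage operate on the integer codes; reconstruction
# converts them back into an analog waveform.
analog_out = codes / (n_levels - 1) * 2 - 1
print(np.max(np.abs(analog_out - analog_in)))  # small quantization error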

From the point of view of organizing communication channels, the brain solves BP1 and BP2 by combining component and composite solutions. Primary signal converters represent the component aspect: different channels process and transmit different signal parameters. Component technology makes it possible not only to differentiate signal parameters but also to avoid crosstalk during primary processing. At subsequent stages there is a transition to composite solutions, where the encoded parameters are transmitted over common channels. This simplifies the connections between the modules of the system, which are highly complex anyway due to the vast number of network elements. The composite option saves space, time and energy during transmission. Still, it requires a highly developed decoding part of the chain, so that the costs of primary differentiation are not wasted and the representations remain both detailed and combined into a single, coherent model.

The riddle of how neural pathways converge while the specificity of information is maintained is resolved as follows: this technological necessity is met by the physical properties of waves, which make it possible to accommodate many streams and many patterns in one channel. One of the main advantages of the wave process, in terms of energy and information organization, is the ability to combine multiple streams in one channel (multiplexing) and to guide separate streams through different channels (demultiplexing). Contrary to previous models that treat neural circuits as wires, TTT treats them as waveguides and describes the physics of the process embodied in the fine details of the physiology of dendrites and axons (Tregub, 2021b).
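
Multiplexing and demultiplexing of wave streams can be illustrated with textbook frequency-division multiplexing: two streams share one channel at different carrier frequencies and are separated again by band-pass filtering. This is a generic engineering analogy, not a model of actual axonal signaling:

import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000
t = np.arange(0, 1, 1 / fs)
stream_a = np.sin(2 * np.pi * 10 * t)     # stream carried at 10 Hz
stream_b = np.sin(2 * np.pi * 60 * t)     # stream carried at 60 Hz

channel = stream_a + stream_b             # multiplexing: one shared channel

# Demultiplexing: band-pass filters recover each stream from the mix.
sos_a = butter(4, [5, 15], btype="bandpass", fs=fs, output="sos")
sos_b = butter(4, [50, 70], btype="bandpass", fs=fs, output="sos")
recovered_a = sosfiltfilt(sos_a, channel)
recovered_b = sosfiltfilt(sos_b, channel)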

Waves are also capable of producing the observed capacity, speed and multi-level complexity of our memory. Waves form, store, and read information not sequentially, bit by bit, but from all participants of a specific wave pattern simultaneously, within the reference wave’s clock frequency. This solves the riddle of how representations with all the intricate details of their parameter space are formed and reproduced almost instantly, without the need to accumulate and read out an average spike rate or successive spike bits along a linear chain. This is where TTT addresses the physical mechanism of BP1 and BP2 with respect not only to perception (present) but also to memory (past) and predictive model projection (future).
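
One way to picture reading a whole pattern “at once” within one reference cycle is phase coding against a shared reference oscillation: every stored element is a phase offset, and demodulating a single cycle of the reference exposes all offsets simultaneously. The sketch below is a loose illustration of that general principle only, not the physiological storage mechanism proposed by the theory:

import numpy as np

ref_hz = 40.0                                              # reference clock
stored = np.array([0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4])  # pattern as phases

t = np.linspace(0, 1 / ref_hz, 256)                        # one reference cycle
reference = np.exp(1j * 2 * np.pi * ref_hz * t)
pattern = np.exp(1j * (2 * np.pi * ref_hz * t[:, None] + stored))

# Readout: demodulate against the reference; all elements of the pattern
# are recovered simultaneously, not bit by bit along a chain.
readout = np.angle((pattern * np.conj(reference)[:, None]).mean(axis=0))
assert np.allclose(readout, stored)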

The binding problem is closely connected with the question of the neural code. The computational scheme must be in line with the physical mechanism and physiological reality. Any environmental signal is an oscillatory energy process with a certain amplitude, frequency and phase development in time; thus, the neural code has to be a complex multidimensional structure. At the same time, information density should be combined with efficiency and speed. Physically, action potentials are continuous oscillatory processes that differ in duration, amplitude and shape, and neurons demonstrate graded potentials that can provide high capacity and efficiency of the code. Nevertheless, standard neuroscience theories regard neural activity as identical discrete spikes, thus modeling the code as a digital one. Such theories assume that the information is contained either in the number of spikes within a particular time window (rate code) or in their precise timing (temporal code). The problem is that spiking-neuron models run counter to the actual efficiency and speed of the brain (Rieke, 1999).

TTT offers the symphonic neural code (SNC) hypothesis (Tregub, 2021a,c). Many researchers compare the brain to an orchestra and call neural circuits ensembles. Taking this metaphor to the level of physical analogy, TTT regards each action potential as a note of the music of the brain, with individual waveform characteristics (period, amplitude, phase). These notes form a pattern of activity of a given neuron with a precise spatial-temporal organisation, which allows it to be part of the overall brain symphony, with its melodies (frequency patterns), rhythms (phase patterns) and harmonies (the simultaneous existence of different patterns). The information density of each note (action potential) and each pause (resting potential) is very high. Thus, complex information can be encoded in a short activation/pause sequence, and even within a single cycle, giving the system as a whole tremendous computing power, efficiency and speed.

Computationally, the neural code is not digital but analog-digital: each activation/deactivation of a neuron, being a discrete unit of code, carries internal parameters as a continuous oscillatory process. The combination of such information-rich oscillations produces a representation as a wave pattern. Thus, SNC holds that the neural code is based on oscillatory and wave phenomena, just like the musical code. The model provides a detailed physical, mathematical and technological description that explains the informational, temporal and energy efficiency of the brain.
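
Taking the musical analogy to the level of data structures, a “note” can be modeled as an oscillation carrying its own period, amplitude and phase, and a wave pattern as a superposition of notes. The representation below is a hypothetical toy format invented for illustration; the theory’s own formalization is given in (Tregub, 2021a,c):

import numpy as np
from dataclasses import dataclass

@dataclass
class Note:
    freq_hz: float   # 1 / period of the oscillation
    amp: float       # amplitude
    phase: float     # phase offset, radians

def wave_pattern(notes, duration_s=0.05, fs=10_000):
    """Superpose the notes into one composite wave pattern."""
    t = np.arange(0, duration_s, 1 / fs)
    return sum(n.amp * np.sin(2 * np.pi * n.freq_hz * t + n.phase) for n in notes)

# Three "notes" already carry nine parameters in a single short pattern.
pattern = wave_pattern([Note(40, 1.0, 0.0),
                        Note(80, 0.5, np.pi / 4),
                        Note(120, 0.25, np.pi / 2)])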

The SNC hypothesis offers a new look at the binding problem, since it treats the coding process as the interaction of varying neuronal oscillations via synchronization. The term is used not in the sense of simultaneous events of identical spikes (synchrony as unison) but refers to the complex mechanism of coupling between various frequencies, which creates a harmonic structure while preserving the individual characteristics of each representation as a wave pattern. The coupling of different wave patterns gives the brain the ability to solve BP2, while the maintained uniqueness of each pattern solves BP1.

Thus, the same physical mechanism deals with both aspects of the binding problem. As the musicians of an orchestra play individual parts with different rhythmic and melodic structures that combine into one symphony, neural ensembles play the symphony of the mind. The solution to the binding problem comes from synchronization as frequency and phase coupling based on harmonious intervals. The elements of the brain’s ensemble can synchronize instantly and participate in complex, differentiated and integrated representations as wave patterns, creating a coherent model of reality and an integral Self.
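
Frequency and phase coupling between oscillators at a harmonic interval can be illustrated with a standard n:m phase-locking model (a Kuramoto-type update). In the sketch below, two oscillators at a 1:2 interval settle into a fixed phase relation while each keeps its own frequency; this is a generic dynamical illustration, not the theory’s specific equations:

import numpy as np

dt, steps = 1e-4, 200_000          # 20 s of simulated time
f1, f2 = 40.0, 80.0                # a 1:2 harmonic interval
k = 5.0                            # coupling strength
theta1, theta2 = 0.0, 1.5          # arbitrary initial phases

for _ in range(steps):
    # 2:1 phase coupling: both oscillators are nudged toward a fixed
    # relation 2*theta1 - theta2 = const (mod 2*pi).
    delta = np.sin(2 * theta1 - theta2)
    theta1 += dt * (2 * np.pi * f1 - 2 * k * delta)
    theta2 += dt * (2 * np.pi * f2 + k * delta)

# The generalized phase difference is locked even though f1 != f2.
print(np.mod(2 * theta1 - theta2, 2 * np.pi))  # ~0 (or ~2*pi): 1:2 locking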

TTT describes the fine details of the physiological implementation of the physical binding mechanism and uncovers subtle nuances of brain polyphony and polyrhythm (Tregub, 2021c). The model can be called binding-by-harmony. It also describes how the breakdown of the binding mechanism leads to the disintegration of the Self and the splitting of the reality model associated with some mental disorders (Tregub, 2021d,e).

The SNC hypothesis is based on the Theory of Energy Harmony, which explains the universal physical mechanism of binding energy oscillations with various frequencies and phase portraits into coherent structures (Tregub, 2021f,g). Thus, TTT is embedded in a broader physical framework and treats the brain not as a special substance with unknown laws of operation but as a technological device that uses the physical mechanisms existing in the universe.

Stanislav Tregub

References:

  1. James, William (1890). The principles of psychology. New York: Holt.
  2. Zimmer, H. D., Mecklinger, A., Lindenberger, U. (2006). Binding in human memory: A neurocognitive approach. Oxford University Press.
  3. Dennett, Daniel (1981). Brainstorms: Philosophical Essays on Mind and Psychology. MIT Press. ISBN 0262540371.
  4. Zeki, S. (2003). The disunity of consciousness. Trends in Cognitive Sciences. 7 (5): 214–218. doi:10.1016/s1364-6613(03)00081-0.
  5. Moutoussis, K.; Zeki, S. (1997). A direct demonstration of perceptual asynchrony in vision. Proceedings of the Royal Society of London. Series B: Biological Sciences. 264 (1380): 393–399.
  6. Hawkins, Jeff (2021). A Thousand Brains: A New Theory of Intelligence. Basic Books.
  7. Baars, B. J. (1997). In the Theater of Consciousness. New York, Oxford University Press.
  8. Dehaene, S.; Sergent, C.; Changeux, J.-P. (2003). A neuronal network model linking subjective reports and objective physiological data during conscious perception. Proceedings of the National Academy of Sciences. 100 (14): 8520–8525. doi:10.1073/pnas.1332574100. PMC 166261. PMID 12829797.
  9. Balduzzi, D.; Tononi, G. (2008). Integrated information in discrete dynamical systems: motivation and theoretical framework. PLOS Computational Biology. 4 (6): e1000091. doi:10.1371/journal.pcbi.1000091. PMC 2386970. PMID 18551165.
  10. Tegmark, Max (2016). Improved Measures of Integrated Information. PLOS Computational Biology. 12 (11): e1005123. arXiv:1601.02626. doi:10.1371/journal.pcbi.1005123. PMC 5117999. PMID 27870846.
  11. Casali, Adenauer G.; Gosseries, Olivia; Rosanova, Mario; Boly, Mélanie; Sarasso, Simone; Casali, Karina R.; Casarotto, Silvia; Bruno, Marie-Aurélie; Laureys, Steven; Massimini, Marcello (2013). A Theoretically Based Index of Consciousness Independent of Sensory Processing and Behavior. Science Translational Medicine. 5 (198): 198ra105. doi:10.1126/scitranslmed.3006294. PMID 23946194.
  12. Pautz, Adam (2019). What is Integrated Information Theory?: A Catalogue of Questions. Journal of Consciousness Studies. 26 (1): 188–215.
  13. Llinas, R., Ribary, U., Contreras, D., Pedroarena, C. (1998). The neuronal basis for consciousness. Phil. Trans. R. Soc. Lond. B 353, 1841–1849.
  14. Freeman, W. J. (1978). Models of the dynamics of neural populations. Electroencephalogr. Clin. Neurophysiol. Suppl. 34, 9–18.
  15. Gray, C. M., Singer, W. (1989). Stimulus-specific neuronal oscillations in orientation columns of cat visual cortex. Proc Natl Acad Sci U S A. 86 (5): 1698–1702.
  16. Singer, W. (1999). Neuronal Synchrony: A Versatile Code for the Definition of Relations? Neuron. 24 (1): 49–65.
  17. Thorpe, S. J. (1990). Spike arrival times: A highly efficient coding scheme for neural networks. In R. Eckmiller, G. Hartmann & G. Hauske (Eds.), Parallel processing in neural systems and computers (pp. 91–94). North-Holland Elsevier.
  18. von der Malsburg, C. (1999). The what and why of binding: the modeler’s perspective. Neuron. 24 (1): 95–104, 111–125. doi:10.1016/s0896-6273(00)80825-9. PMID 10677030.
  19. Thiele, A.; Stoner, G. (2003). Neuronal synchrony does not correlate with motion coherence in cortical area MT. Nature. 421 (6921): 366–370. doi:10.1038/nature01285. PMID 12540900.
  20. Dong, Y.; Mihalas, S.; Qiu, F.; von der Heydt, R.; Niebur, E. (2008). Synchrony and the binding problem in macaque visual cortex. Journal of Vision. 8 (7): 1–16. doi:10.1167/8.7.30. PMC 2647779. PMID 19146262.
  21. Senkowski, Daniel; Schneider, Till R.; Foxe, John J.; Engel, Andreas K. (2008). Crossmodal binding through neural coherence: implications for multisensory processing. Trends in Neurosciences. 31 (8).
  22. Levine, J. (1983). Materialism and Qualia: the Explanatory Gap. Pacific Philosophical Quarterly. 64 (4): 354–361.
  23. Chalmers, D. J. (1995). Facing up to the Problem of Consciousness. Journal of Consciousness Studies. 2 (3): 200–219.
  24. Tregub S. (2022) Solving the hard problem of consciousness by asking the right questions. DOI:10.13140/RG.2.2.25607.32164
  25. Tregub, S. (2021a). Algorithm of the Mind: Teleological Transduction Theory. Symphony of Matter and Mind. Part Four. ISBN 9785604473948 www.stanislavtregub.com
  26. Sarpeshkar, R. (1998). Analog Versus Digital: Extrapolating from Electronics to Neurobiology . Neural Computation. 10, 1601–1638.
  27. Tregub, S. (2021b). Technologies of the Mind. The Brain as a High-Tech Device. Symphony of Matter and Mind. Part Five. ISBN 9785604473955 www.stanislavtregub.com
  28. Rieke, F. (1999). Spikes: exploring the neural code. Cambridge, Mass.: MIT Press. ISBN 0-262-68108-0.
  29. Tregub, S. (2021c). Harmonies of the Mind: Physics and Physiology of Self. Symphony of Matter and Mind. Part Six. ISBN 9785604473962 www.stanislavtregub.com
  30. Tregub, S. (2021d). Inner Universe. The Mind as Reality Modeling Process. Symphony of Matter and Mind. Part Seven. ISBN 9785604473979 www.stanislavtregub.com
  31. Tregub, S. (2021e). Dissonances of the Mind. Psychopathology as Disturbance of the Brain Technology. Symphony of Matter and Mind. Part Eight. ISBN 9785604473986 www.stanislavtregub.com
  32. Tregub, S. (2021f). Music of Matter. Mechanism of Material Structures Formation. Symphony of Matter and Mind. Part One. ISBN 9785604473917 www.stanislavtregub.com
  33. Tregub, S. (2021g). Theory of Energy Harmony. Mechanism of Fundamental Interactions. Symphony of Matter and Mind. Part Two. ISBN 9785604473924 www.stanislavtregub.com

DOI: 10.13140/RG.2.2.14044.46724