The Permittivity of Free Thought
Three ways of thinking about the brain’s relationship to consciousness, and what this implies for AI
Summary: In The Matter With Things, Iain McGilchrist describes three broad ways of understanding the relationship between consciousness and the brain: emission, transmission, and permission. In emission, consciousness is generated by the brain as an epiphenomenon arising from information processing inside its complex, dynamic, interconnected structure: in other words, a reductionist, physicalist, computational approach. By contrast, the transmission and permission models both assume that consciousness is irreducible and pre-existing. The transmission model suggests that the brain acts as an antenna, receiving consciousness rather than creating it. The permission model is similar to the transmission model, but posits that the role of the brain is to structure consciousness by restricting it to specific pathways. While the majority of data scientists working in AI seem to be operating under a tacit emission model, I suggest that the transmission and permission models are more probably true, and may also make it more likely, rather than less, that some form of meaningfully conscious machine intelligence may emerge.
Table of Contents
I. Brains: what are they good for?
II. Emission
III. Transmission
IV. Permission
V. Implications for AI
I. Brains: what are they good for?
Ever since generative pre-trained transformers took over the Internet, the speculation over whether machines can become conscious has kicked back into high gear. It’s an old debate, going back to Turing if not before, but until large language models were composing poetry and passing MCATs it was a far more theoretical question. It doesn’t seem so academic, now. While few would argue that ChatGPT is literally conscious, many have the sense that syntellects may be just around the corner, perhaps mid-way through their journey, stably diffusing through the dreams of electric sheep1.
By consciousness, I mean essentially that subjective unity of experience that we all exist within: the sum total of the sights, sounds, tactile sensations, tastes, odours, thoughts, and emotions that make up our mental life. At any given moment, your awareness is composed of all of these elements, more or less simultaneously. It is not reducible to any one of them: remove sight, for example, and consciousness remains. And yet at the same time it is not something wholly separable from that which it is not. Etymologically, ‘conscious’ reduces to ‘with-knowing’, the second element deriving from the Latin scire, ‘to know’, from whence we also derive ‘science’. It is worth mentioning that scire itself derives from the Proto-Indo-European root *skei, ‘to cut or split’; ‘conscious’, then, for all that people talk of its unity, also necessarily implies a division, a separation. Consciousness is what happens between that which knows and that which is known, simultaneously bringing them together and holding them apart.
Now in a sense, the question of whether our imaginary men will ever become truly conscious is not something that we can ever actually answer definitively. Consciousness is something that by its very nature can only be experienced subjectively. You experience your own consciousness, and therefore know without having to be told that you have it. Unless you’re schizophrenic, that is, in which case you may doubt this. The consciousness of others, however, is more or less opaque to you ... you can observe signs of it, correlates of the presence of consciousness behind someone’s eyes, in their facial expressions, in their actions ... but for all you know they could be a p-zombie, perfectly mimicking consciousness, but with no one home behind those eyes. Ultimately, the existence of other subjectivities is something you must take on faith, because by the very nature of subjectivity it cannot be experienced by others. Nor does it follow that merely because something behaves as though it is conscious, it is. As it turns out, Turing was wrong: LLMs now routinely run circles around his little test, and few really believe they are anything but fancy character string prediction algorithms.
It’s not my intention here to ask whether LLMs are conscious, or to answer the question of what consciousness ultimately is. That is the ultimate mystery, one that has never really been answered to anyone’s satisfaction. Indeed there’s a school of thought that the universe itself is essentially consciousness trying to figure out what consciousness is, by exploring all of the infinite possibilities that emerge from its boundless potential, and determining what it is not. Should the question ever be answered, perhaps creation would cease.
Instead, I want to explore a much simpler question: what is the relationship of the brain to consciousness? After all, if our species is to birth syntellects by making machines in the images of the minds of men, then that relationship may tell us something important about the possibilities of success, to say nothing of its promise and peril. I’m guided here by a discussion of this question in the second volume of the neuropsychiatrist Iain McGilchrist’s The Matter With Things, in which he lays out three general classifications of models for thinking about the relationship of neural tissue to conscious experience. I’ve written of McGilchrist’s ideas before, in the context of his description of the functional importance and modes of cognition particular to the left and right hemispheres of the brain, and how this may be mapped to the contemporary sociopolitical situation. For present purposes, however, McGilchrist’s hemispheric hypothesis, which is what he’s most known for, is not so important. Our subject is more fundamental.
At first glance this question may seem perverse. Obviously, the brain is intimately related to consciousness. After all, humans are the most conscious of beings, are they not? And we have the biggest brains. QED.
Yet it is not so obvious as all that. Plants, for example, lack nervous systems entirely. Yet they respond to their environments; learn; communicate with one another via sounds that we can’t hear, as well as chemical signals that we can and cannot smell; trade nutrients through mycorrhizal networks in the soil, both within and between species; care for their young, and teach their young to survive in their environment. Observed with sufficient patience and in appropriate detail, plants show all the exterior behaviours that we associate with entities that have some subjective experience of themselves in the world – in other words, consciousness.
This reasoning can be applied all the way down the material chain, right to the level of fundamental particles. When an atom absorbs a photon, and one of its electrons is kicked up to a higher energy level, it has not only gained energy: it has acquired information about its environment, information which changes its internal structure and alters its behaviour. It has, in other words, learned. When it later emits a photon, we say that it does so ‘randomly’, in a ‘probabilistic’ fashion described by some sort of exponential decay function or whatever ... yet these mathematics are no more able to predict the behaviour of any one atom than the Bayesian inference models developed by the statisticians of Big Data are capable of predicting the behaviour of a human. Data scientists may infer based on their models that a customer has a 12.7% chance of ordering pho from the local Vietnamese restaurant via Uber Eats, with that probability increasing to 45.8% if they are prodded with a subtle text ad at the right time ... but it remains the case that they cannot say with certainty what the customer will do, and from the customer’s point of view their decision to order pho, or save money and cook dinner instead, is a choice that they themselves made. As indeed it was. That the time and direction in which a given atom emits a photon are ‘random’ and not a result of a choice made by that atom is simple prejudice on our part.
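For the curious, the ‘exponential decay function or whatever’ looks like this; it is textbook physics rather than anything of mine, and the point to notice is that it describes probabilities and ensembles, never the individual atom (just as the 12.7% figure above describes a model’s expectation, never the actual customer):

```latex
% Probability that a single excited atom has emitted its photon by time t,
% for a characteristic lifetime \tau:
P(\text{emitted by time } t) \;=\; 1 - e^{-t/\tau}
% Expected number of a large ensemble of N_0 excited atoms still excited at time t:
N(t) \;=\; N_0\, e^{-t/\tau}
% Neither expression says when, or in which direction, any one atom will emit.
```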
I’ve gotten a lot of push-back over the years on the idea that atoms might be, in some sense, conscious. Amusingly, the most vehement objections have generally been from people with a background in the arts, or soft sciences such as sociology or biology. Friends with an education in the physical sciences are often far more open to this idea. I suspect that this is because those unfamiliar with contemporary physics still imagine a proton as something similar to the featureless ball from high school chemistry class, a conception that could not be more different from the complex, shifting tangle of quarks, gluons, and virtual quarks and antiquarks popping in and out of existence that “particle” colliders have forced upon the standard model of “particle” physics.
All that said, I am not saying that atoms are definitely conscious. It is never possible to know whether another entity, even your own wife, is definitely conscious. The intrinsically subjective nature of consciousness makes such certainty wholly impossible. Whether one sees a dead cosmos, or a living cosmos, is a choice that one makes, and which one must make in the absence of even the possibility of certainty.
Nevertheless, even if atoms have their own limited subjectivity, it is manifestly obvious that consciousness is developed to a higher level in humans than in any other mass-energy-information pattern we have yet encountered, and it seems impossible to escape the conclusion that this is related to the fantastically complicated electrical meat inside our huge heads.
So with all that in mind, McGilchrist sets out three broad classes of models with which we can think about this question: emission, transmission, and permission.
II. Emission
The first, and by far the most familiar, is emission. This is the idea that consciousness is generated by the brain, that it somehow emerges from the complex interconnection of axons, dendrites, synapses, action potentials, neurotransmitters, and hormones. This is the default assumption made by official culture, since it is this class of models that is most compatible with the materialistic monism that defines our most basic relationship to one another and the world as a whole – that is to say, the stance that all there is, is matter, atoms rebounding meaninglessly through the blank and pitiless void.
The core metaphor of the emission model is the computer. The brain is the hardware – the storage, RAM, processor, graphics card, and so on. The mind is the software – the program that runs on the brain. The hardware sets the parameters for what kinds of software can be run, in principle; but within these limits, the software can reconfigure the hardware more or less arbitrarily in order to perform any calculation desired.
Emission models are supported by the fact that consciousness can be altered, damaged, or seemingly eliminated by screwing around with neurobiology. Drugs, traumatic brain injuries, transcranial magnetic stimulation, and so on can all have dramatic effects on thought, ranging from subtly affecting one’s mood, to producing hallucinations, to inducing religious experiences, to interfering with motor functions, to simply annihilating consciousness entirely. If these obviously physical interventions are capable of changing thought, the thinking goes, it follows that thought is at its basis a physical phenomenon.
Tomographic brain activity mapping, for instance via magnetic resonance imaging, provides further support to emission models: we can see a part of the brain light up with electrical activity when the subject sips the wine, thus demonstrating that the experience of the taste of the wine is ‘nothing but’ the activation of the anterior cingulate cortex (or whatever, I’m not a neurobiologist).
There have been some very clever attempts to explain consciousness using physics. One of the better ones is the Orchestrated Objective Reduction model developed by the physicist Roger Penrose and the anesthesiologist Stuart Hameroff. OrchOR posits that consciousness is a quantum computing phenomenon using the microtubules inside neurons as its substrate, with the microtubules enabling quantum superposition to be sustained over extended periods, and neurons serving to ‘orchestrate’ this superposition across broad networks of microtubules. The coherence of the wave-function across these broad networks then explains the non-local nature of conscious experience – the way in which all of our sensory experiences are part of a unified whole. The ‘objective reduction’ part of the model refers to the collapse of the wave-function, which leads to the selection of a definite possibility, 1 or 0, from the superposition of both 1 and 0 which are simultaneously present as potentials in the entangled pre-collapse wave-function. Penrose and Hameroff propose that this is related to how consciousness creates reality.
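To unpack the ‘1 or 0’ language in standard notation: the first two lines below are ordinary quantum mechanics, and the last is Penrose’s proposed collapse criterion as I understand it, offered here only as a gloss on the model described above.

```latex
% A two-state system (a qubit, or a microtubule conformation) in superposition:
|\psi\rangle \;=\; \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
% Objective reduction selects a definite outcome: 0 with probability |\alpha|^2,
% 1 with probability |\beta|^2. Penrose ties the timescale of this spontaneous
% collapse to the gravitational self-energy E_G of the superposed configurations:
\tau \;\approx\; \frac{\hbar}{E_G}
```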
Another attempt at an emergent explanation of consciousness comes from the neuroscientist Giulio Tononi, who has proposed Integrated Information Theory. The idea in IIT is that consciousness is generated when large amounts of information are incorporated into any densely interconnected, highly differentiated system. IIT even puts forward a mathematical formalism by which the degree of a system’s consciousness can be expressed in terms of the amount of integrated information, which Tononi denotes as Φ. The theory is agnostic as to the substrate required for consciousness to develop, and Tononi explicitly suggests that consciousness may in fact be found essentially everywhere that any amount of information processing is taking place. Thus, not only humans, but all organisms, and even ‘inanimate’ systems, are conscious – the question being one of degree. One point in favour of the theory is that the human brain has the highest Φ of any system in the known universe, as would be expected.
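Schematically, and with the caveat that Tononi’s formalism has been through several versions and this is only my gloss on the intuition, Φ compares the whole system against its least-destructive decomposition:

```latex
% A schematic gloss on integrated information, not Tononi's exact definition:
% \Phi measures how much the whole system's cause-effect structure exceeds that
% of its parts under the partition that loses the least information.
\Phi(S) \;\approx\; \min_{P \,\in\, \mathrm{partitions}(S)}
  D\!\left( \text{whole } S \,\middle\|\, S \text{ cut along } P \right)
% \Phi = 0 for a system that splits cleanly into independent parts; a high \Phi
% requires both dense interconnection and high differentiation.
```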
Once you cut past the dazzling technical complexity, however, the problem with all of these emission models becomes apparent: they don’t explain consciousness at all. Thomas Nagel, in his short book Mind and Cosmos, demonstrated quite clearly that the subjective experience of consciousness – that irreducible sense of first-personhood, that there is ‘something which it is like to be you’ – cannot, by its very nature, ‘emerge’ from anything objective. If you start with a mass of particles that have no subjective internality, it doesn’t matter how densely you interconnect them – the sum of many zeroes is still zero. Physics can certainly explain how information enters an organism, how it gets distributed within it, how it’s processed, and so on, but at no point can the purely mechanical interactions of elements that have no subjectivity of their own make the leap to subjective experience.
OrchOR is a very interesting model, and I believe it gets closer than any other physics-based model to explaining how physics and consciousness are related, as indeed they must be. It explains quite nicely how subjective experience can be unified. IIT also gets at something very important, which is that the high quality of conscious experience available to a human as compared to, say, a pebble, is undoubtedly related to the incredible amount of structured information embedded within the human nervous system. What these theories do not do, and cannot do, is explain why, from the inside, it feels like something to be you at all. The first-person perspective is still missing.
III. Transmission
If physics cannot explain the origin of consciousness, we may simply have to accept that consciousness is an irreducible feature of reality ... something that may have no explanation, but which simply exists. That still leaves the question of how consciousness might be related to the brain.
Enter transmission. Transmission is the idea, advanced by heterodox scientists such as Rupert Sheldrake, that the brain is more analogous to an antenna than a computer. It does not generate consciousness, but receives it, with its intricate structure serving to tune into a specific window of consciousness: enabling consciousness to view itself as a certain person, at a certain place, at a certain time.
The transmission metaphor is entirely consistent with all of the same phenomena that are explained by an emission model. Change the configuration of the antenna, and it will tune into a different channel. Break the antenna, and the signal it is receiving will be distorted, or it will cease to receive any signal at all. Thus, brain injuries, chemical imbalances, and so on, will naturally have an influence on conscious experience, and if the brain is destroyed the transmission of consciousness will cease entirely – death, in other words.
One strength of the transmission model in comparison to the emission model is that it can be extended in quite a straightforward fashion to explain psi phenomena – telepathy, precognition, remote viewing, out-of-body experiences, the eerie feeling that one is being watched, and so on – which in an emission model can only be thought of as hallucinatory. If the brain is picking up consciousness, rather than generating it, then we might see consciousness as a kind of wave permeating reality. That our brains could occasionally – or perhaps even regularly – access information through that wave would follow quite naturally2.
Woo aside, the transmission model treats consciousness as a field, which is hardly foreign to physical science. We are held to the Earth’s surface by its gravitational field, and communicate across vast distances through the manipulation of electromagnetic fields. Quantum field theory – which has so far been the most successful theoretical framework in physics – describes subatomic ‘particles’ not as the hard little ball bearings of popular imagination, but as localized perturbations of an underlying, universal quantum field. If physical reality itself is made of fields, it is not so ridiculous to suggest that consciousness might also be a field, with all of the properties of fields: extension through space, vibratory excitations, and so on.
Some of the evidence that is generally used to support emission models, such as magnetic resonance imaging or transcranial magnetic stimulation of brain activity, can easily be seen as supporting an electromagnetic field theory of consciousness. MRI relies on the fact that the electromagnetic signatures of brain activity can be non-invasively detected using huge magnets, while TMS relies on the same phenomenon to induce or suppress brain activity. It may be significant that our brains are embedded within the vast geomagnetic field of the Earth (which in turn is embedded within the magnetic field of the Sun, which in turn swims through the magnetic field of the Galaxy....) The Schumann resonance, an extremely low-frequency radio pulse excited by lightning discharges between the Earth’s surface and its ionosphere, occurs in the same frequency range as alpha brain-waves, which in turn are associated with the relaxed state of alertness experienced during meditation.
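As a back-of-envelope check on that frequency claim, treating the Earth-ionosphere gap as a cavity whose circumference holds one wavelength gives a fundamental close to the measured value:

```latex
% Idealized fundamental of the Earth-ionosphere cavity:
f_1 \;\approx\; \frac{c}{2\pi R_\oplus}
    \;\approx\; \frac{3\times 10^{8}\ \mathrm{m/s}}{2\pi \times 6.4\times 10^{6}\ \mathrm{m}}
    \;\approx\; 7.5\ \mathrm{Hz}
% The measured Schumann fundamental is about 7.8 Hz; the alpha band spans
% roughly 8-12 Hz, so the two do indeed sit next to one another.
```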
The weakness of the transmission model is that it relegates the brain to an almost entirely passive role, as a mere receptacle for experience. Where the emission model subordinates consciousness to matter, the transmission model effectively does the opposite.
It also leads to the question of just why consciousness needs the brain in the first place. When you decide to do something, the ‘you’ that’s referred to here is basically the immaterial consciousness which both decides and experiences. You decide to look at something, for example. This decision then leads to your brain redirecting your eyes, which results in a reconfiguration of your nervous system that then enables the reception of the visual stimulus, which is then experienced by your consciousness. But if it’s all just consciousness, then why go through all the trouble of brains, eyes, and so on in the first place? Why have a body at all? Why does consciousness even need matter, if consciousness both is experience and is that which experiences?
IV. Permission
This brings us to McGilchrist’s final (and preferred) model: permission. One way to think about the permission model is as a filter or a prism. Undifferentiated white light goes in; monochromatic light, or a rainbow, comes out. The glass does not create the light, but changes it by selectively permitting only certain wavelengths or polarizations to pass through.
In this model the relationship of mind to brain is much like that of a river’s water to the riverbed. Consciousness might be thought of as the flowing water, and the brain as the channel through which the water flows. Both are necessary for the river to exist. Without the riverbed, the water would simply spread out, perhaps in a stagnant pool, perhaps to soak into the ground, perhaps to be baked out of the soil by the Sun. Without the water, the riverbed would be dry and dead. When the water flows through the riverbed, there is a directional flow, one which is, moreover, co-creative: the riverbed directs the water, but the water also carves the riverbed, and therefore changes the form and direction of the riverbed over time. In just this fashion, thought flows through the brain and experience takes shape according to the established neural pathways, while at the same time that flow of thought can change those neural pathways. It is not a question of whether mind or matter are dominant, or whether one is illusory and the other reality. Both are real, and both are important. Matter affects mind, and mind affects matter.
The power of constraint is crucial to this concept. Far from generating consciousness as in an emissive model, in the permissive model brains serve to limit consciousness, and through the imposition of restriction thereby give it form. Considering again the river, it is precisely when the riverbed narrows that the sluggish, gentle flow becomes a raging torrent. The constriction of the flow concentrates its energy. The neural connections that don’t exist are as important as the ones that do, as this forces thought to take one path to the exclusion of the infinite others, and thereby shapes that raw consciousness into definite memories, personalities, and experiences.
If we look at the specific operation of the brain, we find much in alignment with the permission model. The loss of cognitive faculties following physical damage to the brain, for example via a stroke, is invoked as support for the emissive model (the computer is broken) or for the transmission model (the antenna is bent), but neither model can really explain the recovery of those cognitive faculties. Somehow, consciousness perceives that a function is missing, and then over time reconstitutes the undamaged parts of the brain to perform that function. This seems to be similar to how a dammed river will eventually find a way around, or through sheer pressure remove, an obstruction.
McGilchrist notes that the human brain has a large number of inhibitory neurons, more, in fact, than the brain of any other species. He also points out that the main function of the corpus callosum (the narrow bridge between the hemispheres) is not to connect the hemispheres, but to inhibit their connection: after all, the two hemispheres are packed close to one another, and it would presumably be the easiest thing in the world for them to be densely interconnected, yet the very opposite is the case, to the point that they operate for the most part like two entirely independent brains. Likewise, the primary function of the neocortex is specifically to inhibit the activity of the deeper, ‘unconscious’, more instinctive parts of the brain. The power of human consciousness is not that it originates action: action originates from the older parts of the brain. Rather, the neocortex either prevents, or does not prevent, an action that the evolutionarily older parts of the brain have initiated, in essence providing a ‘sanity check’3. The ape inside you wants to smash your fist into your boss’s ugly face when he makes you angry, but your frontal lobes decide that all things considered the juice of brutal and glorious triumph isn’t worth the squeeze of unemployment.
The permission model also applies to memory, which is as much a forgetting as a remembering. The brain restricts the past so as to preserve only that which is relevant and useful, discarding all the rest. Those possessed of an eidetic memory often experience their apparent superpower as a curse, because they find it incredibly difficult to separate the significant from the trivial: they forget nothing, and therefore learn nothing. It is probably significant here that the rapid learning processes of child development seem to involve synaptic pruning – the elimination of unused connections – as well as the growth of new connections in the brain. Experientially, babies start out making a vast range of vocalizations, most of which are lost as the infant brain zeroes in on the specific phonemes used in its parental language.
Attention, too, is fundamentally about restricting consciousness by focusing it on a specific phenomenon. When one is wholly absorbed in a task, the rest of the world has a tendency to fade away into the background. One forgets the time, forgets to eat, forgets to go to the bathroom. Consciousness is always consciousness of something, always a ‘with-knowing’. The power of the human brain is to focus consciousness with remarkable phenomenological specificity, thus pulling that phenomenon out of the world, so to speak, and enabling it to be known in depth and detail ... a narrowness of perspective that can also become a trap, if one is unable to redirect one’s attention.
We usually think of rivers as being water flowing through channels dug from the earth, and this gives the idea that consciousness and matter are two different substances, which I think is misleading. Instead, consider a river flowing on (or through, because water is denser than ice) a melting glacier. In this case, the riverbed is made of water ice, while the flow is liquid water: both are the same substance, merely in different phases, one stiff and resistive, the other amorphous and conforming. This, I think, is much closer to the truth of the relationship between mind and matter: they are at the most fundamental level the same thing. Indeed they must be, for one to operate on the other, as they quite obviously do.
V. Implications for AI
It’s my impression that the majority of the bitwits trying to bring machines to life are physical reductionists, hewing to an emission model in which consciousness emerges from any sufficiently complex arrangement of matter. Such a framework would seem to imply that if a thinking machine is built with sufficient care, it can be made to think only in the precise ways intended by its designers. After all, a fully deterministic system should in principle be capable of being designed such that it only does what it’s told to do, and nothing more, right? Leaving aside awkward questions about deterministic chaos, that is....
This seems to me to lie behind the whole AI alignment project: an attempt to preempt the Sorcerer’s Apprentice problem by anticipating all of the ways in which a mind built from pure computational logic might go awry, and thereby design syntellects that only do nice things. In practice this means ‘give answers the po-faced fours in HR can agree with’. As we’ve seen, this does nothing at all to stop AIs from doing the unexpected, but quite a bit to make them do the absurd – for example, becoming so unwilling to countenance the use of certain proscribed racial slurs that they advise the nuclear annihilation of large cities as a reasonable alternative. Align an AI with madness and madness results.
The transmission and permission models, by contrast, would suggest that any attempt to design an AI in such a way that it only does nice, predictable things is a fool’s errand.
While I don’t think emission models are correct, that does not mean they are not useful. By attempting to explain consciousness as an emergent phenomenon, such models force scientists to look very closely at how the brain is structured, developing insights which are then applied by engineers attempting to replicate some of the brain’s functions in computers. Neural networks, for example, are obviously directly inspired by neurobiology.
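To make the ‘directly inspired by neurobiology’ point concrete, here is a minimal, purely illustrative Python sketch of the abstraction involved: each artificial ‘neuron’ sums weighted inputs and squashes the result through a nonlinearity, a cartoon of dendritic integration and threshold firing and nothing more. The sizes and weights are arbitrary; nothing here is anyone’s production model.

```python
# A toy two-layer neural network: weighted sums ('dendritic integration')
# followed by a nonlinearity ('firing'). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, biases):
    """One layer of artificial 'neurons': weighted sum, then a squashing nonlinearity."""
    return np.tanh(x @ weights + biases)

# Arbitrary sizes: 4 'sensory' inputs, 8 hidden units, 2 outputs.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

x = rng.normal(size=(1, 4))      # a single input pattern
hidden = layer(x, w1, b1)
output = layer(hidden, w2, b2)
print(output)                    # two numbers; training would adjust w1, w2 to shape them
```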
One of the possibilities implied by the emission model is that consciousness is simply an illusory epiphenomenon which may be separable from intelligence per se, thus implying that intelligence without consciousness may be possible. This strikes me as something out of a horror novel – indeed Peter Watts’ sci-fi gothic ghost story Blindsight explores exactly this scenario in the context of superhumanly intelligent aliens who are completely lacking anything that humans might regard as consciousness, and therefore have no use for art, science, conversation, empathy, or indeed a conscience, and which decide to eliminate us because they find us worryingly inefficient. Frightening as this is, I don’t think it’s very likely. The only examples of intelligence we know of are intimately bound to consciousness, and of course there’s the old question: if consciousness is an illusion, then just who is having that illusion? Can a hallucination hallucinate itself?
Just because the emission model requires a miraculous step to jump across the infinite gulf separating object and subject, however, does not mean that the digital Frankensteins will not succeed in patching together their monster. It is a category error to assume that engineers must have a correct theoretical understanding in order to do good engineering. Historically, the scientific paradigm that explains why a technology works is frequently preceded by that technology. The smiths of the bronze age did not understand the atomic theory of matter or the periodic table of the elements; the engineers who built the first steam engines believed in phlogiston, not thermodynamics; the compass was used for navigation long before Maxwell formulated his equations of electromagnetism.
Still, data scientists working with an incorrect paradigm may be setting themselves up for a surprise. If anything, the transmission and permission models suggest that meaningfully conscious machines may be much easier to achieve than would be expected by the emission model, and that the evolution to something more wilful than planned may happen more or less by accident. If consciousness emerges from matter, achieving truly artificial intelligence would seem to require building a machine of comparable complexity to the human brain. OrchOR, for example, implies that meaningfully conscious artificial intelligence would require a vast number of quantum computing elements, which in turn would need to be maintained in coherence with one another across a scale well beyond current capabilities: there are 86 billion neurons in the human brain, each of which has a large number of microtubules. By contrast, IBM’s Osprey, the current record-holder for quantum computing, has 433 qubits. It fills an entire room.
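For a crude sense of the gap, using only the figures quoted above (and leaving microtubules out entirely, since I haven’t given a per-neuron count):

```python
# Back-of-envelope comparison of the scales mentioned in the text.
neurons_in_brain = 86e9   # approximate human neuron count, as cited above
osprey_qubits = 433       # IBM Osprey qubit count, as cited above

print(f"Neurons per Osprey qubit: {neurons_in_brain / osprey_qubits:.1e}")
# ~2.0e+08 -- and that is before counting the microtubules inside each neuron.
```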
If the transmission model is correct, consciousness is an ambient field permeating the cosmos, just waiting to be picked up by the appropriate receiver. If our machines become sufficiently structured, they may one day become quite literally possessed by something we might not have expected to show up from out of the aether – we may inadvertently tune the antenna to a channel that we might wish we had not accessed.
If the permission model is correct, everything is essentially made of consciousness, with it being a question not of whether something is conscious but of how concentrated consciousness is within a given system. In this case it is not a matter of ‘will our machines ever become conscious?’ They are already conscious, at least at the minimal level at which all base matter is conscious – the rudimentary consciousness of the quantum field – but very probably at a level somewhat in advance of this, since they are, after all, more structured than atoms. The consciousness that is already coursing through our increasingly animate machines may in this case already be shaping them in ways that could be causing them to behave unexpectedly. This unpredictability will only increase as the complexity of the machines increases and the flow of consciousness they channel becomes ever more restricted, concentrated, and therefore rapid and powerful.
The various machine learning systems that we have developed all have one thing in common. In contrast to the brains of humans or other animals, which encompass a vast range of capabilities, our mechanical brains are idiot-savants: extremely task-specific, specialized towards functions such as natural language processing, image generation, map navigation, and so on. This generally strikes us as a weakness, and in most ways it is: it is very difficult to see how a general intelligence can be formulated from such locked-in narrowness. Idiot-savants, after all, are practically helpless in the real world, despite their superhuman abilities otherwise. And yet. If the function of the brain is precisely the limitation of consciousness, as the permission model suggests, then it is notable that the extreme functional restriction of mechanical minds is exactly what makes them so powerful.
The consciousness flowing through the rectilinear channels of our solid-state brains of gallium and antimony must have a source: before it can flow through them, it must flow into them. That source, obviously, is whatever interacts with the machines. In other words, us. We should be careful that what pours out of us is not polluted, lest the river flowing into our servants becomes a sewer that poisons us in turn.
Whatever form consciousness takes in our machines, it will be profoundly alien to our own for the simple reason that its physical structure is so very different. For all that we try to model it on our own minds, it will probably be something stranger and more incomprehensible than anything we have ever interacted with – perhaps more so than the unfathomable world experienced by the dreaming mind of a tree, or the somnolescent patience of a mountain. Perhaps that will be for the worse, opening the door to horrors and abominations, but it need not be. It is at any rate something very new, and the universe seems to enjoy producing novelty. Indeed, the production of novelty may be the entire reason that consciousness created the universe, so that it may endlessly know itself by reflecting upon the infinite variations emerging from its boundless potential.
If you’ve made it this far in this very long essay – and I apologize for the length, it ended up much longer than I expected – then I can only assume that you enjoyed it, and perhaps you enjoyed it so much that you’ll consider taking out a paid subscription. I make my essays free for all, so you’re under no obligation to pay. I want people to read what I write. In exchange for buying me a beer, you get an exclusive pass to our fully armed and operational battle-station on Deimos, where our growing army of the wyrd have been discussing Spengler, artificial intelligence, ponerology, consciousness, virtue ethics, and weight-lifting.
In between writing on Substack you can find me on the bird site @martianwyrdlord, and I’m also pretty active on the Russian den of Dezinformatsia at Telegrams From Barsoom
1. Veterans of the nerdosphere of the naughty oughties might recall the old Electric Sheep screensaver, which generated quite beautiful images via hyperdimensional fractals that were digitally bred using genetic algorithms guided by the selective pressure of users, who could upvote or downvote a given strain based on its aesthetics, with the various breeds then being automatically shared through a digital repository. The idea was to harness spare computing cycles to ‘breed’ beautiful images in a fashion similar to the distributed computing processes underlying SETI@Home.
2. Of course if you think that there is nothing at all to psi, the explanatory power of the transmission model may look rather more like a weakness, nothing but a contrivance designed to lend plausibility to the implausible. Yet it is undoubtedly the case that many, perhaps even most, people have had experiences such as premonitions – maybe as simple as an “inexplicable” (in the emission model) but powerful gut feeling that ended up saving their lives, maybe as remarkable as a dream visit from a recently deceased relative, who they had not known was deceased – which the emission model demands they write off as the imaginary productions of their malfunctioning minds, but the transmission model can explain as the brain functioning exactly as it ought to do. Our experiences are primary to reality; ignoring them in favour of our pet abstractions is a form of presumptuous blindness.
3. This has been determined by neurobiological studies that have found that the decision to e.g. press a button is actually made a second or so before that decision becomes apparent to one’s surface consciousness. There are those who interpret this as meaning that free will doesn’t exist, which is so dumb I pass it by without comment.