(This is the eighth post in a series about our precarious existence “Between the Falls”. If you haven’t read any of the previous ones, you might want to go all the way back and begin with the first, which explains the idea behind this series, but here it is in a nutshell: technological progress and the move towards transhumanism have us on the precipice of a second great fall of man, the first one being that famous, symbolic or literal, bite of apple that drove us out of our primal state of ignorance and grace. In this post I discuss Donald Hoffman’s “headset” theory of consciousness and the broader idea of Not Even Right theories. I don’t want to give too much away, but there will be talk of Flatland, Death Stars, device drivers, Gödel, Black Mirror, and the Law of Leaky Abstractions.)
Reasoning about our transhuman future requires that we first understand our human present, at least at the meta level, as in, What kind of world is our self-consciousness embedded within? To be able to see truth, we need to account for the inherently distortionary nature of the lenses we are looking through. What would reality look like if we could undo the effects of the filters we put on it? And yes, this is why my Substack and occasionally active podcast are called “The Filter.”
If we start with the premise that, as humans, our current lenses reveal only a fraction of reality, and that reality comes to us in a highly processed form, then it’s possible the second fall of man could come with a broader reveal of the truth about our world, the way that putting scotch tape on top of frosted glass sometimes turns it transparent. If you examine the extent to which scientific advances have already allowed us to perceive a much broader spectrum of reality, this would seem to be just extrapolating a trend. I spoke about that possibility in the fifth post in this series, “Transhumanism and Sensing the Invisible.”
As transhumans, we will see, process, and interact with the world in a way that is technologically assisted at all times, because our brains will be chipped and our eyes fitted with an AR or VR overlay. It will be as if we decided to put on a headset that embeds us into another world, one that may or may not get us closer to seeing truth in a less distorted, less filtered, way. This view of what we will become is similar to cognitive psychologist Donald Hoffman’s “headset” theory of consciousness, which I mentioned briefly in that same fifth BtF post.
Putting on the headset
I find Hoffman’s theory suspect, for reasons I’ll examine, but it’s compelling enough that it has to be taken seriously, especially now that we are beginning to put on a headset of our own creation. In Hoffman’s theory everything, absolutely everything, we perceive is merely an interface that exists to give us an evolutionarily useful view of underlying data structures that would be impossible for us to handle directly. Think about the icons on your computer desktop. To delete a file, you drag its icon to the trash. But every part of that action is symbolic, a mere analogy. The icon isn’t the file, and the trash can has no direct role in destroying the file. Dragging a file icon to the trash icon is our interface for a much more complex set of data structures and algorithms that, in the case of computers, ends with the toggling of voltages in billions of transistors. As a human, there’s no way you could directly toggle those voltages to achieve the results you want, so the operating system provides you with an easy interface for doing things like deleting files.
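If it helps to see those layers spelled out, here is a minimal sketch in Python; the ".trash" directory and the move_to_trash name are hypothetical stand-ins, since every real desktop has its own conventions for this.

```python
import shutil
from pathlib import Path

# One friendly verb, hiding everything beneath it. The ".trash" location is a
# hypothetical stand-in for whatever your desktop actually uses.
TRASH_DIR = Path.home() / ".trash"

def move_to_trash(file_path: str) -> None:
    """The 'icon' level: drag-to-trash expressed as a single call."""
    TRASH_DIR.mkdir(exist_ok=True)
    shutil.move(str(file_path), str(TRASH_DIR / Path(file_path).name))
    # Under this call sit filesystem routines, then system calls, then driver
    # code, and finally voltages toggling across billions of transistors.

if __name__ == "__main__":
    Path("draft.txt").write_text("delete me")
    move_to_trash("draft.txt")
```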
To explain his theory, Hoffman uses the analogy that our perception of reality is like putting on a headset to play immersive games like Grand Theft Auto (GTA). We think we are seeing a red Mustang, and in multi-player mode our friend might also think he is seeing the same Mustang, but what we are seeing is an illusion, an artifact that’s generated only when looked at, which “disappears” as soon as our headset looks away. What we think of as reality is merely in-game reality, which is to say a rendered-on-demand facade that helps us interact with the underlying data structures in useful ways, at least from a survival perspective. Hoffman makes the argument that the forces of evolution are guaranteed, at least probabilistically, to hide reality from us behind such a facade.
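For what “rendered on demand” means in code terms, here is a toy sketch; the Scene class and its made-up attributes are not from any real game engine, they just show an object that exists only while something is looking at it.

```python
# A toy version of render-on-demand: the Mustang is conjured at the moment of
# observation and persists only as long as someone holds on to it.

class Scene:
    def look_at(self, thing: str) -> dict:
        # The object is generated only when looked at...
        print(f"rendering {thing}")
        return {"name": thing, "color": "red", "polygons": 40_000}

scene = Scene()
mustang = scene.look_at("Mustang")  # exists while we hold a reference to it
mustang = None                      # look away, and it is gone
```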
This is where I think Hoffman ends up in what I’ll be calling “Not Even Right” territory, but for now, let’s assume that the headset theory is correct. Not only are humans subject to the well known limitations and biases of our perceptions, but there is no there, there. We see not just a filtered or funhouse-mirror version of reality, but an abstract interface that’s been invented out of whole cloth. There is no Mustang. There is no spoon. Our senses were not built to perceive the realities of space and time, but to hide the fact that space-time is a literal figment of our imagination. An adaptive artifact we conjure into existence in our head.
If Hoffman is correct, then what happens when we place another lens or filter between us and the underlying data structures that create our universe? I could see this going one of several ways. It’s possible this could get us closer to seeing the universe in a more raw way. As in the scene from “The Matrix” where Neo spots Cypher looking at a streaming jumble of green symbols on a black background. Neo asks if Cypher always looks at the virtual world in code and Cypher replies, “Well, you have to… there’s way too much information to decode the Matrix.”
There’s an expression in computer science about working “close to the metal.” So-called “high level” programming languages, like Python, hide their actual operations behind an interface that looks almost like English sentences. In lower level, closer-to-the-metal languages like Assembly, you directly tell the machine how to manage the 0s and 1s stored at specific memory locations. Device drivers, like the ones that manage the network card in your computer, are often written close to the metal, sometimes in Assembly itself. If the newly implanted chips in our brains let us consume the raw data of our universe like Cypher reading the Matrix, that could bring us closer to the metal, figuratively speaking, and let us see the universe as it actually is.
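To make the idea of levels concrete, here is a toy sketch, written entirely in Python for brevity: the first function is the kind of line that reads like English, the second does the same addition by pushing bits around the way a hardware adder does. Real Assembly would sit another level or two below even the second version; the point is only to show what moving closer to the metal feels like.

```python
# Two ways to add numbers, at two distances from the metal. This is an
# illustration only; actual Assembly is lower still.

def add_high_level(a: int, b: int) -> int:
    return a + b  # reads almost like an English sentence

def add_closer_to_metal(a: int, b: int) -> int:
    # Ripple-carry addition on raw bits (works for non-negative integers),
    # roughly how a hardware adder combines 0s and 1s.
    while b != 0:
        carry = (a & b) << 1  # positions where both bits are 1 carry over
        a = a ^ b             # add the bits without carrying
        b = carry
    return a

assert add_high_level(19, 23) == add_closer_to_metal(19, 23) == 42
```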
Another possibility is that this additional filtering layer on top of our perceptions results in more of a translation. This would be like a color shift. If all our reds get more purple, and so on, this doesn’t get us closer to or farther from the metal, though it might bring into visibility colors that were previously outside of our wavelength detection capabilities. Infrared light detectors do this, turning heat into something we can see. If we were headset wearers embedded into a game of GTA, this layer might let us look at the hood of a car we just drove and see it radiating heat. Or, going back to the programming languages analogy, this would be like subbing out Python for JavaScript. There may be reasons to prefer one language over another, but either way you have the same level of insight into what’s going on closer to the metal.
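As a toy illustration of that kind of translation layer, here is a sketch that maps temperatures onto colors we can see; the thresholds and names are invented for the example, and nothing about it gets us any closer to, or farther from, the metal.

```python
# A toy "translation layer": heat, invisible to the eye, mapped onto visible
# colors. The thresholds here are made up for illustration.

def heat_to_visible(temperature_c: float) -> str:
    if temperature_c < 30:
        return "deep blue"
    if temperature_c < 60:
        return "orange"
    return "bright white"

# The hood of the car we just drove, rendered as something we can see.
print(heat_to_visible(85.0))  # bright white
```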
It’s also possible that additional filters of perception could take us further away from the truth. Extending the computer science metaphor, this is what’s happening with AI-assisted programming. If instead of directly writing a program in Python to update records in an HR database, I tell my AI assistant to write a Python program to update records in an HR database, then unless I review the code it generates, the implementation details are completely hidden from me. I’m now farther away from the metal.
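Here is a sketch of that contrast, assuming a hypothetical ask_assistant() stand-in for whatever AI coding tool you happen to use (it is not a real API). The first function is me at the keyboard, seeing every detail; the second adds one more layer between me and the metal.

```python
import sqlite3

def update_title_myself(db_path: str, employee_id: int, new_title: str) -> None:
    # Doing it directly: every implementation detail passes through my hands.
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "UPDATE employees SET title = ? WHERE id = ?",
            (new_title, employee_id),
        )

def ask_assistant(prompt: str) -> str:
    # Stand-in for an AI coding tool; a real one would return generated code.
    return "# ...generated Python I may never actually read..."

def update_title_via_assistant(prompt: str) -> str:
    # Handing it off: unless I review what comes back, the details are hidden.
    return ask_assistant(prompt)
```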
To be clear about this analogy, from the perspective of effectiveness, getting further away from the metal, to higher level languages, can be amazingly useful. I’ve created programs in Python that would have taken me a hundred times as long to write in Assembly language, if I could have done that at all. I once aborted an experiment in mobile app development precisely because I found working with those development tools and environments so complicated and punitive. If at some point I can describe a mobile app with English sentences, and generate working code from that, I might just give it another try.
The catch here is that you have to be aware of what software guru Joel Spolsky called the Law of Leaky Abstractions. That is, you can get a lot done working at a higher level, far away from the metal, but sooner or later something will break in a way you won’t be able to understand unless you go down a level, closer to the metal, and really understand what’s going on. All abstractions fail, eventually, in one way or another. They have to because, in my words, not Spolsky’s, abstractions are entropy reducers. If I’m controlling a million-pixel canvas with a couple of sentences of description fed into a generative AI tool, I can create amazing art very easily. But if I want to alter a specific fake brush stroke to change its hue or shape, I either need to edit those pixels directly in Photoshop, or dive deep enough into the AI’s code to figure out how to get it to make exactly the change I want. Either way I have to get closer to the metal.
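To show what that leak looks like, here is a toy sketch; render_from_prompt and its flat gray canvas are invented for the example and do not stand in for any real generative tool. The high-level call compresses a million pixels into one sentence, and the moment I want to change a single pixel I have to abandon it and reach down to the raw grid.

```python
# A toy abstraction that "leaks": the high-level call cannot express a change
# that the low-level representation can. Everything here is invented for the
# example; no real image or AI library is being quoted.

WIDTH, HEIGHT = 1000, 1000

def render_from_prompt(prompt: str) -> list[list[int]]:
    """High level: one sentence in, a million pixels out (here, a flat gray)."""
    shade = sum(ord(c) for c in prompt) % 256  # stand-in for a generative model
    return [[shade] * WIDTH for _ in range(HEIGHT)]

canvas = render_from_prompt("a moody seascape at dusk")

# The leak: there is no sentence that means "make pixel (412, 97) two shades
# darker." To make that change I have to drop down a level, to the pixels.
canvas[97][412] = max(0, canvas[97][412] - 2)
```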
When talking about computer science, there are no obvious moral implications to adding or removing these layers of abstraction. But when it comes to putting another headset over the one we have on right now, the exact nature of that layer is a spiritually charged choice.
There’s an episode of the Netflix show Black Mirror called “Men Against Fire” (spoiler ahead). In that episode you have these soldiers with advanced neural implants who are fighting grotesque mutants they call “roaches.” Maybe you’ve already guessed the spoiler? When one soldier’s implant malfunctions, he gets an unfiltered view of reality and discovers he’s been killing other humans all along. The chip was just an embedded, much more effective version of the age-old propaganda technique of dehumanizing the enemy to make it psychologically easier for your soldiers to kill them.
And, as a very much related aside, when references to “Star Wars” come up in conversation I sometimes ask people to imagine the film with unmasked storm troopers who sustain realistic damage to their bodies. All of a sudden “Star Wars” feels a lot more like “All Quiet on the Western Front.”
There is a very real danger to adding another layer on top of our perceptive functions. Years ago I recorded a podcast episode about how AR adoption will be driven at first by commercial interests, but eventually turn into a mechanism of powerful social and political control. And if you don’t think the current regimes in power in the West would be evil enough to turn their enemies into an icon of a cockroach, then you must have slept through the Covid years.
The headset problem
It’s time to dig into Hoffman’s model, which I find problematic in an interesting, and perhaps informative, way. I’ll break down Hoffman’s view into two main claims. Claim one is that, based only on evolutionary pressures, we know with near certainty that the world we perceive is not an accurate representation of reality. In other words, when it comes to perception, evolution has chosen fitness over truth, and fitness requires fakery. Claim two, which Hoffman acknowledges can be hard to even reason about, is that our consciousness has been “embedded” into the space we appear to inhabit. The headset analogy isn’t just someone in our world putting on a VR headset and a haptic feedback suit to enter a place like OASIS in “Ready Player One.” It’s a much more complex universe, perhaps with more dimensions than we are able to perceive, that has injected our human consciousness into what seem like physical bodies in a physical space, when that space is actually a virtual space or projection of some kind.
This is a tricky distinction to hold on to, but it’s huge, so let me explain in another way. In the pure materialist view, we perceive the world as it is, to the best of our abilities with our limited brains and bodies. Our consciousness, generated by our gray matter, is an embodied phenomenon that ends with death.
In the “traditional” simulation model, which I am very sympathetic to, our consciousness is an emergent phenomenon of the simulation, or perhaps it was built into the simulation at large for strategic reasons. Either way, we are simulation dwellers who live and die within this space (though in my view our actions, which determine the history of our simulation, are likely to impact our parent universe, because that’s why simulations are built in the first place).
In the Hoffman model, our consciousness has been injected into a world it imagines it inhabits. Note that this injection, by itself, wouldn’t necessarily entail that the space we inhabit is virtual. If you could imagine a game of air hockey that has been squished down to its idealized 2-dimensional form, yet could still house our consciousness inside elements like the pucks or the paddles, that would be the same thing. We may be dwellers of a real world that has the same relationship to the broader universe that inhabitants of the 2D world in “Flatland” have to their broader universe. However, combined with Hoffman’s first claim, claim two does imply that our world is, at least insofar as we perceive it, virtual.
Not Even Right
Before evaluating Hoffman’s first claim, I want to introduce a new concept (or, at least, a concept Google and ChatGPT tell me is new), that of a Not Even Right argument. You may have seen people call a theory Not Even Wrong. What they mean is that it’s so far divorced from reality that one can’t even test it. If I assert that “vibes are the driving force of human history,” you might call that Not Even Wrong, because while it looks formally similar to a real theory, it’s vague, untestable, unscientific. It doesn’t even rise to the level of something you could declare to be “wrong.”
The mirror image of Not Even Wrong exists. Not Even Right theories are vacuously true, or true under a vague and squishy set of assumptions that can be bent in ways that preclude falsifiability. The concept of survival of the fittest, viewed in isolation, suffers from this problem. Who survives? The fittest. How do you know who is the fittest? They are the ones who survive. Survival of the fittest is Not Even Right. Evolutionary theory only gets interesting when you add in concepts such as heritable traits, speciation, and filling ecological niches. That gives you testable predictions about the emergence of specific adaptive traits, like how peppered moths turned black to better hide in coal-saturated, industrial-era England.
In general, sometimes a Not Even Right theory can be “rescued” by adding some dimension to it, or taking a meta view of things. This was the genius of Gödel’s first incompleteness theorem: he found a higher dimension in which to evaluate the assertion that things are true if and only if we can prove they are true, an idea so stupidly obvious that it would have immediately been declared unfalsifiable by anyone doing math before the 20th century. And yet, by going meta, Gödel showed that this Not Even Right theory was actually wrong, by constructing a sentence in formal logic that was true but unprovable.
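For readers who want to see the formal shape of that move, here is the standard textbook form of the construction, written out in LaTeX; nothing in it is specific to this essay’s argument, it’s just the usual statement of the Gödel sentence.

```latex
% For a consistent, effectively axiomatized theory F strong enough to encode
% arithmetic, the diagonal lemma yields a sentence G_F that "says" of itself
% that it is unprovable in F (corner quotes require amssymb):
\[
  F \vdash \; G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F\!\bigl(\ulcorner G_F \urcorner\bigr)
\]
% If F is consistent, then F does not prove G_F, and yet G_F is true in the
% standard model of arithmetic: "true" and "provable in F" come apart once
% you step up to the meta level.
```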
Hoffman’s first claim is that the world we perceive must be a false representation of reality, because evolution favors fitness over fidelity. The problem here, having listened to Hoffman on several podcasts, and having read “The Case Against Reality” and his paper on this specific subject, is that we never get a tight definition of inaccurate perception.
Hoffman understands that no one claims we perceive all of reality in perfect detail and undistorted. But then, how much of a disconnect counts as “not seeing the world as it actually is”? For example, if you’ve been scuba diving, you know that objects look bigger underwater, and the deeper you go, the more color drains away. Assuming that seeing objects on land in the daytime is our baseline for accurately (or in Hoffman’s framework, “truthfully”) perceiving reality, would this mean that seeing things underwater is no longer perceiving reality? Either answer is going to be a challenge for his hypothesis. If you are still perceiving reality underwater, just through a lens that increases size and mutes colors, then there are lots of other perceptual changes that still fall within the category of seeing the world basically as it is, and making the case that our perceptions are disconnected from reality becomes much harder. On the other hand, if you think the scuba diver doesn’t see the world as it is, if you claim she is disconnected from reality in a way that’s different from our truth-seeing guy on dry land in bright sunlight, then your case for disconnection holds in essentially all cases. You are arguing the straw man that any deviation from some idealized perception of all things, at all times, is a disconnect. When you blink, your vision briefly blurs and then disappears; therefore anyone who blinks doesn’t accurately perceive the real world. That’s not a particularly interesting case to make. You’ve defined inaccuracy of perception in a way that makes your theory Not Even Right.
This is the core of Hoffman’s confusion, which comes into clear view when he talks in his book about how tastes differ. Yes, experiences are subjective, this is a known thing, though taste perceptions do tend to overlap within a species (e.g. most humans find sugary things pleasant, not so much dung balls). But the fact that human evolution has primed you to savor a popsicle, and not a poopsicle, doesn’t mean evolution has disconnected you from reality. We have evolved to perceive certain aspects of the world in certain ways, we are incapable of perceiving some things at all, and all of our perceptions are filtered, like the scuba diver sensing size and color. None of this implies that what we see are merely icons our brains have invented to hide a truth we are incapable of handling directly. Or that we live in a headset.
In-Game Reality
This brings me to Hoffman’s second claim, that we are a consciousness that has been injected into a simulated universe, or some dimensionally restricted projection of a more complex space. To me the puzzle here, even after accepting the mind-melting premise that such a thing is possible, is why you would bind consciousness itself to the pixels of your virtual meat suit. In other words, if we aren’t taking the materialist view, if we think the neurons in our brain have no direct relationship to consciousness, but instead map down to computer code that tracks their state, then why would changes to those neuron icons be programmed to affect our consciousness?
This is a tricky question to understand, so I’m going to repeat it in a different way. If you start with the materialist belief that the neurons firing in our brains are a key part of our consciousness, even if they aren’t the full story, then it makes sense why damage to our gray matter would alter our consciousness, or even destroy it. Our perception of the world is tied to our brain is tied to our mind is tied to our ability to self-reflect. Thinking is the firing of neurons. The effect of severing our two hemispheres proves that not only is consciousness directly tied to brain architecture, but that we may be two separate consciousnesses glued together by the corpus callosum, and this is one of the reasons we can have unknown knowns that sit outside of our unitary active consciousness.
But why would that be the case if our consciousness has been embedded, or injected, into this space? To take the analogy Donald Hoffman uses often, if I am playing Grand Theft Auto, here in our world, my consciousness is outside the game. If the character I’m playing gets hit with a kill shot to the head, I might feel empathy for him, but my thoughts go on. No amount of damage to my character’s virtual axons, if the computer simulation was programmed down to that level of complexity and realism, has any (direct) effect on the axons in my own brain. This holds regardless of whether I’m playing the game with a bird’s eye view, or if I have on a VR headset that makes me feel like I am in the game. Maybe it’s just because I’m using my own limited brain to ponder this, but I don’t see any reason why the game designer who built our world would want to bind the consciousness of higher dimensional game players so tightly to virtual neurons that are just icons, rendered-on-demand pixels. These are, in Hoffman’s framework, neurons that don’t even exist in the first place, except as part of a computer model that stores them in some kind of data structure in our creator’s mainframe. Binding real consciousness to fake, in-game brain icons seems like an odd decision, to say the least.
Put another way, despite what “The Matrix” movies would have you believe, there’s no reason why dying inside the Matrix would have to mean that your physical body and brain, comfortably encased in a fluid-filled pod and connected by wires to the mainframe, die as well. It would have to be a programming decision to destroy your real body and consciousness upon death inside the Matrix. Why not, instead, preserve the real body and do something, anything, with the consciousness? It would also have to be a programming decision to couple consciousness so tightly with all those virtual in-game brain cells. The closest I can come to an explanation for why our creator would do that, within the Hoffman model, is that we are meant to figure out how to alter our own consciousnesses, that this has some value for our creators. In that view, the transhuman future, the next great fall of man, which necessarily involves a much deeper understanding of how to bridge brain and mind with silicon chips, might be a key goal of the game. In other words, our real consciousness is bound to our fake gray matter so that we have an interface with which to hack it. I’m honestly not so sure how well that idea sits with me.
There is no spoon
I want to return to the idea of evolution favoring a disconnect from reality, because I think we can now tie Hoffman’s two claims together and see how they could either be rescued, or both dismissed as Not Even Right.
Suppose once again that we are like characters navigating a GTA world. Our headset paints pixels in front of us, and gives us tools to move around that are like a VR headset and a joystick, or some kind of motion controller. That raises the question of why our headset renders things that particular way, and why our controller works the way it does. The “fitness beats fidelity” argument says that it does so because that helps us navigate and survive in this GTA world, much more than if we had to, say, send input to the computer program as a series of electrical impulses. Here’s the issue, though. Why would the game/simulation we inhabit lend itself to the specific perceived interface, like cars and steering wheels, that our consciousness is rendering on the fly?
The reason a steering wheel controller is a good interface for racing games is that, while the underlying system involves the unintelligible toggling of computer chip voltages, it was programmed to be modeled after car racing. If the code is object oriented, there’s almost certainly some class, which is to say a grouping of code, that defines a “car” and its attributes, and that class has things called methods, which map user actions, like turning left, to in-game steering. Or, put another way, suppose some creator generated a synthetic universe that was modeled after the endless car chase battles in the “Mad Max” franchise. Now imagine you were dropped into that universe, but at first all you saw was the compiled code running, and you had to send in your own binary stream of information to interact with that universe. What kind of headset, and interface, would you build to help you survive in this sea of apparent nonsense? If your headset presented the world based on some completely different model than the “Mad Max” universe, how would that even work? It’s one thing to selectively emphasize certain aspects of this universe, like making the moving objects more salient. It’s another to build a headset that bears no relationship to the creator’s model at all. Again, how would that even work? How could it work?
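Here is a minimal sketch of the kind of class that paragraph imagines, in Python; Car, turn_left, and the rest are invented for illustration, not taken from any real game engine.

```python
# A toy version of the "car" class the in-game interface would map onto.
# All names here are invented for illustration.

class Car:
    def __init__(self, heading_degrees: float = 0.0, speed: float = 0.0):
        self.heading = heading_degrees  # in-game state the player never sees raw
        self.speed = speed

    def turn_left(self, degrees: float = 5.0) -> None:
        # A method mapping the player's "turn left" input to in-game steering.
        self.heading = (self.heading - degrees) % 360

    def accelerate(self, amount: float = 1.0) -> None:
        self.speed += amount

# The steering-wheel controller makes sense as an interface precisely because
# the underlying model was built out of objects like this one.
player_car = Car()
player_car.turn_left(15)
player_car.accelerate(3)
```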
If favoring fitness over fidelity means we end up with a model that hides the voltages, but aligns with in-game truth at some higher, more abstract level (farther away from the metal), then the theory is rescued, but I don’t think this is what Hoffman means. As far as I can tell, evolutionary forces are the prime mover in his model. The through-line between our perceived universe and the one beyond the headset is fitness, or evolution. It’s the meta concept that makes sense to him even without a particular form of embodiment. Adaptive stuff survives, at whatever level or context we are examining. But, as you may have noticed, we’ve now cleaved off the elaborate specifics of evolutionary theory. If moths are fake and smog is fake and industrial era England is just a figment of our imaginations, which itself is only incidentally connected to the pixels painted to look like our gray matter, then we’ve disembodied evolution, and the theory is pared down to its most abstract, Not Even Right kernel.
Is there a way to put the theory back into the realm of the testable? Of the falsifiable? If so, I think it would have to come in the form of us finding a way to get out of the headset, an idea that Hoffman discusses as possible, though he presents no road map for getting there. I’m not faulting him for this. Just reasoning about the kind of map we’d need is an extraordinary challenge. I’m not even sure the next great fall of man will get us to a level of enlightenment sufficient to evaluate his claims. It seems more likely that our highly expanded ability to manipulate consciousness and perceptions won’t be used metaphysically, but tactically. It will be weaponized to serve the interests of a ruling elite, the ones who control the firmware inside the chip your grandson will have injected into his brain at birth because that’s a requirement of Patriot Act #4.
Going beyond Not Even Right
Recently I’ve been drawn, reluctantly but firmly, to another model of our universe at the same level as Hoffman’s, but one that I believe can actually be tested, because I have tested it, at least in some limited way, and the results I got left me disturbed and puzzled.
Next week we’ll be taking a trip to Seahaven.