Sims, All the Way Down?

Atlas Simulate

According to philosopher Nick Bostrom, the chances that we are living in a simulation run by extraterrestrials or by a future “posthuman” civilization are very high, provided the bare possibility is granted and wedded to a few other conditional premises.

Here’s Bostrom’s abstract from his 2003 thought experiment:

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

First, Bostrom’s idea of the “posthuman” parallels Ray Kurzweil’s notion of a technological singularity: it is mostly unpredictable from our current, limited human standpoint, but it definitely involves 1) an astounding improvement in human somatic functioning; 2) an interface with technology, probably via nanobots that eliminate disease and the aging process and allow the brain unlimited access to data; 3) the use of “non-thinking” objects/substances/processes as a substrate within which consciousness or computation could exist.[1] When all these events occur, a bootstrapping effect could take over that sidesteps biological evolution altogether and turns humans into “substrate-neutral” entities that could conceivably use light, interstellar dust, or gas to copy themselves, compute, and travel the cosmos. Bostrom refuses to specify posthuman qualities, ethics, activities, etc., but suffice it to say they would have powers we associate with deities, even with the creator God of the monotheisms, if they chose to actualize them.

His argument rests upon these contingencies: 1) that humanity survives to a level where a technological singularity is possible; 2) that humanity passes successfully through the period of singularity; 3) that substrate-neutral consciousness/computing is possible; 4) that our posthuman descendants could, using technology, run simulations for research purposes or “what-if?” scenarios (possibly a near-infinite number of them); 5) that they would either declare such simulations unethical outright and refrain from running them, or simply grow bored of them.

Bostrom admits that there is no necessity to any of these possibilities. However, he goes on with the thought experiment assuming that one of the three propositions is true, and chooses number three as the most likely, because our descendants, not having blown themselves up, would most likely try to simulate the past if it were possible.

The indifference principle is invoked to assign the probability that we today would find ourselves within one of their simulations. John Rawls used a similar idea in his famous Theory of Justice thought experiment. It assigns to all agents in an imaginary society an equal probability of being born a pauper or a prince(ss), congenitally compromised or a very healthy physical specimen, or anywhere between the extremes. Given your ignorance of where you might fall in the social order (the “original position,” as Rawls calls it, behind a “veil of ignorance”), how would you design a society that alleviates, to the best of its abilities, the harms of being born into the “lowest,” resourceless position (if you happened to draw that lot) while minimizing economic harm to the interests/resources of those at or near the top of the social hierarchy, who would be compelled under a principle of justice to help the less fortunate members of society? And vice versa: what would it be fair for you, as one of the elite (one with access to vast resources), to sacrifice in order to help those in poverty or at a disadvantage of any kind?

Bostrom adapts this “blind lottery” scenario to the field of possible worlds we could find ourselves inhabiting at any given moment. We can know nothing firsthand of the past before our births, and we can only speculate on the future. We must therefore concentrate on the possibility that this world, today as we know it, is a simulation run by ETs or by posthumanity.

But what outweighs this probability?

Once the indifference principle is adopted, Bostrom says, nothing conceivably can outweigh it. Our present lack of information about the future of humanity allows us only to determine what is probably true, given the overdetermined unknowns in his severely restricted assumptions.

Bostrom assumes things that he doesn’t exactly spell out or engage. He accepts it as a given that consciousness is computable, drawing on Eric Drexler’s formulations of the energy consumption required for a given rate of bit-processing per second. To this he adds AI theorist Hans Moravec’s estimate of the processing power of the human brain, plus his own formulations on consciousness and human memory, to arrive at an energy amount and cost for computing a simulation. The first objection any rational person would raise is that the energy cost to simulate even a single conscious brain in our world, much less a virtual cosmos for many “brains,” would be laughably high for us, even if it could be done for only a few seconds. There is an astronomically high probability that the energy required by ET/posthuman simulations would be finite, and that these limitations would be detectable to us: glitches in the simulation would occur as a result, and we would already have noticed these glitches and know we were inside a program.

Bostrom sidesteps this in two ways. First, he says the “machines” powerful enough to simulate a universe could also easily and instantly rewrite our memories if an error occurred that would give the game away (in other words, a “smoothing over” algorithm is always at work). Second, he backs up this failsafe, and underpins the whole problem of energy use, by playing with Drexler’s conjecture that the quantum energy “pool” of entire planets could be harnessed as a power source for nanocomputation in biological, silicon, or any other available substrate.[2] Remember, we’re talking about intelligences that are “indistinguishable from magicians” or gods, and that can hack the “computation power” of molecules. By this reckoning, simulating entire worlds for billions of equally simulated conscious observer-beings (like us) would be easy.
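
For readers who want to see why the energy objection loses its force once you grant the posthumans their hardware, here is a bit of back-of-the-envelope arithmetic in Python. The numbers are illustrative order-of-magnitude placeholders in the spirit of the Moravec- and Drexler-style estimates Bostrom leans on (operations per second for a brain, capacity of a planetary-mass nanocomputer), not figures quoted from his paper:

```python
# Back-of-the-envelope arithmetic for the simulation argument's compute budget.
# All numbers are illustrative order-of-magnitude placeholders, not Bostrom's
# exact figures: brain ops/sec (a Moravec-style estimate), the rough number of
# humans who have ever lived, and a Drexler-style planetary-mass nanocomputer.

BRAIN_OPS_PER_SEC = 1e16          # assumed processing power of one human brain
HUMANS_EVER = 1e11                # rough count of humans who have ever lived
SECONDS_PER_LIFE = 50 * 365.25 * 24 * 3600    # ~50 years of mental life each
PLANET_COMPUTER_OPS_PER_SEC = 1e42            # assumed planetary-mass computer

total_ops = BRAIN_OPS_PER_SEC * HUMANS_EVER * SECONDS_PER_LIFE
seconds_needed = total_ops / PLANET_COMPUTER_OPS_PER_SEC

print(f"Operations to simulate all human mental history: ~{total_ops:.1e}")
print(f"Time on one planetary-mass computer: ~{seconds_needed:.1e} seconds")
# With these placeholder numbers, the whole of human mental history costs on
# the order of 1e36 operations -- a negligible fraction of a second for the
# hypothesized machine, which is why the "energy cost" objection loses force
# once posthuman-scale hardware is granted.
```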

That is one possibility. Another is that the rules of entropy we perceive as the “arrow of time,” along with space-time physics generally, are simply the programming of this simulation. As in The Matrix, they are ultimately unreal. If the simulators in fact have access to infinite energy sources, they could program a world to have any physical rules they want.

Ironically, this scenario implies a high probability of an afterlife. After all, we aren’t really living if we are simulations. We could be “deleted,” but what would that condition amount to? Our forms, as energy patterns in “time,” could just as easily be resurrected, or even go on to a simulated afterlife.

In other words, all the conjectures of religion and mystics could in fact occur as “realities” inherent to the simulation.

Or they could have arisen spontaneously in our human minds as products of glitches: “anomalies” involving, say, Muhammad’s encounter with a massive “spacetime programming error” that his “mind program” interpreted as the angel Gabriel.

And suppose a whacked-out theory such as the one claiming that millennia-old sounds or images can be imprinted as “stone recordings” is true in this sim (that there are programmed rules allowing such a phenomenon to occur), or that ghosts are actually “etheric recordings” of people who once lived (that is, who once inhabited the “level one” program of a life on earth as a person)? Or that UAP actually are energy forms/craft not indigenous to this part of the earth-program? All the debunkers’ bets against the paranormal are off if we do in fact live in a simulated universe. The division between the normal and the paranormal is senseless in such a universe.[3]

If we are simulations, the chance of a near-infinity of parallel simulations that differ from ours by only a few electrons (the multiverse conjecture) means that one conscious “monad” (me or you, say) could conceivably cross over into a parallel simulation, or be moved from one to the other, without our ever being cognizant of it.[4] Bostrom acknowledges all this in his paper:

The possibility expressed by alternative (3) is the conceptually most intriguing one. If we are living in a simulation, then the cosmos that we are observing is just a tiny piece of the totality of physical existence. The physics in the universe where the computer is situated that is running the simulation may or may not resemble the physics of the world that we observe. While the world we see is in some sense “real”, it is not located at the fundamental level of reality.

            It may be possible for simulated civilizations to become posthuman. They may then run their own ancestor-simulations on powerful computers they build in their simulated universe. Such computers would be “virtual machines”, a familiar concept in computer science. (Java script web-applets, for instance, run on a virtual machine – a simulated computer – inside your desktop.) Virtual machines can be stacked: it’s possible to simulate a machine simulating another machine, and so on, in arbitrarily many steps of iteration. If we do go on to create our own ancestor-simulations, this would be strong evidence against (1) and (2) (remember: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof)), and we would therefore have to conclude that we live in a simulation. Moreover, we would have to suspect that the posthumans running our simulation are themselves simulated beings; and their creators, in turn, may also be simulated beings.

            Reality may thus contain many levels. Even if it is necessary for the hierarchy to bottom out at some stage—the metaphysical status of this claim is somewhat obscure—there may be room for a large number of levels of reality, and the number could be increasing over time. (One consideration that counts against the multi-level hypothesis is that the computational cost for the basement-level simulators would be very great. Simulating even a single posthuman civilization might be prohibitively expensive. If so, then we should expect our simulation to be terminated when we are about to become posthuman.) (italics and emphasis added)

So Bostrom and others have spoken of simulated universes within which the inhabitants run further simulations. We have the possibility of a simulation within a simulation within a simulation: sims all the way down.
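
To make the “stacked virtual machines” image concrete, here is a toy recursion: each level of reality spins up its own ancestor-simulation, and each descent multiplies the load that ultimately falls on the basement-level hardware. The cost multiplier and budget below are arbitrary assumptions for illustration, not anything Bostrom specifies:

```python
# Toy model of "sims all the way down": each level of reality may host its own
# ancestor-simulation, but every nested level multiplies the load on the
# basement-level (non-simulated) hardware. The cost factor and budget are
# arbitrary illustrative assumptions.

COST_PER_LEVEL = 1_000_000   # assumed overhead multiplier for each nested sim
BASEMENT_BUDGET = 1e42       # assumed ops/sec available at the bottom level

def spawn_simulations(level: int, cost: float) -> None:
    """Recursively spawn child simulations until the basement budget runs out."""
    if cost > BASEMENT_BUDGET:
        print(f"Level {level}: too expensive for the basement simulators -- "
              "the point at which Bostrom worries a deep stack gets terminated.")
        return
    print(f"Level {level}: running at effective cost {cost:.1e} ops/sec")
    # This level's posthumans build their own ancestor-simulation...
    spawn_simulations(level + 1, cost * COST_PER_LEVEL)

spawn_simulations(level=0, cost=1e24)   # level 0 = the basement reality
```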

This idea of simulations and levels of simulations parallels the ancient Gnostic idea of a corrupt, illusory cosmos created by an evil archon and policed by demons. Perhaps the simulators are looking for those “simpeople” intuitive or smart enough to figure out and find channels out of the simulation and communicate with them, Truman Show-style, and those who do will be rewarded somehow, like a rat in a maze, perhaps with a continuity of sorts after their “death.” This would be one of the primary reasons posthumans or ETs would want to simulate human life, or something like it: to see if simulations can become self-aware, cubed. In this case, we’d be like the most adaptable denizens of John Conway’s Game of Life, and might get kicked up a level (ascended) into a more complex “dimension” of the simulation.[5]
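
Since Conway’s Game of Life comes up here, it is worth remembering how little machinery it takes: a handful of fixed local rules on a grid, out of which gliders and other adaptable “denizens” emerge. A minimal sketch in Python, using only the standard rules (nothing here is specific to Bostrom):

```python
# Minimal Conway's Game of Life: a few fixed local rules on an unbounded grid,
# from which self-propagating "denizens" like the glider emerge.
from collections import Counter

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """Advance one generation; `live` is the set of live-cell coordinates."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next turn if it has exactly 3 live neighbours,
    # or if it has 2 live neighbours and is already alive.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider," one of the Game's simplest adaptable denizens.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    print(f"gen {generation}: {sorted(cells)}")
    cells = step(cells)
```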

—————-

Bostrom’s simulation argument still minimizes the question of what biological forces created the simulators, if they were or are “natural” creatures—as we the simulated perceive biology to be, that is. If they are posthumans and have designed the “initial conditions” of our Life Game differently than their own, then we cannot know theirs, and thus our own, “biological” origins. If they designed our simulation to mimic exactly their own past cosmic initial conditions, but ran a multitude of iterations of them, we would still happen to find ourselves in the one in which we evolved into sentience (the standard “anthropic principle”), but we could never know the point at which our world-simulation diverged, or diverges, from theirs and led to their supernatural evolution. It may have occurred at any point from the beginning until this very moment, and far into the dissolution of this universe/multiverse.

What we could say with certainty is that if there is an equality of form between our physical/biological rules and those of the cosmos “outside,” the one the simulators inhabit and evolved from, then this equality could in principle be discovered.

Some (not Bostrom) say it is pure AI that created the universe, or that there is a computational equivalence between this universe and that of an omnisupercomputer. So where did the AI come from? Is it just playing a game? Denying that it must have emerged from biological beings and processes is absurd in one sense… Can we export our frail stories of “chaotic primordial soup to ordered catalysts and evolutionary ascent” to the genesis of beings with such power? Did they create our world to embody our own discovery, or creation, of this narrative? If not, what are the alternatives? That we are prisoners of a “Solid State Intelligence,” as John Lilly called it and The Matrix dramatized? This would imply that they have programmed our universe to appear as if biological beings exist when in fact they don’t exist at all. Neither would geology, physics, or anything else we call “natural.”

Again, all bets are off. Bostrom has come up with quite a memetic trickster in the aether.

I’ll go Platonic here: there must be an essence to the Idea “biological” and an essence to the Idea “artificial.” Computer simulations can model biological processes but are not themselves biological, no matter how enthusiastically the AI geek stuffs their design with details matching what is currently known about molecules, enzymes, catalysts, etc. Simulation means creating a copy, down to the last detail, of an original that pre-exists the simulation. Perhaps this is why Bostrom forgoes pure AI simulators in favor of “posthumanity.” These simulators must know the rules of what we would consider biology, drawn from the Idea (which, under Plato’s philosophy, would precede any instantiation of it), in order to sim biological beings. Therefore, the simulators must at least be acquainted with real, instantiated biological entities.

Or must they? If not, what’s then the difference between the simulators and a creator God(ess)?

There is none, according to Bostrom.

If our universe is a wholesale simulated creation, we have nothing with which to measure the creator(s)’ ontological status against ours. Even Descartes’s certainty fails in Bostrom’s scenario; they have programmed the thinking and personality that I believe is me, and your thinking and personality, too. The simulators are beyond our categories of classification and our capacity to grasp… As with Pascal’s wager, is it then best simply to put faith in our biological reality and ignore the entire problem of simulation? Or should we, as Bostrom would have it, bet on the opposite? Or should we use Pascal’s wager in its original sense via Bostrom, that is, conclude it is better to believe the simulator “gods” exist and try to reach out to them somehow?

———

Wittgenstein asked himself about the possibility of eternal life in the Tractatus, and concluded with the question, “What problem would it even solve that we live forever?” We can adapt this to say that, even if we are the products of simulators, posthuman or extraterrestrial, our grasping the “truth” of this would entail mysteries outside our conceptions of time and space, and, as the Wittgenstein of the Tractatus would conclude, it is senseless even to consider; knowing whether or not we are in a simulation solves none of our human problems, the facts of the world as we find it.

 


[1] For a short essay on this last-mentioned technology, see the next essay on this site: “Substrate-Neutrality, Nano Tech, and ET.”

[2] Again, see the essay linked to above.

[3] I’m certain some of the most rabid atheist debunkers will want to reject the simulation hypothesis on these grounds alone, even if Bostrom’s paper is a cogently argued piece of rationalism, because it threatens to destroy their belief that ours is a steady-state cosmos amenable to their scientistic dogmas.

[4] Or, alternatively, we might consciously register it as a glitch. Some people call this the Mandela Effect: all the public evidence for something recollected from one’s past differs significantly from the recollection, and every trace one might have, except one’s personal memory, seems to have been altered to conform to the present representation. We could explain this away as simple, cheap misrecollection, except that so many millions of people have experienced it with regard to the same representations, and a growing community is discussing and noticing more examples. I have one myself.

[5] Again, such a conjecture must constitute the horror of horrors for the scientistic fundamentalist.

Nick Bostrom’s Simulated Universe argument gives Descartes’s evil demon a headache; apparently, he, too, is a simulation in René’s imagination.


Hello, Ray, do you read me? Do you read me, Ray?


Google’s head futurist Ray Kurzweil, like Hans Moravec and many other transhumanists, goes on and on about one day digitizing and uploading his consciousness into a computer. But it’s highly unlikely to happen in the way he imagines.

Let’s use an extreme example: Ray Kurzweil by way of Victor Frankenstein. By “one free miracle,” Ray’s consciousness has been successfully uploaded to an AI. His deceased body is frozen, but samples of his DNA are preserved for the eventual “perfecting” of cloning techniques. His digitized mind has conversed for a century with fellow transhumanist scientists via a speaker box. He has had a library of information integrated with his “mind,” allowing him to solve many problems—mostly about how to get the hell out of this box of electrons and into a human form again. He finds the solution. His colleagues have finally been able to grow from his 100-year-old DNA a “perfect” specimen, a biological version of Ray Kurzweil into which his consciousness will be, by a “second free miracle,” copied and downloaded from the AI.

The clone, Ray 2, reaches the age of 21. By this time Ray 2’s neural systems have grown to an optimal level to receive Ray 1’s “mind.”

But this clone has acquired an entirely different set of life experiences than Ray 1. Perhaps he enjoys living his life just this way, without an overwriting of the knowledge and experiences he has uniquely gained.

Seeing this possible outcome ahead of time, the scientists have perhaps sheltered him from life until his majority, keeping him in a state of induced hibernation in nutrient-rich chemicals to prepare the way for the Great Implanting.

Right here there are ethical problems, of course. Setting aside the tremendous neurological difficulties to be surmounted in keeping a growing body, and more importantly a growing brain, functioning optimally while in suspended animation, what right do these scientists and the disembodied Ray have to inflict a rewiring/reprogramming of Ray 2’s brain? Is he not, in an existential sense, the same as a monozygotic twin brother of Ray’s born more than a century later, and entitled to choose whether he accepts his photonic “brother’s” experiences and cognitive capabilities?

To be clearer: Can Ray Kurzweil 1, since he “owns” his own DNA, give consent to have a copy of his DNA, grown to personhood at a different place and time, subjected to the downloaded experiences of his original body?


Foreseeing this ethical thicket, let’s say Ray 1 will be compelled to have a battery of twins created, hoping one will freely accept the downloading. So let’s say one of them freely chooses to have their experiences overwritten/augmented by Ray 1’s life and thoughts. A preparation protocol is used to increase biological Ray 5’s neuronal connections to a level comparable to those of the mature, 30-year-old Ray 1. Perhaps a transition program is used to incrementally acclimatize Ray 5 to Ray 1’s intellect and memories, and vice versa. There may be several outcomes:

a. Ray 1’s copied consciousness will not be able to adapt to this new body due to some unforeseen medical complication. It will become a prisoner in a recalcitrant body. The period spent as zillions of ones and zeroes will have fundamentally changed Ray 1’s relation to, and sense of, embodiment. Ray 1’s consciousness will reject the embodiment in Ray 5, as an organ transplant is rejected. He will be screwed—a homunculus consciousness inside a physical being who may disobey his wishes. Not unlike a person with Dissociative Identity Disorder, Ray 5 will struggle with Ray 1 almost constantly. Advanced “smart drugs” may be able to chemically keep Ray 1’s (or Ray 5’s) “will” at bay, but this will hardly be a happy existence for either.

AI Ray will say, let’s try again….

b. Let’s say there is a super-advanced transition program that will acclimatize Ray 1 back into an embodied existence. Even so, this new body of Ray 5’s occupies a different existence, a timeline in space-time distinct from that of his original 170-year-old shell. Ray 5’s body has been exposed to different cosmic conditions—radiation levels, electromagnetic fields, nutrients, environmental toxins and consequent immunities, etc. Ray 5’s body is a holistic product of, and at equilibrium with, the interaction between his genes and the future environment, just as Ray 1’s body was 170 years ago. Ray 5’s body may, despite the preparation, reject the superimposed neuronal changes as a body rejects a transplant, as in (a). This may be taken into consideration early on, and all of the clones’ lives lived in conditions as close as possible to those of Ray 1’s world—that of the world, generally, 170 years ago. There still may be laws as yet unknown that “fix” a person’s life conditions within a set of parameters that cannot be altered by the addition of something as complex as another person’s life experiences.

c. The downloading may be successful—at first. There may be eventual catastrophic decline or disruption to Ray 5’s cognitive or bodily functions, as in “Flowers for Algernon.” Nature always has tricks and fail-safes up her embroidered sleeves.

d. Indulging a variant of Rupert Sheldrake’s morphic resonance conjecture, Ray 5 may actually develop naturally along the lines of Ray 1, both physically and mentally, and may even remember bits of his first embodiment. There may be a sort of memory encoded in the epigenetic changes Ray 1 went through in his 70+ years of life, traces of which unfold in Ray 5’s development. This would make the superimposition of Ray 1’s consciousness easier.

e. The transplant may be entirely successful. Ray 5 will now become Ray 1 again at 21 years old, with Ray 5’s experiences integrated into him. With life-extension drugs having been perfected, he could live to 200 years old, until the next uploading/downloading into another new body.

Additionally, perhaps computer engineers of the future (with AI Ray’s prodigious help) will be able to “perfect” an artificial body for him, modeled entirely upon the reverse-entropic biological processes inherent in living beings. It will have a blank slate of artificial neural networks equaling the complexity of (and perhaps copying) Ray’s organic brain. In this case, the download of AI Ray’s consciousness into this biomechanical manikin seems more plausibly effective.

Again, the same results may occur on AI Ray’s re-embodiment end: a single unforeseen glitch in translation could ripple through the system, causing a crash; the interface may never achieve the robustness that would allow him to control the “nervous system” and affect its autonomic functions. Back to the electron-stream.

The only way to know if any of this is possible, according to Kurzweil, is to try.

These are all assumptions based solely on a materialist/physicalist worldview.

————–

A magnetic recording is an analog phenomenon; the tape captures almost every sound vibration in the vicinity of a microphone by directly imprinting (interrupting) the steady magnetic field with a complex of wave signatures upon its surface. A digital recording, on the other hand, electronically “samples” the incoming vibrations thousands of times a second and reproduces them through a string of pulses represented as sets of ones and zeroes for each frequency or set of frequencies.
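
To make the analog/digital distinction concrete, here is a minimal sketch of what “sampling” does to a continuous signal: a smooth sine wave is measured at discrete instants, and each measurement is rounded to one of a fixed number of levels, that is, turned into a short string of bits. The sample rate and bit depth are arbitrary illustrative choices:

```python
# Minimal illustration of digitizing an "analog" signal: sample a continuous
# sine wave at discrete times, then quantize each sample to a fixed bit depth.
# The sample rate and bit depth are arbitrary illustrative choices.
import math

FREQ_HZ = 440.0        # the "analog" tone being recorded
SAMPLE_RATE = 8000     # samples per second
BIT_DEPTH = 4          # 4 bits -> 16 discrete amplitude levels
LEVELS = 2 ** BIT_DEPTH

def analog(t: float) -> float:
    """The continuous signal: a pure sine tone, amplitude -1.0 to 1.0."""
    return math.sin(2 * math.pi * FREQ_HZ * t)

def quantize(x: float) -> int:
    """Round an amplitude in [-1, 1] to one of LEVELS integer codes."""
    return min(LEVELS - 1, int((x + 1.0) / 2.0 * LEVELS))

samples = [quantize(analog(n / SAMPLE_RATE)) for n in range(8)]
print("first 8 quantized samples:", samples)
print("as bits:", [format(s, f"0{BIT_DEPTH}b") for s in samples])
# Everything between the sampling instants, and everything between the 16
# levels, is discarded -- that loss is the gap between the analog continuum
# and its digital representation.
```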

The analog recording concept theoretically applies to everything we can mentally perceive—a continuum of smooth, interdependent activity. But the scale of a phenomenon or “bracketed” event matters when we describe it this way; we could call everything from the chemical elements up through the larger biological forms they make up (like catalysts) analog forms, while a different conception of form exists at the subatomic level. Physicist Max Planck’s idea of a quantized change in energy from one state to another in electrons and photons implies discontinuities at this small scale, and this is what he meant by a quantum—a self-limited quantity.

But does the discontinuous “nature” or quality of this scale make subatomic events amenable to digital/binary modeling—and eventual copying—as Kurzweil and many transhumanist neuroscientists believe?

I don’t think so. Binary operations play a part in consciousness, but the larger chemical systems and the organism in which they are embedded mediate the dualistic on-off processes of neuronal activity. Neurons considered from the quantum level function within a “flowing” analog environment of interdependence (and are subject to signal/noise degradation). The neuronal “fired/not fired” states cannot be viewed separately from the larger-scale systems. Every dendrite-axon-synapse combination in the brain—trillions of them packed together—sits in a soup of chemicals and electrical impulses that defy quantization in their complexity. Our fastest and most yottabyte-heavy parallel-processing computer systems still don’t come close to the complexity level of the brain. And they never will.

Transformation of the “analog phenomenon” of an individual’s subjective sense of self into a digitized form seems to be the core of this kind of transhumanist project (although there are those who propose that there could be a “neutral substrate” made of purely disembodied information that can be translated and introduced into any type of form—silicon, crystals, light, liquids, even gases—and still meet a criterion of consciousness and contain one’s “personality”).[1] Claims that biological processes can not only be modeled as globally digital functions but actually are digital phenomena are most often made with respect to neuron/synapse activity in the brain—but this is merely a specific metaphor run amok. The use of metaphor here has a history. It goes all the way back to the split between mind (soul) and body that Descartes conjectured in his Meditations on First Philosophy: the idea that the biological half is purely mechanistic, like a clock.

One unspoken, perhaps unconscious core tactic of the transhumanist outlook—and even of certain fields of science as a whole—is to remap the connection between a natural phenomenon (origination) and a technological-instrumental device (simulation), then reinscribe this established connection into another social domain by means of a handy metaphor.

The first phase is the simple one-to-one metaphor, as noted above with Descartes: biology is like, or mimics, a clockwork/machine.

As our machines increased in complexity and computing moved from analog to digital, allowing a thousandfold increase in power, it became possible to model biological systems and, further, to see an equivalence between the two.

Thus at the second phase, the metaphorical arrow of signification becomes double-headed and equalized: biology is no less and no more than a digital phenomenon. Thus the grounds for reversing the metaphorical signification are made possible. The ur-metaphor begins to shape the thinking of practitioners in cognitive science, neurology, and AI, then in a wide variety of disciplines—and it can limit true thinking on a society-wide scale, from neuroscience to political science, from physics to economics, from biology to sociology. The new “truth” begins to dominate thinking to such an extent as to obscure its point of origination in the natural phenomenon from whence it came. By this point, the latter has already been “enframed” into the condition Martin Heidegger called “standing reserve”: something whose tangible, unique existence as an existent is “invisible,” yet which is used as an instrument or commodity for a human, or for humanity.

Here’s a specific example: a scientist or philosopher is discussing sensory systems or thought processes—the eye or the ear, and the brain’s operation—and casually reverses the arrow of signification on us the readers:

“The eye is a remarkable optical instrument.”

or

“The brain parallel-processes a billion synaptic firings a second in its computation and algorithmic input-output.”

Such seemingly innocuous statements, made repeatedly over the course of an article or book, can gently abuse the “metaphor” until we begin actually to conceive of the eye as an optical device or the brain as a computer. We then conceptualize the eye as a kind of device that evolved for the specific purpose of seeing. This is the wrong way to characterize it. “Seeing” as a phenomenon takes place within, and is only a part of, consciousness; consciousness includes the contents of “that which is seen.” The narrow slice the scientist focuses on, as phenomenologists have attested, cannot be separated from the entire act of seeing. The “act of seeing” must be “bracketed” by the scientist as a specific type of biological activity limited solely to the mechanics of the eye for them to get away with the reversal. But the boundaries of the act of seeing cannot be delimited as we commonly understand existence; we use the term in a variety of ways, as related to visual phenomena or, as Plato explained, as a metaphor for comprehending something abstractly. “Seeing” and “seeing-as,” as the Heidegger of Being and Time might have put it, are primordial to human being in any given situation, and all such situations defy conceptualization.

This is a difficult conceptual difference to convey, but it is vitally important. So let’s take a robotic example. An artificial device that is structurally identical to the eye will perform the function of translating photons striking an “outer appearance” and bouncing into its iris into identical patterns on an “inner screen,” a representation of whatever impinges upon its outer surface—and its fidelity to those patterned photons is our standard of how well it performs its job as an eye (this “inner screen” or “theater stage” is the poor metaphor we have used, by default, for centuries). Yet we as observers of the device’s isolated “seeing” system have no way of fully measuring the originating phenomena at which it “looks”—the pattern of photons external to it, and the human eye’s own ability to encompass that same external area—and so no way of obtaining a criterion of identity as exact and unambiguous as our scientist-writer would have us believe. Such a criterion of fidelity is entirely rough and depends, like it or not, on quantum phenomena in the eye, in the optical center of the brain, and in the “outer” world conceived as a bounded system subject to probability. The artificial eye we have constructed and are observing is an isolated system with reference only to capturing the patterned photons. Its fidelity, and “what it makes of the scene,” can in no way be known.

On a cultural level, this reversal of signification is taken as a given. It is deployed/disseminated from one discipline or profession of discourse into another and thus begins to shape the thinking of society on a wide scale.

Concepts from neuroscience find their way into politics, from business management into journalism, from biology into music, and so on. None of this metaphor-making is wrong, per se; it is just that we should not believe any of it to be literally real. But the web of metaphors thus created concretely literalizes them over time, making their origin as metaphors a mere “trace,” or effacing it entirely.

It is quite possible that in time we will not be able to think outside our metaphors. Certainly it seems that scientists like Kurzweil cannot.

This is the situating matrix of transhumanist thought. It’s a short step from viewing biology as akin to “clockwork gears” to viewing it as a digital computing phenomenon. Everyone from neuroscientists to psychologists to physicists loves to say Descartes’s radical dualism is dead, but it echoes pretty loudly in transhumanist talk of “uploading consciousness.” Bollocks!

——-


Talk of uploading consciousness cannot but be grounded in the thorny debate over what consciousness is, a question still unanswered and perhaps unanswerable. Because some of the brain’s functions can be modeled digitally, the reasoning goes, the brain must be digital. Transhumanist rhetoric like Kurzweil’s thus assumes that consciousness is “naturally” a digital phenomenon.

But let’s grant him that the models can mimic the conscious behavior of a sentient being. Does mimicking require consciousness? If we say yes, does that imply there is a subject of consciousness behind the behavior? Not at all. We are back to the mystery of qualia and philosopher Thomas Nagel’s question, “What is it like to be a bat?”[2]

Going further, let’s strike “mimic” and replace it with “exhibit.” “Mimic,” of course, is covered as a special form under the general concept “exhibit.” But the same problem confronts us, the age-old problem of “other minds,” and we’re right back to Nagel’s question. It wouldn’t matter whether the model is digitally constructed or a “neurosoup” designed by nanoassemblers programmed to build a wet human brain in a vat.

We can allow that there are operations in the brain that are roughly algorithmic in function. We can also allow that there are brain operations that are like the binary of digital pulses. And we can combine these two phenomena into a synthesized model.
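
As a sketch of what such a synthesized model might look like, consider the leaky integrate-and-fire neuron, a standard toy model in computational neuroscience: a continuously varying (“analog”) membrane potential accumulates input over time, and a binary (“digital”) spike is emitted only when it crosses a threshold. The constants below are arbitrary illustrative values, and the model is of course a caricature of the chemical soup described above:

```python
# Leaky integrate-and-fire neuron: analog accumulation, binary output.
# All constants are arbitrary illustrative values.

LEAK = 0.9          # fraction of membrane potential retained each time step
THRESHOLD = 1.0     # potential at which the neuron "fires"
RESET = 0.0         # potential after a spike

def run(inputs: list[float]) -> list[int]:
    """Integrate a stream of analog input currents; return binary spikes."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * LEAK + current   # analog, continuous-valued
        if potential >= THRESHOLD:               # binary, all-or-nothing
            spikes.append(1)
            potential = RESET
        else:
            spikes.append(0)
    return spikes

print(run([0.3, 0.3, 0.3, 0.3, 0.05, 0.9, 0.2, 0.0]))
```

Even in this caricature, the binary “fired/not fired” output only exists relative to the continuous process underneath it, which is precisely the point against treating the brain as natively digital.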

Suppose an advanced AI designs and builds a neurosoup brain that bypasses our human neural architecture, but the result seems to exhibit all the behavior of a conscious being. Through a vocal interface it can carry on conversations, has a sense of humor, and can write poetry and even Simpsons episodes. The AI had in its “memory” all the necessary medical knowledge of the human nervous system, but it discovered “shortcuts,” ways to abbreviate functions in the natural design (the product of millions of years of evolution), and went ahead and dispensed with some of the architecture because it evaluated it as redundant. The product appears to exhibit sentience, learning capabilities, and the self-criticality necessary for us to say it might possess consciousness.

We find we cannot reverse-engineer the brain it built; we can’t comprehend the complex order of operations it performs, which appear different from those of the human brain.


[2] Nagel, Thomas, “What Is It Like to Be a Bat?,” from Mortal Questions, Cambridge University Press, 1991.