Hello, Ray, do you read me? Do you read me, Ray?


Google’s head futurist Ray Kurzweil, like Hans Moravec and many other transhumanists, goes on and on about one day digitizing and uploading his consciousness into a computer. But it’s highly unlikely to happen in the way he imagines.

Let’s use an extreme example: Ray Kurzweil by way of Victor Frankenstein. By “one free miracle,” Ray’s consciousness has been successfully uploaded to an AI. His deceased body is frozen, but samples of his DNA are preserved for the eventual “perfecting” of cloning techniques. His digitized mind has conversed for a century with fellow transhumanist scientists via a speaker box. He has had a library of information integrated with his “mind,” allowing him to solve many problems—mostly about how to get the hell out of this box of electrons and into a human form again. He finds the solution. His colleagues have finally been able to grow from his 100-year-old DNA a “perfect” specimen, a biological version of Ray Kurzweil into which his consciousness will be, by a “second free miracle,” copied and downloaded from the AI.

The clone, Ray 2, reaches the age of 21. By this time Ray 2’s neural systems have grown to an optimal level to receive Ray 1’s “mind.”

But this clone has acquired an entirely different set of life experiences than Ray 1. Perhaps he enjoys living his life just this way, without an overwriting of the knowledge and experiences he has uniquely gained.

Seeing this possible outcome ahead of time, perhaps Ray 2 has been sheltered from life until the age of majority, kept in a state of induced hibernation in nutrient-rich chemicals to prepare the way for the Great Implanting.

Right here there are ethical problems, of course. Barring the tremendous neurological difficulties to be surmounted in keeping a growing body, and more importantly a growing brain, functioning optimally while in suspended animation, what right do these scientists and the disembodied Ray have to inflict a rewiring/reprogramming of Ray 2’s brain? Is he not, in an existential sense, the same as a monozygotic twin brother of Ray’s born more than a century later, with the right to choose whether he accepts his photonic “brother’s” experiences and cognitive capabilities?

To be clearer: Can Ray Kurzweil 1, since he “owns” his own DNA, give consent to have a copy of his DNA, grown to personhood at a different place and time, subjected to the downloaded experiences of his original body?


Foreseeing this ethical thicket, let’s say Ray 1 will be compelled to have a battery of twins created, hoping one will freely accept the downloading. Suppose one of them, Ray 5, freely chooses to have his experiences overwritten/augmented by Ray 1’s life and thoughts. A preparation protocol is used to increase the biological Ray 5’s neuronal connections to a level comparable to those of the mature, 30-year-old Ray 1. Perhaps a transition program is used to incrementally acclimatize Ray 5 to Ray 1’s intellect and memories, and vice versa. There may be several outcomes:

a. Ray 1’s copied consciousness will not be able to adapt to this new body due to some unforeseen medical complication. It will become a prisoner in a recalcitrant body. The period spent as zillions of ones and zeroes will have fundamentally changed Ray 1’s relation to and sense of embodiment. Ray 1’s consciousness will reject its embodiment in Ray 5, as an organ transplant is rejected. He will be screwed: a homunculus consciousness inside a physical being who may disobey his wishes. Not unlike a person with Dissociative Identity Disorder, Ray 5 will struggle with Ray 1 almost constantly. Advanced “smart drugs” may be able to keep Ray 1’s (or Ray 5’s) “will” chemically at bay, but this will hardly be a happy existence for either.

AI Ray will say, let’s try again….

b. Let’s say there is a super-advanced transition program that will acclimatize Ray 1 back into an embodied existence. Even still, this new body of Ray 5 occupies a different existence, a distinct timeline in space-time than his original 170-year-old shell. Ray 5’s body has been exposed to different cosmic conditions—radiation levels, electromagnetic fields, nutrients, environmental toxins and consequent immunities, etc. Ray 5’s body is a holistic product of and at equilibrium with the interaction between his genes and the future environment, just as Ray 1’s body was 170 years ago. Ray 5’s body may, despite the preparation, reject the superimposed neuronal changes as a body rejects a transplant, as in (a). This may be taken into consideration early on, and all of the clones’ lives lived in conditions simulating as closely as possible those of Ray 1’s world—that of the world, generally, 170 years ago. There still may be laws as yet unknown that “fix” a person’s life conditions into a set of parameters that cannot be altered by the addition of something as complex as another person’s life experiences.

c. The downloading may be successful—at first. There may be eventual catastrophic decline or disruption to Ray 5’s cognitive or bodily functions, as in “Flowers for Algernon.” Nature always has tricks and fail-safes up her embroidered sleeves.

d. Indulging a variant of Rupert Sheldrake’s morphic resonance conjecture, Ray 5 may actually naturally develop along the lines of Ray 1 both physically and mentally, and may even remember bits of his first embodiment. There may be a sort of memory encoded in the epigenetic changes Ray 1 went through in his 70+ years of life, and these left traces that unfolded in Ray 5’s development. This would make easier the superimposition of Ray 1’s consciousness.

e. The transplant may be entirely successful. Ray 5 will now become Ray 1 again at 21 years old, and Ray 5’s experiences integrated into him. With life-extension drugs having been perfected, he could live to 200 years old, until the next uploading/downloading occurs into another new body.

Additionally, perhaps computer engineers of the future (with AI Ray’s prodigious help) will be able to “perfect” an artificial body for him modeled entirely upon the reverse-entropic biological processes inherent in living beings. It will have a blank slate of artificial neural networks equaling (and perhaps copying) the complexity of Ray’s organic brain. In this case, the download of AI Ray’s consciousness into this biomechanical manikin would seem more plausibly effective.

Again, the same results may happen on the re-embodiment end for AI Ray: a single unforeseen glitch in translation could ripple through the system and cause a crash; the interface may never achieve the robustness needed for him to control the “nervous system,” and it may disrupt the autonomic systems. Back to the electron-stream.

The only way to know if any of this is possible, according to Kurzweil, is to try.

These are all assumptions based solely on a materialist/physicalist worldview.

————–

A magnetic recording is an analog phenomenon; the tape captures almost every sound vibration in the vicinity of a microphone by directly imprinting (interrupting) the steady magnetic field with a complex of wave-signatures upon its surface. A digital recording, on the other hand, electronically “samples” the incoming vibrations thousands of times a second and reproduces them through a string of pulses represented as sets of ones and zeroes for each frequency or set of frequencies.
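The sampling idea can be made concrete in a few lines of code. This is a toy sketch only (the function name and parameters are illustrative, not any real audio API): a continuous signal is measured a fixed number of times per second, and each measurement is rounded to one of a finite set of levels.

```python
import math

def sample_and_quantize(signal, duration_s, rate_hz, bit_depth):
    """Approximate a continuous ("analog") signal by measuring it
    rate_hz times per second and rounding each measurement to one of
    2**bit_depth discrete levels -- the digital "string of pulses"."""
    levels = 2 ** bit_depth
    samples = []
    for i in range(int(duration_s * rate_hz)):
        t = i / rate_hz
        v = signal(t)  # continuous value, assumed in [-1, 1]
        samples.append(round((v + 1) / 2 * (levels - 1)))  # nearest level
    return samples

# A 440 Hz sine tone "recorded" at CD-like settings (44.1 kHz, 16-bit).
tone = lambda t: math.sin(2 * math.pi * 440 * t)
pcm = sample_and_quantize(tone, 0.01, 44_100, 16)
```

Everything between two samples, and between two quantization levels, is simply discarded; that loss is the whole conceptual difference from the analog imprint.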

The analog recording concept theoretically applies to everything we can mentally perceive—a continuum of smooth, interdependent activity. But the scale of a phenomenon or “bracketed” event matters when we describe it this way; we could call everything from the chemical elements to the larger biological forms they make up (like catalysts) analog forms, while a different conception of form exists at the subatomic level. Physicist Max Planck’s idea of a quantized change in energy from one state to another in electrons and photons implies discontinuities on this small scale, and this is what he meant by quantum—a self-limited quantity.

But does the discontinuous “nature” or quality of this scale make subatomic events amenable to digital/binary modeling—and eventual copying—as Kurzweil and many transhumanist neuroscientists believe?

I don’t think so. Binary operations play a part in consciousness, but the larger chemical systems and organism in which they are embedded mediate the dualistic on-off processes of neuronal activity. Neurons considered from the quantum level function within a “flowing” analog environment of interdependence (and are subject to signal/noise degradation). The neuronal “fired/not fired” states cannot be viewed separately from the larger-scale systems. Every dendrite-axon-synapse combination in the brain—trillions of them packed together—sits in a soup of chemicals and electrical impulses that defy quantization in their complexity. Our fastest and most yottabyte-heavy parallel-processing computer systems still don’t come close to the complexity level of the brain. And they never will.

Transformation of the “analog phenomenon” of an individual’s subjective sense of self into a digitized form seems to be the core of this kind of transhumanist project (although there are those who propose a “neutral substrate” of purely disembodied information that could be translated and introduced into any type of form—silicon, crystals, light, liquids, even gases—and still meet a criterion of consciousness and contain one’s “personality”).[1] Claims that biological processes not only can be modeled as globally digital functions but are digital phenomena are most often made with respect to neuron/synapse activity in the brain—but this is merely a specific metaphor run amok. And the metaphor has a history. It goes all the way back to the split between mind (soul) and body that Descartes conjectured in his Meditations on First Philosophy: the idea that the biological half is purely mechanistic, like a clock.

One unspoken, perhaps unconscious core tactic of the transhumanist outlook—and even certain fields of science as a whole—is to remap the connection between a natural phenomenon (origination) and a technological-instrumental device (simulation), then reinscribe this established connection into another social domain by means of a handy metaphor.

The first phase is the simple one-to-one metaphor, as noted above with Descartes: biology is like, or mimics, a clockwork/machine.

As our machines increased in complexity and computing moved from analog to digital, a thousandfold increase in power made it possible to model biological systems and, further, to see an equivalence between the two.

Thus at the second phase, the metaphorical arrow of signification is double-headed and equalized: biology is no less and no more than a digital phenomenon. The grounds for reversing the metaphorical signification are thereby made possible. The ur-metaphor begins to shape the thinking of practitioners in cognitive science, neurology, and AI, then in a wide variety of disciplines—and can limit true thinking on a society-wide scale, from neuroscience to political science, from physics to economics, or from biology to sociology. The new “truth” begins to dominate thinking to such an extent as to obscure its origination point in the natural phenomenon from whence it came. In this case, the latter has already been “enframed” into the condition Martin Heidegger called “standing reserve”—something whose tangible, unique existence as an existent is “invisible,” yet which is used as an instrument or commodity for a human, or for humanity.

Here’s a specific example: a scientist or philosopher is discussing sensory systems or thought processes—the eye or the ear, and the brain’s operation—and casually reverses the arrow of signification on us the readers:

“The eye is a remarkable optical instrument.”

or

“The brain parallel-processes a billion synaptic firings a second in its computation and algorithmic input-output.”

Such seemingly innocuous statements, made repeatedly over the course of an article or book, can gently abuse the “metaphor” until we begin to actually conceive the eye as an optical device or the brain as a computer. We then conceptualize the eye as a kind of device that evolved for the specific purpose of seeing. This is the wrong way to characterize it. “Seeing” as a phenomenon takes place within and is only a part of consciousness; consciousness includes the contents of “that which is seen.” The scientist’s narrow focus on a holistic event, as phenomenologists have attested, cannot be separated from the entire act of seeing. The “act of seeing” must be “bracketed” by the scientist as a specific type of biological activity limited solely to the mechanics of the eye for them to get away with the reversal. The boundaries of the act of seeing cannot be delimited as we commonly understand existence; we may use the term in a variety of ways, as related to visual phenomena or, as Plato used it, as a metaphor for comprehending something abstractly. “Seeing” and “seeing-as,” as the Heidegger of Being and Time might have put it, are primordial to human being in any given situation, and all such situations defy conceptualization.

This is a difficult conceptual difference to convey, but it is vitally important. So let’s take a robotic example. An artificial device that is structurally identical to the eye will perform the function of translating photons striking an “outer appearance” and bouncing into its iris into identical patterns on an “inner screen” or representation of whatever impinges upon its outer surface—and its fidelity to those patterned photons is our standard of how well it performs its job as an eye (this “inner screen” or “theater stage” is the poor metaphor we have, by default, used for centuries). Yet we as observers of the overall isolated “seeing” system of the device have no way of fully measuring the originating phenomena at which it “looks”—the pattern of photons external to it that it is “looking at,” and the human eye’s own ability to encompass this same external area—no way, that is, to have a criterion of identity as exact and unambiguous as our scientist-writer would have us believe. Such a criterion of fidelity is entirely rough and depends, like it or not, on quantum phenomena in the eye, the optical center of the brain, and the “outer” world conceived as a bounded system subject to probability. The artificial eye we have constructed and are observing is an isolated system with reference only to capturing the patterned photons. Its fidelity and “what it makes of the scene” can in no way be known.

On a cultural level, this reversal of signification is taken as a given. It is deployed/disseminated from one discipline or profession of discourse into another and thus begins to shape the thinking of society on a wide scale.

Concepts from neuroscience find their way into politics, from business management to journalism, from biology to music, etc. None of this metaphor-making is wrong, per se; it is just that we should not believe any of it is literally real. The web of metaphors that is created concretely literalizes them over time, making their origination as metaphors a “trace” only, or effacing it entirely.

It is quite possible that in time we will not be able to think outside our metaphors. Certainly it seems that scientists like Kurzweil cannot.

This is the situating matrix of transhumanist thought. It’s a short step from viewing biology as akin to “clockwork gears” to viewing it as a digital computing phenomenon. Everyone from neuroscientists to psychologists to physicists loves to say Descartes’s radical dualism is dead, but it’s echoing pretty loudly in transhumanist talk of “uploading consciousness.” Bollocks!

——-


Talk of uploading consciousness cannot but be based in the thorny debate over what consciousness is. That question is still unanswered and perhaps unanswerable. The reasoning runs: since some of the brain’s functions can be modeled digitally, the brain must be digital. Transhumanist rhetoric like Kurzweil’s thus assumes that it is “naturally” a digital phenomenon.

But let’s grant him that the models can mimic the conscious behavior of a sentient being. Does mimicking require consciousness? If we say yes, does that imply there is a subject of consciousness behind the behavior? Not at all. We are back to the qualia mystery and philosopher Thomas Nagel’s “what is it like to be a bat?” question.[2]

Going further, let’s strike mimic and replace it with exhibit. “Mimic,” of course, is covered as a special case under the general concept “exhibit.” But the same problem confronts us, the age-old problem of “other minds,” and we’re right back to Nagel’s questions. It wouldn’t matter whether the model is digitally constructed or a “neurosoup” designed by nanoassemblers programmed to build a wet human brain in a vat.

We can allow that there are operations in the brain that are roughly algorithmic in function. We can also allow that there are brain operations that are like the binary of digital pulses. And we can combine these two phenomena into a synthesized model.
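Such a synthesis can be sketched in miniature. The following is a textbook-style leaky integrate-and-fire toy (not a claim about how real neurons compute, and the names and constants are illustrative): the membrane potential integrates its input continuously, in analog fashion, while the emitted output at each step is a binary fired/not-fired event.

```python
def lif_neuron(currents, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron: the membrane potential v
    is a continuously varying ("analog") quantity, but the output at
    each time step is a binary fired/not-fired event."""
    v = 0.0
    spikes = []
    for current in currents:
        v = v * leak + current   # continuous integration with leaky decay
        if v >= threshold:
            spikes.append(1)     # discrete spike
            v = 0.0              # reset after firing
        else:
            spikes.append(0)
    return spikes

# Three identical sub-threshold inputs: only the accumulated (analog)
# history of the first two pushes the third over the firing threshold.
print(lif_neuron([0.5, 0.5, 0.5]))
```

Note that even in this tiny model the binary output cannot be read off from any single input in isolation; it depends on the continuous state the neuron carries between steps, which is the point being argued above.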

Suppose an advanced AI designs and builds a neurosoup brain that bypasses our human neural architecture, but its result seems to exhibit all the behavior of a conscious being. Through a vocal interface it can carry on conversations, has a sense of humor, can write poetry and even Simpsons episodes. The AI had in its “memory” all the necessary medical knowledge of the human nervous system but discovered “shortcuts,” ways to abbreviate functions in the natural design, the product of millions of years of evolution, and went ahead and dispensed with some of the architecture it evaluated as redundant. The product appears to exhibit sentience, learning capabilities, and the self-criticality necessary for us to say it might possess consciousness.

We find we cannot reverse engineer the brain it built; we can’t comprehend the complex order of operations it performs, which appear different than the human brain.


[2] Nagel, Thomas, “What Is It Like to Be a Bat?,” in Mortal Questions (Cambridge: Cambridge University Press, 1991).