Sims, All the Way Down?


According to philosopher Nick Bostrom, the odds that we are living in a simulation run by extraterrestrials or by a future “posthuman” civilization are very high, provided such simulations are possible and a few other conditional assumptions hold.

Here’s the abstract of Bostrom’s 2003 paper laying out the thought experiment:

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

First, Bostrom’s idea of the “posthuman” parallels Ray Kurzweil’s vision of what follows a technological singularity: it is mostly unpredictable from our current, limited human standpoint, but it definitely involves 1) an astounding improvement in human somatic functioning; 2) an interface with technology, probably via nanobots that eliminate disease and the aging process and give the brain unlimited access to data; and 3) the use of “non-thinking” objects/substances/processes as a substrate within which consciousness or computation could exist.[1] When all these developments occur, a bootstrapping effect could take over that sidesteps biological evolution altogether and turns humans into “substrate-neutral” entities that could conceivably use light, interstellar dust, or gas to copy themselves, compute, and travel the cosmos. Bostrom declines to specify posthuman qualities, ethics, activities, and so on, but suffice it to say they would have powers we associate with deities, even with the creator God of the monotheisms, should they choose to actualize them.

His argument turns on these contingencies: 1) whether humanity survives to a level where a technological singularity is possible; 2) whether humanity passes successfully through the period of singularity; 3) whether substrate-neutral consciousness/computing is possible; 4) whether our posthuman descendants could use technology to run simulations for research purposes or “what-if?” scenarios (possibly a near-infinite number of them); and 5) whether they would instead declare such simulations unethical outright and never run them, or simply grow bored of them.

Bostrom admits that none of these possibilities is a necessity. He nevertheless proceeds with the thought experiment on the assumption that one of the three propositions is true, and settles on the third as the most likely, because our descendants, not having blown themselves up, would probably try to simulate the past if it were possible.

Bostrom invokes an indifference principle to estimate whether we today would find ourselves inside one of these simulations. John Rawls used a version of this idea in the famous thought experiment of his Theory of Justice. It assigns to every agent in an imaginary society an equal probability of being born a pauper or a prince(ss), congenitally compromised at birth or a very healthy physical specimen, or anywhere between the extremes. Given your ignorance of where you might fall in the social order (your “original position,” as Rawls calls it), how would you design a society that alleviates, as best it can, the harms of being born into the “lowest,” resourceless position (if you happened to draw that lot), while minimizing the economic harm to the interests and resources of those at or near the top of the social hierarchy, who would be compelled under a principle of justice to help the less fortunate members of society? And vice versa: what would it be fair for you, as one of the elite with access to vast resources, to sacrifice in order to help those living in poverty or at a disadvantage of any kind?

Bostrom adapts this “blind lottery” scenario to the field of possible worlds we could find ourselves inhabiting at any given time. We have no firsthand knowledge of the past before our births, and we can only speculate about the future. So we must take seriously the possibility that this world, as we know it today, is a simulation run by ETs or by post-humanity.

But what could outweigh this probability?

By adopting the indifference principle, Bostrom says, nothing conceivably can outweigh it. Our insufficient information about humanity’s future allows us only to estimate what is probably true, given the unknowns built into his severely restricted assumptions. Bostrom also assumes things that he doesn’t exactly spell out or engage with. He takes it as a given that consciousness is computable, using Eric Drexler’s estimates of the energy consumed per bit-operation. To this he adds AI theorist Hans Moravec’s estimate of the processing power of the human brain, plus his own figures for consciousness and human memory, to arrive at an energy budget and cost for computing a simulation. The first objection any rational person would raise is that the energy cost of simulating even a single conscious brain in our world, much less a virtual cosmos for many “brains,” would be laughably high for us, even if it could be done for only a few seconds. There is an astronomically high probability that the energy available to ET/posthuman simulators would be finite, that these limitations would be detectable to us, that glitches in the simulation would occur as a result, and that we would already have noticed those glitches and would know we were inside a program.
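Bostrom’s paper formalizes this indifference reasoning as a simple fraction: if f_p is the fraction of human-level civilizations that ever reach a posthuman stage and N is the average number of ancestor-simulations each posthuman civilization runs, then the expected share of human-type observers who are simulated is f_p·N / (f_p·N + 1). Here is a minimal sketch in Python; the sample numbers are illustrative choices of mine, not Bostrom’s:

```python
# A minimal sketch of the core fraction in Bostrom's 2003 paper: the expected
# share of human-type observers who live in ancestor-simulations.
#   f_p    -- fraction of human-level civilizations that reach a posthuman stage
#   n_sims -- average number of ancestor-simulations each such civilization runs
# The sample values below are illustrative, not Bostrom's own figures.

def fraction_simulated(f_p: float, n_sims: float) -> float:
    """Expected fraction of observers who are simulated."""
    return (f_p * n_sims) / (f_p * n_sims + 1)

for f_p, n_sims in [(1e-6, 1.0), (1e-6, 1e9), (0.5, 1e6)]:
    print(f"f_p={f_p:g}, sims per civilization={n_sims:g} -> "
          f"fraction simulated = {fraction_simulated(f_p, n_sims):.6f}")

# Unless posthuman civilizations almost never arise or almost never simulate,
# the fraction rushes toward 1 -- which is the force of the indifference move.
```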

Bostrom sidesteps this in two ways. First, he says the “machines” powerful enough to simulate a universe could also easily and instantly rewrite our memories whenever a game-giving-away error occurred (in other words, a “smoothing over” algorithm is always at work). Second, he backs up this failsafe, and addresses the whole problem of energy use, by playing with Drexler’s conjecture that the quantum energy “pool” of entire planets could be harnessed to power nanocomputation in biological, silicon, or any other available substrate.[2] Remember, we’re talking about intelligences “indistinguishable from magicians” or gods, able to hack the “computational power” of molecules. By this reckoning, simulating entire worlds for billions of equally simulated conscious observer-beings (like us) would be easy.

That is one possibility. Another is that the rules of entropy we perceive as the “arrow of time,” along with space-time physics itself, are simply part of the simulation’s programming. As in The Matrix, they are ultimately unreal. If the simulators in fact have access to infinite energy sources, they could program a world to have any physical rules they want.

Ironically, this scenario implies a high probability of an afterlife. After all, we aren’t really living if we are simulations. We could be “deleted,” but what would that condition amount to? Our forms, as energy patterns in “time,” could just as easily be resurrected, or even go on to a simulated afterlife.

In other words, all the conjectures of religion and mystics could in fact occur as “realities” inherent to the simulation.

Or they could have arisen spontaneously in our human minds as a byproduct of glitches: “anomalies” involving, say, an encounter with a massive “spacetime programming error” that Muhammad’s “mind program” interpreted as the angel Gabriel.

And suppose a whacked-out theory such as the one claiming millennia-old sounds or images can be imprinted as “stone recordings” is true in this sim (that is, there are programmed rules allowing such a phenomenon to occur), or that ghosts are actually “etheric recordings” of people who once lived (that is, who once inhabited the “level one” program of a life on earth as a person)? Or that UAP actually are energy forms/craft not indigenous to this part of the earth-program? If we do in fact live in a simulated universe, all the debunkers’ bets against the paranormal being real are off. The division between the normal and the paranormal is senseless in such a universe.[3]

If we are simulations, the chance that there is a near-infinity of parallel simulations differing from ours by only a few electrons (the multiverse conjecture) means that one conscious “monad” (me or you, say) could conceivably cross over into a parallel simulation, or be moved from one to another, without our ever being cognizant of it.[4] Bostrom acknowledges all this in his paper:

The possibility expressed by alternative (3) is the conceptually most intriguing one. If we are living in a simulation, then the cosmos that we are observing is just a tiny piece of the totality of physical existence. The physics in the universe where the computer is situated that is running the simulation may or may not resemble the physics of the world that we observe. While the world we see is in some sense “real”, it is not located at the fundamental level of reality.

It may be possible for simulated civilizations to become posthuman. They may then run their own ancestor-simulations on powerful computers they build in their simulated universe. Such computers would be “virtual machines”, a familiar concept in computer science. (Java script web-applets, for instance, run on a virtual machine – a simulated computer – inside your desktop.) Virtual machines can be stacked: it’s possible to simulate a machine simulating another machine, and so on, in arbitrarily many steps of iteration. If we do go on to create our own ancestor-simulations, this would be strong evidence against (1) and (2) (remember: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof)), and we would therefore have to conclude that we live in a simulation. Moreover, we would have to suspect that the posthumans running our simulation are themselves simulated beings; and their creators, in turn, may also be simulated beings.

Reality may thus contain many levels. Even if it is necessary for the hierarchy to bottom out at some stage—the metaphysical status of this claim is somewhat obscure—there may be room for a large number of levels of reality, and the number could be increasing over time. (One consideration that counts against the multi-level hypothesis is that the computational cost for the basement-level simulators would be very great. Simulating even a single posthuman civilization might be prohibitively expensive. If so, then we should expect our simulation to be terminated when we are about to become posthuman.) (italics and emphasis added)

So Bostrom and others have spoken of simulated universes within which the inhabitants run further simulations of their own. We have the possibility of a simulation within a simulation within a simulation: sims all the way down.
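Bostrom’s stacked “virtual machines” are easy to picture as recursion: each level of reality spends part of a finite compute budget on a child simulation, which can do the same until the budget runs out. A toy sketch follows; the budget-halving rule is my own illustrative assumption, not anything in Bostrom’s paper, and it simply makes the hierarchy bottom out, echoing his point about basement-level cost:

```python
# A toy model of "sims all the way down": each level of reality keeps half of
# its compute budget for itself and grants the other half to a child
# simulation, until the budget is too small to simulate anything further.
# The halving rule is an illustrative assumption, not Bostrom's.

def nested_levels(budget: float, level: int = 0, min_budget: float = 1.0) -> int:
    """Return how many nested simulation levels a given budget can sustain."""
    if budget < min_budget:
        return level  # too poor to host another level: the hierarchy bottoms out
    return nested_levels(budget / 2, level + 1, min_budget)

print(nested_levels(1_000_000.0))  # -> 20 levels before the stack bottoms out
```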

This idea of simulations and levels of simulations parallels the ancient Gnostic idea of a corrupt, illusory cosmos created by an evil archon and policed by demons. Perhaps the simulators are looking for those “simpeople” intuitive or smart enough to figure this out and find channels out of the simulation to communicate with them, Truman Show-style, and such people will be rewarded somehow, like a rat in a maze, perhaps with a continuity of sorts after their “death.” This would be one of the primary reasons posthumans or ETs would want to simulate human life, or something like it: to see whether simulations can become self-aware, cubed. In that case, we’d be like the most adaptable denizens of John Conway’s Game of Life, and might get kicked up a level (ascended) into a more complex “dimension” of the simulation.[5]
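Conway’s Game of Life, invoked above, is itself the simplest demonstration that a few fixed rules imposed “from outside” can generate persistent, mobile structures. Here is a minimal implementation of one generation of its update rule, applied to the classic glider pattern:

```python
from collections import Counter

# One generation of Conway's Game of Life: a live cell survives with 2 or 3
# live neighbors, and a dead cell becomes live with exactly 3. Cells are (x, y)
# coordinates stored in a set.

def step(live: set) -> set:
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# The classic glider: after 4 generations it reappears shifted by (1, 1),
# crawling forever across the grid under rules it did not choose.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))
```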

—————-

Bostrom’s simulation argument still minimizes the question of what biological forces created the simulators, if they were or are “natural” creatures in the way we the simulated perceive biology to be. If they are posthumans and have designed the “initial conditions” of our Life Game differently from their own, then we can’t know their origins, and thus we can’t know our own “biological” origins either. If they designed our simulation to mimic exactly their own past cosmic initial conditions, but ran a multitude of iterations of them, we would still happen to find ourselves in the one in which we evolved into sentience (the standard “anthropic principle”), but we can never know the point at which our world-simulation diverged, or diverges, from theirs and led to their supernatural evolution. It may have occurred at any point from the beginning until this very moment, or at any time between now and the dissolution of this universe/multiverse.

What we could say with certainty is that if there is an equality of form between our physical/biological rules and those of the cosmos “outside” that the simulators inhabit and evolved from, this equality could be discovered in principle.

Some (not Bostrom) say it’s pure AI that created the universe, or that there is computational equivalence between this universe and that of an omnisupercomputer. So where did the AI come from? Is it just playing a game? Denying that it must have emerged from biological beings and processes is absurd in one sense… Can we export our frail stories of “chaotic primordial soup to ordered catalysts and evolutionary ascent” to the genesis of beings with such power? Did they create our world to embody our own discovery or creation of this narrative? If not, what are the alternatives to it? That we are prisoners of a “Solid State Intelligence,” as John Lilly called it and The Matrix dramatized? This would imply they have programmed our universe to appear as if biological beings exist when in fact they don’t exist at all. Nor would geology, physics, or anything else we call “natural.”

Again, all bets are off. Bostrom has come up with quite a memetic trickster in the aether.

I’ll go Platonic here: there must be an essence to the Idea “biological” and an essence to the Idea “artificial.” Computer simulations can model biological processes but are not themselves biological, no matter how enthusiastically the AI geek packs their design with details matching what is currently known about molecules, enzymes, catalysts, and so on. To simulate means to create a copy, down to the last detail, of an original example that pre-exists the simulation. Perhaps this is why Bostrom forgoes pure AI simulators in favor of “posthumanity.” These simulators must know the rules of what we would consider biology, drawn from the Idea (which, in Plato’s philosophy, would precede any instantiation of it), in order to sim biological beings. Therefore, the simulators must at least be acquainted with real, instantiated biological entities.

Or must they? If not, what then is the difference between the simulators and a creator God(dess)?

There is none, according to Bostrom.

If our universe is a wholesale simulated creation, we have nothing with which to measure the creator(s)’ ontological status against ours. Even Descartes’s certainty fails in Bostrom’s scenario: the simulators have programmed the thinking and personality that I believe is me, and your thinking and personality, too. The simulators are beyond our categories of classification and our capacity to grasp… Is it then best, as with Pascal’s wager, to simply put faith in our biological reality and ignore the entire problem of simulation? Or should we, as Bostrom would have it, bet on the opposite? Or should we use Pascal’s wager in its original sense via Bostrom, that is, conclude it is better to believe the simulator “gods” exist and try to reach out to them somehow?

———

Wittgenstein asked himself about the possibility of eternal life in the Tractatus, and concluded by asking what problem it would even solve if we lived forever. We can adapt this to say that, even if we are the products of simulators, posthuman or extraterrestrial, our grasping the “truth” of this would entail mysteries outside our conceptions of time and space, and, as the Wittgenstein of the Tractatus would conclude, it is senseless even to consider; knowing whether or not we are in a simulation solves none of our human problems, the facts of the world as we find it.

 


[1] For a short essay on this last-mentioned technology, see the next essay on this site: “Substrate Neutrality, Nanotech, and ET.”

[2] Again, see the following essay, linked above.

[3] I’m certain some of the most rabid atheist debunkers will want to reject the simulation hypothesis on these grounds alone, even if Bostrom’s paper is a cogently argued piece of rationalism, because it threatens to destroy their belief that ours is a steady-state cosmos amenable to their scientistic dogmas.

[4] Or our consciously realizing it as a glitch. Some people call this the Mandela Effect: when all the public evidence for something recollected from one’s past differs significantly from the recollection, and all evidence one might have except one’s personal memory has been altered to conform to the present representation. We could explain this as simple and cheap misrecollection, except for the fact that so many millions of people have experienced it with regard to the same representations, and a growing community is discussing and noticing more examples. I have one myself.

[5] Again, such a conjecture must constitute the horror of horrors for the scientistic fundamentalist.

Nick Bostrom’s simulated-universe argument gives Descartes’s evil demon a headache; apparently he, too, is a simulation in René’s imagination.


Substrate Neutrality, Nanotech, and ET


There’s a new strain of old thinking going around in the transhumanist and quantum-computing worlds called “reservoir computing.” Moore’s Law of shrinking transistors and growing computing power has lately smashed headlong into a brick wall as scientists work at the nanoscale. Say goodbye to electrons bouncing around inside silicon chips, and say hello to a bucket of water:

(A team of scientists) demonstrated that, after stimulating the water with mechanical probes, they could train a camera watching the water’s surface to read the distinctive ripple patterns that formed. They then worked out the calculation that linked the probe movements with the ripple pattern, and then used it to perform some simple logical operations. Fundamentally, the water itself was transforming the input from the probes into a useful output—and that is the great insight.[1]

Reservoir computing is based on the idea that stimulating a material in the right way can recruit its natural molecular dynamics as tiny computing units:

Reservoir computers exploit the physical properties of a material in its natural state to do part of a computation. This contrasts with the current digital computing model of changing a material’s properties to perform computations. For example, to create modern microchips we alter the crystal structure of silicon. A reservoir computer could, in principle, be made from a piece of silicon (or any number of other materials) without these design modifications.[2] (emphasis added)


Some abracadabra flim-flam is possibly going on here. In the world of quantum-computing news there is always a problem with the noise-to-signal ratio: media hype (for funding purposes) vs. what pans out as an actual breakthrough. Things like reservoir computing get boosted, but most turn out to be dead ends. The water-experiment paper cited is from 2003. What has happened in the interim? The article’s author, Matthew Dale, cites his own paper, dated January 2017, on the latest RC work. The truth is that most real technological advances like these occur in the dark, in a military program or a military-funded university lab far from the media spotlight, with an average lag time of ten years from the initial breakthrough to the public revelation (if it gets revealed at all). Years can go by before such discoveries achieve some kind of societal application (if they ever do). Only then do the military-corporate patent-holders allow the breakthrough articles to hit the presses. Several years later, we begin to see widespread application in the civil domain (the internet, of course, is the primo example).

Anyway, Dale discusses how reservoir computing parallels discoveries in current “global computation” models of the brain. The “wet” aspect (no pun intended) comes into the picture in another paper he cites:

The “input layer” couples the input signal into a non-linear dynamical system (for example, water or the kinetic movement of gases) that constitutes the “reservoir layer”. The internal variables of the dynamical system, also called “reservoir states”, provide a nonlinear mapping of the input into a high dimensional space. Finally the time-dependent output of the reservoir is computed in the “output layer” as a linear combination of the internal variables. The readout weights used to compute this linear combination are optimized so as to minimize the mean square error between the target and the output signal, leading to a simple and easy training process.[3] (clarification added)

What this amounts to is using the nonlinear movements of an analog phenomenon (large-scale Newtonian nonlinear systems like the ripples in disturbed water) to perform calculating work. Dale draws parallels with the brain’s “wet” environment and how it helps process the perception of, say, a light. Specific areas of the brain have been shown to process incoming visual signals, but they receive “computing” help from the entire “wet global workspace” of the brain’s neurochemical soup.
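Stripped of the bucket of water, the recipe in the passage quoted above is: drive a fixed nonlinear dynamical system with the input, record the system’s internal states, and train only a linear readout on those states. Below is a minimal echo-state-style sketch in Python/NumPy; the reservoir size, the spectral scaling, the ridge penalty, and the toy prediction task are all arbitrary illustrative choices of mine, not anything from the cited papers:

```python
import numpy as np

# Minimal reservoir computer (echo-state-network style): a fixed random
# nonlinear "reservoir" maps the input into a high-dimensional state space,
# and only a linear readout is trained, here by ridge regression.

rng = np.random.default_rng(0)
n_reservoir, n_steps = 200, 1000

# Fixed random weights -- never trained; that is the whole point.
W_in = rng.uniform(-0.5, 0.5, size=n_reservoir)
W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the dynamics stable

# Toy task standing in for "useful output": predict a phase-shifted sine wave.
t = np.linspace(0, 20 * np.pi, n_steps)
u, target = np.sin(t), np.sin(t + 0.3)

# Drive the reservoir and collect its internal states.
states = np.zeros((n_steps, n_reservoir))
x = np.zeros(n_reservoir)
for k in range(n_steps):
    x = np.tanh(W @ x + W_in * u[k])
    states[k] = x

# Train the linear readout by ridge regression (minimizing mean squared error).
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_reservoir),
                        states.T @ target)
print("readout MSE:", np.mean((states @ W_out - target) ** 2))
```

The reservoir here is just a random matrix, but the point of the quoted passages is that it could as easily be ripples in water or vibrations in an un-etched lump of silicon, so long as its internal states can be read out.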

Parallel to this research, Randal A. Koene proposes “substrate-independent” pattern-copying of neural networks and, ostensibly, of whole biological entities. This means that one could retain the core relationships of a pattern in space-time, such as a brain, but embody it in something other than carbon-based forms. With respect to the use of the word “independent” in substrate independence, journalist Mark O’Connell writes:

This latter term, I read, was the ‘objective to be able to sustain person-specific functions of mind and experience in many different operational substrates besides the biological brain.’ And this, I further learned, was a process ‘analogous to that by which platform independent code can be compiled and run on many different computing platforms.’[4]
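The “platform independent code” analogy in that passage can be made concrete with a toy example: the same pattern (a state-update rule) produces the same history no matter which “substrate” executes it. This is purely illustrative, with names of my own invention; it is not Koene’s actual framework:

```python
# Toy illustration of substrate independence: the same "pattern" (a state-
# update rule) yields the same result on interchangeable substrates. The
# class and function names here are invented for illustration only.

class CPUSubstrate:
    """Runs the pattern step by step, like a conventional computer."""
    def run(self, pattern, state, steps):
        for _ in range(steps):
            state = pattern(state)
        return state

class TracingSubstrate:
    """Stands in for some exotic physical medium; also logs every state."""
    def __init__(self):
        self.trace = []
    def run(self, pattern, state, steps):
        for _ in range(steps):
            state = pattern(state)
            self.trace.append(dict(state))
        return state

def mind_pattern(state):
    # The substrate-independent part: only these relationships matter.
    return {"tick": state["tick"] + 1, "value": state["value"] * 2}

start = {"tick": 0, "value": 1}
print(CPUSubstrate().run(mind_pattern, dict(start), 5))
print(TracingSubstrate().run(mind_pattern, dict(start), 5))
# Both print {'tick': 5, 'value': 32}: the pattern, not the medium, fixes the outcome.
```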

Koene’s work is funded by the Russian millionaire Dmitri Itskov, founder of the 2045 Initiative, whose goal is “to create technologies enabling the transfer of an individual’s personality to a more advanced nonbiological carrier, and extending life, including to the point of immortality.”

If these avenues prove successful (and that’s a big if), couldn’t some other civilization that long ago discovered matter’s computable properties already have “hacked” the space (the zero-point field and/or space dust/gases) between stars/planets to transmit information over vast distances at the speed of light? Could they send their own DNA (or “substrate-independent” copies) as coherent pulses of light over these distances? Or even “instructions” to build/grow vehicles from the elements found in the “dust” and gases in transit along the way, or at the beams’ destination solar system? The idea of sending information via photon streams is not far-fetched, and has recently been hypothetically advanced enough to be testable: https://www.livescience.com/61993-quantum-message-double-speed.html?utm_source=notification


Their first problem would be to overcome the degradation that might afflict the traveling luminal signal containing the “shipbuilding” information. Suppose an ET civilization around Alpha Centauri shot a massive series of photon beams (lasers) from their home planet, encoded with information, instructions folded within instructions, using DNA or its “photonic substrate equivalent” (I choose this presumably uninhabited system simply because it’s closest to us). Primary among the beam’s instructions is the maintenance of the microscopic nano-assembling units it will create upon reaching its destination. The beam is structurally designed to draw energy from the photon/electron streams emitted by the gas clouds it passes, to surmount its own tendency to disintegrate. Or perhaps it draws energy directly from the “quantum foam” of Planck space, essentially recreating itself continuously as it moves along, like the cells that continuously replicate within a biological body.

As it nears its target, say a billion miles out from our sun, it begins to accumulate particles of interstellar and intrastellar dust and, as its mass increases, slows significantly. Upon arrival at our solar system, it interacts with the sun’s magnetic field and “stops.” Let’s say by this time it is the size of a baseball. Its first programmed task is to gather enough stray material (gases, dust, particles) that its form (a large “dot” at this point) attains significant mass, just as planets are supposedly formed. This means it needs to induce an “eddy” of centripetal motion in the magnetic field to form a “core.” Simultaneous with this self-creation is the manufacture, as it grows, of nano-assembler units that function like microscopic “bees.” Over time, the dot becomes a sphere, say a quarter the size of the Moon, and the “bees” grow into forms ranging from the microscopic to the size of a VW. Its instructions continue to unfold, the bees working away, differentiating the parts of the sphere’s chemical-metallic form just like the ontogeny of a living creature in the womb. The parts begin to function and interact like a large-scale machine. It creates for itself a power plant that works either like its transit method (drawing on energy inherent in Planck space) or on solar or nuclear power, or all of these combined.

When its self-assembly is complete, it contains an “incubatorium.” Using records of its “parent” race’s DNA, it begins to fashion, from the atoms and molecules up, replicas of its biological parents. These beings are not alive in the sense we normally think of; they are essentially cyborg copies of their parents.

Or maybe the parents have decided to dispense with their biological form altogether and to create their surrogates as inchoate energy patterns capable of taking on any form, like the Organians on Star Trek. The biologically modeled “ship” and its “crew” now begin to investigate our solar system, continuously sending information back to Alpha Centauri at light speed: an operation that took a mere five-odd years, not the hundreds of thousands of years ET debunkers always say it would take for another civilization to reach us at subluminal speeds.
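For scale, here is a rough back-of-the-envelope comparison between a light-speed signal from Alpha Centauri (about 4.37 light-years away) and a conventional probe moving at roughly Voyager 1’s cruise speed; the figures are approximate:

```python
# Rough one-way transit times from Alpha Centauri (~4.37 light-years away).
# All figures are approximate, for scale only.
LIGHT_YEAR_KM = 9.461e12
SECONDS_PER_YEAR = 3.156e7

distance_km = 4.37 * LIGHT_YEAR_KM
speeds_km_s = {
    "light-speed beam": 299_792,      # speed of light
    "Voyager-class probe": 17,        # roughly Voyager 1's cruise speed
}

for name, v in speeds_km_s.items():
    years = distance_km / v / SECONDS_PER_YEAR
    print(f"{name}: ~{years:,.0f} years one way")
# light-speed beam: ~4 years one way
# Voyager-class probe: ~77,000 years one way
```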

Now suppose this ET civilization shot millions of these beam-clusters into space, in all directions, towards every star system containing “M-class” planets. It would have AI outposts all over the galaxy.

This is one far-out-there hypothetical scenario. But doesn’t the information-only, “trans-life” substrate hypothesis that scientists like Koene are working on imply that signs of, or representatives of, extraterrestrial intelligences could be around us in a myriad of camouflaged forms without our knowing it? An aggressively symmetrical tree? A strange meteor? A weird patch of fog? A quivering blade of grass? The octopus?

 

 


[1] https://theconversation.com/theres-a-way-to-turn-almost-any-object-into-a-computer-and-it-could-cause-shockwaves-in-ai-62235 citing Fernando C., Sojakka S. (2003) Pattern Recognition in a Bucket. In: Banzhaf W., Ziegler J., Christaller T., Dittrich P., Kim J.T. (eds) Advances in Artificial Life. ECAL 2003. Lecture Notes in Computer Science, vol 2801. Springer, Berlin, Heidelberg

[2] Ibid.