Is Your Reality Just a Simulation? Calculate the System’s Capacity.

Consider the idea that, unless intelligent civilizations tend to self-destruct, or tend to lack the introspection to simulate their own origins, it is (apparently) statistically more likely that we (including you, dear reader) are a simulation – not the pinchable, free-willed "reality" we prefer to interpret our sensory inputs as. Personally I find this idea vaguely… off. Like the anthropic principle, it smells of too much… convenience. Or it might suffer the self-immolating trajectory of the ontological argument. Or the cringe-inducing conclusion that all things – including you, and a billion copies of you – must inevitably be found inside a universe that is infinite. But play along for a bit, and it is possible to roughly calculate the minimum processing capacity – or at least memory – of the system we're running within, and even to point to some evidence in favor of the proposal.

Call it the number of Planck bits. The memory size of the apparent universe is the number of distinct, still causally-linked Planck-scale cells contained in the volume we find ourselves within. If we assume the current Lambda-CDM model (Big Bang + inflation + dark energy + cold dark matter) isn't significantly incorrect, we can think of the Hubble horizon as a kind of inverted event horizon forming a causality-limiting sphere around us. That horizon marks the point at which space (or the stuff found within that space) is receding from us at the speed of light. Since nothing travels faster than light (which has also been called merely the speed of causality, since gravitational waves are bound by the same limit), nothing at or beyond this horizon can ever be reached or even communicated with.

This horizon forms a sphere around us with a radius of something like 14 billion light years (about 8 * 10^60 Planck lengths), enclosing a volume of about 1.3 * 10^31 cubic light years (roughly 2.5 * 10^183 cubic Planck lengths). If the bits of the simulation correspond to volume, the minimum memory capacity of the system is that count of Planck volumes – about 2.5 * 10^183 bits (subject to revision if additional dimensions turn out to exist).
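To sanity-check the arithmetic, here is a quick back-of-the-envelope sketch in Python. The constants are standard published values (Planck length ~1.616 * 10^-35 m, Hubble radius taken as ~14.4 billion light years); everything else is just the volume of a sphere.

```python
import math

PLANCK_LENGTH = 1.616e-35            # meters
LIGHT_YEAR = 9.461e15                # meters
HUBBLE_RADIUS = 14.4e9 * LIGHT_YEAR  # ~14.4 billion light years, in meters

# Horizon radius expressed in Planck lengths, then the enclosed
# spherical volume expressed in Planck volumes.
radius_planck = HUBBLE_RADIUS / PLANCK_LENGTH
volume_planck = (4 / 3) * math.pi * radius_planck ** 3

print(f"Radius: {radius_planck:.1e} Planck lengths")  # ~8.4e60
print(f"Volume: {volume_planck:.1e} Planck volumes")  # ~2.5e183
```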

On the other hand, if the AdS/CFT Correspondence and the holographic principle hold up, the real information content of the simulation we seem to find ourselves in would scale with the surface area of the Hubble volume – roughly 2 * 10^122 bits. This is certainly a lot of bits, but in a universe that is infinite in extent, or that exists infinitely long, such a simulation not only may occur – it will occur. More interestingly, given a large enough universe, such an information-based simulation can arise spontaneously, like a Boltzmann brain – or as the product of a Boltzmann brain. Because a simulation is composed only of information (that is, after all, the essential quality that separates a "simulation" from "reality"), the medium in which the information exists could be anything from charged particles to… cupcakes! Don't think that a simulation written in cupcakes wouldn't work, either – the time it takes for the medium to evolve from one state of the simulation to the next (e.g. the passage of one unit of Planck time within the simulation) can take any length of time in the informational medium; it doesn't matter how slowly "real" time flows outside.
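The holographic figure comes from the Bekenstein-Hawking bound: one bit per four Planck areas of horizon surface. A minimal sketch, using the same constants as the volume calculation above:

```python
import math

PLANCK_LENGTH = 1.616e-35            # meters
LIGHT_YEAR = 9.461e15                # meters
HUBBLE_RADIUS = 14.4e9 * LIGHT_YEAR  # meters

# Surface area of the Hubble sphere, then one bit per 4 Planck areas
# (the Bekenstein-Hawking factor).
surface_area = 4 * math.pi * HUBBLE_RADIUS ** 2
bits = surface_area / (4 * PLANCK_LENGTH ** 2)

print(f"Holographic bound: {bits:.0e} bits")  # ~2e122
```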

What I find really interesting about these considerations is that the quantum mechanical rules that give rise to the Planck length make a universe-simulation practical by limiting the resolution required, as opposed to a universe with infinite detail, like a fractal. A perfectly accurate simulation would not be possible in an infinitely-fine, perfect-fractal universe. You could say that our universe having this fineness-limit is itself possible evidence that we really do live in a simulation, and if we do, the Planck length may just be an arbitrary fineness-limit our "creator" chose. Perhaps 1.6 * 10^-35 meters just happens to give our simulation good-enough results while keeping it cheap enough to be meaningfully likely in the real universe that created it.

Likewise, something like the Hubble horizon is exactly what a simulation-programmer would want, to spare herself from having to simulate "everything." Looking at the way the Hubble horizon seems to have evolved over the life of our apparent universe, it could be that it – along with the apparent Dark Energy expansion – is being used to set an upper limit on the system resources the simulation needs to provide accurate results, enormous as that limit is relative to the size of our apparent universe.

A curious reverse way of interpreting the Hubble horizon and the Hubble expansion is that they might not be real phenomena happening within our simulation at all, but merely apparent ones – emergent consequences of the simulation having a computational limit. In the same way, the apparent flow-of-time would not be a real phenomenon in the simulation either, but just an artifact of the "size" (in fractions of a second) of the temporal steps our simulation happens to employ – corresponding, of course, to the Planck time…
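As a side note, if the simulation really does tick once per Planck time, we can estimate how many temporal steps it has executed so far – a rough sketch, assuming the standard ~13.8-billion-year age of the universe:

```python
PLANCK_TIME = 5.391e-44  # seconds
YEAR = 3.156e7           # seconds (Julian year)
AGE = 13.8e9 * YEAR      # ~13.8 billion years, in seconds

# Number of Planck-time ticks elapsed since the Big Bang.
steps = AGE / PLANCK_TIME
print(f"{steps:.0e} Planck-time steps so far")  # ~8e60
```

Amusingly, this is about the same number as the horizon radius measured in Planck lengths – not a coincidence, since the Hubble radius is roughly the distance light travels in the age of the universe.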

 
