The Quantum Zeno Effect on the DeSitter Horizon


The Quantum Zeno Effect on the DeSitter Horizon
Dr. WJ Bray, DOD

This is Section II of a series describing the temporal environment of DeSitter space, the Markov Processes, Entanglement, Complexity, the Holographic Principle, the ER=EPR problem, and Black Hole dynamics, all in relationship to the QZE and Inverse QZE (or, as I prefer, the Quantum Anti-Zeno Effect, QAZE). Section I was a series of historic attempts at definitions [See DESCRIPTIONS AND DEFINITIONS OF THE QZE AND ANTI-QZE, wjb]. Please note I tend to use in-line references rather than displace them at the end of the paper.

Before proceeding, a description of the Quantum Zeno Effect, in a context that has not yet been described, is required. The reason I say 'hasn't yet been described' is that although people have measured it, observed it, and even used it in industrial processes, no one has yet rendered a description of it other than 'the alteration of the progression of unitary time by observation.' Here, unitary time refers directly to the Planck Flow, the progression of one Planck interval of time to the next, which also bears no description. The relationship to Holographic Theory, Conformal Field Theory (CFT), and our models of emergent space-time is absolute. I will describe all of these things and their interdependency in this section. This section will derive exactly what the QZE and Anti-QZE are, why they occur, how they occur, and every possible detail. Much of this work pulls from personal lecture notes from Susskind, Bousso, 't Hooft, Verlinde, Thorne, Wilczek, and even some very old notes from lectures by Wheeler. Since each of their lectures (on multiple occasions over the past 20 years) pulls on several of their own papers as well as papers from others, it is difficult to directly cite the source of some of these derivations.

The first 30 pages or so are introductory material, an agreed-upon scaffold for the argument. The greater (later) portion of this section describes the Tensor Network Complexity as it occurs behind the Schwarzschild Horizon, which we find is the surface of DeSitter space-time. The surface of DeSitter space-time is right now, and every Planck interval prior is below the DeSitter Horizon, which translates to a Schwarzschild Horizon (Holographic Principle); this is derived in full. The Conformal nature of the surface of DeSitter space-time is such that our locally quantized meter stick (on the surface of DeSitter space-time) is constantly changing, with everything below the DeSitter surface remaining fixed (in the past). As a result, everything we measure is below the DeSitter Horizon and has a different local quantization than our locally quantized meter stick on the DeSitter surface. The resolution of many 'paradoxes' lies in the proper Conformal Transformations according to the growing Tensor Network Complexity as it progresses via a Markov Process. As such, unfortunately, the first 30 pages or so of this section will be cumbersome, even annoying background, restating the obvious, but necessary as a scaffold for the argument.

One of Zeno's most famous paradoxes (see Zeno's Paradoxes) is that of an arrow in flight. Zeno states that if you look at an arrow at any given instant in time, it is not moving. So how then can it be moving, when it is not moving? That is, in modern terms, if you had a camera that could take a snapshot of an arrow in flight with an infinitely fast shutter, the arrow would be stationary.
The argument that you add up an infinite number of snapshots, each taken at an infinitesimal slice of time (historically referred to as a Riemann sum), is an error of calculus that does not apply here. The problem in Zeno's argument is that if the arrow is not moving in one frame (snapshot), how can it get to the next frame (snapshot)? You would be adding up an infinite number of snapshots of the arrow in the same position.

Aristotle. "Physics," in The Works of Aristotle, Vol. 2 (R. P. Hardie & R. K. Gaye, trans.; W. D. Ross, ed.). Oxford, UK: Clarendon, 1930. Accessed 14 October 2015.
Laertius, Diogenes (c. 230 CE). "Pyrrho," Lives and Opinions of Eminent Philosophers, Book IX, passage 72. ISBN 1-116-71900-2.

If you apply Zeno's statement to the smallest allowable slice of time, 10⁻⁴⁴ seconds, the rule still applies: 2,500 years later, the arrow is still not moving. Somehow, it gets from one Planck instant to the next. Our 'projector' running at 10⁴⁴ frames per second is somehow ticking from one frame to the next with nothing in between. Furthermore, there is no connection or 'wiring' between the frames to transfer energy or information. It seems that each 10⁻⁴⁴ seconds is entirely annihilated and appears, with no continuity whatsoever, in the next frame in a state that has progressed by 10⁻⁴⁴ seconds. There is no continuity or known process by which energy and information are transferred between these intervals. The arrow should be frozen in space-time for infinity. We call this continuity the Planck Flow, and there is no known phenomenon that describes how or why it happens. By all accounts, time should not flow, but it does, and we don't know why. In 1977, George Sudarshan and Baidyanath Misra of the University of Texas discovered that if you continually observe an unstable particle, it will never decay.

Misra, B.; Sudarshan, E. C. G. (1977). "The Zeno's paradox in quantum theory". Journal of Mathematical Physics 18 (4): 756–763.

Sudarshan and Misra showed that as you take those measurements at smaller and smaller intervals, thinner slices of time, in effect a measurement rate so rapid that it approaches continuous observation, the observed system becomes frozen in time, or slowed to a near stop: if you never take your eyes off it, nothing changes. Put simply, the Quantum Zeno Effect is the suppression of 'unitary' time by constant observation:

'The Quantum Zeno effect is the suppression of unitary time evolution caused by quantum Decoherence in quantum systems provided by a variety of sources: measurement, interactions with the environment, stochastic fields, and so on.' [T. Nakanishi, K. Yamane, and M. Kitano: Absorption-free optical control of spin systems: the quantum Zeno effect in optical pumping, Phys. Rev. A 65, 013404 (2001)].

Here the term 'unitary' refers specifically to the progression of one Planck interval of time to the next, that is, a unit of time. Physics has no explanation for how one Planck interval of time (approximately 10⁻⁴⁴ seconds) advances to the next, a progression referred to as the 'Planck Flow.' Time itself is not continuous, but flickers like an old-style movie projector, yet too finely to detect with even our most advanced instrumentation. This is where the term 'Zeno' enters the term Quantum Zeno Effect. We call this the Planck Flow, the seeming continuity of space-time. To date, there is no explanation for it, nor has there been a notable attempt at one. I will explain it later in this paper. No one has any hypothesis for why the 'progression' occurs at all. There is nothing pushing or pulling at it, no force of nature we know of. By every standard and principle in physics, time should stand still at that one interval.

Before you even consider continuous time as a solution, note that continuous time means infinitely divisible. The problem now is that by making time continuous, you have increased the difficulty of the Planck Flow by a factor of infinity. There is no reason why infinitely divisible time would progress any more readily than quantized time. As stated, it would be more difficult by a factor of infinity. In addition, for time to progress under these conditions by one infinitesimal would require infinite information and as such infinite energy; the system would be infinitely complex, infinitely entangled, and would collapse into a Black Hole, by definition. This has been referred to as the self-energy problem in physics. A simple example is Newton's law of gravitation: when the distance between objects is zero, the denominator is zero, and Force is infinite. This is a common problem with quantum approaches to gravitation. The (virtual) photon which mediates the EM forces, however, is quantized, and therefore does not suffer this problem. In 1955, Wheeler quantized space-time in the study of Gravitational Wave hypotheses. There is nothing that suggests time (and space) are infinitely divisible. As such, I will proceed using Planck intervals.

In this quantization, we see the Markov processes, Cayley trees, Conformal Field Theory, Information Theory, the Holographic Principle, Quantum Entanglement, Superposition, and so on all tied together in this, as Verlinde has noted, emergent space-time. Furthermore, the Tensor Networks and Complexity increase (as discussed, in agreement with Rovelli et al.'s hypothesis) behind the Schwarzschild Horizon, where they are unobservable. The Bohr electron, as I stated or will state elsewhere, is proof that space-time is quantized. If space-time were continuous, smooth, infinitely divisible, the electron would have to pass through an infinite number of states to get from one orbital to the next. It is written in the hardest stone that it does not happen that way. We look at the complexity and entropy of the temporal aspect of a wave function, considering linear time to go one of two directions; we have the same information complexity issue with continuous time, namely:
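With two available temporal directions per interval, the simplest counting (and the one consistent with the n ≈ 600 figure used below) doubles the available states each Planck interval:

$$C(n) = 2^{n}$$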

In this case, that equation (due to its runaway magnitude) says that in 600 Planck intervals of time, the system would be more complex than all of the Planck volumes of space-time in the visible cosmos. Even though we are arguing continuous time, we measure n not by an infinitesimal, as this is non-sequitur, but in Planck intervals. It is here that I introduce the term Standing Interval. That is, there is this Standing Interval of 10⁻⁴⁴ seconds; the immediately prior interval of 10⁻⁴⁴ seconds ceases to exist altogether. That is, the past ceases to exist. Let's be clear on this. The past has ceased to exist; you cannot reach out and grab it or touch it, and you absolutely cannot affect the past in any way by anything you can do now. That concept doesn't just apply to billions of years ago; it applies to just 10⁻⁴⁴ seconds ago. You cannot affect the planet Theia as it crashes into Earth 4 billion years ago, wiping out the first single-celled life as it just begins to form. You cannot go back in time and warn yourself not to marry him/her 20 years ago. You cannot affect 10⁻⁴⁴ seconds ago. Neither does the future exist, yet. There is nothing for time to progress toward, absolute nothingness. The idea that you can affect the future is an illusion of thought. This Planck interval changes, or rather, will change, with an amount of uncertainty. There is no 100% Certainty of any action now affecting the outcome in the future, not even 10⁻⁴⁴ seconds from now. All that exists is a tiny sliver of time exactly 10⁻⁴⁴ seconds in width. The past 10⁻⁴⁴ seconds has ceased to exist; the next 10⁻⁴⁴ seconds does not exist, yet. This has bizarre implications for the concept of the wave function being smeared out over many Planck intervals, which I will get into later.
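As a quick back-of-envelope check of that claim (the Planck length and the radius of the observable cosmos below are assumed round values, not figures taken from this text):

```python
import math

# Compare 2**n at n = 600 Planck intervals against the number of Planck
# volumes in the visible cosmos. This is my own sanity check, not the
# missing display itself; the two constants are assumed round values.

L_P = 1.616e-35      # Planck length, m
R_OBS = 4.4e26       # radius of the observable universe, m (~46.5 Gly)

planck_volumes = (4.0 / 3.0) * math.pi * R_OBS**3 / L_P**3
states_at_600 = 2**600                        # exact Python integer

print(f"Planck volumes in the visible cosmos ~ {planck_volumes:.1e}")        # ~8e184
print(f"2**600 temporal states               ~ {float(states_at_600):.1e}")  # ~4e180
# The two figures agree to within a few powers of ten, which is the sense in
# which ~600 Planck intervals of doubling exhausts the visible cosmos.
```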

Any notion that the Planck interval 10⁻⁴⁴ seconds ago, or the one 10⁻⁴⁴ seconds ahead, exists is dismissed. As I will describe a bit further on, we live on the surface of DeSitter space. The surface of DeSitter space is 1 Planck interval of space-time (1Lp/1tp), and nothing exists beyond the surface of DeSitter space, nor does anything actually exist below it. The only reference we have to any 'nodes' (which will also be defined a bit further on) in the past, or below the surface of DeSitter space, is information carried forward by real photons (NOT virtual photons). No other reference to what exists below the surface of DeSitter space exists, nor is it possible. Also, note that I have deliberately excluded (in all caps) virtual photons, which make up the electric and magnetic field components of the real photon. These virtual photons have mass, and as such cannot travel at v = c, and so experience, and are slave to, the properties of space and (the flow of) time. They in fact lag behind the 'real' photon, and, having different masses (as I will derive via the electric permittivity and magnetic permeability constants), go out of phase with each other, as well as dragging far behind the real photon they are associated with. The electric and magnetic field components are the only means by which we know of the real photon's existence, and by no other means can we possibly detect a real photon. A real photon does not exist in, nor does it experience, space or time. It is improperly defined in physics, an issue I will remedy in this text. A real photon is superpositioned at the very least (has the greatest probability) at its origin and destination, regardless of time or distance as we experience those aspects of 'space-time.' To a real photon, the entire cosmos is an infinitesimal. That which connects all of DeSitter space, from the origin of the Big Bang (I'm not getting into the debate of which model here) to the surface (the present), is the real photon. Thus, the only thing that is real in this cosmos is the photon; all else is, in fact, an infinitesimal.

Larson, Calculus, 9th Edition, Theorem 3.10 [You will see this as many times as I have to repeat it]
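Theorem 3.10 states, roughly, that for any real constant c and any positive rational power r,

$$\lim_{x \to \infty} \frac{c}{x^{r}} = 0.$$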

Given that when r = 1, it is just c/x.


Only photons carry the 'information' regarding the past forward in time, as photons do not experience time or distance; rather, they seem to exist on the edge of the space-time domain, not in it. I picture the Standing Interval like a card standing on end, 10⁻⁴⁴ seconds thick, or 10⁻³⁵ meters thick, take your pick. It is shedding 'cards' into the past, which become photons in its wake, carrying the information of that state into the present. Yet the card is not 'moving,' or progressing. In front of it is pure, absolute nothingness. There is nothing to 'progress' toward; absolute nothingness. The future does not exist; you have to get that idea out of your head. Neither does the past exist; the only record of it is via photons, which do not technically exist in space-time.

When the term Planck Flow is mentioned, immediately the mind goes to something like water progressing forward from one place to another. You can capture a slice, a Planck Interval, but like a river, we think of it as moving. This is not the case. It is better to think of an old-style movie projector, clicking along frame by frame. In reality, the only frame that exists is the one that is being projected. There are no frames ahead of or behind the current frame.

Time exists in a state of superposition for reasons that have not been properly explained in light of information theory in Black Hole physics and Holographic Theory. In Black Hole physics and Holographic Theory, time is not a valid dimension. We use Conformal Field Theory (CFT, as in AdS/CFT) to reduce any number of dimensions down to n = 2, like a flat projection map. As an example, a Möbius strip is often described as 'infinite in length.' However, like a ring, it is not of infinite length. If one marks the starting point, it is exactly the length of its 1-dimensional counterpart. The 'infinite length' is an illusion of eyesight. A ring, for instance, is not of infinite circumference; the Möbius is merely a twisted ring. Yet I have seen entire models of the cosmos based on the Möbius as an infinite manifold, expanded out to a plethora of inter-twisted mobii.

In Black Hole physics and Holographic Theory, everything is 2-dimensional, without exception. As Verlinde has brilliantly discovered, dimensionality above n = 2 is an emergent phenomenon, or set of complex interacting phenomena, on what is referred to as a Schwarzschild surface (my own terminology). Simply, a Schwarzschild surface is like the surface of a Black Hole, where time has infinitely dilated, in effect a non-sequitur, since ours is a finite domain (you cannot fit an infinite thing into a finite domain). I will show later that in Holographic Theory, a Schwarzschild surface where time apparently is not a valid dimension is resolved by looking at entropy vs. Ordiny (from which the forces emerge) and our proximity to the Big Bang.

There is one Planck interval of time we'll focus on for a moment. There are no Planck intervals of time in front of it, e.g. the future; they do not exist yet, so it is not progressing toward anywhere; it is standing still. There cannot be any Planck intervals of time behind it, e.g. the past; the past has ceased to exist. Any reference to the past can only be made via photons, which do not exist in or experience time. The idea that the past leaves a physical record, such as a rock, is invalid. A rock is merely a plethora of wave functions; it is not real.
There is nothing about the rock that is anything more than wave functions; we only perceive it as other than a plethora of wave functions by dragging it into our human frame of reference upon observation. In fact, the only thing that describes the rock as solid is the Pauli Exclusion Principle, which is a phenomenon of wave functions. The allure of continuous space-time (smooth, infinitely divisible) is that one erroneously imagines that for some reason it will flow, rather than tick. This line of thinking fails to recognize that we have merely made an infinite number of ticks the requirement for progressing one infinitesimal degree. This, in turn, would require an infinite amount of energy over an infinite amount of time. This is not a proper Riemann sum, where in Calculus 101 we add an infinite number of infinitesimals together to get a whole number (integration). In this case, we are dealing with the Projectively Extended Real Line of Infinitesimal Calculus, because we are not dealing with a hypothetical, we are dealing with a real line. As such:
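That is, on the projectively extended real line, summing infinitely many zero-width ticks leaves the indeterminate product

$$\sum_{n=1}^{\infty} 0 \;=\; \infty \cdot 0, \quad \text{which is undefined.}$$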


In this case, we are adding an infinite number of zeros, which is undefined, not an infinite number of infinitesimals. In simple terms, if the values in question were any value other than zero, they would then be quantized to that value, not continuous. As such, either space-time is quantized or it is zero; there is no third option. The Bohr electron makes a quantum jump from one orbital to the next, not a smooth transition. It is in one orbital, then suddenly in another. If space-time were infinitely divisible, the electron would have to pass through an infinite number of states in order to transit from one orbital to another. That does not happen. That is my quantization argument.

There is therefore only one Planck interval of time: no Planck intervals of time behind it (the past; they have ceased to exist), none in front of it (the future; they do not exist yet). It has come from literally nowhere, since the past has ceased to exist altogether, and is progressing toward nothing, since the future does not exist. It is truly a Standing Interval: it came out of nowhere (its origin has ceased to exist) and is progressing toward nothing (the 'destination' is non-sequitur; it does not exist). That is the Planck Flow, a Standing Interval whose information changes. I will describe how and why that change takes place later. A Planck interval of time ceases to exist and that information is carried forward by photons (real ones, for most things), which do not exist in space-time nor experience space-time. That is why we can see distant galaxies, and by no other means.

'Time must never be thought of as pre-existing in any sense; it is a manufactured quantity.' – Sir Hermann Bondi [Paul Davies, About Time: Einstein's Unfinished Revolution, p. 21; 1995, Simon and Schuster]

Also written as: 'Time is that which is measured by a clock. This is a sound way of looking at things. A quantity like time, or any other physical measurement, does not exist in a completely abstract way. We find no sense in talking about something unless we specify how we measure it. It is the definition by the method of measuring a quantity that is the one sure way of avoiding talking nonsense about this kind of thing.' – Sir Hermann Bondi, Relativity and Common Sense: A New Approach to Einstein (1980), 65.

'Time is that which is manufactured by clocks.' [A-Z Quotes.com] His statement was verbal, not written, so it appears differently in different sources.

Keep in mind that this is exactly equivalent to stating: Space must never be thought of as pre-existing in any sense; it is a manufactured quantity. That is the demand of General Relativity. Although it is not entirely clear exactly what he said, or when and where he said it, Bondi regarded time as a cognitive construct, not as a natural artifact of this cosmos. As such, the demand in General Relativity is that space is equally a cognitive construct, not a genuine artifact of this cosmos.

In this case, we are considering our smallest allowable slice to be one Planck interval of time (designated tp), zero being an impossible state in normal space-time. As we raise the observation rate toward one observation per tp, that is, extremely rapid observations (observing without ceasing), the uncertainty of where Zeno's arrow will be in the next frame is high enough that over enough such rapid observations it appears that the arrow's flight has slowed down or stopped. Why? This goes back to the Planck Flow. The Standing Interval.
As you approach an observation rate equal to the Standing Interval's life, you will merely see that card standing on end, because that is exactly what it is doing. The Standing Interval is not progressing toward a future that does not exist, nor is it departing a past that has ceased to exist. It is a solid stone, just standing there. In short, Zeno's arrow is not actually moving. In Bondi terms, the motion is merely a cognitive construct, and so is the space through which it appears to move (as a demand of General Relativity). Keep in mind that as we lengthen the observation (acquisition) interval to greater spans of time, the arrow seems to 'move faster.' It is very important to note that we cannot see the destination (the future), as it does not yet exist. However, what we can see is the past: 'cards' shedding into the past, via the photons they leave in their wake. This 'smear' is interpreted as the flow of linear time in our cognitive world-view. The longer the smear, the faster it appears to go. A good analogy is filming at a slower acquisition rate. Assuming you play back the film at the same standard speed as usual, the slower-acquired film will appear as though everything is occurring faster. Slower filming is somewhat analogous to less constant observation. If you could actually film at a rate of one frame per Planck interval of time, motion would appear to cease, as at 1tp there is no Planck Flow. On a less ontological level, the information that describes the arrow is changing state; that does not imply time or motion. Although the phrase 'change in state' may imply a temporal component, there is no formality other than rhetoric that demands this. For instance, we discuss the Einstein-Rosen Bridge.


Where here it is shown that the length, and as such the volume, increase with time; a change in state, regardless of the fact that S and G remain fixed. This is referred to as the ER=EPR problem. The ER bridge, as it turns out, is not a shortcut through space-time, but a phenomenon of infinite depth, actually larger on the inside than it is on the outside. When Thorne et al. proposed that an Einstein-Rosen Bridge could form on a Planck scale as a result of the fluctuations within the Quantum Foam (Wheeler), it didn't occur to them at the time that the Planck-scale ER bridge would be of infinite depth. [Morris, Michael; Thorne, Kip; Yurtsever, Ulvi (1988). "Wormholes, Time Machines, and the Weak Energy Condition". Physical Review Letters 61 (13): 1446–1449. doi:10.1103/PhysRevLett.61.1446.] [Wheeler, Spacetime Foam, 1955; "Quantum Foam", New Scientist, 29 June 2008.] However, the only two components are S (entropy), which is maxed out at infinity and cannot change, and G (the Gravitational Constant), which is supposedly fixed and does not change. Again, the magnitude and speed at which this thing grows in length and depth is super-exponential:

As noted, my calculations have it such that at n ≈ 600 the number of qubits would fill every Planck volume in the visible cosmos, 10¹⁸⁰ qubits of information. The time to reach n = 600 would be 600 Planck intervals of time. Again, the ER Bridge is no shortcut. This is Holography, not my math. My point is, the ER Bridge is changing state in this super-exponential, runaway fashion while everything on the right hand side of the equation (S × G) is presumably fixed. Furthermore, the |ψ⟩ is the cause of this runaway exponential growth; however, f(i) is SG, which are presumably fixed values.

That is, the more rapid the observations, the greater the Uncertainty (not Heisenberg Uncertainty) of the arrow's next state in the next frame. Eventually, your observations become so rapid (continuous) that the Uncertainty of the arrow's next state in the next frame forbids its progression altogether, and the arrow ceases to progress. During this dive into increasingly more rapid observations, the growing Uncertainty of the arrow's next state in the next frame causes it to stall in its progression, causing the arrow to slow down. That is, as our measurement (observation) rate approaches the Standing Interval of 10⁻⁴⁴ seconds, which applies to both the observer and the observed system, which are isolated from one another in time because of distance [1 Planck length is equal to 1 Planck interval of time], both the observer and the observed system reveal themselves as a Standing Interval: no Planck Flow. Time comes to a standstill. We can make a simple statement such as:

$$\lim_{r \to t_p} f(O_r) = SI$$

Where r is the rate of measurement, tp is 1 Planck interval of time, f(Or) is the function of observation, and SI is the Standing Interval. Again, as the observation rate approaches one Planck interval of time, the observed system approaches the Standing Interval, and the Planck Flow slows until r = tp, at which point the Planck Flow 'ceases to flow' and becomes a Standing Interval. At the point where the Planck Flow ceases, and reveals itself as what it is, a Standing Interval of time, one Planck interval of time that is unchanging, the uncertainty, in every sense of the term (mathematical and philosophical), becomes infinite. Zeno's arrow ceases to move.

It is important to note here that although Heisenberg Uncertainty seemingly limits infinite uncertainty by the notion that Δt has a lower limit of the Planck interval, this is not the case, as the Planck interval has become the Zero Point in the system, provided it remains in normal space-time. In this case, the Zero Point is literal, and tp has effectively become zero in normal space-time, as we consider it now to be the Standing Interval, unchanging. Again, however, the Uncertainty I am using here is not to be confused with Heisenberg Uncertainty. The HUP is not designed for a system where time is not a valid dimension and/or where time, as a Planck Flow, has ceased to 'progress,' that is, to change state. My reference here is stating that if the HUP were valid under these conditions, where the progression of the Planck Flow is artificially altered, the nature of the wave function becomes something we have not seen before. The HUP cannot describe the wave function as a Standing Interval, as the HUP is a time-dependent description. In the simplest sense, Planck's constant is in joule-seconds; the SI units work out to kg·m²/s. In the denominator we have 4π, a dimensionless irrational number with no defined endpoint. This leaves the HUP itself as having no defined domain on the right hand side of

the equation. The HUP describes the wave function (which becomes non-sequitur upon detection, resulting in a 'particle') as a distribution of superpositions of space and time (velocity; momentum), the minimum width of which is 4π, two wavelengths. Two wavelengths are necessary to describe a wave function as being in another position, rather than somewhere else along the wave function. I will go into greater detail on this later.

As we raise the observation rate slowly toward tp (in integer multiples of tp only), extremely rapid observations, the uncertainty of where the arrow will be in the next frame is high enough that over enough such rapid observations it appears that the arrow's flight has slowed down. That is, the Planck Flow of observation is becoming equal to the Planck Flow of the observed system, and both equal zero, a Standing Interval. In the simplest sense, we can regard linear time, as we choose to perceive it, as a fixed playback speed of a movie projector. The acquisition rate will affect how fast the progression of the observed phenomenon appears to play.

Ultimately, as will be discussed in detail, the emergence of space-time, its geometry, and its temporal components are all the result of quantum entanglement, Tensor Networks, and their complexity and entropy. For example, a Black Hole is in a state of maximum entropy. The Tensor Networks are considered infinitely entangled. The dimension of time has dilated to the extent that it has stopped, and is no longer a valid dimension on a Schwarzschild surface. Thus, the complexity and entropy of a Tensor Network of entangled elements define the flow and progression of time, e.g., the Planck Flow.

The rate of observation of a phenomenon has an aspect that is described rather plainly in the Delayed Choice Quantum Eraser experiments by Kim et al. Time is seen to superposition to the extent that in 19th-century classical terms we would be tempted to use a term like 'causality violation.' In the simplest sense, a photon observed as either wave function or particle quanta at a detector several microseconds downstream appears to retroactively show up on a detector as the same (wave function or particle quanta) several microseconds earlier. In this, we do not consider the information as travelling backward in time; rather, the superposition of time for the information describes the information as not having a discrete locality in linear time. As a result, the rate of observation of the information also describes the width of the superposition; the available number of superpositions increases as the observation window becomes slower, wider. For example, if our observation window were 1tp, the width of available superpositions is exactly 1. If the information is not in that locality, it will not be observed at all. As we widen the window to a second, the probability that the information is in some temporal locality within that wide window is quite large, and it will likely be observed.

Entropy is when the number of available superpositions increases, like a 19th-century expanding gas. Entropy is often explained as the 'loss of information regarding the microstates of individual elements of a system.' However, I regard this as a purely technological issue. For instance, in the 19th century, when entropy referred to the thermodynamics of a gas, the limit of such information was the macro-state of an expanding system of gas molecules.
Then we can regard the loss of that information regarding the disposition of each molecule, in Susskind verbiage, as ignorance of the individual molecules' dispositions. Nonetheless, the tracking of each gas molecule is a technological issue, nothing more. I define Entropy as when the number of available superpositions increases. Ordiny, then, is when the number of available superpositions decreases. This is the emergence of Force, in the simplest sense. It is universal to all of the Forces of nature. What, then, makes the number of available superpositions increase or decrease? This is important to understand in the QZE description because time is an element that is superpositioned. This comes directly from the HUP. The HUP describes a wave function in its native state, prior to detection, as a distribution of superpositions of localities, and superpositions of the velocities (hence momenta) that got them to those superpositions of locality. Being time dependent, the momentum component is then a temporal superposition.

There are two major components to changing the number of available superpositions. In this, we are referring to a Tensor Network, and the two components are Complexity and Information Entropy. We use a Black Hole model to represent the final state of a system. This is a Holographic approach to Information Theory. In a Black Hole, entropy is in its final state; it is at its maximum value and cannot change. However, the complexity can change exponentially with exponential time. The time cannot go on for infinity; however, it is generally agreed to be a very long time, perhaps trillions of years, which is beyond the point we expect thermalization of the cosmos. The maximum Complexity is simple, given by Cmax = e^N, where N represents the number of qubits. The time-dependent Complexity, on the other hand, is given by a more complicated expression:

The maximum Entropy is simply Smax = N log 2. This makes sense: when a mass reaches its Schwarzschild radius, which takes about a millisecond, the entropy remains fixed, as per Bekenstein. The complexity of the Tensor Networks behind the face of that Schwarzschild radius, however, will continue to grow for a very long time, at an exponential rate, according to the Complexity function above, behind the Black Hole horizon. That is, N bits of binary digits, for instance 1,000 bits, can be written as a string of 1,000 zeros and ones. A Quantum Mechanical 'qubit' has

two superpositioned states for each bit, N. Therefore, rather than a simple string of 1,000 bits of zeros and ones, each zero is also superpositioned with one, and each one is superpositioned with zero. As a result, the string turns out to be 2¹⁰⁰⁰ states. That number turns out to be about 1×10³⁰¹, roughly a googol cubed. The cosmos, by my calculations, is about 2×10¹⁸⁵ Lp³. Whereas a thousand binary bits can be written on a page, a thousand qubits span 2¹⁰⁰⁰ superposed states. That is sufficient to fill every Planck volume of space-time in a googol universes. For example, whereas an Einstein-Rosen Bridge was once considered a 'shortcut' through space-time, e.g., a wormhole, it has been determined in light of information theory that the ER Bridge grows in length exponentially. This phenomenon is referred to as the ER=EPR 'paradox.'
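A quick numerical check of this counting, using nothing beyond exact integer arithmetic:

```python
# Check the size of the state space for 1,000 qubits.

n_bits = 1000
states = 2**n_bits                     # superposed basis states for 1,000 qubits
digits = len(str(states))              # decimal digits in that number

print(f"2**{n_bits} has {digits} decimal digits")    # 302 digits, i.e. ~1.1e301
print(f"2**{n_bits} ~ {float(states):.2e}")          # 1.07e+301

# The visible cosmos was estimated above at roughly 2e185 Planck volumes,
# so 2**1000 basis states exceed it by more than a hundred orders of magnitude.
```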

Here, we have the length, and as such the volume, increasing exponentially with time, yet the values on the right, S (entropy) and G (gravitational constant), remain seemingly fixed. The reason the length grows is that the Tensor Networks (every bit becoming increasingly entangled with every other bit) do not stop below the Schwarzschild threshold, because the formation of the Tensor Networks (entanglement) is not time dependent; rather, it is time independent. The 'paradox' in this ER=EPR problem is resolved by looking at G (the gravitational constant). As I discuss elsewhere in this paper, G is readily a function of time, G′, as time is subject to Lorentzian (Special Relativity) and Schwarzschild (General Relativity) transformations. We look at G′:

G = 6.67384(80)×10⁻¹¹ m³/(kg·s²)

Substitute t′ for s:

G′ = 6.67384(80)×10⁻¹¹ m³/(kg·(t′)²)

Here, we consider two cases for t′: one for velocity of recession, which I use in Physical Cosmology, and the other in Gravimetrics. In Special Relativity, t′ is defined as:

$$\pm t' = \frac{t}{\sqrt{1-\left(\frac{v}{c}\right)^{2}}}$$

I use the (±) because it is correct, due to the square root on the right of the equation, and, as we discussed, causality is absurd; time is superpositioned. In General Relativity, I usually like to express t′ as a dilation, for clarity. However, in this case we are describing an unobservable, which in turn means we have to use the form of the Schwarzschild transformation that describes the 'tick rate' as observed by an observer at a distance from the phenomenon. In this case, we are observing an Einstein-Rosen Bridge from a safe distance. Functionally, an ER Bridge is essentially a Black Hole. At this point, the metrics suggest such infinitely expanding length and complexity that what is at the other end doesn't exist. Therefore, we use the 'tick rate' form of the Schwarzschild metric:

$$t'' = t\sqrt{1-\frac{r_s}{r}}$$

Here, rs is the Schwarzschild radius, and r is the radius of the surface of the forming Black Hole, which in this case is equal to the Schwarzschild radius (meaning we have a Black Hole), so the factor under the square root becomes zero and t″ = 0.

It is important to note that you will see me flip this equation upside down at will. The reason is that one version, as shown above, represents the observer, and the reciprocal of that equation represents the observed. The observed looks like:

$$t' = \frac{t}{\sqrt{1-\dfrac{r_s}{r}}}$$

This represents time dilation, as it occurs in a gravitational locality. However, we are using the former state of the equation, representing what we observe. Henceforth, you shall see them differentiated (along with l’ and l’’) as tp’’ for the ‘tick rate,’ and tp’ for time dilation. Note that most references insist on using tick rate, a thing I find most annoying. Tick Rate

$$\lim_{r \to r_s} t_p'' = 0$$

Time Dilation

$$\lim_{r \to r_s} t_p' = \infty$$

Then

G′ = 6.67384(80)×10⁻¹¹ m³/(kg·(t′)²)

And

$$\lim_{t' \to 0} G' = \infty$$
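A minimal numeric sketch of these two limits, and of G′ under the substitution of the transformed time for the second (the sample radii are arbitrary, chosen only to show the trend toward the horizon):

```python
import math

# Tick rate, dilation, and the author's G' as r approaches the Schwarzschild
# radius. Constants and radii are illustrative values only.

G0 = 6.67384e-11                     # the numeric factor in G, m^3 kg^-1 s^-2

def tick_rate(t, r, r_s):
    """t'' as seen by a distant observer: t * sqrt(1 - r_s/r)."""
    return t * math.sqrt(1.0 - r_s / r)

def dilation(t, r, r_s):
    """t' in the gravitational locality: t / sqrt(1 - r_s/r)."""
    return t / math.sqrt(1.0 - r_s / r)

def g_prime(t_transformed):
    """The author's G': substitute the transformed time for the second in G."""
    return float("inf") if t_transformed == 0.0 else G0 / t_transformed**2

r_s = 1.0                            # Schwarzschild radius, arbitrary units
for r in (10.0, 2.0, 1.1, 1.001, 1.000001):
    tpp = tick_rate(1.0, r, r_s)     # -> 0 as r -> r_s
    tp = dilation(1.0, r, r_s)       # -> infinity as r -> r_s
    print(f"r/r_s = {r:<10} t'' = {tpp:.3e}   t' = {tp:.3e}   G' = {g_prime(tpp):.3e}")
```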

This is the feature that allows the ER Bridge to expand without bound. In the simplest sense, for reasons I do not understand, physicists insist that G must remain a constant. Yet, clearly, it is a time-dependent variable, as it has seconds² in the denominator (the same s² that appears in an acceleration). There is simply no way that G can remain a constant under Gravimetric (Schwarzschild transformation) or velocity (Lorentzian transformation) conditions. I will go into the details of the implications regarding the Hubble parameter, velocity of recession, and so on later.

It is important to note here that the key to understanding the Quantum Zeno Effect is being able to comprehend the difference between the observer and the observed, along with the difference between the unobservable and the unobserved. In the case of the Einstein-Rosen Bridge, we have the observer's perspective of an unobservable domain. Here I have to take Dirac notation and tweak it, because it was not formulated to include logic operators and functions. However, logic operators are necessary in Information Theory, so we have to take the Dirac notation of the mid-20th century and introduce logic operators into it. There are at least four states:

{observed, observer, unobserved, unobservable}

From which there are at least five relationships:

⟨observed|∀|observer⟩  For all observed things, there is an observer.

⟨observed|∃|unobserved⟩  For all things observed, there exists any number of things that are unobserved.

⟨observable|∉|∌|unobservable⟩  Neither observable nor unobservable are elements of one another, nor is there any overlap.

⟨unobserved|ͼ|unobservable⟩  For any number of (but not all) unobserved things, they may be elements of the unobservable. (I'm using a symbol I found in Cambria Math.)

⟨no-observer|∴|unobservable⟩  If there is no observer, the system is unobservable, whether it exists or not.

The very first mistake in Quantum Anything is to not differentiate these values, namely those listed above, as entirely different states separated into independent functions. The key elements are:

⟨no-observer|†|observer⟩  The use of 'dagger' means that if there is no observer, a system is transposed from one state to another by introducing an observer.

⟨unobserved|!†|unobservable⟩  If a system is unobserved, it is not transposed in such a way that it behaves as an unobservable.

⟨observed|∀|observer⟩  For all things observed, there exists an observer.

The use of 'dagger' may not seem obvious at first. However, rather than do the hundreds of pages of derivation, I'll leave that up to you to figure out. They are transposed conjugates of a matrix, thus 'dagger.' The reason they are transposable should be obvious also: is a thing unobserved, or is it unobservable? Behind the Schwarzschild Horizon of a Black Hole, for instance, we say it is unobservable. However, there is no 100% certainty at this time that this is not merely a technical limitation. Therefore, the possibility exists that it is in fact observable, merely unobserved. For all observed things there is an observer; the ∀ makes this absolute. There are numerous other states, and the idea that observation is an issue does not begin to express the true complexity of the proper mathematical treatment.

Now, I understand that it is new and different to see logic operators, relational operators, transformations, and such, instead of simple functions, in this (Bra-ket) notation. However, this is how the universe works. There are relationships, and they are not always described conveniently to fit the rules, so I am introducing new ones. I am not going to explain it any further than that. Your math is insufficient as a language; it needs upgrading. You haven't seen the beginning of it yet. As you can see, the childish quandary over the double-slit experiment becomes ludicrous. The system is much more complex than that. And as I am about to describe (from work by the greatest physicists alive), the system(s) become much more complicated than the juvenile debates raging on the internet about observers and such.

This gives us an arbitrary matrix; I say arbitrary because there is no cross product or sum, nor any sensible arrangement (that I can find right now). Since there is no matrix that represents a superposition of four states, I will write it as such, representing four simultaneous, superpositioned states:

$$G'\alpha\,\langle \theta(A)\,\theta(B)\,\theta(V)\,\theta(W)\rangle$$

Where we have Alice {observed}, Bob {observer}, Victor {unobserved}, and Wilma {unobservable}. Note that I am using α to designate some as-yet-undefined operator (merely designating that a relationship exists). I may or may not define it, depending on whether I can figure it out or not as I write this. Alice and Bob are the observed and observer, respectively. Victor and Wilma are the unobserved and unobservable, respectively. However, the selection of observer and observed is arbitrary, as Special Relativity fails in defining this issue, leading to confusion and paradoxes. For example, Alice and Bob are speeding away from one another at v = c; both therefore see nothing of one another, and both see time stop upon looking at one another. In this case, both/either the observer and observed become transposed as unobservable. Which in turn introduces yet another relationship:

⟨observed|!∀|observer⟩ |∈| ⟨unobservable⟩

You have to forgive the notation; it was simply not designed to deal with functions, logic, operators, and conditions simultaneously, as Dirac originally formulated it. I have to make adjustments as I see fit. They are not, however, incorrect, nor difficult. This is logical negation of the

statement, for all observed things there exists an observer, as a result of being made unobservable by a special set of conditions. In this case, Special Relativity negates the relationship between the observed and the observer, as it is now impossible. However, where Special Relativity truly fails is in the case where Alice and Bob are speeding toward one another at v = c. They also see nothing, because the photons, regardless of 'frame of reference,' cannot get ahead of either Alice or Bob (as is the case in Chirality) to relate information of one another's state. Therefore, in SR we have a simple case where Alice and Bob are each both the observer and the observed, and whether they are speeding away from or toward one another at v = c, they are unobservable. Here we have an exception, where the term 'unobserved' cannot be applied, because unobserved implies that a system is observable. In both cases, where Alice and Bob are speeding toward or away from one another at v = c, they are unobservable, and unobserved does not apply. The term 'unobserved' implies that observation is possible; it is not. As such, we introduce yet another condition:

⟨unobservable| ≡ |unobserved⟩

The reason you have not seen this to date is that scientists and mathematicians have failed to consider all of these states and the resulting conditions and relationships. Therefore, they keep Bob and Alice either stationary or safely at v < c, where they can deal with it. Unfortunately for the Big Bang, the visible horizon of the cosmos is defined by the above relationships. We can see back in time until the velocity of recession is equal to v = c, at which point everything, and we do not know what, is unobservable. Regardless of your mistreatment of the math and relationships, the cosmological expansion is therefore a result of the unobservable. For those who fail to comprehend, in simpler terms: if the expansion were to suddenly stop, all of that unobservable light from behind the visible horizon would reach us, and become observable. When I discuss DeSitter space, I will come back to this, because the implications are overwhelmingly more complicated than the simplistic thermodynamic expansion of the cosmos, governed by 19th-century gas laws. Given that, in this case of the Special Relativistic Lorentzian transformation:

$$t'' = t\sqrt{1-\left(\frac{v}{c}\right)^{2}}$$

$$t' = \frac{t}{\sqrt{1-\left(\frac{v}{c}\right)^{2}}}$$

Again, here t’’ represents the observed tick rate between observer and observed. Simple algebra resolves the century old confusion:

At v = c,

$$\frac{t''}{t} = 0 = \frac{t}{t'}$$

Thus, the observer and the observed are indistinguishable. The historic confusion comes from using some value other than v = c. At v = c, both observer and observed are unobservable. I will correct Einstein's treatment of this later. In another section I will re-derive Lorentz contraction, and show that it has been upside down for over a century, and also provide the paper written by Einstein, and proof that his then wife, Mileva Maric, resolved Albert's problem by introducing an arbitrary operator. That is, Albert could not reconcile what he knew to be time dilation with the Lorentz transformation as it was (upside down). I will give a detailed derivation later, but in short:

$$c = \frac{1\,L_p}{1\,t_p}, \qquad c = \frac{L'}{t'}$$

Then, the conditions are that time dilates:

$$t' = \frac{t}{\sqrt{1-\left(\frac{v}{c}\right)^{2}}}$$

but length contracts:

$$L' = L\sqrt{1-\left(\frac{v}{c}\right)^{2}}$$

Requiring that c = L′/t′ results in:

$$c = \frac{L'}{t'} = \frac{L}{t}\left(1-\frac{v^{2}}{c^{2}}\right)$$

When v = c, or simply because the numerator will always approach zero as length contracts:

$$c = \frac{0}{t'}$$

Also, note that t′ expands toward infinity:

$$\lim_{v \to c} t' = \lim_{v \to c}\frac{t}{\sqrt{1-\left(\frac{v}{c}\right)^{2}}} = \infty$$

so that

$$c = \frac{0}{\infty}$$

Length contraction is, in short, the most absurd upside-down thing in all of human history. Again, I will detail this later. Albert could not solve time dilation using Lorentz's upside-down equation, so his wife Mileva simply introduced an arbitrary operator, φ, stated explicitly that she had no idea what it was, then merely flipped the equation upside down on the next page to get the result Albert wanted. This was recorded in history by an eyewitness, a famous Russian physicist (Abram Ioffe). The Einsteins showed him what they did and how, and that it was Mileva's work. In short, she had to introduce a fudge factor, φ, so that Albert, a mere patent clerk, could publish a paper that flew in the face of Lorentz, a giant in his time. However, Lorentz was initially wrong. Again, later.
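For what it is worth, the mismatch that the argument above points at can be seen in a few lines of arithmetic; this is only a sketch of my reading of that argument, with arbitrary sample speeds:

```python
import math

# Apply time dilation and length contraction in their textbook forms and
# look at the ratio L'/t' that, per the argument above, should still equal c.

c = 299_792_458.0                    # m/s

def dilated_time(t, v):
    return t / math.sqrt(1.0 - (v / c)**2)

def contracted_length(length, v):
    return length * math.sqrt(1.0 - (v / c)**2)

t0, L0 = 1.0, c * 1.0                # one second, one light-second
for beta in (0.0, 0.5, 0.9, 0.99, 0.999999):
    v = beta * c
    ratio = contracted_length(L0, v) / dilated_time(t0, v)
    print(f"v/c = {beta:<9} L'/t' = {ratio / c:.6f} c")   # equals (1 - beta^2) * c
```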

In this case of Special Relativity, we can make a simple 2 × 2 matrix demonstrating the conditions of Alice and Bob either approaching or receding from one another:

$$G'\alpha\begin{pmatrix} \text{observer} & \text{unobserved} \\ \text{unobservable} & \text{observed} \end{pmatrix}$$

In this example, only unobservable is sequitur, and thus equals zero. We can therefore write it as an anti-diagonal lower diagonal matrix of order (0,1):

$$G'\alpha\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$$

The G′ is the result of t′. In this case, t′ is superpositioned as zero and infinity, resulting in G′ being in a superposition of zero and infinity. This may not seem relevant at first, but if we apply the laws of Information Entropy, and Verlinde's approach to emergent space-time geometry, G′ is critical to the substructure of not only the cosmos as we perceive it, but all potential False Vacua, in particular behind all Schwarzschild horizons and unobservable domains. That is, amidst all of the observers and observed things in the cosmos, there is certainly unobserved 'stuff.' As for the unobservable, that gets into an entirely new area of physics, as I will discuss. However, I need to remind you that everything beyond the visible horizon of the cosmos is unobservable. This term, unobservable, is therefore no stranger to physics, and nothing new, as it has been there for billions of years. As weird as this sounds, it brings us to the Anthropic Principle in the form:

⟨no-observer|∴|unobservable⟩

Prior to us (I'm just going to stick with us, and leave pondering over other life in the cosmos as an unknown variable for now), even our immediate surroundings would be unobservable. The ontologically challenged look at this as proof that we are not required for a system to exist, nor responsible for its state, nor its change of state. However, the obvious point that the ontologically challenged face is the vacuum of evidence that anything existed before we were here to observe it. That is, the rocks, the distant stars, galaxies, and so on, all of it, is information carried forward by photons. Even your hand in front of your face, right now, is information carried forward by photons. In the simplest terms, without an observer, there is no discussion. You can argue unto death about what a system does 'all by itself.' However, the only way you know about the system, even the fact that it exists, is from prior observation. There is nothing in human gnosis that is not the result of prior observation.

However, in the case of Gravimetric time transformations, we have the same type of relationship with respect to being on the Schwarzschild horizon and being far away, with one major exception. The observer on the Schwarzschild horizon (Victor) sees the distant observer (Wilma) at an infinite tick rate; the distant observer (Wilma) sees Victor, on the Schwarzschild horizon, as having a tick rate of zero. Keep in mind that the observer/observed relationship in Special Relativity DOES NOT APPLY to General Relativity, as demonstrated by the LIGO interferometer detecting its own change of state under GR, e.g., a Gravity Wave. As odd as this sounds, this renders both the observer (Victor sees Wilma as an infinitesimal) and the observed (Wilma sees Victor as infinity) as non-sequitur, and the entire matrix becomes one of zeros.

$$G'\alpha\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$

That is, the set {observer, observed, observable, unobservable} is a null set. We can then just write:

$$G'\varnothing\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$

This is the relationship between being at or below the visible horizon, from either vantage point: trying to observe it, or observing from it. What does this crazy matrix of zeros mean? Simply, we cannot see a Black Hole; neither can a Black Hole see us. Everyone is unobservable, and observation is non-sequitur to the description. Why is this not the case with our SR matrix? Actually, it is the case, which was my point. Alice and Bob are both unobservables, whether speeding toward or away from one another. As such, there can be no description of the observer nor the observed, as neither case can exist. As a result, the matrix is the same, a block of zeros:

$$G'\alpha\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$

As a result, the entire set is unobservable:


$$G'\alpha\,\langle \theta(A)\,\theta(B)\,\theta(V)\,\theta(W)\rangle = 0$$

G′ = 0 means, as I will show later, that the velocity of recession for the Hubble Parameter falls off via a nice second-order curve from its current value to exactly zero at the Big Bang. Sitting on top of a perch at the Big Bang, you see the future cosmos as G″, which passes from Bang to nothingness in an infinitesimal. The only case where G′ ≠ 0 and/or G′ ≠ ∞ is everything in between. In fact, as I will show in another section, the proposed point in the expansion of the cosmos where we supposedly went from 'coasting' to 'accelerating' is an artifact of G′ passing through the nominal tangent intercept, 7 billion years ago. Funny, the shit you see when you're not stupid. Keep in mind I have not done anything inherently complex, or used any mystical math or multi-dimensional manifolds for this solution. This is solely basic algebraic manipulation of very simple and irrefutable equations. What is happening is that DeSitter space can look like this:

Here we live on the outer rim. Gravitation, where Quantum Field Theory lives, is sort of like the 'soup can' model Susskind uses:

Note that these images are screenshots from videos of the lectures. In any case, QFT lives on the outer surface of that soup can. Looking down into the soup can is the DeSitter space of demons in the prior image. What is happening in the case of a Black Hole, to start the argument, is that the Complexity is increasing with time:

This is what is happening behind the Schwarzschild radius, where everything is unobservable. Whether this occurs in normal, observable space-time is still being worked out by people smarter than me. Verlinde has suggested that yes, this emergence phenomenon of space-time and its geometry

(Gravitation) is occurring in normal, observable space-time. I tend to agree, for the simple reason that space-time is in fact expanding. The two terms, unobservable and unobserved, are completely different. Unobservable is what happens behind the horizon of a Black Hole. Unobserved is obvious. They do not mean the same thing, nor do they share the same physics.

It doesn't seem that many have considered what the expansion of space-time actually is, given the choices. For instance, I heard a physicist say in a TV documentary that gravity 'holds space-time together,' that the atoms in your body are not apparently expanding away at a Hubble rate of 70-some-odd km/s/megaparsec. That rate gives a value of roughly 2.3×10⁻¹⁸ m/s of recession for an object 1 meter away, on the order of 1/1,000 of a proton diameter per second. The only detector on Earth that could detect this would be LIGO, and to the best of my knowledge it has never been used to confirm it. As for the atoms in your body, that becomes on the order of 10⁻²⁸ m/s between neighboring atoms, a scale of motion far too small to ever measure. 'Stuff,' AKA mass, does not hold space-time together. Mass and space-time geometry are linked, but not causally. That is, a Gravity Wave is gravitation without the presence of mass. As for the opposite, mass without gravitation, we only need look at the plethora of virtual photons mediating the electric and magnetic fields as an immediate example. Is the expansion of normal space-time the lengthening of the Planck length? Is it the creation of new Planck lengths? The latter requires additional information; a change in the value of Lp only requires a change in the information already present. Physical Cosmologists love their accelerating expansion, but cannot define expansion. Thus, the entire discussion is non-sequitur. Some require space-time to be smooth, continuous, infinitely divisible. This increases the problem by requiring infinite information to grow one infinitesimal. Quantized space-time is a better solution. I will provide the correct result later in a set of fractals that extend from the Planck Length out to cosmological distances. However, this is a hypothesis, as with most things.

The argument I am setting up here is a new approach to understanding the QZE and Anti-QZE (the quickening of time by less frequent observation). It is shown by Maldacena, in Anti-DeSitter Space and Conformal Field Theories, along with Holography and Black Hole physics as per Bekenstein, that behind the face of the Black Hole, where all is unobservable, complexity increases exponentially. Behind the face of a Black Hole, the complexity gives rise (emergence) to a type of Tensor Network, defining in effect a space-time and its geometry, dropping to infinite depth without bound at an exponentially increasing rate. The further the drop, the faster the drop, and so on to a runaway effect. This is what happens when a thing is unobserved or unobservable. When a thing is observed without ceasing, nothing happens. It is only when we take our eyes off it that a thing becomes unobserved, or unobservable. This is when time progresses to the next Planck interval, the Planck Flow. The Planck Flow is the result of being unobserved or unobservable. This is the QZE. So where does this 'extra' space-time and so on come from? It does not come from the magical HUP, as is the case with all failed explanations. The point I am getting to is that the region of quantized space-time does not have to exist.
Oddly, if it does not exist, it has a 50/50 chance of existing. The reason is that, being an unobservable domain, we cannot say with certainty that it does not exist. That is what is happening behind the horizon of a Black Hole. It is unobservable. Because it is unobservable, it grows in scope and complexity at an exponential rate. That is the most important thing to know in all of Quantum Theory. In fact, if you can understand that, you can actually do what Feynman thought would be impossible: understand all of Quantum Theory. Observation: nothing happens. Unobserved or unobservable: an exponential explosion in size and scope of everything. Space-time and its geometry, even the forces of nature, emerge. It is exactly the same principle as two entangled wave functions that would be particles if detected or made to interact in some way: the probability of existence is 50/50 until we somehow observe or detect that the thing does or does not exist. As weird as that is, that is the way it is because it is that way. This is because there can be no certainty about an unobservable domain. Then we assign each condition as either having information in it, or being void of information. The Verlinde-Nicolini definition for information being:
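Written out (assuming the standard Verlinde holographic bit count, the world-sheet area counted off in Planck areas; this is my reconstruction, not the original expression), this reads:

$$N \;=\; \frac{A_\Omega}{L_p^{\,2}}$$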

AΩ is the 'world-sheet,' and note the denominator is quantized. No one has derived this next thing yet, oddly obvious as it is, but it is needed in order to define exactly what N is (what a bit of information is):

I don't know why that is obvious to me but not to anyone else. One 'bit' of information in Quantum Theory, then, is sort of like a trigonal bipyramid. However, reduced to 2 dimensions it is a Schwarzschild surface, where time is not a valid dimension, like a Black Hole's surface, provided we do not try to assign any shape. The shapeless character is the result of the fact that any known shape can be sliced into non-integer multiples of Lp. A right triangle, for instance, generally has a hypotenuse that is not an integer multiple of Lp, and therefore cannot exist on a Planck scale; likewise a circle has a circumference-to-diameter ratio (pi) that is irrational, so its circumference and diameter cannot both be integer multiples of Lp, and it cannot exist either, and so on. In short, it all comes back to those fractal lizards or demons in our Anti-DeSitter Conformal Field Theory world (I prefer the lizards or demons, but the fish is the best image I could find in good resolution):

The world we live in is on the surface, the outer edge of that circle. Inside the circle are scale invariant (fractal, self-similar) fish. On the surface, shape really doesn't have any meaning. The following image is from a Van Raamsdonk lecture at Berkeley:

Here, he is using the Escher lizards. The prior slide is:


All of the logs and natural logs are Conformal Field Theory at work, in the way a projection map of the world makes Greenland look HUGE, when it is actually like this:

This is the fractal lizard (demon, fish) effect of scale invariance in Conformal Field Theory. Greenland, when brought to the center (the equator), is actually quite small. Likewise, the Multiverse expands in the same fashion.

In the former image, d.o.f. refers to the number of degrees of freedom of the field. The shortest path is through the fewest lizards, which is backward in time, to a prior time. As weird as this sounds, this is from a Susskind lecture on Fractal Flows and the Arrow of Time. Here, Susskind uses a binary-type fractal branching to describe the Conformal fractal of time from its origin onward, similar to this:


We can sort of think of this 'Root' as the Big Bang. However, Susskind is taking it back further, to the Multiverse Bang. Then, superposed states start splitting off. I've taken the liberty of labeling A through F. A is causally connected to B, and to C. A is ultimately causally connected to E and F. However, C and D are not causally connected, nor are E and F, regardless of their close proximity. Here, I am not suggesting causality, merely that a fractal has a linear progression in which, for example, E and F do not progress from one to the other.
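To make the branching bookkeeping concrete, here is a minimal sketch (my own illustration, in Python; the binary-address scheme is an assumption, not anything from the lectures) of how 'causally connected' reads off a binary tree: two leaves are connected through their deepest common ancestor.

def common_ancestor(a: str, b: str) -> str:
    """Deepest node lying on both branch histories (addresses are paths from the root)."""
    prefix = []
    for x, y in zip(a, b):
        if x != y:
            break
        prefix.append(x)
    return "".join(prefix)

def in_causal_contact(a: str, b: str) -> bool:
    # Two leaves are in causal contact below the root only if they share
    # at least one branching step of history.
    return len(common_ancestor(a, b)) > 0

print(common_ancestor("000", "001"))    # '00'  -- they split one step back
print(in_causal_contact("000", "001"))  # True
print(in_causal_contact("000", "110"))  # False -- they only meet at the root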

Therefore, we take our former definition (I refer to this section as the 'Spectator Hypothesis'). Say that if it does exist, we'll call that DE (does exist), and does not exist, DNE. DE has a 50/50 chance of having information in it. We'll say 'does have information' is a, and 'does not have information' is a'. We'll call the observable (does exist) X. Then we have the two conditions:

Defining the wave function psi

Is given by summing over a’ and a for the observable X, normally written as

Becomes the sum


This is a case where N = 0 or 1. If N = 1, we are talking about system A; if N = 0, we are talking about system B. System B may or may not exist; it is a spectator, dependent on A being in one of two states, a or a'. So for the non-existing system or domain B, the whole thing simply becomes

I've only added b to the end as a spectator. Even though it may not exist, it is entangled with a or a'. If X is a, then psi-b is b'; if X is a', then psi-b is b. The fact that it might not exist is described a bit further on when we get to p-adic trees of exponential growth behind a Schwarzschild horizon, which makes it Holographic Theory. That is, we exist on a Holographic Schwarzschild horizon, and as such, the p-adic exponential growth of Tensor Networks (described in detail a bit further on) makes 'b' part of this system, even though it does not exist. This will become clear later. Keep in mind that entanglement involves non-observation; observation then 'detangles,' for lack of a better expression. Victor's word of the month would be eigenstate, made an eigenvalue by way of detection. By non-observation, A is in a state of {a, a'} simultaneously, e.g., entangled. Therefore, when I say that B may or may not exist, like Schrodinger's Cat, it is superpositioned as {does, does not} exist. Attaching B, as b {does exist} and b' {does not exist}, prior to observation is no less valid than A being entangled as {a, a'} prior to observation. The density matrix for a, a' is
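For concreteness (this is my own sketch of the standard forms, not a reproduction of the original expressions), the spectator state and the diagonal density matrix I have in mind here look like:

$$|\psi\rangle \;=\; \alpha\,|a,\,b'\rangle \;+\; \alpha'\,|a',\,b\rangle, \qquad P \;=\; \begin{pmatrix} |\alpha|^2 & 0 \\ 0 & |\alpha'|^2 \end{pmatrix}, \qquad |\alpha|^2 + |\alpha'|^2 \;=\; 1 .$$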

That is, it is definitely in one state or the other, with a probability density = 1. [Keep in mind that later we will see 'P' appear in a Markov Process; they are NOT the same. This P is a density matrix (diagonal), and the Markov P is a Probability.] Since b is a spectator (entangled, whether it exists or not), then

The density of B is now 1. Going all the way back to our definition for a bit of information, N

Where the formerly unobservable B was uncertain, it now has a density matrix of 1, meaning it exists. We have just given birth to a new baby B, simply by not observing it. This phenomenon of b' becoming a 'real' thing will become clear as I describe the Markov process by which it is derived later in this section. Rather than answering your question and collapsing a wave function (which, technically, is spread over all of space-time, from the Big Bang until the present, and over every corner of the cosmos, simultaneously), I have just created a Planck volume of space-time out of absolute nothingness, just from the uncertainty of it existing because I cannot observe it. Going back to our definition for our world-sheet:

If N = 0, then AΩ does not exist; if N = 1, then AΩ exists. Thus, space-time is an emergent phenomenon of entanglement. As we saw, we gave birth to a new baby B because of the density matrix N = 1, or P = 1, take your pick, from a blob we know exists, A, which is in a state of a or a' until measured. The geometry of space-time is a 2-dimensional (Holographic) construct of AΩ, where AΩ is the dependent variable and N is independent. That is, AΩ depends on the size of N, as a pure surface phenomenon. Interestingly, the geometry is the result of AΩ. Entropy is the result of more choices of superposition. Ordiny is the result of fewer choices of superposition. Thus, superposition defines the entropy or Ordiny of the world-sheet, AΩ. We find that all of the forces of nature obey this simple principle. As the number of available superpositions increases, entropy results. As the number of available superpositions decreases, Ordiny results. Gravitation is obvious; it is explained by the Black Hole physics above. As the number of available superpositions in Lp² increases, entropy results.
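A minimal numerical sketch (mine, in Python, using the bit-count relation assumed above, N = AΩ/Lp², and plain Boltzmann-style counting of available superpositions; the function names are my own):

import math

L_P = 1.616255e-35  # Planck length in meters

def world_sheet_area(n_bits: int) -> float:
    # Area implied by N bits, using the relation A = N * Lp^2 assumed above.
    return n_bits * L_P ** 2

def entropy_from_superpositions(n_available: int) -> float:
    # Boltzmann-style counting: more available superpositions, more entropy.
    return math.log(n_available) if n_available > 0 else 0.0

print(world_sheet_area(0))              # 0.0 -- N = 0, no world-sheet at all
print(world_sheet_area(1))              # one Planck area: the new baby B
print(entropy_from_superpositions(1))   # 0.0 -- a single available option: Ordiny
print(entropy_from_superpositions(16))  # grows with the number of options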

Marios Christodoulou and Carlo Rovelli [Phys. Rev. D 91, 064046 (2015), DOI: 10.1103/PhysRevD.91.064046] showed that the volume increases with time; however, the area remains fixed. The area would be given by

$$A \;=\; \frac{4\,L_p^{\,2}\,S}{k}$$

where k is Boltzmann's constant, which sets the temperature (kinetic energy) scale. The entropy is maxed out and fixed, and k remains constant, so the area is fixed. The amount of volume increase they gave as:
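If I am recalling the cited result correctly (this is my paraphrase of Christodoulou and Rovelli, not a new derivation), the late-time interior volume they obtain grows linearly with the advanced time v, roughly

$$V(v) \;\approx\; 3\sqrt{3}\,\pi\, m^{2}\, v \qquad (G = c = 1),$$

so the volume keeps growing without bound while the horizon area stays pinned at its fixed value.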


How does this occur? It is all of the Complexity growth behind the horizon that I have been describing up to this point. First, we have to discuss a host of topics in Quantum Cosmology that are not very well known, but are the topics of the greatest minds in physics. The first is called a 'False Vacuum.' How this differs from a 'True Vacuum' is a very grey area, if a True Vacuum actually exists. Essentially, a Truer Vacuum, as I prefer to call it, is more stable than a False Vacuum.

No Vacuum state can actually touch the bottom line. That is equivalent to our Zero Point, an integer of Planck's constant. In this diagram, the Truer Vacuum is 1 Planck interval from the baseline. The False Vacuum is higher, and thus less stable. Why is it less stable? Simply because where the Truer Vacuum is sitting essentially on the ground (Zero Point), the False Vacuum can 'fall.' At this point, we do not know the status of our cosmos regarding the 'trueness' of it. Since we do not know the height of our true Zero Point, we could be in an unstable False Vacuum, and a catastrophic collapse of this cosmos could occur at any moment. However, you will not know until you are already gone. If and when a Vacuum achieves true zero (or some negative value), it is at that point irreversible, and will collapse to non-existence. That is a definition, not an idea. Thus, the distance between true zero and the lower limit of the Zero Point as set by Planck's constant is infinite. In simple terms, as odd as it sounds, there are an infinite number of true zeros out there, an infinite number of Vacua that do not exist. That is not a play on words. That is also a definition. Some claim that the instability of a False Vacuum may be due to Quantum Tunneling of pseudo-particles to lower energy states. The Higgs is a prime candidate for this, since it already has enough mass-energy to 'jump' the barrier, so to speak, and tunnel. The tunneling energy required is between 124-135 GeV [Alekhin, Djouadi and Moch (2012). "The top quark and Higgs boson masses and the stability of the electroweak vacuum". Physics Letters B 716: 214-219. arXiv:1207.0980. doi:10.1016/j.physletb.2012.08.024]. The Higgs weighs in at 125 GeV. Thus, the Higgs could be the doomsday-prophet particle of the universe. However, it would fall to a more stable, Truer Vacuum, and thus be stable.

THE ANTHROPIC PRINCIPLE

The Anthropic Principle in its native form states that the universe was made for man. It does not state by God or by chance. The first postulate requires that the universe exists, which in turn means the Cosmological Constant, Einstein's Λ, is such that the universe has not expanded away or recollapsed. These would be False Vacuum states, local bubbles of space-time. In order to produce a universe, by chance, like ours (a Truer Vacuum that still exists), the Multiverse would require 10^500 such False Vacuum universes, or local bubbles, to have formed [Michael R. Douglas. Understanding the landscape. arXiv:hep-th/0602266v1, 24 Feb 2006; this appears to be the earliest reference to this number, calculated in String Theory]. Another reference is Leonard Susskind, 'The Cosmic Landscape: String Theory and the Illusion of Intelligent Design,' 2005, ISBN-10: 0739473115. I am not certain who actually performed the original calculation. Thus, the Anthropic Principle is one that states the odds of a universe forming by chance that could support us is 1:10^500. The Anthropic Principle originally was intended to state the claim that, yes, there is a Multiverse, and we live in the one Vacuum of 10^500, by chance; if we lived in a False Vacuum that could not support us, then we would simply end up in one that could. Now, it is not only evolving into a more Ontological argument, but also one of Intelligent Design. Keep in mind that when I say Intelligent Design, I may or may not be referring to a deity, e.g., God (Yahweh, Jehovah, Allah, Krishna…).
This is where we get into the 'Anthropic Principle.' [Carter, B., in IAU Symposium 63 (ed. Longair, M. S.), 291–298 (Reidel, Dordrecht, 1974); Bostrom, N., Anthropic Bias: Observation Selection Effects in Science and Philosophy (Routledge, New York, 2002).] If you recall, in another chapter or another text, I discussed the issue with the 'Cat Paradox': in a century of variations on this paradox, no one has yet assigned consciousness to the cat. This is the Anthropic Principle. Tegmark and Bostrom discuss [M. Tegmark; N. Bostrom (2005). "Is a doomsday catastrophe likely?" Nature 438: 754. doi:10.1038/438754a. PMID 16341005] the notion that the Anthropic Principle may be what is holding this universe together. That is, we may be in an unstable False Vacuum, but because of the 'Observer Selection' effect, the Vacuum state we exist in remains stable. There are variations on the Anthropic Principle, all based on the Observer Selection bias. The Strong Anthropic Principle (SAP) states simply that we are an artifact of this cosmos because the universe is for some inexplicable reason compelled to create observers. The Weak Anthropic Principle (WAP) states that the cosmos is one of 10^500 False Vacua, and we obviously live in the one Vacuum that can support an observer. That is the simplified description of each. 'Selection Bias' is the claim that the fine tuning of all of the constants and so on that permit life (and the universe, for that matter) to exist is not the result of us, by observation. 'Survivor Bias' refers to the Natural Selection of species, as an artifact of the cosmos. In all of the SAP arguments, there must be a Multiverse, such that a great number of universes form, and we live in the one where all of the fine-tuning parameters just worked out perfectly, such that we might become an artifact of the cosmos that is capable of both existing and supporting life. In simple terms, the entire premise is that Darwinian Natural Selection resulted in you being an artifact of 1 : 10^500 False Vacua capable of resulting in you being an artifact of said Vacua, sentient, AKA self-aware. The Strong Anthropic Principle (SAP) states that the universe must be as it is, complete with all the fine tuning, so as to produce conscious observers. The Participatory Anthropic Principle (PAP) states that no universe can exist unless it is observed or observable. The Final Anthropic Principle (FAP) states that, given the PAP, the sentient-life requirement is such that the universe is indestructible. That is, unless all of the conscious observers cease to exist, the universe is observable and therefore continues to exist. The source of the issue is that the cosmos we exist in is so finely tuned that if, for instance, the Critical Density parameter, usually denoted Ω, were off by 1 part in 10^120 in the first moments of the Big Bang, the universe would have already evaporated off or collapsed back in on itself by now. That is, if the mass-energy were off by 1 part in one hundred million, trillion, googol, there would be no universe. I could go on to list the host of constants that are fine tuned in such a fashion; however, the list is so extensive, right down to the 1 part in 10^10,000 odds of DNA forming the most basic life form, that it would fill volumes. (If I go on in probability to include the enzymatic pathways, we approach a googolplex for the most basic single-celled life form to form spontaneously.) Somewhere, someone calculated 10^500 False Vacua in String Theory, and no one seems to know where this originated; it was probably a lecture, not a publication. All of this, including the flavors of the Anthropic Principle, has to do with what is commonly referred to as the 'fine tuning' problem.
This in turn delineates back to Dirac's 'Large Number Hypothesis.' The history of the subject is rather rich, yet mostly unpublished. We have to keep the Anthropic Principle in mind as we progress on to describe the difference between observation (or unobserved) and 'unobservable.' In addition, the QZE is a phenomenon where the Planck Flow of time is altered via the rate (constancy) of conscious observation. I say conscious observation simply because if a thing is unobservable, there is no sequitur discussion or argument to be made. That is, the number of man-hours and papers dedicated to arguing about a mechanistic cosmos void of any conscious observer is non-sequitur, in that if it is unobservable there is no sequitur description, discussion, nor argument to be made. The QZE is the result of conscious observation. No observation has been made that is void of a conscious observer. I will therefore dismiss all counter-arguments regarding the role of the conscious observer, as the absence of a conscious observer only leaves an unobservable.

WHAT ACTUALLY HAPPENS WHEN THERE IS NO OBSERVER, OR THE SYSTEM IS UNOBSERVED OR UNOBSERVABLE

THE TWO TERMS, UNOBSERVED AND UNOBSERVABLE, ARE TWO COMPLETELY SEPARATE PHENOMENA THAT PRODUCE TWO ENTIRELY DIFFERENT RESULTS. THAT IS, THEY ARE NOT AT ALL RELATED.

In short, unobservable results in exponentially expanding False Vacua that increase without bound at an exponential rate (runaway). Unobserved results in nothing. Everything [that exists] upon observation makes an instantaneous transition from wave function to 'particle,' e.g., tiny cannon ball. The transition is like a Bohr quantized electron jump; it does not go through any intermediate states. For example, if space-time were continuous, 'smooth,' infinitely divisible, the electron would have to pass through an infinite number of states to jump from one orbital to another. It does not happen that way. Space-time is quantized, and the jump is one state, then another, with no transition in between the two states. The same is true of a wave function. There is no 'wave-particle duality.' A 'thing' is in its native state, a wave function; upon observation it is a 'particle,' a tiny cannon ball, with no transition of states in between. There is no 'decoherence,' as there is nothing to 'decohere' from. As I get into this brief description, which I didn't want to waste our time on, keep in mind that the entire scenario of Decoherence was meant to describe the Schrodinger superpositioned state, which in turn involves superposition of states. This, in turn, must explain Quantum Entanglement between our characters Alice and Bob, separated by billions of light-years.

This is where we come to the term 'eigen-selection.' This evolves from 'pointer states,' which evolves from 'Quantum Darwinism.' That is [you will not get it until you read the correct description], quantum entanglement, superposition, and all of their estuary concepts, according to Zurek [W. H. Zurek, Pointer basis of quantum apparatus: Into what mixture does the wave packet collapse? Phys. Rev. D 24, 1516, published 15 September 1981]: just as the environment in our macroscopic world hypothetically selected which species would survive and which would cease to exist, the same process occurs to each and every quantum in the time-dependent process of Decoherence. Time dependent, in turn, according to General Relativity, demands space dependent. Thus, as I get into describing what I can only call the most stupid, ignorant concept ever put forth in physics, keep in mind that being both time and space dependent defeats the very argument the concept of Quantum Decoherence sets out to describe. Those quanta that are suited to their environment materialize (decohere) into 'particles,' and those which are not suited cease to exist, or perhaps find their way to one of Everett's Many Worlds. In any case, of the literally thousands of papers I have read on Quantum Decoherence, aside from Zurek's being the most absurd, none address quantum entanglement, superposition, and so on, although such terms appear in the title. They address the 'measurement problem,' which is our problem, not the quanta's. Although environmental conditions bring about Decoherence in these models, there are no environmental factors in the equations. Looking at the most basic form of two wave functions in two spin states:

$|{\uparrow}\rangle\langle{\uparrow}|$ and $|{\downarrow}\rangle\langle{\downarrow}|$ (equivalently $|0\rangle\langle 0|$ and $|1\rangle\langle 1|$) are the proposed eigenstates of $\hat{J}_z$. The proposed transformation is given by:

$$|0\rangle \;\longrightarrow\; |0\rangle, \qquad |1\rangle \;\longrightarrow\; e^{i\phi}\,|1\rangle$$

and

$$\hat{R}_z(\phi) \;=\; \begin{pmatrix} 1 & 0 \\ 0 & e^{i\phi} \end{pmatrix}$$

Two things to keep in mind: 'e' is an infinite perturbation with no endpoint, and it requires slicing the Planck intervals of length, time, and the quanta into an infinite number of slices. Furthermore, the presence of 'e' requires that this proposed transformation go through an infinite number of states to reach its endpoint (where the big arrow points), which is an infinite distance away, raised to an imaginary value. Then, we raise the irrational number 'e' to an imaginary number, which, without a complex conjugate, has no real meaning in this real cosmos, not even on a Quantum Scale. To make the issue worse, the density matrix (I'm just going to paste a snapshot here) is given by:
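As a stand-in for that snapshot, here is a minimal numerical sketch (my own, in Python/numpy, using the textbook forms written above; nothing in it comes from the original figure) of the phase rotation acting on an equal superposition and the density matrix that results:

import numpy as np

phi = 0.7                                   # an arbitrary phase, in radians
Rz = np.array([[1.0, 0.0],
               [0.0, np.exp(1j * phi)]])    # the R_z(phi) written above

psi = np.array([1.0, 1.0]) / np.sqrt(2.0)   # equal superposition (|0> + |1>)/sqrt(2)
psi_out = Rz @ psi                          # |0>/sqrt(2) + e^{i phi}|1>/sqrt(2)

rho = np.outer(psi_out, psi_out.conj())     # density matrix |psi><psi|
print(np.round(rho, 3))
# The diagonal stays at 1/2, 1/2; only the off-diagonal entries pick up e^{+/- i phi}.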

If you recall, earlier we used real Infinitesimal Calculus to establish:

Therefore

With this in mind, the list of issues includes but is not limited to: the integral is over two values that are equal to one another, and therefore is not an integral; and the term $\hat{R}_z(\phi)$, as discussed, has no real meaning in real space-time. At best, it is an irrational, infinite perturbation with no endpoint, and therefore has no result. This is just another capture:


There can be no probability density for this, since the infinite perturbation e is present, along with the irrational value pi. There is no evidence whatsoever that this process represents anything observed. Furthermore, the entire integral of the probability density is over zero (given that the lower limit of negative infinity and the upper limit of positive infinity are equal values). And there is nothing here to suggest that superposition over distances greater than an infinitesimal (limited by 'e') is in this description. Therefore, we toss out integrating over zero, the irrational (pi), and the infinite, and dismiss Decoherence as anything other than another mathematical artifact. In order to accept Quantum Decoherence, you also have to accept 'Quantum Darwinism': only those quantum states suited for survival in this universe will exist; all of those quantum states not suited for this environment will cease to exist. In short, we have an integral summed over zero defining the density matrix of a process which is an infinite perturbation with no endpoint in an imaginary number system, an irrational number multiplied by another irrational number, raised to the infinite perturbation with no endpoint, describing the outcome of either one of two choices (heads or tails). Half of all of the wave functions in the universe, bosons or fermions, spin one way; exactly (to our ability to measure) the other half spin the other way. They did not come to that endpoint by the above-described process, and 'Quantum Darwinism' doesn't seem to have creatively selected any particular spin over another.

First, the p-adic numbering system. In the simplest sense, you will note that there is a placeholder '.' in the middle. As you extend out with a string of zeros and ones on either side, the opposite side takes on the opposite value, like this:

Np (global slicing): …100110.100110…
Qp (flat slicing): …100110.011001…

Both forms extend infinitely in both directions.
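A minimal sketch (mine, in Python; the 'mirror-complement' reading of the right-hand side is my interpretation of 'the opposite side takes on the opposite value'):

def global_slicing(left: str) -> str:
    # Np: the right-hand side simply repeats the left-hand string.
    return "..." + left + "." + left + "..."

def flat_slicing(left: str) -> str:
    # Qp: the right-hand side is the left-hand string with every bit flipped.
    flipped = "".join("1" if b == "0" else "0" for b in left)
    return "..." + left + "." + flipped + "..."

print(global_slicing("100110"))  # ...100110.100110...
print(flat_slicing("100110"))    # ...100110.011001...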

Keep in mind I am using this diagram for clarity; the curvature of the lines bothers me for reasons I will describe later.

I call this The Fractal Tree of Unobservable 'Stuff.' If the world were governed by the system Qp, there would be a branching of 0.0, meaning that the cosmos would be in a constant superposition. This is important to know because, as I will allude to later, the world is Qp while (or when) we look at it. That is, make an observation, which solidifies an outcome, and the branching Np ceases. This is called a Markov process. It is governed by the following equation:


$$N_m(t+1) \;=\; 2\left[\,N_m(t) \;-\; \gamma_{nm}\,N_m(t) \;+\; \gamma_{mn}\,N_n(t)\,\right]$$

In this, we are spawning two types of things, one of type m and the other of type n. The weird-looking gamma, Ɣ, is a probability, where Ɣmn is the probability of spawning type m from type n, and Ɣnm is the probability of spawning type n from type m. (Note that I used 1's and 0's rather than n and m in the tree.) N is simply 'the number of,' such that Nm refers to the number of m's. The Big Crunch looks like this:

Here, the m or n (it does not matter which) stops or is stopped. The idea behind the p-adic numbering system is that rather than |n|, a real outcome evolves, or in Verlinde terms, emerges. However, this does not occur when a system is observed or observable. This statement seems counter-intuitive to having a real, observed, non-superpositioned outcome. That is, the system is supposed to be a superposition of

|n| until it is observed, at which point either a 1 or a 0 is realized. However, the outcome, 1 or 0, is, in Einstein's terms, a roll of the dice, or otherwise a coin flip. Once the coin lands, time and all other processes stop, and these laws regarding wave functions and probabilities and so on are no longer sequitur. The coin is heads up, fixed for infinity, static, non-dynamic, unchanging, apply random synonyms here… Observation will result in a coin-flip type outcome, in which case the tree above would be a Chaotic system, not an organized one. The system above is highly organized, so much so that you have to stare at it for a while to grasp the complexity of the high order of organization. For example, in the 2-slit experiment, we have either a wave interference pattern or a bunch of dots. When we look, we see dots, a random splay of dots on a screen. When we do not look, an interference pattern emerges. That is, when we look, the information is random, and behaves like tiny cannon balls. When we do not look, the information is a highly organized system of overlapping wave functions in interference patterns. This is also true of the Delayed Choice 2-slit, as well as the Delayed Choice Quantum Eraser (DCQE). In the DCQE, we have simply gone to great lengths to show that this phenomenon requires a conscious observer, and that the information is superpositioned in time as well as over the two possible outcomes (wave or particle). There is no wave-particle duality. It is a wave function or it is a 'particle,' dependent on observation. Observation results in 'particles,' e.g., tiny cannon balls that fit into our human frame of reference. When a system is not observed, or is unobservable, it is in its native state, a wave function. NOTE: In the image above, the values 0., 00., and 000. are in causal contact, whereas, for instance, 1. and 000. are not in causal contact.
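Putting the branching rule from a few paragraphs back into numbers, a minimal sketch (my own, in Python; the gamma values are arbitrary illustrations, not anything from the lectures):

gamma_mn = 0.10   # probability of spawning an m from an n (n -> m)
gamma_nm = 0.02   # probability of spawning an n from an m (m -> n)

N_m, N_n = 0.0, 1.0   # start as pure n, like the all-n trunk of the tree
for t in range(10):
    N_m, N_n = (
        2 * (N_m - gamma_nm * N_m + gamma_mn * N_n),
        2 * (N_n - gamma_mn * N_n + gamma_nm * N_m),
    )
    print(t + 1, round(N_m, 3), round(N_n, 3), round(N_m / (N_m + N_n), 4))
# The total number of branches doubles every step (unbounded, exponential growth),
# while the fraction of type m settles toward a fixed branching ratio.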


With respect to the 2-slit experiment, we get a plethora of dots as the result of observation, which results in a system that is not causally connected. When we do not observe the system, we get an interference pattern, where the causal connection is so well defined that the arrow of time seems to go both ways. We need to understand this in the QZE. Observation without ceasing will result in no causal connection. In the diagram above, the 1 and the 000 have no relationship to one another with respect to proper time, marked by μ. Not observing results in a highly organized causal connection, indicated by 0 through 000 in the diagram. So, the QZE is not just an alteration of the Planck Flow of time, but also of the organization of the Planck Flow, as indicated by μ, and the proper Planck Flow, tp. The error of EPR, Bell, and so on was to try to assign an expectation value to such superposition as though it were akin to a coin flipping indefinitely in midair, whose value was only realized upon observation. The fact is, while the coin is flipping indefinitely in midair, unobserved, an extremely organized evolution is occurring: the passage of time itself and, as General Relativity demands, the geometry of space. Upon observation, all of that stops, and we have a static 1 or 0. The fact that it is this way comes upon realizing that the outcome, a 1 or 0, is in fact a static, atemporal result. As a result of being atemporal, there is no evolution or geometry of space. If we take a second look at that fractal tree:

The dots represent Bob and Alice. It is easy to see that, with respect to our AdS Scale Invariant (Conformal Field Theory) surface:


where I have labeled Alice and Bob as A and B, on level 0000. It is incorrect to think that they are separated by a distance of L, because they have a causal connection at point 1000. Instead, I refer back to our lizard diagram below: you can see that the actual distance between them is found by drawing through the least number of lizards, which in this case takes us back to the fractal point 1000. (If I recall, this was a slide from a Bousso lecture some years ago.)

However, Alice1 and Bob1 do not share a clear causal connection where we can easily draw a line back through to point 1111 (I may have labeled this p-adic number incorrectly), where Bob1 is. Bob1 is not only in the past, but apparently in a different past than the one Alice1 would have experienced. One could argue for a Multiverse with multiple 'timelines' based on this approach. However, as stated, the past has ceased to exist. All we have is a Standing Interval. There is no 'timeline.' Hence, there are not multiple 'timelines,' nor are there multiple universes, by this approach. Don't mistake that for 'there is no Multiverse.' I am a strong proponent of the Multiverse. It is simply not an artifact of this model. If we want to know the temporal distance between Alice and Bob on the surface of the diagram above, we look at the proper time in Planck units, indicated by u, and we get (where A is Alice on the surface and B is Bob on the surface):

$$|A - B| \;=\; 2^{-u_i}$$

where ui is the proper time indicated at level u, for instance the point 1000, where the branch splits. This does not contradict the notion that Quantum Entanglement is characterized by a seemingly instantaneous coordination of information between Alice and Bob. That coordination is accomplished by drawing a line through the least number of fish, or lizards, or demons, whichever Escher drawing you favor. However, that distance is still not zero. It is, however, a scale-invariant temporal value smaller than the Planck interval of time and distance at the surface. When and how does Bubble Nucleation begin? We go back to our Tree of Unobservable Stuff:


Keep in mind that this is a poor representation of this Markov process because of the curved lines. However, it was the best representation I could find that didn't have the ends meeting to form boxes, as you will see a few pages onward. The Bubble begins where the circle is. Note that the universe is Qp, all n's, up to that circle, where it suddenly splits off to n ↦ nm. At that point, the system becomes Np. The line I drew through the entire left-hand side is there because it is in a state of superposition, and remains unobservable, because it is all Qp, all n's. The probability and stability are given by:

The term Sm is the entropy of m. The probability, gamma, Ɣ, is also the rate, and Ɣmn is read as:

That is, Ɣmn means we are going from n to m. This is the framework of Coleman-De Luccia Instanton Calculus. Here, the mass scale M measures the variation of the potential in the interval between the two elements of the vacua. If the value Ɣmn rises to 1, then infinite mass is achieved infinitely fast.
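For orientation (this is my paraphrase of the relation usually quoted in the Coleman-De Luccia / fractal-flow literature, not the expression from the figure above), the two rates are typically related to the vacuum entropies by something of the form

$$\frac{\gamma_{mn}}{\gamma_{nm}} \;\approx\; e^{\,S_m - S_n},$$

so transitions toward the higher-entropy vacuum dominate.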

1. V. A. Rubakov, S. M. Sibiryakov, False vacuum decay in de Sitter space-time. Institute for Nuclear Research of the Russian Academy of Sciences, 117312, Moscow, 60th October Anniversary Prospect, 7a.
2. Djuna Croon, The Holographic framework for the Coleman-De Luccia instanton. May 2012.
3. Sugumi Kanno and Jiro Soda, Exact Coleman-De Luccia Instantons. arXiv:1111.0720v2 [hep-th], 25 May 2012.
4. Ying-li Zhang, Ryo Saito, Dong-han Yeom, and Misao Sasaki, Coleman-de Luccia instanton in dRGT massive gravity. arXiv:1312.0709v2 [hep-th], 14 Feb 2014.
5. Matthew C. Johnson, Vacuum Transitions and Eternal Inflation. Dissertation, University of California Santa Cruz.

The probability of any particular branch being of type m is:


The rate of the probability change, when moving up one unit of time, u, is given by:

$$P_m(u+1) \;=\; P_m(u) \;-\; \gamma_{nm}\,P_m \;+\; \gamma_{mn}\,P_n$$

Here we are looking at m, moving up one unit of time, (u + 1). The probability carried forward is the first term, Pm(u). However, we have to account for what is referred to as 'leakage' in the spawning from m into n by subtracting it out using the term:



$$-\;\gamma_{nm}\,P_m$$

And we have to account for duplication (the opposite of 'leakage') by the term:

$$+\;\gamma_{mn}\,P_n$$

The entire term is equal to the Trace (Matrix)

$$P_m(u+1) \;=\; P_m(u) \;-\; \gamma_{nm}\,P_m \;+\; \gamma_{mn}\,P_n \;=\; \sum_n G_{mn}\,P_n$$

Where

$$G_{mn}\,P_n$$

is the diagonal Trace, which is not symmetric (for Pm):

$$G_{mn} \;\neq\; G_{nm}$$

The lack of symmetry is that Ɣmn is not symmetric with Ɣnm. This seems to contradict common sense, as this is essentially binary information, and the tessellation of AdS space would seem to require them to be symmetric. However, it is this lack of symmetry that causes Bubble Nucleation in the first place. Also note that P is a vector, as it describes the branching ratio for a particular species, m or n. If you are having difficulty with the m ↔ n thing, think of it as 1 and 0, or heads and tails. It is that simple. A coin flips in midair; when it comes to a branch, the heads go one way and the tails the other. Both outcomes being realized is the key. If both outcomes are not realized, we have a Qp system, and no Nucleation, no growth, occurs. That is, to make it perfectly clear, an EPR/Bell universe could not exist. They have both issues backward: 1) trying to assign an expectation value to a coin flipping in midair, without knowing whether the coin exists or not (you have to observe the coin to know that it exists; if you observe the coin, it is no longer in a state of superposition or entanglement, rendering the entire state non-sequitur), and 2) all things come to a stop once the coin lands and we have heads or tails. Time stops, space stops, information takes the impossible form of being 4-dimensional (time dependent); as such, the system collapses to a 2-d system where time is not a valid dimension (Holographic). In this AdS/CFT system, the coin keeps flipping, and both outcomes are observed after they have occurred. If you try to observe the system, no outcome is realized. This is the QZE. Thus, you never see the surface of the Lizard, Demon, fish, or whatever. Today I feel like lizards:

You never see the absolute surface. You see the trail immediately behind it, even one Planck interval of time into the past. In order to see the 'real' present, you have to be able to see the immediate Planck interval. The reasons this is impossible are countless. A photon, the only way information is carried from the past Standing Interval to the subjective (smeary) present, cannot carry information further than 1 Planck length in 1 Planck interval of time, as the definition of the speed of light is c = 1 Lp / 1 tp. We can make the entropies symmetric, however, by multiplying by a symmetric Trace (diagonal) of the Matrix:

Where T is the diagonal trace across the Matrices of Gmn and Gnm. We then look at the vector, P: Let


Where T is the trace. The term Im is the result of the symmetry, and represents the scaling factor for the progression of time (the Planck Flow), altering the Planck interval of time via the scale invariance of the CFT transformations. The result states that the vectors (eigenvectors) are orthogonal to one another (at right angles). The necessity of them being at right angles is that the proper CFT scaffold of lizards drawn above couldn't otherwise take the proper CFT, disk-like form as shown. I do not know what it would be, but it would not be a proper CFT scale-invariant disk of lizards. In the simplest sense, that is why we draw the Markov tree for this process as shown a few pages on, rather than with the curved lines of the, as I stated, poor depiction a few pages back. You'll understand better when you see the two images. The eigenvectors, I, are time dependent, or otherwise that property from which proper scale-invariant time emerges. If you can imagine the disk of lizards growing in some form other than a disk, perhaps as a type of… I don't know what. I morphed the lizard image online to get an idea of how f&cked up the evolution of time might occur:

A more symmetric image, distorted:

Anyway, you get the idea. In the second depiction the evolution of time is somewhat Chaotic, as opposed to the image a few pages back.

So, the evolution of time must also be symmetric in our superpositioned, coin-flipping emergence of space-time. Otherwise, expansion is Chaotic, and General Relativity demands that both space and time expand chaotically, together. This is not trivial; it is among the most important concepts in physics. All things are governed by a very (scale invariant) Draconian set of fractals, which I will derive and show later. They affect everything from the Planck interval to the Hubble parameter, the apparent size and age of the cosmos, everything. Every constant is also described by and slave to this Draconian set of fractals. I simply refer to it as the Fractal Set. I will dedicate an entire section to their derivation and interdependent relationship to one another. Getting back to our eigenvalues, there is a smooth scaling invariance factor, given by:

$$\lambda_I \;=\; (2e)^{-\Delta_I}$$

This is the scale invariance factor. It is a function of conventional Conformal Field Theory. ΔI tells us we are progressing upward in that tree model by u steps; it is the scaling step factor. Each step, u, is a different size (fractal) than the last, and is 1 Planck interval of time. Thus, the Planck interval of time is a fractal, scale invariant, and governed by my Draconian Fractal Set. I'll get into larger or smaller later. The process will seem counter-intuitive. The important thing to know is that when λI = 1 (equivalently ΔI = 0), time stops.

$$\lambda_I = (2e)^{-\Delta_I}$$
$$\lambda_I = 1$$
$$\ln \lambda_I = (-\Delta_I)\ln(2e)$$
$$\ln 1 = (-\Delta_I)\ln(2e)$$
$$-\Delta_I = \frac{\ln 1}{\ln(2e)}; \qquad \ln 1 = 0$$
$$0 = -\Delta_I$$

Zero is the eigenvector when the eigenvalue equals 1. Since ln 1 = 0, the entire term is zero. I don't think the original author ascertained why ΔI ceases to progress (if I could find which of a thousand papers it is in); nonetheless, this is the reason. We could use the more complicated approach of looking at the matrices that are orthonormal to one another. The identity matrix follows the form:

When A is m×n, it is a property of matrix multiplication that:


$$I_m A = A\,I_n = A.$$

Matrix A is orthogonal if its transpose is equal to its inverse:

$$A^{\mathsf{T}} = A^{-1}$$

This is not to be confused with the skew matrix:

$$A^{\mathsf{T}} = -A$$

When λI = 1 there is no variance, and when ΔI = 0 time ceases to progress. It is an equilibrium point. That is, our step factor, I, which describes stepping upward one interval u, has ceased to progress. This is important, because somewhere between stopping and flowing there is slowing; also, there can be a speeding up of the progression. The CFT transformation for the Planck Flow, by altering the scale of the Planck interval of time, can make the rate seem faster or slower by changing the size, like our lizards.
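A minimal numerical check (mine, in Python/numpy; the rotation angle is arbitrary) of the orthogonality property quoted above, using a plain 2-D rotation as the example matrix:

import numpy as np

theta = 0.3                                        # an arbitrary angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # a plain 2-D rotation matrix

print(np.allclose(A.T, np.linalg.inv(A)))   # True: transpose equals inverse (orthogonal)
print(np.allclose(A.T @ A, np.eye(2)))      # True: A^T A is the identity

S = np.array([[0.0, 2.0],
              [-2.0, 0.0]])
print(np.allclose(S.T, -S))                 # True: a skew matrix instead satisfies A^T = -A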

As for the rate equation of ΔI (Susskind derived this; I don't know the actual derivation), he gave:

$$\Re_I \;=\; 2^{-\Delta_I}$$



where I have replaced λ with ℜ, because I intend to treat this very differently from convention. And yes, this is nearly the same as the formerly presented:

$$\lambda_I \;=\; (2e)^{-\Delta_I}$$

The difference is that the λI term is continuous, whereas ℜI is the quantized representation, which I have assigned. The value 'e' is, for lack of a better term, a Quantum Smoothing Factor. That is, 2^(-ΔI) simply doubles every interval. According to our Markov process above, this is in fact what happens. There is no gradual, smooth, infinitely divisible means by which this process can occur with respect to Quantum Entanglement:

Here, n has split off to the Quantum Entangled set {n, m}. There is no sensible means in Quantum Theory by which this process (the actual act of splitting or propagating) can occur as a 'smooth,' infinitely divisible process. It must be quantized, like an electron orbital. It is n, then it is simply {n, m}. This is not at all remarkable if you liken it to a coin flip. When the coin lands, there is no gradual shift from heads to tails; it is simply tails. Therefore, the equation

$$\lambda_I \;=\; (2e)^{-\Delta_I}$$

has no real meaning in this universe unless you are thinking in macroscopic terms, where even the Planck length and so on appear on a superficial level to be 'smooth.' However, as I believe I already stated or will state later, the Bohr Quantized Electron Model proves absolutely that space-time and its components, scaffold, and source are quantized. That is, if space-time were 'smooth,' infinitely divisible, the electron would have to pass through an infinite number of states in order to jump or drop from one orbital to another. It does not occur this way. In the same fashion, this transformation, or 'spawning,' from n ↦ nm would have to pass through an infinite number of states if described by 'e,' smooth space-time. I'm not sure who first derived this λI term; I would have to search through a thousand papers to find the first derivation or reference to it, but it cannot be correct. I therefore dismiss it as being impossible. The correct result is given by:

$$\Re_I \;=\; 2^{-\Delta_I}$$



This, in turn, describes the Planck Flow from one Planck interval of time to the next. On our graphic representations this is indicated by ui. Let me just note here that the urge to think of space-time as some continuous, infinitely divisible thing has been a virus in human thinking for I don't know how many centuries. Apparently, Zeno understood that this could not be the case 2,500 years ago, which Newton and Leibniz turned into Infinitesimal Calculus. Keep in mind that the term e is an infinite perturbation, and its use is only possible in an infinitely divisible process. Ultimately, there are no 'infinitely divisible' processes anywhere in nature on a Planck scale, and e has no place, value, or use in this universe. I need to point out here another issue I have (I have issues): the idea that Quantum Entanglement involves two (2) things. This idea has been handed down like an ancient mantra because we have only conducted tests and measurements that allow for two possible states or between two 'things.' Even if we look at our Markov tree, the root of the tree is equally entangled with every element on the surface of DeSitter Space. The idea that n ↦ nm, or actually, by convention, n ↦ mn, is an oversimplification that results from working with 'box matrices.' There is no good reason that anyone can give me that the process n ↦ mnopΩώσItz… is in any way forbidden. In fact, as I will describe later, it is perfectly valid for ř, which I will use to represent an unknown, even nothingness, to spawn ř ↦ mn, or ř ↦ σώќƨǯ…, in this or any other Vacuum, False or otherwise. In fact, since ř exists in some unobservable False Vacuum, it is safe to say we have no idea what it is doing or is capable of doing. Since we only do experiments that involve two possible outcomes, we observe one or two results. As a simple, well-known example, we take Kim et al. and the Delayed Choice Quantum Eraser experiment(s) they performed.

If you have color (if I can fit this in a text, it will be black and white), the red line, which in black and white is the uppermost path, starts as a single photon and has at least three separate results, each at a different time. People have marveled at and debated the ontological issues in countless non-sequitur and unprovable ways, noting the interference pattern observed at D0, which in Kim's setup was 8 nanoseconds apart from detectors D1 and D2. However (this drawing is not to scale), detectors D3 and D4 are about half that temporal distance, perhaps 4 nanoseconds apart from D0.

If the photon produces an interference pattern at D0, that means we do not know the path information, e.g., whether it passed through slit A or B. That is the Delayed Choice portion (Wheeler's idea) of the experiment. Placing a measuring device to the right of the slit (Delayed Choice) results in a clump pattern, not an interference pattern, because we know the path information. I have seen countless papers that try to use ludicrous math, entirely non-sequitur to the entire rig, to take the conscious observer out of the system. Most notable is the work of Rovelli, who has applied more non-sequitur math to the most unrelated phenomena than any other published scientist in human history. It is simple: if you take the conscious observer out of the system you have no experiment and no results; you in fact have an unobservable domain, like behind the horizon of a Black Hole. My Border Collie can understand this phenomenal relationship. This reference to the horizon of a Black Hole is important in understanding the difference between unobserved and unobservable, and most importantly, since unobserved does not mean unobservable, the time of observation is irrelevant. Unobservable is infinite; unobserved is therefore reduced in this system to an infinitesimal. This means the time of observation occurs within an infinitesimal. In Quantum Field Theory everyone gets all confused because they have no idea how to treat infinity or infinitesimals (they missed Calculus 101), which is why QFT was allowed to escape into the environment like a virus. Under Conformal Field Theory, the use of infinitesimals is as obvious as I have previously written, and I treat it as such. There are three (3) temporal events occurring within our observable domain: D0 is at one, D3 and D4 at the next, then finally D1 and D2 are at the end of the temporal path. There are three, not two, temporal locations in this experiment. At D3 and D4, we always know the path information, as the prism PS directs the photon toward D3 and D4 if the beam splitters BSa and BSb direct these (this) photon(s) from either slit A or slit B. Note that in this description, I cannot even say how many photons we are dealing with, how many photons are in play. As a result of knowing which slit the photon(s) has gone through, detectors D3 and D4 always show a clump pattern characteristic of a 'particle.' This yields the definition of the question: what is the true difference between a 'particle' and the wave function from which it was spawned? The difference, without all of the mathematically incoherent babble, is that a 'particle' is a localized phenomenon, whereas a wave function is, in its native state, always superpositioned in space and time. This is the absolute definition and must be fully 'grokked.' That is, you must become one with this definition. Without comprehending it, the HUP will always be an incoherent babble of nonsense to you, as it is to a world of 'physicists' out there, who seem to think it has something to do with particles and measurement, not to mention yielding mass-energy for every miscalculation in 'particle' physics. If, however, the photon(s) pass through beam splitters BSa and BSb, they are directed toward mirrors A and B, and then pass through beam splitter BSc; the information regarding which slit the photon(s) passed through is lost, and we always see an interference pattern at detectors D1 and D2.
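To keep the detector roles straight, here is a compact restatement as a small lookup table (my own summary, in Python, of the setup as described above; the timing figures are the approximate ones quoted earlier, and the field names are mine):

detectors = {
    "D0": {"which_path_known": None,  "pattern": "depends on partner detector", "delay_ns": 0},
    "D1": {"which_path_known": False, "pattern": "interference",                "delay_ns": 8},
    "D2": {"which_path_known": False, "pattern": "interference",                "delay_ns": 8},
    "D3": {"which_path_known": True,  "pattern": "clump",                       "delay_ns": 4},
    "D4": {"which_path_known": True,  "pattern": "clump",                       "delay_ns": 4},
}
# delay_ns: rough arrival relative to D0, per the description above.
# The rule the text keeps returning to: path information known -> clump pattern,
# path information erased -> interference (recovered in the D0 data sorted by partner).
for name, info in detectors.items():
    print(name, info["pattern"])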
In summary, when we look, a thing is observable and observed; it is localized, a 'particle.' It is not capable of being superpositioned; it can only do what a tiny cannon ball could do. When we do not look, a thing is unobserved (or unobservable); it remains a wave function, superpositioned, non-localized, essentially Chaotic. Of further note, in Kim's experiment, detectors D1 and D2 were separated from D0 by 8 nanoseconds. When the photon(s) produced an interference pattern at these detectors, we always see an interference pattern at D0. Before going on: there is no such thing as a 'timeline.' Time is not a 'line.' There is no such thing as 'causality.' The vocabulary word 'causality' is an antiquated term (which will be laughed at by future generations, assuming they survive), used by people who could not comprehend that in an infinite domain, time is an infinitesimal, not a valid dimension. Behind the horizon of a Black Hole, as we saw earlier in the chapter, where all is unobservable, there is exponential growth of the Complexity of the Tensor Networks, increasing without bound. This is the property which reduces time to an infinitesimal. There is no 'line,' and no 'causality.' There are also no 'circles' and no 'loops,' as grad students have tried to push such viral ideas. To a 'real' photon (excluding virtual photons, which I will describe later), all time and distance to every corner of the cosmos is zero. Back to our DCQE: if conscious knowledge of the path of the photon(s) were not involved in the system, all of the detectors would randomly display interference and/or clump patterns. If the detector interaction caused 'collapse' to a localized state, we would always see a clump pattern, and only see information at detectors D3 and D4, as they are first causally in the temporal path. Detectors D0, D1, and D2 would detect nothing. The reason D0 would see nothing (the Delayed Choice) is that a single photon could not pass through both slits. Sir Rudolf Peierls: "The moment at which you can throw away one possibility and keep only the other is the moment you finally become conscious of the fact that the experiment has given one result. You see, the Quantum Mechanical description is in terms of knowledge, and knowledge requires someone who knows." John Wheeler: "…it begins to look as if we ourselves, by a last-minute decision, have an influence on what a photon will do when it has already accomplished most of its doing. We have to say that we ourselves have an undeniable part in shaping what we have always called the past. The past is not really the past until it has been registered. Or put it another way, the past has no meaning or existence unless it has a record in the present." Rather than 'believe' me, I am going with Susskind, Bousso, Thorne, Wheeler, 't Hooft, Wilczek, Misner, Wigner, Von Neumann, Bondi, Heisenberg (who called it transcendental, no less) [146], Dirac, Bohr, Boltzmann, Kleban, Dyson, Funag, Bell, Fine, Aspect, Bohm, Fock, and so on; oddly, Einstein remained agnostic on the issue. [147-148]

My point being that the small-minded try to find explanations that clearly suggest only that they, themselves, do not exist, but are merely artifacts of a system that they have failed to describe. Therefore, if you are going to toss an 'opinion' at me that differs from the great minds in the history of Quantum Theory, based on a thing that you have failed to describe and/or understand even the limited descriptions of, look in the mirror (a real, not metaphorical one) and realize the person in the mirror is clueless. Back from insulting you: my original statement was that I have issues with the line of reasoning that Quantum Entanglement has to do with two (2) things. In the DCQE above, I have just shown that we have a system where entanglement involves at least five detectors, two prisms, three beam splitters, two mirrors, two slits, and a coincidence counter, all simultaneously coordinated, with at least, I count, 15 paths (including the slits), all true, all real, all correct. That is 30 events, all true and all real, not two. The DCQE is not a 'Bob and Alice' coin-flip scenario. Among the myriad of problems the EPR team and Bell screwed up is a Bob/Alice coin flip, again, without an observer: there is no evidence that the coin even exists until observation, at which point the entire EPR/Bell exercise is/was non-sequitur. Before going further, I need to get something out that really pisses me off. This has to do with the Everett Many Worlds interpretation. I had one 'person' in my face trying to describe that conscious observation plays no role in a system. That is, a photon has a choice of whether to turn left or right when bending around a gravity well. The choice is actually an infinite perturbation that can be made an infinite number of times, as time is non-sequitur to superposition. Thus, we have a coin flipping in mid-air for infinity. The Everett argument is that rather than me, one conscious observer, affecting the system, the single photon has an infinite possible set of choices, and as a result, an infinite number of me's results, an infinite number of me's as artifacts of the path of a single wave function. In this discussion, we have to consider the absurdity of any one of these infinite number of me's being void of consciousness. Therefore, rather than having one conscious me as an observer, the 'particle's' infinite perturbation of outcomes results in an infinite number of conscious me's. This was supposed to eliminate the conscious observer from the system. The attempt resulted in an infinite number of conscious me's, keeping in mind that not one me can be void of consciousness. This goes beyond absurdity, beyond irrationality, and far into the realm of 'needs medication.' Then we get to that cat. Schrodinger's cat. Neither science nor medicine could define death for the courts in the days when voluntary euthanasia was before the Supreme Court, so the Supreme Court considered all of the arguments and rendered a definition that fills an entire volume of text. However, as with all legal jargon, this definition can be rendered in one sentence: "The irreversible lack of conscious response to external stimuli." Irreversible is an important feature, as many medical procedures (deep brain surgery in particular) involve stopping the heart, draining all of the blood, and keeping the body on ice during the operation. In this scenario, there is no heartbeat, they do not artificially respirate (because the blood has been drained), and the brain wave (EEG) is completely flat for several hours.
Then the blood is pumped back in, the body is warmed back up, and the patient is resuscitated. Typical procedure time exceeds 12 hours. My point is that 'lack of conscious response' is a bizarre feature that means Schrodinger's cat must be a conscious being in order to be dead. The cat cannot be superpositioned as alive and dead if it is not a conscious being. As a result, the cat is then the primary observer, Schrodinger is superfluous, and the entire scenario is non-sequitur. This trickles back down the line from Schrodinger, to the cat, to the bottle of poison (in the scenario description), and all the way down to the atomic decay that causes the mechanism to break the bottle of poison. If there were no interdependency of these elements, people would not have been debating it for a century. In my variant of the thought experiment, paired with the Ma experiment below, I send Schrodinger off at v = c, not some value v < c, but at v = c. Regardless of the atom, the detector, the bottle, the cat, and Schrodinger, when Schrodinger opens the box, whatever state collapses from superposition in Schrodinger's frame of reference remains in superposition in mine. This is because time has infinitely dilated as I observe Schrodinger speeding away at v = c. We now have a dual state, where the wave function has collapsed in Schrodinger's frame of reference but, in my frame of reference, remains in a state of superposition. This 'dual state' of being both superpositioned and collapsed from superposition has never been described or considered before. However, the Ma-Peres experiment below helps in understanding what they termed 'Entanglement Swapping.' In that experiment [Xiao-song Ma, Stefan Zotter, Johannes Kofler, Rupert Ursin, Thomas Jennewein, Časlav Brukner & Anton Zeilinger; Experimental delayed-choice entanglement swapping. Nature Physics 8, 479–484 (2012)], they produced two pairs of entangled photons and separated them such that they would go to different detectors: Alice, Bob, and Victor.


In this experiment, the last person to receive an entangled photon in the chain is Victor; however, Bob and Alice have already detected their photons but have not yet looked at the results. The method by which Victor chooses to detect his photons will determine the results that Alice and Bob will see when they look at the measurements, which had already been taken. Asher Peres, who designed the experiment, stated: "If we attempt to attribute an objective meaning to the quantum state of a single system, curious paradoxes appear: Quantum effects not only mimic instantaneous action-at-a-distance, but also, as seen here, influence of future actions on past events, even after these events have been irrevocably recorded." [A. Peres. Delayed Choice for Entanglement Swapping. J. Modern Optics 47: 139-143 (2000)] In other words, we can have a situation where a photon is emitted from a galaxy billions of light-years away. Along its path, it must pass around a galaxy via Gravitational Lensing.

If we use a photographic plate to detect the photon, billions of years later, it will record that it passed around both sides of the galaxy:

However, if we use a stereo imaging system to detect the photon, we will record which side of the galaxy the photon passed around, and obtain only one result rather than an interference pattern.


Our measurement now affects the photon’s path billions of years ago, billions of light-years away. Keep in mind the Kim experiment was 20 years ago (as of this edit) and the Peres experiment was 7 years ago. Still, I see absurdities published, and even worse absurdities in ‘chat rooms’ devoted to QM, where the authors are completely unaware of anything written since the Cuban Missile Crisis. In addition, they get angry when you write the correct answer. The idea that everyone is entitled to an ‘opinion’ (Dictionary.com: a view or judgment formed about something, not necessarily based on fact or knowledge) is something the uncertain tell each other to comfort and validate their state of uncertainty, and they require the same from others, else their ‘opinion’ faces invalidation; the same definition as humility. I do not possess this handicap. In another text, I wrote: One meteorologist accidentally found a relationship between seemingly random lightning strikes (we still don’t actually know what causes lightning) and gamma ray bursts (supernova remnants). Lightning here on Earth has a relationship in nature with supernovae that popped billions of light-years away, billions of years ago, and it was stumbled upon by accident, such that the meteorologist actually thought the gamma ray bursts caused the lightning strikes. He thought he’d figured out that lightning had a causal relationship with supernova events spread out over space-time from millions to billions of years ago, in a global coordinate system, all unified here, now, on Earth. [1-5] Supernovae, which occur at a rate of about 20 per second, from hundreds of millions of light-years away (meaning hundreds of millions of years ago) to billions of light-years away (meaning billions of years ago), spread out across the sky, do not have a non-Chaotic pattern. When mixed with the apparent randomness of lightning strikes here on Earth today, the relationship is so obvious that a meteorologist thought there was a causal relationship, by accident.

1. Ian Popple, “Chaos in the Rain,” McGill Reporter.
2. Sadruddin Benkadda, George M. Zaslavsky, Chaos, Kinetics and Nonlinear Dynamics in Fluids and Plasmas: Proceedings of ...
3. H. Kikuchi, Dusty and Dirty Plasmas, Noise, and Chaos in Space and in the Laboratory.
4. Brian Handwerk, “Thunderstorm Gamma Rays Linked to Lightning,” National Geographic, October 11, 2007.
5. Hasselblatt, Boris; Anatole Katok (2003). A First Course in Dynamics: With a Panorama of Recent Developments. Cambridge University Press. ISBN 0-521-58750-6.

Now, I’m not certain how to interpret this. Causality is dismissed simply because it does not exist. However, the Chaotic attractor between at least two unrelated phenomena, spread over billions of years of time and distance, makes them interdependent, as though entangled. That is, and this is proven in the references above, a raindrop falls here, lightning strikes over there, and a star went supernova six billion years ago… As much as I hate to use analogies, because they can be ambiguous, I like the ‘river analogy’ of events. An event is when information changes from one state to another. Without information changing state, there is no event, and no ‘flow’ of time, or anything else. A river flows, and in the river there are rocks jutting up. This river is a favorite among kayakers with a death wish, because it is very rapid, and the rocks are a real obstacle course of skill. If you were to calculate the probability of vortexes forming along this river as a result of the rocks jutting up, you would get a number. This is normal, and we can reliably describe, prior to observation, given enough information, the characteristics of these eddies and vortexes that form around the up-jutting rocks. This is where we get to ‘weird world.’ If the river were stagnant (not static, as there are random ripples in it, i.e., fluctuations) but not flowing, the probability of other formations, like a car, cat, rabbit, or fish (like you see in the clouds), instead of a vortex, is much higher than a vortex. However, they are not predictable, just possible, more so than a typical vortex. The reason is that a typical vortex is one of an infinite number of things that can come out of that stagnant river. Just like you see bunny rabbits in the clouds, there is no property of the stagnant river that favors a vortex over a rabbit in a random ripple. In fact, the bunny rabbits, dragons, and faces in the clouds dominate what we observe in nature, over any predictable pattern of cloud formation. This is a real-world example of random processes outweighing predictable processes, resulting in the observation of things that are impossible in ‘cloud world.’ That is, there are no bunnies, dragons, and faces in the clouds, yet we observe that there are such. This is where people go south with the Heisenberg Uncertainty Principle, which is an oxymoron as used in physics today. First, the cosmos is not a freaking bank; you cannot ‘borrow’ energy from it. The idea is essentially, ‘I am not violating any laws of thermodynamics, I will not

have it for long, and I will return it when done.’ That term has become so ingrained in urban myth it is yet another thing that really pisses me off (yes, I am an angry man). Let me explain: In a moving river, where the positions of the rocks jutting up and the speed of the river are well known, we can make fairly reliable predictions of where a common vortex will occur. In the common use of the HUP, information is being created out of nothingness, not ‘borrowed,’ nor is it lent. Information that did not exist now exists. The idea that anti-Information also was ‘borrowed’ does not cut it. There is no such thing as anti-Information. The -1 Law of Thermodynamics is violated. The most likely case is that the entire description of the scenario is bogus. The model is bogus. We call on the magic Black Box of the HUP to salvage our bogus model. Currently, the bogus model is kept from sinking into the toilet where it belongs by: I will violate the -1 Law of Thermodynamics by ‘borrowing’ both Information and Anti-Information for the briefest moment, so that its loss is not noticed, then I will return both the Information and the Anti-Information to the Cosmic Bank. As such, although I both created and destroyed Information, it was really quick, and it’s not really a violation of the -1 Law of Thermodynamics unless I keep it longer than the due date. In a nucleus, the Standard Model suggests that the HUP ‘borrows’ information, not just random noise, as should be the case, but extremely complex and highly organized information, such as the mythical ‘gluon,’ which holds the ‘quarks’ together. The information required to ‘paint a gluon into existence’ is so highly complex that the entire field of Quantum Chromodynamics is devoted to this single boson. Yet the HUP, which makes no such provision for the production of complex information, according to the Standard Model supposedly gives rise to complex information, along with anti-information, borrowed from the cosmic bank on some short-term loan basis. Keep in mind that quarks were never intended to be real things in the first place, as they were merely a mathematical Group Theory approach to particle decay observations. The decay of human thinking was to make these mathematical groups ‘real things,’ and resulted in the need to expel huge amounts of Force to hold them together. As the quarks were being formulated into real things, it occurred to those people that they should fly apart, because of ‘fields,’ which also do not exist. The EM ‘fields’ in question are exchanges of localized virtual (and massive) photons, according to the Standard Model. A photon is not a ‘field’; yet not only is the localized quanta of the photon ignored in favor of the field, but the massive photon, which cannot travel at v = c because it possesses mass, is also ignored, and the ‘field’ propagates at the speed of light regardless. So, QFT comes to the rescue and describes the riddle of the HUP producing such complex information by way of a plethora of multidimensional fields extending out to infinity, in our finite domain. Whereas the ‘field’ was a phenomenon of 19th-century thinking, abandoned for the exchange of virtual, massive photons in every process, the field remains undefined, other than non-localized (meaning, it must be of infinite scope). Since what the people of the 19th century called a ‘field’ is a massive virtual photon, it cannot even travel as fast as light, meaning that not only is it localized, but localized to a region smaller than our finite domain.
Thus, we toss in SuperSymmetry, and we have a dimensional set that exists on the other side of light speed, such that we can salvage our ‘fields.’ I am painting a history of oops and bandages. Every aspect is unprovable, and although the claim is that it is ‘elegant’ and matches observation, neither is the case. In the past few paragraphs, I have defined how and why it has absolutely nothing to do with any observation, and is in fact impossible. The newly invented ‘real’ quarks repel one another violently, and must be held together somehow to define a nucleus. This requires a huge amount of Force. As a result, 99% of the mass of the nucleus is via the HUP; thus 99% of the mass of the visible cosmos is virtual, not real. If you choose to ‘believe’ this, stop reading here, because I do not want you knowing anything. You have cherry-picked elements out of your life experience to suit your penchant, and spun a creative worldview. You then seek validation of this bizarre worldview by gathering into groups, validating and comforting one another via Rational Charisma. In the 19th century, physicists stated, ‘energy cannot be created or destroyed.’ They had absolutely no means or evidence whatsoever to support such an idea. Yet it became an immutable ‘Law of Nature.’ When they came to the point where they needed to violate this Law of Nature, they replaced the term ‘created’ with ‘borrowed.’ Albeit quarks were never intended to be ‘real’ things, in the Standard Model the three valence quarks make up less than 1% of the mass-energy of a proton or neutron. The rest of the mass-energy is via the HUP, virtual mass-energy, to a figure so exacting it makes the Uncertainty Principle an oxymoron. As a result, 99% of the observable universe is virtual, not real. Like the Fractional Reserve Banking System, the cosmos is 1% tangible and the rest is debt. The HUP, in this case the ‘random quantum fluctuations,’ is more likely to produce Information in forms so bizarre we cannot recognize them, rather than the very exacting, out-to-100-decimal-places requirements of this cosmos. How much more likely is it that the random quantum fluctuations will produce bizarre things rather than the super-accurate orchestra of things and events that make up a single proton? The answer is that it is infinitely more likely that a ‘random quantum fluctuation,’ if at all possible, will produce a random, bizarre thing, rather than the orchestra of finely tuned events that make a single proton. It is therefore infinitely impossible that the HUP has anything to do with the Standard Model, and therefore the Standard Model is infinitely impossible.

Because of the scale invariance of Conformal Field Theory, the difference between the quantized and continuous versions is subtle. This will become clear when I present the Fractal Set, which is my own work, and which describes the scale invariance. As you look at the way the fractals evolve, the difference between ⋋I and ℜI is indeed subtle. However, if you study the interdependency of the Fractal Set very closely, you will find that it is nonetheless quantized. This quantization of ℜI is extremely important to understand, as it represents the evolution of the Planck Flow, which we intend to alter via the QZE. In fact, the definition of the QZE I will give at this point is: controlling the evolution of ℜI by observation. Here ℜ is the rate factor, which Susskind presents as ⋋; I have used ℜ so as not to confuse it with the ‘smooth’ process proposed, because it must be quantized. Also, I need to add here that the orthonormal aspect of these eigenvectors is coincidental to the extreme case of ‘shear mapping:’

In this case, for our shearing arrow, ln ⋋I = 0 because ⋋I = 1. Keep in mind that the values are not ‘smooth,’ as in ‘e,’ but can only take on shearing values of the Mitrofanov angle, given by: [Igor G. Mitrofanov, Space Research Institute, Profsojuznaya str. 84/32, 117810 Moscow, “Cosmic Gamma-Ray Burst Sources: The Phenomenon with the Smallest Angular Size in the Observable Universe,” The Astrophysical Journal, 424:546-549, 1994 April 1.]

Meaning it is possible to take on as many as about 1.7×10²⁰ values in a 2-dimensional scaffold. In a 3-dimensional scaffold that increases to:

$$N = \frac{4\pi}{\varphi} \approx 2\times10^{41}$$

Why is this important, and why contradict convention? It seems to me that the orthogonal approach only has meaning in our heads. That is, regardless of what form the matrix takes (it appears as an n × m matrix because we write it on paper), it extends out to infinity and has no ‘real’ endpoint or value. Furthermore, reducing this Markov process to an orthogonal set of essentially binary information is premature, when about ten pages on I will show that any value, even one that does not exist, can enter or exit this Markov chain. Therefore, assigning it an oversimplified ‘box’ matrix is unwarranted. We then consider the relationship to the Probability of states, Pm,n, and say that:

$$\Delta = \delta = 0$$

That is, the delta term before ΔI. We then have, such that we have σI:

$$\sigma_m = e^{S_m/2}$$

This result (presented by Susskind) bothered me at first, then upon closer examination, we have, at equilibrium:

$$\sigma_m = e^{S_m/2}, \qquad \ln \sigma_m = \frac{S_m}{2}\ln e$$

$$\sigma_m = 0$$

Here is where I am not certain if I am interpreting Susskind correctly. ln 0 is undefined, at least unless we apply infinitesimal calculus, in which case:

$$\lim_{x \to 0^{+}} \ln(x) = -\infty$$

Likewise

$$e^{-\infty} = 0\ (\text{undefined})$$

However, again we apply infinitesimal calculus and we have:

$$x \ln e = 0, \qquad \lim_{x \to 0}\, x \ln(e) = 0$$

If we are going to use it in this way, then we can proceed:

$$\ln 0 = -\infty = \frac{S_m}{2}$$

Which certainly represents the final equilibrium states of entropy. Then we are left with this nagging:

$$\sigma_m = e^{S_m/2}, \qquad P_m = e^{S_m}$$

$$\ln P_m = S_m \ln e; \qquad \ln e = 1$$

Given the two cases, he could be saying one or both of two things. Given:

$$\ln e = 1$$

We are then left with

$$S = 0; \qquad S = 1$$

The entire thing becomes a statement that simply says at equilibrium we are at infinite entropy, or, entropy is increasing without bound. This is completely in line with our Complexity Model, which will be discussed later. Meaning that all of the microstates {m,n} are equally probable, just like a coin flip, or a live/dead cat. Thus, we have spawned entire universes, some stable, some not, and we are still stuck with opening or not opening the box. The difference is, not opening the box is ‘unobserved.’ Behind a Schwarzschild horizon (the inability to open the box) is unobservable. In the case where we have not yet decided to open the box, but we are able, the system is unobserved, and as such, superpositioned. In the case where we cannot open the box, perhaps because the box is behind a Schwarzschild horizon, the system is unobservable, and as such, expands exponentially without bound as a Quantum Entangled system of growing Tensor Networks (connecting two or more things) of exponentially increasing Complexity. I need to note here that the concept of Quantum Decoherence (QD) in this, as you can see, is impossible. QD is an entirely random process, with a density matrix (probability of states) summed over zero, meaning the result is therefore zero, with no endpoint to the outcome (due to the infinite perturbation, e), in this case n or m, meaning that our Markov process and Complexity, Entanglement, and everything Unobservable (behind the Schwarzschild horizon) would be impossible. QD is time dependent, meaning that it is space dependent, and the process I am describing is that from which space and time emerge, along with their geometry (Gravitation). All of this happens as a result of the interdependency between two factors: the unobservable and the unobserved. With no conscious observer, there is no spoon. Most physicists are not up to speed on these things, and as a result go on debating an ontological argument where all of the ingredients they have available to them are irrelevant and outdated, and the entire exercise is a futile one of non-sequiturs piled on top of non-sequiturs. They have not even defined what a conscious observer is, nor have they differentiated unobserved from unobservable. I will define what a conscious observer is, and what observation, perception, is, later. And the answer is completely math based, and compatible with the AdS/CFT model; in fact, it defines it. An important note: the QZE is like a coin flipping in midair indefinitely, spawning heads and tails as it does so. Upon observation, the result, say heads, is permanent, fixed, time independent. Time is not even sequitur to the discussion. In the classic coin flip, when you catch it and flip it over on the back of your palm and observe the result, it remains fixed. That is what observation is in the QZE: it makes Information static, non-dynamic, unchanging. When the coin is not observed, it is spinning in midair, spawning heads and tails. All of these Bubble Nucleations and so forth only occur when we are not observing or the thing is unobservable. This is my reason for bringing the Anthropic Principle into the discussion, in particular the Participatory Anthropic Principle. The PAP states that the universe (any universe) does not exist without observers. This both is and is not true. Of the potential 10E500 False Vacua that String and m-theorists calculate are within proximity to us, only 1 is useful.
We cannot observe the other Vacua and therefore cannot validate that they do or do not exist. In any case, they are Falser Vacua than ours, which appears to be at least somewhat stable up to this point. Without observers, nothing is fixed; and it does not matter when you observe a thing, such as light that is 13.8 billion years old, as the system is time independent. That is the nature of Quantum Entanglement and superposition: you stop or slow a dynamic system. We go back to this:

The reason I mentioned not liking the curved lines of the original diagram is because of the orthogonal issues. However, I have issues with this image because it is misleading in that the orthogonal lines appear to form closed boxes (loops), which is also incorrect. In any case, since I am not a graphic artist, I am stuck with what I can find. Classically, as mentioned, this appears to most Quantum Theorists to be a Big Crunch, or other doomsday scenario, the end of expansion. Keep in mind that it does not matter which tail we cut:


The observer brings order out of Chaos. The Markov process described in Bubble Nucleation, assuming a 2-adic system (binary), occurring in this highly ordered fashion without losing either of the two terms above at completely random intervals, is defined in the Markov equation itself. That is, the term:

$$\Gamma_{nm}\,N_m + \Gamma_{mn}\,N_n$$

We could sum this over m; the choice is arbitrary. In any case, this equation is absolutely dependent on both gamma terms occurring equally, like a coin flipped a trillion times will come out 50/50 heads/tails quite reliably. However, the stability of our Vacua is dependent on that ratio coming out 50/50 to 120 decimal places, or the cosmos would have either collapsed or evaporated off by now. Some String and m-theorists, in calculating 10E500 False Vacua (False meaning that they are less stable than our Vacua), suggest that the 50/50 ratio actually has to be precise to 500 decimal places. We subtract our gamma terms by selection, by observation. It is not possible to make an observation (heads, or tails, for instance) without therefore subtracting out one of the two terms above. That is, unless you are capable of perception of the world around you as Quantum Entangled, superpositioned wave functions… If you are capable of this type of perception, then we go back to our Qp system as I drew upon earlier:

Here again, the entire Qp branch is in a state of superposition, ‘flat,’ and remains therefore unobservable. That is, everything I crossed out on the left is {n,n}, which is the EPR/Bell coin flipping midair for infinity, useless information. In fact, it is not information in our frame of reference – keep that thought in mind, for infinity. We have the EPR team and Bell laying bets on the table while the coin spins faster than the eye can see in zero gravity. We have the EPR team betting heads and Bell betting tails. For infinity, no one will win, and no one will lose. There is no information. That is a Qp universe, a universe with no observers, or no one observing. Everything, everywhere, is gibberish, because there is no information. Without information, regardless of one’s anti-ontological thrombo, there is pure and absolute nothingness, at best a False Vacua so unstable it evaporated instantaneously, because a Vacua requires information to define it. There is a difference between unobserved and unobservable. What lies behind the horizon of a Black Hole is an example of an unobservable. Schrodinger’s cat, prior to opening the box, is unobserved. In the case of the Black Hole, there will never be a time, within reason, when what lies behind the horizon can be observed, and is therefore unobservable. In the case of Schrodinger’s box, it is merely a choice of when to open the box. An unobservable, to the best of our knowledge, only exists as the most inhospitable environment to life, e.g. a Black Hole. An unobserved may refer to anything within an observable domain which has yet to be observed. As a result, the suggestion that the universe has some mechanistic approach, void of conscious observers becomes rather ludicrous. If the cosmos is void of conscious observers, it is unobservable, and to the best

of our knowledge, the domain can only take a form similar to that behind a Schwarzschild horizon. That is, clearly, all evidence, and even proof, is that an unobservable domain (one that is void of conscious observers) can only exist as a domain in form like that behind a Schwarzschild horizon. That is not this universe. Just say NO to Qp. When Schrodinger opens the box, the superposition splits off to n↦nm. At that point, the system becomes Np. Note that I drew two stars on the Holographic surface of the system in the image above. They are causally connected to the split where n↦nm occurs. However, the entire left branch (with the big line going through it) is not causally connected to either of the stars; the left branch is unobservable. Not ‘unobserved,’ but unobservable. There is no causal connection, meaning that, unlike our cosmos, where we can look back in time via the photons at point n↦nm, we cannot look through any telescope and see anything on the left branch of Qp. There is a probability that the system on the left will have a random fluctuation resulting in n↦nm. However, unless this system is remarkably (1:10E500) stable, it will be a False Vacuum that collapses. A collapsed False Vacuum is usually described via Quantum Tunneling of some pseudo-particle, or perhaps a Higgs, out of the Vacua. If you look at the diagram on the prior page where the trail suddenly comes to a stop, that is where the pseudo-particle tunneled out. We had our probability of stability as:

The term, Sm is the entropy of m. Of course, the stability for n is:

If you recall, M is our factor for mass scale that measures the variation of the potential in the interval between the two elements of the vacua (n and m). This equation is easier to look at rearranged: (note, ln e = 1)

We look at

$$\Gamma_{nm} = M\,e^{-S_m}$$

And find that

$$\Gamma_{nm} = M\,e^{-S_m \ln e}$$

Where ln e = 1, it simply drops out of the equation. I think, therefore, the equation is incorrect, and rather than the exponent ‘e’, it should be:

$$\Gamma_{nm} = M\,e^{-S_m \ln 2} = M\,2^{-S_m}, \qquad \Gamma_{mn} = M\,e^{-S_n \ln 2} = M\,2^{-S_n}$$

$$\log_2\!\left(\frac{M}{\Gamma_{nm}}\right) = S_m$$

This makes sense, since the model for this is from Black Hole physics, under extreme mass conditions. No mass would suggest that the gamma term is zero, and no nucleation occurs. Herein lies a predicament. We need mass for gamma to be greater than zero, but we need gamma to be greater than zero in order to provide the space-time scaffold in which mass can exist. The Entropy is a spectator variable in all of this. This would suggest an interdependency between mass and gamma, the emergence of space-time scaffold. As with all things, I turn such interdependent things into fractals. The key is that without mass, we have Qp, all n’s or all m’s. Mass then, is interdependent with the processes n↦nm and m↦mn. We look therefore, again, at the probability of each branch emerging from the other: (in this case m from n)

$$P_m(u + 1) = P_m(u) - \Gamma_{nm}\,P_m + \Gamma_{mn}\,P_n$$

We substitute for the gamma term:

$$P_m(u + 1) = P_m(u) - M\,e^{-S_m \ln 2}\,P_m + M\,e^{-S_n \ln 2}\,P_n$$

That is, if the term (u + 1) occurs. If the system is Qp, for instance, then since n↦nn, we can effectively say that there is no progression of u, because u is indifferentiable from (u + 1). The entire equation is of course identical for both terms, {m,n}:

$$P_n(u + 1) = P_n(u) - M\,e^{-S_n \ln 2}\,P_n + M\,e^{-S_m \ln 2}\,P_m$$

We therefore have to consider the entire thing a ratio

$$\frac{P_m(u + 1) = P_m(u) - \sum_n M\,e^{-S_m \ln 2}\,P_m + \sum_n M\,e^{-S_n \ln 2}\,P_n}
{P_n(u + 1) = P_n(u) - \sum_m M\,e^{-S_n \ln 2}\,P_n + \sum_m M\,e^{-S_m \ln 2}\,P_m}$$
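As a quick numerical illustration of the update rule above, here is a minimal sketch of the two-state version with the proposed Γ = M·2^(−S) substitution. The values of M, S_m, and S_n are arbitrary assumptions of mine, chosen only to show the behavior: the populations relax toward an equilibrium whose ratio tracks 2^(S_m − S_n), i.e., the higher-entropy member dominates.

```python
# Toy two-state Markov iteration of P(u+1) = P(u) - Gamma_out*P + Gamma_in*P_other,
# with Gamma = M * 2**(-S) as proposed in the text. M, S_m and S_n are made-up
# illustrative numbers, not values taken from this paper.
M, S_m, S_n = 0.5, 6.0, 4.0
gamma_nm = M * 2.0 ** (-S_m)   # rate out of m (suppressed by S_m)
gamma_mn = M * 2.0 ** (-S_n)   # rate out of n (suppressed by S_n)

P_m, P_n = 1.0, 0.0            # start entirely in state m
for u in range(200_000):
    dP = gamma_nm * P_m - gamma_mn * P_n
    P_m, P_n = P_m - dP, P_n + dP

print(f"P_m = {P_m:.6f}, P_n = {P_n:.6f}")
print(f"P_m/P_n = {P_m / P_n:.4f}  vs  2^(S_m - S_n) = {2.0 ** (S_m - S_n):.4f}")
```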

Now we have fixed an inherent problem in the Markov process as it pertains to the emergence of real space-time. But, we still have to find that interdependency term between mass and this emergence phenomenon, as the term, (u + 1) will not occur without mass, and without the space-time scaffold, there is no geometry within which mass can exist. As stated, (u + 1) represents 1tp, therefore we go back to:

G = 6.67384(80)×10⁻¹¹ m³ kg⁻¹ s⁻²

46 Substitute t’ for s G’ = 6.67384(80)×10-11 m3/Kg (t’)2 𝑡 =𝑡

1−

𝑟 𝑟

→ → We therefore replace (u + 1) with tp’

𝑃 𝑡′ = 𝑃 (𝑢) − ∑ 𝑀 𝑃 𝑡′ = 𝑃 (𝑢) − ∑ 𝑀

𝑆 ln 2 𝑃 + ∑ 𝑀 𝑆 ln 2 𝑃 𝑆 ln 2 𝑃 + ∑ 𝑀 𝑆 ln 2 𝑃
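A minimal numeric sketch of the factor just substituted, t′ = t√(1 − rs/r); the sampled radii are arbitrary choices of mine. It shows the dilation factor going to zero at the horizon and recovering toward unity as we back away, which is the behavior described in the next paragraph.

```python
import math

# Gravitational dilation factor used in the substitution t' = t*sqrt(1 - rs/r).
# The sample radii (in units of the Schwarzschild radius rs) are arbitrary.
def dilation_factor(r_over_rs: float) -> float:
    return math.sqrt(1.0 - 1.0 / r_over_rs)

for r_over_rs in (1.000001, 1.01, 1.5, 2.0, 10.0, 1000.0):
    print(f"r/rs = {r_over_rs:>10.6f}   t'/t = {dilation_factor(r_over_rs):.6f}")

# At r -> rs the factor -> 0 (infinite dilation); far from the horizon t' -> t,
# which is the sense in which tp' "increases from zero" as we back away.
```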

And the term for expansion then becomes:

$$r \rightarrow \infty: \qquad t_p' \rightarrow t_p$$

This merely states that as we back away from the horizon, tp′ increases from zero (infinite dilation) and G′ begins to fall. Oddly, G′ defines mass-energy. For instance, two of the Fractal Set include:

$$t_p \leftrightarrows \sqrt{\frac{hG}{2\pi c^{5}}}, \qquad L_p \leftrightharpoons \sqrt{\frac{hG'}{2\pi c^{3}}}$$
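Here is a minimal sketch (standard CODATA-style constants; the 10% rescaling of G′ is an arbitrary choice of mine) computing the pair above and confirming that both Planck units scale as √G′, so a change in G′ drags the whole Planck ‘meter stick’ with it.

```python
import math

# Planck time and length from t_p = sqrt(h*G / (2*pi*c**5)), L_p = sqrt(h*G / (2*pi*c**3)).
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # Newton constant, m^3 kg^-1 s^-2

def planck_units(G_prime: float):
    t_p = math.sqrt(h * G_prime / (2.0 * math.pi * c**5))
    L_p = math.sqrt(h * G_prime / (2.0 * math.pi * c**3))
    return t_p, L_p

t_p, L_p = planck_units(G)
print(f"t_p = {t_p:.3e} s,  L_p = {L_p:.3e} m")

# Rescale G' by an arbitrary 10% and note both units scale as sqrt(G').
t_p2, L_p2 = planck_units(1.10 * G)
print(f"t_p'/t_p = {t_p2 / t_p:.6f}  (sqrt(1.10) = {math.sqrt(1.10):.6f})")
print(f"L_p'/L_p = {L_p2 / L_p:.6f}")
```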

And we can see from here that the entire Planck spectrum of units is slave to G’. Therefore, I can write this giant fractal for the emergence of an Np system as:

$$\lim G' \;\rightleftharpoons\; \begin{cases}
P_m(u + 1) = P_m(u) - \sum_n M\,e^{-S_m \ln 2}\,P_m + \sum_n M\,e^{-S_n \ln 2}\,P_n \\[4pt]
P_n(u + 1) = P_n(u) - \sum_m M\,e^{-S_n \ln 2}\,P_n + \sum_m M\,e^{-S_m \ln 2}\,P_m
\end{cases}$$

The key to know is that at rs we are at a definitive 2-dimensional scaffold, the Black Hole surface. As we rise above the horizon to r, we increase the dimensionality. This is where the confusion lies with respect to the debate over how many dimensions is the correct quantity. For our cosmos, the term (u + 1) represents proper time in Planck intervals. This raises the dimensionality to 2+1. As for the 3rd dimension of space, the term:


Describes the orthogonal emergence via the modified Markov process. The term c³ in the denominator is where G′ gets the m³ kg⁻¹ s⁻². As a result, G′ gives us our dimensionality of (3+1). Note also that the term:

$$r \rightarrow r_s$$

describes the horizon, and the phenomenon behind it as:

$$\lim_{r \to r_s} G' \;=\; \frac{P_m(u + 1) = P_m(u) - \sum_n M\,e^{-S_m \ln 2}\,P_m + \sum_n M\,e^{-S_n \ln 2}\,P_n}
{P_n(u + 1) = P_n(u) - \sum_m M\,e^{-S_n \ln 2}\,P_n + \sum_m M\,e^{-S_m \ln 2}\,P_m} \;=\; \infty$$

And this is where we get our infinite drop, infinite Tensor Network Complexity behind the horizon. At this point it is incorrect to think of it as a fractal; it is increasing without bound. That is, the probability of spawning either n↦nm or m↦mn, divided by the mass, is equal to the entropy of either element, n or m (I solved for n here as an example, but it is the same for both elements {n,m}). Behind the horizon of a Black Hole, the entropy is fixed; the mass may or may not change; regardless, if the mass remains fixed, the probability of spawning either n↦nm or m↦mn continues behind the horizon. In many cases I make reference to ‘falling into a Black Hole’ as an infinite temporal phenomenon for both the observer and observed. One might think this contradicts the set of equations and statements above. However, the Black Hole is neither observer nor observed, nor in any way temporally dependent. It is an unobservable, time independent. We can also use the velocity of ‘recession’ as our proper measure of G′:

$$t' = t_p\sqrt{1 - \frac{v^{2}}{c^{2}}}$$

But we need to quantize both velocity and the speed of light. The speed of light is defined as:

$$c = \frac{1\,L_p}{1\,t_p}, \qquad v = \frac{n\,L_p}{x\,t_p}$$

The dilated time, in terms of this quantized velocity, will then be defined as:

$$\pm t' \;\rightleftarrows\; t_p\sqrt{1 - \left(\frac{n L_p / x\,t_p}{1 L_p / 1\,t_p}\right)^{2}}$$

The (+/-) term is a natural result of the square root on the right hand side of the equation.
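A minimal numerical sketch of that quantization (the (n, x) pairs are arbitrary choices of mine): evaluating t′ = tp√(1 − (v/c)²) for velocities of the form v = (n·Lp)/(x·tp) shows that the dilation factor can only take a discrete set of values rather than a continuum.

```python
import math
from fractions import Fraction

# Quantized velocities v = (n*Lp)/(x*tp), expressed as fractions of c = 1*Lp/1*tp.
# The (n, x) pairs are arbitrary illustrative choices.
pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 1)]

for n, x in pairs:
    v_over_c = Fraction(n, x)
    factor = math.sqrt(1.0 - float(v_over_c) ** 2)   # t'/tp = sqrt(1 - (v/c)^2)
    print(f"v = {str(v_over_c):>4s} c   ->   t'/tp = +/- {factor:.6f}")

# Only these discrete values are available; n and x must be whole numbers of Lp
# and tp, so there is no continuum of dilation factors between them.
```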

This is a direct result of Quantized Motion on a Planck Scale, which was discussed earlier. If we look at that phenomenon, using again 0.5c as our example, at v = 0.5c we are faced with (we review this):

1. Go ½ Lp in 1 tp. That is not possible, because this requires a structure finer than Lp (a Planck length) will allow. (Figure 1: you cannot slice Lp in half, or take any other slice, in normal space-time.)

2. Or, go 1 Lp in 2 tp. Since proceeding at more than 1 Lp per tp is exceeding light speed, this is forbidden; and going less than 1 Lp is forbidden because it requires a structure finer than Lp will allow.

We are faced with motion taking on the following characteristic: go at v = c for 1 tp, stop for 1 tp, go at v = c for 1 tp, stop for 1 tp; etc., as sketched below.
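Here is a minimal sketch of that go/stop pattern (the accumulator scheduling rule is my own toy choice, not a rule from this text): over many Planck ticks, a particle that can only sit still or hop exactly one Lp per tp reproduces v = 0.5c, or any rational fraction n/x of c, as a duty cycle, never as a sub-Planck step.

```python
from fractions import Fraction

# Toy "go/stop" motion on a Planck lattice: in each Planck tick the particle
# either hops exactly 1 Lp (instantaneous v = c) or stays put (v = 0).
# The target speed is a rational fraction n/x of c; the accumulator scheduling
# below is my own illustrative choice.
def quantized_trajectory(v_over_c: Fraction, ticks: int):
    position = 0            # in units of Lp
    owed = Fraction(0)      # accumulated "owed" distance
    hops = []
    for _ in range(ticks):
        owed += v_over_c
        if owed >= 1:       # owe at least one full Planck length: hop at c
            position += 1
            owed -= 1
            hops.append(1)
        else:               # otherwise stop for this tick
            hops.append(0)
    return position, hops

pos, hops = quantized_trajectory(Fraction(1, 2), 20)   # v = 0.5 c
print("tick pattern (1 = move at c, 0 = stop):", hops)
print(f"distance = {pos} Lp over 20 tp  ->  average v = {pos / 20} c")
```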

Note here that I am using the convention of ‘length contraction,’ so that the reader does not confuse this property with an artifact of flipping any of the equations upside down. It is not possible to travel one Planck length in distance at any ‘velocity’ less than c, because that requires splitting a Planck unit of length and/or time into slices smaller than space-time will allow. It is not possible to travel one Planck unit of distance at any ‘velocity’ greater than c, because that violates Special Relativity. Therefore, on a Planck scale, only the velocities zero and c are possible. Motion on a Planck scale is thus quantized into jumps, or leaps, alternating between zero and c. Furthermore, since each Planck volume of

space-time is isolated from each and every other Planck volume of space-time, this phenomenon of quantized motion occurs separately for each Planck volume of space-time with no apparent means of coordination between Planck volumes of space-time. There is thus some unifying factor involved in this phenomenon. Just as there is some unifying factor that provides a seeming continuity of the progression of Planck intervals of time (tp), or seeming continuity of space (Lp), mass-energy, the forces of nature, and so on, referred to as the ‘Planck Flow.’ In this case, what we are finding is that information is defined as a change of state. That change of state occurs every 10⁻⁴⁴ seconds, our Standing Interval masquerading as a Planck Flow. We consider the velocity of recession, the ‘Hubble Parameter’ H0, as the result of ‘stuff’ moving away from other ‘stuff’ in an existing scaffold of space-time, as per Alan Guth’s Inflationary model. In this case, we are looking at the lizards, fish, demons, and so on:

The reason for quantizing velocity is because of the lizards. If we are using our local meter stick, which is quantized to our local environment, we have no choice but to take not only a quantized measurement of any motion (which means Special Relativistic velocity changes are quantized; explained in greater detail later), but that according to our locally quantized meter stick, the things crossing the lizards at the center of this CFT lizard field are moving faster. That is, if a ‘thing’ crosses 1 lizard in the center, according to our locally quantized meter stick (which is much smaller at the outer rim), it seems to have crossed a far greater number of lizards at the center, where it takes many of our local lizards to equal the length of one lizard at the center. As a result, the artifact is that the ‘velocity of recession’ from the lizards at the center seems greater than the velocity across lizards local to us. This requires that the space-time scaffold (the disk of lizards), as per Guth’s Inflation model, already exists. This H0 is an artifact, however, that is how the universe works. The motion is real, the velocity of recession is ‘real,’ in the sense that Conformal Field Theory (CFT) tells us that the scale invariance of my locally quantized meter stick, when used to measure the quantized meter stick 13.8 billion light-years away gives a different result, which is ‘real.’ I will go into more detail on this local quantization later. The non-symmetric Gmn is part of the rate equation, and the description that includes the variance/invariance is given by:

$$G_{mn} = e^{(S_m - S_n)/2}\; I_m\,\lambda_I\, I_n$$

The portion (Sm − Sn) is the difference of the entropies of m and n; if we were looking at Gnm they would be reversed. The portion:

$$I_m\,\lambda_I\, I_n$$

is our symmetry factor. Remember that when

$$\lambda_I = 1, \qquad \lambda_I = 2^{-\Delta_I} = e^{-\Delta_I \ln 2}, \qquad \Delta_I = 0$$

What happens if we have a weird universe where we have neither n nor m (e.g., within a False Vacua where the properties are different from ours)? Susskind calculated this situation with respect to the probability that it would spawn an n or m type that is fitting for our universe (Vacua).

Here in our weird universe, a False Vacua, we start off with something of type r. The u is the progression of time in ui steps. The probability that a type n will spontaneously emerge he gives as:

$$P_n(u) = \left[\,G^{\,u}\,\right]_{nr} = e^{(S_n - S_r)/2}\; I_n\,\Lambda_I^{\,u}\, I_r$$

Why is this important? Because of the possible 10E500 False Vacua that we propose to be less stable than ours, this probabilistic equation gives the potential that one of these False Vacua can become more stable. Since we do not know the stability of the Vacua we exist in, we cannot say how stable. We come back to our Factorial Tree:

We want to know the distance between X1 and X2, referred to as the ‘p-adic distance’; as you recall, our system is p-adic (here 2-adic). In order to do this, we have to trace the distance, as shown in an earlier diagram, through the least number of lizards; in this case the absolute distance is set by where they can be traced back to a common interval of time, ui, where they split off. The absolute distance is given by:


$$|X_1 - X_2| = 2^{-u_i}$$

In Cosmology, this is comoving distance. This gives us a grip on the expansion we observe with respect to my former description of moving across that field of lizards using our locally quantized meter stick. We perceive this as an artifact we call the Hubble Parameter. The next thing we come to is the Equilibrium Correlation Boundary, which is a future boundary:

$$C_2 = \sum_{ř}\; P_{ř}\; P_{m ř}(u_0 - u_i)\; P_{n ř}(u_0 - u_i)$$

Here, I am tracing the correlation between a future x and y at time u0 back to some origin, marked by ř at time ui, where the subscript I refers to ‘initial,’ and the subscript zero refers to origin, as we are tracing backward through time. Also, note that I am using ř because this is not the same r as we used a few pages back. In Conformal Coordinates, the actual distance between x and y is how deep down you have to go until their ‘light cones’ intersect. In this case, we are going to focus on type m. Here C refers to Correlation, in this case, of x and y. The term Př refers to the probability of all the weird things that can enter that point at ui, we then have the probabilities of that weird thing ř↦m between times ui and u0, times the probabilities of the weird thing ř↦n over the same time interval. We’ll then say that there is an orthogonal eigenvector, J to the eigenvector I:

$$I \circ J = \delta_{IJ}$$

(That circle is a dot product.) This makes our C2 become, with respect to I dot J:

$$C_2 = \sum_{ř}\, e^{S_{ř}}\; e^{(S_m - S_{ř})/2}\, I_m I_{ř}\,\lambda_I^{\,(u_0 - u_i)}\;\; e^{(S_n - S_{ř})/2}\, J_n J_{ř}\,\lambda_J^{\,(u_0 - u_i)}$$

Keep in mind that ui describes the p-adic distance, that is, how deep down the tree we have to go to find a common point in the Markov tree. It is proper time as measured (defined) by the Planck interval of time, tp; that is the definition, not the debate. I have taken the liberty of trying to line up the I and J eigenvectors so as to make them obvious. In this, the Sř terms of the exponents cancel, and we get:

$$C_2 = e^{S_m/2}\, e^{S_n/2}\; I_m I_n\;\lambda_I^{\,2(u_0 - u_i)}$$

That is the equilibrium eigenvector. Also referred to as cluster decomposition, saturated by the identity operator (a non-zero one point function in CFT).

$$C = \frac{1}{|x - y|^{2\Delta_I}}$$
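To make the two pictures concrete, here is a minimal sketch (the tree depth, the Δ value, and the leaf labels are arbitrary choices of mine, and the 2-adic distance convention is the 2^(−ui) form used above): it measures how deep two leaves of a binary Markov tree share an ancestor and evaluates a correlator of the form 1/|x − y|^(2Δ) on that distance, so that points whose split lies deeper in the past come out more strongly correlated.

```python
# Toy 2-adic tree distance and the power-law correlator C = 1 / |x - y|**(2*Delta).
# Leaves of a depth-D binary tree are labelled by integers; the common-ancestor
# depth plays the role of u_i in the text. Delta and the labels are made up.
def common_ancestor_depth(x: int, y: int, depth: int) -> int:
    """Number of levels from the root shared by leaves x and y."""
    shared = 0
    for level in range(depth - 1, -1, -1):
        if (x >> level) & 1 == (y >> level) & 1:
            shared += 1
        else:
            break
    return shared

def correlator(x: int, y: int, depth: int, delta: float) -> float:
    u_i = common_ancestor_depth(x, y, depth)
    distance = 2.0 ** (-u_i)      # 2-adic style distance: deeper split = closer
    return 1.0 / distance ** (2 * delta)

DEPTH, DELTA = 10, 0.5
for x, y in [(0b0000000000, 0b0000000001),   # split at the very bottom
             (0b0000000000, 0b0000100000),   # split midway
             (0b0000000000, 0b1000000000)]:  # split at the root
    print(f"x={x:4d} y={y:4d}  shared depth={common_ancestor_depth(x, y, DEPTH):2d}"
          f"  C={correlator(x, y, DEPTH, DELTA):.3e}")
```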

C1 is way out in some indefinite future, and C2 is perhaps right now. Somewhere between now and forever, that weird thing, ř, can become type n or type m, native to our cosmos as we have defined it. That weird thing ř can be ‘nothingness.’ That gets into a thing I call the Spectator Hypothesis, which I will get into later. In short, ř becomes part of the {n,m} system by becoming Quantum Entangled with it; that is the point where it enters the vertex, ui. The idea that it can be nothing is no more remarkable than flipping a digit from 0 to 1 in basic Information Theory. Here I am not likening the 0 in the binary system to a true zero; ‘nothing’ in Quantum Theory means a Vacua, or vacuum state. A False Vacua Bubble Nucleates by Quantum Tunneling past the Vacua region. Where does it go? The two possibilities include nowhere (a chance to get away from it all), or into a more stable Vacua. Our Vacua may be more stable; we don’t know. We measure things in billions of years, which seems ‘Big’ to us, but in an infinite domain it is an infinitesimal. That’s Calculus 101, the Projectively Extended Real Line of Infinitesimal Calculus. In Larson Calculus, 9th edition, this is chapter 3 (Theorem 3.10), which means it is about three weeks into the course of Calculus 101. This is the reason I find QFT offensive, in that it ignores the most basic edicts of mathematics by selectively ignoring those axioms and theorems that do not support the Big Theory and making up new ones out of thin air. The very scaffold of non-locality requires an infinite field; if it is finite, it is localized, and explaining Quantum Entanglement was the original intent. However, you cannot ‘fit’ this infinite field into our finite domain (the lower boundary of the Big Bang defines our domain as finite). Since the very scaffold of all QFT is inherently flawed, the entire model is dismissed. As for ‘predictability’ being any factor suggesting it is somehow correct, I think I used Ptolemy’s geocentric mathematical model, which stood uncontested from the 1st century until 1915, when Schwarzschild finally matched Ptolemy’s precession of Mercury. (The urban myth is that Einstein did the math, but he kept getting it wrong, so Schwarzschild literally handed Einstein the correct result.) The moral to the story thus far is that the path integral between two points in space along the Schwarzschild surface and their distance in time is a quantized path integral (denoted by each split n↦mn or m↦nm) that is connected in the past. This makes perfect sense, as nothing we perceive is in the present. The present is a slice of time 10⁻⁴⁴ seconds wide; the prior 10⁻⁴⁴ second has ceased to exist; the next 10⁻⁴⁴ second does not exist yet. This is the Standing Interval. The QZE determines ‘the size of the lizard.’ Staring at the lizards (there is a spoon) makes them very small, and more lizards are required to equal our locally quantized meter stick, making quantized time (the Planck Flow) slow, not appear to slow, but slow. Turning our back to the lizards (maybe there’s a spoon, maybe not) makes the lizards larger. Fewer large lizards are required to equal our locally quantized meter stick, and in fact they may be larger than our locally quantized meter stick. The part of the discussion that is really going to piss off the mechanistic ‘opinionated,’ but which is disproven by [1, 2] below,

1. Xiao-song Ma, Stefan Zotter, Johannes Kofler, Rupert Ursin, Thomas Jennewein, Časlav Brukner & Anton Zeilinger. Experimental delayed-choice entanglement swapping. Nature Physics, volume 8, pages 479–484 (2012).
2. A. Peres. Delayed Choice for Entanglement Swapping. J. Modern Optics, 47: 139-143 (2000).

is that, because we have not looked at the lizards (there is no spoon) for 13.8 billion years, according to our locally quantized meter stick, whose value is determined by conscious observation, the path the photons took in the past was longer, as measured by our currently locally quantized meter stick. Having a second look at our chart regarding the New Laws of Motion on a Planck Scale, I see an interesting feature that makes things even more complicated than the first time we looked at this thing a few pages back, where I stated: It is not possible to travel one Planck length in distance at any ‘velocity’ less than c, because that requires splitting a Planck unit of length and/or time into slices smaller than

space-time will allow. It is not possible to travel one Planck unit of distance at any ‘velocity’ greater than c, because that violates Special Relativity. Therefore, on a Planck scale, only the velocities zero and c are possible. Motion on a Planck scale is thus quantized into jumps, or leaps, alternating between zero and c. Furthermore, since each Planck volume of space-time is isolated from each and every other Planck volume of space-time, this phenomenon of quantized motion occurs separately for each Planck volume of space-time with no apparent means of coordination between Planck volumes of space-time. There is thus some unifying factor involved in this phenomenon. Just as there is some unifying factor that provides a seeming continuity of the progression of Planck intervals of time (tp) or seeming continuity of space (Lp), mass-energy, the forces of nature, and so on, referred to as the ‘Planck Flow.’

Here is where it gets even weirder:

Every Planck interval is not only isolated in space-time but, under CFT less the scale invariance, is not the same size as the prior Planck interval. As a fractal, they are increasing in size as we look backward in time. Proof? Everything is redshifted. The urban myth is that the Hubble Parameter only applies to the space between galaxies, that the mass of the galaxies ‘holds space-time together.’ This was dispelled 20 years ago as another absurdity. Every Planck interval is a locally quantized meter stick that is slightly smaller than the one that preceded it. The only way we know of the prior Planck interval is that the information is carried forward via photons. The photons from a greater distance are in the center of the disk of lizards, and longer in wavelength than those at the surface, which is the subjective present. (By subjective I mean it is a smear, not a well-defined interval, as our brains are literally smeared across 10E106 Planck volumes of space-time, each of a different quantized value, via a fractal that is

another section of this text.) As our observation rate approaches the change-of-state rate, the Standing Interval, if we are 10 Planck lengths away, as in the chart above, it then becomes impossible to observe the actual current state of the observed system. That is, there is no ‘real time’ observation of a change of state between any two Planck Intervals of space or time; each is isolated in a domain of 1 Lp³. Furthermore, observation of a prior Planck interval, again, is by a locally quantized meter stick that is smaller than the one being observed, at any distance, even 1 Planck interval away. For lack of a better term, the illusion of continuity is the result of spanning our observation out to macroscopic scales, that is, in the chart above, observing the entire chart, not any specific interval on the chart. In the real world, our macroscopic world, that means the Planck Flow is only apparent upon observing large chunks of time, e.g., zooming out to observe the entire chart as a whole, not a singular cell on the chart, each cell being a completely isolated space-time domain of 1 cubic Planck interval. That is the Quantum Zeno Effect. That is, in fact, the structure of space-time on a Planck scale out to cosmological distances. There is no cosmological distance; any distance means the past, and the past has ceased to exist. That is true whether we regard 13.8 billion light-years or 10⁻³⁵ meters. That is, the more rapid the observations, the greater the Uncertainty of the arrow’s next state in the next frame (NOT Heisenberg Uncertainty). This is the QZE, and to date, all proposed mechanisms attempting to explain it are gibberish. Eventually, your observations become so rapid (continuous) that the Uncertainty of the arrow’s next state in the next frame forbids its progression altogether, and the arrow ceases to progress. Why? This goes back to the difference between unobserved and unobservable. As an unobservable, it will drop exponentially to infinite depth, if our domain were permitted to exist for infinity. As unobserved, it follows the pattern of the DCQE. When unobserved, a system remains in its native state, a wave function, which is superpositioned in both space and time. That is why we see an interference pattern. There is no limit, nor is there any valid description, of how far-reaching this superposition can be. As far as we can tell, the least upper limit is the entire domain of this cosmos, which is not infinite in size, nor duration. In fact, again, it has a lower limit of the Big Bang, defining it unambiguously and rigorously as finite. When a system is observed, it becomes a tiny cannon ball, which can be superpositioned in neither space nor time. Here is the key to know. The Markov process that describes the Complexity of increasing Tensor Networks, which in turn define space-time and its geometry, is dependent on not looking. None of this magic can occur as tiny cannon balls. They are just static, non-dynamic, unchanging cannon balls. The error in thinking in the DCQE or any other n-slit experiment is to think that the photon, electron, whatever, has a position and velocity as it strikes the detector, whose uncertainty is governed by the misuse and miscomprehension of the Heisenberg Uncertainty Principle. When the photon hits the detector, perhaps a bit of emulsion film, it ceases to exist altogether. It has become a bit of angular momentum in an electron in some silver ion orbital.
There is no Heisenberg Uncertainty regarding position or velocity, it is gone. Once a thing is detected, it is snatched out of its native form, which is not within our frame of reference, into our frame of reference, where the HUP and Quantum entanglement, superposition, and so on appear paradoxical. More of that in a minute. Let’s go back to our Alice, Bob, and Victor scenario, but this time, put them on a Markov tree.

Where {x,y,z} is typical for a 3-d Cartesian system, we are interested in Alice, Bob, and Victor:

We look at our equation of state for the future surface as C2:

$$C_2 = \frac{1}{|x - y|^{2\Delta}}$$

We used ‘C’ for Correlation Value. Rather than ΔI, I’ll assign ΔA for Alice, ΔB for Bob, and of course, ΔV for Victor. Remember in the Peres experiment (carried out by Xiao-song Ma, Stefan Zotter, Johannes Kofler, Rupert Ursin, Thomas Jennewein, Časlav Brukner & Anton Zeilinger), it was determined that Alice and Bob had already detected the entangled photons, and recorded the events, but had not yet looked at the results. When the entangled photon(s) reached Victor, his method of detection determined what both Alice and Bob would see after they had already recorded the results. I want the ontologically challenged to be as pissed off as possible. For instance, Alice will look like:

$$C_A = \frac{1}{|x - y|^{2\Delta_A}}$$

The Conformal Correlations will look like (where we had I and J representing our orthogonal eigenvectors):

$$\sigma_m = e^{S_m/2}$$

is a dominant factor in the cluster correlation when |X−Y| is very large, for:

$$\langle \theta_I(x)\,\theta_J(y)\rangle = \frac{\delta_{IJ}}{|X - Y|^{2\Delta_I}}$$

In simpler terms, as |X−Y| grows:

$$\lim_{|X - Y| \to \infty}\left(\frac{\delta_{IJ}}{|X - Y|^{2\Delta_I}}\right) = 0$$

Thus, we see the whole term δIJ/|X−Y|^(2ΔI) shrink away to zero as |X−Y| approaches infinity. If you recall, X and Y were coordinates on the surface of DeSitter space:

The terms I and J represent orthogonal eigenvectors. On the surface of DeSitter space, when X and Y are separated by large DeSitter distances, the orthogonal feature shrinks away. A simple way of picturing this is that in this diagram I have labeled the distances as X2 and Y2, and you can see that the route to the origin is then more direct, taking fewer twists and turns through the labyrinth below the horizon of DeSitter Space:

In our Escher Fractal Lizard example, we can see that X2 and Y2 each only ‘pass through one lizard,’ whereas X and Y must pass through a labyrinth of lizards. We can gather from that that the orthogonal nature of DeSitter space has to do with navigating the labyrinth of lizards. Exactly what does this orthogonal nature of DeSitter space mean in real space-time? In QFT, it is a common error to think that each particle and/or force has its own native field. In QFT, each qubit of information that describes a particle must be a separate field. The reason is simply that in order to be non-local (for the hundredth time) the field must be infinite; if the field is finite, then it is localized. QFT arbitrarily assigns as many dimensions to a field as is required. When the particular model doesn’t match observed results, the knob is turned, the number and type of dimensions is changed, and the process repeats until the model matches observation, and that is called prediction. Therefore, assuming the QFT field has greater than two dimensions and is infinite (imagine a 3-d sphere), there is infinite overlap of every portion of the infinite field. For instance, in just 3 dimensions, if the field is infinite, then the overlap of the infinite field components is infinite. As a result of infinite overlap, there can be no differentiation between field components, making the infinite field an absurdity. No differentiation between components of the field, along with infinite overlap, means that there can be no ‘quantum fluctuation.’ In order to visualize this, we take a simple 2-d depiction of a classic sine wave:

If just the X and Y axes have infinite overlap, then this up-down ‘fluctuation’ has no meaning. We can regard it either as infinitesimal, e.g., zero, or (at the same time) infinite in height. Furthermore, even in 1-d, just considering x to extend to infinity makes 2π (1 cycle) an infinitesimal, zero. The more dimensions and components of the infinite field we add, the more absurd the scenario becomes. It is not a mystery beyond my, nor even your, human understanding. It is understood quite fully, and as a result, dismissed. As an example, QFT can in part describe an electron by adding unprovable dimensions, referred to as ‘spinor space,’ to explain the seemingly odd feature of the ½ spin requiring 720 degrees of rotation to return to its initial state. The QFT description for the electron didn’t work, because it did not reflect observations. Therefore, the ‘knob’ was turned, and unique unprovable dimensions were added just for the electron (because this same dimension set does not match observations of other spin ½ particles). Then, the spinor space model does not fit the Bohr quantization of the electron. The Bohr quantized electron makes a quantum jump from one orbital to another, with no states of transition in between. The QFT model treats all of these fields and dimensions as continuous, infinitely divisible. This in turn requires that the electron pass through an infinite number of states between one orbital and the next. As a result, QFT has failed to describe the electron. The Bohr quantization is rock solid and irrefutable. The QFT description is impossible to be correct in any way. Now that we have seen how QFT has failed to describe the electron, we also look at how and why the QFT model fails to explain it. I will therefore reconform the electron’s wave function such that we see the wave function as a ‘pancake.’ It is like an old-style LP record in two dimensions, ‘spinning,’ which describes the angular momentum in one of two orthogonal conformations. Then we add a very observable third dimension, such that the pancake has some limited depth. Therefore, we have a sort of ‘thick pancake.’ Then, like dropping a tiny pebble into the center of a large coffee cup, we see the ripples from the pebble moving outward from the center to the edge. The ripple is a classic wave with an angular momentum characteristic orthogonal to the 2-d disk of the pancake. When the ripple reaches the edge of the coffee cup, it is reflected back to the center. Both the outward ripple and the reflected inward ripple have the same component; there is no difference in the angular momentum component of either

ripple. Thus, I describe the ½ spin characteristic of a fermion as once around, and once head-over-heels. This head-over-heels component has not been observed because it is the same whether the spin is clockwise or counterclockwise for the 2-d orthogonal component, as there is no orientation of the ripple within a magnetic field, because it is not coupled to the magnetic field. In addition, the ‘edge of the coffee cup,’ so to speak, the limit of the ripple, is always contained within the angular momentum of the electron, which defines its orbit about a nucleus. It is therefore Scale Invariant, like DeSitter space must be. All attempts to reconcile the QFT model of the electron with the AdS/CFT definition of space-time have been less than successful (that is as politically correct as I can tolerate) with respect to the variables I have described. I have added no invisible, unprovable, and ‘tunable’ dimensions, turned no great variable knob, tweaked and re-tweaked no parameters whatsoever. I have both described and explained the fermion, and this extends to all fermions, in a few paragraphs, in one try, not 35.7 million papers over a century. Furthermore, the entire model only requires simple Pythagorean geometry, in purely Euclidean space, and intermediate algebra for the entire description, all in observable dimensions. What then is the Pauli Exclusion Principle? Spin 1 ‘particles,’ such as photons, of the same spin (say, +1) can pile up indefinitely, occupying the same volume of space-time. However, we know that this is a population of photons; they have not lost their identity to become one gestalt photon. I say this because in order to explain quark star degeneracy, Calabi-Yau manifolds are needed, with the knob turned to add the quantity and quality of dimensions necessary for fermions, namely quarks, to do this trick. If we add a second wave component, my ‘pebble in a coffee cup’ ripple, then when two fermions of the same spin (the spin component we measure as being coupled to a magnetic field) have ripples that are not in perfect phase, we have destructive interference. Since the wave function is information, and information cannot be destroyed (-1 Law of Thermodynamics; Susskind), they bounce off one another. The -1 Law of Thermodynamics is so powerful that this rule holds even unto the degeneracy of a neutron star. In the boson, the lack of this second orthogonal wave component makes it such that although we observe an interference pattern, as per the double-slit, the actual photon’s state has not altered; it has merely projected a third component of interference. For those of you who have difficulty envisioning the difference, the photon interference pattern is exactly the method by which a Bose speaker system works. Originally taken from phased-array RADAR, the idea is to set up a wave, then a retarded (delayed) wave following. Each wave may be of an amplitude (we are in sound mechanics now, not electromagnetic waves) appropriate for the potential of a 5-centimeter speaker, but the waves cross at perhaps 2 meters away. Depending on the delay and the angle, they literally create a virtual 20-inch speaker out of thin air, and the signal is capable of an amplitude and dynamic range of a 20-inch speaker. Here is an image of a phased-array RADAR system, showing the angle being altered. This allows a jet fighter, for instance, to sweep back and forth thousands of times per second, rather than by the slow mechanical means.
However, if you look carefully at this image, what you will notice is that where the waves overlap, at a greater distance, the ‘virtual wave front’ is much larger. In addition, at the critical distance, it is many times more powerful (in amplitude), as, in the case of sound, it is apparently moving a larger mass of air, which is amplitude in acoustics.
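Here is a minimal numeric sketch of the delayed-wave effect just described (the frequency and delays are arbitrary illustrative values): adding a wave to a delayed copy of itself gives a combined amplitude anywhere from zero (fully out of phase) up to double (fully in phase), which is the sense in which the crossing region acts like one larger ‘virtual’ source.

```python
import math

# Two identical waves, one delayed; their sum at a point has amplitude
# 2*|cos(phase_shift/2)|. Frequency and delays are arbitrary illustrative values.
freq = 1000.0                       # Hz (period = 1 ms)

def summed_amplitude(delay_s: float) -> float:
    phase = 2.0 * math.pi * freq * delay_s
    return abs(2.0 * math.cos(phase / 2.0))

for delay in (0.0, 0.000125, 0.00025, 0.0005):   # 0, 1/8, 1/4, 1/2 of a period
    print(f"delay = {delay * 1e3:6.3f} ms  ->  combined amplitude = {summed_amplitude(delay):.3f}")
```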

The moral of the story is, interference patterns of photons are not 'two photons,' but at least one 'virtual photon,' as demonstrated in the phased array system. The original two photons are intact, unaltered, and have not even noticed one another's existence, as though invisible to each other. What we see is different from our frame of reference. However, this does require at least two photons; a single photon cannot create a phase effect. On the other hand, if the electric and magnetic fields, each of which is associated with its own 'virtual photon' that mediates each field, are made to go out of phase for a single photon, such a phase effect can be caused. In nature, as we saw in the Xiao-Peres (Alice, Bob, Victor) scenario of a photon passing around a galaxy via Gravitational Lensing, such an effect can be explained by the detection method being capable of 'seeing' the two fields go out of phase for a single photon. However, this phenomenon cannot explain the Xiao-Peres results of the Delayed Choice Entanglement Swapping experiments. In fact, I would vote that such behavior of a single photon bending around a galaxy is definitely and irrefutably the result of the electric and magnetic fields (for a single photon) going out of phase with one another. I would also vote that the Delayed Choice Entanglement Swapping experimental results have absolutely nothing to do with this phenomenon of phasing. They are not at all related.

Phase squeezing is usually explained in a fashion that avoids directly noting the difference in phase of the electric and magnetic fields. I am not sure why, but from reading endless literature on the subject, it seems to be inherently left to mathematicians, who by definition are not physicists, and who think it has something to do with Heisenberg's Uncertainty Principle. However, in every case, they note the photon's 'quadrature,' which is defined by the relationship:
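The relation itself is not reproduced in the source text. What 'quadrature' standardly denotes in the squeezing literature, and what appears to be meant here, is the decomposition of the field mode into two components 90° out of phase with one another:

$$\hat{X}_1 = \tfrac{1}{2}\left(\hat{a} + \hat{a}^{\dagger}\right), \qquad \hat{X}_2 = \tfrac{1}{2i}\left(\hat{a} - \hat{a}^{\dagger}\right), \qquad \hat{E}(t) \propto \hat{X}_1\cos\omega t + \hat{X}_2\sin\omega t, \qquad \Delta X_1\,\Delta X_2 \ge \tfrac{1}{4}.$$

Squeezing one quadrature at the expense of the other is what the Uncertainty-Principle language in that literature refers to.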

If we go back to our photon, as it corkscrews through space-time:

It then becomes obvious that the math that describes phase squeezing is in fact describing the electric and magnetic field components, at 90° to one another. We have the Faraday Effect, which affects the rotation of light in a medium, and the Kerr effect, which affects the rotation via reflection from a magnetized surface. In general, both effects are referred to as electro-gyration. Prati [E. Prati (2003), "Propagation in gyroelectromagnetic guiding systems," Journal of Electromagnetic Waves and Applications, 17:8, 1177–1196, DOI: 10.1163/156939303322519810] determined that the permittivity and permeability constants not only have different values (of course), but also have different diagonal (trace) Tensors.
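For reference, and as an assumption on my part rather than a reproduction of Prati's own equations, the standard form of the permittivity and permeability tensors for a gyrotropic medium magnetized along z carries the off-diagonal element ε₂ alongside the diagonal entries:

$$\varepsilon = \begin{pmatrix} \varepsilon_1 & -i\varepsilon_2 & 0\\ i\varepsilon_2 & \varepsilon_1 & 0\\ 0 & 0 & \varepsilon_3 \end{pmatrix}, \qquad \mu = \begin{pmatrix} \mu_1 & -i\mu_2 & 0\\ i\mu_2 & \mu_1 & 0\\ 0 & 0 & \mu_3 \end{pmatrix}.$$

When ε₂ → 0 the tensors become purely diagonal, which is the limit discussed next.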

Prati describes this Tensor in such a way that when ε₂ vanishes (as in the case of a sinusoidal signal passing through zero) the magnetic permeability also becomes a diagonal product. At this point, a material can selectively become translucent (slowing the phase velocity of one polarization state) or opaque to a particular polarization.

Back to our fermions… If, however, the magnetic components of the 'particles'' spins are opposite one another (for a fermion), there is no interaction of the orthogonal wave components. As a result, no destructive interference can be observed. The lack of interaction goes back to the equation set above, which describes the δIJ as being maximized. Why is it maximized? In order for a single fermion to spin, you have to accept the fact that the wave function is spread out over more than one Planck length, and therefore over more than one Planck interval of time. In the case of a single fermion, the value |X−Y| is very small, and δIJ is at its maximum. Furthermore, the value ΔI, which meters proper time in Planck intervals, is also a small value, but not zero. One end of the wave function is isolated in both space and time from the other end. As a result, because |X−Y| is small, the common vertex, |ui−u0|, is very 'shallow.' In short, we have the orthogonal wave in the fermion (the ripple from the center of our coffee cup scenario) that does not exist in a boson.

If two fermions have the same spin vector (in the vector we can detect in the presence of a magnetic field, which I'll call the polar vector, as it seems to possess angular momentum as though 'spinning' on this polar coordinate), the orthogonal components must be in perfect phase; otherwise, destructive interference occurs, and information is destroyed. This is not the same as an interference pattern for a photon, where the third component is 'virtual,' not real. In this case, we have a true wave function of set mass-energy, and no 'fractional' value is possible because the charge cannot be so divided. If the polar spins are opposite one another (work out the image and math in your head), the orthogonal wave components have no interaction with

one another, as they are not associated with the magnetic field of the fermion. That is, the orthogonal wave is not coupled to the fermion's magnetic field. If it (the orthogonal wave component) were, we would be able to observe it in the classic Stern-Gerlach setup used to detect particle spin via a magnetic field. Thus, as I have said, the fermion spin is best described as once around clockwise (or counter-clockwise) and once head over heels (or heels over head). The head over heels is a ripple, a classic wave starting at the center, rippling out to the edge and reflecting back. Sound familiar? This is exactly the same as the Lin-Shu density wave for a spiral galaxy. Except in this case we are not discussing Gravitation, nor am I at this time suggesting any relationship, although I have not ruled it out.

In order to comprehend the consequence of this, we look at a photon that is redshifted to nearly black as a result of 'velocity of recession.' This means the photon's wavelength is, for simplicity's sake, 3E8 meters long, or 1 second from one end of the photon to the other. It therefore has roughly 2×10^43 slices of time and the same number of slices of space across it. If we think of the spin as conventional, for visualization purposes, the wave will take a little more than 3 seconds to make a full turn, to return to its initial state. Thus, the depth, |ui−u0|, is very large and complex by 'particle' standards. Neglecting superposition for simplification, every portion of this great big spinning photon exists in an isolated region of space-time. As a flat 2-d thing it spans of order 10^86 Planck areas; as for volumes, the 3-d shape of a photon is rather complex. In a paper, Radosław Chrapkiewicz, Michał Jachura, Konrad Banaszek & Wojciech Wasilewski, "Hologram of a single photon," Nature Photonics 10, 576–579 (2016), the authors used a holographic interference pattern to determine the shape of a single photon. Below is the data on the left, the Schrödinger prediction on the right.
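A minimal arithmetic check of the 'slices' claim above, using rough CODATA-style values for the Planck scales:

```python
# Rough count of Planck 'slices' across a photon redshifted until its period is one second.
# Constants are approximate CODATA values.
l_planck = 1.616e-35    # Planck length, m
t_planck = 5.391e-44    # Planck time, s
c = 2.998e8             # speed of light, m/s

wavelength = c * 1.0    # one light-second, ~3e8 m
period = 1.0            # one second

print(f"Planck lengths across the wavelength : {wavelength / l_planck:.2e}")
print(f"Planck times across the period       : {period / t_planck:.2e}")
print(f"Planck areas over a wavelength-square: {(wavelength / l_planck) ** 2:.2e}")
# Both linear counts come out near 1.9e43 (the same number, since c = 1 Lp per tp in
# Planck units), and the naive wavelength-squared area is of order 1e86 Planck areas.
```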

That is, I'll forgo making any references to the Planck volumes of a single photon for the time being. The thing to know is that this image is as viewed head on. If it were spin zero, it would look like this from every angle. Exactly where the spin vector is in this is unclear. This is a composite of thousands of overlaid images of single-photon interference patterns. What we need to do is to connect Alice, Bob, and Victor. You will note the root is ř, which I earlier defined as being anything other than n or m in the Markov chain, even nothingness. In the above equation, |X−Y| is an arbitrary assignment, simply meaning 'p-adic distance.' We used I and J as our orthonormal eigenvectors earlier. We make a few adjustments to accommodate the experiment of Peres with Alice, Bob, and Victor (among the million reasons Carlos Rovelli is wrong). Note that ϴK is arbitrarily assigned as an eigenvector for Victor. The question is, is Victor's eigenvector orthonormal under CFT? As it turns out, CFT is made to accommodate all of those dimensions we can come up with, and keep them orthonormal to each other:

This means, referring back to our image of the Mona Lisa, that we can rotate reality thus:


How to connect the dots? Our Markov tree, which represents the exponential Complexity of the growing Tensor Network, can start from any source, even a False Vacuum, and be entangled with Alice, Bob, and Victor on the edge of DeSitter Space (the soup can label), which we mistakenly think is QFT. If the universe were actually described by QFT, Alice, Bob, and Victor could not possibly be entangled, or observe any entangled feature in real space-time; furthermore, any such features would be time dependent, because they would have to travel the distance around the perimeter of the can: action at a distance. The key thing to know is that Alice, Bob, and Victor are conformal under this AdS/CFT model of space-time. Which is good, because otherwise the experimental results by Xiao-Peres would be strange. Quantum Field Theory, on the other hand, without the CFT correction of Scale Invariance and limitation to a finite domain, has no resolution for this Alice, Bob, and Victor scenario.

In any case, the above conformation is called associativity, meaning it can be associated in more than one way, which is not true of QFT. QFT has a zillion fields, all of them infinite (to be non-local, they must be infinite; finite means localized), and none of them associate or transform. Thus, whatever happens in Victor's domain has no relationship whatsoever with Alice or Bob. In fact, in QFT Alice and Bob are time-ordered. The story goes that when you measure one you know the other instantly. However, that is a misnomer, because the infinite field has reduced the Alice-Bob universe to an infinitesimal. However, since the individual fields that make up a particle's state each require a different infinite dimension, which do not associate, because they are infinite, and have no endpoint or point of intersection (a slice of an infinite thing results in an infinitely large slice; thus, they cannot intersect or commute), we can only observe one of a particle's states in one dimension. It is a myth that a particle has 'a' field. Each state of the particle's information requires a different infinite field in QFT. For example, a particle's spin (usually the Alice-Bob coin flip) does not associate with the particle's velocity, position, or any other aspect of the particle. That is intentional, because if the velocity, position, and so on did commute, that would place Alice and Bob far apart again, rather than reduced to an infinitesimal, and Entanglement would again be a mystery not resolved by QFT. The spin is a single field, in two dimensions, like a flat pancake, yet asymmetric. That is, information can only go forward in time, not backward, and not omnidirectionally, as is observed in both the DCQE and the Xiao-Peres Delayed Choice Entanglement Swapping (DCES) experiment.

Penrose was one who suggested that the 'arrow of time' is defined by a small value for S, entropy, which increases over 'time.' However, let's forget about the cosmos as being a mere 13.8 billion years old, and think clearly. In every model except recollapse (Big Crunch), the cosmos expands for infinity. As a result, no matter how large a value S becomes a googolplex years from now, it is still infinitesimal, because inflation is infinite.
As a result of this simple use of a Calculus 101 theorem, we can dismiss entropy as being related to any 'arrow of time.' The value S, entropy, will remain an infinitesimal for infinity; in simpler terms, it cannot grow, it cannot change, it remains an infinitesimal, and therefore there is no 'arrow of time.' Not only does this state that there is no arrow of time, but time itself remains an infinitesimal, for infinity. As such, it is not symmetric; like our Cayley Tree, but with an infinite number of nodes increasing in Complexity at an exponential rate, it goes in every possible direction: not a line, but more like a fur ball. Furthermore, this meets with the experimental results of the Delayed Choice, Delayed Choice Quantum Eraser, as well as Delayed Choice Entanglement Swapping experiments. No other model begins to explain these in any sensible (not irrational, ludicrous) way. Feynman said, "The first principle [of learning] is not to fool yourself, and you are the easiest person to fool." I add to this: 'going on to explain such crap to others, to convince them while they aren't looking, is nothing more than a vain attempt at self-validation of that which you have fooled yourself into cognitively believing.' In Sociology this is referred to as Rational Charisma: AKA Adolf Hitler, Himmler, Jim Jones, David Koresh, Moon, Shoko Asahara, Manson, Applewhite, and so on. If you think I am going overboard, think clearly of the fact that no technology surrounding you at this moment is less than half a century old, most of it a century or more old. The only entropy is the Chaotic collapse of knowledge into true randomness, which I will define in Chaos Theory later.

Again, I described the Laws of Motion on a Planck Scale:

This is the funny go-stop-go feature. Now we consider tp10, looking back at tp1; tp1 is a larger lizard than tp10.

This is true even if we are looking just a single Planck interval backward in proper time, because we are not looking along the distance of the outer edge (the soup can label). The past exists deeper into the center of that disk of fractal lizards, where the lizards get larger. I'll get back to the Planck scale in a moment, but for now, let's look back billions of years toward the Big Bang. My locally quantized meter stick is much smaller than those in the center. This means, if I am measuring the distance across one lizard, it will appear as though the distance is many of my locally quantized meter sticks. This in turn means that if I am measuring the velocity of a thing crossing one (1) lizard at the center, it is 'moving' many of my locally quantized meter sticks, in a time interval that is also locally quantized to my locality on the edge of DeSitter space. As a result, the velocity I measure will be very large, regardless of the fact that if I were still in the center, I would only be moving a distance of one (1) lizard. At the point where it appears that the velocity is one of my Planck lengths in one of my Planck intervals of time, it appears as though the velocity I am measuring at the center is v = c, as c = 1Lp/1tp. This fractal, again, is governed by the simple relationship:

$$t' = \frac{t}{\sqrt{1 - \left(\dfrac{v}{c}\right)^{2}}}$$


$$v = \frac{n\,L_p}{x\,t_p}, \qquad c = \frac{1\,L_p}{1\,t_p}$$

Quantizing:

$$\pm t' = \frac{t}{\sqrt{1 - \left(\dfrac{n\,L_p/x\,t_p}{1\,L_p/1\,t_p}\right)^{2}}}$$

The ± is there because of the square root on the right, and because it is casually omitted by way of ignorance of the fact that causality is a purely cognitive construct, not a real property of the universe. Then with respect to length: I will spend the next chapter explaining how and why the Lorentz equation has been upside down for a century; for now, just bite it.

$$l' = \frac{l}{\sqrt{1 - \left(\dfrac{v}{c}\right)^{2}}}$$

$$\pm l' = \frac{l}{\sqrt{1 - \left(\dfrac{n\,L_p/x\,t_p}{1\,L_p/1\,t_p}\right)^{2}}}$$

This fixes the quantization issue that is absolutely necessary in DeSitter space, the causality issue, and, as I will devote an entire section to, the upside-down Lorentz equation. I do the same thing to the gravitational (Schwarzschild) transformations:

$$t' = \frac{t}{\sqrt{1 - \dfrac{r_s}{r}}}, \qquad l' = \frac{l}{\sqrt{1 - \dfrac{r_s}{r}}}$$

Here, we simply consider r to be in Planck units. In both cases the hard definition for the speed of light is:

$$c = \frac{1\,L_p}{1\,t_p}$$
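As a minimal numerical sketch of the quantized forms just written (the velocity counted as n Planck lengths per x Planck intervals; the helper names are my own, and this is illustrative only, not a derivation):

```python
import math

L_P = 1.0   # work directly in Planck units: one Planck length ...
T_P = 1.0   # ... per one Planck interval, so c = 1 Lp / 1 tp = 1 exactly

def quantized_gamma(n, x):
    """1 / sqrt(1 - (v/c)^2) with v = n*Lp / (x*tp) and c = 1*Lp / 1*tp."""
    v = (n * L_P) / (x * T_P)
    c = (1 * L_P) / (1 * T_P)
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def schwarzschild_gamma(r, r_s):
    """1 / sqrt(1 - r_s / r), with r and r_s counted in integer Planck lengths."""
    return 1.0 / math.sqrt(1.0 - r_s / r)

# A velocity of 3 Planck lengths per 5 Planck intervals (v = 0.6 c):
print("kinematic +/- t' factor     :", quantized_gamma(3, 5))       # 1.25
# Sitting at r = 4 r_s in a gravity well, everything counted in Planck lengths:
print("Schwarzschild +/- t' factor :", schwarzschild_gamma(4, 1))   # ~1.1547
```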

Then, going back to our ER=EPR 'paradox,' and the resolution, we substitute back in for G':

$$G' = 6.67384(80)\times 10^{-11}\ \mathrm{m^3\,kg^{-1}\,s^{-2}}$$

Substituting t' for s:

$$G' = 6.67384(80)\times 10^{-11}\ \mathrm{m^3\,kg^{-1}}\,(t')^{-2}$$

And obtain:

$$\pm t_p' \rightleftharpoons \sqrt{\frac{h\,G'}{2\pi c^{5}}}, \qquad \pm L_p' \rightleftharpoons \sqrt{\frac{h\,G'}{2\pi c^{3}}}$$



These are the principal fractals that govern all of AdS/CFT. From these two, all other fractals emerge, starting with why the speed of light is constant in all reference frames, even though separated in time, and showing scale invariance:

$$c = \frac{L_p'}{t_p'}$$

$$c \leftrightarrows \frac{\sqrt{\dfrac{hG'}{2\pi c^{3}}}}{\sqrt{\dfrac{hG'}{2\pi c^{5}}}}$$

$$c \leftrightarrows \sqrt{\frac{1/c^{3}}{1/c^{5}}}$$

$$c \leftrightarrows \sqrt{c^{2}}$$

$$c \leftrightarrows c$$
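As a minimal numerical check of the scale invariance being claimed (standard constants; G' rescaled arbitrarily to stand in for the primed value), the ratio Lp'/tp' returns c no matter what G' is:

```python
import math

h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
G = 6.674e-11    # Newton's constant, m^3 kg^-1 s^-2

def planck_scales(G_prime):
    """Planck length and Planck time built from a (possibly rescaled) G'."""
    l_p = math.sqrt(h * G_prime / (2 * math.pi * c ** 3))
    t_p = math.sqrt(h * G_prime / (2 * math.pi * c ** 5))
    return l_p, t_p

for scale in (1.0, 0.5, 17.0):        # arbitrary rescalings of G'
    l_p, t_p = planck_scales(G * scale)
    print(f"G' x {scale:>4}:  Lp = {l_p:.3e} m   tp = {t_p:.3e} s   Lp/tp = {l_p / t_p:.4e} m/s")
# Lp/tp equals c (~2.998e8 m/s) in every case: the ratio is invariant under rescaling G'.
```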

Moving on to the Friedmann equation:

$$H_0^{2} = \left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G'}{3}\rho - \frac{k c^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3}$$

It becomes immediately apparent that the Λ term, the entire concept, is an artifact of AdS/CFT space, and we drop it:


This is crossing out the artifact of seeing those lizards as larger because of our locally quantized meter stick. Eventually, the entire equation goes away, and continues to whittle down to the point where we are frozen at our Standing Interval. However, for the time being, it is sufficient just to get rid of that stupid Cosmological Constant. The 8πG' is an artifact of General Relativity, which will be eliminated as we define the emergent properties of space-time and its geometry. The value k is an arbitrary 'fix-all knob,' a giant knob of adjustment such that whenever a new value of H0 is published, the 'fix-all knob' k can be tweaked such that this ridiculous equation is said to have 'predictive power.' The rho term for density is non sequitur on a Planck scale, and non sequitur on an infinite scale. On an infinite scale, the rho term for the entire cosmos has fallen to zero. On an infinite scale, the rho term for a localized region is an infinitesimal. The rho term is non sequitur on a Planck scale by virtue of the fact that it has no meaning. This makes the a term also non sequitur. As a result, ultimately we return to: H0 is an artifact of Scale Invariance. I discussed this when I described the fractal lizards being measured in the middle, by using meter sticks that are locally quantized on the surface of DeSitter Space.

To use another analogy, for the Scale Invariance Challenged (SIC): if I take a hemocytometer, a tiny glass slide marked in microns, put a lactobacillus on it, and look at it under a microscope, I will see it is about 10 microns long. If I project that image of the bacterium on the hemocytometer onto a big-screen TV and then take out my ruler, according to my ruler it is about a meter long. Furthermore, this is 'confirmed' by the hemocytometer itself, which now clearly indicates on the big screen that it is indeed a meter long, because the markings on the hemocytometer, at the scale of the screen, are now a decimeter wide. If I don't know that the markings are supposed to represent microns, my meter stick tells me they are decimeters, and the bacillus is a meter long. In this scenario, there is no common frame of reference save for my knowledge of how big the markings are supposed to be. Then we see a microbacillus crossing the length of the lactobacillus. The microbacillus is 1/10 the length of the lactobacillus (1 micron). It takes ten seconds for the microbacillus to cross the length of the 10-micron lactobacillus. Therefore, its true velocity is 1 micron per second. However, as projected onto my big-screen HDTV, it takes ten seconds to cross 1 meter, so it appears to be moving a tenth of a meter per second, a hundred thousand times its true velocity…

I will devote an entire section to this artifact of Physical Cosmology. The entirety of Physical Cosmology is a viral plague of hypotheses, all as a result of this single artifact. Next, we turn to Nicolini:

$$F = \frac{G M m}{r^{2}}\left(1 + 4 L_p^{2}\,\frac{\partial s}{\partial A}\right)$$

Substitute

$$M = \frac{R c^{2}}{2 G}$$
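Carrying the substitution through (simple algebra on the expression as reconstructed above; the exact form of Nicolini's correction factor should be checked against the cited paper, so treat this as a sketch of the step rather than a quotation):

$$F = \frac{G m}{r^{2}}\cdot\frac{R c^{2}}{2G}\left(1 + 4 L_p^{2}\,\frac{\partial s}{\partial A}\right) = \frac{m R c^{2}}{2 r^{2}}\left(1 + 4 L_p^{2}\,\frac{\partial s}{\partial A}\right)$$

Note that G cancels: the leading term is fixed entirely by the horizon radius R, the test mass m, and c, with the entropic correction riding along.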

In order to deal with the presence of this ±t' issue, which is going to get everyone who is temporally challenged upset, I will use a Cayley Tree and stretch it out so that the spine is flat (±l' is simply dependent on ±t'):


A tree in which each non-leaf graph vertex has a constant number of branches n is called an n-Cayley tree. 2-Cayley trees are path graphs; the unique n-Cayley tree on n + 1 nodes is the star graph. The omitted illustrations showed the first few 3-Cayley trees (also called trivalent trees, binary trees, or boron trees) and the first few 4-Cayley and 5-Cayley trees. The numbers of binary trees on 1, 2, ... nodes (i.e., n-node trees having vertex degree either 1 or 3; also called 3-Cayley trees, 3-valent trees, or boron trees) are 1, 1, 0, 1, 0, 1, 0, 1, 0, 2, 0, 2, 0, 4, 0, 6, 0, 11, ... (OEIS A052120).

In the field of percolation theory, the term percolation threshold is used to denote the probability which "marks the arrival" (Grimmett 1999) of an infinite connected component (i.e., of a percolation) within a particular model. The percolation threshold is commonly denoted pc and is sometimes called the critical phenomenon of the model. The percolation threshold for a Cayley tree having z branches is

$$p_c = \frac{1}{z - 1}.$$

Special attention is paid to probabilities p both below and above the percolation threshold; a percolation model for which p < pc is called a subcritical percolation, and one for which p > pc is called a supercritical percolation. Because of this distinction, the value pc is also sometimes called the phase transition of the model, as it marks the exact point of transition between the subcritical phase p < pc and the supercritical phase p > pc. Note that by definition, subcritical percolation models are necessarily devoid of infinite connected components, whereas supercritical models always contain at least one such component.
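To make the threshold concrete, here is a minimal sketch (my own toy simulation, not from the quoted source) treating bond percolation on a z-branch Cayley tree as a branching process; an infinite cluster first becomes possible when the mean number of open child edges, p(z − 1), reaches 1, i.e. at p_c = 1/(z − 1):

```python
import random

def survives(p, z, depth):
    """Grow one bond-percolation cluster from the root of a z-branch Cayley tree,
    keeping each edge open with probability p; return True if it reaches `depth`."""
    frontier = 1                                   # occupied vertices at current level
    for level in range(depth):
        children = z if level == 0 else z - 1      # root has z branches, others z - 1
        nxt = sum(1 for _ in range(frontier)
                    for _ in range(children) if random.random() < p)
        if nxt == 0:
            return False
        frontier = min(nxt, 10_000)                # cap to keep the toy run fast
    return True

random.seed(0)
z = 3
p_c = 1.0 / (z - 1)                                # threshold for a 3-Cayley tree: 0.5
for p in (0.3, 0.5, 0.7):
    hits = sum(survives(p, z, depth=25) for _ in range(400))
    print(f"p = {p:.1f}  (p_c = {p_c}):  clusters reaching depth 25: {hits}/400")
# Below p_c essentially nothing survives to depth; above p_c a finite fraction does.
```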

We'll consider this some endpoint nearing infinity. The transformation from the branched state to the straight spine is called a Projective Linear Transformation and is represented by PGL(2, Qp), where Qp refers to the p-adic system and 2 refers to the fact that the p-adic numbering system in use has two elements, {n, m}.

Then let {α, β, γ, δ} ∈ Qp. The unwinding into a straight spine, designated by X', is given by:

$$X' = \frac{\alpha X + \beta}{\gamma X + \delta}$$
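As a minimal sketch of what such a projective (Möbius-type) transformation does, here it is applied to ordinary rationals; plain Python has no native p-adic arithmetic, so the Fractions below are only a stand-in for elements of Qp:

```python
from fractions import Fraction

def pgl2(alpha, beta, gamma, delta):
    """Return the projective linear (Moebius-type) map X -> (aX + b) / (cX + d)."""
    if alpha * delta - beta * gamma == 0:
        raise ValueError("determinant must be nonzero for an invertible map")
    return lambda x: (alpha * x + beta) / (gamma * x + delta)

# One arbitrary invertible element of PGL(2), acting on a few points of the 'spine':
f = pgl2(Fraction(2), Fraction(1), Fraction(1), Fraction(3))
spine = [Fraction(n) for n in range(5)]
print([f(x) for x in spine])

# Composing two such maps gives another map of the same form; they form a group:
g = pgl2(Fraction(0), Fraction(1), Fraction(1), Fraction(0))   # inversion x -> 1/x
print([g(f(x)) for x in spine])
```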

I’m going to re-bend this spine just a little so that the n to m transitions are more visible:

The question is: is the probability of u−1 equal to the probability of u+1 from point u? If they are equal, then time has no arrow. Time is only asymmetric (as we cognitively choose to perceive it) if the probabilities of u+1 and u−1 from point u are not equal. We will think of u+1 as one Planck interval of time forward and u−1 as one Planck interval of time backward from the point u. Susskind did this calculation; this is from a lecture he gave back in 2013.

$$P(n, u+1 \mid m, u) = P(n, u-1 \mid m, u) = P(m, u \mid n, u-1)\,\frac{P_n(u-1)}{P_m(u)}$$

The two terms turn out to be

$$\frac{P_n(u-1)}{P_m(u)} = \frac{e^{S_n}}{e^{S_m}}$$
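Before drawing the conclusion, a minimal numerical check: a toy two-state chain (my own illustrative rates, in the Boltzmann form used above), comparing the chance of seeing n one interval forward from m against the chance that n was the state one interval back.

```python
import math
import random

# Toy two-state Markov chain with Boltzmann-weighted hopping rates, mirroring the
# detailed-balance argument above.  S_n and S_m are arbitrary illustrative 'entropies'.
S = {"n": 2.0, "m": 1.0}

def step_prob(to, frm):
    """Per-step probability of hopping frm -> to, in the symmetric Boltzmann form."""
    return 0.05 * math.exp((S[to] - S[frm]) / 2.0)

random.seed(1)
state, history = "n", []
for _ in range(500_000):
    history.append(state)
    other = "m" if state == "n" else "n"
    if random.random() < step_prob(other, state):
        state = other

# P(n one interval forward | currently m)  versus  P(n one interval back | currently m)
fwd = [b == "n" for a, b in zip(history, history[1:]) if a == "m"]
bwd = [a == "n" for a, b in zip(history, history[1:]) if b == "m"]
print("P(n at u+1 | m at u) ~", round(sum(fwd) / len(fwd), 4))
print("P(n at u-1 | m at u) ~", round(sum(bwd) / len(bwd), 4))
print("analytic step probability", round(step_prob("n", "m"), 4))
# The two estimates agree: one step forward and one step back from m are equally
# likely to show n, so the statistics carry no arrow.
```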

As such, they are equal, and time has no arrow. We can therefore dismiss the 19th-century gas-law thermodynamic definitions of time as having any asymmetric feature based on where the microstates of the elements within a gas decide to go, or where they came from. In CFT, the lack of an arrow of time means that there is a Conformal Fixed Point, or rather attractor. What does that mean? If we look back at our fractal lizards for a moment:

If we stare at that long enough, it becomes clear that there is no way one lizard can look at another lizard and see the same Invariant Scale. Even adjacent lizards are fracked. There is no center to this phenomenon, albeit Escher did not consider when drawing this that his fracked lizards would become the subject of the origin and structure of the cosmos, and consequently drew it as though it had a center. However, if we consider the two lizards above and below one another at the center of the drawing, and realize that they are each 1Lp/1tp, then a few things become clear. The first is that each is in an isolated region of space-time. One is in the past/future with respect to the other, with no differentiating which. The second thing to consider is that since they are 1Lp fracked from one another, even though they are adjacent, they are fracks, and their Scale Invariant nature states that: 1) they are different sizes; 2) Scale Invariance disallows us to determine which order they occur in by any attempt to measure their size (the larger would be the origin); 3) the time it takes to measure said size moves us further from the phenomenon (un → un+1); 4) the most important feature: nothing can be observed that is not of the form un−x, where n represents the current Planck interval and n−x represents any other Planck interval.

This is not a contradiction that suggests there is a past; the past has ceased to exist, even 1 Planck interval distant. Neither does this suggest there is a future, as this does not yet exist. In all of the cosmos, there is only the current Planck interval; all else is some n−x, which is only observable provided a photon from that point reaches us. If a photon emitted in some (momentary willful suspension of disbelief) 'now' in Andromeda has not reached us yet, it is unobservable. In which case all of the rules of an unobservable are true. Our Standing Interval is extremely small, 1 Planck volume. This is our Conformal Attractor, the Standing Interval. Everything else is either un−x, or unobservable. The future is also an unobservable, and as such qualifies for all of the rules of an unobservable. In this case, we see the future expanding at an exponential rate, governed by the exponential properties discussed so far. As an unobservable, the future has as many degrees of freedom in its exponential growth of magnitude, Complexity, Tensor Networks, and so on, exactly as if it were behind the Schwarzschild horizon of a Black Hole. All of this is then un+x, and still, not a single lizard is the focus nor a marker for any 'line' nor an indication of any arrow.

If a line or arrow did exist, it would suggest some type of continuity, and, this is where it gets really weird, there would be a lower limit. We assign the lower limit to this arrow as some part of the process of the Big Bang. Time is therefore finite because it has a lower boundary. It is also bound on the upper end by the subjective present, making it finite, and bound on both sides. If it is finite, then, as discussed, it exists as an infinitesimal. As an infinitesimal, it cannot have a direction, an arrow; it is a zero-dimensional point. As a result, even suggesting the arrow exists defeats the possibility of an arrow of time, or a line, for that matter. It is also a mistake of human thinking to consider that it is a line without an arrow pointing in one of two directions.
Time is best represented by Escher's lizards, moving outward from the origin (attractor) of DeSitter space as a Scale Invariant phenomenon. If one meditates on this concept deeply, after a while it becomes clear that the Hubble Parameter, Dark Energy, Dark Matter, the inability to reconcile redshift with (supernovae) brightness, and so on, are indeed all artifacts of this single phenomenon. The failure in human thinking is to think that time is some type of continuous, linear progression, and worse yet, a scalar phenomenon. By now, a century later, one would think the scalar nature of time would have been laid to rest. The idea that time ticked at the same rate 13.8 billion years ago as it does today is an absurdity that is completely disallowed by General Relativity, and even observing the tick rate is a violation of Special Relativity. That same principle holds true looking at this page less than a meter from your eyes. The urban myth that space-time is expanding, but not on a local scale because the local mass is holding space-time together (I actually heard it stated in those exact words, more or less), is an unprovable irrationality that violates every law of physics.

The simplest, most immediate example is simple thermodynamic entropy, of the 19th-century type, no less. If we project an imaginary line to a googol years from now, we know that the cosmos has thermalized. A billion cubic light-years of space-time may have a stray photon or lepton, but no baryonic matter exists, because space-time has fracked to the point where a 'quark' cannot exchange information with another 'quark,' if you want to hold on to that Standard Model. In that Standard Model, which rests solely on the HUP as 99% of its story-line, the Δt has stretched to some obscenely large value, and the ΔE of the Standard Model has fallen to zero; there is no spoon. This is not a contradiction, that our lizards are fracking smaller but Δt has stretched to some obscenely large value. Again, upside-down thinking. It now takes an infinite number of Δt's to make one lizard. It's merely a matter of how you arrange your fractions in the equation. (Who's on top?)

The point is to explain, define, and describe time: not to talk about time, but to define and describe what time is. For instance, I will show this image again (below), but you would not otherwise connect the dots. This is a computer rendition of the LIGO observation GW150914 in September 2015. The surrounding space-time is 'flat.' The gravity wells of the two Black Holes are to the left and right, where t' has dilated to some very large value, approaching infinity. The spires sticking up above the flat space-time Zero Point baseline are the inversion of space-time. If time had an arrow, this upward spire that was detected in nature would in fact require some weird Exotic Matter or Negative Energy, or some other fantasy, to produce. However, the space-time inversion seen here, occurring in nature, can happen because time has no arrow. I need to point out that when I say no arrow, I am not referring to a line with a choice of two (2) directions, forward and backward. I am referring to a seemingly Chaotic system, with a hypothetically exponentially increasing number of arrows pointing in directions that make no sense to the human brain. They are not within the human frame of reference. Why? Because they are unobservable. That is the entire point of this discussion: to explain that space-time, its geometry and Complexity, are all emergent phenomena that result from the state.
This space-time inversion was what physicists thought would require half a universe's worth of 'Exotic Matter' to create the 'Negative Energy' to produce the upward spire in Alcubierre's space-time manifold. Here, it is occurring in nature, recorded by a detector (interferometer) that was not supposed to work, because physicists thought the observer-observed rule in Special Relativity somehow automatically applied to General Relativity, when the former has yet to be proven.

No one has determined what length does under Special Relativity. No one has observed a clock in real time under Special Relativity, only the difference between clocks after they have moved back into the same frame of reference. The fact that GPS satellites have to be manually recalibrated every 3 hours, not auto-fixed, is a clear indication that their hypotheses don't work.

The upward spire in the LIGO result is the upward spire in Alcubierre's space-time manifold, occurring in nature. However, since Alcubierre used the Lorentz equation as is, the manifold is upside down and backward, which I correct later on. The one measurement of 'Length Contraction' was the particle physicists' claim that the electrostatic distance between particles increased at high velocity. To account for this, they squished the particles into pancakes, rather than note the direct observation that the distance increased.

Particle physicists are dumb as a bag of hammers. The electrostatic distance appears to increase because the virtual photons that mediate the electromagnetic force, which also possess mass, have dilated in length, and thus weakened (longer wavelength, less energy). I will repeat this so that even you understand. For instance, I just had to correct an entry on Quora.com where some idiot wrote the viral statement: 'no, a conscious observer plays no role in the outcome of a 'particle's' state, the 'particle' takes each of an infinite number of paths, the result [is that you are an artifact of] is an infinite number of conscious observers…' Do you have any idea how fucking stupid that idiotic, pathetic, completely senseless, daft line of reasoning is? I didn't even think a Homo sapiens was capable of such idiocy.

Dinosaur bones are buried deeper than monkey bones, which are buried deeper than human bones… You cannot excavate at the poles because the conditions at the poles are such that fossilization cannot occur; bone dissolves completely as you move from the equator toward the poles. The conditions of fossilization only occur at latitudes and conditions where the exchange of mineral ions with calcium and phosphorus ions can occur very slowly. This requires a specific temperature and a very exacting water content. You therefore excavate in the equatorial regions, because there are no fossils at the poles. Of those equatorial regions other than Africa, very few square miles offer the conditions suitable for fossilization.

Out of Africa… Regardless of what is written in the Vedas, the Norse sagas, and so on, you can only excavate for human fossil remains in very few places. Dinosaur remains fossilized before conditions changed; conditions changed, and that is why the dinosaurs became extinct. Since that time, the conditions for fossilization became possible where the first deserts formed: Africa. Therefore, the oldest human fossils are 'out of Africa.' As a result of this artifact of finding a thing in the only place it can exist, we assign the only place it could exist as the origin. That is 'science.' What does this have to do with a Faster Than Light Engine? I can understand your disability of only comprehending those things which fall into your frame of observational reference, but I cannot understand how, when a thing is in your observational frame of reference, you get it fucked up. Squished particles, Dark Energy, Dark Matter, accelerating expansion, the inability to line up the Hubble parameter measurements, the very obvious space-time inversion LIGO detected in nature, 'length contraction…' Out of Africa. 'I do not believe in God' is mathematically different from 'I believe God does not exist.' I need proof. You have seen the proofs; the result has been 'length contraction,' Dark Matter, Dark Energy, accelerating expansion of a changing Hubble Parameter {e.g., not an artifact of DeSitter Space}, out of Africa… This is the shit you do with proof. To make the situation more pathetic, the average person claims, 'we do not need God or myths, we have science.' And the poor slob knows nothing of science, cannot even do the math. They in fact bitch, angrily, when I show math, as if it were a crucifix to the little girl in The Exorcist.

Now that I am done insulting you for now, which I thoroughly enjoy, back to my irregularly unscheduled program. In order to terminate the expansion of a vacuum, we had the following for the p-adic expansion, using base 2 because our entanglement approach is a Tensor Network of two potential states, rather than some other number. The choice of two is not arbitrary; it is the number most observed in entangled systems. However, as shown in the Alice, Bob, Victor scenario, among other things, two is not written in stone. For now we consider two to be a starting point for the comprehension of Cosmology from the perspective of Quantum Theory. If you leave Cosmology to Astronomers and Astrophysicists, you will get a bunch of nonsensical crap, like 'accelerating expansion,' Dark Matter and Dark Energy, along with a host of other unobservable and intentionally unprovable hypotheses. Among my favorites are Weakly Interacting Massive Particles (WIMPs). It is as though the acronym demanded an absurd hypothesis. In any case, our order-2 p-adic expansion was given by

$$P_n(u+1) = P_n(u) - \gamma_{mn}\,P_n(u) + \gamma_{nm}\,P_m(u)$$

If we wish to terminate the expansion, we simply add a third term, which results in a dead end:

$$P_n(u+1) = P_n(u) - \gamma_{mn}\,P_n(u) + \gamma_{nm}\,P_m(u) - \gamma_{rn}\,P_n(u)$$

In this case

$$G_{nm} = e^{(S_n - S_m)/2}\, I\,\lambda\, I$$

Still holds true. However, we are leaking probability into the frac of

$$-\gamma_{rn}\,P_n(u)$$

And where we had our False Vacuum starting off from r:

$$P_r(u) = \left[G_{nr}\right]^{u} = e^{(S_n - S_r)/2}\, I\,\Lambda^{u}\, I$$

As a consequence, where the eigenvalue of I was 1, it is now less than 1:

$$\lambda = 2 e^{-\Delta I}\,\lambda_0, \qquad \lambda < 1;\quad \Delta I > 0$$

Let the dominant eigenvector be Dm, with eigenvalue ΔI:

$$D_m(u) = 2^{-\Delta I\,u}\, D_m(0)$$

Since

$$\Delta I > 0, \qquad \Delta I\,u > 0,$$

the system is now time dependent. Then the probability of m is also time dependent:

$$P_m(u) = e^{S_m}\, D_m(u)$$
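A minimal numerical sketch of the leaky situation just described, assuming three states only: n and m exchanging probability as before, plus a terminal r that absorbs a small fraction of n each step. The rates are illustrative; the point is that the leading eigenvalue of the surviving block drops below 1 and the occupations become time dependent.

```python
import numpy as np

# Toy transition matrix on {n, m, r}: n <-> m exchange probability as before, but a
# fraction g_rn of n's probability also leaks, each step, into the terminal state r.
g_mn, g_nm, g_rn = 0.04, 0.06, 0.02
T = np.array([
    # from n             from m      from r
    [1 - g_mn - g_rn,     g_nm,       0.0],   # to n
    [g_mn,                1 - g_nm,   0.0],   # to m
    [g_rn,                0.0,        1.0],   # to r (absorbing / terminal)
])

start = np.array([1.0, 0.0, 0.0])             # all probability starts in n
for u in (10, 100, 1000):
    P_u = np.linalg.matrix_power(T, u) @ start
    print(f"u = {u:4d}:  P_n = {P_u[0]:.4f}  P_m = {P_u[1]:.4f}  leaked to r = {P_u[2]:.4f}")

# Dominant eigenvalue of the non-terminal {n, m} block: strictly below 1 once the leak
# is present, so the occupations of n and m can only decay; nothing is stationary.
lead = max(abs(np.linalg.eigvals(T[:2, :2])))
print("leading eigenvalue of the {n, m} block:", round(float(lead), 6))
```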

How did this happen? [Raphael Bousso, Daniel Harlow, and Leonardo Senatore, "Inflation after false vacuum decay: Observational prospects after Planck," Physical Review D 91, 083527 (2015)] He references Coleman [Sidney Coleman and Frank De Luccia, "Gravitational effects on and of vacuum decay," Physical Review D 21, Number 12 (15 June 1980)]:

"Although some of the results in the body of this paper hold in more general circumstances, we will restrict ourselves here to the thin-wall approximation, the approximation that is valid in the limit of small energy-density difference between true and false vacuum, and to the two cases of special interest identified in Sec. I. The first special case is decay from a space of vanishing cosmological constant, the case that applies if we are currently living in a false vacuum. In the absence of gravitation, vacuum decay proceeds through the quantum materialization of a bubble of true vacuum, separated by a thin wall from the surrounding false vacuum. The bubble is at rest at the moment of materialization, but it rapidly grows; its wall traces out a hyperboloid in Minkowski space, asymptotic to the light cone."

Here Coleman and De Luccia describe False Vacua going through maxima and minima, rather than the imminent-collapse approach. In our first look at False Vacua, we had this diagram:

Coleman and De Luccia describe the vacua going through a series of these states rather than being static. That is, they can expand, contract, and so on. Bousso et al. expand on this by defining more parameters, and showing that our own seemingly anomalous Planck Mission results follow the mathematical model of paths through False Vacuum states occurring in our cosmos as we watch:

Let me re-examine that earlier statement: for the Big Bang, the visible horizon of the cosmos is defined by the above relationships. We can see back in time until the velocity of recession is equal to v = c, at which point everything, and we do not know what, is unobservable. Regardless of your mistreatment of the math and relationships, the cosmological expansion is therefore a result of the unobservable. For those who fail to comprehend, in simpler terms, if the expansion were to suddenly stop, all of that unobservable light from behind the visible horizon would reach us, and become observable. The principle guiding this mechanism is as per the prior discussion, the term:


$$-\gamma_{rn}\,P_n(u)$$

is probabilistically going through minima and maxima, Chaotically, but not randomly (nothing in nature is truly random). Thus, it is a sort of roller coaster ride. The effect we observe appears to be anomalies in the expansion, but it is more accurate to say that, rather than accelerating or such, the expansion is more like breathing, as the gamma term goes through a series of maxima and minima. The other thing that can happen is the random appearance of an anomalous fork: rather than our {m, n}, we can say that an anomalous element is simply a', referring to any element that is not a subset of {m, n}. That is,

$$a' \notin \{m, n\}; \qquad a' \nsubseteq \{m, n\}$$

This effectively results in a lack of 'expansion' of DeSitter space-time, as we perceive it from our domain (this cosmos). What it becomes is as anomalous as the nature of a'. In this case, our Markov process begins to look like the following, where we had the termination of expansion on a Planck scale as:

$$P_n(u+1) = P_n(u) - \gamma_{mn}\,P_n(u) + \gamma_{nm}\,P_m(u) - \gamma_{rn}\,P_n(u)$$

A large-scale variation from a logical domain to an anomalous (‘freak’) domain is dependent on the abundance of:

$$P_n(u+1) = P_n(u) - \gamma_{mn}\,P_n(u) + \gamma_{nm}\,P_m(u) + \gamma_{na'}\,P_{a'}(u)$$

Given that the set of elements contained within the parameter a' is infinite, a' has an infinite number of possibilities for what the anomaly could be.

$$a' \supseteq \infty$$

The probability of the cessation of expansion is inevitable to an infinite degree. Once an anomalous element a' occurs, it then has only an infinitesimal probability of 'reverting' back to {m, n}. This notation is going to look a little weird:

$$\lim f\colon X_{\{a' \to \infty\}} \longrightarrow Y_{\{m,\,n\}} = 0$$

The infinite set of anomalous elements of a’ mapped back into the limited set {m,n} is zero. As a result:

$$f\colon \gamma_{na'}\,P_{a'}(u) \mapsto \{\gamma_{nm}\,P_m,\ \gamma_{mn}\,P_n\} = 0$$

That is, the probability that an element m or n can be mapped from any of the infinite anomalous elements of a’ is zero. However, we go back to our drawing:


And we see that n carries onward. The reason it typically splits off as γmn or γnm is as yet unknown, but probably has to do with the environment of DeSitter space-time. If we think of the metaphoric image of a road that continuously splits off (forks) at regular intervals, on our 2-dimensional surface of the Earth, where the road lies, it ultimately goes left or right, to some degree. The four possibilities are: it goes forward, in which case it does not split; it goes backward, in which case it also does not split; or it goes left or right to any degree. The angle at which it departs is non sequitur, as any angle other than zero or 180 degrees is a split. Interestingly, in the case where the element, n for instance, either proceeds forward at zero degrees or backward at 180 degrees, we have:

Since the proper time u, as the Planck interval, is a function of the split γmn or γnm, as a temporal limit (a marker), time does not proceed, neither forward nor backward. It is most certain that this also occurs in nature, and its function would then be represented as follows: where we had, for any n, a split off to r, we now have the case r = n:

$$P_r(u) = \left[G_{nr}\right]^{u} = e^{(S_n - S_r)/2}\, I\,\Lambda^{u}\, I$$

The Sn−Sr term becomes non sequitur; it is now Sn−Sn = 0, and:

$$P_n(u) = \left[G_{nn}\right]^{u} = e^{0}\, I\,\Lambda^{u}\, I = 1$$

$$P_n(u) = 1$$

Time stops. That is, the Planck Flow halts. It is also noteworthy that for the eigenvector Λu, rather than saying it is zero, it is more correct to say it does not exist, as no transition, which is what defines the eigenvector, has occurred. How often this occurs by random or Chaotic chance in nature is unknown; however, it can be made to occur via the QZE. What then is the Inverse QZE, or, as I prefer, the Quantum Anti-Zeno Effect (QAZE)? As generic as this answer may seem, it is rather obvious.

In every paper written on the subject, invariably the definitions rendered for the QZE are a reference to the Hamiltonian, Schrödinger's equation, and a 'return to its initial state.' In simple terms, n makes a 180-degree 'split' and returns to its initial state. In the case of the QAZE, which no one has yet tried to define, n proceeds at zero degrees. This makes our quantized meter stick for the n-to-n transition above longer than the locally quantized meter stick of the observer. As a result of requiring more of my locally quantized meter sticks to make the measurement, it appears to me as though it is 'going faster,' exactly the same way we observe the distant past in DeSitter space-time.

Why does this all occur as a result of observation? We have already discussed what happens below or behind the Schwarzschild horizon, the effect of differentiating unobserved and unobservable. We had the logic operator, used in a modified Dirac notation, stating the obvious: if a system is unobservable, it is unobserved; however, if a system is unobserved, it is not therefore unobservable. This is an important distinction. We come back to our Alice, Bob, Victor scenario. Xiao-Peres referred to this experiment as 'Delayed Choice Entanglement Swapping.' This brings the definition of observation to a new level, as we can now look directly at the condition (this is a complete restatement of what I have already said, but in context):

{𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑑, 𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑟, 𝑢𝑛𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑑, 𝑢𝑛𝑜𝑏𝑠𝑒𝑟𝑣𝑎𝑏𝑙𝑒}

From which there are at least five relationships:

⟨𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑑|∀|𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑟⟩ For all observed things, there is an observer.

⟨𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑑|∃|𝑢𝑛𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑑⟩ For all things observed, there exists any number of things that are unobserved.

⟨𝑜𝑏𝑠𝑒𝑟𝑣𝑎𝑏𝑙𝑒| ∉ | ∌ |𝑢𝑛𝑜𝑏𝑠𝑒𝑟𝑣𝑎𝑏𝑙𝑒⟩ Neither observable nor unobservable are elements of one another, nor is there any overlap.

⟨𝑢𝑛𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑑|ͼ|𝑢𝑛𝑜𝑏𝑠𝑒𝑟𝑣𝑎𝑏𝑙𝑒⟩ Any number of (but not all) unobserved things may be elements of the unobservable. (I'm using a symbol I found in Cambria Math.)

⟨𝑛𝑜 − 𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑟| ∴ |𝑢𝑛𝑜𝑏𝑠𝑒𝑟𝑣𝑎𝑏𝑙𝑒⟩ If there is no observer, the system is unobservable, whether it exists or not.

The very first mistake in Quantum Anything is to fail to differentiate these values, namely those listed above, as entirely different states separated into independent functions. The key elements are:

⟨𝑛𝑜 − 𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑟| † |𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑟⟩ The use of 'dagger' means that if there is no observer, a system is transposed from one state to another by introducing an observer.

⟨𝑢𝑛𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑑|! † |𝑢𝑛𝑜𝑏𝑠𝑒𝑟𝑣𝑎𝑏𝑙𝑒⟩ If a system is unobserved, it is not thereby transposed in such a way that it behaves as an unobservable.

⟨𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑑|∀|𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑟⟩ For all things observed, there exists an observer.
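As a minimal sketch, the one-way implications above can be encoded as plain Boolean checks on a toy record (the field names and the three example cases are mine, purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class System:
    observable: bool     # can information from it reach any detector at all?
    has_observer: bool   # is a detector/observer actually coupled to it?

def observed(s: System) -> bool:
    # For all observed things there is an observer, and the thing must be observable.
    return s.observable and s.has_observer

def unobserved(s: System) -> bool:
    # 'Unobserved' only applies to something that could in principle be observed.
    return s.observable and not s.has_observer

def unobservable(s: System) -> bool:
    # No observer can exist for it; 'unobserved' does not even apply.
    return not s.observable

cases = {
    "sample with detector running":      System(observable=True,  has_observer=True),
    "sample with detector switched off": System(observable=True,  has_observer=False),
    "behind a Schwarzschild horizon":    System(observable=False, has_observer=False),
}
for name, s in cases.items():
    print(f"{name:36s} observed={observed(s)!s:5s} unobserved={unobserved(s)!s:5s} "
          f"unobservable={unobservable(s)!s:5s}")
# The implication runs one way only: unobservable forces not-observed, but an
# unobserved system is not thereby unobservable.
```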

If you recall the statement: the selection of observer and observed is arbitrary, as Special Relativity fails in defining this issue, leading to confusion and paradoxes. For example, Alice and Bob are speeding away from one another at v = c; both therefore see nothing of one another, and both see time stop upon looking at one another. In this case, both/either the observer and observed become transposed as unobservable. Which in turn introduces yet another relationship:

⟨𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑑|!∀|𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑟⟩|∈ |〈𝑢𝑛𝑜𝑏𝑠𝑒𝑟𝑣𝑎𝑏𝑙𝑒〉

Again, you have to forgive the notation; it was simply not designed to deal with functions, logic, operators, and conditions simultaneously as Dirac originally formulated it. I have to make adjustments as I see fit. They are not, however, incorrect, nor difficult. This is the logical negation of the statement 'for all observed things there exists an observer,' as a result of the observed being made unobservable by a special set of conditions. In this case, Special Relativity negates the relationship between the observed and the observer, as it is now impossible. However, where Special Relativity truly fails is in the case where Alice and Bob are speeding toward one another at v = c. They also see nothing, because the photons, regardless of 'frame of reference,' cannot get ahead of either Alice or Bob (such as is the case in Chirality) to relate information of one another's state. Therefore, in SR we have a simple case where Alice and Bob are each both the observer and the observed, and whether they are speeding away from or toward one another at v = c, they are unobservable. Here, we have an exception where the term 'unobserved' cannot be applied, because 'unobserved' implies that a system is observable. In both cases, where Alice and Bob are speeding toward or away from one another at v = c, they are unobservable, and 'unobserved' does not apply. The term 'unobserved' implies that observation is possible; it is not. As such, we introduce yet another condition:

⟨𝑢𝑛𝑜𝑏𝑠𝑒𝑟𝑣𝑎𝑏𝑙𝑒| ≡ |𝑢𝑛𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑑⟩

The reason you have not seen this to date is the failure of scientists and mathematicians to consider all of these states and the resulting conditions and relationships. Therefore, they keep Bob and Alice either stationary or safely at v < c, where they can deal with it. Unfortunately, for the Big Bang, the visible horizon of the cosmos is defined by the above relationships. We can see back in time until the velocity of recession is equal to v = c, at which point everything, and we do not know what, is unobservable. Regardless of your mistreatment of the math and relationships, the cosmological expansion is therefore a result of the unobservable. For those who fail to comprehend, in simpler terms: if the expansion were to suddenly stop, all of that unobservable light from behind the visible horizon would reach us, and become observable. Then the Anthropic Principle:

⟨𝑛𝑜 − 𝑜𝑏𝑠𝑒𝑟𝑣𝑒𝑟| ∴ |𝑢𝑛𝑜𝑏𝑠𝑒𝑟𝑣𝑎𝑏𝑙𝑒⟩

Then we return to our Conformal translation matrix:

That is, Alice, Bob, and Victor are orthonormal to one another, and entangled in a 3-way Tensor Network. What does this have to do with a quantum-scaled QZE and/or QAZE? We'll start with a simple system, a weak decay of carbon-14, for our cat. The carbon-14 atom is Alice, the beta 'particle' is Bob, and Schrödinger is Victor. First, we need to describe an obvious but hitherto overlooked feature of DeSitter space-time, Information Theory, Complexity, Tensor Networks, and Cosmological Expansion, with respect to the Schwarzschild Horizon, and the unobservable. As previously discussed, what is happening behind a Schwarzschild Horizon is unobservable, and the derivations are quite clear that this exponential growth is in fact the result of it being unobservable. The cosmos is unobservable. Let that ring in your head back and forth a few thousand times so that it sinks in. What we observe is the cosmos at some time in the distant past. The moon is 1.3 seconds in the past, Wolf 359 is 7.8 years in the past, Andromeda about 2.5 million years in the past, and the galaxy dubbed GN-z11 is 13.4 billion years in the distant past. If you hold your hand out at arm's length, your fingers are a couple of nanoseconds in the past. Two immediately adjacent brain cells are tens of femtoseconds in the past from one another…

Nothing in the cosmos is observable until such time as the information, which may be traveling at or below v = c (v ≤ c), gets here. The entire surface of DeSitter Space, even down to two immediately adjacent brain cells, is governed by our former slide:

The closest distance between those two brain cells is through the least number of lizards, given by the equation for the length of the arc. That means the information must pass through the past in some non-conventional way described by DeSitter space if it is to be observable before the information reaches us at v = c along the surface, but that is still not immediate. In addition, as we saw in our p-adic Cayley tree, everything, everywhere, is entangled at some point ui − up, where 'p' refers to any past Planck interval in the Planck Flow.

This image brings us to the QZE and QAZE, how and why they happen, and, oddly, to the anomalous results obtained by F. Delgado, J. G. Muga, and G. García-Calderón, "Suppression of Zeno effect for distant detectors," arXiv:quant-ph/0604005v2 (27 Jul 2006):

We describe the influence of continuous measurement in a decaying system and the role of the distance from the detector to the initial location of the system. The detector is modeled first by a step absorbing potential. For a close and strong detector, the decay rate of the system is reduced; weaker detectors do not modify the exponential decay rate but suppress the long-time deviations above a coupling threshold. Nevertheless, these perturbing effects of measurement disappear by increasing the distance between the initial state and the detector, as well as by improving the efficiency of the detector. In particular, we will show that the slowdown of the decay, i.e. the generalized Zeno effect, disappears when the distance to the detector is increased. We shall also analyze the suppression of the deviations from exponential decay at long times as a function of the strength and quality of the absorber and the influence of the “observation distance”.
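Before looking at their distance dependence, it is worth recalling the textbook Zeno arithmetic that their 'generalized Zeno effect' generalizes: for short times the survival probability of an unstable state falls off quadratically, so chopping a fixed total time into ever more measurement intervals pushes survival toward 1. A minimal sketch with generic toy numbers (not Delgado et al's system):

```python
# Textbook Quantum Zeno arithmetic: for short times an unstable state survives with
# probability P(t) ~ 1 - (t / tau_z)^2.  Measure N times during a fixed total time and
# the pieces multiply.  All numbers here are generic toy values.
tau_z = 1.0      # 'Zeno time' of the toy system (arbitrary units)
T_total = 0.5    # total evolution time

def survival(n_measurements):
    dt = T_total / n_measurements
    per_interval = 1.0 - (dt / tau_z) ** 2     # short-time quadratic survival law
    return per_interval ** n_measurements

for n in (1, 2, 10, 100, 1000):
    print(f"{n:5d} measurements -> survival probability {survival(n):.6f}")

# As n grows the survival probability tends to 1: frequent observation freezes the
# decay (the QZE).  Spacing measurements out, or in Delgado et al's language using a
# weak and distant detector, lets the ordinary decay come back.
```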

Here, we are beginning to see the first observation that the space-time between the observed system and the detector is affected. That is, the QZE is a phenomenon where the progression of time has slowed, because the information is taking a longer path through space-time. The key feature of the paper is a simple graph:

The QZE is stronger at closer distances and weakens to the asymptote marked by the vertical dashed line with increasing distance. Unfortunately, the authors used some arbitrary and unrevealed units of measure for this ‘distance.’ That is, it could be in femtometers or kilometers, unknown. As a result of this arbitrary unit of measure, only referred to as Δ, setting Δ =1 as the asymptote at which the QZE is no longer observed, they then resort to wave equations and so on, particular to fluorescence. It seems clear that the authors did not report the arbitrary unit of measure for delta because it kept changing. Delta will not only be unique to the system, but to the day, what you had for breakfast this morning, and so on, literally. This is going to be a huge challenge in an FTL engine design, making delta stable. Delta is the Markov tree:

If I place my detector at Z, the points on the surface of DeSitter space-time, y and J, are closer than ui at (proper Planck) time interval u−4. Similarly, if I place my detector at point E, the points on the surface of DeSitter space-time, X and K, are closer than ui at (proper Planck) time interval u−2. However, tomorrow, when I come in to reproduce my experiment, the surface of DeSitter space-time, marked by u0, has moved further away from my detector at point Z, which is no longer at u−4, but is now at some greater depth in DeSitter's interior.

So, I scratch my head, I pull out my locally quantized meter stick, and measure that the distance is physically the same, as my locally quantized meter stick (which is always on the surface of DeSitter space-time) has also changed in reference to point Z, which is fixed, in the past. My locally quantized meter stick is forever changing, but the 'lizard' at point Z remains fixed. As a result, because I am a mere primate, I think that Z has moved, or some other phenomenon. This is exactly the same as trying to measure 'a lizard' in the center of DeSitter space-time. More than one thing is going on here. 1. I am forever moving away from point Z, and my locally quantized meter stick seems to always remain 1 meter in length. 2. Anything and everything I measure out to any distance greater than or equal to 2Lp (it must be >1 to say it is in more than the same position; just like the HUP is 4π, two wave cycles) is in the past. Thus, the information regarding anything I detect and/or measure is deeper in DeSitter space than my detector. The exact point of the asymptote encountered by Delgado et al, i.e., the point at which we no longer observe any Zeno effect, is when we back off in distance or time to ui, where backing off in distance means our locally quantized meter stick distance, and time refers to the constancy, or rate, of measurement. Then there is the probability that somewhere between u0 and u−n we encounter any one of the probabilistic phenomena: the other thing that can happen is the random appearance of an anomalous fork; rather than our {m, n}, we can say that an anomalous element is simply a', referring to any element that is not a subset of {m, n}. That is,

$$a' \notin \{m, n\}; \qquad a' \nsubseteq \{m, n\}$$

This effectively results in a lack of 'expansion' of DeSitter space-time, as we perceive it from our domain (this cosmos). What it becomes is as anomalous as the nature of a'. In this case, our Markov process begins to look like the following, where we had the termination of expansion on a Planck scale as:

$$P_n(u+1) = P_n(u) - \gamma_{mn}\,P_n(u) + \gamma_{nm}\,P_m(u) - \gamma_{rn}\,P_n(u)$$

A large-scale variation from a logical domain to an anomalous (‘freak’) domain is dependent on the abundance of:

$$P_n(u+1) = P_n(u) - \gamma_{mn}\,P_n(u) + \gamma_{nm}\,P_m(u) + \gamma_{na'}\,P_{a'}(u)$$

Given that the set of elements contained within the parameter a' is infinite, a' has an infinite number of possibilities for what the anomaly could be.

$$a' \supseteq \infty$$

And this is equal whether we are considering the past, present, or future: the probability of u−1 is equal to the probability of u+1 from point u. If they are equal, then time has no arrow. Time is only asymmetric (as we cognitively choose to perceive it) if the probabilities of u+1 and u−1 from point u are not equal. We will think of u+1 as one Planck interval of time forward and u−1 as one Planck interval of time backward from the point u. Susskind did this calculation; this is from a lecture he gave back in 2013.


$$P(n, u+1 \mid m, u) = P(n, u-1 \mid m, u) = P(m, u \mid n, u-1)\,\frac{P_n(u-1)}{P_m(u)}$$

The two terms turn out to be

$$\frac{P_n(u-1)}{P_m(u)} = \frac{e^{S_n}}{e^{S_m}}$$

As shown, all of these processes occur behind Schwarzschild Horizons, and resolve the ER=EPR problem (where Exponential Complexity causes the ER bridge to expand without bound [i.e., infinitely] on the interior rather than being a shortcut through space-time). Everything below the surface of DeSitter space, which is a 1 Planck interval shell expanding outward from the center of the 'lizards,' is behind the Schwarzschild Horizon, unobservable until the information reaches us, at best at v = c. That means that everything ≥ 2Lp away is below the DeSitter surface, and unobservable until such time as the information reaches the detector, at which point the system has changed state, and there is no such thing as 'real time observation.' This is the Quantum Zeno Effect, and the Quantum Anti-Zeno Effect. Changing the scale and depth in the Markov tree below the DeSitter Horizon means that our locally quantized meter stick has changed, not the fixed point in the past.

This is equated to both General and Special Relativity in the section 'The Quantized Meter Stick.' In short, all observers in all frames of reference have meter sticks that are quantized to their local environment, and taking a measurement that is not an integer value of one's locally quantized meter stick is not possible. Therefore, GR, SR, and all phenomena are quantized environments. That is, in SR, if I am taking a measurement of a clock on a speeding ship, this way or that, since my meter stick is quantized to my local environment, I can only take an integer quantized value of the speeding ship's clock, and vice versa for the traveler on the speeding ship trying to measure me. The same principle applies in GR with respect to where an observer is in a gravity well, and where the observed is in the gravity well, which works both ways. Everyone in every frame of reference has a locally quantized meter stick, according to his or her environment. There is no possibility of any observer or observed in any frame of reference taking a non-integer quantized measurement, unless one can slice Lp and tp finer than normal space-time will allow. The same principle holds true for being on the surface of DeSitter space, looking at a phenomenon at any depth in the Markov chain of DeSitter space, accounting for freaks and anomalies as discussed. Ultimately, the depth from the DeSitter Horizon determines the degree of entanglement between any number of systems with any common origin in the Markov tree.
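As a minimal sketch of the 'locally quantized meter stick' rule stated above (the numbers and names are illustrative only): each observer can only ever report an integer count of their own local unit, so observers with differently quantized sticks generally disagree.

```python
def report(interval, local_unit):
    """An observer can only return an integer count of their own locally quantized unit."""
    return round(interval / local_unit)

true_interval = 10.0      # some interval 'out there', in arbitrary absolute units
alice_unit = 1.00         # Alice's locally quantized meter stick
bob_unit = 0.93           # Bob sits elsewhere on the DeSitter surface; his stick differs

print("Alice reports:", report(true_interval, alice_unit), "of her units")
print("Bob reports:  ", report(true_interval, bob_unit), "of his units")
# 10 / 0.93 = 10.75..., but Bob cannot report that: every frame only ever yields
# integer multiples of its own quantum, which is the point being made about
# measurements taken from the DeSitter surface into the past.
```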
