Welcome to Fractal Forums

Real World Examples & Fractical Applications => Fractals Applied or in Nature => Topic started by: Chillheimer on May 27, 2015, 11:02:44 PM




Title: Resolution of the Universe
Post by: Chillheimer on May 27, 2015, 11:02:44 PM
Hey folks!

Can anyone help me out by calculating 3 pretty large numbers?

After many months of intense and freaky thinking I'm absolutely confident that the universe unfolds through the recursive calculation (aka "time") of a probably very simple formula that results in what we perceive as our fractal cosmos.
Our reality is surfing the fractal border between 3d:Space and 4d:Time.
What's calculating? What's the formula??
Still working on the details ;)
Pretty sure I'll never know - but still fun to imagine. O0



This has now led me to the following question:
What is the actual resolution of the universe, when you use the smallest possible steps? What are the actual numbers?
If you consider the Planck length as the smallest possible unit that physically makes any sense, how many Planck lengths (aka voxels) are packed into the expanding sphere of the observable universe right now, approximately?
edit 29.may.2015 - got it myself:
6.596e185 Voxels

The next number in question:  
What is the framerate of the universe?
If the smallest (physically meaningful) time interval is a Planck time, and I consider each one a single iteration of the universal formula, what would be the equivalent framerate, expressed in frames per second?
How many Planck times are there in one second?
got it:
1.85492e43 fps

And the last one, easy if you know the previous one:
What is the actual iteration count?
Just adding up all Planck times that have passed since the "big bang" or the beginning of the calculation.. Approximately, of course.  I think we can ignore the odd septillion  ;)
got it:
8.0713e60 Iterations



Cheers!  
Chillheimer



Oh, I nearly forgot, some convenience:  
Planck length: 1.61619997e-35 m
Planck time: 5.39106e-44 s
Diameter of the observable Universe:  8.8E26 m
Age of the observable Universe: 13.798*10^9 years (or 4.3542e17 seconds if I calculated correctly)
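
For anyone who wants to check these numbers with something sturdier than a desktop calculator, here is a minimal Python sketch using the constants above (the constant names are mine, the year-to-second conversion assumes 365.25-day years, and a "voxel" is taken at face value as a cube one Planck length on a side):

Code:
import math

PLANCK_LENGTH = 1.61619997e-35  # m
PLANCK_TIME = 5.39106e-44       # s
UNIVERSE_DIAMETER = 8.8e26      # m
UNIVERSE_AGE_S = 13.798e9 * 365.25 * 24 * 3600  # ~4.354e17 s

# 1) Voxels: volume of the observable universe / volume of a Planck cube
radius = UNIVERSE_DIAMETER / 2
volume = 4 / 3 * math.pi * radius ** 3   # ~3.57e80 m^3
voxels = volume / PLANCK_LENGTH ** 3     # ~8.45e184
# Note: this comes out roughly 8x below the 6.596e185 quoted in this
# thread, which is about what you get if 8.8e26 m is plugged in as the
# radius instead of the diameter.

# 2) Framerate: Planck times per second
fps = 1 / PLANCK_TIME                    # ~1.85492e43

# 3) Iterations: Planck times since the big bang
iterations = UNIVERSE_AGE_S / PLANCK_TIME  # ~8.08e60

print(f"{voxels:.3e} voxels, {fps:.3e} fps, {iterations:.3e} iterations")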


Title: Re: Resolution of the Universe
Post by: hobold on May 27, 2015, 11:39:12 PM
Here is a fun one that will blow your mind :alien:.

What do you get when you divide Planck-length by Planck-time?


Title: Re: Resolution of the Universe
Post by: Chillheimer on May 27, 2015, 11:43:16 PM
What do you get when you divide Planck-length by Planck-time?
I don't even know how to put these into the Windows calculator!  :crazy:

come on, enlighten me! :scared:  :yes:


Title: Re: Resolution of the Universe
Post by: hobold on May 28, 2015, 07:26:05 AM
This arbitrary precision calculator might be of use: http://apfloat.appspot.com


Title: Re: Resolution of the Universe
Post by: youhn on May 28, 2015, 08:01:36 AM
Uhm. Are the Planck-length bits smaller near black holes compared to those far away from any mass? So the "resolution density" at high-gravity locations is higher, or something ... ?


Title: Re: Resolution of the Universe
Post by: mclarekin on May 28, 2015, 09:52:48 AM
@ Chillheimer - I would call it "Age of the observable Universe: 13.798*10^9 years". Though my theories aren't mainstream. ( The Universe is one big animated fractal with different parts expanding and contracting; the actions of the major iterating force are dictated by the randomness of a well-known set of butterfly wings.) :)

@ Hobold -
Quote
What do you get when you divide Planck-length by Planck-time?
Is the correct answer - Planck velocity? :)


@ youhn - that may be right; then we would have something like Planck acceleration. But then, I have never seen a Planck, so I honestly don't know what I'm talking about. :D


Title: Re: Resolution of the Universe
Post by: Sockratease on May 28, 2015, 10:22:30 AM
Here is a fun one that will blow your mind :alien:.

What do you get when you divide Planck-length by Planck-time?

One. 

Yeah, I know  1.61×10^-35 meters divided by 5.39×10^-44 seconds isn't one meter per second, but it's an elegant answer and I like it - so I choose to believe it  :beer:

As for any useful answers...

Sorry, can't help.


Title: Re: Resolution of the Universe
Post by: Chillheimer on May 28, 2015, 10:45:17 AM
thanks hobold for the calculator link, I'm starting to understand how to use it.
with a few more tips i might actually be able to calculate the numbers myself..
right now I'm struggling with how to write pi in there to calculate the volume of the observable universe..

@ sockratease: though 1 really would be an elegant answer, i find the real answer is even more beautiful.
it is 2.99792e8 m/s.
familiar?  :o
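
For the record, the division itself, with the constants from the first post (a two-line check, nothing more):

Code:
PLANCK_LENGTH = 1.61619997e-35  # m
PLANCK_TIME = 5.39106e-44       # s
print(PLANCK_LENGTH / PLANCK_TIME)  # ~2.99792e8 m/s: the speed of light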

Uhm. Are the planck length bits smaller near black holes compared to those far away from any mass? So the "resolution density" at high gravity locations is higher or something ... ?
I've come to a similar conclusion but still have a hard time wrapping my head around it.. closely tied to Einstein's curved spacetime..
and to this: http://www.fractalforums.com/general-discussion-b77/is-there-a-namevariable-for-'amount-of-detail-visible-at-fixed-resolution'/
but adding this would make any calculation impossible, as it would lead to infinity.
so for the sake of sanity I'll stick with "fixed space" relative to the scale we are living in. at least for now..

@ Chillheimer - I would call it "Age of the observable Universe: 13.798*10^9 years". Though my theories aren't mainstream. ( The Universe is one big animated fractal with different parts expanding and contracting, the actions of the major iterating force is dictated by the randomness of a well known set of butterfly wings.) :)
yep, added "observable" in there as well..
so do you believe that at different "places", e.g. outside the observable universe, the iteration count is different?
I doubt that - why would that be? Doesn't seem logical to me..
I believe that the age and the flow of time, recursion, is a universal "constant". I call it the zoom speed of the universe - probably the Hubble constant.


Title: Re: Resolution of the Universe
Post by: mclarekin on May 28, 2015, 11:58:03 AM
@ Sockratease . If we convert earthling meters and earthling seconds into universe meters and universe seconds, then  the answer may well be ONE. ;D

@ Chillheimer,  Same amount of iterations but random mixtures of forces acting (like random parameter changes). I guess my answer to why these forces exist in a random nature, would be "Just because they do".  Not a good answer, but fills in a big blank in my theory of tonight. ;D


Title: Re: Resolution of the Universe
Post by: hobold on May 28, 2015, 12:23:35 PM
@ Hobold - . Is the correct answer  - Planck Velocity? :)
Sorry for being such a tease. I was hoping I could share a moment of enlightenment, such as I experienced myself, if I leave that little discovery to you, dear readers.

The result is indeed a velocity. A well known one in our universe.


Title: Re: Resolution of the Universe
Post by: cKleinhuis on May 28, 2015, 12:43:08 PM
Sorry for being such a tease. I was hoping I could share a moment of enlightenment, such as I experienced myself, if I leave that little discovery to you, dear readers.

The result is indeed a velocity. A well known one in our universe.

so, is it the speed of light ?  :angel1:


Title: Re: Resolution of the Universe
Post by: Chillheimer on May 28, 2015, 01:06:18 PM
I was hoping I could share a moment of enlightenment, such as I experienced myself, if I leave that little discovery to you, dear readers.
you're right! edited my spoiler..
we really need a hide function in the forum  ;)


Title: Re: Resolution of the Universe
Post by: youhn on May 28, 2015, 02:08:27 PM
so for the sake of sanity I'll stick with "fixed space" relative to the scale we are living in. at least for now..
....

Agree on the fixed space relative to us. I will dig up my calculation and finalize it; it's all from the human perspective (scale). Ah, found it. From the space perspective we live around level 2e-27 :

Human ratio compared to whole (known) universe = human-length / universe-size = 1.75 / 8.8e26 = 2e-27

For the time perspective the choice of a human scale is harder. Shall we take our average lifespan (80 years), or the smallest observable human time unit (some milliseconds?). Anyway, let's just take 1 year, because I'm lazy and happened to find the age of the universe in years.

Ratio = human time / age of known universe = 1 / 1.38e10 = 7.25e-11

The difference is huge. You could say we humans use up more time than space (compared to the whole known).
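
A minimal sketch of the two ratios above (using the 1.75 m human and the 1-year human time unit chosen here):

Code:
HUMAN_LENGTH = 1.75      # m
UNIVERSE_SIZE = 8.8e26   # m
HUMAN_TIME = 1.0         # years
UNIVERSE_AGE = 1.38e10   # years

space_ratio = HUMAN_LENGTH / UNIVERSE_SIZE  # ~2.0e-27
time_ratio = HUMAN_TIME / UNIVERSE_AGE      # ~7.25e-11

# The time ratio is ~16 orders of magnitude larger than the space
# ratio: the "we use up more time than space" observation.
print(f"{space_ratio:.2e}  {time_ratio:.2e}  factor {time_ratio / space_ratio:.1e}")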

I believe that the age and the flow of time, recursion is a universal "constant". I call it the zoom speed of the universe - probably the hubble constant.
No no no, don't call it by that confusing name. If you're talking about iterations/recursion ... then it should be speed of iteration instead.

And what about the case of discrete vs continuous? Those Planck things point in the direction that everything is made of bits or pieces. So no space between Planck "bits"? They cannot move, only "switch" in some kind of jumpy way? And how does time relate to this question? If time is just the result of the difference between things (in space), then it must be the same as space. Discrete space leads to discrete time. Or both continuous.


Title: Re: Resolution of the Universe
Post by: Chillheimer on May 29, 2015, 10:51:56 AM
phew, these fractal cosmology-questions always turn out to zoom into so many different details ;)

With a little help I finally managed to calculate the numbers myself.
thx for not serving me the results on a silver platter, so I was forced to learn how to do it on my own. which is great!

Resolution of the observable Universe:

6.596e185 Voxels

Framerate of the Universe

1.85492e43 fps

Number of iterations of the Universe until today
fps: 1.85492e43
Age of the observable Universe: around 13.798*10^9 years --> 4.3513e17 seconds

8.0713e60 Iterations




youhn, I don't really get what you're saying.. did you answer my initial questions by avoiding "real numbers" and switching to ratios? not really what I was looking for, but very interesting.
especially your remark about us using more time than space, which strongly resonates with my personal perception of time speeding up (and Moore's law, which I think is much more universal than just cpu speed)
No no no, don't call it by that confusing name. If you're talking about iterations/recursion ... then it should be speed of iteration instead.
what confusing name? zoom speed or Hubble constant?
why do you find these confusing? In my view these explain it much better - though it remains a very strange and hard-to-grasp concept.
I believe that the expansion of the observable universe at the rate of the Hubble constant is equivalent to the fixed speed of a zoom movie into the Mandelbrot set.
Iteration count/speed isn't tied to the expansion.
but zoom speed and the Hubble constant are.
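
To make that analogy concrete: a constant Hubble parameter means the scale factor grows exponentially, and a constant-speed fractal zoom is likewise exponential growth of magnification, so both have a constant "zoom speed" on a log scale. A minimal sketch (the ~70 (km/s)/Mpc value is the usual rough figure, and treating it as constant over time is itself an approximation):

Code:
import math

H = 70e3 / 3.0857e22  # Hubble parameter: ~70 (km/s)/Mpc, converted to 1/s

# With constant H the scale factor is a(t) = exp(H * t), the same curve
# as the magnification of a fixed-speed Mandelbrot zoom movie.
doubling_time = math.log(2) / H  # seconds per doubling of scale
print(f"scale doubles every {doubling_time:.2e} s "
      f"(~{doubling_time / 3.156e7 / 1e9:.1f} billion years)")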

iteration speed imho is one iteration per Planck time. that's the smallest resolution in which time happens. but that doesn't really describe "the flow of time" and the unfolding of a fractal... damn, our language (or at least my language) isn't made to talk about things like this.. ;)

And what about the case of discrete VS continuous? Those planck things point in the direction that everything is made out of bit or pieces. So no space between planck "bits"? They cannot move, only "switch" in some kind of jumpy way?
Afaik this is what quantum physics is all about. So I'd say it's definitely discrete, although the resolution of our perception isn't fine enough, so we perceive it as continuous.

And how does time relate to this question. If time is just the result of the difference between things (in space), then it must be the same as space. Discrete space leads to discrete time. Or both continuous.
I don't understand how from this you come to the conclusion that space and time are the same..

My view is this:
a line 1D is the infinite to a point 0D
a plane 2D is the infinite to a line 1D
space 3D is the infinite to a plane 2D
time 4D is the infinite to space 3D
the bubble of an observable universe (from each point in space/time) 5D is the infinite to 4D time ..(?)

so space is part of time, but time is not part of space, and thus definitely not the same


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 04, 2017, 06:01:37 PM
Greetings Chillheimer,  For the last couple weeks I had been thinking about how a simulation of the universe might work.
It became apparent to me during this time that there are 2 modes of 'universe expression'.  

One expression of the universe can be considered as "singularity", followed by "big bang", followed by "expansion", followed by "collapse" to the singularity.
So this expression of the universe (an entire iteration) we'll just call an "iteration".  In typical thinking this "iteration" is a one time affair and we are somewhere among it.  The rest of it has yet to transpire so (even though billions of pages have been filled to describe it) it cannot be defined as "the universe" yet since all of its information has not yet been released.  But that doesn't stop people from thinking that "the universe" exists here and now.  It actually is still in the process of expressing itself in its iterative form.  Just as the person "Chillheimer" cannot yet be fully defined until the full expression of "Chillheimer" is complete.

The other expression is as a series of iterations, just as you realize.  We'll call this series of iterations "The Universe".

How this series of iterations would work is the crux of my thoughts over the last two weeks.
I ran a thought experiment on this, and as it progressed it became clear that "The Universe", in all likelihood, behaves reiteratively....and EXTREMELY quickly.
It also became clear that the nature of our interface with, combined with the behavior of, "The Universe" gives rise to many mathematical artifacts in physics since a contiguous universe is a base assumption in science.

So, I'm actually not a maths wizard by any means.  But it appears to me that this forum has a generous share of wizardry in that area.  And since it's kinda obvious to me that fractals are a natural byproduct of a reiterative Universe, this forum may appreciate what I have found.

I would like to lay out my model here and see if it's possible that it may shed some light on these mathematical artifacts so they can be better dealt with....for science.

Annnnnd...here we go:  (At the bottom are some interesting questions the model seems to address.)

Let's assume:

A computer has been developed with enough computational speed and muscle to model every particle and wave form in the physical Universe.

To build the model several important assumptions are made:
1)..The physical, observable Universe is finite.

2)..Physics has most things correct; things such as:
.a...A finite limit to observable light speed.

.b...There is no universal perspective.

.c...The smallest possible unit of time is the Planck Time.

.d...The smallest unit of height, width or length is the Planck Length.

.e...The Second Law of Thermodynamics is accurate in that,
"the total entropy of an isolated system always increases over time, or remains constant in ideal cases where the system is in a steady state or undergoing a reversible process. The increase in entropy accounts for the irreversibility of natural processes, and the asymmetry between future and past.

.f...Information cannot be destroyed.

.g...Spin (in all its forms) is an integral part of the information of the universe.
.............. http://hyperphysics.phy-astr.gsu.edu/hbase/spin.html

.h...A singularity is the starting point for the expansion (BIG BANG) of The Universe.

3)..The "consciousness" of living entities interfaces with "sensory perception". As such the two are different but interdependent aspects of life as it exists within the Universe. (ie perception is NOT consciousness and both originate and exist wholly within the physical Universe).

4)..The "observer" (or whatever you want to label it) of a person interfaces with "consciousness". The "observer" directs "consciousness" to carry out choices based on the input from sensation as interpreted by "consciousness" (ie consciousness is not observer).

5)..The biggest assumption to the model is that the origin and domain of the observer is OUTSIDE the physical Universe. The only way, in this model, to experience the Universe as we do is for an extrauniversal "observer" to interface with an interuniversal "consciousness". (note: This is NOT a "religious" perspective I'm taking here, it is a necessary assumption for the model.)


OK, back to The Computer...and the Second Law of Thermodynamics and some interesting observations made recently (not in the future but lately).

If you take the 2nd Law to its logical conclusion, the end state of every wave form (ie everything) in the Universe will be to become entirely perfectly homogeneous. Which happens to be a state of affairs which is exactly the opposite of what it seems like entropy is doing in its drive toward more and more disorder.  The "disorder" during the process is the activity involved in seeking a "steady state".

If two particles are together, there is an order to their bound state. Entropy "wants" it broken and "disordered".
If two particles are close enough to interact, that interaction represents order. Entropy "wants" the two particles far enough away from each other to avoid any order there.

When all particles/wave forms eventually reach the ultimate steady state, "heat" in the Universe is immeasurably small, all particles/wave forms have separated out and are now identical. They are oriented identically to each other.

The Universe is now a "perfect" crystal structure made up of the smallest, discrete units of energy physically possible, currently labelled "bosons", AND all the information of the entire "life" of the observable Universe is represented by this crystal structure.
There can be only one configuration attainable that produces this logical result.  Presently, exploration of the possibilities for this configuration has yet to be realized.  An unfortunate oversight.

SO, for our future computer (in a time when this configuration has been described), this being a known value for a starting point from which to crank up the model, The Computer has no trouble setting up The Universe here. No amount of computing prowess can model the point source theorized as the starting point of the BIG BANG, because at that point no information has been released. There is no data to input.

How can what seems to be the end point of The Universe be used to model The Universe from its theorized beginning?
This is where a couple recent observations come in.

First a bit of info:
"Practically, the work needed to remove heat from a gas increases the colder you get, and an infinite amount of work would be needed to cool something to absolute zero. In quantum terms, you can blame Heisenberg’s uncertainty principle, which says the more precisely we know a particle’s speed, the less we know about its position, and vice versa. If you know your atoms are inside your experiment, there must be some uncertainty in their momentum keeping them above absolute zero – unless your experiment is the size of the whole universe."
https://www.newscientist.com/article/dn18541-what-happens-at-absolute-zero/

Fortunately, thanks to our future computer, this experiment is a model of The Whole Universe.

So now we can observe what happens to these particle/wave forms once "the end of The Universe" is modeled.
"In everyday solids, liquids and gases, heat or thermal energy arises from the motion of atoms and molecules as they zing around and bounce off each other. But at very low temperatures, the odd rules of quantum mechanics reign. Molecules don’t collide in the conventional sense; instead, their quantum mechanical waves stretch and overlap. When they overlap like this, they sometimes form a so-called Bose-Einstein condensate, in which all the atoms act identically like a single “super-atom”. The first pure Bose-Einstein condensate was created in Colorado in 1995 using a cloud of rubidium atoms cooled to less than 170 nanokelvin."
https://www.newscientist.com/article/dn18541-what-happens-at-absolute-zero/

Now The Universe in our model is no longer a HUGE amount of separate particles/wave forms but is ONE entity with no (zero, nil, none) distinguishable measurements since every possible position, motion, wavelength, spin, etc is identical when "measured" from any reference point. It has become a singularity.
Instead of a "perfect" "crystal" it is now a "perfect" particle/wave form. A "perfect fractal" in every sense.

This structure can also be seen as a "supersolid": a solid that is also superfluid, with zero relative resistance to motion.

At this point, The Universe, instantaneously collapses to a point singularity with a "spin value" equal to the entire combined spin exhibited during The Universe's just completed expansion to "perfection" (its "iteration"). It is still the same "perfect fractal" that it was pre-collapse and The Information of the entire existence of The Universe is still retained in the totality of the singularity. Here, The Universe is now The Information.

If all The Information was released from this point, theoretically, The Universe would experience another Big Bang and expand outward EXACTLY THE SAME WAY AS BEFORE to reach its state of being a pure Bose-Einstein condensate. Then it would instantly collapse again with The Information remaining preserved...then BIG BANG again...then collapse again...then BIG BANG again...exactly the same way every time.  "The Information" doesn't change, The Universe is THE information...constantly repeating.  There is no other information in the Universe that impinges to change it.

So, we run our model this way. But who has time to wait for the model to run for what could be trillions and trillions of years?
Since we're on a monster thought experiment computer of the future with nearly unimaginable speed, we can cycle this thing up.....REALLY, REALLY, REALLY fast.

The fastest possible speed for this is One Planck Time per run (holding to assumption 2c above)
At this frequency, the total spin advances ONE Planck length.
One Planck Length per Planck Time is the speed of light.

Now, if one were to represent this model holographically, one would see about 1.85 × 10^43 identical reiterations of The Universe every second (one per Planck time). It would only ever appear to be a constant bright white light.

So, we set up the computer to project such a hologram scaled down to fit within a spherical projecting area 3 miles in diameter. We physically step into the projection and zoom in to roughly the space that our Solar System occupies during what we designate here and now, the present that our computer occupies. We still see nothing but white light, of course. We cannot see the spin advance, we can't see the singularity at either end of the cycle. We can't distinguish planets, galaxies, or galaxy clusters.
Just white light. Our visual perspective of our Model Universe is entirely "extra-universal" at this point and nothing inside it can be distinguished from anything else inside. There is still NO Universal Perspective from a vantage point outside The Universe.  You cannot have a perspective of the physical universe unless you are part of it and experiencing it as part of it.  So, to "enter" the hologram you have to have an interface.
From this point we can deduce that human bodies as they exist are part of just such an interface.

Some further assumptions are necessary here:
One observer per interface with one consciousness.
Once an observer is interfaced, full investment is required.
Any and all of the observer's awareness prior to interface is not brought into the interface since the human body is not "wired" to translate extrauniversal awareness...so your inter-universal "self" begins physical existence unaware of the true nature of the interface.

Your consciousness accumulates perceptions and presents the observer with choices. The choices are made, and the sum of one's choices shows a pattern of the observer vis-a-vis differing systems of analysis...ie sliding scales of good/bad...right/wrong...sympathetic/apathetic/antithetic...progression/regression...etc.

It's how and why we experience time due to the nature of the interface that's interesting.

It's how and why certain quantum effects are observed and why many of them may have been misinterpreted that's interesting.

When an interface (Interface 1) is established the point of perspective is fixed in relationship to the "spin" of The Universe.
The spin moves forward one tick...one planck length, across the 3 physical dimensions.  From one "tick" to the next, ALL particles and wave forms of The Universe have reached terminus and collapsed to the point singularity then have been recreated exactly the same except everything has "ticked" forward one unit from your perspective. When The Universe recreates your interface it is now Interface 2 and from Interface 2 (which has been "left behind" by the "spin") you now experience The Universe as it exists one tick forward.
Things (Things 1) that were moving in Space 1 during Interface 1 have been exactly recreated in Space 2 where they would be had their movement been in a "contiguous" Universe since the information of their movement is retained.

As a visual aid for this, place a sheet of paper flat on a table. Hold a pen or pencil point on the paper and slide the paper across the table without moving the pencil point. The movement of the paper represents the forward "ticking" of the Universe's spin across each iteration of the Universe.

The line left across the paper, if divided into discrete Planck lengths represents each time the Universe has gone past your interface's point of reference.

Since everything that exists in the Universe only exists for one Planck unit of time and is then recreated in the next Planck time, you are able to perceive actual things due to the change produced by the translation of "Universal Spin".
Your level of perceptual resolution does not allow you to experience anything as short as a Planck time so it actually takes many iterations to even leave the slightest impression on your senses.  At human resolution what CAN be perceived are all the effects of classical Newtonian physics.

After The Universe "ticks" past, the spatial and temporal coordinates occupied by your interface enough times, some observations that amount to the uncertainty of quantum mechanics begin to take place.

When you think you are measuring a particle, that particle has ceased to exist and has been replaced many times. That's going to result in quantum effects being reported since it has been assumed by physicists that The Universe is contiguous.

Also, since time and space advance one Planck unit every iteration...NOTHING can advance faster. By the time a particle travels one Planck length, it no longer exists, and its replacement particle can only occupy the space that has moved forward ONE Planck length.

Hence there's a speed limit...c.

Take the thought experiment where a person travelling at the speed of light turns on a flashlight.

The 1 Planck length limit will only allow the flashlight beam to be perceived as moving at the speed of light, no matter at what speed or in which direction any observer is moving.

This is the "inter-universal" frame of reference.

As opposed to the "extra-universal" frame of reference where The Universe can only be seen as a bright white light.

There is no "Universal" frame of reference....just inter and extra.

The "tick" is actually a 3-dimensional spin constant. Without it time will not flow the way it does.

The spin (quantum spin) is imparted to all aspects of the physical Universe upon the initiation of expansion. The spin is a function of The Universe collapsing to a point singularity. It doesn't require parts or functions of the model to be introduced from a separate origin.

http://hyperphysics.phy-astr.gsu.edu/hbase/spin.html

As for Lp (Planck length), the value of Lp is merely an artifact of your experience being mediated by the progressive nature of the spin over each successive Tp (Planck time), which are separated in reality by the digitality of the spin.
One iteration of the Universe (one "flash of the strobe") creates one Tp...so an entire iteration of expansion reaching perfect 0 entropy can be divided into any fraction of, what we assume from our perspective to be, the smallest measure of time or length within which information can exist.

In other words, each single iteration is contiguous while successive iterations are not. WE experience The Universe via the successive iterations so we cannot measure or even conceive of the contiguousness of an iteration unless we realize HOW we experience.

So, what we have is basically a strobing "Big Bang".
Each "flash" spreads The Universe within/through/across a substrate. The anchor for existence. In a computer the substrate would be a magnetic field or something like a quantum dot matrix or some crystalline structure. Some call it the "zero point field" or "the ether" among other things. At its base it's just a substrate. In relation to The Universe, the substrate is unmoving...static...inert. So as the "spin" of The Universe progresses, a point on the substrate contains sequential points of Universe so that point of substrate "sees" time go by. Any interface by a "soul" has to "tunnel" through the substrate so the interface must remain "in place" with that part of substrate through which it passes into the realm of mass and energy.

This is why the so-called "arrow of time" is a one-way street. You are not moving through time.
Time is caused NOT by you moving with the universe but by successive Universes moving past your interface (vantage point).

So when you look up to the stars and think, "Wow, the universe is amazing."
Think also of The Universe as a really powerful strobe running at Planck frequency. Each flash creates what we see, and for as long as your interface remains undecayed, you get to see it go by in amazing slow motion...one Planck time at a time, as the future slides inexorably toward you.


The model I have described shows that "time" can only occur ACROSS successive iterations of "big bang"/expansion/collapse cycles.

WITHIN any single iteration there is no time so there is no "velocity".  Any single iteration occurs "instantly", or within a framework that is time independent.

We can remove a large piece of probably faulty "reductionism" (a road that science so far has failed to see can only go so far) by combining "big bang" and "expansion" into one "thing".

Unfolding.

So ONE iteration of the "universe" unfolds and refolds in what we would call 0 time.  

The "spin" (a three dimensional "spin") reorients the subsequent unfolding in relation to the substrate.  The reorientation of each successive unfold/refold is observed by us, via the mediation of our interface through the substrate, as change...so now there's time as we experience.

Between any two unfold/fold events, every particle/wave-form is reoriented by 1 Planck length.  That would give us the perception of 1 Planck time going by, but our resolution is not fine enough to actually observe a Planck length.

Let's take an atom in one iteration and follow into the next iteration.
If the atom is perfectly stationary in relation to the substrate in iteration 1, it will reoccur one Planck length away in iteration 2.  No matter what, that's "where" it will exist.  To us, that change amounts to one Planck time.
Since everything has also moved that Planck length, anything that was not moving in relation to the substrate in iteration 1 will be oriented in exactly the same way in relation to anything else that was not moving in relation to the substrate.

The "spin" then "moves" a stationary particle at the speed of light to where it next exists.

Now take a particle that is moving very, very nearly at the speed of light in relation to the substrate (and your observation point...which is stationary in relation to the substrate). Within iteration 1 the particle, moving "across" or "along" the substrate, is actually headed toward where it will be in iteration 2, and would reach that point IF the universe did not refold...wiping it out of existence.

The "information" of this event in iteration 1 is retained in the refolding.

In the unfolding to iteration 2, the particle completes its trip to where it is recreated.  To us, then, it appears that the particle has taken much less time to get to its iteration 2 existence.  And we call that time dilation.

Now we're ready to see why e = m times c squared.

A particle gets from iteration 1 to iteration 2 in what we define as c due to the spin of the universe.

If it is already headed toward where it will be (in relation to the substrate) the "time" to get there is shortened.

If it travels that entire Planck length across/through/along the substrate in iteration 1, it arrives at iteration 2 in what we see as no time.  If it takes no time to get there, we see that it moved "in relation to us" at the speed of light (c).

(If its motion during an iteration takes it PAST the point where it must appear for the next, it has moved into the future.  Perfectly consistently with Einstein's work.)

But we have also moved across the substrate at the speed of light and at the end of iteration 2 that particle, if still moving in the same manner, will be reaching iteration 3.

If we are NOT moving in relation to the substrate that particle will get to iteration 3 in what we see as no time.
If we are moving toward our next iteration...which we always are...that does not affect the particle's movement in relation to the substrate and it still, to us, gets there in no time at c.

So a particle gets the compounded energy of both the "spin" of the Universe AND that which has moved it 1 Planck length in an iteration.....c squared.  This is multiplied across whatever units of mass are assigned, so its total energy is mass times the speed of light squared.

There's still every question remaining pertaining to "humanity" and "purpose" etc.
But at least the model puts the origin of YOU, the non-physical you, outside the physical Universe where the answers to "humanity" and "purpose" etc lie.
https://en.wikipedia.org/wiki/Planck_time
https://www.newscientist.com/article/dn18541-what-happens-at-absolute-zero/
https://en.wikipedia.org/wiki/Supersolid
http://hyperphysics.phy-astr.gsu.edu/hbase/spin.html

What IS time?
HOW does it "occur"?
Why do we experience it the way we do?
WHY is it one-way?

WHY is "c" the upper speed limit?
Why does time "dilate" for something more and more as it accelerates through space?

WHY is it that the light from a moving light source will be seen as "c" from both the light-source as it moves AND from a stationary observer?

WHY is a Planck length the lowest limit of space/time resolution in which information exists?

Why did Einstein say there is no "universal frame of reference"?

Why does the total energy of a particle equal mass times the speed of light squared?

ps...I found this site by googling  "iterations of the universe".  It's apparently a very infrequently used phrase  :-)


Title: Re: Resolution of the Universe
Post by: Tglad on January 05, 2017, 01:31:02 AM
Quote
how many Planck lengths (aka voxels) are packed into the expanding sphere of the observable universe
quantum theory does not mean that the universe is voxels, or even that it is discrete. The only discrete part of quantum theory is the energy levels of orbiting electrons. But the physics of the underlying virtual particles is entirely continuous and deterministic.

AH, I hope you don't mind me having a go at answering your questions, I'm not a physicist though, these are my opinions:

"What IS time?"
It is a quantity we can macroscopically define as an axis on a manifold (3D space + 1D of time, but curved).

"HOW does it "occur"?"
It doesn't occur... but universes that have time seem to be the only ones where planets and animals can form in order to ask that question.
"Why do we experience it the way we do?"
hard to answer about anything, but I think the reason we have memories of the past and predictions of the future (rather than the other way around) is that the future is higher entropy, so takes HUGELY more storage, but the past is lower entropy and can be stored in the present.
"WHY is it one-way?"
If there was no entropy gradient the world would be hugely disordered (perhaps a big fuzzy cloud) and no-one around. We're only here because the entropy is rapidly increasing, and it only appears one way because our memories are on the low entropy side. If the entropy gradient were the other way then our memories would be on the other side, and we would still feel that time is one way.

"WHY is "c" the upper speed limit?"
It isn't really a limit, you can always go faster and faster from your perspective. But others will not see you go past faster than c.
Its value could either be infinity (which causes problems with causation), or finite. If it is finite, it has to have some value in some units, and we're going to call it c.

"Why does time "dilate" for something more and more as it accelerates through space?"
Either time^2 is positive, in which case time is like a space dimension and we would not call it time, or time^2 is 0 (independent of space) and we live in the Newtonian universe but the infinite speed of light has problems with causation (circular dependencies), or time^2 is negative and we live in Einstein's universe, and we get time dilation. For more info you could look up Lorentz boosts.
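
A small numeric illustration of that last case (the Minkowski metric, where time^2 is negative): a 1+1D Lorentz boost leaves c unchanged while dilating time. The 0.6c test velocity is just an example value:

Code:
import math

C = 299_792_458.0  # m/s

def boost(t, x, v):
    # Lorentz-boost the event (t, x) into a frame moving at velocity v.
    gamma = 1 / math.sqrt(1 - (v / C) ** 2)
    return gamma * (t - v * x / C ** 2), gamma * (x - v * t)

v = 0.6 * C

# A light ray x = C*t still satisfies x' = C*t' after the boost:
t2, x2 = boost(1.0, C, v)
print(x2 / t2)  # 299792458.0, i.e. still c

# A clock at rest (x = 0) ticking out 1 s is dilated by gamma:
t2, x2 = boost(1.0, 0.0, v)
print(t2)  # 1.25, the gamma factor at 0.6c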

"WHY is it that the light from a moving light source will be seen as "c" from both the light-source as it moves AND from a stationary observer?"
Because the light source is time dilated, so, while the relative distance travelled by the light is less, the time that has passed appears to be (and is) less for the person at the light source, so the ratio of the two is the same velocity.

"WHY is a Planck length the lowest limit of space/time resolution in which information exists?"
This is due to the uncertainty principle... it seems to me that there is still a lot of behaviour going on below this length (wave behaviour of virtual particles) but that seems to be the limit of our ability to resolve distances.

"Why did Einstein say there is no "universal frame of reference"?"
In Newton's universe, knowing that the speed of light is finite, one has to assume that the earth is roughly stationary, as the speed of light is measured the same in all directions. So there must be a frame of reference (a coordinate system) that is roughly fixed to the earth. This would be weird, as if you were in some other galaxy and shined a torch, the light would come out slower in one direction than another.
In Einstein's special relativity (1905), there is no fixed frame, objects going at any fixed velocity will observe light's speed to be equal in all directions. However, it assumes the acceleration of the frame is 0.
In Einstein's general relativity (1915?) he generalises the laws of physics to work the same on any frame of reference, including accelerating ones.

"Why does the total energy of a particle equal mass times the speed of light squared?"
don't know  ^-^


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 05, 2017, 04:27:04 AM
Hi Tglad,

Thanks for the response.

The questions at the end of my post are handled in the model.  It's a way to begin a model, and its basic form produces results that agree with/confirm the math physics uses.

The thing about the math is that it can't be truly substantiated as applicable to reality by using math...that would be tautological.

The model steps outside of math to provide confirmation of the math.

Really math heavy stuff like the battle over whether dark matter exists or not might be solved with a more meticulous treatment of the model.  If the model doesn't allow for dark matter, then the math involved that brings people to suppose that dark matter exists may be just a useless exercise.  That "dark matter math" might just arise out of an artifact created by treatment of The Universe as a contiguous, one-time iteration when it's actually discrete, sequential iterations.


Title: Re: Resolution of the Universe
Post by: youhn on January 05, 2017, 06:34:07 PM
Quote
If you take the 2nd Law to its logical conclusion, the end state of every wave form (ie everything) in the Universe will be to become entirely perfectly homogeneous. Which happens to be a state of affairs which is exactly the opposite of what it seems like entropy is doing in its drive toward more and more disorder.  The "disorder" during the process is the activity involved in seeking a "steady state".

This is a misinterpretation of the 2nd law, which actually is not really a law like, for example, the value of c, or e = mc^2. It's more of a statistical/average thing.
I would rather say the universe works as a structure-building machine. Starting with a compact singularity (? that's what we call it, but we don't know for real) of energy. Only when time starts to progress (or perhaps time is an artifact, the result of things moving/iterating/flowing) does this energy bifurcate into particles, which first form simple atoms. Some bits of energy/matter were closer together, so started to warp space a bit more. As a result, planets and stars were formed. Some iterations later, stars exploded and brought new, more complex atoms into the game. Eventually these atoms, at least on earth, formed into even more complex structures, like RNA and DNA. Then from single cells to multi-cell structures, which began to shape their environments in increasingly more effective ways. Some even think about blowing up threatening asteroids, taking over planets or even tapping the complete energy of a star. It seems that the universe wants to interact with itself. Could we call it (the universe) a closed system...? (then the 2nd law would apply).


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 05, 2017, 08:07:06 PM
So, if I understand what you're getting at, as a "structure building machine" existing as a closed system (finite, as per assumption 1 of the model), the model might proceed in a way that the first "iteration" of Universe is just a single event..."bifurcation of energy" or "step 1".  The Universe then refolds from there, retains the info of that event, and repeats it in the second iteration. As the second iteration unfolds, this bifurcation of energy progresses one step...and the Universe refolds again, retaining the information of both steps, so the "build" represented by iteration 2 is repeated in iteration 3, whereupon another quantum of "building" occurs....
Until, after enough iterations, each adding another quantum of "progressive building", the Universe arrives at buildx, where/when a particle or particles exist.  And from there, at a particular buildxxxxx..., conditions arise that allow for our interface to be established, and here we are.

That could very well be the case and I see no reason why the model could not function this way.

I suppose that opens up the question of whether or not "we" are interfaced at the leading edge of the "building" or, since it's reiterative, are we somewhere/when in the middle of the building and the leading edge is now at some point far into what we call "the future"?

Thanks for the input youhn.

As an addendum I have to add that since the model cannot "begin" at or "during" iteration 1 (iteration alpha) or at or "during" the point singularity, it has to start at iteration omega so every subsequent iteration would proceed through the entire summation of iterations.  Exploration into the point singularity, iteration alpha and all iterations that "build" from alpha to reach omega will have to wait for the computer.  Could be a "long" wait.


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 05, 2017, 10:52:04 PM
quantum theory does not mean that the universe is voxels, or even that it is discrete. The only discrete part of quantum theory is the energy levels of orbiting electrons. But the physics of the underlying virtual particles is entirely continuous and deterministic.


Hi again Tglad,

The fact that energy levels are quantized is actually "reflected" throughout the system, producing such things as spin quantum, time quantum and "distance" quantum.  The quantum nature of these things at the basis of existence you might call fractalization of quanta.
In the same way that "spin" is fractal...from galaxy clusters down into galaxies into star clusters into solar systems...etc down to quarks and such.

Here's an example discussing spin quantum...
(note the references to Bose-Einstein Condensate (BEC), and their discussion of the lattice (which I termed "substrate" in my model) that is a necessary component of the environment produced for the described experiment.)


From: Bose–Einstein condensation of spin wave quanta at room temperature:

"The unique properties of spin waves result from interactions acting between magnetic moments. For relatively small wavevectors (k < 10^4 cm^-1), spin wave dynamics is almost entirely determined by magnetic dipole interactions. Owing to the anisotropic nature of the magnetic dipole interactions, the frequency of a spin wave depends not only on the absolute value of its wavevector, but also on the orientation of the wavevector relative to the static magnetization. For large wavevectors (k > 10^6 cm^-1), the exchange interaction dominates. In the wavevector interval 10^4 cm^-1 < k < 10^6 cm^-1, neither of these interactions can be neglected. The corresponding excitations should be treated as dipole-exchange spin waves.

From the quantum-mechanical point of view, the spin wave energy should be quantized. The quantitative theory of quantized spin waves, or magnons, was developed by Holstein & Primakoff [2] and Dyson [3]. If one considers the completely magnetized state at zero temperature as the vacuum state of the ferromagnet, the low-temperature state can be treated as a gas of magnons. The magnons behave as weakly interacting quasi-particles obeying Bose–Einstein statistics. Magnons at thermal equilibrium do not usually show coherence effects. In fact, they form a gas of excitations, nicely described within the quantum formalism of population numbers.

One of the most striking quantum phenomena possible in a gas of bosons is Bose–Einstein condensation (BEC) [4]. It represents a formation of a collective macroscopic quantum state of bosons. As the temperature of the boson gas T decreases at a given density N, or, vice versa, the density increases at a given temperature, the chemical potential μ describing the gas increases as well. On the other hand, μ cannot be larger than the minimum energy of the bosons ε_min. The condition μ(N, T) = ε_min defines a critical density N_c(T). If the density of the particles in the system is larger than N_c, BEC takes place and the gas is spontaneously divided into two fractions: (i) particles with the density N_c are distributed over the entire spectrum of possible boson states and (ii) a coherent ensemble of particles is accumulated in the lowest state with ε = ε_min.

Several groups have reported observations of field-induced BEC of magnetic excitations in different quantum low-dimensional magnets (for a review, see [5]). In these materials, a phase transition occurs if the applied magnetic field is strong enough to overcome the antiferromagnetic exchange coupling. Such a transition is accompanied by a magnetic mode softening (ε_min → 0). It can be treated as BEC in an ensemble of magnetic bosons. If, however, a gap exists in the magnon spectrum (ε_min > 0), there is no possibility of observing BEC at true thermodynamic equilibrium, because μ < ε_min. In fact, if the magnetic subsystem stays in equilibrium with the thermal bath (lattice), its state is characterized by the minimum of the free energy, F (e.g. [6]). On the other hand, the chemical potential is the derivative of the free energy with respect to the number of particles. In a system of quasi-particles whose number can vary, F can be minimized through creation and annihilation of particles. In other words, quasi-particles will be created or annihilated owing to energy exchange with the lattice until their number corresponds to the condition of the minimum of F (this is the same as μ = 0). Thus, to observe BEC in a gas of quasi-particles with ε_min > 0, one should drive the system away from the true equilibrium using an external source. In the case of polaritons, one uses a laser [7]; in the case of magnons, parametric microwave pumping is a perfect tool for this purpose."

Bose–Einstein condensation of spin wave quanta at room temperature
by O. Dzyapko, V. E. Demidov, G. A. Melkov and S. O. Demokritov
Institute for Applied Physics, University of Münster, 48149 Münster, Germany
Department of Radiophysics, National Taras Shevchenko University of Kiev, Kiev, Ukraine

http://rsta.royalsocietypublishing.org/content/roypta/369/1951/3575.full.pdf

I just ran across this, so I apologize for the delay in bringing it to your attention to further the discussion here on "quanta".


Title: Re: Resolution of the Universe
Post by: Tglad on January 06, 2017, 12:28:12 AM
quantised energy (only in bound states, such as orbiting a nucleus) includes angular kinetic energy. But there is no quantisation of time or space... Having a minimal (approximate) distance at which you can tell things apart (Planck length) is not the same as a quantisation of space. One can also count the number of bits of information that a surface of a given area is capable of storing... but that's not the same as space being quantised. (At least, from what I have read, e.g. the last chapters of The Road to Reality by Roger Penrose).

Several researchers have played with models that quantise space, but none have been successful; the biggest problem is that you generally lose Lorentz invariance, i.e. the quantisation would look different under a Lorentz boost, and the important principle of no universal coordinate frame is lost.
Interestingly there are such concepts as space-time crystals (https://plus.google.com/117663015413546257905/posts/WTUTcYJMnGR) (which Josleys has worked on), where quantised lattices are invariant under certain boosts, but it doesn't apply to arbitrary boosts.


Title: Re: Resolution of the Universe
Post by: kram1032 on January 06, 2017, 01:49:13 AM
The universe certainly isn't, like, a voxel grid. Things don't "jump" between two nearest (Planck-distance separated) locations at neighbouring times (Planck-time separated) if you look that closely. The transition is smooth and continuous. You just can't pinpoint where exactly a given particle is.
Discretization effects only occur if you add boundary conditions.


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 06, 2017, 04:49:37 AM
quantised energy (only in bound states, such as orbiting a nucleus) includes angular kinetic energy. But there is no quantisation of time or space... Having a minimal (approximate) distance at which you can tell things apart (Planck length) is not the same as a quantisation of space. One can also count the number of bits of information that a surface of a given area is capable of storing... but that's not the same as space being quantised. (At least, from what I have read, e.g. the last chapters of The Road to Reality by Roger Penrose).

Several researchers have played with models that quantise space, but none have been successful; the biggest problem is that you generally lose Lorentz invariance, i.e. the quantisation would look different under a Lorentz boost, and the important principle of no universal coordinate frame is lost.
Interestingly there are such concepts as space-time crystals (https://plus.google.com/117663015413546257905/posts/WTUTcYJMnGR) (which Josleys has worked on), where quantised lattices are invariant under certain boosts, but it doesn't apply to arbitrary boosts.

The problem there is that lack of evidence is not evidence of lack.

William G. Tifft, a professor of astronomy at the University of Arizona:
"There is no conclusive evidence that time is quantized, but certain theoretical studies suggest that in order to unify general relativity (gravitation) with the theories of quantum physics that describe fundamental particles and forces, it may be necessary to quantize space and perhaps time as well. Time is always a 1-dimensional quantity in this case.

My own work, which combines new theoretical ideas with observations of the properties of galaxies, fundamental particles and forces, does suggest that in a certain sense time may indeed be quantized. To see this we need some background information; in this scenario, time is no longer 1-dimensional!
"My colleagues and I have observed that the 'redshifts' of galaxies seems to be quantized. The redshift is the apparent shift in the frequency of light from distant galaxies. This shift is toward the red end of the spectrum and its magnitude increases with distance. If redshifts were due to a simple stretching of light caused by the expansion of the universe, as is generally assumed, then they should take on a smooth distribution of values. In fact, I find that redshifts appear to take on discrete values, something that is not possible if they are simply due to the cosmic expansion. This finding suggests that there is something very fundamental about space and time which we have not yet discovered.
"The redshifted light we observe is consists of photons, discrete 'particles' of light energy. The energy of a photon is the product of a physical constant (Planck's constant) times the frequency of the light. Frequency is defined as the reciprocal of time, so if only certain redshifts are possible, then only certain energies are present, and hence only certain frequencies (or, equivalently, time intervals) are allowed. To the extent that redshifts of galaxies relate to the structure of time, then, it suggests an underlying quantization.
"In our newest theoretical models we have learned to predict the energies involved. We find that the times involved are always certain special multiples of the 'Planck time,' the shortest time interval consistent with modern physical theories. The model we are working with not only predicts redshifts but also permits a calculation of the mass energies of the basic fundamental particles and of the properties of the fundamental forces. The model implies that time, like space seems to be three dimensional.
We now think that three-dimensional time may be the fundamental matrix of the universe.
In this view, fundamental particles and objects--up to the scale of whole galaxies--can be represented as discrete quantized structures of 3-d time embedded within a general matrix of 3-D time. The structures seem to be spraying radially outward from an origin point (time = 0): a big-bang in 3-D time. Any given chunk, say our galaxy, is flowing outward in 3-D time along its own 1-dimensional track, a 1-D timeline. Inside our (quantized) chunk we sense only ordinary 3-D space, and the single 1-dimension time flow of our chunk of 3-D time.


"Now we can finally attempt to answer the original question, whether time is quantized. The flow of time that you sense corresponds to the flow of our chunk of 3-D time through the general matrix of 3-D time. This time is probably not quantized. Both ordinary space and ordinary 'operational' time can be continuous. On the other hand, the structure of the time intervals (frequencies and energies) that make up the 3-D chunks of time which we call galaxies (or fundamental particles) does appear to be quantized in units connected to the Planck scale. In the 3-D time model, space is a local entity. Galaxies are separated in 3-D time, which we have misinterpreted as separation in space."

https://www.scientificamerican.com/article/is-time-quantized-in-othe/
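
To make the chain of reasoning in that quote concrete, here is a small sketch (my own, with purely illustrative numbers - this is not Tifft's analysis) of how a redshift pins down a frequency, a photon energy, and a time interval:

```python
# Illustrative only: redshift -> frequency -> photon energy -> period.
h = 6.62607015e-34       # Planck's constant, J*s
c = 2.99792458e8         # speed of light, m/s

lam_emitted = 656.28e-9  # H-alpha line, m
z = 0.001                # hypothetical small redshift

lam_observed = lam_emitted * (1 + z)
f = c / lam_observed     # observed frequency, Hz
E = h * f                # photon energy, E = h*f, in J
T = 1 / f                # the associated time interval (period), s

print(f"frequency: {f:.6e} Hz, energy: {E:.6e} J, period: {T:.6e} s")
# If only discrete redshifts were allowed, only discrete E, f and T would
# occur - that is the sense in which the quote ties redshifts to time.
```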

So, if the above is correct, the other two dimensions of time occur in a way not experienced by us since we experience it across iterations.  At least one of the other two dimensions of time may occur within any individual iteration.  I mistakenly thought that there could be no time within an iteration but the above seems to point toward each iteration possessing its own dimension of time.

So the unfolding of an iteration may produce the 2nd dimension of time within each single iteration and it may be that the refolding of an iteration produces the 3rd dimension of time.  

Continuing to treat The Universe as a single iteration (instead of repetitions of folding/unfolding) will produce artifacts that can only be resolved with such things as "special cases" and the like.
They seem to be getting very close to my model in that last quoted statement.

This is getting to be a very interesting discussion.  Thanks for the input once again.


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 06, 2017, 05:35:02 AM
The universe certainly isn't, like, a voxel grid. Things don't "jump" between nearest (Planck-distance separated) locations or between neighbouring (Planck-time separated) instants if you look that closely. The transition is smooth and continuous. You just can't pinpoint where exactly a given particle is.
Discretization effects only occur if you add boundary conditions.

Hi kram, thanks for the response.
I hope my post above answers some of your concerns.

I think part of the problem in the communication here might come from a difficulty with stepping outside the current paradigm in order to explore another.

If you look closely at my model I make no mention of a "voxel grid" nor do I imply one.  And I certainly don't suggest that "things jump between two nearest (planck-distance separated) locations to neighbouring times (planck-time separated)"  
If The Universe were a one-iteration contiguous structure, I suppose something like a "voxel grid" would be necessary. But since I have a repeating iteration in the model, it is each separate iteration that is unfolding as a "quantized" universe (lower case "u"). It would be the sum of each unfolding that is "The Universe" which, due to the nature of the interface roughly described in my model, produces observations that seem to rule out quantized time and space. We are unable to see separate iterations...we basically skim along as the future proceeds toward us...one "quanta" of Universe at a time, while we mistakenly believe each individual "quanta" is the whole.

Looking at it as if there were one contiguous iteration plodding along across billions of years in a continuous flow is like looking at any single frame of a movie reel, saying that the contents of that single frame are "The Movie", and then trying to mathematically model The Movie on a single static frame.


Title: Re: Resolution of the Universe
Post by: Tglad on January 06, 2017, 06:29:51 AM
Well I don't want to argue because 1. I'm not a physicist, and 2. neither are you. So I suspect we'd both be speaking outside our knowledge area.
But I will repeat what I have read from Penrose (who is a respected physicist), which is that several people have attempted to quantise time and/or space but get stuck maintaining Lorentz invariance. The evidence of lack is that quantisations are not Lorentz invariant, and Lorentz invariance is central to there being no universal or privileged inertial coordinate frame.

and from Wikipedia:
Quote
Redshift quantization is a fringe topic with no support from mainstream astronomers in recent times. Although there are a handful of published articles in the last decade in support of quantization, those views are rejected by the rest of the field.

I won't give any more responses as it is beyond my expertise, but good luck with the idea, you could try discussing it on physicsforums.com.


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 06, 2017, 07:24:24 AM
Well I don't want to argue because 1. I'm not a physicist, and 2. neither are you. So I suspect we'd both be speaking outside our knowledge area.
But I will repeat what I have read from Penrose (who is a respected physicist), which is that several people have attempted to quantise time and/or space but get stuck maintaining Lorentz invariance. The evidence of lack is that quantisations are not Lorentz invariant, and Lorentz invariance is central to there being no universal or privileged inertial coordinate frame.

and from Wikipedia:
I won't give any more responses as it is beyond my expertise, but good luck with the idea, you could try discussing it on physicsforums.com.

Yes, Lorentz holds in the observable universe (lower case u).  The observable universe, however, is the single frame of the movie.

The model I'm proposing has unobservable components due to the nature of the interface between observer and consciousness.

It's a tough leap to make but stepping outside a paradigm isn't always easy.

But thanks again.


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 06, 2017, 09:02:40 AM
Well I don't want to argue because 1. I'm not a physicist, and 2. neither are you. So I suspect we'd both be speaking outside our knowledge area.
But I will repeat what I have read from Penrose (who is a respected physicist), which is that several people have attempted to quantise time and/or space but get stuck maintaining Lorentz invariance. The evidence of lack is that quantisations are not Lorentz invariant, and Lorentz invariance is central to there being no universal or privileged inertial coordinate frame.

and from Wikipedia:
I won't give any more responses as it is beyond my expertise, but good luck with the idea, you could try discussing it on physicsforums.com.

I would also like to note that, even though you make reference to a Wikipedia article, you omit the following, also from Wikipedia:

Based on observations of nearby galaxies, Tifft has put forward the idea that the redshifts of galaxies are quantized, or that they occur preferentially as multiples of a set number. These findings on redshift quantization were originally published in 1976 and 1977 in the Astrophysical Journal.[2][3][4] The ideas were controversial when originally proposed; the editors of the Astrophysical Journal included a note in one of the papers stating that they could neither find errors within the analysis nor endorse the analysis.[3] Subsequently Tifft and Cocke put forward a theory to try to explain the quantization. Tifft's results have been largely replicated by Croasdale[5] and later Napier and Guthrie.[6] Croasdale did a comprehensive analysis of the statistical significance and confirmed the special frame in which quantization is found to be the same over the whole sky. Since the initial publication of these results, Tifft’s findings have been used by others, such as Halton Arp, in making an alternative explanation to the Big Bang Theory, which states that galaxies are redshifted because the universe is expanding.[7][8] However, Tifft himself, when interviewed for the popular science magazine Discover in 1993, stated that he was not necessarily claiming that the universe was not expanding.[9]

https://en.wikipedia.org/wiki/William_G._Tifft

And from another paper:

"The existence of quantized red-shift as an observational fact is now well established."
https://books.google.com/books?id=92jxCAAAQBAJ&pg=PA175&lpg=PA175&dq=%22Tifft+and+Cocke%22&source=bl&ots=8ptHycEcPk&sig=RX2JjC8XFBg4eH4INsDZec4eMfU&hl=en&sa=X&ved=0ahUKEwiuov69iK3RAhWB5SYKHa1pCFIQ6AEINjAF#v=onepage&q=%22quantized%20redshifts%22&f=false

The trail that your concerns have led to is very welcome. I was not aware of these developments in theory when I first began conceptualizing this model at the end of last year. Frankly, I don't even know what sparked my brain to begin this endeavor. Maybe it was my decades-long passion for solving extremely difficult crossword puzzles - a skill which requires one to consider clues without resort to any specific context, but instead to consider all possible contexts nearly simultaneously, then making sure that any possible answer "fits" by extrapolating all possible answers into the "build" of the puzzle before committing the pen.
I like to work in pen and I like error-free committal :-).
So far, thanks to this forum, I cannot say that I solved the "Universe" puzzle error free since my "pen" originally did not include the term "boson" and I may change the term "observer" to "end-user".  Also I have to put different dimensions of time into slots that I had originally filled with "no time".

I'm sorry that you may not respond again Tglad.  You have helped.  Thank you.


Title: Re: Resolution of the Universe
Post by: youhn on January 06, 2017, 06:39:03 PM
... Things don't "jump" between two nearest (planck-distance separated) locations to neighbouring times (planck-time separated) if you look that closely. The transition is smooth and continuous.

Do we actually KNOW this?! Can we look THAT closely ... ? I would leave this as a topic of discussion, but if you have proof/examples/observations otherwise, please share!

Discretization effects only occur if you add boundary conditions.
I'm not sure what you mean by this. Doesn't everything naturally have a boundary, and thus also boundary conditions?  :hrmm:


Title: Re: Resolution of the Universe
Post by: knighty on January 06, 2017, 09:21:44 PM
quantised energy (only in bounded states, such as orbiting a nucleus) includes angular kinetic energy. But there is no quantisation of time or space... Having a minimal (approximate) distance at which you can tell things apart (Planck length) is not the same as a quantisation of space. One can also count the number of bits of information that a surface of a given area is capable of storing... but that's not the same as space being quantised. (At least, from what I have read, e.g. the last chapters of The Road to Reality by Roger Penrose).

Several researchers have played with models that quantise space, but none has been successful; the biggest problem is that you generally lose Lorentz invariance, i.e. the quantisation would look different under a Lorentz boost, and the important principle of no universal coordinate frame is lost.
Interestingly there are such concepts as space-time crystals (https://plus.google.com/117663015413546257905/posts/WTUTcYJMnGR?sfc=true) (which Josleys has worked on), where quantised lattices are invariant under certain boosts, but it doesn't apply to arbitrary boosts.

I'm not a physicist either, so let's let them answer: Carlo Rovelli (also Peter Shor comments) here (http://physics.stackexchange.com/questions/3662/does-the-discreteness-of-spacetime-in-canonical-approaches-imply-good-bye-to-str).  ;D


Title: Re: Resolution of the Universe
Post by: Chillheimer on January 07, 2017, 11:00:14 AM
Welcome howard! Thanks for sharing your ideas - very interesting; we have a very similar approach. Looking forward to more discussions and exchange with you.

There's so much text and so many points that deserve answers and more discussion, it's a bit overwhelming..
Rather than write nothing, I'll just jot down a few thoughts and will probably leave out a lot.

Where do I start..
quantum theory does not mean that the universe is voxels, or even that it is discrete. The only discrete part of quantum theory is the energy levels of orbiting electrons. But the physics of the underlying virtual particles is entirely continuous and deterministic.

I don't claim that the universe actually IS discrete voxels. My calculation was a little fun number-play; don't take my use of terms like voxels too seriously.

But: I think it is discrete in relation to the observer. This seems to match Howard's idea of the interface/observer.
I'd like to use the Mandelbrot set as a simple model to explain: You look at a rendered picture of a zoomed-in part of the Mset. It has a fixed resolution, say 1920*1080, with discrete pixels. But that is only the current snapshot at the current magnification. You can of course zoom in or out to reveal more or less detail.
And so the image you look at is at the same time discrete and continuous, always depending on the field of view of the observer.
Also, you do have an absolute limit in the Mandelbrot set, like the Planck units: when zoomed out so far that the area of the Mset from -2 to +1 is displayed as one single pixel, you have no more shapes or rules. No more working "physics" that make sense.
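
A minimal sketch of that idea (my own toy code, nothing more): the escape-time grid is discrete at any one resolution, yet re-running the very same program with a smaller window keeps revealing structure, as deep as you care to go:

```python
# Toy escape-time render of the Mset at a fixed, discrete resolution.
# "Zooming" is just re-running with a smaller (cx, cy, span) window.
def mandelbrot_grid(cx, cy, span, width=96, height=48, max_iter=100):
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            c = complex(cx + (i / width - 0.5) * span,
                        cy + (j / height - 0.5) * span)
            z, n = 0j, 0
            while abs(z) <= 2 and n < max_iter:
                z = z * z + c          # the iteration z -> z^2 + c
                n += 1
            row += "#" if n == max_iter else " "
        rows.append(row)
    return "\n".join(rows)

print(mandelbrot_grid(-0.5, 0.0, 3.0))                      # the whole set
print(mandelbrot_grid(-0.7436, 0.1318, 5e-4, max_iter=500)) # a deep zoom
```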



Regarding entropy:
I think an important point that is frequently left out with entropy is the following:
Evolution of technology has always been speeding up. We call it Moore's Law when it comes to computers. But I'm convinced that you can observe Moore's Law in biological evolution as well, and probably all the way back to the Big Bang.
I've posted this a few times, but pictures say more than a thousand words ;)
(http://www.chillheimer.de/temp/youarehere.jpg)
Couple this with the concept of shapestacking in the mandelbrot-set (http://www.fractalforums.com/mandelbrot-and-julia-set/video-shapestacking-explained-mandelbrot-sediments/) and you get a whole new perspective on entropy and the power law.
It's about different levels of complexity. Each new level of complexity has the lower levels embedded into it, consists of them, but on the new level you start with a complexity of zero. "Relative" entropy is 'reset' to zero and starts rising from there.
Complex arrangements of elementary particles form single atoms.
Complex arrangements of atoms form single molecules.
Complex arrangements of single molecules form a single cell.
Complex arrangements of single cells form complex organisms like humans.
Complex arrangements of single humans, connecting through the internet, form a new level of complexity, a global brain...

Entropy is relative!
It keeps rising, but starts all over on each level of complexity.
And this speeds up exponentially.

And I don't mean complexity as in chaos. Chaos is extremely complex. Real complexity, with 'meaningful' information, is always fractal.

Putting these observations together, I find it pointless to talk about that "end state" of the universe, where everything theoretically smooths out.
It's like Achilles and the tortoise (https://en.wikipedia.org/wiki/Zeno's_paradoxes#Achilles_and_the_tortoise). He'll never reach it.


@Tglad:
Please continue participating. If there is no interdisciplinary talk there is no evolution of ideas. No one can be a specialist in all areas.
Especially when it comes to a fractal worldview, you need to know a little bit of everything to put the whole image together.
I really miss appreciation of the polymath or Renaissance man in today's culture.
In the past I often didn't participate in discussions, for fear of looking like a fool to the specialists. But this is foolish in itself. How can we learn and grow that way?
Our society is obsessed with specialization.
And this has brought us lots of progress, no doubt.
But it also brings us into the danger of getting lost in the micro-view and missing the big picture.
We need to find a better balance.




So much for now, have to start working.. Sorry I haven't dived into more details of your ideas Howard. Another day.. ;)


edit:
on consciousness.
howard, I strongly recommend Peter Russell's "Primacy of Consciousness" - check on YouTube.
I share that view. Consciousness is in everything; the more complex and responsive to its surroundings, the more conscious a being (or object).
I like to see it as "consciousness is measurement". Even a single elementary particle is conscious on the most basic level. When two elementary particles meet, they "measure" each other's velocity, direction, energy... and this will result in a certain probability of consistent output.
If there is no interaction, it doesn't manifest/exist. Not part of our universe, irrelevant.
Double-slit, wave-particle dualism, large fullerene molecules act as a wave - if not measured. (http://physicsworld.com/cws/article/news/1999/oct/15/wave-particle-duality-seen-in-carbon-60-molecules)
"The moon isn't there if no one is looking." True imho - but it measures itself; the particles of itself interact with themselves. So it's not necessary that there is a distant observer, as the moon "observes" itself. Or should I say measures itself, is conscious of itself.. ;)
Sorry, had to get rid of this. Though far too short, this could fill several evenings of real-time discussion..


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 07, 2017, 05:34:51 PM
Quote from: Chillheimer on January 07, 2017, 11:00:14 AM (the full post above, quoted in its entirety)


Welcome back Chillheimer,

I like the diagram.  It's very close to what my mind's eye sees as I think about this puzzle.

I think the first place to start a response is to go back to the assumptions that have to be made to begin a computer simulation of a universe that returns a result exactly matching observation of our universe.

The first is that the universe is finite.  Since a one word answer to the question, "what is the universe?", can correctly be, "everything", it logically follows that "everything" is finite.
Another way to correctly answer the question...in three words...is, "ALL possible information".  
Ergo, information is finite. The process of the universe's expression of information is then finite. There is no such thing as "never" in this case; never cannot exist. I use the word "possible" since that term is central to the concept of "real". And, as far as we can tell, the universe is real in a way that excludes the "impossible" from "ALL information".

Can it ever reach a state of "finity" in reality where ALL information has been expressed?  For the purposes of the computer simulation, it has to be assumed that it does.  And what would the structure of the universe be at that point?   Once it reaches that state there can be no more.  So how does the universe "proceed" from there?  It can't...unless it starts over.
And if it's true that information is always "retained", a restart would produce a replicate from the same point singularity.

I also understand your ideas on "consciousness" as having a sliding scale.  That is why I put consciousness IN the physical universe.
In that way the moon can "measure/experience" itself but it's not going to write a book about itself.  The moon, on this level "understands" itself and anything on that sliding scale will also "understand" any physical relationship it has to the moon.

Why do I bring up "writing a book"?  The moon has no need to write a book to understand itself.  Its mode of communication is through what we are working on, physics and cosmology.  Why do WE need books.  We are "Observers"  or "end-users".
If we need to invent a method of communication with physics and cosmology it follows that the end-user isn't IN the physical universe but interfaces with it.

All this because of one assumption..."nothing is infinite".


Title: Re: Resolution of the Universe
Post by: Chillheimer on January 09, 2017, 01:37:27 PM
wow. this sucks. I just wrote roughly 2 pages and hit some wrong button. all gone. aaaarrggg...
I'll have to take the time to rewrite this another day. Have to go now. :(


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 09, 2017, 08:25:19 PM
wow. this sucks. I just wrote roughly 2 pages and hit some wrong button. all gone. aaaarrggg...
I'll have to take the time to rewrite this another day. Have to go now. :(


I hate it when that happens.

Since finite vs infinite is a bit off the thread topic here, and you have started a thread on finite/infinite elsewhere, I'd suggest moving this discussion there if you would.  I have also started a topic over there on my model with a rewritten version.
I look forward to your return.
http://www.fractalogy.org/forum/viewtopic.php?f=11&t=98


Title: Re: Resolution of the Universe
Post by: Chillheimer on January 19, 2017, 03:33:40 PM
Okay, let's get this over with. Next try ;)  (I installed Lazarus Form Recovery for Chrome and hope this will save me if this happens again)

First of all I don't like the usage of the terms Computer & Simulation in this context.
It implies so many things that are too far out.
Whenever I've used these words in discussions, it always leads to: So who is running the computer? Some kind of super-aliens? Are they a simulation of even superior mega aliens with an even higher developed technology?
All these discussions lead to nothing because people cling to that thought and everything becomes so unrealistic through it.

I personally prefer to describe it somewhat like this (keep in mind, English is not my native language; it's hard enough for me to find the right words in German already):
Every point in space can be (at least) compared to a quantum bit. (maybe it even IS a quantum bit)
A quantum bit can have infinite states at the same time.
An empty point in space is flooded by electromagnetic waves from all directions of the surrounding cosmos. It has all this information embedded at a point in time.
So all points in space are connected through radiation.

I don't see the need to use words like simulation or computer.
I very much like the term Konrad Zuse used in his outstanding paper: Rechnender Raum (https://en.wikipedia.org/wiki/Calculating_Space), which translates to "Calculating Space" (english pdf: ftp://ftp.idsia.ch/pub/juergen/zuserechnenderraum.pdf)
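
For anyone who hasn't read Zuse: he pictures physics as a cellular automaton, where each cell updates purely from its local neighbourhood. A toy one-dimensional example (my own illustration, not anything from Zuse's paper):

```python
# A 1-D cellular automaton (Wolfram's rule 110) as a toy "calculating space":
# each cell's next state depends only on its immediate neighbourhood.
RULE = 110
cells = [0] * 31 + [1] + [0] * 31      # start from a single "on" cell

for step in range(24):
    print("".join("#" if c else "." for c in cells))
    cells = [(RULE >> (4 * cells[(i - 1) % len(cells)]
                       + 2 * cells[i]
                       + cells[(i + 1) % len(cells)])) & 1
             for i in range(len(cells))]
```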


The first is that the universe is finite.  Since a one word answer to the question, "what is the universe?", can correctly be, "everything", it logically follows that "everything" is finite.
Another way to correctly answer the question...in three words...is, "ALL possible information".  
I believe a fractal approach can solve the problem of finite vs infinite.
Because a fractal has both in my opinion. (waiting for sockratease to crawl out of his cave any minute now ;))

Take an image of the Mandelbrot-Set.
It has a fixed resolution, like 1920*1080 points that have been iterated through z->z²+c to find if they are part of the Mset or not.
That still image is finite.
But if you add time and keep iterating by zooming into certain coordinates, it can go on infinitely.
It expands - like the universe.
And it just has one direction of causality, an arrow of time: you need the previous value to enter into z->z²+c.
Its entropy rises.
It's a closed yet open system.
If you take the current slice of time, the now, it is finite (like the observable universe).
But if you take the overall system, it is infinite.


Can it ever reach a state of "finity" in reality where ALL information has been expressed?  For the purposes of the computer simulation, it has to be assumed that it does.  And what would the structure of the universe be at that point?   Once it reaches that state there can be no more.  So how does the universe "proceed" from there?  It can't...unless it starts over.
And if it's true that information is always "retained", a restart would produce a replicate from the same point singularity.
That "starting over" can be observed in the Mset as well, you reach Mini-Mandelbrot-Sets and the same patterns repeat all over. But based on the previous patterns.
Same in the universe - you can observe the same fractal patterns on all levels of complexity throughout the Cosmos.
I'd file that under "strong evidence" ;)



Hm.. I had written more last time, but the train of thought has departed...
I'd better post now, before losing it again.


Title: Re: Resolution of the Universe
Post by: kram1032 on January 19, 2017, 06:24:39 PM
youhn, well, I don't know the details, to be honest. I should rather say there is no reason to believe so. With LIGO we can measure insanely tiny sizes, which is how we discovered gravitational waves, but that's still far off from measuring anything close to the Planck scale. To my knowledge, though, things work out less problematically under a continuous space/time hypothesis.

As for the occurrence of quantization, that mathematically only ever occurs when we assume a boundary. For instance, the famous "particle in a box" - if we do that calculation and make our box larger and larger, at infinity the quantization just disappears. Similarly, for a hydrogen atom, higher and higher energy states of a bound electron lie closer and closer together until they essentially behave classically. Once the electron has too much energy, it isn't bound at all any longer and resumes a continuous (though inherently imprecisely measurable) path.
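
A quick numerical sketch of that point (illustrative values only): the particle-in-a-box levels are E_n = n² h² / (8 m L²), so the spacing between adjacent levels collapses as the box width L grows:

```python
# Level spacing of a 1-D particle in a box, E_n = n^2 h^2 / (8 m L^2),
# for an electron in boxes of growing width L:
h = 6.62607015e-34    # Planck's constant, J*s
m = 9.1093837015e-31  # electron mass, kg

def gap(L, n=1):
    """Energy gap E_{n+1} - E_n in joules, for box width L in meters."""
    return (2 * n + 1) * h**2 / (8 * m * L**2)

for L in (1e-10, 1e-6, 1e-2):   # atom-sized, micron-sized, centimeter-sized
    print(f"L = {L:.0e} m  ->  gap = {gap(L):.3e} J")
# The gap falls off as 1/L^2: as L grows the levels merge into a continuum.
```
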
The universe appears to be (but the debate isn't entirely settled) entirely unbounded. "Boundaries at infinity", as would be required for such a setting, don't have any effect at all on any close state. No quantization, no "voxelized" space-time.
There certainly are theories that attempt to have such a fundamentally discrete world anyway, but to my knowledge they are, for usually good reasons, not very popular.
When you hear "fundamentally smallest scale" you shouldn't think of digital cameras with their fixed pixel-wise resolution but rather of the effective resolution that the cameras' optics will produce. There are no sharp jumps but rather any smaller details are completely washed out.
Our own eyes, for instance, have such a limit, which is about 1 arc minute in the center of our vision but significantly worse off-center. That means that if two lines are about 1 arc minute apart, we can still just about tell them apart. If it's less, we'll soon perceive them as a single line.
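
As a back-of-envelope check of that figure (my own arithmetic): two lines 1 mm apart subtend 1 arc minute at roughly 3.4 m viewing distance:

```python
import math

separation = 1e-3                         # two lines 1 mm apart, in meters
one_arcmin = math.radians(1 / 60)         # 1 arc minute, in radians
print(separation / math.tan(one_arcmin))  # ~3.44 m viewing distance
```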

This picture roughly gives you an idea:
(https://upload.wikimedia.org/wikipedia/commons/a/ae/Airy_disk_spacing_near_Rayleigh_criterion.png)
As you can see, at the top the two dots are easily distinguishable. At the bottom you will certainly see that what you are looking at isn't perfectly circular, but it may be hard to tell that it happens to be two dots. This separation can vary continuously. The minimum dot resolution is "set by the universe", so to speak. But how far they can be apart is not.


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 19, 2017, 07:04:31 PM
Quote from: Chillheimer on January 19, 2017, 03:33:40 PM (the full post above, quoted in its entirety)

I was waiting for someone to point out where infinity might lie and you got it right, congratulations!
Except the only way to truly observe it where you have found it is from outside the simulation....outside the universe.  And then you would only see a bright white light that is "on" forever...into infinity.

I'm going to stick with simulation because other people's uninformed questions about who or what is controlling it or them should not affect the model. It's pretty obvious that each person controls themselves.

As for the answer to those questions, it's most likely humans of the future who develop the simulation, as stated in the description.  The physics of the universe within the sim remain unchangeable, the physics cannot be manipulated.  It could not properly proceed otherwise.

The only source of options to manipulate anything goes through a person.  The construct of "consciousness" is the medium with which to implement options of manipulation.  The construct of consciousness is so widely accepted and self-apparent that people often don't even think about it as they imagine an innovative option...a new choice or set of choices...and add them to already previously known options stored in their own memory.
So your consciousness controls your body just as an "end-user" controls an avatar.  Your body IS an avatar for your conscious.  Your body is an avatar no matter how you try to define "human".  Your conscious is the "user" of that avatar.  
Does any rational person think to ask, "Is my conscious a super mega-alien controlling my body?"  After all, you cannot tell WHAT your conscious looks like.  Would it matter what your conscious looked like even if you could tell?  No.  You would still be you.  Your conscious is your expression of your identity.

What I am saying in the model is that, your conscious is wholly dependent on your body's physical existence so it is part of the physical universe.  It uses your body (interfaces with an avatar) to acquire information through optional manipulations of the physical in compliance with the underlying "laws" of physics.

My model then goes up fractal with it....it "zooms out".  Your conscious, being part of the physical universe, then interfaces with an extra-universal conscious.
You cannot tell WHAT that looks like either but it is still YOU.  Your inter-universal conscious is an electromagnetic field using your physical body as an avatar and that field is serving as an avatar for the extra-universal conscious....an electromagnetic field that is YOU.  But the extra-universal you has access to the infinite and thus can access infinite possible options.  This brings the CONCEPT of infinity into the physical universe but not the actual OCCURRENCE of infinity.

Over the last couple weeks I have filled out the model with more detail (yes, it's longer :-) ) and started a facebook page for it.

So...what happens when you make consciousness within a universe the highest level of interface...ie the last stop for fractal expression of interfacing?

https://www.youtube.com/watch?v=RPmfgHwuLhY

If the dog could talk it might ask, "How are those super-mega aliens DOING that?"


Title: Re: Resolution of the Universe
Post by: youhn on January 19, 2017, 11:56:30 PM
The minimum dot resolution is "set by the universe", so to speak. But how far they can be apart is not.

The whole post was a good read, but this was a kind of aha-moment for me. This would mean things smaller than the Planck length can exist, if I understand it correctly. We need electron microscopy to make the world visible below the atomic scale. That's a jump of about four orders of magnitude, down from the wavelength of light (10^-7 m) to the size of an atom. No current technology can look into an atom (10^-11 m) or an electron, but we can smash those particles together so they scatter and leave traces. Quarks (10^-18 m), for example, are about eight orders of magnitude smaller than a hydrogen atom; that length is about the resolution of LIGO. How does time scale at these "zoom depths"? Do we have to look at smaller and smaller timeframes, in order to observe smaller and smaller sizes? We can freeze matter down to almost absolute zero, but this does not conserve (macro) structure and (nano and lower scale) behaviour. I wonder what the future will bring. Would we be able to look more directly at these very small scales?
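
A quick way to sanity-check those jumps - an "order of magnitude" is a factor of ten, i.e. a difference in log10 (same rough figures as above):

```python
import math

scales = {                      # rough sizes in meters, as quoted above
    "wavelength of light": 1e-7,
    "atom":                1e-11,
    "quark / LIGO":        1e-18,
    "Planck length":       1.6e-35,
}
ref = scales["wavelength of light"]
for name, size in scales.items():
    print(f"{name:20s} {size:8.1e} m  "
          f"({math.log10(ref / size):4.1f} orders below light)")
```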

In general I don't really like the idea of the universe being a simulation. At the same time I do have some sentimental feeling for digital physics, but I do not believe the world is strictly digital as in 0 and 1 (or similar ON, OFF). The universe could rely on computing methods, if you see a computer as a general information-processing device. Then each state of the universe would get processed into the next iteration of itself. There is no separation between the computer, the program, the input and the output. The world computes itself from the initial state to whatever the logic of the smallest parts drives it to. If that initial state was either completely ordered or completely chaotic (which is a kind of order in itself), then there would be no differentiation, and time and space would have so little meaning you could say they don't exist. Somewhere along the shifts of configuration the basic principles of the universe were crystallized. Massive phase changes of the very young universe caused the formation of different particles. Not a smooth process, but more in waves or bursts. In the current age of the universe the phase seems stable over long periods of time; at the same time the structure has become more fragmented, giving more complexity at different orders of magnitude. How is it possible that (mostly flow-related) phenomena look the same at massively different scales? This would imply a computational method interwoven across different scales, which seems to contradict a more grid-like computer.

Another angle. How would dark energy and dark matter look at the very small scales?


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 20, 2017, 08:35:16 AM

First of all I don't like the usage of the terms Computer & Simulation in this context.
It implies so many things that are too far out.

I would like to suggest a readjustment of perspective.  I completely understand your hesitation about terminology.

Terminology can be a quirky thing.  Sometimes a term, all by itself, can produce hard and/or soft psycho-social restrictions among an entire populace that can limit an individual's perspective.  I therefore propose that the term, "mainstream science", be replaced by, "prevailing psycho-social mindset of empiricism".

Whereas "mainstream" conceptually locks the perception of useful scientific endeavor into recent history, that's only a small part of the entire river system of inquiry I discussed here:
https://www.facebook.com/anomalous.howard.3/posts/144392299395704

Here's the crux of it....There's been a major assumption in the prevailing psycho-social mindset of empiricism which insists that there can be no condition that exists where "not universe" is possible. 

The arbitrary and unwarranted restrictive effect of that specific mindset is well illustrated by the following.  Note how its persistence derailed open-minded exploration and has led to ever greater explanatory convolutions in order to avoid the obvious even as observational evidence accumulates which demands consideration of "not universe" as a rational component of the entirety of reality.

"In the 1920s, theoretical physicists, most notably Albert Einstein, considered the possibility of a cyclic model for the universe as an (everlasting) alternative to the model of an expanding universe. However, work by Richard C. Tolman in 1934 showed that these early attempts failed because of the cyclic problem: according to the Second Law of Thermodynamics, entropy can only increase.[1] This implies that successive cycles grow longer and larger. Extrapolating back in time, cycles before the present one become shorter and smaller culminating again in a Big Bang and thus not replacing it. This puzzling situation remained for many decades until the early 21st century when the recently discovered dark energy component provided new hope for a consistent cyclic cosmology.[2] In 2011, a five-year survey of 200,000 galaxies and spanning 7 billion years of cosmic time confirmed that "dark energy is driving our universe apart at accelerating speeds."[3][4]

One new cyclic model is a brane cosmology model of the creation of the universe, derived from the earlier ekpyrotic model. It was proposed in 2001 by Paul Steinhardt of Princeton University and Neil Turok of Cambridge University. The theory describes a universe exploding into existence not just once, but repeatedly over time.[5][6] The theory could potentially explain why a repulsive form of energy known as the cosmological constant, which is accelerating the expansion of the universe, is several orders of magnitude smaller than predicted by the standard Big Bang model.

A different cyclic model relying on the notion of phantom energy was proposed in 2007 by Lauris Baum and Paul Frampton of the University of North Carolina at Chapel Hill.[7]

Other cyclic models include Conformal cyclic cosmology and Loop quantum cosmology."

https://en.wikipedia.org/wiki/Cyclic_model


Title: Re: Resolution of the Universe
Post by: Chillheimer on January 20, 2017, 12:57:02 PM
I'm going to stick with simulation because other people's uninformed questions about who or what is controlling it or them should not affect the model. It's pretty obvious that each person controls themselves.
Don't underestimate the reluctance of people towards theories because of "badly chosen wording".

But putting this aside, I really think "simulation" is actually wrong. The first sentence about simulation on Wikipedia:
Simulation is the imitation of the operation of a real-world process or system over time.
Imitation of a real world process.
That's where I see a major problem. There simply is no need to add this layer of unknown, unproven (and imho false) assumption.

same for "computer simulation":
A computer simulation (or "sim") is an attempt to model a real-life or hypothetical situation on a computer so that it can be studied to see how the system works
to be studied, model a real-life situation on a computer...

It's not just that people think like I described; it is a logical consequence of the definition of the terms used.

If (as youhn points out as well) it is the actual universe itself, computing itself, iterating as a huge fractal feedback-system, there is no need for the terms.
But I don't want to overly harp on about this ;)

Have you read into Zuse's Calculating Space? I'd love to hear your opinion on it and where you see parallels/differences to your theory.




As for the answer to those questions, it's most likely humans of the future who develop the simulation, as stated in the description.  The physics of the universe within the sim remain unchangeable, the physics cannot be manipulated.  It could not properly proceed otherwise.
--as stated in what description?--
Hm. And these humans of the future were the first simulators? So what is their universe made of?
what about causality?

what do you see as evidence for this?

(don't get me wrong, I like many parts of your ideas and deliberately pick the parts where I think there's open questions)


Your inter-universal conscious is an electromagnetic field using your physical body as an avatar and that field is serving as an avatar for the extra-universal conscious....an electromagnetic field that is YOU.  But the extra-universal you has access to the infinite and thus can access infinite possible options.  This brings the CONCEPT of infinity into the physical universe but not the actual OCCURRENCE of infinity.
Nice image.
Reminds me of talks about psychedelic experiences and the concept of entheogenic substances as a "shortcut" to this "extra-universal you".
Too bad that the topic is stigmatized and usually not discussed or studied without restrictions or drifting into illegality.
I found the short book "Being Human" by Martin Ball very insightful on this. Recommended read!



For instance, the famous "particle in a box" - if we do that calculation and make our box larger and larger, at infinity the quantization just disappears. Similarly, for a hydrogen atom, higher and higher energy states of a bound electron lie closer and closer together until they essentially behave classically. Once the electron has too much energy, it isn't bound at all any longer and resumes a continuous (though inherently imprecisely measurable) path.
The universe appears to be (but the debate isn't entirely settled) entirely unbounded. "Boundaries at infinity", as would be required for such a setting, don't have any effect at all on any close state. No quantization, no "voxelized" space-time.
If I understand that correctly the fractal view I described above regarding finite vs. infinite both being present in a fractal fits in there.

When you hear "fundamentally smallest scale" you shouldn't think of digital cameras with their fixed pixel-wise resolution but rather of the effective resolution that the cameras' optics will produce. There are no sharp jumps but rather any smaller details are completely washed out.
That is kind of what I mean. Using the term voxels in the initial post was misleading and just playing around with numbers.

Our perception of the world seems quantized when looking very closely. Extreme example: long exposures of the Hubble deep field, with just a few single photons arriving from whole galaxies.
But that is just our perception, our current frame of reference. Our camera, as you say.


The whole post was a good read, but this was a kind of aha-moment for me. This would mean things smaller than the Planck length can exist, if I understand it correctly.
Yes, but not within our frame of reference; they are out of reach (if I understand correctly).

How does time scale at these "zoom depths"? Do we have to look at smaller and smaller timeframes, in order to observe smaller and smaller sizes?
It's linked: the smaller the space, the smaller the time intervals involved.
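
In the simplest reading (a rough sketch of my own, nothing more): the natural time scale attached to a length is its light-crossing time, t = x / c:

```python
c = 2.99792458e8   # speed of light, m/s
for name, x in [("atom", 1e-10), ("proton", 1e-15), ("Planck length", 1.6e-35)]:
    print(f"{name:14s} {x:7.1e} m  ->  t = x/c = {x / c:.1e} s")
```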


In general I don't really like the idea of the universe being a simulation. At the same time I do have some sentimental feeling for digital physics, but I do not believe the world is strictly digital as in 0 and 1 (or similar ON, OFF).
As written above, I think reality itself, the "big picture", might not be quantized into 0 and 1, but our perception of reality is. A single photon reaches you or it doesn't. That tiny spot in Hubble's deep field exists, or it doesn't. At least from your perspective.


Another angle. How would dark energy and dark matter look at the very small scales?
Even more than gravity, dark energy has no significant impact on small scales. Even within neighbouring galaxies it is too small to measure.


There is no separation between the computer, the program, the input and the output. The world computes itself from the initial state to whatever the logic of the smallest parts drives it to.
Wholeheartedly agree!
And to add a little more confusion: would this make the universe deterministic? Every action and each of your own decisions predestined from the initial conditions?
I personally don't think so, and I think this is connected to Heisenberg's uncertainty principle. Minute (and impossible to measure) differences at the "smallest" scales will blow up to a totally different end result.
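
A toy illustration of that blow-up (this shows classical chaos, i.e. sensitive dependence on initial conditions, not quantum mechanics itself): two logistic-map trajectories that start 10^-15 apart end up completely different:

```python
def logistic(x, r=4.0):        # a standard chaotic toy map
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-15        # two initial conditions, 1e-15 apart
for n in range(61):
    if n % 10 == 0:
        print(f"n = {n:2d}   |a - b| = {abs(a - b):.3e}")
    a, b = logistic(a), logistic(b)
# The separation grows roughly exponentially until it saturates at O(1).
```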

..
Damn, these posts always get so long. Now howard has posted even more regarding my terminology problems.
I therefore propose that the term, "mainstream science", be replaced by, "prevailing psycho-social mindset of empiricism".
As much as I would like to have an alternative to "mainstream science", "prevailing psycho-social mindset of empiricism" might be correct, but it is so un-catchy that it won't be used, for sure. I have a hard time remembering it for longer than 5 minutes ;)
Sorry, I don't have a better idea.

there can be no condition that exists where "not universe" is possible. 
says who?
As I understand the concepts, the multiverse and also the Big Bang theory have "not universe" embedded.
It's just not used often, because it makes little sense to speculate about something that we (very probably) will never be able to falsify.


The arbitrary and unwarranted restrictive effect of that specific mindset is well illustrated by the following.  Note how its persistence derailed open-minded exploration and has led to ever greater explanatory convolutions in order to avoid the obvious even as observational evidence accumulates which demands consideration of "not universe" as a rational component of the entirety of reality.

"In the 1920s, theoretical physicists, most notably Albert Einstein, considered the possibility of a cyclic model for the universe as an (everlasting) alternative to the model of an expanding universe. However, work by Richard C. Tolman in 1934 showed that these early attempts failed because of the cyclic problem:
according to the Second Law of Thermodynamics, entropy can only increase.[1] This implies that successive cycles grow longer and larger. Extrapolating back in time, cycles before the present one become shorter and smaller culminating again in a Big Bang and thus not replacing it. This puzzling situation remained for many decades until the early 21st century when the recently discovered dark energy component provided new hope for a consistent cyclic cosmology.
I just have to throw in the bifurcation diagram here again, because it fits the "cyclic" universe idea so well. Except that it's cycles of growing complexity, as with shapestacking in the Mandelbrot set.

phew. so much for today.


Title: Re: Resolution of the Universe
Post by: kram1032 on January 20, 2017, 01:45:13 PM
It's not just that we cannot possibly measure distances smaller than roughly the Planck scale - remember, we have pretty tight control over physics these days. It's more fundamental than that. There is NO POSSIBLE physical process (in our current understanding) that could properly distinguish between points that close together. It's not just imperceptible to us, but also to the involved particles!
Once stuff comes so close together as to inhabit basically the same space, the momentum of those particles becomes so insanely uncertain that it's very likely they shoot away at crazy speeds. It's not a lasting situation.
This is what Quantum Mechanics says anyway. But if you put stuff together this closely, you will potentially also have to think about General Relativity. And then you'll basically get a black hole center singularity. (Note, energy, momentum and relativistic mass are all linked to each other so if stuff somehow comes closer together than the Planck Scale without having been kicked apart by uncertain momentum before, you may get insane momenta and with them insane energy which in turn might be enough to get a black hole)
This is where experimentally confirmed physics as it currently stands just breaks down. From that point on you need new models like Super String Theory, but thus far nobody has proposed an experiment to confirm any of that which we could actually plausibly build, and the maths is so tough that the area has lost quite a bit of momentum in recent years.
In fact, the relativistic idea is how the Planck Length arises in the first place. Any closer would cause a Schwarzschild Black Hole if you just naively combine Quantum Mechanics and General Relativity and hope it works. It's possible that this entire idea falls completely flat with what ever unified theory ends up working. It's simply the regime where all bets are off. It's the modern equivalent of Here Be Dragons - a white spot on a map. One we are still trying to fill in.
There actually are slightly different fundamental constants possible if you take slightly different starting assumptions. (I can't quite recall the details; it had something to do with calculating electron interactions in an electromagnetic field.) Do not think of the Planck Scale as a literal magical limit. The important thing is the order of magnitude. We expect new physics to show itself rather clearly anywhere between roughly 10 and 0.1 Planck Lengths.
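
To make the order of magnitude concrete: the Planck units are just the unique combinations of hbar, G and c with the right dimensions. A minimal Python sketch, using approximate CODATA values (treat digits beyond the third as noise):

Code:
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G    = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8     # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)  # ~1.616e-35 m
planck_time   = math.sqrt(hbar * G / c**5)  # ~5.391e-44 s
print(planck_length, planck_time)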

And then there are conformal ideas of gravity which have a certain property of scale-free-ness: In these you cannot apply a ruler out of thin air. You can only ever measure distances "relatively". Like, you can compare how many finger lengths your arm spans, or you can figure out how many hydrogen-hydrogen bonds span your finger, but any fixed value you put to it is just a convenience. "1 meter" here doesn't have any absolute meaning, but rather means, as of right now, "the distance light happens to travel in suchandsuch many ticks of an atomic clock".
If you bring this idea to a logical conclusion then there is actually no reason to believe that anything special should happen around the Planck Scale. It's just another freely exchangeable unit of reference. - But to my knowledge, while such conformal ideas can be made consistent with General Relativity as we know it today, they also don't have any further evidence beyond that yet. At this point we are randomly (well, somewhat systematically) throwing paint at the wall and seeing what sticks. While being blind. So we are actually more like guessing what sticks and then checking whether what we can already see is at least consistent with our guesses. Eventually we'll design experiments that can actually distinguish many of these theories. And in fact, LIGO already ruled out a TON of ideas that had been thrown around, because they would have required very different observations:
- Gravitational waves do exist
- they have a certain shape
- they happen with a certain (surprisingly high) frequency
All three of those were able to rule out a variety of ideas. As we get more data, especially as we can pinpoint these mergers more closely once VIRGO also goes online, more theories will fall away as inconsistent with reality, and the remaining ones (which will still be way more than enough) can be focused on more carefully.

Finally, I'm not quite sure what you mean by time-scale when measuring a length. The Planck Time is the time it takes a photon to travel one Planck Length. This is an insanely short time, just like the distance is an insanely short distance.
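(So dividing the one by the other must give back c. A one-line check in Python; the values are the CODATA 2018 figures, so approximate:

Code:
planck_length = 1.616255e-35  # m
planck_time   = 5.391247e-44  # s
print(planck_length / planck_time)  # ~2.998e8 m/s, i.e. the speed of light
)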
But if you look at, say, the LIGO results, you'll find (and in fact I'm sure you've already heard) that gravitational waves just happen to be largely in the audible part of the spectrum. You can directly transform gravitational waves to audio waves and hear pretty much all their structure. The data you likely already heard wasn't sped up or slowed down (actually the original LIGO video featured an original-speed version and one sped up by an octave or so to get a clearer idea of the lower frequencies), it's actually at real-life speed.
These waves have insanely tiny amplitudes (way smaller than a proton diameter), but their frequencies are in the audible range. In so far I wouldn't expect the idea of a zoomed in timeframe to be particularly meaningful either way.


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 20, 2017, 05:39:14 PM

Simulation is the imitation of the operation of a real-world process or system over time.
Imitation of a real world process.
That's where I see a major problem. There simply is no need to add this layer of unknown, unproven (and imho false) assumption.

same for "computer simulation":
A computer simulation (or "sim") is an attempt to model a real-life or hypothetical situation on a computer so that it can be studied to see how the system works
to be studied, model a real-life situation on a computer...

Studying the situation is exactly what humans try to do.  I do not see how assuming that we are doing so in a simulation would necessarily be a false assumption.

Quote
Have you read into Zuse's Calculating Space? I'd love to hear your opinion on it and where you see parallels/differences to your theory.

Unfortunately your link did not work.  I have since found a translation and downloaded it.  I promise to go over it as time permits.  Thanks for the suggested reading.

Quote
--as stated in what description?--
Hm. And these humans of the future were the first simulators? so what is their universe made of?
what about causality?

It doesn't necessarily have to be humans of the future, although that was the possibility I chose in my initial post here, which is what I meant by "description".

Whatever or whoever would be responsible for the simulation would be doing so as a means of study...just as you say.


Quote
As I understand the concepts, multiverse and also the big bang theory have "not universe" embedded.
It's just not used often, because it makes little sense to speculate about something that we (very probably) will never be able to falsify.

A computer sim universe brings "not universe" into a model in a way such that there is a systemic interaction between universe and not universe.  It allows for extension of assumptions that might begin to better describe "not universe" instead of just ignoring it as an inaccessible quantity.


Quote
"In the 1920s, theoretical physicists, most notably Albert Einstein, considered the possibility of a cyclic model for the universe as an (everlasting) alternative to the model of an expanding universe. However, work by Richard C. Tolman in 1934 showed that these early attempts failed because of the cyclic problem: I just have to throw in the bifurcation diagram here again, because it fits the "cyclic" universe idea so well. except that it's cycles of growing complexity, as with shapestacking in the mandelbrot-set.


I think this is where you might be improperly inferring that the concept of fractal infinity produces actual interuniversal infinity.  The concept is never the real thing.

The following, unfortunately, is going to continue the trend of long posts that we seem to be stuck with... ^-^  It's one of the recent additions to the sim model:

Black Holes and Information Preservation (Hair)

Keeping in mind that The Universe is being considered by many cosmologists to be the result of a computer simulation, it would then be likely that a black hole acts as an information filter. Here I propose a black hole as a "data port": certain information will be allowed to pass through for "collection", "analysis" and other feedback purposes.
In the model I have proposed, each iteration contains changes that are then "projected" forward into the next iteration, producing the dimension of time that we experience and explaining why E=mc2 without resorting to highly advanced mathematics.

https://www.facebook.com/anomalous.howard.3/posts/144743669360567

(or see: https://www.theguardian.com/science/2010/oct/18/einstein-relativity-science-book-review
from one of six titles vying for the 2010 Royal Society Prize for Science Books: "Why Does E=mc2?" by Brian Cox and Jeff Forshaw

"Did you know that you're travelling at the speed of light? Not just you: your book, your chair, the room around you, your home. In fact, everything is moving at the speed of light.

Don't feel it? Don't worry, no one else did either until Albert Einstein redefined the substance of reality at the start of the 20th century. Neither Galileo, Michael Faraday, James Clerk Maxwell or Isaac Newton knew about the speed of light thing, despite laying the foundations for the insights that the Austrian patent-clerk-turned-physicist would eventually have.

Let me clarify. We are all moving at a speed "c" that happens to correspond with the speed of light as it moves through a vacuum in normal space. Except that our movement is through a 4D co-ordinate system called spacetime.")

Any information that is necessary to reproduce the physical contiguity within the next iteration must be held within the universe, while information regarding "non-physical" aspects passes through the data port.

The "soft hair" now proposed for black holes is the information of the "physical" awaiting (it's only a "wait" of one Plancktime) the "unfolding" of the next iteration.

Any wave function that has undergone collapse out in The Universe gets translated and recoded by the "receptor" upon which the collapse occurred. This translation is recoded as an "effect".
The mechanism by which the transition from wave function to effect occurs becomes part of the information. That encoding process of collapse carries information through a series of electromagnetic wave-form transitions along the nervous system and on to the brain. One product of this becomes memory.

If your body is the receptor of a collapse, an initial transition happens within the "person's" nervous system (as part of the Universe) within a framework of critical dynamics. Transition information becomes part of each transitioned wave form's information as it becomes what we call a perception. (You could call the sum of transition information a "motive". This would be like tracing a human action back to its root cause, or motive, by tracking backward in that person's history to decode "why" that action was undertaken to begin with.)

The transition information (the motive) is not information concerning the "physical" make-up of the universe so it IS NOT NECESSARY for the "physical" unfolding of the next iteration of universe. The transition information (motive) can pass through the data port.

How this is possible is summarized here:
From:
Viewpoint: Black Holes Have Soft Quantum Hair
"Strominger had an important insight in 2014 [4] while investigating a different problem. He realized that there are an infinite number of conservation laws that govern the scattering of gravitons—the elementary excitations in a quantum theory of gravity. Working with his students, Strominger realized soon thereafter that a similar result holds for electromagnetism [5]. Currently, he is collaborating with Hawking and Perry to apply this insight to black holes. In the new paper, the authors illustrate their ideas by considering electromagnetism in the presence of a black hole.
The key to their argument about black hole hair is provided by new conservation laws that generalize the usual notion of conservation of electric charge. The total charge in a region can be obtained by integrating the radial component of the electric field around a sphere surrounding the region. If no charge enters or leaves the region, its value is independent of time. Strominger’s generalization is based on integrating, over a sphere of infinite radius, the radial electric field weighted by an arbitrary function. It turns out [5] that this integral is still conserved. This provides an infinite number of new conserved quantities.

This observation connects to black hole hair in the following way. Using Gauss’ theorem, one can convert the surface integral describing the new conserved charge to a volume integral over all space. In the absence of black holes, the new conservation law simply means that this volume integral in the past is equal to the integral in the future. However, if black holes are present, the integral in the future must include a contribution over the black hole horizon.
If both gravity and electromagnetism are described classically, the contribution to the new charges coming from the black hole horizon must vanish. But Hawking, Perry, and Strominger argue that the situation is very different when electromagnetism is described quantum mechanically. To understand the difference, first consider the vacuum state and then add one photon. The result is a new quantum state with energy equal to the energy of the photon. As Strominger showed [5], if one takes the limit as the photon energy goes to zero (that is, the photon becomes “soft,” with vanishing energy), the result is a new state, which can be called a new vacuum because it has essentially the same energy as the original vacuum state. The first vacuum is turned into the second by acting with an operator that is just the quantum version of the new conserved charge.

The authors’ work now shows that acting with this same operator on a black hole horizon adds photons with essentially zero energy. These photons make up what they call the “soft hair” on a black hole. Since there are an infinite number of new charges, there are an infinite number of soft hairs that a black hole can support. Furthermore, the researchers demonstrate that when a charged particle falls into the black hole, it excites some of this soft hair. The exact conservation of the new charges implies that when a black hole evaporates, the information about the hair on the horizon must come out in the Hawking radiation.

It is important to note that this paper does not solve the black hole information problem. First, the analysis must be repeated for gravity, rather than just electromagnetic fields. The authors are currently pursuing this task, and their preliminary calculations indicate that the purely gravitational case will be similar. More importantly, the soft hair they introduce is probably not enough to capture all the information about what falls into a black hole. By itself, it will likely not explain how all the information is recovered when a black hole evaporates, since it is unclear whether all the information can be transferred to the soft hair. However, it is certainly possible that, following the path indicated by this work, further investigation will uncover more hair of this type, and perhaps eventually lead to a resolution of the black hole information problem."
https://physics.aps.org/articles/v9/62
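
The "integrate the radial component of the electric field around a sphere" step in that quote is just Gauss' law, which is easy to check numerically. A rough sketch in Python/NumPy, assuming nothing beyond ordinary electrostatics (the arbitrary weighting function in Strominger's generalization is not captured here); the charge is placed off-centre to show the surface integral still returns q/eps0:

Code:
import numpy as np

eps0 = 8.8541878128e-12
q    = 1e-9                      # 1 nC test charge
src  = np.array([0.3, 0.1, 0.0]) # charge sits off-centre, inside the unit sphere

# crude midpoint quadrature over the unit sphere
n_theta, n_phi = 400, 400
theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
phi   = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
T, P  = np.meshgrid(theta, phi, indexing="ij")

# surface points double as outward unit normals on the unit sphere
pts = np.stack([np.sin(T)*np.cos(P), np.sin(T)*np.sin(P), np.cos(T)], axis=-1)
r_vec = pts - src                       # vector from charge to surface point
r     = np.linalg.norm(r_vec, axis=-1)
E     = q / (4*np.pi*eps0) * r_vec / r[..., None]**3  # Coulomb field
E_r   = np.sum(E * pts, axis=-1)        # radial (outward) component

dA   = np.sin(T) * (np.pi/n_theta) * (2*np.pi/n_phi)  # area element
flux = np.sum(E_r * dA)
print(flux * eps0 / q)   # -> ~1.0, i.e. the integral recovers q/eps0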



Title: Re: Resolution of the Universe
Post by: anomalous howard on January 20, 2017, 07:36:55 PM
Have you read into Zuse's Calculating Space? I'd love to hear your opinion on it and where you see parallels/differences to your theory.

I would have to say that, when you consider what I propose for black holes and that each galaxy is running as a subroutine with a feedback loop operating at Plancktime,
the only real difference is that my model allows for a mechanism of data input/output, where the workings of physics in the universe are just as mechanistic as Zuse postulates but WE are not mechanistic in our range of possible responses to that physics (this point is made in that video of the dog I posted). Otherwise we are very close. Perhaps if Zuse had had today's information about black holes available to him, he might have arrived at a model much like my own.

I probably should add that the subroutine feature of my model can explain the homogeneity observed in surveys of space and why there is no specific direction from which a "big bang" occurs. It also shows why every galaxy has a black hole at its center. This is why I prefer the term "unfolding".

It also explains why Tifft can make the statement, "Galaxies are separated in 3-D time, which we have misinterpreted as separation in space."
https://www.scientificamerican.com/article/is-time-quantized-in-othe/

Other reading:
https://en.wikipedia.org/wiki/Digital_physics
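
For anyone who hasn't met Zuse's idea before: Calculating Space pictures physics as a cellular automaton, a lattice of cells each recomputed every tick from its neighbours. Here is a toy 1D flavour of that in Python (the classic rule 110; this is only the flavour of the idea, not Zuse's actual construction, which used more elaborate lattices):

Code:
RULE = 110

def step(cells):
    # each cell's next value depends only on (left, self, right)
    n = len(cells)
    return [(RULE >> (cells[(i-1) % n]*4 + cells[i]*2 + cells[(i+1) % n])) & 1
            for i in range(n)]

cells = [0]*40 + [1] + [0]*40          # one "excited" cell
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)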


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 20, 2017, 11:11:34 PM
Programming the Universe is a book by Seth Lloyd, professor of mechanical engineering at the Massachusetts Institute of Technology. The book proposes that the universe is a quantum computer, and that advances in the understanding of physics may come from viewing entropy as a phenomenon of information rather than simply thermodynamics. Lloyd also postulates that the universe can be fully simulated using a quantum computer.....
https://en.wikipedia.org/wiki/Programming_the_Universe

In this video, starting at about the 17 minute mark, Lloyd explains a biological cell's nucleus in a way that shows it as a fractal re-expression of how I described the function of a black hole as a "filter" for information passing into the black hole/data port.
Right down to Hawking's "hair".

https://www.youtube.com/watch?v=I47TcQmYyo4

Then at 32 mins he explains how a 300 bit quantum computer will be able to run solutions for every particle in the universe simultaneously.
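
The arithmetic behind that 300-qubit remark is easy to check: n qubits span 2^n basis states, and 2^300 is already in the neighbourhood of the usual ~1e80 to ~1e90 estimates for the particle count of the observable universe. In Python:

Code:
n_states = 2**300
print(n_states)            # exact integer, thanks to Python bignums
print(len(str(n_states)))  # 91 digits, i.e. about 2e90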

Then there's
http://spectrum.ieee.org/nanoclast/semiconductors/materials/quantum-dots-made-from-graphene-help-realize-their-promise-for-quantum-computing

I might as well also suggest the following free e-book:
http://www.freebookcentre.net/physics-books-download/Hacking-Matter-[PDF-212p].html


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 21, 2017, 05:31:58 PM
Subroutines, Quantum Computing and "Spooky action at a distance"...Removing Some of the "Magic".

The following will be incorporated into "Musings..."
with background material drawn from the April 1985 Physics Today article by N. David Mermin.

https://en.wikipedia.org/wiki/David_Mermin

http://people.westminstercollege.edu/faculty/ccline/courses/phys301/PT_38(4)_p38.pdf

The article actually begins with a reference to Arthur C. Clarke's "Third Law":
"Any sufficiently advanced technology is indistinguishable from magic."

Here's the first paragraphs of the article:
______________________________________________________________
N. David Mermin
Quantum mechanics is magic

In May 1935, Albert Einstein, Boris Podolsky and Nathan Rosen published an argument that quantum mechanics fails to provide a complete description of physical reality. Today, 50 years later, the EPR paper and the theoretical and experimental work it inspired remain remarkable for the vivid illustration they provide of one of the most bizarre aspects of the world revealed to us by the quantum theory. Einstein's talent for saying memorable things did him a disservice when he declared "God does not play dice," for it has been held ever since that the basis for his opposition to quantum mechanics was the claim that a fundamental understanding of the world can only be statistical. But the EPR paper, his most powerful attack on the quantum theory, focuses on quite a different aspect: the doctrine that physical properties have in general no objective reality independent of the act of observation. As Pascual Jordan put it: "Observations not only disturb what has to be measured, they produce it. ... We compel [the electron] to assume a definite position. ... We ourselves produce the results of measurement." Jordan's statement is something of a truism for contemporary physicists.
Underlying it, we have all been taught, is the disruption of what is being measured by the act of measurement, made unavoidable by the existence of the quantum of action, which generally makes it impossible even in principle to construct probes that can yield the information classical intuition expects to be there.  Einstein didn't like this. He wanted things out there to have properties, whether or not they were measured.

We often discussed his notions on objective reality. I recall that during one walk Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it.
______________________________________________________________

Subroutines, Quantum Computing and "Spooky action at a distance"...Removing Some of the "Magic".

One of the problems that Einstein had with his own theory is illustrated in his correspondence with Max Born. Here's a quote from one of his letters, also reproduced in the above article:

"That which really exists in B should...not depend on what kind of measurement is carried out in part of space A; it should also be independent of whether or not any measurement at all is carried out in space A. If one adheres to this program, one can hardly consider the quantum-theoretical description as a complete representation of the physically real. If one tries to do so in spite of this, one has to assume that the physically real in B suffers a sudden change as a result of a measurement in A.
My instinct for physics bristles at this."

In another letter:
"I cannot seriously believe in [the quantum theory] because it cannot
be reconciled with the idea that physics should represent a reality in time and space, free from spooky actions at a distance."

 The "spooky actions at a distance" (spukhafte Fernwirkungen) are the acquisition of a definite value of a property by the system in region B by virtue of the measurement carried out in region A."

At the time there were no computers, let alone quantum computers.
In the youtube video I posted yesterday, Seth Lloyd (the "inventor" of quantum computing) explains how superposition allows for a qubit to represent both a 1 and a 0 at the SAME TIME.  I also linked to an article on graphene quantum dots which produce FOUR simultaneous quantum states.  (That's truly phenomenal imo)

A two state qubit can be made to READ AS just a 1 or just a 0 when the proper stimulus is applied to it.  Also explained...and shown in realtime in another video: https://www.youtube.com/watch?v=zNzzGgr2mhk

So in a quantum computer running an integrated series of subroutines there can be a subroutine that is the qubit solution for "Moon".  Then there's the subroutine that is the qubit solution for "Human Avatar".  As both a simultaneous 1 and 0 the Moon solution relative to the Human Avatar solution only "reads out" FOR the Avatar when the Avatar requests it....or when the Avatar decides to apply the proper stimulus to the Moon solution.  The request (or "switch" to apply the proper stimulus for readout) is simply made by directing your eyes to it.  Until then (for energy efficiency reasons, I'm sure) the Moon Solution, as qubit, is only the PROBABILITY held in the qubit.
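
To be clear about what a qubit "being both 1 and 0" means operationally: the state is a unit vector, and a readout samples a definite 0 or 1 with the Born-rule weights. A minimal NumPy sketch; this illustrates only superposition and readout, not the "Moon subroutine" part, which is my own speculation:

Code:
import numpy as np

rng = np.random.default_rng(0)
psi = np.array([1.0, 1.0]) / np.sqrt(2)      # equal superposition of |0> and |1>
probs = np.abs(psi)**2                       # Born rule: [0.5, 0.5]
print(rng.choice([0, 1], size=10, p=probs))  # each readout is a definite 0 or 1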

And this answers Einstein's question solving the "spooky action at a distance" conundrum.

Now you should be able to fully understand that this type of interaction, where YOU through a Human Avatar have total freedom as to what quantum solutions you would like to produce, requires that an interface with your avatar is established between that which is IN the computer (the rules of physics in a qubit potentiality of the solution for Universe) and YOU as an end-user OUTSIDE the computer.

When you try to measure the size of the universe....the solution returned is HUUUUUGE!!!!!  When in reality, the requested solution only reads out that way when you try to measure it (request it). In its unrequested state, the universe has no real size at all. Its size is "unrequested".


Title: Re: Resolution of the Universe
Post by: youhn on January 21, 2017, 06:07:33 PM
The "spooky actions at a distance" is as much "spooky" as the following.

Take three persons, for example you, me and the other.
I take a piece of paper and write down "0" on the left, and "1" on the right.
Then the piece of paper is torn apart through the middle, leaving two pieces either with a "0" or a "1".
I mix up these papers, but hidden from sight (even my own).

Now I give you one piece, and you walk 10 meters away from me.
The other receives the other piece of paper, and walks 10 meters in the other direction.
At this moment, you do not know if your piece of paper has a "0" or "1" on it.
At the same time, there is no physical link between the two pieces whatsoever.

The magic is that when you reveal your piece of paper,
everyone will certainly know that the other one MUST be different.
This is what "spooky" action at a distance is.
(it's not so much about space and distance, but more about space and time...)

You might say that the hidden part is the only "spooky" aspect of it.
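
The whole setup compresses to a few lines of Python, which makes the point visible: the anti-correlation is fixed when the paper is torn, long before anyone looks.

Code:
import random

for _ in range(5):
    pieces = ["0", "1"]
    random.shuffle(pieces)   # mix the two halves, hidden from everyone
    you, other = pieces      # hand them out; walk 10 meters apart
    print(f"you see {you}, so the other piece must be {other}")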


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 21, 2017, 09:02:29 PM
The "spooky actions at a distance" is as much "spooky" as the following.

Take three persons, for example you, me and the other.
I take a piece of paper and write down "0" on the left, and "1" on the right.
Then the piece of paper is torn apart through the middle, leaving two pieces either with a "0" or a "1".
I mix up these papers, but hidden from sight (even my own).

Now I give you one piece, and you walk 10 meters away from me.
The other receives the other piece of paper, and walks 10 meters in the other direction.
At this moment, you do not know if your piece of paper has a "0" or "1" on it.
At the same time, there is no physical link between the two pieces whatsoever.

The magic is that when you reveal your piece of paper,
everyone will certainly know that the other one MUST be different.
This is what "spooky" action at a distance is.
(it's not so much about space and distance, but more about space and time...)

You might say that the hidden part is the only "spooky" aspect of it.

Actually, the "spooky" part of it is that a 1 and a 0 are on both pieces of paper and when you look only a 1 or a 0 would appear on each.


Title: Re: Resolution of the Universe
Post by: youhn on January 21, 2017, 09:41:52 PM
Actually, the "spooky" part of it is that a 1 and a 0 are on both pieces of paper and when you look only a 1 or a 0 would appear on each.

They could both be true. Both are possible, which does not mean both states really exist at the same time on one piece of paper. Entanglement just means that there once was interaction back in time, but the subjects have moved apart from that moment on. Nothing spooky about that. It would only be spooky if the initial interaction was hidden (due to limits of the apparatus of detection).


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 22, 2017, 05:17:11 AM
They could both be true. Both are possible, which does not mean both states really exist at the same time on one piece of paper. Entanglement just means that there once was interaction back in time, but the subjects have moved apart from that moment on. Nothing spooky about that. It would only be spooky if the initial interaction was hidden (due to limits of the apparatus of detection).

The mathematics that describes entanglement isn't even necessary if the universe is the product of quantum computing.
In fact, most of quantum math would be unnecessary to describe the "universe" in that case.

Maybe we should consult William of Ockham. But I suppose it would be just so depressing for tens of thousands of theoretical physicists if the simulation solution were the correct one (being the least complex) that they could never really come to grips with it.


Title: Re: Resolution of the Universe
Post by: kram1032 on January 22, 2017, 11:10:37 AM
youhn's analogy here is pretty much flawless.
The point is that, prior to looking at it, you have absolutely no clue whatsoever which exact number is on your piece of paper. What you DO know, due to how it was generated, is the probability of getting each value. In this case there is a 50:50 chance of getting a 0 or a 1, respectively. - This probability is the quantum state containing "both a 0 and a 1" as possible futures.
Furthermore, also due to how this sample was generated, you know for a fact that whatever you have on your piece of paper will be the opposite of what the other person's piece shows. - this is what causes the "spookiness" of the action at a distance.
No more information is available to you at this point. All you have is a probability density function. As SOON as you look at your piece of paper, the function collapses and you get your, say, 0. And then the action at a distance kicks in: By your second piece of information you now INSTANTLY know for a fact, no matter how far away you are from the other piece, that that other piece MUST read, in our example, 1.
At first glance, you just violated special relativity: the information about the other piece's content traveled to you faster than the speed of light. This is what spooked Einstein at first. But the way the sample was generated in the first place already contained the necessary information. And if, say, the other person lost their piece of paper and needs the information of what's on it, it would still take a speed-of-light-constrained "classical" channel (them meeting up with you, you showing/telling them your piece's contents) to let the information travel onwards.
Phrased like that there really is nothing spooky about this action. It would be spooky to NOT behave that way.

QM does NOT say that both STATES exist at the same time. What it says is that the POSSIBILITIES of either state exist at the same time.

And I don't see how entanglement could possibly not be mathematically necessary in a quantum computation description of the universe when entanglement precisely is one of the fundamental moves (maybe actually the only one?) that make quantum computation special and different from classical computation.
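
One way to make that last point quantitative: the torn-paper (local hidden variable) story can never push the CHSH combination above 2, while the textbook singlet-state correlation E(a,b) = -cos(a-b) reaches 2*sqrt(2). A short Python check, using only the standard textbook formula, nothing specific to this thread's models:

Code:
import math

def E(a, b):
    # singlet-state correlation for analyser angles a and b
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.828, above the classical limit of 2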


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 22, 2017, 05:47:16 PM
youhn's analogy here is pretty much flawless.
The point is that, prior to looking at it, you have absolutely no clue whatsoever which exact number is on your piece of paper. What you DO know, due to how it was generated, is the probability of getting each value. In this case there is a 50:50 chance of getting a 0 or a 1, respectively. - This probability is the quantum state containing "both a 0 and a 1" as possible futures.
Furthermore, also due to how this sample was generated, you know for a fact that whatever you have on your piece of paper will be the opposite of what the other person's piece shows. - this is what causes the "spookiness" of the action at a distance.
No more information is available to you at this point. All you have is a probability density function. As SOON as you look at your piece of paper, the function collapses and you get your, say, 0. And then the action at a distance kicks in: By your second piece of information you now INSTANTLY know for a fact, no matter how far away you are from the other piece, that that other piece MUST read, in our example, 1.
At first glance, you just violated special relativity: the information about the other piece's content traveled to you faster than the speed of light. This is what spooked Einstein at first. But the way the sample was generated in the first place already contained the necessary information. And if, say, the other person lost their piece of paper and needs the information of what's on it, it would still take a speed-of-light-constrained "classical" channel (them meeting up with you, you showing/telling them your piece's contents) to let the information travel onwards.
Phrased like that there really is nothing spooky about this action. It would be spooky to NOT behave that way.

QM does NOT say that both STATES exist at the same time. What it says is that the POSSIBILITIES of either state exist at the same time.

And I don't see how entanglement could possibly not be mathematically necessary in a quantum computation description of the universe when entanglement precisely is one of the fundamental moves (maybe actually the only one?) that make quantum computation special and different from classical computation.

The entanglement does occur in the working calculating space of the quantum computer, but that doesn't mean that any two simulated particles are entangled. It's a perspective thing. If you assume the universe is not a simulation held in probabilistic programming, you will ascribe entanglement's existence to the wrong "place", and you will be unable to find an initial state where the entanglement was established in any "naturally occurring" event where the two particles in question could be separated by very great distances (the "limits of apparatus detection" problem).

But if you assume it is simulated, the cause and source of entanglement effects is apparent.

What I'm doing is what is suggested by Sean Carroll here:
https://www.preposterousuniverse.com/blog/2013/01/17/the-most-embarrassing-graph-in-modern-physics/

"Not that we should be spending as much money trying to pinpoint a correct understanding of quantum mechanics as we do looking for supersymmetry, of course. The appropriate tools are very different. We won’t know whether supersymmetry is real without performing very costly experiments. For quantum mechanics, by contrast, all we really have to do (most people believe) is think about it in the right way. No elaborate experiments necessarily required (although they could help nudge us in the right direction, no doubt about that). But if anything, that makes the embarrassment more acute. All we have to do is wrap our brains around the issue, and yet we’ve failed to do so."
https://en.wikipedia.org/wiki/Sean_M._Carroll

In the model I have replaced the "string" of string theory with the term "motive" to describe the chain of events one would have to trace back to arrive at the cause/effect relationship that is first established to produce what we are presented with in terms of "entanglement" within the simulation.

I realize that not everyone uses facebook but it's where the model currently resides.

https://www.facebook.com/approaching42/

While much of the material used is at https://www.facebook.com/anomalous.howard.3


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 23, 2017, 06:37:30 AM
This is what can be done now:

"We report the first electronic structure calculation performed on a quantum computer without exponentially costly precompilation. We use a programmable array of superconducting qubits to compute the energy surface of molecular hydrogen using two distinct quantum algorithms. First, we experimentally execute the unitary coupled cluster method using the variational quantum eigensolver. Our efficient implementation predicts the correct dissociation energy to within chemical accuracy of the numerically exact result. Second, we experimentally demonstrate the canonical quantum algorithm for chemistry, which consists of Trotterization and quantum phase estimation. We compare the experimental performance of these approaches to show clear evidence that the variational quantum eigensolver is robust to certain errors. This error tolerance inspires hope that variational quantum simulations of classically intractable molecules may be viable in the near future."

full text
http://journals.aps.org/prx/abstract/10.1103/PhysRevX.6.031007
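
The variational idea in that abstract reduces, in caricature, to: prepare a parametrised state, measure its energy, and let a classical optimiser turn the knob. A single-parameter toy in Python/NumPy; the 2x2 Hamiltonian here is made up for illustration, not the molecular hydrogen one from the paper:

Code:
import numpy as np
from scipy.optimize import minimize_scalar

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])   # toy Hamiltonian (hypothetical)

def energy(theta):
    # ansatz |psi(theta)> = cos(theta)|0> + sin(theta)|1>
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi      # expectation value <psi|H|psi>

res = minimize_scalar(energy, bounds=(0.0, np.pi), method="bounded")
print(res.fun, np.linalg.eigvalsh(H)[0])  # variational minimum vs exact ground energy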

How soon before it's possible that two H molecules are simulated in a way that WITHIN the simulation they "read out" as "entangled"?  Where would that simulated state of entanglement originate?  Would the simulated molecules actually BE entangled? Are there really two H molecules? Could a 3rd simulated particle be "created" within such a simulation in order that it reacts with the first two? It's obvious.

A quantum computer with 300 qubits or more (why stop at 300?) and a direction for how to begin programming for a universe simulation.....I wouldn't call it impossible.  80 years from the first Turing Machine to the first quantum computers.

However, any quantum computer in a simulated universe would itself be a simulation.  Then, the entangled particles observed within the qubits of that simulated computer would have been programmed to entangle.

Then things get really fractal--as a simulated multiverse.


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 24, 2017, 07:12:59 PM
THE PHYSICS OF INFORMATION: FROM ENTANGLEMENT TO BLACK HOLES
Speaker(s): Leonard Susskind, Sir Anthony Leggett, Christopher Fuchs, Seth Lloyd, Bob McDonald
https://www.youtube.com/watch?v=3S0IGwKGV6s

All these guys have to do is put the end-user in the right place.
If you pay attention to what they're saying, you can see that, had they thought about interfacing with a computational, deterministic universe, my model would leap out at them.


Title: Re: Resolution of the Universe
Post by: youhn on January 25, 2017, 06:51:38 PM
The talk with both Leonard Susskind and Seth Lloyd is a good watch, thanks!

While diving into the entanglement subject (Leonard Susskind) I found this very nice summarizing image:

(http://www.nature.com/polopoly_fs/7.31405.1447669320!/image/Engtanglement%20gravity%20graphic%20FINALRGB2_Web.jpeg_gen/derivatives/landscape_630/Engtanglement%20gravity%20graphic%20FINALRGB2_Web.jpeg)

Source: http://www.nature.com/news/the-quantum-source-of-space-time-1.18797

And a quote from the same source:

"The geometry–entanglement relationship was general, Van Raamsdonk realized. Entanglement is the essential ingredient that knits space-time together into a smooth whole — not just in exotic cases with black holes, but always."


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 26, 2017, 01:00:01 AM
The talk with both Leonard Susskind and Seth Lloyd is a good watch, thanks!

While diving into the entanglement subject (Leonard Susskind) I found this very nice summarizing image:

(http://www.nature.com/polopoly_fs/7.31405.1447669320!/image/Engtanglement%20gravity%20graphic%20FINALRGB2_Web.jpeg_gen/derivatives/landscape_630/Engtanglement%20gravity%20graphic%20FINALRGB2_Web.jpeg)

Source: http://www.nature.com/news/the-quantum-source-of-space-time-1.18797

And a quote from the same source:

"The geometry–entanglement relationship was general, Van Raamsdonk realized. Entanglement is the essential ingredient that knits space-time together into a smooth whole — not just in exotic cases with black holes, but always."

Thanks for the article youhn.
I had been thinking about the error correction problem in conjunction with a fractal multiverse running the way I described, where the successive fractal representations run in sync, thereby reinforcing each plancktime computation that produces the next iteration. Each iteration that is one step zoomed out from its fractal representation will receive the combined, reinforced information (which would be identical) from all levels "below" it. Any small deviations in any single representation would not be "allowed" to occur this way.

Where in the article:  
"In principle, when the qubits interact and become entangled in the right way, such a device could perform calculations that an ordinary computer could not finish in the lifetime of the Universe. But in practice, the process can be incredibly fragile: the slightest disturbance from the outside world will disrupt the qubits’ delicate entanglement and destroy any possibility of quantum computation.

That need inspired quantum error-correcting codes, numerical strategies that repair corrupted correlations between the qubits and make the computation more robust. One hallmark of these codes is that they are always ‘non-local’: the information needed to restore any given qubit has to be spread out over a wide region of space."  
(the restorative information is actually spread infinitely "downward" into the fractal.)
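
That "information needed to restore any given qubit spread out over a wide region" idea has a classical baby version, the 3-bit repetition code, which at least shows the mechanics of redundancy plus majority vote. (Only an analogy: real quantum codes protect superpositions, which copying-and-voting cannot do.) In Python:

Code:
import random

def encode(bit):               # spread one logical bit over three places
    return [bit, bit, bit]

def corrupt(block, p=0.1):     # each copy flips independently with prob p
    return [b ^ (random.random() < p) for b in block]

def decode(block):             # majority vote restores the logical bit
    return int(sum(block) >= 2)

errors = sum(decode(corrupt(encode(1))) != 1 for _ in range(10000))
print(errors / 10000)          # ~0.028 = 3p^2 - 2p^3, well below p = 0.1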

It may be that a model with extra-universal end-users, interfaced with observers of a plancktime reiterative universe that "falls" into the fractal, answers this:

"Still, researchers face several challenges. One is that the bulk–boundary correspondence does not apply in our Universe, which is neither static nor bounded; it is expanding and apparently infinite. Most researchers in the field do think that calculations using Maldacena’s correspondence are telling them something true about the real Universe, but there is little agreement as yet on exactly how to translate results from one regime to the other." ----

where the reiterative universe, as in my model, is the very same "space" the article describes as one we DON'T experience (the one that apparently has no gravity, which would explain why it can be programmed to go from heat death to big bang to heat death instantly, or within one plancktime). The reiterative universe is the boundary and the experiential Universe is the bulk. Each reiteration of the boundary occurs one plancktime after the previous and is separated by 1 plancklength along the bulk, giving rise to experiential (bulk) time and E=mc2 (a "bulk-only" phenomenon), as well as "expansion". So the universe is "recalculated" each plancktime, with every "logical step, or operation, needed to construct the quantum state of a system" as it should exist one plancklength away. Since only one planckspace/time occurs prior to recalculation, only ONE "step" or "operation" need be computed for each waveform per iteration, rather than keeping track of the entire chain, or "motive". If a waveform undergoes no quantum change in a planck space/time, it is simply resimulated, one planck spacetime removed from its prior position (the first c in c squared). If the waveform is a photon travelling at c, its E will equal c squared, since it theoretically has 0 mass in the "bulk" Universe and it will have reached its position for the next iteration in (observationally) 0 time (time dilation).
Anything with mass in the bulk that is moving will never reach c without becoming massless, so at that slower-than-light speed the m is introduced: E=mc2.

Also I believe my model accounts for time.  I believe any model like the one described in the article (which is actually VERY close to mine) will require an interface between an observer (consciousness/perception/sensation) in the bulk and an extra-universal end-user.
The end-user would necessarily exist beyond both the bulk and the boundary.  That, though, is the real difficulty physics will have to figure out.  But since we DO experience time and observe time dilation and all those other effects that my model handles, it's the only proper explanation imo.

also from the article:
"Another challenge is that the standard definition of entanglement refers to particles only at a given moment. A complete theory of quantum gravity will have to add time to that picture. “Entanglement is a big piece of the story, but it’s not the whole story,” says Susskind."

"He thinks physicists may have to embrace another concept from quantum information theory: computational complexity, the number of logical steps, or operations, needed to construct the quantum state of a system. A system with low complexity is analogous to a quantum computer with almost all the qubits on zero: it is easy to define and to build. One with high complexity is analogous to a set of qubits encoding a number that would take aeons to compute."
(this assumes that the "computing" is NOT done each and every Planck spacetime, whereas the reiterative model greatly simplifies the computational load)

And what Susskind refers to here as "the number of logical steps, or operations, needed to construct the quantum state of a system" is what I have called "motive" in the model.

Then this part of the article--
"One potential consequence, which he is just beginning to explore, could be a link between the growth of computational complexity and the expansion of the Universe. Another is that, because the insides of black holes are the very regions where quantum gravity is thought to dominate, computational complexity may have a key role in a complete theory of quantum gravity."

--is directly related to my model's use of black holes as having the computational function of an "information filter" for each reiteration of the boundary.  (in the model I use "universe" with a small u where here they use "boundary" --- and Universe with a capital U where here they use "bulk")

As each iteration of universe (boundary) unfolds, there is a point at which the first "black hole" would form. All subsequent black holes would, upon refolding of the boundary, feed back to that initial black hole (via wormhole) and then back to the singularity. Calculation takes place and the boundary unfolds again with all the calculations necessary for coherent expansion.

There's also the matter of the spin of the singularity, and that may be where entanglement starts....which would place the origin of entanglement within the quantum computer itself...the "computer" itself being neither boundary nor bulk: the "original" computer rather than one of the infinite number of fractal representations that "error correct" up through the fractal.

https://en.wikipedia.org/wiki/Cyclic_model

https://en.wikipedia.org/wiki/Simulated_reality

Great article, thanks again.


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 26, 2017, 05:43:48 AM
Here's Susskind on reversibility of quantum events, in which he shows that exact reversibility can ONLY occur when there is NO observation of the event.

In my model every unfolding/refolding of the boundary (universe---small u) occurs completely unobserved.
It isn't until AFTER unfolding/refolding occurs that computations are made as each waveform that has undergone any quantum change "dumps" the information regarding that change via the black hole "filtration" system back to the singularity upon completion of the refolding.  Only then is it observed for calculation as the next iteration is configured then unfolded.

So when a waveform undergoes decoherence during one Plancktime, the information regarding that decoherence is analyzed and computed for the next unfolding. The resulting computational solution for the result of that decoherence becomes recoded as part of that waveform's information for the next iteration.  The moment of decoherence itself does not amount to an observation because the EFFECT (observation in the Universe) of that decoherence will only begin to proceed AFTER the analysis and recoding.

Although this is somewhat simplified because, in the case of a photon striking a retina (the photoelectric membrane of the eye), the waveform of "photon" is transduced to an electromagnetic waveform (refold/calculate/unfold) which travels through the nervous system (refold/calculate/unfold...refold/calculate/unfold...refold/calculate/unfold and so on) until it enters the brain (r/c/u...r/c/u...) and is registered as a perception (r/c/u...) and then correlated with memory (r/c/u) etc, etc and finally enters "consciousness".
With each change to that waveform not "realized" until AFTER it has completed its reverse trip to the computer.

This process also requires an extrauniversal end-user to be interfaced into the bulk via an avatar representing sensation/perception/consciousness. The physical body that contains sensation/perception/consciousness can then begin the process of converting the waveform that has merged with consciousness into a response that is interactively mediated by input to the computer, based on individual choice from the end-user as the actual observer. This way there is never any REAL observation occurring IN the universe as it proceeds. Start at 29 minutes in:

https://www.youtube.com/watch?v=2h1E3YJMKfA


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 26, 2017, 06:06:44 AM
Here's a review of black holes as information filters with an article describing the mechanism as per Hawking, Perry, and Strominger:

Black Holes and Information Preservation (Hair)

Keeping in mind that The Universe is being considered by many cosmologists to be the result of a computer simulation, it would then be likely that a black hole acts as an information filter. Since I propose a black hole as a "data port", certain information will be allowed to pass through for "collection" and "analysis" or some other "purpose".
In the model I have proposed, each iteration contains changes that are "projected" forward into the next iteration, producing the dimension of time that we experience.
Any information that is necessary to reproduce the physical contiguity within the next iteration is thus held within the universe, while information regarding "non-physical" aspects passes through.
The "soft hair" is the information of the "physical" awaiting (it's only a "wait" of one Plancktime) the "unfolding" of the next iteration.
Any wave function that has undergone collapse out in The Universe gets translated and recoded by the "person" upon which the collapse occurred. This translation becomes encoded as an "effect". The transition from wave function to effect passes the encoding process of that collapse as information through a series of electromagnetic wave-form transitions. All this happens within the "person" (as part of the Universe), and the transition information cumulatively becomes part of each subsequent transitioned wave form's information. (You could call the sum of transition information a "motive". This would be like tracing a human action back to its root cause, or motive, by tracking backward in that person's history to decode "why" that action was undertaken to begin with.)
The chain of transition information (the motive) IS NOT NECESSARY for the "physical" unfolding of the next iteration of universe. The transition information (motive) can pass through the data port.

From:
Viewpoint: Black Holes Have Soft Quantum Hair
https://physics.aps.org/articles/v9/62

"Strominger had an important insight in 2014 [4] while investigating a different problem. He realized that there are an infinite number of conservation laws that govern the scattering of gravitons—the elementary excitations in a quantum theory of gravity. Working with his students, Strominger realized soon thereafter that a similar result holds for electromagnetism [5]. Currently, he is collaborating with Hawking and Perry to apply this insight to black holes. In the new paper, the authors illustrate their ideas by considering electromagnetism in the presence of a black hole.

The key to their argument about black hole hair is provided by new conservation laws that generalize the usual notion of conservation of electric charge. The total charge in a region can be obtained by integrating the radial component of the electric field around a sphere surrounding the region. If no charge enters or leaves the region, its value is independent of time. Strominger’s generalization is based on integrating, over a sphere of infinite radius, the radial electric field weighted by an arbitrary function. It turns out [5] that this integral is still conserved. This provides an infinite number of new conserved quantities.

This observation connects to black hole hair in the following way. Using Gauss’ theorem, one can convert the surface integral describing the new conserved charge to a volume integral over all space. In the absence of black holes, the new conservation law simply means that this volume integral in the past is equal to the integral in the future. However, if black holes are present, the integral in the future must include a contribution over the black hole horizon.

If both gravity and electromagnetism are described classically, the contribution to the new charges coming from the black hole horizon must vanish. But Hawking, Perry, and Strominger argue that the situation is very different when electromagnetism is described quantum mechanically. To understand the difference, first consider the vacuum state and then add one photon. The result is a new quantum state with energy equal to the energy of the photon. As Strominger showed [5], if one takes the limit as the photon energy goes to zero (that is, the photon becomes “soft,” with vanishing energy), the result is a new state, which can be called a new vacuum because it has essentially the same energy as the original vacuum state. The first vacuum is turned into the second by acting with an operator that is just the quantum version of the new conserved charge.

The authors’ work now shows that acting with this same operator on a black hole horizon adds photons with essentially zero energy. These photons make up what they call the “soft hair” on a black hole. Since there are an infinite number of new charges, there are an infinite number of soft hairs that a black hole can support. Furthermore, the researchers demonstrate that when a charged particle falls into the black hole, it excites some of this soft hair. The exact conservation of the new charges implies that when a black hole evaporates, the information about the hair on the horizon must come out in the Hawking radiation (see Fig. 1).

It is important to note that this paper does not solve the black hole information problem. First, the analysis must be repeated for gravity, rather than just electromagnetic fields. The authors are currently pursuing this task, and their preliminary calculations indicate that the purely gravitational case will be similar. More importantly, the soft hair they introduce is probably not enough to capture all the information about what falls into a black hole. By itself, it will likely not explain how all the information is recovered when a black hole evaporates, since it is unclear whether all the information can be transferred to the soft hair. However, it is certainly possible that, following the path indicated by this work, further investigation will uncover more hair of this type, and perhaps eventually lead to a resolution of the black hole information problem."
https://physics.aps.org/articles/v9/62


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 26, 2017, 08:55:21 AM
And then a reminder of the fractal nature of things in general.

Black holes act as a filter that allows only computationally necessary information to exit the universe.
That information is input to the quantum computer, which uses it to solve for the next iteration.
The solution is expressed as recoded information and output back into the universe to produce the proper effect of any quantum change that occurred during the prior iteration.

In this reposted video, Seth Lloyd roughly (very roughly) describes the nucleus of a cell as an information filter that allows only computationally necessary information to exit the outer cell structure as input to the cellular computer...DNA.  
The DNA then solves for the recoding of that information and outputs it back to the cell.

So a cell nucleus is a fractal re-expression of a black hole (including soft hair) and DNA is the fractal re-expression of the quantum computer that solves for the universe.

https://youtu.be/I47TcQmYyo4

Lloyd also explains how there is both a 0 and a 1 on the paper ;-).  The paper isn't in the bulk where the paper will have a 1 or a 0...it's in the quantum computer.

Weirdly enough, graphene quantum dots can hold 4 quantum states simultaneously.

My son is working toward his PhD through research into quantum dots.  Next time I talk with him I'll have to ask about this graphene business.  Currently he's working with "standard" dots and sometimes, being absorbed in too much specialization, it's hard to keep up with the ocean of info from beyond that specialization.  He may have heard more than I have about it though.


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 26, 2017, 08:36:27 PM
So now my model comes down to one question, a form of which I just posed to
http://physics.stackexchange.com/questions/307786/when-does-refraction-begin

"As a wave function (a single quantum of field excitation) enters a refractory medium, does it begin to refract only after the entire wavelength has entered or does the leading edge of the wavelength exhibit refraction before the entire wave function has entered? I realize this is all happening in a placktime but has any experiment been devised to show exactly when refraction begins?

I suppose I could ask, is it required that any single quantum of field excitation in one medium fully decohere before its state is translated to another medium? Or are there intermediary stages that "build" toward full translation?"

The answers might be fun.


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 27, 2017, 12:47:07 PM
Most of what I closely reviewed as I worked this out is below. And of course thanks immensely to Tglad, youhn and Chillheimer.

https://en.wikipedia.org/wiki/Black_hole

https://www.google.com/search?q=%22complex+surface+singularity%22&oq=%22complex+surface+singularity%22&aqs=chrome..69i57.11535j0j7&sourceid=chrome&ie=UTF-8

https://www.nasa.gov/image-feature/goddard/2017/hubble-gazes-into-a-black-hole-of-puzzling-lightness

http://drum.lib.umd.edu/bitstream/handle/1903/8017/umi-umd-5139.pdf;sequence=1

https://en.wikipedia.org/wiki/Manifold

http://link.springer.com/article/10.1007%2FBF02345020

https://physics.aps.org/articles/v9/62

http://pbelmans.ncag.info/blog/2014/10/30/on-rational-surface-singularities/

http://www.sciencemag.org/news/2014/11/what-powers-black-holes-mighty-jets

http://scindeks-clanci.ceon.rs/data/pdf/0354-7310/2002/0354-73100204283R.pdf

http://www.icrar.org/4161-2/

https://en.wikipedia.org/wiki/Attractor#Strange_attractor

http://www.math.columbia.edu/~neumann/preprints/BNP-Jun20-black.pdf

http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000314

https://en.wikipedia.org/wiki/Butterfly_effect

https://en.wikipedia.org/wiki/Calculating_Space

https://en.wikipedia.org/wiki/Attractor#/media/File:Chua-chaotic-hidden-attractor.jpg

https://en.wikipedia.org/wiki/Cohomology#History.2C_to_the_birth_of_singular_cohomology

http://ieet.org/index.php/IEET/comments/Edge20161030

https://en.wikipedia.org/wiki/Compactification_(physics)

http://www.huffingtonpost.com/michael-lazar/could-the-universe-be-a-s_b_9816034.html

https://www.google.com/search?q=%22Critical+dynamics%22&oq=%22Critical+dynamics%22&aqs=chrome..69i57.9184j0j7&sourceid=chrome&ie=UTF-8

https://en.wikipedia.org/wiki/Digital_physics

https://en.wikipedia.org/wiki/Dilaton

https://www.youtube.com/watch?v=UL1h-QgeD9c

https://en.wikipedia.org/wiki/Fermat's_principle

https://www.google.com/search?q=%22fractal+brains%22&oq=%22fractal+brains%22&aqs=chrome..69i57.9399j0j7&sourceid=chrome&ie=UTF-8

http://www.physionet.org/tutorials/fmnc/index.shtml

http://spectrum.ieee.org/nanoclast/semiconductors/materials/quantum-dots-made-from-graphene-help-realize-their-promise-for-quantum-computing

https://en.wikipedia.org/wiki/Heat_death_of_the_universe

https://www.quora.com/How-can-you-explain-zero-point-energy-to-a-non-physicist

https://en.wikipedia.org/wiki/Introduction_to_gauge_theory

http://www.space.com/32543-universe-a-simulation-asimov-debate.html

https://www.scientificamerican.com/article/is-time-quantized-in-othe/

http://www.icmp.lviv.ua/ising/Isilect.pdf

http://blogs.discovermagazine.com/cosmicvariance/2005/10/25/lorentz-invariance-and-you/#.WHawpFMrLcs

https://www.cfa.harvard.edu/~narayan/Benefunder/Narayan_et_al.pdf

https://www.youtube.com/watch?v=BZ0YFoUcY0s

http://journals.aps.org/pra/abstract/10.1103/PhysRevA.83.062104

http://journals.aps.org/prx/abstract/10.1103/PhysRevX.6.031007

https://en.wikipedia.org/wiki/Poincar%C3%A9_duality

http://www.konradvoelkel.com/wp-content/uploads/program-rational-homotopy-20130507.pdf

https://en.wikipedia.org/wiki/Programming_the_Universe

https://profmattstrassler.com/2013/09/24/quantum-field-theory-string-theory-and-predictions-part-2/

https://www.youtube.com/watch?v=rqJWhIld8mU

https://www.youtube.com/watch?v=I47TcQmYyo4

https://www.preposterousuniverse.com/blog/2013/01/17/the-most-embarrassing-graph-in-modern-physics/

https://www.youtube.com/watch?v=3S0IGwKGV6s

https://williamtifft.wordpress.com/

https://en.wikipedia.org/wiki/Topological_entropy

https://www.theguardian.com/science/2010/oct/18/einstein-relativity-science-book-review

https://en.wikipedia.org/wiki/William_G._Tifft

http://www.nature.com/news/the-quantum-source-of-space-time-1.18797

One paper was sent to me by the author since it's not available online in its entirety....In brief:

Tribute to H. John Caulfield:  Hijacking of the “Holographic Principle” by cosmologists
Chandrasekhar Roychoudhuri
Physics Department, University of Connecticut

The paper will be divided into six sections. Section 2 describes very briefly my first graduate research beginning with holography, which made me aware of some very early contributions of Caulfield, the generation of a local reference beam (LRB) out of the very object beam that one wants to record in a hologram [6]. In Section 3, we first summarize the basic optical holographic principle to underscore that touchable cosmic bodies should not be compared with untouchable optical images generated by optical holograms. Then we discuss that information is always some subjective interpretation of experimental data, which can never give complete information about anything we study. In this context we discuss the historic "Measurement Problem" identified by the founders of quantum mechanics as the insurmountable "Information Retrieval Problem". This is to strengthen our view that information is no more than subjective human interpretation, limited further by insufficient information that we can gather from any set of experiments. Section 4 presents further questions raised by Caulfield's paper [1] and resolves them by analyzing the problem behind the concept of "Indivisible Quanta" as due to our neglect of the obvious: Non-Interaction of Waves (NIW). We support this NIW-property by summarizing that various historical postulates and working theories actually contain the NIW-property, even though they do not explicitly recognize it as such. This leads to the recognition that space is a physical tension field and supports the perpetual propagation of EM waves, just as air, as a substrate, holds a pressure tension field and allows the perpetual propagation of sound waves. This leads us to Section 5. It summarizes that optical Doppler shifts, like Doppler shifts for sound waves, depend separately upon the velocities of the source and of the detector with respect to the stationary cosmic medium. Section 6 presents a brief summary of our core points again.

Another paper was sent to me by another author, who argues against the entire concept of a simulated universe... mostly philosophical, but I think he also may not be right in the head, since he appears to be pretty serious about building a time machine.  (A simulated universe wouldn't allow for that.)

And I downloaded the short book "Hacking Matter", which I provided a link for earlier in this forum subject.

Now it needs another rewrite.  It'll be about 42 double spaced printed pages.


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 27, 2017, 11:20:32 PM
The fractal self-correcting multiverse

(https://scontent-lga3-1.xx.fbcdn.net/v/t1.0-9/428729_10150716014048554_593485855_n.jpg?oh=f5ccfbc01a216e78dd128cafad0c7c62&oe=594A8430)


Title: Re: Resolution of the Universe
Post by: anomalous howard on January 28, 2017, 09:32:15 PM
By now it may be dawning on the two or three people reading this with a reasonable understanding: entanglement arises out of the error-correcting function of the fractal, where any two "adjacent" fractal representations together form a "boundary".
We can call them fractal rep Y and fractal rep Z.  The results of Z "feed upward" to Y.  So if a particle with "up spin" in Z flips to "down spin", it passes the information that a flip has occurred up to Y, where the same particle also flips. However, for the particle in Y to be able to pass the information up further, Y would have to be running with "down spin" and flip to "up spin".  That new state for Y is passed up to X, where the X representation of the particle flips from "up" to "down"... and so on up through the fractal, as a means of error correction/prevention that necessarily runs at Planck-time speed and is synced with (lower case u) universe reiteration speed.
(On a side note: Functioning in this way is most likely what gives rise to the class of naturally occurring fractals throughout the Universe that we see as spiral and helical forms.)
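
Here's a minimal sketch of that cascade as I've described it (the level names and the flip rule are illustrative only):

# Toy model of the described error-correction cascade: levels Z, Y, X, ...
# each hold the opposite spin of the level below, and a flip at the
# bottom propagates upward level by level.
levels = ["Z", "Y", "X", "W"]                    # Z is the bottom
spins = {"Z": +1, "Y": -1, "X": +1, "W": -1}     # adjacent levels disagree

def flip_bottom_and_propagate(spins, levels):
    for name in levels:                  # from Z upward, one level per tick
        spins[name] = -spins[name]       # each level flips in turn
        print(f"level {name} is now {'up' if spins[name] > 0 else 'down'}")

flip_bottom_and_propagate(spins, levels)
# Adjacent levels still disagree afterwards, which is the point:
assert all(spins[a] != spins[b] for a, b in zip(levels, levels[1:]))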

Two adjacent particles will never be in the same position with respect to "up"/"down", so the effects of entanglement are produced in a mathematically describable way.  The math of entanglement isn't concerned with how or why entanglement happens, just that it does so in a predictable, describable way.  If that math is interpreted improperly, it will lead to misconjecture about the reason for entanglement.

It seems to me that the current methodology of physics needs to separate the processes occurring in the fractal from the processes occurring between the (lower case u) universe and the functioning of the end-user interface.  They see our experience in the "bulk" arising out of the boundary described by fractal rep Y and fractal rep Z, thinking that Y is the "bulk".  The "bulk", or (upper case U) Universe, actually arises from the end-user interface's function with the (lower case u) universe.  It's the function of the interface that causes time.  

That we experience time is enough by itself to show that the function of the fractal (error correction/prevention) is separate from the origin of the bulk (Universe).
Entanglement and "boundary" arise from functions of the fractal.
"Bulk" arises from function of the interface.

The best way to think about the end-user interface function is to think about the page you're seeing on the monitor.  What you're seeing is the upper case Universe.  The lower case universe is the code that the interface translates onto the screen.

<html xmlns="http://www.w3.org/1999/xhtml">
  <head>...</head>
  <body>
    <div class="maindiv" style="width: 99%;">
etc, etc.

The monitor screen has a refresh rate.
Lower case u universe also has a refresh rate.
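
To put numbers on the analogy: one frame per Planck-time works out to roughly 1.855e43 frames per second, against a monitor's typical 60.  A quick check:

# "Refresh rate" comparison, using the standard value of the Planck time.
PLANCK_TIME = 5.39e-44           # seconds
universe_fps = 1.0 / PLANCK_TIME
monitor_fps = 60.0

print(f"universe: {universe_fps:.3e} frames/s")   # ~1.855e43
print(f"monitor : {monitor_fps} frames/s")
print(f"ratio   : {universe_fps / monitor_fps:.3e}")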