• Saigonauticon@voltage.vn
    1 year ago

    I built a machine to try to test that using Bell’s inequality (the idea being that a simulation would have to be computed, while the no-hidden-variables result implies some physical processes aren’t computable).
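    Not your machine’s design, obviously, but the quantum prediction such a test rests on fits in a few lines: for a spin singlet the correlation is E(a, b) = −cos(a − b), and the CHSH combination of four detector settings (the textbook optimal angles below) exceeds the bound of 2 that any local hidden-variable model must obey:

```python
import math

def E(a, b):
    # Quantum-mechanical correlation for a spin singlet measured at
    # detector angles a and b. Any local hidden-variable model is
    # bounded by |S| <= 2 (the CHSH inequality).
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2               # Alice's two detector settings
b, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two detector settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # ~2.828 = 2*sqrt(2), violating the classical bound of 2
```

    A real machine measures coincidence counts and estimates each E(a, b) statistically, but the violation it looks for is exactly this 2√2-versus-2 gap.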

    Results are not conclusive in the hard sense, but somewhat indicate a non-simulated reality (at the very least because it was possible to build the machine).

    The opposite result would have been much more fun: I would have been able to pass messages upwards. So of course I would Rickroll God.

    • kromem@lemmy.world
      1 year ago

      The problem is that this assumes the same physics for both the outer and inner worlds.

      If anything, quantizing continuous waves into discrete units, such that state can be tracked around the interactions of free agents, seems mighty similar to procedural generation: games with destructive or changeable world geometry, like Minecraft or No Man’s Sky, convert continuous seed functions into voxels around observation/interaction.
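      A minimal sketch of that analogy (the `density` field and threshold are made up for illustration, not any actual game’s generator): a continuous seed function costs no memory until a voxel is observed, at which point it collapses to discrete, tracked state that edits can override:

```python
import math

def density(x, y, z):
    # Continuous "seed" field: deterministic and defined everywhere,
    # but nothing is stored until someone looks.
    return math.sin(x * 0.1) + math.cos(y * 0.1) + math.sin(z * 0.1)

class World:
    def __init__(self):
        self.observed = {}  # only observed/edited voxels consume memory

    def observe(self, x, y, z):
        # Quantize the continuous field into a discrete block on first
        # observation; afterwards the voxel has definite, tracked state.
        if (x, y, z) not in self.observed:
            self.observed[(x, y, z)] = density(x, y, z) > 0.0
        return self.observed[(x, y, z)]

    def edit(self, x, y, z, block):
        # Destructive change: the seed function is never consulted again.
        self.observed[(x, y, z)] = block

w = World()
w.observe(1, 2, 3)           # collapses to a discrete block on first look
w.edit(1, 2, 3, False)
print(w.observe(1, 2, 3))    # edits persist across later observations
print(len(w.observed))       # memory scales with observation, not world size
```

      The point of the sketch is the memory profile: the “world” is continuous and infinite for free, and discreteness plus persistent state only appear where interaction forces them to.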

      Perhaps the continued inability to seamlessly connect continuous macro-scale models of world behavior, like general relativity, with the low-fidelity discrete behaviors of quantum mechanics is because the latter is an artifact of simulating the former under memory-management constraints?

      The assumption that possible emulation artifacts and side effects would be computed, or even present, at the same fidelity threshold in the parent reality is a pretty extreme one. It’d be like being unable to recreate Minecraft within itself because of block-size constraints and then concluding that Minecraft must therefore be the highest-order reality.

      Though I do suspect Bell’s inequality may eventually play a role in reaching the opposite conclusion to yours. In the Wigner’s friend variation, Proietti et al., Experimental test of local observer-independence (2019), added an additional separated layer of observation to the measurement of entangled pairs, and the measured results were in conflict. This looks a lot like sync conflicts in netcode, and I’ve been curious whether we’re in for surprises in how the conflict rate grows as the experiment moves from just two layers of measurement by separated ‘observers’ to n layers. The math says conflicts should compound, with unobserved intermediate layers still contributing; but the lazy programmer in me wonders if the rate will instead grow as if conflicts only occur in the last layer, as a JIT computation.
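      A deliberately crude toy model of the two growth laws (the per-layer conflict probability `p` is purely hypothetical, not derived from QM): conflicts that compound through every intermediate layer approach certainty as n grows, while a lazy scheme that only reconciles the final measurement stays flat:

```python
p = 0.05  # hypothetical chance that any one layer of observation conflicts

def compounding(n):
    # Every unobserved intermediate layer can conflict independently,
    # so the chance of at least one conflict is 1 - (1 - p)^n.
    return 1 - (1 - p) ** n

def last_layer_only(n):
    # "Lazy" alternative: only the final measurement is reconciled,
    # so the disagreement rate does not depend on the depth n at all.
    return p

for n in (2, 5, 10, 20):
    print(n, round(compounding(n), 3), last_layer_only(n))
```

      Distinguishing those two curves experimentally as the number of nested observers grows is the kind of surprise I mean.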

      So if we suddenly see headlines proposing some sort of holographic principle to explain linear growth in the rate of disagreement between separate observers in QM, it might be productive to keep in mind that this is exactly how a simulated system might behave if it swept sync conflicts under the rug rather than actively rendering the intermediate, immeasurable steps for each relative user.