Algorithmic probability

From observer states to physics via algorithmic probability: an illustration of Observation 5.4, showing the possible histories of the computational ontological model as a tree graph. The vertices correspond to the random variable giving the computational state at each time step, with valid states (black) forming a growing sequence of bit strings, while invalid states are shown in gray. Alice and Bob, observing an approaching meteorite, are themselves part of the computational process. Their shared emergent reality transitions probabilistically: with 99% likelihood the meteorite hits Bob3rd, and with 1% it misses. Postulates 3.2 imply this outcome is irrelevant for Bob1st, whose state transitions elsewhere, since the model generates a semimeasure rather than a measure. The fate of Bob1st lies beyond the scope of Postulates 3.2, awaiting Postulates 3.1.[1]

In algorithmic information theory, algorithmic probability, also known as Solomonoff probability, is a mathematical method of assigning a prior probability to a given observation. It was invented by Ray Solomonoff in the 1960s.[2] It is used in inductive inference theory and analyses of algorithms. In his general theory of inductive inference, Solomonoff uses the method together with Bayes' rule to obtain probabilities of prediction for an algorithm's future outputs.[3]
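In this scheme, prediction reduces to conditioning on the universal prior. In standard notation (here $M$ denotes the universal prior extended to sequences), after observing the bits $x_1 \dots x_n$, the probability assigned to the next bit being $1$ is the ratio

```latex
M(x_{n+1} = 1 \mid x_1 \dots x_n) = \frac{M(x_1 \dots x_n 1)}{M(x_1 \dots x_n)}
```

so a continuation is judged likely exactly when short programs that reproduce the observed data also produce that continuation.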

In the mathematical formalism used, the observations have the form of finite binary strings viewed as outputs of Turing machines, and the universal prior is a probability distribution over the set of finite binary strings calculated from a probability distribution over programs (that is, inputs to a universal Turing machine). The prior is universal in the Turing-computability sense, i.e. no string has zero probability. It is not computable, but it can be approximated.[4]
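The construction can be sketched concretely. The following is a minimal illustration, not Solomonoff's actual construction: it replaces the universal machine with a hypothetical toy prefix machine whose programs have the form '1'*k + '0' + s with len(s) == k and output s, and it sums the weights 2**-len(p) of all programs of bounded length that output a given string.

```python
from itertools import product

def toy_machine(program: str):
    """Hypothetical stand-in for a universal prefix machine (illustration
    only).  A valid program is '1'*k + '0' + s with len(s) == k, and it
    outputs s; every other string is treated as never halting (None)."""
    k = program.find("0")
    if k == -1 or len(program) != 2 * k + 1:
        return None
    return program[k + 1:]

def prior(x: str, max_len: int = 11) -> float:
    """Lower bound on m(x) = sum of 2**-len(p) over programs p that
    output x, truncated at programs of length max_len."""
    total = 0.0
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            if toy_machine("".join(bits)) == x:
                total += 2.0 ** -n
    return total
```

On this toy machine each string x is produced by exactly one program, of length 2·len(x) + 1, so prior(x) = 2**-(2*len(x)+1): shorter (simpler) strings receive higher prior probability, the Occam's-razor behaviour the real universal prior also exhibits.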

Formally, the algorithmic probability $m$ is not a true probability measure, and it is not computable: it is only "lower semi-computable" and a "semimeasure". "Semimeasure" means that $\sum_x m(x) < 1$; that is, the "probabilities" do not actually sum to one, unlike true probabilities. This is because some inputs to the Turing machine cause it to never halt, and the probability mass allocated to those inputs is lost. "Lower semi-computable" means that there is a Turing machine that, given an input string $x$, can print out a sequence of numbers converging to $m(x)$ from below, but there is no Turing machine that does the same from above.
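Both properties can be made concrete with a toy model (an illustrative assumption, not the real universal machine): a hypothetical prefix machine that is declared to run forever on some of its programs, so that the summed program weights stay strictly below 1, while truncating the enumeration at ever-larger program lengths yields a nondecreasing sequence converging to $m(x)$ from below.

```python
from itertools import product

def toy_machine(program: str):
    """Hypothetical prefix machine: '1'*k + '0' + s with len(s) == k
    outputs s, except that for odd k the machine is declared to loop
    forever (None).  The weight of those non-halting programs is lost,
    so m is a strict semimeasure."""
    k = program.find("0")
    if k == -1 or len(program) != 2 * k + 1 or k % 2 == 1:
        return None
    return program[k + 1:]

def lower_approx(x: str, stage: int) -> float:
    """Stage-by-stage lower approximation of m(x): inspect all programs
    of length <= stage and add 2**-len(p) for each one that outputs x.
    The result is nondecreasing in `stage` and converges to m(x) from
    below, which is what "lower semi-computable" means."""
    total = 0.0
    for n in range(1, stage + 1):
        for bits in product("01", repeat=n):
            if toy_machine("".join(bits)) == x:
                total += 2.0 ** -n
    return total

# Total mass over all outputs of length <= 4 is 1/2 + 1/8 + 1/32 = 0.65625,
# and it stays below 1 no matter how far the sums are extended.
mass = sum(lower_approx("".join(b), 9)
           for n in range(5) for b in product("01", repeat=n))
```

Here the mass lost to the odd-header "non-halting" programs is what keeps the total below 1, mirroring how non-halting programs on a real universal machine make the universal prior a semimeasure rather than a measure.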

  1. ^ Markus Müller, "Law without law: from observer states to physics via algorithmic information theory", Quantum 4 (2020): 301. https://quantum-journal.org/papers/q-2020-07-20-301/pdf/
  2. ^ Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference", Report V-131, Zator Co., Cambridge, Ma. (Nov. 1960 revision of the Feb. 4, 1960 report).
  3. ^ Li, M. and Vitanyi, P., An Introduction to Kolmogorov Complexity and Its Applications, 3rd Edition, Springer Science and Business Media, N.Y., 2008
  4. ^ Hutter, M., Legg, S., and Vitanyi, P., "Algorithmic Probability", Scholarpedia, 2(8):2572, 2007.
