Arrow of Time Explained? Emergence = Intelligence = Entropy = Hypercomputation

From an earlier post: “Today, I viewed a recording from FQXi 2014 where Scott Aaronson from MIT talks about the Physical Church-Turing Thesis. He brought up irreversibility. That made me think about the claim made by one paper I’d recently talked about [by AI researcher Ben Goertzel] that consciousness may be hypercomputational. Aaronson drew the link for me between hypercomputation and irreversibility. Hypercomputation implies irreversibility because, by definition, you cannot enumerate the sequence of instructions of a hypercomputation. If you don’t know how something was done, how could you undo it?”

From another previous post, the undecidability of the spectral gap verifies that there are, in fact, hypercomputational aspects of nature. This falsifies the Physical Church-Turing Thesis. To be a hypercomputational process means to be emergent, i.e., the sum is greater than the parts; otherwise the process could be fully described by its components and would not be hypercomputational. As noted above, hypercomputation implies irreversibility. The verified existence of hypercomputational, emergent phenomena in nature explains why we have the arrow of time. Furthermore, this irreversibility is shown to be linked with intelligence by Wissner-Gross’s Entropica simulation. From statistical mechanics, entropy is the measure of irreversibility, and it is also apparently the measure of emergence and hypercomputability. We already know that thermodynamic entropy and Shannon entropy are duals, and that compression, which drives the entropy per symbol of the encoded data toward its maximum, is an objective of artificial intelligence algorithms. I speculate that if we equate Tononi and Koch’s measure of integrated information, phi, with thermodynamic entropy, we may reveal precisely how the arrow of time arises from the fact that hypercomputational, emergent intelligence is a fundamental operating basis of nature. Explaining the first-person experience, “consciousness,” is a separate issue; there we should refer to the works of, e.g., Bruno Marchal or Max Tegmark.
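To make the Shannon-entropy/compression link concrete, here is a minimal sketch (my own illustration, not part of the argument above). It estimates entropy from empirical byte frequencies, a deliberately crude measure, and uses zlib as an off-the-shelf compressor; the biased source is a hypothetical example. A good compressor squeezes out redundancy, so its output approaches the 8-bits-per-byte maximum.

```python
import math
import random
import zlib
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy of the byte-frequency distribution."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

random.seed(0)
# Biased source: 'a' twice as likely as 'b', so about 0.92 bits/byte.
raw = bytes(random.choice(b"aab") for _ in range(100_000))
packed = zlib.compress(raw)

print(entropy_bits_per_byte(raw))     # low: the data is redundant
print(entropy_bits_per_byte(packed))  # close to the 8 bits/byte maximum
```

The compressed bytes look nearly random precisely because every regularity the compressor could find has been removed.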

Argument that AI cannot be conscious

There cannot exist a scientific theory that derives the first-person experience from third-person observations. This necessarily means that an AI implemented on a standard Turing machine cannot experience in the first person. There cannot be a solution to Chalmers’s Hard Problem of consciousness, but there is potentially a way to “measure” the presence of consciousness even if it can’t be implemented on a traditional computer. I offer my argument below, though it may need some patchwork and reorganization. The discussion thread this information is extracted from can be found on the Everything List.

Digital physics – the hypothesis that all of nature can be reproduced on a computer – rests upon the Strong Church-Turing Thesis, which posits that nature does not admit non-computable real numbers.

AI researcher Ben Goertzel discusses the Hypercomputable Humanlike Intelligence (HHI) hypothesis, “which suggests that the crux of humanlike intelligence is some sort of mental manipulation of uncomputable entities – i.e., some process of ‘hypercomputation’ [1-6].”

This recent article seems to imply that non-computable real numbers exist in nature. If so, this seems to falsify the Strong Church-Turing Thesis. This in turn seems to make the HHI hypothesis possible, and if the HHI hypothesis could be verified, then according to Goertzel’s argument science would never be able to describe cognition. (He’s concerned about the implications that would have for neuroscience and AI, naturally.)

I should mention this paper by Maguire et al., which gives credence to HHI: it begins with Tononi’s Integrated Information Theory (IIT) of consciousness and, by assuming consciousness is a lossless integrative process, concludes that it would not be computable.

I received an objection on the Everything List, pointing me to a post on Scott Aaronson’s blog about Integrated Information Theory.

Alright, I’m going to try to piece things together, starting by backing up. The determination of whether matter described by quantum mechanics has a spectral gap has recently been shown to be undecidable. This implies some aspect of physics is not computable, and therefore the Strong Church-Turing Thesis and digital physics are invalidated. It also implies that the reductionist hypothesis is invalid, because there will be explanatory gaps from the microscopic to the macroscopic, and that emergence (the sum is greater than the parts) should be elevated to a first-class component of nature. The phi measure in IIT “will be high when there is a lot of information generated among the parts of a system as opposed to within them” and can therefore, by definition, be considered a measure of emergence. Tononi and Koch claim it’s a measure of consciousness, though Koch is ironically a self-proclaimed “romantic reductionist.” As reductionism is invalidated, the sum-greater-than-parts measure phi by Tononi and Koch corresponds to one hypercomputable aspect of nature; we say that to be hypercomputable, i.e., irreducible to an algorithm, is to be emergent (sum greater than parts). However, the interpretation of phi (a measure of emergence) as consciousness is based on intuition. It has not been verified, presumably due to a lack of instrumentation. If this association of phi with consciousness could be verified, then we could safely assume the HHI hypothesis to be true, i.e., that consciousness is a hypercomputable aspect of nature.
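Phi proper is defined over a system’s cause-effect structure and is notoriously expensive to compute, so here is only a toy sketch of the “among the parts as opposed to within them” intuition. The multi-information (total correlation), the sum of the parts’ entropies minus the whole system’s entropy, is close in spirit to Tononi’s early integration measure, but it is not IIT’s phi; the two-node systems below are hypothetical examples of mine.

```python
import math
from collections import Counter

def entropy(samples) -> float:
    """Empirical Shannon entropy (bits) of a list of hashable outcomes."""
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in Counter(samples).values())

def multi_information(states) -> float:
    """Sum of part entropies minus whole-system entropy.

    Zero when the parts are statistically independent; it grows as
    information is held among the parts rather than within them.
    """
    k = len(states[0])
    parts = sum(entropy([s[i] for s in states]) for i in range(k))
    return parts - entropy(states)

# Two-node toy systems, observed as state tuples over time:
independent = [(0, 1), (1, 0), (0, 0), (1, 1)] * 25  # nodes uncorrelated
coupled     = [(0, 0), (1, 1)] * 50                  # nodes mirror each other

print(multi_information(independent))  # 0.0 bits: no integration
print(multi_information(coupled))      # 1.0 bit: fully integrated
```

The real phi additionally minimizes over partitions and is computed from cause-effect repertoires, so treat this as an intuition pump rather than the measure itself.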
The argument by Maguire et al., starting from IIT (and also corroborating the HHI hypothesis), seems to agree with the spectral-gap evidence that reductionism is invalid. However, it doesn’t invalidate IIT as claimed, because IIT implies hypercomputability; the phi measure is a measure of emergence. If consciousness is demonstrated to correlate with phi, and thus proved to be hypercomputable, Goertzel says we can still produce humanlike AI with Turing machines by modeling imitation, intuition, and chance. Furthermore, if HHI is true, then any reductionist attempt to describe consciousness, like the Orch OR theory of Penrose and Hameroff, is doomed to fail, because a hypercomputable (emergent) process by definition cannot be described in any formal language (as Goertzel points out in his paper).
Aaronson’s zombie argument claims only that no scientific theory could explain the first-person experience and solve the Hard Problem of consciousness. He observes that it would be ridiculous if ‘someone claims that integrated information “explains” why consciousness exists.’ Notice that if consciousness is hypercomputable (correlates with phi), then that is exactly what Goertzel concludes in his paper. Aaronson even admits ‘we can easily interpret IIT as trying to do something more “modest” than solve the Hard Problem, although still staggeringly audacious. Namely, we can say that IIT “merely” aims to tell us which physical systems are associated with consciousness and which aren’t, purely in terms of the systems’ physical organization. The test of such a theory is whether it can produce results agreeing with “commonsense intuition”: for example, whether it can affirm, from first principles, that (most) humans are conscious; that dogs and horses are also conscious but less so; that rocks, livers, bacteria colonies, and existing digital computers are not conscious (or are hardly conscious); and that a room full of people has no “mega-consciousness” over and above the consciousnesses of the individuals.’
I think the test might actually suggest that a room full of people does have a “mega-consciousness,” but the rest is right on point. Now we just need some instrument that we can wave over human beings, squirrels, and chunks of dirt to evaluate phi (a measure of emergence/hypercomputability) in order to locally correlate it with consciousness.
I should mention one important exception: if we could harness the undecidability of the spectral gap to implement infinite-precision real weights for Hava Siegelmann’s analog recurrent neural network model, then I suppose we could in theory achieve hypercomputation. This would have many incredible consequences. We wouldn’t be able to develop an algorithm to reproduce consciousness, but perhaps by trying all possibilities (perhaps guided by intuition, as Goertzel suggests) we might then actually stumble upon a conscious being. Maybe our brains are analog recurrent neural networks that exploit this undecidability of the spectral gap…
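To make the Siegelmann point concrete, here is a minimal sketch (my own, not her construction). The super-Turing power of the analog recurrent network comes from a single real-valued weight or state carrying an infinite, possibly uncomputable, bit string that the dynamics can read out one digit at a time; the doubling map below performs exactly that readout, with a 64-bit float as a necessarily finite stand-in.

```python
def digits_from_state(x0: float, n: int) -> list:
    """Iterate x <- frac(2x), emitting the integer part at each step.

    On an infinite-precision real state this would read out an arbitrary
    (potentially uncomputable) infinite bit string; on a 64-bit float,
    as here, the information runs out after roughly 53 steps.
    """
    x, bits = x0, []
    for _ in range(n):
        b = int(2 * x)   # next binary digit of the state
        x = 2 * x - b    # keep only the fractional part
        bits.append(b)
    return bits

# Stand-in "real weight": the leading binary digits of 1/sqrt(2).
print(digits_from_state(0.7071067811865476, 10))  # [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]
```

In Siegelmann’s model, rational weights already suffice for Turing equivalence; it is genuinely real weights that push the network beyond it, which is why no finite-precision digital hardware reproduces the effect.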