Partial Order, Entailment, Communication, Compression

This post will begin with the causal set model of cosmology, an approach in which spacetime is modeled as a partially ordered set (a directed acyclic graph) of “events.” Actually, it will begin with a generalization of this model in category theory, leveraging concepts such as functors, natural transformations, and colimits. We assume that, in the limit, such relational “augmented abstract block diagrams” (borrowing Rosen’s terminology) exhibit maximal entailment; that is, relative to the cardinality of the diagram, few morphisms are themselves left unentailed by other morphisms.

The post will attempt to bridge the gap between such a static, category-theoretic, relational and observer-independent formalism of explanation (I daresay, reality itself) and the dynamic, ergodic, biosemiotic model of self-organization that models evolution by the communication of learning agents (“anticipatory systems” in Rosen’s parlance, or autopoietic systems) performing two tasks: 1) pattern recognition / data compression / model building, and 2) maximization of Fisher information or, in anthropomorphic terms, maximizing future possibilities. In effect, we will be reconciling Tegmark’s Mathematical Universe Hypothesis (i.e. that relations between numbers — in an abstract sense, patterns themselves — are the only things that objectively exist in “reality” irrespective of the presence of an observer or the names issued to their entities) with the learning-agent-based models of organization and emergence provided by Markovian biosemiotics.

It seems the first task is to complete the analogy binding data compression to estimation theory (i.e., the entropy maximization that takes place during data compression is mathematically isomorphic to Fisher information maximization: the strategic positioning of a sensor so as to be able to observe recursion, i.e., differential equations, in sampled data). Of course, Frieden’s Extreme Physical Information provides us with a model for how intentional agents communicate with one another and with the open systems in which they reside. With a complete analogy, the appearance of intentionality (that is, any localized combination of model building and maximization of future possibilities) can then be understood as a natural result of the process of communication. Communication, here, should be understood as the transmission of probability distributions from one location to another such that the two distributions become identical. The meaning of “location” and “transmission” is what we will need to derive from the static, category-theoretic model of cosmology.
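As a hedged, minimal sketch of the bridge being gestured at here (standard information-theoretic facts, notation mine, not the full isomorphism claimed above): Fisher information J measures how sharply a sampled distribution responds to a parameter; de Bruijn’s identity ties the growth of entropy under Gaussian smoothing to Fisher information; and “communication” in the sense just defined is the vanishing of the divergence between the sender’s and receiver’s distributions:

J(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\log p(X;\theta)\right)^{2}\right], \qquad
\frac{d}{dt}\, h\!\left(X + \sqrt{t}\,Z\right) = \frac{1}{2}\, J\!\left(X + \sqrt{t}\,Z\right), \qquad
D_{\mathrm{KL}}(p \,\|\, q) \;\longrightarrow\; 0,

where Z is standard Gaussian noise independent of X, h denotes differential entropy, and p and q are the distributions held at the two “locations.”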

The process of communication and conversation can be understood as a movement toward thermodynamic equilibrium; consider a Chinese tangram puzzle: the individual shapes can be shuffled about but will only fall into place in an emergent configuration that was predetermined by their interaction with each other and with the environment. The static geometry of the pieces and the constraints imposed by their environment predetermined their resting places. In a sense, the pieces of the tangram had no say in their ultimate distribution and ordering. It was only the way their temporal form interacted with the environment that determined their fate; neither the form of any entity nor the environment alone is sufficient to determine the outcome. This post is trying to understand how the final configuration of the tangram comes about from the shaking of the pieces… what causes the appearance of the shaking (the appearance of dynamics as measured by a consensus of first-person, subjective perspectives) given some third-person perspective of a static, category-theoretic augmented abstract block diagram?

To transition from a maximally entailed augmented abstract block diagram specification in category theory — complete with functors and natural transformations, which themselves enable learning and indeed formally model the concept of modeling — to the thermodynamic equilibrium process of communicating probability distributions, we refer to an analogy from engineering. Here, I suggest the reader investigate the discussion of layered architectures using Algebraic Higher-Order (AHO) nets in the context of formally modeling mobile ad hoc networks (MANETs). Indeed, such self-configuring networks of intelligent radios are quite analogous to what I’ve described above as “learning-agent-based models of organization and emergence provided by Markovian biosemiotics.” The layered architectures using AHO nets appear to provide the sought-after analogy with category theory. Introductory details may be found in “Formal Modeling and Analysis of Mobile Ad Hoc Networks and Communication Based Systems using Graph and Net Technologies” by Kathrin Hoffmann, Hochschule für Angewandte Wissenschaften, Hamburg, Germany. A related text entitled “Petri Net Technology for Communication-Based Systems” was published by Springer in 2003. Hoffmann’s analysis of concurrency and partial order in the context of applying category theory to the distributed configuration of intelligent radios reminds me of the recent articles I shared by Tommaso Bolognesi, which discuss the use of process algebra in the context of algorithmic causal sets and observers: “Event patterns: from process algebra to algorithmic causal sets” and “Internal observers in causet-based algorithmic spacetime.”

MANET technology is now being investigated and developed in order to enable the emerging “Internet of Things” and to respond to the growth of complex information networks. It seems this commercial source of funding may inadvertently help answer open questions in cosmology, biology, and general intelligence.

Event patterns: from process algebra to algorithmic causal sets — tommaso bolognesi

Notions of event and event occurrence play a central role in various areas of computer science and ICT (Information and Communication Technology). In this proposal we are particularly interested in event concepts from process algebras such as Milner’s Calculus of Communicating Systems (CCS) and Hoare’s Communicating Sequential Processes (CSP), and related languages (e.g. LOTOS), since […]

via Event patterns: from process algebra to algorithmic causal sets — tommaso bolognesi

Internal observers in causet-based algorithmic spacetime. — tommaso bolognesi

(T.B. and Vincenzo Ciancia) Notions of observation play a central role in Physics, but also in Theoretical Computer Science, notably in Process Algebra. The importance in Physics of the mutual influences between observer and observed phenomena is well recognized, and yet the properties of the former are in general fuzzily specified, in spite (or because) […]

via Internal observers in causet-based algorithmic spacetime. — tommaso bolognesi

The Omega Point

I would suspect Chaitin denounces static metamathematics because of its association with scientific dogma… in other words, people become attracted to a current body of knowledge and often forget that the body of knowledge itself transforms wildly over time. There is a political inertia in academia… new discoveries that seem to contradict prior bodies of knowledge are often ignored for some time. Following Schmidhuber’s theory of creativity and art, I would claim that compression progress – inference – serves as Chaitin’s notion of “dynamic” metamathematics and is a necessary characteristic of life itself. Organisms and machines interact via the lingua franca known as Information. Regarding the modern taboo against teleology / “final cause” / vitalism / intentionality (except in systems sciences like cybernetics and biosemiotics), Terence McKenna in this interview has something to say about how evolution and attractors in chaos theory are really just particular interpretations of compression progress… a directed path or asymptotic approach toward Minimum Description Length. He says “all nature aspires for this state of perfect novelty… you could almost say that Nature abhors habit and so it seeks the novel by producing various kinds of phenomena at every level in biology, chemistry, and society. And so there really is a purpose to the universe… [hyper-complexification].” This is congruent with Rosen’s notion of relational biology and the source of what we call “randomness.” The convergent evolution of flight in species as diverse as insects and birds, and of tool usage and self-identification in species as diverse as cetaceans and primates, suggests there really may be some “Omega Point” toward which all evolution approaches. It is not a random walk in an infinite space; it is a directed search guided by the maximization of uncertainty, of novelty spawned by relational adaptation at all scales.
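To make “compression progress” concrete, here is a minimal paraphrase of Schmidhuber’s intrinsic-reward formulation (notation mine): the curiosity reward at time t is proportional to how many fewer bits the agent’s improving compressor needs to describe its history after its latest learning step than before,

r_{\mathrm{int}}(t) \;\propto\; C\big(h(\le t);\, \theta_{t-1}\big) - C\big(h(\le t);\, \theta_{t}\big),

where h(\le t) is the observation history, \theta_t are the compressor’s parameters after the update at time t, and C(\,\cdot\,;\theta) is the coded length in bits.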

Biosemiotics, teleology, and agent-based modeling

The world needs a replacement for magic and religion in the modern age of strict rationalism and materialism. We have stigmatized emotional expression and individuals feel increasingly neurotic under the systematic prospect of criticism. Symbolic reasoning is not what makes us joyful… Computer Algebra Systems can do this. What makes us joyful is our ability to compress data to make decisions in real-time that increase degrees of freedom. Inference, not deduction. A culture shift will occur when we begin to seriously consider the notion that, contrary to assumptions often made in probabilistic modeling today, the likelihood of future events is subject to change based on the adaptive interaction of learning agents. As theoretical biologists have discovered, what we observe makes more sense if we stop modeling with lifeless particles and start acknowledging that objects appear to have “purpose” or “intention.” That they learn and adapt in relation to one another, and that this adaptation drives the system towards attractors. Call to action: learn Clojure, perform Agent-Based Modeling of data compression agents that maximize degrees of freedom, & check results against (bio)semiotic theory and the Extreme Physical Information principle. Please refer to “Life Itself” by Robert Rosen and “Origins of Order” by Stuart Kauffman for elaboration. I’d also recommend “The Amoeba’s Secret” by Bruno Marchal. To get started, install Leiningen and clone this software repository that provides Clojure code to simulate ant foraging behavior:


;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; Ant sim ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Copyright (c) Rich Hickey. All rights reserved.
; The use and distribution terms for this software are covered by the
; Common Public License 1.0 (http://opensource.org/licenses/cpl.php)
; which can be found in the file CPL.TXT at the root of this distribution.
; By using this software in any fashion, you are agreeing to be bound by
; the terms of this license.
; You must not remove this notice, or any other, from this software.
;dimensions of square world
(def dim 80)
;number of ants = nants-sqrt^2
(def nants-sqrt 7)
;number of places with food
(def food-places 35)
;range of amount of food at a place
(def food-range 100)
;scale factor for pheromone drawing
(def pher-scale 20.0)
;scale factor for food drawing
(def food-scale 30.0)
;evaporation rate
(def evap-rate 0.99)
(def animation-sleep-ms 100)
(def ant-sleep-ms 40)
(def evap-sleep-ms 1000)
(def running true)
(defstruct cell :food :pher) ;may also have :ant and :home
;world is a 2d vector of refs to cells
(def world
(apply vector
(map (fn [_]
(apply vector (map (fn [_] (ref (struct cell 0 0)))
(range dim))))
(range dim))))
(defn place [[x y]]
(-> world (nth x) (nth y)))
(defstruct ant :dir) ;may also have :food
(defn create-ant
"create an ant at the location, returning an ant agent on the location"
[loc dir]
(sync nil
(let [p (place loc)
a (struct ant dir)]
(alter p assoc :ant a)
(agent loc))))
(def home-off (/ dim 4))
(def home-range (range home-off (+ nants-sqrt home-off)))
(defn setup
"places initial food and ants, returns seq of ant agents"
[]
(sync nil
(dotimes [i food-places]
(let [p (place [(rand-int dim) (rand-int dim)])]
(alter p assoc :food (rand-int food-range))))
(doall
(for [x home-range y home-range]
(do
(alter (place [x y])
assoc :home true)
(create-ant [x y] (rand-int 8)))))))
(defn bound
"returns n wrapped into range 0-b"
[b n]
(let [n (rem n b)]
(if (neg? n)
(+ n b)
n)))
(defn wrand
"given a vector of slice sizes, returns the index of a slice given a
random spin of a roulette wheel with compartments proportional to
slices."
[slices]
(let [total (reduce + slices)
r (rand total)]
(loop [i 0 sum 0]
(if (< r (+ (slices i) sum))
i
(recur (inc i) (+ (slices i) sum))))))
;dirs are 0-7, starting at north and going clockwise
;these are the deltas in order to move one step in given dir
(def dir-delta {0 [0 -1]
1 [1 -1]
2 [1 0]
3 [1 1]
4 [0 1]
5 [-1 1]
6 [-1 0]
7 [-1 -1]})
(defn delta-loc
"returns the location one step in the given dir. Note the world is a torus"
[[x y] dir]
(let [[dx dy] (dir-delta (bound 8 dir))]
[(bound dim (+ x dx)) (bound dim (+ y dy))]))
;(defmacro dosync [& body]
; `(sync nil ~@body))
;ant agent functions
;an ant agent tracks the location of an ant, and controls the behavior of
;the ant at that location
(defn turn
"turns the ant at the location by the given amount"
[loc amt]
(dosync
(let [p (place loc)
ant (:ant @p)]
(alter p assoc :ant (assoc ant :dir (bound 8 (+ (:dir ant) amt))))))
loc)
(defn move
"moves the ant in the direction it is heading. Must be called in a
transaction that has verified the way is clear"
[loc]
(let [oldp (place loc)
ant (:ant @oldp)
newloc (delta-loc loc (:dir ant))
p (place newloc)]
;move the ant
(alter p assoc :ant ant)
(alter oldp dissoc :ant)
;leave pheromone trail
(when-not (:home @oldp)
(alter oldp assoc :pher (inc (:pher @oldp))))
newloc))
(defn take-food
"Takes one food from current location. Must be called in a
transaction that has verified there is food available"
[loc]
(let [p (place loc)
ant (:ant @p)]
(alter p assoc
:food (dec (:food @p))
:ant (assoc ant :food true))
loc))
(defn drop-food
"Drops food at current location. Must be called in a
transaction that has verified the ant has food"
[loc]
(let [p (place loc)
ant (:ant @p)]
(alter p assoc
:food (inc (:food @p))
:ant (dissoc ant :food))
loc))
(defn rank-by
"returns a map of xs to their 1-based rank when sorted by keyfn"
[keyfn xs]
(let [sorted (sort-by (comp float keyfn) xs)]
(reduce (fn [ret i] (assoc ret (nth sorted i) (inc i)))
{} (range (count sorted)))))
(defn behave
"the main function for the ant agent"
[loc]
(let [p (place loc)
ant (:ant @p)
ahead (place (delta-loc loc (:dir ant)))
ahead-left (place (delta-loc loc (dec (:dir ant))))
ahead-right (place (delta-loc loc (inc (:dir ant))))
places [ahead ahead-left ahead-right]]
(. Thread (sleep ant-sleep-ms))
(dosync
(when running
(send-off *agent* #'behave))
(if (:food ant)
;going home
(cond
(:home @p)
(-> loc drop-food (turn 4))
(and (:home @ahead) (not (:ant @ahead)))
(move loc)
:else
(let [ranks (merge-with +
(rank-by (comp #(if (:home %) 1 0) deref) places)
(rank-by (comp :pher deref) places))]
(([move #(turn % -1) #(turn % 1)]
(wrand [(if (:ant @ahead) 0 (ranks ahead))
(ranks ahead-left) (ranks ahead-right)]))
loc)))
;foraging
(cond
(and (pos? (:food @p)) (not (:home @p)))
(-> loc take-food (turn 4))
(and (pos? (:food @ahead)) (not (:home @ahead)) (not (:ant @ahead)))
(move loc)
:else
(let [ranks (merge-with +
(rank-by (comp :food deref) places)
(rank-by (comp :pher deref) places))]
(([move #(turn % -1) #(turn % 1)]
(wrand [(if (:ant @ahead) 0 (ranks ahead))
(ranks ahead-left) (ranks ahead-right)]))
loc)))))))
(defn evaporate
"causes all the pheromones to evaporate a bit"
[]
(dorun
(for [x (range dim) y (range dim)]
(dosync
(let [p (place [x y])]
(alter p assoc :pher (* evap-rate (:pher @p))))))))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; UI ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(import
'(java.awt Color Graphics Dimension)
'(java.awt.image BufferedImage)
'(javax.swing JPanel JFrame))
;pixels per world cell
(def scale 5)
(defn fill-cell [#^Graphics g x y c]
(doto g
(.setColor c)
(.fillRect (* x scale) (* y scale) scale scale)))
(defn render-ant [ant #^Graphics g x y]
(let [black (. (new Color 0 0 0 255) (getRGB))
gray (. (new Color 100 100 100 255) (getRGB))
red (. (new Color 255 0 0 255) (getRGB))
[hx hy tx ty] ({0 [2 0 2 4]
1 [4 0 0 4]
2 [4 2 0 2]
3 [4 4 0 0]
4 [2 4 2 0]
5 [0 4 4 0]
6 [0 2 4 2]
7 [0 0 4 4]}
(:dir ant))]
(doto g
(.setColor (if (:food ant)
(new Color 255 0 0 255)
(new Color 0 0 0 255)))
(.drawLine (+ hx (* x scale)) (+ hy (* y scale))
(+ tx (* x scale)) (+ ty (* y scale))))))
(defn render-place [g p x y]
(when (pos? (:pher p))
(fill-cell g x y (new Color 0 255 0
(int (min 255 (* 255 (/ (:pher p) pher-scale)))))))
(when (pos? (:food p))
(fill-cell g x y (new Color 255 0 0
(int (min 255 (* 255 (/ (:food p) food-scale)))))))
(when (:ant p)
(render-ant (:ant p) g x y)))
(defn render [g]
(let [v (dosync (apply vector (for [x (range dim) y (range dim)]
@(place [x y]))))
img (new BufferedImage (* scale dim) (* scale dim)
(. BufferedImage TYPE_INT_ARGB))
bg (. img (getGraphics))]
(doto bg
(.setColor (. Color white))
(.fillRect 0 0 (. img (getWidth)) (. img (getHeight))))
(dorun
(for [x (range dim) y (range dim)]
(render-place bg (v (+ (* x dim) y)) x y)))
(doto bg
(.setColor (. Color blue))
(.drawRect (* scale home-off) (* scale home-off)
(* scale nants-sqrt) (* scale nants-sqrt)))
(. g (drawImage img 0 0 nil))
(. bg (dispose))))
(def panel (doto (proxy [JPanel] []
(paint [g] (render g)))
(.setPreferredSize (new Dimension
(* scale dim)
(* scale dim)))))
(def frame (doto (new JFrame) (.add panel) .pack .show))
(def animator (agent nil))
(defn animation [x]
(when running
(send-off *agent* #'animation))
(. panel (repaint))
(. Thread (sleep animation-sleep-ms))
nil)
(def evaporator (agent nil))
(defn evaporation [x]
(when running
(send-off *agent* #'evaporation))
(evaporate)
(. Thread (sleep evap-sleep-ms))
nil)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; use ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; (comment
;demo
;; (load-file "/Users/rich/dev/clojure/ants.clj")
(def ants (setup))
(send-off animator animation)
(dorun (map #(send-off % behave) ants))
(send-off evaporator evaporation)
;; )



https://www.uttv.ee/embed?id=11059

Ship of Theseus and Transhumanism

You may be familiar with the paradox of the Ship of Theseus. If not, here is a brief overview. Essentially, it leaves us wondering about the relationship between the components of an object and the object’s identity. How much do the parts tell you about the whole?


Complexity theory (cf. work by researchers at the Santa Fe Institute) relates the explanatory gap between the micro and macro in statistical mechanics to the inability to describe a complex network’s evolution in time using a finite (comprehensible) set of differential equations. The economy, ecosystems, weather, three-body systems, the brain, and even the organization of the living organisms that compose us are all chaotic dynamical systems that fall into the domain Gödel himself proved to be non-contradictory yet necessarily beyond the limits of understanding. By understanding, I mean at least in the sense of symbolic representation… some expression of deterministic causation between the parts of a system and the whole. Less esoterically, neural networks and genetic algorithms can produce solutions “better” than manual human labor (less complex, more resilient, more effective), but these are not amenable to causal analysis or symbolic deconstruction. I point the reader to the use of genetic programming in the design of the NASA ST5 spacecraft antenna as an example. I also point the reader to the work of Mitchell, Crutchfield, and Das from 1996, in which genetic algorithms were used to evolve cellular automata toward attractor states, enabling global synchronicity via local interaction in a fashion similar to the way fireflies synchronize their flashing, and revealing a “particle physics” that allowed information to be conserved from generation to generation. I think that for us to succeed with transhumanist goals, including life extension or mind uploading, we need to resolve the Theseus paradox, and that probably requires reconciling statistical mechanics with statistical inference. Check out Roy Frieden’s Extreme Physical Information principle and the application of the variational principles MaxEnt / Minimum Fisher Information in machine learning. The article “Eluding the Demon – How Extreme Physical Information Applies to Semiosis and Communication” suggests that the field of artificial life, and this aforementioned reconciliation, requires an understanding of estimation theory in the context of quantum uncertainty and apparent randomness.

The theory of everything that is so preciously sought after is not a unification of general relativity with quantum mechanics… it is instead the unification of statistical mechanics with statistical inference. An investigation of variational principles, Frieden’s Extreme Physical Information, and the like will lead us toward a future of transhumanism, but it will not be a future of mind uploading or life extension. It will be an era in which we use genetic programming to evolve digital, artificial life. There will be no “equations” or design… it will, just as we observe, occur spontaneously as interacting digital agents begin to synchronize into a chaotic, complex dynamical system characterized by self-similarity and power-law rank distributions. The forthcoming video game “No Man’s Sky” is a teaser of this future, and it leverages procedural generation; such generative art produces complexity and apparent diversity not unlike Darwinian evolution, yet, just like Darwinian evolution, the procedural generation algorithm itself can be described in just a few lines of source code. And, just as in nature, phenomenal perception only occurs locally when information is transmitted from the source to the observer. With no observer logged in to the game, there is no need to render any information; can the world really be said to exist at that time?
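As a minimal sketch of that last point (my own toy in Clojure, not any actual game’s algorithm), a procedurally generated world can be nothing more than a small pure function of coordinates: nothing is stored, and nothing is “rendered” until an observer samples a location.

;; Deterministically maps a seed and integer coordinates to a terrain height in [0, 1).
(defn world-at
  [seed x y]
  (/ (mod (Math/abs (long (hash [seed x y]))) 10000) 10000.0))

;; (world-at 42 1000 -3) always returns the same height, yet no part of the
;; world exists as data until somebody asks for it.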

(Meta?)physical poetry

I had a conversation with my father today about Gödel’s incompleteness theorems, genetic programming, evolved antennas, metabolic scaling, and fractal distribution networks. We talked about the elegance of using biologically-inspired techniques to evolve non-linear black-box algorithms that are more efficient than linear closed-form analytical solutions produced by the best engineers. We discussed the ubiquitous nature of mutation, selection, and inheritance (of ideas and matter) and the relationship between evolutionary fitness and the “preferential attachment” and recursion that generate fractal scale-free networks.

My father is an ISTJ, and a very concrete thinker. He understood what I said and related these abstract patterns and ideas to everyday examples. I was moved by his visualizations, and thought that perhaps a book or movie or some other work of art could one day help people to understand how these simple guiding mechanisms of the cosmos elude attempts at reduction and linear estimation and how they produce the tempting illusion of randomness and stochastic behavior. In particular, he asked me to imagine being in a rocket traveling among the stars and seeing the same fractal pattern as I maneuver between them; I get closer to them but it seems as though I’ll never get there, much like the arrow in Zeno’s paradox. He asked me to imagine a drive through a forest and a resulting kaleidoscopic approach of trees moving from the far field to the near field.

There is a hidden language that unites genetics, memetics, fractal networks, nonlinear dynamics, attractors, least action, maximum Fisher information, asymptotic analysis, general intelligence, self-awareness, renormalization, uncertainty, and topological dimensions; I believe that imagery like that suggested by my father may one day popularize the new non-reductive, simulation-based approach to science and one day bring about the discovery of that unified theory. The mechanism will be so simple it will convey no Shannon information at all and its Kolmogorov complexity will be zero; it will be the ineffable Tao, ineffable as a simple consequence of Gödel’s results. As time passes, I discover that apparent differences in descriptive languages are illusory; every field of study is looking at one particular branch of a large tree. The search continues.

Chaos, evolution, self-awareness, least action, Maximum Fisher Information, scale-free networks

“The day science begins to study non-physical phenomena, it will make more progress in one decade than in all previous centuries of its existence.”  —Nikola Tesla

Recursion gives rise to fractal dimension. This is studied in the field of complexity science (also known as systems science, or the science of complex adaptive systems) because fractal distribution networks and the scale-free, power-law degree distributions that characterize them are ubiquitous in nature, a common thread in physics as well as the social sciences. These networks appear to result from “preferential attachment,” which is to say that the generating mechanism is one that abides by the aphorism “the rich get richer.” This preferential attachment mechanism of self-organization is also the principle behind natural selection. There are two interesting case studies I want to share so that I may relate the concepts of fractal networks and emergence to Tononi’s Integrated Information Theory of consciousness, Hofstadter’s suggestion of recursion as the mechanism of self-awareness, and the causal sets approach to quantum gravity. My goal of late is to explore more intimately the relationship between hypercomputation (non-causal, non-local coordination: the explanatory gap between the micro and macro studied in statistical mechanics) and general intelligence (data compression); I plan to apply multi-agent simulations and genetic algorithms to really understand the role evolution plays in this relationship. I’ll be hosting my software projects and invite anyone interested to join me in this exploration.
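Here is a minimal Clojure sketch (names and structure are my own illustration, not from the sources cited in this post) of “the rich get richer”: a graph grown by preferential attachment, where each new node links to an existing node chosen with probability proportional to its current degree. It reuses the roulette-wheel idea from wrand in the ant simulation above.

;; Returns the edge list of a graph grown to n nodes by preferential attachment.
(defn grow-preferential
  [n]
  (loop [node 2
         degrees {0 1, 1 1}            ;start from the single edge 0-1
         edges  [[0 1]]]
    (if (>= node n)
      edges
      (let [total  (reduce + (vals degrees))
            spin   (rand total)
            ;; roulette wheel over existing nodes, weighted by degree
            target (loop [entries (seq degrees) sum 0]
                     (let [[k d] (first entries)]
                       (if (< spin (+ sum d))
                         k
                         (recur (rest entries) (+ sum d)))))]
        (recur (inc node)
               (assoc degrees target (inc (degrees target)) node 1)
               (conj edges [node target]))))))

;; e.g. (frequencies (map second (grow-preferential 1000))) shows a heavy-tailed
;; attachment count: a few early nodes accumulate most of the links.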

In 1993, Melanie Mitchell of the Santa Fe Institute and colleagues performed an experiment that involved evolving a population of one-dimensional, two-state cellular automata using a genetic algorithm to eventually perform a global computation task. The researchers wondered how the algorithm enabled the automaton (which is itself a collection of agents interacting locally) to coordinate and perform a global task. Notice that such seemingly emergent coordination among locally interacting components is exactly the sort of hypercomputational, non-causal, non-reductionist link between the micro and macro that is studied in statistical mechanics. By performing edge detection on the lattices produced by automata of successive generations, the researchers discovered that the genetic algorithm was enabling the cellular automaton to learn to perform the global task by encoding information in the form of a “particle physics” along the time axis of the lattice.
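A minimal sketch of the kind of lattice update a genetic algorithm would score on such a task (my own simplification in Clojure: a radius-1 “elementary” rule, whereas Mitchell, Crutchfield, and Das actually evolved radius-3 rules on 149-cell lattices; all names here are mine):

;; Applies a radius-1 binary CA rule (an integer 0-255, Wolfram numbering)
;; once to a cyclic lattice (vector) of 0s and 1s.
(defn ca-step
  [rule lattice]
  (let [n (count lattice)]
    (vec (for [i (range n)]
           (let [l   (lattice (mod (dec i) n))
                 c   (lattice i)
                 r   (lattice (mod (inc i) n))
                 idx (+ (* 4 l) (* 2 c) r)]
             (if (bit-test rule idx) 1 0))))))

;; Iterates the rule for steps generations, returning the initial lattice plus its successors.
(defn run-ca
  [rule lattice steps]
  (take (inc steps) (iterate (partial ca-step rule) lattice)))

;; e.g. (run-ca 110 (vec (repeatedly 64 #(rand-int 2))) 32)
;; A GA fitness for density classification would reward a rule whose final lattice
;; settles to all 1s when the initial density of 1s exceeds 1/2 and to all 0s otherwise;
;; "particles" appear as moving boundaries in the space-time diagram run-ca produces.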

When we say an object has a fractal dimension, we mean it exhibits detail at every scale of magnification and that its dimension, unlike that of ordinary geometric objects, is typically non-integer. There are techniques, including the box-counting method, for estimating the fractal dimension of a pattern. Fractals are generated by recursion; that is to say, they occur as a result of self-referential processes. The Mandelbrot set is one popular example. The bifurcation diagram of the logistic map is itself a fractal with a known fractal dimension. The logistic map is ‘an archetypal example of how complex, chaotic behaviour can arise from very simple non-linear dynamical equations. […] This nonlinear difference equation is intended to capture two effects: reproduction where the population will increase at a rate proportional to the current population when the population size is small, and starvation (density-dependent mortality) where the growth rate will decrease at a rate proportional to the value obtained by taking the theoretical “carrying capacity” of the environment less the current population.’
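A minimal Clojure sketch of the logistic map x(n+1) = r · x(n) · (1 − x(n)), the “archetypal example” quoted above (function and parameter names are my own):

;; Returns n iterates of the logistic map with parameter r, starting from x0,
;; after discarding burn-in transient iterates.
(defn logistic-orbit
  [r x0 burn-in n]
  (->> (iterate #(* r % (- 1.0 %)) x0)
       (drop burn-in)
       (take n)))

;; e.g. (logistic-orbit 3.2 0.5 1000 6) settles onto a period-2 cycle, while
;; (logistic-orbit 3.9 0.5 1000 6) wanders chaotically over (0, 1); sweeping r
;; and plotting these values reproduces the bifurcation diagram described above.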

In the 1990s, West, Brown, and Enquist investigated the mystery of why empirical data appeared to contradict the geometrically based hypothesis that the metabolic rate of an organism (the rate at which it radiates heat) should be proportional to the organism’s mass raised to the two-thirds power. What they discovered is that the empirical data, which actually suggested the metabolic rate to be proportional to the mass raised to the three-fourths power, could be explained by modifying the geometric hypothesis. In particular, one should not consider the rate to be proportional to the surface area of a three-dimensional object but rather of a four-dimensional one. When considering the internal anatomy of biological organisms, they discovered that the fractal branching of the material distribution systems resulted in an additional effective dimension. They concluded that “although living things occupy a three-dimensional space, their internal physiology and anatomy operate as if they were four-dimensional… fractal geometry has literally given life an added dimension.” Scale-free, fractal distribution networks similar to the nervous and respiratory systems of biological organisms appear everywhere, and similar “quarter-power laws” have been empirically measured, relating, for example, the size of a city to its crime, GDP, income, and patents.
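In symbols, the contrast described above (a sketch of the standard statement, not a derivation of West, Brown, and Enquist’s model) is between the naive surface-area expectation and the observed Kleiber scaling of metabolic rate B with mass M,

B \;\propto\; M^{2/3} \quad\text{(Euclidean surface-area argument)}, \qquad
B \;\propto\; M^{3/4} \quad\text{(empirical quarter-power law)},

and reading 3/4 as d/(d+1) with d = 3 is one way to express the claim that fractal distribution networks behave as if they fill an extra dimension.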

Featured image extracted from the paper “Evolving Cellular Automata with Genetic Algorithms: A Review of Recent Work” by Mitchell, Crutchfield, and Das.

Arrow of Time Explained? Emergence = Intelligence = Entropy = Hypercomputation

From an earlier post: “Today, I viewed a recording from FQXi 2014 where Scott Aaronson from MIT talks about the Physical Church-Turing Thesis. He brought up irreversibility. That made me think about the claim made by one paper I’d recently talked about [by AI researcher Ben Goertzel] that consciousness may be hypercomputational. Aaronson drew the link for me between hypercomputation and irreversibility. Hypercomputation implies irreversibility because, by definition, you cannot enumerate the sequence of instructions of a hypercomputation. If you don’t know how something was done, how could you undo it?”

From another previous post: the undecidability of the spectral gap verifies that there are, in fact, hypercomputational aspects of nature. This falsifies the Physical Church-Turing Thesis. To be a hypercomputational process means to be emergent, i.e., the sum is greater than the parts; otherwise the process could be fully described by its components and would not be hypercomputational. As noted above, hypercomputation implies irreversibility. The verified existence of hypercomputational, emergent phenomena in nature explains why we have the arrow of time. Furthermore, this irreversibility is shown to be linked with intelligence by Wissner-Gross’s Entropica simulation. From statistical mechanics, entropy is the measure of irreversibility, and it is also apparently the measure of emergence and hypercomputability. We already know that thermodynamic entropy and Shannon entropy are duals, and that compression drives a representation toward its Shannon entropy limit, an objective of artificial intelligence algorithms. I speculate that if we equate Tononi and Koch’s measure of integrated information, phi, with thermodynamic entropy, we may reveal precisely how the arrow of time arises from the fact that hypercomputational, emergent intelligence is a fundamental operating basis of nature. To explain the first-person experience, “consciousness,” is a separate issue; we should refer to the works of, e.g., Bruno Marchal or Max Tegmark.
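To pin down the sense in which entropy and compression are tied together (a standard statement from source coding, not a new claim): for a source X with distribution p, the expected length \mathbb{E}[L] of an optimal lossless prefix code is squeezed between the Shannon entropy and the entropy plus one bit,

H(X) = -\sum_{x} p(x)\,\log_{2} p(x) \;\le\; \mathbb{E}[L] \;<\; H(X) + 1,

so a learner that compresses its observations well has, in effect, internalized their statistical structure.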

Argument that AI cannot be conscious

There cannot exist a scientific theory that derives the first-person experience from third-person observations. This necessarily means that an AI implemented on a standard Turing machine cannot experience in the first person. There cannot be a solution to Chalmers’s Hard Problem of consciousness, but there is potentially a way to “measure” the presence of consciousness even if it can’t be implemented on a traditional computer. I offer my argument below, though it may need some patchwork and reorganization. The discussion thread from which this information is extracted can be found on the Everything List.

Digital physics – the hypothesis that all of nature can be reproduced on a computer – rests upon the Strong Church-Turing Thesis, which posits that nature does not admit non-computable real numbers:

AI researcher Ben Goertzel discusses the “Hypercomputable Humanlike Intelligence hypothesis, which suggests that the crux of humanlike intelligence is some sort of mental manipulation of uncomputable entities – i.e., some process of “hypercomputation” [1-6].”:

This recent article seems to imply that non-computable real numbers exist in nature. If so, this seems to falsify the Strong Church-Turing Thesis. This in turn seems to make the HHI hypothesis possible, and if the HHI hypothesis could be verified, then according to Goertzel’s argument, science would never be able to describe cognition. (He’s concerned about the implications that would have for neuroscience and AI, naturally.):

I should mention this paper, which gives credence to HHI by beginning with Tononi’s Integrated Information Theory of consciousness and, by assuming consciousness is a lossless integrative process, concluding that it would not be computable.

I received an objection on the Everything List, pointing me to a post on Scott Aaronson’s blog about Integrated Information Theory.

Alright, I’m going to try to piece things together and start by backing up. The determination of whether matter, described by QM, has a spectral gap has recently been shown to be undecidable. This implies some aspect of physics is not computable, and therefore the Strong Church-Turing Thesis and digital physics are invalidated. It also implies that the reductionist hypothesis is invalid, because there will be explanatory gaps from the microscopic to the macroscopic, and that emergence (the sum is greater than the parts) should be elevated to a first-class component of nature. The phi measure in IIT “will be high when there is a lot of information generated among the parts of a system as opposed to within them” and can therefore, by definition, be considered a measure of emergence. Tononi and Koch claim it is a measure of consciousness, though Koch is ironically a self-proclaimed “romantic reductionist.” As reductionism is invalidated, the sum-greater-than-parts measure phi of Tononi and Koch corresponds to one hypercomputable aspect of nature; we say that to be hypercomputable, or irreducible (i.e., to an algorithm), is to be emergent (sum greater than parts). However, the interpretation of phi (a measure of emergence) as consciousness is based on intuition. It has not been verified, presumably due to a lack of instrumentation. If this association of phi with consciousness could be verified, then we could safely assume the HHI hypothesis to be true, i.e., that consciousness is a hypercomputable aspect of nature.
The argument by Maguire et al. starting with IIT (and also corroborating the HHI hypothesis) seems to agree with the spectral-gap evidence that reductionism is invalid. However, it doesn’t invalidate IIT as claimed, because IIT implies hypercomputability; the phi measure is a measure of emergence. If consciousness is demonstrated to correlate with phi and thus proved to be hypercomputable, Goertzel says we can still produce humanlike AI with Turing machines by modeling imitation, intuition, and chance. Furthermore, if HHI is true, then any reductionist attempts to describe consciousness, like the Orch-OR theory of Penrose and Hameroff, are doomed to fail because a hypercomputable (emergent) process, by definition, cannot be described in any formal language (as Goertzel points out in his paper).
Aaronson’s zombie argument is just claiming that no scientific theory could explain the first-person experience and solve the Hard Problem of consciousness. He just observes it would be ridiculous if ‘someone claims that integrated information “explains” why consciousness exists.’ Notice that if consciousness is hypercomputable (correlates with phi), then that is exactly what Goertzel concludes in his paper. Aaronson even admits ‘we can easily interpret IIT as trying to do something more “modest” than solve the Hard Problem, although still staggeringly audacious. Namely, we can say that IIT “merely” aims to tell us which physical systems are associated with consciousness and which aren’t, purely in terms of the systems’ physical organization. The test of such a theory is whether it can produce results agreeing with “commonsense intuition”: for example, whether it can affirm, from first principles, that (most) humans are conscious; that dogs and horses are also conscious but less so; that rocks, livers, bacteria colonies, and existing digital computers are not conscious (or are hardly conscious); and that a room full of people has no “mega-consciousness” over and above the consciousnesses of the individuals.’
I think the test might actually suggest that a room full of people does have a “mega-consciousness,” but the rest is right on point. Now we just need some instrument that we can wave over human beings, squirrels, and chunks of dirt that can evaluate phi (a measure of emergence/ hypercomputability) in order to locally correlate it with consciousness.
I should mention one important exception… if we could harness the undecidability of the spectral gap to implement infinite-precision real weights for Hava Siegelmann’s analog recurrent neural network model, then I suppose we could in theory achieve hypercomputation. This would have many incredible consequences. We wouldn’t be able to develop an algorithm to reproduce consciousness, but perhaps by trying all possibilities (perhaps guided by intuition, as Goertzel suggests) we might then actually stumble upon a conscious being. Maybe our brains are analog recurrent neural networks that exploit this undecidability of the spectral gap…