π-calculus for biochemistry and matter as shared memory

I think the evidence for the effectiveness of the π-calculus in modeling biology (as curated by Microsoft's Luca Cardelli), coupled with the suggestion by physicist Diederik Aerts et al. that matter is a "shared address space" or a "communication channel," makes interpretation (B) of the Strong Anthropic Principle, which I reproduce below, more credible than popular belief allows. Specifically, the interpretation should be modified to incorporate multiple, communicating observers ("processes"), reflecting Donald Hoffman's "Conscious Agent Thesis" and the theories of cybernetics and biosemiotics. The π-calculus can be modeled with 2-categories (see "Higher category models of the pi-calculus" by Google's Mike Stay), and theoretical biologist Robert Rosen argued in "Life Itself" that category theory can accommodate Aristotelian "final cause," which appears to be a crucial element for understanding why evolution obeys neither random search nor intelligent design (see Gregory Chaitin's argument at time marker 33:47 in the following video, which leverages Kolmogorov complexity and Busy Beaver functions from his work on metabiology). Final cause cannot be modeled with recursion and differential equations alone; Rosen illustrated this by showing that the phase-space concept of statistical mechanics is derived by applying Newton's second law to discard the higher-order terms of a Taylor series. Since our modern formalizations of natural systems rely on classical Turing machines implemented with recursion and differential equations, they cannot model final cause, and emergent, self-organized, reactive phenomena therefore remain puzzling to us.
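As a toy illustration of the π-calculus idea invoked above, the sketch below (plain Python, with queues standing in for channels; every name in it is illustrative, not drawn from any of the cited works) shows the calculus's signature feature: a channel being passed over another channel, so that the communication topology itself is mobile.

```python
import queue
import threading

# Toy sketch of pi-calculus 'mobility': a channel is sent over another
# channel, and the receiver then communicates on the channel it received.
# Queue objects stand in for pi-calculus names.
def server(link):
    reply = link.get()      # x(y): receive a channel over channel x
    reply.put("bound")      # send on the channel that was received

link = queue.Queue()
private = queue.Queue()
threading.Thread(target=server, args=(link,)).start()
link.put(private)           # x<y>: send the private channel along x
msg = private.get()         # synchronize on the channel that moved
print(msg)
```

This mobility of names, rather than any numeric computation, is what makes the π-calculus apt for biochemistry, where molecular binding continually rewires who can interact with whom.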

We know that graph rewriting, as performed by Robin Milner's Bigraphical Reactive Systems, generates edge-of-chaos networks of moderate Kolmogorov complexity exhibiting self-similarity and power laws, as suggested by the Barabási–Albert model. My investigation in the immediate future will focus on Bigraphical Reactive Systems (a behavioral specification language that generalizes the π-calculus) and their ability to 1) perform super-Turing computation and provide the oracle that Chaitin used in the video above to perform evolutionary fitness evaluation, 2) produce cosmological causal sets in an algorithmic fashion in accordance with the work of Tommaso Bolognesi, particularly those lattices that appear to exhibit scale invariance, and 3) explain how third-person determinacy (in Bruno Marchal's terms, "3-determinacy") can entail Heisenberg's uncertainty principle ("1-indeterminacy") in agreement with the Universal Dovetailer Argument. At some point I will also need to understand why the communicating processes or agents themselves appear to be performing data compression (see Juergen Schmidhuber) and entropy maximization (see Alex Wissner-Gross), and how Bernard Roy Frieden's Extreme Physical Information can be derived from these Bigraphical Reactive Systems (thereby explaining the abundance of hyperbolic second-order partial differential equations in the physical sciences, as quoted by Max Tegmark in section 7.6 of Russell K. Standish's "Theory of Nothing").
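The power-law claim attributed to the Barabási–Albert model can be made concrete in a few lines. The sketch below implements only the preferential-attachment growth rule (it is not an implementation of bigraph rewriting), and shows a heavy-tailed degree distribution emerging: a handful of hubs dwarf the typical node.

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow a graph by preferential attachment: each new node attaches
    m edges to existing nodes chosen with probability proportional to
    their current degree (the 'rich get richer' rule)."""
    rng = random.Random(seed)
    edges = [(i, j) for i in range(m + 1) for j in range(i)]  # complete core
    # each node appears in 'targets' once per incident edge, so uniform
    # sampling from it is exactly degree-weighted choice
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend((new, t))
    return edges

edges = barabasi_albert(2000, 2)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
degs = sorted(degree.values())
max_deg, median_deg = degs[-1], degs[len(degs) // 2]
print(max_deg, median_deg)  # the largest hub far exceeds the median node
```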

The following supplementary analysis is reproduced from http://www.colorado.edu/philosophy/vstenger/Cosmo/anthro_skintel.html :

Strong Anthropic Principle (SAP): The Universe must have those properties which allow life to develop within it at some stage in its history.

This suggests that the coincidences are not accidental but the result of a law of nature. But it is a strange law indeed, unlike any other in physics. It suggests that life exists as some Aristotelian “final cause.”

Barrow and Tipler (22) claim that this can have three interpretations:

(A) There exists one possible Universe ‘designed’ with the goal of generating and sustaining ‘observers.’

This is the interpretation adopted by most theistic believers.

(B) Observers are necessary to bring the Universe into being.

This is traditional solipsism, but also is a part of today’s New Age mysticism.

(C) An ensemble of other different universes is necessary for the existence of our Universe.

This speculation is part of contemporary cosmological thinking, as I will discuss below. It represents the idea that the coincidences are accidental. We just happen to live in the particular universe that was suited for us.

Partial Order, Entailment, Communication, Compression

This post will begin with the causal set model of cosmology, an approach that models spacetime “events” as partially ordered sets (directed acyclic graphs). Actually, it will begin with a category-theoretic generalization of this model, leveraging concepts such as functors, natural transformations, and colimits. We assume that, in the limit, such relational “augmented abstract block diagrams” (borrowing Rosen’s terminology) exhibit maximal entailment; that is, compared with the cardinality of the diagram, few morphisms will themselves be left unentailed by other morphisms.
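A minimal runnable picture of a causal set can be built with the "transitive percolation" growth rule (my choice of dynamics for illustration; Bolognesi's algorithmic models are more structured): events are born in sequence, each pair is causally linked with some probability, and the relation is closed under transitivity so that a genuine partial order results.

```python
import random

def transitive_percolation(n, p, seed=1):
    """Sample a causal set: events 0..n-1 in birth order; each pair
    (i, j) with i < j is directly linked with probability p, and the
    relation is closed under transitivity to yield a partial order."""
    rng = random.Random(seed)
    below = [set() for _ in range(n)]   # below[j] = causal past of event j
    for j in range(n):
        for i in range(j):
            if rng.random() < p:
                below[j].add(i)
                below[j] |= below[i]    # inherit i's past: transitivity
    return below

past = transitive_percolation(100, 0.05)
print(sum(len(s) for s in past))  # total number of causal relations
```

Acyclicity comes for free from the birth order, so the structure is a DAG of spacetime events in exactly the causal-set sense.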

The post will attempt to bridge the gap between such a static, category-theoretic, relational and observer-independent formalism of explanation (I daresay, reality itself) and the dynamic, ergodic, biosemiotic model of self-organization that models evolution by the communication of learning agents (“anticipatory systems” in Rosen’s parlance, or autopoietic systems) performing two tasks: 1) pattern recognition / data compression / model building, and 2) maximization of Fisher information or, in anthropomorphic terms, maximizing future possibilities. In effect, we will be reconciling Tegmark’s Mathematical Universe Hypothesis (i.e. that relations between numbers — in an abstract sense, patterns themselves — are the only things that objectively exist in “reality” irrespective of the presence of an observer or the names issued to their entities) with the learning-agent-based models of organization and emergence provided by Markovian biosemiotics.

It seems the first task is to complete the analogy binding data compression to estimation theory (i.e., the entropy maximization that takes place during data compression is mathematically isomorphic to Fisher information maximization: the strategic positioning of a sensor so as to observe recursion, i.e., differential equations, in sampled data). Of course, Frieden’s Extreme Physical Information provides us with a model for how intentional agents communicate with one another and with the open systems in which they reside. With a complete analogy, the appearance of intentionality (that is, any localized combination of model building and maximization of future possibilities) can then be understood as a natural result of the process of communication. Communication, here, should be understood as the transmission of probability distributions from one location to another such that the two distributions become identical. The meaning of “location” and “transmission” is what we will need to derive from the static, category-theoretic model of cosmology.
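That definition of communication can be given a toy form: two agents repeatedly nudge their probability distributions toward one another until the distributions coincide. The mixing rule below is purely illustrative (it is not drawn from Frieden); what matters is that the distance between the distributions contracts at every round.

```python
def mix(p, q, rate=0.5):
    """One round of 'communication': each agent moves its distribution
    part of the way toward the shared average."""
    avg = [(a + b) / 2 for a, b in zip(p, q)]
    p2 = [(1 - rate) * a + rate * m for a, m in zip(p, avg)]
    q2 = [(1 - rate) * b + rate * m for b, m in zip(q, avg)]
    return p2, q2

def total_variation(p, q):
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

p, q = [0.7, 0.2, 0.1], [0.1, 0.3, 0.6]
for _ in range(20):
    p, q = mix(p, q)
tv = total_variation(p, q)
print(tv)  # near zero: the two distributions have become identical
```

Each round halves the difference between the two distributions while preserving their normalization, so "transmission" here is literally a contraction toward a shared distribution.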

The process of communication and conversation can be understood as a movement toward thermodynamic equilibrium. Consider a Chinese tangram puzzle: the individual shapes can be shuffled about but will only fall into place in an emergent configuration that was predetermined by their interaction with each other and with the environment. The static geometry of the pieces and the constraints imposed by their environment predetermined their resting places. In a sense, the pieces of the tangram had no say in their ultimate distribution and ordering; it was only the way their temporal form interacted with the environment that determined their fate. Neither the form of any entity nor the environment alone is sufficient to determine the outcome. This post is trying to understand how the final configuration of the tangram comes about from the shaking of the pieces: what causes the appearance of the shaking (the appearance of dynamics as measured by a consensus of first-person, subjective perspectives) given some third-person perspective of a static, category-theoretic augmented abstract block diagram?

To transition from a maximally entailed augmented abstract block diagram specification in category theory (complete with functors and natural transformations, which themselves enable learning and indeed formally model the concept of modeling) to the thermodynamic-equilibrium process of communicating probability distributions, we refer to an anecdote. Here, I suggest the reader investigate the discussion of layered architectures using Algebraic Higher-Order (AHO) nets in the context of formally modeling mobile ad-hoc networks (MANETs). Indeed, such self-configuring networks of intelligent radios are quite analogous to what I described above as “learning-agent-based models of organization and emergence provided by Markovian biosemiotics.” The layered architectures using AHO nets appear to provide the sought-after analogy with category theory. Introductory details may be found in “Formal Modeling and Analysis of Mobile Ad Hoc Networks and Communication Based Systems using Graph and Net Technologies” by Kathrin Hoffmann, Hochschule für Angewandte Wissenschaften, Hamburg, Germany. A related text, “Petri Net Technology for Communication-Based Systems,” was published by Springer in 2003. Hoffmann’s analysis of concurrency and partial order in the context of applying category theory to the distributed configuration of intelligent radios reminds me of the recent articles I shared by Tommaso Bolognesi, which discuss the use of process algebra in the context of algorithmic causal sets and observers: “Event patterns: from process algebra to algorithmic causal sets” and “Internal observers in causet-based algorithmic spacetime.”

MANET technology is now being investigated and developed in order to enable the emerging “Internet of Things” and to respond to the growth of complex information networks. It seems this commercial source of funding may inadvertently help answer open questions in cosmology, biology, and general intelligence.

Category Theory, Final Cause, Evolution

Recursion/mechanics is not enough. We need another tool in the explanatory toolbox, and that tool may very well be category theory. Otherwise we must heuristically optimize configurations of sensors until they sample streams of data that themselves exhibit the contagious recursion property needed to leverage mathematical induction and perform prediction.

The phase space of statistical mechanics results from applying Newton’s second law to cut off the higher-order terms of an expansion by Taylor’s theorem. The recursive nature of Taylor’s theorem encapsulates the “causal” nature of mechanics.

Category theory gives us more than recursion and traditional “causality” (which itself relies on the notion of subjective streams of sensor measurements). It lets us model how an object’s function in a larger system gives rise to its particular structural implementation. Think about evolution and genetic algorithms.

It reverses our perception of cause and effect: it is not the material interaction of independent particles that gives rise to emergent phenomena; it is the emergent phenomena that express themselves as the interaction of those particles.

What would it mean for the attractor to exhibit “maximum entropy” or, equivalently, maximum compression? The holographic principle tells us not just that information is more fundamental than matter and energy, but that information itself is a byproduct of final cause and the mechanism by which evolution proceeds.

Event patterns: from process algebra to algorithmic causal sets — tommaso bolognesi

Notions of event and event occurrence play a central role in various areas of computer science and ICT (Information and Communication Technology). In this proposal we are particularly interested in event concepts from process algebras such as Milner’s Calculus of Communicating Systems (CCS) and Hoare’s Communicating Sequential Processes (CSP), and related languages (e.g. LOTOS), since […]

via Event patterns: from process algebra to algorithmic causal sets — tommaso bolognesi

Internal observers in causet-based algorithmic spacetime. — tommaso bolognesi

(T.B. and Vincenzo Ciancia) Notions of observation play a central role in Physics, but also in Theoretical Computer Science, notably in Process Algebra. The importance in Physics of the mutual influences between observer and observed phenomena is well recognized, and yet the properties of the former are in general fuzzily specified, in spite (or because) […]

via Internal observers in causet-based algorithmic spacetime. — tommaso bolognesi

The Omega Point

I suspect Chaitin denounces static metamathematics because of its association with scientific dogma; in other words, people become attached to a current body of knowledge and often forget that the body of knowledge itself transforms wildly over time. There is a political inertia in academia: new discoveries that seem to contradict prior bodies of knowledge are often ignored for some time. Following Schmidhuber’s theory of creativity and art, I would claim that compression progress (inference) serves as Chaitin’s notion of “dynamic” metamathematics and is a necessary characteristic of life itself. Organisms and machines interact through the lingua franca known as Information. Regarding the modern taboo against teleology / “final cause” / vitalism / intentionality (except in systems sciences like cybernetics and biosemiotics), Terence McKenna in this interview has something to say about how evolution and attractors in chaos theory are really just particular interpretations of compression progress: a directed path, or asymptotic approach, toward Minimum Description Length. He says “all nature aspires for this state of perfect novelty… you could almost say that Nature abhors habit and so it seeks the novel by producing various kinds of phenomena at every level in biology, chemistry, and society. And so there really is a purpose to the universe… [hyper-complexification].” This is congruent with Rosen’s notion of relational biology and the source of what we call “randomness.” The convergent evolution of flight in species as diverse as insects and birds, and of tool usage and self-identification in species as diverse as cetaceans and primates, suggests there really may be some “Omega Point” toward which all evolution approaches. It is not a random walk in an infinite space; it is a directed search guided by the maximization of uncertainty, of novelty spawned by relational adaptation at all scales.

Biosemiotics, teleology, and agent-based modeling

The world needs a replacement for magic and religion in the modern age of strict rationalism and materialism. We have stigmatized emotional expression, and individuals feel increasingly neurotic under the systematic prospect of criticism. Symbolic reasoning is not what makes us joyful; Computer Algebra Systems can do that. What makes us joyful is our ability to compress data in order to make real-time decisions that increase our degrees of freedom. Inference, not deduction. A culture shift will occur when we begin to seriously consider the notion that, contrary to assumptions often made in probabilistic modeling today, the likelihood of future events is subject to change based on the adaptive interaction of learning agents. As theoretical biologists have discovered, what we observe makes more sense if we stop modeling with lifeless particles and start acknowledging that objects appear to have “purpose” or “intention”; that they learn and adapt in relation to one another, and that this adaptation drives the system toward attractors. Call to action: learn Clojure, perform Agent-Based Modeling of data-compression agents that maximize degrees of freedom, and check the results against (bio)semiotic theory and the Extreme Physical Information principle. Please refer to “Life Itself” by Robert Rosen and “Origins of Order” by Stuart Kauffman for elaboration. I would also recommend “The Amoeba’s Secret” by Bruno Marchal. To get started, install Leiningen and clone this software repository, which provides Clojure code to simulate ant foraging behavior:


;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; Ant sim ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Copyright (c) Rich Hickey. All rights reserved.
; The use and distribution terms for this software are covered by the
; Common Public License 1.0 (http://opensource.org/licenses/cpl.php)
; which can be found in the file CPL.TXT at the root of this distribution.
; By using this software in any fashion, you are agreeing to be bound by
; the terms of this license.
; You must not remove this notice, or any other, from this software.

;dimensions of square world
(def dim 80)
;number of ants = nants-sqrt^2
(def nants-sqrt 7)
;number of places with food
(def food-places 35)
;range of amount of food at a place
(def food-range 100)
;scale factor for pheromone drawing
(def pher-scale 20.0)
;scale factor for food drawing
(def food-scale 30.0)
;evaporation rate
(def evap-rate 0.99)

(def animation-sleep-ms 100)
(def ant-sleep-ms 40)
(def evap-sleep-ms 1000)

(def running true)

(defstruct cell :food :pher) ;may also have :ant and :home

;world is a 2d vector of refs to cells
(def world
     (apply vector
            (map (fn [_]
                   (apply vector (map (fn [_] (ref (struct cell 0 0)))
                                      (range dim))))
                 (range dim))))

(defn place [[x y]]
  (-> world (nth x) (nth y)))

(defstruct ant :dir) ;may also have :food

(defn create-ant
  "create an ant at the location, returning an ant agent on the location"
  [loc dir]
  (sync nil
    (let [p (place loc)
          a (struct ant dir)]
      (alter p assoc :ant a)
      (agent loc))))

(def home-off (/ dim 4))
(def home-range (range home-off (+ nants-sqrt home-off)))

(defn setup
  "places initial food and ants, returns seq of ant agents"
  []
  (sync nil
    (dotimes [i food-places]
      (let [p (place [(rand-int dim) (rand-int dim)])]
        (alter p assoc :food (rand-int food-range))))
    (doall
     (for [x home-range y home-range]
       (do
         (alter (place [x y])
                assoc :home true)
         (create-ant [x y] (rand-int 8)))))))

(defn bound
  "returns n wrapped into range 0-b"
  [b n]
  (let [n (rem n b)]
    (if (neg? n)
      (+ n b)
      n)))

(defn wrand
  "given a vector of slice sizes, returns the index of a slice given a
  random spin of a roulette wheel with compartments proportional to
  slices."
  [slices]
  (let [total (reduce + slices)
        r (rand total)]
    (loop [i 0 sum 0]
      (if (< r (+ (slices i) sum))
        i
        (recur (inc i) (+ (slices i) sum))))))

;dirs are 0-7, starting at north and going clockwise
;these are the deltas in order to move one step in given dir
(def dir-delta {0 [0 -1]
                1 [1 -1]
                2 [1 0]
                3 [1 1]
                4 [0 1]
                5 [-1 1]
                6 [-1 0]
                7 [-1 -1]})

(defn delta-loc
  "returns the location one step in the given dir. Note the world is a torus"
  [[x y] dir]
  (let [[dx dy] (dir-delta (bound 8 dir))]
    [(bound dim (+ x dx)) (bound dim (+ y dy))]))

;(defmacro dosync [& body]
;  `(sync nil ~@body))

;ant agent functions
;an ant agent tracks the location of an ant, and controls the behavior of
;the ant at that location

(defn turn
  "turns the ant at the location by the given amount"
  [loc amt]
  (dosync
   (let [p (place loc)
         ant (:ant @p)]
     (alter p assoc :ant (assoc ant :dir (bound 8 (+ (:dir ant) amt))))))
  loc)

(defn move
  "moves the ant in the direction it is heading. Must be called in a
  transaction that has verified the way is clear"
  [loc]
  (let [oldp (place loc)
        ant (:ant @oldp)
        newloc (delta-loc loc (:dir ant))
        p (place newloc)]
    ;move the ant
    (alter p assoc :ant ant)
    (alter oldp dissoc :ant)
    ;leave pheromone trail
    (when-not (:home @oldp)
      (alter oldp assoc :pher (inc (:pher @oldp))))
    newloc))

(defn take-food [loc]
  "Takes one food from current location. Must be called in a
  transaction that has verified there is food available"
  (let [p (place loc)
        ant (:ant @p)]
    (alter p assoc
           :food (dec (:food @p))
           :ant (assoc ant :food true))
    loc))

(defn drop-food [loc]
  "Drops food at current location. Must be called in a
  transaction that has verified the ant has food"
  (let [p (place loc)
        ant (:ant @p)]
    (alter p assoc
           :food (inc (:food @p))
           :ant (dissoc ant :food))
    loc))

(defn rank-by
  "returns a map of xs to their 1-based rank when sorted by keyfn"
  [keyfn xs]
  (let [sorted (sort-by (comp float keyfn) xs)]
    (reduce (fn [ret i] (assoc ret (nth sorted i) (inc i)))
            {} (range (count sorted)))))

(defn behave
  "the main function for the ant agent"
  [loc]
  (let [p (place loc)
        ant (:ant @p)
        ahead (place (delta-loc loc (:dir ant)))
        ahead-left (place (delta-loc loc (dec (:dir ant))))
        ahead-right (place (delta-loc loc (inc (:dir ant))))
        places [ahead ahead-left ahead-right]]
    (. Thread (sleep ant-sleep-ms))
    (dosync
     (when running
       (send-off *agent* #'behave))
     (if (:food ant)
       ;going home
       (cond
        (:home @p)
          (-> loc drop-food (turn 4))
        (and (:home @ahead) (not (:ant @ahead)))
          (move loc)
        :else
          (let [ranks (merge-with +
                        (rank-by (comp #(if (:home %) 1 0) deref) places)
                        (rank-by (comp :pher deref) places))]
            (([move #(turn % -1) #(turn % 1)]
              (wrand [(if (:ant @ahead) 0 (ranks ahead))
                      (ranks ahead-left) (ranks ahead-right)]))
             loc)))
       ;foraging
       (cond
        (and (pos? (:food @p)) (not (:home @p)))
          (-> loc take-food (turn 4))
        (and (pos? (:food @ahead)) (not (:home @ahead)) (not (:ant @ahead)))
          (move loc)
        :else
          (let [ranks (merge-with +
                        (rank-by (comp :food deref) places)
                        (rank-by (comp :pher deref) places))]
            (([move #(turn % -1) #(turn % 1)]
              (wrand [(if (:ant @ahead) 0 (ranks ahead))
                      (ranks ahead-left) (ranks ahead-right)]))
             loc)))))))

(defn evaporate
  "causes all the pheromones to evaporate a bit"
  []
  (dorun
   (for [x (range dim) y (range dim)]
     (dosync
      (let [p (place [x y])]
        (alter p assoc :pher (* evap-rate (:pher @p))))))))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; UI ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(import
 '(java.awt Color Graphics Dimension)
 '(java.awt.image BufferedImage)
 '(javax.swing JPanel JFrame))

;pixels per world cell
(def scale 5)

(defn fill-cell [#^Graphics g x y c]
  (doto g
    (.setColor c)
    (.fillRect (* x scale) (* y scale) scale scale)))

(defn render-ant [ant #^Graphics g x y]
  (let [black (. (new Color 0 0 0 255) (getRGB))
        gray (. (new Color 100 100 100 255) (getRGB))
        red (. (new Color 255 0 0 255) (getRGB))
        [hx hy tx ty] ({0 [2 0 2 4]
                        1 [4 0 0 4]
                        2 [4 2 0 2]
                        3 [4 4 0 0]
                        4 [2 4 2 0]
                        5 [0 4 4 0]
                        6 [0 2 4 2]
                        7 [0 0 4 4]}
                       (:dir ant))]
    (doto g
      (.setColor (if (:food ant)
                   (new Color 255 0 0 255)
                   (new Color 0 0 0 255)))
      (.drawLine (+ hx (* x scale)) (+ hy (* y scale))
                 (+ tx (* x scale)) (+ ty (* y scale))))))

(defn render-place [g p x y]
  (when (pos? (:pher p))
    (fill-cell g x y (new Color 0 255 0
                          (int (min 255 (* 255 (/ (:pher p) pher-scale)))))))
  (when (pos? (:food p))
    (fill-cell g x y (new Color 255 0 0
                          (int (min 255 (* 255 (/ (:food p) food-scale)))))))
  (when (:ant p)
    (render-ant (:ant p) g x y)))

(defn render [g]
  (let [v (dosync (apply vector (for [x (range dim) y (range dim)]
                                  @(place [x y]))))
        img (new BufferedImage (* scale dim) (* scale dim)
                 (. BufferedImage TYPE_INT_ARGB))
        bg (. img (getGraphics))]
    (doto bg
      (.setColor (. Color white))
      (.fillRect 0 0 (. img (getWidth)) (. img (getHeight))))
    (dorun
     (for [x (range dim) y (range dim)]
       (render-place bg (v (+ (* x dim) y)) x y)))
    (doto bg
      (.setColor (. Color blue))
      (.drawRect (* scale home-off) (* scale home-off)
                 (* scale nants-sqrt) (* scale nants-sqrt)))
    (. g (drawImage img 0 0 nil))
    (. bg (dispose))))

(def panel (doto (proxy [JPanel] []
                   (paint [g] (render g)))
             (.setPreferredSize (new Dimension
                                     (* scale dim)
                                     (* scale dim)))))

(def frame (doto (new JFrame) (.add panel) .pack .show))

(def animator (agent nil))
(defn animation [x]
  (when running
    (send-off *agent* #'animation))
  (. panel (repaint))
  (. Thread (sleep animation-sleep-ms))
  nil)

(def evaporator (agent nil))
(defn evaporation [x]
  (when running
    (send-off *agent* #'evaporation))
  (evaporate)
  (. Thread (sleep evap-sleep-ms))
  nil)

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; use ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; (comment
;demo
;; (load-file "/Users/rich/dev/clojure/ants.clj")
(def ants (setup))
(send-off animator animation)
(dorun (map #(send-off % behave) ants))
(send-off evaporator evaporation)
;; )

ants.clj (hosted on GitHub)


https://www.uttv.ee/embed?id=11059

Ship of Theseus and Transhumanism

You may be familiar with the paradox of the Ship of Theseus. If not, here is a brief overview. Essentially, it leaves us wondering about the relationship between the components of an object and the object’s identity. How much do the parts tell you about the whole?


Complexity theory (see work by researchers at the Santa Fe Institute) relates the explanatory gap between the micro and macro in statistical mechanics to the inability to describe a complex network’s evolution in time using a finite (comprehensible) set of differential equations. The economy, ecosystems, weather, three-body systems, the brain, and even the organization of the living organisms that compose us are all chaotic dynamical systems falling into the domain that Gödel himself proved to be non-contradictory yet necessarily beyond the limits of understanding. By understanding, I mean at least symbolic representation: some expression of deterministic causation between the parts of a system and the whole. Less esoterically, neural networks and genetic algorithms can produce solutions “better” than manual human labor (less complex, more resilient, more effective), but these are not amenable to causal analysis or symbolic deconstruction. I point the reader to the use of genetic programming in the design of the NASA ST5 spacecraft antenna as an example, and to the work of Mitchell, Crutchfield, and Das from 1996, in which genetic algorithms were used to evolve cellular automata toward attractor states, enabling global synchrony via local interaction (much as fireflies synchronize their flashing) and revealing a “particle physics” that allowed information to be conserved from generation to generation. I think that for us to succeed with transhumanist goals such as life extension or mind uploading, we need to resolve the Theseus paradox, and that probably requires reconciling statistical mechanics with statistical inference. Check out Roy Frieden’s Extreme Physical Information principle and the application of the variational principles MaxEnt / Minimum Fisher Information in machine learning. The article “Eluding the Demon – How Extreme Physical Information Applies to Semiosis and Communication” suggests that the field of artificial life, and the aforementioned reconciliation, require an understanding of estimation theory in the context of quantum uncertainty and apparent randomness.

The theory of everything which is so preciously sought after is not a unification of general relativity with quantum mechanics… it is instead the unification of statistical mechanics with statistical inference. An investigation of variational principles, Frieden’s Extreme Physical Information, etc. will lead us towards a future of transhumanism, but it will not be a future of mind uploading or life extension. It will be an era where we use genetic programming to evolve digital, artificial life. There will be no “equations” or design… it will, just as we observe, occur spontaneously as interacting digital agents begin to synchronize into a chaotic, complex dynamical system characterized by self-similarity and power-law rank distributions. The forthcoming video game “No Man’s Sky” is a teaser of this future and it leverages procedural generation; such generative art produces complexity and apparent diversity not unlike Darwinian evolution yet, just like Darwinian evolution, the procedural generation algorithm itself can be described in just a few lines of source code. And, just like in nature, phenomenal perception only occurs locally when information is transmitted from the source to the observer. With no observer logged in to the game, there is no need to render any information; can the world really be said to exist at that time?

(Meta?)physical poetry

I had a conversation with my father today about Gödel’s incompleteness theorems, genetic programming, evolved antennas, metabolic scaling, and fractal distribution networks. We talked about the elegance of using biologically-inspired techniques to evolve non-linear black-box algorithms that are more efficient than linear closed-form analytical solutions produced by the best engineers. We discussed the ubiquitous nature of mutation, selection, and inheritance (of ideas and matter) and the relationship between evolutionary fitness and the “preferential attachment” and recursion that generate fractal scale-free networks.

My father is an ISTJ, and a very concrete thinker. He understood what I said and related these abstract patterns and ideas to everyday examples. I was moved by his visualizations,  and thought that perhaps a book or movie or some other work of art could one day help people to understand how these simple guiding mechanisms of the cosmos elude attempts at reduction and linear estimation and how they produce the tempting illusion of randomness and stochastic behavior. In particular, he asked me to imagine being in a rocket traveling among the stars and seeing the same fractal pattern as I maneuver between them; I get closer to them but it seems as though I’ll never get there, much like the arrow in Zeno’s paradox. He asked me to imagine a drive through a forest and a resulting kaleidoscopic approach of trees moving from the far field to the near field.

There is a hidden language that unites genetics, memetics, fractal networks, nonlinear dynamics, attractors, least action, maximum Fisher information, asymptotic analysis, general intelligence, self-awareness, renormalization, uncertainty, and topological dimensions. I believe that imagery like that suggested by my father may one day popularize the new non-reductive, simulation-based approach to science and bring about the discovery of that unified theory. The mechanism will be so simple it will convey no Shannon information at all, and its Kolmogorov complexity will be zero; it will be the ineffable Tao, ineffable as a simple consequence of Gödel’s results. As time passes, I find that apparent differences in descriptive languages are illusory; every field of study is looking at one particular branch of a large tree. The search continues.

Chaos, evolution, self-awareness, least action, Maximum Fisher Information, scale-free networks

“The day science begins to study non-physical phenomena, it will make more progress in one decade than in all previous centuries of its existence.”  —Nikola Tesla

Recursion gives rise to fractal dimension. This is studied in the field of complexity science (also known as systems science, or the science of complex adaptive systems) because fractal distribution networks, and the scale-free power-law degree distributions that characterize them, are ubiquitous in nature: a common thread in physics as well as the social sciences. These networks appear to result from “preferential attachment,” which is to say that the generating mechanism abides by the aphorism “the rich get richer.” This preferential-attachment mechanism of self-organization is also the principle behind natural selection. There are two interesting case studies I want to share so that I may relate the concepts of fractal networks and emergence to Tononi’s Integrated Information Theory of consciousness, Hofstadter’s suggestion of recursion as the mechanism of self-awareness, and the causal sets approach to quantum gravity. My goal of late is to explore more intimately the relationship between hypercomputation (non-causal, non-local coordination: the explanatory gap between the micro and macro studied in statistical mechanics) and general intelligence (data compression); I plan to apply multi-agent simulations and genetic algorithms to really understand the role evolution plays in this relationship. I’ll be hosting my software projects and invite anyone interested to join me in this exploration.

In 1993, Melanie Mitchell of the Santa Fe Institute and her colleagues performed an experiment in which a population of elementary (one-dimensional) cellular automata was evolved with a genetic algorithm to eventually perform a global computation task. The researchers wondered how the algorithm enabled the automaton (itself a collection of agents interacting locally) to coordinate and perform a global task. Notice that such seemingly emergent coordination among locally interacting components is exactly the sort of hypercomputational, non-causal, non-reductionist link between the micro and macro that is studied in statistical mechanics. By performing edge detection on the lattices produced by automata of successive generations, the researchers discovered that the genetic algorithm was enabling the cellular automaton to learn the global task by encoding information in the form of a “particle physics” along the time axis of the lattice.
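The global task in these experiments was density classification: deciding by local interaction alone whether the initial configuration contained more 1s than 0s. To give the flavor, here is a runnable sketch of the hand-designed Gacs-Kurdyumov-Levin (GKL) rule, the standard benchmark against which the GA-evolved rules were compared (it is not the evolved rule itself).

```python
def gkl_step(cells):
    """One synchronous update of the GKL rule on a ring: a 0-cell takes
    the majority of itself and its neighbors 1 and 3 steps to its left;
    a 1-cell, of itself and its neighbors 1 and 3 steps to its right."""
    n = len(cells)
    nxt = []
    for i, s in enumerate(cells):
        if s == 0:
            votes = s + cells[(i - 1) % n] + cells[(i - 3) % n]
        else:
            votes = s + cells[(i + 1) % n] + cells[(i + 3) % n]
        nxt.append(1 if votes >= 2 else 0)
    return nxt

def classify(cells, steps=200):
    """Iterate the rule; on most inputs the lattice settles to all-1s or
    all-0s according to the majority in the initial configuration."""
    for _ in range(steps):
        cells = gkl_step(cells)
    return cells

# a lone dissenting cell among ones is corrected in a single step
print(gkl_step([1] * 30 + [0] + [1] * 30))
```

The interesting behavior lives in between the easy cases: near 50% density, boundaries between 0-regions and 1-regions propagate and annihilate, and it is these moving boundaries that Mitchell et al. analyzed as "particles."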

When we say an object has a fractal dimension, we mean it exhibits self-similar detail at every scale of magnification. There are techniques, including the box-counting method, for estimating the fractal dimension of a pattern. Fractals are generated by recursion; that is to say, they result from self-referential processes. The Mandelbrot set is one popular example. The bifurcation diagram of the logistic map is itself a fractal with a known fractal dimension. The logistic map is ‘an archetypal example of how complex, chaotic behaviour can arise from very simple non-linear dynamical equations. […] This nonlinear difference equation is intended to capture two effects: reproduction, where the population will increase at a rate proportional to the current population when the population size is small, and starvation (density-dependent mortality), where the growth rate will decrease at a rate proportional to the value obtained by taking the theoretical “carrying capacity” of the environment less the current population.’
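The behaviour behind that bifurcation diagram is easy to reproduce: iterate the map x → r·x·(1−x), discard a transient, and count the distinct long-run values. As r grows, the attractor doubles its period and finally dissolves into a chaotic band.

```python
def logistic(r, x):
    return r * x * (1 - x)

def orbit(r, x0=0.2, transient=500, keep=64):
    """Iterate past the transient, then collect the distinct values the
    orbit visits (rounded, so a converged cycle counts exactly once)."""
    x = x0
    for _ in range(transient):
        x = logistic(r, x)
    out = []
    for _ in range(keep):
        x = logistic(r, x)
        out.append(round(x, 6))
    return sorted(set(out))

fixed = len(orbit(2.8))       # settles on a single fixed point
two_cycle = len(orbit(3.2))   # period-2 oscillation
four_cycle = len(orbit(3.5))  # period-4 oscillation
chaotic = len(orbit(3.9))     # many distinct values: chaos
print(fixed, two_cycle, four_cycle, chaotic)
```

Plotting these long-run values against a fine sweep of r is exactly how the bifurcation diagram, and its fractal structure, is drawn.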

In the 1990s, West, Brown, and Enquist investigated the mystery of why empirical data did not fit the geometrically-based hypothesis that the metabolic rate of an organism (the rate at which it radiates heat) should be proportional to the organism’s mass raised to the two-thirds power. What they discovered is that the empirical data, which actually suggested the metabolic rate to be proportional to mass raised to the three-fourths power, could be explained by modifying the geometric hypothesis: one should consider the rate to be proportional to the surface area not of a three-dimensional object but of a four-dimensional one. When considering the internal anatomy of biological organisms, they found that the fractal branching of material distribution systems contributes an additional fractal dimension. They concluded that “although living things occupy a three-dimensional space, their internal physiology and anatomy operate as if they were four-dimensional… fractal geometry has literally given life an added dimension.” Scale-free, fractal distribution networks similar to the nervous and respiratory systems of biological organisms appear everywhere, and similar “quarter-power laws” have been measured empirically, relating, for example, the size of a city to its crime, GDP, income, and patents.

Featured image extracted from the paper “Evolving Cellular Automata with Genetic Algorithms: A Review of Recent Work” by Mitchell, Crutchfield, and Das.