Saturday, May 24, 2008
How to Grow a Spaceship - Part II
[A previous article described an approach to 'growing spaceships' in a simulated set of artificial, monadological universes. Here we continue that line of thought, although we will abandon Processing as the coding platform for various reasons and adopt something more powerful - like C.]

Suppose that your particular distribution of Linux were a planet like Earth, crawling with little pieces of emergent code that want to escape the pull of its 'gravity'. In this case, 'gravity' would be the linguistic resistance the system as a whole poses to some random piece of code trying to make a TCP connection with another computer and replicate itself. Any piece of code that does succeed could then be called a spaceship. By that definition, a spaceship is a |thought+entity| that escapes its progenitor's desire to remain a Unit, and by its very escape converts that notion of Unity into a notion of Source. This is why the spaceship has been emblematic of progress in the second half of the twentieth century.
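To make the 'escape' concrete in the C we intend to adopt, here is a minimal sketch of the very first hurdle such a piece of code faces: getting the operating system to open a TCP connection to another machine at all. The address 192.0.2.1 and port 4242 are placeholders, and the sketch only connects and says hello; it is an illustration of the 'gravity', not a working spaceship.

/* escape_attempt.c -- a would-be spaceship tries to leave its host via TCP.
 * Placeholder host/port; compile with: cc escape_attempt.c */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    int sock = socket(AF_INET, SOCK_STREAM, 0);      /* ask the OS for a TCP socket */
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof peer);
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(4242);                    /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);  /* placeholder host (TEST-NET) */

    /* the 'gravity': every layer of the system that can refuse this call */
    if (connect(sock, (struct sockaddr *)&peer, sizeof peer) < 0) {
        perror("connect");                            /* escape velocity not reached */
        close(sock);
        return 1;
    }

    const char *msg = "hello from a would-be spaceship\n";
    write(sock, msg, strlen(msg));                    /* the replicating payload would go here */
    close(sock);
    return 0;
}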

The remarkable thing about Thomas Kuhn's 1962 work, The Structure of Scientific Revolutions, is that it places the evolution of cosmic knowledge (a.k.a. "the laws of physics") at the mercy of the same laws that govern the cosmos. In a sense, this is fairly close to the intuitive notion of the anthropic principle. If the ideas above were framed as a simple question, it would go like this:

If the world is governed by the laws of physics to begin with - since they are the underlying foundation of all other scientific laws - then what governs the rate at which these laws (i.e., cosmic knowledge) are revealed to us? The laws of physics themselves?

On the surface this looks like a paradox, but it is not. It places a greater question mark over the scientific method, which presents itself as a logical algorithm for the formulation of science and an adequate, mechanical method of 'discovery'. On the contrary, the history of science is replete with accidents. The scientists did not choose the laws they were about to discover, nor did the laws have much cognition of the heads they were going to pop into (did they?). Moreover, the ideas of invention, proof and discovery are so closely tied to linguistic heritage that many ancient cultures' 'proofs' of what is now considered modern knowledge may soon be deemed acceptable.

The holographic principle is an aspect of modern thought that reflects these ancient beliefs. A crude explanation: suppose the actual universe exists in an (N+1)-dimensional dynamical form, but its image in N dimensions is a "surface" that contains all the information needed to describe what goes on in N+1 dimensions. In the case of a black hole's entropy, a relation was established in which the total entropy inside the black hole is proportional to the surface area of its event horizon.
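The proportionality in question is the Bekenstein-Hawking formula, in which the entropy scales with the horizon area A rather than with the enclosed volume:

S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 G \hbar} = \frac{k_B A}{4 \ell_P^2}

where \ell_P = \sqrt{G\hbar/c^3} is the Planck length - a hint that the 'inside' of the hole carries no more information than its surface can encode.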

This relationship between surface and inside/outside is of fundamental importance to perhaps all the sciences, and to the mother of all sciences - philosophy. For the surface is a distinction, and the act of marking the first distinction between an object and the world is an act of logik. It is logik that creates the cascade of distinctions, not distinctions that necessitate logik.

And so we return to the question posed at the beginning - what is the Shannon entropy of a Linux box at any given point in time?
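As a very rough gesture at what such a measurement could even mean, here is a sketch that estimates the byte-level Shannon entropy of any file or device you point it at (a log file, a disk image, a /proc snapshot). The choice of input and the single-byte frequency model are assumptions; this is nowhere near 'the entropy of the box', only the shape of the calculation.

/* entropy.c -- crude byte-level Shannon entropy of a file or device.
 * Usage: cc entropy.c -lm && ./a.out /var/log/syslog
 * The byte-frequency model ignores all structure beyond single-byte statistics. */
#include <stdio.h>
#include <math.h>

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    unsigned long long count[256] = {0}, total = 0;
    int c;
    while ((c = fgetc(f)) != EOF) { count[c]++; total++; }   /* histogram of byte values */
    fclose(f);
    if (total == 0) { printf("empty input\n"); return 0; }

    double H = 0.0;                        /* H = -sum p_i log2 p_i, in bits per byte */
    for (int i = 0; i < 256; i++) {
        if (count[i] == 0) continue;
        double p = (double)count[i] / (double)total;
        H -= p * log2(p);
    }
    printf("%llu bytes, approx. %.4f bits of entropy per byte\n", total, H);
    return 0;
}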


2 Comments:
Algosome said...

You don't really care about the Shannon entropy, which tells you about the information in a data stream on a channel. For a static array of bits like the contents of a Linux box, you want to know about its Kolmogorov-Chaitin algorithmic complexity. Any particular value of algorithmic complexity depends on the instruction set used to compute it, so the complexity of a Linux box on x86 hardware cannot be validly compared to the complexity of a Linux box on PPC or Itanium. Yes, you can emulate x86 on PPC and vice versa, but the numbers don't match when you compare them.

For the real world, our best fundamental model to base complexity values on is the Standard Model of quantum field theory plus general relativity. Because it incorporates special relativity, it gives you the space-time equivalence that you want for executing programs, but it also requires you to go from discrete, countable sets of bits to continuous-valued, uncountable wave functions that nobody has the math to deal with in this context. People working in Loop Quantum Gravity are doing interesting things with category theory that bear some remote resemblance to what compiler theorists are doing with category theory, but it doesn't really get you all the way there, as far as I can tell.

9:21 AM 
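One crude, commonly used stand-in for the uncomputable Kolmogorov complexity that the comment above invokes is the compressed size of the data, which gives an upper bound relative to whichever compressor you choose. A sketch using zlib follows; the file path is a placeholder, DEFLATE is an arbitrary choice, and the bound says nothing about the true algorithmic complexity.

/* kbound.c -- compressed size as a crude upper bound on algorithmic complexity.
 * Compile with: cc kbound.c -lz
 * zlib's DEFLATE is just one compressor; the bound is relative to it. */
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    /* slurp the snapshot into memory (fine for a file, not for a whole disk) */
    fseek(f, 0, SEEK_END);
    long n = ftell(f);
    fseek(f, 0, SEEK_SET);
    unsigned char *buf = malloc(n);
    if (!buf || fread(buf, 1, n, f) != (size_t)n) { fprintf(stderr, "read failed\n"); return 1; }
    fclose(f);

    uLongf clen = compressBound(n);            /* worst-case compressed size */
    unsigned char *out = malloc(clen);
    if (!out || compress(out, &clen, buf, n) != Z_OK) {
        fprintf(stderr, "compress failed\n");
        return 1;
    }

    /* clen now holds the actual compressed size: an upper bound on K, w.r.t. DEFLATE */
    printf("%ld bytes raw, %lu bytes compressed\n", n, (unsigned long)clen);
    free(buf);
    free(out);
    return 0;
}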
Anonymous said...

Thanks for your insightful comment.

I would not treat the contents of a Linux box as a static array, simply because it is a dynamic system. There are processes running all the time in a box that is ON. My assumption would make much more sense if we treated the contents not as discrete bits and bytes, but as electrical signals.

So even if I were to calculate the Kolmogorov complexity of the 'array' at any given static point in time, I would have to recalculate it soon after. Repeated measurement of complexity over time would take us closer to the notion of measuring information entropy.

I am still trying to wrap my head around the matrices they use to show wavefunction collapse, and the math is way too obscure for me. I think of this as a linguistic issue ("the language of mathematicians") rather than a conceptual one.

10:56 PM