Thanks for your provocative and interesting post.
I believe that if you and Penrose consider the nested structural coding on just the genetic and epigenetic levels, and add in metabolic and protein-folding (expressive) features -- even if you don't dare venture into the stacks and sequences of the underlying 10^20-per-second hydrogen-bonding packets forming within distributed respiration sites' transactions -- you may notice that the ~system is shifting from one structurally coded resonance point to another: basically, adjusting in terms of energy, conserving energy, hashing out balance within the nested structured~duality.
Also, the ~system is running these transactions in analog mode, naturally. The ~routines that succeed do so naturally, energetically.
With this backdrop -- that is, conceptualizing reality as nested fields within nested fields -- oddities such as Pi and infinity are a lot more like special system resonance points than just special numbers on a single idealized number line. Similarly for primes.
As for sums and strings...
<<
1. Only humans can do sums.
2. Sums use strings.
3. The genetic change that allowed us to do sums allowed our brains to use strings internally.
In contrast I think the correct conclusion is
3. The genetic change that allowed us to do sums allowed our brains to interface internal non-string computation to external string signalling. A mechanism was acquired that could convert temporal sequences into spatial patterns and out again.
>>
I wonder whether #1 through #3a, or instead #3b, is correct. Though limited, animals appear to keep track of things and strike a balance between food and lack of food. And it seems the key difference is adding a nested level of association, which seems mainly to be a reverberating-memory kind of thing, beginning with a one-to-one alignment of artifacts of interest/energy with one's fingers and toes, later with marks in sand or on bark or skin, and the associated protein-folding expressions (see the sketch just below). An increase in level of association can also be gained by inhibition or reduction of dissociation -- which again circles around to increased empathy, relating perhaps just to an increase in mirror neurons.
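In code terms, here is a minimal, purely illustrative sketch of that one-to-one alignment: two collections are kept in balance by pairing marks off against each other, with no numerals involved (the names and quantities are invented for the example).

```python
# A minimal sketch of counting by one-to-one alignment rather than by
# symbolic numerals: each item of interest is paired with one mark
# (a finger, a notch in bark), and two collections are compared by
# striking marks off against each other.

def tally(items):
    """Pair each item with one mark: a one-to-one alignment, no numerals."""
    return ['|' for _ in items]

def balance(marks_a, marks_b):
    """Strike off marks one against one; whatever remains decides the balance."""
    a, b = list(marks_a), list(marks_b)
    while a and b:
        a.pop()
        b.pop()
    if not a and not b:
        return "even"
    return "surplus" if a else "shortfall"

food_items = ["root", "berry", "berry", "grub"]
mouths_to_feed = ["self", "mate", "child"]
print(balance(tally(food_items), tally(mouths_to_feed)))   # -> "surplus"
```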
I'm not tracking why you suppose or suggest a "mechanism to convert temporal sequences into spatial patterns and out again". Can you clarify, please? It seems to me that habituating, and then falling asleep, to the constant heartbeat and in-and-out breathing (plus orbits and seasons) prompts us to assume that the erroneous temporal relation exists. This flaw seems more like a natural initial approximation -- a precursor to discerning that reality is actually nested fields within nested fields.
I suspect all this may become easier to provisionally accept once there are some repeatable instances of multiple-state transportation kicking around. We can't walk or count our way to infinity; still it's just a matter of hitting a resonance point and shifting state.
Nested structural coding... genetics, epigenetics, metabolism, protein-folding, reproduction, respiration... Think about it.
Best regards,
Ralph Frost
http://frostscientific.com
With joy you will draw water
from the wells of salvation. Isaiah 12:3
---In jcs-online@yahoogroups.com,
I have been thinking about the genetic change that allowed humans to do sums and use language, and over Xmas I ended up reading a book by Gallistel and King (Memory and the Computational Brain) that made me think an important myth may have been perpetuated from Turing through Chomsky to the present day, and that it needs exploding. Gallistel argues that human brains must work a bit like computers with read/write memory stacks, even if they look connectionist anatomically. This week I remembered Penrose's argument that there must be something spooky in the brain because his mathematical proofs were non-computable. I think there may be a simple explanation.
What is implicit but hidden in Gallistel's argument is that information in human brains must be carried in sequences or strings, as in Morse code, words and computer cables (and also DNA). This is the only way you can 'move a message around': the meaning has to be held in a temporal sequence, because if you want to move the message you cannot afford for the meaning to be tied to wherever the message happens to be sitting.
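As a toy illustration of that point (using Morse code, as mentioned above, with only two letters filled in): the meaning travels with the serial sequence itself, so copying the sequence to another place carries the message intact.

```python
# A toy illustration: a message carried as a temporal sequence (a string
# of symbols) over a single channel. The meaning travels with the
# sequence itself, not with the place where it happens to sit.

MORSE = {'S': '...', 'O': '---'}            # just enough of the code for the demo
DECODE = {v: k for k, v in MORSE.items()}

def encode(text):
    """One symbol after another: a purely temporal (serial) code."""
    return ' '.join(MORSE[ch] for ch in text)

def move(signal, destination):
    """'Moving the message around' is just copying the sequence elsewhere."""
    destination.append(signal)

def decode(signal):
    return ''.join(DECODE[s] for s in signal.split(' '))

elsewhere = []
move(encode("SOS"), elsewhere)
print(decode(elsewhere[0]))   # -> SOS
```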
The problem is that brains have to be able to respond immediately to danger and switch track with no notice from one thing to another (baby crying, milk boiling, doorbell), and very likely they also benefit from making use of 'whichever cell fires a response first wins', which means that the regular clocking cycles needed to use strings are no good. Moreover, there is no central processor to read from and write to, and no obvious stacking mechanism that would fit with string use. That is not to say that there is nothing like stacking: there must be some chaining routines in short-term memory for remembering telephone numbers, but I think it is a mistake to think this immediately implies string-based computations.
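A minimal sketch of the 'whichever cell fires a response first wins' idea, with invented arrival times: the decision falls out of a race between events, not out of a clocked read of a stored string.

```python
# A minimal sketch of "whichever cell fires a response first wins":
# the decision is settled by a race between arrival times, with no
# clocked read/write cycle over a stored string. The times are invented
# purely for illustration.

import heapq

def first_spike_wins(spike_times_ms):
    """spike_times_ms: dict mapping a candidate response to its earliest firing time."""
    events = [(t, response) for response, t in spike_times_ms.items()]
    heapq.heapify(events)              # an event queue, not a clock
    t, winner = heapq.heappop(events)  # the earliest event decides
    return winner, t

candidates = {"attend to baby": 12.0, "rescue the milk": 15.5, "answer the door": 19.0}
print(first_spike_wins(candidates))    # -> ('attend to baby', 12.0)
```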
The other reason for thinking that brains do not use strings internally is that strings are only necessary if you have only one channel. In a brain every cell receives thousands of inputs, so complex meanings can be encoded spatially rather than temporally. There is no point in slowing things down and using strings. Why take 1,000 cycles to send a message when you can do it in one? Gallistel marvels at how brains work 10,000 times faster than they should on the basis of his computer analogies. But to me that simply illustrates how impoverished a computing system a computer is, limited to one bit at a time. In a system with constantly diverging and converging signals encoding rich patterns in space, the rules worked out by Church and Turing are simply irrelevant. There is no reason to expect human thought to be constrained in that way.
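Here is a toy contrast between the two regimes, with an arbitrary 1,000-bit pattern: a serial, one-channel code needs one cycle per bit, while a spatial code spread across 1,000 converging lines delivers the whole pattern in a single step.

```python
# A toy contrast between a serial one-channel code (one bit per cycle)
# and a spatial code in which many input lines carry the whole pattern
# at once. The pattern size is arbitrary and for illustration only.

import random

pattern = [random.randint(0, 1) for _ in range(1000)]   # the "meaning" to be sent

def send_serial(bits):
    """One channel: the message dribbles through one bit per cycle."""
    received, cycles = [], 0
    for bit in bits:
        received.append(bit)
        cycles += 1
    return received, cycles

def send_spatial(bits):
    """Many converging lines: every component is sampled in one step."""
    return list(bits), 1

_, serial_cycles = send_serial(pattern)
_, spatial_cycles = send_spatial(pattern)
print(serial_cycles, "cycles serially vs", spatial_cycles, "step spatially")
```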
Where I think the mistake may lie is in the following syllogism.
1. Only humans can do sums.
2. Sums use strings.
3. The genetic change that allowed us to do sums allowed our brains to use strings internally.
In contrast I think the correct conclusion is
3. The genetic change that allowed us to do sums allowed our brains to interface internal non-string computation to external string signalling. A mechanism was acquired that could convert temporal sequences into spatial patterns and out again.
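A minimal sketch of what such an interface might amount to, with no claim about the biological mechanism and with an invented data structure: a buffer that accumulates a temporal sequence into one simultaneous, place-coded pattern, and can read it back out as a sequence.

```python
# A minimal sketch of an interface between serial signalling and a
# spatial code: a temporal sequence is accumulated into one simultaneous,
# place-coded pattern, and can be unfolded back into a sequence. Nothing
# here is claimed about how brains actually do it.

def temporal_to_spatial(sequence):
    """Accumulate a serial sequence into a pattern held all at once,
    with order carried by place rather than by time."""
    return {(position, symbol): 1 for position, symbol in enumerate(sequence)}

def spatial_to_temporal(pattern):
    """Unfold the spatial pattern back into an ordered sequence."""
    return [symbol for (position, symbol) in sorted(pattern)]

incoming = ["two", "plus", "three", "equals", "five"]
spatial_pattern = temporal_to_spatial(incoming)
print(spatial_to_temporal(spatial_pattern))   # -> ['two', 'plus', 'three', 'equals', 'five']
```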
As an analogy, think of the first movement of Bach's first cello suite, or the backing to the Bach/Gounod Ave Maria, or if you like some Bix Beiderbecke, or even Thelonious Monk, or maybe best of all Chan Chan from Buena Vista (but not, please, Michael Nyman). What comes in is a complicated string of sounds, but what goes on in one's head is a conversion of those sounds into chords, and it is the relation between the chords that drives the music forward emotionally. The real musical computations are independent of note sequence, so Stéphane Grappelli can play a solo with notes all over the shop while it is perfectly obvious that he is really still playing Sweet Georgia Brown. OK, the analogy is limited, but maybe music is a celebration of this uniquely human faculty, and musicality was sexually or socially selected for because it was a sign of 'fitness' in this sense.
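To make the analogy concrete in code: however the notes of an arpeggio are ordered in time, the chord they imply is the same unordered set of pitch classes, so the 'computation' is independent of note sequence (the pitch-class table is standard; the note lists are invented).

```python
# A toy version of the musical analogy: a temporal sequence of notes
# collapses into an unordered chord, and the chord is the same however
# the notes are shuffled in time.

PITCH_CLASS = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}

def implied_chord(note_sequence):
    """Collapse a temporal sequence of note names into an unordered chord."""
    return frozenset(PITCH_CLASS[n] for n in note_sequence)

straight_arpeggio = ['C', 'E', 'G', 'C', 'E']
all_over_the_shop = ['G', 'C', 'E', 'G', 'C']
print(implied_chord(straight_arpeggio) == implied_chord(all_over_the_shop))  # -> True: both C major
```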
But Penrose might ask how we are going to get this non-string system to solve problems involving infinities and irrational numbers like Pi. I think the answer may draw on two factors. Firstly, the system is likely to operate partly using Bayesian statistics, in which the maximum number is 1 (certainty) and infinity is simply the limitlessness of certainty. Infinity is only troublesome if you have to get there by building a string sequence in the way Peano indicated; infinity as an idea rather than a number is easy. Secondly, numbers like Pi are inherent to spatial relations, and as long as you do not need to write them out as strings they pose no problem either.
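In the same spirit, here is one standard way of getting Pi out of a spatial relation rather than a digit string (a Monte Carlo estimate, offered only as an illustration of the point, not as anything the brain is claimed to do): the fraction of random points falling inside a quarter circle is a bounded ratio, never exceeding 1, and the relation recovers Pi without writing it out.

```python
# A sketch of Pi as a spatial relation rather than a digit string: the
# fraction of random points landing inside a quarter circle is a bounded
# ratio (at most 1), and the area relation gives Pi back without ever
# writing its digits out.

import random

def pi_from_spatial_relation(samples=100_000):
    inside = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    fraction = inside / samples    # a probability-like quantity in [0, 1]
    return 4 * fraction            # the spatial (area) relation recovers Pi

print(round(pi_from_spatial_relation(), 2))   # roughly 3.14
```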
So maybe Shannon was right about information in strings going between brains, but his account is not applicable to what happens inside brains. In fact he seems to have assumed that brains must be some other sort of converter, to probabilities, so HIS interpretation of information may have been right. It just got misapplied.
Best wishes
Jo