Friday, January 27, 2017

Re: [jcs-online] The Hive Mind

Errol, ....Serge, others...

Relating to your question below about hivemind, your examples, and your question to Serge, "How do you define 'think'?": please notice that I define 'think' and 'consciousness' as structural coding (or nested structural coding), which fits all of your instances of microbial communication.
Also, please notice that the structural coding I generally refer to is in the energy-related ~6^n hydrogen-bonded ordered water stacks continually forming at aerobic respiration sites. Please notice, too, that these sites, and thus the account I advocate and bring forward, are INSIDE neurons (and other cells), differing somewhat from the neuron-theory model which Jonathan and many others may advocate. Note also the nested organization: ordered water structures (and energy) within respiration sites, within neurons, within brain structures, within environmental vibrations, within hivemind, conveyed within protein-folding markings in the English words and sentences resonating here on our various screens, etc.

The point about this structural coding within the respiration reaction is that it is integrated with energy collection and conservation. Structural codings which associate with energy conservation, and/or with the development of what we call replicating enzymatic structures, DO have or offer some survival or sustenance advantage. And this structural-coding advantage, if you think about it, is also quite a bit like the value we find in the empirical 'proofs' of our so-called objective science: things that strongly repeat and also conserve energy generally have value, persist, and re-occur.

[As an aside, please notice that the storyline I am advocating, including the 6^n structural coding in water forming in the respiration reaction, which also demonstrates multiple states and variable mass density in increments of 1/2 spins, flows from an empirical 'proof' or basis found in the analog math of magnetic tetrahedra. Learning the analog math IS acquiring physical intuition, via the tactile channel, of the things listed here.]

But back to the close coupling of energy conservation with structural coding. At the root level, in the respiration reaction and associated metabolisms, if a pathway conserves more energy than alternative or competing pathways, ~it has energy which can be delegated to other activities or serve other purposes. Likewise, in acquiring a more general underlying principle, participants may account for more features and facets while using fewer resources; and in acquiring an improved trial theory and/or scientific paradigm, they reduce the amount of error and inefficiency in the related technical and societal constructions built up from the various trial theories.

This ~inner level of structural coding, coupled with our energetics, is generally either resonant or dissonant with ITS surroundings: the energy flow and the existing and forming structural coding (stacks of ordered water, bound water, amino acid chains, etc.). This inner resonance is likely akin to the way we 'know' some pathway or ~structure is the appropriate or ~correct one: because of the resonance, because of the larger amount of conserved energy, and/or because of the speed of resolution along the 'right' pathway.

I suggest that when Serge points at his 'inner intersubjectivity' he may be attempting to articulate something like the synergy between the inner resonance of energy conservation and developed or developing structures within the respiration-reaction sites.

It is this energetic resonance that I think of as the special "skin-in-the-game" that the storyline I advocate has over, say, AI, or even group consensus. That is, the energy value of a recalled hydrogen-bonding structural coding from some similar event can add energy and sustenance to the organism, leading to further replication. So far, AI devices don't have their replication and energy supply coupled with their form of "consciousness".

Similarly, hiveminds so far seem like neat reactive polling displays, but their development is dependent or co-dependent on various other, non-integrated systems.
They don't have skin-in-the-game.

Building outward, one may now consider that hydrogen-bonding codings formed within respiration sites play some role in adjusting other structures within neurons, adding another nested level of resonance and energy conservation.

And so on outward, presumably passing through grammar and logic and reason, etc., sometimes marking them, and becoming a thought worthy of speech.

It's all structural coding, or, as you say, thinking. 

Best regards,
Ralph Frost

Changing the western scientific paradigm.

With joy you will draw water
from the wells of salvation. Isaiah 12:3

---In, wrote :

[S.P.] The neurons are not separately thinking agents. It is only an organism as a whole complex system that possesses its exemplar of consciousness and who can be said to be thinking. Therefore we should not confuse the swarm of thoughts (in one head) with the swarm of neurons (in that same head). 

[EM]   Single celled organisms can learn.  They seek out food.  They flee danger.  Bacteria can chemically communicate with each other.  Isn't that thinking?  Why can single celled organisms think, but neurons can't?  How do you define 'think'?

From: "Serge Patlavskiy serge.patlavskiy@... [jcs-online]"
Sent: Tuesday, January 24, 2017 6:14 AM
Subject: Re: [jcs-online] The Hive Mind

Errol McKenzie on Jan 20, 2017 wrote:
> It seems to me that the consensus of neurons that Jonathan talks 
>about in his book, is sort of analogous to the consensus of a swarm 
>of humans connected with this company's tool.
[S.P.] The neurons are not separately thinking agents. It is only an organism as a whole complex system that possesses its exemplar of consciousness and who can be said to be thinking. Therefore we should not confuse the swarm of thoughts (in one head) with the swarm of neurons (in that same head). 
Also, a swarm of separately thinking humans may be only compared with the swarm of thoughts of one person, but not with the swarm of separately thinking neurons. The case is that every person permanently looks for consensus with oneself -- a person is permanently solving the problem of inner intersubjectivity. A search for consensus with other person(s) -- it is what I call solving the problem of outer intersubjectivity.
Serge Patlavskiy

From: "'Edwards, Jonathan' jo.edwards@... [jcs-online]"
Sent: Monday, January 23, 2017 11:27 AM
Subject: Re: [jcs-online] The Hive Mind

Dear Errol,

I think the comparison with my proposal makes sense. In fact, somewhere I have proposed something very similar to this, but with a significant twist.

The idea is that there is an interactive game with the ongoing state of play fed to thousands of children’s game stations  online. Each five year old child has a handset and is told to use the handset to play the game by trying to make the next move. The game is of the sort where you are not particularly surprised if what happens is not exactly what you wanted to happen. In reality the moves depend on the combined inputs of a thousand children. So each child is quite convinced that they are playing the game but in fact they are taking part in a consensus exercise.

The model you suggest is probably intended to include an appreciation by the ‘players’ that they are one of many. So the crucial difference for neurons is that, unless like mine they have come to realise they are part of a consensus, they work on the assumption that they are ‘the person’ and unique.

Marvin Minsky of course proposed a Society of Mind that can work rather like this but he presumed that societal interaction was all at a sub personal level.

Best wishes


On 20 Jan 2017, at 16:22, Errol McKenzie errolmacky@... [jcs-online] <> wrote:

A new company has developed what they call artificial artificial-intelligence, where, with the help of a computer, a group makes a prediction based on its cumulative knowledge.

Reading this article made me think of Jonathan Edwards' book, How Many People are there in My Head. It seems to me that the consensus of neurons that Jonathan talks about in his book is sort of analogous to the consensus of a swarm of humans connected with this company's tool. Almost as if groups of neurons might come to a consensus in a way similar to how groups of connected (via the tool) decision-making people do.

And when I try the BETA version of the tool ( ), it has a feeling associated with moving the icon that reminds me of Ralph's magnetic tetrahedra.

"Rosenberg is trying to capture the same dynamic with his human swarms. Answering a question with the Unanimous AI tool involves moving an icon to one corner of the screen or other – pulling with or against the crowd – until the hivemind converges. Individuals must constantly vie with other members of the group to persuade them to edge towards their preferred solution."
That's all I got.  Does anyone see anything significant to the operation of human consciousness in any of this?
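[For anyone who wants to play with the dynamic Rosenberg describes, here is a toy sketch of swarm convergence. This is NOT Unanimous AI's actual algorithm; the corner labels, pull rule, and convergence threshold are all illustrative assumptions. Each agent exerts a unit pull on a shared "puck" toward the corner holding its preferred answer, and the puck drifts along the net pull until it reaches a corner.]

```python
import math

# Hypothetical toy model of swarm convergence (not the real Unanimous AI
# mechanics). Four answer corners on a square "screen"; the puck starts at
# the center and moves along the normalized sum of agent pulls.
CORNERS = {"yes": (1.0, 1.0), "no": (-1.0, 1.0),
           "maybe": (-1.0, -1.0), "abstain": (1.0, -1.0)}

def swarm_answer(preferences, step=0.02, max_iters=10_000):
    """Return the corner the puck converges to, or None on deadlock."""
    x, y = 0.0, 0.0  # puck starts at the center
    for _ in range(max_iters):
        fx = fy = 0.0
        for pref in preferences:
            cx, cy = CORNERS[pref]
            dx, dy = cx - x, cy - y
            dist = math.hypot(dx, dy) or 1.0
            fx += dx / dist  # each agent contributes one unit of pull
            fy += dy / dist
        norm = math.hypot(fx, fy)
        if norm < 1e-9:  # pulls cancel exactly: no consensus forms
            return None
        x += step * fx / norm
        y += step * fy / norm
        for name, (cx, cy) in CORNERS.items():
            if math.hypot(cx - x, cy - y) < 0.05:  # reached a corner
                return name
    return None

votes = ["yes"] * 6 + ["no"] * 3 + ["maybe"] * 1
print(swarm_answer(votes))  # the majority pull wins here: yes
```

Even in this crude sketch you can see the quoted behavior: no single agent moves the puck, but the combined pulls converge on one corner, and evenly opposed pulls stall.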
