Reply 1


From: Seanet Online Services, Seattle WA

The wording of this question has a certain implication. Does being part of a larger
consciousness necessarily diminish the importance of the individual human
being? I would argue that it does not. In fact, if anything, it should increase
it. If humans are not part of a “World Brain”, then their significance is
solely that which attaches to their own existence. Assuming that society’s patterns
are as they exist today (that is, assuming the question of the existence
or nonexistence of a larger consciousness is posed in the context of reality as we
know it – the structure of society might demand such a consciousness, but since
we are unable as individuals to make such an assessment, we proceed assuming
that either case is equally possible within this discussion), that
significance is the pattern of their everyday lives. But if humans are part
of a World Brain, then in addition to the significance of the individual there is
the significance of the group, and since each human is part of that group, each
shares in that significance. Group significance magnifies everybody’s worth.

The Darwinian view of the purpose of human life (and all life) as simply a vehicle for
the perpetuation of the species has meaning only from a statistical, reductionistic
point of view. That is, you can only claim that individuals exist purely to perpetuate the
species if you assume that at the most reduced scale (when the grain size under observation
is at a minimum) actions are meaningless. Also, Darwin’s theory does NOT imply that
individual humans are expendable or all homogeneously insignificant. What it implies
is that less capable humans, in the long run, will end up expended, while more capable
humans, over the long run, will end up having larger significance. Moreover, I don’t
think it’s necessarily true that at the minimum grain size (the individual) actions are
meaningless. The point to note here is that atomic-level objects need not
only inherit meaning from objects of larger scope. In fact, I would argue that meaning
is more of a bottom-up than a top-down process. What is the meaning of a person’s
existence? That’s up to that person to decide.

I agree that resolving the original question depends a great deal on our coming
up with a better definition of consciousness. I think one of the most important
aspects of consciousness is the “emergent property” – the appearance of
new behavior classes as you expand the scope of the system. If a system is
conscious, we should see such emergent properties. Harking back to the bottom-
up concept of meaning I described, I think that in a system composed of individual
meaning-producing elements, a conscious body would have larger meanings at the
wider scope level. These meanings would be incomprehensible (or imperfectly understood,
in the same way that a Flatlander cannot fully understand a 3-D object like an apple) to
the individual members of the collective consciousness but well understood and
acted upon by that collective. Neural nets clearly behave in this way, although it is
far from clear that an individual neuron creates any meaning. That’s why it’s so hard
to assert conclusively that neural nets possess a degree of consciousness. On the
opposite side, we all “agree” that humans are conscious, but it’s hard to assess whether
society as a whole has any larger meaning. In fact, from what I said above, it’s impossible
for an individual human to make that assessment. I would argue, though, that there’s evidence
that society is, to some extent, a self-organizing system, and to that extent it resembles
a neural net. Given that we assert humans are conscious, this implies that society is
indeed a larger consciousness.
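The point about neural nets exhibiting behavior their individual neurons cannot has a classic, concrete illustration: no single linear-threshold neuron can compute XOR (it is not linearly separable), yet a two-layer network of the very same neurons can. The sketch below uses hand-chosen weights for illustration; they are my own example, not part of the original discussion.

```python
# A single linear-threshold neuron computes step(w . x + b) and can only
# realize linearly separable functions -- XOR is not one of them.
def neuron(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: one neuron detects "x1 OR x2", the other "x1 AND x2".
    h_or = neuron([1, 1], -0.5, [x1, x2])
    h_and = neuron([1, 1], -1.5, [x1, x2])
    # Output neuron: fires on "OR but not AND" -- a behavior class that
    # emerges only at the scope of the whole network.
    return neuron([1, -2], -0.5, [h_or, h_and])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Each neuron in isolation does something trivial; the XOR behavior belongs only to the network as a whole, which is exactly the "new behavior classes as you expand the scope of the system" idea.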

I think the more relevant way to look at consciousness is not as a binary true/false
condition, but rather as a matter of degree. Even an ordinary NN
might possess a certain degree of consciousness relative to the problem it solves.
In fact, it’s possible to hypothesize that even atoms and elementary particles have
an (infinitesimal) degree of consciousness. A self-organizing system of such
atoms would therefore possess consciousness, but given that the individual
consciousnesses of the atoms are so small, the resulting system would not
necessarily possess a large degree of consciousness (relative to human consciousness).
It would, however, have a higher degree of consciousness than its constituent parts.
An NN, therefore, is a “consciousness enhancer”. Certain parameters of the emergent
properties of a self-organizing system are thus the most important factors affecting
the degree of consciousness. It would be interesting to find out what these parameters
are and implement an NN to optimize them – perhaps a topic for further research?

Alex Rast
Inficom, Inc.