Implications of Consciousness

The implications of phenomenal consciousness

The existence of phenomenal consciousness and of free will and agency, to whatever extent we believe they do exist, underpins many of the important beliefs which we hold, take for granted and routinely act upon.

Responsibility, morality and justice

Without consciousness, it is not meaningful to speak of responsibility, in the moral sense, or of justice, any more than it would be meaningful to use these terms in connection with a robot or a computer system. Interestingly, however, the ways society deals with wayward people have much in common with the ways we deal with wayward machines or animals, even though these dealings are described and interpreted differently. If a computer malfunctions then, depending on the cause, the computer might be repaired or reprogrammed, but it would be bizarre to say, and seriously mean it, that it should be punished. If a conscious subject, such as a human, malfunctions by acting against what is acceptable in the community, then that subject might be given medical treatment if the problem were considered a medical, or mental, one, or reprogrammed by means of reward or punishment if the problem were considered a moral or behavioural one. [, Chapter 19] (See Section ).

From the point of view of an alien investigator who simply examined observed behaviour, there is little, if any, difference between the two cases. The interpretation and implications of the two are very different simply because a human is assumed to be conscious whereas a computer is not. This also raises the question of whether the motivation for punishment is correction, retribution or vengeance. Such a distinction would be irrelevant for a non-conscious machine, even though people will sometimes kick a malfunctioning machine in anger.

A less clear-cut situation exists in the case of animals. The interpretation of behaviour would depend on our beliefs regarding whether, or to what extent, the animal is conscious. If a dog misbehaves and is given a smack with the intention of making it behave better, is that a punishment or an attempt at reprogramming, or both? The same question arises if a dog is given a treat for good behaviour. The answer depends on whether we believe dogs are conscious, are merely automata, or are conscious but in a lesser way, or to a lesser extent, than humans are.

The value of human life

Many, if not the majority, of cultures, including vehemently materialistic ones, place a high value on an individual human life, including that of newborn babies. Upon what is that based? If there were no consciousness and, moreover, no unique centres of consciousness, then such a view would be illogical. Humans are not hard to come by and, arguably, there are more in existence than the planet can comfortably support. Nevertheless, the thought of culling humans (or gorillas) is morally repugnant in a way that the culling of badgers is not. One difference is that consciousness is attributed to the one and not the other. Another is how similar they are to us. That such a value is placed on every individual as something unique is an indication that we know, or at least act as if, there is more to a conscious subject than there is to a robot, even if we cannot say what that is, or even if some would wish to deny it.

Meaning and Intentionality

Closely related to consciousness are meaning and what is technically called intentionality[i]. Our thoughts do not exist in isolation; they represent things in the external world. In that sense they have meaning. In contrast, the symbols manipulated by a computer do not have any meaning which has been conferred by the computer itself. The only sense in which the symbols and calculations have intentionality is that such intentionality has been conferred by the designer or the programmer. The attribution of meaning to the symbols in the computer, conferred by an external agent, is called “derived intentionality”. A human agent is considered not to require such an external reference, and the meanings of the thoughts in this case are called “intrinsic intentionality” [].

This difference, which is argued to mean that “strong AI” (a computer which is conscious) is impossible, is illustrated by Searle’s Chinese Room thought experiment[]. The man in the Chinese Room can provide outputs which are meaningful to the people outside but incomprehensible to himself. To the man in the Chinese Room, the symbols and the rules for manipulating them are purely syntactic, whereas to the agents outside they are also semantic.
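The point can be made concrete with a toy sketch. The rule table and symbols below are entirely hypothetical, but they illustrate the structure of the argument: the program applies purely syntactic matching rules and produces replies that look meaningful from outside, while nothing inside the program attaches meaning to the symbols.

```python
# A toy Chinese Room: the "room" mechanically matches input symbols
# against a rule book. The rules here are invented for illustration.
RULES = {
    "你好": "你好吗",   # to an outside speaker: "hello" -> "how are you?"
    "谢谢": "不客气",   # to an outside speaker: "thank you" -> "you're welcome"
}

def chinese_room(symbol: str) -> str:
    """Return an output symbol by shape-matching the input against the
    rule book. No semantics is involved on the inside; unknown shapes
    get a fixed default symbol."""
    return RULES.get(symbol, "？")

print(chinese_room("你好"))  # the reply looks meaningful only to the observer
```

The person (or program) executing these rules need know nothing about what the characters mean; the meaning, such as it is, was put into the rule book by whoever wrote it, which is precisely what “derived intentionality” describes.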

The origin of “intrinsic intentionality”, or why only humans are considered to have it, is not known.

Conversely, it has been shown that humans will, in certain circumstances, relinquish this intrinsic intentionality and operate in what Milgram called the “agentic state”[ii], in which a person yields their agency to an authority figure.

In milder cases, people can be “affirmed” by those they look up to, and to some extent the meaning of what they do derives from the belief that the authority figure confers it, rather than being truly intrinsic.

Searle describes an “intentional state” as a brain state which refers to something in the external world. Such states include beliefs, likes, hates, desires and so on.

Such states can be properties of unconscious systems, but only if they are capable of causing conscious mental phenomena. For example, a belief that the world is round is a property of a subject even when asleep and unconscious. However, that belief is capable of being made conscious on waking.

An analogy is information stored on a computer disk. It is capable of being accessed by the computer, but in its stored form it is very different from its active form.

A non-conscious state is one which can never become conscious.

An unconscious state is one which is capable of becoming conscious.

How can a word refer to an object? How can a brain state refer to something in the external world? Where does “meaning” enter into a system?

According to Searle, even if Dennett’s homunculi were progressively simpler, they would still need intentionality.

Derived intentionality: The symbols and workings of a computer have meaning only because someone else, a programmer or user, has attributed such meaning. The same applies to the meaning of the symbols in the Chinese Room: they can only have meaning because someone outside has attributed that meaning.

Intrinsic intentionality: A source of meaning which can only be a conscious subject.

Observer dependent and independent facts?

Gray: Our senses present an “interpretation” of the data based on models which exist in our brains. These interpretations are not always accurate, and sometimes not even consistent, as in the duck/rabbit picture, but they are the best guess of the processing system. The interpreted result is presented to the conscious subject already processed; data fusion takes place before anything is consciously perceived.

Presented to what? A Cartesian Theatre?

There is a problem in that there is no single place in the brain where such data fusion could take place.

Searle speaks of consciousness emerging from the “micro properties of neurons” in a way analogous to the liquidity of water emerging from the micro-properties of molecules. However, we have no idea how that could possibly happen. It may be more analogous to the emergence of electrical phenomena from putting matter together in particular ways: it relies on new, previously unknown, properties of material.

Existing neuroscience gets on “quite nicely, thank you” without reference to consciousness, and there is nothing left for consciousness to do.

Symbol grounding[]

[i] https://plato.stanford.edu/entries/intentionality/

[ii] Stanley Milgram, “Obedience to Authority: An Experimental View”, 1974