The implications of Free Will
The existence of phenomenal consciousness and of free will and agency, to whatever extent we believe they exist, underpins many of the important beliefs that we hold, take for granted and routinely act upon.
Responsibility, morality and justice
Without consciousness, it is not meaningful to speak of responsibility, in the moral sense, or of justice, any more than it would be meaningful to use these terms in connection with a robot or a computer system. Interestingly, however, the ways in which society deals with wayward people have much in common with the ways we deal with wayward machines or animals, even though these dealings are described and interpreted differently. If a computer malfunctions then, depending on the cause, it might be repaired or reprogrammed, but it would be bizarre to say, and seriously mean, that it should be punished. If a conscious subject, such as a human, malfunctions by acting against what is acceptable in the community, then that subject might be given medical treatment, if the problem is considered a medical or mental one, or reprogrammed by means of reward or punishment, if the problem is considered a moral or behavioural one. [Gray 2004, Chapter 19]
From the point of view of an alien investigator who simply examined observed behaviour, there is little, if any, difference between the two cases. The interpretation and implications of the two are very different simply because a human is assumed to be conscious whereas a computer is not. This also raises the question of whether the motivation for punishment is correction, retribution or vengeance. Such a distinction would be irrelevant for a non-conscious machine, even though people will sometimes kick a malfunctioning machine in anger.
A less clear-cut situation exists in the case of animals. The interpretation of behaviour would depend on our beliefs regarding whether, or to what extent, the animal is conscious. If a dog misbehaves and is given a smack with the intention of making it behave better, is that a punishment, an attempt at reprogramming, or both? The same question applies if a dog is given a treat for good behaviour. The answer depends on whether we believe dogs are conscious, are merely automata, or are conscious in a lesser way, or to a lesser extent, than humans are.
The value of human life
Many cultures, if not the majority, including vehemently materialistic ones, place a high value on an individual human life, including that of newborn babies. Upon what is that based? If there were no consciousness, and moreover no unique centres of consciousness, then such a view would be illogical. Humans are not hard to come by and, arguably, there are more in existence than the planet can comfortably support. Nevertheless, the thought of culling humans (or gorillas) is morally repugnant in a way that the culling of badgers is not. One difference is that consciousness is attributed to the one and not the other. Another is how similar they are to us. That such a value is placed on every individual as something unique is an indication that we know, or at least act as if, there is more to a conscious subject than there is to a robot, even if we cannot say what that is, or even if some would wish to deny it.
Meaning and Intentionality
Closely related to consciousness are meaning and what is technically called intentionality[i]. Our thoughts do not exist in isolation; they represent things in the external world. In that sense they have meaning. In contrast, the symbols manipulated by a computer have no meaning conferred by the computer itself. The only sense in which the symbols and calculations have intentionality is that such intentionality has been conferred by the designer or the programmer. The attribution of meaning to the symbols in the computer, conferred by an external agent, is called “derived intentionality”. A human agent is considered not to require such an external reference, and the meanings of the thoughts in this case are called “intrinsic intentionality” [Searle 1999].
This difference, which Searle argues makes “strong AI”, that is, computers which are conscious, impossible, is illustrated by his Chinese Room thought experiment[]. The man in the Chinese Room can provide outputs which are meaningful to the people outside but incomprehensible to himself. To the man in the Chinese Room, the symbols and the rules for manipulating them are purely syntactic, whereas to the agents outside they are also semantic.
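The purely syntactic character of the room can be sketched as a trivial rule-following program. This is an illustrative sketch only, not Searle’s own formulation: the rule book and the particular symbols are invented here. The point is that nothing in the program “knows” what any symbol means; any meaning is attributed by the people outside.

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The "rule book" pairs input symbol strings with output symbol strings.
# The program (the man in the room) follows the rules mechanically and
# never understands either column; meaning is attributed from outside.

RULE_BOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am well"
    "今天天气如何": "天气很好",   # "How is the weather?" -> "The weather is fine"
}

def chinese_room(symbols: str) -> str:
    """Look up the input shape and emit the paired output shape.

    Purely syntactic: the lookup works on the shapes of the strings,
    not on anything they refer to.
    """
    # The default reply ("I don't understand") is itself just another
    # uninterpreted symbol string handed to the man with the rule book.
    return RULE_BOOK.get(symbols, "不明白")

# To the observers outside, the exchange looks semantic; inside the
# room it is nothing but table lookup.
reply = chinese_room("你好吗")
```

The translations in the comments are, of course, available only to us as external observers; deleting them changes nothing about how the program runs, which is precisely the point of the thought experiment.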
The origin of “intrinsic intentionality”, or why only humans are considered to have it, is not known.
It has been shown, however, that humans will, in certain circumstances, relinquish this intrinsic intentionality and operate in what Milgram called the “agentic state” [Milgram 1974], in which a person yields their agency to an authority figure.
In milder cases, people can be “affirmed” by those they look up to, and to some extent the meaning of what they do is derived from the belief that the authority figure confers it, rather than it being truly intrinsic.
Searle describes an intentional state as a brain state which refers to something in the external world. Such states include beliefs, likes, hates, desires and so on.
Such states can be properties of unconscious systems, but only if they are capable of causing conscious mental phenomena. For example, a belief that the world is round remains a property of a subject even when asleep and unconscious; that belief is capable of being made conscious on waking.
An analogy is information stored on a computer disk: it is capable of being accessed by the computer, but its stored form is very different from its active form.
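The disk analogy can be made concrete with a short sketch. This is only an illustration of the stored-versus-active distinction, and the “belief” here is just an invented data structure standing in for a mental state:

```python
import os
import pickle
import tempfile

# The active form of the "belief": a live object the program can use.
belief = {"the_world_is": "round"}

# The stored form: an inert string of bytes written to disk. In this
# form the information is present but cannot be acted upon directly,
# analogous to an unconscious (but not non-conscious) state.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(pickle.dumps(belief))
    path = f.name

# "Waking": the stored bytes are loaded back into an active, usable form.
with open(path, "rb") as f:
    recalled = pickle.load(f)

os.remove(path)

# Same content in both forms, but only the active form can do anything.
assert recalled == belief
```

On this analogy, a non-conscious state would correspond to bytes for which no loading procedure exists at all, while an unconscious state is simply one currently held in its stored form.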
A non-conscious state is one which can never become conscious.
An unconscious state is one which is capable of becoming conscious.
How can a word refer to an object? How can a brain state refer to something in the external world? Where does “meaning” enter into a system?
According to Searle, even if Dennett’s homunculi were progressively simpler, they would still need intentionality.
Derived intentionality: the symbols and workings of a computer have meaning only because someone else, a programmer or user, has attributed that meaning to them. The same applies to the symbols in the Chinese Room: they have meaning only because someone outside has attributed it.
Intrinsic intentionality: A source of meaning which can only be a conscious subject.
Observer dependent and independent facts?
Gray: our senses present an “interpretation” of the data based on models which exist in our brains. These interpretations are not always accurate, and sometimes not consistent, as with the duck/rabbit picture, but they are the best guess of the processing system. The interpreted result is presented to the conscious subject already processed: data fusion takes place before anything is consciously perceived.
Presented to what? A Cartesian Theatre?
There is a problem in that there is nowhere in the brain for the data fusion to take place.
Searle speaks of consciousness emerging from the “micro-properties of neurons” in a way analogous to the liquidity of water emerging from the micro-properties of its molecules. However, there is no idea of how that could possibly happen. It may be more analogous to the emergence of electrical phenomena from putting matter together in particular ways: it relies on new, previously unknown, properties of matter.
Existing neuroscience gets on “quite nicely, thank you” without reference to consciousness, and there is nothing for consciousness to do.
[i] https://plato.stanford.edu/entries/intentionality/
2.5.5 First person and third person perspectives
We can look at a person in two different ways depending on whether that person is oneself or someone else. If we consider ourselves, we are first aware that we are a conscious subject with a perception of the world around us, which we can interact with, influence and be influenced by. We feel pain, joy, hope and various other emotions which depend in a complex way on our environment, our character and our choices. We are also aware that we have at our disposal an array of automatic systems which help us to navigate through and interact with the outside world. For instance, one such system enables us to walk without thinking about how we are doing it; another enables us to recognise different faces without having to analyse each individual feature. We can learn skills which program new automatic systems, allowing us to, for instance, play the piano without having to consider every hand movement. Nevertheless, it is generally our perception that these automatic systems are under our control, in an analogous way to the autopilot in an aircraft being under the ultimate control of the human pilot.
When we look at another person, we have no direct awareness of their inner feelings, motivations or their sense of agency. What we first see is a set of behavioural responses to stimuli from the outside world. We see the sort of automatic systems which we are aware of in ourselves but not what, or who, is in ultimate control. Starting from this point of view, analysis of brain architecture would indicate a hierarchy of control systems which could lead to the belief that a person is simply an automaton. The only reason we have for believing otherwise is by analogy with ourselves. We imagine that they feel the same as we would given the same circumstances and responses. Indeed, our brains are equipped to do that by virtue of “mirror neurons”[i] which allow us to form a model of what is going on in another person’s brain.
However, from the point of view of an alien, the behaviour of individuals and society could be analysed, modelled and possibly reproduced without having to include phenomenal consciousness or agency in the model at all. Velmans wrote: “From a third person perspective, phenomenal consciousness appears to play no causal role in mental life while from a first person perspective it appears to be central.” [Velmans 2009] If it were not for our own first person experience of consciousness, there would be no reason to include it in a theory of the world. The alien looking at the systems on Earth would have little, if any, way to distinguish a person from a complex robot.
Starting from the third person perspective alone, the key features of phenomenal consciousness and agency would neither be seen nor needed in the description of a person. From the first person perspective, however, it is clear that something is missing from that description.
Some attempts to explain this situation, coming under the headings of Dualism, Physicalism and Idealism, are described in the next chapter. The reality is that none of these hypotheses, at their current state of development, has provided an answer to the difficult questions, or the “hard problem”, of consciousness, and so far there is no indication of how they could ever do so. Dualism leaves many questions unanswered and has been criticised for its lack of explanatory power. Physicalism, on the other hand, is not compatible with the existence of free will, and it tends to ignore or deny the existence of phenomenal consciousness completely. Gray [Gray 2004], a dyed-in-the-wool materialist, says this in the conclusion to his book: “No theory yet proposed is up to the mark as a solution to the Hard Problem. But some of the right questions are now being asked. And relevant data are beginning to come in” ([Gray 2004], p. 323). At this stage no door can prudently be ignored.
[i] https://en.wikipedia.org/wiki/Mirror_neuron