
Saturday, 21 May 2016

Ryle’s The Concept of Mind, Chapter 2 – Knowing How, Knowing That

In this chapter Ryle seeks to convince us that there is no Ghost in the Machine. The intuition that there is some mental ‘inner’ precursor to external actions is wrong. “…When we describe people as exercising qualities of mind, we are not referring to occult episodes of which their overt acts and utterances are effects; we are referring to those overt acts and utterances themselves” (Ryle 2000, p26).

Ryle seeks to move us away from thinking of an inner mental life, some of which leads to or initiates external behaviour. Instead, the view (as I understand it) that he wants us to develop is that our mental life is either expressed in a way observable to an external observer, or in a way that is not. When we talk we are vocalising our thoughts, and when we merely think (in words at any rate) we are doing the same activity as talking, but not vocalising it. The introduction of the Category Mistake in the previous chapter was intended to prepare us for this move. Ryle gave an example of a visitor to Oxford University seeing the colleges and libraries, but wondering where the university is. Public thought (e.g. talking to someone) and private thought (e.g. talking to yourself without vocalising) are like the colleges and libraries, and the mind is the university.

Knowing How and Knowing That

Ryle draws a distinction between knowing how and knowing that. This distinction appears to map onto procedural and declarative knowledge respectively.

Misunderstandings and Feints

“Misunderstanding is a by-product of knowing how. Only a person who is at least a partial master of the Russian tongue can make the wrong sense of a Russian expression” (Ryle 2000, p58). Feinting also requires knowing how. It is the “art of exploiting”, or provoking, your opponent’s premature conclusion of what course of action you are following (Ibid.).



Saturday, 23 April 2016

Ryle's The Concept of Mind, Chapter 1 - Descartes' Myth

The Official Doctrine

In Chapter 1 Ryle outlines the ‘Official Doctrine’ of the Mind, that mind and body are distinct (substance dualism). Bodies are in space and time, and are ‘mechanical’ (roughly, they are causal systems). Bodies are public, in that their activities can be scrutinised by other parties. Minds on the other hand are in time, but belong in a kind of ‘mental space’ that is linked to the relevant body but isolated from other things in the physical universe. We are blind to the minds of others, and take it on a sort of faith that other people also have minds as we cannot observe them directly.

We have privileged access to our own thoughts and feelings, an access that nobody else enjoys. This privileged access gives us a direct appreciation of them; we watch and observe them, as it were. While we might be wrong or uncertain about things that occur in the external world, we cannot be mistaken about the happenings that we observe in our internal world.

Ryle calls this the Official Doctrine, as it was the dominant and explicitly/implicitly held belief about minds at the time the book was published in 1949. Ryle refers to the official doctrine “abusively” as the dogma of the Ghost in the Machine.

Can you tell Ryle isn’t a fan of the Official Doctrine?

(Another wonderful turn of phrase about the Official Doctrine is Daniel Dennett’s ‘Cartesian Theatre’).

Category Mistakes

Ryle introduces the idea of a ‘category mistake’, whereby someone has incorrectly categorised an entity and then proceeded to treat it as though it belonged to that category. His famous example is of a visitor to Oxford or Cambridge university being shown various aspects of the university, e.g. various colleges and libraries, but then asking “But where is the university?”. The visitor has committed a category mistake, he has already seen the university (or at least parts of it) but is expecting something more because he considers universities to belong to the same category of things as colleges and libraries. Another example is the Home Office and the British Constitution both being ‘institutions’ but being radically different. Or expecting the ‘average family’ to be a similar sort of entity to an actual family (you can’t actually live next door to the average family).


Sunday, 5 July 2015

Intentional Systems Theory and HCI

Intentional Systems Theory

Intentional Systems Theory is a theory about how we understand and predict the behaviour of different sorts of systems. It suggests three different approaches, or 'stances', to predict the behaviour of systems. Having a rough understanding of these stances is a useful aid in understanding how people understand different systems.

The Physical Stance - the physical stance is based on physical laws. For example, we predict what will happen when something is dropped based on our understanding of gravity. We predict where water will go, and how balls on a snooker table will move, using the physical stance. The physical stance is in a sense the 'truest' stance, and in principle the behaviour of any system should be describable using it, but for many systems it is not a practical approach.

The Design Stance - the design stance is based on assuming design in a system or artefact. For example, when figuring out a novel artefact believed to be an alarm clock, we don't try to understand it in terms of physical laws or even circuit diagrams. We make design assumptions: an alarm clock will have some way of setting a time for the alarm, and at that time it will do something (such as making a noise) that has a reasonable chance of waking us up, and so on. With the design stance we assume an artefact has a purpose, and we figure out how it should be used to fulfil that purpose.

The Intentional Stance - with the intentional stance we assume that something can be usefully predicted as though it had a mind complete with things like beliefs and intentions, and that it will act rationally based on what is in its mind. This is how we interact with other humans, and how we understand the behaviour of animals. We don't know that Fido and Rover (who are dogs) have a mental life anything like ours, but by assuming they have things like beliefs and intentions and that they will act rationally on them, we are able to predict their behaviour.

Using the Stances to Understand Systems

The intentional stance can be usefully applied to things like computers and robots, and frequently is. Trying to predict the behaviour of a rock by thinking about what beliefs it has will get you nowhere. Assuming the moon is designed and has some purpose doesn't yield any insight into its future orbits. Similarly, people don't feel betrayed by rocks, or feel that the moon doesn't like them.

In contrast, thinking of a computer opponent in a game as an intentional system is useful: your opponent has an intention to kill your game character; you can assume it has beliefs, such as your current location; it may even be trying to guess where you will go next; and it will act rationally within its means to meet its goal of hunting down and killing your character.
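The intentional-stance prediction described above can be sketched in a few lines of code. This is a hypothetical illustration (all names and data structures are mine, not from any real game engine): we ascribe beliefs and a goal to the opponent, assume it acts rationally on them, and predict its next move.

```python
# A minimal sketch of predicting a game opponent via the intentional
# stance: ascribe beliefs and a goal, then assume rational action.
# All names here (predict_next_move, the belief keys) are hypothetical.

def predict_next_move(opponent_beliefs, goal):
    """Predict the opponent's action from its ascribed beliefs and goal."""
    believed_player_pos = opponent_beliefs["player_position"]
    own_pos = opponent_beliefs["own_position"]
    if goal == "kill player":
        # A rational agent with this goal moves to close the distance
        # to where it believes the player is.
        dx = believed_player_pos[0] - own_pos[0]
        dy = believed_player_pos[1] - own_pos[1]
        return (1 if dx > 0 else -1 if dx < 0 else 0,
                1 if dy > 0 else -1 if dy < 0 else 0)
    return (0, 0)  # no goal ascribed, no useful prediction

beliefs = {"player_position": (5, 3), "own_position": (2, 3)}
print(predict_next_move(beliefs, "kill player"))  # steps towards the player: (1, 0)
```

Note that nothing here depends on how the opponent is actually implemented; the stance works purely from the ascribed beliefs and goal, which is exactly its appeal.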

The different stances provide insight into how people react emotionally to different systems and artefacts. One cannot really be angry with an artefact considered with the physical stance. In the case of the design stance, you can be angry at the designer of an alarm clock, annoyed at yourself for not using it properly, and frustrated when it doesn't work properly, but you can't be angry at the clock, or feel betrayed by the clock. People can (and do) feel betrayed or let down by computer game characters, and even devices like smartphones. (This may get worse as they take on human-like characteristics with their embedded personal assistants).

The three stances are useful ways of thinking about different systems, and in understanding how other people think about things in the world.

The Stances and HCI

When designing a user interface the goal is to provide the user with something that allows them to usefully understand and successfully interact with the system. The user interface does not need to be like the system it is for, it can be a fiction (or 'user illusion') that provides a simple or intuitive model of the system to make it easier to use. The three stances provide a designer with three distinct approaches to designing a user interface. The designer should select the most appropriate stance for each element, or for the overall interface, depending on how the product should be used.

A user interface using the physical stance will include features that encourage the user to think in terms of physical systems. Physical controls like joysticks and mice that create a corresponding movement in the system when the user moves their hand are an example of physical stance interface elements. Another example would be a touch-device that supports 'swiping' between different views in a way that corresponds to moving and manipulating physical artefacts. Users explore physical stance systems and discover ways of using them.

User interface elements that use the design stance will have features that are 'like' other design stance artefacts; they will suggest a purpose and an intended use to the user. The user will identify features of the user interface (buttons, controls, etc.) that look as though they are supposed to be used in a certain way, toward a certain goal. Logical and sequential grouping of controls will help to emphasise the intended use. Users learn how to use design stance systems.

Systems that use the intentional stance promote discourse, affect, and human-like (or perhaps animal-like) interactions. These systems exploit a pre-existing system model that users have (i.e. of a mind, or mind-like thing). Such systems can focus on the pre-existing model of interaction, and build upon it. However, while a model of a design stance system can be made explicit and specific to the designer's needs, to an extent the gross model properties of an intentional system may be fixed. Users engage with and enter a discourse with intentional systems.



Footnotes

This article undoubtedly draws on: Dennett, D. (2011) Intentional Systems Theory, in The Oxford Handbook of Philosophy of Mind, (eds. McLaughlin, Beckermann, and Walter). Oxford University Press.



Saturday, 28 January 2012

We Are Not Human Beings

What follows is a brief discussion of Parfit's We Are Not Human Beings, and my thinking on the topic. As with other such discussions on this blog, it isn't necessarily intended to be a robust interpretation or set of arguments, but rather a handy overview and partial set of notes, primarily for my benefit but hopefully also for the benefit of others.

Parfit, D. (2012). We Are Not Human Beings. Philosophy, the Journal of the Royal Institute of Philosophy, volume 87, number 339, January 2012.

Parfit discusses personal identity: what it is that is us, how it can be distinguished from things that are not us, and how we should think about ourselves. There are numerically identical things; my keyboard has been the same keyboard for several years. Here by 'same' I mean it is the same object: a spatio-temporal path could be traced for it, and we could be confident that when viewed in the past it is identical in the sense that it is the same object. Things can also be qualitatively identical. My preferred design of trackball sadly only lasts a few years before wearing out, so over the years I have had several objects that share a whole host of properties (they are the same model, colour, etc.) but are different objects. The trackball that I have now is the same sort of trackball that I had ten years ago, and aside from the serial number it is indistinguishable to me, but it isn't the same object. The numerical identity of my trackball is of no concern to me; I only care that I have a qualitatively identical trackball.

People are different. We are concerned with numerical identity as well as qualitative identity. Substituting my beloved with her twin simply wouldn't do.

Or would it?

What we are interested in with regards to personal identity is the thinking bit of a person. If we swapped two people's brains over we would consider it a body transplant rather than a brain transplant. We would follow the brain if we were concerned with things like responsibilities, promises, guilt, and friendship.

Parfit covers three concepts of personal identity; the soul view (which he promptly sets aside), the Animalist View, and the Lockean View. The Animalist View is rooted in our identity as animals, it is biological continuity that is of interest to us. This certainly seems to be the face value working rule that we use; we identify persons based on their bodies. The Lockean View is focused on psychological continuity, and Parfit gives us his definition:

The Narrow, Brain-Based Criterion:  If some future person would be uniquely psychologically continuous with me as I am now, and this continuity would have its normal cause, enough of the same brain, this person would be me. If some future person would neither be uniquely psychologically continuous with me as I am now, nor have enough of the same brain, this person would not be me. In all other cases, there would be no answer to the question whether some future person would be me. But there would be nothing that we did not know.
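The criterion quoted above has a clean three-way structure, which can be made vivid as a small decision function. This is just my illustrative sketch of the logic (the function and its three-valued return convention are my own devices, not Parfit's): True for "would be me", False for "would not be me", and None for the cases where, on Parfit's view, there is no answer to the question.

```python
# A sketch of the three-way structure of Parfit's Narrow, Brain-Based
# Criterion. Return values: True = this future person would be me,
# False = would not be me, None = there is no answer to the question
# (though, as Parfit says, nothing we did not know).

def is_me(uniquely_psych_continuous, enough_same_brain):
    if uniquely_psych_continuous and enough_same_brain:
        return True   # continuity with its normal cause: me
    if not uniquely_psych_continuous and not enough_same_brain:
        return False  # neither condition holds: not me
    return None       # all other cases: no answer

print(is_me(True, True))    # True
print(is_me(False, False))  # False
print(is_me(True, False))   # None, e.g. continuity without its normal cause
```

The interesting cases are the None ones, which is where the thought experiments below do their work.
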
Parfit presents a continuum of thought experiments to draw out the distinction between the Animalist and Lockean views, moving along the scale: Transplanted Head, Surviving Head, Surviving Cerebellum.

Parfit refers to McMahan's Embodied Part View. Under this view we are the thinking part of the human animal. Parfit extends this view with the Embodied Person View: human animals think by having a conscious thinking part, which is a person in the Lockean sense.

Parfit covers a number of potential and actual objections, but ends with the conclusion that we can be best described as Embodied Persons; "Since the animal thinks only by having a part that thinks, there are not too many thinkers here [the Too Many Thinkers objection]. And since the animal is a person only in the derivative sense of having a Lockean person as a part, there are not too many persons here [the Too Many Persons objection]".

It would seem to me that there is another distinction here. The thinking parts in our brains are the embodied part of us, the apparatus and substrate of thinking, of our identity. Our identity, however, is the brain-state. Admittedly our identity, our thinking, is tightly coupled with the physical mechanisms of thinking, which are in turn tightly coupled with wider aspects of our bodies. If my brain were reset to a blank state, then there would be a break in psychological continuity. It may be that the brain that is left is the closest of any brain in existence to generating my identity, but if we were to say it is still me we would merely be making the best of a bad situation.

  1. If my brain state was destroyed/reset, would we still claim identity? Possibly, but only in so far as we conflate personal identity with our physical shells.
  2. If 100 people had their brain state reset, and their physical appearance altered so that they all appeared the same... would we consider there to be anything of personal identity left to trace? No.
  3. If my brain state was destructively copied (i.e. moved) into a replicant of me, we would consider my identity to be in the replicant.
  4. If my brain state was destructively copied into a computer that functionally simulated my brain for me, and then moved back into my original body, we would consider the human-body-brain-state copy to be me.
Our personal identity is the thinking itself, the representation stored and manipulated in the brain. The apparatus for personal identity that we have available to us is the parts of the brain that do the thinking, which are embodied in the human animal. In computer terms we aren't even the software, we're the current state of the software at a moment in time.

Numerical identity isn't important. Qualitative identity is what matters; we're interested in the properties of a thinking animal/other. In a not-quite-fleshed-out analogy: in the arrangement of physical things so that the rainbow twinkles just so, we are the rainbow.

The Cogito—I think, therefore I am—becomes:

Thinking occurs, I am the thinking.

Wednesday, 17 August 2011

Goal attaining systems

I came across an interesting categorisation of goal attaining systems today:

  • Goal-achieving; a system that recognises when a goal state has been reached. This can be implicit. Examples include a doorbell buzzer, or flowers opening up in the sun.
  • Goal-seeking; a system that will move towards a goal state without any recognition that the goal is being worked towards. Examples include a marble spinning in a bowl, which will ultimately come to rest in the middle.
  • Goal-directed; a system that has a representation of the goal state and whose behaviour is intended to bring that state about. This is a system with feedback control. Examples include a thermostat, and human planning.
This is a useful set of categories that can be utilised in thinking about philosophical entities and systems, and in the design of software agents, applications, and robots. It came to me via David McFarland's Guilty Robots, Happy Dogs: The Question of Alien Minds.
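The contrast between the second and third categories can be sketched in code. This is my own toy illustration (the function names and numbers are invented for the purpose): the thermostat is goal-directed because it holds a representation of the goal state (the target temperature) and compares feedback against it, while the marble merely settles towards the bottom of the bowl without representing that state at all.

```python
# Goal-directed: the thermostat represents its goal state (target_temp)
# and uses feedback (current_temp) to decide how to act.
def thermostat_step(current_temp, target_temp, tolerance=0.5):
    if current_temp < target_temp - tolerance:
        return "heat on"
    if current_temp > target_temp + tolerance:
        return "heat off"
    return "hold"  # goal state reached, within tolerance

# Goal-seeking: the marble ends up at the bottom of the bowl, but
# nothing in its dynamics represents "the bottom" as a goal; it just
# follows a restoring force with friction.
def marble_step(position, velocity, damping=0.8):
    velocity = (velocity - 0.1 * position) * damping
    return position + velocity, velocity

print(thermostat_step(18.0, 21.0))  # "heat on"
```

A goal-achieving system, in this toy vocabulary, would be something even simpler: a predicate that fires when the goal state obtains (the doorbell buzzer), with no behaviour directed at bringing it about.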