Intentional Systems Theory
Intentional Systems Theory, due to Daniel Dennett, is a theory about how we
understand and predict the behaviour of different sorts of systems. It describes three
approaches, or 'stances', for predicting a system's behaviour. A rough grasp of these stances
is a useful aid in understanding how people make sense of different systems.
The Physical Stance - the physical stance is based on
physical laws. For example, we predict what will happen when something is dropped based on
our understanding of gravity. We predict where water will go, and how balls on
a snooker table will move, using the physical stance. The physical stance is in a
sense the 'truest' stance, and in principle the behaviour of any system should
be describable with it, but for many systems it is not a practical approach.
The Design Stance - the design stance is based on assuming
design in a system or artefact. For example, when figuring out a novel artefact
believed to be an alarm clock, we don't try to understand it in terms of
physical laws or even circuit diagrams. We make design assumptions, such as that an
alarm clock will have some way of setting a time for the alarm, and at that
time it will do something (such as making a noise) that has a reasonable chance
of waking us up, and so on. With the design stance we assume an artefact has a purpose,
and we figure out how it should be used to fulfil that purpose.
The Intentional Stance - with the intentional stance we
assume that something can be usefully predicted as though it had a mind complete
with things like beliefs and intentions, and that it will act rationally based
on what is in its mind. This is how we interact with other humans, and how we
understand the behaviour of animals. We don't know whether the dogs Fido and Rover have a
mental life anything like ours, but by assuming they have things like beliefs
and intentions and that they will act rationally on them, we are able to
predict their behaviour.
Using the Stances to Understand Systems
The intentional stance can be usefully applied to things
like computers and robots, and frequently is. Trying to predict the behaviour
of a rock by thinking about what beliefs it has will get you nowhere. Assuming
the moon is designed and has some purpose doesn't yield any insight into its
future orbits. Similarly, people don't feel betrayed by rocks, or feel that the moon
doesn't like them.
In contrast, thinking of a computer opponent in a game as an
intentional system is useful: your opponent has an intention to kill
your game character; you can assume it has beliefs, such as about your current
location; it may even be trying to guess where you will go next; and it will
act rationally within its means to meet its goal of hunting down and killing
your character.
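As a concrete illustration (added here, not from the original article), the minimal Python sketch below shows a toy opponent written so that the intentional stance is the natural way to predict it: it holds a 'belief' about the player's location and acts 'rationally' on its intention to reach them. All names (Opponent, believe_player_at, step) are purely illustrative.

```python
# A minimal, illustrative sketch of a game opponent that is easiest to
# predict from the intentional stance: it has a 'belief' about where the
# player is and an 'intention' to hunt them down, and it acts rationally
# on that belief by always moving one square towards the believed location.

class Opponent:
    def __init__(self, position):
        self.position = position              # (x, y) on a simple grid
        self.believed_player_pos = None       # its 'belief' about the player

    def believe_player_at(self, pos):
        """Update the opponent's belief when it 'sees' the player."""
        self.believed_player_pos = pos

    def step(self):
        """Act rationally on the current belief: move one square towards
        where the opponent believes the player to be."""
        if self.believed_player_pos is None:
            return                            # no belief, no basis for action
        x, y = self.position
        px, py = self.believed_player_pos
        dx = (px > x) - (px < x)              # -1, 0 or +1 towards the target
        dy = (py > y) - (py < y)
        self.position = (x + dx, y + dy)


if __name__ == "__main__":
    enemy = Opponent(position=(0, 0))
    enemy.believe_player_at((3, 1))           # it now 'believes' you are at (3, 1)
    for _ in range(3):
        enemy.step()
    # From the intentional stance its behaviour is easy to predict: it heads
    # towards (3, 1), even if the player has since moved elsewhere.
    print(enemy.position)                     # (3, 1)
```

Nothing in this code is literally a belief or an intention; the point is that treating the opponent as though it had them is the quickest way to predict what it will do.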
The different stances provide insight into how people react
emotionally to different systems and artefacts. One cannot really be angry with
an artefact considered with the physical stance. In the case of the design
stance, you can be angry at the designer of an alarm clock, annoyed at yourself
for not using it properly, and frustrated when it doesn't work, but
you can't be angry at the clock itself, or feel betrayed by it. People can (and
do) feel betrayed or let down by computer game characters, and even by devices
like smartphones. (This may get worse as such devices take on human-like
characteristics through their embedded personal assistants.)
The three stances are useful ways of thinking about
different systems, and of understanding how other people think about things in
the world.
The Stances and HCI
When designing a user interface the goal is to provide the user with something that allows them to usefully understand and successfully interact with the system. The user interface does not need to be like the system it is for; it can be a fiction (or 'user illusion') that provides a simple or intuitive model of the system to make it easier to use. The three stances provide a designer with three distinct approaches to designing a user interface. The designer should select the most appropriate stance for each element, or for the overall interface, depending on how the product should be used.
A user interface using the physical stance will include features that encourage the user to think in terms of physical systems. Physical controls like joysticks and mice that create a corresponding movement in the system when the user moves their hand are examples of physical stance interface elements. Another example would be a touch device that supports 'swiping' between different views in a way that corresponds to moving and manipulating physical artefacts. Users explore physical stance systems and discover ways of using them.
User interface elements that use the design stance will have features that are 'like' other design stance artefacts: they will suggest a purpose to the user, and an intended use. The user will identify features of the user interface (buttons, controls, etc.) that look like they are supposed to be used in a certain way, toward a certain goal. Logical and sequential grouping of controls will help to emphasise the intended use. Users learn how to use design stance systems.
Systems that use the intentional stance promote discourse, affect, and human-like (or perhaps animal-like) interactions. These systems exploit a pre-existing system model that users have (i.e. of a mind, or mind-like thing). Such systems can focus on this pre-existing model of interaction and build upon it. However, while the model of a design stance system can be made explicit and tailored to the designer's needs, the gross properties of an intentional system's model are, to an extent, fixed. Users engage with, and enter into a discourse with, intentional systems.
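To make the contrast concrete, here is a small illustrative sketch (the function names and toy parsing are assumptions added for this example, not from the article) that frames the same feature, setting an alarm, first as a design stance control and then as an intentional stance request to an assistant.

```python
# A hedged sketch contrasting two stance-based framings of one feature
# (setting an alarm). The names and the toy parsing are illustrative only.

import re

# Design stance: the interface exposes a control whose intended use is
# evident -- the user learns that this 'button' sets the alarm time.
def set_alarm(hour, minute=0):
    return f"Alarm set for {hour:02d}:{minute:02d}"

# Intentional stance: the interface invites discourse -- the user asks a
# 'mind-like' assistant, which is expected to understand the request.
def assistant_reply(utterance):
    match = re.search(r"wake me (?:up )?at (\d{1,2})(?::(\d{2}))?", utterance.lower())
    if match:
        hour = int(match.group(1))
        minute = int(match.group(2) or 0)
        return "Sure - " + set_alarm(hour, minute)
    return "Sorry, I didn't understand that."

if __name__ == "__main__":
    print(set_alarm(7, 30))                   # design stance: direct control
    print(assistant_reply("Wake me at 7:30")) # intentional stance: discourse
```

The underlying functionality is identical; what differs is the model the interface encourages the user to adopt, and therefore how they will learn it, use it, and react to it when it fails.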
Footnotes
This article undoubtedly draws on: Dennett, D. (2011) 'Intentional Systems Theory', in The Oxford Handbook of Philosophy of Mind (eds. McLaughlin, Beckermann, and Walter). Oxford University Press.