Thursday, 29 May 2014

The Problems of Consciousness, part 2

The Problem of Mechanism

This problem seems to be roughly "how do you get from neuron stuff to experience stuff". It is clear from various experiments and brain-damage cases that we do get from neuron stuff to experience stuff, but how does it happen?

It will come as no surprise when I say that I don't know what the answer is. However, our view is from the inside, from within the bubble of consciousness. We are blind to the workings of our brain; we only 'see' some of the results of its workings. Our experience is based on what makes it into consciousness, and no more. What does make it in is likely to be processed, and to some degree incomplete. We should not expect the answer to the problem of mechanism to make intuitive sense. We might already have the answer, but not be willing to accept it.

What would the answer look like? What more are we looking for above and beyond the neurons and brain structures that when stimulated just so give us these experiences?

The Problem of Duplicates

Philosophical zombies are 100% identical physically and behaviourally to a conscious human being, but lack conscious experiences. You can talk to Zombie-Fred and be completely unable to determine whether he is a zombie or not. He'll talk to you about sports, tell you how much he likes a particular movie, and gaze with (seeming) admiration at scenes of natural beauty. Yet Zombie-Fred has no conscious experience. If you poke Zombie-Fred, various neurons and so on will do the same as they would in your body, but what goes on in Zombie-Fred's mental life is just some message that he got poked, with no actual conscious feeling or sensation involved.

The argument that accompanies this is something like:

  1. Zombies are identical to humans in every way, except they don't have conscious experiences.
  2. Physical activity in a human brain, identically replicated in a zombie brain, does not result in conscious experiences.
  3. Therefore, there must be something extra that creates consciousness.
Counterargument: Zombies aren't real.

Is there much more to say? Perhaps I am being too quick to dismiss zombies, but zombies don't seem to be an explanation that fills a gap in our understanding; they seem to be something that creates a gap that isn't there.

How can Zombie-Fred pass as a proper human without having a consciousness? Doesn't Zombie-Fred need something in its place? He would need some sort of rule or ability to talk about feelings of true love, or how excruciating a pain is, or how joyously vibrant a shade of green is. This would probably not be a difficult problem, and would be well within Zombie-Fred's abilities, but Zombie-Fred would then have something that humans don't have: some sort of "faking having a consciousness" module. However, wouldn't this in principle be an observable difference?

China Brain

If our minds are just the result of neurons arranged in a big complicated machine, wouldn't it be possible to produce the same effect via some different arrangement? The China Brain thought experiment involves getting billions of Chinese people to replicate the behaviour of individual neurons, such that collectively they simulate a brain. Why Chinese people? I have no idea.
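The substrate-independence point behind the thought experiment can be sketched in a few lines: a simple threshold-neuron rule produces the same outputs whether it is computed directly or by a "person" who knows nothing about brains and only follows a rule card. All the names here (fire, Person, follow_rule_card) are illustrative inventions of mine, not anything from Block or Tye.

```python
def fire(inputs, weights, threshold):
    """Direct computation of a simple threshold neuron."""
    return sum(i * w for i, w in zip(inputs, weights)) >= threshold

class Person:
    """Someone who only follows a rule card: 'multiply each incoming
    signal by its weight, add them up, raise your flag if the total
    reaches the threshold'. No understanding of brains required."""
    def __init__(self, weights, threshold):
        self.weights = weights
        self.threshold = threshold

    def follow_rule_card(self, signals):
        total = sum(s * w for s, w in zip(signals, self.weights))
        return total >= self.threshold

weights, threshold = [0.6, 0.9, -0.4], 1.0
for inputs in ([1, 1, 0], [1, 0, 1], [0, 1, 1]):
    # The two implementations agree on every input: the function
    # computed does not depend on what is doing the computing.
    assert fire(inputs, weights, threshold) == \
        Person(weights, threshold).follow_rule_card(inputs)
```

The intuition the thought experiment pumps is that nothing it would feel like anything to be lurks in the second implementation; the question is whether that intuition carries any more weight for people-as-neurons than for neurons themselves.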

The work that this thought experiment (or intuition pump) does is to show that the China Brain could be "possible and it is functionally equivalent to a normal human being, it supposedly presents the illustration of the absent qualia hypothesis. Block concludes that functional organisation is not what determines or fixes phenomenological consciousness" (Tye 2007). I think that is a well-placed 'supposedly'. The intuition is to dismiss the China Brain. My response is to consider that the China Brain should, in some sense, have its own consciousness. It seems fantastical, and hard to countenance, but it seems to be where evidence and reason lead. I suppose this could make me a die-hard physicalist, unwilling to consider a non-physical account of consciousness no matter how absurd my position seems to be. But the China Brain thought experiment doesn't give a reason to reject the possibility of consciousness, only an intuition not to. Intuitions can be useful, but they aren't answers.

The Problems of Consciousness, part 1

Consciousness. Probably the most intimate and direct thing that we can know of, yet still somewhat strange and elusive.

I've started some background reading from the Blackwell Companion to Consciousness, and I'm having an interesting time reading Michael Tye's 'Philosophical Problems of Consciousness'. I'm not quite getting some of the problems, and I'm not sure whether this is due to a failure to fully appreciate them or to my being a hardline materialist/physicalist.

What follows will be a very brief synopsis of some of the problems, and my initial thoughts in response. The subject of this blog entry is:

Tye, M. (2007) 'Philosophical Problems of Consciousness'. In Velmans, M., and Schneider, S. (eds.) The Blackwell Companion to Consciousness. Blackwell Publishing.

The Problem of Ownership

The problem of ownership is explaining how the mental objects of consciousness (feelings etc.) can be physical "given they are necessarily owned and necessarily private to their owners" (Tye 2007). The idea here seems to be that mental objects and experiences are private in the sense that nobody else can have them, and nobody else can have access to them. I own my pen, and I can give it away to someone else so that they can own it.

I'm not sure the concept of ownership contained in this idea is correct. I own my pen, yes. I wouldn't own my child, nor would I own my favourite colour, or my boss. Do I own my arm? Yes. Not in the sense that I own my pen, but I still sorta own my arm. I could say my arm is part of me, but I think it would be more accurate to say it is part of my body.

Do I own my mental objects and experiences? I don't think it would be right for people to have unfettered access to these things without my consent, so perhaps I am asserting some sort of ownership over them. I can go further and say that I think these mental objects are me. They might not be the whole of me (I have a store of memories, for example), but they are a key component of me. Feelings and experiences as they occur are me being me. Without them I (as a mental entity) am not there. I don't own me, I am me. Descartes' "I think, therefore I am" taken a step or two further.

Are my thoughts and feelings private? Well, yes, but I don't think they necessarily are. In the future, could we rig up some sort of machine to record feelings and play them back in someone else's head? Maybe. This doesn't quite work, because fully fledged persons may not have the exact same experience or thought even though they undergo the same 'playback'. Thoughts and feelings surely don't exist in isolation; they will draw on past thoughts and emotions. That pang of loss will have shades of different sorrows for each of us. But what are we saying here? If we ran a bit of software, or a set of data, on a bunch of different computers, we'd expect to get the same result. But only if we ran it on computers that were functionally the same and shared the same relevant state. If we ran our data on multiple computers which had other software running, or other data in memory accessible and relevant to ours, then we'd expect different outcomes, or at least different machine states. Why expect anything different from human brains?
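The computer analogy above can be made concrete: the same input run against machines (or brains) with different accessible state yields different results. This is a minimal sketch under my own invented names (Brain, past_sorrows, playback); it is an illustration of state-dependence, not a model of minds.

```python
class Brain:
    """Toy stand-in for a person: a bit of stored state that any new
    input gets processed against."""
    def __init__(self, past_sorrows):
        self.past_sorrows = past_sorrows  # prior state the input draws on

    def playback(self, feeling):
        # The resulting 'experience' depends on the feeling AND on the
        # stored state, so identical playback differs between brains.
        return (feeling, tuple(self.past_sorrows))

alice = Brain(past_sorrows=["a lost friend"])
bob = Brain(past_sorrows=["a lost home", "a lost job"])

same_input = "pang of loss"
# Same 'playback' input, different machine state, different outcome.
assert alice.playback(same_input) != bob.playback(same_input)
```

The design point is that "same program" only guarantees "same result" when the relevant state is also the same, which is exactly the condition two human brains never meet.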

I don't think we own our mental objects in the sense of owning property, and I'm not sure that they're private in the sense intended by Tye.

The Problem of Perspectival Subjectivity

This problem is that there is something physicalism doesn't account for in consciousness. It is possible to understand something, yet for that understanding to be incomplete until you have experienced it. "Phenomenally conscious states are perspectival in that fully comprehending them requires adopting a certain experiential point of view. But physical states are not perspectival in this way" (Tye 2007).

Tye gives the following example:
"A man who is blind and deaf cannot experience lightning by sight or hearing at all, but he can understand fully just what it is, namely a certain sort of electrical discharge between clouds".
This man understands lightning, but cannot experience it. There is some sort of gap. I agree. I'm just not sure what the problem for physicalism is. (I should probably make a quick aside to say that I really do get that there seems to be some sort of fantastical jump, hard to grasp or accept, that arises from the machinery of the brain...)

The brain processes things. Our conscious experience is the result of a number of sub-systems processing various inputs, such as sense data and memory retrieval. Imagine there is a little black box labelled 'Experience Machine' in our brains. Various inputs go into the black box, and out pops our conscious experience. Understanding all there is to know about lightning is not the same thing as experiencing it. In considering it, different inputs are going into the little black box. The inputs are thought objects about lightning. They are not going to be stored and represented in the brain the same way as a direct past or current experience of being in the vicinity of lightning, and they will not be accompanied by visual and auditory sense data. What goes into the little black box when thinking about lightning is not the same as when experiencing it externally. So we should expect different things to arrive in our consciousness.

On this account, we would only expect a full understanding to be like actually experiencing it if we were able to simulate the little black box in our heads and to marshal the appropriate inputs. If we could feed into the little black box all of the inputs that would go in when directly experiencing lightning, then we would expect to have the conscious experience of it.
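A toy rendering of the little black box, assuming (as my own simplification) that we can treat experience as a function of whichever input channels carry data. The function and channel names (experience_machine, thought_objects, visual, auditory) are invented for illustration.

```python
def experience_machine(thought_objects=(), visual=(), auditory=()):
    """Toy black box: its output depends on which input channels are
    populated, so understanding (thought objects only) and experiencing
    (thought objects plus sense data) cannot produce the same result."""
    return {
        "concepts": tuple(thought_objects),
        "sights": tuple(visual),
        "sounds": tuple(auditory),
    }

# The blind and deaf man's full understanding of lightning: thought
# objects go in, but the sense-data channels are empty.
understanding = experience_machine(
    thought_objects=("electrical discharge", "between clouds"))

# Being in the vicinity of lightning: the same kind of thought objects
# plus visual and auditory sense data.
experiencing = experience_machine(
    thought_objects=("electrical discharge",),
    visual=("flash",), auditory=("thunder",))

# Different inputs, different outputs, without any extra ingredient.
assert understanding != experiencing
```

Nothing non-physical is needed to generate the gap here: the two outputs differ simply because the inputs do.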

I have not given an account of the little black box labelled consciousness, but I think I have sketched out why we should expect a complete 'understanding' to lack the sense gained from having a conscious experience of the thing understood.

Saturday, 24 May 2014

Believing Bullshit, by Stephen Law

A couple of highlights from my read through of Stephen Law's Believing Bullshit.

"Any given set of observations can be explained by a number of theories. To use the jargon of the philosophy of science: theories are underdetermined by the evidence."
"as we'll see later in “But It Fits!” any theory, no matter how nuts, can be made to “fit”—be consistent with—the evidence, given sufficient ingenuity. It doesn't follow that all theories are equally reasonable, or that we can never fairly conclusively settle the question of which among competing theories are true on the basis of observational evidence."
"Many belief systems often start with a mystery—they offer to explain what might otherwise seem rather baffling."
"Putting these various points together, we can sum up by saying that, in order for a theory to be strongly confirmed, that theory has to stick its neck out with respect to the evidence. It has to be bold, to risk being proved wrong. If a theory either fails to make any predictions, or if it makes only vague and woolly predictions, or else if it predicts things that are not particularly unexpected anyway—if, in short, it takes no significant risks with the evidence—then not only is it not strongly confirmed, it can't be."
"amazing coincidences are inevitable. There are billions of people living on this planet, each experiencing thousands of events each day. Inevitably, some of them are going to experience some really remarkable coincidences." 
Believing Bullshit is all about the 'dirty tricks' people can play in presenting their theories and ideas to you, and how to spot and respond to those tricks. If this sounds vaguely interesting, it is well worth a read.

You may also be interested in Law's blog.


Well lookee here. I still have a blog. Maybe it is time to do something with it again.