Am I a compatibilist about free will?

Let me see. I can, for instance, think of the following example. Somebody tells me that I am an automaton and all the things I will do in the future are already written in a book. Suppose, in addition, that I am presented with conclusive evidence, so that I start to believe what I am told.

Even in this case, I would not be able to give up considering myself responsible (morally or otherwise) for my actions. (Were I made by this example to stop considering myself responsible for my actions, it would not be as if I had been convinced by an argument – that would be something happening to me.)

So I cannot stop considering myself responsible for my actions, even if my actions, regarded as natural events, could be described by deterministic laws, predicted entirely by someone else, and so on.

My point is that we use ‘free’ in very different ways when we speak of freedom in nature and of free agents (or actions). If my being free is a necessary condition for my being responsible for my actions, then I must be free.

Suppose I hit somebody while being extremely angry. What does being responsible for my action mean? I should not have hit that person. This does not change if I am shown that it was written in a book that I would hit that person at that time. The book says that things could not have gone otherwise, but my action was the kind of action with respect to which we say what one should or should not do.

If I take the natural history into account, then I am inclined to say that I should have been a different person, one that would not have hit that other person in that circumstance. However, I do not want to take the natural history into account, since being a different person is not something one can do.

Considering myself free as an agent, then, has nothing to do with believing that certain events could have happened differently. It seems to me that the choices of a free agent only need to be conceivable (I am not free to draw a non-round circle), but if this sounds too weird (due, perhaps, to a failure to see the different uses of ‘free’ I am speaking about), I could stop talking about being free as an agent or having choices entirely.

Let us take another example. Suppose someone glued a knife to my hand while I was fast asleep and hurt another person by moving my hand. I am not responsible for what happened, since it was not my action. By contrast, if someone made me hurt another person by threatening to kill me, I might be responsible for what I did, even if all the other people would have done the same thing in the same situation.

There are, of course, cases in which it would be difficult to say whether someone is responsible for something, but in such cases I think we would oscillate between saying that the person in question performed an action and saying that something just happened to that person.

To shorten the story, I think, for instance, that Harry Frankfurt’s argument against the Principle of Alternative Possibilities – having choices of action is a necessary condition for being (morally) responsible – misses the point.

Frankfurt seems to say that one could be in a ‘no choice situation’ without being forced to do what she has no choice but to do. To this I would answer that as long as we regard something as an action, alternative actions are always conceivable (and this is what I would call ‘having choices of action’).

On the other hand, we do not need to think about courses of natural events in order to talk about responsibility. What Frankfurt does is to establish the no choice situation counterfactually and then to cut all causal connection between the counterfactual situation and the real course of events. Then, when he brings the causal connection back, he makes it clear that moral responsibility is excluded just when the no choice situation is the only cause of what the agent does. (But it is not.)

Now I can return to my first example. The case of a robot assuming responsibility for her own actions may seem strange. The robot must have been programmed to assume responsibility for her actions, right? Also, the book describing all the robot’s future actions must contain entries like ‘The robot will say she is responsible for doing X’ (verbal actions are still actions, after all).

Still, I have no problem with this if I imagine myself being such a robot (in fact, Strawson’s psychological impossibility argument from Freedom and Resentment seems to suggest that we are ‘programmed’ to assume responsibility for our actions) and I cannot see how anybody could feel inclined to give up all responsibility (that is, to give up being an agent) in such a case.

To conclude, it seems that if being a compatibilist means to say that responsibility and determinism are compatible, I am a compatibilist. However, if a compatibilist is a person who believes that determinism, seen as a claim about how nature works, is relevant to the problem of free will, I am not a compatibilist.