Friday 9 November 2012

Myth of the Given

There must be some firm foundation on which the rest of our knowledge is built by various inferential methods. This was the traditional view, foundationalism, and it is still to some extent the common-sense view of how knowledge works. But it has been increasingly losing favour in philosophy, especially since Sellars' famous argument against "the myth of the given."

Sellars' argument works like this: 

Broken down, foundationalism makes two claims.

(1) Epistemic Independence Requirement (EIR). There must be cognitive states that are basic in the sense that they possess some positive epistemic status independently of their epistemic relations to any other cognitive states. 

(2) Epistemic Efficacy Requirement (EER). Every non-basic cognitive state can possess positive epistemic status only because of the epistemic relations it bears, directly or indirectly, to basic cognitive states. Thus the basic states must provide the ultimate support for the rest of our knowledge. Such basic cognitive states, independent and efficacious, would be the given.

Sellars argues that you cannot have both. The standard candidates for basic empirical knowledge either fail EER (e.g. knowledge of sense-data), or presuppose other knowledge on the part of the knower, and thus fail EIR (e.g. knowledge of appearances).

Tuesday 23 October 2012

Condemned to be free

How much of our lives and the things that happen to us are we responsible for? The common-sense answer would be that we are responsible only for those actions which we cause and intend to cause, but that we are not responsible for the things, good or bad, that just happen to us. That is to say, where the cause was something other than us: society, luck, accident, other people, or physics. And this common-sense answer is often taken as absolving us of responsibility for things like being drafted into a war, or being born into poverty, or indeed being born into wealth.

But in Being and Nothingness, Sartre takes these ideas and arrives at a completely different and much more extreme conclusion. We are wholly responsible for ourselves, he argues, including even if we are drafted into a war. 

To make this argument, Sartre first has to argue that consciousness doesn't pick its intended actions just based on what the facts cause it to pick, e.g. Bill is caused to get a drink by the fact that he is thirsty.

Sartre argues that no fact in the world could motivate our conscious actions by itself. We would always need, in addition, goals, values and interpretations in order to have the intentions that lead to our deliberate actions. Put another way, for a cause (some fact in the world) to motivate an action it must be experienced as a cause. So, for example, for Bill to be motivated to perform the action of getting a drink because of the cause 'thirst', he must first experience thirst as a cause. He must have ideas about what one should do when thirsty. If, on the other hand, he did not have the idea that when thirsty he should get a drink, then thirst would cease to be a cause of his doing so.

So similarly, according to Sartre's thinking, even if Bill is stranded on a desert island and is thirsty and can't get a drink, it's his responsibility, because it's he, not his thirst, that has motivated him to want a drink.

Sartre himself gives the example of war.

"If I am mobilized in a war, this war is my war... I deserve it first because I could always get out of it by suicide or by desertion; these ultimate possibles are those which must always be present for us when there is a question of envisaging a situation. For lack of getting out of it, I have chosen it. ... If therefore I have preferred war to death or to dishonour, everything takes place as if I bore the entire responsibility for this war. of course others have declared it, and one might be tempted perhaps to consider me as a simple accomplice. But...it depended on me that for me and by me this war should not exist, and I have decided that it does exist... But in addition the war is mine because by the sole fact that it arises in a situation which I cause to be, and that I can discover it there only by engaging myself for or against it"

Monday 22 October 2012

To be caused and to be motivated

There is an argument against free-will that goes like this: all actions are caused, and if an action is caused it's determined and not a free choice. But even if an action were not caused, it still wouldn't be a free choice, because an action that isn't caused by anything is random.

But there is a difference between a cause and a motive that is relevant for discussion of free-will.

People can be *caused* to experience something out of their control. For instance, if Ted is pushed out of a window, he is caused to hit the ground and be injured. Here we can say that it was determined that Ted got injured (it was determined by the fall).

But if one is 'motivated', instead, the implication is that one is only encouraged towards an action, and might not take it up. With motivation, the future is open. For instance, if Bill is thirsty, he is motivated to go and get a drink, or to ask someone to bring him one. But his motivation alone does not guarantee that he'll do either of those things; he might choose to stay thirsty.

So it's simply not the case that a free-will advocate is limited to the options that actions are either caused or random; there is a third category.

The determinist can only argue that the third category doesn't really exist--that motivation as so defined is a myth. For instance, they can say that Bill thinks he chooses to stay put rather than go and get a drink to sate his thirst, but really circumstances mean that Bill would always make that 'choice' and no other.

Life without Drama

Personal dramas are created when the parties involved feel so sure that they're entitled to their negative interpretation of one another that actions such as cutting people off, breaking up, or being disrespectful, harsh or even cruel, seem warranted.


What is surprising, though, is that not much is actually needed to convince people they are entitled to their negative interpretation. People always feel they are, of course. But that's the point: they *feel* they are. Emotions become part of the decision. Frustration, anger, confusion, sadness, pride--it's not always an obvious emotion. But an emotion and a few bits of evidence are not sufficient to make an informed decision.

What the emotion does is fit with a particular interpretation of the evidence, and THEN cause that interpretation to become rigid (because emotions are hard to overturn). One is stopped from easily seeing *different* perspectives (including truer ones).

Actually, almost nothing warrants a negative interpretation. It's better to err on the side of positive interpretations, because you'll learn more being open to them than the other way around. Negative interpretations are a bit of a full stop.

On a perhaps implicit level, the idea behind this process is that we sometimes should cut people off, break up, or be disrespectful, and being entitled to a negative interpretation tells us when to do this. So if we never let ourselves have negative interpretations we might end up 'being walked on' or harmed in some way. But this way of thinking ignores how easy it is to get along with people if you try to. One can take up ideas like 'not taking things personally'.

We simply don't ever need to be disrespectful to people in order not to be victimised; our positive attitudes need not make victims of us. And as for breaking up and cutting people off, this is something we should rarely ever do. People should be able to drift apart mutually if they so wish. But in general, relationships have a lot of knowledge in them, and we should only ever put a stop to them if absolutely necessary. Where 'absolutely necessary' here means 'when the other person has destroyed the relationship and there is now nothing that can be done', and DOES NOT mean 'my negative interpretation tells me to end the relationship.'

Thursday 18 October 2012

A Life Without Deadlines

Deadlines serve a pretty important function: sometimes, things need to be done by a certain time. But there is a dark side to deadlines, and that is self-coercion.

We all know that deadlines can be pressuring, stressful, boring, and even upsetting, but what all these problems come down to is that you are forcing yourself to do something. And not only is self-coercion unpleasant in the short term, but it's bad for you in the long run. There are two reasons for this. 

Firstly, if you don't want to do something, you might be right not to. Your rejection of doing something indicates you have a criticism of it, and you'd probably agree with the criticism if you understood it; after all, it's *your* criticism. But the self-coercer speaks to themselves in just the same way a coercive person speaks to their victim. 'You're just being lazy,' 'you're just being stupid,' 'you're just weak'--anything but actually taking their wishes seriously. So the criticism gets pushed down and repressed. And it's not good to ignore a problem, because doing so also ignores the solutions that would help you!

The second reason is that self-coercion breeds double-think. You might want to meet the deadline in some regards--you want the promotion, you want the degree, you want the hot body by bikini season--but you also have problems with it that mean you don't want to do it. You're left with double-think. Worse still, your double-think can breed if you live a life of self-coercion.

Maybe you want to meet this deadline because you want the promotion, but maybe you only want the promotion because of another instance of double-think. Say you want a job that pays well to prove you're a worthy person, but you had also wanted to be a circus clown because it's nice to entertain people.

Deadlines, along with all other kinds of self-coercion, are one of the ways to keep yourself tied up in knots, confused and distressed about the things you think you want--if you want them at all.


THE SOLUTION 

Life without deadlines isn't really about throwing out egg timers and never satisfying a client. You can still do things by a certain time. The trick is to do them by a certain time just because you want to. No coercion. No pain. And to get to that state, you need to do the opposite of what was described above. You need to do things you don't have criticisms of. And you need to get rid of double-think. Both of these really come down to the same thing: you need to do what's fun for you.

It's contrary to all the advice out there. People like to say about deadlines, 'ah, well, it's just life.' They advocate force as a means of being productive. But it's not really true. Ultimately, the most productive person is the one who loves what they're doing. Such a person gets a thing done *before* the deadline. So however contrary it is to common advice, it's not exactly unintuitive. We know it works. The question is: are you willing to do what you want, to get what you want?


Wednesday 17 October 2012

Ignorance is bliss, but the unexamined life is not worth living

Ignorance can be bliss. If you're seriously ill but don't know it, then you have all the bliss that comes with believing you're well. The same is true if you're about to get fired but think you're being called in for a promotion. Or if you think you're a charming person but everyone hates you.

And indeed, if you're anything like me, you've once or twice avoided going to the doctor's because you didn't want to hear bad news. Or you've avoided exposing yourself to tough criticism because you didn't want to know if you were doing something wrong. Or you've held a concerning-looking letter in your hands a few moments longer than necessary, because if you don't open it, it's a bit like there's been no bad news... right?

To will ignorance for the sake of bliss is a common enough thing. But what kind of person would choose that as their default? For most of us, these are some of our more shameful moments. We'd much prefer to be calm and pragmatic in the face of bad news. Why? For the very obvious reason that we have the problem even if we don't know we do, and the best thing to do with a problem is solve it. Simple.

OK, but it's hard to solve problems, and I like bliss.

Ignorance can be bliss, but being completely and desperately stumped on a troubling problem can be bliss too. This is why you end up with people who devote their whole lives to science, maths, philosophy and so on--it's not because they're easy.

What makes you happy is reprogrammable. It depends on what ideas you have. So if ignorance can make you happy, and problem solving can make you happy, then there's nothing in it except what's better for you.

I think this would be a nice way to interpret 'the unexamined life is not worth living': it's not worth living because there's no reason to prefer it to the examined one.

The Relevance of Scepticism

When a sceptic asks of your proposition, 'ah, but what if you're a brain in a vat on Mars?', the appropriate response in almost all cases is simply this: 'so what?'

There's an unwritten, unspoken, often unthought clause to all our theories and propositions: when we argue that they are true or false, good or bad, we mean as theories or propositions describing this world we live in, whatever this world is when looked at from above it--an illusion, part of a multiverse full of other worlds with differing truths, or whatever else it could be.


To point out that there could be a scientist, looking over your brain as it thinks within its vat, who can see extra context about your world that you don't know (i.e. that you're imagining it), is in many ways the same kind of claim as saying that there's a God looking at you with the extra information that he created you and put your world into motion. It's the same in the following sense: despite being extra information that would change your perspective on your theories and propositions, it doesn't really affect the quality of those theories and propositions given that you live in the world that you do, i.e. one that almost entirely ignores this "extra information".

For this reason, let Moore say to the sceptic, "dammit, I have a hand!"* And let it go unsaid that he means 'at least in my world I do'.


*'cause I see it' isn't really a good reason to believe you have hands in and of itself, but that's quite another matter

Monday 15 October 2012

The Utilitarian Cost of Free Healthcare

The Oregon Health Plan is state-funded healthcare, and with euthanasia legal in the state, the local government is finding interesting ways of cutting the budget in this time of financial crisis.

The OHP no longer covers medication that slows the growth of cancer, but they do offer euthanasia in its place. The reason: price. 

If you've ever learned about the five transplant patients or any other thought experiment that tries to test the claims of utilitarianism against our moral intuitions, you might have heard them described as 'exaggerations' or 'extreme examples'. But the reality of state-funded healthcare, such as the OHP or the NHS, is that they do operate on some, occasionally quite disturbing, utilitarian principles. They have to. As Dr Walter Shaffer of Oregon put it, "We can't do everything for everyone." There are always more patients than resources, and choices have to be made. Value judgements have to be made about what helps the greatest number, and indeed what makes the greatest number happy (after all, why help a cancer patient with a low chance if you can help a cancer patient with a high chance?).

In the UK euthanasia isn't yet legal, but there are back-door euthanasia programs like the Liverpool Care Pathway, an NHS program where those who are 'dying' can have their medication, food and water withdrawn, hastening the process of death. Professor Patrick Pullicino recently came forward with information about thousands of elderly patients being killed prematurely on the NHS every year because they've been diagnosed as terminal. Why are the elderly picked on? Well, even if they aren't actually dying, they soon will be, right? Why waste resources?

Most good people turn up their noses at the five transplant patients. To treat life as non-sacred for the benefit of others is a disgusting idea, even if the 'others' are a larger and preferable group. Yet in real life, are we OK with a bit of utilitarianism or what?

While a libertarian, and some conservatives, would happily argue that there are private healthcare options that would solve this problem, it's the supporters of state healthcare that pique my curiosity. Is this the cost they happily pay for free healthcare for almost all?

Thursday 11 October 2012

What's good about doing a philosophy degree?

If you want to learn to be good at philosophy you need to come at it with two things. Firstly, you need self-direction. You need to be curious. To have problems you're interested in. To be willing to think for yourself. Secondly, you need a very good understanding of philosophy's traditions. You're only ever one of the latest people to learn something, and as that person, you're one of the least well informed.

Both are crucial, but it's easy to prefer one over the other.

It's common to find self-direction scary, and it's common, if you do have self-direction, to sometimes be so sure of your own thoughts that you underestimate the value of very thoroughly learning why the traditions can show you up. But if you can learn to have both, the one keeps the other on its toes in just the right way.


For these reasons a philosophy degree CAN be the best thing to do. I say 'can' because it's so easy to screw it up. You screw it up by not combining these two criteria well.

Self-direction:

Firstly, it's not school. You're not told what to think. Your teacher is no longer a straight-up authority, but just some guy who happens to be really familiar with the papers you're looking at for this philosophy problem. You can challenge him, you can challenge the papers, and you can challenge your fellow classmates. Better still, you're encouraged to do this.

And you get a lot of choice about what you're going to learn. From what university to pick (for instance, I like the University of Manchester because it's very cutting edge), to what units to take on the degree program, and THEN even what parts of the unit to focus on (because you can choose which part to do your essay on and which part to bring up in your tutorials).

Traditions:

But it isn't exactly bespoke, as learning as an autodidact would be. It's more like going to the supermarket or ordering from a menu. But that's good, because that's where the second criterion comes in.

Once you know what problems you're most interested in, you'll be given a reading list on that subject, and it's this that turns out to be worth thousands of pounds, believe it or not. Very few people understand the arguments in philosophy well enough to recommend you a good reading list. Very few people have even heard of the philosophers you'll be looking at. The amount of knowledge and understanding of philosophy that goes into those reading lists is huge. But a reading list is nothing by itself; after all, they're actually available online. It's the fact that the entire university experience is dedicated to that reading list. Philosophy is hard, *and* full of contexts and subtleties that are easy to miss. You don't just need the reading list, you need someone to help you understand the reading list and its readings (which is what your lectures are for). And then you need someone to work with to challenge the readings and the reading list (which is what your tutorials are for, in part).


Used right, philosophy degrees are a genius combination of what you need to be good at philosophy.

Wednesday 10 October 2012

Define 'Define'?: Philosophers and definitions

We're all familiar with two reasons to define terms. One: because you're speaking to someone who doesn't know what the word means. And two: because you're speaking to somebody who misunderstood the particular meaning of the word you intended.

But philosophers propose a third option, one that gives definitions a much bigger role than we might be used to in everyday life. According to philosophy, we define because our concepts are rough around the edges. That is, we want to talk about what a thing is and what it isn't, in absolute terms, but our concepts aren't well enough understood to do this. 'How can we make any real progress talking about justice if we don't actually understand what justice is and what it isn't yet?'



If you weren't used to philosophy and you peered in to take a look, all this back and forth about what free-will is and what it isn't, or whether ethical statements express beliefs or commands, or whether knowledge is justified true belief (or whether that requires amended or additional conditions), and everything else philosophers waffle on about, would all seem rather pointless. It's only not pointless when you understand how what they're talking about relates to real problems and to solutions to those problems.

Philosophers have a specific definition of 'definition', and that is: to analyse the necessary and sufficient conditions of a concept. So when philosophers define free-will, say, they aren't being pretentious; rather, they're trying to formalise a concept of free-will that resists criticisms from determinists where other accounts of free-will have failed. And when Gettier wrote about justified true belief being necessary but not sufficient for knowledge, the implication wasn't some minor curiosity about words, but an error in our concept of 'knowledge' that could lead us to re-evaluate our entire treatment of what we know, given fallibility.

Real work is done with philosophers' definitions.

What philosophers have to be careful of, however, is getting too into definitions and forgetting all the other ways we have to make progress in philosophy. Actually, you can learn a lot about ideas like 'good' and 'bad' in ethics even if you aren't really sure what 'good' and 'bad' exactly mean. It's good to bring up definitions if you notice a problem with one, or if you think one will solve the problem, but bringing one up just to stop a conversation thread only ever stops potential progress.

Thursday 4 October 2012

Minimize Misunderstandings

Misunderstandings are common and impossible to completely eliminate, but we can understand some of the things that go into misunderstanding someone or something and, in that much, make a bit of progress in minimising them.



The challenge is that you don't know when it's happened. If you *don't understand*, then you know this and you can ask for clarification. But if you misunderstand, you actually think you have understood. The skill at hand, then, really comes down to how to be good at understanding things.

Preconceptions - you'd be mistaken to think you can really get around having them, but being aware of your preconceptions is the first step to being open to having them shown wrong.
Double checking - once you have your theory on what you've just heard or read, ask if you've understood correctly. Simple, but overall good practice.
Putting passion to the side - there's nothing worse for creating misunderstanding than a person who eagerly jumps in because they hear a buzzword or think they see an error. Passionate disagreement can be a good thing, but know its time and place.
Know your weak spots - are you not very well read? Do you often equivocate words? Are you not good with subtlety? Can you not hold long arguments in mind? Have fun finding your weak spots. You'll have them. What's better, to know them or not know them?
What information don't you have? - this one is similar to preconceptions. Whatever argument you are facing has a context. It's solving a problem, it's a response to something, and so on. Be careful to keep track of this and know where you have a gap in your information.
The skill of analysis - knowing how to analyse arguments is a great way to ensure you understand them. There are several things involved here: comprehending the meaning of the language used, discerning the intended meaning from other possible meanings; prioritising the parts of the argument as its author has; and gauging what kind of argument you've been given.

Thursday 27 September 2012

Not fooling yourself: the quiz!


Question one:
Do you 'let your ideas die in your place'? That is, do you have a positive attitude towards criticism, and do you not take realising you're wrong personally?

Question two:
Are you optimistic that problems have solutions?

Question three:
Do you actively seek criticism of your ideas?

Question four:
Do you have at least some basic understanding of logic, reasoning and argument?

Question five:
Do you have a lot of strongly held beliefs?

yes, ones I've thought a lot about consciously
yes, ones I've thought a bit about
yes, but I don't realise it because they're all inexplicit beliefs

Question six:

Think of your three most highly regarded theories. Got them? Good. How many hard criticisms have you had challenge each of them in the last six months?

Question seven:
Do you, by and large, have the same beliefs as your peers? If so, which came first, the peers or the beliefs?

Question Eight:
How often do you have a good debate or discussion, or read something to a similar effect?

Question Nine:
Do you have at least one friend who'll eagerly try to tell you why you're wrong about stuff?

Question Ten: 
Think of a few of your intellectual idols. People who have influenced your own thoughts. Can you readily think of problems you believe there are with their work?

Question Eleven:
Are you willing to think of bold solutions to problems and not just sit on the fence, where you'll be more prone to fallacies of common sense?

Question Twelve:
Do you frequently have Socratic episodes in which you realise that you are more naive or more ignorant than you thought?

Question Thirteen:
Do you think you did well on this test?

Why we Promise

A businessman puts his reputation on the line when he promises a client that his product will be of X quality at time T. It's a completely different thing than if he had merely said in passing that he thinks it will be that good. When a person promises, it means they are assuring you that you can hold them to their word on the matter.

It's the same sort of thing with personal relationships, too. It's not exactly 'reputation' on the line, but no one wants to put their friend off being their friend by being unreliable to their word.


And so this social practice has helped people make many good, stable agreements over the years. You could say it's helped make relying on people more reliable! If you say you'll do something and you forget, then it's not clear how blameworthy you are. Perhaps you're careless, or perhaps it was a simple mistake. But if you promise someone you'll do something and forget, this is more obviously very neglectful of you and therefore blameworthy. After all, everyone knows the rules of promises. They're serious stuff; you can't just neglect to remember.

But if promises are so great, why don't we use them even more?! Why don't I get my friend S, who's always five minutes late, to promise to be on time, so that he has to make the extra effort he's not currently making or else risk losing me as a friend? Why not order promises at every single juncture?

Well, quite obviously, we don't do that because that'd be horrible. Specifically, it would be horrible because, to get along with each other, we need to be a bit more tolerant of each other's mistakes. We also need to be tolerant in other ways that affect promises; for example, we need to be tolerant of people changing their minds. So the question really is: what makes the instances where it's good to make a promise so special that we are less tolerant of our friends making a mistake (like forgetting, or not being able to do it) or having the freedom to change their minds?

The intuitive answer is: when the promisee really needs the promiser to keep their promise. It's a bit annoying that S is always five minutes late, but it's a fairly tolerable quirk as far as quirks go, particularly in an age of smartphones. But if a person is injured at the side of a road, and you promise to come back for them with help, forgetting isn't really OK. Or if a dad promises his son he'll be at his birthday party, it wouldn't really be OK to change his mind in favour of the pub. But wait, now we're in the territory of things you're probably obligated to do regardless of the promising.

So the criterion needs to be that the action is not something you are obligated to do unless you promise to do it, and yet something people could fairly refuse to tolerate you failing to do. Perhaps our answer lies where we started, with the businessman trying to keep your business.

Here's an example: Rebecca asks John, her friend, if he can be around tomorrow night because she needs to go over a very important pitch she'll be giving at work that could get her a promotion. He promises he will. If he didn't keep this promise, unless he had a very good reason, he wouldn't be a very good person to be friends with. This is completely different from my friend S always being five minutes late, because that kind of thing won't really affect how good a friend the person is (we'll assume the friendship has other assets).

Certain relationships come along with certain things you actually have to be reliable about in order for that relationship to last. Good neighbours have to water the plants when you're on holiday and keep the noise down after 10 PM. Friends have to be there for you when you're having a personal problem. Boyfriends and girlfriends have to share an affection. But these are consensual 'have tos'. Without your continued agreement you don't have to be a good neighbour, or a friend, or a boyfriend/girlfriend, or one of two people trading, or any other kind of human relationship. Promises are one of the many ways we confirm we want to keep 'doing business together', so to speak, and accept the cost of doing so.



Wednesday 26 September 2012

Cult Philosophy

Western philosophy is defined by its tradition of criticism. So when you get a 'movement' in philosophy it is just as notable for its internal divergences as for its shared principles. What you get very little of, then, are philosophy cults. Of course cults, which are common enough, operate according to a certain "philosophy" in the sense that they hold certain ideas or world-views, but this is different from the sense of "philosophy" Western philosophy is interested in, because it lacks a critical attitude.

Once in a while, though, there are borderline cases: a group of thinkers who positively discuss a shared set of beliefs, and who to one set of people may appear to be a school of philosophy, and to another a cult. (An example of this may be Rand's Collective, or Objectivists more generally.)



The reason these borderline cases have the appearance of a cult is the ways in which they go against the traditions of Western philosophy hinted at above. They tend to have qualities like: the group seeking guidance from an authority, rather than each member of the movement taking their lead from problems they've identified with their superior's theory. Or they lack careful and rigorous exchange with their opponents (instead often considering their opponents only according to the most naive or negative interpretation).

The reason they have the appearance of not being a cult is that they will in some way have attributes that are too "serious" to be a cult. Perhaps, for instance, the participants of the movement consider the ideas carefully and do not take them on blind-faith. Or perhaps the participants are a part of the movement for the ideas themselves, and not "to belong" or other such psychological determinants typical of cults.

Perhaps then, instead of discrete categories when it comes to belonging to a movement--one being a school of philosophy and the other a cult--it would be more accurate to say that both exist at the two extremes of a continuum. That might be the more fitting interpretation.

Yet two categories do spring to mind: the learner and the thinker. Let me explain.

We might ask: what is actually so bad about one of these borderline cults if the ideas are actually good? Yes, there are good criticisms to make, the most important being that criticism is an important weapon against fallibility, and this must be taken seriously, which the borderline cases aren't as good at. But there are assets, too.

Sometimes a philosopher is so busy carefully understanding something, so as to rip it to shreds, that they miss a lot of what's good and useful about a theory. In this respect the likes of Rand's Collective are being more thorough--certainly they could tell you more about what's good about Objectivism, and how to put it into practice in your everyday life, than most other ethical movements could with their theories.

So perhaps we need both! We need a bit of cultishness to learn the best of the good theories inside and out (the learner). And we need the traditions of Western philosophy to then alleviate us of dogma and to go on to use our own understanding of things (the thinker).


You see, I suggest that what is actually bad about some of these borderline cases isn't their bits of cultishness per se, but the degree to which they mistake themselves for thinkers when they are learners.

It is good to know which you are doing, and if you were to mistake one for the other, you could find yourself in a very dogmatic position.

Wednesday 29 August 2012

Socrates would hate self-help


At the birth of Western philosophy lies Socrates, who preached that 'the unexamined life is not worth living.' As philosophy has continued to develop from traditions largely put forth by this man—fallibilism, philosophy as a critical dialogue, and a need to understand and question the concepts we take for granted—it has in an important sense moved away from the examined life. This is very unfortunate.

So concerned is philosophy with fundamental truths about the world and about human nature (and as it should be) that it has evolved almost a distaste for the personal. Even ethical and political theories only touch upon what could be considered guidance. They dip their feet at the edge of the pool but don't dive in.

There will, on trend, be more talk of what an individual owes society than of what you owe your friend, or your family member, or your lover, or your colleague. Philosophy is more interested in what counts as knowledge than in how one makes discoveries. Whether happiness is a value is of more interest than whether there are good or bad ways to be happy, and if so, what they are.

These questions are less fundamental, but they are still very general and very important questions about humans and about reality, and so they are undoubtedly philosophical.

I suppose the assumption for many of these more personal matters is that they belong more to the realm of psychology. Discovery is psychology. Happiness is psychology. Healthy relationships are psychology. Self-help in particular gets given a lot of these questions. But it would be a mistake to think that Socrates' examined life lives on in psychology and self-help.



The problem with putting these questions in this domain is that we don't get deep enough answers. Self-help is very subjective. The individual isn't so much searching for truth as identifying their current goals and reorganising a few mistaken methods of arriving at those goals. Socrates would disapprove.

Socrates would want to question our goals and learn why some of them were in error. Socrates would want to question whether we should even have goals. Socrates would want to scan through and examine as close to every thought and assumption motivating every single piece of the puzzle on the table. (At least the spirit of Socrates would; whether the man himself was capable of doing this is a difficult question, given we can't even guarantee he ever existed.)

Self-help is just too lazy for this end. Self-help mirrors the guru style of Eastern philosophy. Western philosophy can do better. And so I pledge: let us bring the examined life into the world of high-standard truth-seeking of Western philosophy.
  

Tuesday 28 August 2012

The Induction Myth

Induction has been held in high regard as the spirit of scientific enquiry. Science is in the business of going from the known to the unknown, and indeed, such inferences are what we mean by induction.

We have, for instance, general relativity, a theory that gravitation is a geometric property of spacetime. This is 'unknowable', but as an explanation it fits most strongly with the 'known' data, such as the observed light deflection during the Eddington eclipse expedition. Similarly, biological variation (known) supports evolution (unknown). Hubble-type expansion (known) supports the big bang (unknown). And so on and so on.


On the other hand, however, we have the rather concerning news that induction is unreliable!


Russell famously asked us to imagine a chicken that noticed every day the farmer came and fed him like clockwork. The chicken therefore assumed the farmer must be a benevolent man, and predicted he would continue to bring food every day. According to an inductive line of thinking, the chicken had ‘extrapolated’ observations into a theory, and each feeding time added further justification. Until one day, the farmer came and wrung the chicken’s neck. 

How can it be that induction cannot be justified as being reliable, and yet it reliably gets such great results for us in science? 

This apparent dichotomy is the essence of the 'problem of induction'. However, it is based on an equivocation.


Notice that the kind of induction that Russell (and, incidentally, other critics of induction) is referring to is what can be known as 'more of the same' type inference, e.g. if the sun has always risen, it will continue to rise. It's this expectation that reality is uniform that is in error.


When I referred earlier to certain prized theories in science that go from the known to the unknown, however, they were not 'more of the same' type inferences. They were a different kind of going from the known to the unknown, one where the unknown is the best explanation for the known (this is called inference to the best explanation). In fact, look to any prized scientific theory and I suggest you will, on trend, find this kind of inference.


These two kinds can be called narrow induction ('more of the same' type inferences) and broad induction (inference to the best explanation).
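
To make the contrast concrete, here's a toy sketch in Python (my own illustration, not drawn from any source; the observations and the 'scores' for the explanations are invented placeholders) of the two inference styles applied to Russell's chicken:

```python
# A toy contrast between the two kinds of inference. Entirely illustrative:
# the data and the explanation scores below are made up for the example.

observations = ["fed"] * 100  # the chicken is fed, day after day

# Narrow induction: "more of the same" -- extrapolate the observed pattern.
narrow_prediction = observations[-1]  # expects "fed" again tomorrow
# Nothing in the data licenses this; it simply assumes reality is uniform.

# Broad induction: weigh rival explanations of the *same* data and prefer
# whichever best accounts for everything we know (e.g. what farms are for).
candidate_explanations = {
    "the farmer is benevolent": 0.2,
    "the farmer is fattening the chicken for slaughter": 0.9,
}
best_explanation = max(candidate_explanations, key=candidate_explanations.get)

print(narrow_prediction)  # "fed" -- right up until the neck-wringing
print(best_explanation)   # the explanation, unlike the pattern, predicts it
```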


Through this distinction, then, we can dissolve the dichotomy. Narrow induction bears inconsistent and unreliable results, but it is not needed by science. We can throw it out, and it appears we can do fine with broad induction alone.


Now, I may here appear to abandon the expectation of uniformity too quickly. What about laws of nature such as 'all copper conducts electricity'? Surely here narrow induction has its place? But no. The scientific explanation for why copper conducts electricity is that we have a theory that copper contains free electrons, which is evident in its physical characteristics. This too is broad induction.


I propose that the importance of narrow induction is nothing but a myth, kept alive by nothing more than an equivocation. Let's be done with it and give credit where it's due.


Sunday 26 August 2012

Kant on the good of monogamous marriage

Kant thought that the only way for two people to have sexual relationships without the risk of reducing themselves to objects was if they were in a monogamous marriage.

The risk of becoming an object lies in the use of sex to satisfy an appetite. Basically, you are using a person as a means and not as an end in themselves. But in a monogamous marriage, Kant supposed, the way in which the two people surrender themselves to one another is equal, and thus no one is asked to surrender more, and so no one is victimised.



This strikes me as curious. Surely the greatest respect you can give a person, in regards to treating them as an end and not merely a means to your own purposes, is to honour their autonomy. And yet a monogamous marriage requires great sacrifices of one's autonomy! It is now someone else's business how you conduct your life. Your personal relationships are constrained by the whims of your spouse. You are in a very literal sense 'sharing your life with someone'.


Thursday 5 July 2012

Imagined and True

Consider for a moment computer programming. The program is just made up. There is a sense in which it's not 'real'. You could call it a kind of 'abstract engineering'.

Put it like this: When one engineers a physical machine, they're limited by what is possible given the resources. A software engineer is only really limited by human imagination.


But this difference between what one can create with physical resources and what one can create with imagination isn't as fundamental as it might at first appear.

Consider yourself looking up at the night sky and viewing a star. The physical constraints on your resources, as a biological being, mean that what you see is a small, slightly sparkly dot just above you. But you know that really it's not small, not sparkly, and not peculiar to 'your' sky. Instead it's a gigantic sphere of plasma held together by gravity at an immense distance from you. Just like with the computer program, we have to imagine that this is what the star is, because we don't see it that way.

Now, you could at this point try to draw a distinction between imagining what stars are and imagining a software program into existence. You could say that the star is *really there*, and it's only a limitation of our human situation that we see it as a small dot in the sky. But the computer software is really there, too. If one programs it, it will 'be'. It might not 'be' at the moment, but actually neither is the star. The star used to exist, but doesn't any more.

'Being' at a particular moment is not a necessary quality of something being true. The difference between the star that was and the program that will be is trivial: they are both really there, albeit at different points in time, and we can know about both through explanatory theories of the world.

ONE CAN DISCOVER SOMETHING THAT ONE FIRST HAS TO CREATE.


Wednesday 4 July 2012

Why Do We Need Science?

In The Beginning of Infinity, David Deutsch writes that no one would ever have wondered what stars are if there hadn't been expectations that unsupported things fall, and that light needs fuel which will eventually run out--both of which made stars rather curious.


Why do we need science in the first place? This is the first question we should ask in ascertaining a philosophy of science. The answer is already familiar to us. We need science because we have explanations about how the world works and we know many of them are wrong.

If the explanations of physical phenomena were evident in their appearances, as empiricism suggests, we would know things pretty easily. Science arises precisely because it is hard to explain the world. Or, more accurately, because it is hard for us to explain the world.

Deutsch writes about the scientific revolution being born out of a time where dogmatic, weak explanations gave way to critical search for strong, hard to vary explanations.

What does 'hard/easy to vary' mean?

People once accepted myths and these explanations were such that it was of no real consequence if you varied their details. For example, the ancient Greek myth explaining the weather stated that winter was caused by Hades, long ago, kidnapping the goddess of spring, Persephone. Persephone's mother negotiated a deal where Hades would let her go, but under the agreement that she would eat a magic seed that would compel her to still visit him once a year. This is fundamentally different to the tight explanations that we associate with science. Why a magic seed and not another kind of magic? This is a detail that could be changed. Likewise, it could be that her father made the deal. It could be that she visits once a year to get her revenge on Hades. Or he is obsessed and kidnaps her every year. They could even easily be different gods entirely. 


Scientific explanations are hard to vary.

When philosophy of science became prominent in the early 20th century, it was partly to do with how impressed people were by science. There was an idea that it served some kind of noble function, and served it with unusual reliability. The philosophers were not interested because their passions lay with science particularly, but because they wanted to know if there was anything philosophy could learn about itself. And indeed they were right. Science does serve a noble function. It is not just that we have expectations and the world will occasionally disagree with them by having things like stars not fall to the ground; science is important because we have a history of accepting our expectations and ignoring the problems with them. As Deutsch illuminates, science began when we stopped making it so easy to fool ourselves--when we began a trend of looking for problems and creating hard to vary explanations.

Karl Popper sums up why we need science:

“Science, one might be tempted to say, is nothing but enlightened and responsible common-sense—common-sense broadened by imaginative critical thinking. But it is more. It represents our wish to know, our hope of emancipating ourselves from ignorance of the expert, the narrow-mindedness of the specialist, or the fear of being proven wrong, or of being proved 'inexact', or having failed to prove or justify our case. And it includes the superstitious belief in the authority of science itself”

Most philosophies of science get this wrong time and time again. They treat science as almost obvious. Empiricists describe science as something you can quite easily read off nature. Even more common is to treat scientific theories as easy to verify. This is a fundamental misunderstanding of what science is for: challenging what we think is obvious about reality.

(For more on this theme I recommend chapter one of The Beginning of Infinity, and Realism and the Aim of Science by Karl Popper.)

We Created Civilisation, Not Evolution


I have heard it suggested that what makes us so special as a species is our ability to empathise. That it is only because of this that we get civilisation. After all, how could civilisation work without the ability to imagine how others feel?

Implying that civilisation is a logical consequence of empathy is tantamount to saying we evolved to have civilisation. But the truth is we did not evolve to be part of civilisation at all. It is something we created, and empathy has little to do with how.

The way we evolved was to be part of a much smaller group known as a band: large enough to hunt, small enough to feed. This is typical of mammals. It was this way for us for millions of years before we created civilisation. Civilisation is relatively new, something we created in the last twelve thousand or so years.

It began with the agricultural revolution, also known as the neolithic revolution.

There was a time when we moved from eating grass to eating meat, because eating meat took less work for more energy. This was an evolutionary change. But once the ice age was over, we were freed up to turn our creativity to the work-energy problem ourselves, and here farming began.

There are still tribes around today that live similarly to how we all did when we lived in bands. They live under conditions where a baby being born a girl and not a boy can endanger the whole band's future if she is not killed. Farming meant we could support larger numbers, and so this sort of thing became less of a problem. And as the numbers got larger, trade became possible.

Through the continued invention of new technologies to facilitate trade and farming we could, in turn, support continually more people. As time passed entire villages and towns were born, and by this point we were inventing not only new technologies to help support the numbers, but ideas on how to govern, such as religions, moralities, and law and order.

All of this--how we trade, what technologies we had, what ideas we had on how to govern--continued to improve, and as they did villages became empires. Today we live in a global village of billions. It is nothing evolution could have ever created. Civilisation required the rapid and innovative adaptivity of human creativity alone. And we are not done.

One day, chances are, as our ideas on how to govern and our technologies continue to make progress, we will move out into space and be able to support trillions. This is preventable only if we stop making progress.

It's true that civilisation is a unique thing to people, but it is not the fundamental thing that is special about us. It is one of many things, all having their roots in our unique capacity to imagine--not just what others may be feeling, but anything.

To create. To solve problems.

(For more on this theme I highly recommend Jacob Bronowski's The Ascent of Man and David Deutsch's The Beginning of Infinity.)

Saturday 30 June 2012

Perception is theory-laden

Compare the hot air balloon below when it is in the shade and when it is in the light. You can tell that they are the same colours in both instances even though they actually appear to be different. In the shade the white appears pink, and what is sky blue in the light is royal blue in the shade, etc.

Why do you not assume you are looking at two different hot air balloons very close to one another? Weirdly, you know they're the same thing precisely because you know they're the same colours (despite the fact that they don't actually appear to be the same colours at all).


This is because of an inbuilt process called 'colour constancy' that allows us to identify something as the same colour in a variety of different lighting conditions.

But even though it comes from certain things that are already inside our brain, it is still just one theory that guides our perception. What is most fascinating is that we need not perceive colour in this way! Say you are a painter and you need to perceive colours as they actually appear, so that you can reproduce them: you do this by changing the theory used to perceive colours. Allow me to give an example.

Below is an optical illusion that works by exploiting colour constancy.

These grey squares are actually the same; however, our eyes are not looking to compare the colours themselves. What our eyes do is look for clues to what lighting conditions the colours are under, and this is how they decide what the colours are.

I'll show you what your eyes actually look at to work out the colour, and then I'll show you how to ignore what you normally see, by using the theory of colour perception painters use to paint.

On the lower square, our eyes see the light grey shading on the top and decide that the lower square must be mostly in shadow. Your brain therefore tells you that it's really a lighter grey but looks dark because the light isn't hitting it. So you actually see it as lighter.

On the other hand, the top square appears to be tilted upwards, and so your brain assumes it's in full light. Plus, your eyes read the dark grey around the edges as shading, which is further verification that the top square is otherwise in full light. Your brain then tells you that the top square is actually darker than the other, but appears quite light because it has a lot of light hitting it.

Because your brain sees them as being under different light conditions, it tells you to see them as two different shades of grey, one much darker and one much lighter.

The illusion is of course broken if you cover up the light grey shading in the middle, which is the main reason your eyes think the squares are under different lighting conditions.



But this isn't the only way one can change one's perception of the squares. A painter will be used to understanding how to isolate colours, ignoring the lighting conditions. You can try this, too. If you ignore the light and dark shading and just compare the main colour of both squares, after a moment you should be able to see they are the same colour, without even having to cover the middle up.

Basically, which theory you use to perceive the above actually changes what you perceive.
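
For the computationally minded, here's a minimal sketch of the general idea (my own toy model, not the brain's actual mechanism): if an observed colour is modelled as a surface colour multiplied by the light falling on it, then which 'theory of the light' you divide out determines which surface colour you infer.

```python
# Toy model of colour constancy (illustrative only, not the brain's algorithm).
# Observed colour = surface reflectance * illuminant, channel by channel, so
# inferring the surface means dividing out whichever illuminant your
# "theory of the lighting" says is present.

def inferred_surface(observed_rgb, assumed_illuminant_rgb):
    """Discount an assumed illuminant from an observed RGB colour."""
    return tuple(round(o / i, 2) for o, i in zip(observed_rgb, assumed_illuminant_rgb))

observed = (0.4, 0.4, 0.4)  # the same mid-grey patch, observed twice

# Theory 1: "the patch is in shadow" (dim light) -> inferred as a light grey
print(inferred_surface(observed, (0.5, 0.5, 0.5)))  # (0.8, 0.8, 0.8)

# Theory 2: "the patch is in full light" -> inferred as a darker grey
print(inferred_surface(observed, (1.0, 1.0, 1.0)))  # (0.4, 0.4, 0.4)

# Same observation; a different theory of the light yields a different
# perceived surface colour -- perception is theory-laden.
```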


Wednesday 20 June 2012

The first person a philosophy student should criticise is themselves

"I fear that a time and place where readers habitually try to refute their own interpretations and expectations of what they are reading are only in writers dreams." - Karl Popper

Intro to Critical Rationalism



There are many theories of epistemology that hold the truth to be objective, but Critical Rationalism is very different from the others.

Although it is epistemologically optimistic, in that it asserts that the truth exists and is accessible to us, it is also ultimately sceptical, in that it asserts that we can never know that we have the truth.

These days it's fashionable to pose as a fallibilist, so perhaps the above does not sound so shocking. Many epistemologically optimistic positions tag on the disclaimer that the truth is never certain. But there is not much behind most of these disclaimers. Claims that the truth is never certain usually boil down to appeals to likelihood. That is to say, the kind of statements that go 'X isn't certainly true, but it is likely true.'

However, the claim that 'you believe X is true' and the claim that 'you believe Y is likely true' are not of fundamentally different kinds. Both appeal to certainty (in one case the certainty of your belief that X is true, and in the other the certainty of your belief that Y is likely true). In summary: appeals to a theory being likely true do not take fallibility seriously.

In fact, Critical Rationalism is shocking because it is the only optimistic epistemology that takes the arguments of scepticism seriously. It does not just add fallibility on as a disclaimer that has no real bearing. Critical Rationalism never purports to know even whether a given theory is true.

Here's how it works:

Critical Rationalism puts forward the idea that knowledge can improve if we expose our theories to rigorous criticism in the hope of eliminating the false ones. Karl Popper, the originator of CR, argued that there is an asymmetry between what he called 'positive reasons' and 'negative reasons' (negative reasons being reasons against a theory, such as internal contradictions, contradictions with other theories, falsification via empirical testing (for scientific theories), claims against its explanatory power, and so on).

This asymmetry leaves positive reasons relatively worthless--positive reasons can never really go towards verifying a theory. Negative reasons, on the other hand, are relatively accessible to us and can go towards falsifying a theory.

So though we can't know whether a given theory is true, we can work out whether it's better than a rival theory by showing that it survives attempted falsification while its rival does not.

You could look at it as the idea that while we can never know we are getting closer to the truth, we can get further from error, and in so doing, we can improve the state of our knowledge.
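
As a toy illustration of the asymmetry (my own sketch, with invented example theories; nothing here is from Popper's texts): a single counterexample eliminates a theory, while any number of confirming observations leaves it merely 'not yet falsified'.

```python
# Toy illustration of the asymmetry between verification and falsification.
# The theories and observations are invented examples.

theories = {
    "all swans are white": lambda colour: colour == "white",
    "all swans are white or black": lambda colour: colour in ("white", "black"),
}

observed_swans = ["white"] * 99 + ["black"]  # 99 confirmations, 1 counterexample

for name, predicts in theories.items():
    refuted = any(not predicts(swan) for swan in observed_swans)
    # A surviving theory is only "not yet falsified" -- never proven true.
    print(f"{name}: {'refuted' if refuted else 'survives (so far)'}")
```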

Underestimation of Values

There's a myth that in our last moments we reveal who we really are. That there's a 'way we are' hidden beneath our chosen, explicit values that is more telling of our true nature. For instance, if one were to torture a man believed to have a lot of courage in his daily life, one might discover he was a coward when he was found begging for his life.

There's another myth that in life or death situations our survival instincts kick in and overpower whatever values we have. For instance if one was on an abandoned island with a loved one, perhaps that love wouldn't count for much when that person got really hungry.

Both myths undermine the role of our explicit, chosen values. 

The first myth first. The idea that we reveal our true self in a life or death situation is nonsense. In life or death situations, if we act unpredictably, it's because we're panicked. People who are that panicked make rushed decisions. These decisions come out almost at random, in that they could easily have gone in a different direction. Indeed, people often display somewhat inconsistent behaviour in moments of pure panic.

As for the second myth, the idea of 'survival instincts' similarly turns out to be somewhat misleading.



It's a device particularly used in fiction to 'raise the stakes'. There's nothing scarier than the idea of being in a dangerous situation with many other people, many of whom will do whatever they can to survive. When watching Titanic, for instance, it is horrific to think of that many people all fighting for so few seats on the lifeboats. Who among us does not imagine being in that situation and fighting for our lives? But we shouldn't underestimate the possibility that we chose that survival value, rather than it being part of some inner animal that will take over our human consciousness if things get tough enough.

A good way of illustrating this point is by looking at what *really* happened on the Titanic. On the real Titanic, proportionally more Americans survived than Britons. Why? Because the British queued for the lifeboats, and the Americans pushed in. I know, it'd almost be funny if it weren't such a tragic event. But the bottom line is that, even in such dire circumstances, their cultural values defined them.

In life or death situations we do not escape who we are in terms of the values we have chosen for ourselves. If we do, it is only because we are too panicked to think clearly, and we act in an almost random way.

There is no truer self than the values we choose to live by day-by-day.

Saturday 16 June 2012

Kant Quote on Enlightenment

"Enlightenment is man's emergence from his self-incurred immaturity. Immaturity is the inability to use one's own understanding without the guidance of another." - Kant