Tuesday 19 February 2013

Understanding Procrastination

There's this thing where, if you're conflicted between two wants, your brain will pick a third, unrelated thing to do instead. It's a trick most of us have developed to avoid the potential stress involved in our internal conflicts. We do it with all sorts of stuff, but when we do it with work, we call it 'procrastination'.

This is why this bizarre thing happens when you procrastinate. You don't work on the project that you should be working on, nor on the project you'd be working on if you were free to do whatever you wanted. Instead, you end up doing the kind of stuff that ranks somewhere like fourth or fifth on your preference list. This is the time when you watch an entire box-set of a television series you like, or you alphabetise your DVDs. Stuff you enjoy, but wouldn't usually do.

But in understanding this, you can actually do something about it. 

First I should say, of course, there are other things 'procrastination' can mean. Sometimes a person genuinely prefers going on Facebook, or watching the DVD box-set, and it's other people, not the individual, who have decided that they're not doing what they should be. But I'm not talking about that kind of thing. The solution there is too easy: ignore them.

The kind I'm talking about is the most peculiar, because the people concerned actually love the things involved. On the one hand you have X, a project you love and do regularly. On the other hand you have Y. Y is the kind of thing you would enjoy doing, maybe even quite similar to X, but you don't yet feel motivated to do it for its own sake. Unfortunately, you have to do it, because there's a deadline looming or it's important for some other external reason. You love X, and you will love Y, so you'd think picking one of them would be easy. But it's not: you pick Z instead, something you kinda like but never normally pick. Why? Because of self-coercion.

We so readily accept guilt and worry as legitimate ways of motivating ourselves. I personally think this is a hand-me-down from all the times (particularly growing up) other people used guilt and the like to coerce us. But the consequence is that we don't do X, as we normally would, because doing X would remind us that we're not doing Y. And of course we don't do Y, because guilt isn't actually a very persuasive reason to find Y enjoyable. So we pick Z, specifically because it's different enough from X and Y that we can forget all about them.

The solution, as bizarre as it might sound, is not to try to force yourself to do Y, especially by means of guilt or shame or anything else like that. Just embrace that you really want to do X, and be happy about that.


Now, at this point, you might say: but what about the looming deadline?! Well, doing X instead IS the solution for how to start doing Y.

It's true that you might actually be better off doing Y, but you're not going to be motivated to do Y unless you allow yourself time to be positively inspired to do it. Say X is writing a play, and Y is writing your university review of someone else's play. In doing X, you give yourself the chance to be inspired to do Y, whether instead, first, or afterwards, because X will be similar enough to Y to allow for this. (Remember, it follows logically that they're similar enough, from the fact that you had to pick Z, not X, in order not to think about Y.)

Once guilt is out of the picture, this positive inspiration can take hold quite quickly. I've known it to take mere minutes. On the other hand, this doesn't happen when you pick Z (say, Facebook), because Z was picked specifically because it lets you forget all about X and Y, so there'll be little or nothing to remind you of why Y is good/fun/interesting.

So, in summary: if you want to get rid of this kind of procrastination, 1) let go of guilt and be OK with doing what you really want, and 2) do what you actually want and let that be the inspiration for doing other stuff, too.

The Validation Theory of Epistemology P.3

Coming soon...

Monday 18 February 2013

The Validation Theory of Epistemology P.2

First, we should clarify what it means to validate a theory as a candidate for true belief.


One of the most popular theories of justification in epistemology at the moment is the idea that a theory is justified when it is shown to be 'likely true'. This position is an attempt to get around the traps that fallibilism presents for the vindication approach, without having to give up on its merits. Basically, if you can show that something is likely true, this makes it an appealing candidate for true belief, in the same sort of way that showing a theory is definitely true would.

It doesn't really work, though. To say something is likely true is still to make an appeal to certainty. Now, instead of saying that you are certain that p is true, you are saying that you are certain that p is likely true. The difference is not fundamental.

A second, less popular approach is to say that to justify a theory is to show it is a 'possible truth'. This gets around the problems of making an appeal to certainty but, again, it doesn't really work. It's such a weak requirement to say of a theory that 'it could be true' that you could say it of anything, and the justification therefore becomes meaningless.

Popper made one of the only serious attempts to make the 'possible truth' route work. He argues that a theory is justified as a candidate for true belief if it is a possible truth in the sense that it has not yet been shown to be false. His 'reasons to believe something' are negative, then, not positive. It's not that the theory has some particular strength we can point to as good reason to believe it; it's that it's the only surviving theory left.

But the problem with the Popperian approach is that it assumes too much of falsification. Popper did admit that a falsification could itself be wrong, but he failed to recognise the impact this has on his theory. It is only our ability to eliminate some 'possible truths' that makes preferring one possible truth over another meaningful. If we can't actually eliminate any theories, the Popperian project falls down.

But validation theories needn't be so strong as to make claims to likely truth, nor so weak as to make claims to mere possible truth. There is a middle ground, one that is often overlooked.

Continues here
For part 1 click here

The Validation Theory of Epistemology P.1

Traditional epistemology is based on something called the 'Cartesian Foundationalist Program' (CFP). CFP assumes that natural knowledge (knowledge of the sciences) must be based somehow on sense experience, meaning both that knowledge reduces to observation and that it can be deduced from observation.

However, neither project works. In the case of reduction, as Quine argues, even the most modest generalisation about observable traits will cover more cases than its utterer can have had occasion to actually observe. Put simply, observation never offers enough content to account perfectly for the generalisations we make from it. Similarly, Hume persuaded us all long ago that one can't deduce theory from observation. A scientific theory is not guaranteed by its observational premises in the way deduction would require.

These problems with CFP are well known and widely accepted, and yet epistemology has not been willing to let go of it. To understand why, we need to understand what was so attractive about CFP in the first place.

In a nutshell, CFP has been preferred because it offers a clear definition of what a justification is, namely a 'certain truth'. It's harder to make the case that we are justified in believing a 'likely truth' or a 'possible truth', but a person who believed a certain truth would clearly be justified in doing so. Unfortunately, the promise of 'certain truth' is unfulfillable. CFP believed it was possible due to the two mistakes above, but also because of a much larger, often ignored, mistaken belief that sensory data is more credible than it actually is, and the often missed point that observation is theory-laden.


Epistemology has therefore been forced to find a new standard of justification. It has been suggested that rather than attempting to 'vindicate' theories, we should merely attempt to 'validate' them. That is to say, we should offer some reason why they are good candidates for true belief, independently of their necessarily being true. And the validation approach is widely accepted: no one around these days in philosophy seriously believes in vindicating theories. But paradoxically, philosophers still find themselves attracted to CFP, and time and time again effort goes into making CFP work.

The problem is that, although we're left with validation as our only option, this approach has not been established as a coherent, tight philosophy in the way the 'vindication theory of epistemology' had been. That is, it's obvious how, if it were possible, a vindicated theory would act as a candidate for true belief; it is not obvious how a validated one would. I conjecture that this is why CFP, though understood to be false, remains so prominent in epistemology, and I believe it will remain so until the validation theory is fully thought through.


For more on the problems of the validation theory of epistemology, see P.2
For my proposed solution see P.3.

Tuesday 12 February 2013

Happiness as an end in itself

The Utilitarian position that 'happiness is the sole basis of morality and that people never desire anything but happiness' can be rather jarring. Narrowly understood, 'happiness' is a kind of emotional pleasure. Yet some of our finest moments might lack this. Take, for example, jumping into a pool to save a child, or the endurance of many, many months of work on a worthy project. Now, it is true that these things can bring us a narrow happiness. After saving the child, we may feel this in the form of pride. But we may not; we may feel only anger at the useless lifeguard, or terror at the frightening situation now behind us. Either way it doesn't matter, for we entered the situation not for the feelings we would experience afterwards, but for the child.

Similarly, we may feel many moments of pride while working on the project. But for the most part we will feel not very much at all, as one does when 'in the zone' for several hours a day. If feelings of pleasure were paramount, one could spend that time on pleasure-seeking instead.

On the other hand, it would be a little silly to assume happiness plays no role. There is a sense in which we jump in the pool to save the child because we find the state of things where the child lives a happier one than the one where the child dies. And there is a sense in which we enter into this long and tiresome work project at the expense of many pleasures we could instead pursue, because the worthiness of the project makes us feel more fulfilled than anything else could.

Our problem is that, once we broaden the definition of 'happiness' to include all these different types of thing (pride, fulfilment, abstract happiness, physical happiness, and anything else that may be included), we face the dilemma that happinesses conflict, and it is not obvious which we should maximise and which we should leave behind. Mill's solution to this problem was a rather unsuccessful appeal to human nature. He divided happiness up into 'higher pleasures' and 'lower pleasures'. It was essentially a kind of Victorian snobbery, which mistook the trends of his time and class for human nature. Indeed, his mistake was in thinking that 'human nature' could be any help to us at all in solving the problem.

But if we put aside human nature, what are we left with? Nurture. We are left with the ideas and values of an individual, to which our happiness (of any kind) is subservient.

Happiness is an end in itself, then, not because happiness is intrinsically good (in fact, happiness isn't intrinsically anything, it seems), but because some form of happiness or another is the end expression of actions that realise our values. Likewise, some form of unhappiness or another is the end expression of actions that don't. What determines the hierarchy, then, is not kinds or quantities of happiness, but what we value most highly.

Saturday 9 February 2013

Ayn Rand, the Academic Philosopher

Rand wasn't an academic. She didn't write like an academic. She wasn't exceedingly well read or studious in academic philosophy. And the tendency is for academic philosophers not to take her seriously, if they've heard of her at all.

But underneath the style she chose and the approach she took, how well would her ideas translate into academia? Or, to put it a more interesting way: can Rand be taken seriously as a *philosopher*, and not just a guru?



Philosophical problems Rand offers solutions to:

The meta-ethical problem 'why be moral?': what if you don't want to be moral? Doesn't the whole thing kind of fall apart if you don't care about doing the right thing?

Rand's solution is that she never asks you to care. But, she argues, it would never be in your interest not to. Rand's morality is utterly self-interested, and is more concerned with freeing you from doing the conventionally 'moral' thing (e.g. being altruistic) out of cultural standards or pressures than with commanding you to act in any way you might not see the point of.


The meta-ethical problem of objectivity: objective morality is hard to defend meta-ethically. Values that are 'mind-dependent' are subjective; they are governed by preferences. For a value to be objective, it must be good 'mind-independently'. But how could something be valuable if it's not valuable to a subject? What other authority could determine a value's status as valuable?

Obviously 'objective morality' is important to Rand, enough that she named her theory 'Objectivism', but actually she never insists on objective values. What she insists on instead is that our chosen values be thought through rationally. So, for example, a heroin addict isn't acting wrongly because they value having a high; if that is what they choose, they may. They are acting wrongly because they have picked a destructive way of attaining the high, one that causes them distress and other things *they* don't enjoy. With rational thought, they could realise a better way to get their high.

Rand manages not to be a subjectivist without having to awkwardly discover a way in which this mysterious thing called 'mind-independent value' exists.

The ethical problem of altruism: If we're motivated to do what's good for us, why care about what's good for others?

What makes Rand's account of altruism so interesting is that it's negative. For Rand, 'altruism' is her word for 'self-sacrifice for the sake of others', that is to say, putting someone else's values before your own. To do so is, for Rand, wrong. But she acknowledges that sometimes your own values may contain within them the consequence of doing something nice for another.

Forget whether or not this is true; it's a very interesting answer because it explains the problem away: all morally permissible acts that are altruistic are also self-interested. Others have attempted this answer, but they always end up with morality being nothing more than prudence. Because of Rand's interesting combination of values and selfishness, though, she doesn't fall into this trap.


Verdict

The above three problems are each the subject of major debate in contemporary ethics, and Rand actually offers interesting and worthy solutions to them. She is not ignored by academic philosophy for lack of content, then, but for lack of being understood. This may be her fault for being unclear; it may be the fault of her advocates for not being good enough at philosophy to represent her; or it may be the fault of those academic philosophers who have heard of her, for dismissing her based on character.

The Struggle of a Grown-up Autodidact


I had always believed that people enjoy being autodidacts, that is to say 'self-taught students'. I had been one past the age of 12, as had most of my friends, and as had hundreds of young people I met at home education clubs and festivals. As I grew up, I would meet still more people who had been to school, but who had had it in them to learn what they knew, and what they were brilliant at, by themselves.


What being an autodidact, particularly from a young age, provides is the freedom to pursue and nurture your own interests in a way that is more efficient and pleasant, for you are in charge of the learning style and the whens, wheres, and hows. Of course, people who are used to it being someone else's job to take care of what they learn always get confused at this point. You'd have to be naturally (unusually) ambitious not to just waste your time and do nothing, they say. What they miss is that, if you look back at my description, you will notice I said the words 'your own interests'.

There is much more to say in defence of autodidacticism; this has barely scratched the surface of why it is an important way of conducting yourself as an individual. But that is not why I am writing today. I'm writing because it occurs to me that there is a way in which people don't enjoy being autodidacts, and it is one of the most important ways they should: most people, even those given the chance, don't enjoy thinking for themselves once they're grown up.



What exactly is the difference between you being in charge of your learning and someone else being in charge? It's not the use of teachers, or books, or documentaries, or labs, or computers, or indeed any learning resource. It is a question of who is managing the decisions of what, when, where, and how to learn. Implicit in this is what autodidacticism is at its core: thinking for yourself.

The difference is really the difference between man A and man B. Man A reads a book because advancing his learning through the content promised in that book meets a particular need of his. He hears about the book from others, and engages with their recommendation critically, to gauge whether it is really the book for him. When he reads it, he is just as critical with the author's words. He adapts the parts he likes, he fixes the parts he finds broken, and he takes nothing it says on faith. Man A is a man whose thoughts are so much his own that they don't match perfectly with anyone else's, not even his greatest influences'.

Man B is told he should learn X, and takes it on faith. He is told he should read book Y to do so, and takes it on faith. And he assumes the book true, on faith. His thoughts are not his own; they are other people's, slipped inside his brain.



The sad truth is that, as fun as it is to be an autodidact when you're a child and parents take care of your food, housing, and so on, the older you get, the less fun it is to think for yourself. Wouldn't it be so much easier to toe the line for your career? Or to say what your friends say, so that you have some? Or to just let someone tell you how to live, because it's such a scary thing to have to work out at the very time you need to already know the answer?

This attitude is a myth: it's always better to think for yourself, and doing so will always give you more control over what happens in your life. There are problems involved in doing so, but they are all soluble in their own ways. Yet it is a myth many grown-up autodidacts must buy into, for it happens time and time again that a grown-up autodidact will try to find people to think for them, or will let the crowd around them do their thinking.

I am currently finishing up university. Some people I know who support autodidacticism don't agree with university; they mistake it for school. It's not: university is about as coercive over how one should learn as having a library card is, or as using a shop to buy your food is over what you eat. I have been here because I want to think for myself about the educational resources university provides, and I am pleased with how I have done so. But it worries me that some autodidacts go there and give up on being autodidacts. Equally, the autodidacts who haven't gone to university worry me, for I know a good few of them who seem simply to have found a group, intellectual or cultural, to tell them how to live (and therefore what to think). (This is in fact much worse than the university problem, because a university has more self-criticism in it than a group of people tends to.)

But what worries me the most is that it doesn't seem to be said enough that being an autodidact (someone who thinks for themselves) gets harder now we're grown up, that it isn't quite as simple as it was when we were children. And admitting it, being conscious of it, is an important part of considering it, and of considering critically how one wants to proceed.