Saturday 7 December 2013

The poverty of 'altruism' and 'selfishness'


In ethics we talk of the difference between 'selfishness' and 'altruism', and although it is frequently acknowledged that these terms are elusive, we remain largely dependent on them for moral discussion.


On the face of it, a straightforward reading of the dictionary sets the case out plainly.  


self·ish adjective \ˈsel-fish\: having or showing concern only for yourself and not for the needs or feelings of other people  

al·tru·ism  noun \ˈal-trü-ˌi-zəm\ : feelings and behaviour that show a desire to help other people and a lack of selfishness

However, there's a sense in which these definitions can only be read plainly if we were discussing, say, animal subjects--subjects with a clearly defined 'self' that a given activity either benefits or does not. Animals have this because they are strictly programmed by evolution. They have a 'self' (using the term loosely) amounting to a biological entity looking to survive and replicate. This sets out clear boundaries in which activities such as eating when hungry or finding shelter are self-preserving, while activities such as grooming another or helping another eat when they are hungry are other-orientated.

Humans are a different kind of creature: we are programmed by evolution only in a weak sense, and in a much stricter sense we are programmed by values. We have selves that are mental, which exist in self-selected ideas, and as such are malleable. We can learn, we can develop preferences, and we can change our minds. What is at one point unselfish can become selfish merely through a shift in our ideas. And so there's a sense in which we can never become less selfish in the pursuit of altruism. If a person is motivated by 'wanting to win the football match', this notion may entail 'wanting to motivate the team', 'wanting to play your best', or 'wanting to give the crowd a good time'. Each action can fairly be interpreted as both selfish and altruistic. Although each can be seen in a sense as generous, they are not idle or purposeless giving; they serve some desire of the self. This veil of selfishness over our actions extends to most human intentions, even charity.

The lines between altruism and selfishness are blurred by our existence through the mental, in which 'us' and 'the outside world' can be intimately intertwined by values. In the extreme, selfishness can manifest itself in the form of dying for a cause or a loved one.

Under an Objectivist perspective, this renders talk of altruism redundant, with altruism standing out only in instances in which coercive pressures precede generosity. But if this is so, it is not evident why, by the same token, talk of 'selfishness' shouldn't become obsolete too. If we find ourselves in a situation where one can say 'I changed my mind drastically from being selfish to being selfish', the term is hardly descriptive.  

But of course, all of this misses the point. Practically speaking, when people speak of 'selfishness' being good or bad, or 'altruism' being good or bad, they're really using them as umbrella terms for some handy rules of thumb to help guide us in selecting and reviewing our values.

'Selfishness is good' entails ideas like:

it's all right to enjoy yourself
your property is your own
great innovation follows doing your own thing

'Selfishness is bad' entails ideas like:

consider long-term consequences
the world is better when you don't hinder others

'Altruism is good' entails ideas like:

human life is valuable
generosity enriches the soul

'Altruism is bad' entails ideas like:

your life is as worthy as another's
you needn't become a willing slave to others


But the terms offer nothing more comprehensive, and certainly can't profit us in a one-versus-the-other framework.

To decide between values we require much more sophisticated ideas to guide us.

Friday 22 November 2013

Why you should make improbable claims.


There is a sense in which the title of this entry may seem misleading. If 'improbable' is read in a certain way, the assumption will be that my intention is to encourage theories that are bizarre relative to what would be a more practical conclusion given our best (by which I mean most successfully tested) theories. A concrete example of a 'bizarre' or improbable theory of this sort would be something like this:

Light speed is relative to human imagination. Much like pixie dust in Neverland, the human subject has to believe and then it will behave how they want it to. 
This theory is clearly 'bizarre' or improbable. It makes claims about light, and about the power of the human mind to directly interfere with the laws of nature, which are contrary to our best knowledge in science. More precisely, there's an arbitrariness or variability to it. Why does the ritual require 'believing' as a control mechanism and not, say, a song, or a dance? Or why indeed this theory at all and not another theory of light, say that humans can't control light speed because it's the work of the gods? Not working within the confines of epistemology opens the door to theories as fiction or nonsense.

So I'd clearly have my work cut out for me if this was at all the reading I intended. Which it was, but only so I could turn it all around and show that there is another reading, which makes a lot of sense.

'Probable' has two meanings in ordinary language, at least two proper meanings (it may also be misused in certain ways).

(i) The first is the one we have seen above, a soft meaning. We may find this kind of 'probable' in sentences like 'I'm probably going to the party Saturday' or 'probably she forgot'. What it means in this usage is that we are not sure either way, but theory A (going to the party, or 'she' forgetting) has tested better than theory B (not going to the party, 'she' not forgetting). So going back to the world of science, we might describe Einstein's theory as much more probable than Newton's.

(ii) The second meaning is stricter. Here 'probable' refers to the probability calculus.

A common mistake is to assume that (ii) shows (i): that is, that if the calculated probability is high, then the theory is our best/most tested theory, and vice versa.

Consider the prevalence of this view. We see it in our everyday lives when we flick (or more accurately scroll) through a popular magazine and read about topics claiming things like 'studies have found blondes really do have more fun', where surveys and statistics are used to establish generalisations.

We also see it in the supposed solution to the problem of induction. Induction attempts to show a proposition true on the strength of a pattern of past occurrences. The problem is, the future doesn't always resemble the past. By making the mistake, however, that (ii) implies (i), philosophers have argued that we should believe propositions reached through inductive logic not because they're guaranteed, but because they're probable. So, for example, if the proposition P was that the sun will rise tomorrow, and our induction was that it has always risen previously, it is thought that we can accept P because it scores highly on the probability calculus.

So what's the problem with equating (i) and (ii)?

The problem is that the rule 'obtain high probabilities!' of (ii) puts a premium on ad hoc hypotheses. Put simply, (ii) is most likely to yield high results the less information it actually has to work with, which, far from leading to (i), is actually at odds with it, as (i) has it that our best theory is the one that has been successfully tested the *most*.

You can make 'the sun will rise tomorrow' less and less probable according to (ii), just by adding more information: that the sun will burn out, that weather conditions can hide sunrises, or the plenitude of disasters that can occur, such as supernovas and meteors. In doing this, the probability under (ii) would go down, even though we could speak more expertly in the sense of (i).
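To make the arithmetic concrete, here is a minimal sketch in Python, using made-up, purely illustrative probabilities (and independence assumed for simplicity). It shows how conjoining extra claims can only keep the calculated probability the same or drag it down, even as the theory becomes more informative and therefore more testable in sense (i).

```python
# A minimal sketch of the point above, with hypothetical, made-up numbers:
# under the probability calculus, conjoining extra claims can only keep the
# joint probability the same or lower it, so 'obtain high probabilities!'
# rewards theories that say less.

# Hypothetical probabilities for individual considerations, assumed independent:
claims = [
    ("the sun rises tomorrow as it always has", 0.999),
    ("the sun has not burned out overnight", 0.9999),
    ("no supernova or meteor strike intervenes", 0.9999),
    ("weather conditions do not hide the sunrise", 0.7),
]

joint = 1.0
for description, p in claims:
    joint *= p  # P(A and B) = P(A) * P(B) when A and B are independent
    print(f"...and {description}: joint probability is now {joint:.4f}")

# The more the theory spells out, the lower this number gets -- yet the added
# detail is exactly what makes the theory more severely testable in sense (i).
```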





 

Sunday 2 June 2013

How to live well

While some people just drift through life, others are driven towards certain belief systems and attitudes aimed at taking control of how good or bad their life turns out to be. This is a very good and important thing to do in principle for the obvious reason that it's empowering. The alternative would be to hand over your future happiness to lady luck.

The trap one can fall into, however, is having too rigid an idea of what one must do to live well.

It happens all too often that a person gets so convinced of a particular approach that they completely miss that they're miserable and not getting any closer (or as close as they could have gotten) to where they'd like to be. Some people identify the wrong thing they want and live towards that goal. Such people may suffer from the pains of forcing themselves towards a goal, and from feelings of disappointment and loss when they get there and realise they're still not happy. Other people try to live the best life by having generally the 'right idea' about things like career, love, friendship and so on. Such people can suffer because their ideas are so set for them that they miss out on good things that aren't covered by their philosophies, or because it can be hard to expose these more detailed philosophies to rigorous criticism, because you have to make time to do so.

The weird thing is, if you look at people who get life right, often they're more like drifters than planners or deciders. It seems to me that the difference between an aimless drifter and a purposeful drifter is exactly the life lesson that these planners and deciders miss out on. And it's a matter of methodology.


Purposeful drifters have error-correcting criteria for the things they do in their lives, while aimless drifters tend to be more inductivist: they achieve happiness by repeating the things that have worked for them in the past, but perhaps at the cost of re-living a lot of the same problems again and again.

Some examples of purposeful drifting might be to have a criterion such as 'does doing what I'm doing make me feel happier or not?' This works by having a theory, i.e. happiness, that identifies whether or not you're making progress towards good living. It works well because it allows for a standard by which to criticise what you're doing, but involves very little in the way of presumptions (the problem had by the 'planners and deciders'). It also focuses narrowly on what information you actually have, i.e. your feelings about a thing right now, and doesn't delve into information you don't have, such as what you'll want in the future.

Or, for another example, one can have a rule of thumb of shaking things up by doing something every time life gets comfortable. If one felt like one's job wasn't challenging any more, but wasn't sure what else to do, quitting forces one to try out new conjectures and gather new information. All of this, again, is good for error correction. This approach in particular works a lot like natural selection, since you add selection pressures to your current ideas by making things less comfortable for yourself.

'Planners and deciders' have the right idea in trying to use knowledge creation to make their lives better. That is to say, they identify possible problems and come up with possible solutions. But their problem seems to be that their approach doesn't lead to much new knowledge being created--it is set up to be biased towards their first theory, whether that be their first theory on how to approach career/love-life/whatever or their first idea on what their goals are. Similarly, the 'aimless drifter' creates no improved knowledge on how to live because they 'let what is, be'. But between the two appears to be a golden mean: drifting, but with error-correcting/knowledge-creating mechanisms in place to improve one's situation and engage with better and better things in life. This seems to be the approach that is needed if one wants to live well.

Monday 20 May 2013

Selfishness

In a cringy 1959 interview, Mike Wallace asked Ayn Rand this question:

"Should husbands and wives tally up at the end of the day and say, well wait a minuet, I love her if she's done enough for me today, and she loves me if I've properly performed my functions?"

Almost any interview with Rand involves this question in some form. The question rests on a confusion that the only kind of selfishness is a utilitarian kind: that selfish people are only ever trying to maximize their own utility--that nothing is valuable to the selfish person for its own sake. Now, of course it's an ignorant question to ask of Rand. Rand is quite clearly a virtue ethicist, and quick to praise beauty, rationality, love, human life, and so on as having some sort of intrinsic value. But the mistake doesn't happen because people read Rand and get confused; it happens precisely because the people that ask it don't read Rand, hear 'the virtue of selfishness', and jump to conclusions about what selfishness entails.

Selfishness in everyday English has two distinct meanings.

To act with disregard for others
OR
To act with regard for yourself
On the surface of it, it might appear as if these were two sides of the same coin, but in action they lead to two entirely different ways of life. To act with disregard for others is to preclude yourself from any concern for other people, and because acting with concern for other people can be both profitable and rewarding for its own sake, this would consequently lead to less value for the self overall. On the other hand, to act with regard for yourself only entails that your actions are in line with your intrinsic and extrinsic values, and so will include plenty of concern for other people where appropriate, particularly in the example of loving someone.





Practical Love

There is one good reason to assume that there's a mistake made in how we commonly think of romantic love, even before any arguments are discussed: it's not very successful at delivering what it promises. The promises behind romantic love vary--happiness, fulfilment, to 'feel complete', to make the world a brighter place, and so on and so on--and these things are occasionally achieved, but mostly it just seems to frustrate people, make them act rather strange, and lead to a lot of upset and feelings of unworthiness. When there is an incongruity like this between the intention and the actuality, something has gone wrong somewhere. The question is, what and where?
"It’s doubly unnatural when the system is based not on rational ideas like good character, similar outlooks and whether or not she owns the next door farm, but on a fluttering change in the brain chemistry that some neuroscientists would say is a form of temporary insanity."
-Ed West on romantic love. See more here 

Mr West thinks the problem is that people base their decision of who to love on chemicals rather than practicals. It's a common criticism of love--that it's just silly chemicals--but unfortunately it doesn't hold. The chemical reaction responds to ideas the person already holds about who to love. It's not just a random firing of dopamine and oxytocin triggered by no cognitive process at all; otherwise we'd have fewer women falling for suave, charismatic men, and more people in relationships with toasters, or homeless people, or whatever else would have better odds of being present during a random chemical spray. Indeed, ideas are also needed to interpret the chemicals as love and not something else, as we do feel these chemicals in other contexts, such as roller-coaster rides and so on.

If people don't do love practically, that isn't the fault of chemicals, then, but the fault of some bad idea people have about who to love, how to love, and why to love.  And... maybe most importantly, when to love.

One such mistake might be elucidated through looking at the meaning of the word 'practical'.
Of, relating to, governed by, or acquired through practice or action, rather than theory, speculation, or ideals
One thing that seems true of the bad ideas about love is that they're rather presumptuous. Person A meets person B and, based on the idea that they perceive them to have certain lovable (to them) character traits, falls in love, but without actually checking if they *really* have those character traits or if it was just a good first impression. Or person A meets person B and, seeing how well they get along--they have so much chemistry!--decides this probably means they'd get along really well if they spent every other day together for the foreseeable future. Or person A has really liked person B for a year, so probably they'd like them just as much till death do they part.

This problem goes even deeper, however, in that it ignores not just misunderstandings you might have about the external world (them), but also ones about the internal (you).

There's a lot of deciding 'how something should be' before one really has experience of it in romantic love, and by definition, this isn't very 'practical'. People should be more easy-going about it. More open to surprises. More open to turning out to be wrong. And less often should love be based on guesswork about what sort of person can make you happy, over and above the reality of what makes you happy... in practice.

Wednesday 15 May 2013

Tony Robbins: The Bad Guy

Season 4 Buffy arch-enemy Adam isn't the only television bad guy I can think of inspired by personal development guru Tony Robbins. Dexter's Jordan Chase also comes to mind. But while in other cases the similarity is quite superficial, in the case of Adam he is, as the character Spike points out, "... exactly like Tony Robbins." (Yoko Factor, s04e20)


If you don't know who Tony Robbins is, this means you're probably not an American, or you've probably never looked at the self-help section in a book shop. But in brief summary, Tony Robbins is the highest paid and most successful personal development guru in the world. His whole thing is that he can help you re-engineer yourself for 'ultimate success' by changing how your mind works (by 're-programming your neural pathways'). In particular, he helps you by teaching you how to do this thing called 'modelling', where you copy the exact heuristics of a 'highly successful' person as they succeed at the thing you want to succeed at. Which is... pretty much what Adam does.

Adam is a Frankenstein-style big bad on Btvs, who goes around cutting people and demons up to understand how they work, so that he can eventually execute his plan of piecing together people/demons to make super soldiers free of weakness, just like him. Tools of his trade: he has a knack for being able to motivate people (monsters) towards his mission. Like many problems Btvs aims to explore, this is a literal interpretation of Tony Robbins's mission in life, put in a monster-fighting context.

The question is: why does Tony Robbins make such a good villain? On the surface, you might think that a guy who devotes himself to helping others get the 'life they deserve' is no one to associate with evil.


Well, the reason Adam made a good villain for Buffy season 4 is that the season was devoted to the idea of being young and independent for the first time in your life, and having to learn how to deal with the 'greys' and uncertainties in life (pure existentialist-style) and *not* rely on having all the answers. Buffy therefore fought villains that wanted certainty to fight their weaknesses, so as to contrast with her own journey and that of her friends. She fought soldiers who lived by all the moral certainties that soldiers live by, and she fought a Tony Robbins monster set on removing weakness.

In one form or another the 'Tony Robbins as a bad guy' idea usually comes down to a suspicion of how he's conning people. He promises so much... if you just hand over your money. But the above is an interesting variant, where the con isn't a promise he won't deliver, but a promise he couldn't and shouldn't deliver.

So what's so good about 'the grey'?

It can be tiring to see pain romanticised. Pain isn't romantic; it's just painful. But if one can learn to find some peace, maybe even enjoyment, in the complexities and uncertainties they will deal with in their life, they're putting themselves in a much more powerful position to be able to error correct. And error correction of ideas is needed to learn anything about how to live. To seek the perfect cure for weakness, then, might sound pragmatic, but it's really explaining away part of the methodology needed to enjoy life.

The principle is exactly the same in any other area of knowledge creation. If one attempts to create knowledge by avoiding or explaining away problems, one only ends up with dogmas. To actually 'problem solve', one needs to be on friendly terms with the idea that there are problems.

Monday 13 May 2013

How to deal with loneliness

There's something very odd about loneliness. Firstly, it seems to be a misnomer. Though it manifests as a craving 'for people', usually specifically a craving for love or some other form of intimate connection with another human being, such as a strong friendship, the PROBLEM itself doesn't seem to have anything to do with not having this contact.

Why do I say this? Well, people can feel lonely with love/friendship or without it. Similarly, not having enough love/friendship in your life and wanting it needn't lead to lonely feelings. So clearly something else is afoot in a lonely mind, and on consideration it seems to be more of an internal problem than anything to do with the external.



There's a wisdom in NLP to the effect of: Loneliness is not liking who you're alone with.

I'd tentatively accept that there's truth to this wisdom, because it seems to be the case that if you were a happy, fulfilled person, it would be hard to then be lonely, even if you did crave greater human contact. It also helps explain the unrealistic way in which people get lonely. They want people to come and save them from some unpleasant feeling that seemingly wouldn't be present if they had an otherwise fulfilling life.

The problem with the wisdom is that it's a little vague: there are lots of ways that you could be in some sense unhappy with yourself but still be... happy. Maybe you're OK with the knowledge that you need to make X, Y or Z improvements to yourself to be happier/better off. 'OK' as in it doesn't interfere with your mood.

I have a conjecture, therefore, that loneliness arises when you're not happy with yourself/your daily life, and there is some coercive force that makes you feel that you couldn't do anything about it, so in an eagerness to ignore this unfortunate situation, you focus outwards in the hope that another person can come along to distract you and make you feel happy despite your miserable situation.

This coercion can come from some authority in your life, say a school. It can come from a lack of knowledge, say you feel trapped in a job and you just don't know what to do to get a better one. Or it could come from a more specific ignorance: you've got hold of an incorrect idea about how you should live and you're now forcing yourself to live that way even though it's unpleasant.

Remove the coercive force, make the problem soluble, solve it, and the loneliness should retire.

Euthanasia and the Slippery Slope

"There is no slippery slope and the relaxing of practice is not supported by evidence from the Netherlands or from anywhere else where the law is more compassionate," 
- Terry Pratchett on euthanasia. 

It's interesting that this writer of Discworld fame, who has recently been trying to legalise euthanasia after being diagnosed with Alzheimer's, would say this. Most people think this. I hear it a lot. But half the time I hear it I can't help but wonder, did they look at the evidence? Or are they assuming that because there hasn't been a huge outrage, no slippery slope has been breached?



The Remmelink Report (1991) was the first official government study into the practice of Dutch euthanasia. The document found involuntary euthanasia to be prevalent, with 45% of cases being involuntary. The study found that in 1990 an average of 3 people a day died from involuntary euthanasia, of whom 14% were fully competent and 72% had never given any indication that they would want their lives terminated. And in 8% of cases, doctors performed involuntary euthanasia despite believing that other options were available.

The most frequently cited reasons given for ending the lives of patients without their knowledge or consent were: "low quality of life", "no prospect for improvement", and "family couldn't take it anymore."

In Belgium, a 10-year review found similarly that almost half of patients euthanised had not given their consent.

The thing is, make no mistake, the guidelines in the Netherlands are exactly what you'd think they'd need to be for there to be no 'slip' into involuntary euthanasia (for instance, they state specifically that it must be voluntary). Belgium, too, with its requirement that a patient not only volunteer but be conscious at the time of the decision.

Sadly, ultimately, if it's written down, it can be interpreted.

There have always been other elements to upholding the law as it should be read, and they are these: where do the incentives lie en masse? And what's the cultural attitude? Unfortunately, you can currently get a bigger buck taking a life than keeping one, whether we're talking about Dignitas's profits or how State healthcare can save much-needed resources here and there. Similarly, culturally, we seem to be a little too compassionate, a little too quick to agree that a person's life must not be worth living. Is it so wrong to tell a severely disabled person that wants to die to keep trying? To figure out how to enjoy the life they have?

Until the incentives and the culture can support the moral interpretation of euthanasia legislation, there should be no euthanasia legislation. It is an unfortunate truth, but a necessary one. The cost so far: hundreds of thousands of lives.

Thursday 14 March 2013

Reply: A Skeptical Look at Karl Popper

For more Popper criticisms, see the full Martin Gardner article here.


Popper recognized — but dismissed as unimportant — that every falsification of a conjecture is simultaneously a confirmation of an opposite conjecture, and every conforming instance of a conjecture is a falsification of an opposite conjecture.

People often notice this thing where falsification and confirmation can be two sides of the same coin. For example, Popper refers to the Eddington eclipse test as an example of a theory 'surviving falsification', but most refer to it as a 'confirmation'. So in this instance the meaning is the same. From here, Popper critics generalise that confirmation and falsification are always two sides of the same coin, and therefore that Popper wasn't really saying anything, just playing with language. 

But there is a genuine difference between the two. Attempts to falsify make for tough tests, whereas attempts to confirm are only tough tests in the instances where the test is also acting as an attempt to falsify. So, for example, finding another black crow is a valueless confirmation to a Popperian in part because it could never falsify the claim that some crows aren't black.

Of course, Popper would avoid this talk of confirmation because of its association with verification/vindication. (As a historical side note, in the first edition of LSD Popper did refer to corroboration as 'degree of confirmation' and had to change it because it was confusing. But even drawing towards the end of his career, in Realism and the Aim of Science, he guesses that his followers will eventually have to change 'corroboration' too because, though less confusing, it's still too confusing.)

One is that falsifications are much rarer in science than searches for confirming instances. Astronomers look for signs of water on Mars. They do not think they are making efforts to falsify the conjecture that Mars never had water.

One has to be careful not to confuse the psychology of the thinker with the process by which knowledge is improved.

There are many reasons why scientists may be looking for confirmation of water on Mars, psychologically. But say that when mapping out Mars some scientists stumbled upon what seemed like a giant lake. Would their work as scientists be done? No. As Popper says, '...support is of little or no value unless we consciously adopt a critical attitude and look out for refutations of our theories.'

Falsifications can be as fuzzy and elusive as confirmations.

Popper did know that the auxiliary hypothesis problem applied equally to falsification and confirmation; I'm assuming this is the problem Gardner has in mind. His solution was something like: if you suspect an auxiliary hypothesis is responsible for your test giving the wrong result, then test that auxiliary hypothesis independently. (I incidentally don't buy this answer, but as far as I know it's the Popper answer.) 

Tuesday 19 February 2013

Understanding Procrastination

There's this thing where, if you're conflicted between two wants, your brain will pick a third, unrelated  thing to do instead. It's a trick most of us have developed to avoid the potential stress involved in our internal conflicts. We do it with all sorts of stuff, but when we do it with work, we call it 'procrastination'.

This is why this bizarre thing happens when you procrastinate. You don't work on the project that you should be, nor on the project that you'd be working on if you were free to do whatever you want. Instead, you end up doing the kind of stuff that ranks somewhere like four or five on your preference list. This is the time when you watch an entire box-set of a television series you like, or you alphabetise your DVDs. Stuff you enjoy, but you wouldn't usually do. 

But in understanding this, you can actually do something about it. 

First I should say, of course there are other things 'procrastination' can mean. Sometimes a person genuinely prefers going on facebook, or watching the DVD box-set, and it's other people, not the individual, who have decided that they're not doing what they should be. But I'm not talking about this kind of thing. The solution is too easy: ignore them.

The kind I'm talking about is most peculiar to people, because they actually love the things involved. On the one hand you have X, a project you love and do regularly. On the other hand you have Y. Y is the kind of thing that you would enjoy doing, maybe even quite similar to X, but you don't yet feel motivated to do it for its own sake. Unfortunately, you have to do it, because there's a deadline looming or it's important for some other external reason. You love X, and you will love Y, so you'd think picking one of them would be easy. But it's not; you pick Z instead, something you kinda like but never normally pick. Why? Because of self-coercion. 

We so readily accept guilt and worry as legitimate ways of motivating ourselves. I personally think it's a hand-me-down from how many times (particularly growing up) other people used guilt and so forth to coerce us. But the consequence of this is that we don't do X, as we normally would, because doing X would remind us that we're not doing Y. And we of course don't do Y, because guilt isn't actually a very persuasive reason to find Y enjoyable. So we pick Z, specifically because it's different enough from Y and X that we can forget all about them.

The solution, as bizarre as it might sound to people, is to not try to force yourself to do Y, especially by means of guilt or shame or anything else like this. Just embrace that you really want to do X, and be happy about that. 


Now, at this point, you might say: but what about the looming deadline?! But doing X instead, IS the solution for how to start doing Y.

It's true that you might actually be better off doing Y, but you're not going to be motivated to do Y unless you allow time for yourself to be positively inspired to do it. Say X is writing a play, and Y is writing your university review of someone else's play. In doing X, you give yourself the chance to be inspired to do Y, either instead, first, or afterwards, because X will be similar enough to Y to allow for this. (Remember, that they're similar enough follows logically from the fact that you have to pick Z, not X, in order not to think about Y.) 

Once guilt is out of the picture, this positive inspiration can take hold quite quickly. I've known it take minutes, even. On the other hand, of course, this doesn't happen when you pick Z (say, facebook), because Z has been specifically picked because it lets you forget all about X and Y, so there'll be little or nothing to remind you of why Y is good/fun/interesting. 

So in summary: If you want to get rid of this kind of procrastination 1) let go of guilt and be OK with doing what you really want, and 2) do what you actually want and let that be the inspiration for doing other stuff, too. 

The Validation Theory of Epistemology P.3

Coming soon...

Monday 18 February 2013

The Validation Theory of Epistemology P.2

First we should start with clarifying what it means to validate a theory as a candidate for true belief.


One of the most popular theories of justification in epistemology at the moment is the idea that a theory is justified when it is shown to be 'likely true'. This position is an attempt to get around the traps fallibilism presents for the vindication approach, without having to give up on its merits. Basically, if you can show that something is likely true, this makes it an appealing candidate for true belief, in the same sort of way that showing a theory is definitely true would. 

It doesn't really work, though. To say something is likely true is still to make an appeal to certainty. Now, instead of saying that you are certain that p is true, you are saying that you are certain that p is likely true. The difference is not fundamental.

A second, less popular approach is to say that justification is to show a theory is a 'possible truth'. This gets around any of the problems of making an appeal to certainty, but again, it doesn't really work. It's such a weak requirement of a justification to say that 'it could be true' that you could say it of anything, and it therefore becomes meaningless.

Popper made one of the only serious attempts to make the 'possible truth' route work. He argues that a theory is justified as a candidate for true belief if it is a possible truth in the sense that it has not yet been shown to be false. His 'reasons to believe something' are negative, then, not positive. It's not that the theory has some particular strength we can point to as a reason to believe it; it's that it's the only surviving theory left.

But the problem with the Popperian approach is that it assumes too much of falsification. Now, Popper did admit that a falsification could be wrong, but he failed to recognise the impact this has on his theory. It is only our ability to eliminate some 'possible truths' in the Popper approach that makes preferring one possible truth over another meaningful. If we actually can't eliminate any theories, the Popper project falls down.




But validation theories needn't be as strong as to make claims to likely truth, or as weak as to make claims to possible truth; there is a middle ground, one that is often overlooked. 

Continues here
For part 1 click here

The Validation Theory of Epistemology P.1

Traditional epistemology is based on something called the 'Cartesian Foundationalist Program' (CFP). CFP assumes that natural knowledge (knowledge of the sciences) must be based somehow on sense experience, meaning that knowledge reduces to observation and can be deduced from observation. 

However, neither project works. In the case of reduction, as Quine argues, even the most modest of generalisations about observable traits will cover more cases than its utterance can have occasion to actually observe. Put simply, observation itself never offers enough content to account perfectly for the generalisations made about it. Similarly, Hume persuaded us all long ago that one can't deduce theory from observation. A scientific theory is not guaranteed by its observational premises in the way deduction requires it would be.

These problems with CFP are well known and widely accepted, and yet epistemology has not yet been willing to let go of it. To understand why we need to understand what was so attractive about CFP in the first place. 

In a nutshell, CFP has been preferred because it offers a clear definition of what a justification is, namely a 'certain truth'. It's harder to make the case that we are justified in believing a 'likely truth' or a 'possible truth', but a person who believed in a certain truth would clearly be justified in doing so. Unfortunately, the promise of 'certain truth' is unfulfillable. CFP believed it was possible due to the above two mistakes, but also because of a much larger, often ignored, mistaken belief that sensory data is more credible than it actually is, and the often missed point that observation is theory-laden.   


Epistemology has therefore been forced to find a new standard of justification. It has been suggested that rather than attempting to 'vindicate' theories, we should merely attempt to 'validate' them. That is to say, offer some reason why they are good candidates for true beliefs independent of them necessarily being true beliefs. And the validation approach is widely accepted. No one around these days in philosophy seriously believes in vindicating theories. But paradoxically, philosophers still find themselves attracted to CFP, and time and time again efforts will go into making CFP work.

The problem is, although we're left with validation as our only option, this approach has not been established as a coherent, tight philosophy in the same way that the 'vindication theory of epistemology' had been. That is, it's obvious how, if it were possible, a vindicated approach to justification would act as a candidate for truth; it is not obvious how a validated approach would. I conjecture that this is why CFP, though understood to be false, remains so prominent in epistemology, and I believe it will remain so until the validation theory is fully thought through. 


For more on the problems of the validation theory of epistemology, see P.2
For my proposed solution see P3.

Tuesday 12 February 2013

Happiness as an end in itself

The Utilitarian position that 'happiness is the sole basis of morality and that people never desire anything but happiness' can be rather jarring. Narrowly understood, 'happiness' is a kind of emotional pleasure. Yet some of our finest moments might lack this. Take, for example, jumping into a pool to save a child, or the endurance of many, many months of work on a worthy project. Now, it is true that these things can bring us a narrow happiness. After saving the child, we may feel this in the form of pride, but we may not; we may only feel anger at the useless lifeguard, or terror at the frightening situation now behind us. In either case it doesn't matter, for we entered the situation not for the feelings we would experience afterwards, but for the child.

Similarly, we may feel many points of pride when working on the work project. But we will also for the most part feel not very much at all, as one does when one 'enters the zone' for several hours a day. If feelings of pleasure were paramount, one could spend that time on pleasure-seeking.

On the other hand, it would be a little silly to assume happiness plays no role. There is a sense in which we jump in the pool to save the child because we find the state of things where the child lives a happier one than the one where the child dies. And there is a sense in which we enter into this long and tiresome work project at the expense of many pleasures we could instead pursue, because the worthiness of the project makes us feel more fulfilled than anything else could.

Our problem is that, once we broaden the definition of 'happiness' to include all these different types of things--pride, fulfilment, abstract happiness, and physical happiness--and anything else that may be included, we face the dilemma that happinesses conflict, and it is not obvious which we maximise and which we leave behind. Mill's solution to this problem was a rather unsuccessful appeal to human nature. He divided happiness up into 'higher pleasures' and 'lower pleasures'. It was essentially a kind of Victorian snobbery, which mistook the trends of his time and class for human nature. Indeed, his mistake was in thinking that 'human nature' could be any help to us at all in solving the problem.

But if we put aside human nature, what are we left with? Nurture. We are left with the ideas and values of an individual, to which our happiness (of any kind) is subservient.

Happiness is an end in itself, then, not because happiness is intrinsically good (in fact happiness isn't intrinsically anything, it seems), but because some form of happiness or another is the end expression of actions that realise our values. Likewise, some form of unhappiness or another is the end expression of actions that don't. What determines the hierarchy, then, is not kinds or quantities of happiness, but what we value most highly.

Saturday 9 February 2013

Ayn Rand, the Academic Philosopher

Rand wasn't an academic. She didn't write like an academic. She wasn't exceedingly well read or studious on academic philosophy. And the tendency is, academic philosophers do not take her seriously if they've heard of her at all.

But underneath the style she chose and the approach she took, how well would her ideas translate into academia? Or to put it a more interesting way: can Rand be taken seriously as a *philosopher* and not just a guru?  



Philosophy problems Rand offers solutions in:

The meta-ethical problem 'why be moral?': what if you don't want to be moral? Doesn't the whole thing kind of fall apart if you don't care about doing the right thing?

Rand's solution is that she never asks you to care. But, she argues, it would never be in your interest not to. Rand's morality is utterly self-interested, and is more concerned with freeing you from not doing the moral thing (e.g. being altruistic) due to cultural standards or pressures, than it is in commanding you to act any way you might not see the point in.


The meta-ethical problem of objectivity: Objective morality is hard to defend meta-ethically. Values that are 'mind-dependent' are subjective. They are governed by preferences. For a value to be objective it must be good 'mind-independently'. But how could something be valuable if it's not valuable to the subject? What other authority could determine a value's status as valuable?

Obviously 'objective morality' is important to Rand, enough that she named her theory 'Objectivism', but actually she never insists on objective values. What she insists on instead is that our chosen values be thought through rationally. So, for example, a heroin addict isn't acting wrongly because they value having a high; if this is what they choose, they may. They are acting wrongly because they are picking a destructive way of attaining a high, one that causes them distress and other things *they* don't enjoy. With rational thought, they could realise a better way to get their high.

Rand manages to not be a subjectivist without having to awkwardly discover a way in which this mysterious thing called 'mind-independent value' exists.

The ethical problem of altruism: If we're motivated to do what's good for us, why care about what's good for others?

What makes Rand's account of altruism so interesting is that it's negative. For Rand, 'altruism' is her word for 'self-sacrifice for the sake of others', that is to say, putting someone else's values before your own. To do so is, for Rand, wrong. But she acknowledges that sometimes your own values may contain within them the consequence of doing something nice for another.

Forget whether or not this is true; this is a very interesting answer to the problem because she explains away the problem: all morally permissible acts that are altruistic are also self-interested. Others have attempted this answer, but they always end up with morality being nothing more than prudence. Because of Rand's interesting combination of values and selfishness, she doesn't fall into this trap.


Verdict

The above three problems are each major hot debates in contemporary ethics. Rand actually offers interesting and worthy solutions to them. She is not ignored by academic philosophy for lack of content, then, but for lack of being understood. This may be her fault for being unclear, it may be the fault of her advocates for not being good enough at philosophy to represent her, or it may be the fault of the academic philosophers who have heard of her, for dismissing her based on character.

The Struggle of a Grown-up Autodidact


I had always believed that people enjoy being an autodidact, that is to say, 'a self-taught student'. I had been one past the age of 12, as had most of my friends, and as had hundreds of young people I met at home education clubs and festivals. As I grew up, I would go on to meet people who had been to school, but who had had it in them to learn what they knew, and what they were brilliant at, by themselves.


What being an autodidact, particularly from a young age, provides is the freedom to pursue and nurture your own interests in a way more efficient and pleasant, for you are in charge of the learning style and the when, wheres, and hows. Of course, people who are so familiar with it being someone else's job to take care of what they learn for them always get confused at this point. You'd have to be naturally (unusually) ambitious not to just waste your time and do nothing, they say. What they miss is that, if you look back at my description, you will notice I said the words 'your own interests'.

There is much more to say to defend autodidactism; this has barely scratched the surface of why it is an important way of conducting yourself as an individual. But this is not why I am writing today. I'm writing because it occurs to me that there is a way in which people don't enjoy being an autodidact, and it is one of the most important ways they should. That is, most people, even those given the chance to, don't enjoy thinking for themselves once they're grown up.  



What exactly is the difference between you being in charge of your learning and someone else? It's not the use of teachers, or books, or documentaries, or labs, or computers, or indeed any learning resource. It is a question of who is managing the decisions of what, when, where and how to learn. Implicit in this is, in fact, what autodidactism is at its core: thinking for yourself.  

The difference is really the difference between man A and man B. Man A reads a book because advancing his learning through the content promised in that book meets a particular need of his. He hears about the book from others, and engages with their recommendation to read it critically, to gauge if it is really the book for him. When he reads it, he is just as critical with the author's words. He adapts the parts he likes, he fixes the parts he finds broken, and he takes nothing it says on faith. Man A is a man whose thoughts are so his own that they don't match perfectly with anyone else's, not even those of his greatest influences. 

Man B is told he should learn X, and takes it on faith. Is told he should read book Y to do so, and takes it on faith. And assumes the book true, on faith. His thoughts are not his own thoughts, they are other people's slipped inside his brain. 



The sad truth is, as fun as it is to be an autodidact when you're a child and parents take care of your food, and housing, and so on, the older you get, the less fun it is to think for yourself. Wouldn't it be so much easier to toe the line for your career? Or to say what your friends say, so that you have some? Or to just let someone tell you how to live, because it's such a scary thing to have to work out at the very time you need to already know the answer?

This attitude is a myth; it's always better to think for yourself, and doing so will always give you more control over what happens in your life. There are problems involved in doing so, but they are all soluble in their own ways. But it is a myth that many grown-up autodidacts buy into, for it happens time and time again that a grown-up autodidact will try to find people to think for them, or will let the crowd around them think for them.

I am currently finishing up university. Some people I know who support autodidactism don't agree with university; they mistake it for school. It's not: university is about as coercive to how one should learn as having a library card is, or as using a shop to buy your food is to what you eat. I have been here because I want to think for myself about the education resources university provides, and I am pleased with how I have done so. But it worries me that some autodidacts go only to give up on being autodidacts. Equally, it worries me to see the autodidacts that haven't gone to university, for I know a good few of them who seem to have just found a group, intellectual or cultural, to tell them how to live (and therefore what to think). (This is in fact much worse than the university problem, because university has more self-criticism than a group of people tends to.)  

But what worries me the most is that it doesn't seem to be said enough that being an autodidact (someone who thinks for themselves) gets harder now. That it isn't quite as simple as it was when we were children. And admitting it, being conscious of it, is an important part of considering it, and of considering critically how one wants to proceed. 

Thursday 24 January 2013

The Myth of the Myth of Mental Illness

Is 'mental illness' mental or illness? This is the question asked by Szasz, and he argues that both answers are given and both are wrong. 


Sometimes, he argues, the term is used to refer to peculiarities of thought or behaviour arising from a disease of the brain (illness). And other times, he argues, the term is used to describe something very different from a disease of the brain, something he calls 'problems with living' (mental). But neither of these definitions makes much sense when followed through logically, he claims. 


In the first instance, if mental illness is at root a disease of the brain, then it seems only to confuse matters to call it 'mental illness'. The problem is nothing to do with anything 'mental' but is a problem of the brain. There are many physical disorders that take this form that we would not make the mistake of classifying as mental. For instance, in elderly patients who lose their sight, visual hallucinations are common. Or in the late stages of syphilis, panic and hallucinations are common. But it would be peculiar to consider either a 'mental' problem rather than a brain/physical problem. 


The second definition fares no better. If mental illness is a problem with living, then it seems not to make sense to talk of it as a disease. As Szasz says, "Since medical action is designed to correct only medical deviations, it seems logically absurd to expect that it will help solve problems whose very existence had been defined and established on nonmedical grounds." 


But does this argument exhaust all the possibilities? 


Let's accept the two premises above: 1) the root cause of a mental illness can't be a physical problem, and 2) medical action can only be used for mental illness if it's a medical/physical problem. Both conditions can be satisfied if 'mental illness' were such that the root cause was the patient having a 'problem with living', and this in turn affected the brain in such a way that the patient would benefit from medication. 


This third understanding seems to solve Szasz's problems, but it is also, I believe, what many people have in mind when they speak of mental illness. Consider how mental illnesses are treated: through a combination of therapy and pills aimed at altering brain chemistry. The pills alleviate the symptoms, but the patient is advised to continue working out the underlying issues in therapy before they stop taking the pills. (Why would they need therapy if the pills stopped the underlying problem?) 


Of course, none of this is to say that there isn't a dark side to how mental illness is thought of. There is still a problem of classification, and the potential abuse of broadening what counts as mentally ill too far. After all, who decides what counts as a mental illness, and what is merely a personality? Indeed, all ideas have an effect on the brain in at least some mild way. But we can begin to see here that a satisfying definition of mental illness could exist, if we can show that unwanted personal problems in a patient have made the brain ill in a way that is already recognised as an illness in medicine more generally. For example, one is ill if one hallucinates, whether the cause be mental illness, syphilis, or visual impairment. On the other hand, there is no illness outside mental illness that has, say, excitability as a symptom (or whatever other absurd 'symptoms of mental illness' are currently being employed).