
Friday, December 2, 2011

Infant Morality

Recent research in child psychology provides new insight into the development of moral judgments. The proximity to the topic of ethics reminds me of how frequently discussions in my college philosophy classes turned, at one point or another, to the subject of child psychology. The results of the new research raise a variety of questions in my mind.

The study, by Kiley Hamlin of the University of British Columbia, showed five- and eight-month-old babies a series of videos in which one puppet either helped or hindered another in a task, and later either had a toy returned to it or taken from it by a third puppet. Three quarters of the five-month-olds preferred the puppet that returned the toy no matter what, but the eight-month-olds preferred the giver or the taker depending on the earlier actions of the puppet being given to or taken from.

The conclusion that has been drawn from this is that between those ages, children learn to determine whether a person deserves good or bad outcomes. That is, they have developed a sense of justice, and see value in rewards and punishments in addition to just straightforward good and bad. University College London child psychologist Uta Frith is quoted in the coverage of this research as saying: “To me this says that toddlers already have more or less adult moral understanding. Isn’t this amazing? I don’t know in what way adults would react in the same situation in a more sophisticated way.”

She may be right that toddlers possess adult moral understanding, but I would add that that doesn’t necessarily say anything good about toddlers; it says something terrible about adults. It also reflects badly on the people conducting the research, or at least those commenting on it, who seem entirely too cavalier about the accuracy of intuitive moral judgments. Although reward and punishment are sensible moral concepts, it seems to me that lauding eight-month-olds for a natural inclination towards tit-for-tat ethics actually contradicts one of the nobler axioms we end up teaching them later: two wrongs don’t make a right.

It strikes me that, based on the description of the experiment and the accompanying videos of the puppets, the child participants have no reason to believe that the giver and taker puppets actually witnessed the helping or hindering actions of the other puppets. If that’s so, the children aren’t judging the appropriateness of specific acts of reward or punishment; rather, they are projecting a sense of justice, from their own points of view, onto independent events. That doesn’t strike me as morally sound. It seems quite subjective, and it may be worth noting that in the experiments this natural subjectivity is coming from someone who has not yet developed an independent sense of self.

I wonder whether it has struck the original researchers that these observations may imply a groundwork for the development of religious concepts in the human mind. Given that the children are not judging the appropriateness of direct reward and punishment from the puppet that was helped or hindered, the justice being dispensed turns out to be a sort of cosmic justice. That is, if a child continues to think that good and bad outcomes are deserved or undeserved regardless of their actual connection to prior good deeds or wrongdoing, there comes a certain point at which a sophisticated intelligence needs to give some account of how punishment can work without human intention. Notions like God and karma fit the bill.

So eight-month-olds and twenty-eight-year-olds alike might be inclined to think well of unprovoked acts of aggression if their victims have formerly shown themselves to be assholes, because the act is justified from their own limited point of view. That is, many adults may indeed fail to “react in the same situation in a more sophisticated way.” But I certainly think they ought to do better. Adults, who have had some time to reflect on the tremendous nuance of ethical calculations, should be capable of making moral judgments from an objective point of view.

It disturbs me to think that natural human development leads one to consequentialism, because I don’t think that’s the correct conclusion. Rather, acts are good or bad in and of themselves, not based on their outcomes or whether their objects are deserving. You can either give the ball back to the puppet that dropped it, or you can take it from him. Stealing is no more or less wrong if the puppet had been a jerk beforehand. Interestingly, that is apparently the way the five-month-olds in the study see things. So in my view infant morality may be preferable to toddler morality.

Before this, I thought that deontology and consequentialism were competing on a fairly level playing field. Now I see that the deck may be stacked against my favored category of ethical theory, in that promoting deontology requires overriding aspects of human nature. That makes for a challenging breaking point.

Of course, the results of this research were not uniform across all participants, and some children demonstrated different preferences. That leads me to wonder whether the children who preferred a puppet that returned toys even to bad puppets will naturally grow up to be adults like me, who believe that the rightness or wrongness of an act is unaffected by its surrounding context. Perhaps there is an evolutionary trait that appears in the development of a minority of children and leads them to make ostensibly moral judgments that are quite different from what these researchers conclude is normal and assert is accurate.

But the fact remains that the vast majority of children evidently grow into a natural belief that right and wrong are subjective and context-dependent. And the further fact remains that I believe that’s false and unethical. Thus, my view is that in a philosophically and morally sophisticated society, some natural consequences of child development, such as the impulse to cheer on the misfortune of those who have caused misfortune, need to be overridden later in life.

Friday, October 14, 2011

October Horror Post #2

I've let almost two weeks lapse since making the first in what was supposed to be a series of posts throughout the month related to the topic of horror. I really need to start getting into the Halloween spirit now.

I am continuing my way through the 2008 television series Fear Itself, and most recently watched the episode “Skin and Bones,” which is by far the best of those I have seen so far. Its strength rests largely on the makeup effects, applied to create an antagonist that is frightful in initially subtle ways. The story is a familiar one, and apparently an increasingly popular one. It is essentially the same as that of the charmingly bizarre 1999 film Ravenous, though “Skin and Bones” executes it in a quite different way.

I believe that a part of the episode’s appeal may be attributable to the earnestness of its director, Larry Fessenden. Each episode of Fear Itself has a special feature consisting of interviews with that episode’s director and actors. While several directors thus far have had something interesting to say about horror, its role, and its appeal, Fessenden’s initial commentary is far and away the most striking to me. He says:

“I love horror because it really is just part of my psyche. I think it’s the way my brain is wired. When I walk down the street and I see a fence post, I imagine someone impaled on it. I see life through this filter of real despair and have always had an awareness of death and of the fragility of life. I really think horror is a psychological genre, and people who are drawn to it, I think, have some sort of existential experience with life.”

That notion of imagining horror in mundane contexts is powerfully familiar to me, but I had never really connected it to an affinity for horror as a genre of film or literature. I have, however, considered how it may relate to my strong sense of empathy, my philosophical and spiritual tendencies towards stoicism and asceticism, and my experiential curiosity.

The wiring of my brain may be a bit different from Fessenden’s. I don’t typically imagine horrible outcomes from a third-person perspective. Rather, there are situations in which I cannot suppress thoughts about the terrible things that could happen to me, and what that would be like. It’s usually associated with the perils of the modern world, though the sight of wild animals may prompt me to imagine, almost to fantasize about, being mauled or maimed by them. If I see a hydraulic lift, I immediately and vividly imagine having an arm trapped in it as it lowers. Many such things primarily impress me with the damage they can do, and their practical use is only an afterthought.

Often, my psychological focus almost rises to the level of impulse. I visited my former employer recently, and he showed me a bowl cutter that he had recently gotten running. It is an extremely old machine and has no safety catch, so the blades can be turned while the lid is raised and they are completely exposed. He gleefully demonstrated its operation, and I stared at the whirring blades and felt as though I were willfully denying an impulse to reach out towards them. I actually feel a certain fear when I use dangerous hardware, because I worry that I might injure myself intentionally should my conscious mind forget to safeguard me against my id, or whatever it is that acts against the basic instinct for self-preservation.

I’m not sure why my mind works this way. I know I am not alone in it, given Fessenden’s comments and the fact that my ex-girlfriend, for one, attested to the same tendencies. But I’m equally certain that it is not common enough to be called ordinary. Maybe those who do have such vividly dark imaginations have other things in common as well. Maybe an appreciation of the artistic depiction of such unsavory fantasies is one of them.

Something that actually frustrates me about modern horror fandom is that audiences seem to have a distinct lack of empathy. So much of the most popular horror is better identified as “torture porn,” and the people who love it seem to be indulging in pure, base voyeurism. I worry that a lot of theater-goers are more prone to put themselves in the position of the perpetrator of horror, rather than the victim. I may be misjudging them, though. It may be that they still find the things on screen to be genuinely disturbing, but that it registers and is expressed differently.

Ultimately, I can only speak for myself, and what I’d say to defend my interest in material that is shocking or just psychologically or thematically dark is that I want to be disturbed by what I’m seeing. I want to vicariously put myself in the place of someone who is fleeing for his life, suffering torments, going insane, and so on. The fact is that horrible things really do happen every day. And I hate the feeling of being insulated from them, of being trapped in my personal fantasy world of relative comfort and pleasure.

When the real world as I experience it is such a fantasy, I compensate by seeking out the fantasies that stretch to the opposite extreme and depict extraordinary fear and hardship. In one case that may be watching a scary movie, and in another it may be simply imagining what it would be like if my hand got caught in the meat grinder. And in other cases, it might be having a long conversation with a person suffering from multiple personality disorder, or pausing to give a little money to a homeless person, or volunteering, or fasting. There is real horror in the world, and I believe that by keeping myself distant from it, I would be keeping myself distant from a vast segment of reality, as well as from an awareness of the suffering that maybe, someday I will be able to alleviate.

That last consideration raises what could be an interesting question: I wonder if anybody has ever analyzed the political leanings of movie-going audiences. It seems like there could be some basis for believing that people who are more interested in observing horror, or reading about it, might also be more inclined to be politically liberal. A basic difference between liberalism and conservatism, as I see it, is that liberalism focuses on the improvements that are still needed in the world, while conservatism sees only the improvements that are already behind us, and disregards the possibility of negative consequences or ongoing mistakes. Put more simply, liberalism is acutely aware of the horror in the world, and conservatism denies it. It would make sense if people who have a psychological impulse to observe or imagine personal horrors also had a social interest in collective horrors.

Then again, that would make more sense if it weren’t for the fact that so much of the horror I consider to be the best has such decidedly conservative themes. And I think that may make a good topic for my next post on the general subject of horror.

Wednesday, August 10, 2011

The Personality of Mental Illness

On Monday’s Colbert Report, the guest was Nassir Ghaemi, who has written a book called “A First-Rate Madness: Uncovering the Links Between Leadership and Mental Illness.” In the interview he explained that certain mental illnesses can have positive effects, such as mania contributing to intense creativity and depression being associated with greater empathy. I was thrilled to hear such an idea uttered on television, because it represents an almost unheard-of push towards mainstream recognition of views similar to my own on mental abnormality, psychiatric medication, and the over-diagnosis of mental illness.

I feel that Americans are far too quick to see themselves as afflicted by mental states and psychological tendencies when they might be better served by identifying themselves as simply influenced by those things. It suggests a terrible pessimism, and a pessimism that ironically is grounded in a preoccupation with happiness. It seems to me that people look on their own abnormalities through a negative filter, noticing only the extent to which those abnormalities impede their sense of pleasure and their capability for thorough social assimilation.

I’m certainly willing to acknowledge that there are mental afflictions that seriously threaten a person’s life or well-being, and to which medication may be a reasonable response, but I believe that in the vast majority of cases, the impulse towards diagnosis and treatment is based on an unanalyzed desire for normality, and that this ignores the possibility of negative effects from artificially altering one’s own brain chemistry. The essence of my view is that it is extremely difficult to disembed who you are from how your mind works. My personal feeling is that no matter how serious a diagnosis I could secure for any damaging tendencies or dark thoughts that I experienced, I would never appeal to medication or even to elaborate therapies as a means of contending with them. My worry is that by making a top priority of removing the abnormality, people effectively risk using a machete to remove a tumor. Something is likely to be lost that you weren’t aiming for. Is it worthwhile to induce changes in your personality for the sake of improving your sense of comfort? Some may well say yes, but I imagine that most people to whom the question is relevant simply don’t think about it.

For my part, I think that being prone to what one could call depression is part of who I am. By definition, I guess that means I’m not a happy person, and that I don’t have much hope of being one unless I experience some very significant changes. I can be content with that, however, because I highly value other things apart from happiness. I also believe that without my depression I likely would not be nearly as principled a person as I am, or, precisely as Ghaemi points out, as empathetic. The worst of my depression can sap my motivation, but the best of it gives me a clearer picture of what a kinder, more loving world would look like. I can’t fathom the idea of reverse-engineering a part of my brain so as to be able to operate more easily and more regularly in pursuit of much less significant goals.

Perhaps this commentary will be viewed as unfair because I am generalizing from my ordinary experience of downcast moods and deleterious attributes, and pronouncing upon mental illness, which may be quite different, and inconceivable to me. But who, apart from a psychologist, is to say that I’m not mentally ill? And even if one psychologist denies that description, I expect I’d be able to find another willing to levy a diagnosis. A great portion of the problem with this subject, to my mind, is that there is no clarity as to where the dividing line lies.

I’ve always thought of the selective diagnosis of bipolar disorder as rather unfair. Defined in its most general possible terms as the vacillation between exhilarating highs and debilitating lows, manic depression just strikes me as a symptom of being alive. There are no doubt some people who barely experience those highs and lows at all, and others who are unambiguously bipolar and feel utterly out of control because of the strength and frequency of the phases, but there must be vast swaths of the population who exist somewhere in the middle ground, where those phases occur and have a recognizable effect but don’t necessarily dominate the individual’s personality. Clinically, some of those people will end up with a diagnosis of the illness, and others will not. I surmise that this must harm the self-perception of individuals on both sides of the arbitrary divide. Those who are denied a diagnosis are left with the impression that what they experience is a set of personal features under their control, which they can alter or overturn by their own efforts. Those who are identified as bipolar, on the other hand, are made to think that they are sick and that their own efforts are futile without external treatment.

As I see it, the truth is both that all of us and none of us are in control of our own minds. Our own behavior can reinforce itself or contradict itself, and reason can isolate problems that need to change even within our own thinking, but our activity and our brain chemistry also have innumerable external influences, which similarly affect realization, reinforcement, and reversal. The nexus of internal and external influences provides the potential for constant change. Sometimes that change may be regressive, and sometimes it may be insubstantial, but under ordinary circumstances it can at least be assumed that it is gradual. And that fact affords people the possibility of measured change, in contrast to the option of cutting off the source of discomfort and sacrificing whatever may go with it.

Mental illness is a frightening topic. But what is far more frightening to me is the notion that such large numbers of Americans are more concerned with being normal than with being themselves. Mental illness and personality traits may sometimes be the same features in different degrees, and they may be inextricably linked together. I think there is too much guesswork and too much alarmism involved for it to be worth playing with the fabric of one’s mind, but I am decidedly in the minority within a culture that so values the kind of happiness best obtained by blending in and accommodating one’s circumstances rather than changing them. In that context, it will be a long time before enough people change their views to reach the breaking point wherein, as Ghaemi says, we are able to “in a matter-of-fact way, accept that some abnormality is actually quite good.”

Sunday, June 5, 2011

How Many Syndromes Do You Have?

I wrote a recent post recapping an article about mental illness and using it as a basis for expressing some of my general criticisms of psychiatry. The fact that Wait, Wait, Don't Tell Me discussed the recent identification of "Hotness Delusion Syndrome" on this weekend's show brings back into focus one of my points of contention with the field.

The details behind HDS are that fewer men over the age of forty are single than women of that age, and therefore such men are pursued much more vigorously. The imbalance leads men in that stage of life to drastically overestimate their own attractiveness. I refuse to believe that I'm being unfair in saying that this could not possibly be rightly called a syndrome. If the failure to thoroughly and objectively analyze the consequences of statistical deviations is a syndrome, then the majority of all Americans must be very seriously afflicted.

I am completely in favor of endeavoring to explain all manner of human behaviors. It is good to know the circumstances and causes underlying people's judgments, beliefs, and actions, in order to help us to compensate for mitigating factors and to improve ourselves both personally and socially. The notion of diagnosis carries a different connotation, however. It is more excuse than explanation, and it diminishes the sense of agency in a person's actions, while still locating the problem almost exclusively in the mind of the individual.

And there is an ongoing impulse to provide virtually every bit of human activity with this kind of assessment. It leads, in my view, to absurd diagnoses of silly conditions, where it would be much more helpful to simply say, "You should work on this personality characteristic or that skewed perspective." It is not helpful to try to disembed these features from the person carrying them, and to define them unto themselves. For when we do, we end up with things like alexithymia - difficulty in understanding or expressing one's emotions - which, surely, every last human being must have in some measure.

I understand that it is a matter of degree, and that some people experience serious psychological harm and social consequences from characteristics that are, on their surface, innocuous. That does not make it any more sensible to define those characteristics as syndromes and disorders. Saying that a diagnosis of Hotness Delusion Syndrome is absurd in all cases is no worse than saying it is wrong to apply it to some people who experience the same symptoms to a lesser degree. The dividing line for things like this could be nothing other than arbitrary.

The challenge here is that I must, and willingly do extend that analysis to more widely recognized conditions like bipolar disorder. When I think of what bipolar signifies at its absolutely most basic level of description, I think "A series of highs and lows? Isn't that just life?" And I think it is unfair that one person who experiences mild fluctuations of that sort might be denied a diagnosis, while someone else with very slightly more extreme or less ordered phases might be ascribed the title that provides him with both the stigma and the convenient explanatory function of a disorder.

Again, I understand that extreme bipolar disorder can be menacing to a person's well-being, and that therapy may be genuinely helpful in some instances, and medication in others. My own preferences, were I to be faced with a diagnosis of this sort, are definite, but irrelevant. I leave that decision as a matter of personal opinion in all cases. Severe interventions may be extremely helpful in some cases, but what I believe is never helpful is compartmentalizing a person's psyche and thus failing to treat them as a complete person.

Tuesday, May 31, 2011

Down With Psychiatry

I have to admit that I don’t always use my subscription to The New Yorker to full effect, but sometimes an article appears in the pages of a new issue that lets me know with its subtitle that it is something I have to read and give my fullest attention. Yesterday’s issue contains such an article. The piece by Rachel Aviv is called “God Knows Where I Am,” and beneath that title on the table of contents, it reads “A patient rejects her diagnosis.” That is a meaningful subject to me, because everywhere I look, I see people not only accepting psychological diagnoses, but accepting them unquestioningly, and courting them as if by sworn duty.