Monday, February 28, 2011

Mainstreaming Conservatism

Kristina Loew at the Revealer has recently made an interesting comment about pop culture. She says:

“From movies to music, conservative voices have cornered the tween scene, that 12 – 13 year old demographic which often looks to their favorite stars for moral guidance (and ways to spend their parents’ paychecks). “

Loew calls this “the mainstreaming of conservatism,” and wonders if it is time for parents to ask if it has gone too far. Of course, for those of us in opposition to the worldview upheld by Twilight, Miley Cyrus, the Jonas Brothers, and Justin Bieber, the answer is clear. But the vast majority of parents of tweens are likely in support of the evidently uniform worldview of their cultural role models.

For me, the basic observation raises a different question: Who’s to blame?

Just why is it that the only persons put forward in the popular tween market are ones who have been instilled with thoroughly conservative perspectives on sex, religion, and other potentially divisive social topics? Are these just the sorts of people of whom tweens demand more, or are tweens simply swallowing whatever they are being fed? If a roomful of record executives and casting agents are determining the moral content of the stars they introduce to the public, are we to lay blame on them for pulling the strings, or on the young consumers for applauding the puppet show? And what motivation is there for that manipulation in the first place? Do the men in power strive to control the culture of the emergent generation, or is it something less conspiratorial – that they simply fear outcry from parents if their kids are introduced to threatening, progressive ideas too early in life?

Perhaps that fear is justified. Let us not forget the tear-filled testimonies that fill the media every time the television exposes unprepared audiences to a one-second flash of a nipple or half of a swear word. A culture of sensitivity is no doubt providing at least a partial underpinning to a culture of mainstreaming conservatism, and who is to blame for that? Is the moral make-up of modern parents really so staunchly conservative, or are liberal parents simply not doing enough to make their voices audible? Is that even feasible, or does the media drown them out by privileging one side over another?

But then, why are there any sides on the issue of what values and ideologies we should be introducing our children to through their pop stars? I’m not one to claim that teens or even pre-teens can’t understand issues of social and political import, but I would much prefer that they be given a broad span of time to arrive at their ideas about them independently. I have much respect for some young people. I actually think that I myself was smarter at fifteen than I am now at twenty-five. But in my perhaps glib estimation, the more mature teens and tweens, with more thoroughly formulated ideals, are the least likely to admire the pop stars available to them in the mainstream. If that’s true, then the children whose views are likely to be shaped by the dominant culture are the ones who simply haven’t thought much about the topics about which it’s telling them what to think.

If some kids just want to listen to syrupy pop music and watch vampires proselytize about Mormonism, let them. There is no reason for the makers of that material to be bound up with social views that, as near as I can tell, even they haven’t fully formulated yet. Why do we think they should be role models, rather than just entertainers? Why is there even cause to seek Justin Bieber’s sage counsel on women’s reproductive health? Why not let him sing and earn his six figures, and wait until both he and his audience have grown up enough to develop opinions that they hold with serious conviction?

This post consists of a lot more questions than opinions. All I know is that something’s got to give. There’s got to be a breaking point that keeps us from introducing our least reflective youths to our most conservative ideas, and growing them into the social structure that keeps recycling that dynamic. It may come by unseating the cultural powers-that-be, or just by raising our voices over theirs, or by teaching our children to think for themselves earlier and more often, but whatever the means, let it come.

Sunday, February 20, 2011

My Comment on Egypt

I was talking to my mother on the phone the other day, and we took up casual discussion of the news. She brought up Egypt, and quickly came to the comment, “It’s amazing what that internet can do, isn’t it?”

I would have been content to pass the remark off as an instance of an older and relatively out-of-touch person ascribing almost magical qualities to a new technology that she's just not familiar with, but it seems like that sort of commentary has characterized a great deal of the media reporting on the topic. In light of that, I think it's important to remember that the unrest in the Middle East didn't start with a tweet – OMG you guyz, totes fed up with Mubarak, let's march – it started with several people setting themselves on fucking fire.

Now that, my friends, is a breaking point. The decision that a social injustice has so impaired the sense of your life's value that you are willing to die, and to die in a terrifically awful way as a form of protest, is an incredibly powerful thing. It makes the depth of the problem a people face so undeniably clear that they are spurred to massive action to honor that sacrifice. A facebook wall post does not really have the same impact. Internet communication is a terrific tool, but it's just a tool. It's not responsible for anything in and of itself. It is almost certainly the reason why the full course of these protests took months rather than years, but the speed of the communication is no measure of the level of commitment of the protesters.

It is also worthwhile to remember that the Mubarak regime completely shut down the internet in response to the protests. The fact that the protests did not peter out as a result should make it vividly clear that the internet was not the thing driving the movement. That they continued on when it was no longer convenient to get in touch with people who weren't in shouting distance goes to show that Egyptians were eager to organize on the ground and in the moment. And that is, after all, exactly what needs to be done, regardless of the level of involvement from Facebook and Twitter, since the strength of the movement comes from the presence of groups of real, live people in the physical streets.

It's a peculiar Western tendency, but we seem to be enamored with the idea that great things can be achieved with a small amount of commitment from each of a very large number of people. But if circumstances were dire and the people of America remained that lazy, I think they would find out that no matter how vast the communication network, if the people involved do not have a firm commitment, no real change is possible. We seem to like the idea that we can participate in revolutions by changing our backgrounds and using hash tags, but how many people in modern American society have really contributed something materially significant to an international movement? Plugging into what's going on through our laptops and iPhones evidently feels participatory, but it's not the same as being there and having to stick your neck out for the cause. Communication is not enough. If you aren't willing to sacrifice something real, you aren't about to accomplish anything. It makes sense, though, that we would have this impression that personal sacrifice and strength of commitment are not necessary. Even when it comes to war these days, we aren't asked to so much as pay higher taxes, and some people seem to think that simply acknowledging that wars are happening, as with a bumper sticker, is a sufficient act of solidarity.

If we are ever to reach real breaking points, we have to realize that breaking points are not so easily reached.

Saturday, February 12, 2011

Film Analysis: Lars and the Real Girl

Lars and the Real Girl is my latest Netflix return. I well remember the film being advertised when it came out in 2007, and I actually had very little interest in seeing it. It was one of those new releases that I was willing to acknowledge might be very good, but whose appeal I simply couldn't discern. I finally rented the film in part because it was directly recommended to me, and also because I was informed that its star is Ryan Gosling, who is simply a phenomenal actor. The recommendation came with the assurance that Lars and the Real Girl was a veritable emotional rollercoaster, eliciting laughter and tears in roughly equal measure. I found that to be quite accurate, and was delighted by it.

It was a very good film. The performances made it work brilliantly, which was surely no easy task, given the highly unusual content of the story. Every visual detail was well-presented, and the earnestness of the screenplay was obvious. Naturally, I am blogging about this film because I have something to say about the writing and its thematic content, and something having to do with breaking points. The story of Lars and the Real Girl is simultaneously far-fetched and profoundly realistic. It involves a small, northern-latitude town which, when Lars develops his delusional relationship with a life-sized doll, agrees in its entirety to go along with his fantasy until, through its presence, he is able to work out the issues that make it virtually impossible for him to relate to others.

In the DVD special features, the screenwriter specifically states her intentions in writing the script, saying that she wanted to show what it might look like if people dealt with mental illness of this sort through compassion, acceptance, and tolerance. And that is a wonderful thing to behold. The film presents the general topic of mental illness in a remarkably progressive way. The main character is not expected to fix himself by pure force of will. Rather, it takes a community effort, with a great deal of patience, over a long period of time. And I think that is a far more realistic and humane perception. It takes a great many changes of circumstance to bring Lars back to reality, and concordantly it is clear that it was a great many circumstances, not all of them in his control, that brought him to the point of needing his delusion.

I wish to God that people as a whole would come to a breaking point in their understanding not just of mental illness, but of social circumstance and extremes of emotion, and anything at all that takes more than mere desire for change to be meaningfully altered. The usual impulses in such cases include medicating the symptom and blaming the victim so as to demand change without personally investing anything in the outcome. But it seems clear to me that real change for the better comes only with time, and only when there is outside stimulus to trigger it and nurture it. The writer of the film, Nancy Oliver, does not name the town in which it is set. Certainly, the place does not exist. Though the characters are flawed and in need of development, their universal care for one another makes the setting, in effect, utopia. It portrays, in a bizarre and humorous way, an ideal that we can all strive for when we are content in our reality, and hope for when we are lost in our delusions.

Wednesday, February 9, 2011

Academic Rigor

Today’s Morning Edition broadcast featured an interview with Professor Richard Arum of NYU, who has recently done a study ostensibly demonstrating that a “lack of academic rigor leaves students adrift.” When asked about my alma mater in particular, Professor Arum pointedly skirted the issue, instead making the general statement that expensive private universities are slightly more demanding, and elicit slightly more conviction from their students. While that may be true, he left no doubt that the problem is faced across the board by American institutions of higher learning: their students are demonstrating very little improvement in writing and critical thinking skills. I’m very certain that that is a problem among NYU students, as well, but my agreement with Professor Arum does not extend very far beyond the basic recognition of the problem. Much to the contrary of his thesis, I am quite willing to lay the blame for failure to acquire intellectual skills squarely on the students thus failing. Of course, to my mind, the real issue is that they would not fail at that were they not unfairly expected to demonstrate that improvement in the first place. Whereas Professor Arum seems to expect that heightened academic standards and stronger curricula can bring growing numbers of students up to appreciable levels of improvement, I seek to emphasize that so long as enrollment grows, such standards will be increasingly difficult to meet.

No academic standards and no reasonable educational philosophy can improve writing and critical thinking skills among young people who have no interest in acquiring them in the first place. I am growing enormously impatient for the public dialogue about college education to reach a breaking point whereby someone acknowledges that it’s unsustainable and indeed damaging to continue on with the familiar trend of impelling every high school student to go to an institution of higher learning. Some students simply have no interest in college. Why is this a dirty secret that no one sees fit to acknowledge? What’s more, some students are no doubt better off not attending, both for their own sake and for the sake of society at large. Some people can be happy as line cooks and car mechanics, and they can do a damn good job of filling social roles that still need to be filled. And yet we go on emphasizing the crucial importance of a college education, somehow oblivious to the fact that at a certain point, we are going to end up with large segments of society academically trained and doing decidedly non-academic jobs. Like myself, right now, for instance. But you see, in my case, I had a tremendous amount of interest in applying my education to a highly intellectual career path. Others do not have that impulse, and yet they follow the same path, not only wasting their own time needlessly, but bringing down the overall academic standards of the institutions into which they flood.

Professor Arum directly acknowledges that there has been a fifty percent drop in the average number of hours spent in study by students over the course of the past few decades, but apparently he expects us to conclude from that that the demands placed on college students have inexplicably diminished during that time. Is it not more likely that the average level of commitment from students has declined as more and more of them are pushed, against their own inclinations, into academic pursuits? There’s got to come a time when a person like Professor Arum looks at this data and breaks away from the indoctrinated view that more formal education is always good for everybody. There must be.

Monday, February 7, 2011

Super Bowl Ads

I’m the sort of person who likes to watch the Super Bowl for the commercials. However, my enjoyment of advertising is not limited to that occasion, on which it has acquired a reputation for tremendous entertainment value. I find advertising terribly interesting, and I derive a lot of enjoyment from analyzing it – what I think works and what I think doesn’t, and moreover what I think the persons responsible for the advertisement are saying about their target audience and society at large.

I didn’t watch the Super Bowl last night, as I don’t have television, and haven’t for quite some time. I did, however, take a look at a handful of the commercials online today. Of those that I viewed, I found the spots for Groupon and Living Social to be the best, both on point of humor and evident effectiveness. That was remarkable to me, because these are the newest companies represented, and indeed the newest kinds of companies in the current market. Concordantly, it seemed to me that they both produced commercials very specifically geared to a new generation of consumer.

Whereas the other spots that I sampled seemed fairly ordinary and non-adventurous with their content, the Groupon and Living Social ads seemed to be taking chances that might have alienated certain viewers, but likely not those that could be expected to utilize their services. The Groupon ads both made use of the same premise, masquerading as public service announcements for several seconds before effectively disregarding the plight of the whales and the Tibetan people in order to laud a deal related to each of them that the spokesperson had acquired through Groupon. The campaign runs the risk of being accused of insensitivity, but I think it adeptly walks that line without crossing it. The gamble at play here is, I think, an understanding about the social character of highly modern consumers, and I think the Groupon ads do a good job of identifying their target audience as the sort that would be likely to take an interest in social and environmental issues, but not in a humorless way. I take the makers of these spots to be assuming that the persons they are seeking to reach do not take themselves too seriously, and can laugh over their own causes – that they will both give those causes their attention and set them aside when they're not an immediate concern, in order to take a nice whale-watching trip or have a Tibetan meal. It may in fact be a jaded perspective, or it may be a livable and realistic one, but in any event, I agree with the implicit claim that it's characteristic of the current generation.

Living Social goes another route, and puts itself at risk of being accused not of a deficiency of sensitivity, but of an excess of it. They present a burly, reclusive man at the start of the thirty-second spot, and show him discovering Living Social and being exposed to a wealth of new activities and products, which change his appearance until, in the final reveal, he approaches a classy bar dressed as a woman. I imagine that there must have been some trepidation about the idea of portraying transvestism positively during the nation's most-watched sporting event. But anything with such a large audience is likely to have a diverse set of viewers, and Living Social did a fine job of zeroing in on those of them that would be likely to use their service, namely young, urban, open-minded consumers. The ad strikes me as a skillful act of selective alienation, with its makers recognizing at the outset that they were not going to reach everybody, and so making an ad that would be appealing only to the emergent market that their similarly nascent business is trying to tap. It is probably the case that only people who are okay with alternative lifestyles are likely to utilize Living Social.

I think it is interesting that the youngest companies have done some of the best jobs at trying to appeal to the youngest consumers. They do not have entrenched models for their advertising, and they may well have hired young firms to craft the commercials for them. It makes good sense that what is new in the marketplace of goods and services would mesh best with what is new in the marketplace of social ideas.

Thursday, February 3, 2011

Film Analysis: Daybreakers

I stumbled upon the trailer for Daybreakers online quite some time ago, and immediately added it to my Netflix queue. I’ve finally gotten around to watching it, having found myself in the mood for a film rooted in horror. The reason I was so eager to see this one in the first place, though, was that the trailer made it quite apparent that the screenplay used the vampire phenomenon as a metaphor for modern dependence on scarce resources, particularly foreign oil. Watching the film, I found that that metaphor was presented in such obvious terms as to not call for any real comment.

It is not until the resolution to the plot that the film comes to say much more than "We can't live without this, it's killing people, and we're running out." I would point out, then, that as with any film analysis I do, this one makes no effort to conceal plot points for anyone who has not seen the movie.

The basic background of the story is that by the year 2019, the world has become overrun with vampires, such that they are now the dominant society, with human beings either farmed for their blood or in hiding somewhere out in the countryside. The central protagonist is a vampire hematologist hard at work on the production of a synthetic substitute for human blood – the real thing being distributed like any consumer good in ordinary human society, and, as is painfully obvious, rapidly running out. We learn in time that the doctor is driven not only by concerns over the survival of his species, but by a moral imperative to protect the victims of systematic blood harvesting. In a plot point that I was pleased to discover closely parallels my own current position as a vegetarian sausage maker, the main character, who even shares my first name, is said to be employed by a company that makes its money through the capture and killing of human beings, while personally abstaining completely from the drinking of human blood and relying instead on that of less nourishing animals.

The situation of increasing scarcity and the imperative search for an alternate source of the same basic good blatantly mimics the familiar energy crisis and the increasing emphasis on solar power, wind turbines, nuclear plants, and the like. Scatterings of dialogue throughout the film present the notion that a replacement for human blood will never truly solve the vampires’ problem, which is, ultimately, that they (read: we) are all vampires. The implicit thesis thus seems to be that green technology can be nothing more than a short term fix, with the underlying problem being society’s insatiable demand for energy. In the film, it is taken for granted that there is simply no alternative to vampirism until Ed, played by Ethan Hawke, comes in contact with a man who goes by the name of Elvis, played by Willem Dafoe, who had been a vampire until a non-lethal dose of sunlight brought him back to life and mortality. At this point, Ed takes up with a group of human hold-outs and endeavors to recreate the cure.

Also at this point, the major setting changes, and we find ourselves transported from a sprawling metropolis filled from top to bottom with fluorescent light to a bucolic, starlit vineyard supporting a small community of friendly, driven people. This is the first distinct thematic push beyond the very basic metaphor, and the filmmakers seem to begin to advocate returning to a largely rural way of life as a means of reducing energy demand and strengthening community ties. This may seem simple and naïve at first blush, but as the film goes on, it becomes easier to conclude that the alternative being hinted at is a bit more nuanced than the notion of everyone just dropping everything and taking off for the hills and planting a family garden. Ed manages to make himself human again, but it requires risk, sacrifice, experimentation, multiple trials, and enormous pain.

But it is with the eventual conclusion that it becomes clear that the future envisioned for us is not to be expected to be easy, straightforward, or pleasant. Ed discovers that the cure for vampirism can be spread through the blood of a former vampire, and subsequently tricks his unscrupulous employer into biting him and unwittingly turning himself human again. The thematic statement that I take from that scene is that once someone in a position of power learns the alternative to our old ways of living and spreads that thinking elsewhere in the upper class, vested interests cannot reject it. The idea is too powerful, too necessary to not take hold against any efforts to suppress it.

And yet the transition threatens to make the entire world briefly much worse, as the lingering reliance on scarce resources yields a terrible upsurge in bloodshed before the new humanity can take hold. Once human blood courses through the veins of a few former vampires, they are ravenously attacked by the others, who are still in desperate need of it. The cure passes from one to another, and all continue to fall upon one another until only a handful of men are left in the room, surrounded by the ravaged bodies of those who had simply purged themselves of their need earlier, and shaken by what they have done. The suggestion is evidently that even once some of us have solved our energy and food crises, wars will go on escalating and the strong will continue to exploit the weak.

It is not a rosy picture, but ultimately it may be a realistic one. And when the final shots show our heroes greeting the sunrise over the carnage left by the old world and later speeding away from the vampiric city and back to their modest, self-reliant, and decidedly human rural setting, we are meant to think that it is the picture of a lovely future that makes the awful suffering that will come first worthwhile.

Wednesday, February 2, 2011

Buffalo Winters

After weeks of hearing about severe winter weather throughout much of the east coast, I have finally seen Buffalo struck with the first snowfall that my high standards will accept as extraordinary for the region. It was about seven-thirty when my friend left my apartment tonight. A few minutes later, she called me to apologetically ask that I come help her get her car out from where it had been plowed in, a street away from me.

After I put on my coat and traction-less work boots, and ran to the scene with my laughably small snow shovel, she would not stop either thanking me or apologizing. But she's my friend. The idea that such a simple, obvious, and necessary favor would be in any sense a burden or inconvenience is just absurd. I had to remind her that it is truly my pleasure to help a friend in need. In fact, if she had been a complete stranger, and I'd just happened to be passing by while carrying a spatula, I would have been undeniably eager to leap to her aid. That's the way we all are in this town, isn't it? I spent every winter of my childhood hearing the reinforced narrative that the winter weather in Buffalo does wonders to bring to the fore the friendliness and compassion of the local population. When the going gets tough, we all pitch in and help one another.

Bullshit. When I finished digging out my friend, she put the car back into park for a moment and climbed out, saying, “I have to give you a kiss now.” As she came close, I looked over her shoulder and then turned to her to whisper before she drew back, “You know, they say Buffalo is the City of Good Neighbors?” She laughed and pointed out that she had been thinking exactly the same thing. While she had kept shifting gears and easing on the accelerator, I had scraped at the snow in a way reminiscent of scooping ice cream with a thimble, occasionally stopping to try to dig my piss-poor footwear into ice to singlehandedly push the car by its fender. All the while, she and I had both repeatedly taken note of the five or six people standing within twenty-five feet of us, evidently all together, some of them working with two colossal snow shovels, others just standing nearby or seated in a car across the street.

Another set of hands or a better tool, and we could have had my friend's car out of its trap in thirty seconds flat. I didn't mind undertaking the task alone, but it just seemed to me that there was something almost aggressive about the neglect of someone so near at hand with such an easily fixable problem. No one else even needed the least bit of assistance, my friend being the only one on the street who had not taken advantage of the local church to park in its lot. Is this the famous Buffalo neighborliness? Do I just happen to repeatedly run into the rare exceptions to the rule, or does that self-image that the city repeats like a mantra just not hold up to scrutiny?

I could springboard from here into pages upon pages about the empty-headed optimism that locals have about this area, and perhaps I will in fact go on about it in the near future. But for now, suffice it to say that on point of economics, cultural progress, and certainly the moral character of the population, proud Buffalonians often follow the same trend of convincing themselves that it is a terrific place by focusing exclusively on the positive. Frequently, it seems, people with such a doting perspective think the town's virtues stack up nicely against those of other American cities for an equally ridiculous reason: they've never lived anywhere else.

I have. I'm not a well-traveled man, but I've lived in New York City and in Boise, Idaho, and I've spent sufficient time in Washington D.C., Arkansas, Oklahoma, and Seattle to have gotten a fair sense of the character of those places and their locals. In the many years since leaving Buffalo and slinking back, I have never for a moment come under the impression that the residents of this city are somehow kinder, more generous, more outgoing, or otherwise better neighbors than in even one of the other places I've experienced. Not even of Manhattan would I say that it is less neighborly than Buffalo, and Manhattan has a reputation for being full to the brim with rude bastards.

The stereotypes reported for both sides of the state are equally unreliable, and probably for interestingly contrasting reasons: Buffalo's designation as City of Good Neighbors was evidently a creation of local residents striving to build up the place in their own minds and in those of potential visitors. New York City's reputation for being brusque and unfriendly is no doubt traceable to the reports of tourists who do not know how to integrate into the pace and local culture of the metropolis. I would hazard a guess that stereotypes describing virtually any city are deeply flawed, and for the same simple reasons. People have a habit of making the best of the situations that are familiar to them, and in some cases inescapable. We structure our sense of value around the place and the circumstances in which we were raised, and if we are not sufficiently socially adaptable, being confronted with the unfamiliar is functionally no different from being confronted with the immoral. Speed and directness can seem aggressive if you've always been familiar with a slower pace. And where we have our own sense of value, but also find ourselves tied to a particular place, we might be inclined to find examples of those values in our local circumstances, and conclude thereby that they are representative of the place where we live.

That tendency is a great coping mechanism, but it is not a great representative of reality. It is, instead, an endemic problem of self-delusion, which is present everywhere, but for which I see particularly poignant examples in the way people who love Buffalo talk about Buffalo. In that specific case, I eagerly wish for that breaking point wherein some lifelong Buffalonian sees one of his good neighbors look over at him as he struggles with something and say, "Nah, fuck that guy, he can take care of himself." But in broader terms, this topic speaks to the social breaking point that I will spend my life looking toward and trying to provoke at every turn. We must, all of us, stop accentuating the positive, stop looking on the bright side and convincing ourselves that, heck, things are really pretty great around here. I want to write a cynic's manifesto to convince people to look into the shadows, acknowledge every flaw in the social fabric around them, and then begin agitating for change. There must come a breaking point at which we decide that we can no longer swallow any more saccharine, that we're too dizzy from constantly turning away. For the simple fact is that there is no solving any of our problems when coping with them means forgetting they exist.

Tuesday, February 1, 2011

Rejecting the Cause After Its Effect

Peter J. Boyer, in a New Yorker article on the effects of Roger Ailes’ acquisition of the local newspaper for the Hudson town of Philipstown, makes one comment toward the end of the piece that strikes me as especially insightful.

“Ailes plainly wished to provide for his family a particular vision of small-town America, one shaped by nostalgic vision, which is not without irony. He regrets the sway of the local environmentalists, but it was their influence that made the area a sort of place where Roger Ailes would wish to live. Without them, the view from the Aileses’ Hudson aerie would include a Con Edison hydroelectric plant.”

To my mind, this observation, though not stressed anywhere else in the article in which it appeared, speaks to a broadly relevant issue of the well-meaning hypocrisy often underlying people’s worldviews, particularly conservative ones. It is something that I’ve recently noted with such aggressive disdain that my mind runs immediately to other examples of the same trend, which go on pricking at my brain and need to be acknowledged for the consistent logical failings that they are.

Some people, often very vocal ones, have a tendency to laud certain virtues of society, environment, economy, etc., while almost simultaneously attacking the trends or institutions that can be credited with creating those circumstances. I recall a clip of Glenn Beck deriding a new food safety bill by pointing out that the United States has the safest food in the world. For Beck, that fact evidently serves as proof that the bill is effectively redundant, and it never seems to cross his mind that bills of that sort are the very things that create and maintain the safety of American food. I think also of the anti-vaccine movement, which sometimes reasons that there is no cause for widespread vaccination because serious infections are not widespread in American society. In fact, that basic mode of thinking is something that I can find lurking in a wide variety of conspiracy theories. There is confusion about, or neglect of, the cause behind an observed effect, so the paranoiac concludes that there must be some sinister alternative rationale for a social program or act of legislation, or what have you.

I acknowledge that it is an easy error to make. That is, it is an easy error to make if you don’t reflect on it much. We are used to causality being very easy to observe, especially in its base forms. “I hit you, you fall down.” But there is suddenly a higher demand for analysis when the cause has already happened and the effect lingers. It requires understanding not only what is occurring, but what has occurred, and may even call for deduction to work out the particulars. “You are on the ground; something must have hit you; I wonder what it was.”

I don’t for a moment think this is a fallacy peculiar to conservatives or conspiracy theorists. It is a trapping of any social or political myopia, wherein we draw conclusions based on what is observed presently and believed constantly, rather than on a farther-reaching analysis. It may well be only because of my own liberal leanings, which make it natural for me to scrutinize the flawed reasoning of the opposition, but it does seem to me as if this sort of misunderstanding of causality is, in socio-political contexts, most common among certain quarters of conservatism, namely those who harshly criticize social programs, environmental campaigns, and the like primarily because their worth to society is not immediately obvious, their effects being subtle, occasional, and best understood only in retrospect.

So this is a breaking point that I’m hoping for now: when people take a long look at their own values and begin to reevaluate their ideologies if they should find that their criticism of something like environmental activism stands in contradiction to their admiration for something else, like the unspoiled beauty surrounding one’s own $6.2 million property. It is vitally important that we understand as best we can the full circumstances surrounding the situations that we observe, but for far too many people, all need for inquiry dissolves in the face of ideology.