Thursday, May 31, 2012

Romney

But also the candidate and the campaign's lack of compunction, lack of
remorse or even explanation when they get caught lying. They don't correct
it when they get called out. They don't seem to feel bad about it. They
do not seem to see it as a problem.

Lenny Bruce

Are there any niggers here tonight? Could you turn on the house lights, please, and could the waiters and waitresses just stop serving, just for a second? And turn off this spot. Now what did he say? "Are there any niggers here tonight?" I know there's one nigger, because I see him back there working. Let's see, there's two niggers. And between those two niggers sits a kyke. And there's another kyke— that's two kykes and three niggers. And there's a spic. Right? Hmm? There's another spic. Ooh, there's a wop; there's a polack; and, oh, a couple of greaseballs. And there's three lace-curtain Irish micks. And there's one, hip, thick, hunky, funky, boogie. Boogie boogie. Mm-hmm. I got three kykes here, do I hear five kykes? I got five kykes, do I hear six spics, I got six spics, do I hear seven niggers? I got seven niggers. Sold American. I pass with seven niggers, six spics, five micks, four kykes, three guineas, and one wop. Well, I was just trying to make a point, and that is that it's the suppression of the word that gives it the power, the violence, the viciousness. Dig: if President Kennedy would just go on television, and say, "I would like to introduce you to all the niggers in my cabinet," and if he'd just say "nigger nigger nigger nigger nigger" to every nigger he saw, "boogie boogie boogie boogie boogie," "nigger nigger nigger nigger nigger" 'til nigger didn't mean anything anymore, then you could never make some six-year-old black kid cry because somebody called him a nigger at school.
- From Julian Barry's screenplay for "Lenny"

Dangerfield

Date: Mon, 26 Sep 94 14:38:51 PDT
To: Fun_People
Subject: Rodney Dangerfield Bits

[I don't know if this was a transcript of a Dangerfield appearance or just a
clever simulation. It's probably not a transcript because when I've seen him
live I haven't laughed, but I did laugh at this. Then again, maybe it's just
something wrong with me... -psl]

Forwarded-by: LeClub International
From: Avi Golden

*I GET NO RESPECT*

"Good crowd... good crowd. I'm telling you I could use a good crowd. I'm ok
now, but last week I was in rough shape... Why? I looked up my family tree and
found out I was the sap."

"I come from a stupid family. During the Civil War my great Uncle fought for
the west!"

"My father was stupid. He worked in a bank and they caught him stealing pens."

"When I was born... the doctor came out to the waiting room and said to my
father... I'm very sorry. We did everything we could... but he pulled through."

"My mother had morning sickness after I was born."

"My father carries around the picture of the kid that came with his wallet."

"When I played in the sandbox the cat kept covering me up."

"I could tell that my parents hated me. My bath toys were a toaster and a radio."

"Some dog I got too. We call him Egypt because he leaves a pyramid in every room."

"What a dog I got. His favorite bone is in my arm!"

"I worked in a pet store and people kept asking how big I'd get."

"One year they wanted to make me poster boy... for birth control."

"I remember the time I was kidnaped and they sent back a piece of my finger to
my father. He said he wanted more proof"

"My uncle's dying wish was to have me sitting on his lap. He was in the
electric chair."

"I went to a freak show and they let me in for nothing."

"I stuck my head out the window and got arrested for mooning"

"Once when I was lost... I saw a policeman and asked him to help me find my
parents. I said to him... Do you think we'll ever find them" He said... I
don't know kid... there are so many places they can hide."

"I remember I was so depressed I was going to jump out a window on the tenth
floor... so they sent a priest up to talk to me. He said... On your mark... "

"On Halloween... the parents send their kids out looking like me. Last year...
one kid tried to rip my face off! Now it's different... when I answer the door
the kids hand me candy."

"It's tough to stay married. My wife kisses the dog on the lips... yet she
won't drink from my glass!"

"Last week my tie caught on fire. Some guy tried to put it out with an ax!"

"For two hours... some guy followed me around with a pooper scooper."

"I met the surgeon general. He offered me a cigarette!"

"A travel agent offered me a 21 day special. He told me I would fly from New
York to London. Then from Tokyo back to New York. I asked him... how am I
supposed to get from London to Tokyo? ... He told me... That is why we give
you 21 days.

"Another travel agent told me I could spend 7 nights in Hawaii... No days...
just nights."

"They say... Love thy neighbor as thy self... What am I supposed to do? jerk
him off too?"

"My wife isn't very bright. The other day she was at the store and just as she
was heading for our car, someone stole it! I said... did you see the guy that
did it? She said ... No, but I got the license plate."

"A girl phoned me and said... Come on over there's nobody home. I went over...
Nobody was home!"

"I went to a massage parlor. It was self service."

"If it weren't for pick-pocketers I'd have no sex life at all."

"One day... as I came home early from work... I saw a guy jogging naked. I
said to the guy... Hey buddy... why are you doing that for? He said... Because
you came home early."

"It's been a rough day. I got up this morning... put on a shirt and a button
fell off. I picked up my briefcase and the handle came off. I'm afraid to go
to the bathroom!"

"I went to see my doctor... Yeah... I told him once... Doctor... every morning
when I get up and look in the mirror... I feel like throwing up; what's wrong
with me? He said... I don't know but your eyesight is perfect"

"I remember when I swallowed a bottle of sleeping pills. My doctor told me to
have a few drinks and get some rest."

"I told my dentist my teeth are going yellow. He told me to wear a brown necktie."

"My dentist found a new way to cover up his bad breath... he holds up his arms"

"My dentist has bad breath... ... Why every time he smokes he blows onion rings."

"My psychiatrist told me I'm going crazy. I told him... If you don't mind I'd
like a second opinion... he said... Alright... you're ugly too!"

Monday, May 28, 2012

Modern art fraud

The Painted Word (1975)



Author Info:
Tom Wolfe
1931-


It is not necessary to read these two books together, but they really do complement one another, and it is when taken together that they make the most powerful case.  The case is that, just as each of us has always secretly suspected, modern art is crap.  In fact, not only is it crap, it is intentionally so, more or less as a calculated insult to our middlebrow tastes.  Indeed, while most of us would consider it the purpose of art to convey beauty, modern artists consider art to be merely a tool for political expression.  Logically then, since most of them are, and were, opposed to our middle-class, democratic, capitalist, Protestant values, modern art is antithetical to virtually everything that most of us believe in.

I say that we have all always intuited that this is true, but it was left to Tom Wolfe, naturally, to declare for one and all that the emperor had no clothes.  He does this most forcefully in the opening lines of Bauhaus, which deals with modern architecture, when he says:
O beautiful, for spacious skies, for amber waves of grain, has there ever been another place on earth
    where so many people of wealth and power have paid for and put up with so much architecture they
    detested as within thy blessed borders today?

But the reasons for the sorry state of the arts are most clearly explicated in Painted Word.  The essay therein was occasioned by a Hilton Kramer review of an exhibition of Realist artists.   On the morning of April 28, 1974, Wolfe picked up the New York Times and read the following by Kramer:
"Realism does not lack its partisans, but it does rather conspicuously lack a persuasive theory.  And
    given the nature of our intellectual commerce with works of art, to lack a persuasive theory is to
    lack something crucial--the means by which our experience of individual works is joined to our
    understanding of the values they signify."

Kramer's words brought about an epiphany:
All these years, in short, I had assumed that in art, if nowhere else, seeing is believing. Well - how
    very shortsighted! Now, at last, on April 28, 1974, I could see. I had gotten it backward all along.
    Not "seeing is believing," you ninny, but "believing is seeing," for Modern Art has become completely
    literary: the paintings and other works exist only to illustrate the text.

Painted Word is an extended riff upon this theme--the idea that art had become wholly dependent on theory.  His case builds to the stunning dénouement when an artist named Lawrence Weiner presented the following artwork in the April 1970 issue of Arts Magazine:
1. The artist may construct the piece
2. The piece may be fabricated
3. The piece need not be built

Each being equal and consistent with the intent of the artist the decision as to condition rests with the receiver upon the occasion of receivership.

Concludes Wolfe:
And there, at last, it was!  No more realism, no more representational objects, no more lines, colors,
    forms, and contours, no more pigments, no more brushstrokes, no more evocations, no more
    frames, walls, galleries, museums, no more gnawing at the tortured face of the god Flatness, no
    more audience required, just a "receiver" that may or may not be there at all, no more ego projected,
    just "the artist", in the third person, who may be anyone or no one at all, not even existence, for that
    got lost in the subjunctive mode--and in the moment of absolutely dispassionate abdication, of
    insouciant withering away, Art made its final flight, climbed higher and higher until, with one last erg
    of freedom, one last dendritic synapse, it disappeared up its own fundamental aperture...and came
    out the other side as Art Theory!...Art Theory pure and simple, words on a page, literature undefiled
    by vision, flat, flatter, Flattest, a vision invisible, even ineffable, as ineffable as the Angels and the
    Universal Souls.

And it is upon reaching this final state of pure theory that C.S. Lewis's pessimistic prediction in The Abolition of Man comes to fruition: we as a people, no longer capable of forming coherent judgments about quality, no longer confident enough to differentiate what is good from what is bad, end up being forced to accept any old garbage that is hailed by the critics and forced upon us.

Wolfe is at his wickedly funny, subversive best here, pricking the pretensions of the Art world--artists, critics and patrons alike.  If you want to know why the establishment reacts so angrily to his novels, you need look no further than these two dissections of the tastes, or lack thereof, exhibited by the intelligentsia in Modern Art.  When you pronounce to the world that the opinion makers live in ugly, uncomfortable buildings and decorate their homes with art that is at best a hoax, at worst a pile of trash, you sort of have to expect that the opinions they deliver won't be all that favorable to you.

(Reviewed:29-Nov-99)

Grade: (A+)

  


Friday, May 25, 2012

Profiling debate

Of course I’m aware that a terrorist could place a bomb in an old lady’s bag—and that is why I was careful to say that everyone’s baggage should be screened. But it is very far-fetched to think that jihadist organizations will successfully recruit people of the sort pictured in my original blog post. And if we are concerned that terrorists might kidnap some old lady’s grandchildren and force her to walk through security with a bomb in her girdle—well, that’s what behavioral profiling is for. Presumably, our screeners would find themselves in the presence of one very nervous old lady.
We have to ask ourselves which is more plausible—that terrorists will find it easier to recruit or coerce the least likely suspects, or that they will benefit from our needlessly searching these suspects by the hundreds of millions, year after year? I do not doubt that a profile can be gamed—and this is worth worrying about—but I am more concerned about the risk of airport screeners obviously wasting their time. 
So I hope we can put that bit about mere “correlation” behind us. Generally speaking, we know who we are looking for—Muslim jihadists.
In your article, you declare that my profile isn’t accurate because “it isn’t true that almost all Muslims are out to blow up airplanes. In fact, almost none of them are.” Unfortunately, this gets things exactly backwards. The question is not, What is the probability that any given Muslim is a terrorist? The question is, What is the probability that the next terrorist will be a Muslim? You can bury the signal in as much noise as you want; it will not change the fact that the threat of suicidal terrorism is coming from a single group.
We face an ongoing threat of people bringing bombs onto airplanes. There will surely be a next attempt, and one after that, and one after that. Even as you and I have been conducting this debate, we have been hearing reports about new and improved “underwear bombs” and about the prospect of terrorists having IEDs surgically implanted in their bodies. How likely is it that these ghoulish attempts to murder innocent people will come from Muslims waging jihad? It isn’t 1 in 80 million, or 1 in 8 million, or even 1 in 8. You admit that the likelihood is “high.”
Your concern about the low base rate of terrorism, leading to the problem of too many false positives, seems misguided. The problem of base rate is often very important, of course, but not in the case of airport security. For readers who might be unfamiliar with Bayesian statistics, let me briefly illustrate what I think you were trying to do with your math:
Let’s say I get a blood test designed to screen for some terrible disease and it comes back positive. My doctor tells me that this test is 99% accurate and only produces false positives 1% of the time. Does this mean that that I have a 99% chance of having the disease? No. We need to know how prevalent this disease is in the population of people who share my risk factors (the base rate). If the disease is rare, the chance that I have it will still be quite low. A false-positive rate of 1% will produce 100 errors per 10,000 tests. If the disease only affects 1 in 10,000 people like me, my actual chance of having the disease (given that I tested positive) will be 1/101—or slightly less than 1%.
This seems to be the kind of sobering and counterintuitive demonstration of the “base rate fallacy” you were attempting in your article. The lesson that you and many others seem desperate to draw is that a little Bayesian analysis proves that profiling Muslims makes absolutely no sense. But what is interesting about false positives in my medical example is that the consequences of entertaining them (i.e., believing that one has a deadly illness) are huge, and learning the base rate completely changes one’s sense of the risk. This is not the case with the threat of Muslim terrorism.
What is a false positive in the context of airport security? It might be nothing more than asking a person a follow-up question or performing a hand inspection of his bag. We are not talking about imprisoning people who fit the profile at the airport. A concern about false positives only makes sense if paying closer attention to innocent Muslims has some truly terrible consequences. You suggest that it will have two: it will produce a backlash in the Muslim community and allow terrorists to game the system (rendering the profile inaccurate). I am skeptical about both these claims for reasons that I hope we will discuss.
Of course your base rate argument could also be used to justify taking no security precautions whatsoever—which I’m beginning to worry is what you recommend. In your essay, you assume that false positives (screening innocent Muslims) are so unpleasant as to be morally unacceptable, while false negatives (letting the occasional bomb-laden terrorist onto an airplane) aren’t so bad that we should seek to prevent every instance of them. I am open to the idea that we are irrationally afraid of airline terrorism (and airplane crashes generally), but you have not made this case. And I would point out that our horror at the prospect of planes exploding at 30,000 feet is part of the cost of terrorism that we must consider. If, as result of some quirk in human psychology, a few downed airplanes will cripple our economy in a way that a few blown up trains never will, then it is rational for us to have a zero-tolerance policy regarding bombs on airplanes.
BS:  It turns out designing good security systems is as complicated as I make it out to be.  Witness all the lousy systems out there designed by people who didn’t understand security.  Designing an airport security system is hard.  Designing a passenger profiling system within an airport security system is hard.  And I’m going to walk you through an analysis of your security design.
In your response above, you make a big deal about two points that are unimportant.
One, it doesn't matter whether the correlation between Muslim and terrorist is a causal relationship. We're talking about a detection system.  You're proposing that we can detect attribute A (terrorist) by using attribute B (Muslim).  That's what matters, not whether there's a causal arrow or which direction it points.  In using the word "correlation" I was giving you the benefit of the doubt; it's a lower bar.
And two, “the probability that the next terrorist will be a Muslim” doesn’t matter either.  To demonstrate that, for now I’ll just assume the probability equals one.
To analyze your system, I first need to describe it.  In security, the devil is in the details, and it’s the details that matter.  Lots of security systems look great in one sentence but terrible once they’re expanded to a few paragraphs.
You’re proposing an airport passenger screening system with two tiers of security.  Everyone gets subjected to the lower tier, but only people who meet your profile, “Muslims, or anyone who could conceivably be Muslim,” would be subjected to the higher tier.
SH:  Yes, and anyone else whose bag or behavior seems to merit follow up (e.g., the Hindawi affair).
BS:  That’s behavioral profiling, completely different from what we’re discussing here.  I want to stick with your ethnic profiling system.
SH: Well, I disagree. And the Israelis, who are generally credited with being the masters of behavioral profiling, appear to disagree as well. A person’s behavior can only be interpreted in context. What does a man’s sweating profusely and looking agitated mean? It means one thing if he is a morbidly obese senior from Alabama traveling with his wife and their church group, who is struggling to get all the trinkets he purchased in Jerusalem into a bursting suitcase; it means another if he is a 23-year-old man traveling on a Pakistani passport who is doing his best to not make eye contact with anyone. The distinction between behavioral profiling and everything else that can be noticed about a person is a myth. However, we can table this issue for the time being.
BS: You can disagree, but I assure you that the Israelis understand the difference between ethnic profiling and behavioral profiling.  Yes, they do both together, but that doesn’t mean you can confuse them.  But let’s stick to topic: ethnic profiling.
In practice, this would mean that everyone would go through primary airport screening: x-ray machine for hand luggage, and the magnetometer or full-body scanner for their bodies. But when primary screening results in an anomaly—this is generally because the magnetometer beeps, the full-body scanner shows something, or there’s something suspicious in an x-ray image—in some cases people who don’t meet the profile would be allowed through security without that anomaly being further checked.
SH:  Yes, depending on the anomaly.
BS: TSA screeners would have to determine, based on predetermined but subjective criteria, whether or not individuals meet the profile.  You are not proposing this because it will improve security.
SH: On the contrary, I believe it will improve security. Let’s say that in each moment the TSA has $100 worth of attention, and they can spend it any way they want. A dollar spent on a toddler whose family does not stand a chance of having turned him into an IED is a dollar wasted (i.e., not spent elsewhere).
BS:  That’s also a separate issue.  We’re comparing profiling with not profiling. You are essentially making an efficiency argument in support of profiling: “I am more concerned about the risk of airport screeners obviously wasting their time.”  This efficiency, you argue, could result in either cost savings as TSA staffing was reduced, or in increased security elsewhere as superfluous screeners were retasked to do other things that might improve security.  But that is independent of, and irrelevant to, the analysis of the proposed security system.  The proposed benefit of the profiling system is the same security at reduced cost, and reduced inconvenience to non-profiled people.
SH:  I agree. I would just emphasize that I think of efficiency in terms of increased security, not in terms of reducing costs. Efficiency allows for more eyes on the problem—another person watching the scanner images, another person able to study the behavior of a suspicious person. Every moment spent following up with the wrong family is not just a moment in which the line slows down—it’s also a moment in which someone or something else gets ignored.
BS:  Of course.  Again, when you have an efficiency gain you can either realize it by reducing your cost or by doing more of what you’re already doing.  But that potential additional security has nothing to do with the efficacy of profiling.  If we believe that an extra $10 of attention will make us safer, we can either add $10 to the TSA’s budget, or save $10 by increasing efficiency somewhere else.
SH: Now I see what you are getting at—and I’m prepared to agree for the sake of letting you continue with your analysis. But I want to point out that there might be more to it than the question of efficiency. I think a policy of not profiling—that is, remaining committed to the fiction that we have no idea where the threat of suicidal terrorism is coming from—might cause screeners to be much worse at their jobs than they would otherwise be. Gains in efficiency due to profiling might not just be a matter of “doing more of what you’re already doing.” It could be doing more of what the Israelis are already doing—which I don’t think entails their lying to themselves about the source of the problem.
BS: You are, however, implying a different type of profiling system: to take a security procedure now randomly applied—swabbing luggage for explosive residue, for example—and apply it according to the profile.  Leave that aside for now; I’ll come back to it later.
One piece of security philosophy to start.  Complexity is the enemy of security.  Adding complexity to a security system invariably introduces additional vulnerabilities (see my 2000 essay).  Simple systems are easier to analyze.  Simpler systems have fewer security assumptions.  Simpler systems are more robust against mistakes in analysis.  And simpler systems are more secure. 
More specifically, simplicity tends to completely remove potential avenues of attack.  An easy example might be to think of a building.  Adding a new door is an additional complexity, and requires additional security to secure that door.  This leads to an analysis of door materials, lock strength, and so on.  The same building without that door is inherently more secure, and requires no analysis or assumptions about how it will be secured.  Of course, this isn’t to say that buildings with doors are insecure, only that it takes more work to secure them.  And it takes more work to secure a building with ten doors than with one door.  I will appeal to simplicity multiple times in any analysis of your profiling system.
Let’s get started, then.  Security is always a trade-off: costs versus benefits.  We’re going to tally them up.
The primary benefit to your system is increased efficiency, but it's not as much as you think. In Kip Hawley's memoir of his time as head of the TSA, he talks about the shoe scanning process. After Richard Reid's failed shoe-bombing attempt in late 2001, TSA screeners started requiring people wearing thick-heeled shoes and boots to remove them and put them through the x-ray machines.  They deliberately chose the most accurate correlation in order to minimize passenger inconvenience. But when they revised the rule to require everyone to take their shoes off, checkpoint throughput increased.  There is an inherent inefficiency to non-uniform procedures, and when passengers knew what to expect, there was less delay.
Your system is different.  The non-uniformity is in the resolving of anomalies, not in the basic security procedures that everyone has to go through. There would be an efficiency benefit resulting from your system, but it would still be diminished because passengers wouldn’t know what to expect.
Simpson had hired a team of high-profile lawyers, including F. Lee Bailey, Robert Shapiro, Alan Dershowitz, Robert Kardashian, Gerald Uelmen (the dean of law at Santa Clara University), Carl E. Douglas and Johnnie Cochran. Two attorneys specializing in DNA evidence, Barry Scheck and Peter Neufeld, were hired to attempt to discredit the prosecution's DNA evidence,[9] and they argued that Simpson was the victim of police fraud and of what they termed sloppy internal procedures that contaminated the DNA evidence.[15]
Simpson's defense was said to cost between US$3 million and $6 million.[26] Simpson's defense team, dubbed the "Dream Team" by reporters, argued that LAPD detective Mark Fuhrman had planted evidence at the crime scene. LAPD Criminalist Dennis Fung also faced heavy scrutiny. In all, 150 witnesses gave testimony during the trial.

Shades of grey

A good number of libraries are refusing to order the books, each saying the books "violated its no-erotica policy" or "did not meet the standards of the community." The National Coalition Against Censorship's executive director, Joan Bertin, said it was "egregious" for a library to remove a book from its adult section. "There are some possible arguments for trying to keep kids away from certain kinds of content, but in the case of adults, other than the restrictions on obscenity and child pornography, there's simply no excuse," she said.

Otto Zehm case

Otto Zehm (1970–2006) was a mentally disabled man from Spokane, Washington who died on March 20, 2006, two days after being beaten, tasered multiple times, and improperly restrained by seven Spokane Police Officers.[1] Zehm committed no crime and on May 30, 2006, the Spokane County coroner ruled the death a homicide.[2][3]

Poverty of Affirmation?

But then Smiley says there is a poverty of hope, truth, affirmation, and that those are just as important as economic poverty? Dude, Smiley is insane.

Cornel West

Or I could easily have titled this blog “the most amazing lecture I’ve ever heard in my life.” I’m biased, sure, but hearing Cornel West speak in person is something I would recommend to anyone and everyone. I had heard that Professor West and Mr. Smiley would be speaking in Seattle, and that included in the ticket price was a copy of their newest book, The Rich & the Rest of Us: A Poverty Manifesto. I’d been watching my budget and was waffling on whether to buy a ticket – I ended up caving, thanks to some pressure from friends – and I can only say that I am so glad I did.

The content of their lecture, as well as their book, was phenomenal. It strips the partisan politics off the issue and thrusts the facts of poverty in the United States, raw and naked, right in your face. (Admittedly, part of the reason this blog post has taken a while to get out is that every time I look at the book for a couple of facts to throw in with the post, I keep getting distracted reading it!) Both West and Smiley are heavily inspired by the life of Dr. Martin Luther King, Jr., with Mr. Smiley calling MLK “the greatest American this country ever produced,” and despite his faults, I would have to agree. The things that make MLK an amazing individual, in my opinion, are the things that can make any American great – and if I may be so bold, can save this country from this slippery slope on which we find ourselves.

In 2011 Dr. West and Mr. Smiley embarked on an 18-city journey across the United States, The Poverty Tour: A Call to Conscience, shedding light on and bringing awareness to the state of poverty in America:

Nearly 50 million Americans now live in poverty. If that doesn’t bother you, then let this be a call to your conscience indeed.

Aside from the PBS special linked above, Smiley and West’s book collects the information and calls to action from their experiences in 2011, ranging over a variety of related issues tied into poverty in America – that it’s not simply a poverty defined by lack of money, but also:

Poverty of Opportunity
Poverty of Affirmation
Poverty of Courage
Poverty of Compassion
Poverty of Imagination
Perhaps the best quote I have since come across in the book that defines the issues is this:

“The level of inequality in this country has gotten so far out of hand, the quantity of compassion so thoroughly diminished, that the very future of American democracy is at stake.”
Tavis Smiley posited that poverty in America is a threat to American security, and I would have to agree. The day that we as a society actively disengage from compassion and action to protect the least amongst us, is the day that we begin actively participating in hurting our own society as a whole. Well, ‘as a whole’ in the sense of those who make less than the richest 1% of Americans, or less than $380,000 a year.

I know, some of you might be going “now hang on a minute, Nina, don’t go off on us now about the 99% or the 1% or all that jazz.” I won’t – not yet. Nor do I think that any discussion about economic issues using those labels (99-percenters, 1-percenters, etc.) is necessarily obligated to make judgment calls about or descriptions of personal character – I have no use for demonizing or labeling someone’s character. Montanan though I may be, I find mud-slinging and name-calling about as useful as gun-slinging – that is, antiquated, unnecessary, and dangerous.


Dr. Cornel West at the book signing.
What I feel West and Smiley are calling us to as a society is indeed a call to conscience. The faces of the poor are no longer solely the impoverished minorities that many amongst the well-to-do in society cannot relate to – or that even some minorities in America who may be well off themselves cannot relate to. The poor in America today are not relics from the Great Depression, or merely miscreants and addicts. In today’s economy, there are a growing number of people – individuals and families – who are one paycheck away from being homeless.

For fear of this becoming a post that goes on for too long, I will instead share some quotes from the part of West and Smiley’s book I have read thus far. I hope it inspires you, angers you, pisses you off, or at least moves you in some way.

“Can we still claim ‘the greatest’ status when one out of two Americans is living in poverty or near the poverty line? Should our reputation as a global leader legitimately come into question when, every quarter, millions more of our citizens face the haunting specter that they, too, may soon join the ranks of America’s poor? How patriotic is a nation where veterans are more likely than non-veterans to be homeless?

“… What are the real choices and chances available in our democracy for average citizens when the wealthiest 1 percent of U.S. citizens controls nearly 42 percent of the wealth, or when the top 400 citizens have wealth equivalent to the bottom 150 million citizens? Is this still the land of opportunity when nearly 14 million Americans are ‘officially’ unemployed, and millions more are underemployed, to say nothing of the countless millions who have completely given up looking for work? The myth of American exceptionalism, of being the best of the best, overshadows an inconvenient truth. We are a nation where poverty of opportunity is dangerously close to becoming a permanent reality.” (pgs 44-45)

“…How can America be ‘first’ if the least among us are our last collective concern? What does it say about the priorities of a nation that allows 53% of its children–the most vulnerable and valuable — to live in or near poverty?” (pg 55)

 

For further information on Cornel West & Tavis Smiley, you can view this video clip of them on Democracy Now, as well as read this article about them on the Huffington Post. They have a weekly radio show, Smiley & West, on Sunday afternoons. This author would highly recommend their new book, The Rich and the Rest of Us: A Poverty Manifesto to every American.

Thursday, May 24, 2012

Dylan

“Visions of Johanna.” This is underrated only to the extent that everyone in the world doesn’t acknowledge that it’s obviously his best song. It’s about nothing, it’s about everything; it moves, it stands still, it runs like blood, it smells like smoke, copper, eggplant. Actually that last sentence is my imitation of Chronicles writing. But seriously—of the many attempts by rock-and-roll poets to write something deep, this is as deep as it gets. Indecipherable, and yet you feel you know exactly what he’s talking about.

Manson

Eight hours of audio never before heard by law enforcement has been requested by the Los Angeles Police Department, and it could link followers of the Manson Family to unsolved murders.

In a letter dated March 19, LAPD Chief Charlie Beck requested "eight hours or so" of audio recordings between attorney Bill Boyd and his then-client Charles "Tex" Watson, according to a U.S. bankruptcy filing.

Watson, the former right-hand man of Charles Manson, is currently serving a life sentence for his involvement in the 1969 Manson Family murders.

Although the LAPD has yet to receive the recordings, police believe the interviews could contain information about unsolved murders.

"The LAPD has information that Mr. Watson discussed additional unsolved murders committed by followers of Charles Manson," Beck wrote in a request to a trustee with the U.S. Department of Justice.

Document: LAPD Chief's Letter Requesting Audio Recording (PDF)

The LAPD's request corresponds to the liquidation of Boyd's Texas-based law firm as part of a bankruptcy proceeding. Boyd, who died in 2009, represented Watson beginning in 1969 and "for some time thereafter," according to Beck.

"It is requested that the original recordings be given to the LAPD in order to determine if information regarding unsolved murders was included in the recordings. The LAPD, Robbery-Homicide Division will be investigating Mr. Watson's recordings…" wrote Beck.

A bankruptcy court hearing is scheduled for Tuesday in Plano, Texas, to determine if the audio will be given to police.

The recordings remained private until September 1976, when Watson authorized their sale to author Chaplain Ray Hoekstra to help cover unpaid legal fees. Hoekstra used the material for his 1978 book "Will You Die For Me?"

Watson was sentenced to death for the murders of Abigail Ann Folger, Wojciech Frykowski, Thomas Jay Sebring, Steven Earl Parent, and Sharon Tate Polanski. California temporarily suspended the death penalty in 1972, and Watson has been serving a life sentence ever since. He was most recently denied parole last November.


High speed film

"High speed film, thirty years later. The entertainment business keeps holding things back. Cinerama, three-D, high speed film.

"You could literally hear a gasp from the audience when they were shown the difference between 24 frames and 48 frames. And they liked 60 frames even better."

Smoking

As soon as I quit smoking, my lungs will start to regenerate.

Wouldn’t that be nice? But, unfortunately, it doesn’t happen. Delicate lung tissue that has been destroyed will stay that way. If you smoke, make every effort to avoid further lung damage by quitting smoking, avoiding lung irritants and infections, and taking the best medications available to keep your lungs stable.

Diva worship

SOMETIME IN MY VERY early adolescence, I acquired, while living in the very heart of Appalachia, a land of lazy southern drawls, a British accent. No one around me had a British accent; my father was from Chicago Heights, my mother from Braggadocio, Missouri, and my peers were budding good old boys whose fathers drove tractors and pickup trucks and spoke in an unmusical twang that I, a pompous fop in my teens, found distinctly undignified.

Given the hearty, blue-collar community in which I grew up, the origin of my stilted style of delivery remained a complete mystery to me until, as an adult, I began to watch old movies again. Over and over in the voices of film stars as different as Joan Crawford in Mildred Pierce and Katharine Hepburn in Suddenly, Last Summer, I heard the echoes of my own voice, the patrician inflections of characters who conversed in a manufactured Hollywood idiom meant to suggest refinement and good breeding, the lilting tones of Grace Kelly in Rear Window, Bette Davis in Mr. Skeffington, Tallulah Bankhead in Lifeboat, or even Glinda the Good Witch in The Wizard of Oz.

In that tour de force of bitchy camp, The Women, the all-female cast speaks in two distinct accents: the harsh American cockney of the kitchen help, who squabble about the muddled affairs of their wealthy mistresses, and the high-society, charm-school intonations of the Park Avenue matrons who rip each other to shreds in the gracious accents of an Anglophilic argot concocted by the elocutionists at the major studios. Only Joan Crawford, the inimitable Crystal Allen, a social-climbing shopgirl who claws her way up to the top, can speak in both accents as the occasion requires, one for when she is at her most deceitful, hiding her common upbringing beneath the Queen's English of the New York aristocracy, and the other for when she is being her true self, a crass, money-grubbing tart who gossips viciously with her equally low-class cohorts at the perfume counter.

To an insecure gay teenager stranded in the uncivilized hinterlands of North Carolina, the gracious ladies of Park Avenue and Sutton Place embodied a way of life more glamorous and less provincial than his own. The influence of Hollywood films was so pervasive among young homosexuals that it insinuated itself into our voices, weakening the grip of our regional accents, which were gradually overridden by the artificial language of this imaginary elite. Even today I have never succeeded in exorcising Joan, Bette, Grace, and Kate from my vocal cords, where they are still speaking, having left the indelible mark of Hollywood's spurious interpretation of classiness, culture, and gentility branded into my personality. This strange act of ventriloquism represents the highest form of diva worship and is the indirect outcome of my perception in my youth that, as a homosexual, I did not belong in the community in which I lived, that I was different, a castaway from somewhere else, somewhere better, more elegant, more refined, a little Lord Fauntleroy marooned in the wilderness. In my unconscious imitation of the voices of the great film stars, I was seeking to demonstrate my separateness, to show others how out of place I felt, and, moreover, to fight back against the hostility I sensed from homophobic rednecks by belittling their crudeness through unremitting displays of my own polish and sophistication. I was not attracted to Hollywood stars because of their femininity, nor did my admiration of them reflect any burning desire to be a woman, as the homosexual's fascination with actresses is usually explained, as if diva worship were simply a ridiculous side effect of gender conflicts. 
Instead, it was their world, not their femininity, that appealed to me, the irrepressibly madcap in-crowd of Auntie Mame, of high spirits and unconventional "characters," of nudists and Freudians, symphony conductors and Broadway prima donnas, who lived in a protective enclave that promised immunity from shame and judgment, beckoning me with its broadmindedness and indulgence of sexual eccentricities.

For me and countless other gay men growing up in small-town America, film provided a vehicle for expressing alienation from our surroundings and linking up with the utopic homosexual community of our dreams, a sophisticated "artistic" demimonde inhabited by Norma Desmonds and Holly Golightlys. Homosexuals' involvement with Hollywood movies was not only more intense but fundamentally different from that of the rest of the American public. For us, film served a deeply psychological and political function. At the very heart of gay diva worship is not the diva herself but the almost universal homosexual experience of ostracism and insecurity, which ultimately led to what might be called the aestheticism of maladjustment, the gay man's exploitation of cinematic visions of Hollywood grandeur to elevate himself above his antagonistic surroundings and simultaneously express membership in a secret society of upper-class aesthetes.

Richard Friedel's novel The Movie Lover (1981) provides a telling illustration of how gay men used cinema for the defensive purposes of dramatizing their alienation. Burton Raider, its gay protagonist, is such a precocious coxcomb that he reads Vogue in his crib and refuses to wear the insipid teddy-bear bibs and Day-Glo overalls his parents buy for him, spurning sweaters with choo-choo-train prints and opting instead for kimonos and capes. In this affectionate caricature of the gay sensibility, Burton searches for some way to underscore his difference from his untidy, proletarian cousins, who despise him for his effeminacy and are content to catch frogs and throw stones at tin cans while he sobs on the sofa over Rita Hayworth movies or swoons over "the vertiginous Ann Miller and the noble Norma Shearer." One Christmas, he even asks a bewildered Santa Claus to bring him a lavish photo album of MGM stars, a present ideally suited for a young homosexual who "never quite belonged, never quite fit in" with the other children on his block.

It is this feeling of estrangement that leads him to flaunt, through the imitation of film, what he modestly refers to as his "singularity." When both he and his cousins unwrap dump trucks, he imperiously sends Santa scurrying back to the North Pole to retrieve the forgotten book, sputtering at the bewildered saint's retreating figure, dashing for his sled, "what [am] I supposed to do now ... go over and play blue-collar worker" with his cousins, whom he watches contemptuously as they excavate a construction site by the Christmas tree. As my own voice and Burton's affectations reveal, the preciousness of the aesthete, our love of Japanese screens, Persian carpets, kimonos, capes, MGM stars, and British accents, reflects less the homosexual's innate affinity for lovely things, for beauty and sensuality, than his profound social discontent, which we attempt to overcome by creating flattering images of ourselves as connoisseurs and epicureans.

The hard-bitten personalities of such Machiavellian careerists as Joan Crawford and Marlene Dietrich were, of course, not irrelevant to gay men's fascination with them. In fact, we related so intensely to the steeliness of characters like the murderous Bette Davis in The Little Foxes, who, with chilling equanimity, stands by as her choking husband writhes in convulsions before her, clutching his heart and helplessly groping for his missing blood pressure medication, that we used them as substitutes for ourselves, refashioning them in our own images. In the homosexual's imagination, Hollywood divas were transformed into gay men, undergoing a strange sort of sex change operation from which they emerged as drag queens, as men in women's clothing, honorary butch homosexuals as fearless as Joan Crawford in Johnny Guitar playing Vienna, a hard-boiled saloon keeper who guns down her rival, Mercedes McCambridge, or as Tallulah Bankhead in Lifeboat playing a shipwrecked reporter, adrift in the Atlantic, who uses her diamond Cartier bracelet as bait to catch fish. Drag queen imagery has, in fact, always pervaded gay men's discussion of the legendary Hollywood actresses, of Gloria Swanson, whose "acting has more than a whiff of the drag queen about it"; of Vivien Leigh, whom gay author Paul Roen identifies with "for the simple reason that I know she's really not a woman"; or of Mae West, who was "Mt. Rushmore in drag," as well as "the first woman to function as a leading man." (For decades, the latter was even suspected of literally being a biological male until her post-mortem finally convinced her skeptical gay fans that her curvaceous hips and imposing bosom were the real thing and not prosthetic foam rubber devices.)

Because of our fiercely fetishistic involvement with diva worship, the star even in a sense traded places with her gay audience, who used her as a naked projection of their frustrated romantic desires, of their inability to express their sexual impulses openly in a homophobic society, and to seduce and manipulate the elusive heterosexual men for whom many homosexuals once nursed bitterly unrequited passions. In the process of this transference, the diva was voided of both her gender and her femininity and became the homosexual's proxy, a transvestite figure, a vampish surrogate through whom gay men lived out unattainable longings to ensnare such dashing heartthrobs as Clark Gable, Humphrey Bogart, and Gene Kelly.

Although at first sight gay diva worship seems to have been as giddy as an adolescent girl's moonstruck infatuation with her teen idols, the homosexual's love of Hollywood was not an expression of flamboyant effeminacy but, rather, in a very literal sense, of swaggering machismo. For all of the lush sensuality of Greta Garbo melting limply into the arms of John Barrymore in Grand Hotel or Elizabeth Taylor batting her eyes at the impotent Paul Newman in Cat on a Hot Tin Roof, diva worship provided effeminate men with a paradoxical way of getting in touch with their masculinity, much as football provides a vicarious way for sedentary straight men to get in touch with their masculinity. Despite appearances to the contrary, diva worship is in every respect as unfeminine as football. It is a bone-crushing spectator sport in which one watches the triumph of feminine wiles over masculine wills, of a voluptuous and presumably helpless damsel in distress single-handedly mowing down a lineup of hulking quarterbacks who fall dead at her feet, as in Double Indemnity, where Barbara Stanwyck plays a scheming femme fatale who brutally murders her husband and then assists in dumping his lifeless body from a moving train in order to collect his insurance policy, or in Dead Ringer, where Bette Davis watches calmly as her dog lunges for the throat of her gigolo boyfriend. As one gay writer wrote about his attraction to the classic cinematic vamp, "as any drag queen can tell you: beneath all those layers of cosmetic beauty lies the kind of true grit John Wayne never knew." Before Stonewall, homosexuals exploited these coldblooded, manipulative figures as a therapeutic corrective of their own highly compromised masculinity. 
To counteract their own sense of powerlessness as a vilified minority, they modeled themselves on the appealing image of this thick-skinned androgyne-cum-drag-queen, a distinctly militaristic figure who, with a suggestive leer and a deflating wisecrack, triumphed over the daily indignities of being gay. Even today, gay men still allude to the star's usefulness in enabling them to "cope," in offering them a tough-as-nails persona that they can assume like a mask during emotionally trying experiences in which they imagine themselves to be Joan Crawford in Mildred Pierce building her restaurant empire or Bette Davis in Dark Victory nobly ascending the stairs to die alone in her bedroom, struck down in the prime of her life by a mysterious brain tumor. In an article on Ruby Rims, a female impersonator so immersed in celebrity culture that he has even named his cats "Eve" and "Channing," the New York Native describes how homosexuals fight back through imitation, through the often unconscious reenactment of Hollywood scenarios in the course of real-life experiences:

[Rims] finds that if he is angered or frustrated by something or someone, he can usually give vent to his feelings by becoming Bette Davis.... "She's a release for me," he said, his face brightening. "I can walk right up to someone and say--gasp--'You're an asshole,' and blow cigarette smoke in their face."
Quite by accident, by pure serendipity, the diva provided the psychological models for gay militancy and helped radicalize the subculture. The homosexual's inveterate habit of projecting himself into the invincible personas of Scarlett O'Hara in Gone with the Wind or Alexandra del Lago in The Sweet Bird of Youth prepared the ground psychologically for the political resistance that was to come in the 1960s and 1970s when the gay man's internal diva was at last released from the subjective prison of his fantasy world to take the streets by storm. When drag queens fought back at Stonewall, chances are that what they had on their minds was the shameless chutzpah of their film icons, whose bravura displays of gutsiness they were reenacting. We consumed, assimilated, and recycled Hollywood images in such vast quantities and with such intense passion that it is interesting to speculate whether gay liberation would have been delayed had gay men not found inspiration in these militant paradigms. Something as retrograde and conformist as popular culture, with its uncritical advocacy of materialism, success, and blissfully domestic heterosexual relationships, was actually used for radical purposes, enabling a despised subculture to defend itself from the very America Hollywood celebrated. In the absence of the gay-positive propaganda in which contemporary gay culture is saturated, film became a form of "found" propaganda that the homosexual ransacked for inspiring messages, reconstituting the refuse of popular culture into an energizing force.

As Rims's comments reveal, one aspect in particular of the Hollywood actress's persona appealed to gay men: her bitchiness, her limitless satiric powers, as in The Women where the characters taunt each other with such venomous comments as "where I spit no grass grows," "your skin makes the Rocky Mountains look like chiffon velvet," and "chin up--that's right, both of them," or in All About Eve in which Bette Davis toasts the slanderous critic who raises his wine glass to her across a restaurant by taking a ferocious bite out of a stalk of celery.

Homosexuals were drawn to the image of the bitch in part because of her wicked tongue, her ability to achieve through conversation, through her verbal acuity, her snappy comebacks, the control over others that gay men were often unable to achieve in their own lives. The fantasy of the vicious, back-stabbing vagina dentata, always quick on her feet, always ready to demolish her opponent with a stunning rejoinder, is the fantasy of a powerless minority that asserts itself through language, not physical violence. Straight men express aggression through fistfights and sports; gay men through quick-witted repartee and caustic remarks. Straight men punch; gay men quip. Straight men are barroom brawlers; gay men, bitches. By providing the models for the beautiful shrew who, in film after film, attained a kind of conversational omnipotence, Hollywood fueled the homosexual's love of archness, of withering irony, which became the deadliest weapon of all in the arsenal of the pre-Stonewall homosexual. If shit-kicking amazons in sequins, ermine, and lamé inadvertently helped each gay man nurture, like his own inner child, his own inner diva, and thus strengthened his will to resist his degradation at the hands of a homophobic society, wittiness was the primary element of his revenge, the method by which he gained the upper hand over his enemies, remaining in possession of the battlefield long after the victims of the winged barbs he hurled had beat a hasty retreat. Given the centrality to the subculture of the image of the arch queen, it is not an exaggeration to say that gay politics grew out of gay wittiness, whose acerbic muse was the mordant Hollywood goddess. 
Wittiness was the first very tentative step toward gay liberation, a vitriolic expression of discontent, of our disdain for American prudery, which we reviled through verbal protest, a compulsion to denigrate, to engage in cutthroat bickering, which eventually reached critical mass and led to concrete political action. Bitching, in other words, was a form of protopolitics. It channeled the bitter frustrations of homosexuals' lives into a pronounced conversational mannerism that marked an important symbolic stage in the gay man's effort to translate his otherwise impotent rage into practical measures for social reform.

Hollywood suffused the gay sensibility during the first half of the twentieth century, not only because of its usefulness as "found" propaganda, but also because of the power of the new medium to build group solidarity. Given that homosexuals are an invisible minority whose members are not united by obvious physical characteristics and who are indeed often unrecognizable even to each other, they had to invent some method of identifying themselves as a group or risk remaining in the politically crippling state of fragmentation that for decades kept them from organizing to protect their basic civil rights. Blacks are united by their skin color, Chicanos by their language and place of origin, and the disabled by their infirmities. Homosexuals, however, are bound together by something less tangible: by their tastes, their sensibility, by the books they read, the clothes they wear, and the movies they watch.

Before the gay sensibility developed, homosexuals constituted an alienated diaspora of scattered individuals who lived a splintered existence in localized pockets where they strove to efface every identifying mark that might compromise them in the eyes of outsiders, breaking their cover and thus leading to their professional downfall and personal humiliation. For a minority trying so vigorously to erase itself, political unity was a contradiction in terms. With the codification of the gay sensibility and the liberation of men trapped in the solipsistic isolation of intense shame, the homosexual suddenly recognized that he belonged to a group, an elaborate network of fellow solipsists who began to establish connections with each other. Bridges were built partly through the cultivation of shared tastes in popular culture, through a reverence for a group of cinematic heroes whose glamor lent an unprecedented centrality to the previously disjointed and atomized nature of gay life. Hollywood divas were drafted, naturally without their knowledge, into the role of quasi-gay-liberation leaders, their charismatic presence unifying a body of followers who flocked together, not necessarily because of their idol's peculiar talents as an actress, but simply because she provided a kind of magnet. Large numbers of gay men established around these stars a new type of esprit de corps as the votaries of a particular pantheon of goddesses. Fandom, in other words, was an emphatic political assertion of ethnic camaraderie, as was the gay sensibility itself, which did not emanate from some sort of deeply embedded homosexual "soul," but arose as a way of achieving a collective subcultural identity.

Up until the 1960s, the performer served as a bellwether, a prophet without a religion, a platform, a cause, a messiah whose disciples were more in love with themselves than they were with their star. The priority of audience over artist becomes particularly clear in the case of the ultimate idol of the gay masses, Judy Garland. Her concerts during the 1950s and 1960s were so popular among homosexuals that, in each city in which she appeared, local gay bars emptied as their patrons came out en masse to hear a dazed and disoriented performer, slumped over the microphone, croak out the broken lyrics of songs that, in her final days, she had difficulty remembering.

Garland's force as a lodestone, an excuse for a public gathering of homosexuals, emerges in the existing accounts of her concerts, which witnesses describe as orgiastic rites of blind idolatry during which screaming multitudes of homosexuals, whipped up into a frenzy by such plaintive songs as "Over the Rainbow" and "The Man That Got Away," wept out loud, laying on the stage at their divinity's feet mountains of flowers. "It was as if the fact that we had gathered to see Garland gave us permission to be gay in public for once," one older gay man wrote of a 1960 concert, while another recalled a performance he attended as "more a love-in than a concert":

"When Judy came onto the stage, we were the loudest and most exuberant part of that audience. We not only listened, we felt all the lyrics of all the songs. Judy Garland was all ours; she belonged to every gay guy and girl in the theatre. I like to think that we were the greatest part of that audience; the part that Judy liked best."
Although Garland was in many ways a brilliant performer, homosexuals came to regard her simply as the catalyst for the raucous, gay pride "love-ins" that erupted spontaneously during her concerts. Her uncritical mass appeal helped overcome our fragmentation to create for only a few hours, within the safe confines of an auditorium, an ephemeral, transitory "community" that lured us out of the closets in order to experience the unforgettable thrill of a public celebration of homosexuality. Those commentators who insist on trying to explain gay diva worship exclusively on the basis of the intrinsic appeal of a particular star--as a result of her pathos, suffering, vulnerability, glamor, or sexiness, to give only a few of the reasons that have been offered--have in many ways chosen as their starting point a mistaken premise. The answer to the proverbial question "why did gay men like Judy Garland so much?" is that they liked, not her, so much as her audience, the hordes of other gay men who gathered in her name to hear her poignant renditions of old torch songs that reduced sniffling queens to floods of self-pitying tears. The hysterical ovations her audiences gave her were in some sense applause for themselves. Garland was simply the hostess, a performer who good-naturedly rented out her immense reputation as an occasion for a huge gay party, a dry run for Stonewall, a dress rehearsal for the birthday bash of the burgeoning gay rights movement, which her last and most important concert, her funeral, was to inspire only a few years later.

It was this shared knowledge of popular music and film that created the very foundations of camp. Homosexuals quickly incorporated into their conversations and style of humor a body of subcultural allusions, ranging from Carmen Miranda in The Gang's All Here singing "The Lady in the Tutti-Frutti Hat" in eight-inch platform heels on a runway framed by giant strawberries, to Marlene Dietrich wearing a blond Afro crooning "Hot Voodoo" in Blonde Venus while chorus lines of cavorting Negresses wearing war paint do the cakewalk behind her, to Maria Montez in Cobra Woman, dancing a kootch dance in a slinky, sequined snake dress as she selects terrified subjects for blood sacrifices, who are borne off shrieking to their unhappy fate.

Through constant quotation of the scripts of Hollywood movies in our private conversations, we created a collage of famous lines and quips, which, after frequent repetition, achieved the status of passwords to a privileged world of the initiated, who communicated through innuendo, through quoted dialogue pregnant with subtext. A miscellaneous body of canonic lines was lifted straight out of the masterpieces of popular culture and exploited as a way of declaring our membership in the forbidden ranks of a secret society:

Toto, I don't think we're in Kansas anymore.

But you are, Blanche, you are in that chair.

I have always depended on the kindness of strangers.

Buckle your seatbelts, it's going to be a bumpy night.

What is the scene? Where am I?

What a dump!

Jungle red!

When gay people engaged in camp before Stonewall, they often did so as a way of laughing at their appropriation of popular culture for a purpose it had never been intended to serve: that of identifying themselves to other homosexuals and triggering in their audience instantaneous recognition of stock expressions, gestures, and double entendres that strengthened the bonds that held them together. Even today, a performer as gifted as Lypsinka, whose acts consist entirely of an intricate series of quotations from films, succeeds as a brilliant comic by virtue of her uncanny ability to play on her audience's gleeful sense of unity as a minority, an ethnic group that relives, long after we have established other channels of communication, the power of allusion to increase solidarity. Lypsinka simply mouths the words "no wire hangers" or "Barbara, pleeeeeease!" and her audience howls with laughter caused, not by the intrinsic hilariousness of the lines, but by the delight we take in the unanimity of our response, in our virtually reflexive recognition of the source of the allusions. This esoteric knowledge contributes to the elitist pleasure of a coterie sealed off from the rest of the uninitiated American public.

Camp and diva worship also served a more narrowly personal function than just providing us with a repository of subcultural narratives that became our own private language. Before Stonewall, allusions to such films as The Women, Mildred Pierce, Now, Voyager, A Streetcar Named Desire, or Gilda were ingeniously incorporated into the extremely delicate business of cruising for friends and sex partners, many of whom would undoubtedly have been far too timid to state their preferences openly. How much simpler it was to encode one's sexual orientation into something as elusive and uncompromising as a taste for a particular actress, whose name could be dropped casually in the course of a conversation in hopes that one's partner would pick up the gambit and agree that he did indeed like Judy Garland, that Mae West was outrageously funny, and that Tallulah Bankhead was, as she herself once said, "as pure as the driven slush." As exemplified by that time-honored expression "a friend of Dorothy," a particular taste in film provided a useful come-on for cautious gay men, who could reveal themselves to others without risking exposure, since only a fellow insider would recognize the allusion, thus allowing the homosexual to circumvent the potential embarrassment of a dismayed or even hostile reaction to a flat declaration. Over the decades gay men became so adept at communicating their forbidden desires through camp allusions that a sort of collective amnesia has descended over the whole process, and we have lost sight of the fact that our love for performers like Judy Garland was actually a learned behavior, part of our socialization as homosexuals. 
Many gay men still mistake their cultish admiration for the likes of Tippi Hedren, Kim Novak, or Barbara Stanwyck for an expression of an innate gay predisposition, as if the love of actresses were the result of a physiological imbalance in our smaller hypothalamuses, of a diva chromosome in our DNA which produced a camp sensibility that somehow preceded our awareness of our homosexuality.

As early as the 1950s, our use of camp as a method of cruising began to change. Irony was always present in the subculture's involvement with celebrities, partly because of the homosexual's sly awareness that he was misusing something as naive and wholesome as popular culture, with its golly-gee-whiz, Kansas-bred Dorothys and its Norman Rockwell happy endings, to reinforce something as illicit and underground as his solidarity with other homosexuals. As time went on, however, the note of facetiousness implicit in many gay men's treatment of Hollywood became louder and louder, until the wry smile of camp became the cackling shriek of the man who could no longer take seriously the divas he once adored.

By the early 1960s, some gay men had begun to express repulsion for our obsequious fawning over celebrities. Patrick Dennis's 1961 camp masterpiece Little Me provides a clear instance of the increasing skepticism homosexuals were bringing to their involvement with Hollywood. The novel purports to be the memoir of Belle Schlumptert, aka Belle Poitrine, a great film actress, but in fact this imaginary autobiography, complete with hilarious photographs documenting Belle's meteoric rise to fame from her humble beginnings as the daughter of a scarlet woman, is an irresistibly scathing satire of a megalomaniac piece of trailer trash who uses the casting couch as a trampoline to catapult herself into stardom. By the 1980s and 1990s, the pantheon of immortals, while still treated reverently by many gay men, had become fair game for ridicule, as when New York drag queens commemorated the 1981 release of Mommie Dearest by dressing up as Joan Crawford and kicking life-size effigies of her daughter Christina up and down Christopher Street. Similarly, in 1987, New York Native columnist Dee Sushi imagined a hypothetical Broadway musical based on Whatever Happened to Baby Jane? in which a chorus line of spinning wheelchairs would whirl across the stage like dervishes to the accompaniment of a song entitled "But Cha Are!"

One of the reasons for the change from reverence to ridicule, from Joan Crawford as the bewitching siren to Joan Crawford as the ax-wielding, child-beating, lesbian drunk, is that, in the minds of younger homosexuals, the diva had come to be perceived as the emotional crutch of the pathetic old queen. Surrounded by his antiques and registered crockery, this geriatric spinster compensates for the loneliness of his thwarted life by projecting himself into the tantalizing hourglass figures and haute couture ball gowns of his favorite actresses. For gay men under the age of 40, the classic film star has become the symbolic icon of an oppressed early stage in gay culture in which homosexuals sat glued to their television sets feasting their eyes on reruns, achieving through their imaginations the sense of self-worth that gay men now attain by consuming the propaganda our political leaders disseminate in such vast quantities. For the contemporary homosexual, who prides himself on his emotional maturity and healthiness, the use of the diva to achieve romantic fulfillment through displacement is the politically repugnant fantasy of the self-loathing pansy whose dependence on the escapism of cinema must be ritually purged from his system. We accomplish this catharsis by creating through conversations, theater, and even cabaret acts images of the vulgarity and psychological desperation of glamorous actresses, of Joan Crawford clobbering Christina with a can of bathroom cleanser or chopping off the head of her faithless husband in Strait-Jacket.

John Weir's novel about AIDS, The Irreversible Decline of Eddie Socket (1989), revolves around this act of purgation. Like Rims, its dying protagonist attempts to face his bleak future by staging what he calls "Barbara Stanwyck moment[s]." With his inspired sense of melodrama, he appears to be a typical example of a gay man with a relentlessly active internal diva, but, far from offering "empowerment," she becomes a vampire that feeds on his vitality, an incubus that saps his life of its reality and makes him feel that "the whole fucking world was in quotes. Was death going to be in quotes, too?" Lying in his excrement in his hospital bed, abandoned by the nurses and orderlies who are too afraid to touch him, he wonders out loud to a friend:

Who's the main character in my life? ... Who is starring in my life? It can't be me ... I'm just a walk-on.... Not even a supporting player. Not even a cameo appearance by a long-forgotten star. I'm just an extra. No one else is starring in my life. That's why they're halting production. It's a bad investment for the studio.

Returning from the hospital, his friend has a hallucination that can be interpreted as a diatribe against the gay escapist whose obsession with actresses diminishes the reality of his life, starving it of its meaning and providing a safe emotional haven from the difficulties of being a homosexual. As he sits on a crosstown bus, Elizabeth Taylor appears out of the blue and proceeds, before his very eyes, to pull herself apart like Lego blocks in order to disabuse him of his adoration, first removing her contact lenses, then her chin, cheeks, breasts, left buttock, and right kneecap, until he realizes that, despite all of her glamor, "she's a walking prosthesis," "a pile of rubber parts on the floor between us,... all diminished." Stuffing herself into a shopping bag, she hobbles off the bus, no longer the alluring emblem of the life the homosexual cannot live, but a Mr. Potato Head. This apocalyptic image of Taylor's self-destruction is pivotal to a book that is, in many ways, an anticamp requiem, an expression of the young homosexual's mounting impatience with the retrograde use of Hollywood as a security blanket.

The sacrosanct image of the Hollywood deity was also tarnished by the fact that, in the late 1960s and 1970s, gay men began holding their proto-gay-liberation leaders to a higher political standard. Because of the role actresses played in bringing gay men together as fans and instilling in them a sense of national identity that transcended the fragmented world that existed before Stonewall, homosexuals were at first unswervingly loyal to their patron saints and remained largely blind to their glaring deficiencies as the subculture's unofficial envoys to mainstream society. As we became more politically aware, however, and more conscious of our clout as a unified minority, we became impatient with the patronizing maternalism of early gay politics, which had produced the great matriarchy of mother hens who hovered protectively over their broods of gay fans. After Stonewall, we were no longer satisfied with the crumbs of celebrities' halfhearted comfort and support, with the meager consolation they offered for the humiliation of our social ostracism, which rarely amounted to more than such statements as "you poor little darlings" or "leave them alone, you bullies, they're so harmless."

(C) 1997 Daniel Harris. All rights reserved. ISBN: 0-7868-6165-7




Wednesday, May 23, 2012

Free will

I have noticed that some readers continue to find my argument about the illusoriness of free will difficult to accept. Apart from religious believers who simply “know” that they have free will and that life would be meaningless without it, my most energetic critics seem to be fans of my friend Dan Dennett’s account of the subject, as laid out in his books Elbow Room and Freedom Evolves and in his public talks. As I mention in Free Will, I don’t happen to agree with Dan’s approach, but rather than argue with him at length in a very short book, I decided to simply present my own view. I am hopeful that Dan and I will have a public discussion about these matters at some point in the future.
Dan and I agree on several fundamental points: The conventional (libertarian) idea of free will makes no sense and cannot be brought into register with our scientific picture of the world. We also agree that determinism need not imply fatalism and that indeterminism would give us no more freedom than we would have in a deterministic universe.
These points of agreement can be easily illustrated: Imagine that I want to learn Mandarin. I attend classes, hire a native-speaking tutor, and vacation in China. My efforts in this regard, should they persist, will be the cause of my speaking Mandarin (badly, no doubt) at some point in the future. It’s not that I was destined to speak Mandarin regardless of my thoughts and actions. Choice, reasoning, discipline, etc., play important roles in our lives despite the fact that they are determined by prior causes—and adding a measure of randomness to this clockwork, however spooky, would do nothing to accentuate their powers.
Biological evolution and cultural progress have increased people’s ability to get what they want out of life and to avoid what they don’t want. A person who can reason effectively, plan for the future, choose his words carefully, regulate his negative emotions, play fair with strangers, and partake of the wisdom of various cultural institutions is very different from a person who cannot do these things. Dan and I fully agree on this point. However, I think it is important to emphasize that these abilities do not lend credence to the traditional idea of free will. And, unlike Dan, I believe that popular confusion on this point is worth lingering over, because certain moral impulses—for vengeance, say—depend upon a view of human agency that is both conceptually incoherent and empirically false. I also believe that the conventional illusion of free will can be dispelled—not merely ignored, tinkered with, or set on new foundations. I do not know whether Dan agrees with this final point or not.
Fans of Dan’s account—and there are many—seem to miss my primary purpose in writing about free will. My goal is to show how the traditional notion is flawed, and to point out the consequences of our being taken in by it. Whenever Dan discusses free will, he bypasses the traditional idea and offers a revised version that he believes to be the only one “worth wanting.” Dan insists that this conceptual refinement is a great strength of his approach, analogous to other maneuvers in science and philosophy that allow us to get past how things seem so that we can discover how they actually are. I do not agree. From my point of view, he has simply changed the subject in a way that either confuses people or lets them off the hook too easily.
It is true that how things seem is often misleading, and popular beliefs about physical and mental processes do not always map smoothly onto reality. Consider the phenomenon of color: At the level of conscious perception, objects appear to come in a variety of colors, but we now know that colors do not exist “out there” in the way they seem to. Explaining our experience of color in terms of the color-free facts of physics and neurophysiology requires that we make a few adjustments in our thinking—but this doesn’t mean color is merely “an illusion.” Rather, it must be understood in terms of lower-level facts that are not themselves “colored.”
Nothing changes at the level of our vision when we understand what color really is—and we can still talk about “blue skies” and “red apples” without any sense of contradiction. There are certain anomalies to be reconciled (for instance, two objects reflecting light at the same wavelength can appear to be different colors depending on the context), but we are not mistaken in believing that we see red apples and blue skies. We really do experience the world this way, and one job of vision science is to tell us why.
Dan seems to think that free will is like color: People might have some erroneous beliefs about it, but the experience of freedom and its attendant moral responsibilities can be understood in a similarly straightforward way through science. I think that free will is an illusion and that analogies to phenomena like color do not run through. A better analogy, also taken from the domain of vision, would liken free will to the sense that most of us have of visual continuity.
Take a moment to survey your immediate surroundings. Your experience of seeing will probably seem unified—a single field in which everything appears all at once and seamlessly. But the act of seeing is not quite what it seems. The first thing to notice is that most of what you see in every instant is a blur, because you have only a narrow region of sharp focus in the center of your visual field. This area of foveal vision is also where you perceive colors most clearly; your ability to distinguish one color from another falls away completely as you reach the periphery in each eye. You continuously compensate for these limitations by allowing your gaze to lurch from point to point (executing what are known as “saccades”), but you tend not to notice these movements. Nor are you aware that your visual perception appears interrupted while your eyes are moving (otherwise you would see a continuous blurring of the scene). It was once believed that saccades caused the active suppression of vision, but recent experiments suggest that the post-saccadic image (i.e. whatever you next focus on) probably just masks the preceding blur.
There is also a region in each visual field where you receive no input at all, because the optic nerve creates a blind spot where it passes through the retina. Many of us learned to perceive the subjective consequences of this unintelligent design as children, by marking a piece of paper, closing one eye, and then moving the paper into a position where the mark disappeared. Close one eye now and look out at the world: You will probably not notice your blind spot—and yet, if you are in a crowded room, someone could well be missing his head. Most people are surely unaware that the optic blind spot exists, and even those of us who know about it can go for decades without noticing it.
While color vision survives close inspection, our conventional sense of visual continuity does not. The impression we have of seeing everything all at once, clearly, and without interruption is based on our not paying close attention to what it is like to see. I argue that the illusory nature of free will can also be noticed in this way. As with the illusion of visual continuity, the evidence of our confusion is neither far away nor deep within; rather, it is right on the surface of experience, almost too near to us to be seen.
Of course, we could take Dan’s approach and adjust the notion of “continuity” so that it better reflected the properties of human vision, giving us a new concept of seamless visual perception that is “worth wanting.” But if erroneous beliefs about visual continuity caused drivers to regularly mow down pedestrians and police sharpshooters to accidentally kill hostages, merely changing the meaning of “continuity” would not do. I believe that this is the situation we are in with the illusion of free will: False beliefs about human freedom skew our moral intuitions and anchor our system of criminal justice to a primitive ethic of retribution. And as we continue to make advances in understanding the human mind through science, our current practices will come to seem even less enlightened.
Ordinary people want to feel philosophically justified in hating evildoers and viewing them as the ultimate authors of their evil. This moral attitude has always been vulnerable to our learning more about the causes of human behavior—and in situations where the origins of a person’s actions become absolutely clear, our feelings about his responsibility begin to change. What is more, they should change. We should admit that a person is unlucky to inherit the genes and life experience that will doom him to psychopathy. That doesn’t mean we can’t lock him up, or kill him in self-defense, but hating him is not rational, given a complete understanding of how he came to be who he is. Natural, yes; rational, no. Feeling compassion for him would be rational, however—or so I have argued.
We can acknowledge the difference between voluntary and involuntary action, the responsibilities of an adult and those of a child, sanity and insanity, a troubled conscience and a clear one, without indulging the illusion of free will. We can also admit that in certain contexts, punishment might be the best way to motivate people to behave themselves. The utility of punishment is an empirical question that is well worth answering—and nothing in my account of free will requires that I deny this.
How can we ask that other people behave themselves (and even punish them for not behaving) when they are not the ultimate cause of their actions? We can (and should) make such demands when doing so has the desired effect—namely, increasing the well-being of all concerned. The demands we place upon one another are part of the totality of causes that determine human behavior. Making such demands on children, for instance, is a necessary part of their learning to regulate their selfish impulses and function in society. We need not imagine that children possess free will to value the difference between a child who is considerate of the feelings of others and one who behaves like a wild animal.
In Free Will, I argue that people are mistaken in believing that they are free in the usual sense. I claim that this realization has consequences—good ones, for the most part—and for that reason we should not gloss over it by revising our definition of “free will” too quickly. Dan believes that his adjustment of the concept has allowed him to provide a description of human agency and moral responsibility that preserves many of our intuitions about ourselves and still fits the facts. I agree, for the most part, but I think that other problems need to be solved. That is why I have focused on the scope and consequences of popular confusion. Dan does not appear to see this confusion the way I do: Either he doesn’t agree about its scope or he doesn’t see the same consequences. But, again, I am hopeful we will be able to sort out our differences in the future…

Tuesday, May 22, 2012

Prager

PRAGER WROTE: Here's what's being taught in universities.

1. There is no better and no worse in literature and the arts. The reason universities in the past taught Shakespeare, Michelangelo, and Bach rather than, let us say, Guatemalan poets, Sri Lankan musicians, and Native American storytellers was “Eurocentrism.”

2.  There is no objective meaning to a text. Every text only means what the reader perceives it to mean.

Simply put, that's nonsense. Nobody thinks like that except a few careerist, pseudo-postmodernist college professors. These are long-outdated concepts, touted briefly in the seventies and never taken seriously by anyone. To bring these old myths up now is simply wrong. It's a canard, a false and slanderous claim, something D.P. should not be doing. It's mudslinging at progressives. It's the sort of thing that should backfire. It's untrue and unfair. Progressives can be brought to task with the truth. We all love the idea of reduced drudgery, but we cannot yet afford a 35-hour week. It may take a hundred years.

Here's Pat Buchanan:

America is losing control. Why? A failure to understand human nature and the lessons of history -- and the mindless pursuit of Utopian dreams.

We wagered the wealth of a nation on a Great Society gamble that through endless redistribution from top to bottom, we could create a more just, equal and productive society.

Long after reality caught up to us, we continue to chase the dreams.



That's how to bring a progressive to his senses. Shine the light on Europe's failed, premature socialist anti-economy. Admit that the end of drudgery is a fine goal, well intended. But premature. Lethally so.

Anagrams

PRESBYTERIAN:
When you rearrange the letters:
BEST IN PRAYER


ASTRONOMER:
When you rearrange the letters:
MOON STARER


DESPERATION:
When you rearrange the letters:
A ROPE ENDS IT


THE EYES:
When you rearrange the letters:
THEY SEE


GEORGE BUSH:
When you rearrange the letters:
HE BUGS GORE


THE MORSE CODE:
When you rearrange the letters:
HERE COME DOTS
 

DORMITORY:
When you rearrange the letters:
DIRTY ROOM
 

SLOT MACHINES:
When you rearrange the letters:
CASH LOST IN ME
 


ANIMOSITY:
When you rearrange the letters:
IS NO AMITY


ELECTION RESULTS:
When you rearrange the letters:
LIES - LET'S RECOUNT


SNOOZE ALARMS:
When you rearrange the letters:
ALAS! NO MORE Z'S


A DECIMAL POINT:
When you rearrange the letters:
I'M A DOT IN PLACE

THE EARTHQUAKES:
When you rearrange the letters:
THAT QUEER SHAKE


ELEVEN PLUS TWO:
When you rearrange the letters:
TWELVE PLUS ONE



AND FOR THE GRAND FINALE:


MOTHER-IN-LAW:
When you rearrange the letters:
WOMAN HITLER

Monday, May 21, 2012

Incontrovertible

Do we believe, even for a second, that if Obama had been busted for marijuana -- under the laws that he condones -- would his life have been better? If Obama had been caught with the marijuana that he says he uses, and 'maybe a little blow'... if he had been busted under his laws, he would have done hard f*cking time. And if he had done time in prison, time in federal prison, time for his 'weed' and 'a little blow,' he would not be President of the United States of America. He would not have gone to his fancy-a** college, he would not have sold books that sold millions and millions of copies and made millions and millions of dollars, he would not have a beautiful, smart wife, he would not have a great job. He would have been in f*cking prison, and it's not a god damn joke. People who smoke marijuana must be set free.

God

God, the all-powerful creator, capable of moving mountains and of begetting a universe with all the laws of physics, couldn't find a better way to lift the burden of sin than a blood sacrifice.

Sunday, May 20, 2012

Trayvon

Also a case of using the wrong words, euphemisms, to mislead: "slipped out of control" and "hurt" to describe Zimmerman getting his head bashed repeatedly on the cement. That is an OBVIOUS strategy to undercut the horrible, life-threatening violence of Trayvon's actions.

Please don't argue as if any statement, no matter how true, has to be disputed. That's ignorant. Most people don't do that.

Classic lies

Turley vs. Dershowitz. A classic case of opinions, not law, in which Turley is simply foolish.


“The prosecution’s best case is that Zimmerman provoked Martin, absolutely improperly followed him, and confronted him,” Professor Dershowitz adds. “But as a result of that, a battle ensues and Martin’s on top, banging his head against the ground, and [Zimmerman] reasonably believes that his life is at stake and pulls out the gun. It’s classic self-defense, and if it’s not self defense, it’s at worst involuntary manslaughter.”


In fact, the prosecutor in the case, legal experts say, isn’t trying to argue that Zimmerman wasn’t hurt, but that he ultimately instigated the fight by getting out of his car and “confronting” Martin, who was “minding his own business,” and then shot him and killed him when the situation slipped out of his control. The fact that Zimmerman used massive and deadly force on a teenager armed only with his fists suggests to prosecutors that he displayed reckless – and thus criminal – indifference to human life, the definition of second degree murder. 

"You lose the [self-defense argument] if you are the aggressor or if you do not have a reasonable basis for fear or serious harm or death,” George Washington University law professor Jonathan Turley told the Guardian newspaper recently. “Even if he is not the aggressor there will remain the question of escalation or confrontation."



THAT'S SILLY. TURLEY IS MAKING A FOOL OF HIMSELF. Zimmerman was the aggressor, but Trayvon escalated the confrontation to dangerous, life-threatening violence. Turley can't answer that.

Saturday, May 19, 2012

Prager lies

There is no objective meaning to a text. Every text only means what the reader perceives it to mean.

Prager lies

There is no better and no worse in literature and the arts. The reason universities in the past taught Shakespeare, Michelangelo, and Bach rather than, let us say, Guatemalan poets, Sri Lankan musicians, and Native American storytellers was “Eurocentrism.”

Monday, May 14, 2012

Charles Murray on the Future of Art

Upon reading Daniel Boorstin’s The Discoverers many years ago, I became fascinated with the ebbs and flows of human achievement, and especially those points in world history that have been associated with a flowering of great accomplishment. The most famous are Athens in the Periclean age and Florence in the Renaissance, but there have been many other less spectacular examples. Sometimes, the surge of great creativity is most obvious in a particular domain—literature in nineteenth-century Russia, for example—but strides made in one field are usually accompanied by strides made in others. Historically speaking, what accounts for the difference in the fertility of the cultural ground?

In the late 1990s, I set out to assemble databases of humanity’s great achievements, applying historiometric methods to identify the significant figures and remarkable achievements. The result was a book I published in 2004, Human Accomplishment: The Pursuit of Excellence in the Arts and Sciences, 800 B.C. to 1950. In its concluding chapters, I laid out the conditions (italicized in the rest of this essay) that characterize the times and places in which accomplishment has flourished. The question I seek to answer in this essay is: Given what we know about the conditions that led to great accomplishment in the past, what are the prospects for great accomplishment in the arts as we move through the twenty-first century? I begin with the conditions that are empirically indisputable and work my way to ones that are more interpretive.

A major stream of human accomplishment is facilitated by growing national wealth, both through the additional money that can support the arts and sciences and through the indirect spillover effects of economic growth on cultural vitality.

National wealth is an enabling condition. It doesn’t ensure great accomplishment, but it provides the wherewithal for patrons to buy works of art and boxes at the concert hall. Economic growth is also a signal of a civilization’s vitality and confidence, which is likely to be mirrored in the vitality and creativity of its arts. In this regard, the news is good for the United States. We are very rich and we are likely to continue to get richer, assuming current economic policies don’t continue forever. The United States is unlikely to be impeded from great accomplishment by a lack of wealth.

A major stream of human accomplishment is fostered by the existence of cities that serve as centers of human capital and supply audiences and patrons for the arts.

America has several urban centers that provide the critical mass of human capital necessary for great accomplishment.

A major stream of human accomplishment is fostered by political regimes that give de facto freedom of action to their potential artists and scholars.

No problem here either, though it should be noted that the requirements for political freedom are not stringent. Some of the great streams of accomplishment have occurred under absolute monarchies—Louis XIV’s France comes to mind—when the monarch allowed artists to work unmolested. In contemporary America, scientific research in both the hard and soft sciences is constrained on some topics by political correctness and even government restrictions (e.g., stem cell research). But composers, painters, sculptors, and authors still have plenty of freedom of action.

A major stream of human accomplishment is fostered by a culture that encourages the belief that individuals can act efficaciously as individuals, and encourages them to do so.

The creative act in painting, sculpture, musical composition, or writing comes down to a solitary person thinking of something new and pursuing it without knowing for sure what the result will be. Any culture will turn out some audacious, self-willed people in that vein. But the more collectivist, communitarian, or familial a culture is, the fewer such individuals will emerge, and the greater the damping effect on artistic creation will be. Thus, classical China was a highly familial society with stunning achievements in painting and poetry, but there was much less innovation and branching out in those artistic fields than in the West. When Confucianism was the reigning philosophical paradigm, the aesthetic rules set down by revered poets and painters could remain nearly intact for centuries.

Once again, it is hard to find reasons for thinking that America has a problem meeting this criterion. We continue to be a highly individualistic culture. If anything, our most talented have too inflated a sense of their ability to act efficaciously as individuals.

The best single predictor of a stream of accomplishment in the current generation is the presence of great models in the previous generation.

The insight that great accomplishment begets more great accomplishment goes back two thousand years to a Roman, Velleius Paterculus, who first analyzed the clustering of genius in Athens and concluded that “genius is fostered by emulation.” In the modern era, that insight has been confirmed in rigorous quantitative studies, and it is one of those social science findings that shouldn’t surprise anyone. If children who have the potential for creating great art are watching a Leonardo da Vinci set the standard, they are more likely to create art like Michelangelo, Dürer, or Raphael did. This is relevant for thinking about the future of American accomplishment in the arts because, as far as I can see, we do not have any great models in the current generation who will produce greatness in the next generation.

The magnitude and content of a stream of accomplishment in a given domain varies according to the richness and age of the organizing structure.

The problem here is that we are living at a time when the rich organizing structures that gave us five centuries of magnificent accomplishments in the visual arts, music, and literature from 1400–1900 are old and filled up.

By organizing structure, I mean the principles, tools, and craft used to generate the artistic product. As an example, consider the organizing structure of painting as it was revolutionized in the fifteenth century. The new set of principles were those of linear perspective; a major new tool was invented in the form of oil paints; and the techniques that were developed to take advantage of the new principles and tool constituted an elevated level of craft. Together, they formed an organizing structure for creating two-dimensional art that was incredibly rich with possibilities and unleashed a flood of great work. Music saw the development of an equally promising new organizing structure over a longer period, from the late middle ages through the Baroque period, with the creation of polyphony and eventually tonal harmony (principles), new and improved instruments (tools), and the evolution of techniques for taking advantage of the new resources (craft). In literature, the organizing structure that created an eruption of great work starting in the late eighteenth century was overwhelmingly dominated by a new principle: the modern novel.

All of these organizing structures are more than two centuries old. Even the systematic use of abstraction in the visual arts (a new set of principles) has been around for more than a century and a half. In other words, all of the organizing structures for the great artistic works of the West have been largely “filled up,” in a practical sense. It is theoretically true, as Arnold Schoenberg famously said, that plenty of good music remains to be written in C major. But artists want to break new ground, and the more creative power an artist possesses, the less likely it is that he wants to produce another version of a well-established form. What’s the point of writing a great symphony in the classical style (from the ambitious composer’s point of view), when we already have so many of them?

This doesn’t mean that a second renaissance is impossible within these aging organizing structures, but it would have to be sparked by a renewed passion for the kind of art they permit—a renewed passion for the things that can be conveyed by the “window on the world” of realistic art, tonal harmony harnessed to grand themes, and fictional narratives stuffed full of life. Absent that fundamental change in the satisfactions artists take from their creations, we will need new organizing structures to give their potential full rein.

The richest new organizing structure of the twentieth century was the motion picture. It is also the only organizing structure that does not show signs of being filled up. A plausible case can be made that the film industry is still making products that rank somewhere among the all-time best, and there is reason to hope that even better are yet to come.

What are the prospects for the discovery of completely new organizing structures in the arts? It’s hard to tell. Until a new organizing structure appears, how can one identify the void that it fills? But I will cautiously advance the possibility that we are approaching limits dictated by human evolution.

Consider the dead-end organizing structure that appeared during the twentieth century: the atonality (or contra-tonality) that Arnold Schoenberg thought would rival tonal harmony for the public’s affection. He was wrong. Neuroscientists are identifying the reasons why he was wrong. Music based on tonal harmony is attractive for reasons that go deep into the brain, and atonality creates an instinctive aversion in most persons for equally deep reasons. In his book The Art Instinct, Denis Dutton has a fascinating account of the characteristics of landscape paintings that appeal to humans across cultures and across time, and persuasively links those characteristics to human responses that were hard-wired in the early phases of human evolution. Human traditions of storytelling suggest that humans are hard-wired to prefer certain narrative conventions.

Still, humans are adaptable. Some of Mozart’s and Beethoven’s music was considered painfully dissonant when it was first played, and much of Stravinsky’s work has become a lasting part of the repertoire. Expressionism left “window on the world” realism behind, but humans still responded enthusiastically to the expressionists’ new conventions for capturing reality. Innovations in novelistic narrative using stream of consciousness have gained acceptance.

But humans are adaptable only up to a point. True, some people say they love Arnold Schoenberg’s music, respond in some important way to Andy Warhol’s art, and have read Gravity’s Rainbow all the way through. But they constitute a small minority. Most people are drawn to tonal music, pictorial art, and literature that is centered on storytelling for reasons that go back to the ancient African savanna.

If that proposition is correct, then the prospects for the emergence of important new organizing structures are limited when it comes to content that appeals to a wide audience. Instead, they must depend on possibilities created by new technology. In music, wonderful new instruments could enable new varieties of music that tap into the same inborn needs that C major satisfies. Technology might give visual artists new ways of creating works that appeal to the same instincts that have made pictorial art so beloved. Electronic video games may evolve into a new organizing structure for storytelling that eventually will produce great cultural products. And who knows what symbiosis between humans and technology will eventually be developed, enabling artists and their technological muses to create jointly works that rise above the bar set by the great masters of the past? At least we can always hope.

A major stream of human accomplishment is fostered by a culture in which the most talented people believe that life has a purpose and that the function of life is to fulfill that purpose.

Imagine two cultures with exactly equal numbers of potentially brilliant artists. One is a culture in which those potentially brilliant artists have a strong sense of “this is what I was put on this earth to do,” and in the other, nihilism reigns. In both cultures, the potentially brilliant artists can come to enjoy the exercise of their capabilities. But the nihilists are at a disadvantage in two respects.

The first disadvantage is in the motivation to take on the intense and unremitting effort that is typically required to do great things. This is one of the most overlooked aspects of great accomplishment. Fame can come easily and overnight, but excellence is almost always accompanied by a crushing workload. Psychologists have put specific dimensions to this aspect of accomplishment. One thread of this literature, inaugurated in the early 1970s by Herbert Simon, argues that expertise in a subject requires a person to assimilate about 50,000 “chunks” of information about the subject over about ten years of experience—simple expertise, not the mastery that is associated with great accomplishment. Once expertise is achieved, it is followed by thousands of hours of practice, study, and labor.

The willingness to engage in such monomaniacal levels of effort in the arts is related to a sense of vocation. By vocation, I have in mind the dictionary definition of “a function or station in life to which one is called by God.” God needn’t be the source. Many achievers see themselves as having a vocation without thinking about where it came from. My point is that the characteristics of nihilism—ennui, anomie, alienation, and other forms of belief that life is futile and purposeless—are at odds with the zest and life-affirming energy needed to produce great art.

The second disadvantage involves the artist’s choice of content. If life is purposeless, no one kind of project is intrinsically more important than any other kind. Take, for example, an extraordinarily talented screenwriter who is an atheist and a cynic. When asked if he has a purpose in life, he says, “Sure, to make as much money as I can,” and he means it. The choice of content in his screenplays is driven by their commercial potential. His screenplays are brilliantly written, but it is a coincidence if they deal with great themes of the human condition. His treatment of those great themes, even when he happens to touch on them, is not driven by a passion to illuminate, but to exploit. If instead he has a strong sense of “This is what I was put on earth to do,” the choice of content will matter, because he has a strong sense that what he does is meaningful. To believe life has a purpose carries with it a predisposition to put one’s talents in the service of the highest expression of one’s vocation.

Thinking ahead to the rest of the twenty-first century, the problem is that the artistic elites have been conspicuously nihilist for the last century, and the rest of the culture has recently been following along. The most direct cause of a belief that one’s life has a purpose—belief in a personal God who wants you to use your gifts to the fullest—has been declining rapidly throughout society, and the plunge has steepened since the early 1990s. The rejection of traditional religion is especially conspicuous among intellectual and artistic elites.

A major stream of accomplishment in any domain requires a well-articulated vision of, and use of, the transcendental goods relevant to that domain.

In the classic Western tradition, the worth of something that exists in our world can be characterized by its embodiment of truth, beauty, or the good. Truth and beauty are familiar concepts, but “the good” is not a term in common use these days, so I should spell out that I am using it in the sense that Aristotle did in the opening sentence of the Nicomachean Ethics: “Every art and every inquiry, and similarly every action and pursuit, is thought to aim at some good; and for this reason the good has rightly been declared to be that at which all things aim.” When applied to human beings, the essence of “the good” is not a set of ethical rules that one struggles to follow, but a vision of human flourishing that attracts and draws one onward.

The proposition I argued in Human Accomplishment is that great accomplishment in the arts is anchored in one or more of these three transcendental goods. The arts can rise to the highest rungs of craft without them, but, in the same way that a goldsmith needs gold, a culture that fosters great accomplishment needs a coherent sense of transcendental goods. “Coherent sense” means that the goods are a live presence in the culture, and that great artists compete to approach them. This doesn’t mean that in, say, Renaissance Italy, every artist spent his days thinking about what beauty meant, but that a coherent conception of “beauty” was in the air, and it was taken for granted that art drew from that understanding.

Beauty is not the only transcendental good that the arts require. A coherent sense of the good is also important—perhaps not so much for great music (though I may be wrong about that), but often for great art and almost always for great literature. I do not mean that a great painting has to be beautiful in a saccharine sense or that great novels must be moral fables that could qualify for McGuffey’s Readers. Rather, a painter’s or a novelist’s conception of the meaning of a human life provides the frame within which the artist translates the varieties of human experience into art. The artistic treatment of violence offers an example. In the absence of a conception of the good, the depiction of violence is sensationalism at best—think Sam Peckinpah. When the depiction of violence is taken to extremes, it can have the same soul-corroding effect as pornography. But when it is informed by a conception of the good, the depiction of violence can have great artistic power—think Macbeth. So whereas some great works of art, music, and even literature are not informed by a conception of the good, the translation of this concept to the canvas or the written word is often what separates enduring art from entertainment. Extract its moral vision, and Goya’s The Third of May 1808 becomes a violent cartoon. Extract its moral vision, and Huckleberry Finn becomes Tom Sawyer.

To generalize my argument regarding the importance of the transcendental goods, I believe that when artists do not have coherent ideals of beauty, their work tends to be sterile; when they do not have coherent ideals of the good, their work tends to be vulgar. Without either beauty or the good, their work tends to be shallow. Artistic accomplishment that is sterile, vulgar, and shallow does not endure.

These observations are especially relevant to our era because in the twentieth century, truth, beauty, and the good were outright rejected in the culture. I am referring to the rise of certain nihilistic strains in modernism, which took root in the last half of the nineteenth century, broke into bloom in the years just before World War I, and reached full flower in the 1920s and 1930s. In From Dawn to Decadence, Jacques Barzun described the three strategies used by the avant-garde to advance its agenda:

One, to take the past and present and make fun of everything in it by parody, pastiche, ridicule, and desecration, to signify rejection. Two, return to the bare elements of the art and, excluding ideas and ulterior purpose, play variations on those elements simply to show their sensuous power and the pleasure afforded by bare technique. Three, remain serious but find ways to get rid of the past by destroying the very idea of art itself.

Sometimes, the new way of thinking was expressed cynically. “To be able to think freely,” André Gide wrote, “one must be certain that what one writes will be of no consequence.” Sometimes the proponents of the new art used the language of the transcendental goods with an Orwellian redefinition, as in Guillaume Apollinaire’s pronouncement that the modern school of painting “wants to visualize beauty disengaged from whatever charm man has for man.” By the mid-twentieth century, the abstract painter Barnett Newman put it more brutally: He and his colleagues were acting out of “the desire to destroy beauty.”

Postmodernism has followed modernism. In the visual arts, the repudiation of the transcendental goods was taken to new extremes. Some of the specific sensations, such as Mapplethorpe’s sadomasochistic photographs and Serrano’s Piss Christ, have become nationally famous. In some schools of contemporary music, the aspect of truth that is so compelling in Bach—the mathematical inevitability of some of his music—has been transmuted into extremely intricate mathematical puzzles, but puzzles that are devoid of beauty or emotion. In literature, modern novelists linger on the anxieties of the human condition, but seldom draw on a conception of the good as a resource for illuminating that condition.

I take these potshots at modernism and postmodernism aware that exceptions exist. I also happily report that the postmodernists are feeling some pushback. In The Blank Slate, Steven Pinker listed some of the movements—the Derriere Guard, Radical Center, Natural Classicism, the New Formalism, the New Narrativism, Stuckism, the Return of Beauty, and No Mo Po Mo—that are trying to fuse innovation in the arts with coherent conceptions of what I call the transcendental goods. Wendy Steiner’s Venus in Exile is a damning indictment of the postmodernists’ rejection of beauty, but she is able to point to many examples of the return of coherent conceptions of beauty in recent years. With those caveats, this generalization about the early twenty-first century still seems justified: The postmodern sensibility still dominates the current generation of visual artists, composers, literary critics, and “serious” novelists, and, to that extent, the renunciation of the transcendental goods remains.

Drawing these strands together, my analysis of the patterns of past streams of accomplishment leads to a mixed prognosis for the future. America, as it enters the second decade of the twenty-first century, has the physical infrastructure for great achievement in the arts: national wealth and vibrant urban centers. Its potential artists have sufficient freedom of action. The American culture still fosters a sense of personal autonomy and efficaciousness.

But it does not have a generation of great models for the next generation to emulate. The organizing structures that produced the oeuvres of great past accomplishment in literature, painting, sculpture, and music are old and filled up. Even the newest organizing structure, surrounding motion pictures, is a hundred years old at this point. The twentieth century saw a steep decline in religious faith among the elite, which presumably is associated with a steep decline in the sense of the “this-is-what-I-was-put-on-earth-to-do” motivation to create great work. The same century saw a rejection of the transcendental goods that I believe are part of the indispensable raw material for great achievement in the arts.

So we have the infrastructure for a major stream of accomplishment, but not the culture for one. On top of that major obstacle are three other potential problems that I must put as questions because my analysis of previous human history can give us no direct answers to them. Can a major stream of artistic accomplishment be produced by a society that is geriatric? By a society that is secular? By an advanced welfare state?

History gives us no direct answers because we are facing unprecedented situations. We have never observed a great civilization with a population as old as the United States will have in the twenty-first century; we have never observed a great civilization that is as secular as we are apparently going to become; and we have had only half a century of experience with advanced welfare states. But we need to think about these questions. The aging of the population is a demographic certainty. The prudent expectation, based on trends over the last fifty years, is that by mid-century the United States will be about as secular as Western and Northern Europe are now, and that the United States will have a welfare state indistinguishable from those of Western and Northern Europe. Neither of the latter two events is as inevitable as the aging of our society, but an alternative future would require a sharp U-turn in existing trends. What happens to the arts if these things come to pass?

The aging of society is about to accelerate, and the effects will probably be nonlinear. In 1900, only 13 percent of the population was over fifty years of age. By 1950, the proportion had grown to 23 percent. By 2000, not a lot had changed, with 27 percent of the population over fifty. But by 2050, current projections show the United States with 40 percent of its population over fifty, and that number could rise even higher with the advances in prolonging life that scientific advances are opening up. By 2100, the over-fifties will presumably constitute well over half of the population.

We cannot know for certain how the aging of the population will play out culturally, but it is hard to think of scenarios in which the arts become more vibrant and creative. It is possible that an aging population will facilitate a renewal of interest in the transcendental goods—people generally get more concerned about the great issues of life as they get older. But one of the constants of human history is that the creation of great art is dominated by the young—the median age of peak accomplishment is forty—and the milieu in which great art is created is surely facilitated by energy, freshness of outlook, optimism, and a sense of open-ended possibilities. We must assume that all of these will be in shorter supply than in the past now that our society is increasingly populated by the old.

The aging of the population is happening not only because of low birth rates, but also because life expectancy is increasing, which leads to another phenomenon without historic precedent: the removal of the constant psychological awareness that one’s own death could happen at any time. The early death of a close friend or family member happens so rarely in the lives of most people that it has become an anomaly—today, we see the deaths of people in their sixties described as “untimely.” Our baseline assumption is that we’re going to live to old age. How is this affecting the human drive to achieve?

In a world where people of all ages die often and unexpectedly, there’s a palpable urgency to getting on with whatever you’re going to do with your life. If you don’t leave your mark now, you may never get the chance. If you live in a world where you’re sure you’re going to live until at least eighty, do you have the same compulsion to leave your mark now? Or do you figure that there’s still plenty of time left, and you’ll get to it pretty soon? To what extent does enjoying life—since you can be sure there’s going to be so much to enjoy—start to take precedence over maniacal efforts to leave a mark?

I raise the issue because it fits so neatly with the problems associated with increased secularism and the increased material security provided by the advanced welfare state. In a world where death can come at any time, there is also a clear and present motivation to think about spiritual matters even when you are young. Who knows when you’re going to meet your Maker? It could easily be tomorrow. If you’re going to live to be at least eighty, it’s a lot easier not to think about the prospect of non-existence. The world before the welfare state didn’t give you the option of just passing the time pleasantly. Your main resources for living a comfortable life—or even for surviving at all—were hard work and family (especially, having children to support you in your old age). In the advanced welfare state, neither of those is necessary. The state will make sure you have a job, and one that doesn’t require you to work too hard, and will support you in your old age.

Put all three conditions together—no urgency to make your mark, no promptings to think about your place in the cosmos, no difficulty in living a comfortable life—and what you seem to get, based on the experience of Western and Northern Europe, is what I have elsewhere called the Europe Syndrome.

The Europe Syndrome starts with a conception of humanity that is devoid of any element of the divine or even specialness. Humans are not intrinsically better or more important than other life forms, including trees. The Europe Syndrome sees human beings as collections of chemicals that are activated and, after a period of time, deactivated. The purpose of life is to while away the intervening time between birth and death as pleasantly as possible. I submit that this way of looking at life is fundamentally incompatible with a stream of major accomplishment in the arts.

The most direct indictment of the Europe Syndrome as an incubator of great accomplishment in the arts is the European record since World War II. What are the productions of visual art, music, or literature that we can be confident will still be part of the culture two centuries from now, in the sense that hundreds of European works from two centuries ago are part of our culture today? We may argue over individual cases, and agree that the number of surviving works since World War II will be greater than zero, but it cannot be denied that the body of great work coming out of post-war Europe is pathetically thin compared to Europe’s magnificent past.

The indirect indictment of the Europe Syndrome consists of the evidence that it is complicit in the loss of the confidence, vitality, and creative energy that provide a nourishing environment for great art. I blame primarily the advanced welfare state. Consider the ironies. The European welfare states brag about their lavish “child-friendly” policies, and yet they have seen plunging birth rates and marriage rates. They brag about their lavish protections of job security and benefits and yet, with just a few exceptions, their populations have seen falling proportions of people who find satisfaction in their work. They brag that they have eliminated the need for private charities, and their societies have become increasingly atomistic and anomic.

The advanced welfare state drains too much of the life from life. When there’s no family, no community, no sense of vocation, and no faith, nothing is left except to pass away the time as pleasantly as possible.

I believe this self-absorption in whiling away life as pleasantly as possible explains why Europe has become a continent that no longer celebrates greatness. When I have spoken in Europe about the unparalleled explosion of European art and science from 1400 to 1900, the reaction of the audiences has invariably been embarrassment. Post-colonial guilt explains some of this reaction—Europeans seem obsessed with seeing the West as a force for evil in the world. But I suggest that another psychological dynamic is at work. When life has become a matter of passing away the time, being reminded of the greatness of your forebears is irritating and threatening.

Is there any way for the American arts to flourish even if we don’t make a political U-turn and stave off the European welfare state? In trying to think about how a renaissance might happen, I cannot put aside the strongest conclusion that I took away from the work that went into Human Accomplishment: Religiosity is indispensable to a major stream of artistic accomplishment.

By “religiosity” I do not mean going to church every Sunday. Even belief in God is not essential. Confucianism, Daoism, and Buddhism are not religions in the conventional sense of that word—none postulates a God—but they partake of religiosity as I am using the word, in that they articulate a human place in the cosmos, lay out understandings of the ends toward which human life aims, and set standards for seeking those ends.

A secular version of this framework exists, and forms a central strand in the Western tradition: the Aristotelian conception of human happiness and its intimate link with unceasing effort to realize the best that humans have within them. In practice, we know that the Aristotelian understanding of human flourishing works. A great many secular people working long hours and striving for perfection in all kinds of jobs are motivated by this view of human life, even if they don’t realize it is Aristotelian.

Whether it happens in a theological or Aristotelian sense, I believe that religiosity has to suffuse American high culture once again if there is to be a renaissance of great art. Is that possible? And if it is, is it realistic? Thinking through such questions would take another essay at least as long as this one. But let me close by offering a reason for optimism.

The falling away from religiosity that we have seen over the last century must ultimately be anomalous. From the Enlightenment through Darwin, Freud, and Einstein, religiosity suffered a series of body blows. The verities understood in the old ways could not survive them. Not surprisingly, new expressions of those truths were not immediately forthcoming, and the West has been wandering in the wilderness.

It won’t last forever. Humans are ineluctably drawn to fundamental questions of existence. “Why is there something rather than nothing?” is one such question. “What does it mean to live a good life?” is another. The elites who shape the milieu for America’s high culture have managed to avoid thinking about those fundamental questions for a century now. Sooner or later, they’ll find it too hard.