Archive for April, 2008

I call my sons the Bear and the Tiger on this blog mostly because I don’t want any weirdos invading their privacy. In fact, I try to write mostly about my own reflections on parenting, without a whole lot of identifying detail or anything that would embarrass them later. But those are also their nicknames in real life. They’re named after characters in German kids’ books by a guy named Janosch. (Just Janosch. That’s his whole name.)

The Little Tiger and the Little Bear live in a house on the bank of a river near the woods. The Little Bear is a would-be gourmet who mostly knows how to make bouillon. The Little Tiger is, like any good cat, a bit lazy, but he does like to watch the Bear work. Occasionally the Tiger gathers mushrooms for dinner. They have two constant companions, a tiger-duck and a little green frog called Gunther Kastenfrosch. They both believe in soft, comfy couches.

And they take care of each other, even though the Tiger can’t read yet. (He does learn eventually; you can see an online version, in German but with pictures, here.) The Bear has some rudimentary literacy, just enough to get him in trouble. When one day he finds a banana carton labeled “Panama,” he gets it in his head that Panama must smell like bananas from top to bottom. That sets them off on a quest for the land of their dreams, but when they ask for directions to Panama, they keep getting told to take the next left turn. After several lefts, they end up – you guessed it – back at their little house between the river and the woods. It’s sort of a mild-mannered version of “There’s no place like home,” minus the witches and tornadoes but with repeated black humor featuring a fox who’s “romancing” a goose.

The Bear takes care of the Tiger when the Tiger falls sick one day while picking mushrooms. He carries the Tiger home and plies him with his favorite food (well, bouillon), tea, and visitors. He bandages the Tiger from neck to foot, though the Tiger implores him to “leave my back unwrapped” because “I might have to cough.” (The picture shows how well that worked.) Finally, the Tiger is carried to the Hospital for Animals by a grand procession of motley woods-dwellers, including an elephant and a vain, flirtatious donkey named Majorca. There, an x-ray reveals the diagnosis: a slipped stripe! The Tiger has an operation (“a little blue dream”) and the same caravan of animals schleps him triumphantly home again.

None of this has a whole lot to do with my kids, really. They got the names even before they were born because their dad and I loved the Janosch books. My Bear is a pretty good reader; my Tiger is actually more apt than my Bear to help around the house, though the right verb is more often “help.” But like the Little Tiger and the Little Bear, my sons are both intense – both bent on seizing all they can from their young lives. Both are resourceful and quirky. And while they get ferociously on each other’s nerves (a topic that deserves a whole ‘nother post), like the storybook Bear and Tiger they do adore each other when the day’s done and we’re all snuggled together on the comfy couch.
All images come from posters at the Little Tiger online shop, which sadly only ships to German addresses.

Read Full Post »

From Swingset to Marriage Bed

Photo by Flickr user Guacamole Goalie, used under a Creative Commons license.

As I was reading further today in Leila Ahmed’s Women and Gender in Islam, I came across an account of the Prophet Muhammad’s most beloved wife, Aisha, which shines a small but bright spotlight on the marrying-off of very young girls, whether among the Fundamentalist Latter Day Saints or anywhere, really. I should say I’m not meaning to pick on Islam in any way here. In fact, I’m going to do something rare for me and make a sweeping generalization about the nature of people – children – in just about any place and time since the dawn of civilization. (This is part of the fun of writing a blog; I’d have to be more nuanced and careful if this were a history book.)

When our story begins, Muhammad had spent a quarter century monogamously married to Khadija, a wealthy trader who had proposed to him when she was 40 and he only 25. By the time she died, he was well established as the leader of the nascent Islamic faith and ready to branch out into polygyny. Aisha was one of two women that he married soon after his first wife’s death, when he was around age 50.

Now, by all accounts Muhammad adored Aisha best of all the many wives he eventually took, and she loved him dearly too. But the beginnings of their union are disturbing to modern Western sensibilities for reasons that (as I’ll argue in a moment) can’t be reduced to ahistorical cultural imperialism. Aisha was only six when her family betrothed her to Muhammad. They first had to break an engagement to a young boy to whom she’d already been promised, but her parents did so readily because they wanted to cement their alliance with Muhammad.

Here’s how it felt for Aisha. (Ahmed’s sources on this are the hadith, or stories of the Prophet’s life, some of which are tales told by Aisha herself.)

Aisha later recalled that she had realized she was married (that is, that the marriage agreement had been concluded) when her mother called her in from her games with her friends and told her she must stay indoors; and so “it fell into my heart,” she said, “that I was married.” She did not, she recalled, ask to whom (Ibn Sa’d, 8:40)
(Ahmed, 50)

The marriage was then finally consummated in her father’s own house a few years later when she was nine or ten. Aisha’s father, Abu Bakr, hurried the process along by providing the “marriage portion” (payment that the groom had to make) when Muhammad couldn’t afford it. Again, we have Aisha’s recollection of what happened next:

My mother came to me and I was swinging on a swing. … She brought me down from the swing, and I had some friends there and she sent them away, and she wiped my face with a little water, and led me till we stopped by the door, and I was breathless [from being on the swing] and we waited till I regained my breath. Then she took me in, and the Prophet was sitting on a bed in our house with men and women of the Ansar [Medinians] and she set me on his lap, and said, “These are your people. God bless you in them and they in you.” And the men and women rose immediately and went out. And the Prophet consummated the marriage in our house.
(Ahmed, 52)

Ahmed further recounts that Muhammad sometimes played dolls with Aisha and showed her “tender care and patience.” (51) This would all be lovely – were he her father, and not her husband.

Again, I don’t want to single out Islam in any way. The problem here is very young age at marriage, not any particular religion (though religiously justified polygyny does seem to promote such marriages). I also don’t want to denigrate the relationship between Aisha and Muhammad, which by all accounts grew into one marked by mutual love and interdependence, despite the passivity she initially displayed, not even showing curiosity about who her husband would be. I’m aware, too, that until just a few hundred years ago, no human society saw childhood as a distinct and special phase of life needing and deserving protection and nurturance.

And yet – Aisha was still playing with dolls. She was flying high on a swing when she was brought to her husband, a nine- or ten-year-old virgin. I don’t care if the time is now, 1400 years ago, or ancient Babylon; I don’t care if the girl in question is six or nine or thirteen. Children do need to play. They do not need the cares and restrictions (in Aisha’s case, literal seclusion) of adulthood. All of that can wait – even in societies where people’s life expectancies are much shorter than my own – until a girl stops playing with dolls of her own accord.

I’ll leave you to draw your own conclusions about the modern-day FLDS and their child brides given in “spiritual marriage” to men three times their own age.

Read Full Post »

Testing, Testing

I’m partly just testing to make sure I can still post from my new computer – woohoo! I got a lovely MacBook Pro, which arrived just today. And then I spent a good long time trying to get the wireless to work. Which, obviously, I did – and (not so obviously) without any help other than what Google could serve up.

But what I really want to grouse about is school testing, which the Bear is going through this week. He’s in second grade. He’s utterly casual and unstressed about it. He tests easily and well. He’s lucky, I’m lucky, and I know it. That’s not my point, though. Even though I really like our elementary school, the testing system is absurd and I’m not at all sure who it’s supposed to be helping.

The older kids? All of them are obviously stressed, even if – like some of the fifth graders I know – they’re not so much uptight about the tests themselves as burned out on all the homework that led up to them.

The teachers? They’ve been like hamsters in a wheel ever since our eleven snow days torpedoed their lesson plans. The problem with snow days, coming in winter as they tend to do, is that they also tend to fall before testing week. On top of this, the Bear’s main teacher is dealing with some scary-serious health problems. She’s wonderful. She’s done an awesome job in the face of major physical challenges. Someone should please give her an A+ and call it good.

And what are the kids learning, anyway? For starters, that you learn stuff to pass the test, and afterward you get to hit auto-erase and goof off for the rest of the year. That tests come fast and furious, all in one week, and nothing else really counts. That material and ideas don’t matter unless they might appear on the tests.

Most curiously, they learn that the reward for a job well done is – sugar! Nothing against rewards or sweets – I like ‘em both – but in moderation, please. All year long, the school tries to promote healthy eating. And then, during testing week, the school bombards them with sweets after their exams – and even during. Both second-grade classes get to suck on lollipops or hard candies. According to the Bear, who’s usually a reliable reporter on such things, there are studies showing that pressure on the roof of one’s mouth helps people perform better on exams. I haven’t tried to verify it. The kids, in any event, think it’s a sweet deal.

Read Full Post »

Photo of a geyser in Iceland by Flickr user Benzpics63, used under a Creative Commons license.

Ben Harder, science journalist at U.S. News and World Report, is calling out the major news services for recycling a five-year-old study on prostate cancer as if it were fresh news. He’s right to criticize their sloppy reporting, of course. He’s wrong, however, to suggest that the study is dubious just because it’s not brand-new. Given the study’s content, I hope that the screw-up in reporting will give it more exposure than it might otherwise get. When I read about it a few months ago, my reaction was: Wow, this is news that helps men take their health into their own hands, if you’ll forgive a bad pun. So why isn’t it already common knowledge?

What a group of Australian scientists found is this: Masturbation may offer protection against prostate cancer. And actually, not just masturbation but any sexual activity resulting in ejaculation. The group, headed by Dr. Graham Giles, found that men in their twenties who ejaculated at least seven times per week reduced their risk of prostate cancer by one-third compared to those who ejaculated fewer than three times per week. That’s a remarkable figure.

The explanation Dr. Giles offered when the study was published in 2003 makes intuitive sense to me, even if it’s still somewhat speculative. Basically, to use a rather unfortunate plumbing metaphor, he suggested that the pipes stay cleaner and healthier when flushed out regularly:

Our research indicates that there is no association between prostate cancer and the number of sexual partners, which argues against infection as a cause of prostate cancer in the Australian population.

We also found no association between maximum number of ejaculations in a 24 hour period and prostate cancer. Therefore, it is not men’s ability to ejaculate that seems to be important.

While it is generally accepted that prostate cancer is a hormone dependent cancer, apart from age and family history, its causes are poorly understood.

For this reason, our explanations are fairly speculative – one possible reason for the protective effects of ejaculation may be that frequent ejaculation prevents carcinogens building up in the prostatic ducts.

If the ducts are flushed out, there may be less build up and damage to the cells that line them.

Ben Harder did find one subsequent study, published in 2004, that strikingly corroborated the Australians’ findings. That study found:

Each increment of 3 ejaculations per week across a lifetime was associated with a 19% (95% CI, 7%-30%) decrease in risk of organ-confined prostate cancer.

Its lead author, Dr. Michael Leitzmann, told Harder he’s certain no further work has been done on this topic. Why???

These studies found a free, simple, and fun way a man can protect himself against a cancer that strikes one in five men. Yet I’ll bet more adult men are aware of other habits that protect against prostate cancer, such as drinking tea and eating tomatoes. As a gal who calls herself Sungold, I’m unabashedly pro-tomato; but why should tomatoes get all the press while the benefits of ejaculation are ignored?

I can only think our deep cultural ambivalence about sex is to blame. That would explain why this news failed to make a splash five years ago. And that also accounts for the dearth of follow-up studies, which mirrors the shameful underfunding of research on prostate cancer in general. This anti-sex mindset is also deeply anti-scientific, preoccupied with ideas about purity that date all the way back to Leviticus.

Artwork by Flickr user adamrice, used under a Creative Commons license.

(In case you can’t read the quotation from Leviticus 15:16-17 in the image above: “And if any man’s seed of copulation go out from him, then he shall wash all his flesh in water, and be unclean until the even. And every garment, and every skin, whereon is the seed of copulation, shall be washed with water, and be unclean until the even.”)

If it seems like I’m making too much of this, check out this comment, copied verbatim from Harder’s blog:

how can anyone condone masterbation? in the Bible it is widely and worldly known as a sin! you will be sending people straight to hell.

Unfortunately, it’s also “widely and worldly known” that this is the brand of thinking that brought anti-condom AIDS education to Africa, sees cervical cancer as the just wages of sin, and believes comprehensive sex education causes teenage pregnancy. In this worldview, a few million excess cases of prostate cancer might seem a cheap price to pay for a moral dystopia where the only pleasure is feeling holier-than-thou.

Read Full Post »

Photo by Flickr user See Wah, used under a Creative Commons license.

You’d think that if a man contributes half the DNA to a baby and wants to be involved in supporting and loving that child, he’d count as a father, right? Well, not in Kentucky. Yesterday the Louisville Courier-Journal reported:

A man who fathers a child during an affair with a married woman has no legal rights to fatherhood, the Kentucky Supreme Court ruled yesterday in an important decision on the legal status of marriage.

In a 4-3 vote, a deeply divided court upheld the presumption that a child born to a married woman living with her husband is a child of the marriage.

On the face of it, this decision is not terribly surprising. Determining paternity and awarding child support and custody have rarely had much to do with protecting a parent’s rights or preserving their bonds to their children. Prior to the twentieth century, American states typically regarded children as their fathers’ charges. That presumption withered away as the nurturing aspects of mother-work became more valued and visible, and mothers became the default custodians in the twentieth century. But the government’s stake was always in 1) minimizing poor relief and social welfare obligations, and 2) serving the best interests of the child. This was also true in European countries, which – particularly from the 1700s onward, with the rise of the absolutist state – fought illegitimacy because it strained public coffers, not necessarily because it stood for immorality.

Normally, the government’s role has been to extract support payments from reluctant putative fathers, whether it meant enlisting eighteenth-century midwives to interrogate unwed women in labor who had refused to give up the name of their lovers, or forcing twenty-first century men to undergo court-ordered DNA testing.

This case is different. Here, a baby has two fathers. There’s the husband (Jonathan Ricketts) of the child’s mother (Julia Ricketts), who by all accounts had nothing to do with his conception but wants to raise him as his own. And then there’s the mother’s ex-lover (James Rhoades), who is the baby’s biological father and who also wants a role in his son’s upbringing.

In days of old, the law tried to guarantee the child a stable home and financial security by presuming that the mother’s husband was also the father. But now, genetic testing can sweep away this presumption, at least on the scientific level. In 2004, a Maryland court, citing the best-interests-of-the-child standard, refused even to order a DNA test in a similar case where a putative father wanted to claim paternity of a child he allegedly sired with his married ex-lover. Legally, this area seems to be a bit of a mess:

According to one of the dissenting opinions, 33 states allow a man to challenge the presumption that a child born to a married couple is the husband’s.
(Source: Louisville Courier-Journal)

But apparently this won’t work in the Kentucky case, even though DNA testing showed Rhoades to be the baby’s biological father. The court rejected his suit mainly on a technicality, saying it lacked standing to judge the matter. But only two of the seven judges signed off on that opinion; in all, the fractured court produced five (!) different opinions. The one that’s getting the media attention is this:

“While the legal status of marriage in this early 21st century appears to be on life support, it is not dead,” Justice Bill Cunningham wrote in a concurring opinion. He wrote that married couples have a right “to be left alone” from the claims of “interloper adulterers.”
(Source: Louisville Courier-Journal)

Oddly, Cunningham puts neither the state’s financial interest nor the child’s well-being front and center. Instead, he’s invested in protecting “marriage.” Whatever happened to the best interests of the child? Isn’t it up to the Rickettses to rebuild their marriage, if they can, and not for the judge to protect either their specific union or some abstraction called “marriage”?

And what’s up with this “interloper” language? Cunningham makes it sound as though Rhoades carried Julia Ricketts off against her will on a galloping stallion. In fact, no one involved seems to be claiming anything of the sort. From all appearances, Julia and James had a consensual relationship that ended bitterly. Why should we assume she had no part in the decision to stray from her marriage?

Calling the man an interloper strips the woman of all moral agency. In this situation, Julia actually had immeasurably greater responsibility to her marriage than James did; she’s the one who made the vows to Jonathan. What business does a judge have absolving her of that? Isn’t the question of her culpability (or any mitigating factors, since we don’t know what went on inside that marriage) a matter for the couple to figure out for themselves?

Cunningham might be working from the assumption that a wife couldn’t possibly have wanted sex of any sort, much less the illicit kind; that she must have been seduced or coerced, because only men are horny. He might also be viewing the marriage – and the wife – as the husband’s domain or even property, which the “interloper” interfered with. If he’s going to hark back to early modern principles, Cunningham might at least reaffirm the traditional concern for the child, rather than the husband’s rights as head-of-household.

What would it take to put the “best interest of the child” back at the forefront? Multiple commentators have noted that the Uniform Parentage Act of 2002 would address this quandary, bringing the law closer to science, human decency, and common sense. Only a handful of states have adopted versions of it thus far. The relevant portion states:

The presumption [that the mother's husband is also the child's father] is rebutted by a court decree establishing paternity of the child by another man.

This would obviously open the door to Rhoades claiming paternity.

I don’t think there’s any easy resolution to this case; everyone involved is going to feel pain over it for the rest of their lives. You can see this immediately in James Rhoades’ blog; even though he’s writing mostly about his own pain, through it you glimpse entire parallel universes of hurt. It’s evident that no one will come out unscathed, least of all the beautiful toddler at the center of the storm. But this decision – which at once grants mothers power to behave irresponsibly (see blogger Stephanie’s take on this) and degrades wives and children to a husband’s property – clearly does not do justice to either Rhoades or the child.

Read Full Post »


This week in the religion, gender, and sexuality class I’m helping teach, we read an account of the Buddhist creation myth. One of the fun things about teaching is having a chance to learn about stuff that is completely new to me. At least in the humanities and social sciences, I think this is virtually always true, even if you’re teaching in your own research field. But it’s even truer when half the course is really outside your area of expertise. So this Buddhist origin story, the Agganna Sutta, was all new to me.

And since I’m so not an expert on it, I mostly just want to offer a modest appreciation of its beauty.

There comes a time . . . when sooner or later this world begins to re-evolve. When this happens, beings who had deceased from the World of Radiance, usually come to life as humans. And they become made of mind, feeding on rapture, self-luminous, traversing the air, continuing in glory, and remain thus for a long, long period of time.
(This is from the version I read for class. Here’s the whole Agganna Sutta but in a slightly less beautiful translation.)

Such gorgeous language. Feeding on rapture! Self-luminous! Imagine such an existence.


The catch is that if you’ve got self-luminance, you don’t get to have a body. Later on, as the earth solidifies from liquid to a sweet milky substance to plants (a nifty evolution story), the formerly radiant beings grow more solid. As they solidify, they come to know cravings. And so, although there’s no Eve to take the fall for the Fall in this story, sex becomes a polluting force. (Buddhism has its own issues with women and the flesh, as it turns out.)

But even so. I would like to be self-luminous and traverse the air for just an hour, as long as I didn’t have to stay in the World of Radiance. The funny thing is, I feel like I have my share of radiance in my own little earth-bound life with all its cravings, desires, and beauty.


The petite red tulips come from the Bear’s elementary school; the others, from my garden. All photos by me.

Read Full Post »

German condom/anti-HIV ad: “For young vegetables, too.” I’ve seen this and others in the series posted publicly at bus and subway stops all over Berlin. Photo by flickr user compujeramey, used under a Creative Commons license.

I heard an extraordinary story this evening while I was serving on a panel on reproductive rights at my college. One of the other panelists was a young, smart, committed Americorps worker who’s been dealing with foster kids. When she first started her job, a 14-year-old asked her if she could procure condoms for her. The girl had very little money and was afraid of getting busted for stealing them.

So my co-panelist said sure, I’ll work on getting you some. But when she approached her supervisor, she was told, “What? You’ve got to be kidding. This girl has already had chlamydia. She can’t be trusted with condoms. We’ll put her on Depo-Provera.”

When this boomeranged back at the girl, she protested that she didn’t need birth control after all.

I don’t know if the girl ever did get put on Depo-Provera, aka “the shot”; my co-panelist never found out. (And of course all of this is second-hand, but my co-panelist seemed pretty reliable, and a colleague of hers confirmed some of the details later in the evening.)

But boy, can I understand why she wouldn’t want it. Just in case you’ve forgotten, Depo-Provera was highly controversial in the 1970s and 1980s because it was tested on poor women, partly in developing countries and partly among American minorities, in circumstances where free and informed consent was a virtual impossibility. The FDA kept it off the market for many years, and its eventual approval in 1992 provoked strong public opposition. How it morphed into a respectable form of birth control, I don’t know; I was out of the country for most of the 1990s (though I did somehow get in on the Starr Report).

Depo-Provera has a much nastier risk profile than its hormonal cousin, the birth control pill. This starts with nuisances like nausea. If the pill makes you queasy, you can get off it and return to normal within a few days. If the shot makes you sick, you’re stuck with it for three months, as my co-panelist pointed out. The emotional side effects are also harsh. The only person I knew who was on it suffered from serious mood swings – and no, I don’t know why she stuck with it.

More seriously, a few years ago the FDA required a “black box” warning for Depo-Provera because it induces bone loss, which may be irreversible. What a perfect drug for a 14-year-old girl who might still be growing!

This is such a lousy idea that it’s tempting to just rant at the social worker and call her an evil idiot. And yet, I think if I do that, I overlook how overburdened that worker must be, with a swollen caseload and never enough resources. This region is quite poor, and for reasons both budgetary and human, the temptation must be tremendous to do anything, everything, to stop yet another child from being born into the foster care system. Compliance is obviously another big selling point: a girl living in unstable circumstances might forget to take the pill, which is a non-issue with Depo-Provera.

And yet … the idea that her history of chlamydia meant that condoms would be inappropriate? That is just Orwellian logic. What will she catch next time – maybe HIV? Adding to the sick absurdity, her infection didn’t actually come from consensual sex; she got it from being raped.

What I don’t know – and am trying to figure out – is how much power the state has to force or coerce girls in foster care to get the shot. Whatever power it has would presumably come from its parental role. Which raises a related question: Can a parent legally force his or her underage daughter to get Depo-Provera? If I find any answers (in comments here, or through my thus far fruitless googling), I’ll let you know.

Read Full Post »

As my dear regular readers know, I have two young sons. I teach women’s and gender studies, so I might spend more time than the average parent contemplating how they’re affected by prevailing ideas about masculinity. I try to let them unfold as they are meant to be, without a personal or political agenda. I teach them to use words, not blows, to settle differences (even if the words get loud sometimes). I want them to grow into kind and generous people with a reasonable share of happiness.

And yet, in some cowardly corner of my heart, I fear the day when one of them is called a sissy – or a pansy – if he refuses to swing a punch.

It’s silly, isn’t it? Because when you think about it, pansies aren’t weak; they’re not even the opposite of the most conventional notion of masculinity.

They can be subtly handsome.


They can strut an assertive, spunky personality.


They can be contemplative and subtle.


They can be blindingly bright.


And they can be tough, persistent survivors. This whole lot made it through an Ohio winter that gave us eleven (11!) snow days. (Granted, most of those were pretty bogus; I think the pansies survived them more unscathed than I.)


Which of these things should I not hope my sons will become?

Pansies and photos are mine – lucky me!

Read Full Post »

Back to Babylon


Lion of Babylon, again from Berlin’s Pergamon Museum. Photo by Flickr user Alberto Marin, used under a Creative Commons license.

Sara Robinson at Orcinus has a chilling analysis of the coercion applied to the women at the FLDS Yearning for Zion Ranch in Texas. Robinson weighs whether they were “brainwashed” or (only?!?) “coerced.” Whatever term you pick, she shows that the pressures on them were extreme.

I was fascinated, in a train-wreck sort of way, by the unbelievably close parallels between Robinson’s description and the harsh control of women under the Mesopotamian patriarchies that I wrote about in my last post. (You should probably read that one first if you want this post to make any sense.)

There are admittedly a couple of differences that make the FLDS out to be a hair more lenient. For instance, the FLDS girls in Texas were schooled through the tenth grade in the unlikely event that they didn’t marry sooner. While some Babylonian women were able to read and write, literacy was far from universal. (I’ll use Babylon here as shorthand, though there were some territorial variations and women’s situation also deteriorated over the centuries.)

But otherwise? Here’s the FLDS (quoting Robinson, here and throughout):

Almost every feature of these women’s lives is determined by someone else. They do not choose what they wear, whom they live with, when and whom they marry, or when and with whom they have sex.

In Babylon, marriages were arranged. Women had no choice about the modesty of their attire; to veil or not to veil was determined by their position in society. Slaves and prostitutes were most obviously forced into sex, but wives and concubines weren’t exactly free to decline, either.

From the day they’re born, they can be reassigned at a moment’s notice to another father or husband, another household, or another community.

Debt slavery, anyone? The loaning out of wives? The so-called cradle of civilization had all that.

Everything they produce goes into a trust controlled by the patriarch: they do not even own their own labor.

This is actually worse than the Code of Hammurabi, which granted women the right to hold property in their own names. So much for progress.

If they object to any of this, they’re subject to losing access to the resources they need to raise their kids: they can be moved to a trailer with no heat, and given less food than more compliant wives, until they learn to “keep sweet.”

Obedience was paramount in the ancient Middle East as well, and disobedience constituted grounds for a man to divorce his wife and abandon her to poverty. Here, too, a Babylonian divorcee might be better off than an FLDS wife, since her husband had to return her dowry.

At the very least, women who do decide to leave the sect leave without money, skills, or a friend in the world. Most of them have no choice but to leave large numbers of children behind — children who are the property of the patriarch, and whom many of them will never see again. If a woman is even suspected of wanting to leave, she’s likely to be sent away from her kids to another compound far yonder as punishment for her rebelliousness. For a woman who’s been taught all her life that motherhood is her only destiny and has no real intimacy with her husband, being separated from her children this way is a sacrifice akin to death.

This, too, runs straight parallel to the ancient idea that a woman’s children are her husband’s property. Of course, Babylonian women couldn’t just bail; there was no outside world to which they could flee.

At the very worst, death is indeed what awaits them. The FLDS preaches “blood atonement” — the right of the patriarchs to kill apostates who dare to defy them, usually by slitting their throats.

And again, as I wrote yesterday, Mesopotamian patriarchs had some limited rights to kill their wives and children. I can’t vouch for their technique, whether they favored throat-slitting or some other method.

Does this exculpate the women of the FLDS for their role in tolerating the patriarchs’ horrors – the rape of their daughters and the exile of their sons into poverty and isolation? I don’t know. A few women have left the sect, and I’d like to believe that people can find reservoirs of courage and integrity even in the harshest circumstances. But I have to be humble about this. I have no idea whether, having grown up in that climate of terror, I’d be thinking clearly enough to perceive the abuses. Nor can I know if I’d be brave enough to take my children and flee.

A true patriarchy is a closed system, even when it’s an island in a larger culture. There’s no viable escape route. Collaboration appears to be the safest path – no, it is the safest path. And the control of women and children is complete enough that most won’t even dream of a way out.

Update: In comments, labelleindifference1 points to an article by Antonia Zerbisias in the Toronto Star detailing some of the physical restraints, in addition to the mental factors considered above. Among other things, guard towers, night vision cameras, and patrol cars keep the compound under strict surveillance. Thanks for the tip, labelle.

Read Full Post »

Well, not much, anyway. Let me explain. (And yeah, I realize I’m courting trouble here.)

Before anyone starts throwing rotten tomatoes at their computer screens, I’ll give you an actual thesis statement: Patriarchies (note the plural, she says pedantically) have existed in many parts of the globe over many centuries. To call the present-day United States a patriarchy is just inaccurate. Yes, male privilege is still the rule rather than the exception. But to collapse all societies including our own into this single category ignores the substantial cracks in the edifice of male power today. The term patriarchy vastly overgeneralizes. It’s ahistorical.

Lion of Ishtar, Babylonian frieze displayed in Berlin’s Pergamon Museum; photo by Flickr user kairoinfo4u, used under a Creative Commons license.

In the gender and religion class I’m helping teach, we’ve been discussing the patriarchies of the ancient Middle East. Patriarchy was invented, according to Gerda Lerner’s now-classic study The Creation of Patriarchy, when humans morphed from hunter-gatherers into settled farmers. Increased productivity from agriculture meant people could begin to accumulate property for the first time. Control of property gave men a motive for controlling productive and reproductive resources – slaves of both sexes and fertile “free” women – and holding property gave dominant men leverage over subordinate persons.

What, exactly, did this patriarchal control look like? Relying largely on Lerner’s account, Leila Ahmed’s Women and Gender in Islam describes this history for ancient Mesopotamia. In ancient Assyria, laws were geared to give men as much control as possible, up to and including selling wives and children into slavery (or pawning them in cases of debt) and killing them under certain circumstances. Virgins essentially belonged to their fathers and were sold into marriage; virginity was thus an asset that belonged to the patriarchs. Veiling and seclusion marked wives as respectable – and their opposites, harlots and slaves, as not. (Concubines occupied a middle position in this hierarchy.) Men were free to screw around with slaves, prostitutes, and concubines. Women could be put to death for adultery.

Initially, under the Babylonian Code of Hammurabi (ca. 1760 B.C.E.), women – especially wives – did have a few rights that mitigated this bleak situation. They could hold property, practice a number of occupations, make contracts, and sign pre-nuptial agreements that might spare them from debt slavery or other abuses. Wives could hold slaves as prostitutes and pimp them out, which just goes to show how the upper-crust women were complicit in the system and profited from it.


The Code of Hammurabi in the Louvre; photo by flickr user Scott MacLeod Liddle, used under a Creative Commons license.

Over time, though, women’s status went from bad to worse throughout the Middle East, due largely to the increasing warfare and militarization in the region. Where Zoroastrianism reigned several centuries before the birth of Jesus, women lost rights precipitously, and – as Ahmed puts it – “Elements of these Zoroastrian regulations suggest that notionally women were somewhere between personhood and thingness – as evidenced by wives being legally loaned for sexual and other services.” (Ahmed, 20–21) A man could loan out his wife to another man without her consent; she had to give him sex and raise his children if he was a widower. But any offspring still belonged to her lawful husband, in accordance with the idea that “a woman is a field. … All which grows there belongs to its owner, even if he did not plant it.” (Ahmed, 20) Disobedience was grounds for a man to divorce his wife and invalidated any pre-nup. Incestuous marriage was held to be pious and a smart way to outfox demons, with the result that men married their own mothers and sisters and daughters.

What sets modern America apart from ancient Mesopotamia? Mainly, the control of women isn’t nearly systematic enough to qualify as patriarchy. In fact, patriarchy has been in a slow though uneven decline ever since the early days of Christianity. Yes, I know that Christianity has much to answer for in its history of misogyny and loathing of sex and the body. But compared to a society where women had no sexual self-determination, the ability to opt for celibacy offered women at least the chance to say no.

Fast-forwarding to today: I realize we’re still far from full equality. We haven’t had a female president. Women are still a minority in each house of Congress. Female CEOs are thin on the ground, too. Absolutely, there are fuckwits of both sexes who’d like to give the state far-reaching control over women’s reproductive lives. People like Leslee Unruh and Ann Coulter prove the point that those who’d like to restore patriarchy need female collaborators.

But the fact remains that American women do have the right to abortion, which fundamentally and fatally undermines male control of women’s reproductive capacities. We have a viable female candidate for the presidency, even if her campaign has been beset by media sexism. We’ve gone from having just a token woman or two in Congress to women making up 16 percent of each chamber – not to mention our first female Speaker of the House. (Even if I don’t always agree with Nancy Pelosi, I’m mostly glad she got the job.) Ann Coulter has a megaphone, but I’m not sure anyone other than hypnotized wingnuts takes her seriously. I’ll admit Leslee Unruh scares me, especially since her latest brainstorm is a new ballot initiative banning abortion in South Dakota – one that might actually pass, since it has a rape/incest exemption. If you want to convince yourself that the patriarchy has planted pod people among us, just read The Well-Timed Period’s take on Unruh.

But Unruh is just one super-scary chick, up against legions of young women who believe that they get to do with their bodies what they will. Women in the United States now have very substantial reproductive and sexual freedom. Even something as apparently trivial as no-strings-attached hookups undercuts patriarchal control of women (unless the women involved are being coerced). I’m not saying women should all go out and get laid to smash the patriarchy. But women’s sexuality and fertility were the main “resource” captured by patriarchy in the first place. Where women dissociate the two and claim an autonomous sexuality, true patriarchy cannot exist. This (and not the welfare of the fetus or even anti-sex hysteria) is the rock-bottom reason why right-wingers froth at the mouth over abortion rights.

Patriarchy is still absolutely a useful term. It can explain a great deal about the history behind today’s gender woes. But we’d be better off not just intellectually but politically if we reserved it for those situations where it really fits: Afghan fathers who sell their 13-year-old daughters into marriage with men four decades their senior. Or polygamist Mormon men who do the same with their daughters in Texas. When feminists use “patriarchy” imprecisely (as happens all the time in the blogosphere), it diminishes those abuses while painting us into a corner, politically. If patriarchy is timeless, then what’s the point in blaming it, much less fighting it? If instead we note that male control of women is no longer monolithic, we might have better luck dismantling its remnants – and inspiring others to join us.

Read Full Post »

Cat Got Your Tongue?


The latest National Geographic Kids claims that cats have 16 words that they use to communicate. It was driving me and the Bear nuts that the magazine doesn’t say more – is this a top-secret language, so secret that only the cats know and aren’t telling? Is it kind of like the mysterious name that T.S. Eliot describes in “The Naming of Cats”?

… The name
that no human research can discover–
But The Cat Himself Knows,
and will never confess.

When you notice a cat in profound meditation,
The reason, I tell you, is always the same:
His mind is engaged in rapt contemplation
Of the thought, of the thought,
of the thought of his name:
His ineffable effable
Effanineffable
Deep and inscrutable singular Name.

Even after googling it, the Bear and I failed completely to figure out what National Geographic was getting at. But I did find this very funny lexicat at Artsy Catsy. I’m copying their whole list; if you love cats, do visit their blog.

1) Catcall: A signaling device we use to give marching orders to humans to fill food bowls, open doors, give us chin scritches and their undivided attention.

2) Catty-corner: The proper location for our litter box.

3) Caterwaul: What we sometimes hit when we miss the litter box.

4) Catwalk: Our daily exercise regimen, consisting of short shuffles to the food bowl with occasional detours to the catty-corner.

5) Catkin: What happens to kitties who didn’t have hoohaectomies or ladygardenectomies.

6) Catalyst: What humans use to do our shopping, i.e. 1. Stinky goodness 2. Temptations 3. Litter 4. Toys 5. More Temptations

7) Catacomb: A device used on kitty spa days to remove cat-a-mats from our furs.

8) Catapult: What we do when the catacomb gets caught in a cat-a-mat.

9) Catnip: What we do when you “pult” too hard with the catacomb.

10) Catgut: An essential part of our insides, which requires constant filling with stinky goodness and Temptations.

11) Catsup: Mealtime; a method for filling the catgut.

12) Catnap: What we do when we’re not catsupping.

13) Category: The yucky stuff we yak up all over the house when we’ve catsupped too much.

Now, we at Kittywampus no longer have a resident cat since the demise of Grey Kitty a few years ago. But we still observe proper eticatte, and so – knowing what GK would’ve said – here are our humble additions to Artsy Catsy’s compendium:

Catalog: The shape deposited in our litter box after too much catsup and cat-a-mats.

Cat scan: Surveying our territory for signs that the humans are about to break out the catsup.

Catsuit: That which suits us, especially stinky goodness and Temptations.

Catamount: That’s personal! Didn’t you learn about that in biology class?

Catalytic converter: The elegant system for digesting catsup.

Caterpillar: Leader and paragon of the feline community; top cat. (Feared and loathed by Grey Kitty, who was not all that.)

Catfish: The stinkiest of stinky goodness; comes in a can and should be served at every catsup.

Catechism: The long and often futile program of schooling humans to properly cater to cats.

Catastrophe: Punctuation, used (sparingly) in cat communication. When formed by the tail, often resembles a human question mark.

Categorical imperative: The requirement for humans to cater to every feline whim; priority of feline wishes over all else.

If you come up with any more catty silliness, leave a comment and I’ll add it here.

LOLdictionarycat from I Can Has Cheezburger?

Read Full Post »


Yesterday I dragged one of figleaf’s comment threads off topic, and while he was gracious about it (as always), it made me realize this is why I have my own soapbox: so I can bellyache about obscure issues like ahistorical history and its exclusion of our embodied selves.

One of the very first things that new history grad students learn is to beware ahistoricism. You can’t impose our present values, ideas, obsessions, and worries on the past. Now, it’s fine to let your questions about the past be inspired by your present-day concerns. In fact, if your research doesn’t somehow connect up to things you care about today, I have no clue how you sustain interest over years of thesis-writing. But once you’ve got a topic and a set of questions, you have an obligation to look at your sources with as jaded an eye as possible and to be alert to their strangeness and otherness.

Here’s a hypothetical example of how that might work. Let’s say I set out to recover the voices of women within a particular social movement and show how they assumed leadership roles. But once I get into the archives, I begin to see that the movement had been overwhelmingly led by men, even though women did much of the footwork. I can still look for women’s voices, and I can perhaps show how their ideas represented a valuable path not taken. But I’d be a fool to try to insert them into leadership positions if that doesn’t match the evidence. I’d be similarly foolish to simply say, “See? Sexism kept these women down back then just like it does today!” Sure, sexism would have been at the root of their frustrations. But if I’m a smart historian, I’ll try to dissect just how sexism operated and – crucially – I’ll look for ways in which this differed from how sexism works today. With luck, I might then discover a new insight or two about the workings of gender in the past, and indirectly, that might have implications for today. Or it might not.

Most historians, as a result of their graduate training, are pretty careful about this. But there’s been one glaring blind spot: history’s understanding of the body. Up until the late 1980s, historians generally regarded the human body and our embodied experiences as outside the purview of history. Bodies are biological, right? We’re born, we eat, we reproduce, we die. Or so went the dominant and usually unspoken assumption. Even the history of medicine long concentrated on the deeds of Great Men, rather than exploring how embodiment might have changed over the years. Starting in the 1970s, social historians of medicine began to ask not just why people now live longer than in the past, or why TB is no longer such a scourge, but how these changes matter to ordinary people’s lives.

By the late 1980s, in response to the new social histories of medicine and feminism’s focus on bodies, historians were starting to explore how people might have experienced their bodies differently in the past. For me, the book that transformed how I thought about not just history but about being human was Barbara Duden’s The Woman Beneath the Skin: A Doctor’s Patients in Eighteenth-Century Germany. Duden describes a world where people experienced their bodies as much more permeable to the outside world, where the boundary between self and not-self was much blurrier than we understand it to be. This is the kind of insight that Duden could never have achieved if she’d just mirrored feminist concerns of the day and set out to show how women lost power as doctors’ authority grew (though she’s certainly argued that elsewhere, in more nuanced ways than most).

So that’s the kind of sensitivity to difference and otherness that I hope to emulate in my own research on past forms of embodied experience. In thinking and writing about the history of childbirth, I could paint the doctors as bad guys; there are plenty of examples of sloppy and sexist medicine, which I might trot out another day when I’m in an ornerier mood. But if I describe a medical conspiracy to wrest birth away from women, I miss out on the ways in which women exercised choice and agency, including how they deliberately invited doctors’ involvement in the delivery room to enhance their own safety. If I uncritically transport current arguments about the naturalness and inherent safety of birth into the past, I’ll overlook the fact that a century ago, women in today’s rich countries faced roughly a five percent lifetime risk of dying from complications of childbirth. And if I assume that my experiences with pregnancy explain much about those of women a century ago, I might ignore the elemental fact that unlike us, who can pee on a stick and see the thin blue line appear, throughout most of history pregnancy was so fraught with uncertainty that a woman couldn’t even be sure she was pregnant, as opposed to suffering a menstrual disturbance, an imbalance of hot and cold, or even the evil eye.

These aren’t examples that directly feed into political work on choices in childbirth, say, or access to abortion. But indirectly, if we’re willing to listen to their unfamiliarity, they might shake our smug assumption that our reproductive politics and institutions are quasi-natural or the best of all possible worlds. They might inspire appreciation and even awe for the vast variety of human experience. And this sort of history can play a small part, I think, toward redressing what Kochanie described in comments a few days ago as “our culture’s devaluing of the body and the tasks associated with it.”

Photo by Flickr user gadl, used under a Creative Commons license.

Read Full Post »


If the Tiger hadn’t woken me in the pitch of night (“Mama, I have to peeeeeee!”) I’d have missed it altogether. But since I was only half-zonked, I heard the rattling of the loopy metal drawer pulls on my dresser. Then I felt my bed gently rocking, but instead of the up-and-down wave motion of my partner turning over, it was more of a transverse wave. A back-and-forth. Also, none of the humans in that bed were moving. At all.

This is why it’s probably just as well that I don’t live in Palo Alto anymore. My survival instincts were never the sharpest, and they don’t improve in the pre-dawn hours. Instead of rushing into a doorframe, I realized groggily: “Oh! We’re having an earthquake!” And then I remembered: “Oh! I live in Ohio! So it can’t be an earthquake!” (Clarity of thought is inversely proportional to the number of exclamation points, at least in my little brain.) And then I thought something even less coherent about Memphis and how it will probably be leveled someday due to the rigid fault line that runs nearby, remembered I don’t live in Memphis, and sank back into stupid sleep.

It was a restless sleep, though, punctuated by dreams in which I kept trying to convince some unseen listener that beds don’t naturally move from side to side. Nor do drawers rattle untouched by human hands. There was some static about whether a school bus or garbage truck roaring past my house might have caused the same symptoms. And then, as I gave up on convincing my invisible interlocutor, the whole experience slipped into a dreamworld and I forgot about it altogether …

… until I’d already packed the kids off to school and was listening to NPR, when a report came on about an earthquake that was felt as far east as Cincinnati.

So I wasn’t hallucinating or dreaming. The quake was real. It was centered in Illinois, arrived at 4:37 a.m., and measured 5.2 on the Richter scale, according to the AP.

I’m here to say that contrary to all the news reports, the quake was palpable even in the southeastern corner of Ohio.

The last time I was in a quake was in 1987, when I was working in San Francisco; I hid under my desk, thinking maybe I shouldn’t have eaten up all my Carnation Breakfast Bars that were supposed to be my disaster stash. But the motion of this one, the pronounced lateral waviness, reminded me more of one I’d experienced while in college circa 1985, when I sought shelter in the doorway of the Research Administration office, where I was working part-time. That one, too, swayed us from side to side, though much more dramatically; I remember clutching the doorframe.

Since no one was apparently hurt, I’ll confess that I was pretty thrilled to experience a quake again. And amazed at how – even after all these years, with my brain more slumbering than not – my body knew exactly what I was feeling.

If you felt the quake, too, I’d love to hear how it was where you live.

This tulip lives in front of my house.

Read Full Post »

So I’ve mentioned that I’m a discussion leader for a class on religion, gender, and sexuality. It’s been huge fun so far. Today my group ended up talking about abstinence-only education. One of the men said his public school had used an abstinence curriculum featuring sex that leads to heroin use and a stolen kidney! Oooh, I think they were doing it wrong.

But that’s not what I really meant to share. As you probably know, by the Middle Ages the Catholic Church had decisively elevated virginity as holier than matrimony. But given Paul’s injunction that it was “better to marry than to burn” (1 Corinthians 7:9), and also given that the vast majority of Christians weren’t flocking to monasteries, the Church was wrestling with how to regulate sexuality within marriage.

Enter the Penitentials. Monks began writing down and codifying regulations on sex from the sixth century onward. If you broke the rules set forth in the Penitentials, you had to confess your sin to your priest and he’d prescribe the proper penance, which might involve fasting, fines, or sexual abstinence. For a few centuries, the Penitentials were the key instrument for disciplining the faithful.

As Michel Foucault wrote in The History of Sexuality, the Penitentials also gave people a raft of ideas about specific ways in which it might be deliriously fun to sin. But that’s another story.

As a non-Catholic, I first encountered the Penitentials through a book I read in grad school, Law, Sex, and Christian Society in Medieval Europe by James Brundage, which I got to revisit for class today. Brundage’s book is a 698-page scholarly tome covering the Church’s medieval ideas about homosexuality, adultery, and the whole gamut of sexual behavior. But if all you want to know is how sex in marriage was circumscribed, Brundage provides a great one-page flow chart. Our students loved it. And I have to agree it’s pretty hilarious – from the safe remove of the twenty-first century. My favorites are the rules against getting naked and against doing it in church.


The chart comes from Brundage; I swiped the scanned version from BoingBoing.

Read Full Post »

Ephemeral Gold

Since I need to be working on my infernal overdue article, and since I want to be mucking around in my garden, I’m not going to say much today. Instead, I’ll defer to Robert Frost, who I think might’ve had my little garden in mind when he wrote this, one of my favorite poems: “Nothing Gold Can Stay.”

Nature’s first green is gold,
Her hardest hue to hold.


Her early leaf’s a flower;


But only so an hour.


Then leaf subsides to leaf.


So Eden sank to grief,


So dawn goes down to day.


Nothing gold can stay.

And yeah, I realize that’s an awfully melancholy sentiment on a mild spring day awash in unfamiliar brilliant sunshine. Yet this poem has been running through my head all day as I’ve biked past trees that are just leafing out green-gold, the very newness of their buds whispering a story older than Eden of transience, loss, and rebirth.

All photos are from my garden, taken a few days ago.

Read Full Post »


I just learned today that a conference paper I assumed would be due next month is due … today. Oops. The good news is that hardly anyone meets the so-called deadlines for conferences. (Students of mine – past, present, and future – you didn’t hear me say that, and if you did, you’re not off the hook!)

This paper is unfinished partly because I bit off way more than I’m actually qualified to handle. The title is “The Meaningful and the Mute: Theorizing and Historicizing Embodied Experience.” Translated into everyday language, the subtitle really ought to be: “WTF Was I Thinking?” I’m a historian; I’m so not a philosopher!

What I think I want to say in this paper is that the experiences we have as embodied creatures – those experiences that directly engage our bodies – are particularly meaningful and transformative, not just to us as individuals but to the evolution of society and politics. I want to make the argument that intensely embodied experience is different from other kinds of experience because people are less likely to reflect on it. And if we take it for granted, it can hold greater sway over us.

Of course, you might object, all of our experience is embodied. And you’d be right. As I sit typing this, an activity that mostly involves thought, my mind isn’t somehow mysteriously disembodied like one of those pulsing, glass-encased brains in an old sci-fi movie. All kinds of chemicals are coursing through me, my fingers are moving faster than my thoughts, and my upper back is twinging just enough to be a slight distraction. It’s not just that we have bodies; we are our bodies. One thing I obviously need to do in my paper is take some swings at good old mind-body dualism.

But the kinds of experience I’m thinking of are those in which our bodies are fully engaged, where we’re completely inhabiting our bodies. Sports might be one example (though not one I’ll explore, because as a serious lifelong klutz I’m not interested enough). Sex is one you’ve probably thought about already. Illness. Dancing. Physical work (at least some of the time). And then there’s childbirth, which will be the main focus of my paper, since that’s what I study as a historian.

The main problem I think I’m facing (though I’m sure others will crop up) is how to argue effectively that this sort of experience is uniquely influential without simply assuming what I’m trying to prove. I need to spend some time communing with the philosophers, especially Merleau-Ponty and the feminists who’ve written on him. But at the end of the day, I’m still not gonna be a philosopher, and we historians are expected to cough up some evidence.

You might see more posts on these themes if I can manage to be not too jargon-y and full of hot air. In the meantime, I’m interested in others’ take on it. So please let me know whether you buy the idea that certain kinds of experience are more potent than others in shaping our selves, our identities, our communities.

Image by Flickr user nathaliebee, used under a Creative Commons license.

Read Full Post »


Barack Obama is in hot water over his suggestion that working-class voters are bitter over their economic disenfranchisement and are seeking solace in distractions like guns, religion, and hostility toward people unlike themselves. Here’s the quotation that’s causing the brouhaha, in case you’re even further behind the news than I am:

“You go into some of these small towns in Pennsylvania, and like a lot of small towns in the Midwest, the jobs have been gone now for 25 years and nothing’s replaced them,” Obama said. “And they fell through the Clinton Administration, and the Bush Administration, and each successive administration has said that somehow these communities are gonna regenerate and they have not. And it’s not surprising then they get bitter, they cling to guns or religion or antipathy to people who aren’t like them or anti-immigrant sentiment or anti-trade sentiment as a way to explain their frustrations.”
(Source: Time)

Marc Ambinder at The Atlantic rightly points out that the trigger words here are “bitter” and “cling.” Also “guns” and “religion”; if a candidate can’t even mention those terms except to praise them, we’re in big trouble. I’ll get to them, too, in a moment.

Aren’t people embittered? On this score, Obama said nothing new. It’s basically the same argument that Thomas Frank put forth a few years ago in What’s the Matter with Kansas? Frank himself, interviewed by Sam Stein for the HuffPost, confirmed that he found a country rife with bitterness when he researched his book. Even apart from economic issues, there’s plenty of cause for embitterment, starting with our lost war and eviscerated Constitution, and ending with eight lost years when it comes to energy independence and climate change.

“Cling” was a crappy choice of word, and I won’t try to defend it. It makes people sound weak – and who wants to see themselves as a weakling? Had Obama left out that word, he might’ve avoided this shitstorm. Instead, both of his opponents immediately attacked him as “elitist.”

Democratic guru Bob Shrum points out at the HuffPost that it’s the other two candidates, Clinton and McCain, who are far more deeply rooted in the economic and political elites:

Ironically, Obama’s the one raised by a single mother. He’s the one who only recently finished paying off his student loans. He doesn’t know what it’s like to have $100 million. The opponents who are attacking him are the ones who inhabit that financial neighborhood. …

The Clintons haven’t lived in the real world for at least twenty-five years; they’ve been in a bubble surrounded by aides moving from one mansion to another. This doesn’t mean they don’t care or can’t empathize. But it does make it awkward to damn the guy who was a community organizer helping laid-off steelworkers as someone who is out of touch.

The Clinton-McCain axis can portray Obama as elitist partly because it’s clear to them, and to much of America, that as a half-black man Obama can’t be part of the working class, no matter how humble his origins. This assumption is rooted in real material conditions, on one level: Those laid-off steelworkers are overwhelmingly white men. But it’s primarily ideological. Ever since the Reagan era, many white Americans “know” that poor blacks belong to the underclass. It’s evident that Obama’s too well-educated for that, so he must belong to the elite. There’s no middle ground in this ideological binary, which of course willfully ignores the actual existence of America’s black middle class.

The elitism smear sticks a bit too easily to Obama because his education left an imprint on him that’s familiar to anyone else who’s enjoyed a highly privileged education. I didn’t go to Columbia and Harvard Law, but I did study at two fancy-pants private universities (with oodles of financial aid, which Obama must have received too). I recognize his ability to project an almost aristocratic intellectualism and an aura of deserving to lead – both of which are by-products of that sort of rarefied education. I suspect this is just as recognizable to people who’ve been shut out of privilege. To the extent this inspires resentment, Obama’s opponents can exploit it.

Probably more importantly, as long as Clinton and McCain are willing to kowtow to unreason and anti-intellectualism, they can paint Obama as elitist merely because he refuses to genuflect to guns and fundamentalism. But why do “guns” and “religion” set people off? Why does their mere mention make a candidate vulnerable to charges of elitism? Like opposition to immigration, which Obama also cited in his remarks, NRA-style pro-gun advocacy and fundamentalist religion are rooted in profoundly irrational human impulses. These are pre-Enlightenment refuges.

By contrast, Obama expresses a faith in human reason, decency, and civil discourse that’s rare in our political culture. This, more than anything, may be what’s read as elitism. We live in a time when reason, intellectualism, and science have all been smeared with the mud of elitism. The Republicans have succeeded brilliantly in discrediting all of these things as weapons that a powerful class of liberal intellectuals wields against the common man. In fact, this is all a smokescreen for the Republicans’ own manipulations, but that doesn’t stop them from casting people like me (me!?!) as the enemy of ordinary people. Does it make any sense? Let’s just say I haven’t shipped anyone’s job overseas lately. Does that matter? Heck no.

Seen from this angle, a certain kind of elitism – leadership, myth-busting, wisdom, and discernment – might be just what’s needed to dismantle the illusions the right wing has constructed. I’m not much of a Marxist, but the old Marxian notion of “false consciousness” doesn’t seem like an entirely wrong label for those illusions. I’d prefer to see myself as a radical constitutional democrat (small “d,” this time). But when people have been so thoroughly misled about their own interests that they consistently vote against them, a dose of benevolent elitism might be a necessary corrective.

Non-dogmatic kitteh from I Can Has Cheezburger?

Read Full Post »

Academics Anonymous


One of the paradoxes of academics who blog is that we practically grovel for name recognition in all of our official work. You get published in your field and your name is the currency that helps you get a job, keep a job, earn tenure. Scientists tussle over who gets to be lead author on a paper. Yet academics who blog tend to go underground, taking on a pseudonym and often not revealing their blog to their colleagues.

It’s not just bloggers who do this. Authors of personal essays in The Chronicle of Higher Education routinely use pseudonyms, too. This practice recently came under fire in the Chronicle with an article by Peter Plagens hyperbolically titled “The Dangers of Anonymity”:

I understand why Valerie Plame might want to use a pseudonym, or why Larry Summers probably should have used one, but I don’t understand why so many academics, even when writing fluffy little “casuals,” think they have to use them. The practice is particularly common in The Chronicle’s Careers section, with articles that are neither scandalous personal confessions nor heroic acts of whistle-blowing.

Plagens’ argument boils down to his accusation that these authors are, in a word, “chicken.” He sees no reason why people can’t use their real names while complaining about leaky faucets or airing their fantasies of being a biker chick.

The specific authors Plagens attacked got a chance to respond in the Chronicle. They very reasonably said they didn’t want to be Google-able from here to eternity by current students or future employers. They pointed out that academic freedom is pretty damn fragile if you’re untenured, and that Plagens’ proposed remedies for discrimination – suing your colleagues’ asses or getting a shiny new job – are un-amusing and often infeasible. Even barring serious repercussions, these authors are reluctant to poison relations with co-workers who’d dread appearing in an essay lampooning them or their department.

But none of these authors addressed what I see as the biggest barrier to using one’s real name: the threat of not being taken seriously. Dr. Crazy hints at this issue in her blog, Reassigned Time:

Sometimes people want to write about the mundane. Tragically, the mundane does not generally accord one professional accolades. While it’s true that one might not face profoundly negative repercussions (like not getting tenure) for writing such things under one’s “real” name, one also will not receive professional accolades. In a culture of tenure and promotion that depends upon accolades, well, it certainly doesn’t make sense to write about the mundane under one’s “real” name. Why? Because, well, it makes one seem mundane as opposed to outstanding, which is what tenure committees even at the most lame universities seek.

Yes! And in fact, if you look at the quotation I grabbed from Plagens, you can see from his use of the term “fluffy little ‘casuals’” that he doesn’t just object to anonymity or pseudonymity, he’s sneering at anything less than Deep Serious Intellectual Texts.

Writing about anything personal can quickly be perceived as not just mundane but frivolous. Sure, once you’ve achieved a reputation through more conventional channels, you may get away with publishing glimpses of your personal life. (I’m thinking of the autobiographical portions of Susan Bordo’s wonderful The Male Body or Jane Gallop’s Feminist Accused of Sexual Harassment.) But if you’re not already famous, you tread lightly. Academics and network news anchors are about the only remaining professions where “gravitas” seems to be regarded as a basic job qualification. (This is no longer even required of the POTUS, as evidenced by the Current Occupant.)

Academics who blog bump up against the prevalent academic norm that there’s no such thing as “spare” time. You ought to be devoted to your job 24/7, living a sort of modern-day monastic life. Which is why parenthood, and motherhood especially, is too often regarded as crippling one’s chances at a tenured position (whether that’s true or not in any particular case). (Note that failure to win tenure doesn’t just mean job insecurity; it often means unemployment and a strong chance you’ll never work in your field again.)

There’s also a gendered dimension to this. Insofar as women are still taken less seriously in many academic disciplines, there’s probably more pressure on us not to appear too frivolous. We’re also still more closely associated with the body, which means that if we blog about mothering or sex or anything else with a major corporeal dimension, we may play into stereotypes and again provide fodder for those colleagues who still have (usually unarticulated) problems seeing women as their equals. We’re also too quickly presumed to be mired in our personal lives.

All of this can vary, depending partly on your discipline. Women remain highly marginal in many of the sciences, but indefinable bullshit like gravitas seems to matter less there. In the humanities, women are quite prevalent, but a certain tweedy seriousness plays more of a role than in the sciences. (Picture the historians who appear on TV as talking heads. Doris Kearns Goodwin is about the only woman among them, and she sure does the tweedy thing.) In the program where I currently teach, women’s studies, none of these intangibles seem to be very important. I’d have no problem with my colleagues reading my blog; they’re wonderful, real people. But we’re also marginal to the rest of the university.

Being pseudonymous offers some real benefits, too, as the Chronicle commentators point out. When you detach from your real-world identity, what you write can more easily be read as universal. You can develop a different voice than you might use in your other writing projects. You can explore personal topics frankly. You can tell the truth as you see it without embarrassing innocent bystanders. All of these benefits apply to pseudonymous academic bloggers, too, as Profgrrrrl has thoughtfully explored.

Is this irresponsible, much less “dangerous,” as Plagens suggests? Dr. Crazy notes that there’s a big difference between pseudonymity and anonymity.

Pseudonymity … is not about being untraceable but rather about taking on a traceable identity that is distinct from one’s legal identity, or one’s identity at birth. It’s about taking on a “pen name,” a name that people can follow, and by extension a way of thinking that people can follow.

If you use a pseudonym, you develop a consistent persona over time. In fact, it’d be really hard to do otherwise. You also feel a sense of responsibility to your readers. As I learned last week when I got attacked by Clintonista partisans for blogging on the O’Bleness story, I felt no less beholden to getting it right just because I wasn’t using my legal name. I carefully re-examined what I’d written, and precisely that self-scrutiny let me feel confident that I hadn’t distorted the truth insofar as it could be known from a sparse set of facts.

I was also grateful for pseudonymity when I started getting hateful comments. Someone who really wanted to track me down could do it, but I haven’t left a trail of bread crumbs leading straight to me. If there’s any danger lurking out there, it’s not from “chicken” grad students and professors airing their dreams and complaints under an assumed name; it’s from crazies and stalkers who’d like to put the chill on those of us they call eggheads, surrender monkeys, and feminazis. In this climate, I’m happy to share a name with the world’s yummiest cherry tomato.

Gratuitous crocus photo from my garden, taken about a week ago.

Read Full Post »

Those Bloomin’ Taxes


I should be working on my tax return, but dang it, I haven’t filed away any paperwork since last June, so I first have to sort through a foot-tall pile of paper. True to form, I’m procrastinating. Which is just how I got into this pickle in the first place.

One of the other annoyances of tax season is how it coincides with planting season. This morning I said to heck with the taxes and planted some sweet pea seeds. Then I came inside and read about a nifty proposal that would give tax breaks for gardening! Well, not necessarily for planting sweet peas (they’re poisonous) but for growing food in our yards, similar to the Victory Gardens in WWI, except this time with a little tax incentive.

Writing in Alternet, Roger Doiron says:

I am proposing that home growers finally catch a break. Not from bugs, weather, or clunky garden shoes, but from taxes. It’s not as silly an idea as it may sound. We give tax breaks to people to encourage them to put hybrid cars in their garages and solar panels on their roofs, so why not offer incentives for solar-powered, healthy food production in their backyard? …

More home gardens would offer us victory not only over rising food and health care costs, but also foreign oil dependency and climate change. Researchers estimate that locally-grown foods use up to 17 times less climate-warming fossil fuels than foods from away. And when it comes to local foods, it doesn’t get any “localer” than one’s own yard.

Doiron would have the government waive taxes on gardening supplies and – more significantly – offer an income tax deduction for a kitchen garden (or for rental of a community garden plot) similar to the break for a home office, based on square footage.

This is such a cool and clever idea. It won’t save the earth all on its own. But as the price of oil climbs ever higher, it might help ease the transition to the more local world that we’ll all be forced to inhabit in the future. On a less lofty but no less important note, more people might discover the pleasures of perfectly fresh vegetables: tender-crisp baby lettuce, sun-warmed tomatoes, sweet buttery purple-podded beans.

By the way, that sweet feline pansy pictured above, taunting those of you who are still digging out from winter? It survived from last fall, along with most of its companions, under layers of snow and discouragement. And since it’s a pansy, it’s edible – though this particular specimen probably has too much dirt-and-oil grime from the street in front of my house.

Read Full Post »


Anyone who grew up in North Dakota perks up when their home state makes national headlines. So this week, when I saw that there may be massive oil fields in the wild western half of the state, I got excited even though I haven’t lived there in nearly three decades.

According to Andrew Leonard at Salon, earlier estimates ranged as high as 500 billion barrels in the Bakken shale formation, which extends from North Dakota into Montana and Canada. (I hope this doesn’t mean we’ll have to invade Canada.) Even if that figure were correct, no more than half would be recoverable in the best-case scenario.

Now, with the release of a United States Geological Survey report on Thursday, the amount of technically recoverable oil there has been estimated at between 3.0 and 4.3 billion barrels, as Leonard reported. (See his post for links to the actual report.) Note that this is technically recoverable, which still doesn’t tell us if or when it’ll make economic sense. As Leonard further notes, the extraction process for shale oil usually involves pulverizing mountains. Here, companies would likely use “horizontal drilling,” which the AP described as follows:

Oil companies began sharing technology about two years ago on how to recover the oil. The technology involves drilling vertically to about 10,000 feet, then “kicking out” for as many feet horizontally, while fracturing the rock to release the oil trapped in microscopic pores in the area known as the “middle” Bakken.

If it seems like there ought to be a better way, I’ve got a fine idea. North Dakota has another major resource that’s never been a secret to its sons and daughters: wind.

Way back in 2000, the New York Times reported:

Together, South Dakota, North Dakota and Texas have sufficient wind resources to provide electricity for the entire United States, according to studies cited by Energy Secretary Bill Richardson.

Being a good North Dakotan, I read that piece. And then I saw dollar signs. My dad still owns six quarter sections of land. It’s not prime farmland, but wind? Boy, have we got it!

What we don’t have is transmission capacity to move all that electricity out of the Dakotas and into the rest of this energy-greedy country. We also don’t have clever ways of storing really massive amounts of electricity. Those are the two things that would lay the groundwork for large-scale exploitation of wind power.

Of course, revamping our transmission grid and reinventing the battery would require huge investments. It’d take a major public initiative. But it might still be cheaper than pulverizing or drilling under the western half of North Dakota. It would certainly be cheaper than invading any more countries for their oil – yes, even cheaper than attacking Canada, never mind Iran.

Photo of the North Dakotan Badlands by Flickr user zanzibar, used under a Creative Commons license.

Read Full Post »
