The entire human genome has been read. Even ten years ago, that seemed a distant goal; but last month, scientists announced that they had completed a “rough draft” of the complete recipe for a human being. It will soon be available on compact disc for anybody to read: a book 800 times longer than the Bible. This breakthrough will open an amazing world of possibilities for medicine, including the prediction, prevention, and treatment of many diseases, from kidney stones to cancer.
But why stop at disease? Instead of merely eliminating the negative, why not accentuate the positive, by tinkering with the text to improve it? After all, in pursuit of the perfect human being, we have willingly tried every weapon that falls into our hands, from prayer to psychoanalysis to breast implants. Will we (and should we) do the same with genes?
It is fashionable to answer that we should not, that we probably nonetheless will, and that that would be a disaster; I am more sanguine. Genetically modified people will not pose a great threat to society, even if we choose to create them; but we will, on the whole, not choose to create them, so there is little to worry about on this score.
A SAD LEGACY
Discussions of these issues are burdened by a complicated history. A century ago, progressive social reformers were obsessed by the new agenda of “eugenics.” There was a sense of urgency in their desire to improve the human race by selective breeding. It had worked well enough in cattle and chickens, but we human beings were not only failing to breed from the best specimens; we were allowing the worst to have the most children.
“Some day,” said Theodore Roosevelt in 1910, “we will realize that the prime duty, the inescapable duty, of the good citizen of the right type is to leave his or her blood behind him in the world.” In the same year, Winston Churchill lobbied for compulsory sterilization of the mentally handicapped: “I feel that the source from which the stream of madness is fed should be cut off and sealed up before another year has passed.”
Britain never did pass such a law, thanks to determined opposition from a libertarian member of parliament named Josiah Wedgwood. In America, however, states began to pass laws allowing mandatory sterilization. In 1927, the Supreme Court upheld Virginia’s eugenic-sterilization law in Buck v. Bell. Carrie Buck, whom the state wished to sterilize, lived in a colony for “epileptics and the feeble minded” in Lynchburg, with her mother Emma and her daughter Vivian. After a cursory examination, Vivian was declared an imbecile (she was six months old at the time!), and Carrie was ordered sterilized to prevent her from bringing more imbeciles into the world. Supreme Court Justice Oliver Wendell Holmes thundered that “three generations of imbeciles are enough.” Compulsory-sterilization laws were thenceforth upheld in many states; more than 100,000 Americans were sterilized under them.
The tragedy of that story lies not in the science behind eugenics, but in the politics: It is the coercion that was wrong. An individual who volunteers for sterilization is doing no harm, whatever his or her motives; one who orders another to be sterilized against his or her will is doing wrong. It is that simple.
The eugenic movement began with the best of intentions. Many of its most strident advocates were socialists, who saw eugenics as enlightened state planning of reproduction. But what it actually achieved, when translated into policies, was a human-rights catastrophe: the rejection of many immigrants, the sterilization of many people whose only crime was to have below-average intelligence, and eventually, in Germany, the murder of millions of people.
But now, a century later, we are once again practicing a sort of eugenics: We abort fetuses that would be born with Down syndrome or inherited disorders. In New York, Ashkenazi Jews who carry the Tay-Sachs mutation can avoid marrying each other through blood testing organized by the Committee for the Prevention of Jewish Genetic Disease. We also stand on the brink of cosmetic genetic engineering.
Are we simply repeating the mistakes of the past? No. The principal difference is that whereas eugenics, as conceived in the early part of the 20th century, was a public project, modern genetic screening is a private matter. Only China still preaches eugenics for the good of society; everywhere else, modern eugenics is about individuals applying private criteria to improve their own offspring by screening their genes. The benefits are individual, and any drawbacks are social: exactly the opposite of the old eugenics.
Another difference is precision: The selective breeding of the past worked slowly and unpredictably, but today we can insert a gene into an organism and be all but certain what the effect will be. Inserting the genetic phrase “Make insulin!” into a bacterium, for example, transforms the life of a diabetic.
Genetic engineering of plants and animals is now routine. Only the genetic engineering of human beings is forbidden.
It would work: Of this, there is no doubt. Take a simple example: the gene on chromosome 4 that is associated with Huntington’s disease, a terrible mental affliction of middle age. You could go into the gene in a fertilized egg, find the crucial phrase “CAG” (which, in affected people, is repeated more than 39 times in the middle of the gene), and remove about half of the repeats. It would not be easy, but it could probably be done. The result would be a healthy person with no risk of Huntington’s, and no risk of passing it on to her children.
This procedure might become a neater and less intrusive option than the three now available: 1) screening followed by abortion, the course of action followed by many couples who find themselves carrying a child with a devastating disease; 2) pre-implantation genetic diagnosis, which is in vitro fertilization followed by implantation of a healthy embryo and rejection of one that carries the faulty gene; and 3) gene therapy, the immensely difficult and dangerous procedure of trying to infect enough cells in the body with a gene-carrying virus to correct the faulty gene.
It is not just medical genetic engineering that is feasible; cosmetic genetic engineering could also begin tomorrow, though it would still be very primitive. There is a gene on chromosome 17 called the ACE gene, which comes in two equally common varieties, long and short. On average, people who inherit two long ACE genes (one from each parent) are better athletes than people who inherit two short versions. For instance, of 123 British-army recruits, the long-gened ones improved their weight-lifting ability much faster during training than the short-gened ones.
It would be comparatively trivial to engineer a human embryo so that it had two long ACE genes. The result would be a child slightly more likely to win long-distance running races. There would be no risk of unpredictable consequences, because about one in four of us have two long ACE genes already; it is not unnatural. It would not be cruel to the child, and it would have no consequences for society. But should it be allowed?
Everybody, from scientists to theologians, seems to agree that human genetic engineering is wrong. Their reasons are a mixture of respect for the sanctity of life and fear of the unpredictable. But if they were ever to relent and allow it, it would be for medical, not cosmetic purposes. To correct a cruel inherited disease like Huntington’s is one thing. To use the same technology to correct a subjective defect, such as non-blue eyes or short stature, would be quite another.
NO NIGHTMARE SCENARIOS
Yet even at this “easy” end of the spectrum, there are uncomfortable questions. Cures might seem uncontroversial, but they are not. My colleague, the sociologist Tom Shakespeare, has achondroplasia, but he regards his inherited short stature as a disability only to the extent that society imposes that prejudice on him. With a first-class degree from Cambridge University and a good career, he does not see his genetic disorder as a reason for eliminating future people like him.
He sees genetic engineering as undermining society’s respect for people like him, because it sends the message that disability could be, and therefore perhaps should be, eliminated. In a genetically engineered society, the parents of a genetically disabled child would feel social opprobrium for not having “done something about it.”
I see his point, but I do not fully agree with it, for the following reason: I see nothing in history to suggest that the ability to cure a condition lessens compassion or respect for its sufferers. Indeed, society’s respect for people with “preventable” disabilities has surely never been greater than now: People with Down syndrome, for example, were once abandoned or shunned; they are now treated with much more respect.
I suspect it will prove impossible in practice to draw a line between cure and enhancement, between medical and cosmetic genetic engineering. One person’s cure is another person’s enhancement. Is an inherited weight problem a disease? Would it be a cosmetic enhancement to “cure” dyslexia?
Assume that one day, genetic alleles that predict homosexuality are discovered (a wild assumption, but not inconceivable). To a heterosexual couple, disabling the “gay genes” in their potentially homosexual child might seem like a “cure” that prevents an “abnormal” life. But if so, then for a homosexual couple trying to procreate through a surrogate mother, disabling the “straight genes” in their child might also seem to be a “cure.”
James Watson famously remarked a few years ago that if a mother, following a prenatal diagnosis, wanted to avoid having a gay child, that was up to her. After all, more than 95 percent of abortions are carried out for the convenience of the mother and are of “normal” fetuses. It was hard for Watson to see who else should be allowed to make the decision on her behalf: “These things should be kept away from people who think they know best . . . I am trying to see genetic decisions put in the hands of users, which governments aren’t.” This led to a predictably hysterical headline in a British newspaper: “Abort gay babies, says Nobel prize winner.” Note the misleading use of the imperative mood: Watson was arguing against coercion, not for it.
Another objection to genetic engineering is that it would drive out diversity, as people converge on the “ideal.” Again, I think this argument is mistaken. Far from threatening diversity, genetic engineering may actually increase it. Supposing cosmetic genetic engineering became accepted, musical people might seek out musical genes for their children; athletes might seek athletic genes; etc. It is very unlikely that everybody would choose the same priority.
If diversity is not threatened by genetic engineering, then another argument (that in a genetically engineered world, there would be an underclass of those who could not afford the procedure) also evaporates. In this scare scenario, the rich might buy themselves an even better start in life with the best genes. But this argument applies only if genetic engineering becomes commonplace, and it is at least partly undermined if everybody is using different criteria of perfection. After all, just by choosing our marriage partners we have been practicing private eugenics ever since we became human: In weighing dark good looks, a slender figure, a winning personality, or a quick mind in potential mates, we are, at least partly, selecting genes. Yet diversity is not threatened, because each of us has different criteria.
When artificial insemination was first thought of, in the 1930s, a prominent Nobel-prize-winning American geneticist, H. J. Muller, wrote a book speculating about the uses to which such a technique would be put. “How many women,” he wrote, “in an enlightened community devoid of superstitious taboos and of sex slavery, would be eager and proud to bear and rear a child of Lenin or of Darwin! Is it not obvious that restraint, rather than compulsion, would be called for?”
Well, no, it is not obvious. Attempts to make test-tube babies “eugenic” by establishing banks of sperm from clever men, or of eggs from beautiful women, have largely failed. People use in vitro fertilization (IVF) to have their own children, not to have other people’s, let alone Lenin’s. IVF is actually an instructive precedent for the current debate: When it was invented in the 1970s, society as a whole largely disapproved, finding the procedure unnatural and abhorrent. It gained acceptance because mild disapproval by the many was matched by fierce demand from the few; it took off because infertile individuals demanded it, not because society as a whole decided they could have it. It was an individual decision, not a collective one.
Just as infertile people demanded access to IVF, even before it had been fully tested for safety, it is possible that people who carry fatal genes will demand access to genetic engineering; but this is not very likely. Unlike infertile people, they already have an alternative: pre-implantation genetic diagnosis, which can spot and discard affected embryos in favor of unaffected ones. And if these people, for whom a “bad” gene is the difference between misery and happiness, do not need genetic engineering, why would anybody else need it? It is true that a few people might wish for a blue-eyed boy, but would they really be prepared to abandon the easy business of natural conception for a painful and exhausting test-tube conception instead?
My point is that cosmetic genetic engineering would attract a very small (and probably half-hearted) clientele, even if it were made legal and safe. I simply cannot think of a single feature of my own children that I would have liked to fix in advance. People do not want particular types of children; they just want their own children, and they want them to be a bit like themselves.
When faced with a mediocre hand at poker, you change some cards and start again. But when faced with perhaps 40,000 human genes, any of which you could change for what may or may not be a slightly better version, which would you change? Looks, intelligence, talents, skin color? It doesn’t sound so tempting, does it?
The history of eugenics teaches that nobody should be forced to engineer her children’s genes; but, by implication, neither should anyone be forced not to. To regulate such decisions with heavy-handed state intervention would be to fall into the very trap that caught the do-gooder eugenicists of 1910. I am not against all regulation: At the very least, governments can step in to ensure standards among practitioners (something they are quite good at). But they would be unwise to try to specify in detail what people can and cannot decide to do for themselves. As Thomas Jefferson said: “I know no safe depository of the ultimate powers of the society but the people themselves; and if we think them not enlightened enough to exercise their control with a wholesome discretion, the remedy is not to take it from them, but to inform their discretion.”
When the Supreme Court ended its latest term in June, everyone agreed it had been a momentous one. In its last week alone, the Court had ruled that police must continue to give the Miranda warning to suspects; that government may provide certain forms of aid to parochial schools; that states may not establish election rules that keep political parties from choosing their own candidates; that school districts may not let students pick speakers who might deliver invocations (or anything else that resembles a prayer) before football games; that states may single out anti-abortion speech for special restrictions; that states cannot force the Boy Scouts to admit openly homosexual scoutmasters; and, the show-stopper of the term, that states may not prohibit partial-birth abortion.
Once the Court finished speaking, activists and reporters tried to glean the meaning of it all. Conservatives cheered some rulings, liberals others. The stakes in the presidential election were said to have been raised: So many decisions were close that one or two appointments by George W. Bush or Al Gore would make the difference on matters such as federalism, abortion, and racial preferences.
Almost nobody remarked on what was truly remarkable about the Court’s term, which was not the pattern of its decisions but the fact that it was weighing in on all these subjects at all. Alexander Hamilton supposed that the judiciary would be the “least dangerous” branch of the federal government, commanding as it did neither sword nor purse. Yet it is now a commonplace of political conversation that what matters most about the presidency is the power to select the people who actually rule the country-federal judges. That is what the Republican and Democratic parties tell their activists. Given the scope of the Court’s decisions, it is hard to say that they are wrong.
Also remarkable is the notion that the Constitution (a wondrously brief document) could possibly settle so many policy questions. That the Constitution protects a right to commit partial-birth abortion or requires the Miranda warnings is not, to say the least, clear from its text. To deepen the puzzle, the Court does not quite say that these policies are in the Constitution.
The distance from the Constitution to the Court’s decisions, then, is vast. It is worth examining in some detail the intellectual and rhetorical moves the justices use to travel it. These moves, I submit, are simply elaborate rationalizations and deceptions. They deceive the public into thinking that the Court is engaged in interpreting the Constitution, even as they rationalize what it is really engaged in: judicial rule. The Court is lying, to us and to itself, and its shaky rationalizations are causing it ever more visible strain.
When contentious moral issues come to the Supreme Court, its first step is to assert jurisdiction by going up, and then down, one level of abstraction. The Bill of Rights bars soldiers from camping out in civilians’ homes without their consent and stops the police from conducting “unreasonable searches and seizures.” Therefore, the Court reasons, the Constitution protects privacy. The alleged right to abortion can be seen, supposedly, as an aspect of privacy. Thus, the Constitution protects (or at least has something to say about) abortion.
One way of looking at this move is that the Court has treated as authoritative the questions the Constitution raises, not the answers it gives. Commenting on cases involving school prayer and abortion, Notre Dame law professor Gerard Bradley has explained that “the Supreme Court has treated the Constitution . . . as delineating the boundaries of subject matters (religion, privacy) over which law-making authority has been invested in courts. Actually the plural (subject ‘matters’) may be gratuitous. There is one problem: the individual v. the demands of the organized society. And this problem gives rise to an unlimited jurisdiction to make law.”
Confronting this problem, the Court then assumes that voters and legislatures are unable to handle it. In Court opinions, these characters are liable to make an appearance in the guise of “intolerant majorities.” There are also frequent references to “majority will,” always a blind unreasoning force devoid of principle. Again and again the Court says that the “mere fact of majority sentiment,” or “opinion,” or “feeling,” or (notoriously, in a 1996 gay-rights case) “animus” cannot justify a law. But, as Bradley has observed, nobody really believes that the mere fact of an opinion or feeling could provide reasons for action, independent of the reasons one has for holding that opinion or feeling. The point of the Court’s rhetoric is slyly to associate the Court itself with reason, and the people and legislators with prejudice and other forms of irrationality, and to frame a problem which only judicial rule can solve.
And this move is not confined to decisions on moral issues; it is rather the typical move of the modern Supreme Court. The Court applies a “rational-basis test” in many cases: If it finds that a legislative classification (treating the sexes differently, for example) has no rational basis, it will strike it down. The test carries a rather harsh, if usually unexamined, criticism of lawmakers: They have enacted laws that are not just wrong, but have no “rational basis.”
But the rational-basis test is empty. In Constitutional Cultures, Robert Nagel powerfully demonstrates that it doesn’t do the analytical work it purports to do. Instead, that work is done by sleight of hand: The Court concludes that some law has no rational basis by ignoring or rejecting some of its purposes, by refusing to recognize that a law can have multiple purposes in tension with each other, or by manipulating the level of abstraction at which a purpose is defined.
Something akin to the rational-basis test pops up again and again in constitutional law. As Nagel writes,
Some of the words keep changing, but the tune continues to sound suspiciously familiar. A “time, place, and manner” restriction on speech, for example, must serve a significant governmental interest. The government may restrict commercial speech if its interest is substantial and if its regulation directly advances that interest. . . . To justify discrimination against a “suspect classification,” the government must show that its purpose is substantial and that the distinction is necessary for accomplishing that purpose. . . . Whether administrative procedures comply with due process standards depends in part on the weight of the government’s interest. State regulations that restrict interstate commerce must serve a legitimate local purpose and there must not be alternate means for promoting that purpose.
Etc. As Nagel dryly observes, “to anyone not inured to the Court’s methods, it must be perplexing that constitutional provisions apparently so different substantively should all turn out to have such similar meaning operationally. Indeed, the coincidence is sufficiently striking that the uninitiated might wonder how much the Court’s ‘interpretations’ could possibly have to do with the Constitution itself.”
All these tests (and it is significant in itself that the Court is forever conjuring “tests” and “hurdles” and “requirements” that other people “must satisfy” or “must show” they meet) conceal the scandal of wide judicial discretion under layers of verbiage. What purposes are “legitimate”? How “significant” does an interest have to be to count as “significant”? These are legislative judgments pretending to be legal ones.
That was one of Justice Antonin Scalia’s complaints about the recent decision on partial-birth abortion. Bans on the practice, Justice John Paul Stevens brusquely declared, are “irrational.” The Court found, moreover, that the bans violated a test the Court had established in a previous abortion case: They imposed an “undue burden” on the right to abortion. As Scalia wrote, “what I consider to be an ‘undue burden’ is different from what the majority considers to be an ‘undue burden’, a conclusion that cannot be demonstrated true or false by factual inquiry or legal reasoning. It is a value judgment . . . upon the pure policy question whether this limitation upon abortion is ‘undue’, i.e., goes too far.” The test is simply a way to settle abortion policy by “a democratic vote of nine lawyers.”
The mystique of the Supreme Court is rooted in a claim that it ensures the rule of law. But it increasingly represents the rule of nine people-and, often enough, of one woman. Justice Sandra Day O’Connor is the Court’s most frequent swing vote; indeed, it may be more appropriate to refer to “the O’Connor Court” than to “the Rehnquist Court.” Her vote decides which racial preferences will stand, which fall; it was her vote last year that had school districts across the country scrambling to come up with sexual-harassment policies for the playground. Most of the legal briefs in the recent Miranda-rights case were pitched to her. And O’Connor delights in complicated tests. In a case on preferences, she once articulated a standard of “strict scrutiny” that she said would be “strict in theory” but not “fatal in fact.” Her decisions often turn on the picayune facts of a case, in the spirit of all those decisions governing how close a creche can be to city hall.
O’Connor’s defenders (such as Cass Sunstein, a law professor at the University of Chicago and author of One Case at a Time: Judicial Minimalism on the Supreme Court) praise her holdings for being narrow rather than sweeping, pedestrian rather than grandiose. But the price of this narrowness is a loss of predictability and clarity: Because she announces no clear principles, everyone, lower federal courts included, is left guessing what the outcome of the next case will be. Her decisions invite new cases, in which she can refine her tests further. Her approach, in other words, amounts to the assumption of the power to issue arbitrary vetoes. The late Justice William Brennan once said of Ed Meese’s originalism that it was “arrogance cloaked as humility.” The words are an apt description of Justice O’Connor’s jurisprudence.
A JUDICIAL BREZHNEV DOCTRINE
Until recently, it was the journalistic fashion to describe the Court as humble, moderate, even conservative. Never mind that on the most politically charged moral questions before it (gay rights, euthanasia, abortion) it almost always came down on the “progressive” side or implicitly reserved its right to do so in the future. (The Boy Scouts case just handed down is, admittedly, an exception to this pattern.) Moreover, a sort of judicial Brezhnev doctrine seems to be in operation: What encroachments the Court has made on democratic self-government, it will keep. No major decision of the Warren Court has been overruled.
The Court’s latest term, however, has had even some non-conservatives complaining about judicial supremacy. In The New Republic, Jeffrey Rosen recently wrote that “the defining characteristic of this Court, like [the Warren Court], is hubris. Both combine haughty declarations of judicial supremacy with contempt for the competing views of the political branches.” The Washington Post’s wrap-up at term’s end quoted law professors who contrasted the Court’s willingness to overturn congressional acts (24 have been struck down in the last five years) with its unwillingness to admit mistakes on its own part. Stuart Taylor Jr., a highly respected legal journalist of mildly liberal inclination, pointed out that the Court’s activism put it to the left of the public.
Leftist legal scholar Mark Tushnet has also been questioning the aggrandizement of the courts, but in a slightly different way. Tushnet, like Sunstein, is enamored of a philosophy of judicial restraint that preserves liberal precedents for all time. What all these second thoughts about the Court reflect, in large part, is unease with the Court’s modest revival of federalism and more aggressive restriction of programs that grant preferences based on race (or sex). These decisions, liberals argue, are a form of conservative activism that should be resisted.
Conservatives can reasonably object that federalism and color blindness, unlike the liberal innovations of the last half-century, actually have constitutional warrant. My own view is that conservatives are on stronger ground on federalism than on race. Their premise, however, is sound: The notion of judicial activism presupposes a constitutional ideal from which the activists are deviating. Whether the Court is right to strike down a law or program cannot be evaluated, in other words, apart from the constitutionality of that law or program; and what liberals want is not so much liberation from the tutelage of the Court as liberation from the discipline of the Constitution itself.
And yet the liberals still have a point: Conservatives have been too quick to look to the courts for help in reining in the federal government. So far, the Court’s federalist decisions have restricted Congress and the executive branch, but not the federal courts. In this context, such cases look less like a revival of federalism than like an assault on the separation of powers. Some conservatives, such as George Will, cheered Boerne v. Flores, a 1997 decision in which the Court said that Congress could not force states and localities to conform to its view of religious liberty. But the Court’s reason for reaching this ostensibly federalist result was that it alone had the power to define the contours of religious liberty; if the Court changed its mind, the states and localities would have to conform to it.
Relying on judicial power to promote federalism is likely to prove self-defeating, because that power is itself a threat to federalism. As Amherst political theorist Hadley Arkes has pointed out, Congress is by its very structure more concerned about federalism than the courts are. He uses Roe v. Wade as an illustration: Congress would never have done what the Court did, sweeping away the laws of all fifty states.
ACHIEVING JUDICIAL RULE
The last week of the term provided two more examples to strengthen Arkes’s argument. In Stenberg v. Carhart, the Court struck down a ban on partial-birth abortion that Nebraska’s legislature had passed 99 to 1. (Only one legislator in a hundred met Justice Stevens’s standard of rationality.) By implication, the Court nullified similar laws passed by 30 other states. In Dickerson v. United States, it reaffirmed that police are constitutionally required to inform suspects of their rights. Both decisions were setbacks for federalism.
They were also illustrations of how far the Court has traveled from the Constitution. In each case, the Court more or less gave up the claim that its ruling was firmly grounded in the Constitution. In Planned Parenthood v. Casey (the 1992 case on which the Court primarily relied in Stenberg), the Court had already suggested that Roe v. Wade was probably wrongly decided, but had chosen to stick with it anyway. In Dickerson, the Court declared that it would continue to require the Miranda warnings, while refusing to say that the Constitution itself requires them. In both decisions, persistence in error was justified, in part, by social expectations: Women had grown accustomed to the abortion right, and the Miranda warnings “have become part of our national culture.” (Perhaps the Court is announcing a new Hill Street Blues Doctrine: If a decision gets mentioned in enough prime-time television shows, it’s here to stay.) The Court is saying that the more consequential and far-reaching its mistakes are, the more stubbornly it will cling to them.
It is an audacious argument, and the Court is having difficulty making it. In Dickerson, the Court refers anxiously to Miranda’s “constitutional underpinnings,” its status as “a constitutional decision” that is “constitutionally based”; but, as Justice Scalia notes, the Court’s “carefully couched iterations” never quite say that refusing to issue the Miranda warnings would violate the Constitution. The Casey Court, unable to appeal to the Constitution, made a plaintive appeal to the people themselves to accept the legitimacy of its ruling. If it overruled Roe, the Court argued, public respect for the Court would be diminished, and “so would the country be in its very ability to see itself through its constitutional ideals.”
To which the only appropriate response is: What the heck is the Court talking about? Professor Bradley offers a translation: “The key variables in the legitimacy equation are now Court, people, and constitutional ideals. The burden of the passage is to justify dispensing with the Constitution by positing some galvanizing, mystical bond among the three.” The Court speaks for us; indeed, it is called, it says, “to speak before all others” for our constitutional ideals. As Bradley puts it: “We will be your Court, and you will be our people.”
The alternative vaguely sketched by the Court is divisiveness and even violence: That’s the rhetorical point of the Court’s call in Casey for “the contending sides of a national controversy to end their national division.” Democracy cannot handle hot-button issues peacefully-the public, remember, emerges through the prism of the Court’s opinions as given to irrational spasms of anger and prejudice.
And so, at last, the Court reaches its destination: arguing, even pleading, for oligarchy. The public must be unified from the bench. Judicial rule does not, of course, mean that the Supreme Court actually decides everything. The justices will not raise taxes (although at least one federal judge has done so in the past). We may safely predict that the justices will not concern themselves with traffic rules, and the Congress will not close up shop. The courts have not ventured into foreign policy, despite occasional temptations and opportunities. What judicial rule means is that the Court may in principle decide anything, and claims the jurisdiction to do so.
It is this extra-constitutional claim that accounts for the evasiveness and bitterness of recent terms: for Chief Justice Rehnquist’s embarrassed silence in Dickerson about previous, contradictory opinions he had written for the Court, and for Justice Scalia’s dissents-the most furious in the Court’s history-attempting to demystify what the Court is up to.
After a few rocky moments earlier this year, George W. Bush finally solidified his credentials on abortion and earned the respect of many pro-life leaders. The main question now is not what he believes, but how he plans to explain himself. And on this point he needs help. Witness his reaction to the Supreme Court’s monstrous decision in Stenberg v. Carhart, the Nebraska partial-birth-abortion case.
The principal teaching of the abortion cases is that if a woman wants an abortion, the Constitution guarantees her right to get one. Abortion-on-demand has been the default mode of the Court for almost 30 years now. Carhart confirms as much, and demonstrates that the constitutionality of abortion regulations will be determined by ideology, not facts. Abortion may not be restricted, the Court ruled in 1992, if the regulation imposes an “undue burden” on the woman. Carhart establishes that any serious restriction is unlikely to pass this test. Not the age or physical condition of the child, not the absence of any credible threat to maternal health, not the state’s interest in protecting a partially delivered child-none of that could suffice to overcome the supervening right of a pregnant woman to abort.
The decision also supports the previously latent suggestion that the right to abort includes the right to a dead child-even a child capable of surviving the abortion. Once marked for extinction, apparently, always marked for extinction. Carhart raises the obvious question: What about a child who is fully delivered? The distinction between partial-birth abortion and infanticide is, after all, a question of millimeters, and the Court has repeatedly indicated that the mother’s interest, not the child’s, is the exclusive focus of constitutional concern. Whether the justices understand the full import of their own argument is irrelevant. The iron law of logic carries an argument to the conclusion buried in its premises. In James Burnham’s formulation, Who says A must say B.
The strongest sentiment Bush could convey about the horror of Carhart was that he was “disappointed.” He added, “I hope to be able to come up with a law that meets the constitutional scrutiny.” And he reaffirmed that he will “fight” for a ban on partial-birth abortion. Now, “hope to be able to come up with” is a noticeably weak construction that, if nothing else, robs “fight” of any teeth it might otherwise possess. (Imagine Justice Scalia writing in dissent, “I am disappointed by the majority’s decision. In a future case, I hope to be able to come up with some kind of an opinion that captures their favor.”)
This simply won’t do. Bush has had plenty of time to think about his position on abortion. The question, after all, has been contested ferociously in every imaginable forum. Yet the issue that has agitated the nation as none other for more than a generation seems not to have engaged Bush’s focused attention. To be sure, he has repeatedly noted his opposition to partial-birth abortion. But the Supreme Court just ruled that a partially delivered child can have her brain sucked out and her skull crushed, bestowing on this butchery the status of a constitutional right. And the governor allowed that he was disappointed.
Bush and his advisers clearly have a lot to learn about the political dynamics of abortion. His policy appears to be one of friendly noninvolvement: Express pro-life sentiments, but shy from engaging the issue. This strategy has at least two defects: It assumes that Al Gore will play by the same rules (he won’t); and it ignores the necessary implications of Bush’s own statements.
Consider, for example, Bush’s promise “to come up with a law that meets the constitutional scrutiny.” The Supreme Court has made clear that any statute will almost certainly have to include a maternal-health exception. “Health,” the Court ruled years ago, means “mental health,” and mental health in the abortion context means that whatever Lola wants, Lola gets. Lola will get her abortion as long as a doctor is willing to take up the scalpel. A health exception, as the Carhart dissenters noted, would eviscerate any statutory prohibition. Everyone knows this-but does Bush?
Al Gore’s own statement on the decision gives a hint of what lies ahead. It too underplayed Carhart, but in a strategically useful way by including a sentence that ought to give Bush the chills: “A woman’s right to choose must include the right not to be forced to undergo a procedure that might endanger her life or health.” Never mind that all competent medical authority denies that maternal health ever requires partial-birth abortion. Gore will defend “choice” and “maternal health,” running as a moderate who wants to protect “settled” constitutional precedent. Bush inspires no confidence that he is prepared to offer an effective response.
A comparison of the candidates’ websites confirms this judgment. While Gore’s two official sites contain plenty of red meat to satisfy pro-choice appetites, Bush’s avoids controversy. There you will find four muted bullet-points:
–Pro-life with exceptions for rape, incest and life of mother
–Set the goal that all children should be welcomed in life and protected by law
–Supports parental notification, banning use of taxpayer funds for abortion, and banning partial birth abortion
–Supports efforts to increase adoptions
The second and fourth points are so bland that even Gore could embrace them; the first and third will prove impotent against the Democrats’ predictable assault. That third point, as it touches partial-birth abortion, has just been blown away by the Court. Gore will say: “I too favor a ban on partial-birth abortion, but I agree with the Supreme Court that any such restriction must protect the woman’s health. I cannot understand why the Governor is so callously indifferent to protecting maternal health in these tragic and very personal decisions.”
Bush’s wish to avoid the abortion minefield is understandable, but the logic of events, and of his own prior statements, will inevitably drive him there. How can he reply? He needs to change the terms of the debate. As long as the issue is a matter of “the right to choose,” he will lose the argument, because the “choice” mantra has been drilled relentlessly into American heads. Bush’s main chance-maybe his only chance-is to refocus attention on the child who is killed. Here is some advice:
1) Issue a new statement that describes the partial-birth-abortion procedure. Take your text from the Court’s dissenters. The public needs to shudder at the horror of the act.
2) Summarize the Carhart decision: It is now impossible to restrict this barbaric practice in any meaningful way. Quote again from the dissenting opinions.
3) Acknowledge that you and Gore disagree on abortion; the split between you mirrors that in the electorate. Say that this division will not soon heal, adding, “In the meantime, I will use my powers of persuasion and the powers of the presidency, as permitted by the Constitution, to keep abortion within all reasonable limits.” Follow Ronald Reagan’s playbook.
4) Cite the massive opposition to partial-birth abortion and the specter of infanticide. Cite Sen. Daniel Patrick Moynihan on the difficulty of distinguishing between the two.
5) Endorse the bill introduced by Rep. Charles Canady that would protect children born alive even if they are “premature.” Say, “Now that the people are prevented from enacting effective legislation to ban partial-birth abortion, this bill is the surest way to prevent a slide into constitutionally sanctioned infanticide. The right of the people to express their sense on so important a matter must not be surrendered to the judiciary. Reasonable people can disagree about abortion, but surely they will wish to protect a child already born. Let this election become a plebiscite. Up or down: Should the Constitution protect that child? Mr. Gore, what is your answer?”
6) Let Gore make fine points about law-agree to disagree, but keep attention riveted on the child: “Mr. Gore, will you or will you not protect her?”
By talking about a palpable human being, whose physiognomy even the dullest intelligence can easily conjure, Bush will find a position that is safer to argue than anything else in this perilous area. Such a strategy comports with the views of 70 percent or more of the people. It may be the only strategy that can put Gore, who now holds the rhetorical high ground, on the defensive.
Republicans love to poke fun at the many incarnations of Al Gore, who seems, like a chrysalis, always in the process of becoming. But more important than the changes themselves is the question of whether he will find a tack that actually works. The latest version of the Gore campaign has the dressed-down and moussed-up vice president scourging, in painstakingly enunciated sound bites, every possible nasty business interest (except, oddly enough, rapacious landlords).
Gore-as-populist certainly makes more sense than his prior iterations. Immediately after the primaries the vice president became McGore, duplicating John McCain’s campaign-finance-reform agenda, but, always an overachiever, taking it even further. Gore became McCain without the charm, a dreary prospect indeed since charm was McCain’s appeal. Gore’s corporate-bashing represents a more intelligent-and substantive-way to bottle McCain’s thunder.
Voters enjoyed McCain’s fighting spirit, which was refreshing in an age dominated by saccharine political rhetoric. It also marked him as “willing to stand up for what he believes,” one of the most important qualities the public looks for in its politicians. The anti-corporate assault allows Gore to claim McCain’s fighting mantle, without seeming personally nasty the way he did when attacking Bush as “arrogant” and “smug.” It is doubly important for Gore to find an effective way to attack, because it is the only thing he’s comfortable doing.
And the anti-corporate rhetoric actually connects to a substantive program in a way McCain’s campaign-finance rants didn’t. Gore quickly discovered that people don’t care whether or not you spend “soft money.” His new tack allows him to maintain the same anti-special-interest rhetoric, but in support of important political goals-in the case of Big Oil, diverting attention from his own energy policy; in the case of pharmaceutical companies, building support for his prescription-drug benefit; and so on through “corporate polluters,” and Big Tobacco, and firearms manufacturers.
His rhetoric may well be a shrewd way to energize his base and attempt to reach elderly and working-class voters with a frankly populist appeal. But Gore also risks throwing away the lesson of Clinton’s victory in 1996, muting the appeal of the administration’s economic record, and running smack into powerful electoral and demographic forces that he had already retooled his campaign to take into account. (Eventually, all this retooling doubles back on itself.) The vice president’s latest tactic calls into question whether the Third Way will have a second act.
In the Clinton White House in 1996 there was a dispute over how to talk about the economy. Labor secretary Robert Reich, in keeping with traditional liberal concerns, wanted to emphasize the economy’s failings, the workers “left behind.” Dick Morris urged the president to stay positive, heralding the economy’s achievements in a way that helped make the public more optimistic, and, in turn, win those optimists over to his side.
Gore is now attempting a straddle, sounding like Reich in his assault on the new malefactors of wealth, and like Morris in touting “progress and prosperity.” It’s a contradictory combination. A populist assault on business usually requires the dire atmospherics of a recession to catch fire (this is why Pat Buchanan’s populism seemed to have a future in the GOP in 1992, and now doesn’t even seem to have a future in the rump Reform party). It’s also tricky to sound anti-corporate and pro-capitalist at the same time. Gore and the Justice Department are open to the charge that they favor “progress and prosperity,” so long as no one makes a profit or expands his business.
Old and New Democrats argue with each other about what was most responsible for Clinton’s 1996 victory-defending popular entitlement programs and standing up to Newt Gingrich, or acceding to a balanced budget and muting the party’s liberalism. But this is like debating what’s most important in a car, the axles or the wheels. By eliminating his most obvious political vulnerabilities, Clinton’s fiscal and social moderation paved the way for his defense of entitlement programs. Gore’s anti-corporate jag risks upsetting the balance.
In the aftermath of the 1996 election, New Democrat strategist Will Marshall argued, “Whereas the Left’s economic story mainly conveys fear of change and animosity toward U.S. businesses-usually depicted with all the subtlety of a Snidely Whiplash cartoon-[New Democrats] must craft a new narrative that appropriates the new symbols, lexicon, and techniques of the information age.” This seems indisputable. On the other hand, as Ruy Teixeira and Joel Rogers demonstrate in their new book America’s Forgotten Majority, New Democrats tend to exaggerate the decline of the white working class as an electoral force. New Democrat analysts, for instance, loosely categorize all “women” as a suburban, upscale constituency when many of them, of course, are low-income.
Gore’s Social Security savings-accounts proposal seemed to square this New Democrat/Old Democrat circle. It acknowledged that a new, healthy force is afoot in the economy-e.g., widely available stock ownership-and at the same time it explicitly aimed to help lower- and middle-income workers avail themselves of it. But the latest anti-corporate swing puts Gore right back in Snidely Whiplash territory-especially if George W. Bush makes him pay a price for his rhetoric by identifying it with the old McGovern/Mondale liberalism.
Which remains in doubt. Bush likes business in the west Texas, down-and-dirty sense. He has an instinctive distaste for Wall Street, populated by Ivy Leaguers with slicked-back hair and pin-striped suits, so he may not be inclined to stick up for corporate profits. Also, his compassionate-conservative message is suffused with Catholic social teachings skeptical of unbridled capitalism. Finally, he’s built his campaign around the idea that people will have to disregard the good times to elect him-if people care only about Wall Street, he says on the stump, then I’ll still be the governor of Texas in November.
Bush, therefore, has failed to take full advantage of a traditional Republican strength, and an absolutely crucial issue in any presidential election: stewardship of the economy. He raps the Clinton administration for thinking it “invented prosperity,” and has a line about how his tax cut will provide insurance against a recession, but that’s about it. Even this half-hearted effort has propelled him ahead of Gore. According to a Rasmussen survey, just 27 percent of people think a Gore win would be good for the economy, while 38 percent say it would be bad. Voters are about evenly split on a Bush victory.
This traditional strength of Republicans should be all the stronger in light of “the new investor class.” In his latest stump speech, Gore sets up an opposition between “the people” and “the power.” This is a notion as outdated as it sounds. Today’s captains of industry are associated with technologies that increase personal autonomy and choice. In the popular imagination, they aren’t cheating the little guy, but empowering him: Bill Gates, for example, has been ruled a monopolist but remains one of the most popular figures in the country. Three times as many people credit him with the robust economy as attribute it to Bill Clinton. It helps that millions of people own a piece of Microsoft.
And just as important as stock ownership is the force of aspiration. The newly populist Gore recently rapped the GOP death-tax repeal for giving “away the store to those who already own the shopping center.” But 65 House Democrats voted for the repeal exactly because they have constituents who hope to own shopping centers. In boom-time America especially, small businesses can aspire to become evil corporations, and middle-class people to earn enough money-perhaps partly through investing in firms like Merck or Pfizer-to become the targets of the Democrats’ rhetorical ire.
Bill Buckley’s latest novel, Spytime, is a highly original work; indeed, one might describe it as a new kind of novel if Buckley had not already done something vaguely similar in The Redhunter, which was about Sen. Joe McCarthy. There is, of course, nothing new about historical fiction centered on real characters: What’s unusual in these books is that Buckley writes not only about very recent events but about famous characters whom he knew personally, not so much romans à clef as romans en clair. When television and film sally into this, or at least into a neighboring field, the results tend to be labeled “faction” and frowned upon; but Buckley offers a significantly different, generally more authentic, and altogether superior product.
His protagonist, this time, is James Jesus Angleton. All that most of us may remember about Angleton is that he was, throughout much of the Cold War, head of counterintelligence in the CIA, a brilliant player in the global spy game, and that in 1974 he was abruptly dismissed. Why? The received explanation was that he had become dangerously paranoid, seeing spies, particularly in high places, where they did not exist. Other explanations were whispered-that he was the victim of political conspiracy, of anti-anti-Communism, of a Soviet plot, or even that he had been a Soviet spy himself. Buckley approaches this unresolved puzzle by following Angleton’s career from youthful recruitment by Allen Dulles to the bitter day of his expulsion from the agency.
However, to call Spytime a fictionalized biography would be slightly misleading. Much of the second half is more like a conventional spy story, describing the adventures (if “adventures” is not too upbeat a word) of Angleton’s young American protege Tony Crespi, on his first assignment as a deep-cover agent in Beirut, where he has been told to keep an eye on Kim Philby. Philby was once a colleague whom Angleton trusted, but now he is known, or suspected, to have been the most important in a whole bunch of traitors at the heart of Britain’s Secret Intelligence Service.
The narrative technique consists of short datelined chapters lacing together Angleton’s personal story with all the intelligence-related dramas of the period; these begin with Mussolini’s capture by Communist partisans in 1945, at which Angleton was allegedly present, and range through Khrushchev’s revelatory speech about Stalin, the building of the Berlin Wall, the Bay of Pigs disaster, the Cuban missile crisis, and the Vietnam War, accompanied by the recurring enigma of doubtful defectors and double agents. Most of this background information has gradually emerged into the public domain, but Buckley supplements it with material from his own rich experience and wide acquaintance.
Therein lies the major difficulty of this peculiar genre. The reader is never quite sure of the line dividing ascertained fact from informed imagination: He cannot know which details have been unearthed from contemporary memoirs and which have been added for verisimilitude by the novelist. One is inevitably alert for implausibilities. One wonders: Did Jack and Bobby Kennedy really talk to each other in quite this style? Is there any foundation for the brief sexual episode involving Angleton? Can the directors of the CIA actually have discussed his removal in such a curt, cold-blooded way?
On the other hand, plausibility is not the same thing as truth; the truth must sometimes be toned down to render it plausible; a well-constructed falsehood is, almost by definition, easier to swallow. Who would readily have believed that the head of the Soviet section in the British intelligence service was himself a Soviet spy, a man so well regarded by British and allied colleagues that he might (stretching credulity still further) quite possibly have gone on to become head of the whole service? And yet it was so. Conversely, there are still people who believe that a man, now dead, who did become head of the service was indeed a spy; which is probably not so. Others within the intelligence community believed that Britain’s prime minister, Harold Wilson, was himself at least a Soviet agent of influence. This is almost certainly not true, although some of his friends and associates may well have been, and he himself constantly feared that he was being plotted against by secret forces. Angleton saw no reason to believe that American politics and the CIA were immune from such sinister penetration. The British spy ring-Philby, Burgess, Maclean, Anthony Blunt-had proved real. There was rumored to be a Fifth Man. The Fifth Man could be American.
If working for half a lifetime in what has justly been called a wilderness of mirrors, mapping it, using it, being deceived by it, finally pushed Angleton over the edge of rationality, who should be surprised? He saw conspiracies everywhere because in his world there were conspiracies everywhere. But perhaps not absolutely everywhere. Then again, who knows? This ultimate kernel of doubt Buckley leaves unsettled, as in a book where the characters are real and the narrative is not wholly fictitious, he was bound to do.
Taken just as a thriller, Spytime works well and moves fast: But there is always another dimension. Only the most imperceptive reader will not feel that Buckley is leading him down these mirrored paths of recent history for a purpose, to make him think about the nature of loyalty, about the subtleties of deception, about the extent of the Cold War, about who was right and who was wrong and how hard it can be to tell the difference.