Monday, October 26, 2009
"WHAT MIDDLE CLASS"?
Global Wealth Inequality (2007-08 Average) James S. Henry and Brent Blackwelder
October 26, 2009 at 01:44 PM | Permalink | Comments (0)
Tuesday, July 04, 2006
DEFENDING THE FIRST AMENDMENT ON LONG ISLAND Federal Judge Orders Southampton Village to Allow Free Speech On July 4th!! James S. Henry
(Note: The following is a brief account of this year's conflict between the First Amendment and the Village of Southampton, New York, by Tony Ernst, a reporter for WPKN/ WKPM, 89.5 in Bridgeport, Connecticut.)
=========================================================
In Southampton Village, Long Island, a dispute over whether advocacy groups for the Bill of Rights and against the Iraq war could march in the July 4 Independence Day parade was resolved Monday morning by a Federal Judge.
The First Amendment rights of the groups were affirmed by Judge Joanna Seybert of the District Court in Islip.
Judge Seybert issued an order directing that the plaintiffs will be able to march and freely engage in political speech at the July 4 parade in Southampton Village, without interference from Village authorities.
Last month, members of the Peconic Quakers, the South Fork Unitarian-Universalists, the East End Bill of Rights Defense Committee, and the East End Vets were told by organizers of the parade, the Village of Southampton's "Commission on Veterans Patriotic Events," that they could not march with signs of protest as they had done in previous years.
James S. Henry, the attorney for the plaintiffs, remarked on the irony of having to go to court to exercise his constitutional rights on July 4th.
A law suit filed by the plaintiffs is still pending.
Those wanting to join the marchers should meet at 9:30 a.m. on Tuesday, July 4, at the parking lot of Our Lady of Poland Church on Maple Street, south of the Southampton railroad station.
***
(c) SubmergingMarkets, 2006
July 4, 2006 at 09:22 AM | Permalink | Comments (0) | TrackBack
Wednesday, May 18, 2005
"I AM NOT NOW, NOR HAVE I EVER BEEN, AN OIL TRADER!" George Galloway Kicks Senate Butt
This week's developments in the so-called Iraq Oil-for-Food scandal ("OFF") have turned out to be nothing less than a fiasco for the US Senate's Permanent Investigations Subcommittee and its feckless freshman Republican Chairman, Minnesota's Norm Coleman.
In the first place, a newly-released minority staff report by Democrats on the Subcommittee shows that Bayoil USA, a Houston-based oil trading company headed by David B. Chalmers, Jr., now under indictment, was by far the most important single conduit for the illegal surcharges pocketed by Saddam Hussein under the program.
The report showed that more than half of Iraq's oil sales that generated surcharges for Saddam were made to US buyers during the period September 2000 to September 2002, most of them right under the nose of the Bush Administration and the US Treasury's rather lackadaisical Office of Foreign Assets Control.
Other US companies that have reportedly received subpoenas in the on-going surcharges investigation include ExxonMobil, ChevronTexaco, and Houston's El Paso Corp, as well as prominent Texas oilman Oscar S. Wyatt Jr., who was also deeply involved in supporting and profiting from oil-for-food.
Next, British MP George Galloway, appearing voluntarily before the Subcommittee, delivered a feisty denial of allegations that he had personally profited from the oil allocations, as well as a withering assault on the last twenty years of US policies toward Iraq.
Meeting little resistance from the badly-outgunned Senators, Galloway made the following points:
- He met with Saddam no more times than Donald Rumsfeld, who had met with Saddam to sell arms and provide maps, while Galloway met him to seek peace and encourage arms inspections;
- He had actually opposed Saddam's policies way back in 1990, while the first Bush Administration was still making loans and selling arms to Saddam;
- He had always opposed the oil-for-food program as a poor substitute for lifting sanctions, which unfairly punished all Iraqis for the sins of their dictator -- especially Iraq's children, up to 1 million of whom may have died because of increased infant mortality;
- The Subcommittee's investigation was a "smokescreen" that distracted attention from far more serious issues -- such as the disappearance of more than $8.8 billion of Iraqi national funds during the first year after the US invasion.
The combative Scot's hard-hitting testimony makes compelling viewing.
Meanwhile, we recall that back in June 2003, J. Bryan Williams III -- ExxonMobil's former head of global crude procurements, and the US' hand-picked UN overseer on the Iraq Sanctions Committee, in charge of making sure that Saddam did not obtain any illicit income from the oil-for-food program -- pled guilty to evading taxes on $7 million, including a $2 million kickback to help Mobil win business in Kazakhstan's oil dictatorship.
So there is at least some good news here, Senators -- if you want to find big-time corruption in the international oil trade, you don't have to go looking for it in London, Moscow, or Paris.
These developments also help to put Senator Coleman's continual "head-hunting" of UN Secretary General Kofi Annan in perspective. While there's no evidence that Kofi profited personally from OFF, his minions were probably not squeaky-clean. But the enormous profits earned by Saddam's "fellow travelers" in Houston make his minions look like pikers.
Furthermore, while Kofi is certainly not much of an effective manager, we now know from the Bolton hearings that administrative skill doesn't count for very much with the Bush Administration.
Indeed, it appears that Annan's key fault is that he had the temerity to oppose the Iraq invasion, and even to label the War "illegal" -- once the invasion had already occurred. With Paul Volcker's final report on the oil-for-food scandal due out soon, and US Ambassador to the UN John Bolton (!) likely to arrive as soon as he clears the Senate and adjusts his meds, the outlook for the summer is definitely for more fireworks.
***
(c) SubmergingMarkets.Com, 2005.
May 18, 2005 at 01:54 PM | Permalink | Comments (1) | TrackBack
Monday, April 18, 2005
WHAT'S SO F'IN FUNNY? From One Wolfie to Another "We Have a New Pope!"
A note to our Faithful Readers: Our Editor is on book leave, writing a long-awaited tome on international private banking. Meanwhile, we will pass the time by offering free "Submerging Market" hats for the best proposed captions. Here's the first. Question: Why are these two fellows smiling?
Send your proposed entries to [email protected]. Good luck!
April 18, 2005 at 02:47 PM | Permalink | Comments (0) | TrackBack
Wednesday, January 12, 2005
SO-CALLED "NATURAL" DISASTERS III. The Aftershocks to Our Religious Beliefs James S. Henry
Unhappy mortals! Dark and mourning earth!
Affrighted gathering of human kind!
Eternal lingering of useless pain!
-- Voltaire, Poem on the Lisbon Disaster, 1755
(T)housands of pilgrims to a Marian shrine (on India's coast) were washed away as they attended mass… (A) divinity student… said she watched one man shout: 'There is nothing! There is nothing! Where is God? What is God?'
-- Chicago Tribune, 12/31/04
The powerful Sumatra quake and tsunami not only moved whole islands, caused the Earth to wobble, and took more than 220,000 lives.
They also shook our world views to the core, and placed a tremendous PR burden on the more than 100 competing members of the global “non-profit” religion industry.
This is partly because of the sheer scale of the disaster. But it is also because this was surely one of the most ecumenical "natural" catastrophes in history.
While at least half the victims are Muslim, there are also substantial numbers from most of the world's other major religions, including Hindus, Buddhists, Christians, Jews, Jains, and Sikhs, as well as many non-believers.
This has posed an interesting explanatory challenge to all these different religious perspectives at once, and has allowed us to compare their responses. When we do so, as discussed below, we find that many of them have been unhelpful, anti-humanitarian, and even downright loony.
Indeed, the aftershocks that the tsunami has caused to religious mythology, especially to the curious views held by true believers, fundamentalists and extremists of all persuasions, may be among its few benefits.
Meanwhile, of course, the crisis has also permitted those of us who are perhaps a little less certain about Divine Will to join together in what has become an unprecedented, salutary transnational effort to help some of the world's least well off.
We truly hope that the Gods are watching -- They may learn something.
A Wave of Skepticism?
On the one hand, many of the world's faithful are now questioning their religious beliefs – a logical reaction that parallels the wave of skepticism that swept across Europe after the devastating Lisbon tsunami of 1755.
According to these new skeptics, recent events in South Asia have demonstrated that our modern gods may not be quite as powerful, helpful, or attentive as we had hoped.
As a former head of the Yad Vashem Holocaust Museum put it, we may be dealing with a real "nebbish" here.
Certainly the notion of "praying for the victims," as one Protestant minister blithely suggested that we do at a memorial service that I attended this week, seems a little odd in this situation.
After all, if Poseidon were willing or able to help the innocent victims of this disaster, presumably He would have done so several weeks ago.
The fact is that we may just have to rely upon each other. That leaves precious little extra time and energy to pay homage to diffident bystanders, no matter how immortal.
Sending Us a Message?
On the other hand, some true believers are stubbornly digging in.
Under the gun to explain how the mass suffering produced by the tsunami is consistent with the existence of Supernatural Powers that deserve our respect, they have reverted to variations on the age-old theme of "blaming the victim."
As these fundamentalists would have it, Poseidon (or Allah or God or Shiva or karma or...) is just “angry” with some or all of us, and is trying to send a message.
According to this anti-empathetic view, the millions of people who have suffered from this tsunami -- and presumably all other disaster victims from the Genesis Flood on down -- richly deserved what they got, or were sacrificed to teach the rest of us "lessons."
The precise messages sent and the lessons to be learned are a bit murky, but there is no shortage of proposed alternatives:
For example, Godhatessweden.org, a website owned by the Topeka, Kansas-based Westboro Baptist Church, has suggested that the suffering of "filthy, faggot Swedes" in the tsunami disaster was punishment from God for Sweden's tolerance toward homosexuality.
This particular church has also sponsored another website that features a rather tasteless proposed design for a monument to tsunami victims.
Meanwhile, Sheik Fawzan Al-Fawzan, who is Imam of Prince Mitaeb Mosque in Riyadh, a professor at Imam Mohamed Bin Saud Islamic University, and a member of Saudi Arabia's Senior Council of Clerics, its highest religious body, suggested in a recent interview on Saudi television that the tsunami was "sent by God" to punish South Asian countries for immoral sexual activity, and for letting gay people into their countries.
The illustrious Sheik has also argued that "slavery is part of Islam," and that those who deny this are "infidels" who deserve to be beheaded.
One leading American fundamentalist commentator, Bill Koenig, has suggested that a disproportionate number of Christians miraculously survived the tsunami, compared with their Muslim or Hindu brethren. (Presumably Bill does know that there were more Muslims and Hindus than Christians in the region to begin with...)
Similarly, a Salvation Army officer in Sri Lanka commented that it was indeed very odd that "All of our officers (clergy) survived.... God spared their lives."(...Although he admitted that some of them also had SA flotation devices....)
Another proto-Christian website argues that disasters like the recent tsunami and the Flood in Genesis don't make God a "mass murderer" because "there is no such thing as an innocent human." (....and unless you are absolutely innocent, or can swim really well, you deserve to drown......)
Korean "scientists" have demonstrated conclusively that Noah's ark was strong enough to have withstood a tsunami and floods even larger than the Sumatran one -- like those produced by the Genesis Flood. The Reverend Sun Myung Moon, who recently described himself as "the Messiah" at an event held in his honor at the US Congress, is evidently planning to enter the shipping industry.
Israel’s Chief Sephardic Rabbi, Shlomo Amar, the President of its High Rabbinical Court, also saw fit to blame the tsunami victims, asserting that God was punishing people who failed to fulfill the seven "Noahide commandments," those that G_d supposedly gave to Adam and Noah. (He also promised to tell folks around the planet what the "Noahide commandments" are.)
Pandit Harikrishna Shastri, a Hindu priest at New Delhi's Birla temple, claims that the tsunami was caused by a "huge amount of pent-up man-made evil on earth" and by the positions of the planets. (He did not resolve the perplexing question of whether we should deal first with the stockpile of evil or the planets' positions.)
To Buddhist Ananda Guruge, a former Sri Lankan ambassador who teaches at California's Buddhist-affiliated University of the West, "Buddhism makes people responsible for their own fate," and the region's "bad collective karma" explains the disaster. (He did not clarify whether regions that have never experienced destructive tsunamis necessarily have terrific collective karma.)
To Ruth Barrett, a Wiccan high priestess who heads a Wisconsin temple dedicated to the goddess Diana, the disaster was simply a chiropractic problem. It was caused by "Mother Nature stretching — she had a kink in her back and stretched."
Interestingly, the religious extremists who advocate these hard-shelled positions take a different view when the victims of a disaster are closer to home -- in New York, Oklahoma City, or Jerusalem. But not all -- recall the Reverend Jerry Falwell's conclusion that 9/11 was also part of God's vengeance for gay rights and abortion.
The secular humanists in the audience also have to ask: Were the sixty thousand children who have died in this tsunami disaster so far, and the 400,000+ other children who have lost their parents, really old enough to understand, much less deserve, such punishment? Precisely what lessons are we supposed to draw from their capricious fates, other than that we have to prepare more carefully for such disasters?
These extreme fundamentalist interpretations also remind us of J.L. Mackie's conundrum: "If God is Good, He's not God. But if God is God, He's not Good." In other words, if it just so happens that an arbitrary, vindictive, brutal Satan now rules the world, does that necessarily mean that we are obligated to worship Him?
No Cell Phones in Hell?
From the standpoint of “sending us a message,” surely Poseidon must also understand that many of us have cell phones and email. Personally, I have preset my spam filter to let all messages from Absolute Deities pass right on through.
Indeed, it turns out that there were thousands of cell phones and Internet addresses even in Banda Aceh, Indonesia and the Andaman and Nicobar Islands, as well as India, Thailand, and Sri Lanka.
For that matter -- a point that should be of particular interest to the dozens of seismologists and tsunami experts around the globe who received almost instantaneous warnings of the December 26 undersea earthquake's severity, but now say they "just didn't know who to call" -- there are also online telephone directories for all these places, including Banda Aceh, Sri Lanka, Phuket, Thailand, and Tamil Nadu, on the southeast coast of India.
Starting from scratch, it recently took two SubmergingMarkets journalists just 15 minutes to locate hundreds of long-distance numbers and Internet addresses for dozens of hospitals, schools, hotels, lawyers, doctors, local businesses, and government offices on the frontlines of the tsunami – not to mention US Embassies, consulates, and military bases.
Evidently all these phone numbers and Internet connections just happened to be busy precisely when the earthquake struck. That must have prevented all the international experts from getting through.
Perhaps Poseidon was trying to send us a message after all!
***
© James S. Henry, Submerging Markets™, January 05
January 12, 2005 at 03:19 PM | Permalink | Comments (0) | TrackBack
Tuesday, January 04, 2005
SO-CALLED “NATURAL” DISASTERS Part II. The Need for a Global Disaster-Relief Agency James S. Henry
So far, the Boxing Day 2004 Sumatra tsunami is still not quite the most destructive earthquake-related disaster in history, but this may soon change. Until now, the casualty records have been held by the 7.8 Richter-scale earthquake that leveled Tangshan, China, in 1976, claiming at least 244,000 lives, and by the 1556 earthquake in China’s Shanxi province that claimed 830,000.
However, the Sumatran quake has already resulted in more than 150,000 deaths, including 94,081 confirmed dead in Indonesia, nearly 9,000 dead or missing in Thailand, 15,160 in India (and up to 20,000 more in the Andaman and Nicobar Islands), 44,000 in Sri Lanka, and 396 in Tanzania, Somalia, the Seychelles, Madagascar, the Maldives, Burma, Malaysia, and Bangladesh. Furthermore, the latest reports from UN observers in the region indicate that even these death tolls may grow "exponentially."
For the bankers and investors in the audience, the purely economic impact of the Sumatra tsunami is expected to be relatively slight, since most of its victims were indigenous poor people in remote areas, and the region's tourist industry will quickly recover. Japan’s 1995 Kobe earthquake, in contrast, caused more than $100 billion of property damage.
However, in terms of lives lost, injuries, displaced people, and damage caused beyond the boundaries of the country where the earthquake originated, Sumatra is already a record-setter. While other tsunamis have taken lives outside their countries of origin, this one’s long-distance impact has already taken more lives in more countries than all other tsunamis since 1800. The potential human and geopolitical impact of all this is much more significant than the destruction of over-valued Kobe high-rises.
In other words, this is one of the most profound transnational disasters ever. It is therefore not surprising that, as discussed below, it has already commanded an overwhelming global response from the world's aid donors -- at least on paper.
For the moment, at least, the developing world may have finally succeeded in capturing our attention, if by nothing more than the sheer power of its own suffering. Perhaps we will finally now come to understand that both the relief and the prevention of such disasters are appropriate global responsibilities.
We may also wish to reserve some of our benevolence and good will for the victims of more "routine" Third World perils -- for example, the two million children who die from drinking dirty water each year, the 1.6 million people who still die each year from tuberculosis, and the 1.2 million who die from malaria. These continuing disasters may not be as dramatic, sudden, and visible as tsunamis and earthquakes, but they are no less worthy of our concern.
TO THE RESCUE?
After the fact, the world community has mounted a huge relief effort to provide clean drinking water, food, medicine, energy, medical care, and temporary shelter for 5 million displaced people.
The most rapid progress has been made on fund-raising. In one week, 45 governments and international institutions pledged more than $3.2 billion in humanitarian aid, more than the world spent on all such disasters from 2002 on. The tsunami pledges so far include an incredible $680 million from Germany, $500 million from Japan ($3.91 per capita), $350 million from the US ($1.19 per capita), $182 million from Norway ($39.13 per capita), $96 million from the UK ($1.59 per capita), $76 million from Sweden ($8.39 per capita), $76 million from Denmark ($14 per capita), $250 million from the World Bank, $175 million from the Asian Development Bank, $309 million from other EU member countries ($1.06 per capita), $66 million from Canada ($2.06 per capita), about $60 million apiece from Australia ($3 per capita) and China (5 cents per capita), $50 million from South Korea, and $25 million from Qatar. Somewhat less generously, Saudi Arabia and Kuwait have each contributed $10 million, New Zealand $3.6 million, Singapore $3 million, Venezuela, Libya, Tunisia, and UAE $2 million, Turkey $1.25 million and Mexico $100,000.
Furthermore, there are also discussions underway among G-8 countries to provide debt relief for Indonesia, Sri Lanka, and the other victim countries, which might yield another $3 billion a year -- so long as these countries agreed to spend it on aid for tsunami victims.
Three days after the quake, President Bush had promised just $35 million. As several observers noted, that was just 12 cents per capita, less than 10 percent of Canada’s per capita effort. As Vermont Senator Patrick Leahy said, “We spend $35 million before breakfast in Iraq.”
Furthermore, in 2004, the US Congress had provided $13.6 billion to Florida’s hurricane victims, 5.6 times more than the $2.4 billion that the US spent on all global humanitarian assistance that year. Colin Powell rebuked the critics in public, reminding them that the $2.4 billion was 40 percent of the entire world’s budget for humanitarian relief in 2004. Apparently he also quietly lobbied the President to increase the official US aid.
Meanwhile, in addition to the pledges of official government aid, more than fifty private relief agencies have also pitched in, from Action Against Hunger, CARE, Catholic Relief, Doctors Without Borders, Islamic Relief, Oxfam, the International Red Cross, and Save the Children to UNICEF, World Action, and WorldVision. The American Red Cross alone reports that it has already received more than $79 million in private aid pledges for tsunami victims, while CARE US has received $3.5 million, Doctors Without Borders $4 million, Save the Children $3 million, Americares $2 million, Oxfam US $1.6 million, Catholic Charities $1.1 million, and World Vision $1 million.
Private donors from European countries have also been exceptionally generous. For example, Sweden's 9 million people have contributed more than $60 million in private donations, in addition to the $76 million that their government has offered – more than $15 per capita. And Norway's 4.6 million people have raised nearly $33 million in private donations, in addition to their government's $180 million -- a $46 per capita global record for tsunami relief.
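The per-capita comparisons in the last few paragraphs are just pledge totals divided by donor populations. Here is a minimal sketch of that arithmetic (the pledge figures come from the text above; the populations are rough 2004 estimates added for illustration only, so the results may differ slightly from the article's figures):

```python
# Per-capita aid-pledge arithmetic: pledge (millions of USD) divided by
# population (millions of people) gives dollars pledged per capita.
# Pledges are taken from the article; populations are approximate 2004
# estimates (an assumption for this sketch, not from the original text).

pledges_usd_millions = {
    "Japan": 500,
    "United States": 350,
    "Norway": 182,
    "Sweden": 76,
    "Canada": 66,
}

population_millions = {
    "Japan": 128,
    "United States": 294,
    "Norway": 4.6,
    "Sweden": 9.0,
    "Canada": 32,
}

for country, pledge in pledges_usd_millions.items():
    per_capita = pledge / population_millions[country]  # USD per person
    print(f"{country}: ${per_capita:.2f} per capita")
```

The same division lies behind the 12-cents-per-capita figure cited for the initial $35 million US offer: $35 million spread over roughly 294 million Americans comes to about 12 cents each.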
...THE PAPER THEY’RE PRINTED ON?
Unfortunately, the historical record shows that such official government disaster aid pledges are cheap -- they often do not result in “new money,” and many countries actually renege on their official pledges completely.
For example, in the case of Iran's Bam earthquake in December 2003, 40 donor countries also responded to a similar "UN flash appeal," pledging $1.1 billion of aid. However, one year later, less than 2 percent ($17.5 million) of that had been forthcoming. Most foreign aid workers and journalists came and went in less than a month, and Bam's reconstruction problems have long since disappeared from the headlines. While significant progress has been made in restoring basic services like water and electricity, most of the city's 100,000 former residents are still unemployed and living in tents.
Such reneging by the world community has also been the pattern in most other recent disasters, including Mozambique’s 2000 floods, Central America’s Hurricane Mitch in 1998, and similar crises in Somalia, Afghanistan, and Bangladesh.
We will just have to see whether the victims of the Sumatran tsunami experience something similar. UN Secretary General Kofi Annan has predicted that it will take a decade for many of the countries affected by the tsunami to recover.
ANOTHER AD HOC RELIEF EFFORT?
The outpouring of all this assistance for the tsunami’s victims on short notice has been impressive. But perhaps we should not be so proud of ourselves. The reality is that this effort has been yet another ad hoc, “aid pick-up-game," where the world waits until there is already a life-and-death crisis with millions of people in peril to swing into action, raise money, and rush assistance to the front lines.
This reactive approach has many unfortunate side-effects:
~ Each time there is a crisis, the world’s aid organizations have to scramble to pass the hat, even as they are also scrambling to organize assistance.
~ The actual delivery of relief on the front lines is much slower than it needs to be.
As usual, in the case of the Sumatra tsunami, most of the victims are located in remote areas with poor transportation, sanitation, water, and health care systems, and many other problems. Several key regions – in this case Indonesia's Aceh province, Sri Lanka's eastern regions, and Somalia – also have active guerrilla movements or local warlords. Some countries -- India, in this case – have also insisted that they don't need any foreign assistance, showing more concern for nationalism than for their own people.
However, when it comes to disaster relief, all of these problems are just par for the course, and predictable. What is inexcusable is that the world has once again had to organize yet another massive relief effort from scratch.
One result is that in most of the affected countries, it has taken more than a week to get medical aid and substantial quantities of food, blankets, and clean water to the victims. In a situation where hundreds of thousands are injured and each incremental day costs hundreds of lives, only Finland and Norway had relief planes in the air by Tuesday, December 28, two days after the disaster. Most other donors needed a whole week.
~ Given the semi-voluntary nature of the relief process, national interests, domestic politics and media exposure play an excessive role in deciding how much aid is given, who manages the assistance, and how much goes to any particular crisis – as compared with raw human need.
~ One by-product of all this was last week's unseemly spectacle, where donors like the US, the UK, and Japan conducted a veritable public auction over the size of their aid pledges. The results may have little to do with actual aid requirements. We can only hope that this time around most of the pledges will be honored.
~ There is a tendency for global aid efforts to be limited by the media's attention span – as Bam's victims, the residents of Sudan's Darfur region, and the victims of other disasters have learned the hard way. When the number of "new bodies" tapers off, so does the attention – and the aid.
THE NEED FOR A GLOBAL AID ORGANIZATION
If global humanitarian aid were run on a more business-like basis:
~ There would be an ample global “reserve” set aside for such emergencies. This would be funded by a global tax in proportion to objective measures of donor capacity like population size and wealth.
~ In case of an actual calamity, we would not try to assemble "aid brigades" on short notice from dozens of different organizations all over the globe and expect them to work well together under impossible conditions. There would already be a solid global organization in place, ready to respond rapidly, with coordination agreements and contingency plans already worked out with local governments.
This organization would also have basic stocks of transportation equipment and relief supplies pre-positioned in key regions of likely need. After all, the US military alone now has 890 bases around the world that are on ready-alert, prepared to fight wars at a moment’s notice. The world community has zero “aid bases,” prepared to fight to save human lives at a moment's notice.
Given the increasingly global nature of so-called “natural” disasters, the current approach to global humanitarian relief is no substitute for a permanent, well-funded, global aid organization.
***
© James S. Henry, Submerging Markets™, January 05
http://www.submergingmarkets.com
January 4, 2005 at 04:00 PM | Permalink | Comments (1) | TrackBack
Saturday, January 01, 2005
SO-CALLED “NATURAL” DISASTERS Part I. Overview James S. Henry
For the second year in a row, December comes to a close with a dramatic reminder of the precariousness of daily life in the developing world -- and the continuing failure of the international community to provide adequate early warning systems, pre-crisis funding, and rapid, effective global relief for the victims of so-called “natural disasters” -- most of which are actually quite predictable, at least in the aggregate.
This year, on December 26, 2004, it was the 9.0 Richter-scale earthquake off the western coast of northern Sumatra, Indonesia's second largest island, the fifth largest earthquake recorded since 1900.
One year ago to the day, on December 26, 2003, the disaster in question was the 6.6 Richter-scale earthquake that devastated the city of Bam in southeast Iran, at a cost of 26,500 lives, 25,000 injured, and 80,000 homeless.
The death toll from this year's Sumatra quake is likely to exceed 150,000, with thousands of people still missing, several hundred thousand who have been seriously injured, and more than five million -- most of whom were impoverished to begin with -- suffering from thirst, hunger, homelessness, lost employment, and the threat of mass epidemics.
Furthermore, as we were also reminded in Bam, among the worst consequences of such catastrophic events are the longer-term traumas associated with disease, losing friends, family, fellow citizens, livelihoods, communities, and whole ways of life.
As usual -- and as was true in the case of 9/11, for example -- much of the initial media coverage of this Sumatra tsunami has focused on body counts, other dire visible consequences, and the massive relief effort that has followed.
That is to be expected. But before our attention span drifts too far off in the direction of some other new Third World calamity, it may be helpful to step back and examine some of the systematic factors that contribute to the high costs of such mishaps over and over again, and the extraordinary costs of this "natural" tsunami disaster in particular.
Our overall theme is that there is really no such thing as a “natural disaster” per se. This is not to say that man-made forces were responsible for Saturday’s tsunami. But, as discussed below, the degree to which any such event results in a social and economic “disaster” is often to a great extent under our control.
In the case of this particular tsunami, its high costs:
- Were entirely foreseeable, at least in a “sometime soon” sense, based on both long-term and recent experience with tsunamis in the Indonesian arena;
- Were actually foreseen by several geological experts, some of whom have been advocating (unsuccessfully) an Indian Ocean tsunami early warning system for years;
- Could have been substantially mitigated if the US, Japanese, and other scientists around the globe who monitor elaborate earthquake- and tsunami-warning systems, and who had ample warning of this event, had simply shown a reasonable degree of human concern, imagination, and non-bureaucratic initiative;
- Might have been avoided entirely with a relatively modest investment in tsunami “early warning systems” for Indonesia and the Indian Ocean.
Furthermore, the global response to this horrific disaster has been long on the size of aid pledges, dignitary press conferences, and “oh – the horror” press coverage.
It has been conspicuously short on actual aid getting through to the front lines. Today, almost a week after the disaster, aid efforts are well-funded, but they remain sluggish, disorganized, and ineffective, with at least as many additional lives in jeopardy right now for want of aid as perished in the original waves.
This is partly explained by the sheer difficulty of getting aid through to remote regions like northern Sumatra. But, as explained below, it is also due to political factors, and the fact that the world community still runs its humanitarian relief efforts like a “pick-up” softball game.
Fortunately, this particular crisis seems to have captured the attention of the world's donor community. At this point, with more than $2 billion in aid pledged by governments, multilateral institutions, and more than 50 private relief organizations, the real problem is not money, but organization.
But we may want to demand that the UN, the US Government, the EU, and all these relief organizations get their acts together, and establish a permanent, well-run, well-funded global relief organization that can move more quickly the next time around. Along the way, they should also pay far more attention to preventive systems that can help save the future victims of such disasters, before all the relief becomes necessary.
© James S. Henry, Submerging Markets™, January 05
January 1, 2005 at 05:29 PM | Permalink | Comments (0) | TrackBack
Wednesday, December 15, 2004
(Some) Justice Finally Comes to Chile - Is More on the Way? James S. Henry
After decades of inaction, Chile's own judicial system is finally beginning to hold leading members of the Pinochet dictatorship directly responsible for the brutal reign of terror that it inflicted on Chile from September 11, 1973 to March 11, 1990.
The latest piece of good news was this week's indictment of 89-year old former Chilean dictator General Augusto Pinochet Ugarte on charges stemming from the disappearance and murder of several individual activists.
In July 2002, another serious human rights case against General Pinochet had been dismissed in Chile on grounds that he was mentally incompetent to stand trial. That case echoed the controversial March 2000 decision by UK authorities to permit him to return home rather than extradite him to Spain, France, Switzerland, or Belgium, after he’d spent more than 17 months under house arrest in London.
As noted below, this week's indictment is just the latest in a series of recent efforts by Chile's judicial system to provide justice for more than 3,200 civilians who were murdered by the regime, more than 28,000 others who recently stepped forward to bear witness about being illegally jailed and tortured, and tens of thousands more who had similar experiences, but have so far kept silent.
Several other cases that have been brought against Pinochet in Chile have recently been allowed to proceed, especially after Pinochet gave a lucid, self-serving interview to Channel 22, a Spanish language TV station in Miami, in November 2003. This incredible act of hubris may have finally brought Pinochet to ground.
While this latest ruling will no doubt be appealed to Chile’s Supreme Court, it is at least another step in the right direction.
In addition to providing justice, such efforts also provide a measure of vindication for those who have long maintained that the Pinochet dictatorship was an unwarranted, illegal, and counterproductive intrusion on Chile's long-standing traditions of democracy and respect for human rights.
However, the real disgrace now is that while Spain, France, Belgium, Switzerland, Argentina, and Chile itself have already brought serious criminal charges against Pinochet or his key associates, the US Government has never done so.
This is despite the fact that several US citizens were also murdered by the regime, not only in Chile but also in the US itself.
Furthermore, as examined below, many of the Chilean junta's most important confederates and accomplices -- including leading bankers, economists, lawyers, media magnates, convicted professional terrorists, and several senior government officials -- continue to enjoy sanctuary abroad -- especially in the US, the supposed champion of the “post-September 11th” global anti-terrorist campaign.
Many Americans may not recall that there was another September 11th, one that was even more bloody and terror-stricken than their own. Indeed, this “other September 11th” was one that their own government helped to create.
To do so, the US worked closely with an incredible bevy of transnational terrorists, butchers, and tin-horn "generals," like Pinochet (Chile), and later, Videla (Argentina), Banzer (Bolivia), Stroessner (Paraguay), and Rios Montt (Guatemala). These valorous "generals" specialized in the use of terror against defenseless civilians.
It is time for Americans who really care about human rights to demand that their own government follow the courageous lead that Chile has taken, and help bring these uniformed savages and their foreign and domestic collaborators to justice.
THE END OF IMPUNITY?
As noted above, there have been several other important recent steps toward justice in Chile with respect to the Pinochet regime. Among the most important are the following:
- On December 10, 2004, retired Chilean Colonel Mario Manriquez was arrested and charged with ordering the September 1973 execution of Victor Jara, an internationally renowned Chilean folksinger and playwright. Jara, 38 at the time, was detained and tortured to death at Santiago's infamous National Stadium, where thousands were held after the coup.
- In addition to the decision by Judge Juan Guzman noted above, on December 4, 2004, Chile's Court of Appeals also lifted Pinochet's immunity from prosecution with respect to his possible involvement in the 1974 murder of his predecessor as Commander-in-Chief of the Chilean Army, General Carlos Prats Gonzales.
General Prats, a highly-regarded "constitutionalist" who stalwartly opposed the Army's intervention in political affairs, had fled Chile in September 1973 after the coup against Allende’s duly-elected government.
In one of that decade’s clearest cases of international terrorism, General Prats and his wife Sofia Cuthbert were killed by a car bomb in Buenos Aires in September 1974.
In 2000, Enrique Arancibia Clavel, a member of Chile's DINA, its secret service, was convicted by an Argentine court of helping to organize the killing, and was sentenced to life in prison. Argentina also requested the extradition of six other DINA agents to stand trial in the case, plus Pinochet, who was indicted for first-degree murder as the "intellectual author" of the crime. Chile's Supreme Court denied the extradition request in 2003, but the case was picked up by a Chilean prosecutor. The recent ruling means that the case can proceed. Pinochet has appealed the decision to Chile's Supreme Court.
- On November 29, 2004, the Chilean Government released the Valech report, a year-long investigation of human rights crimes committed by the Pinochet regime. The report, requested by Ricardo Lagos, Chile's current Socialist President, documented the fact, long denied by Pinochet and his loyalists, that systematic, widespread torture had indeed been state policy during his reign -- including more than 3,400 instances of sexual abuse. Even Pinochet's eldest daughter was shocked by the report, which was based on nearly 28,000 interviews with victims of the junta's repression – the first opportunity they had ever had to tell their stories. The surviving victims were offered modest pensions by the Chilean government in compensation.
In anticipation of the report, on November 5, General Juan Emilio Cheyre, the head of the Chilean Army, reversed the Army’s previous refusal to accept responsibility for these abuses, declaring:
"The (Chilean) Army has made the difficult but irreversible decision to acknowledge the responsibilities that it has as an institution in all the punishable and morally unacceptable acts of the past… Human rights violations never, and for no one, have an ethical justification."
- In October 2004, Chile's IRS filed tax evasion charges against the former dictator and his long-time financial advisor, Oscar Aitken. For these charges alone, Pinochet may face fines up to three times any taxes that he evaded, plus a jail term of up to five years. In late November, on the eve of the General's 89th birthday, a Chilean judge did manage to freeze more than $4 million of Pinochet's Chilean assets, pending the outcome of investigations for tax fraud and money laundering.
THE RIGGS CASE
So far, the main US contribution to these developments took place in July 2004, when the US Senate's Permanent Subcommittee on Investigations released a report on General Pinochet's offshore banking -- especially his ownership of up to $15 million of "funny money" held in more than a dozen Pinochet-owned bank accounts.
The report, which had been prepared at the behest of Senator Carl Levin, the Subcommittee's ranking Democrat, revealed that ten key Pinochet-related accounts were handled by Washington DC's Riggs National Bank. These accounts, plus numerous Riggs-managed offshore trusts and companies, were reportedly established for General Pinochet in the mid-1980s and 1990s with the knowledge and active involvement of Riggs' owner and CEO, Joseph L. Allbritton (Baylor Law '49).
Allbritton, a prominent Houston-based banking and media magnate who is now in his seventies, is a close friend of the Bush family, and a trustee of the Lyndon B. Johnson Foundation, the George Bush Presidential Foundation, the Reagan Presidential Foundation, the Kennedy Center, the Houston Symphony, and the Houston Museum of Fine Arts. He has also served on the board of the National Geographic Society and Washington DC’s Federal City Council.
A former owner of the now-defunct Washington Star, Allbritton started acquiring Riggs in the early 1980s, eventually buying at least 40.1 percent of the $6.3 billion-asset bank. He served as Riggs' Chairman and CEO until 2001, when his only son, Robert L. Allbritton, took over.
The Allbrittons also own several other businesses, including Allbritton Communications Co., which controlled WJLA, Washington D.C.'s main Disney/ABC affiliate, plus TV stations in Little Rock, Tulsa, Lynchburg, Charleston, Harrisburg, and Tuscaloosa.
But Riggs National Bank was the crown jewel in the empire. Founded in 1815, it was Washington D.C.'s oldest and largest bank, touted as "the most important bank in the most important city in the world,” where “at least 21 First Families banked.”
It acquired a commanding share of the city's "Embassy banking" business, handling prominent diplomatic customers for key foreign embassies, like Saudi Arabia's. (The US Senate Committee on Governmental Affairs continues to investigate Riggs' relationship with the Saudis.
These reportedly included more than 150 private accounts, one of which was apparently used, perhaps unintentionally, by Princess Haifa al-Faisal, the wife of Prince Bandar, Saudi Arabia’s Ambassador to the US, to relay funds to two September 11th 2001 hijackers. In September 2004, several families of September 11th victims filed a class action lawsuit against Riggs with respect to this matter.)
Under the Allbrittons, Riggs was not shy about recruiting political allies. Fellow Texan Jack Valenti (U of Houston, Harvard), the influential motion picture industry representative, is a long-time Riggs board member.
In 1997, Riggs acquired J. Bush & Co., an asset management firm owned by Jonathan J. Bush, George H.W.’s brother, George W.'s uncle and a former Chairman of the New York State Republican Finance Committee. In May 2000, Jonathan Bush briefly became CEO of the Riggs Investment Management Company, but now he concentrates on managing the private assets of wealthy clients at J. Bush & Co. In 2003, “JJ’s” Yale classmate, William H. Donaldson, was nominated by President Bush to head the SEC.
Ironically enough, in 1999, the US Treasury had even selected Riggs to redesign and manage its “CA$HLINK” cash management system, reportedly the world's largest deposit/cash reporting system.
It is not yet clear whether “JJ” was involved in managing any assets for the bank’s wealthy private clients like General Pinochet, the Saudi Princess, or Equatorial Guinea's dictator, Teodoro Obiang Nguema, the bank's largest single private banking customer.
But other senior Riggs managers were involved in handling the relationship with General Pinochet, including the bank’s former President, Timothy C. Coughlin (Brown U., NYU MBA, US Marines, Federal City Council), who resigned in May 2004; Robert C. Roane, (UVA, George Washington MBA), the bank’s chief operating officer, and the former head of Riggs - London, who resigned on November 26, 2004; Carol Thompson, Riggs' former SVP for Latin America, who also resigned in 2004; and Raymond M. Lund, the former EVP of the International Banking Group, who left the bank in March 2004.
Also implicated in the Pinochet affair was Steven B. Pfeiffer (Wesleyan ‘69 – Chairman Emeritus; Oxford - Rhodes Scholar; Yale Law School, US Naval Commander; Council on Foreign Relations), a Riggs board member since 1989, a former chairman of the bank’s International Committee, a former Vice Chairman of Riggs Bank Europe Ltd., and the bank’s Chairman in 2001. Pfeiffer remains a senior partner at Fulbright & Jaworski, the leading DC law firm, where he has served as head of the firm’s International Practice, and, from 1998 to 2002, its Partner-in-Charge. According to the Senate report, Pfeiffer and his law firm provided key legal advice to Riggs with respect to its handling of the Pinochet accounts.
BANKING ON THE GENERAL
Riggs' banking relationship with Pinochet may have dated back to the 1970s, when it was reportedly involved in helping to finance several Chilean arms deals. But it really heated up in the mid-1990s, when Pinochet came under investigation for more than 66 international criminal complaints involving human rights violations, drug trafficking, torture, assassination, illegal arms sales, and corruption, not only by Judge Garzon in Spain, but also by Argentina, Belgium, France, Switzerland, and the UK. The General was trying desperately to conceal his family’s assets around the world.
According to the Senate report, Riggs National Bank participated in a long-term conspiracy to launder Pinochet's offshore holdings and conceal their ownership from US federal bank regulators, including the Comptroller of the Currency. Quite coincidentally, the bank also reportedly hired R. Ashley Lee, an OCC bank examiner who happened to be in charge of auditing Riggs from 1998 to 2002.
That helped Riggs conceal the Pinochet matter for a couple of years. But in May 2004, Riggs was fined $25 million in civil penalties for "willful, systemic" violations of anti-money-laundering laws with respect to its dealings with two other dubious clients, Saudi Arabia and Equatorial Guinea.
In light of all these embarrassments, the Allbrittons decided to sell the bank. In July 2004, Riggs agreed in principle to a $779 million sale to PNC Financial Services Group Inc. However, this merger is now reportedly on hold until at least April 2005, and it may never be consummated, given the many lawsuits and investigations that have arisen out of Riggs' involvement in money laundering. If permitted, the PNC transaction would effectively allow the Allbrittons and their close associates to "launder" their profits from more than twenty years of scandalous, morally unconscionable behavior, undertaken at the behest of some of the world's worst dictatorships.
Meanwhile, Riggs is fighting a number of other legal actions pertaining to this scandal. In April 2004, three law firms that specialize in shareholder derivative suits teamed up to sue Riggs’ directors for “intentionally or recklessly” ignoring the risks of failing to update the bank’s money laundering controls and dealing with dubious clients like the Saudis and Pinochet.
In September 2004, several former Riggs senior managers and board members were named in a legal action filed by the courageous Spanish judge Baltasar Garzon, who has been investigating human rights violations committed against Spanish citizens by the Pinochet regime. Judge Garzon has asked that US authorities seize their personal assets, prosecute them for money laundering, and also freeze more than $10.3 million in alleged Pinochet assets.
So far, the US Department of Justice has basically ignored Judge Garzon’s request.
December 15, 2004 at 09:05 PM | Permalink | Comments (0) | TrackBack
Friday, December 10, 2004
Global Growth, Poverty, and Inequality Part I. A Little Christmas Cheer? James S. Henry and Andrew Hellman
The Christmas season is a very special time of year, when Americans, in particular, engage in a veritable month-long orgy of holiday revels and festivities, including eggnog sipping, Santa sitting, package wrapping, neighborhood caroling, tree decorating, menorah lighting, turkey stuffing, and generally speaking, spending, getting, and giving as much as possible, at least with respect to their immediate friends and family.
We certainly don’t wish to question the legitimacy of all these festivities. After all, as this November’s Presidential election has reminded us, ours is surely one of the most powerful, vehement, unapologetic Judeo-Christian empires in world history. Like all other such empires, it has every right to celebrate its triumph while it lasts.
According to the latest opinion surveys, this is indeed an incredibly religious nation, at least if we take Americans at their word. More than 85% of Americans adults consider themselves “Christians,” another 1.5% consider themselves “Jews," 84% pray every week, 81% believe in life after death, 60% believe the Bible is “totally accurate in all its teachings,” 59% support teaching creationism in public schools, and fully 32% -- 70 million people, including 66% of all evangelicals -- would even support a Constitutional Amendment to make Christianity the official US national religion.
In light of all this apparent religious fervor, it is disturbing to read several recent analyses by OXFAM and the UN of certain persistent, grim social realities around the world – and our paltry efforts to redress them. Is the intensity of our religious rhetoric and this season's celebrations just a way of escaping these unpleasant realities?
CHRISTMAS CHEER?
· According to the UN’s International Labor Organization (December 2004), among those still waiting for economic justice are nearly three-quarters of the world’s population – 4.7 billion people -- who somehow manage to survive on less than $2.50 per day. These include 1.4 billion working poor, half of the 2.8 billion people on the planet who are employed.
· According to the UN’s Food and Agricultural Organization (December 2004), the world’s poor now include at least 852 million people who go to bed hungry each night – an increase of 20 million since 1997. The continuing problem of mass famine has many side-effects – including an estimated 20 million low-birth-rate babies that are born in developing countries each year, and another 5 million children who simply die of malnutrition each year. In some countries, like Bangladesh, half of all children under the age of six are malnourished.
· Overall, for the 5.1 billion residents of low- and middle-income countries, average life expectancy remains about 20-30 percent shorter than the 78-year average that those who live in First World countries now enjoy. By 2015, this will produce a shortfall of some 50 million poor children and several hundred million poor adults. But at least this will help us realize the perhaps otherwise-unachievable "Millennium Development Goals" for poverty reduction.
· According to UNICEF (December 2004), more than 1 billion children – half of all children in the world -- are now growing up hungry, in unhealthy places that are suffering from severe poverty, war, and diseases like HIV/AIDs.
· According to Oxfam (December 2004), First World countries have basically reneged on their 1970 promise to commit 0.7 percent of national income to aid to poor countries. Last year such aid amounted to just 0.24 percent of national income among OECD nations, half the 1960s average. And the US commitment level was just 0.14 percent, the lowest of any First World country, and less than a tenth of the Iraq War's cost to date.
· This month’s 10th UN Conference on Climate Change (COP-10) in Johannesburg reviewed a growing body of evidence that suggests that climate change is accelerating, and that the world’s poor will be among its worst victims. Among the effects that are already becoming evident are widespread droughts, rising sea levels, increasingly severe tropical storms, coastal flooding and wetlands damage, tropical diseases, the destruction of coral reefs and arctic ecosystems, and, God forbid, a reversal of the ocean’s “thermohaline” currents.
Overall, as the conference concluded, for the world's poorest countries – and many island economies – such effects pose a far greater threat than "global terrorism."
So far, however, the US – which, with less than one-twentieth of the world's population, still produces over a fifth of the world's greenhouse gases -- seems determined to do nothing but stand by and watch while energy-intensive "economic growth" continues. This year's oil price increases have slowed the sales of gas-guzzling SUVs somewhat, yet more than 2.75 million Navigators, Hummers, Land Rovers, Suburbans, and Expeditions have already been sold. The US stock of passenger cars and light trucks, which accounts for more than 60 percent of all US oil consumption, is fast approaching 220 million -- almost 1 per person of driving age.
Meanwhile, leading neoconservative economists and their fellow-travelers in the Anglo-American media continue to tout conventional measures of “growth” and “poverty.” Indeed, according to the most corybantic analysts, a free-market-induced “end to poverty as we have defined it” has either already arrived, or will only require the poor to hold their breath just a little bit longer – until, say, 2015.
As we will see in Part II of this series, this claim turns out to be -- like so many other elements of modern neoconservative dogma – a preposterous falsehood. But it does help to shelter our favorite dogmas – religious and otherwise -- from a day of reckoning with the truth.
***
December 10, 2004 at 03:16 PM | Permalink | Comments (0) | TrackBack
Friday, December 03, 2004
"WHERE'S WARREN?" Bhopal's 20th Anniversary
Today marks the twentieth anniversary of the deadly December 3, 1984, chemical gas leak at a pesticide plant in the very center of Bhopal, a city of 90,000 – just a little larger than Danbury, Connecticut -- in the state of Madhya Pradesh, in central India. At the time the plant was owned by Union Carbide India, Ltd. (UCIL), an Indian company whose majority (50.9%) shareholder was Danbury-based Union Carbide Corporation (UCC), which was acquired by Dow Chemical in 2001.
This anniversary provides us with an opportunity to reflect on “lessons learned” from this disaster – including the need to make sure that the globalization of trade and investment is also accompanied by the globalization of justice for the victims of transnational corporate misbehavior.
Download WhereIsWarren12032004.pdf
THE COSTS
As a recent report by Amnesty International details, this industrial accident, perhaps the worst in history, killed some 7,000 to 10,000 people in the first few days, including many children.
There were also serious long-term injuries to up to 570,000 others who were exposed to the fumes.
At least 15,248 of these survivors have already died because of their injuries – in addition to the 7,000 to 10,000 initial victims.
Up to 570,000 others continue to suffer from a wide range of serious health problems, including birth defects, cancer, swollen joints, lung disease, eye ailments, neurological damage, and many other painful, long-term illnesses.
Thousands of animals also died, and many people lost their homes, jobs, income, and access to clean water.
WHO WAS TO BLAME?
Union Carbide's ultimate responsibility, as UCIL's "parent," for this accident is very clear. In the middle of the night, a cloud of lethal gas caused by the leak of at least 27 tons of "methyl isocyanate" (MIC), a highly toxic, odorless poison, and another 13 tons of "reaction products" began wafting through the city center. The gas spread without warning throughout the town. The leaks continued for more than two hours before any alarms were sounded.
All six of the plant’s alarm systems failed. It was later shown that the company management had systematically tried to cut corners on safety and warning equipment – by, for example, failing to equip the plant with adequate safety equipment and trained personnel to handle bulk MIC storage; failing to apply the same safety standards that it used in the US; and failing to insure that there was a comprehensive plan to warn residents of leaks.
In fact, company staff and many others were aware of the risks created by this situation. In June 1984, six months before the accident, an Indian journalist had written an article about them: “Bhopal – On the Brink of Disaster.” But nothing was done – partly, according to Amnesty, just to cut costs.
The result was that shortly after midnight on December 3, 1984, Bhopal’s families woke up screaming in the dark, unable to breathe, their eyes and lungs on fire from the poison, choking on their own vomit. By daybreak there were already hundreds of bodies on the ground, with scores of funeral pyres burning brightly.
In addition, long before the 1984 accident, there had been a series of leaks at the site that management was well aware of, and which caused serious pollution – contamination that continues to this day.
All told, as the Amnesty report makes clear, this amounts not only to a health and environmental disaster, but also to a serious infringement of the human rights of thousands of Indian citizens.
CONTINUING IMPUNITY
All this was bad enough. But the other key part of Bhopal’s injustice has to do with the fact that key actors like Dow Chemical/Union Carbide, the Indian Government, and the individual US and Indian senior executives and other officials who were responsible for the accident have managed to avoid liability for the full costs of the “accident,” as well as personal accountability.
This impunity was underscored this week when the BBC fell victim to a hoax perpetrated by someone who pretended to be a Dow Chemical executive. He concocted a false statement that the company was reversing its denial of all responsibility for Bhopal, and was establishing a $12 billion fund for 120,000 victims.
In fact,
· Union Carbide (UCC) and Dow Chemical, UCC's new owner since it purchased the company for $10.3 billion in 2001, have consistently denied any liability for the disaster. They have argued, for example, that UCC was a “domestic” US company, with no “operations” in India. Supposedly it was also not responsible for UCIL’s actions, because UCIL was just an “independent” Indian company.
· In fact, UCC maintained majority (50.9 percent) ownership of UCIL until it disposed of its interest in 1994. Furthermore, according to the Amnesty report, UCC played an active role in UCIL’s management and board activities, and was responsible for the detailed design, senior staffing, and on-going operating procedures and safety at the Bhopal plant.
· Furthermore, as UCC’s CEO at the time, Warren Anderson, bragged before the US Congress in 1984, Union Carbide had 100,000 employees around the world. At the same time, another senior UCC executive, Jackson Browning, said that UCC’s “international operations represented 30 percent of sales,” and that “India was one of three dozen countries where the company has affiliates and business interests.”
· After the leak, according to the Amnesty report, UCC officials (1) tried to minimize MIC’s toxicity; (2) withheld vital information about its toxicity and the reaction products, which they treated as trade secrets; and (3) refused to pay interim relief to the victims.
· The Indian Government and the State Government of Madhya Pradesh also bear grave responsibility for the disaster itself, and then for striking an irresponsible private settlement with the perpetrators. As the Amnesty report makes clear, environmental regulations were very poorly enforced against UCIL. Then, having sued for $3 billion in damages in 1988, the Indian Government settled for just $470 million in 1989, without adequate participation from victims. The Indian Government has also discontinued medical research on the impact of the gas leak, and failed to publish its interim findings.
In October 2003, it was disclosed that by then, some 15,298 death claims and 554,895 claims for other injuries and disabilities had been awarded by the Madhya Pradesh Gas Relief and Rehabilitation Department – five times the number assumed in the settlement calculations by the Indian Supreme Court.
· UCC’s insurance paid that paltry settlement in full. But the Indian Government was then very slow to pay out the money to victims. As of July 2004, $334.6 million had been paid out, while $327.5 million – the fund having grown with interest in government custody – was still sitting unspent. At that point, 20 years after the disaster, the Indian Supreme Court finally ordered that the remaining money be paid out to some 570,000 registered victims – an average of roughly $575 apiece (see the back-of-the-envelope arithmetic below). Even these payments won’t all reach the victims; a significant portion is reportedly consumed by India’s notorious bribe-ridden state bureaucracy.
· Local authorities in Bhopal filed criminal charges against both UCC and its former CEO Warren M. Anderson in 1991-92. Anderson was charged with “culpable homicide” (manslaughter), facing a prison term of at least 10 years. He failed to appear, and is still considered an “absconder” by the Bhopal District Court and the Supreme Court of India.
However, despite the existence of a US-India extradition treaty, the Indian Government has failed to pursue a request for Anderson’s extradition vigorously.
The 82-year-old Anderson, who is still subject to an Indian arrest warrant, has a very nice home with an unlisted number in Bridgehampton, New York, and another in Vero Beach, Florida.
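For those who want to check the payout arithmetic cited above, here is a minimal sketch in Python. All of the dollar figures come from the text itself; nothing here is an independent estimate, and the rounding is ours.

# Back-of-the-envelope check of the Bhopal settlement figures cited above.
settlement_1989 = 470_000_000        # original 1989 settlement, in US dollars
paid_out_by_july_2004 = 334_600_000  # disbursed to victims as of July 2004
still_in_custody = 327_500_000       # remaining in Indian government custody
registered_victims = 570_000         # victims covered by the final distribution

# The paid and unpaid portions exceed the original $470 million because the
# fund accrued interest while it sat in government custody.
total_fund = paid_out_by_july_2004 + still_in_custody
interest_accrued = total_fund - settlement_1989

# Average share of the remaining money per registered victim (~$575).
average_final_payout = still_in_custody / registered_victims

print(f"Fund by 2004: ${total_fund:,.0f} (interest: ${interest_accrued:,.0f})")
print(f"Average final payout: ${average_final_payout:,.2f} per victim")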
Meanwhile, while the Indian Government has been willing to hold local Indian companies that operate hazardous businesses strictly liable for damages caused by them, it has been reluctant to apply this rule to transnational companies -- perhaps because it is more worried about attracting foreign investment than about ensuring that foreign investors manage their activities responsibly.
SUMMARY – GLOBALIZING JUSTICE
Overall, twenty years after the original incident, Bhopal remains a striking example of transnational corporate misconduct, an incredible case of the negligent mishandling of a true “chemical weapon of mass destruction.”
This behavior may not have been as culpable, perhaps, as the willful use of toxic weapons against innocent civilians by former dictators like Saddam and Syria's Assad. But it was no less deadly.
As we saw above, Bhopal was also an example of the incredible loopholes that still apply to leading companies in globalized industries.
Especially in corruption-ridden developing countries like India, they have often been able to take advantage of lax law enforcement, weak safety regulations, clever holding company structures that limit liability, and the sheer expense of bringing them to justice.
Evidently the globalization of investment and trade is not sufficient. Economic globalization needs to be augmented by the globalization of justice. Among other things, that means that it is high time for transnational corporations to be subject to an enforceable code of conduct, backed up by an International Court for Corporate Responsibility.
***
© James S. Henry, SubmergingMarkets™, 2004
December 3, 2004 at 09:26 PM | Permalink | Comments (1) | TrackBack
THE BEST WEEK IN UKRAINIAN HISTORY? KIEV REPORT #2 Matthew Maly
Ukraine has just had perhaps the luckiest week in its history -- a week when everything went right, and the educational process was so intense, so logical, so marvelously cogent that the country went through a hundred years of political evolution. It is now, by right, a part of Europe, a part of the Western cultural tradition, a true democracy! This has been not merely a week in the life of one country, but an entire page in the history of the world -- and, let us hope, of global democracy!
MEET THE CANDIDATES
It all started out rather unpromisingly with this fall’s presidential elections. The alternatives were the usual: on one hand, Prime Minister Viktor Yanukovich, with a much smarter and more sinister person, Viktor Medvedchuk (and Kuchma’s son-in-law Viktor Pinchuk in tow), lurking behind; on the other hand, former Prime Minister Viktor Yushchenko, with a much smarter and more sinister person, Yulia Timoshenko, lurking behind.
Both Yanukovich and Yushchenko were appointed by one and the same President Kuchma. Medvedchuk and Timoshenko had both proved that they knew how to turn their government connections into enormous personal wealth. I could tell the difference between these two teams no better than I could tell Coke from Pepsi.
There was another divide: Ukraine speaks Ukrainian and Russian, it has Western and Eastern parts, and it can join NATO or ally itself with Russia. Yanukovich is from the Russian-speaking part of Ukraine and speaks bad Ukrainian, and his campaign was getting its local currency by exchanging the rubles it had. Yanukovich’s campaign tone was thuggish, unmistakably betraying the presence of Russian political consultants.
Yushchenko’s power base was in Western Ukraine; his campaign’s tone was uncharacteristically sleek, it tended to mention democracy, and it was not hurting for dollars and euros.
Since I wanted the Ukrainian people to actually have a better life, I had no preference between the two. But I strongly felt that using a thuggish tone was the better electoral tactic, as that was something Ukrainians could relate to. My money was on Yanukovich, and I thought I was smart.
SURPRISES
Then an amazing thing happened: people appeared to deeply resent Yanukovich talking down to them, while actually appreciating Yushchenko’s respectful and meaningful messages.
Yanukovich’s billboards, with their wonderful and, in a certain way, ultimate slogans – “Just because!” and “That’s the way it should be!” – were everywhere, while Yushchenko’s slogan “Yes!” (with a horseshoe as a symbol of luck) was nowhere to be seen. In Ukraine, this is called “using the administrative resource”: after all, Yanukovich is the current Prime Minister while Yushchenko is just an opposition leader. Ukraine is a democracy, and anyone is free to sell advertising space to Yushchenko – if they wish to have all their documents for the last five years audited by someone extremely unfriendly and their permits and licenses withdrawn. And of course there could not be any fair description of Yushchenko on any state-owned media.
Then, right in the middle of the campaign, Yushchenko had a little problem: in one day, his entire face became covered with a horrific acne-like inflammation, and his nose swelled to twice its former size, so that he started to look like a Hollywood movie monster. Since nobody in the world had ever had such a disease, especially one that developed so quickly and at such an inopportune time, people suspected that Yanukovich may have poisoned Yushchenko, probably with dioxin.
There is no direct evidence that Yanukovich was involved. But if you look like a thug, talk like a thug, and behave like a thug, people tend to suspect that you are a thug. It does not help matters if you come from the mafia-controlled Donetsk region of Ukraine, have a long history of association with the murderous mafioso who controls the region, and, having had two criminal convictions, did time in jail. In a word, Yanukovich got stereotyped.
Yushchenko got stereotyped as well: people assumed that since he had suffered horrible disfigurement for talking about truth and democracy, he must have really meant it. You have the stigmata – you must be Jesus Christ. As a result, two things happened: Yushchenko became infallible; for Ukrainians, he became the focus of all their dreams and aspirations, and (pay close attention!) truth and democracy became important because Yushchenko was talking about them. As for the poisoning, believers duly noted that a mortal poison did Yushchenko no lasting harm.
Let me say it again, since this is very important. Jesus was the guy who could perform a miracle here and there. People did not have, nor could they understand, the moral code that Jesus was proposing. But since Jesus could do miracles, people accepted his moral code as their own.
Same here. Make no mistake, truth and democracy had meant nothing to Ukrainians. Ukrainian students cheated and waited for the day when they would become businessmen so they could start stealing. Kiev’s Mercedes-riding bureaucrats were the very epitome of graft, and now they are walking after Yushchenko with strange smiles on their faces, inviting students who demonstrated in the cold to spend the night in their palaces: obviously, there was more to them than graft.
Ukrainians did not accept the theoretical notion that somewhere there could be a government that actually served the people, respected them, and told them the truth. It has all changed now in Ukraine, probably forever, just because Yushchenko said that it should.
THE "ELECTIONS"
Then came the first round of voting on October 31. It is now clear that Yushchenko got more than 50% of the vote. But God had mercy on Ukraine and prevented Yushchenko’s win in the first round. God had firmly decided to make Ukraine free and democratic once and for all, and for that Ukraine needed to go through some suffering so as to earn its freedom. There was serious fraud in favor of Yanukovich, with the result that both Yushchenko and Yanukovich got about 39% each, with Yushchenko getting marginally more votes. The total is less than 100% because there were 22 other candidates.
Now, that was really something. Never in the post-Soviet space had an opposition presidential candidate gotten more votes than the candidate of power. In fact, many people thought that voting against Yanukovich was quite useless. Other people thought that voting for Yanukovich was quite useless, but Yanukovich did much to disabuse them of this notion. Regional officials in regions where Yanukovich did not win knew they would be immediately fired (and they were). So, a lot of little tricks were used. Voting in Ukraine is voluntary, and students who wanted to get a failing grade on their next exam could abstain from voting. For the rest, the procedure was as follows: a student would obtain a ballot, choose any candidate he or she wanted as long as it was Yanukovich, show the ballot to the person quietly standing by the ballot box, probably their Civics or Ethics professor, drop the ballot into the ballot box, and be assured of a passing grade on their next Chemistry examination. For the older generation, there was another trick. Fail to vote for Yanukovich, and see your heat and water turned right off.
And yet, even with all these tricks, Yanukovich obtained only 45% of the vote. That meant his true level of support could not have been greater than 30%. And yet Yanukovich HAD to win the second round – he had to get at least 20% more. Why could Yanukovich not lose? Because there is a lot of stolen property at stake, and when property is stolen, there is no receipt to prove ownership. That means that a new government could take the property away. Whatever you might say about the famous Khodorkovsky case, it is no longer advisable for the oligarchs to hold property to which they have absolutely no legal title.
In the second round, there was even more fraud on behalf of Yanukovich. As this is now well documented, I will not go into it. The people of Ukraine are no strangers to fraud, and they could have submitted to it easily, with barely a sigh. But now they had a lightning rod in Yushchenko – the highly moral person they imagined him to be. Regardless of who Yushchenko actually is, what matters is that people appointed him as a personification of all the best that is in them, a focal point of all their hopes. From now on, a stab at Yushchenko was a stab in their hearts.
And that made Yushchenko much more than a President: he is now in the same league with Reagan, the Pope of Rome, and the Beatles. There is a Woodstock atmosphere in Kiev, and Yushchenko supporters, people of all ages, look like the happiest people in the world. They have clearly found themselves, their inner freedom, and their strength.
IRONIES
Let us again return to what happened here, as it is unprecedented. In the second round of voting on November 21, Yushchenko got about 70% of the support, and if the elections had been fair he would by now be the President-elect. Would Ukraine be a real democracy now?
Absolutely not!!! Ukraine would have gotten Yushchenko as President, and would have calmly gone back to cheating, stealing, hiding from authorities, and suffering as it always has.
After all, Yushchenko already HAD BEEN a Prime Minister, and quite recently. And there was no enthusiasm, no support, no happiness, no tears of joy – and the economic results were lousy, worse than those demonstrated by Yanukovich.
If the elections had been fair, it would have made no difference who won, Yushchenko or Yanukovich. Again. After all, Yanukovich is no worse than Ken Lay: just a tough, self-serving, and cynical kind of guy.
What happened? With great help from Yanukovich, and thanks to him, Yushchenko lit an inner light in the souls of mortally insulted people, a light they did not know was there. Truth, democracy, justice, and heroism mean nothing unless they are sanctified. In one fell swoop, Yushchenko has sanctified them all, and the faces of his followers have changed. What happened in Ukraine is that truth and justice were lost and are now being defended as a newfound, cherished, and fundamental possession of every citizen.
If Yushchenko now represents God, his detractors are with the Devil. Some of them, to be sure, are hellish personalities, gangsters and murderers, such as the Head of the Donetsk Region or the Mayor of Odessa.
TRANSFORMATION
I am more interested in a momentary transformation. There is a woman I know who went on TV to defend Yanukovich. Her eyes looked dead, her hands were shaking, and her hair was at its worst. Across from her was a woman who defended Yushchenko, and that one grew ten years younger in four days; her back was straight and her eyes were shining. Was the debate about Yushchenko? Absolutely not.
Actually, both of these women used to be equally skeptical about him. But inside one woman a tiny flickering light was now almost extinguished, while the other had the light of her soul burning as brightly as it could, just like the bright orange she wore. It was a debate about human dignity, with Yushchenko as a starting point.
Yushchenko is a tall man with the robust face of a peasant. Women here consider him attractive. But after the second round of elections, with his pockmarked and swollen face, he is universally seen as beautiful. This adoration brings a real danger that Yushchenko will not be able to live up to it, but that is where things stand right now.
And let me just mention the designer revolution that has happened here: people who used to choose between gray and gray are now proudly wearing bright orange, and 99% of them are doing so for the very first time in their lives.
Ukrainian TV these last four days has been beyond description. First, there was Channel 5, a pro-Yushchenko private channel beamed straight to Independence Square, where most of the demonstrations took place. Channel 5 had a constant stream of interviews with intellectuals, artists, politicians, etc., all of them wearing something orange. Most of the time, the orange color looked attractive, stylish, appropriate for the successful, accomplished, strong-willed person being interviewed.
But I remember a man in his seventies, in the cheapest brown suit and a pitiful tie. Around his thin wrinkled neck he was wearing a bright orange woolen scarf they had given him in the studio, and this scarf looked horribly out of place on this frail, poor, sickly creature. Very uncomfortable in front of the camera, the man said, “I am a retired army major, a World War II veteran. I wish to address myself to the soldiers. My sons! Soldiers! I am kneeling before you! Do not shoot at people, the people have made their choice! I want to say… Soldiers… When I was young…”
The old man was searching for words, panicking as he was being broadcast live to a huge, country-wide audience. “Thank you very much!” said the anchorwoman in a bright and crisp professional voice, eager to end his suffering. The camera zoomed in for a goodbye. For the first time, the old man looked straight into the camera and said, “People just want to be free!” The orange scarf no longer looked out of place, and I could suddenly tell the guy really was a major. He was having his moment, even if a bit late in life.
And this is important. God did not create Man to chew cud for seventy years and then die: God created Man for one, two, or three defining moments, and the old man was having his.
But sometimes the official channels were even better than the opposition TV. One report I saw informed the public that “special riot police are always ready to defend the Constitution,” and the camera proceeded to get a close-up of a fearsome black police dog.
Then I realized that President Kuchma, unwittingly, is a true father of his nation!
There are things in the world, such as trees, oceans, and countries, whose existence depends on all people. And then there are things, such as love, truth, dignity, and inspiration, that exist only because of you. If you are not ready to give your all to defend your love and your truth, they cease to exist. Again: truth is something that depends solely and exclusively on you; if you fail to defend it, it dies.
There simply is no way to transfer this responsibility onto someone else’s shoulders. And from the bottom of my heart I thank God for the existence of President Kuchma, who showed guard dogs to thousands and thousands of Ukrainian students so that they would stand up, put away their Chemistry textbooks, put on their orange armbands, and go to the Square.
For truth cannot be defined as a statement that corresponds to reality. Truth is what makes you unafraid of guard dogs. What happens on that Square is no rock concert: people are braving dogs and weapons – and hallelujah! – a nation is reborn!
December 3, 2004 at 02:30 PM | Permalink | Comments (1) | TrackBack
Monday, September 27, 2004
Democracy in America and Elsewhere: Part IIIB. Campaigns, Voting, and Representation
There was perhaps a time when much of the rest of the world viewed the United States of America not just as a militaristic crusader, but as a role model for advocates of liberal democracy everywhere.
These days, however, as former President Carter’s recent caustic comments about the likelihood of continued rigged voting in Florida this year have underscored, the American beacon of liberty is flickering in the wind.
Increasingly the US is viewed by the rest of the world -- and by some of its most astute internal critics -- as an arrogant, hypocritical, sclerotic plutocracy, whose own electoral institutions, as former President Carter correctly observes, are at risk of no longer meeting even minimal international standards for democratic elections.
Meanwhile, the US has the temerity to lecture other countries (as it did, for example, just this week with respect to Hong Kong) about the meaning of “democracy,” and to selectively intervene in the affairs of other countries in the name of democracy, whenever it suits perceived US interests.
As shown in Table A, this concern with democratizing the world may be a preoccupation of US policy elites, or it may just be pure rhetoric.
A recent poll by the Chicago Council on Foreign Relations shows that for the American public at large, the goal of bringing democracy to other nations ranks last on a list of fourteen major US foreign policy goals, with only 14 percent believing it to be “very important.” This makes it less than one-fifth as important as securing energy supplies, protecting US jobs, or fighting terrorism. Even among policy elites, “democratization” ranked 12th on the list.
Once again, our aim here is to help Americans understand why their own electoral system has become so anachronistic and dysfunctional. We also hope to encourage them to take more interest in the rest of the world, be a little more modest, and understand that they too live in a “developing country” – one whose own particular version of “democracy” is in dire need of rejuvenation.
Campaign Duration/ Candidate Selection
Brazil. Like the UK and many other democracies, Brazil has a relatively abbreviated process for selecting Presidential candidates and for the political campaigns that follow, with most candidates selected by national party caucuses, and with campaign advertising and other electioneering activities limited to the 60 days before the general election. (This is far from the shortest period. Indonesia just held a Presidential election with 150 million registered voters, about 10 million more than in the US. It limits public campaigning to just 3 days in order to curb political violence, a long-standing Indonesian problem.)
The US. The US primary process for selecting Presidential candidates is costly and tedious. It begins up to two-and-a-half years before the general election, wends its way through more than 37 state primaries and party caucuses by mid-March of the election year, and concludes with the main candidates already selected at least 6 months before their party conventions, and 8 months before election day.
The US primary process also gives extraordinary influence to the tiny fraction of voters who turn out for caucuses or primaries in “white bread” states like Iowa (total primary turnout = just 9 percent of the voting-age population, or VAP) and New Hampshire (total primary turnout = 29 percent of VAP) that happen to have early primaries.
Several of these states also follow primary procedures that are only remotely democratic. In Iowa’s bizarre “house party” caucuses, for example, there is no secret ballot, so that one’s corporate bosses or fellow union members can easily observe whether one follows instructions.
The primary process also has a dramatic impact on campaign costs. With so many primaries bunched together, candidates are compelled to run national campaigns in multiple states. On the other hand, since the order of primaries has nothing to do with each primary’s weight in national totals, the country as a whole is compelled to sit through 9-sided “debates” between pseudo-candidates, over and over again.
All this probably reduces the supply of first-rate candidates who are willing to endure this grueling march. By the time the November election finally rolls around, most Americans are also probably sick and tired of the whole affair. On the other hand, campaign advisors, advertisers, and the news media like the current system -- it generates a prolonged “horse race” and recurrent employment.
Campaign Finance
While most leading democracies, like Germany, France, and Japan, have experienced serious campaign finance abuses, money has much more leverage in systems like those of Brazil and the US, where Presidents are selected through expensive direct elections rather than parliamentary polls. So far, despite numerous efforts, neither Brazil nor the US has been able to contain the excessive influence of campaign finance directly, but Brazil has made progress indirectly, by providing media access and some public financing.
In Brazil’s case, the campaign finance issue was highlighted by a recent series of scandals, including the 1992 impeachment and removal of President Fernando Collor, the 1994 “Budgetgate” scandal, the 1997 “Precatórios” scandal in Sao Paulo, the 2002 scandals involving illicit funds raised by Roseana Sarney and Jose Serra, and the 2004 scandals involving senior Workers Party official Diniz and the “Blood Mafia.”
In the US, this year’s Presidential election will be by far the most expensive in history, nearly twice as expensive as the 2000 election, despite new legislation like the Bipartisan Campaign Reform Act of 2002 (BCRA, or “McCain-Feingold”), which tried to limit the use of “soft money.”
“Supply-Side” Limits. Both the US and Brazil have repeatedly tried to legislate against the corrosive influence of money on elections by establishing such “supply-side” limits. In the US these efforts date back more than a century. Since the 1980s there has been a string of “reforms of reforms” in both countries to require disclosure of contributions by individual and corporate donors (Brazil), disclose receipts by candidates and parties (Brazil, US), limit the size of individual, union, and corporate contributions (Brazil, US), ban anonymous donations (Brazil, US), limit campaign expenditures (Brazil), and limit the amount that parties can raise (Brazil).
In the US, virtually all elections at all levels of government are privately funded, with no spending limits unless candidates opt for matched public funding. In federal races, contribution limits are now quite high -- $4,000 per election cycle -- and these may be increased if opponents spend their own money. Corporations and unions are prohibited from making campaign contributions directly, but they have found ways to make huge indirect contributions, by way of Political Action Committees (PACs), state and national political parties, unregulated “in-kind” contributions (like Cisco’s $5 million of free networking services to each party convention), and, most recently, virtually unregulated “527” committees and 501(c)(4)s.
So far, these “supply side” efforts to limit campaign contributions have proved unsuccessful in both countries. Donors and recipients are simply too creative, laundering contributions through “independent” fronts, providing in-kind contributions, spreading contributions across multiple family members, and so forth. Ultimately the reality is that in advanced capitalist societies that have very powerful private interest groups, highly unequal income distributions, sophisticated lawyers, and important government policies up for grabs, the most one can hope for from supply-side regulation is window-dressing – and that the insiders occasionally cancel each other out.
Demand-Side Limits. A much more effective approach is to limit the demand for private campaign funds with a combination of free media, direct controls on campaign spending, and public funding.
In the US, efforts to limit campaign expenditures directly have been throttled by the 1976 Supreme Court decision in Buckley v. Valeo, which determined that “money is speech” – i.e., that any direct limits on campaign spending by candidates or parties violate the First Amendment.
US law does allow limits to be imposed on candidates who accept matched public funds, which have been available since 1974. But public funding has been limited, and is getting scarcer – the US matched public funding system is now on the brink of insolvency, partly because voluntary taxpayer contributions on tax returns have plummeted. (Among other reasons, the $3 check-off has not been increased recently.)
So while lesser candidates like Ralph Nader and Al Sharpton relied heavily on public funding for their campaigns, well-funded candidates like Bill Clinton in 1996, George W. Bush in 2000 and 2004, and John Kerry in 2004 have either refused public funding completely, or have limited their use of it to the final few months of the campaign. A few states – Maine, Minnesota, and Arizona, for example – have experimented with providing public funds for state candidates, with positive results, including a sharp reduction in the amount of time politicians devote to fund-raising. But these programs are not expanding, partly because of state budget crises.
As noted, no Buckley-style constitutional barrier to campaign spending controls exists in Brazil, or for that matter, in most other democracies, like the UK or Canada. Brazil has adopted some spending limits for political parties and individual candidates, which seem to have been somewhat effective. Brazil has also provided some public funds for all registered political parties to defray campaign and administrative costs.
Brazil. By far the most effective measure that Brazil has introduced to level the campaign finance playing field, however, has been the provision of free access to TV and radio for political candidates. TV advertising, in particular, is otherwise by far the most important ingredient in campaign costs. Furthermore, at least in principle, the airwaves are owned by the public. So it is not surprising that Brazil and more than 70 other democracies around the world have adopted such provisions, including most Western European countries, South Africa, India, Russia, and Israel. (Among First World countries that provide free political airtime: the UK, France, Germany, Spain, Italy, Sweden, and Japan.)
In Brazil’s case, ironically, the early adoption of this provision is partly due to the fact that the ownership of radio and TV broadcasting networks in Brazil is even more concentrated than in the US, dominated by the Marinho family’s Globo empire and 2-3 other private owners. On the other hand, a majority of Brazilians are poor and semi-literate, and depend heavily on TV and radio for all their news.
Under Brazil’s free media law, the only forms of paid political advertising permitted are newspaper advertising, direct mail, and outdoor events. Sixty days before an election, 1.5 hours of programming per day - the “horário eleitoral gratuito” - are requisitioned from broadcasters and divided up among political parties in proportion to their seats in the Camara, with a minimum allocation for parties that have no seats. During this period newscasters and debate broadcasters are required to give equal time to all candidates, with violations punishable by fines. Brazil’s political parties are also entitled to free public transportation, the free use of schools for meetings, and special tax status.
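To make the allocation rule concrete, here is a minimal sketch of a seat-proportional split with a floor for seatless parties. The 90 minutes per day comes from the text; the one-minute floor and the party seat counts are purely illustrative assumptions, and the actual TSE formula differs in its details (for example, part of the time is divided equally among all parties).

def allocate_airtime(seats_by_party, total_minutes=90.0, floor_minutes=1.0):
    """Split daily free airtime among parties in proportion to their seats,
    guaranteeing a small floor to parties with no seats.
    NOTE: an illustrative sketch only -- not the actual Brazilian formula."""
    # Parties with no seats get only the floor.
    seatless = [p for p, s in seats_by_party.items() if s == 0]
    reserved = floor_minutes * len(seatless)

    # The rest is divided in proportion to seats in the Camara.
    total_seats = sum(seats_by_party.values())
    remaining = total_minutes - reserved
    allocation = {}
    for party, seats in seats_by_party.items():
        if seats == 0:
            allocation[party] = floor_minutes
        else:
            allocation[party] = remaining * seats / total_seats
    return allocation

# Hypothetical seat counts, for illustration only.
print(allocate_airtime({"Party A": 91, "Party B": 84, "Party C": 49, "Party D": 0}))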
Some broadcasters have complained that these provisions are very costly (to them), that they may reduce news coverage of controversial subjects, and that the viewing public will simply tune out. However, it appears that most Brazilians continue to tune in as usual, and that they actually welcome the concentrated dose of campaigning, as compared with the prolonged agony of the US approach.
The US. The US is one of a minority of leading democracies that still provides no free radio or TV access for political candidates. (In addition to Brazil, others that do include the UK, France, Italy, Israel, Hungary, India, Germany, Spain, and Sweden.)
This is especially important, because in many ways, TV advertising is the key factor underlying the whole campaign finance issue. From 1970 to 2000, US campaign spending on TV ads increased more than 10-fold, faster than any other campaign cost element. This year, the political ad revenues of US broadcasters will exceed $1.4 billion. This makes political advertising second only to automotive advertising as a revenue generator for TV networks. This year, most of this revenue will flow to broadcasting conglomerates like ClearChannel, Cox, and Viacom that are especially well-positioned in the battleground states.
In theory, since 1971 the Federal Communications Commission – now chaired by Michael Powell, son of Secretary of State Colin Powell -- has required US broadcasters to sell airtime to political candidates at the “lowest unit rates” available, but in practice this requirement has not been very effective. The 2002 BCRA also tried to restrict corporate and labor funding of broadcast advertising in the last 30 days before primaries (and 60 days before general elections), but it did nothing about so-called “Section 527” or 501(c)(4) advertising by issue-oriented groups that are theoretically independent of campaigns.
Among its many other impacts, this policy compels incumbent US politicians to spend a huge portion of their time – at least 28 percent, in one recent study -- raising campaign funds. It also invites incumbents to be unduly solicitous of the media conglomerates that now dominate the US cable, satellite, broadcasting, and publishing markets.
Its key beneficiaries include:
ClearChannel (1,200 radio stations, 37 local TV stations, 103 million listeners);
Cox Enterprises (43 newspapers in Florida, Ohio, Texas, North Carolina, Georgia, and Colorado; 9 TV stations; 75 radio stations);
Disney (ABC, E!, A&E, History Channel, Lifetime, Tivo (partial), Miramax, ESPN, 10 local TV stations, 66 radio stations);
GE (NBC, PAX TV Network, MSNBC, CNBC, Universal Pictures, Telemundo, Bravo, Sci-Fi, Vivendi Entertainment, 28 local TV stations);
News Corp/Rupert Murdoch (Fox Network, 20th Century Fox, National Geographic, HarperCollins, The NY Post, Sunday Times (UK), The Times (UK), multiple other Australian and UK newspapers, Avon Books, TV Guide (part), The Weekly Standard, BSkyB, 34 local TV stations);
Tribune Company (27 TV stations, Newsday, Chicago Tribune, Baltimore Sun, LA Times, 11 other newspapers, 1 radio station, etc.);
Time Warner (CNN, HBO, Warner Bros., Court TV, TNT, New Line Cinema, AOL, Netscape, Time Inc. (People, Time, Fortune, Sports Illustrated, Business 2.0, Life, Popular Science, etc.), Mapquest, Little, Brown);
Viacom (CBS, BET, MTV, Comedy Central, 13 other cable channels, Paramount, Simon & Schuster, Free Press, Scribners, Infinity Radio (185 radio stations), 39 TV stations, 5 other radio stations).
Not surprisingly, under the influence of this tidy little interest group, federal regulation has become more and more lax on a wide range of broadcast- and cable-related issues in the last twenty years, including acquisitions, “lowest unit rate” regulations, “equal time,” digital spectrum, and pricing guidelines for cable network franchise agreements.
Meanwhile, the detente established between incumbent politicians and this “Sun Valley” alliance has stymied proposals for free political media in the US – and helped to perpetuate the role of money in American politics.
Vote Fraud/ Absentee Ballot Abuse/ Missing Ballots
Brazil. Outright vote fraud and vote buying are long-standing problems in Brazil, as in many other developing countries. One recent survey indicated that up to 3 percent of voters – almost 2 million people - may have sold their votes in Brazil’s 2002 election. A new law was adopted in 1999 to increase penalties for buying votes, but evidently there is still much work to do.
On the other hand, one beneficial side-effect of Brazil’s particular approach to electronic voting – which only provides printed copies for a sub-sample of voters – is that most voters aren’t able to prove that they have voted as instructed.
Absentee ballots are also not a problem in Brazil or other countries with centralized voter registration, because they don’t exist – people can vote anywhere, once. Those who happen to be outside the country on Election Day are permitted to cast ballots at Brazilian consulates.
The US. Outright vote buying is also an ancient American tradition – at least since George Washington bought a quart of booze for every voter in his district when he ran for the Virginia legislature in 1758. Vote buying still occasionally turns up in the US today, and the Internet has added a new dimension to the traditional practice, with the appearance in 2000 of sites like www.voter-auction.net and www.winwin.org that permitted voters to swap or even sell their votes in online auctions.
As one of these sites’ authors rationalized it, this approach is much more efficient and direct than the standard practice, in which political middlemen like campaign consultants and advertising gurus routinely take 10-15 percent off the top of a campaign’s media spending in exchange for delivering votes.
However, the main areas that are ripe for massive abuse in the US now are probably absentee ballots and “missing/ faulty votes.”
Absentee Ballot Problems. The liberalization of absentee ballot laws since the late 1970s, combined with increasingly lax voter registration laws and the absence of a national identity system or central voter registration list, has made absentee voting a growing problem in the US.
This year, absentee ballots may account for as much as 20 percent of the total US vote, up from 14 percent in 2000. This enables a panoply of specific malpractices, including (1) fraudulent registration and voting in the names of phantom, relocated, or deceased voters; (2) voting by the same individuals in multiple states; (3) gross violations of voter secrecy, including pressuring senior citizens to fill out their absentee ballots in a specific way; and (4) fraudulent voting by American civilians or military personnel located offshore – which also played a key role in the 2000 Presidential election.
It is not clear which major party has benefited the most from such practices. Bogus registration may have had its finest hour in the notorious 1960 “Chicago miracle,” when thousands of dead people allegedly voted for President Kennedy. This year, however, with the Republican Party in control of all three branches of the US federal government, 28 out of 50 state governorships (including 5 battleground states), and 17 state legislatures (including 9 battleground states), the Republicans may bear the most watching.
Missing/ Faulty Ballots. Of course the recent US track record with respect to mechanical vote processing is also not encouraging. During the 2000 US Presidential election, the whole country was forced to agonize for months over Florida’s mechanical voting machines, until a highly-politicized US Supreme Court awarded the election to President Bush by a 5-4 vote. CNN and other mainstream media organizations later hired the National Opinion Research Center to conduct a six-month audit of the 2000 Florida vote. Amazingly, when it was concluded, they declared that it showed that “Bush would have won anyway.” In fact a careful reading of the report shows that precisely the opposite was the case.
Furthermore, subsequent analysis of the 2000 Florida outcome shows that the balloting problems that figured in the Supreme Court decision were just the tip of the iceberg – compared with other glaring problems, like the bogus exclusion of thousands of black “pseudo-felons” from Florida’s voter rolls, and the intentional spoiling of thousands of black ballots.
According to the nonpartisan election monitoring group Votewatch, in the 2000 election Florida’s “hanging chad” problem was dwarfed by other counting problems. An estimated 4 to 6 million voters nationwide simply had their votes wasted because of faulty equipment and confusing ballots (1.5–2 million), registration mix-ups (1.5–3 million), and screw-ups at polling places (up to 1 million).
This year, the US State Department has responded to concerns about a repetition of such behavior by inviting international observers from Vienna’s Organization for Security and Cooperation in Europe to monitor the US Presidential election – for the first time ever.
To prevent public officials in the Executive Branch from abusing their powers to manipulate voters, Brazil’s Constitution also requires them to resign six months before elections if they or their immediate family members intend to run. The US has no such disqualification period for political candidates who have worked in the executive branch.
National Administration of Elections
Brazil. Like many other emerging democracies, Brazil has an entire separate judicial branch, the Justiça Eleitoral, that is responsible for implementing all campaign finance and voting procedures at all levels of government (1988 Constitution, Articles 118-121).
This system is by no means perfect, but it is far more objective and “party-neutral” than the US system, which is heavily influenced by state and local politics. Brazil’s approach has also contributed to the rapid nation-wide adoption of reforms, like electronic voting.
The US. Since 1974, campaign finance for US federal elections has been administered by an independent regulatory body, the Federal Election Commission. But US voting procedures remain under the control of state and local authorities, even in the case of federal elections. While the National Institute of Standards and Technology makes recommendations concerning voting machines and registration procedures, these are voluntary.
The result is a hodgepodge of inconsistent, incompatible, and often outdated paper-ballot, machine, and electronic recording procedures that vary enormously among states, and often even within the same state.
Direct Elections/Run-Offs/”Electoral College”
Brazil. In the case of Brazil’s Presidential elections, there is no “electoral college” that stands in the way of the popular will. Article 77 of Brazil’s 1988 Constitution explicitly provides that Brazil’s President is that political party candidate who obtains an absolute majority of bona fide votes. If no candidate emerges from the first round of voting with an absolute majority, a run-off election is to be held within 20 days – as it was in October 2002. Elections for Brazil’s 26 state governors and the mayors of its large cities are also subject to similar runoffs.
The US. Presidential elections are decided by the notorious “electoral college.” As discussed below, this is really just a throwback to the protection of Southern slavery. Currently it serves mainly to protect the influence of “strategic minorities” in a handful of smaller so-called battleground states. (See the discussion in Part I of this series. ) In today’s highly partisan environment, it also (temporarily) gives minority candidates like Nader tremendous destructive power – without, however, affording them any representation at all between elections.
Indeed, the US’ “winner-take-all” system systematically discourages third party power and minority representation. In a more democratic system with proportional representation, preference voting, or even Presidential run offs (see below), voters could freely vote for candidates like Nader without fear of electing their “third choice.” It is no accident that parties like the Green Party, the Libertarian Party, and the Reform Party are in deep crisis, even as Nader’s stubborn crusade continues. Would that more of his spite were directed at “winner take all” and the electoral college, which has helped to institutionalize a political duopoly.
Technically, under the US Constitution (Article II), Americans don’t even have the right to vote for their President or Vice President. This right is delegated to Presidential electors, who may be selected by state legislatures any way they see fit. Indeed, popular vote totals for President and Vice President were not even recorded until 1824, and most state legislatures chose the electors directly. Since then, most states have chosen them according to the plurality of the popular vote in each state. But South Carolina – always a hotbed of reaction, with the highest ratio of slaves to whites -- continued to ignore the popular vote until the end of the Civil War.
Nor is the number of Presidential electors per state even determined by the relative population or number of voters per state. It is fixed at the number of Senators plus Congressmen per state, with a minimum of 3 for each state and the District of Columbia. This guarantees that small rural states are over-represented. Because of the winner-take-all system, it also effectively disenfranchises everyone in a state who votes for candidates who lose that state, and partly disenfranchises all those in large states. At least four times in US history – 1824, 1876, 1888, and 2000 – this anti-democratic system produced Presidents who failed to win a plurality of the popular vote.
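The apportionment rule itself is simple enough to write down. The sketch below uses the “2 Senators plus House seats, floor of 3” formula described above; the House-seat and population figures are post-2000-census values cited only to illustrate the over-representation of small states.

def electoral_votes(house_seats, is_dc=False):
    """Electors = 2 Senators + House seats, with a floor of 3.
    DC gets 3 under the 23rd Amendment (no more than the least populous state)."""
    if is_dc:
        return 3
    return max(2 + house_seats, 3)

# Post-2000-census House delegations, for illustration.
print(electoral_votes(53))             # California: 55 electors
print(electoral_votes(1))              # Wyoming: 3 electors
print(electoral_votes(0, is_dc=True))  # District of Columbia: 3 electors

# The over-representation the text describes: electors per million residents,
# using approximate 2000 census populations in millions.
print(55 / 33.9)  # California: roughly 1.6 electors per million people
print(3 / 0.49)   # Wyoming:    roughly 6.1 electors per million people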
In the context of other democracies around the world, the electoral college is a very peculiar institution indeed. Kenya’s former dictator, Daniel Arap Moi, adopted something similar to control the influence of the country’s dominant tribe, the Kikuyu. The Vatican’s College of Cardinals has something similar.
There have been many calls for the electoral college’s abolition, most recently by the New York Times in August 2004. But this would require a Constitutional amendment that would have to be ratified by three-fourths of the states – at least a quarter of whom have disproportionate power under the existing system.
It is of course possible for a state to provide that its own electors be awarded to Presidential candidates in proportion to the popular vote within the state -- as Colorado is considering this year. However, unless all the battleground states go along with this, which is improbable, the idea is unlikely to gain national momentum.
Electronic Voting/ Registration
Brazil. To the surprise of many, Brazil has recently become the world pioneer in electronic voting and registration. When it held national elections in October 2002, 91 million out of its 115 million registered voters turned out – more than 70 percent of those of voting age, and 3 million more than voted in the US elections that same fall. In terms of global electoral history, the number of votes received by the winner, Luiz Inácio “Lula” da Silva, was second only to Ronald Reagan’s total in 1984.
To handle this heavy turnout, Brazil relied heavily on electronic voting. The Tribunal Superior Eleitoral (TSE) had been experimenting with electronic voting systems since the early 1990s, becoming a real pioneer in the use of “DREs” (“direct recording electronic”) voting machines. Brazil first used DREs on a large scale in its 1996 elections, with 354,000 in place by 2002. For that election, it deployed another 52,000 “Urnas Eletronica 2002,” a state-of-the-art DRE that had been designed by Brazilian technicians with the help of three private companies – Unisys and National Semiconductor, two US companies, and ProComp, a Brazilian assembler that has since been acquired by Diebold Systems, the controversial American leader in electronic voting systems.
Because Brazil was willing to commit to such a large-scale deployment, each Urna costs just $420, less than 15 percent of the cost of the $3,000 touch-screen systems that Diebold features in the US. The Brazilian system lacks a touch screen; voters punch in a specific number for each candidate, calling up his name and image, and then confirm their selections. The numerical system was intended to overcome the problem of illiteracy, which persists in parts of the country. To handle operations in remote areas like the Amazon, the machine can run on batteries for up to 12 hours. Initially there were no printed records, but the Electoral Commission decided to retrofit 3 percent of the machines with printers, to provide auditable records.
Like any new technology, Brazil’s approach to electronic voting is by no means perfect. Indeed, significant concerns have been voiced about the system’s verifiability and privacy – especially about the TSE’s recent move to eliminate the printers, supposedly because they slowed voting.
Among the most important proposed improvements are a requirement that all voting machines produce both electronic and paper records, in order to leave an audit trail and increase voter confidence in the system; that system software be based on “open” standards and available for audit; and that the system for identifying eligible voters be separated from the voting itself, to ensure privacy.
Despite these concerns, most observers agree that Brazil’s system performed very well in 2002. Within a couple of days, winners were announced almost entirely without dispute, not only for the Presidential race, but also for the 54 Senate races, 513 Congressional races, 27 state governorships, 5,500 mayors, 57,316 councilmen, and many other local contests – all told, races involving more than 315,000 candidates.
Given this success story, many other countries that have traditionally had serious problems with paper ballot fraud are also considering electronic voting, including Mexico, Argentina, the Dominican Republic, India, Ukraine, and Paraguay.
The US. This country, where the choice of voting technology is localized and highly political, still relies very heavily on paper-ballot and mechanical voting systems – while 42 states will have new voting machines in 2004, only about 29 percent of US voters (at most) will cast electronic ballots this year. And with each state free to adopt its own local variant of the technology, it is not surprising that implementation has been problematic -- recent experiments with electronic voting in New Mexico, California, and Florida have all been riddled with problems.
In 2002, the US Congress passed the “Help America Vote Act” (HAVA), which authorized $3.9 billion to be spent by 2006 to help state and local governments upgrade their election equipment. Given the fragmented nature of US election administration, however, this has produced a competitive scramble for government contracts among 3-4 leading US electronic voting firms, including Diebold Election Systems, ES&S, and Smartmatic, the Florida-based company whose 20,000 electronic voting machines successfully handled Venezuela’s Presidential Recall referendum this summer.
But with no mandatory national standards and the numerous operating problems that we’ve already seen at state and local levels, it is not surprising that there is widespread, perhaps exaggerated, concern about the potential for hacking and manipulation. In this highly-politicized context, with no adequate checks and balances over the procedures, some observers have argued that computerizing US voter registration and electronic voting will actually make things worse.
Relative to more basic problems like absentee ballots, phony registration, malapportionment, and misrepresentation, we believe that these concerns are exaggerated. However, like many other elements of our “pseudo-democratic” political system, it is very hard to argue that the US track record with respect to electronic voting is an achievement.
Proportional Representation
Brazil. Brazil’s 81 Senators are elected by simple plurality for eight-year terms. But Article 45 of Brazil’s Constitution provides that the 513 members of Brazil’s House of Deputies are elected for four year terms according to a voting system called “proportional representation” (PR). Brazil also uses PR to elect city councils and state legislatures.
Unlike the plurality, single-member-district, “first past the post” system that is used in most US federal and state elections, this approach ensures that the overall mix of elected representatives more accurately reflects voter preferences. It also represents a wider range of opinions, since party candidates compete with each other. Indeed, various forms of PR are now used for the election of “lower houses” by all other countries in Latin America, almost all European countries, all the world’s largest democracies, the new democracies in Iraq and Afghanistan, and, indeed, the vast majority of democracies in the world today -- except for the US, Canada, the UK, and former British Caribbean colonies like Jamaica and the Bahamas.
Indeed, many of the world’s leading corporations have also turned to proportional representation, cumulative voting, or other forms of preference voting for purposes of electing their corporate boards.
Brazil uses what’s known as the "open list d’Hondt” version of PR. Here, political parties or coalitions register proposed lists of congressional candidates with the Electoral Court. There are no single-member congressional districts – rival candidate lists are drawn up by each party for each of Brazil’s 26 federal states (plus the Federal District), with the number of representatives per state determined by population, subject to minimums and maximums. Voters can opt for a party’s entire list, or select among individual candidates – unlike a “closed list” PR system, where candidates on the same list don’t compete with each other.
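For readers unfamiliar with the d’Hondt method named above, here is a minimal sketch of the seat-allocation step. The party names, vote totals, and the ten-seat district are hypothetical; the real Brazilian calculation is run state by state by the Electoral Court using an electoral-quotient formulation, so its details differ from this bare-bones version.

def dhondt(votes, seats):
    """Allocate seats by the d'Hondt highest-averages rule:
    repeatedly give the next seat to the party with the largest
    quotient votes / (seats_already_won + 1)."""
    won = {party: 0 for party in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[best] += 1
    return won

# Hypothetical list votes in one state-wide district, with 10 seats at stake.
list_votes = {"Party A": 340_000, "Party B": 280_000, "Party C": 160_000, "Party D": 60_000}
print(dhondt(list_votes, seats=10))  # e.g. Party A: 4, Party B: 4, Party C: 2, Party D: 0

# Under an *open* list, each party's seats then go to its own candidates
# in descending order of their personal vote totals.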
Of course Brazil’s proportional representation system is by no means perfect. Some have argued, for example, that using its 26 states as electoral districts has made Brazil’s legislators too remote from voters, and that smaller districts, or even a “mixed” PR system like that used in Mexico, Germany, or Venezuela may be preferable. Some voters may also find it too difficult to choose among so many different candidates, leading them to opt for party slates.
However, when it comes to buying shampoo or automobiles, most consumers believe that increased variety is a good thing – why should politics be any different? And it is hard to imagine an elected official who is more remote from minority voters than one who is elected – as more than 95 percent of US Congressmen now are – in a district that predictably goes either Democrat or Republican, year after year. This system, in turn, encourages the country to divide into polarized sectarian camps, where even majority voters feel less well-represented because incumbents can take them for granted.
Some have argued that Brazil’s open list version of the PR system has contributed to a weaker party system, and to competition among a party’s own candidates. For example, any political party in Brazil can put forth candidates without having to win a minimum share of the national vote. Party weakness may also be encouraged by the fact that incumbents can change parties without losing their spots on the ballot, one of several measures that favor candidates over parties. This may indeed have encouraged a proliferation of political parties – in Brazil’s 2004 Camara dos Deputados and Senado Federal, for example, 16 different parties are represented. Even with stable coalitions among the top 3-4 parties, the concern is that all this can substantially increase the negotiation costs of legislation, block Presidential initiatives, and lead to deadlock.
On the other hand, such negotiations may also produce outcomes that are more reflective of the popular will. And recent studies of the actual operation of Brazil’s Congress suggest that the concerns about fragmentation and policy deadlock have been overstated. Furthermore, the fact that delegate turnover in Brazilian elections since the return of democracy has averaged more than 50 percent may be viewed as a good thing – especially compared with the “dynastic legislature” that the US has acquired.
The US. There is actually a long history of preference voting and proportional representation in the US at the state and local level. For example, a majority of the original 13 colonies’ legislatures were elected using multi-member candidate slates, and more than 60 percent of city councils in the US are still elected with at-large slates. There have also been numerous proposals to expand their use at the federal level, especially for the US House, which could be done without a Constitutional amendment.
However, currently, delegates to the House and Senate and almost all state legislatures are chosen in single-member-district “first past the post” contests, where voters in each district choose just one candidate for each office. Those voters whose candidate wins a plurality of the counted votes (which may be well under 50 percent) are awarded 100 percent of the district’s representation; all other voters get zero representation. This implies two kinds of inefficiency – “over-represented votes,” those that a winning candidate didn’t need in order to prevail, and “under-represented votes,” those cast for losing candidates.
Compared with a PR system like Brazil’s, the US approach is monumentally inefficient – even in two-way “winner take all” races, roughly half or more of all votes are always wasted, in the sense that they are either surplus votes beyond what the winner needed to prevail, or votes for the loser that receive no representation at all.
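To make the arithmetic concrete, here is a small illustrative calculation of “wasted” votes in a single two-way district; the vote totals are hypothetical:

# Hypothetical two-way, single-member district.
winner_votes, loser_votes = 120_000, 80_000
total = winner_votes + loser_votes

needed_to_win = loser_votes + 1              # a bare plurality over the runner-up
surplus = winner_votes - needed_to_win       # "over-represented" votes
unrepresented = loser_votes                  # "under-represented" votes

wasted_share = (surplus + unrepresented) / total
print(f"Wasted votes: {wasted_share:.1%}")   # about 60% in this 60-40 example

Even in the closest possible two-way race, the wasted share by this definition never falls below roughly half of all votes cast.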
There is a huge literature on the merits and demerits of proportional representation, which contrasts its increased representation for minority interests with its alleged tendency to produce unstable coalition governments (the classic cases being Italy and Israel). At the end of the day, there are undoubtedly some empirical trade-offs to be made. But the fact is that the vast majority of the world’s democracies, as well as many of the world’s leading private companies, have opted for various forms of PR systems for representation – with the exceptions of the US, Canada, and the UK. And among those that have opted for PR systems are all of the world’s newest democracies, including Brazil, South Africa, Indonesia, East Timor -- and even our own favorite new Middle Eastern democracies, Afghanistan and Iraq. Is there no content in this signal?
Apportionment/Gerrymandering/ “Safe Seats”
Like other countries with federal systems, Brazil and the US have struggled to (1) provide representation that is proportional to the actual number of voting-age residents in all regions of the country, while at the same time (2) providing at least a minimum degree of representation for all regions, no matter how sparsely populated.
In Brazil’s case, significant “malapportionment” – departures from representation that is strictly proportional to population -- continues to exist, mainly in the form of overrepresentation for rural states in the National Camara and Senate. This is because Brazil’s Constitution guarantees each of its 26 states, plus the Federal District, at least 3 Senators regardless of population, and because representation in Brazil’s Camara is a truncated function of population, with each state guaranteed a minimum of 8 representatives and a maximum of 70.
However, as shown in Table 6, the overall degree of “mal-apportionment” -- defined as the median ratio of a state’s share of representatives to its share of VAP – is only slightly higher for Brazil’s Camara than for the US House, while the degree of mal-apportionment for Brazil’s Senate is much lower.
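The measure itself is easy to compute. Here is a minimal sketch, using entirely hypothetical state figures rather than the actual data behind Table 6:

from statistics import median

# Hypothetical seat counts and voting-age populations (VAP) for three states.
seats = {"State A": 70, "State B": 8, "State C": 25}
vap = {"State A": 40_000_000, "State B": 1_500_000, "State C": 12_000_000}

total_seats, total_vap = sum(seats.values()), sum(vap.values())

# Ratio of each state's share of seats to its share of VAP; 1.0 means
# perfectly proportional, above 1.0 means over-represented.
ratios = {s: (seats[s] / total_seats) / (vap[s] / total_vap) for s in seats}

print(ratios)
print(median(ratios.values()))   # the median ratio used as the summary measure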
Furthermore, unlike the US, Brazil permits its federal district – Brasilia – to elect both Senators and Congressmen. In contrast, the US has stubbornly refused to permit the 563,000 residents of the District of Columbia to have any voting representation in either the House or Senate.
Finally, the total number of elected national representatives in Brazil – 513 deputies for the Camara and 81 Senators – is more than twice as high per voting-age resident as in the US. Since the US has been slow to adjust the total number of House and Senate members in response to increases in population, it now has one of the lowest ratios of national legislators to voters among leading democracies – with more than six times as many voters per legislator as in the UK, Canada, and South Africa.
Compared with countries like Brazil that elect their Congressmen from multi-candidate, state-wide lists, the US political system’s focus on “single-member districts” is also far more open to partisan gerrymandering. There is a long history of dominant parties drawing district lines to favor their own candidates or create safe seats. Traditionally, every ten years, state legislatures in the US redraw their district boundaries to fit the latest Census data.
Several other factors have also exacerbated this trend toward the creation of careerist legislators in the US. As one recent analysis concluded,
…(T)he incidence and extremism of partisan redistricting have escalated. Voting patterns have become more consistently partisan, enabling political mapmakers to better predict how voters will vote. And advances in computer technology and political databases allow cartographers to fine-tune district boundaries to maximize partisan advantage.
More generally, districting isn’t an issue that necessarily favors either main party. But one consequence of it is a clear trend toward more safe seats for incumbents of both parties, fewer competitive races, and a growing geographic polarization into “red” and “blue” districts – the so-called “Retro-Metro” divide.
In this fall’s US Congressional races, for example, out of 435 US House races, only about 30 will be decided by margins of less than 10 percent, and just 20-25, or 5-6 percent, will be “effectively contested,” with a serious possibility that seats may change party hands. Many of these involve seats where incumbents are voluntarily retiring. For the US Senate, where redistricting is not an issue, the incumbency advantage is only slightly lower -- only 4-5, or 10-12 percent, of this year’s 34 Senate races will be effectively contested.
So regardless of who wins the Presidency this year, the reality is that the US House and Senate are both likely to remain under Republican control.
This is consistent with a trend toward increased advantages for incumbents in US elections. In 2002, for example, just 4 out of 383 US House incumbents lost their seats to non-incumbent challengers. This was the highest rate of incumbent reelection since 1954. Of 435 races in 2002-04 (including special elections), 81 percent were won by incumbents with margins of more than 20 percent. Of the 379 incumbents who were reelected in 2002, the median margin of victory was 39 percent. (See Table 7.)
Among the 56 non-incumbents who were elected, 11 won in newly created “safe” districts, by a median margin of 19 percent. Thirty-two were elected from the same parties that had represented their districts before, by a median margin of 15 percent. Just 13 were new non-incumbents who managed to oust the prevailing party in their districts, mostly by defeating other new non-incumbents. In fact, in 2002, more US Congressmen were elected in uncontested races, with 100 percent of the vote (n=31), than in “close” races where the winning margin was 10 percent or less.
Given these trends, it is not surprising that the average tenure of Congressmen has been increasing -- apart from voluntary resignations. As of 2004, the median Congressman had served 8 years, and 20 percent had served 16 years or more. The main source of turnover in this increasingly entrenched, carefully-districted, careerist “people’s house” is now retirement, death, or incarceration, not voter decisions.
Similar trends are evident for the US Senate and the Presidency, although the advantages of incumbency are less than for House races. In the case of 2002 Senate races, 16 of 33 races (48%) were won by more than 20 percent, and 67 percent were won by more than 10 percent. Incumbents were reelected in 23 (88 percent) out of 26 races where they decided to run. As in the House, the main source of turnover was retirement or death. As of 2004, the median Senator had been in office 10 years, and the top 20 percent had median tenures of 28 years.
As for the Presidency, out of 42 Presidents through Clinton, 25 ran for reelection, and 16 (64 percent) were reelected. Whether or not President Bush can take comfort in these odds is less clear, however – among the ten post-War Presidents that preceded him, just four (Eisenhower, Nixon, Reagan, and Clinton) were reelected.
At least at the Congressional level, there appear to be strong interdependencies between incumbent advantage and the existing systems for financing campaigns, conducting elections, representing voters, and defining districts. Finance not only strengthens incumbent advantage; it also follows from it, in the sense that incumbency makes it much easier to raise money.
Once this interdependent system of interests has been established, it is very hard to unravel. Is it any wonder that more and more Americans have simply decided that they have better things to do with their time than vote – even though the issues at stake have never been more important, not only for Americans, but for the world at our mercy?
All this adds up, in Brazil’s case, to an electoral system that is, on the whole, much more democratic than that of the United States.
Nor is Brazil alone in providing a potential role model for those in the US who are serious about revitalizing democracy at home:
In April 2004, South Africa held its third national election, with 15.9 million South Africans, or 76.7 percent of registered voters, turning out. While the ANC won a commanding 69.7 percent of the vote, 20 other parties also participated, and 11 of them won seats in Parliament. Compared with 1994, when South Africa held its first post-apartheid election, public confidence in the political system and the future have both risen substantially.
This 2004 turnout, while impressive, was below the record level set in 1994, when more than 19 million South Africans voted in the country’s first democratic elections ever. However, the difference may be explained by the fact that in that first election, no formal registration was required -- South Africa wisely considered it far more important to hold national elections as soon as possible, rather than worry too much about registration niceties. In subsequent elections, as it implemented formal registration, voter turnout has declined somewhat. (The new US-backed regimes in Iraq and Afghanistan, which have devoted an inordinate amount of time to preparing for national elections and registering voters, might well have learned from this example.)
To reinforce confidence in its electoral processes, South Africa has adopted special procedures to insure that independent international observers are present at its elections; in 1999 its elections were attended by more than 11,000 observers, including representatives from the OAU and the Commonwealth.
India, the world’s largest constitutional democracy, also held a successful parliamentary election in April-May 2004. More than 387 million people, or 56 percent of registered voters, voted, choosing among 220 parties and 5400 candidates, using more than 1 million electronic voting machines to register their preferences.
Indonesia, the world’s largest Muslim country, and its second largest democracy, held its first-ever Presidential election, as well as parliamentary and local elections, in May-September 2004. More than 155 million people, or 75 percent of registered voters, turned out to choose among six different Presidential candidates, for the most part by driving nails through the pictures of their favorite candidates on paper ballots. With few exceptions, the voting proceeded peacefully, and although there was substantial vote-buying on all sides, about 9 percent invalid votes, and some intimidation, independent monitors like the Carter Center and the LP3ES-NDI found the election generally fair.
Venezuela’s President Chavez is undoubtedly a very divisive leader – not unlike some recent US Presidents that we have known. But at least his opponents have been able to make use of a right that no Americans have – the constitutional right to initiate a recall petition half-way through a President’s term, and if 20 percent of registered voters agree, to demand a recall referendum.
After prolonged efforts by the opposition to gather the necessary signatures, in May 2004 Venezuela’s Supreme Court – albeit, like the US Supreme Court, populated with supporters of the President -- certified that there were indeed enough signatures to require a referendum on whether Chavez could serve out his current term until 2007.
In August 2004, in a national referendum that was conducted with electronic voting machines, Chavez won by an overwhelming 59-41 percent margin. While diehard opposition leaders, as well as the Wall Street Journal and the US Government -- which had supported a 2002 coup attempt against Chavez -- expressed doubts about the margin, the referendum was validated by independent observers like President Carter, the OAS, and a team of Johns Hopkins/Princeton political scientists.
This means that Chavez has now held five free elections and referenda in seven years, more than any other Venezuelan President. He won them all by commanding margins.
We recall the fact that in 2000, the hapless Al Gore captured a plurality of the US popular vote (48.4 percent), despite all the games played with ex-felons, absentee vote abuses, and lost ballots discussed above, and even with third-party candidates taking another 3.7 percent of the vote.
Since it is hard to believe that President Bush’s absolute popularity has increased very much since then, one wonders what the results might have been if only the US were as democratic as Venezuela – e.g., if only Americans had been able to initiate such a Presidential recall referendum, and like our good Latin neighbor to the south, determine the outcome by popular vote of the nation as a whole, under the watchful eyes of international observers.
Instead, as we've argued here, the US is still captive to age-old anti-democratic contraptions, superstitions, and subterfuges, and its particular version of "democracy" still labors in the long dark shadows cast by venerable institutions like states rights, felon disenfranchisement, and white supremacy.
We can try to impose our self-image onto the world if we like, but we should not be surprised if the world asks us to hold up a mirror.
**
©James S. Henry and Caleb Kleppner, SubmergingMarkets™, 2004
September 27, 2004 at 12:07 PM | Permalink | Comments (0) | TrackBack
Wednesday, September 22, 2004
Democracy in America and Elsewhere: Part III: How the US Stacks Up - A. Qualifying Voters
We certainly wish President Bush much greater success than President Woodrow Wilson, who saw his own favorite proposal to “make the world safe for democracy,” the Versailles Treaty, throttled by Republican Senators who opposed the League of Nations, and suffered a stroke in the ensuing battle.
Knowing President Bush, he will probably not be dissuaded from his mission by this unhappy history, or by the fact that many other world leaders, like France's Chirac and Brazil's Lula, are now much more concerned about fighting global poverty and taxing "global bads" like arms traffic, anonymous capital in offshore havens -- an idea we first proposed in the early 1990s -- and environmental pollution than they are about neo-Wilsonian evangelism.
But of course any suggestion by the US that democracy can actually be propagated by multilateral consensus rather than by unilateral military aggression is always to be welcomed.
Before proceeding any farther with this latest American crusade to sow democracy abroad, however, it may be helpful to examine how the US itself really stacks up as a “democracy," relative to "best democratic practices" around the world.
One approach to this subject would be to start off with a comparison with other leading First World democracies like the UK or France. After all, at the outset, one might think that only such countries have the well-educated, politically-engaged citizenry, political traditions, affluence, and technical know-how needed to implement truly state-of-the-art democratic processes.
However, following the lead of former President Jimmy Carter’s brief comparative analysis of Peru in 2001, we find it more interesting to see how the US compares with younger developing democracies that lack all these advantages – much less access to the yet-to-be-created UN Democracy Fund.
In our case, we’ve chosen Brazil, the world’s fifth most populous country, with 180 million inhabitants, two-thirds of South America’s economic activity, a federal system, and a long history of slavery (like the US).
As we’ll see, our overall finding is that while Brazil’s democracy has plenty of room for improvement, it already boasts a much more democratic electoral system than the United States of America.
While Brazil’s electoral institutions are by no means perfect, and its campaign finance laws and federal structure have many of the same drawbacks as the US, it has recently been working very hard to improve these institutions.
Indeed, it turns out that Brazil is making remarkable progress toward effective representative democracy, especially for a country with enormous social problems, a high degree of economic and social inequality, and a per capita income just one-third of the US level.
Brazil’s new democracy provides a striking contrast along many dimensions – in particular, the processes and structures by which it (1) qualifies voters, (2) conducts campaigns, (3) administers voting, and (4) provides fair representation of voter preferences. The following essay focuses on the first of these elements; the sequel will deal with the others.
1. Mandatory Voting/Registration
Actually “mandatory voting” is a misnomer – people are just required to show up at a polling station or consular office and submit a vote, which can be blank. There are fines for violators who lack valid excuses, like illness.
Brazil adopted mandatory voting in part to overcome the apathy induced by more than two decades of military rule. It is just one of many countries that have mandatory voting, including Australia, Belgium, Cyprus, Greece, Luxembourg, Liechtenstein, one Swiss canton, Egypt, Fiji, Singapore, Thailand, Argentina, Bolivia, Costa Rica, the Dominican Republic, Ecuador, Uruguay, and Venezuela.
Mandatory voting in Brazil is facilitated by the fact that, as in 82 other countries, all Brazilians age 18 or over are required to obtain a national identity card, with their photo, fingerprint, signature, place and date of birth, and parents’ names.
These cards, which are now becoming digital, are needed to qualify for government services and to conduct financial and legal transactions. They also enable cardholders to vote at polling booths anywhere in the country, eliminating the need for a separate, costly voter registration process.
To encourage voter turnout, Brazil also makes Election Day a national holiday, and often holds its elections on Sundays. Any eligible voter may be required to assist for free at the polls.
Mandatory voting, plus Brazil’s proportional representation system (See Part IIIB), have yielded voter turnouts in recent national elections that have routinely exceeded 75 percent of the voting-age population (VAP).
By comparison, US voter turnouts have recently averaged less than 45 percent of the VAP.
Brazil’s mandatory system has also had many other benefits. It has probably increased turnout the most among social groups that have much less access to education and income, thereby boosting their “voice” in the political system. It has also placed pressure on public authorities to implement efficient voting procedures, and shifted responsibility for registration and turnout away from Brazil’s political parties, allowing them to focus on campaigning.
As one might expect, mandatory voting does produce slightly more blank votes as a proportion of all votes than we see in US elections. But the system also seems to have made voting more habitual.
Some countries, like Austria and the Netherlands, have recently abandoned the practice, and Brazil is also considering this, now that the population has re-acquired the voting habit. As Brazil matures, especially given its use of proportional representation, it may well be able to follow in the footsteps of these other countries and eliminate mandatory voting without sacrificing high turnout.
The US. Voting is entirely voluntary in the US, and there are no national identity cards or centralized voter registration systems. Originally, many states viewed voter registration as undemocratic. But in the course of the 19th century, growing concerns over vote fraud, combined with the desire in some states to curb voting by blacks and the lower classes, led to the widespread adoption of stricter voter registration laws. By now, every state but North Dakota requires voters to “register” before they can “vote.” US elections are also never held on Sundays, nor is Election Day a national holiday.
As we’ll examine more closely in Part IIIB, the US’ “winner-take-all” electoral system is also highly inefficient, with more than 95 percent of all Congressional incumbents now re-elected, and almost all US House and Senate races now a foregone conclusion. So US voters are naturally not eager to participate in such “Potemkin” elections, which are approaching Soviet-like party reelection rates (though the US does have TWO Soviet-like parties).
None of this has helped to encourage voter turnout. Not surprisingly, therefore, for the entire period 1948-1998, US voter turnout averaged just 48.3 percent as a share of VAP, and ranked 114th in the world. This was the lowest level among all OECD countries -- forty percent lower than the average turnouts recorded in First World countries like Germany, Italy, Sweden, and New Zealand. Even if we omit the 17 countries like Brazil with mandatory voting, it is hard to make this track record look like an achievement.
One can argue that relatively low turnout is precisely the point. Indeed, participation by ordinary Americans in their political system has always been a trifle unwelcome. For example, just 6 percent of all Americans – 20 percent of whom were slaves -- participated in George Washington’s election in 1789. This was mainly because most state legislatures at the time had decreed that voters had to be white, propertied, male, Protestant, and at least 21 years old. Studies of 19th century voter turnout in the South also show that turnout, which once exceeded 60 percent in the 1880s, plummeted sharply in the next 30 years under the impact of tougher registration laws that targeted black voters. To this day, the Neo-Republican South still boasts the lowest turnout rates and highest black population shares in the country.
Some cynics argue that low US turnout rates are just a sign of how deeply “satisfied” American voters are with the way things are. However, these turnout rates have declined sharply over the last three decades, at a time when it is hard to believe that Americans have become more and more satisfied with their political system.
In 1968, for example, 73.2 million Americans voted, a 61 percent turnout level. Thirty years later, in 1998, the number of Americans who voted was still just 73 million -- despite the fact that US population had increased by 40 percent.
Beyond voting, as of 2002, one US citizen in three (33.6 percent) did not even bother to register to vote. And that proportion was higher than it was in 1993, when Congress passed the National Voter Registration Act, which was intended to facilitate voter registration.
Evidently a majority of American voters have now become so “satisfied” that they no longer choose to participate in the political system at all. According to this bogus "apathy" theory of non-registration, the most “satisfied” groups of all must be blacks, other minorities, youth, the poor, and residents of Southern states, whose turnout rates are all miserably low.
In 2002, in four states (Texas, West Virginia, Indiana, and Virginia), less than 40 percent of all eligible citizens of voting age voted. Of 24 million Americans between the ages of 18 and 24, 38 percent registered, and 4.7 million, or 19.3 percent, voted. Just 27 percent of unemployed citizens, 30 percent of Hispanic citizens, 30 percent of Asian American citizens, 30 percent of the 35 million disabled Americans, 35 percent of all women ages 18 to 44, 37 percent of high school graduates, and 42 percent of all black citizens voted.
In fact, as we’ll examine later, there are very important structural reasons that help to explain why these groups fail to register or vote.
In the case of black males, for example, prisoner and ex-felon disenfranchisement may account for a substantial fraction of their relatively low participation rates. And 70 percent of those who registered and didn’t bother to vote in 2002 blamed logistical problems – transportation, schedule conflicts, absence from home, registration problems, homelessness (2.3-3.5 million adult Americans, depending on the year), the failure to get an absentee ballot on time, inconvenient polling places, or illness (including 44% of non-voting registrants age 65 or older).
All these obstacles affect poorer, less educated, older voters more than others. Most of them might easily be addressed with improved voting technology, if this country’s leaders, despite their putative concern for democratization around the world, were really serious about implementing democracy at home.
Meanwhile, in 1998, some 83 million Brazilians voted – 5 million more than in the entire US, which has about 100 million more citizens. Brazil’s voter turnout has increased dramatically since the 1960s, from 37 percent of VAP in 1962 to an average of more than 80 percent in 1994-2002. In 2002, while 88 million Americans proudly exercised their right to vote, so did 91 million Brazilians – for an 81 percent turnout. On the “satisfaction” theory, all these Brazilians must be nostalgic for the dictatorship.
After the 2002 Congressional elections, some US political pundits were impressed because voter turnout had increased slightly, from 41.2 percent in 1998 to 42.3 percent (46.1 percent of all citizens).
From an international perspective, however, that merely put the US on a par with Haiti and Pakistan –- just half of Brazil’s level.
Overall, the US trends described here are hardly indicative of “voter satisfaction.” Rather, they are a very disturbing sign that there are deep structural impediments to voting in America. Furthermore, the grass roots organizing power that has always been essential for getting out the vote in this country, much of it supplied by parties and unions, may have been waning.
From this angle, it will be very interesting to see whether this November’s contest, and the elaborate new organizing drives that have been mounted to increase US voter turnout and registration, will reverse these trends. No doubt turnout will be higher than it was in the dismal 2002 off-year election, but that's not saying very much. A more telling indicator will be to see whether turnout surpasses the (relatively modest) 59 percent median VAP turnout rate that the US recorded in nine Presidential elections over the whole period 1968-2000. We would love to see it happen, but since that would amount to a 10 percent improvement over the turnouts recorded in 1996 and 2000, we doubt it will.
2. Voting Rights for Prisoners and Ex-Felons
Brazil. Disenfranchising prisoners and ex-felons is unfortunately a longstanding, widespread departure from “one person, one vote” -- a legacy of the age-old practice of excommunicating social outcasts. Worldwide, there is a growing trend toward discarding this medieval practice, with 32 countries now allowing all prisoners to vote and 23 more that allow certain classes of them to do so.
Brazil is one of 54 countries that prohibit prisoners from voting while they are in jail, but it permits them to vote after they are released, or are on parole or probation.
The US. The American approach to prisoner voting is much more restrictive than Brazil's. All but 2 (Vermont and Maine) of the 50 states disenfranchise all incarcerated prisoners, including those awaiting trial. Thirty-four states disenfranchise all felons on parole, while thirty disenfranchise those on probation.
Furthermore, the US is one of only 8 countries where ex-felons are temporarily or permanently disenfranchised even after they have completed their sentences, unless they successfully petition the authorities to have their voting rights restored. In 7 US states, felons are disenfranchised for several years after serving their sentences – for example, 5 years in Delaware, or 3 years in Maryland. In 3 states – Arizona, Maryland, and Nevada -- recidivists are permanently disenfranchised. And in 7 other states – Alabama, Nebraska, Kentucky, Mississippi, and the “battleground states” of Iowa, Florida, and Virginia – all ex-felons are permanently disenfranchised.
Many of these rules date back to the post-Reconstruction decades of the 1880s and 1890s, when they were enacted by Southern and border states to maintain control over the newly-freed blacks -- contrary to the spirit of the 15th Amendment.
The impact of prisoner and ex-felon disenfranchisement on electoral outcomes is much greater in the US than in Brazil, because of the electoral college system and the size, composition, and location of the US convict population. Indeed, while Brazil's prison system is horribly overcrowded, its entire prison population is just 285,000 inmates -- about 0.2% of Brazil’s voting-age population.
The US, in contrast, now has the world’s highest proportion of its population in prison or jail, or on probation, parole, or other correctional supervision outside jail. As of August 2004, this “correctional population” totaled 7.2 million adults, 3.3% of the US VAP. Relative to population, as well as in absolute terms, this is the largest US correctional population ever. It is also by far the largest prison population in the world, well ahead of the US’ closest competitors, China and Russia.
There are also another 3.2 million American citizens – 1.4% of the US VAP -- who have served time in state or federal prison for felonies and are no longer in correctional programs. Depending on their states of residence, they may be subject to the voting restrictions imposed on former felons in the US.
Both these totals have soared since 1980 because of stiffer drug and sentencing laws -- the “correctional” population as a share of VAP has almost tripled, from 1.17% to 3.3%. (See Figure 3A-1.)
Furthermore, compared with 1980, when a majority of state and federal prison inmates were serving time for violent crimes, a majority are now either awaiting trial because they cannot afford bail, or are serving time for non-violent offenses, more than a quarter of which were relatively minor drug-related offenses.
Drug Offenses and Disenfranchisement. As other analysts have recently noted, such drug offenses rarely involve “victims,” and there is a high degree of prosecutorial discretion. This makes them especially vulnerable to racially-discriminatory arrest practices. For example, recent studies of drug arrest rates show that black arrest and conviction rates for drug-related offenses are way out of proportion to drug use in the black community, and that the disparity between black and white arrest rates for drug use has been soaring because of policing practices, not because of greater underlying criminality.
The resulting steep rise in the US prison population since the 1980s provides a strong contrast with European countries and leading developing countries, where per capita prison populations have been stable or even declining. Not surprisingly, the disparity is also consistent with the fact that Europe’s drug laws are much less punitive.
Unemployment Impacts. The increase in the US correctional population as a share of the population since 1980 has not only reduced the ranks of poorer voters. It has also reduced the size of the “observed” civilian labor force and the official US unemployment rate by 18-20 percent. In other words, the US unemployment rate in July 2004, for example, would have been 6.43 percent, not the official 5.43 percent reported by the Bureau of Labor Statistics. So without this swollen prison population, there would now be more than 10 million unemployed in the US – at least 2.2 million more than the official statistics show, and more than enough to swamp any alleged “job growth” in the last year.
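A back-of-the-envelope sketch of that adjustment appears below. The round numbers are illustrative assumptions, not official BLS figures, so the output will not exactly reproduce the 6.43 percent cited above:

# Treat an assumed slice of the correctional population as unemployed
# labor-force participants and recompute the unemployment rate.
# All inputs are illustrative round numbers, not BLS data.
labor_force = 147_000_000     # assumed civilian labor force
unemployed = 8_000_000        # assumed officially unemployed
added_back = 2_200_000        # correctional population assumed added back

official_rate = unemployed / labor_force
adjusted_rate = (unemployed + added_back) / (labor_force + added_back)

print(f"official: {official_rate:.2%}   adjusted: {adjusted_rate:.2%}")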
So US penal policies have not only removed a huge number of prisoners from the ranks of potential voters. They have also helped to disguise the seriousness of the US economy’s rather tepid recovery.
And some of us thought the point of the US’ punitive drug laws was to reduce drug trafficking! (Note to reader: US real retail cocaine prices have plummeted since the 1980s. See Figure 3A-2.)
While it is not easy to measure the impact that US prisoner disenfranchisement has had on recent elections, it may have been substantial, as several analysts have recently noted. For example, one recent study estimated that in 2000, more than 3.0 million prisoners, parolees, and probationers, plus 1.5-1.7 million ex-felons, were formally disenfranchised – 2.1% of the US voting age population. Another recent study of prisoner disenfranchisement in the state of Georgia found that 13% of adult black males were disenfranchised by this policy, and that it explained nearly half the voter registration gap between black males and non-black males.
There were also another 358,000 who had been jailed awaiting trial, and 218,000 more who had been jailed on misdemeanor charges. All these people were also effectively disenfranchised.
All told, during the 2000 Presidential race, the total number of potential American citizen/voters who were disenfranchised because of the US penal system and its archaic laws was about 5 million. Since the numbers have continued to grow since then, by now they have reached 5.5 – 5.8 million.
As other commentators have noted, this policy is also practically unique -- no other putative “democracy” comes anywhere close to this kind of systematic vote deprivation.
No doubt there are some determined ex-felons, parolees, and probationers who manage to slip through and vote even in states that prohibit them from doing so. Many others would not vote even if given the chance. However, even apart from the question of whether such harsh treatment encourages better behavior, this disenfranchisement policy is far from politically neutral:
Texas alone has at least 500,000 ex-felons and more than 200,000 prisoners and other inmates who have been disenfranchised, the overwhelming majority of whom are black or Hispanic.
Of Florida’s 13.4 million people of voting age, at least 600,000 to 850,000 prisoners, parolee/probationers, and ex-felons have been disenfranchised by such voter registration laws, including at least one-fifth of all adult black males who reside there. Other battleground states, including New Mexico, Virginia, Iowa and Washington, have also used such laws to disenfranchise 15-25 percent of their adult black male populations.
All told, the top 15 battleground states account for at least 1.4 to 1.6 million excluded potential prisoner/ex-felon votes this year. Combined with the US’ knife-edged “winner take all” electoral system, this is clearly a very important policy choice.
Furthermore, in states like Florida, Texas, Mississippi, and Virginia, the opportunity to purge thousands of minority voters from the rolls in the search for “ex-felons” has opened the doors to many other abuses.
For example, in 2000, there was the notorious purge by Florida’s Republican Secretary of State of 94,000 supposed “felons.” It later turned out that this number included more than 50,000 blacks and Hispanics, but just 3,000 actual ex-felons.
One might have hoped that one such flagrant anti-democratic maneuver would have been enough. But that was followed by attempts by Florida’s Republican state administration to do the very same thing again in 2002 and again this year, when Florida tried to use another “bogus felons” list with another 40,000 names.
From this angle, all of the many arguments over Nader’s candidacy, “hanging chads,” and the narrow 537 vote margin by which Bush carried that state in 2000, were side-shows.
We are reminded of the Reconstruction period from 1867 to 1877, when Florida and 8 other Southern states had to be put under military occupation by the US Government, to prevent the white elites’ systematic attempts to deprive freed slaves of their voting and other civil rights. By the late 1870s, Northern passions toward the South had cooled, the Union troops left, and white-supremacist governments reacquired power. Unfortunately, unlike in the 1860s, the “Radical Republicans” in Congress now side with the closet supremacists.
Counting Prisoners for Apportionment. The punitive US policy toward current and former prisoners appears even more bizarre, once we take into account the fact that for purposes of redistricting, the US Census – unlike Brazil – counts prison and jail inmates as residents of the counties where the prisoners are incarcerated, rather than the inmates’ home towns.
In general, this approach to counting prisoners for districting purposes tilts strongly in favor of rural Southern and Western states – areas that also now happen to vote Republican. (See Figure 3A-3.) It has an important impact on the apportionment of Congressional seats and seats in state legislatures, the allocation of federal funds to Congressional districts, and the total number of electoral college votes that each state receives. It also creates a huge, influential coalition of interests -- construction companies, prison administrators and guards, and politicians -- that amounts to a “politician-prison-industrial complex,” with powerful selfish motives to support tough sentencing laws and the construction of new prisons and jails.
The resulting combination of disenfranchisement and malapportionment recalls the “three-fifths compromise” that was built into the US Constitution in 1787, to accommodate the original six Southern slave states, where slaves constituted more than forty percent of the population. Under this provision, even though slaves could not vote, they were counted as three-fifths of a person, for purposes of determining each state’s Congressmen and Presidential electors.
Given this provision, it was no accident that 4 of the first 5 US Presidents were Virginian slave owners. This exaggerated Southern political power, entrenched by the anti-democratic electoral college, had disastrous consequences – it made resolving the problem of slavery without a regional civil war almost impossible. (Contrast Brazil’s relatively peaceful abolition of slavery.) From this perspective, the electoral college and prisoner disenfranchisement are both just throwbacks to America’s “peculiar institution,” slavery. As John Adams wrote in 1775,
All our misfortune arise(s) from a single source, the reluctance of the Southern colonies to republican government….The difficulties lie in forming constitutions for particular colonies and a continental constitution for the whole…This can only be done on popular principles and maxims which are so abhorrent to the inclinations of the barons of the South and the proprietary interests of the middle colonies…..
In a sense, the modern analog is even worse: prisoners can’t vote either, but they count as one whole person in the districts where they are imprisoned, for purposes of redistricting.
Surprisingly, illegal immigrants are also included in the US Census count for redistricting purposes. Depending on where immigrants locate, this may reinforce the prisoner effect in some key states. The US illegal immigrant population has also been growing rapidly, with a Census-estimated 7.7 - 8.9 million illegals in the US by 2000, compared with about 3.5 million in 1990. According to the INS, two-thirds are concentrated in just five states – California, Texas, New York, Illinois, and Florida. However, unlike with prisoners, estimates of where illegal immigrants are located are much more uncertain. So the US policy of including non-voting illegals in the Census for purposes of drawing voting districts is also very peculiar.
3. The Voting Age
Brazil. To encourage young people to get involved in politics, Brazil gives those who are 16 or 17 the right (but not the duty) to vote. This measure increases Brazil’s VAP by about 6 percent. Brazil argues that a relatively low voting age is consistent with the spirit of the UN’s Convention on the Rights of the Child. It also argues that this youth vote acknowledges the basic fact that a majority of 16-17 year olds (in both Brazil and the US) pay taxes and can marry, drive, and be tried as adults, so they ought to be able to vote. So far Brazil has only been joined in this experiment by a handful of other countries, including Indonesia (age 17), Cuba (16), Iran (15), and Nicaragua (16). But the UK is now also seriously considering teen voting.
The US. The minimum voting age in the US has been 18 since 1971, when the 26th Amendment was adopted. A few states (Maine, California) have recently considered reducing the voting age below 18, but so far voting rights for 16-17 year-olds, much less the more radical proposal to let children of all ages vote, have not taken off. Obviously this cause has not been helped by the abysmal turnout levels of 18-24 year-old Americans in recent elections.
September 22, 2004 at 02:50 PM | Permalink | Comments (1) | TrackBack
Thursday, September 16, 2004
Democracy in America and Elsewhere: Part II: Recent Global Trends Toward Democracy
Of course we are also very proud of our free markets, our relative affluence, and our occasional ambitions -- at the moment perhaps a bit muted -- to provide equal opportunities for all our citizens.
However, when we try to market our country’s best features to the rest of the world, or teach our children to be proud of their country, it is not the economy that we brag about.
Even self-styled “conservatives” usually lead, not with glowing descriptions of perfect markets and opportunities for unlimited private gain, but with our supposedly distinctive commitment to defending and expanding political democracy and human rights at home and abroad.
Indeed, one of the most important official justifications for recent US forays into the Middle East, as well as our many other foreign interventions, has been to help bring “democracy” to supposedly backward, undemocratic societies like Iraq and Afghanistan (…and before that, Haiti, Colombia, Panama, Nicaragua, Grenada, the Dominican Republic, Cuba, Guyana, Guatemala, Iran, Laos, Vietnam, the Philippines, etc. etc. etc.)
Even though, time and again, this noble commitment has turned out to be pure rhetoric, it provides such an elastic cover story for all our many transgressions that it keeps on being recycled, over and over and over again.
Whatever the truth about US motives for such interventions, it may come as a surprise to learn that in the last two decades, the United States itself has actually fallen behind the rest of the democratic world in terms of “best democratic practices” and the overall representativeness of our own domestic political institutions.
Meanwhile, many developing countries have recently been making very strong progress toward representative democracy, without much help from us.
Indeed, in some cases, like South Africa, this progress was made in the face of opposition from many of the very same neoimperialists who have lately voiced so much concern about transplanting democracy to the Middle East.
While we have been resting on our democratic laurels, or even slipping backwards, the fact is that emerging democracies like Brazil, India, and South Africa, as well as many of our First World peers, have been adopting procedures for electing governments that are much more democratic at almost every stage of the electoral process than those found in the US.
The institutions they have been developing include such bedrock elements of electoral democracy as the rules for qualifying voters, conducting campaigns, administering the vote itself, and providing fair representation of voter preferences.
Of course effective democracy has many other crucial elements beside electoral processes alone. These include (1) the relative influence of legislative, executive, and judicial branches; (2) the concrete opportunities that ordinary citizens have -- as compared with highly-organized special interests and professional lobbyists -- to influence government decisions between elections; (3) the respective influence of private interests, religious groups, and the state; (4) the degree to which the rule of law prevails over corruption and "insider" interests; and (5) the overall degree of political consciousness and know-how.
However, fair and open electoral processes are clearly a necessary, if not sufficient, condition for effective democracy -- all these other elements simply cannot make up for their absence.
We hope that increasing the recognition of this “electoral democracy gap” between the US and the rest of the democratic world will be helpful in several ways.
Taking stock of the world’s democracies used to be much easier than it is now. As of the early 1970s, there were only about 40 countries that qualified as “representative democracies,” and most were First World countries.
Since then, however, there has been a real flowering of democratic institutions in the developing world. This was partly due to the collapse of the Soviet Empire in the late 1980s. But many more people were in fact “liberated” by the Third World debt crisis, which undermined corrupt, dictatorial regimes all over the globe, from Argentina, Brazil, and Chile to Indonesia, the Philippines, South Africa, and Zaire.
Voting in the Philippines, 2004
Assessments of the degree of “freedom” of individual regimes by organizations like Freedom House or the UN Development Program’s Human Development Indicators, are notoriously subjective. However, while there is plenty of room for disagreement about specific countries, there is little disagreement on the overall trend. (See Table 3.)
By 2004, about 60 percent, or 119, of the nearly 200 countries on the planet could be described as “electoral democracies,” compared with less than one-third in the early 1970s. Another 25-30 percent have made significant progress toward political freedom.
Voting in South Africa, 1994
Indeed, notwithstanding our present challenges in Iraq and Afghanistan, from the standpoint of global democracy, this has been a banner year. As of September 2004, 32 countries had already held nationwide elections or referenda, with 886 million people voting. (See Table 4.) By the end of 2004, another 33 countries will join the US in doing so – nearly three times as many national elections as were held each year, on average, in the 1970s.
All told, this year, more than 1.7 billion adults – 42 percent of the world’s voter-age population -- will be eligible to vote in national elections, and more than 1.1 billion will probably vote. That will make American voters less than 10 percent of the global electorate.
Of course, some of these elections will be held in countries where democratic institutions and civil liberties are still highly imperfect. And some developing countries like Russia and Venezuela have recently been struggling to find a balance between democracy and national leadership, partly to undo the effects of neoliberal policies in the 1990s, or in response to terrorist threats.
But the good news is that democracy is clearly not a “luxury good.” The demand for it is very strong even in low-income countries like Bolivia, Bangladesh, Mozambique, Guatemala, and Botswana. And while self-anointed dictators, military rulers, and one-party elites or theocracies are still clinging to power in 50-60 countries that have more than 2.4 billion residents, such regimes are more and more anachronistic. (See Table 5.)
Interestingly, Asian dictatorships, especially China and Vietnam, now account for more than three-fifths of the portion of the world’s population that still lives under authoritarian rule. While several Islamic countries appear on the list of authoritarian countries, they account for just one fifth of the total. Furthermore, by far the most important ones happen to be close US “allies” like Pakistan, Egypt, Morocco and Saudi Arabia.
Evidently the simple-minded neoconservative “clash of cultures” model, which pits supposedly democratic, pluralist societies against an imaginary Islamic bloc, doesn’t have much explanatory power.
Furthermore, the US also clearly faces some very tough choices, if it is really serious about promoting non-discriminatory, secular democratic states that honor the separation between church and state among its Islamic allies, as well as in Palestine, and, for that matter, Israel.
Voting in East Timor, 2001
A more encouraging point is that many developing countries are already providing useful lessons in democratization. Indeed, as we will see in Part III of this series, there is much to learn from the experiences of new democracies like Brazil and South Africa.
These countries are undertaking bold experiments with measures like free air time for candidates, “registration-free” voting, direct Presidential elections, electronic voting, proportional representation, and the public finance of campaigns. While not all these experiments have worked out perfectly, the fact these countries have already demonstrated a capacity to innovate in “democratic design” is very encouraging.
Of course there is a long-standing tension between the US dedication to Third World democracy and its tolerance for the independence that democratic nationalism often brings. By renewing and deepening our own commitment to democracy at home, we will also protect it abroad -- even though (as in Venezuela, Russia, Iran, and perhaps eventually also Iraq) it does not always produce governments that we agree with.
September 16, 2004 at 09:08 PM | Permalink | Comments (2) | TrackBack
Wednesday, September 15, 2004
Democracy in America and Elsewhere: Part I: Does It Really Have to Be This Way?
Although many pundits and politicians have hailed this contest as the “most important election of our lives," and talked about the “striking difference” between the two top candidates, most opinion polls show that a majority of ordinary Americans are profoundly dissatisfied with the limited choices available on this year's Presidential menu, with a majority of undecided voters disapproving of both Bush and Kerry in this week's polls.
They correctly sense that the prolonged, torturous process by which we choose the Leader of the Free World is deeply flawed. Many are asking, “Am I the only one who is unhappy with having to choose between these two Yale-bred prima donnas?......the only one who feels that this year's campaigns and the mass media have systematically avoided most of the critical issues that confront our country??”
If so many Americans prefer a different type of politics, however, why can’t they have it?
It turns out that this year’s unsatisfying campaign isn’t just an aberration. Rather, many of its less attractive features are a direct byproduct of deep-seated structural flaws in our electoral system, most of which are decades- or even centuries-old.
Unless we undertake the fundamental reforms required to fix these problems, our version of "democracy" is likely to become less and less attractive as a role model.
These structural flaws come into sharp relief when we compare the US to other democracies, especially several younger ones that have proved to be much more innovative than we are when it comes to "designing democracy."
When we do so, we arrive at a disturbing conclusion: in many respects, American democracy is falling behind the rest of the democratic world.
As we will explore in this series, the fact is that many other countries – including several developing countries as well as our First World peers – have adopted electoral processes that are much more democratic than our own.
Rather than bemoan this year’s Presidential campaign, therefore, we propose to explore the root causes of our political malaise. We will tackle this problem with the help of a comparative approach, examining "best practices" in other leading democracies that are working hard to insure that national elections are more than just the costly, high-carb biennial beauty pageants that they have become in the US.
One factor that has made the race close is the record level of campaign spending on both sides – more than $795 million for the Presidential race alone, plus at least $272 million of “527” money, including $20 million from the National Rifle Association and $2.6 million from the Swift Boat Veterans group. Contrary to expectations, both parties have stayed about even in the money rush, despite President Bush’s renowned fund-raising abilities.
Finally, there have also been an unusual number of wild-cards – putative terrorist threats, oil price shocks, North Korean nukes, job growth, Ralph Nader’s quixotic quest, and the continuing ups and downs of the Iraq and Afghan Wars, plus all the typically American quasi-religious disputes over gay marriage, the Vietnam War, assault rifles, stem cell research, and late-term abortions.
Along the way, we've had a floodtide of small-bore reporting, encouraged by instant polling, “rapid response,” and the army of several thousand reporters who have nothing better to do than cover the campaign 24/7 and 31/12.
Much of the resulting reportage reads like a kluge of the Daily Racing Form and the National Enquirer, with far less attention paid to hard policy issues than to campaign tactics, polls, and candidate “features" -- values, personal histories, eating habits, wives, children, appearances, misquotes, and mood swings. (For a good example, witness today's cover stories on "where's Edwards?" in both the San Francisco Chronicle and the New York Times.)
So far the candidates, their campaigns, and their advertising have reinforced this pattern, spending far more time on their own values and competence than on fundamental issues -- or on the real benefits that voters would derive from electing them. Even when they do get down to issues and actual benefits, the focus is on just a handful that won't offend undecided voters in key states. (See Box A.) Given the electoral college, they are also spending almost all their time and resources in the same 15-16 “battleground states,” where the polls show a gap between Kerry and Bush of 3 percent or less. (See Table 1.) Within these states, they are also focusing on the same 6-10 percent of voters who have somehow managed to remain “undecided” even at this late date.
As a result, this year's election will be decided by just a sliver of potential voters in a handful of states. The battleground states account for less than a third of all US “voter-age” residents or “potentially-eligible voters” -- after deducting non-citizens, convicts, and others not eligible to vote. If these states repeat the modest turnouts that prevailed in 2000, less than 57 percent of their voting age populations will vote. And since a winning candidate only needs a plurality, which can be less than 50%, this implies that, in a country with more than 200 million voting-age residents, the outcome may effectively be decided by fewer than one potential voter in ten.
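The back-of-the-envelope version of this arithmetic, using the essay's own approximate figures (the 217 million voting-age total is our own assumption for illustration):

# Rough share of US voting-age residents whose choices could decide the outcome.
vap = 217_000_000                # assumed US voting-age population
battleground_share = 1 / 3       # BG states hold less than a third of the VAP
turnout = 0.57                   # projected BG-state turnout
bare_plurality = 0.50            # a winning plurality can be under 50%

decisive = vap * battleground_share * turnout * bare_plurality
print(f"{decisive / 1e6:.0f} million voters, or {decisive / vap:.1%} of the VAP")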
Given the way our particular version of democracy is structured, therefore, the rest of us face the fact that the chances that we will die in an accident on the way to the voting booth are infinitely greater than the chances that our votes will have any impact whatsoever on this election. As we will see below, the chances are also slim to nil that we will exert any influence on the vast majority of Senate or Congressional races.
All this might not matter so much if undecided voters in battleground states were good proxies for the rest of us. But they are not.
As indicated in Table 2, they differ from the rest of the country in many important respects. Most of these states are relatively backward, in the bottom half of all 50 states in terms of per capita incomes and education levels, with a much higher-than-average share of poor residents. Nearly a quarter of their populations live in rural areas, twice the average share for non-BG states.
Compared with non-BG states, the residents of these states are also more likely to own guns, and much less likely to have lost factory jobs in the recent recession. They are also more religious -- Catholic voters account for 40 percent and 35 percent, respectively, of all potential voters in New Mexico and New Hampshire, while conservative Christians account for more than a third of the population in 10 of the 15 BG states, and orthodox Jews are a key voting bloc in Florida. Hispanics account for more than 15 percent of potential voters in Florida, Arizona, Nevada, and New Mexico. Illegal immigrants constitute a critical part of the work force and Census headcount (for apportionment purposes) in half the BG states.
Furthermore, as shown in Table 1, almost half of the BG states have harsh “felon disenfranchisement” laws. Most of these permanently deprive anyone ever convicted of a felony within their states of the right to vote, even after their sentences have been served. These laws could play a decisive role in battleground states like Florida, Virginia, Arizona, Tennessee, and Iowa, just as they did in 2000.
All told, given the “winner take all” nature of the US electoral college system (see below) and the strategic role that so-called “undecided voters” will play in battleground states, it is not surprising that there is a long list of important issues on which Bush and Kerry have either both been completely silent, or have adopted straddling positions that can be distinguished from one another only under an electron microscope.
So far, at least, Kerry appears to have bet heavily that the American people will choose him mainly because he's brighter, more competent and more trustworthy than Bush, not because his foreign and domestic policy alternatives are wildly different and exciting. This is a bet that the far better-liked, if quasi-competent and semi-literate, Bush has been delighted to accept.
As a result, as discussed in Box A, a majority of US voters may well be open to far less timid new approaches to issues like the Iraq War, farm subsidies, illegal immigration, gun control, energy conservation, corporate crime, our relationships with “allies” like Israel and Saudi Arabia and “enemies” like Cuba, Iran, and Venezuela, not to mention the Patriot Act, balancing the budget, global warming, reforming Social Security, and revising drug laws. Yet from the standpoint of capturing marginal voters in battleground states, many of these issues have been deemed “no win” and too hot to handle.
Furthermore, when other candidates – independents and third party candidates – try to raise such issues, they are either ignored or tagged as “spoilers” whose presence only serves to help those voters’ least preferred candidates. Since such candidates are marginalized by the mainstream media and excluded from US Presidential debates, the major party candidates can easily skip over any special issues that they seek to raise.
Despite all the hoopla, therefore, many American voters justifiably feel like they've been invited to dinner and served pictures of food. In effect, discourse on fundamental issues has been stifled by the rules of the American electoral game.
In short, it is not only the felons who have been disenfranchised by this US Presidential election process.
Does it really have to be this way? To get a handle on this question, it will be helpful for us to step back and take stock of how the US stacks up against other democratic countries, especially those in the developing world -- not in terms of economic performance, but in terms of effective electoral democracy. Interestingly, it turns out that we actually have quite a few things to learn from them.
September 15, 2004 at 10:38 PM | Permalink | Comments (2) | TrackBack
Friday, July 02, 2004
Fourth of July, 2004: "A Time to Weep" Ted Sorensen Commencement Address, The New School, New York City, May 21, 2004
Considering the unhealthy state of our laws today, they probably could use another doctor.
My reciprocal obligation is to make a speech.
This is not a speech. Two weeks ago I set aside the speech I prepared. This is a cry from the heart, a lamentation for the loss of this country's goodness and therefore its greatness.
For me the final blow was American guards laughing over the naked, helpless bodies of abused prisoners in Iraq. "There is a time to laugh," the Bible tells us, "and a time to weep." Today I weep for the country I love, the country I proudly served, the country to which my four grandparents sailed over a century ago with hopes for a new land of peace and freedom. I cannot remain silent when that country is in the deepest trouble of my lifetime.
I am not talking only about the prison abuse scandal - that stench will someday subside. Nor am I referring only to the Iraq war - that too will pass - nor to any one political leader or party. This is no time for politics as usual, in which no one responsible admits responsibility, no one genuinely apologizes, no one resigns and everyone else is blamed.
The damage done to this country by its own misconduct in the last few months and years, to its very heart and soul, is far greater and longer lasting than any damage that any terrorist could possibly inflict upon us.
The stain on our credibility, our reputation for decency and integrity, will not quickly wash away.
Last week, a family friend of an accused American guard in Iraq recited the atrocities inflicted by our enemies on Americans, and asked: "Must we be held to a different standard?" My answer is YES. Not only because others expect it. WE must hold ourselves to a different standard. Not only because God demands it, but because it serves our security.
Our greatest strength has long been not merely our military might but our moral authority. Our surest protection against assault from abroad has been not all our guards, gates and guns or even our two oceans, but our essential goodness as a people. Our richest asset has been not our material wealth but our values.
We were world leaders once - helping found the United Nations, the Marshall Plan, NATO, and programs like Food for Peace, international human rights and international environmental standards. The world admired not only the bravery of our Marine Corps but also the idealism of our Peace Corps.
Our word was as good as our gold. At the start of the Cuban Missile Crisis, former Secretary of State Dean Acheson, President Kennedy's special envoy to brief French President de Gaulle, offered to document our case by having the actual pictures of Soviet nuclear missiles in Cuba brought in. "No," shrugged the usually difficult de Gaulle: "The word of the President of the United States is good enough for me."
Eight months later, President Kennedy could say at American University: "The world knows that America will never start a war. This generation of Americans has had enough of war and hate…we want to build a world of peace where the weak are secure and the strong are just."
Our founding fathers believed this country could be a beacon of light to the world, a model of democratic and humanitarian progress. We were. We prevailed in the Cold War because we inspired millions struggling for freedom in far corners of the Soviet empire. I have been in countries where children and avenues were named for Lincoln, Jefferson, Franklin Roosevelt and John F. Kennedy. We were respected, not reviled, because we respected man's aspirations for peace and justice. This was the country to which foreign leaders sent not only their goods to be sold but their sons and daughters to be educated. In the 1930's, when Jewish and other scholars were driven out of Europe, their preferred destination - even for those on the far left - was not the Communist citadel in Moscow but the New School here in New York.
What has happened to our country? We have been in wars before, without resorting to sexual humiliation as torture, without blocking the Red Cross, without insulting and deceiving our allies and the U.N., without betraying our traditional values, without imitating our adversaries, without blackening our name around the world.
Last year, when I was asked on short notice to speak to a European audience and inquired what topic I should address, the Chairman said: "Tell us about the good America, the America when Kennedy was in the White House." "It is still a good America," I replied. "The American people still believe in peace, human rights and justice; they are still a generous, fair-minded, open-minded people."
Today some political figures argue that merely to report, much less to protest, the crimes against humanity committed by a few of our own inadequately trained forces in the fog of war, is to aid the enemy or excuse its atrocities. But Americans know that such self-censorship does not enhance our security. Attempts to justify or defend our illegal acts as nothing more than pranks or no worse than the crimes of our enemies only further muddy our moral image.
Thirty years ago, America's war in Vietnam became a hopeless military quagmire; today our war in Iraq has become a senseless moral swamp.
No military victory can endure unless the victor occupies the high moral ground. Surely America, the land of the free, could not lose the high moral ground invading Iraq, a country ruled by terror, torture and tyranny - but we did.
Instead of isolating Saddam Hussein - politically, economically, diplomatically - much as we succeeded in isolating Khadafy, Marcos, Mobutu and a host of other dictators over the years, we have isolated ourselves. We are increasingly alone in a dangerous world in which millions who once respected us now hate us.
Not only Muslims. Every international survey shows our global standing at an all-time low. Even our transatlantic alliance has not yet recovered from its worst crisis in history. Our friends in Western Europe were willing to accept Uncle Sam as class president, but not as class bully, once he forgot JFK's advice that "Civility is not a sign of weakness."
All this is rationalized as part of the war on terror. But abusing prisoners in Iraq, denying detainees their legal rights in Guantanamo, even American citizens, misleading the world at large about Saddam's ready stockpiles of weapons of mass destruction and his involvement with al Qaeda in 9/11, did not advance by one millimeter our efforts to end the threat of another terrorist attack upon us. On the contrary, our conduct invites and incites new attacks and new recruits to attack us.
The decline in our reputation adds to the decline in our security. We keep losing old friends and making new enemies - not a formula for success. We have not yet rounded up Osama bin Laden or most of the al Qaeda and Taliban leaders or the anthrax mailer. "The world is large," wrote John Boyle O'Reilly, in one of President Kennedy's favorite poems, "when its weary leagues two loving hearts divide, but the world is small when your enemy is loose on the other side." Today our enemies are still loose on the other side of the world, and we are still vulnerable to attack.
True, we have not lost either war we chose or lost too much of our wealth. But we have lost something worse - our good name for truth and justice. To paraphrase Shakespeare: "He who steals our nation's purse, steals trash. T'was ours, 'tis his, and has been slave to thousands. But he that filches our good name...makes us poor indeed."
No American wants us to lose a war. Among our enemies are those who, if they could, would fundamentally change our way of life, restricting our freedom of religion by exalting one faith over others, ignoring international law and the opinions of mankind, and trampling on the rights of those who are different, deprived or disliked. To the extent that our nation voluntarily treads those same paths in the name of security, the terrorists win and we are the losers.
We are no longer the world's leaders on matters of international law and peace. After we stopped listening to others, they stopped listening to us. A nation without credibility and moral authority cannot lead, because no one will follow.
Paradoxically, the charges against us in the court of world opinion are contradictory. We are deemed by many to be dangerously aggressive, a threat to world peace. You may regard that as ridiculously unwarranted, no matter how often international surveys show that attitude to be spreading. But remember the old axiom: "No matter how good you feel, if four friends tell you you're drunk, you better lie down."
Yet we are also charged not so much with intervention as indifference - indifference toward the suffering of millions of our fellow inhabitants of this planet who do not enjoy the freedom, the opportunity, the health and wealth and security that we enjoy; indifference to the countless deaths of children and other civilians in unnecessary wars, countless because we usually do not bother to count them; indifference to the centuries of humiliation endured previously in silence by the Arab and Islamic worlds.
The good news, to relieve all this gloom, is that a democracy is inherently self-correcting. Here, the people are sovereign. Inept political leaders can be replaced. Foolish policies can be changed. Disastrous mistakes can be reversed.
When, in 1941, the Japanese Air Force was able to inflict widespread death and destruction on our naval and air forces in Hawaii because they were not on alert, those military officials most responsible for ignoring advance intelligence were summarily dismissed.
When, in the late 1940's, we faced a global Cold War against another system of ideological fanatics certain that their authoritarian values would eventually rule the world, we prevailed in time. We prevailed because we exercised patience as well as vigilance, self-restraint as well as self-defense, and reached out to moderates and modernists, to democrats and dissidents, within that closed system. We can do that again. We can reach out to moderates and modernists in Islam, proud of its long traditions of dialogue, learning, charity and peace.
Some among us scoff that the war on Jihadist terror is a war between civilization and chaos. But they forget that there were Islamic universities and observatories long before we had railroads.
So do not despair. In this country, the people are sovereign. If we can but tear the blindfold of self-deception from our eyes and loosen the gag of self-denial from our voices, we can restore our country to greatness. In particular, you - the Class of 2004 - have the wisdom and energy to do it. Start soon.
In the words of the ancient Hebrews:
July 2, 2004 at 10:01 AM | Permalink | Comments (0) | TrackBack
Sunday, June 27, 2004
"Letters from the New World:" Fighting Corruption at Eye Level in Nigeria
Whenever we talk Nigeria, we talk corruption. The two go together. Finland, ice and cellphones. Israel, strife. Australia, complacency and kangaroos. When you talk Nigeria, corruption is the first thing that comes up.
Try it. Tell people “I’m going to Nigeria.” Four out of five responses will give you detail, usually second-hand, about the necessity of spreading your dollars through different pockets in different denominations. From there they’ll go into horror-stories about corruption.
I found my first trip to Nigeria challenging. Scary, too, but that was a different thing. When the captain said: “We’re commencing the descent” I clutched my wallet. The scare starts fading once reality replaces rumour, but the challenge doesn’t fade at all.
Corruption is one of the main reasons why Africa spent its first half-century of liberation heading, on statistical averages, backwards. Another major one is Indigenisation, Transformation, or whatever the name is for shoving wrong people into wrong roles. Indigenisation is tricky to deal with, and the real answer still awaits.
Corruption is not tricky at all. The simple answer is: don’t do it. To me that’s the sole answer. It’s ingrained. I can’t pay a bribe, any more than I can kill an animal or fling a bottle. The neurons are just programmed another way. Which poses a certain fluttering of the pulse when one is landing in Lagos, stocked high with dire warnings. But I remind myself that visitors to Johannesburg, too, are warned direly about our corruption, whilst I who live in Johannesburg remain a virgin after all these years – apart from sandwiches and cold-drinks.
Initially a little teeth-gritting is required, to take the same approach to Lagos, but it settles. By the time the first guy outright demanded a bribe (“because I am the one who is in charge of your baggage, heh, heh, heh…”) I was emboldened. All he got from me was two short traditional words.
I recognise that it’s hollow to sound holy when all you’re risking is a suitcase of used clothes, or the R500 fine for phoning while you drive. I admit I have no bosses to sack me if I fail to secure the contract, no shareholders to re-deploy me as Deputy Manager of the Jammerdrif depot.
Still, the principle is not much different: Most times that you give the bribe-seeker short words he backs down. The times he doesn’t, it’s better to suffer the consequences than to hammer another nail into the coffin of your continent’s aspirations. In which light, the way that the going-into-Africa discussion usually plays out can be depressing.
It starts well, invariably. “We’re all Africans now, isn’t it wonderful. And you should see how much good we’re doing!”
They are, too. And it is spectacular, often. On Monday the housewives of Ndola are buying scrawny ox-shank, little more than bone with hair, chowed on for a week by 10,000 flies. They’re buying baked beans in rusted tins, a year after sell-by. They’re paying three times the price that their privileged southern sisters pay for first-class merchandise at our fancy Cresta or Cavendish supermarkets.
On Tuesday the new South African supermarket opens its doors. By Friday, Ndola is dancing to a new tune. The city is galvanised. Everybody has to give service, give value, wake up. On a March visit to a provincial capital, the dinner options are a suspicious unnameable stew on a sweltering dusty roadside, or a two-star menu at a five-star hotel with the seven-star tariff designed for the expense accounts of the aid brigade (who keep the aircon on frigid to remind them of home). In April a South African chain introduces middle-class eating at middle class prices. By July the city is holding its head higher.
Similar processes occur in every industry from brick machines to water meters. Good is being done, no two ways. But then the question comes up: “and, er, ahem, how do you handle the matter of adaptation to local mores?” There is a common answer to that question: “Oh, no, no, no. No corruption for us. We know that corruption causes ruin and destruction. We have no part in it, except of course when absolutely necessary.”
There is also the increasingly fashionable answer: “You know, we do have to grow into African ways. The time for arrogance is over now. We must mature into an acceptance that our sectional traditions are not universal.”
Then there is the answer that nobody gives unless he’s absolutely certain you are never going to quote him: “Why should I worry? That country is a total stuff-up anyway. If I can give some guy a million rand and make ten million in return, what do you think?”
Finally there is the still small voice that says: “No, on no account do we do it.” You hear that voice not often, and believe it less often. When you do believe it, perhaps because you know the people concerned especially well and repose in them a special faith, it is jolting indeed to find that the rumour factory is thick with alleged inside tales that place your faith under constant question.
The net result is disappointing, especially at the times that I am revelling in the magnificent welcome that tropical Africa addresses to Seffricans of the paler kind. Tropical Africa addresses magnificent welcomes to most people in most circumstances, but they have an added knack of making the whiteys from “South” as they call it, feel like a long lost brother.
You’re taken as a member of the family – a fairly pushy member, perhaps, rich in annoying habits, but in some way one of us, something more than solely a buccaneer on the profit trail. You’re a curiosity factor as well, and you’re assumed to be – potentially, at least – a handily systematic sort of character, the long lost brother who maintained the household inventory and made sure the insurance premiums were paid.
It’s a delightful combination. Not for nothing does every second SA expat go on and on about being wanted, being needed, being befriended, being loved (in the intervals between going on and on about not being robbed).
The prospects are wonderful, moving, emotional. A continent actually moving upward, after fifty years of empty talk about moving upward; moving upward and forward and with us, us, the once untouchable white South Africans, in there and part of it, in the engine room, the galley, the bridge, the lot.
Unfortunately that vista gets harder to glimpse as time goes by. Reality intrudes. I look at the heavy hands that RSA brings into the rescue of this or that failing African mine or plant or factory. I look at the hubris: “stand aside, mere locals; we’re very friendly, as you see, calling us Jack and Joe and not ‘Bwana’ any more, but we’re in charge again so keep out of our way.” I look at the sickening crass insensitivity; the pulling of rank, sometimes unwitting; the disinterest in learning the barest syllable of even French or Portuguese, let alone Swahili or Hausa; the fervour to adopt every management fad emanating from New York or LA.
The pioneers carrying business to the tropics could and should be our heroes, our champions. Too often they become embarrassments. The many lesser embarrassments could usefully be discussed.
The one big embarrassment is not susceptible to much discussion. A culture of corruption means a pathetic nation; that is no more arguable than that the sun comes up in the east, and a critical mass of pathetic nations means a continued pathetic continent.
There’s sabotage in there.
Some foreign companies that have recently tried to enter Nigeria, like Shell and Halliburton, have apparently been following this well-trodden road to perdition. But there are signs of hope. A few others, like Vodafone, have recently been told by their shareholders to refuse to play at all unless they can play it straight.
Ironically, behind the closed doors of our cynical business community, it is Vodafone that gets the most ridicule. Indeed, almost wheresoever two or three businesspersons gather together in South Africa these days, one hears: “Look at these wussies, getting their asses whupped in Nigeria; buncha sissies calling themselves an African company, squealing ‘good governance’ because they’ve come short. What business did they have leaving just because they couldn’t play it straight? ”
Somebody’s got this upside-down. The real question we should be asking is of those who stayed: “Precisely how did you manage to stay on and keep playing it straight?”
June 27, 2004 at 10:20 PM | Permalink | Comments (0) | TrackBack
Tuesday, June 22, 2004
"Farmingville" A New Film About Agro-Business, Globalization, and Poor Mexican Farmers
This week marks the television premiere of Farmingville, an outstanding documentary on the devastating impact that a really quite lethal combination of globalization plus First World farm subsidies is having on developing countries like Mexico.
Produced and directed by fellow Long Islanders Carlos Sandoval (Amagansett, NY) and Catherine Tambini (Hampton Bays, NY), Farmingville won this year’s “Special Award for Documentary” at the Sundance Festival, and it has also received many other prestigious awards. (For those of you on Long Island, it will also be shown on Thursday June 24 on Ch. 21, accompanied by a discussion with Sandoval and several of the film’s participants, moderated by OLA’s outstanding local leader, Isabel Sepulveda de Scanlon.)
The social crisis described by Farmingville is a striking example of one of neoliberalism’s more disturbing patterns – the combination of “socialism for the rich” with “free trade for the poor.” Each year the US government provides more than $10 billion in subsidies to American corn farmers in politically-influential states like Iowa, Minnesota, Nebraska, and Kansas. From a political standpoint, these subsidies are usually justified in the name of preserving the “American family farm.” In fact the vast bulk of the subsidies goes to a handful of incredibly rich US agro-conglomerates, such as Cargill and Archer Daniels Midland (“ADM”). Together, these corporate giants now account for more than 70 percent of domestic US corn production.
These subsidies have not saved America’s family farmers, who continue to disappear at a rapid rate. But the $10 billion a year in subsidies has encouraged the giants to overproduce, resulting in surpluses that have been dumped onto world markets at artificially-low prices.
As documented in Farmingville, combined with the “free trade” policies adopted by the US and Mexico in the last decade, these surpluses have devastated family farmers throughout Mexico.
Of course Mexican farmers were the original source of “corn” – they’ve been growing it for at least 10,000 years. Until recently, corn accounted for at least half of the acreage they planted. In fact corn is not just a product in Mexico; it is also at the core of a whole cuisine and culture.
Since the adoption of the North American Free Trade Agreement (NAFTA) in 1993, however, the real price of corn in Mexico has dropped more than 70%, even as domestic non-labor production costs have risen dramatically.
Most of the price declines are due to escalating US corn imports. Recent estimates from an Oxfam study of “The Mexican Corn Crisis,” for example, show that US corn is being dumped in Mexico at prices that run $105 million to $145 million a year below its cost of production in the US.
As a result, many campesinos are being forced out of business -- the country has lost the majority of its corn farmers in just the last 10 years. This has caused havoc in the entire rural economy, producing mass unemployment and forcing a mass migration to Mexico’s already overstuffed cities. And that, in turn, has accelerated emigration, with thousands of desperate, hungry people trying to leave Mexico every day, and dozens of them literally dying in the desert wastelands along the border, trying to get to “El Norte.”
Indeed, according to the latest statistics from the US Bureau of Immigration and Naturalization, illegal immigration along the Mexican border is now at an all-time high.
Meanwhile, US agricultural conglomerates like ADM and Cargill have become more profitable than ever. They are using their fat profits to extend their dominance abroad. For example, Cargill now owns 30 percent of Maseca, the giant Mexican food distributor that dominates the Mexican tortilla market.
As Oxfam’s recent report on this neoliberal debacle concludes,
"The Mexican corn crisis is yet another example of world trade rules that are rigged to help the rich and powerful, while destroying the livelihood of millions of poor people.”
Indeed, the story that Farmingville relates is an especially graphic example of the perverse consequences that neoliberal policies can have once powerful interests get hold of them -- when US corporate giants are able to have their way with free trade, wide-open capital markets, lavish government subsidies, political leaders on both sides of the border, and poor farmers all at once.
Obviously this is a tough time for leading US politicians to take on the powerful farm lobby, much less propose policies that might trim US exports at a time of massive trade deficits. But are there no US or Mexican political leaders with longer-term vision, willing to tackle this grossly-inequitable, morally-reprehensible situation?
June 22, 2004 at 02:05 PM | Permalink | Comments (0) | TrackBack
Monday, June 21, 2004
"Letters from the New World (South Africa)." Denis Beckett #6:"Soweto Revisited"
RETURN TO SOWETO
By his sixth day in South Africa, Tony had been shown around Sandton (the high-end business district in Jo’burg) five times. “A fine precinct,” he said. “Not dissimilar to some we have in Toronto.”
It seemed possible that the glimmer of a hint might lurk inside this information. I cancelled plans to show off certain classy office towers, and pointed to other quarters.
Whereupon began an excellent day. The high point was Soweto, for him because of the dining-out prospects once back in the 30 degrees below; for me because of the great march forward since I was last there. For instance I recall Moroka Park as a shambolic wasteland. Now it’s green and kempt with decorative railings and families sitting in Saturday sun.
Everyone, evidently, has a doctorate in “Making Foreign Visitors Go Dewy-Eyed.” A bunch of kids fiercely debated the geography of Canada (they all got “north”; debate was whether north of America, Britain or Russia). Adults were hospitable from the start and added an extra notch when the Canadian connection came up. When a kid grabbed Tony’s pen I thought ‘uh oh’, but he was just eager to write our names on his hand.
Miraculously I did not get lost, a pity in a way because getting un-lost in Soweto is throat-lumping; people take such trouble over you. But we did traverse a wider cross-section than intended, which meant lots of exposure to changes like shops looking chic and houses looking bourgeois.
I’ve always felt a gap between the perception of Soweto from the white north – all danger, squalor, tension – and the sight of Soweto close up, which includes life, buzz, flowerbeds. Never more than this time, which made it doubly odd that the most jarring note came at the most sacred ground, the old Mandela home.
For his decades behind bars, his house looked pleasant and modest. Now it’s behind its own bars, a massive ugly fence so tight that it seems to be choking the house. Next to it an electricity sub-station would look pretty. The new guardhouse outside is scruffy and boarded, and they hunted down the dirtiest, raggedy-assed flag in existence for their big proud flagpole.
In contrast, the Hector Pieterson Museum (they spell him with an ‘i’ now) put up a good showing. Actual exhibits are stunningly few – a dustbin lid, a desk, some placards and two firmly welded guns – but the arsenal of photography, still and video, is a stomach-punching reminder.
And it’s not a caricature; amazing. One expects depictions of pre-1994 life to be, for a while yet, snarling iron-teethed whiteys kicking gentle black choir-boys to pulp; but here, not really. The brutality shows up, all right, and so does the disdain which was arguably more odious and certainly more widespread. So does the extraordinariness of shoving Afrikaans down black throats; the old State writing its death certificate. But dissent is displayed as well, and plain ordinariness.
The net impact on me – and I would think anyone white, wherever they stood in the old days – is a surge of relief. How tiny are our troubles now, compared to the gross contortion involved in keeping our foot on the other guy’s neck.
Hillbrow is populated by West Africans proud of their video kiosks and cellphone kiosks; entirely warm and chatty though less than entirely clear about the origins of their merchandise. In Yeoville, only, were we made to feel like markets – many people definitely wanted to sell us something, but were strangely coy about telling us what. Looked sort of like seedlings in packets. Newtown was spic and span and treed and under-occupied, an asset waiting to be exploited. Downtown is spoiled by litter – gutters are static rivers of waste, and papers and wrappers swirl like after a nuclear blast – but is on the up nonetheless, especially the west side, smarter and more occupied than a while ago.
Thanks, visitor to our shores, for awakening this Jo’burger to his turning world.
June 21, 2004 at 05:15 PM | Permalink | Comments (0) | TrackBack
Monday, June 14, 2004
The "Reagan Revolution," Part One: Did He Really Win the Cold War?
INTRODUCTION
Former President Reagan’s $10 million taxpayer-funded bicoastal funeral extravaganza is finally over, so we may now be able to regain a little objectivity about the man’s true accomplishments. This really was an extraordinary Hollywood-scale production – one of Reagan’s best performances ever. Apparently the actor/President started planning it himself way back in 1981, shortly after he took office, at the age of 69. Evidently he never expected to live to be 93.
For over a week we have been inundated with neoconservative hagiography from adoring Reagan fans -- one is reminded of Chairman Leonid Brezhnev's funeral in 1982. Even the final event’s non-partisan appeal was slightly undercut by the fact that only die-hard conservatives like President George W. Bush, former President George H. W. Bush, Margaret Thatcher, and Canada’s Brian Mulroney were invited to give eulogies. But at least this saved Democrats the embarrassment of having to say nice things about their fiercest, most popular, and most regressive antagonist of the 20th Century.
Some Bush campaign staff members reportedly recommended shipping the casket home from DC to California by train. Cynics suggested that this was intended to prolong the event and further distract voters from Bush’s serious political difficulties.
Thankfully Nancy Reagan spared us this agonizing spectacle. She probably recognized that it would only invite unfavorable comparisons with FDR, whose family eschewed the state funeral in favor of the more humble train ride. Furthermore, AMTRAK no longer serves most of the towns along the way, due in part to service cutbacks that really got started under President Reagan.
There has already been quite a bit of dissent from the many one-sided tributes to Reagan. Most of it has focused on domestic policy -- especially Reagan's very mixed track record on civil rights and the HIV/AIDS epidemic, his strong anti-union bias, the huge deficits created by his “supply-side” legerdemain, his deep cutbacks in welfare and education spending, and his weak leadership on conservation, the environment, consumer protection, and energy policy. Reagan’s hard-right bias on domestic policy certainly was underscored by the almost complete absence of blacks and other minorities among the ranks of ordinary Americans who lined up to mourn his passing.
With respect to foreign policy, the “Iran-Contra” arms scandal and Reagan’s support for apartheid were recalled by some observers. But most of the attention was directed to Reagan's supposedly uniformly positive contributions to the demise of the Soviet Union and the end of the Cold War.
In this article, the first of two in this series, we'll examine Reagan's foreign policy contributions more closely. The analysis has important implications not only for our assessment of Reagan, but also for our assessment of the White House's current incumbent.
DID RONNIE REALLY “WIN THE COLD WAR?”
We can debate this alleged role endlessly. Of course, like John Kennedy (“Ich bin ein Berliner”), Reagan made one very forceful speech in Berlin (“Mr. Gorbachev, tear down this wall”). Especially during his first term, he also supported policies that tried to roll back the Soviet Empire’s frontiers in distant places like Afghanistan, Angola, Nicaragua, and Grenada. He also expanded the US defense budget, accelerated the deployment of theater-nuclear missiles in Europe that had already been started by President Carter, and financed the (largely-nonproductive) first round of the “Star Wars” anti-missile program. All these moves no doubt increased pressure on the Soviets, and probably encouraged them to negotiate and reform.
However, Reagan was hardly responsible for the fact that the “Soviet Empire” had been more or less successfully “contained” almost everywhere except Cuba, Vietnam, and Afghanistan from the 1950s to the 1980s, and that even these client states had become more of a burden to the USSR than a blessing.
Nor was he responsible for the fact that President Carter had initiated anti-Soviet aid to the Poles and the Afghan rebels in the late 1970s; committed NATO, in December 1979, to deploying the first long-range cruise and Pershing II missiles in Europe, partly in response to the Soviets’ deployment of SS-20 missiles targeting Western Europe; suspended Senate consideration of the SALT II Treaty in January 1980; and issued Presidential Directive 59 in August 1980, adopting a new, much more aggressive “countervailing force” strategy for nuclear war.
Nor did Reagan have much to do with the fact that a whole new generation of Soviet leaders, including Mikhail Gorbachev, took power in 1984-85, or the fact that these new leaders chose the “glasnost/ big bang” route to reform rather than the more gradual and successful one that has kept the Chinese Communist Party in power to this day. This was also a matter largely of the Soviets' own choosing.
Nor was Reagan responsible for the fact that Gorbachev, who actually sought to preserve a stronger, reformed version of the Soviet Union rather than disband it, proved to be much less adept at Russian politics than Boris Yeltsin.
Even if we acknowledge that Reagan’s policies contributed to ending the Cold War, therefore, the historical record is very far from giving him “but for” credit for this happy ending.
In fact, even if "Cold War liberals" like Jimmy Carter and Fritz Mondale had presided over the US throughout the 1980s, the odds are that the very same key systemic and generational factors that helped to produce fundamental change in the Soviet system would still have applied – with very similar outcomes.
WHAT RISKS DID RON RUN?
In the literature on the economics of investment, it is well established that (at least in equilibrium, with competitive markets) there are no increased rewards without increased risk. When it comes to evaluating historical leaders, however, apparently this basic principle is often overlooked.
Reagan’s confrontational approach to the “Evil Empire” clearly was very distinctive. But this was hardly an unmixed blessing. Indeed, we now know that he took incredible risks in the early 1980s, and, as discussed below, that we are all extraordinarily lucky to have survived this period intact.
Furthermore, we are all still living with serious systemic risks that are a direct byproduct of Reagan’s high-risk strategies -- even apart from the long-term legacy of his Afghan “freedom fighters” turned latter-day terrorists.
For example, only in the mid-1990s, after the USSR’s collapse, did we learn that the Soviet Politburo and top Soviet military planners really had become convinced in the early 1980s that Reagan had adopted a new pro-nuclear war-fighting strategy, changing from “mutually assured destruction” to the pursuit of an all-out victory.
Soviet leaders came to this conclusion partly because of several key developments in military technology and strategy.
- By the early 1980s the US had acquired a growing advantage in submarine-based nuclear weapons (D-5 Trident missiles, with greater accuracy and short flight times) and anti-submarine warfare techniques, as well as space-based communications, surveillance, and hunter-killer satellite capabilities.
- As noted, Carter and Reagan both started to deploy cruise and Pershing II missiles in Europe and on submarines. These were just 4-6 minutes from the Soviets’ command-and-control centers and many of their ICBM silos, which they counted on for up to two-thirds of their deterrent capability.
- In the early 1980s the US also took several steps that were apparently intended to increase its chances of surviving a nuclear war. These not only included “Star Wars,” but also hardened telecommunications, new command-and-control systems and some “civil defense” measures, and revised policies for “continuity in government.”
On top of these structural changes, the Reagan Administration’s aggressive rhetoric and behavior also contributed to this new Soviet view of US intentions.
In early 1981, for example, Reagan ordered the military to mount a still-highly-classified series of “psyops” that probed USSR airspace and naval boundaries with US and NATO jet fighters and bombers, submarines, and surface ships. The US and NATO also conducted several large-scale exercises in 1982-84. The US also sharply increased its assistance to “freedom fighters” like the Nicaraguan contras, the Afghan rebels, and Jonas Savimbi’s bloodthirsty South-Africa-assisted renegades in Angola.
As we now know, all this belligerent US activity scared the living daylights out of old-line Soviet leaders like Yury Andropov. It reminded them of Hitler’s sudden blitzkrieg attack on the Soviet Union in 1941, a searing experience for which Stalin had been surprisingly unprepared. They came to believe that the US was actually planning the nuclear equivalent of this blitzkrieg -- a first strike that would decapitate Soviet command-and-control while minimizing the effects of retaliation on the US. Of course Europe would probably be destroyed in such a confrontation. But the Soviets assumed, perhaps correctly, that the US saw “Old Europe” as dispensable.
In response to this perceived US threat, the Soviets did not roll over and play dead. Rather, drawing on their 1941 experience, their first response was to assume the worst and try to prepare for it.
- From May 1981 on, they ordered a worldwide intelligence alert, code-named “RYAN”, aimed at keeping the Politburo informed on a daily basis of US preparations for a first strike.
- The Soviets shifted their nuclear posture decisively to “launch-on-warning.” For the first time they also provided the Politburo with the ability to sidestep the Soviet General Staff and launch all strategic missiles with a central command. To support this shift, they also deployed new ground-based radar and space-based early-warning systems.
- Most striking of all, in the early 1980s the Soviets also implemented a full-scale nuclear “doomsday” system, code-named “Perimeter.” This system, first tested in November 1984, placed the power to unleash a devastating retaliatory strike against the US essentially on autopilot, whenever the system “sensed” that a nuclear strike against Moscow had either occurred, or was about to occur.
Together, all these shifts in Soviet defensive strategy cut the decision time available to their leaders, when deciding how to respond to a perceived US/ NATO attack, to as little as 3-4 minutes.
As Gorbachev later put it: “Never, perhaps, in the postwar decades was the situation in the world as explosive and hence, more difficult and unfavorable, as in the first half of the 1980s.”
THE LEGACY
To our great distress, despite the mutual de-targeting that was announced with so much fanfare by Presidents Clinton and Yeltsin in 1994, both of these Cold War “hair trigger” responses to Reagan’s initiatives are still in place today, controlling the 5,000-plus strategic nuclear warheads that Russia still maintains.
These systems have already experienced several close calls. Among the incidents that we know about were those in September 1983, August 1984, and January 1995. In this last incident, President Yeltsin -- who was not always a picture of mental health and stability -- came within minutes of unleashing a full-scale nuclear retaliation in response to a false alarm set off by a Norwegian research rocket that was sent aloft to study the Northern Lights. Apparently it bore a striking resemblance to an incoming Trident missile on Russian radar until it crashed harmlessly into the sea.
The September 1983 incident, at the height of Soviet tensions with the Reagan Administration, and just a few weeks before the huge anti-Soviet NATO exercise “Able Archer” in Western Europe, was even scarier. In 2000, Lt. Colonel Stanislav Petrov, the duty officer who had been in charge of an early-warning bunker south of Moscow at the time, told Western journalists what happened when the new early-warning computers at his facility suddenly reported a full-scale US attack:
"I felt as if I'd been punched in my nervous system. There was a huge map of the States with a US base lit up, showing that the missiles had been launched. I didn't want to make a mistake…..I made a decision and that was it. In principle, a nuclear war could have broken out. The whole world could have been destroyed. After it was over, I drank half a liter of vodka as if it were only a glass and slept for 28 hours."
Ultimately it turned out that the new Soviet early-warning system had malfunctioned. Lt. Colonel Petrov had been forced to make a profound decision about world civilization in a matter of minutes, with alarms and red lights going off all around him.
Fortunately for all of us, he decided not to believe his own computers.
Unfortunately for all of us, a modified version of that same hair-trigger early warning system is still in place in both Russia and the US to this day, since neither side has ever reverted to the pre-Reagan “MAD” strategy -- and Lt. Colonel Petrov has long since retired to a humble Moscow flat.
SUMMARY
From this vantage point, President Reagan’s long-term legacy is a little more difficult to evaluate, even with respect to his impact on the Cold War.
- Clearly he had a great deal of help from others, as well as from sheer fortuity.
- We are still living with the heightened risks in the world system that were partly created by the aggressive nuclear strategy adopted by President Reagan, and to some extent by President Carter before him. If Russia’s early warning systems and doomsday systems – both of which are now reportedly starved for maintenance funds -- should ever fail, history may not be so kind to Ronald Reagan, assuming that there is anyone left to write it.
- Much of Reagan’s vaunted “strength” was really based on a blithe combination of sheer ignorance, blind faith, and risk-taking. Compared with President Nixon (who, like FDR, also eschewed a state funeral), Reagan knew almost nothing about world affairs other than what he read in Reader's Digest and (perhaps) National Review.
- On the other hand, compared with the insecure Nixon, who was constantly seeking reassurance from his advisors, Reagan certainly did have much more faith in his own convictions. With respect to the Soviet Union’s nuclear strategy, like a determined child, he may never have fully appreciated the fact that he was playing with... well, much more than dynamite.
After the fact, of course, like any high-stakes gambler who bets it all on “black,” spins the wheel, and wins, Reagan looks like a hero, at least to many Americans.
However, whether or not ordinary citizens of the world should look back on this track record and cheer, much less encourage our present and future leaders to adopt similar blind-faith strategies, is very doubtful.
Indeed, today, most of the rest of the world seems to regard President Reagan -- rather more accurately than many Americans -- as the friendly, fearless, perhaps well-meaning, but really quite reckless “cowboy” that he truly was.
(c) James S. Henry, SubmergingMarkets.com, 2003. Not for reproduction or other use without express consent from the author. All rights reserved.
June 14, 2004 at 03:00 AM | Permalink | Comments (1) | TrackBack
Wednesday, June 02, 2004
"Letters from the New World" (Ukraine): #1.""Schwartzennation" - Microwave Democracy"
"SCHWARTZZENATION" - MICROWAVE DEMOCRACY
From where I sit, here in Kiev, it seems that the United States of America has become a nation of super-people. At the cost of a very few lives it has defeated an army of hundreds of thousands in Iraq, and occupied a country of 25 million. Like the Spanish conquistadors facing the Incas, America appears to be an era ahead of the rest of the world. And just like the Incas facing the conquistadors, the world is ambivalent towards America, fascinated yet fearful, trying democracy and Wrigley's Spearmint Gum for the first time.
If an American soldier dies in Iraq, every inhabitant of our planet learns about his death almost instantly: a giant falls with a thud. When a crowd of Iraqis carried the helmet of a dead American soldier it seemed like it took fifty of them to carry it. Yet like every giant from a fairy tale, the American giant has a vulnerability that may prove its undoing.
Historical eras are often distinguished from one another by technology, both industrial (how things are made) and social (how people interact). A key secret of America's economic and political advantages lies in its use of pioneering social technology, especially the concept of “win/win.”
There are countries where, for every ten people who enable, there are eight, ten, or twenty who destroy or impede. Per capita productivity in Russia is one tenth that of the US. Does this mean that a Russian can’t lift a five-pound sack of potatoes? No, it means that if a Russian wants to open a hot dog stand, a bandit and a tax collector immediately visit him. In America, one of your neighbors works to feed you and another to educate you. In Iraq, one neighbor spies on you and another teaches you hatred instead of arithmetic.
It is “win/win” social cooperation, supported by social values and a legal system that Americans often take for granted, that opens the way for the introduction of new technology, not the reverse: industrial technology can be used only if your neighbors realize that your personal success will in turn help them advance their goals. America has long since accepted the basic premises of “win/win,” and this is what helps to make the American soldier grow a hundred feet tall.
But technology is a human attribute, not the essence of what a human being is all about. Technology, both social and scientific, has helped to make America successful, but America is in danger of neglecting human character, proposing solutions that are purely technical, and thus may well be inadequate.
This danger is nothing new. Paganism was a fascination with the technologies of nature: to be strong, people wore wolf's teeth or feather head-dresses. The Industrial Age worshiped the Machine Tool, a new God that produced everything, and people wanted to be like the Machine Tool's products: unanimous, marching in step, and wearing steel helmets. The Information Age proclaims: you are what you appear to be; ultimately it is all “bits and bytes.” If the celluloid Terminator can save the world, it follows that a human Arnold Schwarzenegger can save California. But is it really just a matter of technique and force?
If we compare a McDonald's to a French restaurant, we are likely to conclude that the McDonald's is cheaper, cleaner, faster, and friendlier. It is a triumph of technology, research, and training. The French restaurant has only two things going for it: you will not remember a McDonald's meal for the rest of your life, and you cannot propose at McDonald's. McDonald's stands for a satisfying technologically-assured result, but the French restaurant stands for life, whatever it is. McDonald's has a very useful role to play, but when it proposes itself as a substitute for a sit-down meal, there is a problem.
Too often, America says to the world, "Accept our technology because it really works." And indeed it does usually “work”, but the world does not want to accept it - it prefers to keep its old ways of life. People want to be, not just to appear. America wants the world to wear a mask of "nice" and “new,” but the world wants to keep its tastes and traditions, its blemishes, its uncertainties, and even its vices. It is not that the world wants to remain "bad": the world simply resists the notion that every problem has a technological solution. The world may not be ready for such “solutions,” or it may believe that there are problems that await spiritual rather than technological solutions. The technocratic side of America seems to be saying, "If your marriage is unhappy it could only mean that your marriage contract was not elaborate enough," but the world sees this as technocratic madness, the worship of a new false pagan god, even in the midst of America’s purported “spiritual revival.”
Of course democracy can be a reasonable goal, be it for Canadians or for Afghans or Iraqis. But when democracy is presented as a ready-made technological solution – three minutes in the microwave, with a pickle and a smile - then people will refuse to swallow this prepackaged sandwich. The world wants to slaughter the lamb, skin it, and eat it with their hands.
The world resists the American idea that politics (and art) are no longer about people, but about the application of various technologies – a democratic system of government being one of them. The Terminator saves the world not because he has the largest heart, but because, at the right moment, his guns make the greatest holes. The world sees this exclusion of people, with their hot beating hearts and their imperfect histories, as a serious threat.
India invented quiet contemplation and has congested, noisy streets; Britain invented good manners and reads the stolen letters of royalty; Russia stood for the soul elevated by beautiful literature, and so Russian prostitutes are the best-read in the world. The world abandons its values, and American culture pours in and rules. But the world understands that the American version of good is not good, and the stronger America becomes, the more it tries to impose its will, the more it will be resisted.
America should be extending its "win/win" spirit, which has been so successful at home, to its (belated) efforts to spread democracy abroad. It should not be turning itself into a fearsome giant that pretends that technology has made love, identity, and history obsolete. Just like in every fairy tale, at the end a single human child will defeat it.
June 2, 2004 at 07:30 PM | Permalink | Comments (0) | TrackBack
Thursday, May 27, 2004
Iraq - High Time to "Cut and Walk": Defining Alternatives to "Stay and Die" and "Cut and Run"
“A nation has prestige according to its merits. America's contribution to world civilization must be more than a continuous performance demonstration that we can police the planet."
“A nation should not send half a million military personnel to a distant continent or stake its international standing and domestic cohesion unless its leaders are in a position to describe victory. This implies a definition of attainable political goals and a realistic strategy to achieve them.”
Back in the days when fixed exchange rates were the order of the day, Citibank's former CEO, Walter Wriston, had a very simple rule for deciding when to short a country's currency -- whenever a Finance Minister reassured investors that under no circumstances whatsoever would the country’s currency ever be devalued.
One might have thought that a similar rule would apply to US military misadventures. Perhaps it does, but with a much longer time lag.
In the investment world, of course, the cardinal rule is to “cut one’s losses early."
Recently, however, a growing number of US politicians, senior officials and pundits on all sides of the political spectrum have counseled us to do precisely the opposite in Iraq.
Despite the many recent setbacks, and the evident lack of any clear strategy, they’ve repeatedly warned about the perils of “cutting and running."
This is as if a precipitate withdrawal were the only conceivable alternative to the open-ended military occupation that the Bush and Blair Administrations and the would-be Kerry Administration are all promising to maintain even after June 30.
Ironically, public opinion is far ahead of these timid political lemmings, and much more consistent with the “cut your losses” strategy. Right now, just 45 percent of US adults still believe that it was “worth going to war in Iraq,” down from 76 percent a year ago. A majority in the US and 66 percent in the UK now oppose sending any additional troops to Iraq, and fully 40 percent of US adults and 55 percent of UK adults now support a withdrawal of US and/or UK troops after the handover of power to the interim Iraqi Government on June 30.
This is comparable to the level of popular US opposition to the Vietnam War that prevailed immediately after the January 1968 Tet offensive. Of course the vast majority of the Iraqi people have favored the withdrawal of Coalition forces for some time.
In fact there is a clear alternative to the current “prolonged occupation” strategy. As explained below, this “cut and walk” strategy has many advantages -- not the least of which are the many lives that it would save on all sides.
WHERE’S OUR BOBBY?
Despite these strong public sentiments, one searches in vain for any mainstream American political leaders, other than Dennis Kucinich and Ralph Nader, who are prepared to insist on a definite timetable for a withdrawal of US troops.
In other words, with respect to the Iraq War, we’re still missing our Robert Kennedy, our Eugene McCarthy, and our Martin Luther King. I'd hate to think that this leadership deficit reflects something fundamental about our ethical and cultural regression since the 1960s, but I fear that it may. (Where are, after all, the student protests in this period? Where are the Berrigan Brothers, the Sloane Coffins, the teach-ins, the Chicago 7s, and so forth?...Was all that just about the fear of being drafted?)
In the UK and Australia, a few daring military and political leaders are beginning to catch up to public opinion and advocate a more rapid exit. Even there, however, the alternative to the status quo is often described rather pejoratively and unimaginatively as “cut and run.”
This “muy macho” lingo was used in early May by Australia’s Prime Minister John Howard, who vowed that Australia’s troops in Iraq would not “cut and run,” despite the unpopularity of this position. It was also used by the UK’s Tony Blair on May 17. Ignoring the mounting political crisis that he faces over the war, Blair repeated at least three times that under no circumstances would the UK “cut and run.”
President Bush, whose political fortunes are also flagging, has also recently issued a crescendo of assurances that he will not “cut and run” from Iraq. The first assurance was given last November, when Iraqi resistance started to take off, and it was repeated in March, April, and again on May 10. Every time the President repeated it, more and more people had doubts about whether he really meant it.
Some pro-war pundits have also become fond of the "cut and run" formulation. The Wall Street Journal's arch-conservative Deputy Editor George Melloan warned in May that "those who counsel a 'cut and run' solution to the problems of Iraq are kidding themselves." This echoed a November WSJ Op-Ed Page piece entitled "Don't Cut and Run," and another one last July by Paul Gigot that proclaimed that "The Iraqis' greatest fear is that America will cut and run."
Of course the truth, as indicated by Iraqi polls, is that most Iraqis would probably be delighted if they woke up one morning to find us gone.
Many Democrats who support the war have also been using the "cut and run" formula as a convenient way to avoid serious discussion of alternatives. Among the leading practitioners on this side of the aisle is John Kerry. As early as September 2003, with respect to Iraq, Kerry promised that "We're not going to cut and run and not do the job." In November he said, "I know we have to win. I don't want to cut and run." In December, in an "attack from the right" speech before the Council on Foreign Relations, he warned that the Bush Administration itself might be on the verge of a "cut and run." In April, annoyed by anti-war critics, Kerry insisted that "…The vast majority of the American people understand that it's important to not just cut and run. I don't believe in a cut-and-run philosophy (sic)." ("Muy macho, muy macho!")
Similarly, Joe Biden, the ranking Democrat on the U.S. Senate Foreign Relations Committee, recently warned that “To succumb to political pressure and cut and run would be a catastrophe for U.S. interests.” Similarly, Joe Lieberman: “We simply cannot lose. We can't cut and run,” and again: “…America cannot cut and run from Iraq. Both parties and both Presidential candidates agree that we should send more troops….” Similarly, Indiana’s Evan Bayh, who’d love to be Kerry’s VP: “We can't cut and run…” New York's Chuck Schumer commented on the killing of a US citizen in Iraq: “If they think this is going to make us cut and run, they are dead wrong.”
On the Republican side, John McCain is also fond of the phrase. On November 5 he told the Council on Foreign Relations, "Iraq is not Vietnam. There is no popular, anti-colonial insurgency….. I was heartened to hear the President say there will be no cut and run." On April 7 he cautioned, "Is it time to panic? To cut and run? Absolutely not." On May 18, he warned, "If we fail, if we cut and run, the results can be disastrous."
Variations on the same construction have been used by many other Republicans, including Virginia’s John Warner, Minnesota’s Norm Coleman ("It’s not to cut and run"), Missouri’s Chris Bond, and Alabama’s Richard Shelby. (May 12: “We've got a lot at stake. We cannot cut and run.”)
Other senior US officials also love this dismissive phrase. Former Defense Secretary James Schlesinger, who actually presided over the tail end of the Nixon-Ford Administration's "cut and run" strategy in Vietnam, told the US Senate with respect to Iraq on April 20 that "We're going to be there for an extended period, unless we decide to cut and run, which I trust will not be the case."
Last November, Secretary of State Colin Powell commended Italy for its decision to “… not cut and run.” On May 18 he echoed Blair: "We don't want to stay a day longer than we have to but we are not going to walk away. We are not going to cut and run."
Powell, a Vietnam vet like Kerry and McCain, uses this construction a lot -- evidently that aspect of the war is still a sore point. But at least with respect to Powell's warnings against "cutting and running," the "Walter Wriston contrarian" rule may apply. In 2001, for example, with respect to US troops in the Balkans, Powell remarked that the U.S. would not "cut and run" from the region. At the time this was viewed as an implied criticism of neoconservatives who were seeking to limit America's "imperial overstretch" and "open-ended military commitments," as Presidential candidate Bush actually pledged to do in 2000. Since then, the number of US troops in the Balkans has been reduced from 6900 to 4100, and the Administration is now reportedly looking for ways to eliminate this commitment completely.
Earlier, in September 1993, when Powell was Chairman of the Joint Chiefs, he responded to the deaths of 18 US soldiers in Somalia with the comment, “Because things get difficult, you don’t cut and run…You work the problem.” Within two weeks of this statement, President Clinton announced that the remaining 5000 US troops in Somalia would be withdrawn within six months.
In any case, the last time the "cut and run" phrase was used so heavily was over a decade ago, around the time of this Somalia decision. In October 1993, William Safire, the New York Times columnist, former Nixon speechwriter, and self-styled lexicologist, devoted an entire "On Language" column to the origins of "cut and run," pointing out that it derives from an 18th century nautical term for putting out to sea quickly by cutting an anchor cable.
Given the phrase’s recent revival, in early May 2004 Safire was able to dust off his earlier column and recycle it.
He did not, however, bother to remind his readers that by far the most important occasion for its use was the unhappy experience of the Vietnam War, much of which was presided over by his former boss, who had made quite a point of refusing to "cut and run."
In 1967-68, as opposition to the Vietnam War mounted, President Johnson repeatedly deplored the "nervous Nellies" who wanted to "cut and run." After the Democrats relinquished the White House and responsibility for the war, many leading Republicans, including future President Gerald Ford and Pennsylvania's Senator Scott, disparaged those who wanted to "cut and run." So did President Nixon, in his 1968 campaign against Hubert Humphrey as well as in his 1972 campaign against George McGovern. He repeatedly assured conservatives that the US would somehow achieve "peace with honor" -- that we would never "cut and run." At the same time, Nixon winked and nodded to the increasingly anti-war public that he had a "secret plan" to end the war. But this turned out to be just one of his many lies, told to garner a few votes from gullible peaceniks.
In fact recent historical research has shown that Nixon and Kissinger decided on a cynical "decent interval" approach to the US withdrawal from Vietnam as early as 1970, after having tried in vain to win the war with expanded bombing in 1969. The result was that the US spent another three years flailing around in Vietnam without a winning strategy, at the cost of 21,000 extra US lives and up to 1.5 million Vietnamese, Laotian, and Cambodian lives. After that, it really did cut and run. This cynical, slow-motion approach to making "peace" probably helped Nixon win reelection. But it did nothing for the Vietnamese people, US troops, or, in the end, American honor. Far better for us to have "cut and run" way back in 1968.
Indeed, it turns out that by 1968, the Pentagon had indeed developed a plan for a rapid US withdrawal from Vietnam. But Hubert Humphrey refused to consider it, despite the fact that it might actually have won him the election. Evidently he was loyal to President Johnson. And Hubert also didn’t want to be perceived as having “cut and run.”
IRAQ - THE CASE FOR “CUT AND WALK”
It may not be surprising that President Bush has forgotten these painful lessons. But the fact that Vietnam veterans like Kerry and McCain have forgotten them, and, indeed, are repeating the same misleading phrase that Nixon once used to describe all alternatives to “staying the course,” is disappointing.
Of course it is nonsense to suggest that the only alternative to an open-ended military occupation of Iraq is to “cut and run.”
One such alternative might be described as “cut and walk.”
In broad strokes, this would include a reasonable (say, six- to nine-month) deadline for the withdrawal of (almost) all US and UK forces from Iraq, combined with a greatly-accelerated timetable (say, 90 days) for "interim"/"first-round" elections at the local and regional levels.
The case for such a “cut and walk” policy is very strong, especially when viewed side by side with the high costs and highly uncertain benefits of the current strategy.
First, it is now clear that the Coalition’s purely-defensive security interests in the continued occupation of Iraq are very limited at best.
While the Iraqi Army has been destroyed, we have not found any WMDs, WMD programs, or terrorist training facilities in Iraq.
Meanwhile, the entire region has become an al-Qaeda recruiter’s wet-dream.
Finally, if Iraq ever did become a clear and present danger to us, there’d be nothing to prevent our reoccupying the country – evidently, having done it twice, that is something that we are good at.
Second, far from helping to insure peace and stability in Iraq, the absence of a clear deadline for the presence of Coalition Forces helps to (1) attract foreign terrorists; (2) legitimize terrorism and spread the resistance; (3) increase the power base of those who happen to have private armies at their disposal; (4) undermine Iraqi support for moderate leaders; and (5) militarize ethnic, religious and tribal conflicts.
In contrast, the existence of a firm near-term (say, six months) deadline, along with an accelerated (90 day) timetable for local and regional elections, would almost instantly cause the Iraqi resistance to quiet down.
That, in turn, would create several "virtuous cycles": (1) it would permit the Iraqi people to turn their attention away from violence and toward the upcoming political elections; (2) it would discredit and undermine popular support for any leaders or foreign fighters who tried to perpetuate the resistance; and (3) it would permit economic reconstruction, now largely on hold because of the security situation, to resume, permitting more and more ordinary Iraqis to feel that they are indeed better off than before Saddam's demise.
Third, far from helping to promote positive relations with the US, other Western powers, and the UN, the Coalition’s continued military presence, including its construction of 14 military bases and its dominance over economic policy, (1) feeds suspicions about neo-imperialism; (2) helps to promote anti-Western ideology; and (3) provides an opportunity for a variety of outside forces – Iran, Syria, and perhaps al-Qaeda – to gain influence.
In contrast, the announcement of a firm withdrawal date would make it much clearer than it is now that the US/UK invasion of Iraq is “well-motivated,” in the sense that there are no grand designs on Iraqi oil, economic policy, or military bases.
Fourth, far from encouraging democracy in the country, the continued occupation, combined with a US- or even a UN-anointed, non-elected transitional regime, actually helps to undermine it, by (1) tainting the appointees with "guilt by association" with the occupying army, and (2) deferring popular elections, against the wishes of the vast majority of Iraqis.
For all their imperfections, "snap elections" for local governments, regional assemblies, and, soon thereafter, an interim (say, two-year) national congress, would (1) help to satisfy the Iraqi people's strong desire for representative government; (2) give "unstable" parts of the country an incentive to settle down so that they would be entitled to participate in the elections; and (3) perhaps most important, make the Iraqis feel responsible for their own destiny.
Once a more representative body was in place, it would of course be entitled to request foreign assistance for its police and military. In that event, forces under the auspices of the UN might even be asked back in. They would have much more legitimacy among ordinary Iraqis than the Coalition forces have now.
From the standpoint of the US and the UK, that would also have the added attraction of sharing the astronomical costs of this expensive, misbegotten venture.
SUMMARY
Throughout its entire history, the US has almost never “cut and run” from any foreign military intervention that it has undertaken – even in the case of Vietnam, where the last US combat troops did not leave until March 1973, after nearly 14 years in the country.
In fact, an examination of more than fifty US military interventions since the 1880s reveals that the far greater risk has been for US troops to intervene repeatedly and stay too long, overstaying their welcome, especially in the case of relatively weak developing countries -- for example, China (1894-95, 1898-1900, 1911-40, 1948-49), Cuba (1898-1902, 1906-09, 1912, 1917-1933, 1961), the Dominican Republic (1903-4, 1914, 1916-24, 1965-66), Guatemala (1920, 1966-67), Haiti (1891, 1914-1934, 1994-96, 2004-), Honduras (1903, 1907, 1911-12, 1919, 1924-25), Mexico (1913, 1914-18), Nicaragua (1894, 1896, 1898-99, 1907, 1910, 1912-1933), Panama (1895, 1901-1914, 1918-20, 1925, 1958, 1964, 1989-90, plus bases), the Philippines (1898-1910, plus continuing bases), Russia (1918-22), Vietnam (1959-73), and Korea (1894-96, 1904-5, 1950-53, plus continuing bases).
Despite this, American policymakers continue to fixate on the spurious risk that the US might be viewed as “cutting and running” from such engagements. Just like earlier fixations with Saddam’s WMDs and his purported links to al-Qaeda, this could prove to be very costly.
May 27, 2004 at 06:16 AM | Permalink | Comments (1) | TrackBack
Tuesday, May 11, 2004
Iraq: From a Strategic Standpoint, We've Already Lost -- Strategic Anomalies, Denial, Redoubling, and the Case for Withdrawing Now
In the wake of the recent upsurge in popular resistance in Iraq, and the evidence of soaring hostility among ordinary Iraqis and Iraqi elites of all persuasions toward the US-led occupation, a growing number of seasoned US military professionals are concluding that, from a strategic viewpoint, the US is either just now losing the "Iraq Peace," or may have already lost it, and should now focus its attention on accelerating the withdrawal of Coalition Forces.
This viewpoint, combined with harsh criticism of the "Bush/Cheney/Rumsfeld/Wolfowitz/neocon clique," is becoming more and more widespread among senior Pentagon officers and planners, though most are still reluctant to go on the record.

Of course the "usual suspects" on the American Left, like Nader, Chomsky, Jonathan Schell, and Howard Zinn, are ahead of the pack on this issue. But the interest in "withdrawal now" is also gaining ground among some conservative intellectuals, like the New York Times' David Brooks, who argued forcefully in a column just this week that the US needs "to lose in order to win" in Iraq. Support for withdrawal is also gaining ground with the American public at large, as noted below. But as in the case of the Vietnam War, the masses are way ahead of their "leaders."
Already, we've heard this revisionist view expressed in public by such Pentagon strategy heavyweights as former Reagan National Security Agency boss General William C. Odom, Army Major General Charles H. Swannack Jr., Commander of the 82nd Airborne Division, and Army Colonel Paul Hughes, who headed strategic planning for the Coalition Authority in Baghdad.
Even before the War, some senior officers, like former Army Chief of Staff Eric Shinseki, had grave concerns about the feasibility of Rumsfeld's war plan. But those had to do mainly with resources and tactics -- whether the plan provided for enough troops and heavy armor, and so forth.
In the last few months, however, the deepening concerns have shifted from tactics to strategy -- as in, do we really have one?
By this we mean:
- Does the US have a clear "definition of victory" and a long-term strategy for accomplishing it? Are these the same goals that it has announced publicly, or are there others? Is it now just floundering reactively from crisis to crisis, wishfully hoping that things will somehow work out, while getting locked in to a vicious cycle of anti-Americanism and violence? Even worse, as in the case of Vietnam, are US leaders just staying the course and sacrificing lives mainly for domestic political reasons, or because the US fears appearing to have been "defeated?"
- Are the initial or revised goals realistic, not only in terms of military might, but also political, economic, diplomatic, and moral capital? Has the US reached the point where -- as in Vietnam in 1967-68 -- these goals are no longer feasible, either because, as in the case of WMDs, they were based on misinformation, or, as in the case of "democratization," they may be inconsistent with continued US occupation, or have an unacceptable price tag?
- What are the real long-term costs of the current strategy likely to be, in terms of both "direct" and "opportunity" costs, and costs to credibility, image, and international relationships, as well as human and cash costs? How badly were these costs underestimated by the war's planners? Do we have any reason to believe that cost prediction has improved?
- What impact is this war having on other fronts in the war on terrorism? Has it become a costly distraction? Is it actually helping the terrorist cause, by providing a rallying point, an enticing opportunity to strike at US troops and foster internal divisions in Iraq, and a new source of armaments? If the US withdraws now, would that strengthen or weaken global terrorism? Would it clear the way for other countries or the UN to become more involved?
- Is a continued substantial US military presence in Iraq an obstacle to peace and security, and a source of increased religious and ethnic polarization in Iraq and the Middle East in general?
- Rather than announce increased, prolonged US troop commitments, isn't this the time for the US to announce a specific schedule for a US troop withdrawal, perhaps contingent on a ceasefire?
As we'll see below, the overall answer is that a fundamental Iraq strategy rethink is overdue. We need to take into account the many "anomalies" that we've discovered in the last year, which have fundamentally undermined the original strategy behind the War.
When policymakers find their pet strategies challenged by such anomalies, however, their first response is usually to dig in.
In the case of the Vietnam War, for example, most top Democrats and many Republican leaders had already agreed by the end of 1968 -- in private, at least -- that no "strategic victory" was feasible at an acceptable price, and that a US withdrawal was indicated.
Shamefully, largely for reasons of cosmetics, the war was continued and even expanded during the next four years. At the time, both President Johnson and President Nixon were terribly concerned about "peace with honor" -- about the country's appearing "weak" before the supposedly-unified global Communist menace.
That unity soon proved to be chimerical, with Vietnam actually fighting China and Cambodia in the late 1970s. But it took the US from 1968 until March 1973 to remove its last combat troops. And two more years of fighting were needed before the inevitable reunification of the artificial, largely US-inspired creations, "North" and "South" Vietnam. Today, of course, Vietnam remains united and "Communist," but it is known to us mainly as a worthy supplier of shrimp and coffee, and a hefty World Bank client.
As Senator John Kerry of all people must remember, those four extra years cost an additional 21,000 American lives, plus over 1.5 million extra Vietnamese, Laotian, and Cambodian lives. For what? Indeed, as Henry Kissinger himself admitted in a 2001 interview with documentary filmmaker Stephen Talbot, when asked about what difference it would have made if Vietnam had gone Communist right after World War II,
"Wouldn't have mattered very much. If the Vietnam domino had fallen then, no great loss."
So, according to Kissinger, the architect of that war's misguided strategy for withdrawal (and numerous other policy blunders), 58,000 Americans and 3 million Vietnamese, Cambodians, and Laotians basically died for nothing. This precedent is worth thinking about, as we each decide individually whether to continue to sit and watch while yet another cockamamie "national security" strategy chews up thousands of innocent lives.
DENY AND REDOUBLE
Unfortunately, the first response to strategic anomalies is usually a denial (or reinterpretation) of the new evidence, followed by a redoubling of efforts to make the tired old strategies work.
Where, as here, senior politicians who are also running for office are in charge, these tendencies are reinforced, since they fear being branded by their opponents as "inconsistent." As in Vietnam, the result could easily be over-commitment to a pipedream, ending in an eventual forced withdrawal that is much more costly than it needed to be -- and yet another young generation of Americans that never quite views its country in the same way again.
Despite all this, you wouldn't guess from our President or his Democratic challenger that the US is facing any strategic crisis whatsoever in Iraq.
Both major parties' senior politicos and bosses are stuck like deer in the headlights, committed to the same pro-war strategies they all supported last year, as if nothing has been learned since then. Locked in a tight contest, both candidates are running toward the center of the field at full speed, and not paying much attention to the new hard facts on the ground.
Indeed, Senator Kerry, Bush, Senator Clinton, Senator Lieberman, Senator Biden, Senator Lugar, and almost all leading Democrats and Republicans appear to be in violent agreement about the Iraq War, except about picayune tactical details. Consistent with the "denial/redouble" pattern, all of these "leaders" are also now calling for a significant expansion of US troop commitments in Iraq. By Bush's own estimates, this would require at least 135,000 to 160,000 US troops in Iraq at least through 2006. But there is no reason to expect that only two more years would be enough, if the current strategy is continued. Indeed, as discussed below, the Pentagon is already proceeding with a little-noticed plan to build 14 permanent US military bases in Iraq.
All these leaders, including Bush, also now give lip service to some kind of increased role for the UN in Iraq, without defining precisely what that would be. They are all a bit too glib about how they expect the UN to reenter. Presumably this would be by way of a new UN resolution, but it is not at all clear how much "control" the US is willing to yield to its old nemeses on the Security Council, France, Germany and Russia. Most important, if there is no fundamental improvement in the security situation, the UN is unlikely to want such a thankless job. But improving the security situation requires, as we'll argue, a new view of where the insecurity is coming from, and a US/UK withdrawal timetable.
All this implies a substantial increase over the $175 billion that the US has already spent in Iraq -- Bush's latest $25 billion supplemental budget request is consistent with an annual "run rate" of at least $50-$60 billion a year. If recent casualty rates are any indication, this also implies at least another 1500-2000 US war dead through 2006, and probably 5-6 times that many US wounded, not to mention thousands more Iraqi dead and wounded, including many civilians.
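For readers who want to see how these projections hang together, here is a minimal back-of-envelope sketch in Python, using only figures cited in this piece (the roughly $5 billion monthly "run rate" discussed below, and the Coalition casualty count through May 2004). The 2006 horizon and the assumption that recent rates simply persist are illustrative simplifications, not a forecast.

```python
# Back-of-envelope projection using only figures cited in this article.
# Assumes, for illustration only, that recent spending and casualty rates
# continue unchanged through the end of 2006.

monthly_cost = 5e9                  # ~$5 billion per month "run rate" (see below)
annual_cost = monthly_cost * 12     # => roughly $60 billion a year

coalition_dead_to_date = 881        # Coalition combat fatalities over ~13 months (May 2004)
monthly_death_rate = coalition_dead_to_date / 13     # ~68 per month

months_remaining = 31               # June 2004 through December 2006
projected_additional_dead = monthly_death_rate * months_remaining
projected_additional_wounded = projected_additional_dead * 5.5   # the "5-6 times" ratio

print(f"Annual cost: ${annual_cost / 1e9:.0f} billion")
print(f"Projected additional dead through 2006: ~{projected_additional_dead:.0f}")
print(f"Projected additional wounded through 2006: ~{projected_additional_wounded:.0f}")
# Roughly $60 billion a year, on the order of 2,000 more dead and 11,000+ more
# wounded -- consistent with the "at least 1500-2000" figure above.
```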
In Senator Kerry's case, this lack of leadership is especially disappointing. As he said recently,
"Americans differ about whether and how we should have gone to war, but it would be unthinkable now for us to retreat in disarray and leave behind a society deep in strife and dominated by radicals."
One might have hoped that Senator Kerry would have learned something about the high costs and radicalizing effects of dead-end wars from his years in the Vietnam Veterans Against the War, if not from his four months on a gunboat in Vietnam.
PLUMMETING SUPPORT FOR THE WAR
According to the latest USA Today national poll, taken May 10, 2004, in the wake of the disgusting Abu Ghraib prison abuse scandal, for the first time since the initiation of the war in March 2003, over half of Americans -- 54 percent -- now believe that it was a "mistake" to send US troops to Iraq. This is a dramatic turnaround in less than six months -- and, interestingly, a much faster erosion of support for this war than occurred during the Vietnam period.
Fully 29 percent now believe that all US troops should be withdrawn now, up from just 16 percent in January, and another 18 percent believe that some troops should be withdrawn. At least 75 percent oppose expanding the number of US troops in Iraq.
Despite this, expanding the number of US troops in Iraq is still, as noted earlier, the official position of both President Bush and John Kerry, and of almost all other leading Republican and Democratic politicos. As in the case of the Vietnam War, the unwashed masses are far ahead of the "leaders."
At the moment, indeed, the only candidate who supports an immediate (six-month) withdrawal from Iraq is the stubborn 70-year-old independent candidate, Ralph Nader. Many Democrats still vehemently (and incorrectly) believe that he was the "spoiler" for a Democratic victory in the 2000 Presidential election. Whatever we may think about his candidacy, the fact is that at the moment, he is our only Robert Kennedy. To his great credit, Ralph has opposed this "preemptive war on a defenseless country" since the start, and is now advocating a withdrawal of all US troops from Iraq within six months.
As the most recent polls show, Nader's opposition to the Iraq War has given him a burst to 5 percent in the May 10th polls. Perhaps Kerry could take a few lessons from Ralph with respect to his position on the war. The same poll that showed American support for the war plummeting, and Bush's approval dropping from 52 percent in mid-April and 49 percent in early May to just 46 percent this week, also found Kerry dropping two full points, and losing to President Bush, 47-45, among likely voters. Meanwhile, Ralph gained two points, to 5 percent among likely voters.
Of course this is just one national poll, but if Kerry really wants to win in November, he'd better think about the implications of his me-too position on the war -- the main source of public dissatisfaction with Bush. As one analyst has suggested, he might even consider cutting a deal with Nader, getting Ralph to appoint the same list of Presidential electors as Kerry, in exchange for permitting him to get on the ballot in 50 states. He might also consider adopting Ralph's more popular position on the war, as well as that of the National Council of Churches, which has just recommended withdrawing and turning over control in Iraq to the UN.
Oddly enough, this situation also gives President Bush an incredible opportunity. If he really wanted to insure his victory over Kerry, the smooth move might be for President Bush to reverse course and announce plans for a definite US withdrawal. If the polls are any indication, this "win by losing" approach to fixing Iraq would be very popular with the American people -- especially those who are wavering in the center. It would also be popular with many Bush supporters on the right, who fear that aggressive nation-building in Iraq may jeopardize other vital concerns on their agenda -- like gay marriage, abortion, more tax cuts, and new Supreme Court judges.
However, Bush, like Kerry, also seems determined to dig himself in even deeper on Iraq. This is what he did just this week by unnecessarily backing Secretary Rumsfeld in the Abu Ghraib scandal, despite the many calls from right and left alike for the Secretary's resignation.
STRATEGIC ANOMALIES - "REALITY CHECK, PLEASE!"
Meanwhile, as we've just learned in the towns of Falluja and Najaf, an aggressive US military presence may just lead to increased hostility. This is only one of many "strategy anomalies" that the war's architects -- Democrats, Republicans, and Blair's Laborites in the UK as well -- have encountered. There are many others. Most are already familiar to those who have followed recent events. But it is worth restating them, just to put the case in order.
1. Iraq's WMD Threat. Of course the first basic assumption, declared innumerable times in the fall of 2002 and early 2003 by US and UK officials during the run-up to the war, was that Saddam's Iraq posed a grave threat to the US and its allies. Either it already possessed WMDs and the means to deliver them, or was actively attempting to acquire them.
A key related assumption was that this threat could only be removed by an immediate US invasion, and the complete removal of Saddam from power. The UN weapons inspection program, according to the war's supporters, had been a failure.
In this regard, it is also important to note that the removal of Saddam's regime from power was never a goal of the invasion per se -- apart from the reduction of the WMD threat, reduced terrorism, and democratization. After all, the world is filled with lousy governments, and just replacing Saddam's nasty regime with another nasty regime could never have justified the invasion. So while the supporters of the war have often trumpeted Saddam's removal from power as a sign that we have already triumphed, in fact this depends on whether or not these other goals are achieved. And this is very much in doubt for all of them.
Reality Check, Please: Of course no WMD stockpiles or serious WMD programs have been found, after months of searching by thousands of highly-trained US and UK personnel.
It also now appears that the UN weapons inspection program was in fact very successful at identifying whatever WMD programs Saddam had, and at getting him to curtail them. For all its imperfections, the UN approach worked.
Indeed, if, as France, Germany, and Russia proposed, weapons inspection had been permitted to continue, the war might have been avoided completely or, at worst, might eventually have proceeded with better preparations and much broader multilateral support, as in the 1991 Gulf War.
That, in turn, would have meant less US influence over post-war Iraq (as in military bases and oil). But the costs to the US and Iraq would have been much lower, and the transition to peace and a new representative government much smoother and less violent.
2. Iraq's Role in Supporting Terrorism (Pre-War). The second key strategic premise for the war was that Saddam's Iraq was aiding al-Qaeda and other global "terrorist" groups.
Reality Check, Please: In fact one of the few bona fide pre-war "terrorists" who was living in Saddam's Iraq turned out to be the aging Abu Nidal, who had been inactive since the mid-1980s. Abu Nidal was reportedly suffering from leukemia, but he died of multiple gunshot wounds in Baghdad in August 2002, long before the invasion -- perhaps the victim of an attempt by Saddam to head off the coming invasion.
The only other "terrorist group" operating in Iraq before the war was Ansar al-Islam, which was located in northern Iraq in the "no-fly" zone, outside Saddam's control. Its headquarters could easily have been bombed at any time. But the US chose to wait until after the war started, so that it could say that it actually destroyed some terrorists.
Beyond this, no definitive pre-war links between Saddam and al-Qaeda have ever been established. As former Bush Administration counter-terrorism czar Richard Clarke and many other experts have long argued, Saddam and al-Qaeda were, if anything, antagonists, and even if he had had WMDs, Saddam was not about to share control over WMDs with a radical like Bin Laden.
The US also made much of the alleged medical refuge that Saddam allowed a Jordanian militant, Abu Musab al-Zarqawi. The US claimed that Zarqawi had links to al-Qaeda. But in fact it seems that his organization, al-Tawhid, was actually a rival to al-Qaeda before the war, focused on overthrowing Jordan's King Abdullah. Of course Zarqawi's influence in Iraq does appear to have been strengthened by the US invasion.
3. Iraq's Role in Supporting "Terrorism" (Post-War). Whatever the details of Saddam's links to global terrorism were before the War, it was assumed by the war's supporters that the invasion would reduce Iraq's role in global terrorism.
Reality Check, Please: In fact just the opposite has occurred. Since the invasion, Iraq has actually become a terrorist Mecca, with anti-US fighters from all over the Muslim world pouring into the country across its now-wide-open borders, eager to kill Americans. They have no need to bring automatic weapons, grenade launchers, mines, or explosives. Saddam's huge stockpiles of these ordinary weapons have been very poorly secured by the under-manned Coalition Army. And arms have also reportedly been for sale from the new Iraqi police force.
So, as Bin Laden's most recent recorded messages have made clear, far from being a "defeat for terrorism," the Iraq War has actually been something of a boon -- rather like the December 1979 Soviet invasion of Afghanistan ultimately proved to be. The Soviets, we recall, lasted nearly 10 years in Afghanistan, at a cost of more than 15,000 Soviet lives and hundreds of thousands of Afghan lives. The US is of course vastly more powerful than the Soviet Union. And, unlike that situation, there is no foreign aid available to the Iraqi resistance. Still, as noted earlier, such aid may not be necessary here. And the US is already well on its way to keeping pace with the Soviet casualty count.
Not only has the Iraq War provided opportunities for young radicals to secure weapons and attack Americans close at hand. It may have also distracted some resources from the hunt for other terrorist organizations. Most important, it has antagonized the whole Muslim world, providing the radical factions a wonderful opportunity to recruit new supporters.
4. Iraq's Warm Welcome for US "Liberators." The US also assumed that we would get a warm reception from Iraqis, as "liberators" of Saddam's Iraq.
The US war planners also assumed that Iraqi nationalism was weak, and that resistance would disappear after Saddam and his "dead-ender" henchmen were gone. They defined "victory" as the removal of Saddam and his Ba'athist regime. They also assumed that the Iraqi Shiite and Sunni communities were fundamentally at odds, and that there would be little opposition to the Coalition Forces outside the so-called "Sunni Triangle."
Reality Check, Please: The US invasion created a wave of genuine nationalism that spans Sunni and Shiite community lines and helped to unite radical factions in each against the "occupiers," as they united in their 1920 revolt against the British. The sharpest fighting in Iraq has taken place in just the last month, long after Saddam and all but ten of the other 55 "most wanted" Ba'athist leaders were killed or captured.
Furthermore, just 17% of all Sunnis live in the so-called "Sunni Triangle" north of Baghdad -- Iraq's Sunni and Shiite communities have historically been much closer than in other Muslim countries. Indeed, one key challenge faced by "foreign fighters" like Zarqawi has been to try to divide them. In the wake of the continued US occupation, the US has experienced growing armed resistance from Shiites and Sunnis alike, as exemplified by the May 10 US strikes against the Shiite leader al-Sadr's headquarters in Baghdad. While the vast majority of Iraqis are still watching the battle from the sidelines, a majority also now supports an end to the US occupation, believes that Coalition Forces have conducted themselves badly, and believes that the Coalition will not withdraw until forced to do so.
In this situation, destroying the Ba'athist Party turns out to have been insufficient for a return to peace and security. Nor, given the importance of "ordinary" former Ba'ath Party members in the educational system, the civil service, and the police, was it a necessary condition. It was, in fact, just a dumb move taken by Paul Bremer in the early days of the occupation, under pressure from Chalabi's INC, and recently reversed.
5. "Liberators" Vs. "Occupiers." The pro-war strategists also assumed that thousands of US and UK troops could be counted on to conduct themselves in Iraq as proper "liberators."
Reality Check, Please: The grim reality is that the US and UK forces have hardly distinguished themselves as "liberators." Rather, they were rushed into duty without adequate training or acculturation, with few skills in Arabic. They had also been encouraged, from the top of the Bush Administration on down, to believe that Iraqis had something to do with 9/11. As a result, many of our troops have behaved very badly toward ordinary Iraqis -- like crude, rude barbarians. As the events at Abu Ghraib prison have dramatized, there have been widespread human rights violations, ethnic slurs, religious slights, and indignities to Iraqi women. The result has been yet another public relations debacle for the Coalition Forces.
6. Iraq's Support for "Acceptable" Democracy. Yet another key strategic assumption was that in a relatively short time, the Coalition and its Iraqi allies would be able to lay the foundations for an "acceptable" democratic system.
In the US vision, this would be one that would be (a) reasonably representative, (b) able to avoid the kind of popular theocracy that has characterized Iran, (c) pro-US and at least neutral towards Israel, (d) able to maintain a unified federal system, including the Kurds, and, of course, (e) willing to go along with other key "imperial" requirements, like permission to build 14 long-term US military bases in the country, the use of oil revenue to defray the invasion's costs, and the opening of Iraq's oil resources to foreign investors like ExxonMobil and BP.
Saddam's Removal Alone Not a Victory. In this regard, it is important to note that simply removing Saddam and his associates was never a goal of the invasion for its own sake -- apart from the goal of removing the WMD threat, cutting his alleged support for terrorism, and ultimately installing democracy. In other words, the world is filled with nasty dictators -- there had to be some special reason for singling him out. And no one argued that simply replacing him with yet another nasty dictator would justify the war -- apart from achieving these other goals. So the fact that he and his regime have been removed only "justifies" the war if, in fact, we are able to achieve these other objectives. So far, as we've seen, the WMDs, reduced terrorism, and democratization have all proved elusive.
Reality Check, Please: In fact it has been impossible to square all these various requirements with each other. As of April 2004, after a year of occupation, while 82 percent of Iraqis still support "democracy" in the abstract, outside Kurdistan most Sunnis as well as Shiites are also opposed to a rigid separation between church and state. There is, at this late date, still no complete draft constitution that the various key interest groups in the Coalition-appointed Iraqi Governing Council have been able to agree on.
Ordinary Iraqis, it turns out, are also highly critical of the US, the UK and Israel. They are also highly critical of the Iraqi Governing Council created by the Coalition Forces, which is widely viewed as a puppet government. As the Shiite leader Sistani has said, "We want elections as soon as possible."
Finally, Kurdistan, the one part of the country that is now stable and enjoying economic recovery, also strongly favors complete independence, not federation. Since Kurdistan is one of Iraq's richest provinces -- the original source of much of its oil -- the rest of Iraq is determined to prevent this. As if we needed one, this issue provides another potential source of conflict. The whole country is a bubbling cauldron of such regional, ethnic, tribal, religious, and anti-foreigner feelings, and we have turned up the fire.
7. Reestablishing Security/ Avoiding a Military Draft. Yet another key strategic assumption was that it would be relatively easy to reestablish security with a "politically acceptable" commitment of less than 150,000 Coalition troops. This force level was supposed to diminish over the year, complemented by a large number of private security contractors.
This was supposed to be feasible, despite Bremer's disbanding of the entire pre-existing Ba'athist police and military structure, the fact that Saddam had emptied his prisons of all criminals prior to the invasion, and the fact that Coalition troops had little training in policing, crowd control, or non-lethal weaponry.
One implication was that the US and UK troops expected to rotate home on regular schedules, without undue burdens on their families and morale.
Another was that a new Iraqi national police force would be able to provide an adequate substitute for the Coalition's policing activities.
A third was that military forces from other countries, or the UN, might also become available to back-stop US and UK troop commitments, as the security situation stabilized.
Reality Check, Please: All these assumptions about security have proved false. Almost incredibly, the US military repeated the same exact mistake that was made in Haiti during the 1990s. The complete disbanding of Iraq's military, with no adequate substitute, played a key role in the initial looting that occurred in Baghdad in April, 2003, and the general crime wave and insecurity, especially in Baghdad, that has continued ever since.
It also turned out to be much harder than expected to "train up" an Iraqi police force willing to stand and fight (for what? the unelected IGC?). As the Iraqi resistance became more violent, this problem escalated, to the point where, during the recent turmoil in Falluja, more than 50% of the new police force graduates defected or disappeared into the crowds.
The continuing security problem, in turn, scared many private contractors out of the country, and jeopardized the whole schedule for Iraqi reconstruction, which has basically ground to a halt. This exposed yet another dubious assumption by the war planners -- the decision to rely so heavily on private contractors for security services and reconstruction.
The security crisis has also prolonged service terms for US troops, and led many of them to be given assignments to "policing functions" for which they were never trained. That, in turn, has led to even greater frustration among US troops -- more than 40,000 of whom are members of the US Army Reserves or National Guard. It has also encouraged increased hostility between US troops and ordinary Iraqis, many of whom the troops now view as "criminals who hate us."
The resulting morale problems have caused US Army Reserve and National Guard enlistment and reenlistment rates, as well as regular military enlistment rates, to plummet to 30 year lows. Another byproduct of the continuing security nightmare is that it has been very difficult to get other countries, or the UN, to maintain, much less expand, their troop commitments.
If Iraq's security situation continues to demand increased Coalition troop commitments, and reenlistment/ enlistment rates don't improve, some observers have even speculated that the US might be forced to reintroduce a military draft. For the moment this appears unlikely, unless some new "front" opens up in Syria, Iran, or North Korea. But those of us with college-age children take no comfort from the fact that several members of Congress have already introduced the necessary legislation.
8. Modest, or At Least "Acceptable," Costs. The last key assumption was that this whole effort could be mounted at a relatively modest, or at least politically-acceptable cost, not only in terms of direct financial costs, but also human lives, and "opportunity costs" as well.
Evidently it was assumed by some planners -- Deputy Defense Secretary Wolfowitz, for example -- that Iraq's oil production would resume quickly enough to let it make a substantial contribution to funding the costs of the War. It was also assumed, as indicated earlier, that security would improve rapidly after Saddam's demise, and that the new Iraqi police force would be able to substitute for US troops, at a fraction of their cost. Finally, the super-optimists in the pro-war camp may have even assumed that part of the War's freight would be paid by other UN members, even though the Security Council was never permitted to rule on the final decision to go to war.
Reality Check, Please: The reality is that, just one year in, the Iraq War is a budget-buster, both in terms of cash and lives.
The Pentagon's cost accounting for this effort is inscrutable -- perhaps intentionally so, although it is never easy to know precisely what fraction of, say, a hospital in Frankfurt or "wear and tear" on a particular aircraft is properly assignable to a specific front. However, most estimates put the "sunk cost" to date of the Iraq venture at about $175-$180 billion, including the current interest on this spending, since all of it has to be deficit financed.
Going forward, there is now a continuing "run rate" of about $5 billion per month. In terms of real dollars, this is close to the peak $5.1 billion per month run rate for the Vietnam War. These numbers omit the costs incurred by the UK and other Coalition members, which have supplied about twenty percent of the troops.
Nor is Iraqi oil production anywhere close to covering these financial costs. Indeed, production has still not recovered to its pre-War levels, and the cost of securing Iraqi oil exports against increasing sabotage attempts is eating up almost all the profits.
Coalition casualties have also been much greater than expected. As of May 10, after 13 months of combat, the Coalition has sustained a total of 881 combat fatalities and approximately 4716 wounded, assuming that US "dead to wounded" ratios also apply to non-US Coalition Forces.
While these totals are well below those sustained during the peak years of the Vietnam War -- 1967-69 -- they are far greater than those sustained during the early years of that War, 1961-64, and comparable to the losses sustained by the US in 1965, the first big year of the Vietnam War, allowing for improved survival rates because of improved body armor and "just-in-time" medicine.
As for the Iraqis, officially, our "new Pentagon" no longer keeps track of "enemy body counts," much less civilian casualties -- one major way in which Vietnam was indeed different, at least for PR purposes.
However, efforts have been made by some observers to keep track of Iraqi civilian fatalities reported in the press. While these statistics are probably an understatement, they indicate at least 9,016 to 10,918 Iraqi civilian deaths through April 24, 2004.
In addition, of course, there have also been at least 4-5 times this number of civilians wounded. In a country with a population of 25 million, this is quite a blow. For the 80 percent that is Arab, and has suffered almost all the casualties, this would be comparable, in US terms, to a loss of 100,000 dead and 600,000 wounded. Assuming that the average Arab family in Iraq has four members, that each family member knows 10 people, and each of these 10 people also knows 10 people, the entire Arab population of Iraq is within "two degrees of separation" of experiencing these losses personally.
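As a rough check, here is a minimal sketch, in Python, of the arithmetic behind this "two degrees of separation" claim, using only the assumptions stated above (family size, acquaintance counts, and the casualty estimates). Ignoring overlap between social circles is an additional simplification of mine.

```python
# Back-of-envelope check of the "two degrees of separation" claim,
# using the article's own figures and assumptions.

arab_population = 25_000_000 * 0.80      # ~20 million Arab Iraqis
civilian_dead = 10_000                   # midpoint of the 9,016-10,918 range
civilian_wounded = civilian_dead * 4.5   # "4-5 times this number" wounded
casualties = civilian_dead + civilian_wounded

family_size = 4        # average Arab family (author's assumption)
acquaintances = 10     # people each family member knows
second_degree = 10     # people each of those acquaintances knows

# People within two degrees of one casualty's household, ignoring overlap
reach_per_casualty = family_size * acquaintances * second_degree   # = 400

total_reach = casualties * reach_per_casualty
print(f"Casualties (dead + wounded): {casualties:,.0f}")
print(f"People within two degrees (ignoring overlap): {total_reach:,.0f}")
print(f"Arab population of Iraq: {arab_population:,.0f}")
# ~55,000 casualties x 400 people each comes to roughly 22 million --
# on the order of the entire Arab population of Iraq, which is the point.
```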
Is it any wonder that we have already lost the peace?
LONGER-TERM COSTS
But the real long-term costs of this war are even higher.
Far from striking a decisive blow for "democracy and liberation" in the Middle East, and setting an example for other Arab countries to follow, this war has become a lightning rod for anti-Americanism, and a text-book example of hegemony run amok.
Far from teaching the world to respect and admire America's newfound power, our global reputation has plummeted to an all-time low.
The citizens of other countries that have practiced imperialism against their neighbors or their own peoples know what it feels like to be despised and hated whenever they travel. Americans are not used to this treatment. As a direct result of this war, as well as our other policies in the Middle East and elsewhere, we are going to have to get used to it.
Far from providing the world an inspiring example of our truthfulness and honor, the way in which the case was made for this war, and the way it has been conducted, have severely damaged our nation's credibility.
THE CASE FOR WITHDRAWING NOW
Few Americans doubt that we will someday withdraw all US troops from Iraq, as we did from Vietnam.
Probably most of them are not aware that, unlike in Vietnam, the Pentagon's military engineers are already hard at work designing and constructing 14 "enduring" military bases all over the country, in Baghdad, Mosul, Taji, Balad, Kirkuk, and near Nasiriyah, Tikrit, Fallujah, Irbil, and elsewhere.
Apparently the objective is to provide a substitute for our bases in Saudi Arabia, both in terms of oil and military presence. Presumably this is being done in part to keep the Saudis happy, because they are afraid that the presence of US troops on their soil excites domestic resistance. The influential Saudi royal family no doubt prefers to have US troops and Iraqi leaders facing down resistance in Baghdad than in Riyadh.
But of course we are still telling the Iraqis that they will "eventually vote and govern" their own country, and that we have no intention to "occupy" it.
In any case, for those who oppose this war, as well as for the vast majority of Americans who have swallowed the brave new lies that our only long-term interest in Iraq was to "remove Saddam's tyranny," "rebuild Iraq's economy and democracy" and "withdraw," the key issue is -- when should withdrawal start?
In light of the recent crisis, the war's architects have not been able to get on with their agenda quite so easily as they once hoped. Faced with the acute security crisis noted above, they now tell us that if we only increase the number of troops in Iraq for two more years, and provide the extra $120 billion that this will require, we can all return to the original smooth transition plan.
On the other hand, they also claim that, unless the Coalition stays the course, Iraq will disintegrate into civil war, instability, and chaos -- even worse conditions, somehow, than already exist.
To this hokum we say, first, as noted above, that the credibility of the war's architects is not exactly unsullied.
And now they seem to be proposing yet another episode of "Who Do You Believe -- Me, Or Your Lying Eyes?" As Peter Gutmann once remarked,
We're standing there pounding a dead parrot on the counter, and the management response is to frantically swap in new counters to see if that fixes the problem.
The good news is that the war's architects and pamphleteers are actually very few in number. As the New York Times' Thomas Friedman said in an interview with Ha'aretz, the leading Israeli newspaper, in 2002,
This is a war the neoconservatives wanted.....(and) marketed. Those people had an idea to sell when September 11th came, and they sold it. Oh boy, how they sold it. This is not a war that the masses demanded. This is the war of an elite. I could give you the names of 25 people (all of whom sit within a 5-block radius of my Washington D.C. office) who, if you had exiled them to a desert island a year and a half ago, the Iraq War would not have happened.
So if this tiny band was able to wield so much influence in our erstwhile democracy, and a much larger number now realize that their assumptions were wrong, shouldn't it be possible to reverse course?
Or is it the case that we are now locked into sitting through this whole dreary play?
We say that it is actually the continued occupation that is the greatest threat to stabilization, democratization, the restoration of the Iraqi economy and oil exports, and the preservation of a unified Iraq.
Indeed, Iraqi hostility to the US/UK occupation has now reached the point where securing these goals is much more likely if the Coalition forces withdraw as soon as possible.
This is true for several reasons:
- The announcement of a definite schedule for a US withdrawal will almost instantly cool the resistance, reduce the leverage of the radicals and the "foreign fighters," permit Iraq's beleaguered police force to focus on fighting crime, and allow the reconstruction of the economy to proceed.
- Mainstream Iraqis know full well how to restore order in their own neighborhoods, once the US provocation is gone. Absent the occupation, the vast majority of Iraqis will have little patience for foreign subversives like Zarqawi, and will throw the bastards out.
Indeed, an astute US withdrawal from Iraq would surely be as sad a day for "terrorists" as the US invasion of Iraq was a happy one.
- A clear withdrawal timetable would make it clear to the Iraqis that the US has no imperial intentions with respect to their oil wealth. It would permit them, indeed, to recover some of the pride that they must have lost by having to rely on a foreign power to get rid of their dictator -- they could feel that, even if they did not oust Saddam themselves, at least they were able to oust the most powerful hegemon on the planet. That alone could provide the ideological foundations for a vast rebirth of national pride.
- Such a withdrawal would also permit us to remove the provocative presence of largely-Christian US troops in this overwhelmingly Muslim country. It would also clear the way for UN assistance and multilateral development aid, as well as debt relief, to flow more freely into the country.
- Few Iraqis want a return to dictatorship -- the demand for democracy is overwhelming. It is indeed odd and un-American for the United States of America, in particular, to insist that Iraqi democracy can only be established at the point of a gun, under the guiding hand of a foreigner. I don't recall the Founding Fathers at Independence Hall in Philadelphia requiring much help from Tony Blair's predecessors or the British monarchy.
- As General Odom has argued, a clear timetable for a US withdrawal would actually help the UN, or perhaps other Muslim states, to send in peace-keeping forces of their own.
- So long as the war in central and southern Iraq continues, the Kurds have every incentive to continue to move toward complete independence. Only the restoration of Iraq's central government, and the prospect of "win win" gains from interregional trade and development, can break down these regional ethnic barriers, and keep Kurdistan within Iraq.
- A US/UK withdrawal, according to a pre-announced timetable that is brisk, yet responsible, and conditioned on the preservation of security, will give Iraqis an incentive to observe and enforce their own general ceasefire, so long as they clearly see progress being made. Unlike the situation in Israel, where the occupiers have been stalling for more than 30 years on "security" grounds, the US has no settler minority that is trying to hold on to Iraqi resources, unless it is Chalabi's band of thieves, thirsting after an oil privatization. But we don't have to be hostage to his demands; if he becomes a problem, we can simply approve Iraq's new extradition treaty with Jordan, cut off his $300,000 per month allowance, and dump him over the border in Jordan, so that he can finally stand trial for bank fraud.
- If it did turn out that this little experiment with a pre-scheduled withdrawal failed, and the Iraqis themselves, perhaps with UN assistance, were unable to develop a peaceful government, there'd be nothing to prevent the UN or even the US from returning with a more rested, better trained peace-keeping force. After all, we're now pretty sure that the Iraqi Army is not about to fight us with WMDs!
- In the event that a civil war erupted in Iraq, and the situation somehow managed to deteriorate from the abysmal state where it is right now -- which 58 percent of Iraqis believe is "the same or worse" than before Saddam's removal -- peacekeepers would probably be welcomed by the majority of Iraqis. This is unlike the current situation, where only a third of Iraqis believe the Coalition forces are doing more good than harm.
All told, the case for a unilateral US withdrawal from Iraq seems very compelling. If the case for it is made to the American people by a leading political figure, it could also be politically very successful. But, other than a marginal, if courageous and thoughtful, candidate like Ralph Nader, who is willing to pick up this torch?
Where, indeed, is our Robert Kennedy? Where is the major US political figure who will stand up to this war?
Of course there are some self-styled "conservatives" in the audience -- people who have otherwise somehow found it possible to give a hearty "Sieg Heil" to one of the most radical, un-Constitutional, internationally illegal, risky, costly and irresponsible exertions of military power in US history -- who will no doubt argue, as they did in the case of Vietnam, that this abrupt policy reversal "might be risky."
After all, it might undermine US credibility! It might encourage the world's terrorists! It might make our own allies distrust us! It might jeopardize national security!
These are the same folks whose tidy little war plan has just sullied America's image and credibility almost beyond repair. It has poured hundreds of billions of dollars and thousands of liters of human blood into the sands of the Iraqi desert. It has helped make Iraq more of a sanctuary for the world's worst terrorists than ever before. It has alienated the entire world, and succeeded in making many Iraqis actually long for the old regime.
As Bertrand Russell once remarked, "The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts."
Or, as Peter Gutmann might have said, the war's supporters are still trying to swap in new counters and pound them with the same old dead parrots.
May 11, 2004 at 05:15 PM | Permalink | Comments (0) | TrackBack
Sunday, May 09, 2004
04509.US Brutilitarianism Comes to Iraq- Part II: The Roots of Brutality
In the midst of all the hoopla and finger-pointing over Secretary Rumsfeld’s apology for the Iraqi prisoner abuses at Abu Ghraib prison, we seem to have avoided getting to the bottom of the fundamental question begged by all those ugly photos: why did it happen?
In other words, how could young American soldiers, raised in a nominally democratic, civilized “Judeo-Christian” society, and members of the world's most advanced military, which has no business being in Iraq if not to “liberate” it from precisely this kind of oppression, come to act in this way?
From this angle, whether or not Rumsfeld or a few military commanders resign is beside the point – a juicy chance for Senator Kerry and his supporters to make political hay, perhaps, but largely irrelevant to our understanding of these disturbing events and the prevention of their recurrence.
This is especially true if, as we will argue here, they may have been part and parcel of the very nature of this ethnically-divisive dirty little urban guerilla war.
ALTERNATIVE EXPLANATIONS
At this point, the official US investigation of the recent abuses at Abu Ghraib prison, like the press accounts of them, is still incomplete. Already, however, there are several conflicting explanations.
“Exceptional Evil-Doers.” As noted in Part I of this series, the prevailing view of US officials is the “bad apple” theory -- in President Bush's words, "the wrongdoing of a few." This explanation -- which has deep roots in American culture, dating at least as far back as the Salem Witch trials, and is also at the heart of our conventional view of "terrorists" -- attributes the problem to brutal, distinctly “un-American” misbehavior by a handful of “bad” people. In this view, this tiny group is clearly distinct from the vast majority of decent, Geneva Convention-abiding US military personnel. This explanation has been adopted by a wide variety of political and military leaders, from President Bush, Secretary Rumsfeld, and General Myers to Senators McCain, Kerry, and Clinton. It also appears to be the predominant view in the mainstream press, perhaps because it lends itself to the kind of lengthy profiles of soldiers that, for example, the New York Times and the Washington Post have both front-paged several times this week. It is also naturally more comforting to supporters of the Iraq War -- including all the leaders and newspapers just mentioned -- who view this scandal as an embarrassing, unhelpful distraction from the immediate task at hand, which is to get on with "stabilizing" the security situation in Iraq (e.g., crushing the resistance).
This kind of explanation is a standard one for individual criminal conduct in general. Typically it locates the roots of abusive behavior in the supposed predispositions of particular abusers to commit it. The contributing dispositive factors may vary -- pathological or "authoritarian" personalities, genetic defects, retribution for perceived injustices, inadequate schooling, too much TV, weak role models, or Salem witchery, for all we know. Whatever the underlying factors, the indicated prescription focuses on identifying and handling these “bad seeds,” and, in this case, any individual commanders who may have also “failed” to supervise them.
(to be continued....)
May 9, 2004 at 02:55 PM | Permalink | Comments (2) | TrackBack
Thursday, May 06, 2004
0505.US Brutilitarianism Comes to Iraq- Part I: Rogue Behavior, Sheer Stupidity, or Something More?
Since all the claims about Saddam’s WMDs and other threats to global security are now in tatters, the goal of replacing his regime with a more humane one is one of the few justifications for the Iraq War that still has any credibility.
It is therefore deeply disturbing to learn that serious human rights violations, including several cases of torture and outright murder, may have been committed against scores of Iraqi prisoners by leading elements of the Coalition Forces, including the US Army Reserve Military Police, US Military Intelligence (INSCOM), the CIA, and private contractors like CACI International that have been providing so-called “intelligence collection” services to the Pentagon. Similar allegations have also surfaced about the British Army, although those charges may have been exaggerated by the Daily Mirror. This is of course in addition to the thousands of civilian deaths that our precision-guided military has produced all over the country with its "fire and forget" tactics.
A preliminary investigation of these charges by US Major General Antonio M. Taguba, disclosed last week by CBS News and the New Yorker, concluded in February that the alleged abuses included:
Breaking chemical lights and pouring the phosphoric liquid on detainees; pouring cold water on naked detainees; beating detainees with a broom handle and a chair; threatening male detainees with rape; allowing a military police guard to stitch the wound of a detainee who was injured after being slammed against the wall in his cell; sodomizing a detainee with a chemical light and perhaps a broom stick, and using military working dogs to frighten and intimidate detainees with threats of attack, and in one instance actually biting a detainee.
While this behavior pales by comparison with the sadism that was routinely practiced by Saddam’s minions on a massive scale, it has no place in a post-Saddam Iraq, and the US and UK certainly have no business encouraging it. Nor, despite the usual defences often heard in right-wing quarters for "selective torture" to deal with terrorism, is there any evidence that the interrogation methods employed here produced anything more than a public relations debacle for the US and its allies.
These charges may be new to American audiences, but this is far from the first time that they have been made. According to Iraq’s former Human Rights Minister, Abdel Basset al-Turki, who resigned on May 4th in protest over these allegations, US Pro-Consul Paul Bremer was put on notice about this widespread mistreatment as early as November 2003, but did nothing. The International Red Cross also reports that it has been complaining for months to the Coalition about methods "far worse" than those depicted in the photos released by CBS, but to no avail.
This is consistent with the brush-off that Paul Bremer and Condi Rice reportedly both gave to the concerns that Amnesty International first raised about conditions in Iraqi prisons way back in July 2003.
The Pentagon also now says that it ordered a “high-level review” of the issue last fall, but this must have had little impact on the ground, since the abuses noted above took place in December 2003.
Now that President Bush himself has finally pronounced these abuses “abhorrent” on two Arab television stations, the military may find more time to focus on the results of the "30 investigations” of these and related charges that it claims to have conducted over the past 16 months. In the wake of Donald Rumsfeld's passive "know-nothing" response to the escalating scandal, calls for his resignation are mounting, and the heads of more than just a few mid-tier officers may also have to roll before justice is done here. Certainly Rumsfeld has a lot of questions to answer, including why he said nothing to Congress about the investigation; why so many cases of brutality have been recorded; why nothing was done to change conditions at Abu Ghraib and elsewhere, despite numerous complaints over many months from Amnesty International and the International Red Cross; why private contractors were permitted to play such a large role in military intelligence; and, most important, why the rules of the Geneva Convention are not being enforced by US military and intelligence personnel.
All told, this scandal is shaping up to be a touchstone for the whole Iraq enterprise. It is worth trying to understand how this happened – and why, in particular, American soldiers who were supposedly raised in a democratic, more or less civilized country and trained by our sophisticated, extraordinarily-expensive modern military, should have participated rather gleefully in such bizarre behavior -- and even photographed it, too!
THE RHETORIC - “TORTURE IS ILLEGAL”
To begin with, lest all the civil and military servants in the audience need to be reminded, this kind of behavior, if substantiated, constitutes a clear violation of one of the most fundamental, widely-shared principles of international and US law – the absolute prohibition against torture.
This prohibition, which is as universal as the ones against slavery or piracy, extends to all prisoners of war, civilians, and all other war-time detainees. Indeed, while Iraqi insurgents may not be deemed to be part of the Iraqi armed forces, and therefore are not technically “prisoners of war” for purposes of the “combatant’s privilege” -- to fire on enemy troops without fear of prosecution -- they are still entitled to the same basic rights so far as interrogation is concerned.
It is interesting to see just how many times this prohibition has recently been repeated in international law -- especially since so many countries still routinely engage in the practice.
- The 1948 Universal Declaration of Human Rights, Article 5: “No one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment.”
- The 1949 Geneva Convention (4th Convention, Article 31): “No physical or moral coercion shall be exercised against protected persons, in particular to obtain information from them or from third parties.”
- The 1966 International Covenant on Civil and Political Rights, Article 7: “No one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment.”
- The 1975 Declaration on the Protection of all Persons from Being Subjected to Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment, unanimously adopted by the UN General Assembly: “No state may permit or tolerate torture or other cruel, inhuman or degrading treatment or punishment. Exceptional circumstances such as a state of war or a threat of war, internal political instability or any other public emergency may not be invoked as a justification of torture or other cruel, inhuman or degrading treatment or punishment.”
- The 1984 UN Convention Against Torture (in force since 1987), which focuses on official conduct, repeats the prohibitions noted above, and also provides that (Article 2): "An order from a superior officer or a public authority may not be invoked as a justification of torture."
All of these conventions have been signed and ratified by both the US and the UK -- although the Bush Administration has actually cooperated with countries like Syria, Libya, and Cuba to oppose efforts to give international inspection “teeth” to these anti-torture conventions.
Of course such prohibitions against torture, cruel and degrading treatment of prisoners prior to conviction, arbitrary detention, “cruel and unusual punishment,” and self-incrimination are also cornerstones of the American and British legal systems.
In the case of the US, they have at least a 200-year history, and are deeply embedded in the US Constitution, especially the Fifth, Eighth, Thirteenth, and Fourteenth Amendments. They have also been recognized by numerous state and federal statutes, including the Uniform Code of Military Justice, the War Crimes Act (18 USC 2441), and the Torture Victim Protection Act of 1991 (28 USC 1350 App.)
More recently, the US also played a leading role in prosecuting war crimes at Nuremberg and Tokyo after World War II, and helped finance and organize the prosecution of the war crimes committed in the Balkans and Rwanda in the 1990s. Of course the Bush Administration has also recently taken pride in distinguishing itself from “Axis of Evil” regimes like Saddam’s, North Korea and Iran in this respect.
RELIEF?
So what relief will all this weighty legal doctrine actually provide for the individual Iraqis who were victimized in this case? The short answer is – not much, at least in the case of the US.
There appears to be probable cause to prosecute the officers and enlisted personnel involved, as well as any senior officials who knew or should have known about these activities, for war crimes in US or UK military courts, or, in the case of the UK, by the newly-created International Criminal Court (see below.) But while that might satisfy the victims’ needs for retribution, it won’t provide them with any compensation.
The Iraq Special Tribunal that we established in December 2003 to prosecute war crimes and crimes against humanity – mainly those committed by the Ba’athist regime – was carefully limited to jurisdiction over Iraqi citizens and residents.
In the case of the private US contractors involved, the Iraqi torture victims might at least be able to sue in a US federal district court for the damages inflicted on them under “color of law,” as provided by the Torture Victim Protection and Alien Tort Claims statutes. But this could be a hard case to prove, especially if the contractors involved were careful to let the military police do all their dirty work.
If the allegations of British misbehavior hold up, the Iraqis might also be able to bring war crimes charges against UK soldiers and their bosses at the new International Criminal Court -- since, unlike the US, Britain ratified the treaty establishing this court last year. This might make for an interesting complement to Saddam’s war crimes tribunal, scheduled to begin later this year. However, even apart from the practical obstacles to such prosecutions, Iraq’s “new government,” hand-picked by the Coalition, would probably readily grant the UK a waiver against such complaints. But British troops might still be subject to the UK's Human Rights Act, which might provide compensation to Iraqi victims for violations of the European Convention on Human Rights. Indeed, 14 Iraqi families have already commenced an action in the UK's High Court for what they consider unlawful killings of their civilian relatives in the last year by British troops.
DAMAGE CONTROL?
When all these revelations first appeared – reportedly after CBS News held the story for two weeks at the request of the US military, which feared the impact on Arab opinion ("duh") -- senior military officers in the US and the UK, as well as political leaders like Prime Minister Blair, President Bush, and even the rather cautious Senator Kerry, were all quick to condemn the obviously indefensible misbehavior. But they were also quick to claim, prior to any investigation, that this behavior must have been exceptional, engaged in by at most a handful of “rogue elements” in the military and the CIA, who will now all be sternly dealt with.
However, as Amnesty International has noted, these were not just isolated incidents. Indeed, as we’ll argue below, they appear to be part of a disturbing trend toward the increasing use of “hard-core” interrogation techniques on Arab detainees by the US and its new allies in the “war on terror,” both abroad and at home.
Moreover, the US and UK civilian and military intelligence services and their contractors and surrogates have a long history of intimate involvement with such interrogation methods.
In fact what’s really most unusual about these recent scandals is not the revelation that all these services routinely use such methods, but that in this case they got their hands dirty.
Usually they are smarter than that, outsourcing the "wet work" to Third Worlders in countries with fewer reporters, human rights observers, or young Army reservists.
Ironically, in the case of occupied Iraq, these intelligence services had no other country to "outsource" the work to. With thousands of Iraqi prisoners to deal with, and a growing insurgency, they also had little choice but to rely on at-hand Army Reservists, at least one of whom decided to turn in his comrades. If the military and the intelligence services had not gotten caught, one wonders how many of those "30 investigations" and 25+ deaths in detention we'd have ever heard about.
There is also evidence that, especially in the wake of 9/11, similar tactics may be spreading to domestic law enforcement back in the USA – reflecting a growing militarization of police work. Interestingly, such tough tactics may not actually produce any more solved crimes. But they do provide nice opportunities for frustrated investigators to blow off steam.
Tying all this together, the patterns revealed here really belie the conventional notion that the hard-core interrogation tactics recently seen in Iraq were simply rogue actions by a group of unprincipled individuals.
Nor were they, in Donald Rumsfeld’s words (after four days of silence on the subject), simply “unacceptable and un-American.”
It is more accurate to say that, under the license of our new post-9/11 crypto-culture, many military and civilian intelligence and law enforcement officials apparently feel entitled to violate fundamental civil rights – especially those of Arabs and other suspect minorities – in the interests of pursuing “bad guys.” This is a little taste of what it was like to have been a white cop in Macon, Georgia, or Jackson, Mississippi, in 1956.
So this kind of behavior has become, if anything, all too acceptable, all too American. To make sure that it diminishes, rather than continues to grow, we need to get to the bottom of the institutional failures, not just the individual errors in judgment, that foster it. Just as we (almost) once got to the bottom of the systemic problems in Macon and Jackson.
THE REALITY
The latest disclosures from the Pentagon, when added to other reports in the last two years, add up to some disturbing patterns:
Iraq. As the Pentagon only disclosed on May 4, since December 2002 it has launched investigations of at least 25 suspicious deaths of prisoners in custody in Iraq and Afghanistan, and 10 cases of assaults. In addition, in Iraq, three Army reservists – Army Reserve military police, like those accused in the most notorious incidents – were discharged in January 2004 for abusing prisoners at Camp Bucca, south of Baghdad near Umm Qasr. In December 2003, two British soldiers were arrested but released after an Iraqi prisoner died in their custody. In November 2003, Major-General Abed Hamed Mowhoush, of Saddam’s Republican Guard, fell ill and died during “an interview with US forces.” In August 2003, a US Army lieutenant colonel received a fine, but no court-martial, for firing a shot near a detainee’s head during an interrogation. There have been reports of similar abuses at “Camp Cropper” and “Camp Bucca,” near the Baghdad International Airport. Furthermore, the US has been very slow to provide information on the whereabouts and conditions of up to 10,000 civilians who have been detained in Iraq, leaving many family members completely in the dark about them. And, according to Amnesty International, these detainees have been “routinely subjected to cruel, inhuman or degrading treatment during arrest and detention.”
Afghanistan. According to Amnesty International, detainees interrogated by the CIA at Bagram Air Base have allegedly been subjected to "stress and duress" techniques that include “prolonged standing or kneeling, hooding, blindfolding with spray-painted goggles, being kept in painful or awkward positions, sleep deprivation, and 24-hour lighting.” In December 2002, two Bagram detainees died under suspicious circumstances. A 13-year-old Afghan boy who was detained in Bagram for two months described it as
“...a very bad place. Whenever I started to fall asleep, they would kick on my door and yell at me to wake up. When they were trying to get me to confess, they made me stand partway, with my knees bent, for one or two hours. Sometimes I couldn't bear it anymore and I fell down, but they made me stand that way some more."
A December 2002 press report on standard practices at Bagram sounds like it is not all that different from what recently was discovered to be going on at Abu Ghraib prison in Iraq:
Captives are often "softened up" by MPs and U.S. Army Special Forces troops who beat them up and confine them in tiny rooms. The alleged terrorists are commonly blindfolded and thrown into walls, bound in painful positions, subjected to loud noises and deprived of sleep. The tone of intimidation and fear is the beginning, they said, of a process of piercing a prisoner's resistance. The take-down teams often "package" prisoners for transport, fitting them with hoods and gags, and binding them to stretchers with duct tape…..
The US military has also been accused of standing by and watching while its allies in the Northern Alliance slaughtered up to 4000 captured Taliban prisoners of war.
Other Secret Detention Centers. There have also been reports of serious psychological and physical abuse at Guantanamo’s Camp X-Ray and Camp Delta, and other off-limits detention centers, such as Diego Garcia in the Indian Ocean. Up to 3000 detainees are being held in such facilities, but US military officials have refused to disclose their precise names and numbers, and have only allowed intermittent visits from the International Red Cross. In Guantanamo alone, after two years, more than 660 inmates are currently in detention, including some children. All these detainees have been designated as “unlawful combatants” by the US military; we do not know their names or the charges against them, and none of them have received any judicial review, access to lawyers, or even contact with relatives. Indeed, even US citizens are being held in this indefinite “right-less” limbo status, the legality of which is now being challenged in the US Supreme Court. It seems likely that the prisoners’ right-less status has helped to encourage abuses against them.
“Refoulement.” The US has also apparently subjected hundreds of suspects – including, according to the CIA’s George Tenet, at least 70 before September 11th -- to “extraordinary renditions” to countries like Jordan, Egypt, Uzbekistan, Morocco, Saudi Arabia, Israel, and Syria, where edgy interrogation methods are routine. According to one report, in the mid-1990s the CIA significantly expanded its efforts to snatch suspected Arab terrorists for purposes of such renditions, from which it shared in any resulting information, by way of a new secret Presidential “finding” that purportedly "authorizes" it -- in violation of all the international treaties noted earlier. According to the US State Department’s own country reports, the interrogation methods employed by these US allies include:
- Egypt: Suspension from a ceiling or doorframe; beatings with fists, whips, metal rods, and other objects; administration of electric shocks; being doused with cold water; sexual assault or threats of sexual assault
- Israel: Violent shaking; smelly head-bag; painful positions; "truth serums;" torture of teenagers.
- Jordan: Beatings on the soles of the feet; prolonged suspension in contorted positions; beatings
- Morocco: Severe beatings
- Pakistan: Beatings; burning with cigarettes; sexual assault; administration of electric shocks; being hung upside down; forced spreading of the legs with bar fetters
- Saudi Arabia: Beatings; whippings; suspension from bars by handcuffs; drugging
- Syria: Administration of electric shocks; pulling out fingernails; forcing objects into the rectum; beatings; bending detainees into the frame of a wheel and whipping exposed body parts.
Deeper Roots. While the “war on terrorism” has given hard-core interrogation techniques a new lease on life, in fact the Afghan and Iraqi situations are only the most recent examples of their development by both the US military and the CIA. They sponsored a great deal of primary research on the subject, drafted “how-to” manuals for use in torture/interrogation training, and provided a great deal of instruction and assistance to the global hard-core interrogation industry. Among the many recipients of this development assistance were the Shah’s Iran, Brazil and Uruguay in the 1960s (by way of Dan Mitrione and others), Vietnam (by way of the Phoenix Program’s “Provincial Interrogation Centers”), Guatemala and El Salvador in the 1980s, and Honduras’ infamous Battalion 316 in the early 1980s -- where our new US Ambassador to Iraq, John Negroponte, also served in 1981-85.
Dubious Methods Back Home. There is also evidence that the same rough-trade interrogation tactics that US soldiers have recently employed offshore are also showing up more frequently in the US itself. For example, last December, a Department of Justice investigation disclosed the widespread use of mistreatment and abuse during the interrogation of dozens of Muslim detainees at the Metropolitan Detention Center in Brooklyn, in the aftermath of September 2001.
More generally, there have also been a growing number of instances of gross brutality inflicted on prisoners in the US' own increasingly over-crowded prison system -- which currently houses more than 2.1 million inmates, the world's largest prison population. Some state systems -- like California's, with more than 160,000 inmates -- are on the verge of breakdown, with crowded conditions, "guard gangs" as well as prisoner gangs, budget shortages, and rampant violations of prisoner rights. Just this year, for example, an inmate on a dialysis machine at California's Corcoran State Prison bled to death while guards ignored his screaming, as they watched the Super Bowl. This is just one of many horror stories that occur on a daily basis in the US prison system. So it is perhaps no accident that at least two of the six US soldiers facing criminal charges in connection with the Abu Ghraib scandal in Iraq were prison guards back home, and one of them was employed at a Pennsylvania prison that is notorious for prisoner abuse.
In an episode that is in some ways even spookier, in February 2004, agents of the US Army’s Intelligence and Security Command (INSCOM) -- which also oversees the same military intelligence units that have been misbehaving in Iraq -- showed up uninvited and in civilian garb at a University of Texas Law School conference in Austin on “The Law of Islam,” and later demanded a list of all the attendees and questioned several students in an aggressive manner. The Army later apologized, and promised to institute new refresher courses on the proper limits of its domestic authority.
Indeed, the US Army might start by reviewing the fundamental federal law that has been on the books since 1878 – the "Posse Comitatus Act" (PCA) (18 USC 1385), which provides for fines and imprisonment for anyone who uses the US military for domestic law enforcement or surveillance, except in times of national emergency or under certain other limited exceptions, none of which applied here.
This growing militarization of US law enforcement -- complete with Predator drones, SWAT teams, US Marines shooting 18-year-old goat-herders in the back on the Mexican border, and now the very latest refinements in interrogation techniques from Guantanamo and Abu Ghraib -- may be an inevitable byproduct of our brand new global wars on terrorism, drugs, anti-imperialism, and Islamic radicals whose faces we don't happen to like. But those of us back home, supposedly the beneficiaries of all this "national security," had better wake up and pay attention to the impact that this emerging "state of siege" mentality is having on our rights -- and those of the Iraqi people that we are supposed to be "liberating."
NEXT: PART II: THE ROOTS OF BRUTALITY.
_________
May 6, 2004 at 01:12 AM | Permalink | Comments (2) | TrackBack
Monday, April 26, 2004
426."Letters from the New World" (South Africa) Denis Beckett #3: "Hair Cut."
HAIR CUT
Scared? No, not scared, not really. You can’t be scared of a barber. Can you? True, this (South African) Shavathon was invented by sadists. True, going in wasn’t easy, past streams of bald guys coming out, clutching their heads in shell-shocked daze.
But that’s not “scared”, surely, so much as rational. A lifetime of long hair gives a ou identity, right? It instills a persona, a self-image, which has never been short hair, let alone no hair. Now you’re about to make the brushcut guys look like hippies. For sure you aren’t overjoyed.
But not “scared”, please. “Lunatic”, maybe. When they start to cut, red-alert clangs. This is the only mirrorless barber seat you ever met. Your head gets cold, in a way you never knew. Clumps of you slide down your shoulders, clumps not of plain untidy excess, but of what has been part of you since you were in nappies....
Your friends who did the easy option, the one-day green dye, are hosing themselves, pointing at you and high-fiving and slapping their sides like at Barry Hilton on the cuzzin routine.
There comes an instant when you cannot believe this thing that you are doing. You feel your central processing unit contacting your leg muscles with the instruction to bolt. But before the message downloads, the volunteer barber is shaking out the towel. They’re speedy here, whipping off the entire woolsack in 10% of the time that a real barber with mirrors takes to do a trim.
Rub the head, feel strange, be relieved by the touch of a film of stubble. It’s short, but it’s hair. Then the sadists guide you to the blades.
The blades. So far has been only Army-short. Kojak is yet to come. The blades are gonna abolish every wisp, everything but eyebrows.
You can choose to duck this, but a mad instinct says go the whole hog. Partly, there’s testosterone and rank order. The Kojaks are main manne, army-cuts come second and green-dyes are but honorary members of the human race. The other part is duty. Companies pay money to the Cancer Association for every bald head. Plus the world record, 55 000 heads in a day, is up for challenge, and it’s held by… Australia.
And hey, anyway, it’s just this once.
So the shaver lady sprays the lather. This time, it does take time. A head is a bigger thing than you think. A head-shave covers the acreage of six or ten face-shaves, and is a bumpier ride.
The end is shock. A hand ascends to explore, and recoils in instant horror. This is no longer foreign hair on a familiar pate. This is horror-story, feeling not like a head at all, any head, let alone the personal private head you’ve known since youth. It’s a lumpy sticky thing, foreign to the touch, as if a mother dinosaur plonked a reject misshapen egg on top of your neck.
A ou gets a skrik, but not as much as when the lady says: “there you are then. It nearly always grows back, even at your age. Just scrub the skin or it can grow inward.”
Nearly always? Can grow inward? This night was poor in sleep; strong in images of traumatised dead hair refusing to re-start, of trapped stalactite strands clutching downward and strangling the brain.
Furthermore, resting a newly bald head on a pillow is like rolling a brick on a croquet lawn. Hair is a lubricant; one of its less known virtues. Shining and slithery as the naked noggin appears, it glues to the pillow. Each toss and each turn is a sticky, jerky jolt.
But nightmares end. The second day the dogs stopped barking. By the third I could enter my bathroom without startling at the ugly bald stranger. By the fourth I ceased to instinctively reach for the hairbrush after the morning shower. Now my family are saying “quite nice, really”, and I’m relishing seeing the world from a short-hair vantage-point.
Sixty thousand bristly heads are walking around town feeling interconnected and a tiny bit smug. We were arm-twisted into it, yes, but whatever the motives, our haircuts brought packets into cancer support and brought us a flash of solidarity with a deeply real cause.
We give each other friendly recognising nods to say “we shared that scared moment, which we won’t admit to.” And the Aussies have come second at something.
April 26, 2004 at 10:54 PM | Permalink | Comments (0) | TrackBack
Saturday, April 24, 2004
424."Letters from the New World" (South Africa) Denis Beckett #2: "Walk Tall."
WALK TALL
In 1973 no-one was as non-apartheid as we all now like to think we were. Hiring the Smuts farm at Irene for a black staff picnic was a mission. Blacks? Little old United Party people in Moth badges wrung hands for weeks. They talked about “the natives”, and “raising expectations”.
But people were moving forward, groping, in that way they do. Permission was finally granted, subject to a stack of promises and guarantees nine yards high.
And everything went fine. Until lunchtime in the meat queue, when Walk Tall got aggrieved. In his view he’d been given an undersize portion, by a little old UP lady.
As the name suggests, Walk Tall was a toughie, a towering strong guy with attitude. He was also afloat in ingested substances. He drew a knife as long as a thigh, and informed the lady that he would cut off her ears to make up his protein.
At this point, you can imagine, picnic day came a little unstuck. Monday morning Walk Tall was contrite as well as hung over. I was sorry to sack him; normally he was a dynamo, and full of personality. But he had to go.
Fast forward 15 years. I’m in Anderson street, back end of town. On foot. It’s winter, it’s dusk, the air is thick with smoke. I’m alone, very alone. I’m vulnerable.
Suddenly there’s a gang around me. Instantly, I know this is farewell to my possessions. Perhaps it’s farewell to blood and breath too. Then I see that one of them is Walk Tall. My heart clangs on the tar. Day of vengeance! I telepath goodbyes to my loved ones.
Walk Tall stares, holding his pals back. Then he roars out my name. To my astonishment he’s not roaring in fury, but in tones you’d use for a long-lost brother. He grabs me – I get a close-up of a wicked blade, sideways on – and smothers me with a huge hug.
I’m introduced to the pals. Knives vanish and all four walk me to the Carlton Hotel. “You can’t walk alone!” says Walk Tall. “There are bad people here!”
En route he says he hasn’t had a job since I fired him. Once in the Carlton’s light I risk the question: doesn’t he perhaps bear ever so slight a bit of, uh, anger?
Walk Tall cracks up like I’ve made a great joke. “What! Angry! At you! No, you had to fire me! And you shook my hand!” The gang returns to the night, waving.
Fast forward again, 12 years to 2000. Walk Tall re-appears. He has adventure stories that make Sinbad look stay-at-home. He also has a shining moral high point of his career, viz, having not killed me. His reasons have become complex enough to baffle Freud, wild flights into love and hate and black pride and conquering demons. But the outcome is clear. He has come to see Not Killing Beckett as a Nobel-deserving achievement, or at minimum worthy of eternal thanks.
I mention the mundane fact that six billion other people have also, to date, not killed me, but he is unfazed. He says 5 999 999 999 never had to do anything to not kill me; he alone stayed the knife. It is a bit different, I grant, but no way am I in lifelong debt because he aborted a crime he should never have started. He shakes his head, saddened that such callous ingratitude exists.
That subject remains an impasse, though we’ve tried several times to work it out. Once was on radio where he spoke grippingly about crime and white victims, the mugger’s eye view. People couldn’t stop listening. One guy missed a plane.
Having latched on to the high ground, Walk Tall has made it his life. You can practically see the halo. But this old world is full of rabbit punches. A year ago he got a job. A month ago he was told to take certain steps in re a collectable debt. In his old incarnation these steps were well in the day’s work. But this is the new moral Walk Tall. He says “I don’t do assault”, and he quits.
And who gets the rap? Yeah, right. Me. “You’re the one who told me to stop doing crime, and I listened to you so now I have nothing to eat.”
He hopes I’ll cough up in return for my unpunctured ribcage. I’ve told him to forget that, but another factor grows: all those parables about mercy and lost sheep and returned prodigals. Isn’t that how we’re supposed to live, giving a chance to a guy who reforms? Why do all the reformed sheep I see seem to be staring at slammed doors? Somewhere there’s an employer-type person who believes in reformed characters or in happy endings or in both, and who might communicate with (the name on the ID), at (Joburg address.)
April 24, 2004 at 09:50 PM | Permalink | Comments (0) | TrackBack
Thursday, April 22, 2004
422."Letters from the New World" (South Africa) Denis Beckett #1: "Return."
RETURN
When she left for Australia, Joy cited all the regular reasons, crime and decline and Africa’s uncertainties. She got a nice job in Sydney, and lots of peace and security, and nobody stole the paper from public toilets, never mind the seats.
But Joy’s six siblings in Jo’burg made much reason for many visits. From time to time she’d get a sense that in South Africa she felt the sun on her face in a warmer way.
She kept it quiet, of course. Men in white coats would come. Aus was for The Chosen.
One day Joy and her laaitie, Luke, were at the jungle-gym at the Zoo Lake. This is a very individual jungle gym. It didn’t come out of a box, with plastic fittings. It came out of a forest, big solid logs. It’s the gorilla of jungle gyms, a cousin of the army’s combat training courses, high on opportunity for kids to break arms and bash heads.
Luke was nervous. This was a fearsome thing, after the park at home in Sydney. That park was lawsuit proof, waxed and pasteurised and shrinkwrapped, certified safety precautions at every hinge.
But as he got into it Joy noticed a strange thing: he was having more fun. In fact, she was having more fun too.
She was enjoying this pre-waxed Africa-type park, and enjoying Luke enjoying it. Also, people greeted Luke, and greeted her, and talked to her, sommer, as in “a stranger’s a friend you do not know”. In the waxed world, thought Joy, that was stuff you heard in church. To walk up in a spirit of “hullo, friend I do not know” … you’d get a harassment charge.
On a train in Cape Town an old man befriended Luke, like a grandad, sharing sweets and games, as if Joy wasn’t there. And the conductor ad libbed. Each time he called the route he gave her a wink and a last line like “and enjoy the ride.” It struck Joy that if his Aussie counterpart broke the rules like that, there’d be disciplinary hearings.
A touring black school group and their teacher include Luke as an honorary member. Joy asks the teacher why. She’s implying: “he pays you no fees and no bonus, why should you bother?” She’s also implying, deeper down: “and what is more he’s not even your, um, race”. The teacher replies: “children are our future”. Full stop.
Joy returns to Aus. After a year there, she tallies how many strangers interacted with Luke. Answer = two. In South Africa, she reckons, it’d be hundreds, mainly of course blacks, the pastmasters, but some of the pale lot as well. That was a thing about SA’s new freedoms; the whites were picking up the good habits of Africa, by osmosis.
Joy mumbles about a return to SA. Everybody says What! You crazy? Not only the Aussies say that, so do Seffricans. She feels very alone. Is she crazy? Someone points her to www.homecomingrevolution.co.za, and she’s astounded. Lots of South Africans are going home. That fortifies her, but people say: “To indulge yourself you’re inflicting crime and decline and dead-end-for-whites on your innocent son.” She says: “No, that’s exactly wrong, it’s for my son, so he can grow up enriched by the human connectivity of Africa.” The chorus: What! You crazy?
Joy knows she’s right, in her bones, but damn, she’s scared. The chorus says: “at least put him in private school”. She can’t afford it. Everyone insists a SA government school will doom Luke to placards on street corners. She nearly chickens out. Then her Jo’burg teacher friend Dale phones to say “you’re hearing junk, you want to be here, block your ears and get here.” She did.
Last week Luke’s government school asked Joy if she could do transport, for an outing. She burst into tears. They were surprised. She explained. She’d love to help with transport. She wasn’t allowed, before. Only designated buses and certified drivers carry Aus school outings. No cowboy stuff like Mom’s Taxi.
“I realised”, says Joy, “that I like the cowboy stuff. I like a world with loose ends. I like a world that isn’t all comfort zone. And I like being required to give. It’s not ideal that so many are in need, but for me it’s better to have to give than to never give. You think bigger.”
You do, hey. You think frinstance that we may never be the world’s richest country or continent, or the calmest, or even the kings of the pitch. But heck, we can be the nicest.
April 22, 2004 at 09:55 PM | Permalink | Comments (0) | TrackBack
Sunday, April 11, 2004
412.The Coffee Connection: Globalization's Long Reach, From Vietnam To Nicaragua To Starbucks
INTRODUCTION – THE "G" WORD
I remember the first time I heard the “G” word - “globalization.” It was 1985, and I was interviewing a new McKinsey recruit, a former assistant Harvard Business School professor who had decided to exchange the classroom lectern for a larger bank balance. He was about as excited as business intellectuals ever get about the latest HBS paradigm. This was the notion that, in the wake of the 1980s debt crisis, countries would soon be forced to “globalize.” According to him, this meant that they would soon dramatically reduce all barriers to trade, investment, and labor migration, so that, over time, the world would become one great big happy marketplace.
I reacted with the economist’s usual disdain for business school paradigms. While “globalization” might be a new term, surely the basic concept was not new. For example, the world had also experienced a dramatic rise in trade and investment in the late 19th and early 20th centuries. Nor was it nirvana. As I recalled, this earlier period of free trade had been marked by numerous speculative bubbles, debt crises, and even some devastating famines in India, China, and Ireland. Then, during the 1930s, many countries had retreated from the winds of global competition behind tariff barriers, import controls, exchange controls, and fixed exchange rates. At the time these “beggar thy neighbor” policies were damaging to the world recovery. But after the world economy was revived by World War II, it did pretty well during the period from 1950 to 1973. Indeed, many economic historians now refer to that distinctly “un-global” period as the 20th century’s “Golden Era,” the only prolonged period in the 20th century when global growth and income equality both improved dramatically at the same time.
So I could not help but tweak the young professor's nose a bit about the fact that “globalization,” as he called it, was not new, and was evidently neither necessary nor sufficient for strong performance by the world economy.
TWENTY YEARS LATER…
It is now twenty years later, and many neoliberal pundits are still discussing “globalization” as if it were something strange and new – and as if it did not already have a very long and really quite problematic track record, including its very mixed record since the early 1990s.
What should by now be clear to any careful student of the subject is that in fact there really is no such thing as “globalization” per se. Its effects cannot be assessed or even measured apart from specific historical contexts. In other words, the liberalization of trade and investment is never implemented across all markets or trading partners at once. Its impact depends crucially on the precise sequence of deregulation, initial conditions, and on complex interactions with all the other market and regulatory imperfections that remain after specific barriers have been removed.
EXAMPLE - MEXICO Vs. CHINA
Just to take one specific example – in 1993, Mexico signed NAFTA, giving its export sector much more access to the US market. However, the gains reaped by Mexican exports have been somewhat disappointing, because just as NAFTA was being implemented, China was also dramatically expanding its exports to the US. This was partly just a reflection of China’s lower labor costs. However, for capital-intensive sectors, it also reflected China’s artificially lower cost of capital. Unlike Mexico, where the banking sector had been privatized, China’s banking sector remained entirely in state hands, and it provided billions of dollars in subsidized credit to the export companies that Mexico had to compete with. Having liberalized its capital market at the same time that it liberalized trade, Mexico had essentially given up one of its main weapons in its competitive battle with China.
The following case study of the global coffee market provides another example of “globalization’s” complex side effects. In the early 1990s, the World Bank and the IMF, which have been two of the most fanatical sponsors and promoters of “globalization” around the world, decided to encourage the Socialist Republic of Vietnam to boost its exports and growth rate by aggressively entering the world coffee market. Millions of poor coffee farmers around the world are still suffering from the effects of this grand strategy.
THE COFFEE BARONS
If one is looking for a good example of the unintended impacts of “globalization,” a good place to start is with the world’s second most globalized commodity -- coffee, which is consumed almost everywhere, produced in 70 countries by more than 25 million farmers, and second only to oil as a share of world trade.
Coffee has certainly had its ups and downs in the health literature, although the latest scientific evidence is apparently that, at least in moderation, it can do some good, at least if one is prone to gallstones, asthma attacks, cirrhosis of the liver, headaches, or heart trouble. But it has been undeniably beneficial to the shareholders of leading First-World companies like Nestle, Kraft, Sara Lee, P&G, and Germany’s Tchibo, the giant conglomerates that dominate the international business of roasting, processing, wholesaling, and, at least in Starbucks’ case, retailing gourmet coffee to millions of First World customers.
In the last decade, all these coffee conglomerates have prospered, and Starbucks, in particular, has struck a veritable gold mine. Founded in 1985, the company went public in 1992; since then its share price has risen at an astounding average rate of 28 percent a year, with four 2-for-1 stock splits along the way. In February 2004 its market value reached a 12-year high of almost $15 billion, and its revenues now exceed $4.4 billion, growing at 32 percent a year. Entering the vaunted ranks of truly global brands like IBM or Coca-Cola, Starbucks now has more than 74,000 “partners” (actually, employees) and more than 6,000 stores in over 30 countries, and it expects to add another 1,300 stores this year. Indeed, it is even opening stores in markets that one might have thought would be difficult to crack, like Paris, Saudi Arabia, Mexico, the Philippines, Indonesia, and Lima, Peru, where a cup of Starbucks java reportedly costs two-thirds of the minimum wage.
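(For the numerically inclined, here is a rough back-of-the-envelope check of that compounding claim, sketched in a few lines of Python. It uses only the 28 percent average annual growth rate, the roughly twelve years between the 1992 IPO and early 2004, and the four 2-for-1 splits cited above; the variable names and everything else are illustrative arithmetic, not additional Starbucks data.)

# Compound the quoted average annual share-price growth over the period
# from the 1992 IPO to early 2004. The inputs are the figures cited above.
annual_growth = 0.28   # cited average annual share-price growth
years = 12             # 1992 IPO through early 2004
cumulative_factor = (1 + annual_growth) ** years
print(f"Implied split-adjusted price multiple: {cumulative_factor:.1f}x")  # roughly 19x
# Four 2-for-1 splits multiply the share count by 2**4 = 16, so the nominal
# per-share price would look far tamer even as total market value climbed
# toward the ~$15 billion cited above.
print(f"Share-count multiple from four 2-for-1 splits: {2 ** 4}x")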
One might have hoped that the folks at the other end of the pipeline who actually grow all the coffee might have benefited a bit from all this downstream prosperity. But that would actually come as something of a surprise to the world’s 25 million coffee farmers, most of whom exist at the very bottom of the income distribution in developing countries like Vietnam, Brazil, Nicaragua, Kenya, and Ghana. Even as the processors, roasters, and retailers were cashing in, conditions for these farmers became more fiercely competitive than ever before. Indeed, from 1997 to 2001, composite world coffee prices fell by two-thirds, reaching their lowest levels in 30 years. Since then, average prices have recovered slightly, but they still remain at just half their (real) 1997 levels.
Since the demand for coffee beans is price-inelastic, the flood of new supply forced prices down far faster than consumption rose, so coffee export revenues and the incomes of coffee farmers all over the world simply collapsed. It was one of the most socially devastating commodity market crashes in modern history, with millions of poor coffee growers from Mexico’s Chiapas region, Guatemala, and Nicaragua to Kenya and Ghana to Indonesia and Vietnam all suffering the effects.
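(To see why inelastic demand makes a supply glut so punishing for growers, consider the minimal Python sketch below. The elasticity value, the size of the supply increase, and the revenue_change helper are purely illustrative assumptions, not figures or methods from this article; the point is only the direction and rough scale of the effect.)

# A minimal sketch of the arithmetic of inelastic demand. With a
# constant-elasticity demand curve Q = A * P**elasticity, a given jump in
# quantity supplied can only be absorbed by a much larger drop in price,
# so total grower revenue (P * Q) falls.
def revenue_change(supply_increase, elasticity):
    quantity_factor = 1 + supply_increase
    price_factor = quantity_factor ** (1 / elasticity)   # invert the demand curve
    revenue_factor = price_factor * quantity_factor      # revenue = price * quantity
    return price_factor - 1, revenue_factor - 1

# Illustrative assumptions: world supply rises 10%, and demand elasticity is
# about -0.3 (inelastic: buyers barely drink more coffee when it gets cheaper).
price_chg, revenue_chg = revenue_change(supply_increase=0.10, elasticity=-0.3)
print(f"Price change:   {price_chg:+.0%}")    # roughly -27%
print(f"Revenue change: {revenue_chg:+.0%}")  # roughly -20%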
"FAIR TRADE" - MORE OR LESS
As one of the younger, less diversified companies in the industry, with a retail brand to protect, Starbucks was perhaps more sensitive to the growing contrast between its own prosperity and the farmers’ desperate situation. In the late 1990s it responded with a new emphasis on “corporate responsibility.” This included support for “fair trade-certified” and “organic” farming, the implementation of sourcing guidelines that emphasized “sustainable” farming practices, paying premium “fair trade-like” prices above market averages, providing a certain amount of credit to coffee farmers and financial aid to poor farming communities, and other measures. In 2003, for example, Starbucks paid an average of $1.20 per pound for its Arabica beans, at a time when the open market price was less than half that much.
True, this amounted to just 4-5 cents per cup at most for the farmers, compared with a “venti” coffee-based drink that might go for $2.50 to $4.50, depending on what’s in it (the rough arithmetic behind that per-cup figure is sketched just after this passage). True, much of the $1.20 per pound did not get through to the farmers, but was digested by middlemen – even in 2003, at least half of Starbucks’ coffee was purchased through brokers and short-term contracts.
True, the 2.1 million pounds of “fair-trade certified” coffee that Starbucks purchased in 2003 amounted to less than 1% of its bean purchases. And true, the social programs and credit that Starbucks distributed to poor coffee farming communities in 2003 amounted to just $1 million and $2.5 million, respectively, scattered across nine countries – less than 1 percent of its operating income that year. But at least Starbucks deserves credit for making an effort, which the other giants in the industry have failed to do.
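(The quick arithmetic behind the “4-5 cents per cup” figure mentioned above: the $1.20-per-pound price is from the text, while the cups-per-pound brewing yields in this little sketch are illustrative assumptions, since that number varies with the drink.)

# Rough per-cup bean cost implied by the $1.20/lb purchase price cited above.
# The brewing yields are assumptions for illustration only.
price_per_pound = 1.20                      # reported average price paid to growers
for cups_per_pound in (25, 30, 40):         # assumed cups brewed per pound of beans
    cents = price_per_pound / cups_per_pound * 100
    print(f"{cups_per_pound} cups/lb -> about {cents:.1f} cents of beans per cup")
# At roughly 25-30 cups per pound, that is about 4-5 cents of green-bean cost
# in a drink retailing for $2.50-$4.50.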
However, if Starbucks had really wanted to assist poor coffee farmers around the world, it would not have wasted time with all the “fair trade” and “green farming” activity, as valuable as these symbolic gestures might be in the abstract.
As the following tale explains, Starbucks and the fair traders would have had far more social impact if they had simply persuaded the World Bank to keep its mitts off coffee production.
GLOBALIZING COFFEE PRODUCTION
In 2000-2002, an acute coffee market crisis hit poor countries like Nicaragua, Guatemala, and Kenya broadside. The tale of this fiasco is worth telling just because of its dire impact on such countries, which depend on coffee for 25 to 30 percent of their exports. But it is also a striking example of the unintended side-effects of globalization, and of neoliberal development banking at its worst. After all, as noted, coffee is grown by more than 25 million small farmers in more than 50 developing countries, including several of the world’s most heavily-indebted nations. Indeed, it is second only to crude oil as a developing country export. So if you wanted to pick one global commodity market not to screw up, this would be it. But that did not stop the World Bank, the IMF, and the Asian Development Bank from doing so.
As this story also demonstrates, even in the 21st century, countries like Nicaragua not only remain at the mercy of intransigent rightists, corrupt elites, and egomaniacal leftists. They are also at the mercy of massive screw-ups by half-baked neoliberal experiments located half-way around the globe – and by fellow “former socialist countries!”
In 1986, the Socialist Republic of Vietnam’s Communist Party leadership decided to switch from central planning to a liberalization policy called “doi moi” – “change and newness.” This was partly just because, like Cuba, Vietnam could no longer depend on the (crumbling) USSR for huge subsidies. It was also because senior economists at the IMF, UNDP, World Bank, and ADB were preaching the glories of free markets, and holding out the prospect of billions in aid.
The resulting program, designed with extensive assistance from the world’s leading development banks, was a controlled version of a standard orthodox adjustment program. It set out a 10-year plan – oops, “strategy” - for export-led growth, based on opening up Vietnam’s heretofore-closed economy to trade and investment, allowing state-owned banks freedom to lend to individual borrowers, decollectivizing the farm sector, and – in particular -- encouraging small farmers and state-owned companies to develop new cash crops for export.
At the same time, political power was to be kept firmly in the hands of the Communist Party’s Politburo. Despite that slightly-illiberal grace note, from 1993 on, this doi moi economic liberalization package was generously supported with plenty of advice and more than $2 billion a year of foreign loans and grants from the Asian Development Bank, the UNDP, Japan’s JBIC, the French Development Agency (AFD), the World Bank, the IMF, and the aid agencies of the US, Sweden, France, and several other Western governments.
Nicaragua’s 44,000 small coffee farmers, the 6 million small farmers in 49 other countries who collectively produced more than 80 percent of the world’s coffee beans, and the more than 100 million people whose jobs and livelihoods depended on coffee beans, had probably never heard of doi moi. But they became one of its first targets. Right from the start, evidently without much thought about collateral damage, Vietnam and its neoliberal wizards decided to embark on a brave new coffee export business.
While coffee had been grown in Vietnam ever since the 1850s, production and exports had been limited. The domestic market was small, and there were few facilities to process the raw beans. As of 1990, green bean exports were a mere 1.2 million 60-kilo bags per year. But Vietnam’s central highlands did have rich hilly soils, lots of rainfall, and low labor costs, which were ideal conditions for achieving high yields and low prices.
This was especially true for low-grade, easy-to-grow robusta beans. From a consumer’s standpoint, this species was inferior to the Arabica beans grown by Nicaragua and most other Central American producers, as well as big producers like Brazil and Colombia. Arabica had traditionally accounted for more than three-fourths of the world’s coffee production. But robusta had twice the caffeine content of Arabica at half the price, and it could also be used as a cheap filler and blending ingredient.
By the 1990s, bean quality was no longer an absolute barrier to entry in coffee farming. The global market was increasingly dominated by a handful of giant First World coffee processors, roasters, and grinders, including Nestle, Kraft, Sara Lee, P&G, and the German company Tchibo, as well as retail store owners like Starbucks, which generated their own blends. Increasingly, these companies sourced coffee beans from all over the planet, mixing and matching them to produce blends that not only satisfied customer tastes, but also minimized costs. These global buyers had been working overtime on new technologies that took the edge off the cheaper robusta beans and allowed them to be used for extra punch and fill. With the help of commodity exchanges, the giants had also defined standardized forward and futures contracts that allowed them to hedge against price fluctuations – making for a much more “perfect” global coffee market.
From the standpoint of small farmers, most of whom did not have easy access to such hedging devices, “market perfection” was in the eyes of the beholder. The changes introduced by the giant buyers amounted to a radical commoditization of the market that they depended upon for their livelihoods, and a sharp increase in direct competition. Accordingly, even as downstream market power became more and more concentrated in the hands of the First World giants, the farmers’ share of value-added plummeted. In 1984, for example, raw coffee beans accounted for more than 64 percent of value-added in the US retail coffee market. By 2000, this share had dropped to 18 percent. From 1990 to 2000, while global retail coffee revenues increased from $30 billion to $60 billion, the revenues earned by bean-growing countries dropped from $10 billion to $6 billion. By then, for every $3.50 café latte sold by Starbucks, the farmers earned just 3.5 cents.
The farmers’ shrinking role was due in part to the basic structure of the global coffee industry. On the supply side, as noted, by the 1990s, raw beans were being exported by more than fifty countries, which were competing head-to-head. But while a few growers like Brazil and Colombia had tried to break into foreign markets with their own processed brands, a handful of global First World buyers still dominated processing and marketing. Indeed, many of the world’s leading exporters of processed coffee, like Germany and Italy, grew no coffee at all.
This long-standing First World control over global coffee processing is partly due to technical factors. There are economies of scale in processing, but not in coffee farming. Unlike petroleum or natural gas, which can be warehoused for free in the ground, coffee beans are costly to store. Unlike wine, aged beans also have no incremental value. Furthermore, most small coffee farmers depend on coffee sales for their current incomes. Global coffee demand is actually not very price-sensitive, and it is only growing at a modest 1 percent per year. All this means that prices tend to fluctuate wildly with current production, so there is an incentive for processors to stay out of farming, shifting market risks to millions of poorly-diversified producers. The fact that raw beans can be stored for 1-2 years, while roasted or ground products have a much shorter shelf-life, also favors locating processing facilities close to the final consumer markets. And anyone who has been to France, Italy, or Brazil knows that tastes for particular kinds of coffee vary significantly across countries.
But the coffee industry’s international division of labor is not only based on such technical factors, many of which are actually declining in importance. It is also based on long-standing trading patterns and colonial relations – for example, the 17th century role of the Dutch in smuggling coffee plants out of Yemen to their colony in Java, which fostered Indonesia’s entire coffee industry; the role of French, British, Portuguese, and Japanese trading companies in Africa, Jamaica, Guyana, Brazil, and Asia; and the role of American companies in Colombia, Central America, and Southeast Asia. The First World’s dominance has been reinforced by trade barriers that favor the importation of raw beans over processed coffee.
The net result of all this is as if France, Italy and California were compelled to export all their grapes to Managua, Nairobi, and Jakarta, in order to have them processed into wine.
Along these lines, given the importance of small coffee farmers to debtor countries, and the World Bank’s supposed commitment to “poverty alleviation,” it may seem surprising that the World Bank, the IMF, and other development lenders devoted zero energy in the 1990s to designing a monopsony-breaking strategy for coffee-growing countries, to help them break down this division of labor and its supporting trade barriers.
Instead, the development bankers did just the opposite, helping Vietnam implement an anti-producer-cartel strategy that ultimately helped to drive the coffee-producing countries’ association, a rather pale imitation of OPEC, completely out of business in 2001. Could it be that these First World development banks were not influenced by the fact that the world’s leading coffee conglomerates also happen to be based in countries like the US, Japan, France, Switzerland, and Germany, not far from the development banks’ headquarters?
COFFEE CONTRAS
Vietnam’s decision to push coffee bean exports as a cash generator in the 1990s was not just based on rational economics. Like most critical decisions in economic development, it also had a crucial political motive. Vietnam’s best region for growing coffee turns out to be the Central Highlands, along the border with Cambodia and Laos. This region is inhabited by about 4 million people, including 500,000 to 1 million members of non-Buddhist ethnic minorities who are known collectively as the Montagnard/Dega hill tribes. These fiercely independent peoples have battled the Communist Party, and, in fact, most other central authorities, for as long as anyone can remember. In the 1960s, 18,000 of them joined the CIA’s Village Defense Units and fought hard against the NLF. They had many run-ins with South Vietnam’s various dictators. After the war ended in 1975, some Montagnard tribes continued armed resistance at least until the late 1980s.
To shore up control over this volatile region, in the early 1980s Vietnam’s government embarked on its own version of ethnic cleansing – or at least dilution. It actively encouraged millions of ethnic Kinh – Vietnam’s largest ethnic group – plus some other non-Montagnard minorities, to migrate from the more crowded lowlands to the Central Highlands. At first, these migrations were organized directly by the government. But by the 1990s, they were being driven by a combination of market forces and government subsidies. On the one hand, the migrants sought to escape the poverty and resource exhaustion of the lowlands. On the other, they were attracted by the prospect of obtaining cheap land and credit to grow coffee, the exciting new cash crop, which became known to the peasants as “the dollar tree.”
The result was an influx of up to 3 million people to the Central Highlands provinces in less than two decades. In 1990-94 alone, some 300,000 new migrants arrived in the provinces of Dak Lak, Lam Dong, Gia Lai, and Kontum, looking for land. By 2000, these four provinces alone accounted for 85 percent of Vietnam’s coffee production. This reduced the Montagnard tribes to the status of a minority group in their own homelands. They watched in anguish as their ancestral lands were reassigned to outsiders, including state-owned companies, controlled by influential Party members in Hanoi who had close ties with leading Japanese, American, and Singaporean coffee trading companies. Many Montagnards were forced to resettle on smaller plots, without compensation. Over time, as the local economy became more vulnerable to fluctuations in world coffee prices, this contributed to explosive social conflicts.
From the standpoint of Nicaragua’s campesinos, the key impact of all this was on world coffee prices. In Vietnam, the migrants and Montagnards alike turned to coffee for support on increasingly-crowded plots. At the time, in the early 1990s, coffee still offered greater revenue per unit of land, compared with other cash crops like rice or peppers, and it was also being actively promoted as a cash crop by state banks, trading companies, and the government.
It took three to four years for a new coffee bush to mature, so the real surge in exports did not occur until 1996-2000. Then, in just a four-year period, Vietnamese exports flooded the market. From 1990 to 2002, they increased more than ten-fold, from 1.2 million 60-kilo bags to more than 13.5 million bags. By 2000, Vietnam had become the world’s second-largest coffee producer, behind only Brazil and ahead of Colombia. In the crucial market segment of cut-rate green robusta beans, the blenders’ choice, Vietnam had become the world leader. While other producers like Brazil also increased their robusta exports during this period, Vietnam alone accounted for more than half of all the increased exports. This helped to boost robusta’s share of all coffee exports to 40 percent.
In pursuing this strategy, Vietnam did not bother to join coffee’s OPEC, the Association of Coffee Producing Countries. Indeed, it acted rather like a scab, providing an incremental 800,000 metric tons of low-priced coffee by 2000, roughly equal to the world market’s overall surplus. The giant coffee buyers were quite happy to buy up all this low-priced coffee and swap it into blended products like “Maxwell House” and “Tasters’ Choice,” using it to discipline other leading supplier-countries. At the same time, foreign debt-ridden countries like Indonesia, Brazil, Uganda, Peru and Guatemala also boosted their coffee sales, in order to generate more export earnings. In September 2001, partly because of this beggar-thy-neighbor strategy, the ACPC completely collapsed and was disbanded.
The resulting export glut caused world coffee prices to tumble to a 33-year low by 2002. According to the World Bank’s own estimates, this caused the loss of at least 600,000 jobs in Central America alone, and left more than 700,000 people in the region near starvation.
Worldwide, the effects of the coffee glut were even more catastrophic, because the world’s fifty-odd coffee producing countries included many of the world’s poorest, most debt-ridden nations. Ironically, just as they were supporting Vietnam’s rapid expansion into exports like coffee, in 1996 the World Bank and the IMF had launched a new program to provide debt relief to the world’s most “heavily-indebted poor countries” -- the so-called HIPC program. By 2001, indeed, the HIPC program had made some progress in debt reduction, cutting the “present value” of the foreign debts for those countries that completed the program by a median of thirty percent. However, of the 28 heavily-indebted poor countries that had signed up for the World Bank’s HIPC program by 2003, no less than 18 of them were coffee growing countries – including not only Nicaragua, but also desperately poor places like Bolivia, Honduras, Uganda, the Congo, Cameroon, Rwanda, the Ivory Coast, and Tanzania.
Indeed, for the larger coffee exporters in this group, even when they managed to wend their way through HIPC’s complex program and qualify for debt relief, they found that most of its benefits had been offset by the coffee crisis! For example, Uganda, the very first country to qualify for HIPC relief, discovered that by 2001, just one year after qualifying for HIPC, its foreign debt was higher than ever -- mainly because it had to borrow abroad to offset the impact of the coffee crisis on exports!
Furthermore, many other “not-quite-so-heavily indebted” developing countries that produced coffee, like India, Indonesia, Peru, Guatemala, Kenya, Mexico, and El Salvador, were also hurt badly. Overall, if one had set out to create destitution and suffering in as many of the world’s developing countries as possible at one fell swoop, one could hardly have devised a better strategy than to encourage Vietnam to thoughtlessly expand its commodity exports in general, and coffee in particular – free markets be blessed, all other developing countries be damned.
In Nicaragua’s case, the average wholesale price for its Arabica beans fell from $1.44 a pound in 1999 to $0.51 a pound in 2001, and to less than $0.40 a year later, compared with typical production costs of $0.83 a pound.
Among the hardest hit were Nicaragua’s 44,000 small producers, who accounted for two-thirds of Nicaragua’s production and provided jobs that supported another 400,000 Nicaraguans, most of them landless campesinos in the rural northwest around Matagalpa, north of Managua. They depended upon Nicaragua’s annual coffee harvests for most of their employment and income. The resulting crisis in the countryside set off a migration to Managua and other cities, with thousands of hungry, landless people crowding into makeshift shacks on the edge of town.
Obviously all these developments raised many questions about the role of the World Bank and Vietnam’s other international lenders and advisors. After all, Vietnam was just a very poor state-socialist country that was undertaking all these free-market reforms for the first time – after fighting and winning a thirty-year war of its own with the US. The World Bank, IMF, and the ADB, on the other hand, were supposed to be the experts – they had implemented such reforms all over the world, backed by billions in loans and boatloads of Ivy-League economists. And Vietnam was intended to be one of their poster stories for de-socialization, and for the claim that growth, free markets, and “poverty alleviation” could go hand-in-hand.
In April 2002, sensitive to NGO charges that the World Bank and the other development lenders might actually bear some responsibility for this fiasco, the World Bank went out of its way to issue a press release denying any responsibility for the crisis whatsoever. Or more precisely, it denied having directly provided any financing to expand coffee production in Vietnam. It also maintained that its $1.1 billion of lending to Vietnam since 1996 had tried – though evidently without much success – to diversify farmers away from cyclical crops like coffee. It also argued that, after all, its lending to Vietnam’s rural sector had only started up after 1996, while coffee production had increased since 1994, and that none of its investments had been “designed to promote coffee production” (emphasis added). It did identify two World Bank projects that “could be linked” to coffee production – a 1996 Rural Finance Project that helped Vietnamese banks lend money to farmers, and an Agricultural Diversification Project. As for these projects, the Bank simply observed that it didn’t dictate how Vietnamese banks re-loaned the funds that it had loaned to them.
Overall, then, the World Bank basically washed its hands of the coffee crisis -- one of the worst disasters to strike small farmers, their dependents, and debtor countries in modern times. The World Bank did assure the public that it was extremely concerned about the plight of these farmers, and promised to address their woes.
On closer inspection, this defense had more than a few holes. First, whether or not the Bank financed any new coffee farms, clearly the World Bank and its cousins at the IMF, the UNDP, and the ADB were up to their elbows in designing, managing, and financing Vietnam’s economic liberalization program. They played a key role in pushing Vietnam to liberalize trade, exchange rates, and banking quickly. To set targets for Vietnam’s macroeconomic plans, they had to have known which export markets the government planned to go after. After all, coffee was not just another export. After the removal of Vietnam’s quotas on coffee and other exports in 1990, partly at the request of the IMF, coffee quickly became the country’s number-two export, second only to oil. It continued to be one of the top ten exports even after prices cratered. The ADB and the World Bank also worked closely with Vietnam’s Rural Development Bank, the country’s largest rural lender, to improve management and structure new lending programs. They also advised Vietnam on how to set up a Land Registry, so that rival land claims could be settled and farmers – at least the non-Montagnard claimants, who found it easier to get titles – could borrow to finance their new crops more easily.
At the same time, far from encouraging Vietnam to work with other coffee producers to stabilize the market, or design an overall long-term strategy to break up the buy-side power in the market, the development banks bitterly opposed any such interference with “free markets” – no matter how concentrated the buyers were, or how many artificial restrictions had been placed by First World countries on the importation of processed coffee. As one senior World Bank economist remarked in 2001, at the very depths of the coffee glut:
Vietnam has become a successful (coffee) producer. In general, we consider it to be a huge success...It is a continuous process. It occurs in all countries - the more efficient, lower cost producers expand their production, and the higher cost, less efficient producers decide that it is no longer what they want to do.
So, despite its 2002 press release, the World Bank’s true attitude about this whole fiasco appears to have been a combination of “not my problem,” sauve qui peut (“every man for himself”), and Social Darwinism.
Meanwhile, back in Vietnam, the small farmers in the Central Highlands learned the hard way about the glories of global capitalism – thousands of them had decided that it was “no longer what they wanted to do,” but were finding few easy ways out. After the 1999-2002 plunge in coffee prices, Vietnam’s export earnings from coffee fell by 75 percent from their level in 1998-99, to just $260 million in 2001-02. In 2002-03, they fell another 30 percent. In the Central Highlands, thousands of the small farmers – lowlanders and Montagnards alike – had gone deeply into debt to finance their growth, and were struggling to feed their families and send their children to school, because market prices now covered just 60 percent of their production costs.
In short, ten thousand miles from Managua, on the opposite side of the globe, these highland farmers were facing the same bitter truths that Nicaraguan campesinos were facing -- that they had more in common with each other than with the stone-hearted elites who governed their respective societies, and designed futures that did not necessarily include them.
In Vietnam, the resulting economic crisis severely aggravated social and political conflicts in the Central Highlands. In February 2001, several thousand Montagnards held mass demonstrations in Dak Lak, demanding the return of their ancestral lands, an end to evictions for indebtedness, a homeland of their own, and religious freedom (since many Degas are evangelical Christians). Vietnam responded with a harsh crackdown, sending thousands of elite military troops and riot police to break up their protests. They arrested several hundred of them, and then used torture to elicit confessions and statements of remorse. They also destroyed several local churches where the protestors had been meeting. Those protest leaders who did not manage to escape to Cambodia were given prison sentences of up to 12 years.
From one angle, this repressive response was the typical handiwork of a Communist dictatorship. From another angle, however, it was just another example of the repressive tactics that the implementation of neoliberal free-market “reforms” has also required of non-Communist regimes in countries like Venezuela, Ecuador, Bolivia, Egypt, Indonesia, the Philippines, Argentina, and post-FSLN Nicaragua.
In Vietnam’s case, the Politburo discovered that, far from helping to solve its political problems in the Central Highlands, its neoliberal reforms had inadvertently helped to revive the Dega separatist movement. Evidently, economic and political liberty did not always go hand in hand.
At least the Politburo and their foreign advisors did have something to show for the coffee strategy, however. In 2000-2002, the profit margins earned by the five giant companies that dominated the global coffee market were higher than ever. Furthermore, cocaine producers in the Andean region no longer had to worry about small farmers substituting coffee for coca. In Colombia’s traditional coffee-growing regions, just the opposite started to happen in the late 1990s, as many farmers converted coffee fields to coca, in the wake of the coffee glut.
Indeed, from 1995 to 2001, coca cultivation more than tripled in Colombia, including a 20 percent increase in 2000-01 alone. This occurred despite hundreds of millions of dollars spent by the USG on coca eradication efforts, the so-called “centerpiece” of its “Plan Colombia.” In 2000-01, coca production started to increase again in Peru, Bolivia, Ecuador and Venezuela. There were also reports that farmers were even turning away from coffee and towards coca in areas that had never before seen coca, like the slopes of Mount Kilimanjaro. Cocaine production from the Andean countries also rose sharply from 1998 to 2002.
After all, unlike coffee, at least coca and cocaine were products for which both the farming and the processing could be done at home.
EXAMPLE - THE IMPACT ON NICARAGUA
Overall, by 2003, Nicaragua’s real per capita income had fallen to $400 (in constant 1995 dollars), roughly its 1951 level. With population growth averaging 2.4 percent a year in this overwhelmingly Catholic country, the economy would have to grow at 5 percent a year for 30 years just to recover the 1977 per capita income level – compared with the actual average growth rate of 1.3 percent during the 1990s. By now, the country’s entire national income is just $11.2 billion, less than three times Starbucks’ annual revenues.
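(A back-of-the-envelope check of that growth arithmetic, sketched in Python using only the figures just cited; the 1977 income level it implies is a derived illustration, not a published statistic.)

# Compound-growth check of the recovery claim above, using the article's figures.
gdp_growth = 0.05        # aggregate growth rate said to be needed
pop_growth = 0.024       # annual population growth
years = 30
per_capita_2003 = 400    # real per capita income, in 1995 dollars

# Per capita income grows at roughly the aggregate rate net of population growth.
per_capita_growth = (1 + gdp_growth) / (1 + pop_growth) - 1   # about 2.5% a year
implied_1977_level = per_capita_2003 * (1 + per_capita_growth) ** years

print(f"Per capita growth at 5% aggregate growth: {per_capita_growth:.1%} a year")
print(f"Implied 1977 per capita income: ${implied_1977_level:,.0f} (1995 dollars)")

# The actual 1990s growth rate of 1.3% was below population growth,
# so per capita income was shrinking rather than recovering.
actual_trend = (1 + 0.013) / (1 + pop_growth) - 1
print(f"Actual 1990s per capita trend: {actual_trend:.1%} a year")

Under those assumptions, per capita income rises by about 2.5 percent a year, which over 30 years slightly more than doubles it – implying a 1977 level on the order of $850 in 1995 dollars, if the figures above are taken at face value. At the actual 1990s growth rate of 1.3 percent, per capita income was falling by roughly 1 percent a year instead.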
By 2003, underemployment ran at 60 to 70 percent in many parts of the country, and the overall proportion of people living in poverty was 67 percent, second only to Honduras in Latin America. This means that there were some 1.6 million more Nicaraguans living on the borderline of existence than in 1990, at the end of the contra war.
Earlier, in the 1980s, the Sandinistas had been justifiably proud of their health, education, and literacy programs. Even in the depths of the contra war, rates of infant and maternal mortality, malnutrition, and illiteracy had declined. Infant mortality fell sharply from its 1979 level of 120 per 1000 live births, immunization coverage rose, and the share of the population with access to health care increased from 43 percent to 80 percent.
In the 1990s, however, there were sharp increases in all these maladies, aided by a 75 percent cut in public health and education spending by 1994. By 2000, Nicaragua was spending four times as much on debt service as on education, and a third more than on public health. The infant mortality rate was still 37 per 1000, and the under-age-five mortality rate was 45 per thousand, among the highest in Latin America. (Cuba’s equivalent rates, for comparison, were 7 and 9 per thousand.) As of 2000, 12 percent of Nicaraguan children were underweight, and 25 percent were under height. More than 22 percent of children under the age of 9 – 300,000 children – were malnourished. By 2000, 37 percent of school-age children were not enrolled in classes, and illiteracy, which an intense campaign by the Sandinistas in 1980-81 had reduced to 15 percent, had climbed back up to 34 percent, and was even higher in rural areas. Women’s rights also suffered, as the Church conspired with the new conservative governments to drive abortion underground, even at the cost of higher maternal mortality rates caused by botched illegal abortions.
Coincidentally, Nicaragua’s $400 per capita income was almost exactly the same as that of the Socialist Republic of Vietnam, its new direct competitor on the other side of the planet. Indeed, in one of history’s many ironies, these two formerly “leftist” countries were now passing each other on the globalization escalator, heading in opposite directions. By 1998, if we believe the statistics published by the UNDP, Vietnam’s poverty rate had dropped to 37 percent, below Nicaragua’s, while adult literacy had reached 94 percent, above Nicaragua’s (declining) rate of 63 percent. Vietnam’s average life expectancy had also matched Nicaragua’s 68.3 years. And far from having a chronic foreign debt crisis, which Nicaragua has had since 1979, Vietnam became one of the development banks’ darlings, as we saw earlier, drawing down $2 billion a year in concessional finance throughout the decade, plus more than $30 billion in foreign investment. Yet Vietnam’s ratio of debt to national income was just 35 percent – not exactly low, but only one-tenth of Nicaragua’s.
Furthermore, with all the outside help, on top of its entry into the coffee export market, Vietnam’s growth rate averaged more than 9 percent a year in the 1990s, even as Nicaragua’s growth stagnated. In 2001, when Vietnam’s Ninth Communist Party Congress adopted its new “Ten Year Strategy” for the period 2001-10, the World Bank and the IMF were both on hand in Hanoi to celebrate with yet another generous structural adjustment loan program – carefully shielded, of course, from any angry Montagnards who might wish to complain.
All told, out of 173 nations ranked by the UNDP according to their “human development” metrics, Nicaragua had dropped from 68th place in 1980 to 118th by 2000. On the way down it passed Vietnam, which was in 101st place and rising. The responsibility for Nicaragua’s decline appears to have been almost evenly divided between the contra war of the 1980s and the neoliberal war of the 1990s. Relative to more prosperous (haven) neighbors like Panama and Costa Rica, as well as to the pro-US, military-dominated abattoirs to the north, Guatemala and El Salvador, Nicaragua’s decline has been even more striking.
So evidently it wasn’t enough to pull off a revolution and defeat a US-backed puppet army, as both Vietnam and Nicaragua had succeeded in doing. Daniel Ortega and his comrades must have occasionally wondered a little wistfully, “If only we had managed to install a full-fledged, centrally-planned Communist dictatorship, as we were accused of trying to do! Maybe the world would have been as generous to us as it has been to the Socialist Republic of Vietnam!”
April 11, 2004 at 08:46 PM | Permalink | Comments (0) | TrackBack