NOTE: The views expressed here belong to the individual contributors and not to Princeton University or the Woodrow Wilson School of Public and International Affairs.

Friday, April 29, 2011

Priming for the primaries: Eliminate the caucus

Theresa Chalhoub, MPA

It’s once again time for presidential primary season, and while the selection process this cycle will be decidedly less competitive than the 2008 race (at least on the Democratic side), voters across the country will nonetheless go to the polls and make their decisions. As they do, I’d ask that our political leaders assess the systems that choose our nominees, paying particular attention to the shortcomings of the caucus process.

As we saw in 2008, primary season can be long and somewhat confusing. Adding to the confusion is the fact that some states hold primaries to nominate their candidates, while others hold caucuses. Caucuses have long been a part of our electoral process, but the system has become riddled with problems. Caucuses hurt voters’ rights and the electoral process in four key ways:

1) Increased social pressure on voters to vote a certain way

Because of the communal nature of caucuses, where citizens vote in front of each other, there is often pressure on certain voters to vote the “right” way. Members of minority groups may feel compelled to vote along with the majority, and some women may feel bound to vote with their husbands. This not only affects voters, but less popular candidates as well, as voters who might vote for them in the privacy of a booth are reluctant to do so in front of their community.

2) Decreased voter participation rates

Caucuses also decrease voter participation by discriminating against both in-state voters who cannot attend or remain at a caucus and out-of-state voters who are denied the right to vote absentee. Because a caucus is held on a particular day and can last several hours, many voters – including those who work long hours, have small children, or are sick or unable to easily leave their homes – are left out of the voting process. Similarly, many voters who are not in the state, such as those in the military or away at school, are often not allowed to vote by mail.

Illustrating the differences in voter turnout between caucuses and primaries are the voter participation rates in the 2008 presidential primaries in Iowa and New Hampshire. In Iowa, which holds a caucus, 354,355 ballots were cast, out of a voting-eligible population of 2,196,724, representing roughly 16.1% of eligible voters. In contrast, New Hampshire, which holds a primary, saw 529,711 ballots cast out of a voting-eligible population of 988,708 – a 53.6% voter turnout rate.[1]

3) Decreased efforts to attract new voters

Caucuses also decrease state efforts to attract new voters. Many caucus-goers are older voters who are more politically active and whose concerns differ from those of the general electorate. Candidates’ campaigns often target these reliable voters and have little incentive to engage new voters, who are often intimidated by the caucus process and disinclined to participate in a system that requires hours of their time.

4) Unfair advantages for specific candidates

Lastly, as shown by the 2008 presidential primaries, the system a state uses can often be more telling of who will win than its population’s political make-up. In 2008, Barack Obama won 13 of 14 caucus states, and Hillary Clinton won 20 of 37 primary states.[2] The difference in the candidates’ ability to win under different systems was particularly apparent in Texas, which held both a primary and a caucus on March 4, 2008. In the primary, held during the day, Clinton won with 51% of the vote. At the caucus, held that evening, Obama was victorious, winning the support of 56% of caucus-goers.[3]


While caucuses are a traditional part of our electoral process, this system has become detrimental to states’ efforts to achieve fair voting practices. As we move into a new presidential election season, state and party leaders should assess the disadvantages of caucuses and create ways for all voters to participate equally in our democracy.

[1] Michael McDonald, "2008 Presidential Nomination Contest Turnout Rates," United States Elections Project (George Mason University), October, 2008.
[2] This includes Texas as both a primary and caucus state. CNN Election Center, August 2008.
[3] Ibid.

Islamophobia and the etymological roots of the King hearings, part II: The emerging Western semantics of Islam and Muslims

Editor’s Note: This is the second of a three-part series on Islamophobia in America. Part I discusses the premises and implications of the King hearings. Part III examines Islamophobic language trends in major English and Arabic media outlets and their implications for public policy.

Nazir Harb, MPA

Ludwig Wittgenstein wrote, “The limits of my language mean the limits of my world.” I recently gave a talk on the semantics of talking about Islam and Muslims in America as part of a new series, Islamic Literacy at the Woodrow Wilson School, which took place throughout April. During the talk we briefly examined the etymology of Islamophobic language—that is, the words, phrases, and orthographies in our language and its history that convey anti-Muslim or Islamophobic sentiment. While some of these terms have fallen out of popular usage, they increasingly appear on anti-Muslim and Islamophobic websites and in books and articles written by Islamophobes. As future policymakers, we should be able, if only for analytical purposes, to identify these thinly veiled lexical forms of bigotry.

Descriptor or epithet?
The word “Moor” is a derogatory term for a Muslim that arguably dates to 8th-century Muslim Spain—it is related to the Greek adjective for “black” and originally referred to someone from Mauritania. It took on a racist connotation when it became a slur for all Muslims. Another egregious term is “Saracen,” a Crusader-era word for Muslims that harks back to the 11th century. Some may also come across the word “Mohammedan,” or its derivative, “Mohammedanism”—an archaic construction for a Muslim, once common in Western and English literature, taken from the 14th-century Latin word “Macamethe.” The use of such words today is not only inaccurate, since they are no longer conventional, but also derisive: it disregards the words and names that Muslims have chosen for themselves and for their religion. They should therefore be avoided by anyone trying to maintain a measure of inter-communal tolerance, and by academics attempting to use neutral terminology.

Moslem or Muslim?
While not always necessarily Islamophobic, the way a word is spelled can also raise a red flag; like other groups, Muslims have reached a consensus on the preferred spellings of words that pertain to their identity. As such, “Moslem” is now obsolete. The preferred spelling is “Muslim,” as it better approximates the correct pronunciation of the word. “Muslim” is a proper noun, so one can refer to “a Muslim” or to “Muslims” in the plural. “Muslim” is also the corresponding adjective, so while it is appropriate to refer to a “Muslim person,” “Muslim man,” “Muslim woman,” or the “Muslim people,” it is incorrect to write “Islamic person.” (Needless to say, “Islamic” is exclusively an adjective, so “an Islamic” is also not a correct way to refer to a Muslim.) Furthermore, a country cannot be “Islamic”—it may contain a Muslim-majority population, in which case it is a “Muslim-majority country” or “Muslim-majority society,” but it is not an “Islamic country,” nor is there an “Islamic world.” “Islam” refers to the religion, and “Islamic” is an adjective reserved for cases in which something has a distinctly Islamic property, as opposed to “Muslim,” which refers to something or someone pertaining to a Muslim person or persons, i.e., Muslim identity.

Thus, an “Islamic government” is one that is, or asserts itself to be, in accordance with sharia (Islamic law), not merely one that presides over a preponderance of Muslim adherents or is run by officials who are Muslim. (Note, though, that the label is usually a misnomer, or at least a subjective claim. Many Muslims take issue with governments in the Middle East, North Africa, and Southeast Asia that claim to be “Islamic” or to govern in accordance with sharia, because such usage disingenuously implies that there is one clearly defined way to govern Islamically, or that sharia exists in a book or written decree that can readily be consulted, easily interpreted, and straightforwardly translated into policy.)

Similarly, while the awkward phrase “Islamic terrorism” has become a buzzword in popular discourse, I would argue that it is, strictly speaking, grammatically incorrect. It would be better (grammatically) to say “Muslim terrorist,” as neither the act of terrorism nor the actor carrying it out is “Islamic.” One way to employ the phrase without legitimating it is to place it in quotation marks.

Mohammed or Muhammad?
On a related note, “Muhammad” is the standard spelling of the name when it refers to the last prophet of Islam; “Mohammed” is a common spelling for other people who have that name. The spelling is the same in Arabic in either case, but in Arabic the Prophet’s name is customarily followed by an honorific. In English, the honorific usually appears in parentheses immediately after every use of the Prophet’s name. The one generally used is “(pbuh),” an abbreviation of “peace and blessings be upon him.”

Koran or Quran?
While some continue to argue that the spelling “Koran” is “more Germanic” and better follows standard English spelling conventions, it is now largely considered obsolete, and most Muslim scholars and scholars of Islam use the spelling “Quran.” This is because Arabic has two different letters, “kaf” and “qaf,” each pronounced differently: the first sounds similar to the English “K,” while the second is more guttural and is conventionally transliterated with a “Q” (with no “u” required to follow, as in “Iraq”). The holy book of Islam is spelled with the latter. Thus, while the English pronunciation is “kəˈrän” (kor-AAN), the word should be spelled “Quran” (alt. Qur’an, Qur’ān).

Islam or ‘Islām?
Not to worry – “Islam” is the correct and conventional spelling. (‘Islām is the official academic transliteration.) It is grammatically correct to refer to “the religion of Islam” or, similarly, “the religion of Muslims,” or “the Islamic faith.” However, “Muslim religion” is inaccurate and ungrammatical because “Muslim” is either a noun or an adjective that describes someone who practices Islam—the religion of Islam is not “Muslim.”

Moderate or Mainstream
In response to the uptick in phrasings that conflate Muslims with extremism, fanaticism, radicalization, fundamentalism, “jihadism,” and terrorism, many – including some Muslims – have started adding the word “moderate” as a neutralizing adjective to indicate a Muslim who is not radicalized. However, scholars such as Saba Mahmood and Talal Asad argue that this only perpetuates the conflation of “Muslim” and “terrorist”: appending “moderate” to “Muslim” implies that “Muslim” itself is inherently pejorative—that without “moderate,” a Muslim is presumed to be a violent extremist or a terrorist. If this newfound convention prevails, Western discourse will have accepted al-Qaeda’s worldview, wherein “(true) Muslims” sympathize with or engage in violent global jihad. For al-Qaeda, Muslims who do not sympathize with it are “nominal Muslims”—a term that refers to the same people as the phrase “moderate Muslims.”

“Al-Qaedaist,” Not Islamic or Islamist
There are 1.5 billion Muslims across the globe; al-Qaeda’s total membership is fewer than 30,000 people. Academics and specialists agree that for the relatively few Muslims who join al-Qaeda or affiliated networks, Islam is not the primary motivating factor, let alone the only one. Al-Qaeda draws from Islamic and non-Islamic sources alike—it quotes Samuel Huntington’s Clash of Civilizations alongside Islamic sources (usually quoting him in context while distorting the Islamic sources). Yet al-Qaeda’s actions are not characterized as “Huntingtonian terrorism,” and it is just as wrong to let al-Qaeda appropriate and claim, unchallenged, Islam, Islamism, and Islamic terminology. It is best, then, to describe al-Qaeda’s operations and ideological tenets as “al-Qaedaist” (granted, a somewhat circular term), and to use the same label for the actions and conceptions of those who are affiliated with al-Qaeda or sympathize with its views and methods.

Al-Qaeda’s “Jihad”
It is critical to consider that a phrase like “commit jihad” implies, at least syntactically, that jihad is in itself an act (or even a crime) that is committed. That connotation is not correct according to mainstream Islam and is disingenuous to Muslims, for whom the term retains a certain spiritual, wholly non-violent resonance. In avoiding al-Qaeda’s vocabulary, it is important to resist its militant misappropriation of spiritual and theological terms like “jihad.”

When talking about American Muslims, it is important to consider that most estimates put the number of Muslims in the United States at between 6 and 7 million. A plurality are of South Asian origin, and thirty-eight percent are African Americans born and raised in the US. Only a minority are Arab or of Middle Eastern origin. Additionally, though the Arabian Peninsula is the birthplace of Islam, only 12% of all Muslims are Arab. The five countries with the largest Muslim populations are Indonesia, Pakistan, Bangladesh, India, and China—none of them an Arab country. Moreover, not all Arabs are Muslim; in fact, most Arab-Americans in the US are Christian.

Next week I will discuss the implications of this terminology (and its use in the media) for public policy.

Get involved! Please sign our petition to stop the targeting of American Muslims.

Rot in my backyard: The importance of engaging with communities hosting nuclear power plants

Sophia Peters, MPA

More than a month after the devastating earthquake and near meltdown at the Fukushima Daiichi nuclear power plant, the Japanese government on April 22nd finally imposed a mandatory evacuation zone extending 12 miles around the plant. Those who lived near the damaged plant flocked to the area before the midnight deadline, collecting whatever they could to carry into their new lives.[1] Some remain, refusing to change the way they have lived for decades.

As the cleanup begins, now is a good time to reflect on the relationship between communities and the nuclear power plants they host.

It was only on April 12th that Japanese officials raised the severity rating of the nuclear crisis at the Fukushima plant to the highest level possible on the international nuclear disaster scale – on par with the 1986 Chernobyl disaster. An official from Tokyo Electric Power Company stated that “the amount of leakage could eventually reach that of Chernobyl or exceed it.”[2] This reality check reinforced the sense that this nuclear emergency will persist longer and cause more problems than first predicted by the government, which had consistently downplayed long-term safety concerns. The Japanese government’s hesitation to be clear about their level of knowledge of what was occurring at the Fukushima plant has troubling implications for the area’s residents and sets a dangerous precedent for how nuclear energy agencies engage with host communities.

The Japanese government owes the local community a rational and reasoned assessment of the impacts of the destabilized reactors and spent fuel storage containers. Yet while the US and Australian governments advised their citizens to remain 50 miles away from the plant and the IAEA issued repeated warnings to expand the evacuation zone, Japan consistently refused to do so.[3] Furthermore, officials refuse to admit their uncertainty about how much nuclear fuel was released in the initial hydrogen explosions and whether radioactive fuel is leaking into the containment structures.[4] When government officials do engage with the community to share facts about the situation at the plant, they do so by presenting raw data without any explanation of its practical relevance.[5] This has serious and dangerous consequences for the community surrounding the plant.

This behavior is consistent with a larger pattern on the part of the international nuclear industry and its supporters: shielding the full truth from communities that host nuclear power plants and refusing to talk candidly about the risks, costs, and implications of living near a nuclear facility. It stems from the belief among nuclear engineers and utility managers that the less a community knows, the more likely it is to accept the siting of a nuclear facility. From Yucca Mountain in Nevada to Anmyeon Island in South Korea, history has shown repeatedly that this is not the case.

But it does not have to be this way. In fact, recent events show that engaging with the community facilitates the construction of a nuclear power plant or spent fuel repository. In South Korea, the government was able to site a low- and intermediate-level nuclear waste repository near Wolsong with a near-90% local approval rate.[6] In Finland, the government entered into an extended dialogue and candid negotiation with several communities to find a host for its spent fuel final repository. It was eventually sited in Eurajoki, where it met with widespread community support and a 20-7 vote in favor of construction by the local council. Finland and Sweden are the only countries that have managed to site final waste repositories.

Barring any major breakthroughs, we will have to rely heavily on nuclear power if we want to avert the unjust and unequal consequences of global climate change; no other energy technology can currently compete economically with coal as a baseload power substitute. But in the world of nuclear physicists and mechanical engineers, conversations about Fukushima still center on the location of the liquid storage pool and the technical specifications of the reactor casing. We need to remember that the reason we care about these important scientific details is their impact on the communities that house nuclear power plants and on the people throughout the country who depend on nuclear energy as a source of power. Helping ensure that host communities benefit from power plants, mitigating the risks they could suffer from proximity to radioactive waste, and prioritizing their needs in the rare case of a terrible accident should be the focus of policymakers hoping to expand nuclear power in the future.

Editor’s Note: You can read more about this subject in “A Proposal for Spent-fuel Management Policy in East Asia,” a WWS graduate policy workshop final report on current and future spent-fuel management policy in China, Japan, and South Korea. Unfortunately, this work has become more relevant today than when the project began. Available here.

[1] Andrew Pollack, “Japanese Visit the Nuclear Zone While They Can,” New York Times, April 21, 2011.

[2] Chico Harlan, “Japan Rates Nuclear Crisis at Highest Severity Level,” Washington Post, April 12, 2011.
[3] Hiroko Tabuchi and Keith Bradsher, “Japan Put on Par with Chernobyl,” New York Times, April 12, 2011.
[4] Hiroko Tabuchi and Keith Bradsher, “Lack of Data Heightens Japan’s Nuclear Crisis,” New York Times, April 8, 2011.
[5] Ibid.
[6] “Nuclear Power in South Korea,” World Nuclear Association, March 2011.

Friday, April 22, 2011

Confronting climate change: Not just carbon dioxide

Matt Frades, MPA

When you think about avoiding dangerous climate change, what comes to mind? Thanks to a decade of climate education efforts, much of the public is now aware of the scientific consensus on the need for reductions in global emissions of the greenhouse gas carbon dioxide (CO2). Awareness of carbon dioxide’s role in the climate is a crucial step towards building support for policies that address climate change. However, while carbon dioxide is the most important climate warmer, it is not the only player that demands attention.

The full story of how humans affect Earth’s climate is complicated, multi-faceted, and involves some uncertainty (just like everything else in life). The most popular climate change messaging is simple and short enough to tweet: “We need to reduce emissions of CO2, a greenhouse gas, in order to avoid dangerous climate change.” While this message goes a long way, a slightly more nuanced and accurate view is apropos: “We need to address a variety of human activities, including the emission of various greenhouse substances to the atmosphere—the most prominent of which is CO2—in order to avoid dangerous climate change.” At 200 characters, this revised message may be over the sacred Twitter limit, but the extra words are worth it. Here’s why:
  • climate warmers other than CO2 are responsible for between 30 and 60 percent of human-caused warming (depending on how you choose to account for future warming), and
  • addressing climate warmers other than CO2 presents opportunities to meaningfully address climate change right away, despite the present political and fiscal constraints.

The human-caused climate warmers are diverse: methane from agriculture and landfills, black carbon particles emitted by vehicles, gases used to make foams and semiconductors, gases contained in air-conditioners and refrigerators, the lightness or darkness of manmade surfaces like rooftops, and others. Acknowledging the true diversity of humanity’s impacts on the climate compels us to address a broad set of activities that significantly warm the planet, but it also provides policymakers with a more diverse set of politically feasible opportunities to address climate change in the short term.

Editor’s Note: You can read more about this subject in “Complements to Carbon: Opportunities for Near-Term Action on Non-CO2 Climate Forcers,” a WWS graduate policy workshop final report presented to the U.S. Department of Energy and the Environmental Protection Agency, identifying domestic and international fast-action strategies that are available under current agency authority to address non-CO2 climate warmers. Available here.

Philanthropy, defender of civil society

Heather Lord, MPP

“Chorus: Let not thy love to man o'erleap the bounds
Of reason, nor neglect thy wretched state:
So my fond hope suggests thou shalt be free
From these base chains, nor less in power than Jove.

Prometheus: Not thus — it is not in the Fates that thus
These things should end; crush'd with a thousand wrongs,
A thousand woes, I shall escape these chains.
Necessity is stronger far than art.”
                   – Aeschylus, Prometheus Bound

The term “philanthropy” was first used in the Aeschylus work quoted above, a mythical tale about early humans dwelling in dark caves, living and dying at the whim of the gods. Prometheus shows exceptional philanthropos (love of humanity) when he defies the gods to give humans fire. The gift of fire symbolizes the beginning of all knowledge and optimism for the future – the two key ingredients necessary for improving the human condition. Because Prometheus’s gift was an act of rebellion against tyranny, philanthropia became inextricably associated with the arduous battle for a free and just civil society.

This association holds as true today as it did in ancient Greece. Global philanthropy is a massive international industry of strategic charitable organizations and activities broadly defined as “private initiatives for public good, focusing on improved quality of life.”[1] While laws and government policies generally encourage charities to flourish, philanthropy is often referred to as the “independent sector,” and this independence is vitally important. Ideally, philanthropy and the state have a collegial relationship; like Prometheus, however, the philanthropic sector often finds itself fighting to defend a concept of justice and humanity at odds with the goals of the state—an unfortunate reality, given philanthropy’s potential to advance civil society everywhere.

In the United States, we can see an example of this tension in the government’s post-9/11 anti-terrorism funding guidelines, which require that no resources from US non-profits fund terrorist organizations or suspicious individuals, directly or even indirectly. For major foundations operating hundreds of international programs, following the letter of this law means staying abreast of multiple suspicious-persons lists and running background checks on every conference attendee and every local vendor providing key program services in under-resourced, high-risk regions. Charitable foundations and the US Treasury Department certainly share a commitment to combating terrorism, but they clearly have different threat calculi. Is the Gordian knot of terrorism really best addressed with such a blunt stroke against US-based foundations? The US may understandably want CYA (cover your assets) laws on the books, but surely there is a more efficient approach than diverting foundation assets away from programs and into legal fees as foundations try to insure against, and quell concerns about, terrorism funding that is largely inapplicable to most of them.

There are also tensions in other countries, where increased indigenous interest in private philanthropy is forcing a redefinition of the relationship between citizen and state. In China, for example, some in the government worry that a robust home-grown independent philanthropic sector could usher in a destabilizing, undesirable “Western-style” democracy. The difficulty of officially registering a private foundation in China has produced a growing informal, extra-judicial philanthropic “gray market.” The fledgling Chinese Foundation Center was established in 2010 to build in-country philanthropic capacity, but continued crackdowns on freedom of expression in China may stunt the burgeoning sector. In Africa, the tension between philanthropy and government is evident in the 2009 Ethiopian laws capping NGOs’ foreign-sourced funding at 10% of their total budgets and forcing organizations through an onerous re-registration process, effectively putting many of them out of business.

Philanthropy makes governments nervous only in countries where civil society makes those same governments nervous. This is all the more reason to consider the philanthropic sector as an essential civil society safeguard worthy of attention and investment. But putting aside strategic considerations (for once), philanthropy at its best is poised to act as a defender and advocate for the forward-looking goals of civil society in all countries, liberal or repressive. Regardless of the many challenges, human optimism remains unbound and hopefully each society will use the forces of philanthropos to fight its way to a better future for all of us. 

E-mail Heather or check out her philanthropy blog.

[1] This definition from Wikipedia condenses excellent and extensive work on philanthropy by leading scholars John W. Gardner, Lester Salamon, and Robert Bremner.

Islamophobia and the etymological roots of the King hearings, part I: The premises and implications of the King hearings

Editor’s Note: This is the first of a three-part series on Islamophobia in America. Part II discusses the emerging semantics of Islam and Muslims in the West. Part III examines Islamophobic language trends in major English and Arabic media outlets and their implications for public policy.

Nazir Harb, MPA

Despite opposition from the Obama administration and a wide array of American minority groups, especially major Arab-American and American Muslim organizations, on March 10th the House Committee on Homeland Security convened a hearing entitled “The Extent of Radicalization in the American Muslim Community and that Community’s Response.” It was truly a tragic event in our nation’s history but unfortunately only the beginning of a year-long series of hearings that attempt to put Islam and Muslims on trial. The next hearing is supposed to take place in the next few weeks. Representative Peter King (R-NY), who chairs the committee and is the driving force behind the hearings, has been rather enigmatic about the exact dates of these show trials.

The title of King’s hearings is telling in and of itself, as it reveals the innate biases of the Congressman and his witnesses. The hearings attempt to legitimize a premise that is not only baseless and untrue, but also brazenly racist, prejudicial, and provocative. They antagonize a susceptible, peaceful community that constitutes a diverse multi-national and multi-cultural American minority with a longstanding history of contributions to the United States and the world.

Obama administration officials have stressed that the hearings are condemnable and that their premise must be amended to investigate radicalization in America in general, as a phenomenon independent of Islam or Muslims. Instead, the hearings present a foregone conclusion, damning an American minority without so much as giving it the opportunity to speak for itself. Indeed, during the first hearing, each of the speakers was well known for harboring and fomenting Islamophobic and anti-Muslim sentiments (save Rep. Keith Ellison, a Muslim congressman from Minnesota). Their testimonies that day predictably served King’s fear-mongering and self-aggrandizing political agenda. While some of King’s witnesses were “practicing Muslims” who admittedly had very negative—but undoubtedly unrepresentative—experiences with Islam and Muslims inside or outside of the US, their testimonies thus far have supported the sort of abominable and unwarranted claims that Rep. King has recently made, such as that “85% of mosques in America are ruled by the extremists.” To quote King, who has faced relatively little castigation for such statements, his hearings are meant to demonstrate that Muslims in the United States are “an enemy living amongst us.”

Notably, countless American Muslims and non-Muslims who have requested to testify—including specialists who would have represented the counterargument to these allegations and provided for real debate on the topic—have been denied the opportunity. There can therefore be little doubt that these hearings are political show trials targeting a vulnerable minority that is politically difficult to defend in public. Fortunately, Senator Richard Durbin (D-IL) had the fortitude to use his own bully pulpit for a nobler purpose, convening a Senate hearing on March 29th on threats to American Muslims’ civil rights.

Sadly, such mistrust and public aspersions on the loyalty of American citizens are not without precedent in our recent history. These events recall the regrettable and horrific treatment of Japanese Americans following the attack on Pearl Harbor. True, these hearings do not rise to the level of mass internment (though something of the sort did take place immediately after 9/11), but we cannot stand by as another American minority is profiled, singled out, and blamed for a foreign attack. American Muslims have begun protests and educational programs to emphasize that this is a critical matter of civil rights that concerns every American, not what Rep. King has characterized as strictly a “Muslim problem.” Attorney General Eric Holder is right to assert that anti-Muslim bigotry is “the civil rights issue of our time.”

Get involved! Please sign our petition to stop the targeting of American Muslims:

Friday, April 15, 2011

Another one bites the dust: A new beginning for the Ivory Coast

Jake Velker, MPA

As of April 11th, Laurent Gbagbo has been deposed as president of Cote d’Ivoire (Ivory Coast). Gbagbo had ruled the West African country since taking power in a chaotic election in 2000, weathering several coup attempts and a civil war. In his place will be Alassane Ouattara, a former IMF official who observers agree rightfully won the election held in November 2010. Until now, Gbagbo had refused to concede.

It is hard not to view the recent events with a measure of optimism. Gbagbo was, by all accounts, the kind of strongman that Africa needs to eliminate from its politics sooner rather than later. Despite strong growth in per capita GDP, Gbagbo did nothing in his ten years in power to heal the ethnic, religious, and economic rifts that divide the northern and southern portions of Cote d’Ivoire. Instead, he pursued a relentlessly clientelist political program and subverted democratic expression and civil society opposition wherever possible. In the most recent election, he chose to return the country to the cusp of civil war rather than step down.

Ouattara, on the other hand, appears to be a sincere reformer. He is from Cote d’Ivoire’s long-marginalized north. Much like the successful post-war leader of Liberia, Ellen Johnson-Sirleaf, he is a western-educated technocrat with a background in economics and bureaucratic experience as Cote d’Ivoire’s former prime minister.

Some caution is in order, however. There are already circumstantial reports that pro-Ouattara forces committed atrocities against civilians in their march from the north toward Abidjan, the economic capital. Gbagbo also started as a western-educated professor and trade unionist before resorting to dirty politics. Any number of African presidents have come to power on sincere reform platforms, only to find that their countries are woefully difficult to govern effectively. Those who hope to stay in office often find the temptations of pandering to the country’s entrenched ethnic and business elite difficult to resist. Let’s hope that Ouattara is more immune to that siren song than Gbagbo was.

Perhaps most intriguing for the future of Africa’s notoriously chaotic and corruption-ridden electoral systems is the relative ease with which Gbagbo’s ouster occurred once the political will solidified. In uncharacteristically robust language, the UN refused to recognize any version of the election results that did not definitively hand Ouattara the victory. Furthermore, the Security Council unanimously condemned Gbagbo’s intransigence and strengthened the mandate of the UN Operation in Cote d’Ivoire (UNOCI) to vigorously protect the peace. It helped that the African Union—typically the defender of the continent’s despots—supported Ouattara from the start. There was no ambiguity in this attempted fraud.

Finally, and perhaps most importantly, the international community stepped in at a critical time to force the issue via military means. On April 4th, combined UN and French units began attacking pro-Gbagbo forces occupying the presidential compound in Abidjan. Within 24 hours, Gbagbo’s generals were negotiating terms for his surrender. Although he hung on for another week in a spiteful attempt to render Cote d’Ivoire ungovernable for Mr. Ouattara, Gbagbo was finished.

This underscores how comparatively easy it is to secure election results in the world’s poorest countries if the political will exists. UNOCI’s modest presence of 10,000 peacekeepers was sufficient to secure the election result with only a minimum of civilian casualties. In its seven-year deployment, UNOCI has suffered only 72 fatalities.

After the hundreds of billions spent on democracy-promotion in the Middle East, one wonders whether the West might use its resources more cost-effectively by simply making sure that elections in Africa don’t spark civil wars. This needn’t always involve costly ground occupations – just enough force to tilt a precarious situation in the right direction.

Recent attempts to steal elections in Africa have gone a variety of ways. In Kenya (2007) and Zimbabwe (2008), the incumbent presidents blatantly rigged election results. However, each caused enough violence, confusion, and instability to necessitate power-sharing agreements with the opposition winners. After the international community’s fickle attention flagged, they of course consolidated power, marginalized their “unity governments,” and proceeded to once again rule by fiat.

Gbagbo was unlucky to fail in similar mischief. It was of course fortunate that the UN already had a robust peacekeeping presence in Cote d’Ivoire and that France has consistently demonstrated a willingness to intervene in the affairs of its former West African colonies. Thus, it remains to be seen whether these events are the harbinger of a new, cost-effective, and (relatively) peaceful model for the enforcement of election results in Africa.

More importantly, it is entirely unclear whether Ouattara will lead more responsibly than his predecessor. Indeed, there is little to suggest that African opposition parties have coherent platforms beyond deposing the current holders of power. In the absence of independent institutions, competent bureaucracies, and equitable growth, one is forced to wonder who—other than the winner—benefits from such nasty contests.

When a strongman is justifiably dethroned, is it democracy that wins, or just another strongman-to-be?

Tirade for a smoggy day: The hidden costs of endless consumption

Katherine Manchester MPA

The psychoanalytic term “cathexis” refers to a process of attachment whereby we come to think of material goods as extensions of ourselves. Today’s American consumer culture – where even national parks are commodities for those vacationers who can afford them – might be described in this way. Starting in the 1950s, when wartime efficiency in manufacturing turned from supplying military needs to supplying civilian wants, government policies have intervened to make sure that demand keeps up with supply. Financial incentives subsidize the cost of housing, cars, and household appliances, recasting consumption as patriotic and necessary for economic growth, while allowing industrialists to engineer built-in obsolescence into their products.

With increased consumption and disposability comes increased pollution: the average American is now responsible for 20 tons of carbon dioxide emissions annually, five times the emissions of the average person on the planet. The impact of American consumption habits on the world is two-fold: (1) affecting the global environment through disproportionately heavy resource use, with most of the resulting pollutants externalized; and (2) modeling an affluent lifestyle that we admit is unsustainable on a global scale while refusing to alter our own behavior.

On one hand, the nature of American production systems makes it all too easy for decision makers to ignore many of their environmentally harmful impacts.* On the domestic level, having power plants physically removed from urban areas, combined with utility subsidies and electricity’s “clean” appearance at the point of consumption, propagates misconceptions about the abundance and low price of using fossil fuels. On the international level, increasingly globalized production chains disguise the real costs of manufacturing these products.

On the other hand, even when we are aware of industry’s impacts, we are more than willing to defray those costs to the developing world. In a leaked memo in 1991, the ever-quotable Larry Summers, then chief economist for the World Bank, mused over the “impeccable economic logic behind dumping a load of toxic waste in the lowest wage country.” Why, he questioned, should toxic chemical waste not be disposed of in “under-polluted” regions such as Africa? Couldn’t that serve as their comparative advantage in global trade? Attitudes of this sort within the leadership hamper the realization that the United States exists within a closed system of finite resources, and that significant changes are needed to maintain a comfortable standard of living in the long term.    

Inspired by our terrible example, many in the developing world are striving to reach such a quality of life. Over one billion people from developing countries, 41% of them in China and India, have recently joined the ranks of established OECD consumers. These new consumers own virtually all of their respective countries’ cars and are adopting resource-intensive preferences such as a meat-heavy diet and increased use of electricity. These preferences have already had global impacts, such as the recent increase in the price of grain due to pressure on international markets.

Granted, a major difference between consumption in the 1950s and today is the contribution of technological innovation to production efficiency. But even as energy intensity has fallen, consumption in absolute terms has soared, with worldwide emissions of carbon dioxide growing at an average of 3% annually since 2000. Yes, technological efficiency must continue to play a vital role in cutting the rate of dangerous emissions, but a simultaneous, absolute reduction in resource consumption seems unavoidable. Agricultural land already takes up 40% of the earth’s ice-free land; urban areas, roads, and airports take up another 2% of land area; forest coverage has decreased by 50 million square kilometers and deserts have expanded by almost 10 million square kilometers.

To achieve lower consumption patterns, the United States will have to invest in expensive structural and institutional changes, including revamping public transport systems and providing economic incentives for retrofitting housing, offices, and factories. Greater investment is needed for research into renewable energies, and for helping farmers convert to environmentally responsible crops. These are politically unpopular proposals to be sure, but the wave of consumerism that began in the 1950s – created by government policies, corporations, and individuals – could be similarly reversed if these same actors put their minds to it.

*This is not so true for low-income and minority communities which, despite great progress made by advocates for environmental justice, are disproportionately burdened with the likes of garbage incinerators, landfills, and power plants.

1. “Furor on Memo at World Bank,” The New York Times, February 7, 1992.
2. Ramachandra Guha, “How much should a person consume?” in How Much Should a Person Consume? Berkeley: University of California Press, 2006.
3. John Holdren, “Science and Technology for Sustainable Well Being,” Science 319:5862, January 2008.
4. Norman Myers and Jennifer Kent, “New consumers: The influence of affluence on the environment,” Proceedings of the National Academy of Sciences 100:8, 2003.
5. Heather Rogers, Gone Tomorrow: The Hidden Life of Garbage, New York: New Press, 2005.

Rebalancing civilian-military operations: A two-way street

Rashad Badr, MPA

There is a great deal of discussion about the need to adjust the balance between civilian agencies and the military in executing U.S. foreign policy and programs. The easiest argument to make is that the vast American defense complex overshadows US diplomatic and developmental efforts in almost every way. The Department of Defense (DoD) budget in 2010 was $691 billion, whereas the State Department’s budget for that year was just $16.4 billion. The United States Agency for International Development (USAID), the development arm of State, similarly faced a shorter stack in pursuing its goals abroad. However one wants to add it up, defense spending surpasses civilian spending by about 40 to 1.

Other State advocates point to the sprawling manpower the military possesses compared to its civilian counterparts. Just one example of this mismatch: the personal staff of the Central Command Combatant Commander (CCDR) – a post previously held by a famous Woodrow Wilson School graduate, General David Petraeus *85 *87 – is larger than many of the embassies that fall within Central Command’s theater. When CCDRs travel, they normally arrive with a small army of assistants and personnel. When I saw Assistant Secretary of State Jeff Feltman travel to the Middle East last summer, he brought a single assistant.

Secretary of Defense Robert Gates has argued before Congress that the State Department needs more funding. Secretary of State Hillary Clinton, with the help of our very own former dean, Professor Anne-Marie Slaughter, has pioneered the Quadrennial Diplomacy and Development Review (QDDR). In it, Clinton and Slaughter aim to remap American diplomatic and developmental efforts, putting them on par with defense operations, in accordance with President Obama’s “3D” approach to foreign policy: defense, diplomacy, and development. This document has many creative ideas and useful insights. But what have we seen of it so far?

More importantly, we’ve heard all of this before. So what should we do about it?

Unfortunately, the situation is a bit more complex than a simple issue of funding parity or even mission creep. But first off, let’s get one thing straight: I’m a big fan of State and a staunch advocate for the need to elevate diplomacy as a tool of national security. That being said, the department needs to critically alter its mission and operations in three ways.

First, State has to get serious about assuming greater risks while conducting diplomacy. Current security measures, left largely in the hands of Regional Security Officers abroad, effectively keep diplomats trapped behind embassy walls. If a country is deemed “dangerous,” then diplomats have to jump through numerous hoops before they are allowed to leave the compound – and when they do get permission, they must be escorted by armed guards in armored vehicles. There is something counterintuitive about effectively marginalizing our Foreign Service Officers in the places that need the greatest diplomatic efforts. Of course relaxing these standards will come with attendant risks and dangers, but diplomacy is a dangerous endeavor. Unfortunately, the State Department’s allergy to potentially hostile situations – to which the military is largely immune – has ultimately led to its marginalization.

Second, the department needs to reassert control over peacekeeping, nation building, and wartime operations. The US’s two biggest engagements currently are in Iraq and Afghanistan. A study of peacekeeping operations and nation building efforts in both of these countries reveals DoD dominance in both developmental and diplomatic activities. Diplomats and aid workers argue that they simply don’t have the funding or the operational capacity to work in these environments. Fine, but let’s also not forget that the State Department’s mission has been steadily cut down since the Clinton presidency (without much of a fight, might I add) and traditional State and USAID operations have been farmed out to DoD. In fact, one of the biggest complaints I hear from people in uniform (at all levels) is that State and USAID are just not stepping up to the plate. State and USAID will have to not only reassert themselves in these areas on a macro level, but also take substantive steps to fund and train civilians to take over from DoD.

Which brings me to my third point: the State Department needs to implement its goal of “engaging beyond the state,” as referenced in the QDDR. In Iraq and Afghanistan, that means picking up some of the work done by Special Operations in navigating and channeling tribal and ethnic currents. Elsewhere in the world, it means engaging outside of our “comfort zone,” including more engagement with Islamist groups and opposition movements. American diplomatic efforts will always be limited if we (read: American policymakers) are content engaging with official, traditional government counterparts and Western, liberal thinkers. American diplomacy cannot be considered robust if it is not widened to take into account the full spectrum of political actors operating in today’s complex international environment.

These criticisms may come off as a bit harsh on the State Department and USAID, but these are necessary issues to keep in mind if civilian efforts are ever to near parity with military power. Because at the end of the day, American policymakers can’t just ask DoD to give up turf; they need a strong and aggressive civilian sector willing to pick up and take over.

Wednesday, April 6, 2011

New clear thinking about nuclear weapons

Alex Bollfrass, MPA

The dangers from nuclear weapons are often explained as terrible outgrowths of inhuman rationality. Dr. Strangelove stands as the paragon of the madness in MAD, in which detached and logically consistent decision-making leads to catastrophe. But the trouble with nuclear weapons today is exactly the opposite. Instead of rational choices leading to irrational outcomes, we now find that irrational psychological processes are leading nuclear policymakers astray.

To implement a sustainable strategy to protect Americans from nuclear dangers, decisions cannot be distorted by the psychological idiosyncrasies from which we all suffer. These foibles can be overcome, but first we must understand how they can impact the way national security decision-makers view their nuclear choices.

A Clear and Present Risk

Psychological research suggests that the biggest scope for miscalculation lies in weighing present against future risk. As humans, we have a difficult time understanding risk and making decisions on questions with complex trade-offs. Studies have shown that we are especially vulnerable to postponing difficult actions if the consequences of inaction – however grave – lie in the future. Thus, when deciding how to mitigate the nuclear threat, we end up overvaluing the US nuclear weapon arsenal at the expense of pushing for universal denuclearization, the only true way to protect against nuclear annihilation.

But such a course correction invariably produces short-term risks and trade-offs. The product of our reluctance to embrace these short-term obstacles is a nuclear weapons policy that implicitly accepts that weapons will continue to spread to more nations. The problem is that the likelihood they will be used increases with the number of states deploying such weapons, as does the danger that they will fall into the hands of terrorist organizations.

There are steps we could take to shift the proliferation danger into a lower gear, but these come at a cost. Placing all uranium enrichment and plutonium reprocessing facilities under multinational control would be an enormous nonproliferation achievement, but would be vehemently resisted by the nuclear industry. Similarly, the implementation of the International Atomic Energy Agency’s “Additional Protocol” safeguards would prevent the exploitation of nationally-controlled nuclear reactors for building nuclear weapons. Getting more than the current 94 countries (all without nuclear weapons) to sign up and agree to these intrusive “anytime, anywhere” inspections would require significant concessions from the US and other nuclear weapon states, such as limiting their weapons’ capabilities as well as reducing their numbers.

Despite President Obama’s “Nuclear Security Summit” denuclearization initiative, given the difficulty of these short-term measures and the nebulous distance of their benefits, it is unlikely the administration and Congress will act to protect Americans from this gathering threat. This difficulty of translating good intentions for the future into unpleasant action today is familiar to everyone from dieters to well-intentioned savers.

The Final Count-up

We habitually take on more risk if we stand to gain something, but suddenly develop much greater caution if there is a chance of a loss. This status-quo bias also finds expression in what psychologists call an “endowment effect,” first demonstrated in a lab with subjects who consistently believed that coffee mugs they owned were worth more than identical mugs they had no ownership claim over.

The same phenomenon exists in our nuclear debates and policies. We think about how many weapons we are willing to give up, which feels like a major security sacrifice, instead of surveying the strategic landscape and counting up the number of nuclear warheads necessary to fulfill the missions assigned to them.

This irrational overvaluation makes it hard to agree to the nonproliferation measures that aim to prevent nuclear weapons from falling into the wrong hands. To see this fallacy in motion, witness the favorite talking point of nuclear weapons hawks: our arsenal is at its lowest level since the Eisenhower administration. Referencing such an absurdly high baseline has little analytical use, but it creates the impression that we have very few nuclear weapons and that any reductions would therefore be dangerous.

Fortunately, we can overcome these psychological impediments to a smarter nuclear strategy that balances the trade-offs of risks today and in the future. Several nations have demonstrated that it is possible. South Africa, Ukraine, Kazakhstan, and Belarus went so far as to dispose of their nuclear arsenals. Throughout the nuclear age, more governments have started nuclear weapons programs and abandoned them than brought the programs to completion.

The US need not go that far, but the next time a senator denounces an arms control measure with Russia or the Comprehensive Test Ban Treaty as too big a concession, it is worth asking which part of the Senator’s brain this judgment comes from.

Between North and South: Reorienting Mexico’s trade posture

Héber M. Delgado-Medrano, MPA

In recent years, perhaps for the first time in its history, Mexico saw itself and its northern neighbors in a weaker economic position than its South American counterparts. As the US sank deeper and deeper into a recession, Mexico was brought along for the ride, which culminated in a spectacular contraction of almost 7% of Mexico’s GDP in 2009. As a result, many Mexican intellectuals and policymakers have begun to ask themselves whether Mexico’s prospects in North America are still better than elsewhere. Some go so far as to suggest that Mexico should simply reorient its economy entirely towards the South. And yet others exhort Mexico not to turn its back on the NAFTA project just yet, claiming that further benefits from North American integration are still to come and that more unification with the North, not less, is the answer.

Based on these arguments, Mexico faces a stark choice: look either towards the North or the South, but not both. But the sensible course of action, I believe, lies somewhere in between. Mexico can neither ignore the benefits of its privileged—though admittedly oftentimes troubled—relationship with the US, nor can it overlook the opportunities waiting for Mexico in South America, which are real and in many ways are already materializing. Instead of burning bridges to build others or cementing old relationships while disregarding new opportunities, Mexico should establish an explicit policy that seeks to convert it into a critical link connecting the diverse economies of all of the Americas.

In the past 15 years, mostly as a result of the implementation of NAFTA, Mexico has reached an extraordinary degree of economic integration with the US and—to a lesser degree—with Canada. Today, 80.5% of Mexico’s exports end up in the US, while the most dynamic and critical sectors in the Mexican economy—export-oriented manufactures and tourism—are highly dependent on foreign direct investment (FDI) and consumption from the US. NAFTA in many ways remains incomplete and many gaps have yet to be filled, but as we saw just a few weeks ago when presidents Obama and Calderon (partially) settled a long-held dispute regarding the access of Mexican trucks into the US, there is still enough goodwill between both nations to continue along the slow but constant path towards further economic integration.

This is evident in both countries today. Visitors to Mexico City are amazed—perhaps disappointed—by the seemingly endless supply of Starbucks they encounter along Paseo de la Reforma. Northern cities like Chihuahua, invaded by American chains and now crisscrossed by American-style superhighways, look more and more like El Paso, Texas or Phoenix, Arizona. Wal-Mart, too, is now ubiquitous throughout Mexico, as one of the largest businesses and employers in the country. In the US, you also see a stronger, albeit more subtle, Mexican presence: has anyone else noticed that Mexican glass-bottled Coca-Cola is now offered in many convenience stores across the US?

But the weaknesses of this economic marriage (made nowhere near heaven) were crudely exposed during the last recession: when the US faltered, the effects were greatly amplified in Mexico. Mexico was by far the hardest hit Latin American country during the 2009 recession. Given that states like Brazil, Chile, and Colombia fared much better, it raises the question: is Mexico now facing the consequences of putting all its eggs in one basket? Should Mexico therefore rethink its commercial orientation?

North Americanistas like NYU professor and former Mexican secretary of foreign affairs Jorge Castañeda begin by highlighting the incontrovertible benefits that have accrued to Mexico as a result of NAFTA and point out Mexico’s solid economic recovery as the US lifts itself out of the recession: 5.5% growth in 2010 and projected 4.0-5.0% growth for 2011. Further, Castañeda and others who share his view claim that in order to prosper Mexico needs to tighten the gap between itself and its North American neighbors by pushing for deeper economic, political and social integration with the North. From this point of view what Mexico needs to do is not turn its back on the US but to pursue a tighter relationship with the North that eliminates the loopholes and imperfections still plaguing NAFTA. Castañeda thus calls for a liberalization of the labor markets between the three countries, further and more serious cooperation on security issues, and even a North American economic union in the long run.

The Latin Americanistas, on the other hand, who to my knowledge are not unified under the aegis of any particular individual, begin by questioning whether the United States will ever again become the economic dynamo it was during the 20th century and whether Mexico has greater prospects for growth by integrating with the rapidly-growing economies of South America. They remind us that Brazil, Argentina, and Colombia have large consumer markets with increasing purchasing power and that Mexico is already well-positioned to enter and even dominate many industries within these markets. Mexican firms in telecommunications, mining, services, food processing and distribution, cement, and other key sectors are oftentimes larger and more productive than their Latin American counterparts. Already, Mexican business magnate Carlos Slim has extended his tentacles to every corner of Latin America with his telecom giant Telmex (known as Claro in some countries), which is now the leading player in countries as wide-ranging as the Dominican Republic, Colombia, and Chile. Other companies like Cemex, Bimbo (the largest bakery in the world), and even FEMSA’s Oxxo convenience stores are also rapidly growing in South America. Moreover, many economic analysts are asking themselves why Mexico can’t export to Brazil and other countries the high-tech goods and consumer durables it exports to the US that are not produced elsewhere in Latin America: smartphones, plasma televisions, refrigerators, etc. Finally, economic integration between Mexico and the rest of Latin America is already on the table: just last week, Peruvian president Alan García announced in Bogotá that Mexico, Colombia, Peru, and Chile are planning to form an economic bloc to strengthen integration between the four countries and promote a common cross-continental trade agenda.

Certainly both of these views have their advantages, but they each flatly ignore the drawbacks of their own recommendations. Further economic, political, and social integration with North America will be anything but easy. Immigration policy in the US is becoming tighter, not laxer, and the political climate in the US may resist further liberalization in the labor markets for some time. Similarly, simply mentioning the prospect of a North American economic union in the US elicits passionate and—more often than not—negative responses, particularly as the world continues to witness how the European Union struggles to keep Greece and Portugal afloat. And perhaps more importantly, we cannot ignore the fact that Mexico needs to hedge its risks. Having a globalization and trade strategy that is contingent on the economic fates of only two countries is simply irresponsible.

On the other hand, despite appearances, Latin America is not all fun and games. The spectacular growth of countries like Peru, Brazil and Argentina in the past few years has relied to a large extent on their exports of raw materials and commodities to China and other countries around the world. It is unlikely that these high rates of growth will continue forever, particularly if further economic reforms are not pursued in these countries. It is therefore not clear whether these economies will continue to mature at the same high rate, or whether and how fast they will grow into the large and wealthy consumer markets they are expected to become. More importantly, South American economies are becoming increasingly competitive as well, and they may be able to service their own markets before Mexico or other large players in the region can arrive to satisfy the needs of local consumers.

The prudent course of action is therefore not one that chooses one region over the other, but rather one that (1) explicitly recognizes the risks of concentrating all of Mexico’s commerce in one region, and (2) seeks to establish and strengthen its ties with both North and South America by (3) promoting further integration with the North and (4) fostering trade with the Caribbean and Central and South America. Specifically, as Mexico continues to promote its integrationist agenda with North America—particularly with regards to labor migration, security issues, and correcting the deficiencies of NAFTA—it should assist competitive Mexican businesses in accessing southern markets by negotiating lower barriers to trade and more actively promoting their entry into those markets. Additionally, Mexico should establish policies that continue to attract FDI from the US and Canada, but also from strong South American economies like Brazil and Chile and even Colombia and Peru. Finally, Mexico should create incentives for foreign investors to export from Mexico to Latin American countries in addition to exporting to the US and Canada.

All of these things are easier said than done, but they are feasible and in some respect already happening. The Mexican government now needs to define a unified North-plus-South trade policy explicitly and work with the private sector in defining goals and strategies to support them on ventures between Mexico and its neighbors on both poles of the hemisphere. If it plays its cards right, rather than having to gamble on one camp or the other and risk losing it all, Mexico could become the pivotal player that holds together the economies of the Americas, North as well as South.

Rediscovering discovery: Investing in public school science labs

Jacob Hartog, MPA

Here’s a Jeopardy clue for IBM's supercomputer, Watson:

“This educational institution has produced more Nobel Prize-winning graduates than Princeton, MIT, Caltech, Oxford, or Yale.”

The question, surprisingly, is “What is the New York City Public School system?”

From the 1930s through the 1970s, the New York schools (and not just that one high school in the Bronx) produced legions of future scientists and engineers, including many who later made pathbreaking discoveries: from superconducting materials to mating bacteria, from Arrow's Impossibility Theorem to the design of the atomic bomb.

Where should we assign the credit for this astonishing productivity? Some of it, undoubtedly, is due to the particular mix of immigrants whose children filled the schools. Some of it is also due to the culture of those times, which celebrated science and discovery—a culture that believed you could free the world from Fascism, stave off Sputnik, and make it to the moon, if you just got the differential equations right.

But part of the credit must go to the science labs.

At the same time as it was teaching long division to future laureates like Richard Feynman and Robert Solow, New York City built and rehabilitated thousands of schools. On each floor of every one of the intermediate and high schools, they put science labs—rows of lab benches with gas lines and sinks, rows of cabinets full of chemical supplies, and a huge demonstration bench across the front of the room with ring stands and clamps, for showing off what happens when you drop pure sodium into a beaker of water or mix hydrogen peroxide with liquid soap and then add potassium iodide. (Watson, take note: the former explodes, the latter turns to foam.)

I began my teaching career in such a room, in a school built in the South Bronx in 1960, in the immediate aftermath of Sputnik’s blinking passage across the sky. Of course, by the time I got there, the gas lines had long ago been permanently shut off. The faucets were hacked off at the base, leaving jagged pieces of metal coming out of the lab benches. The sinks were choked with debris and, in some cases, emptied not into plumbing but directly into the cabinet below, so that when young Jose and Arthur tried out their model volcano, a flood of baking soda-and-vinegar lava coursed across the floor. The chemical cabinets were empty, except for decades of old dittos and worksheets, several nests of mice, and one unstoppered bottle of concentrated hydrochloric acid, the fumes of which had gradually turned the locker that held it into an ocean of rust.

The South Bronx has had some troubled years, no doubt. But this scene of decay is not unique to poor, inner-city schools. A trip through a middle-class suburban school might also show you sinks that don’t work and gas lines that no one knows how to turn on, microscopes with broken lenses and chemical cabinets with nothing in them but some aged litmus paper that no longer changes color in acid and base. Science class in such rooms can feel less like a trip to the future than a sojourn at an archeological dig.

What happened, to make us disinvest so thoroughly in science education?

Competing priorities within education deserve some of the blame. Although teachers’ salaries in American schools are lower than average for developed countries, American schools employ many more people than they once did, especially in special education and among specialists employed outside the classroom. And increasing regulation (along with unnecessary asbestos abatement) has made even modest overhauls of school buildings prohibitively expensive.

Computer technology, meanwhile, has become a tantalizing investment for principals and grant agencies, at the expense of science supplies and equipment. Even science teachers themselves, some of them uncomfortable with the risks and inconvenience of doing real experiments, are often all too willing to pass out the laptops or textbooks and give their students something to do that, unlike hands-on labs, is quiet and indisputably safe. But staring at a screen is not conducting an experiment, and no amount of individualized support can make up for a classroom experience that, regrettably, involves filling out worksheets instead of blowing things up.

The “Sputnik Moment” that President Obama asserted was upon us—in which our relative educational attainment in science and math seems to fall further every year—cannot be answered with any single government intervention. But one thing we can take away from the original Sputnik moment is the importance of investing in high-quality science labs and equipment, so that along with lots of Oobleck, Green Slime, and Elephant Toothpaste, the next generation’s Nobel Prize winners can also be made in the public schools.