Payal Hathi, MPA
I met with a woman this week who came into my office with huge bruises on her arms and face. For 16 years she has been married to the man who gave her those bruises, and they have two children together. She believes that she deserved what happened; that although she called the police for help in making the violence stop, she couldn’t actually tell them the truth about what was happening in her home; that if she just stays quiet and stays in the basement, her husband won’t get angry; and that all she needs to do is wait five more years, until her daughter is old enough to go to college. Then it will all be over.
Violence against women is not something that comes up often in policy circles – domestic violence in particular is often considered a family issue, or one best left to activists. But there is certainly a role for policy to play. Societal portrayals of and beliefs about women – from the media to childhood socialization to educational and job opportunities – all contribute to the feminine being seen as inferior. In addition, the general attitude of blaming the victim and the cultural stigma often attached to women who “can’t keep their families together” perpetuate not only intergenerational cycles of violence, but also silence and shame around the issue.
A major part of the problem is that in today’s world of metrics and quantitative measures of success, there often seems to be little space or value given to work that cannot be counted in numbers. When I am asked how many of the women I work with have finally left their partners, found a job, or secured housing for themselves, much of the rest of my work seems invalidated, because I cannot quantify what it means to get a woman to believe that she has the right to live free of fear, or that she deserves to earn a living wage for her work. Of course, statistics and clear, concise, concrete information are critical in making policy issues relevant to people. But having worked in the violence against women movement for over a year now, it is clear to me that an additional system of evaluation is necessary if we as policymakers are to make the world a safer and more just place.
I came into graduate school with a background in quantitative evaluation, hoping to gain the skills to help small non-profits make evaluation a routine practice for improving their own work. Through my work this past year with women facing human rights and economic justice violations in almost every aspect of their lives, I have become acutely aware that, as policymakers, we cannot think only about the efficiency with which different ideas can be put into practice, or the cost-effectiveness of one approach versus another. Rights, and the way they are put into practice in our families and daily interactions—while not easy to measure—must form the foundation of our policy frameworks if we are to make a meaningful impact in the communities we serve. As I graduate, I recognize that as we look toward more sophisticated ways to evaluate the work of service providers, we cannot diminish the work that instills in people an awareness of their rights and gives them the tools to exercise them.
Payal Hathi worked at Sakhi for South Asian Women from 2010 to 2011. If you or someone you know has an issue with domestic violence, please go to Sakhi’s website http://sakhi.org or call their helpline at (21) 868-6741.
A student-run public policy blog of the Woodrow Wilson School of Public and International Affairs at Princeton University.
NOTE: The views expressed here belong to the individual contributors and not to Princeton University or the Woodrow Wilson School of Public and International Affairs.
Friday, May 20, 2011
Handing China’s Nuclear Power Report Card to Zhou Enlai, Part II
Editor’s Note: This is the second of a two-part series on China's nuclear energy sector. Part I examines the program's practicality and safety.
Ruiwen Lee, MPA
As the first leader who pushed for China’s nuclear power in the early 1970s, then-Premier Zhou Enlai laid out four key principles for its development: practicality, safety, economic viability, and self-reliance (适用、安全、经济、自力更生). Forty years on, as the nuclear power industry begins taking off at an accelerated pace, these four principles continue to steer nuclear power developments in China.
This two-part blog post discusses the Chinese nuclear power sector’s performance under each of these principles. Last week we discussed the first two of these principles, practicality and safety. This week we look at the latter two, economic viability and self-reliance.
III. Economic viability: Economies of scale vs. technological lock-in
Given that China is coal-rich and has not instituted measures to internalize the negative health and environmental externalities of fossil-fuel combustion, nuclear energy has yet to become price-competitive with coal. The estimated cost gap of 0.05 yuan per unit supplied means that the cost of generating nuclear power remains an important factor in its economic viability.
Economies of scale in construction allow each subsequent unit of a particular nuclear reactor model to be built at a lower cost. This matters for nuclear power plants because high construction costs are usually the main factor working against their economic viability. The downside is that as older reactor designs become cheaper to build, newer models with greater efficiency and improved safety features are kept out of the market through “technological lock-in.” Technological lock-in describes a situation in which a technology, typically an older, less efficient one, gains such dominance in a market that it prevents newer, better technologies from being adopted. A market can become locked in when there are increasing returns to adopting the existing technology: a) costs fall through economies of scale and learning by doing, and b) benefits rise through factors like network externalities.
Competition is fierce in the Chinese nuclear energy market today. Although the companies allowed to operate and hold majority stakes in nuclear power plants are all state-owned enterprises, they compete against each other for projects. In such a market, economic factors like the cost of building and operating nuclear power plants are increasingly important in determining which types of reactors are built. It is precisely in such a market that technological lock-in is more likely to occur.
Further, as Generation III+ reactors are being developed, China’s nuclear power market should avoid becoming saturated with or locked into older models, so that it retains the capacity to absorb newer models when they are released. Of the more than 25 reactors under construction, 16 are the Generation II+ CPR-1000 model. While economies of scale can be achieved by building many units of the same reactor model, the risk is technological lock-in, as newer, better models become less price-competitive to build and operate.
Similarly, small modular reactors (SMR) should be given ample room to compete in the market when they are introduced. Designed to be simpler and safer, SMRs could prove to be more effective in serving smaller load centres in remote areas and inland regions with small electricity grids that are not well-connected to the national grid. If plans to construct nuclear power plants in more remote areas are allowed to proceed before SMRs are introduced, nuclear operators might preclude SMRs from consideration in their eagerness to compete for market share.
IV. Self-reliance: Indigenizing nuclear power technology
Developing indigenous technology has been a key factor in the industry’s direction. Qinshan 1 in Zhejiang Province uses the CNP-300 reactor, designed by the China National Nuclear Corporation (CNNC). The CNNC subsequently developed the CNP-600 reactor, used in Qinshan 2, and is developing the CNP-1000 model, though that effort has temporarily been put on hold. The CNP-300/600/1000 models are all fully Chinese and free from foreign intellectual property rights.
The most popular reactor design in China currently is the CPR-1000. The China Guangdong Nuclear Power Corporation (CGNPC) developed the CPR-1000 based on the French reactor design used in Daya Bay, progressively indigenizing it with each nuclear power plant it builds.
A major acquisition by the Chinese nuclear power industry is Westinghouse’s AP1000, with the CNNC and the China Power Investment Corporation (CPIC) investing in an initial two reactors each. Besides allowing China to become the first country to build an AP1000 reactor, the deal with Westinghouse is significant for the technology transfer involved. As part of the agreement, China will own the intellectual property rights to subsequent larger models it designs based on the AP1000.
Conclusion: Moving forward in a post-Fukushima Asia
China's nuclear power industry has come a long way and looks set to play an increasing role in the future. Having made nuclear power a practical energy source, the industry will continue growing in self-reliance while balancing the dual goals of economic viability and safety.
Of course, much of China’s nuclear power development going forward will be seen by the government, the domestic public, and international audiences through the lens of Fukushima. While predicting China’s nuclear power expansion trajectory in light of Fukushima is difficult, it is fair to expect the government to now give greater emphasis to safety. As a consequence, however, the diversion of resources and expertise toward improving safety is likely to apply natural brakes to the growth of nuclear power. How crippling this will be to the industry remains to be seen.
Disaster risk reduction: Africa’s development challenge
Carolyn Edelstein, MPA
Natural disasters happen. When, where, and how disasters strike is hard to forecast, but they occur often, and increasingly so. Usually, we try to mitigate disasters by shoring up our defenses, mostly through large-scale engineering feats. We apply similar strategies everywhere in the world, regardless of local conditions.
Problematically, this concept of disaster management leaves little room for human agency, and tends to over-rely on skills and resources unavailable in many parts of the developing world. But there is a new conception of disaster management, and the change highlights the need for professionals in developing countries—Africa especially—to generate their own solutions.
The last two decades have witnessed an emerging paradigm of “disaster risk reduction.” It contends that disasters are not just events to which we should respond, but rather the result of human vulnerabilities to environmental hazards in local contexts. The roots of vulnerability may be a matter of an individual’s characteristics, like old age or a disability, or may have structural causes: a lack of affordable housing, poor government service provision, and high crime rates; at a macro level, the legacies of colonialism, the global economic system, and so on. The explanations for vulnerability—and people’s strategies for overcoming vulnerability—quickly grow complex.
As such, disaster risk reduction demands a highly contextualized response. Researchers partner with residents of localities to identify hazards, vulnerabilities, and sources of resilience. Data-gathering and fine-resolution mapping inform local risk management practices. Disaster risk reduction heavily emphasizes preparation and adaptation, not just post-disaster relief. Such efforts are most successful when they incorporate an understanding of existing practices and perceptions. Who better to conduct the work than resident scholars and practitioners?
There is a blanket need for more research, especially in Africa. Existing work has focused on Asia and Latin America, though the need to understand the African context is clear. In spite of rising disaster incidence, deaths from natural disasters have been decreasing everywhere but in Africa. Elsewhere, sudden-onset crises prevail. Africa, by contrast, experiences “creeping emergencies,” when slow-onset hazards like droughts become unmanageable, or when underlying challenges like HIV/AIDS and malnutrition turn a relatively small event catastrophic. In this context, the typical emergency relief-based response to disasters proves less effective than preventative disaster risk reduction approaches.
Not only are relief efforts less helpful, but development suffers when plans ignore local risks. Mozambique offers an illustrative example of this inefficiency. There, the World Bank financed the construction of 487 schools over twenty years, but just one disaster, the floods of 2000, damaged or destroyed roughly 500 primary schools alone. The World Bank, Red Cross, and others have shown that every dollar invested in preventative risk reduction measures saves between $2 and $10 in disaster losses.
To encourage uptake of disaster reduction in development, the UN declared the 1990s the “International Decade for Natural Disaster Reduction.” In 2005, 164 member nations signed the Hyogo Framework for Disaster Risk Reduction, pledging to build a risk reduction approach into disaster management and development.
And yet, myopia persists amongst development agencies. The World Bank’s Independent Evaluation Group found that disasters are “still sometimes treated as an interruption in development rather than as a risk to development.” Forty-four percent of current World Bank-supported country assistance strategies make no mention of disasters. More broadly, 96% of disaster-related assistance from the industrialized world still comes solely as emergency relief.
Better information may improve vulnerability reduction efforts, with added developmental benefits if Africans drive the effort themselves. Despite potential advantages of an African-led research initiative, the continent’s scholars produced only two of the African disaster risk-related research papers published in 2008. There were also few African programs for educating and training disaster management practitioners.
Early signs of a growing Africa-based field of study exist. A network of ten universities across the continent, called Peri-Peri U, has started graduate-level programs in disaster risk science. The network models itself after a research center based at the University of Cape Town, which primarily uses community-level risk assessments and spatial data mapping to analyze vulnerability. The unit has the ear of government officials, community organizers, and a growing number of Southern African graduate students.
In 2011, the Peri-Peri network will seek renewed funding from the US Agency for International Development. Program reviews have consistently demonstrated the positive yields of the new research centers for disaster management, and certainly for the emergence of a new generation of African-trained researchers, planners, and practitioners. With increased support, the local scholarship program can be scaled up to better tackle the urgent need to increase African preparedness for disaster risks.
Friday, May 13, 2011
Handing China’s Nuclear Power Report Card to Zhou Enlai, Part I
Editor’s Note: This is the first of a two-part series on China's nuclear energy sector. Part II examines the program's economic viability and self-reliance.
Ruiwen Lee, MPA
As the first leader who pushed for China’s nuclear power in the early 1970s, then-Premier Zhou Enlai laid out four key principles for its development: practicality, safety, economic viability, and self-reliance (适用、安全、经济、自力更生). Forty years on, as the nuclear power industry begins taking off at an accelerated pace, these four principles continue to steer nuclear power developments in China.
This two-part blog post discusses the Chinese nuclear power sector’s performance under each of these principles. This week we look at the first two of these principles, practicality and safety.
I. Practicality: The nuclear component in China’s energy mix
As with other nuclear weapon states, China’s nuclear energy industry has military roots. But because the country’s first nuclear power plants were built for political reasons, China’s nuclear power capacity remained flat for decades. The early 2000s saw more power plants come online for assorted reasons dating back to the mid-1990s, but not because of a fundamental shift in energy or nuclear policy. With abundant and cheap coal resources, China only began considering nuclear energy as a serious alternative when power shortages hit regions nationwide in 2002.
The domestic thirst for energy to fuel the economy’s rapid growth, coupled with mounting international pressure to reduce greenhouse gas emissions as climate change negotiations gain momentum, is the main driver of nuclear energy’s recent rise. As of 2007, nuclear power contributed less than 2% of China’s electricity production. However, the Chinese government plans to greatly expand the country’s installed nuclear power capacity, from the present 10.8 GWe (gigawatt electrical, one gigawatt being equal to one billion watts) to 70 GWe by 2020.
Beyond becoming a practical source of energy for China’s booming economy, nuclear power has also emerged as a critical non-fossil fuel source set to reduce the dominance of coal in China’s energy mix, helping China achieve its green goals.
II. Safety: The Fukushima wake-up call
The March 11th Tohoku earthquake and tsunami that struck northeast Japan gave China a timely jolt from the fantasy of its economy growing at an unbridled pace on “clean” nuclear power. The previous nuclear meltdowns at Three Mile Island and Chernobyl occurred in the late 1970s and mid-1980s respectively, before China had embarked on its nuclear power expansion drive. That distance, and the quarter-century since the last nuclear crisis, may have opened a significant gap between the actual danger and the danger perceived by the Chinese nuclear authority and industry.
However, in the aftermath of Fukushima, the immediate reaction from the Chinese government was to suspend approvals for proposed nuclear power plant projects while declaring that its plans to develop nuclear energy would not stall. A Ministry of Environmental Protection official has gone as far as to claim that “there is a guarantee for the safety of China’s nuclear power facilities” and that China “will not abandon” its nuclear power plan for “fear of slight risks.”
Before Fukushima, nuclear power enjoyed a generally positive reception from the Chinese public, given its critical role in reducing air pollution and shifting the country to a low-carbon economy. This sentiment quickly vanished as a widespread nuclear scare took hold of China, with people buying large amounts of iodized salt in the mistaken belief that it could ward off radioactive fallout drifting from Japan. The public’s reaction makes it clear that nuclear power can quickly cease to be everyone’s blue-eyed boy; positive sentiment toward nuclear power, however long-standing, simply cannot be taken for granted.
Other environmental issues have also proven sensitive for the Chinese public. Escalating cancer rates associated with environmental pollution from factories have incited protests in various provinces. Hydropower dam construction plans have also faced strong and sometimes violent resistance. Within the nuclear industry, a construction project in Rushan, Shandong province was halted after local petitioning. A clean safety record is thus of utmost importance if nuclear power is to retain a fair degree of public acceptance.
Next week we’ll tackle economic viability and self-reliance.
Ruiwen Lee, MPA
As the first leader who pushed for China’s nuclear power in the early 1970s, then-Premier Zhou Enlai laid out four key principles for its development: practicality, safety, economic viability, and self-reliance (适用、安全、经济、自力更生). Forty years on, as the nuclear power industry begins taking off at an accelerated pace, these four principles continue to steer nuclear power developments in China.
This two-part blog post discusses the Chinese nuclear power sector’s performance under each of these principles. This week we look at the first two of these principles, practicality and safety.
I. Practicality: The nuclear component in China’s energy mix
As with other nuclear weapon states, China’s nuclear energy industry has military roots. But given that political reasons were responsible for the country’s first nuclear power plants, China’s nuclear power capacity remained flat for decades. The early 2000s saw more power plants come online for assorted reasons dating back to the mid-1990s, but not because there was a fundamental shift in energy or nuclear policy. A country with abundant and cheap coal resources, China only began seriously considering nuclear energy as a serious alternative when power shortages hit regions nationwide in 2002.
The domestic thirst for energy to fuel the economy’s rapid growth, coupled with mounting international pressure to reduce greenhouse gas emissions as climate change negotiations gained momentum, has been the main driver of nuclear energy’s recent rise. As late as 2007, nuclear power contributed less than 2% of China’s electricity production. However, the Chinese government plans to greatly expand the country’s installed nuclear power capacity, from the present 10.8 GWe (gigawatt electrical; one gigawatt equals one billion watts) to 70 GWe by 2020.
Beyond becoming a practical source of energy for China’s booming economy, nuclear power has also emerged as a critical non-fossil fuel source set to reduce the dominance of coal in China’s energy mix, helping China achieve its green goals.
II. Safety: The Fukushima wake-up call
The March 11th Tohoku earthquake and tsunami that struck northeast Japan gave China a timely jolt from the fantasy of its economy growing at an unbridled pace on “clean” nuclear power. The previous nuclear meltdowns at Three Mile Island and Chernobyl occurred in the late 1970s and mid-1980s respectively, before China had embarked on its nuclear power expansion drive. That distance, and the quarter-century that has passed since the last nuclear crisis, may have led the Chinese nuclear authority and industry to perceive far less danger than actually exists.
However, in the aftermath of Fukushima, the immediate reaction from the Chinese government was to suspend approvals for proposed nuclear power plant projects while declaring that its plans to develop nuclear energy would not stall. A Ministry of Environmental Protection official has gone as far as to claim that “there is a guarantee for the safety of China’s nuclear power facilities” and that China “will not abandon” its nuclear power plan for “fear of slight risks.”
Before Fukushima, nuclear power enjoyed a generally positive reception from the Chinese public, given its critical role in reducing air pollution and shifting the country to a low-carbon economy. This sentiment quickly vanished as a widespread nuclear scare took hold of China, with people buying large amounts of iodized salt in the mistaken belief that it could ward off radiation drifting from Japan. The public’s reaction makes it clear that nuclear power can quickly cease being everyone’s blue-eyed boy; positive sentiment toward nuclear, no matter how long-standing, cannot be taken for granted.
Other environmental issues have proven to be sensitive to the Chinese public. Escalating cancer rates associated with environmental pollution caused by factories have incited protest in various provinces. Hydropower dam construction plans have also faced strong and sometimes violent resistance. Within the nuclear industry, a construction project in Rushan, Shandong province was halted after local petitioning. To ensure that nuclear power retains a fair degree of public acceptance, the industry’s clean safety record is of utmost importance.
Next week we’ll tackle economic viability and self-reliance.
Latin Americans: A disproportionate share of the uninsured
Veronica Guerra, MPA
Healthcare in America is designed as a multi-payer, privatized system, meaning everyone is responsible for securing his or her own health services. Excluding the young and the old, who are covered under federal or state social welfare programs, the majority of Americans get healthcare either through private insurance or through their employer. Yet some 55.6 million Americans lack access to basic health services, and even those who have coverage often find costs prohibitive: nearly half of bankruptcies in America are related to medical care payments. These statistics become paradoxical when one considers ballooning national expenditures in the health services industry.
In 2005, almost 34% of Latinos were uninsured, constituting a disproportionate share of the nation’s uninsured population. (This high rate has also exacerbated existing health inequities and produced more pronounced health disparities.) The likelihood of being uninsured is far higher among non-citizen Latinos who primarily speak Spanish. Various factors contribute to the Latino population’s high uninsured rate, including employment, employee benefits, household income, language, and citizenship status.
Immigration from Latin America to the United States has steadily risen over time. Latino citizens and non-citizens alike are less likely to have either public or private insurance coverage. Non-citizens do not qualify for various public programs, including Medicaid and the State Children’s Health Insurance Program (SCHIP), and legal immigrants often do not apply for public programs for fear of jeopardizing their residency status, or because they are ineligible during their first five years of residency. When employed, they are less likely to receive employee health benefits, and low wages prevent the purchase of private insurance. In general, minority and immigrant families have lower average incomes than white citizen families. These income differences make obtaining health benefits a challenge: as the cost of coverage rises, low incomes force many families into difficult choices between health care coverage and other basic necessities.
Language also has a less visible effect on uninsured status. Those with limited English proficiency may have limited employment opportunities and may work in low-wage sectors that do not offer employee health benefits. Furthermore, language barriers pose a challenge in completing insurance applications and may compromise the quality of health care once access is obtained.
One important recommendation that could decrease the number of uninsured is to restore public insurance eligibility for legal immigrants, either through federal legislation or at the state level through programs such as Medicaid and SCHIP. Resources should be concentrated in the Latino community to help reduce the number of uninsured and decrease medical expenses incurred through emergency care visits. Existing resources that provide care for the uninsured such as safety-net clinics should be improved and provided increased funding to meet the high demand for their services. Additionally, policies that improve the quality of jobs held by Latinos or that incentivize businesses to offer insurance to low-income workers could lead to increased offers in employer-based health insurance. Processes to apply for insurance—whether public or private—should be streamlined and accessible to those who require language assistance. It is important that government efforts be focused on decreasing health disparities, improving preventive efforts, and increasing access to health care coverage for adults and children. This is not simply beneficial to the Latino community, but to Americans nationwide.
Tags:
Field III (Domestic),
health,
Latin America
Guess Who’s Coming to Breakfast? And Lunch, and Dinner?
Jenn Onofrio, MPA
Earlier this month the Food and Drug Administration issued guidelines for food manufacturers recommending decreases in the level and frequency of marketing sugar-sweetened cereals to children. The guidelines, though voluntary, remind us again of the pervasive place of the food industry at the kitchen table.
Ask a parent who has tried to get her child to eat the boring oatmeal instead of the Cocoa Puffs before dashing out the door—the task is daunting to say the least. Food research tells us, though, that this is not entirely a matter of children’s taste buds being so normalized to sugar that they just hate oatmeal—it’s also the product of millions of dollars of targeted advertising that reminds children over and over again through television and internet commercials that Tony the Tiger is “grrrrrrreat!”
According to the Rudd Center for Food Policy and Obesity at Yale University, “food marketing to youth has been shown to increase preference for advertised foods; consumption of advertised foods; overall calorie consumption; requests to parents to purchase advertised foods (known as “pester power”); and snacking.”
The food industry has, in effect, wedged itself between parents and children.
I studied food policy this fall as part of a working group preparing recommendations for the Robert Wood Johnson Foundation’s Childhood Obesity Group. We researched and visited programs all over the country that were tackling the issue of childhood obesity. I was fortunate to be able to meet with leadership in San Francisco about the hot topic of the time, the so-called “Happy Meal ban.” What was amazing was that it wasn’t a ban at all, but rather a requirement that fast food companies could not hand out a free toy with a meal that contained over 600 calories (with more than 35% derived from fat) and more than 640mg of sodium. It was actually an incentive for companies to increase their nutritional standards: make it healthier, and add a serving of fruit and veggies. So long as they complied, they could reintroduce the toy.
But that wasn’t the argument heard ’round the world. Frustrated parents accused the government of trying to take the happy out of the meal. Parent after parent protested, “But my kid wants the Happy Meal.”
Without greater regulation of marketing standards, we’re getting our battles confused. Kids (with the help of the food industry) rebel against adults; adults rebel against government initiatives because of what their kids want. (Conveniently, it’s also what the food industry wants.) That creates a lot of noise and does little to change the fact that the childhood obesity rate has tripled since 1980. According to the Centers for Disease Control, 17% of American children are obese.
Regulation won’t cure everything. Our research found that the most effective programs implemented a mix of bans, incentives, and education. To complement this troika, it may be time to think about setting one less place at the kitchen table. Tony the Tiger, you’re out.
Editor’s Note: You can read more about this subject in “Tipping the Scales: Strategies for Changing How America’s Children Eat,” a WWS graduate policy workshop final report presented to the Robert Wood Johnson Foundation. Available here.
Tags:
Field III (Domestic),
food,
health,
marketing,
psychology,
youth
O, Canada…has voted! (In case you missed it.) And it matters.
Julian Lee, MPA
To the casual outside observer, it looks like the Canadian federal elections on May 2nd didn’t produce much change; after all, Stephen Harper is still Canada’s Conservative prime minister, as he has been for four years. But look a little closer, and you will observe something of a sea change.
The center-left Liberal Party—long dubbed Canada’s “natural governing party” due to its 69-year reign in the 20th century—was decimated, earning only 34 of parliament’s 308 seats, its worst showing in history.
The Liberal Party leader, former Oxford, Cambridge, and Harvard intellectual Michael Ignatieff, was long better-known in his adoptive home countries of England and the US than in Canada before he returned after 30 years to enter politics. The Conservative Party never ceased to attack his foreign residence and his supposed elitist arrogance; the Canadian electorate proved sufficiently parochial to buy the message. It didn’t help that Ignatieff seemed unable to communicate what he stood for. Adding insult to injury, he failed to regain his seat in parliament.
Into the left-of-center vacuum stormed the social democratic New Democratic Party (NDP). Led by the affable, energetic Jack Layton, the party achieved the unthinkable: in Quebec, it dethroned the separatist Bloc Québécois, which had held a stranglehold on the sovereigntist left vote there since 1993. The Bloc plunged from 49 seats to a mere 4, enabling the NDP to become the second-strongest parliamentary force for the first time in its history.
In another first, in Mr. Harper’s third successive electoral victory, voters have endowed him with a parliamentary majority. Granted, only 39.6% of ballots were cast in his favor, but in Canada’s first-past-the-post electoral system, a minority of votes can give you a majority of seats.
The big deal: Changing values
As I mentioned, all of this amounts to a sea change in Canadian politics. With his new-found majority, Harper will no longer need to rely on compromise in order to push through his agenda. Though in his acceptance speech, he promised to “be the government of all Canadians,” his record to date casts some doubt on that claim. Harper has consistently ignored the views of groups that do not form part of his power base even while forming a minority government.
More importantly, however, this election points to shifting Canadian values. The political left has been outraged by Mr. Harper’s autocratic style of governing: he suspended parliament when a parliamentary investigation into the abuse of Afghan detainees transferred from Canadian custody became uncomfortable for his government; he slapped a gag order on bureaucrats, barring them from speaking to the press without his office’s authorization in order to control his government’s messaging; and his government was the first ever to be found in contempt of Parliament for withholding costing information for several big-ticket expenditures. Yet none of these episodes gained traction with an electorate that was focused on an economy humming along better than in most developed countries.
The Liberal Party used to be able to count on the immigrant vote, but that has shifted to the Conservative Party. Whereas immigrants used to be predominantly European and settle in the east, they now come mainly from Asia and more of them go west. This development has gone hand-in-hand with an economic shift: Canada’s Ontario- and Quebec-based manufacturing base has been in decline for years, whereas western Canada has done well with its natural resources, services, and Asian outlook.
As the economy has gone west, so has politics. Stephen Harper hails from Alberta, often nicknamed “Canada’s Texas” by Easterners not only for its oil wealth and cattle ranches, but for its small-government, conservative political bent. His power base lies there, but crucially, he has convinced many in the former Liberal heartland of Ontario that his vision of a more limited government is the way of the future.
The Canada that emerged on May 2nd is questioning the role of government and the social-democratic state it built during the 20th century. This is the most important lesson from this year’s elections. It will take the left-of-center parties years to adapt to this new reality.
A Conservative future?
But it won’t all be smooth sailing for the Conservatives. The party is still more conservative than the average voter and will have to resist the temptation to govern roughshod and alienate newly acquired supporters. The NDP will have to learn how to be an effective opposition with a large caucus of neophytes, while representing Quebec for the first time and probably moving to the center in an effort to widen its appeal. As for the Liberal Party, it will have to pick up the pieces and start rebuilding its identity from scratch.
As the two leftist parties figure out their new roles, it would be unsurprising if Canada were in for more than one Conservative term. That would suit Mr. Harper well. By his own admission, his goal is to shift Canada permanently rightward, positioning the Conservatives as Canada's new “natural governing party."
Friday, May 6, 2011
Islamophobia and the etymological roots of the King Hearings, Part III: The public policy implications of language use
Editor’s Note: This is the third of a three-part series on Islamophobia in America. Part I discusses the premises and implications of the King hearings. Part II examines the emerging semantics of Islam and Muslims in the West.
Nazir Harb, MPA
With the death of bin Laden, the world faces another opportunity to re-evaluate status quo assumptions and modes of operation. But this is also an important time to think critically about how we use language to describe the events occurring in the Middle East and the “War on Terror.” Obama was right to describe bin Laden not as a Muslim leader but as a mass murderer of Muslims: according to several studies, he orchestrated the killing of 38 times more Muslims than non-Muslims in his lifetime. Among bin Laden and al-Qaeda’s most insidious contributions to the targeting and vilification of Muslims everywhere is their malicious manipulation of Islam’s idioms and phrases, and their co-optation and distortion of sacred Quranic concepts. Unfortunately, as I discussed last week, post-9/11 anti-Islamic sentiment led many to unwittingly adopt al-Qaedaists’ language and its corresponding ideology. That language perpetuates both the network’s self-portrayal as defenders of Islam and the notion that Islam is a geo-political entity rather than a world religion, a religion that is not only devoid of the terrorist organization’s political agenda but in fact contravenes it at every turn.
In short, I argue, language matters a lot. The way we talk about Islam, Muslims, and Arabs in the United States affects people in the Middle East in real ways, not only insofar as the United States is participating in a third major combat operation in a Muslim country (Libya), but also on a social and cultural level. The way English-language media describe events and people affects Arabic media, and, critically, innocent lives are often lost in translation.
The adoption of English media norms by Arabic media
Research I conducted on English and Arabic media prior to 2011 indicated that certain politically charged discourse items in Western English-language media, ranging from “Islam” and “Muslim” to “terrorism,” “threat,” “violence,” or “fundamentalism,” tended to move in and out of Western political discourse following crises in which, whether true or not, Muslims or al-Qaeda were suspected of inciting or carrying out violent acts. After their frequencies peaked, these terms did not always fall back to pre-crisis levels, but they generally declined significantly until another event triggered them. The same discourse items behaved differently in Arabic media: once a term became politically charged in the context of a crisis (e.g., a terrorist incident) and entered Arabic political discourse through popular media (Al-Jazeera, Al-Arabiyya, etc.), its usage tended to escalate from that point onward with no abatement, in many cases growing beyond the scale of the original charts and graphs.
Since 9/11, the use of terms relating “Islam” and “Muslims” to “terrorism” or “extremism” in Western English-language discourse has been steadily rising; by now the words “Islamist” and “threat” are so conflated that “Islamist” has come to imply a threat in its own right. In Arab media there is a time lag as these terms creep into the Arabic-language environment, initially via translation and secondary-source references. Phrases that conflate “Islam” with words like “terror” (irhab), “terrorism” (irhabiyya), and “violence” (‘onf) had an average frequency of 8.27 per year before 9/11. After 9/11, the rate escalated rapidly, especially after the invasion of Afghanistan and the beginning of the Iraq war, with another peak after January of this year and the onset of the Arab revolutions across the Middle East. On average, since 9/11, the conflation of Islam with terror and violence in Arab media has risen to the unprecedented level of 1,222.57 per year.
Why? What is the difference between the Arabic-language political environment and its English counterpart? Why might a post-crisis politically-charged word’s or phrase’s frequency not just linger on but continue to rapidly increase in the former environment but not the latter?
How do ideas become paradigmatic?
An April 2011 study by the Wellcome Trust Centre for Neuroimaging at University College London reported that new word associations form ideas in the mind by physically creating synaptic connections through the process of potentiation—a finding that challenges prior assumptions about the brain’s ability to learn new ideas after a certain age. This cerebral plasticity, the brain’s ability to learn and change, was found in adults over the age of 18. The Centre found that new connections are triggered by repeated novel sensory experiences, which include new combinations of words.
The fact that Arab media are repeating phrases that associate Islam with violent extremism and terrorism should concern U.S. policymakers. While al-Qaeda recruiting numbers remain very low in absolute terms, there is a correlation between new recruits and the increased incidence of new phrasings that cause neurological associations between the words “Islam” and “Muslim” with “violence,” “terrorism,” and “al-Qaeda”. Though U.S. foreign policy in the Middle East remains overwhelmingly the most commonly cited motivation for violent extremism, language plays an important role in individual and collective national identity formation in the Middle East. (Suleiman 2003) This factor to date has been largely ignored—at our peril.
While many argue that an integral part of the “War on Terror” is combating al-Qaeda’s narrative, it is not easy to come up with a counter-narrative when many do not understand the master narrative of the Arabic-speaking and, for that matter, non Arabic-speaking Muslim world, let alone its complicated, radical offshoots—or what socio-linguists would call a “restricted narrative,” i.e. the particular idiom of al-Qaeda and its affiliate networks that portends to re-appropriate Islam’s vocabulary such as “Ummah” (community) and Quranic spiritual concepts like “Jihad” (striving for self-improvement and closeness to God). Countering this narratology requires considerable expertise. Such a campaign may not be possible in the near future given the low levels of even the most basic Islamic literacy among average Americans, policymakers, and leaders in the “War on Terror” the world over. But we can help to change this by educating ourselves. And the first step can be as easy as how you spell the word “Muslim.”
Get involved! Please sign our petition to stop the targeting of American Muslims: http://www.ipetitions.com/petition/hearings/
Nazir Harb, MPA
With the death of bin Laden, the world faces another opportunity to re-evaluate status quo assumptions and modes of operation. However, this is also an important time to think critically about how we use language to describe the events occurring in the Middle East and the “War on Terror.” Obama was right to describe bin Laden not as a Muslim leader but as a mass murderer of Muslims, who, according to several studies, orchestrated the killing of 38 times more Muslims than non-Muslims in his lifetime. But among bin Laden and al-Qaeda’s most insidious contributions to the targeting and vilification of Muslims everywhere are their malicious manipulation of Islam’s idioms and phrases and their co-optation and distortion of sacred Quranic conceptions. Unfortunately, as I discussed last week, post-9/11 anti-Islamic sentiment led many to unwittingly adopt al-Qaedaists’ language and its corresponding ideology, which perpetuates both the network’s self-portrayal as defenders of Islam and the notion that Islam is a geo-political entity rather than a world religion, one that is not only devoid of the terrorist organization’s political agenda but in fact contravenes it at every turn.
In short, I argue, language matters a lot: the way we talk about Islam, Muslims, and Arabs in the United States affects people in the Middle East in real ways, not only insofar as the United States is participating in a third major combat operation in a Muslim country, Libya, but also on a social and cultural level. The way English-language media describe events and people shapes Arabic media, and, critically, innocent lives are often lost in translation.
The adoption of English media norms by Arabic media
Research I conducted on English and Arabic media prior to 2011 indicated that certain politically charged discourse items in Western English-language media, ranging from “Islam” and “Muslim” to “terrorism,” “threat,” “violence,” and “fundamentalism,” tended to move in and out of Western political discourse following crises in which, rightly or wrongly, Muslims or al-Qaeda were suspected of inciting or carrying out violent acts. After peaking, the frequencies of these terms did not always fall back to pre-crisis levels, but they generally went through a period of significant decline until another event triggered them. In Arabic media, however, once these same discourse items became politically charged in the context of a crisis (e.g., a terrorist incident) and entered Arabic political discourse through popular outlets (Al-Jazeera, Al-Arabiyya, etc.), their usage tended to escalate from that point onward with no abatement, in many cases outgrowing the scales of my charts and graphs over time.
Since 9/11, the use of terms relating “Islam” and “Muslims” to “terrorism” or “extremism” in Western English-language discourse has been steadily rising; by now the words “Islamist” and “threat” are so conflated that “Islamist” has come to imply a threat in its own right. In Arab media there is a time lag as these terms creep into the Arabic-language environment, initially via translation and secondary-source references. Phrases that conflate “Islam” with words like “terror” (irhab), “terrorism” (irhabiyya), and “violence” (‘onf) had an average frequency of 8.27 per year before 9/11. After 9/11, the rate escalated rapidly, especially after the invasion of Afghanistan and the beginning of the Iraq war, and it peaked again after January of this year with the onset of the Arab revolutions across the Middle East. On average, since 9/11, the conflation of Islam with terror and violence in Arab media has risen to the unprecedented level of 1,222.57 per year.
Why? What is the difference between the Arabic-language political environment and its English counterpart? Why might a post-crisis politically-charged word’s or phrase’s frequency not just linger on but continue to rapidly increase in the former environment but not the latter?
How do ideas become paradigmatic?
An April 2011 study by the Wellcome Trust Centre for Neuroimaging at University College London reported that new word associations form ideas in the mind by physically creating synaptic connections through the process of potentiation—a finding that challenges prior assumptions about the brain’s ability to learn new ideas after a certain age. This cerebral plasticity, the brain’s ability to learn and change, was found in adults over the age of 18. The Centre found that new connections are triggered by repeated novel sensory experiences, which include new combinations of words.
The fact that Arab media are repeating phrases that associate Islam with violent extremism and terrorism should concern U.S. policymakers. While al-Qaeda recruiting numbers remain very low in absolute terms, there is a correlation between new recruits and the increased incidence of new phrasings that forge neurological associations of the words “Islam” and “Muslim” with “violence,” “terrorism,” and “al-Qaeda.” Though U.S. foreign policy in the Middle East remains overwhelmingly the most commonly cited motivation for violent extremism, language plays an important role in individual and collective national identity formation in the Middle East (Suleiman 2003). This factor has so far been largely ignored—at our peril.
While many argue that an integral part of the “War on Terror” is combating al-Qaeda’s narrative, it is not easy to come up with a counter-narrative when many do not understand the master narrative of the Arabic-speaking and, for that matter, non-Arabic-speaking Muslim world, let alone its complicated, radical offshoots, or what socio-linguists would call a “restricted narrative”: the particular idiom of al-Qaeda and its affiliate networks, which purports to re-appropriate Islam’s vocabulary, such as “Ummah” (community), and Quranic spiritual concepts like “Jihad” (striving for self-improvement and closeness to God). Countering this narrative requires considerable expertise. Such a campaign may not be possible in the near future given the low levels of even the most basic Islamic literacy among average Americans, policymakers, and leaders in the “War on Terror” the world over. But we can help to change this by educating ourselves. And the first step can be as easy as how you spell the word “Muslim.”
Get involved! Please sign our petition to stop the targeting of American Muslims: http://www.ipetitions.com/petition/hearings/
The challenges facing an expanded Japanese role in US-Japan security alliance
Ani Akinbiyi, MPA
Following the conclusion of World War II, Japan became one of the United States’ most important global allies, and the US-Japan security alliance has since grown increasingly important to maintaining the stability of East Asia. And yet, despite its pivotal role in the US’s global security strategy, there have recently been calls from within the US policymaking establishment to reevaluate America’s security partnership with Japan. Officials claim that Japan has not pulled its weight within the alliance, and that the Japanese have come to take the American security umbrella for granted. They conclude that Japanese unwillingness to take on more responsibility within the alliance indicates their lack of commitment to the partnership, and that perhaps the US should consider downgrading its military ties to the island nation. However, this view fails to consider the possibility that Japan is simply unable to assume greater responsibility within this relationship because it is bound by domestic political realities and because the two countries may not share the same objectives for the alliance itself. In essence, Japan has yet to decide, one way or the other, what its defense posture will be and what role the US security alliance will play in that strategy.
Since the end of World War II, the Japanese government has had a difficult time galvanizing public support for an increased role for its military, in self-defense matters or otherwise. After the war, not only did the people of Japan have to deal with the consequences of a nuclear attack, they also had to come to terms with the aftermath of a US firebombing campaign in which, according to former Secretary of Defense Robert S. McNamara, “50-90% of the people in 67 Japanese cities” were killed, and more than two-thirds of those cities were between 40-99% destroyed. This profound level of physical and human destruction left the nation with a deep-seated mistrust of the military – which many Japanese blamed for taking the country to war – and a sense that their government did not have the capacity to adequately control it.
This sense of mistrust has lingered over the course of the 60 years since the post-war constitution came into effect. During that time, attempts to increase the range and scope of military responsibility have been met with significant popular resistance, and what has emerged is a society where anti-militarism is so profound that successive Japanese governments have gone beyond constitutional restrictions to impose additional restraints on Japan’s military potential, such as banning the pursuit of nuclear weapons, the development of power-projection capabilities, and arms exportation. Thus, in the face of such popular support for a debilitated military, it is no surprise that the democratically-elected Japanese government has had trouble mustering the political will necessary to make changes to its military structure.
Japan’s squeamishness about militarism is also evident in its limited overseas operations, which are approved only on a case-by-case basis. In early 2010 the newly-empowered Democratic Party of Japan cancelled its Indian Ocean refueling mission, which had been deployed in support of the US’s war in Afghanistan. While shifts in foreign policy are expected with the introduction of a new administration, such flip-flopping gives the impression that Japan lacks a vision for its military and makes Tokyo’s allies and enemies alike question why Japan seems so intent on remaining defensively castrated.
This is further baffling when one looks at the very real threats in Japan’s front yard. Japan may be taking for granted America’s willingness to remain a regional protector indefinitely, gambling that as long as the US protects its own interests in the region Japan’s interests will also be served. While this gambit has proved successful in the past, there is no guarantee that the US’s strategic configuration will remain unchanged, especially as Washington tries its best to avoid military postures that antagonize China and increase the likelihood of entanglement with its second-largest trading partner.
Given this aversion to military assertiveness, it is no surprise that there is a disconnect between what Washington wants from its alliance with Japan and what Tokyo is able to provide. The US wants to see the Diet contribute more to Japan’s homeland defense and increase its out-of-area military commitments. Unfortunately, it seems that the US would do well to have Japan focus solely on its domestic defense, as the Japanese parliament has shown diminished capacity to successfully juggle both security objectives. The Liberal Democratic Party, Japan’s former ruling political party, tried to use the success of recent overseas engagements to bolster arguments for an expansion of the Self-Defense Forces mandate. However, it was unable to turn overseas successes into the concrete political will necessary to make the needed changes at home. Exacerbating this deadweight loss is the fact that these missions have only served to siphon critical resources away from the development of Japan’s domestic defense capabilities.
In sum, Japan’s self-defense policy appears directionless, with policymakers seemingly not knowing where best to focus the government’s efforts. With a new political party in control, an emboldened China, a sabre-rattling North Korea, and an increasingly impatient US, the Japanese need to get their act together and demonstrate some decisiveness about their desired defense posture. Currently it is a hodge-podge of components that together do not make an impressive statement. The new government must also find a way to move the Japanese citizenry past the fear and pacifism that shackle Tokyo’s ability to increase its military capability. Only with these two obstacles behind them will the Japanese be able to support the US-Japan alliance in a meaningful way.
A return to normalcy: Reflecting on 9/11 in a post-Osama world
Jacob Hartog, MPA
Here’s how I know I’m the oldest MPA1: When 9/11 happened, I was already teaching. In fact, I was administering a beginning-of-the-year standardized test to my 6th grade homeroom when the aide who was handing out the bubble sheets to all the classrooms told me someone had crashed a plane into the World Trade Center, and that there were reports of a second plane as well. The kids overheard the aide’s words.
“Who crashed a plane, Mr. Hertawg?”
“Let’s have a moment of silence,” I said, pretty sure that was what I was supposed to do in these situations.
There were some uncomfortable giggles and a few restless seconds of silence, and then I kept handing out the bubble sheets.
From there, the day got more surreal. Parents began arriving around 10AM to pick up their kids, and the school—a huge and disorganized block of cement in the South Bronx, with 1,500 kids—became still more disorganized. Pretty soon, a long line of anxious parents snaked down the second-floor hallway, waiting to be told where their child was. When that situation began degenerating further, the main office began calling out name after name over the PA system, and the hallways began filling still more.
Finally, the teachers were told one by one to bring their classes down to the auditorium, where they marched past parents standing along the walls, who were supposed to point out their children like suspects in a lineup. When the last kids were dismissed at 3:20PM, rushing unafraid into a bright September day that had suddenly turned ominous for adults, I made my way home, walking from the Bronx to my apartment in Washington Heights. The subways and buses weren’t running, and the streets gradually filled with pedestrians walking in the middle of the road. As we crossed the bridge over the Harlem River into Manhattan, several people pointed at the sky, insisting they could see smoke in the distance of Downtown.
I was always surprised by the public reaction to 9/11, which seemed to be more bellicose the further one traveled from the attacks. Yes, in New York as anywhere else in the country there was anger as well as grief. But for those of us in New York, 9/11 represented a frightening answer to a question that anyone who lives in the City must inevitably pose: whether in living amid that buzzing and bustling confusion one sacrificed a crucial measure of control over life; whether the thin veneer of civility by which packed subway cars, crowded classrooms, and darkened streets become tolerable would one day vanish, revealing savagery and violence. My departure from normalcy was of course but the merest side-trip, compared to those of thousands of others, who returned that night to empty and grieving homes.
In the days after 9/11, perhaps inevitably, our leaders embraced the rhetoric of violence and revenge. But what was needed then was not cowboy swagger but civility, kindness, decorum: not just the kindness of the shopkeepers who had lined Broadway to pass out bottles of water to streets of weary and frightened commuters, nor just rightfully honoring the heroism of fallen rescue workers, nor just the admirable patriotism of thousands who volunteered for the armed forces in the following months; but also a belief in the power of laws, process, normalcy: that criminals can be brought to justice through the courts, that everyone can go back to school and work, and that the trains can run on time.
This is perhaps why the news of Osama bin Laden’s death fills me with ambivalence. It comes after a signal failure by President Obama—a man who believes as much as any in the power of laws and the limitations of violence—to close Guantanamo, to bring Khalid Sheikh Mohammed to trial in a civilian court, and to fully end unlawful interrogation practices and infringements on civil liberties. Perhaps none of these goals would have much impressed Osama bin Laden, and that is just why we should have embraced them. Instead, we are happy to crow over an enemy’s bloody death, a reaction bin Laden would have very much understood.
Any sane answer to Osama’s legacy of bloodshed, now as on September 11th, 2001, must start with a commitment to the power of process and normalcy. This commitment to the boring things in life is, perhaps, why we’re in a policy program: a belief that the war, the election, the revolution is never as important as the society in which we wake up on the morning after. A cost of such realism is, perhaps, a limitation to our political passions and a regulation of our hatreds and adorations. Ironically, the same weekend that Osama bin Laden was successfully assassinated, an unsuccessful NATO attempt on the life of Moammar Gadhafi instead killed the Libyan leader’s son as well as his three young grandchildren. Whether we are speaking of Obama, bin Laden or Bush, it is not leaders that ruin or redeem the world but the patterns of action they leave behind.