Wednesday, September 25, 2013

Minneapolis will likely see increased voter turnout in November

Today's blog appears in MinnPost.

Predicting voter turnout is always difficult, even for political scientists. The same will be true this November in the Minneapolis elections. But those who contend that confusion over ranked choice voting (RCV), or the fact that 35 candidates are running for mayor, will depress voter turnout had better think again. Instead, an examination of the factors affecting voter turnout, along with evidence from jurisdictions with competitive multi-candidate races, suggests Minneapolis could see a larger than expected turnout this fall.
    As I point out in my forthcoming book Election Law and Democratic Theory, many variables influence voter turnout. Party affiliation and the intensity of partisan attachment are two: being a member of a political party, and the strength of that loyalty, both affect turnout. Other demographic factors such as class, race, and religion are important as well; religious affiliation, especially regular religious attendance, correlates positively with turnout. Age matters too, with middle-aged voters more likely to vote than those under 30. Turnout in the United States is also greater in presidential than in non-presidential election years. Finally, there is good data suggesting that better educated and informed voters are more likely to vote, and that campaigns better covered by the media see higher rates of turnout.
    But two other factors are significant drivers of voter turnout: the appearance of a close or competitive election, and candidate choice. There is powerful evidence that voters are more likely to vote when there is a reality or perception that their vote matters. All things being equal, voters like competitive horse races and will show up when they think their vote will make a difference. Relatedly, in competitive elections candidates and parties generally do a better job of identifying and mobilizing voters to turn out, acting on the same belief that every vote matters. Second, individuals are more likely to vote when they feel they have a real choice. Voters who cannot find candidates they wish to vote for, or who are not excited about their choices, are less likely to vote than voters who perceive a real choice.
    Anecdotal and hard evidence support the notion that competitive elections and candidate choice are powerful forces affecting turnout. There is evidence that the Electoral College depresses turnout in non-competitive presidential states. Generally, close elections have higher turnout than lopsided victories. Around the world, multi-candidate races (more than two viable choices) yield higher turnout than non-competitive or two-person races. In 2010, those of us who predicted that the shift to an August primary would decrease turnout, as it had in other states that made the same change, instead saw the close DFL gubernatorial race demonstrate how candidate choice and competition increase turnout. Finally, those who have worked on real campaigns (including me) know that candidates and parties hustle more for voters when an election is perceived to be close.
    Now apply all this to the coming Minneapolis elections. All indications are that the mayor's race is very competitive and that there is no clear favorite. Perhaps four or more individuals have a real chance to win, each appealing to different constituencies. This means candidates are already working hard to mobilize their voters, and Minneapolis residents have several choices for mayor. These factors alone should positively affect turnout. But now throw RCV in, and the calculations change. Some might argue that RCV and the appearance of too many choices will confuse voters and depress turnout; only elitists lacking faith in the people would make such an argument. Instead, RCV, as it was designed to do, enhances voter choice and creates real possibilities that candidates who were the second or third choice of voters might win. As is happening in Minneapolis, this pushes candidates to work harder to convince voters to consider them a second choice, and it encourages voters to think, in a race that seems even more competitive than usual, that their vote will matter. Bottom line: there is a reason to vote.
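For readers unfamiliar with how second and third choices come into play, below is a minimal sketch of an instant-runoff count, the tallying idea behind RCV. It is illustrative only: the candidate names and ballots are hypothetical, and Minneapolis's actual rules (tie-breaking, exhausted ballots, multi-seat races) are more detailed.

    from collections import Counter

    def irv_winner(ballots):
        # Minimal instant-runoff count: repeatedly eliminate the candidate with the
        # fewest first-choice votes and transfer those ballots to each voter's next
        # surviving choice, until someone holds a majority of the active ballots.
        candidates = {c for ballot in ballots for c in ballot}
        while True:
            tallies = Counter()
            for ballot in ballots:
                for choice in ballot:
                    if choice in candidates:   # first choice still in the race
                        tallies[choice] += 1
                        break
            active = sum(tallies.values())
            leader, votes = tallies.most_common(1)[0]
            if votes * 2 > active or len(candidates) == 1:
                return leader
            candidates.remove(min(candidates, key=lambda c: tallies.get(c, 0)))

    # Hypothetical ballots: each list is one voter's ranking.
    ballots = (
        [["A"]] * 4 +           # 4 voters rank only A
        [["B", "A"]] * 3 +      # 3 voters: B first, A second
        [["C", "B"]] * 2        # 2 voters: C first, B second
    )
    print(irv_winner(ballots))  # "B" trails A on first choices but wins on transfers
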
    Four years ago Minneapolis used RCV for the first time and turnout was abysmal. Critics labeled RCV the reason. I was asked by the City of Minneapolis back then to evaluate the implementation of RCV. My report indicated that the main reason for low voter turnout was a perception that the mayor's race was not competitive; 2009 was not a good test of RCV. Moreover, there was little or no evidence that RCV depressed turnout among the poor or people of color. Yes, some areas of the city did show signs of voter confusion, but again there was no evidence that RCV depressed turnout. Evidence from the use of RCV elsewhere in the United States and around the world substantiates that conclusion.
    Overall, given the low voter turnout and perceived lack of competitive elections in 2009, it is almost too easy to predict that turnout in 2013 will increase across the board. Many factors will drive this, including the hard competition among many fine candidates in a close election where RCV could determine the outcome of the mayor's race.

Saturday, September 21, 2013

Let Them Eat Cake: The Compassionate Conservatism of Paul Ryan and the Republican Party

    Michael Harrington's 1962 The Other America told the story of two countries. One was a country of affluence, where people had enough to eat, a roof over their heads, health insurance, and the prospect of a good life. The other was a country where a quarter of the population lived at or below poverty, often homeless, lacking health insurance, and with prospects for a good life that were dim at best. Both countries were the United States. Unfortunately, 50 years later not much has changed, with America still a tale of two countries: rich and poor, hopeful and hopeless. At least this is the conclusion of a recent Census Bureau study on poverty in America. Yet despite this news, Paul Ryan and the Republicans want to gut food stamps and defund Obamacare. The great irony is that the people they would hurt the most are their own constituents.

    First, let's look at the numbers. This past week the Census Bureau released its report Income, Poverty, and Health Insurance Coverage in the United States: 2012. Among its major conclusions were that the number of individuals in poverty in the United States had not changed much in the last year and that the number of individuals without health insurance had remained approximately the same. We have made little progress in terms of economic recovery for most Americans since the crash of 2008. No surprise here. But two other points are more striking. First, the report offers some historical benchmarks about poverty in America. Second, it provides a picture of who is in poverty.

    Consider the people of Michael Harrington's Other America. When that book was published in 1962, 18% of the population, or 37 million Americans, lived at or below the poverty level. For children (under 18), 23% were in poverty. Fifty years later, in 2012, 15% of the population, or 46.5 million Americans, lived at or below the poverty level. Today 21.8% of children (under 18) are in poverty, with a whopping 24.4% of children under the age of six living at or below the poverty line.
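A quick back-of-the-envelope check, using only the figures quoted above, shows how the poverty rate can fall while the number of poor people rises: the population implied by those figures grew from roughly 206 million to roughly 310 million (implied totals from the quoted numbers, not official census counts).

    # Back-of-the-envelope arithmetic using only the figures quoted above.
    # A lower poverty *rate* can still mean more poor *people* because the
    # population grew; implied totals are rough, not official census counts.
    poor_1962, rate_1962 = 37_000_000, 0.18
    poor_2012, rate_2012 = 46_500_000, 0.15

    pop_1962 = poor_1962 / rate_1962   # ~206 million implied
    pop_2012 = poor_2012 / rate_2012   # ~310 million implied

    print(f"Implied population: {pop_1962/1e6:.0f}M (1962) vs {pop_2012/1e6:.0f}M (2012)")
    print(f"More people in poverty despite a lower rate: +{(poor_2012 - poor_1962)/1e6:.1f}M")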

Think about it: the richest country in the world, and nearly a quarter of our children are living in poverty. We look like a third world country. We are condemning a quarter of our population to a bleak future, especially when we know from other studies that rates of economic mobility in the US have essentially frozen. By that I mean few people at the lowest income levels ever move out of them; they are condemned to intergenerational poverty. We know that they live in neighborhoods with few services, bad schools, and high crime. We have written off a quarter of our population right from the start.

But look beyond the children. Fifteen percent of the population remains below the poverty line, and more people are poor today than 50 years ago. Representative Paul Ryan (and many Republicans) look at 50 years of post-New Deal and Great Society social welfare spending, say the trillions spent have been a wasteful sinkhole that has not succeeded, and therefore want to end it. They see a glass half empty and want to throw out the water with the glass. They ignore that poverty was cut dramatically under the Great Society programs of the 60s until Nixon cut them back and our society began a now nearly two-generation reversal on helping the poor.

At least since Reagan we have concentrated tax cuts to benefit the rich at the expense of the rest of us, and the redistributive economic policies since 1980 have largely shifted money from the poor to the affluent. This Census Bureau report, as well as others, including those by the Congressional Budget Office and many other organizations, points to an America today with the greatest concentration of wealth and income since the 1920s. That concentration has grown dramatically in the last few decades, helped by the Reagan- and Bush-era tax cuts.

The point? Social welfare programs have failed to reduce poverty both because they have themselves been cut and because they have been undermined by more powerful, inegalitarian tax and economic policies. We have made little progress in 50 years not because we tried to help the poor and failed, but because we either gave up or did not do enough.

Consider a few other facts found in the report. During the three-year period from 2009 to 2011, approximately 31.6 percent of the population had at least one spell of poverty lasting two or more months. The median household income in the US was $51,017 in 2012, down from the peak of $56,080 in 1999. Median household incomes are essentially what they were in 1989. Few Americans have gained any ground in the last quarter century, with a steady slide that began more or less with the Bush presidency in 2000. Finally, approximately 48 million individuals are without health insurance.

We are a poorer and less equal nation now than in 2000. In the face of all this, Paul Ryan and the Republicans want to cut food stamps and funding for Obamacare. This merits awarding them the “Marie Antoinette Let Them Eat Cake Award” for social compassion and humanity.

But lest you believe that their policy choices hurt only Democrats, think again. In raw numbers, more white Americans are in poverty than people of color. The highest poverty rates in America are in the South and rural America, the heart of the GOP base. For the South, the poverty rate remained unchanged at 16.5 percent in 2012, while the number in poverty increased to 19.1 million, up from 18.4 million in 2011. In 2012, the poverty rate and the number in poverty for the Northeast (13.6 percent and 7.5 million) and the Midwest (13.3 percent and 8.9 million) were not statistically different from 2011 estimates. Inside metropolitan statistical areas the poverty rate was 14.5 percent in 2012, while in rural America it was 17.7 percent. Finally, the Northeast had the lowest uninsured rate in 2012 at 10.8 percent. The uninsured rate for the Midwest was 11.9 percent; for the West, 17.0 percent; and for the South, 18.6 percent.
Overlay electoral maps showing the regions where voters supported Republicans, and there you will find the highest poverty and uninsured rates and the lowest household incomes. Either Republicans are screwing over their own constituents, or those who most need the help are for some reason supporting candidates and policies that are least supportive of their own interests.

Red and Blue America is a tale of two countries. The United States, especially Red Republican America, is paradoxically much of the other America that Michael Harrington described. Yet it is the America that fights hardest against helping itself and others. Whatever happened to compassionate conservatism? I guess "Let them eat cake" is their new social philosophy.

Monday, September 9, 2013

Law, Ethics, and Syria: What Should We Do?



Law matters. But the law is not the sum total of what matters when it comes to asking the question “What is the right thing to do?”, be that in our personal or professional lives. Often obedience to the law, asking if doing something is legal, is the starting point for evaluating conduct. But there is a long lineage of thinkers, from St. Augustine and Henry David Thoreau to Martin Luther King, Jr., who would point out that unjust laws are not morally binding and that in some cases disobeying them is the right thing to do. Conversely, mere conformity to the letter of the law does not necessarily mean one is acting ethically or that following the rule is the right thing to do. More is required. This is also true when it comes to the decision by President Obama to take military action against Syria.
            Syria's use of chemical weapons raises problems for the United States. Specifically, any use of force raises three questions: 1) what presidential authority exists to act; 2) what is distinct about Syria; and 3) what is the end game for the US? All three questions have to be answered satisfactorily before the United States takes any action.
           
Presidential Authority to Act
            Obama wants congressional approval to use force, but he has not ruled out acting without it. What constitutional authority does President Obama have to justify military action in Syria? This is not clear. Domestically, the two sources of legal authority he can reference are the Commander-in-Chief clause of Article II of the Constitution and the 1973 War Powers Act.
            It is not clear how the Commander-in-Chief clause supports this action. The constitutional framers intended for Congress to be the dominant branch when it came to military and perhaps foreign affairs. Article I textually commits to Congress the power to declare war along with a host of other powers related to the military. Here Congress has not declared war, and the situation is unlike the aftermath of 9/11, when Congress enacted the Authorization for Use of Military Force that (arguably) gave Bush the authority to deploy troops in Afghanistan. At least Bush had some legal authority to wage a war on terrorism, no matter how tenuous.
            If Obama is relying on his Commander-in-Chief powers, it is hard to see how they come into play. Syria has not attacked the US, it is not threatening vital interests, and it is not otherwise doing something that directly conflicts with American national security. To contend that the Commander-in-Chief clause gives Obama unilateral authority to use force here is no different or better than Bush-era assertions by advisors such as John Yoo that the president had inherent constitutional authority to act. He does not.
            There is no extra-constitutional authority for presidents to act. This was supposedly another lesson learned from Vietnam: presidents should not unilaterally drag the country into war. LBJ and then Nixon abused their presidential powers when it came to Vietnam. Disputes over presidential power to deploy troops were supposedly addressed by the War Powers Act in 1973, which placed limits on presidential power to deploy troops for limited purposes, subject to consultation with and notification to Congress that the Act was being invoked. Here again, Obama is not invoking the Act in asking Congress to approve. Overall, there seems to be little authority for the president to act absent congressional approval.

What is distinct about Syria?
            But even if Congress does approve, the second problem is what is distinct about Syria. Assume for now that Obama has the constitutional authority to act. Why Syria and not North Korea, Iran, Sudan, or Zimbabwe? In all of these countries we have repressive dictators or regimes abusing the rights of their people. Should the US use force in all of them to oust dictators? If mere oppression were the justification for action, the US would be busy acting around the world. Moreover, if mere oppression were enough justification, the US should have ousted Assad years ago. Something more is required.
            First, at the international level, there is the question of authority to act. Article 2, Section 7 of the United Nations Charter declares: “Nothing contained in the present Charter shall authorize the United Nations to intervene in matters which are essentially within the domestic jurisdiction.” Is not what is happening in Syria a domestic matter and none of our business? Maybe, but the legal case for intervention has to be made. The UN allows for this under international law through resolutions and Security Council action; with a Russian veto, the chances for that type of authorization are nil. Obama appears to want to justify intervention under international law banning the use of chemical weapons, or by invoking some other principle of humanitarianism, but again the justification is not obvious.
            But even if the United States can find justification under international law to act, there is still another question: why should the US act, potentially alone? Again, Syria is less of a threat to the US than Iran or North Korea. From a strategic point of view it is hard to justify intervention. North Korea and Zimbabwe are equally brutal regimes; why not them? Perhaps the difference is that in Syria there is a popular movement to oust Assad, and that is why we are acting. Maybe the issue is the prospect of success in ousting him. All of these are possible answers, yet it is difficult to see a principled argument that distinguishes acting in Syria from acting in the other countries, unless of course it is the use of chemical weapons. Similar arguments about weapons of mass destruction led Bush into Iraq, and they are part of why the US is viewed as a hypocrite when it supports or placates some repressive regimes.

What is the End game?
            The final troubling issue is the end game for Obama. What are our goals and what are we really trying to accomplish? Is it simply to punish Assad for using chemical weapons? Is it because he has killed 50,000 of his people? Do we hope that military action will oust him, and if so, what are we prepared to do next? What is the definition of success, and what plans does the country have to exit from intervention? These are all important questions that need to be asked. Even if the US merely undertakes drone strikes or other limited action, it needs to be clear about what it hopes to accomplish and be prepared for what might result.
            During the first Gulf War, General Powell espoused a doctrine that has been named after him. The Powell Doctrine, supposedly based on what we learned from Vietnam, held that US military action should be evaluated by asking whether the national interests at stake are clearly defined, whether the goals of intervention are clear, whether there is international support for action, what the alternatives and risks to military action are, and what the end game and exit strategies would be. Measured against the Powell Doctrine, the recent comments by Secretary of State Kerry and President Obama do not make clear that they have adequately answered these questions.
            What to do with Syria is a difficult question.  But it is a terrific case study in decision making and in demonstrating how questions about legality are only the starting point in determining what is the right thing to do.

Monday, September 2, 2013

Remembrance of Labor Days Past: The War Against Workers



Labor Day just is not what it used to be.
            Labor Day is supposed to be the American alternative to May Day--the international celebration of workers on the anniversary of the 1886 Chicago Haymarket Affair, where violence ripped through a peaceful demonstration for the right to an eight-hour working day. Labor Day is supposed to be a celebration of American workers and all they have done. It is Aaron Copland's Fanfare for the Common Man set to the calendar. At one time Labor Day was also a celebration of and for labor unions. But sadly there is little to celebrate today. If anything, for the many people who work, Labor Day now represents the war against unions and working-class Americans.
            People forget why we have unions. The last 150 years of American history is the battle of workers and unions against corporations. America in the late nineteenth and early twentieth century was the country of trusts. It was the emergence of the railroads, steel, big oil, and monopolies. It was also the era of sweatshops, child labor, adulterated and unsafe foods, and the six-day, 70+ hour work week. It was the era of piecework at below-subsistence wages, poor working conditions and high injury rates, no health benefits, no retirement benefits, and no protections against discrimination and harassment. It was the world of Upton Sinclair's The Jungle. Unions were illegal, and workers who stood up for their rights were beaten up by the Pinkertons (company-hired security) or arrested by the newly created public police forces, created in part to control and break unions.
            No one who cares about workers and the people should wax romantic about this era. The American economy may have grown exponentially, but it did so unevenly, producing massive fortunes for a few and significant economic inequality for the rest. The America of the early nineteenth century, the one that Alexis de Tocqueville so famously described in his Democracy in America as characterized by a general equality of conditions, had vanished. By the time the stock market crashed in 1929, the income and wealth gap had produced two Americas: one was the country of F. Scott Fitzgerald's The Great Gatsby, the other of John Steinbeck's depression-era novel The Grapes of Wrath.
            The 1935 National Labor Relations Act (NLRA), or Wagner Act, brought relative peace to the labor market by recognizing the right of workers to bargain collectively. The NLRA established a process for how to unionize, organize workers, hold elections, and bargain for benefits. It was a victory for workers, but also for the American people and the economy. The Wagner Act was part of the New Deal, one element in a package of legislation to restructure the economy and fix its market failures.
            The NLRA had more than an economic purpose or impact. Many of the economic problems in America are political; they are produced by asymmetric political power between corporations and the rich on one side and the rest of the people on the other. Unions at their best can serve as what John Kenneth Galbraith called a countervailing power, helping to limit the power of businesses and corporations. The Wagner Act thus reset the political equilibrium in American politics in favor of the people.
            And it worked. Labor density and unionization dramatically increased in the United States, peaking in 1954 with over 35% of the workforce covered by collective bargaining. But what did unions accomplish? There is powerful evidence that they brought tremendous economic benefits to American workers and the economy. They produced the minimum wage and the eight-hour day and five-day work week. They improved workplace safety and gave us health insurance, retirement pensions, and workers' compensation. They raised the standard of living of most Americans, often even those not in unions. They also helped bring more economic equality to the economy, significantly erasing the disparities of the Gilded Age and Robber Baron eras. Unions grew and flourished at a time of significant economic growth, and there is little hard data to show that they caused rises in unemployment. America's post-World War II affluence is tied to unions.
            But in addition to their economic benefits, unions had a political dimension. They were part of the Democratic New Deal coalition. The strength of the post-World War II Democratic Party's dominance was tied to unions. Unions got out the vote, and they did so to the advantage of Democrats.
            But many employers, conservatives, and Republicans hate unions. Even many workers, especially white-collar professionals, share this animosity, thinking they are better off on their own. Almost from the day the NLRA was passed, opponents sought ways to circumvent the law. They found ways to fire striking workers and replace them. They harassed and fired organizers, and they found ways in court to delay or challenge elections. They claimed unions hurt the economy or restricted individual freedom and passed right-to-work legislation. Yet unions remained a potent force in American politics until Reagan became president and, with the firing of the air traffic controllers in 1981, signaled that it was okay to go to war against the unions.
            It is bad enough that a war is being fought against unions, but that battle extends to workers across the board. The current Supreme Court, in cases such as Wal-Mart v. Dukes, has made it harder for workers to sue for sex discrimination and, for those over 40, to prevail in age discrimination cases. Companies continue to cut benefits, use part-time workers, or engage in other practices that make it difficult for workers to earn a decent wage.
            As Barry Bluestone and Bennett Harrison recount in The Great U-Turn, the Reagan-era war against unions was part of a strategy, along with deregulation and tax cuts, to restructure the economy. It was also part of a political restructuring of American politics. The strategy has largely worked. Overall, less than 12% of all workers in the United States are now in unions, with only 7% of the private labor force covered by collective bargaining.
            The decline of American incomes over the last 30 years goes hand in hand with the decline of unions. In the last three decades the American economy has seen a dramatic increase in the gap between rich and poor, such that it now mirrors that of the 1920s. According to the United States Census Bureau, in 2010 the richest five percent of the population accounted for 21% of the income, with the top 20% receiving over 50% of the total income in the country. This compares to the bottom quintile, which accounted for about 3% of total income.
            A second study, by the Center on Budget and Policy Priorities in 2010 and drawing upon Congressional Budget Office research, found that the income gap between the top one percent of the population and everyone else had more than tripled since 1973. After-tax income for the top one percent increased by 281% between 1973 and 2007, while for the middle quintile it increased by 25% and for the bottom quintile by merely 16%. Looking beyond income to wealth, the maldistribution has not been this bad since the 1920s. According to the Institute for Policy Studies, in 2007 the top one percent controlled almost 34% of the wealth in the country, with half of the population possessing less than 3%.
            Opposing unions and workers costs families money. There is a significant difference in median family incomes between states that are right-to-work (RTW) and those that are not. Using a three-year average of median family income, RTW states have a median family income of $46,919, while in non-RTW states it is $53,418, a difference of $6,499, or 13.9%, per year. Testing for the statistical impact of RTW on median family incomes, the correlation is -0.4, meaning there is statistical evidence that RTW is associated with lower incomes: RTW depresses wages. If all of this does not demonstrate a war against unions, it definitely reveals an attack on workers.
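As a sanity check on the arithmetic above (a sketch, not an analysis; the only inputs are the two median-income figures quoted in the text), the gap and percentage work out as follows. Reproducing the reported -0.4 correlation would require the full state-by-state data, which is not included here.

    # Verify the RTW income-gap arithmetic using only the two medians from the text.
    rtw_median = 46_919       # three-year average median family income, RTW states
    non_rtw_median = 53_418   # same measure, non-RTW states

    gap = non_rtw_median - rtw_median
    pct = gap / rtw_median * 100
    print(f"Gap: ${gap:,} per year ({pct:.1f}% of the RTW median)")
    # Gap: $6,499 per year (13.9% of the RTW median)
    # The -0.4 figure in the text would be the correlation between a 0/1 RTW
    # indicator and median family income across the states; computing it needs
    # the state-level data, which is not reproduced here.
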
            Yet Americans have been convinced that unions and workers' rights are bad. They resent successful unions that win better wages than they receive, instead of organizing to bring themselves up to that level. We live in a culture that worships the Donald Trumps and MBA-led management teams, yet these are the people who brought us the economic crash of 2008, gross mismanagement of the economy, and the mass layoffs that frequently dot our workplaces. For many middle-class workers, the image of a surprise visit to your cubicle by an HR person with a box, telling you that you are fired and have one hour to clear out your desk, is all too real. Yet despite this, Americans continue to believe that they are better off without unions and worker protections.
            Fixing the NLRA is a must if we are to again reset the economic and political imbalances in the law. Some claim that unions are no longer relevant or that their corruption has led to their own demise. There is no question that unions need to clean up their act and support meaningful reform, but there is also evidence that many people do want to organize and want representation in a union. If it were easier to organize, perhaps more people would have health care even without Obamacare, or maybe more people would have retirement pensions.
            At the federal level, unions made fixing the Wagner Act a top priority in 2008 and 2009 with the Employee Free Choice Act, which would have streamlined organizing and holding elections. While as a candidate he said he would support such changes, President Obama never pushed the Act when the Democrats had control of Congress, and now the chances for its passage are dead. Perhaps the most important structural reform of the economy Obama could have made, he simply ignored.
            It's hard to make the case that Labor Day is a celebration of workers anymore.  Today ain't what it used to represent and that is bad for all of us.