
Saturday, September 21, 2024

Profit-Driven Polling and the 2024 Election

Yet again this election cycle, polling and the polls are a major controversy. The complaint is that the polls are all over the map, or that they will again be simply wrong, as they allegedly were in 2016, 2020, and even in the 2022 midterm elections. The real problem with the polls is not their accuracy. Instead, it is a misunderstanding of the purpose of polls and the problem of profit-driven polling.

Recent polls, as reported on sites such as Real Clear Politics, especially those taken after the September 10 Trump-Harris debate, seem to be all over the place. Some national polls have Trump up by three, some have Harris up by four, and others offer different margins. This has led some to conclude that this year will again be a mess for polling.

The problem with polling lies both in misunderstanding what polls are meant to do and in the motives for the polling. First, remember that polls are snapshots in time. They are not predictors. A poll is not some type of model that inexorably declares what will happen on November 5 this year. Polls merely tell us, on any given day, what some individuals think about some subject, such as who they are likely to vote for as president.

Many black swans, October surprises, or unknown unknowns have already happened in the 2024 race, and many more could still occur, thereby affecting voters' final decisions about whether they will vote and for whom. Ascertaining who is likely to vote, which is critical to polling, is not easily done; it involves some guesswork, and some polls and pollsters are better at it than others.

That is the second point to remember. Some polls are more accurate over time, and some carry more biases or inaccuracies. Treating all polls as being of equal value is inappropriate; one needs to distinguish the good from the bad.

A third issue is interpreting the margin of error. Most polls report a specific number, such as the recent ABC/Ipsos poll indicating that among likely voters Harris has a 52% to 46% lead over Trump, with a margin of error of plus or minus two percent. This is a small margin of error, but for many polls these margins range from three to five points. In part, the margins of error reflect many polls using small samples to reach their conclusions. To say that somebody has a one- or two-point lead, according to a poll with a margin of error of three to four points, tells us very little. One candidate could have a larger or smaller lead than reported, or, with such a margin of error, the other candidate could actually be ahead.
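To make the arithmetic concrete, here is a minimal sketch in Python of what a reported lead looks like once the margin of error is applied. It follows the simple convention, used throughout this piece, of applying the margin to the lead itself; the true uncertainty on a lead can be larger still, since each candidate's share carries its own error.

```python
def lead_range(lead: float, moe: float) -> tuple[float, float]:
    """Smallest and largest lead consistent with a reported lead
    and a margin of error, both in percentage points."""
    return lead - moe, lead + moe

print(lead_range(6, 2))  # (4, 8): the 52%-46% poll above is a clear lead
print(lead_range(2, 4))  # (-2, 6): a two-point "lead" that may be no lead at all
```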

Deciding who is ahead or behind based on one poll is insufficient. It fails to provide evidence of trends. Even when more than one poll is used, if the results are all within the margins of error, that still may not be enough to establish a trend.

Polls also have confidence levels. Confidence levels refer to accuracy and sampling certainty: how likely, from a mathematical or statistical perspective, a sample of respondents is to mirror the larger population. Most standard polls have a confidence level of .05, or 95% certainty. This means that even on the best days, there is a one-in-twenty chance that the poll will simply be wrong. But sometimes polls, to save money, reduce the sample size of those surveyed, thereby widening the margin of error and weakening the poll's reliability.
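Sample size is what drives the margin of error. A minimal sketch, using the standard formula for a proportion at 95% confidence; the sample sizes below are illustrative:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a proportion p
    estimated from a simple random sample of size n; p = 0.5 is the
    worst case and is what most published margins assume."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (400, 800, 1000, 1500, 2400):
    print(n, round(margin_of_error(n), 1))
# 400 -> 4.9, 800 -> 3.5, 1000 -> 3.1, 1500 -> 2.5, 2400 -> 2.0
```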

There is then another problem, where some websites or aggregators average the different polls to give some type of composite number, with the belief that their average is more accurate. Statistically, this is not sound practice. Such composites average good and bad polls together, with different methodologies, dates, and questions. One cannot really average them together.
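A small simulation, with assumed numbers, of why averaging does not wash out bad polls: if half the polls in a composite share a methodological skew, the average inherits part of that skew rather than canceling it.

```python
import random

random.seed(1)
TRUE_SHARE = 0.50  # assumed true support in the population

def poll(n: int, bias: float = 0.0) -> float:
    """Simulate one poll of n respondents; bias stands in for a
    flawed methodology that skews every respondent's probability."""
    return sum(random.random() < TRUE_SHARE + bias for _ in range(n)) / n

good = [poll(1000) for _ in range(5)]             # five sound polls
bad  = [poll(400, bias=0.04) for _ in range(5)]   # five polls skewed 4 points

composite = sum(good + bad) / len(good + bad)
print(round(100 * composite, 1))  # roughly 52: about 2 points off the truth
```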

Finally, when it comes to polling, especially national polls for the presidency, ignore them all. We do not elect presidents by national popular vote, and national polls do not tell us anything about what is going to happen in the six or seven swing states that will decide the election. There, it is 150,000 to 200,000 voters who would be decisive, and polling cannot easily be done at this level of granularity.

But beyond all these methodological misinterpretations of polling, there is a bigger problem, and that is profit-driven polling. It is the habit of some organizations to do repeated polling to make their polls the news stories of the day, as opposed to covering the campaigns or examining the public policy issues that the candidates are espousing. Profit-driven polling is meant to create a horse race and to focus on who is ahead and who is behind.

Profit-driven polling is not about providing accurate reporting of public opinion but about making money, or, in some cases, about organizations releasing polls to confuse or sway public opinion. It is possible that the misunderstandings among many journalists and websites are simply a consequence of not knowing what polling can and cannot do. But it is also possible that the misunderstanding is more intentional, aimed at maximizing profits from polling.

Monday, July 1, 2024

Biden's Abysmal Debate: Denial ain't just a river in Egypt


As much as partisan Democrats seek to put lipstick on a pig, Biden's first and probably last presidential debate was a disaster, if not fatal. Not only did it fail to do what it was supposed to do, but it also confirmed public perceptions about Biden and trendlines in a presidential race that had largely been frozen for nearly six months.

            Even before this debate there were growing concerns about Joe Biden's age and cognitive capacities. Four years ago the public was concerned about Biden's age, and that concern has only grown during his presidency. Biden has held the fewest press conferences of any president since Ronald Reagan, and his public appearances have mostly been canned and scripted. For many, he looks like their aging grandfather who has good and bad days, and many wonder: if we see Biden on his good days, what is he like on the days that are not so good?

            Polls indicate a majority of Americans are concerned about his age and think he shouldn't run for reelection.

            Even beyond the age factor, political science models all suggest that Biden was going to lose the 2024 election. While the models are not perfect, they look to presidential approval ratings and perceptions of the economy as key to predicting reelection.

            Biden has approval numbers worse than Trump did in 2020. In fact, no incumbent president has ever won reelection with numbers like Biden's. By many measures the economy looks good, yet the public remains fraught and fearful about inflation and the future of the economy.

            Even setting aside traditional political science prediction models, polls indicate that for at least the last six months, Joe Biden and Donald Trump have been frozen in the five or six swing states that are going to decide the election. Last October I looked at Arizona, Georgia, Michigan, Pennsylvania, and Wisconsin, five swing states that will decide the 2024 election. Joe Biden was behind in enough of those swing states that, were the election held then, he would have lost to Donald Trump in the Electoral College. A couple of months later, the New York Times did a similar analysis, adding Nevada to the list. It too found that Joe Biden would lose to Donald Trump in the Electoral College.

            Leading up to this debate, the polls indicated that in those critical swing states Biden and Trump were close, but Trump enjoyed a consistent if narrow lead. Generally, as I have argued, this election will come down to about 150,000 to 200,000 swing voters in five or six swing states. We are looking at an incredibly small number of people who will effectively decide the election. Biden needed to move these voters. They are probably low-information voters not paying a lot of daily attention to politics, but they might nonetheless be affected by impressions of the candidates formed through mass media, pop culture, or otherwise.

            Biden and his staff too were looking at the polls in the swing states.  They needed to do something to shake up the dynamics of the race.

            Thus, an early debate. So far, neither abortion politics, as in 2022, nor fear of Donald Trump being reelected seems to be enough to change the trajectory of the Biden campaign. He and his staff placed a lot of weight on this first debate.

            The debate did little to move the trendline. There is no indication thus far that it altered people's perceptions of Trump versus Biden in a significant way, or at least in a way to Biden's advantage. What it did do was confirm what many people already believed about Biden: that he lacks the wherewithal to serve out a second term as President of the United States.

            After the debate party loyalists did their best to address Biden's bad performance.

            They said that it was just one bad day and that he will recover from it. In doing so, they drew parallels to 2012, when Barack Obama had a bad debate against Mitt Romney but managed to recover. But that analogy is not appropriate. No one questioned Obama's cognitive capacities in 2012. There was no belief that he was too old or too feeble to be president. He simply had a bad debate.

            Biden's bad performance confirmed what most people already believed. The debate might have been just one bad day, but even if Biden has good days afterwards, it does not alter the impression that he is an eighty-one-year-old in cognitive decline.

            It is nearly impossible to shake those types of public impressions among the few undecided voters who are out there, if they were paying attention. There is an old adage that you do not get a second chance to make a first impression. This first impression might well have been decisive for them.

            Party loyalists are also trying to argue that they would rather have a president with a sore throat who stammers a little than a president who lies. It may be true that the debate was between a person who lied about the facts and one who forgot the facts. But that does not change the fact that there is no indication this debate changed perceptions about Biden or the trajectory of the race. Does this mean that people, especially those 150,000 to 200,000 voters, are more likely to vote for Trump? Perhaps not. It certainly does not mean they are more likely to vote for Biden. They could very well stay home on election day. They could vote for a third-party candidate. But certainly Biden did nothing to win them over.

            The reaction to the debate shifts to the question of decision making. All indications are that for the last several months there were concerns among some in the Biden campaign regarding his mental capacities. Nonetheless, the Biden campaign and partisans are rallying around him. Party loyalty and loyalty to Biden seem stronger than the Democrats' resolve to win the election. They fear an open convention more than they appear to fear losing. They fear being politically ostracized within the party, much as Congressman Dean Phillips was when he said Biden should not run.

            Somehow, Democrats are thinking he can still win this one.

            Perhaps they hope abortion, fear of Donald Trump, or some other black swan will intervene and change the trajectory of the election, perhaps even a second debate. I think a second debate is unlikely: Donald Trump has no incentive to agree to one, and the risks of a repeat performance are too great for Biden to accept. Yet he probably will insist on one anyway.

            There is something wrong with this level of insularity in decision making. If a candidate who is so unpopular, and who now demonstrates a lack of cognitive capacity even if only occasionally, is still nominated, there is something wrong with how political decisions are being made. Yet despite all this, unless a black swan emerges, the theme of the Biden campaign might as well be “Denial ain’t just a river in Egypt.”

Saturday, January 16, 2021

The Use and Abuse of Polls as Predictive Tools

The 2020 presidential election is finally over. Among the enduring stories of the election cycle was that the pollsters again got it wrong. Specifically, in the closing week or so of the election, Real Clear Politics documented polls from the Economist, Quinnipiac, NBC/Wall Street Journal, Survey USA, CNN, and Fox which predicted Joe Biden winning the national popular vote over Donald Trump by 10, 11, 10, 8, 12, and 8 points respectively. In reality, Biden’s final national popular vote lead over Trump was 4.4%. These errors are on top of claims that in 2016 pollsters and prediction machines such as FiveThirtyEight were wrong in not seeing that Trump would win. Up to Election Day, FiveThirtyEight gave Clinton a 72% chance of winning.

There may be a crisis in polling, but much of it has to do with a failure to understand survey research, the employment of bad polls, and the misuse and misinterpretation of them.

Remember first that polls or surveys are not meant to be predictive tools. They are snapshots in time regarding what a statistical sample of people think about an issue. This is one of the most fundamental errors that analysts, the media, and the prediction machines make. When a survey is done, it is predicated upon the answers individuals give at the time they are surveyed. It does not tell us what they are going to think in two weeks, what people who are undecided are going to believe, or how many individuals in the entire population hold similar opinions or will actually vote. All of these are predictive matters, which polls cannot address.

Many in the media also simply do not understand statistics. When a poll says that it has a confidence level of .05 or 95%, that does not mean it is 95% certain that the poll is an accurate prediction of the final results on election day, though many seem to think so. The confidence level refers to the fact that a pollster believes the poll has a 95% chance of being an accurate random sample of the population being surveyed at that time. Again, this is not a prediction for the future but a statement about the current poll, and it also recognizes that there is a 5%, or one-in-twenty, chance the poll does not accurately represent the population it wants to survey. This means that even a good pollster can get it wrong. Thus, some polls are not valid when done, and many are consistently not reliable over time and should simply be discounted or ignored.
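A small simulation of what that one-in-twenty figure means in practice; the true support level and sample size below are assumptions for illustration:

```python
import math, random

random.seed(42)
TRUE_P, N, TRIALS = 0.52, 1000, 10_000  # assumed true support and sample size

misses = 0
for _ in range(TRIALS):
    sample = sum(random.random() < TRUE_P for _ in range(N)) / N
    moe = 1.96 * math.sqrt(sample * (1 - sample) / N)  # 95% margin of error
    if abs(sample - TRUE_P) > moe:                     # interval missed the truth
        misses += 1

print(misses / TRIALS)  # roughly 0.05: about one well-done poll in twenty misses
```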

Polls also vary in terms of what is called their margin of error. The margin of error reflects the size and composition of the sample. Surveys might report margins of error of plus or minus two, three, four, or more points; the smaller the margin of error, generally the better. A poll saying Biden is leading Trump by seven percentage points, plus or minus a margin of error of three points, means the lead could be four or ten points. In tight elections, especially at the state level where presidents are selected due to the electoral college, leads of one or two points with margins of error of three points are still technically correct even if the predicted winner loses.

Traditionally, surveys used confidence intervals to assess these margins of error, but increasingly some are using what are called Bayesian Credibility Levels. They are not the same thing.

Credibility levels are used in non-random samples and assess the probability that a sample reflects a pollster’s predetermined sample composition. The use of non-random samples and Bayesian credibility levels opens up new sources of bias and inaccuracy in polls. This might include mis-estimating some voters, such as non-college-educated males, whose turnout was greater than these surveys assumed in the last two presidential election cycles. Phrased another way, a confidence level assesses the probability that a sample is representative of the real population; a credibility level assesses the probability that the survey sample matches a pollster’s predetermined belief of what it should look like. The American Association for Public Opinion Research has cautioned against this increasingly popular survey methodology, perhaps for good reasons.

Remember also that national polls for presidential elections are effectively meaningless. We do not elect presidents with a national popular vote but with the electoral college, which makes it 50 separate state contests. In the critical swing states of 2020, such as Georgia, Michigan, Pennsylvania, and Wisconsin, polls predicted close races, and once all the votes came in—not just those cast on election day and reported that night—the polls in those states proved accurate, with final results within the accepted margin of error. Everyone seems to have forgotten this.

There are also other problems with polls that predict voting. One needs to think about the questions asked, the assumptions about who will show up to vote, how many are undecided, and when and whether they will make up their minds. Finally, yes, in a world where no one picks up the phone anymore it is hard to draw samples, but if one is willing to spend the time and money to increase sample sizes, one can still get accurate polls. The issue is willingness to commit the effort to doing it right.

Given the above, and recognizing that polls are not predictive tools, we can see the fundamental flaws with tools such as FiveThirtyEight. They take a collection of all polls—good and bad—average them, and then make a statistical prediction of what is likely to happen in an election. Phrased otherwise, they take instruments not meant for prediction, which already carry statistical assumptions that might not be accurate, and use them to make statistical predictions about a future event. All this is highly questionable. Analysts then ignore the fact that a prediction machine saying something is 70% likely to happen means, even by its own estimate, a 30% chance of being wrong. Error compounds error, statistical assumptions multiply upon themselves, and a failure to understand statistics yields the belief that the polls simply have it wrong.

Polling is an exercise based on probability and chance. It is not a perfect predictive tool that can foresee the future in a clairvoyant way. Viewed in this light, the crisis in polling and prediction machines is less about traditional polling and more about their misuse and abuse.

Monday, October 26, 2020

The Use and Abuse of Polls in US Elections

Among the most frequent questions I am asked every election cycle, but especially this one, is “Are the polls accurate?” This question is generally preceded with the statement “The polls were entirely wrong in 2016; they said Clinton would win and she did not.” Are the polls accurate? Is there a problem with them? Are the polls in 2020, as in 2016, missing hidden Trump voters? To answer these questions one needs to understand some basic points about polling.


Good Polls are Sort of Accurate: But Know What You are Surveying
First, when it comes to 2016, the good national polls were entirely accurate. They said that Hillary Clinton would win the national popular vote by about 2-3 percentage points, with a margin of error of about 3 points. These polls were dead-on. The problem was not the polling but its relevance.

We do not elect the president by the national popular vote; instead it is the electoral college, which is essentially 50 separate state elections (plus the District of Columbia). National polls, for the purposes of predicting presidential winners, are entirely irrelevant. Ignore them all because they are looking at the wrong unit of analysis.

Second, the state polls were largely accurate too. If one tracked what was happening in states such as Pennsylvania in the days before the election, one could see the polls tightening and Trump narrowing Clinton's lead as undecided voters made up their minds. On the Monday before the 2016 election, polls in states such as Pennsylvania had the race dead even.
 
Third, there were no hidden Trump voters.  Nationally and in the critical states Trump did not receive many more votes than did Romney in 2012.  The issue was not a Republican voter surge  for Trump but Democrats staying home and not voting for Clinton.

Polls are Not Predictors But Statistical Snapshots in Time
But additionally, remember that polls are supposed to be statistical profiles of a population. This means that a good poll is a small sample of a larger population that resembles the latter in all relevant characteristics. Polls are only as good as the assumptions that go into them. Good pollsters accurately reflect who is likely to vote and the partisan, geographic, or other makeup of the electorate. If you make bad assumptions, you get bad results. This is the old “garbage in, garbage out” theory.

Polls also are not predictors–they are snapshots in time. Lots of things can happen between the time a poll is done and an election occurs. Candidate strategies matter, as do messaging and other intervening variables. Thinking that polls are predictors is the root of many problems.

The Flaws in FiveThirtyEight
Consider Nate Silver and FiveThirtyEight. Four years ago they predicted an 80%+ chance Clinton would win. As of October 26, 2020, the prediction is an 88% chance of a Biden victory. The model used here is based on polls–using them as predictors of what will happen on election day. If the polls on which the model is based are wrong, the predictions will be wrong, even setting aside that polls are not predictors. FiveThirtyEight’s predictive model is premised on a way of thinking about polls that is simply wrong.

An Example of Bad Polling: Minnesota US Senate Race
It is possible that Biden will win, but the polls are very close in the critical swing states such as Pennsylvania, Michigan, and Wisconsin. Accepting everything I have said in this essay, there is also a difference between good and bad polls.

Let me use as an example a recent poll conducted in Minnesota and released last week declaring the US Senate race between Tina Smith and Jason Lewis to be a dead heat. It reported Smith with 43% and Lewis with 42%, down from an 11-point Smith lead just a few weeks ago. It is possible that the race has tightened, but there is more going on here that calls the poll's validity into question.

First, the poll had a margin of error of +/- five points. Smith could actually be at 48% and Lewis at 37%, an 11-point difference. This margin of error was driven by the fact that the sample contained only 625 registered Minnesota voters. This is a pitifully small sample.
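As a quick check (assuming simple random sampling), the classical worst-case margin for 625 respondents is about four points, not five; the wider reported margin is consistent with the credibility-interval machinery discussed below.

```python
import math

n = 625
moe = 100 * 1.96 * math.sqrt(0.25 / n)  # worst-case 95% margin, in points
print(round(moe, 1))  # 3.9: the reported +/- 5 is wider than the classical figure
```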

Second, it defined a likely voter as a registered voter, yet traditionally 10-15% of voters in Minnesota register on election day.

Third, the sample contained 38% Republicans and 35% Democrats. Unless there has been a major shift in partisan alignment in Minnesota, no credible survey shows more people there identifying as Republican than Democrat. If anything, one can argue that a good sample should be 38% Democrat and 35% Republican, especially keeping in mind that those who register on election day tend to be younger voters who tend to vote for Democrats. Effectively, this survey may be skewed six or more points in favor of the Republican.

Fourth, the survey was done on-line. Not all surveys done on-line are bad, but there is a significant self-selection bias in such surveys that warrants correction. There is no indication this survey did that.

Nerd Warning: Confidence Levels Versus Credibility Intervals
Finally, there is one last problem that only nerds like me can appreciate.  The survey did not employ confidence levels but instead a credibility interval to determine the accuracy of the poll.  Why is this important?

When polls are done, the question to be asked is: what is the probability that the sample is a good representation of the entire relevant population? The gold standard for survey research is a confidence level of .05; the smaller this number, the better the statistical chance that it is a good survey. A level of .05 means there is a 95% chance that the sample is an accurate representation of the entire population; it also means there is still a 5% chance the sample is skewed and therefore the poll is bad.

A credibility interval is something different. It is based on Bayesian statistics, and it asks what the chances are that a given sample is an accurate representation of a prediction you have made.

A confidence level does not predict what the sample should look like; instead it asks whether the sample is probably a good mini-version of the entire population, whatever its relevant characteristics are. A credibility interval asks what the chances are that a sample mirrors the pre-existing assumptions one has made about the entire population.

A credibility interval, in my opinion, is the wrong way to do a survey. Effectively, you make your assumptions about the composition of the electorate and test to see if you have a sample that mirrors them. Your initial assumptions are held constant and tested. With a confidence level, you are not holding your initial assumptions about the electorate constant; you are instead asking whether the results you get are probabilistically correct. In effect, credibility intervals test the garbage in; confidence levels test the garbage out.
Many do not see a difference between these two statistical methods, but they can yield different results and potentially skew conclusions.
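For fellow nerds, here is a minimal sketch of the difference on a toy poll. The frequentist interval uses only the sample; the Bayesian credible interval starts from a prior, and the Beta prior below is an assumption standing in for a pollster's predetermined picture of the electorate.

```python
import math
from statistics import NormalDist

votes_for, n = 280, 625  # toy poll: 280 of 625 respondents, 44.8% support
p = votes_for / n

# Frequentist 95% confidence interval: no prior beliefs, just the sample.
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"confidence interval: {p - moe:.3f} to {p + moe:.3f}")  # ~0.409 to 0.487

# Bayesian 95% credible interval: a Beta(110, 90) prior acts like 200
# imaginary respondents favoring the candidate at 55%.
a, b = 110 + votes_for, 90 + (n - votes_for)
mean = a / (a + b)
sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
z = NormalDist().inv_cdf(0.975)  # normal approximation to the Beta posterior
print(f"credible interval:   {mean - z*sd:.3f} to {mean + z*sd:.3f}")  # ~0.439 to 0.507

# The credible interval is pulled toward the prior: change the prior,
# change the answer -- the garbage in is held constant and tested.
```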

This poll was a bad one. It made a lot of mistakes. The only ones who benefit from it are Smith and Lewis, who can both now say the race is very tight and therefore ask supporters to send money and votes. Beyond that, it is an example of a bad poll, the kind that can also skew presidential polls, which in turn can skew predictive models such as FiveThirtyEight.

Conclusion
The moral of the story is that polls done well can be good and accurate snapshots in time. But there is a lot of bad polling. Even worse, there is a lot of bad analysis based on polling. Four years ago, analysts got it wrong when they let their disbelief in a Trump victory cloud their thinking. They also failed to understand the proper level of analysis for presidential polling and how to judge whether a poll is valid or reliable.

Sunday, October 21, 2018

Minnesota’s Governor’s Race Tightening According to Star Tribune/MPR Poll? A Lesson in How Not to Read Polls

The Star Tribune/MPR declares in a new October 21, 2018 poll that the race for governor in Minnesota has tightened, with Walz holding a narrow lead over Johnson. Is that the reality? The simple answer is that we do not know based on the polling data the paper provides, but the poll also offers a lesson in how not to read and interpret polls.

I teach polling and survey research. I learned how to do this both from professors at Rutgers University who have gone on to run the Pew Research Center and its polls, and from Charlie Backstrom at the University of Minnesota, who wrote one of the best books ever on polling. I say this because while I may not be able to do a good poll myself, I do understand what constitutes a good versus a bad poll, or at least how to interpret their results.

In an October 21, 2018 poll of 800 likely voters, the Star Tribune/MPR poll shows Walz with a 45%-39% lead over Johnson, with 12% undecided. The poll has a respectable 95% confidence level and a margin of error of plus/minus 3.5%. The results compare to a similar poll done by the Star Tribune/MPR on September 16, 2018, also among 800 likely voters with a margin of error of plus/minus 3.5%, showing Walz with a 45%-36% lead and 16% undecided. The conclusion of the paper was that Walz had a narrow lead that was tightening between the two polls. Is this a correct conclusion?

There are many reasons to question how accurate such a conclusion is. First, consider the margins of error in both polls, +/- 3.5%. In the September 16 poll, Walz could have been as high as 48.5% or as low as 41.5%, and Johnson could have ranged from 32.5% to 39.5%. Compare this to the October 21 poll, where the range for Walz is again 41.5% to 48.5%, while for Johnson it is 35.5% to 42.5%. What we get are two polls so close in their results that, given the margins of error, the differences between them could be simple polling or sampling error. It is difficult on the basis of these two polls alone to conclude much of anything.
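A quick check of the two polls' reported numbers against the +/- 3.5 margin shows the candidate ranges overlap between September and October:

```python
MOE = 3.5
polls = {"Sept 16": {"Walz": 45, "Johnson": 36},
         "Oct 21":  {"Walz": 45, "Johnson": 39}}

for date, shares in polls.items():
    for name, pct in shares.items():
        print(f"{date} {name}: {pct - MOE:.1f} to {pct + MOE:.1f}")
# Johnson's October range (35.5 to 42.5) overlaps his September range
# (32.5 to 39.5), so the apparent three-point gain may be sampling noise.
```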

Perhaps the only thing that intuitively makes sense is that fewer voters are undecided in October than in September. But the difference of four percentage points is so close to the margin of error that any apparent shift in the number of undecideds is statistically almost insignificant.

There are three other issues with the two polls that raise questions about how much one can infer from them. First, in both polls 40% of those polled came from cellphones and 60% from landlines. Nationally and in Minnesota we have reached the point where more than 50% of the population is without a landline, according to the National Center for Health Statistics and industry surveys. The best survey research now seeks to have approximately 60% cellphone numbers. The Star Tribune/MPR poll has almost the exact reverse of what is recommended.

Who still uses landlines? Generally, the older you are, the more likely you are to have only a landline, while the younger you are, the more likely you are to be wireless-only. This is significant because age is a variable in voting patterns, with older people presently more likely to vote Republican than Democrat. Thus, even though both the September and October polls may have an approximately correct balance of Republicans, Democrats, and independents, they might have oversampled those who are more likely to vote Republican, especially among those who call themselves independents. The reason is that many independents really are not independent–their voting patterns actually favor one party over another. Thus, both polls might have been biased in favor of Republicans.
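A minimal sketch of the kind of post-stratification weighting that could correct a skewed phone mix; the population shares below are assumptions for illustration only:

```python
# Weights rescale the sample so its phone mix matches the assumed
# population mix: landline respondents count less, cell respondents more.
sample_mix     = {"landline": 0.60, "cell": 0.40}  # this poll's mix
population_mix = {"landline": 0.40, "cell": 0.60}  # assumed true mix

weights = {k: population_mix[k] / sample_mix[k] for k in sample_mix}
print(weights)  # ~{'landline': 0.67, 'cell': 1.5}

def weighted_share(respondents):
    """respondents: list of (phone_type, supports_candidate) pairs."""
    total = sum(weights[t] for t, _ in respondents)
    return sum(weights[t] for t, s in respondents if s) / total

# e.g. weighted_share([("landline", True), ("cell", False), ("cell", True)])
```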

Second, both polls perhaps oversampled the metro area, with 61% of respondents, compared to a more historical norm of 53-55%. While demographics in the state are changing, this metro bias perhaps meant that the polls favored voters more likely to lean Democratic.

Third, it is unclear from the polling methodology who counts as a likely voter. We know that historically 10-15% of those who vote in Minnesota register at the time of voting. Mason-Dixon, which does the polling here for the Star Tribune/MPR, has not in the past produced a methodology that accounts for this phenomenon. Failure to do so again raises questions about the poll's accuracy. Finally, the poll fails to account for the fact that voters in greater Minnesota vote at higher rates than those in the urban areas.

So what is the point I am making? Comparing these two polls, it is hard to infer as much about trends as the coverage suggests, and we really cannot say that the Minnesota gubernatorial race is close or tightening.

Saturday, November 5, 2016

So How Close is the Presidential Election Now?

So how close is the election right now? Depending on the polls–along with their interpretation and misinterpretation–one gets varying answers. The simple answer is that there is a lot of misinformation out there, fed in part by cherry-picking of data, partisan pushing, or simply a misunderstanding or misinterpretation of polling and statistics.

On Friday Nate Silver gave Clinton a 66.5% chance to win, down a lot from last week. Until a week ago I had Clinton at about a 75% chance; before the first presidential debate I had her at 50%+ to 55%. I am back to that prediction. Clinton's position in critical swing states appears to be eroding, and data in the Washington Post suggests that too. However, as of Saturday, November 5, Clinton still has enough of a lead in the critical swing states to put her over 270 electoral votes and win.

But consider two polls. Earlier this week a Washington Post-ABC poll had Trump beating Clinton 46-45% among likely voters, with a margin of error of +/- 3%. In a new poll released today, Clinton leads 47-43%, with the same margin of error. The interpretation of these two polls is that Clinton has recovered from the latest FBI e-mail controversy. But has she? Not necessarily. Consider the margins of error. A Trump 46-45 lead with a margin of error of 3% could mean the race was Trump ahead 49-42 or Clinton ahead 48-43; today's poll could mean Clinton leading 50-40 or losing 46-44. Margins of error are, well, margins of error, not pinpoint statistics. This means it is possible there has been no overall shift and that what the Washington Post poll is revealing is nothing more than results well within the margins of error. We really do not know if the race has shifted much in the last seven days. However, given that most other national polls have generally had Clinton ahead, she may be.

The big issue is how undecided voters will break this weekend. This election reminds me of 1980 when in the last 72-96 hours undecideds broke for Reagan over Carter, preferring change over the status quo because of their disgust with current politics. I see many of the same conditions here now and could see lots of voters either not voting or throwing caution to the wind and breaking for Trump. Often undecideds break for the challenger and against incumbents when they do not like the status quo. This election is really close but I can see possibilities for a Trump or Clinton win, a split between the electoral and popular vote, or even a 269-269 tie that sends the election to Congress to decide.  Do not rule out these possibilities, especially a popular and electoral college split.

On election night I am looking at North Carolina. If Clinton wins that state, it is all over, because it will be mathematically hard for Trump to win without NC.

So what happens if the election melts down and the candidates challenge the results? As I point out in a recent Huffington Post piece, do not necessarily count on Congress or the Supreme Court to fix this election if it is contested or challenged. Those institutions too are broken by partisanship.

Finally, a major mistake for both candidates is that neither is ending the campaign by making the case for their election with a narrative for governance. Both are still running for office by declaring they are not as bad as their opponent. Neither will have a mandate to govern when they take office.