Mason-Dixon recently released a new statewide poll testing the two solar amendments currently making their way toward Florida's ballot.
Although both deal with the same subject, solar power, it was a tale of two initiatives: one looking very passable and the other with seemingly no path to victory.
The initiative backed by the solar industry (titled "Limits or Prevents Barriers to Local Solar Electricity Supply") saw only 30 percent of respondents saying they would vote for it, while 45 percent said they would vote "no." This is, of course, no surprise given the negative and confusing ballot title language. The other initiative (titled "Rights of Electricity Consumers Regarding Solar Energy Choice") received a pretty strong 66 percent of the vote.
Why the difference?
As M-D correctly explains, “The result of two questions related to solar energy can only be explained by the ballot language.”
The purpose of this analysis is not to examine those differences in detail but to evaluate the poll's methodology.
Let’s break out the saltshaker.
The survey sampled 625 registered Florida voters. While that is a somewhat smaller sample than we usually see for statewide polls, it serves the purpose well enough when you only need overall trends or topline measurements, as in this case.
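For readers wondering what a sample of 625 actually buys you, the standard margin-of-error formula (at 95 percent confidence, using the conservative 50/50 assumption, and ignoring design effects from weighting) gives a quick sanity check:

```python
import math

def margin_of_error(n, z=1.96):
    """Maximum margin of error for a simple random sample of size n,
    at 95% confidence (z = 1.96), assuming the worst case p = 0.5."""
    return z * math.sqrt(0.25 / n)

print(round(margin_of_error(625) * 100, 1))  # about +/- 3.9 points
```

A spread of roughly four points either way is plenty of precision to distinguish a 30-45 split from a 66 percent landslide, which is why the smaller sample is not a real concern here.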
They mixed landline and cell phone calls. Good.
The sample was, according to the authors, balanced by geography to “reflect voter turnout by county.” Also, good.
As for those "registered voters," readers may recall that this was a serious problem in the Presidential Preference Primary (PPP) poll conducted earlier this month by the same polling company. In the present case, we don't have a large problem with it, and here is why. That prior PPP poll was supposed to measure frequent-voting Republicans and Democrats (with an estimated turnout of around 40 percent), but here we are looking at a November (as opposed to March) election, where we expect very high turnout. Would we have preferred some voter history? Sure. But for a November presidential election, it is a fair assumption that pretty much anyone who is willing to take the time to participate in a phone poll and has a valid working number on the voter file is a probable voter. Further, the question for us would be, "Are those who participated any different from those who should have been excluded?" In this case, we would have to conclude they probably are not.
And how about those demographics?
We are not sure where they got their targets. For example, their breakdown by party was 43 percent Democrat, 39 percent Republican and 18 percent "Independent/Other." If that is supposed to reflect current voter registration, it is very far off: current registration shows 27 percent, not 18 percent, of voters in the "Independent/Other" category. If it is supposed to represent projected turnout, it is important to note that in both the 2008 and 2012 presidential elections, Democrats outnumbered Republican voters by only one percentage point, not four. Additionally, the poll is somewhat heavy on Hispanics/Cubans and slightly high on male voters.
While these variations force us to take the poll with a grain of salt, it is important to note that even if these differences were adjusted for, the adjustment would not change the conclusions.
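To illustrate why a modest party-mix mismatch rarely flips a lopsided topline, here is a hypothetical reweighting sketch. The 43/39/18 sample mix is the poll's reported breakdown; the target mix and the per-party "yes" shares are invented for illustration only, not figures from the poll:

```python
# Poll's reported party mix vs. an assumed turnout-based target mix.
sample_mix = {"DEM": 0.43, "REP": 0.39, "IND": 0.18}
target_mix = {"DEM": 0.40, "REP": 0.39, "IND": 0.21}  # hypothetical target

# Invented "yes" share within each party for one ballot question.
yes_by_party = {"DEM": 0.70, "REP": 0.60, "IND": 0.68}

# Topline = sum over parties of (party share * party's "yes" rate).
unweighted = sum(sample_mix[p] * yes_by_party[p] for p in sample_mix)
weighted = sum(target_mix[p] * yes_by_party[p] for p in target_mix)
print(f"unweighted {unweighted:.3f}, reweighted {weighted:.3f}")
```

With splits like these, shifting a few points of weight between parties moves the topline by a fraction of a point, far too little to turn 66 percent support into a loss. Only when subgroups disagree sharply does reweighting move the headline number much.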
So while we will stick to our guns with the single grain, we see this as a good poll that serves the purpose.
Oh and one last thing.
There is a legitimate discussion to be had about how voters react to "seeing" the ballot language (as they would during the actual voting process) versus "hearing" it on a phone poll. One could make the case (and we have heard this from supporters of that initiative) that the ballot language for the losing solar initiative (the one that begins with the confusing and negative "Limits or Prevents Barriers to Local Solar Electricity Supply") will do better when voters view it in the voting booth. To that argument, we would say "fair enough," but we would have to see evidence of it before calling this method flawed. Show us a poll taken using in-person "mall intercepts" or other such techniques and we will gladly break out the saltshaker.
On the other hand, we can’t resist the temptation to note the stark difference in the ballot titles. The second one, “Rights of Electricity Consumers Regarding Solar Energy Choice” gently calls out for ethereal harp music when you read it.
As noted earlier by the poll’s author, “the result of two questions related to solar energy can only be explained by the ballot language.”
Key for the Salt Shaker test:
- No salt needed: Solid pollster, solid methodology, and the sample appears to be nicely balanced.
- A grain of salt: The poll has one or two non-critical problems and should be taken with a grain of salt.
- A few grains: There are several concerns with how the poll was conducted, but not enough to throw it out entirely.
- A half shaker: There are enough problems with the methodology to warrant serious concerns, and the poll should not be taken seriously.
- A full shaker: The poll has so many problems it should not only be completely disregarded but pollsters receiving multiple “full shakers” will no longer have their polls covered by Florida Politics/SaintPetersBlog.
Steven J. Vancore is the President of VancoreJones Communications and Clearview Research. He can be reached at firstname.lastname@example.org.