Note: these projections are good guides, but if the race looks to be extremely close, with the difference between the candidates measured in fractions of a percent, they probably won't have the precision needed at that level, so please use them as a guide more than gospel.
All projected numbers appear in red. All actual raw vote totals appear in black.
Most of these projections are calculated by estimating the totals for each candidate by county and then aggregating up the county totals. For each county I simply take each candidate's current totals and extrapolate those numbers as if 100% of that area's precincts had reported.
For example, if in one area 5 out of 10 precincts have reported and Candidate A has 100 votes and Candidate B has 80 votes, the model projects Candidate A to receive 200 votes and Candidate B to receive 160 votes in that area. Do that for all 110 election authorities and add up the results to get the projected statewide totals.
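The county-level extrapolation described above can be sketched in Python. This is a minimal illustration, not the spreadsheet's actual implementation; the function and candidate names are made up.

```python
def project_county(counted_votes, precincts_reported, precincts_total):
    """Scale each candidate's counted votes up as if 100% of the
    county's precincts had reported."""
    scale = precincts_total / precincts_reported
    return {cand: round(votes * scale) for cand, votes in counted_votes.items()}

# 5 of 10 precincts in; Candidate A has 100 votes, Candidate B has 80.
print(project_county({"A": 100, "B": 80}, precincts_reported=5, precincts_total=10))
# {'A': 200, 'B': 160}
```

Statewide totals are then just the sum of this projection over every election authority.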
It's a very simple projection formula, and it performs poorly very early in the night when only a few precincts have reported. It obviously gets better with more data. Early in the night, when many or most election authorities haven't yet reported any data, it can show some really weird projections, so just be patient.
Having said that, there are still two situations where problems can cause weird projections, and those need to be managed: 1) a county/municipality is showing election returns but also showing 0 precincts reporting, and 2) a county/municipality hasn't reported any returns at all.
To manage the first issue, any time a county/municipality is showing election returns but is also showing 0 precincts reporting, the projection formula just assumes 15% of the precincts are reporting. 15% is not a perfect assumption, but it's a reasonable ballpark and it causes far fewer problems for the projection model than 0 does.
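That fallback amounts to a small guard on the reporting fraction. A sketch, where the 15% figure is the ballpark assumption described above:

```python
def reporting_fraction(precincts_reported, precincts_total):
    """Fraction of precincts reporting, with a fallback for the case where
    returns are shown but 0 precincts are flagged as reporting."""
    if precincts_reported == 0:
        return 0.15  # ballpark assumption; avoids dividing by zero
    return precincts_reported / precincts_total

print(reporting_fraction(0, 40))   # 0.15
print(reporting_fraction(10, 40))  # 0.25
```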
To manage the second issue, any time a county/municipality is showing no election returns, the model just uses a projection based on a historical weighted average (full description below).
If you look at the historical vote share numbers (the percentage of the total statewide vote that comes from a given region), you'll notice that they don't vary much year to year, even from Presidential to midterm elections. The closer the projected vote share numbers get to the historical numbers, the more likely the projected vote totals are reasonably accurate. If, early in the evening when not many results are in, the projected vote share numbers differ from the historical pattern by a great amount, you shouldn't put too much stock in the projected vote totals; as it gets later in the night and those vote share numbers get closer to the historical percentages, you'll know the model is getting better. To get a sense of how closely the current projections track the historical vote share numbers, you can find two "Projection Variance" calculations; when the projections are pretty accurate these should be down around 3.00 or maybe even lower. If the projection variance is a lot higher than that, then we really need more data to have a little more confidence in the projections. A full explanation of the projection variance calculation is below.
As outlined in the solution to problem 2 above, I calculated a historical weighted average of party performance (Dem, Rep, 3rd) to use in the projections for counties that have not yet reported any vote totals.
Once you have these historical weighted average percentages, you can calculate a candidate's projected raw total as follows: total projected statewide vote * projected vote share * candidate's party historical weighted average percentage. To calculate the projected vote share I just use the number of precincts in that county divided by the number of precincts in the state.
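Putting those pieces together, the fallback for an unreported county might look like the following sketch. The function name and the example numbers are illustrative only:

```python
def project_unreported_county(statewide_projected_total, county_precincts,
                              state_precincts, party_hist_pct):
    """Projected raw total for one party's candidate in a county with no
    returns yet: statewide total * county vote share * the party's
    historical weighted average percentage."""
    projected_vote_share = county_precincts / state_precincts
    return round(statewide_projected_total * projected_vote_share * party_hist_pct)

# e.g. 5,000,000 projected statewide votes, a county with 50 of the
# state's 10,000 precincts, and a 45% historical average for the party:
print(project_unreported_county(5_000_000, 50, 10_000, 0.45))  # 11250
```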
Important reminder: the historical data is only used for areas where no results are in, when there is no actual voting data to use. As soon as any results are reported in a county, even just one precinct, the historical data is dropped from that county's calculations entirely and only the actual voting data is used.
This is a very simple calculation: since we are projecting the vote totals for every area and we know what has already been reported, you just subtract the numbers already reported from the projections to show how each candidate is expected to perform in the remaining uncounted vote based on current voting patterns.
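As a sketch, the uncounted-vote projection is just a per-candidate subtraction (names illustrative):

```python
def uncounted_vote(projected_totals, reported_totals):
    """Each candidate's projected share of the vote still to be counted:
    projected final total minus what has already been reported."""
    return {cand: projected_totals[cand] - reported_totals[cand]
            for cand in projected_totals}

print(uncounted_vote({"A": 200, "B": 160}, {"A": 100, "B": 80}))
# {'A': 100, 'B': 80}
```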
The break even calculation is just a measure of the percentage of the remaining vote each candidate would need for the race to finish in a tie. If either candidate wins more than their break even percentage of the remaining uncounted vote, it should produce a win. This calculation is more complicated with more than two candidates, so to simplify, the model assumes that the 3rd party candidates will continue to receive the same level of support in the uncounted vote as they have so far in the counted vote. In instances where that assumption isn't true, such as when a 3rd party candidate has much stronger support in one part of the state than the rest, it can weaken the accuracy of the break even projection.
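Under that simplifying assumption, the break even share for one of the two leading candidates can be derived from the tie condition. This is a sketch of the idea, not the spreadsheet's formula; all names are illustrative:

```python
def break_even_share(counted_a, counted_b, counted_3rd, remaining_total):
    """Share of the remaining vote Candidate A needs to finish tied with
    Candidate B, assuming 3rd-party candidates hold their counted vote
    share in the uncounted vote."""
    counted_total = counted_a + counted_b + counted_3rd
    third_share = counted_3rd / counted_total
    # Votes left over for A and B after the 3rd party takes its share:
    two_way_remaining = remaining_total * (1 - third_share)
    # Tie condition: counted_a + x == counted_b + (two_way_remaining - x)
    needed_a = (counted_b - counted_a + two_way_remaining) / 2
    return needed_a / remaining_total

# A trails 80 to 120 with no 3rd party and 200 votes outstanding:
print(break_even_share(80, 120, 0, 200))  # 0.6 -> A needs 60% of the rest
```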
When comparing the Uncounted Vote projection and the Break Even projection, the candidate whose Uncounted Vote percentage is higher than their Break Even percentage is in the stronger position. The greater the difference, the more likely that candidate is to win.
The Projection Variance is a measure of how well the projections are tracking with expected final results by vote share. It is the sum of the absolute values of the differences between the projected vote share and the historical vote share for each region. Smaller is better; once all the votes are counted, the true vote share variance compared to historical vote share is typically around or under 3.00.
For example, to calculate the traditional collars variance, take the absolute value of the difference between the projected Chicago vote share and last cycle's actual Chicago vote share. Do the same for the Cook County suburbs, the collar counties, and downstate, then add them up. Multiply that sum by 100 for readability, and that's the number being displayed. When the projections are performing well this number should be under 3, but not necessarily 0.
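The steps above can be sketched as follows, with regional shares expressed as fractions. The region names match the text, but the example numbers are made up:

```python
def projection_variance(projected_shares, historical_shares):
    """Sum of absolute regional vote-share differences, times 100
    for readability. Under ~3.0 means the projections track history well."""
    return 100 * sum(abs(projected_shares[region] - historical_shares[region])
                     for region in projected_shares)

projected  = {"Chicago": 0.20, "Suburban Cook": 0.21, "Collars": 0.26, "Downstate": 0.33}
historical = {"Chicago": 0.21, "Suburban Cook": 0.20, "Collars": 0.25, "Downstate": 0.34}
print(round(projection_variance(projected, historical), 2))  # 4.0
```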
When you look at the variances historically, this calculation falls under 3.00 for every pairing in this group: a) 2012 PRES compared to 2008 PRES, b) 2010 GOV compared to 2006 GOV, c) 2008 PRES compared to 2004 PRES, d) 2006 GOV compared to 2002 GOV, e) 2004 PRES compared to 2000 PRES, and f) 2002 GOV compared to 1998 GOV.