Technology Customized for Your Business.

Competition Scoring Issues

Scoring Systems Issues and Problems

Most competitions use a point system for scoring. Points can range from 70 to 100 per tune, 0 to 100 per tune, or 100 points for the entire performed set of tunes (typically two or three); the method varies by area and contest. The challenge is making sure all judges (usually three or five) use the same SCALE for scoring. Judges' scores are added together (sometimes with the high and low scores dropped from consideration) to decide the overall point total. If the judges are not all consistent in their scoring, very unpredictable results can occur. It is even possible for a contestant whom no judge placed first to win the competition on point totals.

  

A judge who uses a wider point spread can have greater influence on the total score than a judge who clusters their points very closely. We call this “Point Spread Bias,” and it happens in almost every case unless the judges are highly experienced and set a “baseline” or “starting point” by discussing the first contestant’s performance in each round after they play, so they can better align their point spreads before the second contestant starts. Many contestants dislike the fact that judges might converse during the competition, and some competitions do not allow it at all, preventing a common starting point.

 

 

 

Scoring Systems used by Fiddle Contests

The Grand Master Fiddler Championship uses five experienced judges, discards the highest and lowest scores for each contestant, and adds the remaining three for the contestant’s total score. The U.S. National Oldtime Fiddlers’ Contest also used this system for many years (it is still in use for the OPEN category). While dropping the high and low score can be seen as preventing any one judge from swaying a contest, two judges in collusion could still skew the result by aligning their scoring. Dropping the high and low scores also has a psychological influence on the perceived fairness of a contest, but analysis of scores from 2005 through 2023 at the Grand Master Fiddler Championship has shown that, when experienced judges are used and an initial scale is established, dropping the high and low score has almost no effect on the outcome beyond perhaps a couple of placements in the entire contest. Again, results can only be considered valid with very experienced and knowledgeable judges, and not all contests can or will consistently hire judges of that caliber.
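The drop-high-drop-low totaling described above can be sketched in a few lines of Python (a minimal illustration, not the contest’s actual software; the function name is ours):

```python
def trimmed_total(scores):
    """Total one contestant's judge scores after dropping the single
    highest and single lowest score, as in the five-judge format."""
    if len(scores) < 3:
        raise ValueError("need at least three scores to trim")
    # Sort, then slice off the first (lowest) and last (highest) scores.
    return sum(sorted(scores)[1:-1])

# Five judges score one contestant out of 100:
print(trimmed_total([92, 95, 88, 90, 99]))  # 90 + 92 + 95 = 277
```

Note that two colluding judges can still shift this total: only one extreme score is dropped at each end, so a pair of aligned outliers survives the trim.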

 

The Texas Old Time Fiddlers Association (TOTFA) scoring system is a step in the right direction. Each of three judges scores contestants from 70 to 100 points per performance (regardless of the number of tunes) and ranks their scores in numerical order from first place to last place. Points are guidelines, not necessarily the indicator of the winner. If two judges agree on a placement, the contestant is awarded that place, unless the third judge disagrees; that judge can then call for a playoff of the contestants in question to determine the final order. In other words, the RANK of the contestant is what matters, which eliminates the unpredictable results that come from adding points together. Unfortunately, this system also has its flaws. Accusations of personal feelings, favoritism, and “you scratch my back, and I will scratch yours” affecting the rankings are common, as is the potential for a strong-willed judge to impose their will on the other judges and sway the results to their preference. Even with the potential for bias, this system is the closest to being fair.
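The two-of-three agreement rule can be sketched as follows (a simplified model of the TOTFA procedure; the function and judge names are hypothetical, and the real process resolves disagreements by playoff rather than in software):

```python
from collections import Counter

def totfa_placement(rankings, contestant):
    """Return the place awarded when at least two of the three judges
    agree on it, else None (a playoff may then be called).
    `rankings` maps each judge to an ordered list, best first."""
    places = Counter(r.index(contestant) + 1 for r in rankings.values())
    place, votes = places.most_common(1)[0]
    return place if votes >= 2 else None

rankings = {
    "Judge A": ["Ann", "Bob", "Cal"],
    "Judge B": ["Ann", "Cal", "Bob"],
    "Judge C": ["Bob", "Ann", "Cal"],
}
print(totfa_placement(rankings, "Ann"))  # two judges have Ann first -> 1
print(totfa_placement(rankings, "Bob"))  # placed 2nd, 3rd, 1st -> None
```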

 

 

The Normalized Scoring System Model

Fairness, and the perception of fairness to the audience, is key to a competition’s ability to grow, thrive, and attract contestants year after year. Most contestants would agree that using points is the most widespread, accepted, and best method, given the potential for personal feelings under a ranking system like TOTFA’s. Those well schooled in and trained to use the TOTFA system might disagree, and arguments could be made either way. The answer is a “normalized” scoring system that takes points from each judge, makes each judge’s scores count equally, and eliminates point spread bias. This idea led to the development of the Normalized Scoring System (NSS).

 

NSS is loosely based on an idea NASCAR once used for its scoring. That methodology ranks each contestant by their finish in the race and then assigns a set number of points for each ranking. The field was always set to 40, regardless of the final number of cars racing. The first-place finisher gets 40 points and the second-place finisher gets 35 points; after that, each car gets one point less, with spots 36 through 40 getting one point each.

 

These points are added across the year to determine the winner. (NASCAR also awarded extra points for winning the race and for leading a lap, but that applies to a season of races rather than a single competition.)
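The finish-to-points mapping described above can be written as a small function (a sketch of the old 40-car scale as described here, not an official NASCAR implementation):

```python
def nascar_points(place):
    """Points for a finishing position on the old 40-car scale:
    1st = 40, 2nd = 35, then one point less per spot,
    with positions 36 through 40 all receiving one point."""
    if place == 1:
        return 40
    # 2nd place starts at 35; floor the tail of the field at 1 point.
    return max(35 - (place - 2), 1)

print([nascar_points(p) for p in (1, 2, 3, 35, 36, 40)])
# [40, 35, 34, 2, 1, 1]
```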

 

NSS uses a scale that starts at 100 and goes down by one for each ranking. Each judge’s scores are ranked from high to low. For each judge, the person they have in first place gets 100 points, the person they have in second gets 99 points, and so on. For example, in a field of 75 players, first place would get 100 points and 75th place would get 26 points. Ranking each judge’s scores this way generates a “normalized” score based on the point scale. It is these scores - not the judges’ “raw” scores, the actual numbers they wrote down - that are added to generate a point total. Using the normalized scores does not alter how each judge scored and ranked the contestants; it just makes the point spread uniform across all judges, eliminating differing point scales and spreads in judging.
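The normalization step for a single judge can be sketched as below (an illustration, not the NSS scoring tool itself; how ties in raw scores are broken is not specified in this description, so this sketch simply breaks them by sort order):

```python
def nss_points(raw_scores):
    """Convert one judge's raw scores into normalized NSS points:
    that judge's first-place contestant gets 100, second 99, and so on.
    `raw_scores` maps contestant name -> raw score."""
    ordered = sorted(raw_scores, key=raw_scores.get, reverse=True)
    return {name: 100 - i for i, name in enumerate(ordered)}

# One judge's raw numbers; only the ordering matters after normalization.
judge = {"Ann": 95, "Bob": 81, "Cal": 93}
print(nss_points(judge))  # {'Ann': 100, 'Cal': 99, 'Bob': 98}
```

The contest total is then the sum of each contestant’s normalized points across all judges, so a judge who scores 70-100 counts exactly as much as one who scores 95-100.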

 

NSS was assessed and evaluated by several champion contest fiddlers and a PhD using data from several contests, including the Grand Master Fiddler Championship. A scoring tool built on this system was used for the evaluation, comparing “normalized” results to actual results.

 

The attractiveness of this system is clear. Many judges have no trouble deciding how they would rank the contestants they heard, but they struggle to determine the “right” number of points to express that ranking. With NSS, a judge can use whatever scale they want without adversely affecting the contest results. It also compensates for some lack of experience in judging and assigning points to each contestant. It has the added effect of making the final point spread moot, which could ease some hurt feelings when a less experienced player is competing against some of the top players in the contest.

 

Normalizing scores and eliminating point spread bias gives far more accurate contest results, combining the best of both scoring systems: using points and using rankings.

 

However, the system still allows the “raw scores” - those actually written by each judge - to determine the winner, just as is conventionally done in most contests. NSS can use either the new ranking-based scoring or the traditional point scoring. Results are automatically calculated for both, and the user can elect to use the traditional scoring method.

 

 

 

Example - The Normalized Scoring System Difference

Benny, Major, and Clark are judging a horse show. In the finals there are two horses, a brown horse and a white horse. Each judge will score each horse up to 100 points based on how well it walks around the arena. The points will be added together for each horse to determine the winner.

 

The white horse walks around the arena. Benny really likes it and gives it a score of 95. Major likes the horse as well and gives it 93. Clark is not as impressed and gives the horse 80 points.

 

The brown horse now walks around the arena. Benny likes it OK and gives it a score of 93. Major likes it OK also and gives it 90. Clark is impressed and gives the horse 90 points.

 

The scores are added. The white horse has a total of 268 points. The brown horse, however, has a total of 273 points! Even though two of the three judges agree that the white horse should win, the fact that they did not all use the same point spread, or scale, skewed the results. Most people would agree that since two of the three picked the white horse, it should win. But that is not how the points came out.

 

Using NSS, the white horse would have gotten 100 points from Benny, 100 points from Major, and 99 points from Clark, for a total of 299. The brown horse would have gotten 99 points from Benny, 99 points from Major, and 100 points from Clark, for a total of 298. The white horse wins because the majority of judges felt it was the best that day.
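The horse-show example can be reproduced end to end (a sketch using the figures above; the helper function is ours, not part of the NSS tool):

```python
# Raw scores from the example: judge -> {horse: points}.
raw = {
    "Benny": {"white": 95, "brown": 93},
    "Major": {"white": 93, "brown": 90},
    "Clark": {"white": 80, "brown": 90},
}

def nss_points(scores):
    """One judge's raw scores -> normalized points (100, 99, ...)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {horse: 100 - i for i, horse in enumerate(ordered)}

horses = ("white", "brown")
raw_totals = {h: sum(j[h] for j in raw.values()) for h in horses}
nss_totals = {h: sum(nss_points(j)[h] for j in raw.values()) for h in horses}

print(raw_totals)  # {'white': 268, 'brown': 273} -> brown wins on raw points
print(nss_totals)  # {'white': 299, 'brown': 298} -> white wins under NSS
```

Clark’s narrow 80-vs-90 spread outweighs the other two judges’ verdicts in the raw totals; normalization reduces each judge’s opinion to its ranking, so the majority prevails.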

 

 

The NSS gives far more accurate contest results, combining the best of both scoring systems: using points and using rankings.

Copyright 2025 Carnes Group LLC. All Rights Reserved.