All-Star Comparative Scoring


Fair enough, but would you put TGLC at the very top of the scoresheet if they went first and you knew Cheetahs were coming later? Or vice versa?

There's only a rough rubric for difficulty, so it's not as if a team throwing double-ups automatically equals a specific score.

If they do that plus X, Y, Z, etc., then possibly. Who is to say teams can't have the same difficulty score? What if they did the exact same thing (I know, not realistic)? Would one team be more difficult?
 

No, they should have the same score if they did the same thing. But if you lived in a vacuum, you'd never know what difficulty a later team could come out with. You'd always have to leave some room for a team with more difficulty.
 
Many score systems work perfectly in theory. In practice, however, comparative scoring (like Worlds) tends to have scores rise towards the end of the schedule - even with the best available judges. This effect gets more pronounced as the division gets larger.

Worlds using a lottery system for day 1 order allowed some great statistical analysis of scoring. You had a random order in multiple divisions (random within the same bid type). We were able to demonstrate very clearly that scores tended to rise as you went later in the division - despite having the order determined randomly. To me, this should have immediately led to the whole system being changed to something with, at minimum, some "anchor points" for the difficulty ranges to keep them from drifting. My perception of what has been done to fix this is that the judges were told to "try not to do that".
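
For illustration (this is not from the original post), here is a minimal sketch of the kind of order-effect check being described, assuming you have each team's performance order and score from the randomly drawn day-1 schedule. If order truly didn't matter, the correlation between slot and score should sit near zero; a consistently positive value is the upward drift described above. The data and function name are hypothetical.

```python
# Minimal sketch of an order-effect check (hypothetical data, stdlib only).
from statistics import mean, pstdev

def order_score_correlation(results):
    """Pearson correlation between performance order and score for one division."""
    orders = [float(order) for order, _ in results]
    scores = [float(score) for _, score in results]
    mo, ms = mean(orders), mean(scores)
    cov = mean((o - mo) * (s - ms) for o, s in zip(orders, scores))
    denom = pstdev(orders) * pstdev(scores)
    return cov / denom if denom else 0.0

# Hypothetical division with a randomly drawn order: later slots creeping upward.
division = [(1, 92.1), (2, 92.8), (3, 92.4), (4, 93.5), (5, 93.9), (6, 94.6)]
print(round(order_score_correlation(division), 2))  # positive => scores drift upward later in the schedule
```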
 
@King, @BlueCat, poke holes in it, or take it and expand on it. I've only thought through the stunt aspect of the idea, but here it goes.

A code of points with 1-point, 2-point, and 3-point skills, with a max total of 15 for difficulty.
You could do five 3-point skills or fifteen 1-point skills to get there (maybe with a 0.5 combo bonus, and maybe a creativity score: 1 = below average, 2 = average, 3 = above average, 4 = wow).
You only have your 2:30 to work with, so you would have to strategically decide how much time to use and weigh the risk vs. reward of easier vs. more difficult skills.

You could then have a 5-point execution/technique score that's judged the way it currently is.

That's the core of the idea. The numbers may need to be tweaked to make it really work, but it takes the subjectivity out of difficulty while still allowing for creativity and strategy.
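
Not part of the original post, but here is a rough sketch of how that code-of-points math could work, just to make the idea concrete. The 1/2/3-point values, the 15-point cap, the 0.5 combo bonus, and the 1-4 creativity score come from the post above; the function names and the execution-score handling are assumptions for illustration.

```python
# Rough sketch of the proposed code-of-points difficulty scoring (hypothetical).
DIFFICULTY_CAP = 15  # max difficulty credit, per the proposal

def difficulty_score(skill_values, combo_bonus=False, creativity=2):
    """Sum 1/2/3-point skill values, cap at 15, then add the optional bonuses."""
    if any(v not in (1, 2, 3) for v in skill_values):
        raise ValueError("each skill must be worth 1, 2, or 3 points")
    base = min(sum(skill_values), DIFFICULTY_CAP)
    return base + (0.5 if combo_bonus else 0.0) + creativity  # creativity: 1-4

def routine_score(skill_values, execution, combo_bonus=False, creativity=2):
    """Capped difficulty plus a separately judged 5-point execution/technique score."""
    return difficulty_score(skill_values, combo_bonus, creativity) + execution

# Five 3-point skills and fifteen 1-point skills both hit the difficulty cap;
# the strategy comes from how much of the 2:30 each path eats up.
print(routine_score([3, 3, 3, 3, 3], execution=4.5, combo_bonus=True, creativity=3))
print(routine_score([1] * 15, execution=4.5))
```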
 
Many score systems work perfectly in theory. In practice, however, comparative scoring (like Worlds) tends to have scores rise towards the end of the schedule ...
I feel like the bingo-ball effect flows through to finals too, with the "lower" score going first and judges having a preconceived impression that the final results should pretty much stay as they are unless a team has major issues. Oftentimes, if that "lower"-scoring team had gone later in semifinals, they would have placed higher and therefore gone later in the finals session. The system is flawed, but sadly I don't see it changing any time soon.
 
It also makes me wonder how it will change the way the MOST = 75% rule impacts teams. To me, if you are now using comparative scoring and one team has 80% and another team has 100% (given technique and difficulty are equal), the team with 100% will score higher. For example, it makes me think having 100% of the team standing tumbling would be more important than it has been in the past. *I wish I could think of a way to word that better.*
 
I think it was sort of established that way to begin with, wasn't it? That 75% would get you INTO the range, but then the range would stratify from there? I sort of always thought that range within the high range would be the differentiation between the "majority/most" and an actual (de facto) full "squad." That was always my impression anyway.
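
To make that reading concrete, here is a hypothetical sketch (the range boundaries and the standing_tumbling_score name are made up for illustration, not actual scoresheet values): 75% participation gets a team into the range, and anything above that stratifies the score within it, so a full squad lands at the top.

```python
# Hypothetical sketch of "75% gets you into the range, then stratify within it".
def standing_tumbling_score(participation_pct, range_low=4.5, range_high=5.0):
    """Hit 75% participation to enter the top range, then scale within it."""
    if participation_pct < 0.75:
        return range_low - 0.5  # below the "most" threshold, falls out of the range
    fraction = (participation_pct - 0.75) / 0.25  # 75% -> bottom, 100% -> top
    return range_low + fraction * (range_high - range_low)

print(standing_tumbling_score(0.80))  # 4.6 - in the range, near the bottom
print(standing_tumbling_score(1.00))  # 5.0 - a full squad tops out the range
```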
 

When I e-mailed @ASCheerMan to ask about it, I was told most 75% = 100%. If 75% threw it, you got 100% credit, but they also looked at technique and difficulty.
 
Many score systems work perfectly in theory. In practice, however, comparative scoring (like Worlds) tends to have scores rise towards the end of the schedule ...
Perfectly said. Thank you for all your level-headed, professional posts.
 