All-Star 2016 Scoring...thoughts???


I'll use Level 1 as an example, but couldn't scoring be something like this:

There is a baseline scoresheet for every level that spells out the mid-range requirements for every section. Each section starts at a total of 50 points.
Ex- Standing tumbling
50 points if 15/20 girls have a back walkover
+1 point for every extra girl who has a back walkover (e.g., if 19/20 girls have a bwo, the score is 54/50)
-1 point for every girl who does not have a back walkover (e.g., if 12/20 girls have a bwo, the score is 47/50)
Ex- Stunts
50 points if 4/5 halves go up (some groups might require a front spot)
+2 points for each group that hits 2 body positions, +1 point if group only hits 1 body position
+2 points for each group that executes a transition skill (tick-tock, full up (not Level 1), etc.)
Ex- Judges' Perspective
5 points are allotted to each judge, who can award them for creativity/innovation/originality

I guess something like this could get complicated, but it would make things more numerical, and coaches would know exactly how many points their stunt section will earn if executed. The 5 points for creativity is a minor part of the scoresheet, but it still encourages teams to use their imaginations and come up with new things.
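
To make the arithmetic concrete, here is a rough Python sketch of the proposed math. It is purely illustrative: the 50-point baselines, the 15-of-20 back-walkover benchmark, the +1/-1 per athlete, and the stunt bonuses come straight from the proposal above, while the function names and the below-baseline stunt penalty are my own guesses.

def standing_tumbling_score(athletes_with_bwo):
    # Baseline: 50 points when 15 of 20 athletes have a back walkover,
    # +1 for every athlete above that and -1 for every athlete below it.
    return 50 + (athletes_with_bwo - 15)

def stunt_score(groups_up, body_positions_per_group, groups_with_transition):
    # Baseline: 50 points if 4 of the 5 halves go up. The proposal does not
    # say what happens below that, so docking a point per missing group is
    # an assumption of this sketch.
    score = 50 - max(0, 4 - groups_up)
    for positions in body_positions_per_group:
        if positions >= 2:
            score += 2   # group hits 2 body positions
        elif positions == 1:
            score += 1   # group hits only 1 body position
    score += 2 * groups_with_transition   # tick-tocks and similar transition skills
    return score

# The numbers from the examples above: 19/20 with a bwo gives 54/50, 12/20 gives 47/50.
print(standing_tumbling_score(19), standing_tumbling_score(12))   # 54 47
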
How do you compare the scores of teams with, say, 6 kids and 20 kids? Why would back walkovers be the baseline? You can do all kinds of other tumbling skills in level 1. How do you quantify combination passes? In your example, what is the maximum score? If there is no maximum, wouldn't teams with more kids automatically be at an advantage, as they have the opportunity to add more "bonus" points than smaller teams? I know you used level 1 as an example, but how would you deal with the increasing complexity as levels progress? And probably the most important question to me, do you not think that, if you were able to figure out solutions to all of the above questions, that if coaches were given a formula to max out, that every single coach would do skills to max the formula, and then, in essence, the ENTIRE score would therefore be determined by your "judges perspective" category, thus basically making the scoresheet both incredibly formulaic and compulsory AND results entirely driven by subjectivity?
 
Side vent: I don't mind competitions that aren't sanctioned by Varsity, BUT I hate it when they don't score the same way. I know they can score however they want, but it's so frustrating when you build your routine to fit one scoresheet and the next week you are given 264 points as a total score and a 3/5 for jump difficulty (even though at all Varsity events it's 5/5) because the judges score a different way.

I wish there were a universal baseline for scoring to keep the playing field even across all competitions.
 
I'll use Level 1 as an example, but couldn't scoring be something like this:

There is a baseline scoresheet for every level that spells out the mid-range requirements for every section. Each section starts at a total of 50 points.
Ex- Standing tumbling
50 points if 15/20 athletes have a back walkover
+1 point for every extra athlete who has a back walkover (e.g., if 19/20 athletes have a bwo, the score is 54/50)
-1 point for every athlete who does not have a back walkover (e.g., if 12/20 athletes have a bwo, the score is 47/50)
Ex- Stunts
50 points if 4/5 halves go up (some groups might require a front spot)
+2 points for each group that hits 2 body positions, +1 point if group only hits 1 body position
+2 points for each group that executes a transition skill (tick-tock, full up (not Level 1), etc.)
Ex- Judges' Perspective
5 points are allotted to each judge, who can award them for creativity/innovation/originality

I guess something like this could get complicated, but it would make things more numerical, and coaches would know exactly how many points their stunt section will earn if executed. The 5 points for creativity is a minor part of the scoresheet, but it still encourages teams to use their imaginations and come up with new things.


Sorry, just some quick edits above. There's an interesting reversed gender bias in cheerleading, so I'm just making sure we keep it neutral and inclusive. I was always the only boy on a team or in a program, and it was always "Girls, warm up! Ladies, take it from jumps!" Again, an interesting taste of how gender asserts itself, even subconsciously, and as a male it was eye-opening to get a glimpse of what I can only imagine is a daily battle for women against male privilege in a number of domains. That said, it propagates the stigma that cheerleading is only for females; remember that there are probably so many guys dying to try out this amazing sport but are prevented from participating by ignorant parents, friends, or classmates. We are all welcome in this sport, boys and girls alike.
 
Side vent: I don't mind competitions that aren't sanctioned by Varsity, BUT I hate it when they don't score the same way. I know they can score however they want, but it's so frustrating when you build your routine to fit one scoresheet and the next week you are given 264 points as a total score and a 3/5 for jump difficulty (even though at all Varsity events it's 5/5) because the judges score a different way.

I wish there were a universal baseline for scoring to keep the playing field even across all competitions.

I am torn on this. Originally, when the scoresheet merger first came out, I didn't like it because I felt I had an upper edge from knowing all the scoresheets and adjusting accordingly. However, the first year of the "Unified Scoresheet," Varsity then came out with "Alterations," so it really didn't meet its purpose. This year it's fairly universal, minus local IEPs, but I feel it has too much subjectivity.
 
Two things I read here really get me fired up. The first is the variation from week to week. Ex: at Cheersport we had .5 in deductions both days; final score on the floating sheet, 94.7. The next weekend at NCA, two well-executed, zero-deduction routines (the only team in the division to hit zero twice); final score, 90.1. I watched all the teams (we went first on day 1, super early) and I didn't get it. Our coaches had no explanation. They basically ended up just telling the kids that the judges simply didn't care for the routine.

The second is the variety in panels, as mentioned previously. When bids are at stake there has to be something that averages out some of the bias. For example, it appears in many cases this year that teams that hit in a large division were awarded higher scores than teams that hit in a small division. If they aren't going to allocate bids to divisions directly, then they need to make sure the scoring is equitable. I wish Summit declarations were mandatory and much clearer.

The worst example I saw was Peach at Cheersport vs. Peach at NCA; the massive difference in raw score just seemed unreasonable to me.
 
Two things I read here really get me fired up. The first is the variation from week to week. Ex: at Cheersport we had .5 in deductions both days; final score on the floating sheet, 94.7. The next weekend at NCA, two well-executed, zero-deduction routines (the only team in the division to hit zero twice); final score, 90.1. I watched all the teams (we went first on day 1, super early) and I didn't get it. Our coaches had no explanation. They basically ended up just telling the kids that the judges simply didn't care for the routine.

The second is the variety in panels, as mentioned previously. When bids are at stake there has to be something that averages out some of the bias. For example, it appears in many cases this year that teams that hit in a large division were awarded higher scores than teams that hit in a small division. If they aren't going to allocate bids to divisions directly, then they need to make sure the scoring is equitable. I wish Summit declarations were mandatory and much clearer.

The worst example I saw was Peach at Cheersport vs. Peach at NCA; the massive difference in raw score just seemed unreasonable to me.
This is definitely my biggest problem, and we even see it from day 1 to day 2 at the same comp. Different panels of judges were scoring almost 2 points apart for the same routine, with the same quality of execution. That just shouldn't happen, and it makes a difference when bids are at stake.
 
I am torn on this. Originally, when the scoresheet merger first came out, I didn't like it because I felt I had an upper edge from knowing all the scoresheets and adjusting accordingly. However, the first year of the "Unified Scoresheet," Varsity then came out with "Alterations," so it really didn't meet its purpose. This year it's fairly universal, minus local IEPs, but I feel it has too much subjectivity.

Yeah, there is a lot of subjectivity... the small (or sometimes large) differences from a 4.2 in one section at one comp to a 4.6 at another all add up in the end.

But I have a question... when divisions are split into A, B, and C, is that done randomly? I've just noticed the "average" score differs between divisions. In one division I would have gotten first or second place, but if I had competed in another I would have gotten 5th or 6th... not sure if it's just complete luck of the draw or a different panel of judges? Has anyone else noticed this?
 
Yeah, there is a lot of subjectivity... the small (or sometimes large) differences from a 4.2 in one section at one comp to a 4.6 at another all add up in the end.

But I have a question... when divisions are split into A, B, and C, is that done randomly? I've just noticed the "average" score differs between divisions. In one division I would have gotten first or second place, but if I had competed in another I would have gotten 5th or 6th... not sure if it's just complete luck of the draw or a different panel of judges? Has anyone else noticed this?
At least at NCA, the splits are done by number of participants. Small A might be 12-15 participants, Small B 16-18, Small C 19-20. They are often judged by different panels, so comparing scores between divisions may not be accurate.
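
If that is right, the split is just a lookup on roster size, something like the sketch below. The 12-15 / 16-18 / 19-20 ranges are only the example figures given above, not confirmed NCA cutoffs.

def small_division(participants):
    # Illustrative split of the Small division by number of participants,
    # using the example ranges from the post above.
    if 12 <= participants <= 15:
        return "Small A"
    if 16 <= participants <= 18:
        return "Small B"
    if 19 <= participants <= 20:
        return "Small C"
    raise ValueError("outside the Small range used in this example")
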
 
Yeah, there is a lot of subjectivity... the small (or sometimes large) differences from a 4.2 in one section at one comp to a 4.6 at another all add up in the end.

But I have a question... when divisions are split into A, B, and C, is that done randomly? I've just noticed the "average" score differs between divisions. In one division I would have gotten first or second place, but if I had competed in another I would have gotten 5th or 6th... not sure if it's just complete luck of the draw or a different panel of judges? Has anyone else noticed this?
I have, and when bids are at stake it's a huge issue. We had a team at Cheersport come in 7th that would have been 3rd in the other division. The average score in one division was about 2.5 points lower than in the other. I saw both divisions and didn't think there was enough difference between the two to warrant scores being that far apart.
 
At least at NCA, the splits are done by number of participants. Small A might be 12-15 participants, Small B 16-18, Small C 19-20. They are often judged by different panels, so comparing scores between divisions may not be accurate.
Why is that ok when there are bids at stake? Shouldn't we be able to compare scores across all divisions to determine the bid winner?
 
Why is that ok when there are bids at stake? Shouldn't we be able to compare scores across all divisions to determine the bid winner?
It is an imperfect system. They usually do use overall high scores to determine bid winners. It isn't a perfect way to determine bids/grands/etc., but it is probably the best option given the budget and time constraints most EPs have.
 
Why is that ok when there are bids at stake? Shouldn't we be able to compare scores across all divisions to determine the bid winner?
But isn't that what Cheersport does? I thought they took the division winners in each level and rescored those with one panel of judges to get the actual high score for each division. So, for example, all level 2 winners were rescored with the same panel before awarding the level 2 bid. I think that should be how all bids are awarded so there is no score difference based on luck of the (judging panel) draw.
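
If that really is the process, it boils down to something like the sketch below. This is only how I read the description above (which is itself unconfirmed); the names are made up.

def award_level_bid(division_winners, rescore_with_single_panel):
    # division_winners: the winning team from each division at one level.
    # rescore_with_single_panel: one shared panel's score for a team, so every
    # contender is compared on the same panel's numbers.
    rescored = {team: rescore_with_single_panel(team) for team in division_winners}
    return max(rescored, key=rescored.get)

# e.g. award_level_bid(level_2_winners, panel_score) would pick the Level 2 bid winner.
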
 
But isn't that what Cheersport does? I thought they took the division winners in each level and rescored those with one panel of judges to get the actual high score for each division. So, for example, all level 2 winners were rescored with the same panel before awarding the level 2 bid. I think that should be how all bids are awarded so there is no score difference based on luck of the (judging panel) draw.
Can anyone confirm that is how Cheersport actually does it? I didn't look it up this year, but last year I saw no variation between who got bids and the top scores from the weekend.


 
Can anyone confirm that is how Cheersport actually does it? I didn't look it up this year, but last year I saw no variation between who got bids and the top scores from the weekend.


Yes, I believe CS essentially re-judged the top teams. I know because one of our Austin teams actually had a higher score than one of the eventual Summit paid-bid winners. While that stung a bit, it was clearly stated in the bid declaration that this was their policy. However, CS also essentially hired an extra panel and took a whole extra day to announce these bids, which is not always ideal either.
 
Can anyone confirm that is how Cheersport actually does it? I didn't look it up this year, but last year I saw no variation between who got bids and the top scores from the weekend.


I can't confirm that what I've heard is accurate, but I can confirm that the bids awarded did not necessarily match up with the highest scores.
 