- Dec 4, 2009
- Moderator
- #1
Right before a team competes, it turns in a set list of every skill the team does (or should do) on the floor. That list (along with a high-definition video) goes to the difficulty judge, whose whole job is to determine the difficulty of what is attempted on the floor. (Note: this is NOT the execution or performance of the skill, just the difficulty of the skill itself.) At the end of the process, a team should have a fairly accurate assessment of its difficulty that (barring a catastrophe between day 1 and day 2, or vice versa) should not vary much. The process should be objective, and it frees the live scoring judges (the ones who do the execution and performance scores live) from worrying about difficulty.
What is wrong with this and why?