Random-storykeeper
Composer for Team Spontaneous Combustion and various indie projects, AIM organizer.
Avatar + banner by Frostyflytrap (https://twitter.com/frostyflytrap)


The AIM 2020 Judging Process

Posted by Random-storykeeper - August 1st, 2020


The judging process for AIM 2020 proved to be challenging, and picking out 20 winners from the 88 entries meant that some great entries would not make the final list. There are plenty of entries I really love that didn’t make it onto the album at all, and I’m certain the other judges have entries they really love that didn’t make it either. Nevertheless, decisions had to be made, and I wanted to make sure that the winners reflected the evaluations of every judge as best I could.




Taking Over

I took over organizing AIM from RealFaction, and one of the questions I asked before AIM 2019 (the first of these contests I ran) was how to judge the entries. In the past, each judge would list their top 20 entries (which included their top 3), then the organizer (who was also a judge) would compile these lists and tally up the votes. An entry that every judge picked, for instance, would be considered a runner-up at the very least.
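As a rough illustration of that compile-and-tally step, here is a minimal Python sketch. The judge and entry names are hypothetical placeholders, not actual AIM data; the only rule it encodes is counting how many judges included each entry in their top 20.

```python
from collections import Counter

# Hypothetical top-20 picks from each judge (truncated to a few entries each).
top20_lists = {
    "judge_a": ["Entry 1", "Entry 2", "Entry 3"],
    "judge_b": ["Entry 2", "Entry 3", "Entry 4"],
    "judge_c": ["Entry 2", "Entry 5", "Entry 6"],
}

# Tally how many judges picked each entry at least once.
appearances = Counter(entry for picks in top20_lists.values() for entry in picks)

# Entries picked by every judge would be runners-up at the very least.
unanimous = [e for e, n in appearances.items() if n == len(top20_lists)]
print(appearances.most_common())
print("Picked by every judge:", unanimous)
```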


Last year, I was able to use this system to determine both the runners-up and the top 3 entries without much trouble. Most of the judges’ picks overlapped in terms of who they thought should be winners, so it was easy to extrapolate first, second and third place from that data. I assigned 3 points to an entry ranked first in a judge’s list, 2 points for second and 1 point for third. Based on the rankings, the first place winner for AIM came out with 8 points (it appeared in all of the judges’ top 3 lists), the second place winner had 5 (it appeared on two judges’ lists, in 1st and 2nd place) and the third place winner had 2 points (it appeared on two judges’ lists, in 3rd place). While there were other entries that scored more points than that, they only appeared in one judge’s list, so entries that at least two judges picked were prioritized.
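The 3/2/1 weighting can be written out the same way. This is a sketch with made-up rankings rather than the actual 2019 data; the only rules taken from above are the point values and the preference for entries picked by at least two judges.

```python
from collections import defaultdict

POINTS = {1: 3, 2: 2, 3: 1}  # rank within a judge's top 3 -> points awarded

# Hypothetical top-3 rankings per judge, ordered 1st, 2nd, 3rd.
top3_lists = {
    "judge_a": ["Entry A", "Entry B", "Entry C"],
    "judge_b": ["Entry A", "Entry D", "Entry C"],
    "judge_c": ["Entry E", "Entry A", "Entry B"],
}

scores = defaultdict(int)
judge_counts = defaultdict(int)
for picks in top3_lists.values():
    for rank, entry in enumerate(picks, start=1):
        scores[entry] += POINTS[rank]
        judge_counts[entry] += 1

# Sort by (number of judges who picked it, total points), so that an entry
# backed by two judges outranks a higher-scoring entry backed by only one.
ranking = sorted(scores, key=lambda e: (judge_counts[e], scores[e]), reverse=True)
print([(entry, judge_counts[entry], scores[entry]) for entry in ranking])
```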


The judging process mostly takes place over Discord, and I use Excel / Google Sheets to compile the data. 


AIM 2020

This year, picking out 1st, 2nd and 3rd place proved more difficult. I think part of it has to do with the amazing turnout this year - 88 entries compared to last year’s 43. When the judges put in their picks, there was almost no overlap. Some of us debated whether we needed a more refined points system for determining the top 3. Ultimately, after every judge’s top 3 list was in, only one entry had any overlap - the first place entry. It was ranked 1st in two of the judges’ lists, while every other entry in the judges’ top 3 lists appeared only once.


Since second and third place could not be extrapolated from this data alone, unlike last year, I decided to compare the top 3 picks against the picks for the top 20 runners-up. There ended up being exactly 3 entries among the top 3 picks that 3 of the 4 judges had also placed at least in their runners-up (the rest had appeared in only 1 or 2 of the judges’ top 20 lists). For those three entries, I proposed the following (see the sketch after this list):


  • 1st place would be the entry that was ranked 1st in two of the judges’ top 3 lists, as well as appearing in 3 of the judges’ top 20 lists.
  • 2nd place would be the entry that appeared in 3 of the judges’ top 20 lists and was ranked 1st by one judge in the top 3.
  • 3rd place would be the entry that appeared in 3 of the judges’ top 20 lists and was ranked 2nd by one judge in the top 3.
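Here is a minimal sketch of how that cross-referencing could look, again with hypothetical entries and counts rather than the real 2020 data. The only rules it encodes are the ones in the list above: keep top-3 picks that 3 of the 4 judges also placed in their top 20, then order them by how highly they were ranked and how many 1st-place votes they got.

```python
# Hypothetical top-3 lists from the four judges, each ordered 1st, 2nd, 3rd.
top3_lists = {
    "judge_a": ["Entry A", "Entry B", "Entry C"],
    "judge_b": ["Entry D", "Entry E", "Entry F"],
    "judge_c": ["Entry A", "Entry G", "Entry H"],
    "judge_d": ["Entry I", "Entry J", "Entry K"],
}

# How many of the four judges had each top-3 candidate in their top 20 (hypothetical).
top20_appearances = {
    "Entry A": 3, "Entry B": 2, "Entry C": 1, "Entry D": 3, "Entry E": 3,
    "Entry F": 2, "Entry G": 1, "Entry H": 1, "Entry I": 2, "Entry J": 1, "Entry K": 1,
}

def sort_key(entry):
    # Best (lowest) rank any judge gave the entry, breaking ties by 1st-place votes.
    ranks = [picks.index(entry) + 1 for picks in top3_lists.values() if entry in picks]
    first_place_votes = sum(1 for r in ranks if r == 1)
    return (min(ranks), -first_place_votes)

candidates = {entry for picks in top3_lists.values() for entry in picks}
shortlist = [e for e in candidates if top20_appearances.get(e, 0) >= 3]
podium = sorted(shortlist, key=sort_key)[:3]
print(podium)  # ['Entry A', 'Entry D', 'Entry E']
```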


I proposed this lineup to the judges on Discord and asked if they were okay with it or wanted to make any changes. They were fine with it overall, but requested that an explanation be posted about how we came to this process.


From there, the rest of the top 20 was determined by comparing the judges’ top 20 lists. Out of the 88 entries, 50 appeared at least once in the judges’ picks. One entry ended up in every judge’s list, but only as a runner-up. I asked if anyone felt it should get a spot in the top 3, but overall the judges felt it was only suitable as a runner-up. I then included every entry that 3 of the judges had in their top 20 lists, along with the entries that 2 judges picked and that also had a placement somewhere in a top 3 list. That gave us 11 of our winning entries. There were 9 remaining spots to fill and 11 entries left that had appeared in two of the judges’ top 20 lists, so I posted those 11 tentative entries and, from there, we determined two entries to eliminate to narrow the list down to 20.
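To make that sequence concrete, here is a small sketch with hypothetical counts; it only encodes the order described in the paragraph above (three-list entries first, then two-list entries that also received a top-3 vote, then a tentative pool of remaining two-list entries to trim by discussion).

```python
TOTAL_WINNERS = 20
winners = ["Entry A", "Entry D", "Entry E"]                   # top 3 already decided
got_top3_vote = {"Entry A", "Entry D", "Entry E", "Entry F"}  # any top-3 mention

# How many judges' top-20 lists each remaining entry appeared in (hypothetical).
appearances = {
    "Entry B": 3, "Entry C": 3, "Entry F": 2, "Entry G": 3,
    "Entry H": 2, "Entry I": 2, "Entry J": 1,
}

# First pass: entries on 3 lists, plus 2-list entries that also got a top-3 vote.
for entry, count in appearances.items():
    if count >= 3 or (count == 2 and entry in got_top3_vote):
        winners.append(entry)

# Remaining two-list entries form the tentative pool that the judges trim by
# discussion until exactly TOTAL_WINNERS entries are left.
tentative = [e for e, c in appearances.items() if c == 2 and e not in winners]
slots_left = TOTAL_WINNERS - len(winners)
print(f"{len(winners)} set, {slots_left} slots to fill, {len(tentative)} candidates")
```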


In the end, I tried to keep the judging process similar to last year’s (and to how the previous organizer determined the winners) and use the judging data to figure out the top 3 and 17 runners-up as objectively as possible. Only about 23% of the entries could make it in as the judges’ winners, compared to nearly 50% in the last contest. An entry that did not win was not necessarily “bad” by any means. As with any music compo, judging comes with a considerable amount of bias, and it’s okay to not agree with the decisions made. Just know that the winners were not determined solely by any single person or their own particular list.




Comments

Wow...what a breakdown! Thanks for posting this, it really brings the sheer amount of discussion and analysis you all went into to light. More competitions should go into this sort of thing when concluding because not everyone has the knowledge of the effort it takes.

No prob - thank you to the other judges for suggesting doing this in the first place.