Topic: Judging a Bricks in Motion Contest

How to Judge a Contest

Judge Pool

The first part of running a contest is selecting a list of judges. Ideally there should be three to five judges lined up for the contest. More is alright, but three should be the minimum; the more judges there are, the harder it is to coordinate getting the results in order. An odd number of judges is preferred because it minimizes the chances of ties. The criteria to be a judge are left up to the contest runner, who can pull from experienced members of the community or from people outside the community with film experience.

Watching the films

The judges should independently watch all the films. In the past the contest runner has provided a short list of films, but I would advise heavily against that, since it increases the sway the contest runner has over the results: the contest runner may have skipped over films other judges would have ranked, so those films never get a chance to be viewed. When a judge has finished watching the films, he or she should create an ordered rank list of the number of placed films plus five. For example, if the contest is doing a top ten, then each judge needs to create an independent top 15 list. These lists will be collected from each judge, and ideally the contest runner should not look at them until he or she has finished their own list, to prevent biasing their own rankings. The number of ranked films is up to the contest runner; most of the time it is a top 10. In the past all films were ranked, but BiM had several good reasons for getting rid of that.

Instant Runoff Voting

Bricks in Motion uses a voting system called Instant Runoff Voting, or IRV for short. This voting system has been modified slightly to create rankings rather than just a winner. The advantage of IRV is that it creates a consensus ranking across all the judges while allowing the judges to pick outlier films without consequence.

Algorithm

1. Each judge submits a ranked list of films.
2. A list of all nominated films is created. If a judge has not added a particular film to their ranking, that film is scored as the total number of ranked films plus one.
3. Sum the totals of all judges' rankings, pull out the film or films with the highest total, and give them a ranking equal to the number of films left plus one.
4. For each judge's rankings, decrease all rankings for films ranked above the eliminated film or films by the number of films eliminated. Leave all rankings below alone.
5. Repeat steps 3 and 4 until all films have been eliminated.
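The steps above can be sketched in code. This is a minimal Python sketch of the modified IRV as described, not the actual add-on code; it assumes an unranked film is scored as that judge's list length plus one, and that films eliminated in the same round (ties) share a placement.

```python
def bim_irv(judge_rankings):
    """Rank films with the modified IRV described above.

    judge_rankings: one ordered list per judge, best film first.
    Returns {film: final_rank}, where 1 is the best placement.
    """
    films = {f for ranking in judge_rankings for f in ranking}

    # Step 2: build each judge's numeric ranks; films missing from a
    # judge's list are scored as that judge's list length plus one.
    ranks = []
    for ranking in judge_rankings:
        r = {film: i + 1 for i, film in enumerate(ranking)}
        unranked = len(ranking) + 1
        ranks.append({film: r.get(film, unranked) for film in films})

    final = {}
    remaining = set(films)
    while remaining:
        # Step 3: eliminate the film(s) with the highest rank total.
        totals = {f: sum(r[f] for r in ranks) for f in remaining}
        worst = max(totals.values())
        eliminated = {f for f in remaining if totals[f] == worst}
        remaining -= eliminated
        for f in eliminated:
            # Placement = number of films left plus one; ties share it.
            final[f] = len(remaining) + 1
        # Step 4: for each judge, shift films ranked above (numerically
        # higher than) each eliminated film down by one per elimination.
        for r in ranks:
            for e in eliminated:
                cut = r.pop(e)
                for f in r:
                    if r[f] > cut:
                        r[f] -= 1
    return final
```

For example, with three judges ranking films A, B, and C, `bim_irv([["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]])` eliminates C first (highest total), then B, leaving A in first place.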

Running the Algorithm

Since manually running IRV is rather tedious, I have developed a Google Sheets add-on to automate the process. This add-on can be installed here: <https://chrome.google.com/webstore/deta … ndkbencfpl> The add-on is also open source and the code is available here: <https://github.com/AquaMorph/BiM-Judging> The steps to run the BiM IRV algorithm are as follows.

1. Go to the Chrome Webstore link and install the "BiM Judging" add-on.
2. Create a new Google Sheets Spreadsheet.
3. In the new sheet click on the "Add-ons" menu item and select "Manage Add-ons."
4. A pop up window should appear and the BiM Judging add-on should be in the list. Click the green "Manage" button and select "Use in this document."
5. Close the pop up window and refresh the page.
6. The first row is reserved for column headers. The first few columns should contain film titles, creator names, or any other relevant information. After that, the first row should contain the name of each judge, one judge per column.
7. Enter each judge's rankings, giving each film a numeric value equal to that judge's rank. Add films in new rows as necessary. There should only be one row per film.
8. Once all the data is entered, create a copy of the sheet by clicking the arrow at the bottom where it says "Sheet1" and then clicking "Duplicate." This is a backup of the raw data before the algorithm begins manipulating it.
9. Now navigate to Add-ons -> BiM Judging -> Rank by Instant Runoff.
10. A pop up will appear asking for the number of columns before the ranking information. Enter the number of columns that come before the judges' rankings and select OK. If you are using the example document found below, one would enter 2.
11. Another pop up will appear asking for the number of judges. Enter the number of judges and click OK. If one is following the example document, one would enter 5.
12. The algorithm will begin running, with an indicator on the screen showing that it is still running. It should take around a minute, depending on the number of judges and films.
13. Once a ranks column is created at the very beginning of the sheet, the algorithm is done running, and those are the IRV rankings.

Here is an example sheet with fake data: <https://docs.google.com/spreadsheets/d/ … sp=sharing> One can copy the raw data sheet into their own sheet and run the algorithm to make sure they get the same results as the processed results sheet.

Advantages of IRV

Instant Runoff Voting has a number of advantages, and they are best highlighted by comparing IRV with other systems. In the past, Bricks in Motion contests have used other judging systems that could be improved with IRV.

Rubric

Art is subjective, and the idea that one can create an objective judging system is ridiculous. That being said, steps can be taken to help minimize the impact of subjectivity in the ranking process. An older judging system Bricks in Motion used was a rubric the judges filled out for each film. Aspects of a film were divided into categories and given scores: things like animation, lighting, sound, writing, etc. The problem with this system is that it overvalues the technical aspects of a film. A more holistic view of the films is needed: a film is a collection of all those parts, and what matters is how well the entire film works as the sum of its individual parts. The weighting of these categories is also a problem for the rubric method. Should lighting be given the same weight as writing? Should it be given more, and if so, how much? Depending on how all of this is weighted, it changes the competition and limits creativity, because it encourages one to create a film that does well on a rubric rather than making a good film.

Averaging

Bricks in Motion contests have also averaged scores from judges in the past. For the most part this is fine, but a couple of issues may pop up. The first is that averaging gives more weight to outlier opinions. Suppose all judges rank a film first except for one judge who ranks it eleventh, while for another film all judges rank it second. With five judges using averages, the first film would have an average of 3.0 while the second film has an average of 2.0, so the first film would get second place despite being the favorite film of the majority of the judges. IRV fixes this because, as films are eliminated, the outlier ranking comes down in value as the outlier judge's rankings are adjusted with each elimination of a film on his or her list. This is closely related to the second problem with averaging, which is that it does not produce the rankings the judges are most content with. By using IRV, several rounds of voting can be simulated without requiring the judges' time. This assures that the rankings will reflect what the pool of judges wants and allows minority opinions to be represented without fear of wasting votes or being overrepresented in the outcome.
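The averaging arithmetic can be checked with a quick calculation. The two rank lists below are taken from the scenario described above (five judges; one outlier on the first film):

```python
# Film A: four judges' favorite, one outlier judge ranks it eleventh.
film_a = [1, 1, 1, 1, 11]
# Film B: every judge's second choice.
film_b = [2, 2, 2, 2, 2]

avg_a = sum(film_a) / len(film_a)  # (4 + 11) / 5 = 3.0
avg_b = sum(film_b) / len(film_b)  # 10 / 5 = 2.0

# Averaging places B above A even though A is the majority favorite.
assert avg_b < avg_a
```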

Judge Consensus

After the IRV algorithm has generated a ranked list of films, all the judges should get together and discuss the rankings. The IRV rankings are by no means final and can be completely changed by the judges as they see fit. IRV should generate a ranked list that the judges are most happy with, but they should still discuss the rankings and can advocate for reordering. This process is left up to the contest runner, but in most cases judges have opted to require unanimous consent to reorder the rankings. The judges are given complete autonomy to adjust the rankings as they see fit, but in the several contests I have been a part of, the IRV rankings have been left mainly untouched. Changes typically came about from judges discussing the films, explaining why they liked select films, and changing the minds of other judges.

Conclusion

I hope this helps clarify how the Bricks in Motion judging system works. Documenting the process is important for two reasons. The first is transparency, so that people entering contests will understand how the judging process works. The second is for judges and contest runners to have something to reference so I do not have to be involved in the process. I will continue to update this as needed to add clarification or when new steps are added. Suggestions for improvement are welcome, from improving the code base of the add-on to improving the judging process.

Re: Judging a Bricks in Motion Contest

Thank you for putting this together, AquaMorph. This will be very useful for judging THAC as well as future contests. I do think this yields the best results for contests.

I would encourage anyone who's interested in the behind-the-scenes of BiM contests, or who would like to run a contest in the future, to give the IRV add-on a try using the sample data.

Re: Judging a Bricks in Motion Contest

Did this need to be stated? I thought this was already pretty common knowledge. Previous THAC postings, and frequent judges such as Smeagol have mentioned this many times over.

Re: Judging a Bricks in Motion Contest

Dyland wrote:

Did this need to be stated? I thought this was already pretty common knowledge. Previous THAC postings, and frequent judges such as Smeagol have mentioned this many times over.

This is correct, but this is the first time I have completely covered it along with documentation to run the algorithm so that I no longer have to do it. I could be hit by a bus and the current BiM contest process would run without a hitch.

Re: Judging a Bricks in Motion Contest

My question is really about theme interpretation. I thought a lot of the judging on the Movie Magic summer contest (which I did not enter) came down hard on how the judges didn't like how some people interpreted the theme, even though it was rather vague. I would have thought the theme was wide open to interpretation as it has usually been in the past, but apparently the judges for that competition had a very specific idea of what they were looking for, didn't communicate that clearly, and then made a lot of comments on how people took the theme too literally or not literally enough. I just thought it was weird, and some of the judges' feedback/comments I read weren't the most helpful. Maybe it's better not to do comments at all.

https://bricksafe.com/files/thistof/hillbillyheist/TofAnimation.png

Re: Judging a Bricks in Motion Contest

I've always loved judges' comments, but I agree that, in the past, judges' & entrants' interpretations of the theme differed enough to create slight confusion and a bit of a disconnect.

Generally, though, I think that the judges do a great job of ranking entries on merits aside from how fitting a brickfilm is to the theme - which, to be honest, shouldn't be ranked super highly in the first place, as theme is really just another way to prevent people from preparing films beforehand/cheating.

Re: Judging a Bricks in Motion Contest

Dyland wrote:

I've always loved judges comments, but I agree that, in the past, judges & entrants interpretations of the theme differed enough to create slight confusion and a bit of a disconnect.

Generally, though, I think that the judges do a great job of ranking entries on merits aside from how fitting a brickfilm is to the theme - which, to be honest, shouldn't be ranked super highly in the first place, as theme is really just another way to prevent people from preparing films beforehand/cheating.

I agree with these sentiments completely. Once again, the wisdom of Dyland prevails.


Re: Judging a Bricks in Motion Contest

I'm sorry you didn't think the forum comments I left on every film were helpful. I tried to give honest, constructive feedback, and in a few cases I was sad to find myself the only person to have done so. Personally I feel that acting as an avenue for creators to receive feedback that points out areas of improvement and offers advice on where to focus in the future is one of the best things BiM and similar art-focused sites can do; I think it's a real shame that so many film threads go without comments.

I won't debate whether the theme for Movie Magic was communicated clearly- obviously I thought it was or I would have gone further to elaborate, but what matters in the end is whether the entrants were aware of what was expected of them, and it sounds like there was confusion there so that's unfortunate. I wish that people who were unclear would have asked for more explanation, but I get it: sometimes you don't know what you don't know. It wasn't our direct intention to be hard-nosed about theme interpretation as a part of the judging, but when the theme is an integral part of how the film is produced (like we intended with Movie Magic) it's difficult to excuse it not being incorporated without cheating the people who did understand and incorporate it successfully.

Judging is subjective- no amount of assigning numbers is going to change that. I tried to be open and transparent about why Movie Magic results came out the way they did, because I think that's the best outcome for the entrants who put so much time into entering these contests, only to be reduced to a placement at the end of the day. That's the nature of competition, but ultimately I don't want BiM contests (particularly the ones I am in charge of) to be purely about who placed where. When I say I consider everyone who entered a winner in some way, that's not lip-service: I think the act of creating something is important and I don't want to undermine that by acting like the contest results are a flawless representation of the value of a film. They're just how we felt the film fit some arbitrary criteria that we attempted to describe up-front. We can (and do) strive to make contests fair, but they're never going to be objective.

Last edited by Squash (December 31, 2018 (08:28pm))

Re: Judging a Bricks in Motion Contest

Dyland wrote:

theme is really just another way to prevent people from preparing films beforehand/cheating.

...what?

I could see this as a cynically narrow interpretation of the themes for THAC/BRAWL, perhaps, but the intention with every summer contest has been to provide a core idea so everybody is creating something that expresses their unique creativity in how they interpret it. If you have to completely define contests around something just so people won't cheat, because cheating would be widespread otherwise, you've got a bigger problem to deal with. I'd like to think most people enter brickfilming contests for the satisfaction of competition and making a film, not to pull one over on their competitors.


As for the larger discussion here, I know there's room for discussing improvements from time to time. But I find it frustrating that a vocal minority are focused on dampening the contest experience for everybody by picking apart how the judges' results differed from their personal preferences after every contest, instead of celebrating the entries and being grateful to the volunteers and sponsors who did their best to put together a fun event for people for free. And it's never framed as "I personally liked X entry best," it is always "the contest runner / judges / etc. screwed up."

Most contests here are so much more competently, transparently, and smoothly run than just about any outside, typically larger film competitions (or festivals, for that matter) that I've been a part of. It's good to have documentation like Aquamorph's above, because sadly there is no winning with this community from the perspective of a contest runner. Putting together a brickfilming contest is a mostly thankless job. Complaints and accusations of mishandling are the primary fruit of every contest no matter how well it is run, so at least writeups like this one can demonstrate that there's a sound methodology involved and efforts are made to minimize the effects of any one judge's personal biases.

http://i.imgur.com/wcmcdmf.png

Re: Judging a Bricks in Motion Contest

Dyland wrote:

theme is really just another way to prevent people from preparing films beforehand/cheating.

Clearly, my words have been misinterpreted. I only meant to say that a contest's theme is a way to focus entries in a specific way - similar to the mod elements. This, by design, prevents people from being able to enter something old into (especially in the case of THAC and BRAWL) a competition with a very short time span.

That's not to say that 'cheating' is widespread (It isn't, and I've never tried to insinuate this). I also didn't mean to imply that a theme has only one purpose (honestly, it not only focuses entrants, but also gives each contest a unique spin, and helps judges to more easily compare the plethora of creative works entered).

Smeagol wrote:

sadly there is no winning with this community from the perspective of a contest runner.

Perhaps I've only very little experience, having only hosted one contest myself, but, aside from some initial thoughts in the discord chat whilst announcing results live, I thought everybody was happy with the S.H.A.C. results. Although, perhaps that could have been the specific (small) group that participated - and the random draw prize perk.

Re: Judging a Bricks in Motion Contest

Squash, it was great of you to give feedback to everyone. That's an extra mile that you took to be helpful. I didn't mean to come across as picky or cynical. I don't want to discourage anyone ever, and I try and hope to always be encouraging. Maybe it wasn't fair to single out the Movie Magic comp, it's just a recent one that came to mind. Thank you for making that extra effort to provide comments.

I really liked brickfilmday's tiered competition this past summer, in which the focus was solely on character animation: the movement of the characters and the portrayal of their emotions. The judging category was limited and specific, while still allowing for creative interpretation as well as technique. In most of these contests there are too many things to take into account, like story, cinematography, animation, parrots in the trash memes, etc. Judging is hard because it's impossible to remove personal biases or expectations, and everyone's personal biases and expectations are different.

But I agree with Smeagol, that mostly it's about the fun challenge of doing a film in 24 hours or a week or whatever, and the motivation to get your ass in gear and actually shoot something instead of mulling it over for weeks or months and never getting around to it. Getting our grubby hands dirty with the plastic filth of Legos.

As for agreeing or not with results, I think every awards show ever should have taught us all by now that it doesn't really matter what the academy thinks, our favorites are still winners in our hearts. I just think something as highly subjective as theme interpretation should be a very forgiving category, as long as it has the general idea of the theme.


Re: Judging a Bricks in Motion Contest

thistof wrote:

I think every awards show ever should have taught us all by now that it doesn't really matter what the academy thinks, our favorites are still winners in our hearts. I just think something as highly subjective as theme interpretation should be a very forgiving category, as long as it has the general idea of the theme.

*slow claps

So many lower ranked (or not publicly ranked) contest entries have brought me continued joy for even years afterward. And that's nothing against the judges. If anything, it's just a positive upon the filmmakers and contest theme.

Re: Judging a Bricks in Motion Contest

mini/lol

Last edited by thistof (September 22, 2020 (10:16pm))
