Judging and Awards
How Judging Works

Track Judging

Judging is done by a panel of judges: a mix of academic professors and industry professionals, each an expert in their field. The judging process at WildHacks is outlined below.

Round 1 - Initial Judging

During the initial judging round, judges will review all submissions and score them against the WildHacks project evaluation criteria. Multiple judges are assigned to each project, and the average of their scores is the project's final score for Round 1. The top 10 projects (or more in the case of ties) will move on to Round 2.

The optional Crowd Favorite presentations will take place while Round 1 judging is underway!

⚠️

Raw scores may carry biases from judges' differing standards. To correct for this, scores are normalized using a statistical model detailed here.
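The actual model is described at the link above and is not reproduced here. As an illustration of why normalization matters, one common approach (an assumption on our part, not necessarily the WildHacks model) is to z-score each judge's raw scores, so a harsh judge and a lenient judge who agree on the ordering of projects end up contributing identically:

```python
import statistics

# Illustrative sketch only: the real WildHacks model is described at the link above.
# Per-judge z-scoring subtracts that judge's mean and divides by their standard
# deviation, removing systematic harshness or leniency.
def normalize(scores_by_judge):
    """scores_by_judge: {judge: {project: raw_score}} -> {judge: {project: z-score}}"""
    normalized = {}
    for judge, scores in scores_by_judge.items():
        mu = statistics.mean(scores.values())
        sigma = statistics.pstdev(scores.values()) or 1.0  # guard against a judge giving identical scores
        normalized[judge] = {p: (s - mu) / sigma for p, s in scores.items()}
    return normalized

# A harsh judge and a lenient judge who rank the projects the same way:
raw = {
    "harsh":   {"A": 5, "B": 6, "C": 7},
    "lenient": {"A": 8, "B": 9, "C": 10},
}
z = normalize(raw)
# After normalization, both judges assign each project the same score.
```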

💡

Teams are ranked using the following tiebreakers:

  1. Normalized score, as described in the link above.
  2. Median score, i.e. the average of the remaining scores after the single highest and lowest scores are dropped. If your project is evaluated by four judges, this is the average of the middle two scores. This tiebreaker is taken directly from the National Speech and Debate Association national circuit.
  3. Average score, without dropping the highest or lowest scores.
  4. A random number from 0 to 1. Since all scores are rounded to three decimal places, this is extremely unlikely to be needed; if a tie does come down to the random number, we will award both projects the same rank.
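The tiebreaker cascade above can be sketched as a sort key: compare normalized score first, then the trimmed median, then the plain average, then a random number. The field names and data layout below are our own assumptions for illustration:

```python
import random

# Hypothetical sketch of the Round 1 tiebreaker ordering (data layout assumed).
def trimmed_median(scores):
    """Drop the single highest and lowest scores, then average the rest.
    With four judges this is the average of the middle two scores."""
    s = sorted(scores)
    trimmed = s[1:-1] if len(s) > 2 else s
    return round(sum(trimmed) / len(trimmed), 3)

def mean(scores):
    return round(sum(scores) / len(scores), 3)

def rank_key(project):
    # Higher is better on every tiebreaker, so negate each term for an ascending sort.
    return (
        -project["normalized"],           # 1. normalized score
        -trimmed_median(project["raw"]),  # 2. median (trimmed) score
        -mean(project["raw"]),            # 3. plain average
        -round(random.random(), 3),       # 4. random number from 0 to 1
    )

# Two projects tied on normalized score AND trimmed median; the mean decides.
projects = [
    {"name": "A", "normalized": 0.812, "raw": [8, 9, 7, 9]},
    {"name": "B", "normalized": 0.812, "raw": [9, 9, 6, 8]},
]
ranked = sorted(projects, key=rank_key)
```

Here both projects share a normalized score of 0.812 and a trimmed median of 8.5, so the third tiebreaker (mean: 8.25 vs. 8.0) places project A first.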

After the hackathon is over, you may email us for a full score report. If you have questions about any of the math or implementation, you may contact us about that too.

Round 2 - Live Presentations

The teams with the top-scoring projects will be invited to present their projects to the judges (and other hackers) in a live presentation. Unlike the short video pitch, which balances a quick demo with discussion of the problem and tech stack, the live presentation centers on an in-depth demo of the project's functionality and gives judges an interactive way to evaluate it. Each team has 5 minutes total for both the presentation and judge Q+A; teams are encouraged to keep the demo to about 2 minutes so that roughly 3 minutes remain for judges' questions.

During Round 2, all judges review all top projects and nominate their picks for the 1st, 2nd, and 3rd place teams. Each team receives:

  1. 5 points for every 1st place nomination
  2. 3 points for every 2nd place nomination
  3. 1 point for every 3rd place nomination

The team with the most points will be ranked 1st place, the second-highest 2nd place, and the third-highest 3rd place.
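The nomination tally above amounts to a weighted vote count. A minimal sketch, with the ballot format and team names assumed for illustration:

```python
from collections import Counter

# Hypothetical sketch of the Round 2 nomination tally (ballot format assumed).
POINTS = {1: 5, 2: 3, 3: 1}  # points per 1st / 2nd / 3rd place nomination

# Each ballot maps a placement (1, 2, or 3) to the judge's nominated team.
ballots = [
    {1: "Wildcats", 2: "Purple",   3: "Evanston"},
    {1: "Purple",   2: "Wildcats", 3: "Evanston"},
    {1: "Wildcats", 2: "Evanston", 3: "Purple"},
]

totals = Counter()
for ballot in ballots:
    for place, team in ballot.items():
        totals[team] += POINTS[place]

# Teams ranked by total points, highest first.
standings = totals.most_common()
```

With these example ballots, "Wildcats" earns 5 + 3 + 5 = 13 points and takes 1st place.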

In the event of a tie, we will break it by median score and then by mean score, each calculated the same way as in Round 1; we will not use the normalized score for this round.

The winning teams will be announced during the closing ceremony.

⚠️

Top teams will have exactly 5 minutes to present. The organizing team will mute microphones and cut teams off if they exceed 5 minutes.

Challenge Judging

MLH Challenges

All challenges presented by MLH will be judged by an MLH representative while track judging is taking place. Winning teams will be announced during the closing ceremony.

Crowd Favorite

During Round 1 of track judging, hackers will have the opportunity to present their projects to the crowd. The crowd will vote on its favorite projects, and the project with the most votes will be awarded the Crowd Favorite prize, announced during the closing ceremony. Only teams that choose to enter the Crowd Favorite challenge are eligible to present and win.