Create fair, transparent judging that everyone trusts. The right criteria turn subjective opinions into objective evaluation that drives better projects.
Nothing kills hackathon energy faster than teams feeling like judging was unfair or arbitrary. When participants don't understand how they'll be evaluated, they optimize for the wrong things - or worse, they don't try as hard because "it's all subjective anyway."
After judging hundreds of hackathons, we've found this 4-category framework strikes the right balance between comprehensive evaluation and simplicity. It works for corporate, university, and community hackathons, and everything in between.
The framework scores every project on four questions:
1. How novel and creative is the solution?
2. How well is the solution built?
3. How valuable is this solution?
4. How well is the idea communicated?
Adjust the Weights
The four categories don't have to count equally; weight them to reflect what your hackathon values most.
The voting method you choose affects judging speed, fairness, and how differentiated your results are. Here are the most effective approaches.
Recommended for most hackathons
Judges score each project 1-10 across your criteria categories. It offers a good balance of speed and differentiation and is the industry standard for most hackathons (a weighted-scoring sketch follows below).
Pro: Fast to score, enough granularity, familiar to judges. Con: Judges may cluster around 7-8 for most projects.
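To make the arithmetic concrete, here is a minimal sketch of weighted 1-10 scale scoring in Python. The weights are illustrative placeholders, and only Innovation and Presentation are named elsewhere in this guide; "Technical Execution" and "Impact" are assumed labels for the other two questions.

```python
# Minimal sketch: combine one judge's 1-10 category scores into a weighted
# total, then average across judges. The weights and two of the category
# labels are illustrative assumptions -- adjust both to your own rubric.

WEIGHTS = {
    "Innovation": 0.30,           # how novel and creative is the solution?
    "Technical Execution": 0.30,  # how well is the solution built?
    "Impact": 0.25,               # how valuable is this solution?
    "Presentation": 0.15,         # how well is the idea communicated?
}

def weighted_score(scores: dict[str, float]) -> float:
    """One judge's weighted total, still on the 1-10 scale."""
    return sum(WEIGHTS[category] * score for category, score in scores.items())

def project_score(all_judges: list[dict[str, float]]) -> float:
    """Average the weighted totals from every judge who scored the project."""
    return sum(weighted_score(scores) for scores in all_judges) / len(all_judges)

# Example: two judges scoring the same project.
judge_a = {"Innovation": 8, "Technical Execution": 7, "Impact": 9, "Presentation": 6}
judge_b = {"Innovation": 7, "Technical Execution": 8, "Impact": 8, "Presentation": 7}
print(round(project_score([judge_a, judge_b]), 2))  # 7.6
```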
Simple & democratic
Each judge (or participant) picks their top 3 favorite projects, and the project with the most votes wins. It's simple, fast, and eliminates scoring complexity (a tally sketch follows below).
Pro: Fast, no scoring required, very clear. Con: Less nuanced, may miss solid middle projects.
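Tallying Top 3 ballots is just a count. A minimal sketch, assuming each ballot lists three project names (the team names are purely illustrative):

```python
# Minimal sketch: tally Top 3 ballots. Each ballot is one judge's (or
# participant's) three picks; the project named most often wins.
from collections import Counter

ballots = [
    ["Team A", "Team B", "Team C"],
    ["Team A", "Team C", "Team D"],
    ["Team A", "Team B", "Team E"],
]
tally = Counter(pick for ballot in ballots for pick in ballot)
print(tally.most_common(3))  # [('Team A', 3), ('Team B', 2), ('Team C', 2)]
```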
More nuanced rankings
Judges rank their top 5-7 projects in order of preference. It's more detailed than Top 3 but less complex than scoring every category (one way to aggregate the rankings is sketched below).
Pro: Forces differentiation, captures relative preferences. Con: More complex to calculate than Top 3.
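The guide doesn't prescribe how to combine the ranked lists; a Borda-style count is one common option, sketched here under that assumption.

```python
# Minimal sketch: aggregate judges' ranked lists with a Borda-style count.
# A first-place pick earns `list_length` points, second place one less, etc.
from collections import defaultdict

def borda_totals(ballots: list[list[str]], list_length: int = 5) -> dict[str, int]:
    totals: dict[str, int] = defaultdict(int)
    for ballot in ballots:
        for position, project in enumerate(ballot):
            totals[project] += list_length - position
    return dict(sorted(totals.items(), key=lambda item: item[1], reverse=True))

# Example: two judges each rank five projects, best first.
ballots = [
    ["Team A", "Team C", "Team B", "Team D", "Team E"],
    ["Team C", "Team A", "Team D", "Team B", "Team F"],
]
print(borda_totals(ballots))  # Team A and Team C tie at 9 points on these ballots
```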
Celebrate different strengths
Award winners in multiple categories instead of (or in addition to) overall winners. Celebrates diverse excellence and allows more teams to win.
Pro: Multiple winners, celebrates different types of excellence. Con: Requires more careful judging in each area.
Quality threshold voting
Judges cast a binary vote: would we ship this, yes or no? Each vote requires written feedback explaining the decision, which keeps the focus on a quality bar rather than a ranking.
Pro: Emphasizes quality over ranking, valuable feedback. Con: May create many ties at the top.
Head-to-head matchups
Judges compare projects two at a time: "Which is better, A or B?" Repeat this for multiple pairs; a ranking algorithm then determines the overall order from the head-to-head results (one simple approach is sketched below).
Pro: Easier decisions (A vs B), statistically robust, reduces bias. Con: Requires many comparisons for accuracy.
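The simplest way to turn pairwise verdicts into a ranking is to count wins, as in the minimal sketch below. More robust aggregation methods (Elo, Bradley-Terry) exist, but the guide doesn't prescribe one; the team names are illustrative.

```python
# Minimal sketch: rank projects by counting head-to-head wins.
# Each comparison records a single judge's verdict as (winner, loser).
from collections import defaultdict

def rank_by_wins(comparisons: list[tuple[str, str]]) -> list[tuple[str, int]]:
    wins: dict[str, int] = defaultdict(int)
    for winner, loser in comparisons:
        wins[winner] += 1
        wins.setdefault(loser, 0)  # projects that never win still appear
    return sorted(wins.items(), key=lambda item: item[1], reverse=True)

# Example: three projects, every pair judged once.
results = [("Team A", "Team B"), ("Team A", "Team C"), ("Team C", "Team B")]
print(rank_by_wins(results))  # [('Team A', 2), ('Team C', 1), ('Team B', 0)]
```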
When and how you publish your criteria is just as important as what the criteria are. Here's the right approach.
Teams should know how they'll be judged before they sign up. Include the criteria in your announcement email and on your registration page.
Put the criteria on your hackathon homepage and in the welcome email, and reference them during kickoff. Don't hide them in a PDF that no one will read.
Don't just list categories. Explain why Innovation matters (drives breakthrough thinking) and why Presentation matters (great ideas need great communication).
Changing judging criteria after teams start building breaks trust completely. If you must adjust, make it an additive bonus category, not a change to core criteria.
Here's a complete example rubric using the 1-10 scale. Copy and adapt this for your hackathon.
Judge Name: _______________ | Round: ___

How novel and creative is the solution?        Score (1-10): ___
How well is the solution built?                Score (1-10): ___
How valuable is this solution?                 Score (1-10): ___
How well is the idea communicated?             Score (1-10): ___
Share Example Scores with Judges