Hackathon Judging Criteria

Create fair, transparent judging that everyone trusts. The right criteria turn subjective opinions into objective evaluation that drives better projects.

Why Judging Criteria Matter

Nothing kills hackathon energy faster than teams feeling like judging was unfair or arbitrary. When participants don't understand how they'll be evaluated, they optimize for the wrong things, or worse, they don't try as hard because "it's all subjective anyway."

Without Clear Criteria

  • Teams focus on flashy demos over substance
  • Judges struggle to compare different types of projects
  • Results feel arbitrary and demotivating
  • Winners can't explain why they won

With Clear Criteria

  • Teams build with clear goals in mind
  • Judges evaluate consistently across projects
  • Results feel fair and well-reasoned
  • Everyone learns what makes a strong project

The Recommended Framework

After judging hundreds of hackathons, we've found this 4-category framework strikes the right balance between comprehensive evaluation and simplicity. It works for corporate, university, and community hackathons, and everything in between.

Innovation & Creativity

How novel and creative is the solution?

Weight: 30%

What to Look For:

  • Unique approach to the problem
  • Creative use of technology or resources
  • Fresh perspective on existing challenges
  • Originality of the core idea

Technical Implementation

How well is the solution built?

Weight: 25%

What to Look For:

  • Code quality and architecture
  • Technical difficulty and complexity
  • Completeness of the implementation
  • Demonstration of technical skill

Business Value & Impact

How valuable is this solution?

Weight: 25%

What to Look For:

  • Clear problem being solved
  • Potential real-world impact
  • Market viability or internal value
  • Scalability of the solution

Presentation & Communication

How well is the idea communicated?

Weight: 20%

What to Look For:

  • Clarity of the pitch and demo
  • Quality of supporting materials
  • Ability to answer questions
  • Overall professionalism

Adjust the Weights

These percentages are starting points. For a technical hackathon, increase Technical Implementation to 35%. For a business-focused event, increase Business Value to 35%. The key is publishing your weights before teams start building.
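
To make the math concrete, here's a minimal sketch of the weighted total in Python, assuming 1-10 scores per category and the default weights above (the category keys are just illustrative names, not a fixed format):

```python
# Minimal sketch of weighted scoring, assuming 1-10 scores per category.
# The weights are the framework defaults above; adjust them for your event.
WEIGHTS = {
    "innovation": 0.30,
    "technical": 0.25,
    "business_value": 0.25,
    "presentation": 0.20,
}

def weighted_total(scores: dict[str, float]) -> float:
    """Combine per-category 1-10 scores into a single weighted total."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(scores[category] * weight for category, weight in WEIGHTS.items())

# Example: a highly original project with an average demo still scores well.
print(weighted_total({
    "innovation": 9, "technical": 7, "business_value": 8, "presentation": 6,
}))  # 7.65
```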

Voting & Scoring Methods

The voting method you choose affects judging speed, fairness, and how differentiated your results are. Here are the most effective approaches.

1-10 Scale Scoring

Recommended for most hackathons

Judges score each project 1-10 across your criteria categories. Good balance of speed and differentiation. Industry standard for most hackathons.

9-10 = Outstanding
7-8 = Strong
5-6 = Good
3-4 = Fair
1-2 = Needs Improvement

Pro: Fast to score, enough granularity, familiar to judges. Con: Judges may cluster around 7-8 for most projects.

Top 3 Voting

Simple & democratic

Each judge (or participant) picks their top 3 favorite projects. Most votes wins. Simple, fast, and eliminates scoring complexity.

How it works: Each voter selects 3 projects. Tally all votes. Projects with most votes win. Optional: Weight votes (1st choice = 3 points, 2nd = 2 points, 3rd = 1 point).

Pro: Fast, no scoring required, very clear. Con: Less nuanced, may miss solid middle projects.
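
To make the weighted variant concrete, here's a minimal tally sketch, assuming each ballot is an ordered list of three project names (the project names are hypothetical):

```python
from collections import Counter

# Weighted Top 3 tally: 1st choice = 3 points, 2nd = 2, 3rd = 1.
POINTS = [3, 2, 1]

def tally_top3(ballots: list[list[str]]) -> Counter:
    totals: Counter = Counter()
    for ballot in ballots:
        for rank, project in enumerate(ballot[:3]):
            totals[project] += POINTS[rank]
    return totals

# Hypothetical ballots from three voters.
ballots = [
    ["RoboChef", "PlantPal", "BudgetBee"],
    ["PlantPal", "RoboChef", "MapMate"],
    ["RoboChef", "MapMate", "PlantPal"],
]
print(tally_top3(ballots).most_common())  # RoboChef leads with 8 points
```

For the unweighted version, set POINTS to [1, 1, 1] and the tally becomes a simple vote count.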

Ranked Choice Voting

More nuanced rankings

Judges rank their top 5-7 projects in order of preference. More detailed than Top 3, less complex than scoring every category.

How it works: Judges rank their top projects in order (e.g., 1st through 5th). Calculate points (1st = 5 pts, 2nd = 4 pts, etc.). Highest total points wins.

Pro: Forces differentiation, captures relative preferences. Con: More complex to calculate than Top 3.
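
The same positional-points idea from the Top 3 sketch above generalizes here: with five ranks, 1st earns 5 points down to 1 point for 5th (a Borda-style count). A minimal version:

```python
from collections import Counter

# Borda-style ranked choice: with num_ranks = 5, 1st = 5 pts ... 5th = 1 pt.
def tally_ranked(ballots: list[list[str]], num_ranks: int = 5) -> Counter:
    totals: Counter = Counter()
    for ballot in ballots:
        for rank, project in enumerate(ballot[:num_ranks]):
            totals[project] += num_ranks - rank
    return totals
```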

Category Winners

Celebrate different strengths

Award winners in multiple categories instead of (or in addition to) overall winners. Celebrates diverse excellence and allows more teams to win.

• Best Technical Implementation
• Most Innovative Idea
• Best User Experience
• Most Likely to Ship
• Best Business Value

Pro: Multiple winners, celebrates different types of excellence. Con: Requires more careful judging in each area.

Thumbs Up/Down + Feedback

Quality threshold voting

Judges give a binary vote: Would we ship this? Yes or No. Requires written feedback explaining the decision. Focuses on a quality bar rather than ranking.

How it works: Each judge votes Yes/No on "ship-worthiness" and writes 2-3 sentences of feedback. Projects with the most "Yes" votes win.

Pro: Emphasizes quality over ranking, valuable feedback. Con: May create many ties at the top.

Pairwise Comparison

Head-to-head matchups

Judges compare projects two at a time: "Which is better, A or B?" Repeat for multiple pairs. The algorithm determines overall rankings from head-to-head results.

How it works: Show judges random pairs of projects. They pick the better one each time. A rating algorithm (such as Elo or a Swiss-style system) calculates final rankings from the matchup results.

Pro: Easier decisions (A vs B), statistically robust, reduces bias. Con: Requires many comparisons for accuracy.
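
As a sketch of the Elo option mentioned above: every project starts at the same rating, and each head-to-head pick nudges the winner up and the loser down. The starting rating (1000) and K-factor (32) are conventional chess-style defaults, not prescribed values:

```python
# Minimal Elo sketch for pairwise judging. Each comparison is a
# (winner, loser) pair of project names; all ratings start equal.
def elo_rankings(comparisons: list[tuple[str, str]], k: float = 32.0) -> dict[str, float]:
    ratings: dict[str, float] = {}
    for winner, loser in comparisons:
        r_win = ratings.setdefault(winner, 1000.0)
        r_lose = ratings.setdefault(loser, 1000.0)
        # Expected score of the winner under the Elo model.
        expected = 1.0 / (1.0 + 10 ** ((r_lose - r_win) / 400.0))
        ratings[winner] = r_win + k * (1.0 - expected)
        ratings[loser] = r_lose - k * (1.0 - expected)
    return ratings

# Hypothetical matchups collected from several judges.
matchups = [("RoboChef", "PlantPal"), ("RoboChef", "MapMate"), ("PlantPal", "MapMate")]
print(sorted(elo_rankings(matchups).items(), key=lambda item: item[1], reverse=True))
```

Because each update depends on the ratings at that moment, showing pairs to judges in a shuffled order helps keep early comparisons from skewing the final standings.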

Mix and match: Consider using 1-10 scoring for your judges while also running a separate "Audience Choice Award" based on Top 3 voting from all participants. This combines expert evaluation with community input.

Publishing Your Judging Criteria

When and how you publish your criteria is just as important as what the criteria are. Here's the right approach.

1. Publish Before Registration Opens

Teams should know how they'll be judged before they sign up. Include criteria in your announcement email and registration page.

2. Make It Easy to Find

Put criteria on your hackathon homepage, in the welcome email, and reference it during kickoff. Don't hide it in a PDF that no one will read.

3. Explain the "Why" Behind Each Category

Don't just list categories. Explain why Innovation matters (drives breakthrough thinking) and why Presentation matters (great ideas need great communication).

4. Never Change Criteria Mid-Event

Changing judging criteria after teams start building breaks trust completely. If you must adjust, make it an additive bonus category, not a change to core criteria.

Example Judging Rubric

Here's a complete example rubric using the 1-10 scale. Copy and adapt this for your hackathon.

Project Name: _______________

Judge Name: _______________ | Round: ___

Innovation & Creativity (30%)

Score: ___ / 10

How novel and creative is the solution?

9-10: Breakthrough idea, highly original approach
7-8: Creative solution with unique elements
5-6: Solid idea with some creativity
3-4: Incremental improvement on existing ideas
1-2: Common or obvious solution

Technical Implementation (25%)

Score: ___ / 10

How well is the solution built?

9-10: Exceptional technical execution, production-ready
7-8: Strong implementation, mostly complete
5-6: Functional prototype with core features
3-4: Basic implementation, some issues
1-2: Minimal technical work or major bugs

Business Value & Impact (25%)

Score: ___ / 10

How valuable is this solution?

9-10: High-impact solution, clear market opportunity
7-8: Strong value proposition, good potential
5-6: Useful solution, moderate impact
3-4: Limited value or unclear use case
1-2: Minimal practical value

Presentation & Communication (20%)

Score: ___ / 10

How well is the idea communicated?

9-10: Compelling presentation, excellent demo
7-8: Clear communication, good demo
5-6: Adequate presentation, gets the point across
3-4: Unclear presentation or weak demo
1-2: Confusing or incomplete presentation

Overall Comments & Feedback:

Total Weighted Score: ___ / 10.0
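
For example, with the default weights, category scores of 8, 7, 9, and 6 work out to (8 × 0.30) + (7 × 0.25) + (9 × 0.25) + (6 × 0.20) = 7.6.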

Share Example Scores with Judges

Before judging starts, have 2-3 people independently score a sample project using your rubric. If they arrive at very different scores (say, an 8, a 5, and a 9 on Innovation), your criteria need clearer definitions. This calibration exercise helps ensure consistent scoring across all judges.

HackHQ automates judging: set up your criteria once, judges score on their phones, and results calculate automatically with transparent real-time updates. No spreadsheets, no manual averaging, no mistakes. See how it works.