scalr
- Task Description: Given a legal question, identify which of five candidate holding statements best answers it.
- Task Type: 5-way classification
- Document Type: legal question
- Number of Samples: 571
- Input Length Range: 20-698 tokens
- Evaluation Metrics: accuracy (maximize), balanced_accuracy (maximize), f1_macro (maximize), f1_micro (maximize), valid_predictions_ratio (maximize); see the metric-computation sketch after this list
- Tags: constitutional law, rhetorical understanding, rhetorical-analysis
- Paper: LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models
- Dataset Download: https://hazyresearch.stanford.edu/legalbench/
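A minimal sketch of how the listed metrics could be computed for a scalr submission. It assumes each prediction has already been mapped to one of the five candidate labels, or to `None` when the model output could not be parsed; the exact definition of valid_predictions_ratio is not documented here, so treating it as the fraction of parseable predictions is an assumption.

```python
# Sketch of scoring a submission against the five metrics on this leaderboard.
# Assumption: "valid_predictions_ratio" = fraction of outputs that could be
# mapped to a candidate label; unparseable outputs are excluded from the
# classification metrics (an alternative convention would count them as wrong).
from sklearn.metrics import accuracy_score, balanced_accuracy_score, f1_score

def score_submission(gold, predicted):
    """gold: list of correct labels; predicted: list of labels or None."""
    valid = [(g, p) for g, p in zip(gold, predicted) if p is not None]
    valid_predictions_ratio = len(valid) / len(gold)
    y_true = [g for g, _ in valid]
    y_pred = [p for _, p in valid]
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "balanced_accuracy": balanced_accuracy_score(y_true, y_pred),
        "f1_macro": f1_score(y_true, y_pred, average="macro"),
        "f1_micro": f1_score(y_true, y_pred, average="micro"),
        "valid_predictions_ratio": valid_predictions_ratio,
    }
```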
7 submissions
| Rank | Model | accuracy | balanced_accuracy | f1_macro | f1_micro | valid_predictions_ratio | Date |
|---|---|---|---|---|---|---|---|
| 1 | meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo | 0.763 | 0.765 | 0.762 | 0.763 | 0.996 | 2025-08-04 |
| 2 | meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo | 0.735 | 0.738 | 0.736 | 0.735 | 0.993 | 2025-07-25 |
| 3 | gpt-4o-mini | 0.734 | 0.732 | 0.734 | 0.734 | 1.000 | 2025-07-02 |
| 4 | claude-3-5-haiku-20241022 | 0.727 | 0.730 | 0.728 | 0.727 | 0.982 | 2025-08-01 |
| 5 | google/gemma-2-27b-it | 0.630 | 0.628 | 0.632 | 0.630 | 0.993 | 2025-07-24 |
| 6 | gpt-4.1-nano | 0.553 | 0.553 | 0.554 | 0.553 | 1.000 | 2025-07-03 |
| 7 | claude-3-haiku-20240307 | 0.533 | 0.529 | 0.543 | 0.533 | 0.874 | 2025-07-25 |
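A valid_predictions_ratio below 1.000 (for example, 0.874 for claude-3-haiku-20240307) suggests some model outputs could not be mapped to one of the five candidates. A minimal parsing sketch, assuming the candidates are presented with A-E labels (the labeling scheme is an assumption for illustration, not taken from this listing):

```python
import re

# Hypothetical parser: map a model's free-text answer to one of five
# candidate labels A-E, returning None when no label can be recovered.
def parse_choice(answer_text):
    match = re.search(r"\b([A-E])\b", answer_text.strip().upper())
    return match.group(1) if match else None
```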