nielsr HF Staff committed on
Commit
0b26c4f
·
verified ·
1 Parent(s): 3bcc1a4

Improve dataset card: Add metadata, paper/code links, images, and test section


This PR significantly enhances the dataset card for AgriEval by:
- Adding comprehensive metadata tags for `task_categories` (question-answering), `language` (zh), `license` (apache-2.0), `tags` (agriculture, benchmark, llm-evaluation), and `size_categories` (10K<n<100K) to improve discoverability.
- Including direct links to the associated Hugging Face paper ([AgriEval: A Comprehensive Chinese Agricultural Benchmark for Large Language Models](https://huggingface.co/papers/2507.21773)) and the GitHub repository (`https://github.com/YanPioneer/AgriEval`) for easy access to more information and code.
- Integrating illustrative figures from the GitHub README to provide a clearer visual overview of the dataset's characteristics and examples.
- Adding the "2.5 Test" section from the GitHub README, offering practical guidance on how to evaluate using the benchmark.

These changes collectively make the dataset card more informative, user-friendly, and compliant with Hugging Face Hub best practices.

Files changed (1)
  1. README.md +27 -0
README.md CHANGED
@@ -1,6 +1,26 @@
+ ---
+ task_categories:
+ - question-answering
+ language:
+ - zh
+ license: apache-2.0
+ tags:
+ - agriculture
+ - benchmark
+ - llm-evaluation
+ size_categories:
+ - 10K<n<100K
+ ---
+
  # AgriEval: A Comprehensive Chinese Agricultural Benchmark for Large Language Models
+
+ Paper: [AgriEval: A Comprehensive Chinese Agricultural Benchmark for Large Language Models](https://huggingface.co/papers/2507.21773)
+ Code: https://github.com/YanPioneer/AgriEval
+
  ## 1. Introduction
  In the agricultural domain, the deployment of large language models (LLMs) is hindered by the lack of training data and evaluation benchmarks. To mitigate this issue, we propose AgriEval, the first comprehensive Chinese agricultural benchmark with three main characteristics: (1) *Comprehensive Capability Evaluation.* AgriEval covers 6 major agriculture categories and 41 subcategories within agriculture, addressing four core cognitive scenarios: memorization, understanding, inference, and generation. (2) *High-Quality Data.* The dataset is curated from university-level examinations and assignments, providing a natural and robust benchmark for assessing the capacity of LLMs to apply knowledge and make expert-like decisions. (3) *Diverse Formats and Extensive Scale.* AgriEval comprises 20,634 choice questions and 2,104 open-ended question-and-answer questions, establishing it as the most extensive agricultural benchmark available to date. We also present comprehensive experimental results over 45 open-source and commercial LLMs. The experimental results reveal that most existing LLMs struggle to achieve 60% accuracy, underscoring the developmental potential in agricultural LLMs. Additionally, we conduct extensive experiments to investigate factors influencing model performance and propose strategies for enhancement.
+ ![img](https://github.com/YanPioneer/AgriEval/blob/main/image/NIPS_main_figure_01.png)
+ Fig1. *Left*: Domains classification in AgriEval. *Middle*: Cognitive ability classification in AgriEval. *Right*: A brief overview of human and LLMs' performance on AgriEval.
 
  ## 2. Description
 
@@ -36,7 +56,14 @@ The directory structure of the dataset is as follows:
  | class | dict | Cognitive category (including major and minor categories) |
  | type | str | Original domain (deprecated) |
 
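The `class` field documented in the table above is a dict holding major and minor categories, which lends itself to simple per-category tallies. A minimal sketch, assuming the questions ship as JSON; the file name `demo.json`, the synthetic records, and the `major` key inside `class` are illustrative assumptions, not the dataset's confirmed schema:

```python
import json
from collections import Counter

def count_by_major_category(path):
    """Tally questions per major cognitive category in one JSON file."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    # "class" is documented as a dict of major and minor categories;
    # the "major" key name here is an assumption.
    return Counter(r["class"].get("major", "unknown") for r in records)

# Tiny synthetic demo (made-up records, not real AgriEval data):
demo = [
    {"class": {"major": "memorization", "minor": "definition"}, "type": "crop"},
    {"class": {"major": "memorization", "minor": "fact"}, "type": "crop"},
    {"class": {"major": "inference", "minor": "diagnosis"}, "type": "plant"},
]
with open("demo.json", "w", encoding="utf-8") as f:
    json.dump(demo, f, ensure_ascii=False)

print(count_by_major_category("demo.json"))
# Counter({'memorization': 2, 'inference': 1})
```

The same grouping works for the minor categories by swapping the key.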
+ ### 2.4 Data Examples
+
+ ![img](https://github.com/YanPioneer/AgriEval/blob/main/image/Example_main_01.png)
+ Fig2. Examples for AgriEval.
+
+ ### 2.5 Test
 
+ Run `./eval/run_evalate_multi_choice.py`
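The script above evaluates model outputs on the choice questions. As a rough sketch of the accuracy computation such a step performs — the field names `answer` and `prediction` are assumptions, not the script's actual schema:

```python
def choice_accuracy(records):
    """Fraction of records whose predicted option letter matches the gold answer."""
    if not records:
        return 0.0
    correct = sum(
        r["prediction"].strip().upper() == r["answer"].strip().upper()
        for r in records
    )
    return correct / len(records)

# Synthetic demo predictions (not real AgriEval outputs):
demo = [
    {"answer": "A", "prediction": "a"},  # case-insensitive match
    {"answer": "C", "prediction": "B"},  # miss
]
print(choice_accuracy(demo))  # 0.5
```

For the actual flags and output format, consult the script itself in the GitHub repository.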
 
  ## 3. Protocol
 