apeters committed on
Commit 0726202 · verified · 1 Parent(s): f78881b

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -12,7 +12,7 @@ pinned: false
 
 ## 🌐 About OpenDataArena
 
-**[OpenDataArena (ODA)](https://opendataarena.github.io)** is an open research initiative devoted to evaluating, benchmarking, and creating high-value datasets for the post-training era of large language models (LLMs).
+**[OpenDataArena (ODA)](https://arena.opendatalab.org.cn/)** is an open research initiative devoted to evaluating, benchmarking, and creating high-value datasets for the post-training era of large language models (LLMs).
 We believe **data quality defines model capability** — and that **open, reproducible evaluation** is key to accelerating progress in AI.
 
 ### 🚀 Our Mission
@@ -20,12 +20,12 @@ To make **data evaluation scientific, transparent, and community-driven**, while
 
 ### 🔑 Key Features
 
-- 🏆 **Dataset Leaderboard** — [Leaderboard](https://opendataarena.github.io/leaderboard.html) ranks the most valuable datasets across multiple domains, based on diverse benchmarks.
+- 🏆 **Dataset Leaderboard** — [Leaderboard](https://arena.opendatalab.org.cn/leaderboard.html) ranks the most valuable datasets across multiple domains, based on diverse benchmarks.
 - 📊 **Comprehensive Scoring System** — [Scoring tool](https://github.com/OpenDataArena/OpenDataArena-Tool/tree/main/data_scorer) measures dataset quality, diversity, and learning values using reproducible pipelines.
 - 🧰 **Open-Source Toolkit** — [OpenDataArena-Tool](https://github.com/OpenDataArena/OpenDataArena-Tool) enables dataset evaluation and scoring with a standardized, community-driven workflow.
 - 🌱 **High-Value Data Generation** — beyond evaluation, ODA continuously produces and shares new, top-quality datasets for fine-tuning and alignment research.
 
 
-If you find our work helpful, please consider ⭐ **starring** and **subscribing** to support open, data-driven AI research. Learn more at [opendataarena.github.io](https://opendataarena.github.io).
+If you find our work helpful, please consider ⭐ **starring** and **subscribing** to support open, data-driven AI research. Learn more at [OpenDataArena](https://arena.opendatalab.org.cn/).
 
 (OpenDataArena is part of [OpenDataLab](https://huggingface.co/opendatalab)).