nielsr (HF Staff) committed
Commit f4f9b6b · verified · 1 parent: 00ee1ee

Update task category, license, add tags and sample usage

This PR addresses several improvements to the dataset card:
- Updates the `license` metadata from `apache-2.0` to `cc-by-4.0`, aligning with the Creative Commons license for dataset materials as specified in the GitHub README.
- Changes the `task_categories` metadata from `question-answering` to `text-ranking`, which better reflects the dataset's focus on evaluating retrieval and ranking models.
- Adds `tags` such as `retrieval`, `embeddings`, and `benchmark` for better discoverability and categorization.
- Includes a "Sample Usage" section with a Python code snippet, directly sourced from the GitHub README, demonstrating how to load the dataset using the `datasets` library.

These changes aim to make the dataset card more accurate, informative, and user-friendly.
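As a quick sanity check, the front-matter block introduced by this PR parses as valid YAML with the expected values. This is a minimal sketch using PyYAML (not part of the PR itself); the field values are copied verbatim from the README.md diff:

```python
import yaml  # PyYAML

# Front-matter block added by this PR (copied from the README.md diff).
frontmatter = """\
language: en
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- text-ranking
tags:
- retrieval
- embeddings
- benchmark
"""

meta = yaml.safe_load(frontmatter)
print(meta["license"])          # cc-by-4.0
print(meta["task_categories"])  # ['text-ranking']
print(meta["tags"])             # ['retrieval', 'embeddings', 'benchmark']
```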

Files changed (1):
1. README.md (+18 −5)
README.md CHANGED

````diff
@@ -1,6 +1,14 @@
 ---
-license: apache-2.0
 language: en
+license: cc-by-4.0
+size_categories:
+- 10K<n<100K
+task_categories:
+- text-ranking
+tags:
+- retrieval
+- embeddings
+- benchmark
 dataset_info:
 - config_name: default
   features:
@@ -46,10 +54,6 @@ configs:
   data_files:
   - split: queries
     path: queries.jsonl
-task_categories:
-- question-answering
-size_categories:
-- 10K<n<100K
 ---
 
 # LIMIT
@@ -63,6 +67,15 @@ A retrieval dataset that exposes fundamental theoretical limitations of embeddin
 - **Full version**: [LIMIT](https://huggingface.co/datasets/orionweller/LIMIT/) (50k documents)
 - **Small version**: [LIMIT-small](https://huggingface.co/datasets/orionweller/LIMIT-small/) (46 documents only)
 
+## Sample Usage
+
+You can load the data using the `datasets` library from Huggingface ([LIMIT](https://huggingface.co/datasets/orionweller/LIMIT), [LIMIT-small](https://huggingface.co/datasets/orionweller/LIMIT-small)):
+
+```python
+from datasets import load_dataset
+ds = load_dataset("orionweller/LIMIT-small", "corpus") # also available: queries, test (contains qrels).
+```
+
 ## Dataset Details
 
 **Queries** (1,000): Simple questions asking "Who likes [attribute]?"
````