---
dataset_info:
  features:
  - name: question
    dtype: string
  - name: solutions
    dtype: string
  - name: starter_code
    dtype: string
  - name: input_output
    dtype: string
  - name: difficulty
    dtype: string
  - name: raw_tags
    dtype: string
  - name: name
    dtype: string
  - name: source
    dtype: string
  - name: tags
    dtype: string
  - name: skill_types
    dtype: string
  - name: url
    dtype: string
  - name: Expected Auxiliary Space
    dtype: string
  - name: time_limit
    dtype: string
  - name: date
    dtype: string
  - name: picture_num
    dtype: string
  - name: memory_limit
    dtype: string
  - name: Expected Time Complexity
    dtype: string
  splits:
  - name: train
    num_bytes: 4239311973
    num_examples: 25443
  - name: test
    num_bytes: 481480755
    num_examples: 1000
  download_size: 2419845110
  dataset_size: 4720792728
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
source_datasets: BAAI/TACO
license: apache-2.0
task_categories:
- text-generation
- feature-extraction
language:
- en
tags:
- BAAI/TACO
size_categories:
- 10K<n<100K
---

# BEE-spoke-data/TACO-hf

A simple re-host of https://huggingface.co/datasets/BAAI/TACO, saved in the Hugging Face `datasets` format for ease of use.


Features:

```py
DatasetDict({
    "train": Dataset({
        "features": [
            "question",
            "solutions",
            "starter_code",
            "input_output",
            "difficulty",
            "raw_tags",
            "name",
            "source",
            "tags",
            "skill_types",
            "url",
            "Expected Auxiliary Space",
            "time_limit",
            "date",
            "picture_num",
            "memory_limit",
            "Expected Time Complexity"
        ],
        "num_rows": 25443
    }),
    "test": Dataset({
        "features": [
            "question",
            "solutions",
            "starter_code",
            "input_output",
            "difficulty",
            "raw_tags",
            "name",
            "source",
            "tags",
            "skill_types",
            "url",
            "Expected Auxiliary Space",
            "time_limit",
            "date",
            "picture_num",
            "memory_limit",
            "Expected Time Complexity"
        ],
        "num_rows": 1000
    })
})
```
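Every feature is stored as a string, so list-like columns such as `solutions` and `input_output` need to be decoded before use. The sketch below is a hedged example: the JSON-encoding of `solutions` is an assumption carried over from the upstream BAAI/TACO format, and the sample record is illustrative, not taken from the dataset.

```py
import json

# Loading the dataset (requires the `datasets` library and network access):
# from datasets import load_dataset
# ds = load_dataset("BEE-spoke-data/TACO-hf")
# record = ds["train"][0]

# Illustrative record: `solutions` is assumed to be a JSON-encoded
# list of source-code strings, as in the upstream BAAI/TACO dataset.
record = {
    "solutions": '["print(input())", "import sys; print(sys.stdin.read())"]',
}

# Decode the string field into a Python list before iterating over it.
solutions = json.loads(record["solutions"])
print(len(solutions))  # → 2
```

If a field turns out not to be JSON-encoded in a given row, `json.loads` will raise `json.JSONDecodeError`, so wrapping the decode in a try/except is a reasonable precaution when processing the full split.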

Refer to the original dataset for more details.