---
size_categories: n<1K
dataset_info:
  features:
  - name: column_name
    dtype: string
  - name: id_faker_arguments
    struct:
    - name: args
      struct:
      - name: letters
        dtype: string
      - name: text
        dtype: string
    - name: type
      dtype: string
  - name: column_content
    sequence: string
  splits:
  - name: train
    num_bytes: 4583
    num_examples: 4
  download_size: 7535
  dataset_size: 4583
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for faker-example
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it, using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/ninaxu/faker-example/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/ninaxu/faker-example/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"column_content": [
"594564793936",
"422645724655",
"142688374180",
"151546611521",
"685542688520",
"041197636946",
"742485071901",
"259581023351",
"242310937846",
"161331443479",
"089946558053",
"892937709085",
"371747353204",
"130825763690",
"715314093651",
"199735005780",
"776005192229",
"533330763559",
"133642433775",
"400474040702",
"236402665456",
"359951161260",
"858505534111",
"035009831008",
"909566483105",
"849472289056",
"234702877781",
"264888822024",
"047437476067",
"482031650266",
"275058435264",
"042763642003",
"504739016897",
"052402347800",
"661215629471",
"346545308924",
"790927754992",
"927973073123",
"500126151170",
"989947453568",
"769940564398",
"043814193121",
"215740713849",
"301021291360",
"322580292726",
"033918946671",
"482122191043",
"637850719148",
"368826758961",
"267609231778"
],
"column_name": "uplift_loan_id",
"id_faker_arguments": {
"args": {
"letters": null,
"text": "############"
},
"type": "id"
}
}
```
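In the example above, each `"#"` in the `text` pattern (`"############"`) is replaced with a random digit, which is how Faker's `numerify`-style patterns work. As a minimal, dependency-free sketch (the `numerify` helper below is an illustration of the pattern semantics, not the actual pipeline code):

```python
import random

def numerify(text: str) -> str:
    """Replace each '#' with a random digit, mimicking Faker's numerify pattern."""
    return "".join(random.choice("0123456789") if ch == "#" else ch for ch in text)

# A 12-character pattern yields a 12-digit ID like those in column_content.
loan_id = numerify("############")
```

With `letters` set to `null`, only digit substitution applies, producing purely numeric IDs such as `"594564793936"`.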
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("ninaxu/faker-example", "default")
```
Or simply as follows, since there's only one configuration and it is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("ninaxu/faker-example")
```
</details>