Update README.md
README.md
CHANGED
@@ -33,3 +33,24 @@ configs:
- split: validation
  path: data/validation-*
---

Dataset used for training text-to-SQL models.
I've pre-tokenized this for faster loading.
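A minimal loading sketch (the repo id below is a placeholder, not this dataset's actual path on the Hub):

```python
from datasets import load_dataset

# Placeholder repo id; substitute this dataset's actual Hub path.
ds = load_dataset("user/text-to-sql-pretokenized")

# The splits already contain input_ids and labels, so they can be
# handed to a model after switching the format to torch tensors.
ds.set_format("torch", columns=["input_ids", "labels"])
```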

Here is the tokenizer code:

```python
# Assumes a tokenizer has already been loaded, for example:
# from transformers import AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")  # checkpoint is an assumption

def tokenize_function(example):
    start_prompt = "Tables:\n"
    middle_prompt = "\n\nQuestion:\n"
    end_prompt = "\n\nAnswer:\n"

    # Build one prompt per row from its table schema and question.
    data_zip = zip(example['context'], example['question'])
    prompt = [start_prompt + context + middle_prompt + question + end_prompt
              for context, question in data_zip]

    # Tokenize the prompts as model inputs and the SQL answers as labels.
    example['input_ids'] = tokenizer(prompt, padding="max_length", truncation=True,
                                     return_tensors="pt").input_ids
    example['labels'] = tokenizer(example['answer'], padding="max_length", truncation=True,
                                  return_tensors="pt").input_ids

    return example
```
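As a usage sketch (not from this repo: the variable names and the `remove_columns` call are assumptions), the function is meant to be applied batched over the raw splits with `datasets`:

```python
# Hypothetical usage; `raw_ds` would be the untokenized source dataset.
tokenized_ds = raw_ds.map(tokenize_function, batched=True)

# Drop the raw text columns so only input_ids and labels remain.
tokenized_ds = tokenized_ds.remove_columns(['context', 'question', 'answer'])
```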