---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- agent
size_categories:
- 10K<n<100K
---

# LiteGPT Dataset

This repository contains a synthetic conversational dataset designed for training lightweight GPT-style language models such as LiteGPT. The dataset consists of user-assistant dialogues with enriched prompts and responses.

---

## Dataset Description

- **Format**: Each line in `corpus.txt` represents a single conversation in the following format:

  ```
  <BOS> <user>: <user_input> <assistant>: <assistant_response> <EOS>
  ```

- **Special tokens**:
  - `<BOS>`: beginning of sequence
  - `<EOS>`: end of sequence
  - `<user>:`: marks the user's input
  - `<assistant>:`: marks the assistant's response
  - `<PAD>`: padding token for fixed-length sequences
- **Number of conversations**: at least 25,000 generated examples
- **Content**: The conversations cover a variety of topics such as greetings, jokes, advice, AI knowledge, science questions, history, coding, and small talk.
## Dataset Generation

The dataset is generated automatically using `lite_gpt.py`:

```python
from lite_gpt import create_synthetic_corpus

create_synthetic_corpus()
```

This will:

1. Randomly select a user prompt from a predefined list.
2. Randomly select a corresponding assistant reply from a predefined list.
3. Save the generated conversations into `data_v2/corpus.txt`.
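For reference, the generation loop can be sketched roughly as follows. The prompt/reply pairs, the default corpus size, and the function signature here are illustrative placeholders, not the actual lists or API of `lite_gpt.py`:

```python
import os
import random

# Illustrative stand-ins for the predefined prompt/reply lists in lite_gpt.py.
PAIRS = [
    ("Hello!", "Hi there! How can I help you today?"),
    ("Tell me a joke", "Why did the chicken cross the road? To get to the other side!"),
]

def create_synthetic_corpus(n_examples: int = 25_000, path: str = "data_v2/corpus.txt") -> None:
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        for _ in range(n_examples):
            user, assistant = random.choice(PAIRS)  # pick a prompt and its reply
            f.write(f"<BOS> <user>: {user} <assistant>: {assistant} <EOS>\n")
```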
## Directory Structure

```
data_v2/
└── corpus.txt   # Synthetic conversational dataset
```
## Tokenization

The dataset is designed for GPT-2 tokenization.

- Each conversation is tokenized and padded to a maximum length (`MAX_LENGTH`) for model training.
- Special tokens are added to distinguish user and assistant turns.
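To illustrate the padding step, here is a minimal sketch with a toy whitespace tokenizer standing in for the GPT-2 BPE tokenizer; `MAX_LENGTH` and the vocabulary are placeholders, not the values used in training:

```python
MAX_LENGTH = 16  # placeholder; the training script defines the real value

def tokenize_and_pad(line: str, vocab: dict, max_length: int = MAX_LENGTH) -> list:
    # Toy whitespace tokenizer standing in for the GPT-2 BPE tokenizer.
    tokens = line.split()[:max_length]                  # truncate to max_length
    ids = [vocab.setdefault(t, len(vocab)) for t in tokens]
    ids += [vocab["<PAD>"]] * (max_length - len(ids))   # right-pad with <PAD>
    return ids

vocab = {"<PAD>": 0}
ids = tokenize_and_pad("<BOS> <user>: Hi <assistant>: Hello! <EOS>", vocab)
```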
## Usage

- Can be used to train lightweight language models.
- Supports sequence chunking for longer conversations.
- Works with any PyTorch-based GPT-style model.
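The sequence chunking mentioned above can be sketched as a generic sliding split into fixed-size blocks (the chunk size is a placeholder, not the value used in training):

```python
def chunk_ids(ids: list, chunk_size: int = 16) -> list:
    # Split a long token-id sequence into fixed-size training chunks;
    # the final chunk may be shorter and can be padded downstream.
    return [ids[i:i + chunk_size] for i in range(0, len(ids), chunk_size)]

chunks = chunk_ids(list(range(40)), chunk_size=16)  # chunks of length 16, 16, 8
```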
## License

This dataset is generated synthetically and is free to use under the MIT License.