Tasks: Text Generation
Formats: parquet
Languages: English
Size: 100M - 1B
Tags: salesforce/wikitext
License:
Update README.md

README.md CHANGED

language:
- en
tags:
- salesforce/wikitext
---

This is the tokenized data of the salesforce/wikitext dataset. All samples in the train split are concatenated into a single token stream for pretraining the LLM.

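For illustration, here is a minimal sketch of that tokenize-and-concatenate step. The tokenizer (`gpt2`) and the wikitext config (`wikitext-103-raw-v1`) are assumptions, not necessarily what this dataset used; the actual choices are in the preprocessing notebook linked below.

```python
# Sketch only: the tokenizer ("gpt2") and config ("wikitext-103-raw-v1") are
# assumptions; see the preprocessing notebook for the real settings.
import numpy as np
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
train = load_dataset("Salesforce/wikitext", "wikitext-103-raw-v1", split="train")

ids = []
for text in train["text"]:
    ids.extend(tokenizer.encode(text))   # tokenize each sample
tokens = np.array(ids, dtype=np.uint16)  # one flat stream of token ids
```
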
To see how the tokenized dataset was created, see: https://github.com/SSahas/Implementing-LLM-From-Scratch/blob/main/assets/preprocessing.ipynb

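To consume the resulting parquet data, something like the following should work with the `datasets` library; the repo id below is a placeholder, as this card does not state it.

```python
# "<user>/<this-dataset>" is a PLACEHOLDER -- substitute the actual repo id.
from datasets import load_dataset

tokenized = load_dataset("<user>/<this-dataset>", split="train")
print(tokenized)
```
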
PROJECT

Implementing a decoder-only model (GPT style) from scratch with PyTorch.

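As a rough sketch of what one GPT-style decoder block can look like in PyTorch (the repository's actual layer names, sizes, and norm placement may differ):

```python
# Illustrative GPT-style decoder block; not the repository's exact code.
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    def __init__(self, n_embd: int, n_head: int, block_size: int):
        super().__init__()
        self.ln1 = nn.LayerNorm(n_embd)
        self.ln2 = nn.LayerNorm(n_embd)
        self.attn = nn.MultiheadAttention(n_embd, n_head, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(n_embd, 4 * n_embd),
            nn.GELU(),
            nn.Linear(4 * n_embd, n_embd),
        )
        # Causal mask: position i may only attend to positions <= i.
        mask = torch.triu(torch.ones(block_size, block_size, dtype=torch.bool), 1)
        self.register_buffer("mask", mask)

    def forward(self, x):
        T = x.size(1)
        h = self.ln1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=self.mask[:T, :T])
        x = x + attn_out               # residual around attention
        x = x + self.mlp(self.ln2(x))  # residual around pre-norm MLP
        return x
```
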
Pretraining an LLM for text generation, with Salesforce/wikitext as the training corpus. The model was trained for 30,000 iterations with a batch size of 8, taking ~2.5 hours on a Tesla P100 (Kaggle's free GPU tier). The training loss is around 3.5. Training used the Adam optimizer with a learning rate of 5e-4. After training, the model produces somewhat coherent English; training longer with a larger n_embd and block size should improve generation.

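For concreteness, here is a sketch of a pretraining loop matching the settings above (30,000 iterations, batch size 8, Adam at lr 5e-4). `model`, `block_size`, and `data` (the concatenated token stream as a 1-D LongTensor) are assumed from the sketches above; the repository's actual loop may differ.

```python
# Sketch of the pretraining loop under the reported settings; `model`,
# `block_size`, and `data` (1-D LongTensor of token ids) are assumed to exist.
import torch
import torch.nn.functional as F

def get_batch(data, block_size, batch_size, device):
    # Sample random windows from the concatenated token stream.
    ix = torch.randint(len(data) - block_size - 1, (batch_size,))
    x = torch.stack([data[i : i + block_size] for i in ix])
    y = torch.stack([data[i + 1 : i + 1 + block_size] for i in ix])  # next tokens
    return x.to(device), y.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
for step in range(30_000):
    xb, yb = get_batch(data, block_size, batch_size=8, device="cuda")
    logits = model(xb)  # (batch, block_size, vocab_size)
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)), yb.view(-1))
    optimizer.zero_grad(set_to_none=True)
    loss.backward()
    optimizer.step()
```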