During pretraining, BharatGPT mini was optimized for the causal language modeling objective.
Through this training process, BharatGPT mini develops a deep internal understanding of language patterns, grammar, and semantics. While it can be fine-tuned for various downstream tasks such as classification, summarization, or question answering, it performs best in text generation tasks, which align with its original training objective.
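As a concrete sketch of the causal language modeling objective mentioned above: the model predicts each token from the tokens that precede it, so the training loss is the cross-entropy between the logits at each position and the following token. The shapes and vocabulary size below are illustrative, not BharatGPT mini's actual configuration:

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: (batch=1, seq_len=5, vocab_size=100).
logits = torch.randn(1, 5, 100)           # logits from any GPT-style model
labels = torch.randint(0, 100, (1, 5))    # the input token ids themselves

# Causal LM shift: position i predicts token i+1.
shift_logits = logits[:, :-1, :]          # predictions at positions 0..n-2
shift_labels = labels[:, 1:]              # targets are tokens 1..n-1

loss = F.cross_entropy(
    shift_logits.reshape(-1, shift_logits.size(-1)),
    shift_labels.reshape(-1),
)
print(loss.item())
```

For random logits this loss sits near `log(vocab_size)`; pretraining drives it down by making the next-token predictions sharper.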
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
```