eswarankrishnamurthy/murli-assistant-distilgpt2-maximum
Library: PEFT (LoRA)  ·  Format: Safetensors  ·  Language: English
Tags: spiritual-ai, brahma-kumaris, murli, distilgpt2, maximum-accuracy, experimental, research-only, lora
License: MIT
Branch: main
murli-assistant-distilgpt2-maximum  ·  114 MB  ·  1 contributor  ·  History: 2 commits
Latest commit by eswarankrishnamurthy (ea1d0fd, verified, 3 months ago): Upload DistilGPT-2 MAXIMUM - Best possible training (LoRA r=32, 15 epochs, 500 murlis)
All entries below were last modified 3 months ago. Unless noted, each file's last commit is "Upload DistilGPT-2 MAXIMUM - Best possible training (LoRA r=32, 15 epochs, 500 murlis)".

checkpoint-286
checkpoint-308
checkpoint-330
.gitattributes               1.52 kB    (initial commit)
README.md                    9.76 kB
adapter_config.json          856 Bytes
adapter_model.safetensors    9.44 MB
merges.txt                   456 kB
special_tokens_map.json      131 Bytes
tokenizer.json               3.56 MB
tokenizer_config.json        507 Bytes
training_info.json           1.17 kB
vocab.json                   798 kB