Hugging Face
Tino3141/FullData-8000
Tags: Text Generation · Transformers · Safetensors · llama · text-generation-inference · arxiv:1910.09700
Branch: main · Repository size: 5.52 GB · 1 contributor · History: 3 commits
Latest commit by Tino3141 (5402f0b, verified, about 1 month ago): "Model: Using 4 speaker Tino3141/4SpeakersDropCocktail180000 as pretraining, then trained on full dataset for 80000"
| File | Size | Last commit message | Updated |
|------|------|---------------------|---------|
| .gitattributes | 1.63 kB | Tokenizer: Using 4 speaker Tino3141/4SpeakersDropCocktail180000 as pretraining, then trained on full dataset for 80000 | about 1 month ago |
| README.md | 5.17 kB | Tokenizer: Using 4 speaker Tino3141/4SpeakersDropCocktail180000 as pretraining, then trained on full dataset for 80000 | about 1 month ago |
| config.json | 943 Bytes | Model: Using 4 speaker Tino3141/4SpeakersDropCocktail180000 as pretraining, then trained on full dataset for 80000 | about 1 month ago |
| generation_config.json | 184 Bytes | Model: Using 4 speaker Tino3141/4SpeakersDropCocktail180000 as pretraining, then trained on full dataset for 80000 | about 1 month ago |
| model-00001-of-00002.safetensors | 4.99 GB | Model: Using 4 speaker Tino3141/4SpeakersDropCocktail180000 as pretraining, then trained on full dataset for 80000 | about 1 month ago |
| model-00002-of-00002.safetensors | 487 MB | Model: Using 4 speaker Tino3141/4SpeakersDropCocktail180000 as pretraining, then trained on full dataset for 80000 | about 1 month ago |
| model.safetensors.index.json | 12 kB | Model: Using 4 speaker Tino3141/4SpeakersDropCocktail180000 as pretraining, then trained on full dataset for 80000 | about 1 month ago |
| special_tokens_map.json | 439 Bytes | Tokenizer: Using 4 speaker Tino3141/4SpeakersDropCocktail180000 as pretraining, then trained on full dataset for 80000 | about 1 month ago |
| tokenizer.json | 29.5 MB | Tokenizer: Using 4 speaker Tino3141/4SpeakersDropCocktail180000 as pretraining, then trained on full dataset for 80000 | about 1 month ago |
| tokenizer_config.json | 11.7 MB | Tokenizer: Using 4 speaker Tino3141/4SpeakersDropCocktail180000 as pretraining, then trained on full dataset for 80000 | about 1 month ago |
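The repository's tags (Transformers, Safetensors, llama) and its file layout (sharded `model-*.safetensors` plus tokenizer files) suggest the checkpoint loads with the standard `transformers` auto classes. A minimal sketch, assuming the repo is public and `transformers` is installed; the `load` helper and its name are illustrative, not part of the repo:

```python
# Hypothetical loading sketch for this checkpoint; calling load()
# downloads the ~5.5 GB of weights listed in the file table above.
REPO_ID = "Tino3141/FullData-8000"


def load(repo_id: str = REPO_ID):
    """Return a (tokenizer, model) pair for the given Hub repo."""
    # Import lazily so this sketch can be read without the heavy
    # transformers dependency having been imported at module load.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)
    return tokenizer, model
```

Once loaded, the pair can drive the usual generation loop, e.g. `model.generate(**tokenizer("Hello", return_tensors="pt"), max_new_tokens=20)`.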