# distilgpt2-validator-quick-v1
This model was trained using the Babelbit MLOps platform and deployed via the integrated deployment pipeline.
## Model Details
- Model Name: distilgpt2-validator-quick-v1
- Challenge Score: 0.0580
- Training Dataset: Babelfish/Puzzle_Codebreaking_Association
- Base Model: distilgpt2
- Parameters: 82M
## Training Configuration

No training hyperparameters were recorded for this run:

```json
{}
```
## Deployment
This model is deployed on:
- 🤗 HuggingFace Hub: sasn59/distilgpt2-validator-quick-v1
- ⛓️ Chutes Network: Active
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained("sasn59/distilgpt2-validator-quick-v1")
model = AutoModelForCausalLM.from_pretrained("sasn59/distilgpt2-validator-quick-v1")

# Generate text (up to 50 tokens, prompt included)
inputs = tokenizer("Hello, I am", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
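Alternatively, the same generation can be done with the higher-level `pipeline` API, which bundles tokenization, generation, and decoding into one call. This is a minimal sketch assuming the repo id above is reachable on the Hub; `max_new_tokens` and `do_sample` are illustrative choices, not values from the training run.

```python
from transformers import pipeline

# Build a text-generation pipeline from the Hub repo (assumed accessible)
generator = pipeline(
    "text-generation",
    model="sasn59/distilgpt2-validator-quick-v1",
)

# Greedy decoding, generating at most 30 new tokens after the prompt
result = generator("Hello, I am", max_new_tokens=30, do_sample=False)
print(result[0]["generated_text"])
```

By default the pipeline returns the prompt followed by the continuation in `generated_text`.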
## Performance Metrics
- Lexical Score: 0.0000
- Semantic Score: 0.0000
- Earliness Score: 0.0000
*Generated by Babelbit MLOps Dashboard*