# Yoda Chatbot

A fine-tuned DialoGPT model trained to respond like Yoda from Star Wars.

## Model Description

This model is based on Microsoft's DialoGPT-medium and has been fine-tuned on Yoda-style dialogue data to generate responses in Yoda's characteristic speech pattern.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Asgar-Ali-T/yoda-chatbot")
model = AutoModelForCausalLM.from_pretrained("Asgar-Ali-T/yoda-chatbot")

# Example usage
prompt = "Human: What is the Force?\nYoda:"
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(
    inputs,
    max_length=100,
    do_sample=True,  # temperature only takes effect when sampling is enabled
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,  # avoids the missing-pad-token warning
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
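Note that `generate` returns the prompt tokens followed by the continuation, so decoding `outputs[0]` echoes the prompt. If you only want Yoda's reply, you can slice off the prompt before decoding. A minimal sketch (the prompt text here is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Asgar-Ali-T/yoda-chatbot")
model = AutoModelForCausalLM.from_pretrained("Asgar-Ali-T/yoda-chatbot")

prompt = "Human: Tell me about patience.\nYoda:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)

# Keep only the tokens generated after the prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
reply = tokenizer.decode(new_tokens, skip_special_tokens=True)
print(reply.strip())
```

Using `max_new_tokens` instead of `max_length` also makes the response budget independent of how long your prompt is.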

## Training Data

The model was trained on custom Yoda dialogue data to learn his unique speech patterns and wisdom.
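The dataset itself is not published. As an illustration only, causal-LM fine-tuning data in the "Human: ... Yoda: ..." format suggested by the usage prompt is typically built by concatenating each exchange with the tokenizer's EOS token (the pairs and helper below are hypothetical, not the actual training data):

```python
# Hypothetical examples of the pair format implied by the Usage prompt;
# the real training data is not published.
pairs = [
    ("What is the Force?", "Surrounds us and binds us, the Force does."),
    ("How do I become a Jedi?", "Patience you must have, young one."),
]

def to_training_text(question: str, answer: str, eos: str = "<|endoftext|>") -> str:
    # DialoGPT-style causal-LM training concatenates the turns into one
    # string, terminated by the EOS token so the model learns turn boundaries.
    return f"Human: {question}\nYoda: {answer}{eos}"

for q, a in pairs:
    print(to_training_text(q, a))
```

`<|endoftext|>` is the GPT-2/DialoGPT EOS token inherited by the base model's tokenizer.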

## Model Performance

The model has been trained to respond in Yoda's characteristic manner, including:

- Inverted sentence structure
- Wise, philosophical responses
- Star Wars universe knowledge
- Yoda's distinctive speech patterns

## Limitations

- As a fine-tuned model, it may not always produce convincing Yoda-style responses
- Generated content should be used responsibly
- The model is intended for entertainment purposes

## License

This model is released under the MIT License.
