pathikg/DogLLAMA-small
How to use pathikg/DogLLaMA_7b with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

# The model is a causal LM, so the correct pipeline task is "text-generation"
# (the "question-answering" task expects an extractive QA head and would fail to load it)
pipe = pipeline("text-generation", model="pathikg/DogLLaMA_7b")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("pathikg/DogLLaMA_7b")
model = AutoModelForCausalLM.from_pretrained("pathikg/DogLLaMA_7b")

DogLLaMA is a language model designed to translate user queries into the language of dogs. Built on the LLaMA-7b architecture and fine-tuned on the DogLLaMA-small dataset generated with GPT-3.5, the model responds in a playful, enthusiastic manner that mimics the communication style of our furry friends. Whether you're curious about a dog's perspective on jokes, treats, or the weather, DogLLaMA is here to bark out the answers!
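Once the tokenizer and model are loaded, a query can be turned into a dog-speak reply with `generate`. The helper below is a minimal sketch: the function name `bark_back` and the sampling settings (`max_new_tokens`, `temperature`) are illustrative choices, not values recommended by the model card.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

def bark_back(question: str, model_name: str = "pathikg/DogLLaMA_7b") -> str:
    """Ask DogLLaMA a question and return its dog-speak answer.

    Note: `bark_back` and its sampling defaults are illustrative, not part
    of the official model card.
    """
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    # Tokenize the question and sample a short continuation
    inputs = tokenizer(question, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=50,   # illustrative length cap for the reply
        do_sample=True,      # sample for playful, varied replies
        temperature=0.7,     # illustrative sampling temperature
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example usage (downloads the model weights on first call):
# print(bark_back("What do you think about treats?"))
```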