Load the model directly:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("moyix/dolly-replication")
model = AutoModelForCausalLM.from_pretrained("moyix/dolly-replication")
```
# Dolly Replication
This is a replication of the "Dolly" model from Databricks: GPT-J-6B fine-tuned on the Alpaca instruction-following dataset. It was trained in 6 hours on a high-end workstation with two A6000 GPUs.
## Licensing
Because the Alpaca dataset was created from text generated by OpenAI's text-davinci-003 model, it is unclear whether this model can be used commercially: OpenAI's Terms of Service do not allow users to "use output from the Services to develop models that compete with OpenAI".
## Dataset

- tatsu-lab/alpaca
Alternatively, use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="moyix/dolly-replication")
```
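Since the model was fine-tuned on Alpaca instruction data, prompts in the Alpaca format are the natural way to query it. Below is a minimal sketch of building such a prompt; the `build_prompt` helper and the exact template text are assumptions based on the tatsu-lab/alpaca dataset's published format, not something this card specifies.

```python
# Alpaca-style prompt template (assumed; based on the tatsu-lab/alpaca format
# for examples without an input field)
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Fill the Alpaca template with a single instruction (hypothetical helper)."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("List three uses for a paperclip.")

# The formatted prompt can then be passed to the pipeline, e.g.:
# pipe(prompt, max_new_tokens=128, do_sample=True)
```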