# Anacondia

Anacondia-70m is a Pythia-70m-deduped model fine-tuned with QLoRA on timdettmers/openassistant-guanaco.
## Usage

Anacondia is not intended for any downstream use; it was trained for educational purposes. Please fine-tune it for downstream tasks, or consider more capable models for inference if this does not fit your use case.
## Training procedure

The following bitsandbytes quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
## Framework versions
- PEFT 0.4.0
## Inference
```python
# Import necessary modules
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "UncleanCode/anacondia-70m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# "inputs" avoids shadowing the built-in input()
inputs = tokenizer("This is a sentence ", return_tensors="pt")
output = model.generate(**inputs)
print(tokenizer.decode(output[0]))
```
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="UncleanCode/anacondia-70m")
```