Instructions to use platzi/chivoom with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

- Libraries
- Transformers

How to use platzi/chivoom with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="platzi/chivoom")
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("platzi/chivoom", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM

How to use platzi/chivoom with vLLM:

Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "platzi/chivoom"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "platzi/chivoom",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/platzi/chivoom
```
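The OpenAI-compatible endpoint above can also be called from Python instead of curl. A minimal sketch that builds the same JSON payload as the curl example (the `build_completion_request` helper is illustrative, not part of vLLM; posting it requires the server from the previous step running on localhost:8000):

```python
import json

# Build the same JSON payload as the curl example above.
def build_completion_request(prompt, model="platzi/chivoom",
                             max_tokens=512, temperature=0.5):
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_completion_request("Once upon a time,")
print(json.dumps(payload, indent=2))

# To actually call the server (requires `pip install requests` and a
# running vLLM instance):
# import requests
# resp = requests.post("http://localhost:8000/v1/completions",
#                      headers={"Content-Type": "application/json"},
#                      data=json.dumps(payload))
# print(resp.json()["choices"][0]["text"])
```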
- SGLang

How to use platzi/chivoom with SGLang:

Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "platzi/chivoom" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "platzi/chivoom",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "platzi/chivoom" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "platzi/chivoom",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner

How to use platzi/chivoom with Docker Model Runner:

```shell
docker model run hf.co/platzi/chivoom
```
Chivoom: Spanish Alpaca (Chiva) 🐐 + BLOOM 💮
IMPORTANT: This is just a proof of concept (PoC) and still a work in progress (WIP)!
Adapter Description
This adapter was created with the PEFT library. It fine-tunes the base model BigScience BLOOM 7B1 on Stanford's Alpaca dataset (translated to Spanish) using the LoRA method.
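LoRA freezes the pretrained weights and learns only a low-rank update: instead of training a full matrix W, it trains two small matrices B and A so the effective weight becomes W + B·A. A minimal NumPy sketch of the idea (illustrative only, not the PEFT implementation; the layer size and rank here are arbitrary):

```python
import numpy as np

d_out, d_in, r = 1024, 1024, 8  # hypothetical layer size and LoRA rank

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, r x d_in
B = np.zeros((d_out, r))                    # trainable, zero-initialized

# Effective weight after adapting: base weight plus the low-rank update.
W_adapted = W + B @ A

# With B initialized to zero, the adapter starts as a no-op:
assert np.allclose(W_adapted, W)

# The adapter trains far fewer parameters than the full matrix.
full_params = W.size
lora_params = A.size + B.size
print(f"full: {full_params}, lora: {lora_params}")
```

Only A and B are saved in the adapter, which is why the `platzi/chivoom` checkpoint is tiny compared to the 7B1 base model it modifies.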
Model Description
BigScience Large Open-science Open-access Multilingual Language Model
Training data
We translated the Alpaca dataset to Spanish.
Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003 engine. This instruction data can be used to conduct instruction tuning for language models and make them follow instructions better.
The authors built on the data generation pipeline from the Self-Instruct framework and made the following modifications:

- The `text-davinci-003` engine was used to generate the instruction data instead of `davinci`.
- A new prompt was written that explicitly gave the requirement of instruction generation to `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of the 2 to 3 instances in Self-Instruct.
This produced an instruction-following dataset with 52K examples at a much lower cost (less than $500). In a preliminary study, the authors also found the 52K generated data to be much more diverse than the data released by Self-Instruct.
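Each record in the resulting dataset pairs an instruction with an optional input and a target output. A minimal sketch of the record shape and how it can be flattened into a training string (the field names follow the original Alpaca release; the example record and the `to_training_text` helper are made up for illustration):

```python
# A made-up example record in the Alpaca schema (translated to Spanish
# in this project's case).
record = {
    "instruction": "Resume el siguiente texto en una frase.",
    "input": "Las chivas son cabras jóvenes muy ágiles.",
    "output": "Las chivas son cabras jóvenes y ágiles.",
}

def to_training_text(rec):
    """Join instruction, optional input, and output into one string."""
    parts = [f"### Instrucción:\n{rec['instruction']}"]
    if rec.get("input"):
        parts.append(f"### Entrada:\n{rec['input']}")
    parts.append(f"### Respuesta:\n{rec['output']}")
    return "\n\n".join(parts)

text = to_training_text(record)
print(text)
```

The `### Instrucción:` / `### Entrada:` / `### Respuesta:` section markers mirror the inference prompt template shown in the "How to use" section below.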
Training procedure
TBA
How to use
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

peft_model_id = "platzi/chivoom"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    load_in_8bit=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-7b1")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, peft_model_id)
model.eval()


# Based on the inference code by `tloen/alpaca-lora`
def generate_prompt(instruction, input=None):
    if input:
        return f"""A continuación se muestra una instrucción que describe una tarea, emparejada con una entrada que proporciona más contexto. Escribe una respuesta que complete adecuadamente la petición.

### Instrucción:
{instruction}

### Entrada:
{input}

### Respuesta:"""
    else:
        return f"""A continuación se muestra una instrucción que describe una tarea. Escribe una respuesta que complete adecuadamente la petición.

### Instrucción:
{instruction}

### Respuesta:"""


def generate(
    instruction,
    input=None,
    temperature=0.1,
    top_p=0.75,
    top_k=40,
    num_beams=4,
    **kwargs,
):
    prompt = generate_prompt(instruction, input)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].cuda()
    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        **kwargs,
    )
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            generation_config=generation_config,
            return_dict_in_generate=True,
            output_scores=True,
            max_new_tokens=256,
        )
    s = generation_output.sequences[0]
    output = tokenizer.decode(s)
    # The prompt ends with "### Respuesta:", so split on that marker
    # to keep only the generated answer.
    return output.split("### Respuesta:")[1]


instruction = "¿Qué es un chivo?"
print("Instrucción:", instruction)
print("Respuesta:", generate(instruction))
```