Instructions for using MRAIRR/minillama3_8b with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use MRAIRR/minillama3_8b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="MRAIRR/minillama3_8b")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MRAIRR/minillama3_8b")
model = AutoModelForCausalLM.from_pretrained("MRAIRR/minillama3_8b")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
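To watch tokens print as they are generated rather than waiting for the full completion, transformers ships a TextStreamer helper. A minimal sketch, reusing the `model`, `tokenizer`, and `inputs` objects from the snippet above:

```python
# Stream decoded tokens to stdout as they are generated.
# Reuses `model`, `tokenizer`, and `inputs` from the snippet above.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(**inputs, max_new_tokens=40, streamer=streamer)
```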
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use MRAIRR/minillama3_8b with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "MRAIRR/minillama3_8b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MRAIRR/minillama3_8b",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker
```shell
docker model run hf.co/MRAIRR/minillama3_8b
```
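Because the server exposes an OpenAI-compatible API, you can also query it from Python with the official openai client instead of curl. A minimal sketch, assuming `pip install openai` and the vLLM server from above on port 8000 (for the SGLang server below, change the port to 30000); the `api_key` value is a placeholder, since a local server does not check it:

```python
# Query the locally served model through the OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="MRAIRR/minillama3_8b",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```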
- SGLang
How to use MRAIRR/minillama3_8b with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "MRAIRR/minillama3_8b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MRAIRR/minillama3_8b",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "MRAIRR/minillama3_8b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MRAIRR/minillama3_8b",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
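In addition to the OpenAI-compatible route, SGLang servers expose a native /generate endpoint for raw completions. A minimal sketch with the requests package, assuming the server from above is running on port 30000:

```python
# Call SGLang's native /generate endpoint with a raw prompt.
import requests

resp = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "The capital of France is",
        "sampling_params": {"max_new_tokens": 32, "temperature": 0},
    },
)
print(resp.json()["text"])
```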
- Docker Model Runner
How to use MRAIRR/minillama3_8b with Docker Model Runner:
```shell
docker model run hf.co/MRAIRR/minillama3_8b
```
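Run without arguments, the command above opens an interactive chat session; Docker Model Runner also accepts a prompt argument for a one-shot completion. A sketch, assuming Docker Model Runner is enabled in your Docker installation:

```shell
# One-shot prompt instead of an interactive chat session
docker model run hf.co/MRAIRR/minillama3_8b "What is the capital of France?"
```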
Train Config
```yaml
base_model: allganize/Llama-3-Alpha-Ko-8B-Instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: ?
    type: alpaca
dataset_prepared_path:
val_set_size: 0
output_dir: ./outputs/qlora-out

adapter: qlora
lora_model_dir:

sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - v_proj
lora_target_linear: true
lora_fan_in_fan_out:
lora_modules_to_save:
  - embed_tokens
  - lm_head

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 100
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.01
neftune_noise_alpha: 5
fsdp:
fsdp_config:
special_tokens:
  pad_token: "<|end_of_text|>"
```
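The config above follows the Axolotl QLoRA format (note `adapter: qlora` and the 4-bit load). Assuming Axolotl was the trainer used, a run would typically be launched along these lines; the `qlora.yml` filename is hypothetical:

```shell
# Launch QLoRA fine-tuning with Axolotl (config filename is hypothetical)
pip install axolotl
accelerate launch -m axolotl.cli.train qlora.yml
```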