Instructions for using LLaMAX/LLaMAX3-8B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use LLaMAX/LLaMAX3-8B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="LLaMAX/LLaMAX3-8B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LLaMAX/LLaMAX3-8B")
model = AutoModelForCausalLM.from_pretrained("LLaMAX/LLaMAX3-8B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use LLaMAX/LLaMAX3-8B with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "LLaMAX/LLaMAX3-8B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LLaMAX/LLaMAX3-8B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
- SGLang
How to use LLaMAX/LLaMAX3-8B with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "LLaMAX/LLaMAX3-8B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LLaMAX/LLaMAX3-8B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "LLaMAX/LLaMAX3-8B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LLaMAX/LLaMAX3-8B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use LLaMAX/LLaMAX3-8B with Docker Model Runner:
```shell
docker model run hf.co/LLaMAX/LLaMAX3-8B
```
LLaMAX/LLaMAX3-8B training notebook
Could I kindly get the notebook that was used to fine-tune the model for the translation task? I want to adapt it to train the model to translate a low-resource language.
Thank you for your interest in our work. LLaMAX is trained on top of the LLaMA model, so any training framework that supports LLaMA can be used directly to fine-tune LLaMAX. If you want to perform supervised fine-tuning on your own data with LLaMAX, the following script will be useful: https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py
Note that you may need to adjust the special tokens defined in lines 27-30 of the Alpaca script to be compatible with LLaMA3-based models.
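For reference, a sketch of what that adjustment involves is below. The first four constants are the defaults in the Alpaca script (written for LLaMA-1/2 tokenizers); the LLaMA-3 values are an assumption and should be verified against the `tokenizer_config.json` shipped with the model you are fine-tuning:

```python
# Special-token defaults as they appear in the Alpaca training script
# (LLaMA-1/2 style tokenizer):
DEFAULT_PAD_TOKEN = "[PAD]"
DEFAULT_EOS_TOKEN = "</s>"
DEFAULT_BOS_TOKEN = "<s>"
DEFAULT_UNK_TOKEN = "<unk>"

# Possible LLaMA-3-style replacements (assumption -- verify these strings
# against the tokenizer config of the actual checkpoint):
LLAMA3_BOS_TOKEN = "<|begin_of_text|>"
LLAMA3_EOS_TOKEN = "<|end_of_text|>"

# Llama 3 defines no dedicated pad token; a common workaround is to pad
# with the EOS token so no new embedding rows are needed:
LLAMA3_PAD_TOKEN = LLAMA3_EOS_TOKEN
```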
In addition, you can refer to the following template to organize your translation data:
"""
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
Instruction:
Translate the following sentences from English to Chinese Simpl
Input:
"We now have 4-month-old mice that are non-diabetic that used to be diabetic," he added.
Response:他补充道:“我们现在有 4 个月大没有糖尿病的老鼠,但它们曾经得过该病。”
"""
Thank you for this. I'm relatively new to this, so I apologize for asking a "dumb" question, but I would like to know how I can run inference with the model. The current inference code I have is not giving me the required output.
You can try our instruction-tuned model (https://huggingface.co/LLaMAX/LLaMAX3-8B-Alpaca) and follow the example given in its README.
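A hedged sketch of what such inference can look like with Transformers, using the Alpaca-style prompt quoted earlier (the generation parameters are illustrative; the authoritative example is the model's README):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Alpaca-style prompt, following the translation template shown above.
prompt = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n"
    "### Instruction:\nTranslate the following sentences from English to Chinese.\n"
    "### Input:\nHello, how are you?\n"
    "### Response:"
)

model_id = "LLaMAX/LLaMAX3-8B-Alpaca"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens, i.e. the translation itself.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```

If the output still looks wrong, check that the prompt matches the template exactly; instruction-tuned models are sensitive to the prompt format they were trained with.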