Instructions to use ahxt/llama2_xs_460M_experimental with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ahxt/llama2_xs_460M_experimental with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ahxt/llama2_xs_460M_experimental")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ahxt/llama2_xs_460M_experimental")
model = AutoModelForCausalLM.from_pretrained("ahxt/llama2_xs_460M_experimental")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ahxt/llama2_xs_460M_experimental with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ahxt/llama2_xs_460M_experimental"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ahxt/llama2_xs_460M_experimental",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
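The curl call above can also be issued from Python. The sketch below builds the same OpenAI-compatible `/v1/completions` request using only the standard library; the base URL and sampling parameters mirror the curl snippet, and actually sending the request (with `urllib.request.urlopen`) assumes a vLLM server is running on localhost:8000.

```python
import json
from urllib import request

def build_completion_request(model, prompt, max_tokens=512, temperature=0.5,
                             base_url="http://localhost:8000"):
    """Build (but do not send) an OpenAI-compatible /v1/completions request."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request("ahxt/llama2_xs_460M_experimental", "Once upon a time,")
print(req.full_url)  # http://localhost:8000/v1/completions
# With the server running: body = request.urlopen(req).read()
```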
- SGLang
How to use ahxt/llama2_xs_460M_experimental with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "ahxt/llama2_xs_460M_experimental" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ahxt/llama2_xs_460M_experimental",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "ahxt/llama2_xs_460M_experimental" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ahxt/llama2_xs_460M_experimental",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use ahxt/llama2_xs_460M_experimental with Docker Model Runner:
docker model run hf.co/ahxt/llama2_xs_460M_experimental
LLaMA Lite: Reduced-Scale, Experimental Versions of LLaMA and Llama 2
In this series of repos, we present an open-source reproduction of Meta AI's LLaMA and Llama 2 large language models at significantly reduced scale: the experimental llama1_s has 1.8B parameters, and the experimental llama2_xs has 460M parameters ('s' stands for small, while 'xs' denotes extra small).
Dataset and Tokenization
We train our models on part of the RedPajama dataset and use the GPT2Tokenizer to tokenize the text.
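As background on what the GPT2Tokenizer does: GPT-2 uses byte-level BPE, which first maps every byte to a printable Unicode character so that arbitrary text can be tokenized without unknown tokens. The sketch below reimplements that byte-to-unicode table (adapted from the published GPT-2 reference code); it is illustrative only, not this repo's training code.

```python
def bytes_to_unicode():
    """GPT-2's byte -> printable-unicode table used by byte-level BPE.

    Printable bytes map to themselves; the remaining bytes are shifted
    into code points >= 256 so every byte has a visible representation.
    """
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, map(chr, cs)))

table = bytes_to_unicode()
# A space byte becomes 'Ġ', which is why GPT-2 tokens that start a new
# word are displayed with a leading 'Ġ'.
print(table[ord(" ")])  # Ġ
print("".join(table[b] for b in "new word".encode("utf-8")))  # newĠword
```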
Using with HuggingFace Transformers
The experimental checkpoints can be loaded directly with the Transformers library. The following code snippet shows how to load our experimental model and generate text with it.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# model_path = 'ahxt/llama2_xs_460M_experimental'
model_path = 'ahxt/llama1_s_1.8B_experimental'

model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.eval()

prompt = 'Q: What is the largest bird?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
tokens = model.generate(input_ids, max_length=20)
print(tokenizer.decode(tokens[0].tolist(), skip_special_tokens=True))
# Q: What is the largest bird?\nA: The largest bird is the bald eagle.
```
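Note that `max_length=20` counts the prompt tokens too, so only enough new tokens are generated to reach 20 in total. The sketch below illustrates the greedy decoding loop that a default `generate` call performs, with a hypothetical lookup table standing in for the model's next-token prediction; it is a toy, not the real checkpoint.

```python
# Hypothetical next-token table standing in for the language model's
# argmax prediction; the real model scores every vocabulary token.
NEXT_TOKEN = {
    "Q:": "What", "What": "is", "is": "the",
    "the": "largest", "largest": "bird?", "bird?": "<eos>",
}

def greedy_generate(prompt_tokens, max_length=20, eos="<eos>"):
    """Greedy decoding: repeatedly append the single most likely token
    until an end-of-sequence token appears or max_length (prompt tokens
    included) is reached -- the same stopping rule as `generate` above."""
    tokens = list(prompt_tokens)
    while len(tokens) < max_length:
        nxt = NEXT_TOKEN.get(tokens[-1], eos)
        if nxt == eos:
            break
        tokens.append(nxt)
    return tokens

print(greedy_generate(["Q:"]))
# ['Q:', 'What', 'is', 'the', 'largest', 'bird?']
```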
Evaluation
We evaluate our models on the MMLU task:
| Models | #parameters | zero-shot | 5-shot |
|---|---|---|---|
| llama | 7B | 28.46 | 35.05 |
| openllama | 3B | 24.90 | 26.71 |
| TinyLlama-1.1B-step-50K-105b | 1.1B | 19.00 | 26.53 |
| llama2_xs_460M | 0.46B | 21.13 | 26.39 |
Contact
This experimental version was developed by Xiaotian Han at Texas A&M University. These experimental versions are intended for research use only.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 26.65 |
| ARC (25-shot) | 24.91 |
| HellaSwag (10-shot) | 38.47 |
| MMLU (5-shot) | 26.17 |
| TruthfulQA (0-shot) | 41.59 |
| Winogrande (5-shot) | 49.88 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 5.51 |