Instructions for using Intel/gpt-j-6b-sparse with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use Intel/gpt-j-6b-sparse with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Intel/gpt-j-6b-sparse")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Intel/gpt-j-6b-sparse")
model = AutoModelForCausalLM.from_pretrained("Intel/gpt-j-6b-sparse")
```
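Once loaded, the pipeline can be called directly to generate text. A minimal usage sketch (the prompt and sampling parameters below are illustrative, not part of the model card):

```python
# Generate a short continuation; sampling parameters are illustrative
output = pipe("Once upon a time,", max_new_tokens=50, do_sample=True, temperature=0.5)
print(output[0]["generated_text"])
```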
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Intel/gpt-j-6b-sparse with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Intel/gpt-j-6b-sparse"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Intel/gpt-j-6b-sparse",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
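Because the vLLM server exposes an OpenAI-compatible API, it can also be called from Python. A minimal sketch, assuming the `openai` client package is installed and the server is running locally on port 8000 as above (the placeholder API key is required by the client but ignored by the server):

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Intel/gpt-j-6b-sparse",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```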
Use Docker

```shell
# Deploy with Docker using vLLM's official OpenAI-compatible image:
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model "Intel/gpt-j-6b-sparse"
```
- SGLang
How to use Intel/gpt-j-6b-sparse with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Intel/gpt-j-6b-sparse" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Intel/gpt-j-6b-sparse",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
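The SGLang server speaks the same OpenAI-compatible completions API, so the curl call above translates directly to Python. A minimal sketch using the `requests` package, assuming the server is running locally on port 30000:

```python
import requests

# POST a completion request to the local SGLang server (OpenAI-compatible API)
response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "Intel/gpt-j-6b-sparse",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])
```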
Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Intel/gpt-j-6b-sparse" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Intel/gpt-j-6b-sparse",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use Intel/gpt-j-6b-sparse with Docker Model Runner:
```shell
docker model run hf.co/Intel/gpt-j-6b-sparse
```
# Sparse GPT-J 6B

## Model Description
The sparse version of GPT-J 6B is a pruned variant derived from the original GPT-J 6B model. The vast majority of its linear layers (all except the 'lm_head') maintain 40% unstructured sparsity.
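The stated sparsity can be checked directly by counting zero-valued weights in each linear layer. A minimal sketch, assuming pruned weights are stored as exact zeros in the dense checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Intel/gpt-j-6b-sparse")

# Report the fraction of exactly-zero weights in each linear layer
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear):
        weight = module.weight.data
        sparsity = (weight == 0).float().mean().item()
        print(f"{name}: {sparsity:.2%} zeros")
```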
| Hyperparameter | Value |
|---|---|
| n_parameters | 6053381344 |
| n_layers | 28* |
| d_model | 4096 |
| d_ff | 16384 |
| n_heads | 16 |
| d_head | 256 |
| n_ctx | 2048 |
| n_vocab | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | Rotary Position Embedding (RoPE) |
| RoPE Dimensions | 64 |
* Each layer consists of one feedforward block and one self-attention block.
† Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3.
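These hyperparameters can be read back from the published configuration. A minimal sketch, assuming the checkpoint ships a standard GPT-J config (the attribute names follow `transformers`' `GPTJConfig`):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Intel/gpt-j-6b-sparse")

# Standard GPT-J config attributes
print(config.n_layer)      # number of transformer layers, expected 28
print(config.n_embd)       # model dimension, expected 4096
print(config.n_head)       # attention heads, expected 16
print(config.rotary_dim)   # RoPE dimensions, expected 64
print(config.n_positions)  # context length, expected 2048
print(config.vocab_size)   # embedding size, expected 50400
```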
## Evaluation results
The accuracy of the sparse gpt-j-6b model was evaluated on the lambada_openai dataset with lm_eval, reporting the accuracy fluctuation under two precisions: FP32 and BF16. In both rows the fluctuation is measured against the FP32 dense accuracy (0.6922 / 0.6831 ≈ +1.33%; 0.6874 / 0.6831 ≈ +0.63%).
| Sparsity | Dataset | Precision | Dense Acc | Sparse Acc | Acc fluctuation |
|---|---|---|---|---|---|
| 40% | Lambada_openai | FP32 | 0.6831 | 0.6922 | +1.33% |
| 40% | Lambada_openai | BF16 | 0.6771 | 0.6874 | +0.63% |
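A sketch for reproducing this evaluation with EleutherAI's lm-evaluation-harness (`pip install lm-eval`); the `simple_evaluate` entry point and arguments below follow harness v0.4 and may differ in other versions:

```python
from lm_eval import simple_evaluate

# Evaluate the sparse checkpoint on lambada_openai with the HF backend
results = simple_evaluate(
    model="hf",
    model_args="pretrained=Intel/gpt-j-6b-sparse,dtype=bfloat16",
    tasks=["lambada_openai"],
)
print(results["results"]["lambada_openai"])
```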