Text Generation · Transformers · ONNX · TensorRT · English · text-generation-inference · causal-lm · int8 · ENOT-AutoDL
Instructions to use ENOT-AutoDL/gpt2-tensorrt with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ENOT-AutoDL/gpt2-tensorrt with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ENOT-AutoDL/gpt2-tensorrt")

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("ENOT-AutoDL/gpt2-tensorrt", dtype="auto")
```
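A minimal usage sketch for the pipeline object created above (the prompt and generation parameters are illustrative, not part of the official snippet):

```python
# Generate a short continuation with the pipeline created above.
result = pipe("Once upon a time,", max_new_tokens=50)
print(result[0]["generated_text"])
```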
- TensorRT

How to use ENOT-AutoDL/gpt2-tensorrt with TensorRT:
```
# No code snippets available yet for this library.
# To use this model, check the repository files and the library's documentation.
# Want to help? PRs adding snippets are welcome at:
# https://github.com/huggingface/huggingface.js
```
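Until an official snippet exists, one plausible path is to build engines from this repo's ONNX files with TensorRT's trtexec tool. A minimal sketch, assuming the two ONNX files from the repository have been downloaded locally and trtexec is on PATH; the --saveEngine output names are hypothetical, and models with dynamic input shapes may additionally need --minShapes/--optShapes/--maxShapes:

```shell
# FP16 engine from the plain ONNX export
trtexec --onnx=gpt2-xl.onnx --saveEngine=gpt2-xl-fp16.plan --fp16

# INT8+FP32 engine from the quantized ONNX (Q/DQ nodes inserted by ENOT-AutoDL)
trtexec --onnx=gpt2-xl-i8.onnx --saveEngine=gpt2-xl-i8.plan --int8
```

The ENOT-AutoDL/ENOT-transformers repository linked in the README below contains the authors' actual engine-building code.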
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ENOT-AutoDL/gpt2-tensorrt with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ENOT-AutoDL/gpt2-tensorrt"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "ENOT-AutoDL/gpt2-tensorrt",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
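Since the server exposes an OpenAI-compatible API, it can also be called from Python with the openai client (a sketch; api_key is a placeholder because the local server does not require one by default):

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="ENOT-AutoDL/gpt2-tensorrt",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```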
Use Docker

```shell
docker model run hf.co/ENOT-AutoDL/gpt2-tensorrt
```
- SGLang
How to use ENOT-AutoDL/gpt2-tensorrt with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "ENOT-AutoDL/gpt2-tensorrt" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "ENOT-AutoDL/gpt2-tensorrt",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
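The same request can be made from Python with the requests library, mirroring the curl call above (a sketch):

```python
import requests

# POST a completion request to the local SGLang server (OpenAI-compatible API).
response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "ENOT-AutoDL/gpt2-tensorrt",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])
```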
Use Docker images

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "ENOT-AutoDL/gpt2-tensorrt" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "ENOT-AutoDL/gpt2-tensorrt",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

- Docker Model Runner
How to use ENOT-AutoDL/gpt2-tensorrt with Docker Model Runner:
```shell
docker model run hf.co/ENOT-AutoDL/gpt2-tensorrt
```
Update README.md (#1), opened by ivkalgin
README.md CHANGED

````diff
@@ -1,3 +1,63 @@
 ---
 license: apache-2.0
+datasets:
+- lambada
+language:
+- en
+library_name: transformers
+pipeline_tag: text-generation
+tags:
+- text-generation-inference
+- causal-lm
+- int8
+- tensorrt
+- ENOT-AutoDL
 ---
+
+# GPT2
+
+This repository contains GPT2 ONNX models compatible with TensorRT:
+* gpt2-xl.onnx - GPT2-XL ONNX for FP32 or FP16 engines
+* gpt2-xl-i8.onnx - GPT2-XL ONNX for INT8+FP32 engines
+
+The models were quantized with the [ENOT-AutoDL](https://pypi.org/project/enot-autodl/) framework.
+Code for building TensorRT engines, along with examples, is published on [github](https://github.com/ENOT-AutoDL/ENOT-transformers).
+
+## Metrics
+
+### GPT2-XL
+
+| | TensorRT INT8+FP32 | torch FP16 |
+|---|:---:|:---:|
+| **Lambada Acc** | 72.11% | 71.43% |
+
+### Test environment
+
+* GPU RTX 4090
+* CPU 11th Gen Intel(R) Core(TM) i7-11700K
+* TensorRT 8.5.3.1
+* pytorch 1.13.1+cu116
+
+## Latency
+
+### GPT2-XL
+
+| Input sequence length | Number of generated tokens | TensorRT INT8+FP32, ms | torch FP16, ms | Acceleration |
+|:---:|:---:|:---:|:---:|:---:|
+| 64 | 64 | 462 | 1190 | 2.58 |
+| 64 | 128 | 920 | 2360 | 2.54 |
+| 64 | 256 | 1890 | 4710 | 2.54 |
+
+### Test environment
+
+* GPU RTX 4090
+* CPU 11th Gen Intel(R) Core(TM) i7-11700K
+* TensorRT 8.5.3.1
+* pytorch 1.13.1+cu116
+
+## How to use
+
+An example of inference and an accuracy test are [published on github](https://github.com/ENOT-AutoDL/ENOT-transformers):
+```shell
+git clone https://github.com/ENOT-AutoDL/ENOT-transformers
+```
````
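For orientation, loading an engine built from these ONNX files with the TensorRT Python API might look like the sketch below (the .plan filename is hypothetical, and the full tokenization and generation loop lives in the ENOT-transformers repository above):

```python
import tensorrt as trt

# Deserialize a CUDA engine previously built with trtexec (see the TensorRT section above).
logger = trt.Logger(trt.Logger.WARNING)
with open("gpt2-xl-i8.plan", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# An execution context holds per-inference state such as bindings and input shapes.
context = engine.create_execution_context()
```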
|