Tags: Text Generation · Transformers · PyTorch · Safetensors · English · llama · Eval Results (legacy) · text-generation-inference
Instructions to use pankajmathur/model_51 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use pankajmathur/model_51 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="pankajmathur/model_51")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("pankajmathur/model_51")
model = AutoModelForCausalLM.from_pretrained("pankajmathur/model_51")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use pankajmathur/model_51 with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "pankajmathur/model_51"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "pankajmathur/model_51",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```shell
docker model run hf.co/pankajmathur/model_51
```
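The curl call above can also be made from Python with only the standard library. This is a minimal sketch: the endpoint and field names mirror the curl example, but the helper name `completion_payload` and its defaults are ours, not part of vLLM.

```python
import json

def completion_payload(model: str, prompt: str,
                       max_tokens: int = 512,
                       temperature: float = 0.5) -> dict:
    """Build an OpenAI-compatible /v1/completions request body.

    Hypothetical helper; field names follow the curl example above.
    """
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

body = json.dumps(completion_payload("pankajmathur/model_51",
                                     "Once upon a time,"))

# Sending the request requires the vLLM server from the previous step:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/completions",
#     data=body.encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```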
- SGLang
How to use pankajmathur/model_51 with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "pankajmathur/model_51" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "pankajmathur/model_51",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "pankajmathur/model_51" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "pankajmathur/model_51",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use pankajmathur/model_51 with Docker Model Runner:
```shell
docker model run hf.co/pankajmathur/model_51
```
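Both the vLLM server (port 8000) and the SGLang server (port 30000) above expose the OpenAI-compatible completions API, so responses from either can be handled the same way. A minimal sketch of extracting the generated text; `sample_response` is an illustrative, made-up payload following the standard response shape, not real model output.

```python
# Illustrative shape of an OpenAI-compatible /v1/completions response,
# as returned by the vLLM or SGLang servers above. Values are made up.
sample_response = {
    "id": "cmpl-123",
    "object": "text_completion",
    "model": "pankajmathur/model_51",
    "choices": [
        {"index": 0, "text": " there was a model.", "finish_reason": "length"}
    ],
    "usage": {"prompt_tokens": 5, "completion_tokens": 5, "total_tokens": 10},
}

def first_completion(response: dict) -> str:
    """Extract the generated text of the first choice."""
    return response["choices"][0]["text"]

print(first_completion(sample_response))  # -> " there was a model."
```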
Commit History
- Update README.md 4d76fa9 verified
- Update README.md 2dfd7e5 verified
- Adding Evaluation Results (#2) c285b3d verified
- Adding Evaluation Results (#1) ba48a52
- Update README.md 6012fb6
- Update README.md 53f2ad3
- Update README.md 3e14a07 (Pankaj Mathur committed)
- Update README.md 78d381f (Pankaj Mathur committed)
- Update README.md d1de5af (Pankaj Mathur committed)
- Update README.md 9542702 (Pankaj Mathur committed)
- Create README.md 90e3603 (Pankaj Mathur committed)