Instructions to use jtglover/stageinternaltesting with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use jtglover/stageinternaltesting with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="jtglover/stageinternaltesting")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jtglover/stageinternaltesting")
model = AutoModelForCausalLM.from_pretrained("jtglover/stageinternaltesting")
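A minimal generation sketch, assuming the tokenizer and model loaded above; the prompt and decoding settings are illustrative only:

# Tokenize a prompt and generate a continuation with the loaded model
inputs = tokenizer("Once upon a time,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))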
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use jtglover/stageinternaltesting with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "jtglover/stageinternaltesting"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "jtglover/stageinternaltesting",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
Use Docker
docker model run hf.co/jtglover/stageinternaltesting
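The OpenAI-compatible endpoint can also be called from Python instead of curl. A sketch, assuming the openai client package is installed (pip install openai) and the server is running locally on port 8000 as started above; the api_key value is a placeholder, since vLLM does not require one by default:

# Call the vLLM server via the OpenAI-compatible completions API
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="jtglover/stageinternaltesting",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)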
- SGLang
How to use jtglover/stageinternaltesting with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "jtglover/stageinternaltesting" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "jtglover/stageinternaltesting",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "jtglover/stageinternaltesting" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "jtglover/stageinternaltesting",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
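Whichever way the SGLang server is started, the same completions request shown in the curl call can be sent from Python. A sketch, assuming the requests package is installed (pip install requests) and the server is reachable on localhost:30000:

# Send the same completions request to the SGLang server from Python
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "jtglover/stageinternaltesting",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])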
- Docker Model Runner
How to use jtglover/stageinternaltesting with Docker Model Runner:
docker model run hf.co/jtglover/stageinternaltesting
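A short sketch of the same workflow; exact behavior depends on your Docker Desktop / Model Runner version, and the prompt argument is illustrative only:

# Pull the model ahead of time (optional; docker model run pulls on demand)
docker model pull hf.co/jtglover/stageinternaltesting
# Pass a prompt for a one-shot completion instead of an interactive chat
docker model run hf.co/jtglover/stageinternaltesting "Once upon a time,"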
Upload folder using huggingface_hub
Multi commit ID: f71ec0e9a948115eb259c851bd97a5fc0d37dba314a30db7530f5c574262ebd5
Scheduled commits:
- Upload 1 file(s) totalling 4.9G (422ad1b4f45802389ffcd33722ee54f137dc81193072034fe3dee2363c24f416)
- Upload 1 file(s) totalling 4.9G (b7d5af3e9afafed188c68cf7b7413fb69176eda5ec4605044291674976075639)
- Upload 1 file(s) totalling 3.6G (79968badbb425cdbd4d7ebf9a9c6716035b0a7bc41e11e5c21daabbdc9b7b514)
- Upload 6 file(s) totalling 2.3M (2723c983f3f945e49440128fd0490ddb6890b13607e75fb5f49d5ee79ad41bbb)
This is a PR opened using the huggingface_hub library in the context of a multi-commit. The PR can be commented on like any other PR. However, please be aware that manually updating the PR description, changing the PR status, or pushing new commits is not recommended, as it might corrupt the commit process. Learn more about multi-commits in this guide.
create_pr=False has been passed, so the PR is automatically merged.
This is a comment posted using the huggingface_hub library in the context of a multi-commit. Learn more about multi-commits in this guide.
The multi-commit is now complete! You can ping the repo owner to review the changes. This PR can now be commented on or modified without risking corruption.
This is a comment posted using the huggingface_hub library in the context of a multi-commit. Learn more about multi-commits in this guide.