Instructions for using Sharathhebbar24/Instruct_GPT with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Sharathhebbar24/Instruct_GPT with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Sharathhebbar24/Instruct_GPT")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Sharathhebbar24/Instruct_GPT")
model = AutoModelForCausalLM.from_pretrained("Sharathhebbar24/Instruct_GPT")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Sharathhebbar24/Instruct_GPT with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Sharathhebbar24/Instruct_GPT"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Sharathhebbar24/Instruct_GPT",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
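Because the vLLM server speaks the OpenAI-compatible API, it can also be called from Python. A minimal sketch, assuming the `openai` client package is installed and the server above is running on localhost:8000 (the `api_key` value is a placeholder; vLLM ignores it unless authentication is configured):

```python
# pip install openai
from openai import OpenAI

# Point the client at the local vLLM server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Sharathhebbar24/Instruct_GPT",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```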
- SGLang
How to use Sharathhebbar24/Instruct_GPT with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Sharathhebbar24/Instruct_GPT" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Sharathhebbar24/Instruct_GPT",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Sharathhebbar24/Instruct_GPT" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Sharathhebbar24/Instruct_GPT",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use Sharathhebbar24/Instruct_GPT with Docker Model Runner:
```shell
docker model run hf.co/Sharathhebbar24/Instruct_GPT
```
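The command above opens an interactive chat with the model. Docker Model Runner also exposes an OpenAI-compatible endpoint; the sketch below assumes TCP access has been enabled (for example via `docker desktop enable model-runner --tcp 12434`), and the host, port, and path are assumptions that depend on your Docker setup:

```python
from openai import OpenAI

# Assumption: the Model Runner's OpenAI-compatible TCP endpoint is enabled
# on the default port 12434; no API key is required for a local runner.
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="hf.co/Sharathhebbar24/Instruct_GPT",
    messages=[{"role": "user", "content": "Once upon a time,"}],
)
print(resp.choices[0].message.content)
```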
This model is a fine-tuned version of gpt2-medium, trained on the databricks/databricks-dolly-15k dataset.
Model description
GPT-2 is a transformer model pre-trained on a very large corpus of English data in a self-supervised fashion. This means it was pre-trained on raw text only, with no humans labelling it in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from that text. More precisely, it was trained to guess the next word in sentences: the inputs are sequences of continuous text of a certain length, and the targets are the same sequences shifted one token (word or piece of word) to the right. The model uses a masking mechanism internally to make sure the predictions for token i only use the inputs from 1 to i and not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was trained for, however, which is generating text from a prompt.
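As a concrete illustration of this shift-by-one objective (a minimal sketch against the base gpt2-medium checkpoint, not the actual fine-tuning code for this model), the transformers causal-LM classes accept the input ids themselves as labels and perform the one-token shift internally:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Base checkpoint used for illustration; the fine-tuned model behaves the same way.
tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
model = AutoModelForCausalLM.from_pretrained("gpt2-medium")

enc = tokenizer("Hello world, this is a test.", return_tensors="pt")

# For causal language modeling, the labels are the input ids themselves;
# the model shifts them one position internally, so the logits at position i
# are scored against the token at position i + 1, and the causal attention
# mask keeps position i from attending to positions > i.
out = model(input_ids=enc["input_ids"], labels=enc["input_ids"])
print(out.loss)  # average next-token cross-entropy
```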
To use this model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Sharathhebbar24/Instruct_GPT"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")

def generate_text(prompt):
    inputs = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(inputs, max_length=64, pad_token_id=tokenizer.eos_token_id)
    generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Trim the output back to the last complete sentence.
    return generated[:generated.rfind(".") + 1]

generate_text("Should I Invest in stocks")
```
Should I Invest in stocks? Investing in stocks is a great way to diversify your portfolio. You can invest in stocks based on the market's performance, or you can invest in stocks based on the company's performance.
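The helper above uses greedy decoding. As a hedged variation (the sampling values here are illustrative, not taken from the model card), generation can be made less deterministic with the standard sampling arguments to `generate()`:

```python
# Sampling instead of greedy decoding usually yields more varied text.
inputs = tokenizer.encode("Should I Invest in stocks", return_tensors="pt")
outputs = model.generate(
    inputs,
    max_length=64,
    do_sample=True,    # enable sampling
    temperature=0.7,   # illustrative value
    top_p=0.9,         # illustrative value
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```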
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 31.46 |
| AI2 Reasoning Challenge (25-Shot) | 28.24 |
| HellaSwag (10-Shot) | 39.33 |
| MMLU (5-Shot) | 26.84 |
| TruthfulQA (0-shot) | 39.72 |
| Winogrande (5-shot) | 54.30 |
| GSM8k (5-shot) | 0.30 |