Instructions to use nlpcloud/instruct-gpt-j-fp16 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use nlpcloud/instruct-gpt-j-fp16 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="nlpcloud/instruct-gpt-j-fp16")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nlpcloud/instruct-gpt-j-fp16")
model = AutoModelForCausalLM.from_pretrained("nlpcloud/instruct-gpt-j-fp16")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use nlpcloud/instruct-gpt-j-fp16 with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "nlpcloud/instruct-gpt-j-fp16"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nlpcloud/instruct-gpt-j-fp16",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker

```shell
docker model run hf.co/nlpcloud/instruct-gpt-j-fp16
```
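Once the server is running, it can also be called from Python instead of curl. Below is a minimal sketch using only the standard library; the endpoint, model name, and payload mirror the curl call above, and `http://localhost:8000` is vLLM's default address (adjust if you serve elsewhere):

```python
import json
import urllib.request

def completion_request(prompt, model="nlpcloud/instruct-gpt-j-fp16",
                       base_url="http://localhost:8000",
                       max_tokens=512, temperature=0.5):
    """Build an OpenAI-compatible /v1/completions request for a local vLLM server."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending the request (requires the server started above to be running):
# with urllib.request.urlopen(completion_request("Once upon a time,")) as resp:
#     print(json.loads(resp.read())["choices"][0]["text"])
```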
- SGLang
How to use nlpcloud/instruct-gpt-j-fp16 with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "nlpcloud/instruct-gpt-j-fp16" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nlpcloud/instruct-gpt-j-fp16",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "nlpcloud/instruct-gpt-j-fp16" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nlpcloud/instruct-gpt-j-fp16",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use nlpcloud/instruct-gpt-j-fp16 with Docker Model Runner:
```shell
docker model run hf.co/nlpcloud/instruct-gpt-j-fp16
```
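Tying the Transformers option above together: the text-generation pipeline returns the prompt followed by the generated text, so a small helper to strip the echoed prompt is handy. This is a sketch, not an official recipe; the instruction-style prompt wording below is an assumption, and the fp16 weights are roughly 12 GB, so a GPU is strongly recommended:

```python
def strip_prompt(generated_text, prompt):
    """Return only the newly generated portion: text-generation pipelines
    return the prompt followed by the model's continuation."""
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):]
    return generated_text

# Usage (downloads ~12 GB of weights on first run):
# from transformers import pipeline
# pipe = pipeline("text-generation", model="nlpcloud/instruct-gpt-j-fp16")
# prompt = "Correct spelling and grammar:\nI do not wants to go\n"  # assumed wording
# out = pipe(prompt, max_new_tokens=64)[0]["generated_text"]
# print(strip_prompt(out, prompt))
```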
Few shot learning
Hey, fine-tuning GPT-J with the Stanford Alpaca instructions is a great idea. I've been looking for something like this, and it's great to see someone has already done such a good job.
I'm trying to do few-shot learning (classification) with the GPT-J model, but it doesn't do a good job. I have tried the Alpaca model (LoRA), and it increased the accuracy, but it's still not enough. So I came across your instruct-gptj model. I tried few-shot learning with it, but it doesn't seem to handle few-shot prompts well; I got very bad results across the board. Maybe I'm doing something completely wrong. I was looking at the documentation for this model, and it says I don't need few-shot learning because the model can understand instructions directly.
What if I want to do few-shot anyway and provide a few examples for each class? Will it work? Do you have any sample prompts, or have you not tested this? I'm very curious to know.
Thanks, and cool to see that you had the same idea :)
In our tests this model still performs correctly for few-shot learning, even though it was fine-tuned for instructions. To be honest, though, I don't really see why you would use this model if you are trying to perform text classification with few-shot learning.
Maybe you can have a look at our guide to see how to perform few-shot learning for classification: https://nlpcloud.com/effectively-using-gpt-j-gpt-neo-gpt-3-alternatives-few-shot-learning.html#zero-shot-text-classification
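For reference, the kind of few-shot classification prompt that guide describes can be sketched as a few labeled examples followed by the item to classify, with the model expected to complete the final label. The sentiment examples and label names below are illustrative placeholders, not prompts tested against this model:

```python
def few_shot_prompt(examples, query, sep="\n---\n"):
    """Build a few-shot classification prompt from (text, label) pairs.
    The model is expected to complete the final 'Sentiment:' line."""
    blocks = [f"Text: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Text: {query}\nSentiment:")
    return sep.join(blocks)

examples = [
    ("I love this product, it works perfectly.", "positive"),
    ("Terrible experience, it broke after a day.", "negative"),
    ("It arrived on time and does the job.", "positive"),
]
print(few_shot_prompt(examples, "The battery died within an hour."))
```

The prompt ends right after `Sentiment:` so the model's first generated tokens are the predicted label; keeping the example format identical for every block matters more than the exact field names.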
I have tried GPT-J and GPT-Neo for few-shot learning; the quality is not good enough. In fact, I tested a few examples on your playground, and I think it's more or less the same. Did you retrain or fine-tune the GPT-J model before you created the API?
"I think it's more or less the same" --> I am not exactly sure what you mean by that.
Maybe you can copy-paste a few-shot example here so I can advise, and so it benefits everyone who is reading this?
"Did you retrain or fine-tune the gptj model" --> Yes, this Instruct GPT-J model is a fine-tuned version of GPT-J.
If you want to see something insane, try NLP Cloud's fine-tuned GPT-Neo model.