Instructions for using AlexWortega/instruct_rugptSmall with libraries, notebooks, and local apps.
- Libraries
- Transformers
How to use AlexWortega/instruct_rugptSmall with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="AlexWortega/instruct_rugptSmall")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("AlexWortega/instruct_rugptSmall")
model = AutoModelForCausalLM.from_pretrained("AlexWortega/instruct_rugptSmall")
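For a quick sanity check, the pipeline can be called directly; the sampling settings below are illustrative assumptions, not values from the model card:

# Generate a short continuation with the pipeline defined above (sampling settings are assumptions)
print(pipe("Как собрать питон код?", max_new_tokens=128, do_sample=True, temperature=0.7)[0]["generated_text"])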
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use AlexWortega/instruct_rugptSmall with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "AlexWortega/instruct_rugptSmall"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "AlexWortega/instruct_rugptSmall",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
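Because the vLLM server exposes an OpenAI-compatible API, it can also be queried from Python. This is a minimal sketch assuming the openai Python package (v1+) is installed; any placeholder API key works against a local server:

from openai import OpenAI

# Point the client at the local vLLM server started above
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="AlexWortega/instruct_rugptSmall",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)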
- SGLang
How to use AlexWortega/instruct_rugptSmall with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "AlexWortega/instruct_rugptSmall" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "AlexWortega/instruct_rugptSmall",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
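The same completions endpoint can be called from Python; a minimal sketch assuming the requests package is installed and the server started above is listening on port 30000:

import requests

# Query the local SGLang server via its OpenAI-compatible completions endpoint
response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "AlexWortega/instruct_rugptSmall",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])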
Use Docker images

docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "AlexWortega/instruct_rugptSmall" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "AlexWortega/instruct_rugptSmall",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
- Docker Model Runner
How to use AlexWortega/instruct_rugptSmall with Docker Model Runner:
docker model run hf.co/AlexWortega/instruct_rugptSmall
Instructions ruGPT Small v0.1a
Model Summary
I fine-tuned the small ruGPT model on a dataset of instructions, Habr posts, Q&A, and code.
Quick Start
from transformers import pipeline
pipe = pipeline(model='AlexWortega/instruct_rugptSmall')
pipe('''Как собрать питон код?''')
or
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("AlexWortega/instruct_rugptSmall")
model = AutoModelForCausalLM.from_pretrained("AlexWortega/instruct_rugptSmall")
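Once the tokenizer and model are loaded, text can be generated with model.generate; the sampling settings below are illustrative assumptions, not recommendations from the model card:

# Minimal generation sketch using the objects loaded above (sampling settings are assumptions)
inputs = tokenizer("Как собрать питон код?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))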
License
The weights of Instructions ruGPT Small v0.1a are licensed under version 2.0 of the Apache License.
Hyperparameters
I used Novograd with a learning rate of 2e-5 and a global batch size of 6 (3 per data-parallel worker), combining data parallelism and pipeline parallelism during training. Input sequences were truncated to 1024 tokens, and sequences shorter than 1024 tokens were concatenated into one long sequence to improve data efficiency (see the sketch below).
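A minimal sketch of the truncation-and-packing step described above; the function and variable names are illustrative assumptions, not the actual training code:

from itertools import chain

def pack_sequences(token_lists, eos_id, max_len=1024):
    # Join tokenized examples into one stream (separated by EOS) and cut it
    # into fixed-length chunks of max_len tokens; longer inputs are truncated.
    stream = list(chain.from_iterable(ids + [eos_id] for ids in token_lists))
    return [stream[i:i + max_len] for i in range(0, len(stream), max_len)]

# Example: pack_sequences([tokenizer.encode(t) for t in texts], tokenizer.eos_token_id)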
References
Metrics
SOON
BibTeX entry and citation info
@article{nickolich2023gpt2xl,
  title={GPT2xl is underrated task solver},
  author={Nickolich Aleksandr and Karina Romanova and Arseniy Shahmatov and Maksim Gersimenko},
  year={2023}
}