Instructions for using deepvk/plato-9b with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use deepvk/plato-9b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="deepvk/plato-9b")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deepvk/plato-9b")
model = AutoModelForCausalLM.from_pretrained("deepvk/plato-9b")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use deepvk/plato-9b with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "deepvk/plato-9b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "deepvk/plato-9b",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
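Since the server exposes an OpenAI-compatible API, you can also call it from Python with the openai client. A minimal sketch, assuming a default local server (the api_key value is a placeholder; a local vLLM server does not validate it unless configured to):

```python
# Query the local vLLM server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's default address
    api_key="EMPTY",  # placeholder; a local server ignores it by default
)

response = client.chat.completions.create(
    model="deepvk/plato-9b",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```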
- SGLang
How to use deepvk/plato-9b with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "deepvk/plato-9b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "deepvk/plato-9b",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
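The same OpenAI-compatible endpoint can also be called from Python; a minimal sketch using the requests library (only the port differs from the vLLM example above):

```python
# Query the local SGLang server through its OpenAI-compatible API.
import requests

payload = {
    "model": "deepvk/plato-9b",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
resp = requests.post("http://localhost:30000/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```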
Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "deepvk/plato-9b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "deepvk/plato-9b",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
- Docker Model Runner
How to use deepvk/plato-9b with Docker Model Runner:
```shell
docker model run hf.co/deepvk/plato-9b
```
plato-9b
plato-9b is a fine-tuned version of the google/gemma-2-9b-it model for generating responses in the Russian language.
This 9-billion-parameter model excels at conversational tasks, offering strong contextual understanding and detailed, coherent responses.
Usage
To use plato-9b with the transformers library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deepvk/plato-9b")
model = AutoModelForCausalLM.from_pretrained("deepvk/plato-9b")

# "What is worth visiting in Russia?"
input_text = "Что стоит посетить в России?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

output = model.generate(input_ids, max_length=150, do_sample=True, temperature=0.7)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
# Что стоит посетить в России?
# 1. Красная площадь и Кремль в Москве
# 2. Эрмитаж в Санкт-Петербурге
# 3. Байкал
# 4. Соловецкие острова
# 5. Камчатка и её вулканы
# 6. Золотое Кольцо
# 7. Казанский Кремль
# 8. Алтай
# 9. Астраханская область и Волго-Донской канал
# 10. Кавказские горы и Черноморское побережье
#
# Каждое из этих мест предлагает уникальные культурные, исторические и природные
# достопримечательности, которые делают Россию столь удивительной и разнообразной страной.
#
# (In English: Red Square and the Kremlin in Moscow, the Hermitage in St. Petersburg,
# Lake Baikal, the Solovetsky Islands, Kamchatka and its volcanoes, the Golden Ring,
# the Kazan Kremlin, Altai, the Astrakhan region and the Volga-Don Canal, the Caucasus
# Mountains and the Black Sea coast. Each of these places offers unique cultural,
# historical, and natural attractions that make Russia such an amazing and diverse country.)
```
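Because the model is tuned for dialogue, multi-turn chat through the tokenizer's chat template (the same pattern as in the Transformers snippet above) is the more natural interface. A minimal sketch; the conversation content is illustrative:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deepvk/plato-9b")
model = AutoModelForCausalLM.from_pretrained("deepvk/plato-9b")

# Illustrative two-turn exchange: "Hi! Recommend a book." /
# "What do you like to read?" / "Science fiction."
messages = [
    {"role": "user", "content": "Привет! Посоветуй книгу."},
    {"role": "assistant", "content": "Что вы любите читать?"},
    {"role": "user", "content": "Научную фантастику."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=200)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```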
Dataset
We applied both Supervised Fine-Tuning (SFT) and Preference Optimization (PO). For SFT, we used an 8B-token instruction dataset: 4B tokens of dialogues, with the rest covering math, biology, chemistry, code, and general knowledge. The PO dataset contains 200M tokens of common-knowledge instructions. We trained on both datasets for several epochs.
Evaluation
To evaluate, we applied an LLM-as-a-judge approach on academic benchmarks.
Specifically, we used arena-general-ru and arena-hard-ru with GPT-4o as the judge and GPT-4o-mini as the baseline.
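For intuition, a pairwise LLM-as-a-judge comparison looks roughly like the sketch below. The prompt wording, model call, and aggregation are illustrative assumptions, not the exact arena-general-ru/arena-hard-ru harness:

```python
# Illustrative pairwise LLM-as-a-judge verdict (not the exact evaluation harness).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = """You are an impartial judge. Given a user question and two answers,
reply with exactly "A", "B", or "tie" for whichever answer is better.

Question: {question}

Answer A: {answer_a}

Answer B: {answer_b}"""

def judge_pair(question: str, answer_a: str, answer_b: str) -> str:
    """Ask the judge model (GPT-4o in this setup) which answer is better."""
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            question=question, answer_a=answer_a, answer_b=answer_b)}],
    )
    return completion.choices[0].message.content.strip()

# A model's arena score is aggregated from many such verdicts against the
# gpt-4o-mini baseline, which sits at 50.00 by construction in the tables below.
```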
arena-general-ru
| Model | Score | Score w/ SC (style control) |
|---|---|---|
| gpt-4o-2024-11-20 | 81.87 (-2.04, +1.81) | 78.42 (-2.39, +2.33) |
| gpt-4o-mini-2024-07-18 | 50.00 (-0.00, +0.00) | 50.00 (-0.00, +0.00) |
| deepvk/plato-9b | 41.27 (-2.18, +2.24) | 32.13 (-1.97, +2.05) |
| t-tech/T-lite-it-1.0 | 38.52 (-2.04, +2.98) | 30.38 (-1.90, +3.15) |
| google/gemma-2-9b-it | 27.46 (-2.06, +1.74) | 25.80 (-2.09, +1.98) |
| Qwen/Qwen2.5-7B-Instruct | 24.60 (-2.36, +2.38) | 23.67 (-2.36, +2.28) |
| IlyaGusev/saiga_gemma2_9b | 17.83 (-1.95, +1.66) | 18.46 (-2.22, +1.69) |
arena-hard-ru
| Model | Score | Score w/ SC (style control) |
|---|---|---|
| gpt-4o-2024-11-20 | 85.70 (-1.45, +1.38) | 80.19 (-1.99, +2.04) |
| gpt-4o-mini-2024-07-18 | 50.00 (-0.00, +0.00) | 50.00 (-0.00, +0.00) |
| t-tech/T-lite-it-1.0 | 34.80 (-1.98, +2.38) | 26.99 (-1.74, +2.67) |
| deepvk/plato-9b | 31.81 (-1.92, +1.90) | 24.25 (-1.71, +1.84) |
| Qwen/Qwen2.5-7B-Instruct | 20.84 (-1.99, +1.67) | 17.70 (-1.63, +1.68) |
| google/gemma-2-9b-it | 12.98 (-1.36, +1.57) | 12.97 (-1.46, +1.69) |
| IlyaGusev/saiga_gemma2_9b | 9.72 (-1.34, +1.50) | 10.64 (-1.40, +1.78) |
Citation
Both authors contributed equally; the order is alphabetical.
```bibtex
@misc{deepvk2024plato-9b,
  title={plato-9b},
  author={Eliseev, Anton and Semin, Kirill},
  url={https://huggingface.co/deepvk/plato-9b},
  publisher={Hugging Face},
  year={2025},
}
```