Use with vLLM
Install vLLM from pip, serve the model, and call the OpenAI-compatible API:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "datapaf/DeepSeekCoderCodeQnA"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "datapaf/DeepSeekCoderCodeQnA",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
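The same server can also be queried from Python. A minimal sketch, assuming the server started above is reachable at http://localhost:8000 and the openai package is installed (pip install openai):

from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint; the API key is ignored but must be set
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.completions.create(
    model="datapaf/DeepSeekCoderCodeQnA",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)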
Use with Docker
docker model run hf.co/datapaf/DeepSeekCoderCodeQnA
Model Card for DeepSeekCoderCodeQnA

This is a version of the DeepSeek-Coder model that was fine-tuned on grammatically corrected texts.

Model Details

Model Description

  • Model type: LLaMA
  • Number of Parameters: 6.7B
  • Supported Programming Language: Python
  • Finetuned from model: DeepSeek-Coder

Model Sources

  • Repository: GitHub Repo
  • Paper: "Leveraging Large Language Models in Code Question Answering: Baselines and Issues" by Georgy Andryushchenko, Vladimir V. Ivanov, Vladimir Makharev, Elizaveta Tukhtina, and Aidar Valeev

How to Get Started with the Model

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer from the base DeepSeek-Coder model and the fine-tuned weights from this repository
tokenizer = AutoTokenizer.from_pretrained('deepseek-ai/deepseek-coder-6.7b-instruct')
model = AutoModelForCausalLM.from_pretrained('datapaf/DeepSeekCoderCodeQnA', device_map="cuda")

code = ... # Your Python code snippet here
question = ... # Your question regarding the snippet here

# Build the prompt: the question followed by the code snippet
q = f"{question}\n{code}"

# Tokenize the prompt, generate an answer, and decode it
inputs = tokenizer.encode(q, return_tensors="pt").to('cuda')
outputs = model.generate(inputs, max_new_tokens=512, pad_token_id=tokenizer.eos_token_id)
text = tokenizer.decode(outputs[0])
print(text)
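Note that generate returns the prompt tokens followed by the newly generated ones, so the decoded text above repeats the input. A minimal sketch for keeping only the generated answer, reusing the variables from the snippet above:

# Slice off the prompt tokens and decode only the newly generated part
answer = tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
print(answer)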