Using the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="hpcgroup/hpc-coder-v2-16b", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("hpcgroup/hpc-coder-v2-16b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("hpcgroup/hpc-coder-v2-16b", trust_remote_code=True)

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
HPC-Coder-v2

The HPC-Coder-v2-16b model is an HPC code LLM fine-tuned on an instruction dataset catered to common HPC topics such as parallelism, optimization, and accelerator porting. This version is a fine-tuning of the DeepSeek Coder V2 Lite base model on the hpc-instruct, oss-instruct, and evol-instruct datasets. We utilized the distributed training library AxoNN to fine-tune in parallel across many GPUs.

HPC-Coder-v2-1.3b, HPC-Coder-v2-6.7b, and HPC-Coder-v2-16b are the most capable open-source LLMs for parallel and HPC code generation. HPC-Coder-v2-16b is currently the best-performing open-source LLM on the ParEval parallel code generation benchmark in terms of both correctness and performance, scoring similarly to 34B and commercial models such as Phind-V2 and GPT-4 on parallel code generation. HPC-Coder-v2-6.7b is not far behind the 16b model.

Using HPC-Coder-v2

The model is provided as a standard Hugging Face model with safetensors weights. It can be used with Transformers pipelines, vLLM, or any other standard model inference framework. HPC-Coder-v2 is an instruct model, and prompts need to be formatted as instructions for best results. It was trained with the following instruct template:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```
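As a minimal sketch, the template above can be filled in with plain string formatting; the `build_prompt` helper below is hypothetical for illustration, not part of the model's API:

```python
# The instruct template from the model card, reproduced verbatim
INSTRUCT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Hypothetical helper: substitute the task into the template."""
    return INSTRUCT_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("Parallelize this loop with OpenMP.")
print(prompt)
```

The resulting string can be passed directly to a text-generation pipeline or tokenized for `model.generate`.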
