# Hemlock-Coder-7B

## Use with the Transformers library

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="hemlang/Hemlock-Coder-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
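Extra keyword arguments to the pipeline are forwarded to `model.generate()`. A minimal sketch (the sampling values are illustrative, not tuned for this model; the chat-style return format assumes a recent transformers release):

```python
# Generation kwargs are passed through to model.generate().
out = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7)
# With chat input, generated_text holds the whole conversation;
# the last entry is the assistant's reply.
print(out[0]["generated_text"][-1]["content"])
```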

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("hemlang/Hemlock-Coder-7B")
model = AutoModelForCausalLM.from_pretrained("hemlang/Hemlock-Coder-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, dropping special tokens like <|im_end|>.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
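For GPU inference, the model can be loaded in half precision with automatic device placement. A minimal sketch, assuming a CUDA GPU and the `accelerate` package are available (switch to `torch.float16` on GPUs without bfloat16 support):

```python
import torch
from transformers import AutoModelForCausalLM

# Load weights in bfloat16 (matching the checkpoint's BF16 tensors)
# and let accelerate place them on the available device(s).
model = AutoModelForCausalLM.from_pretrained(
    "hemlang/Hemlock-Coder-7B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```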

## Training Configuration

| Parameter | Value |
|---|---|
| Training Mode | SFT |
| Base Model | nbeerbower/Hemlock-Qwen2.5-Coder-7B |
| Learning Rate | 0.0001 |
| Epochs | 2 |
| Batch Size | 1 |
| Gradient Accumulation | 16 |
| Effective Batch Size | 16 |
| Max Sequence Length | 2048 |
| Optimizer | paged_adamw_8bit |
| LR Scheduler | cosine |
| Warmup Ratio | 0.05 |
| Weight Decay | 0.01 |
| Max Grad Norm | 0.25 |
| Seed | 42 |
| LoRA Rank (r) | 128 |
| LoRA Alpha | 128 |
| LoRA Dropout | 0.05 |
| Target Modules | up_proj, down_proj, gate_proj, k_proj, q_proj, v_proj, o_proj |
| Quantization | 4-bit (NF4) |
| GPU | NVIDIA RTX A6000 |
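
The rank-128 LoRA combined with 4-bit NF4 quantization is a QLoRA-style setup. The sketch below shows how the table's hyperparameters map onto `peft` and `bitsandbytes` configs; it is an illustration of these settings, not the actual Merlina training code, and the compute dtype is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization, matching the "Quantization" row above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption, not listed in the table
)

base = AutoModelForCausalLM.from_pretrained(
    "nbeerbower/Hemlock-Qwen2.5-Coder-7B",  # base model from the table
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapter matching the table: r=128, alpha=128, dropout=0.05,
# applied to the listed attention and MLP projections.
lora_config = LoraConfig(
    r=128,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["up_proj", "down_proj", "gate_proj",
                    "k_proj", "q_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```

With a per-device batch size of 1 and 16 gradient-accumulation steps, each optimizer update sees 1 × 16 = 16 sequences, which is the "Effective Batch Size" row.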

Trained with Merlina; see Merlina on GitHub.

Safetensors · 8B params · BF16

## Model Tree

- Lineage: Qwen/Qwen2.5-7B → nbeerbower/Hemlock-Qwen2.5-Coder-7B → hemlang/Hemlock-Coder-7B
- Quantizations: 3 models