Hemlock-Codex-7B

Tags: Text Generation · Transformers · Safetensors · qwen2 · merlina · grimoire · sft · conversational · text-generation-inference
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("hemlang/Hemlock-Codex-7B")
model = AutoModelForCausalLM.from_pretrained("hemlang/Hemlock-Codex-7B")

messages = [
    {"role": "user", "content": "Who are you?"},
]

# Build the chat-formatted prompt and move the input tensors to the model's device.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Slice off the prompt tokens so only the newly generated reply is decoded.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
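If GPU memory is tight, the checkpoint can also be loaded with 4-bit quantization for inference, matching the NF4 setting listed in the training configuration below. This is a minimal sketch, assuming the bitsandbytes and accelerate packages are installed; the compute dtype is an illustrative choice, not a value from this card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative 4-bit (NF4) inference setup; adjust the compute dtype to your hardware.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: not specified by the card
)

tokenizer = AutoTokenizer.from_pretrained("hemlang/Hemlock-Codex-7B")
model = AutoModelForCausalLM.from_pretrained(
    "hemlang/Hemlock-Codex-7B",
    quantization_config=bnb_config,
    device_map="auto",  # lets accelerate place layers across available devices
)
```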
Training Configuration
| Parameter | Value |
|---|---|
| Training Mode | SFT |
| Base Model | hemlang/Hemlock2-Coder-7B |
| Learning Rate | 0.0001 |
| Epochs | 3 |
| Batch Size | 2 |
| Gradient Accumulation | 16 |
| Effective Batch Size | 32 |
| Max Sequence Length | 8192 |
| Optimizer | paged_adamw_8bit |
| LR Scheduler | cosine |
| Warmup Ratio | 0.05 |
| Weight Decay | 0.01 |
| Max Grad Norm | 0.25 |
| Seed | 42 |
| LoRA Rank (r) | 128 |
| LoRA Alpha | 128 |
| LoRA Dropout | 0.05 |
| Target Modules | k_proj, o_proj, q_proj, v_proj, down_proj, gate_proj, up_proj |
| Quantization | 4-bit (NF4) |
| GPU | NVIDIA RTX A6000 |
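For reference, the table above maps onto a Hugging Face PEFT + bitsandbytes setup roughly as follows. This is a hedged reconstruction, not the card's published training script: dataset handling and the trainer wiring (e.g., TRL's SFTTrainer) are omitted, the output path and compute dtype are hypothetical, and every other value is copied from the table:

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit NF4 base-model quantization (QLoRA-style), per the Quantization row.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: dtype not listed in the table
)

# LoRA adapter settings, per the LoRA and Target Modules rows.
lora_config = LoraConfig(
    r=128,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["k_proj", "o_proj", "q_proj", "v_proj",
                    "down_proj", "gate_proj", "up_proj"],
    task_type="CAUSAL_LM",
)

# Optimizer and schedule settings, per the remaining rows.
# Effective batch size = 2 (per device) x 16 (accumulation steps) = 32.
training_args = TrainingArguments(
    output_dir="hemlock-codex-7b-sft",  # hypothetical path
    learning_rate=1e-4,
    num_train_epochs=3,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,
    optim="paged_adamw_8bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    weight_decay=0.01,
    max_grad_norm=0.25,
    seed=42,
)
# Note: the Max Sequence Length of 8192 is a trainer-level setting (e.g., TRL's
# SFTConfig(max_seq_length=8192)); TrainingArguments itself does not accept it.
```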
Model tree for hemlang/Hemlock-Codex-7B

Base model lineage (each model finetuned from the previous one):
Qwen/Qwen2.5-7B → Qwen/Qwen2.5-Coder-7B → Qwen/Qwen2.5-Coder-7B-Instruct → hemlang/Hemlock2-Coder-7B → hemlang/Hemlock-Codex-7B (this model)

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="hemlang/Hemlock-Codex-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
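Generation parameters can be passed straight through the pipeline call. The values below are illustrative defaults, not settings from this model card, and the output parsing assumes a recent transformers version where chat input returns the full conversation:

```python
out = pipe(
    messages,
    max_new_tokens=256,  # illustrative cap on reply length
    do_sample=True,      # sampling instead of greedy decoding
    temperature=0.7,     # illustrative sampling temperature
)
# With chat-style input, generated_text holds the whole conversation;
# the last message is the assistant's reply.
print(out[0]["generated_text"][-1]["content"])
```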