---
pipeline_tag: text-generation
library_name: transformers
language:
- en
tags:
- qwen2
- code
- cobol
- code-documentation
- qwen
- qwen2.5
- instruction-tuning
- llm
- generative-model
- conversational
- text-generation-inference
- safetensors
---
# Qwen2.5-Coder-3B-Instruct – Fine-tuned for COBOL Code Documentation

This model is a fine-tuned version of Qwen/Qwen2.5-Coder-3B-Instruct, optimized for generating natural-language documentation from COBOL source code. It was fine-tuned with a freeze strategy: only the last transformer layer was updated, while the rest of the model's pretrained weights were preserved.

Load the model directly with `transformers`:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("V7W3D/qwen-code-doc-ft")
model = AutoModelForCausalLM.from_pretrained("V7W3D/qwen-code-doc-ft")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
## 🔧 Model Description
- Architecture: Qwen2.5-Coder-3B (decoder-only transformer)
- Base Model: Qwen/Qwen2.5-Coder-3B-Instruct
- Fine-tuning Method: Freeze fine-tuning (only the last transformer block's parameters were updated; see the sketch below)
- Training Objective: Instruction-following text generation for COBOL code documentation
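
For reference, here is a minimal sketch of what this freeze setup looks like in code. It is illustrative only: it assumes the standard Qwen2 module layout in `transformers` (`model.model.layers`), and the actual training script is not published here.

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-3B-Instruct")

# Freeze every parameter first...
for param in model.parameters():
    param.requires_grad = False

# ...then unfreeze only the last transformer block, so gradient updates
# touch just that layer during fine-tuning.
for param in model.model.layers[-1].parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```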
## 🧠 Use Cases
This model is specialized in generating descriptive documentation for legacy COBOL code, especially useful for:
- Legacy system maintenance
- Automated codebase documentation
- Migration planning
- COBOL code understanding and onboarding
## ✍️ Example Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_name = "V7W3D/qwen-code-doc-ft"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the model in a text-generation pipeline
doc_gen = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Prompt format used for documentation generation
prompt = "### Document this COBOL code:\n\n IDENTIFICATION DIVISION.\n PROGRAM-ID. HELLO-WORLD.\n PROCEDURE DIVISION.\n DISPLAY 'HELLO, WORLD!'\n STOP RUN.\n\n### Documentation:"

response = doc_gen(prompt, max_new_tokens=200, do_sample=False)
print(response[0]["generated_text"])
```
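
Because the base model is instruction-tuned, the same request can also be phrased as a chat message instead of the raw `###` prompt above. A minimal sketch follows; the exact instruction wording is hypothetical, and it assumes a recent `transformers` version whose text-generation pipeline accepts chat-style message lists.

```python
# Sketch: the same documentation request sent through the chat interface.
cobol_snippet = (
    "IDENTIFICATION DIVISION.\n"
    "PROGRAM-ID. HELLO-WORLD.\n"
    "PROCEDURE DIVISION.\n"
    "    DISPLAY 'HELLO, WORLD!'\n"
    "    STOP RUN."
)
messages = [{"role": "user", "content": f"Document this COBOL code:\n\n{cobol_snippet}"}]

chat_out = doc_gen(messages, max_new_tokens=200, do_sample=False)
print(chat_out[0]["generated_text"][-1]["content"])  # last message is the model's reply
```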
The model can also be loaded by name through the high-level `pipeline` helper:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="V7W3D/qwen-code-doc-ft")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```