---
license: apache-2.0
base_model: Qwen/Qwen3-Coder-1.5B
tags:
- causal-lm
- qwen
- qwen3
- code
- coder
- lora-merged
- code-analysis
pipeline_tag: text-generation
---

# Code_analyze_1.0

**Code_analyze_1.0** is a merged LoRA fine-tuned version of **Qwen3-Coder-1.5B**, optimized for code analysis, code understanding, and reasoning over source code.

## Model Details

- **Base model:** Qwen/Qwen3-Coder-1.5B
- **Model type:** Causal language model
- **Fine-tuning method:** LoRA, with the adapter weights merged into the base weights
- **Languages:** Primarily English (code-focused); multilingual comments are supported
- **Domain:** Programming / software engineering

This model is **fully merged and standalone**: no additional LoRA adapters or base-model weights are required at inference time.

## Intended Use

The model is designed for:

- Static code analysis
- Bug detection and explanation
- Code review and refactoring suggestions
- Understanding unfamiliar codebases
- Explaining algorithms and logic in source code
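
To prompt these tasks consistently, a small helper can wrap a source snippet into an analysis request. This is a minimal sketch: the `build_analysis_prompt` helper and its wording are illustrative, not part of the model or its training data.

```python
def build_analysis_prompt(code: str, task: str = "find potential issues") -> str:
    """Wrap a source snippet into a code-analysis prompt.

    `task` can be phrased after any of the use cases above, e.g.
    "suggest refactorings" or "explain the algorithm".
    (Illustrative helper, not shipped with the model.)
    """
    return (
        f"Analyze this Python function and {task}:\n\n"
        f"```python\n{code}\n```"
    )

prompt = build_analysis_prompt("def f(x): return x + 1")
```

The same helper can be reused for review or refactoring prompts by changing `task`; the fenced block keeps the code clearly delimited for the model.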

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Vilyam888/Code_analyze_1.0"

tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    trust_remote_code=True,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    device_map="auto",
)

prompt = "Analyze this Python function and find potential issues:\n\n```python\ndef f(x): return x + 1\n```"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
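
Model replies often contain fenced code blocks (for example, a corrected version of the analyzed function). A small post-processing helper (hypothetical, not shipped with the model) can pull those blocks out of the generated text:

```python
import re

# Match fenced code blocks, capturing the optional language tag and the body.
_FENCE_RE = re.compile(r"```(\w+)?\n(.*?)```", re.DOTALL)

def extract_code_blocks(reply: str) -> list[tuple[str, str]]:
    """Return (language, code) pairs for each fenced block in a model reply."""
    return [(lang or "", body.strip()) for lang, body in _FENCE_RE.findall(reply)]

reply = "Here is a fixed version:\n```python\ndef f(x):\n    return x + 1\n```\nDone."
blocks = extract_code_blocks(reply)
# blocks[0] is ("python", "def f(x):\n    return x + 1")
```

Stripping the surrounding prose this way makes it easy to lint, diff, or execute the model's suggested code in a downstream pipeline.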