---
base_model: Qwen/Qwen3-4B-Instruct-2507
language:
- en
license: apache-2.0
library_name: peft
pipeline_tag: text-generation
tags:
- lora
- qwen
- unsloth
- structeval
---
# exp_camelcase
**Model ID**: `ekunish/exp_camelcase`
This adapter extends exp008a with camelCase-augmented training data: the original 21K examples plus 1.6K camelCase conversion variants.
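The camelCase variants suggest a key-renaming augmentation over structured records. A minimal sketch of that kind of transform, assuming JSON-like data (the actual augmentation script is not published with this card):

```python
def snake_to_camel(name: str) -> str:
    """Convert a snake_case identifier to camelCase."""
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

def camelize_keys(obj):
    """Recursively rename dict keys from snake_case to camelCase."""
    if isinstance(obj, dict):
        return {snake_to_camel(k): camelize_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [camelize_keys(v) for v in obj]
    return obj

print(camelize_keys({"user_name": "alice", "order_items": [{"item_id": 1}]}))
# {'userName': 'alice', 'orderItems': [{'itemId': 1}]}
```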
## Training Configuration
| Parameter | Value |
|-----------|-------|
| Base model | `Qwen/Qwen3-4B-Instruct-2507` |
| Method | QLoRA (4-bit) |
| Max sequence length | 512 |
| Epochs | 1 |
| Learning rate | 1e-06 |
| LoRA r | 64 |
| LoRA alpha | 128 |
| Batch size | 2 × 8 (gradient accumulation) = 16 effective |
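The table maps onto a standard QLoRA setup. A minimal sketch of how these hyperparameters might be expressed with `peft` and `bitsandbytes` (the card tags `unsloth`, but plain `peft` is shown here for portability; the quantization details and `target_modules` are assumptions, not taken from the actual training run):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization for QLoRA (assumed; only "4-bit" is stated above).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA rank and alpha from the table; target_modules is an assumption.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```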
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "ekunish/exp_camelcase"

# Load the tokenizer and base model, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)
```
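After the adapter is attached, generation follows the standard Qwen3 chat workflow. A minimal sketch, assuming a single-turn prompt (the prompt text and generation settings are illustrative, not from the training setup):

```python
# Build a chat-formatted prompt with the tokenizer's chat template.
messages = [{"role": "user", "content": 'Convert this JSON to YAML: {"user_name": "alice"}'}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```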
## Training Data
- Dataset: `data/sft_u10bei_camelcase`
- License: CC-BY-4.0 (where applicable)
## Sources & License
- **Training Data**: `u-10bei/structured_data_with_cot_dataset_512_v2`, `daichira/structured-3k-mix-sft`, etc.
- **Dataset License**: Creative Commons Attribution (CC-BY-4.0)
- **Compliance**: Users must comply with both the dataset's attribution requirements and the base model's original terms of use.
## Competition
Matsuo Lab (松尾研) LLM Community, 2025 academic year course, main competition (StructEval-T)