Upload merged Qwen3-4B-Instruct-2507 model (auto-generated README)
- README.md +14 -17
- config.json +1 -1
- model-00001-of-00002.safetensors +2 -2
- model-00002-of-00002.safetensors +2 -2
README.md
CHANGED
@@ -1,11 +1,11 @@
 ---
-base_model:
+base_model: unsloth/Qwen3-4B-Instruct-2507
 datasets:
-
+- u-10bei/dbbench_sft_dataset_react
 language:
 - en
 license: apache-2.0
-library_name:
+library_name: transformers
 pipeline_tag: text-generation
 tags:
 - lora
@@ -17,11 +17,8 @@ tags:
 
 # <qwen3-4b-agent-trajectory-lora>
 
-This repository provides a
-**
-
-This repository contains **LoRA adapter weights only**.
-The base model must be loaded separately.
+This repository provides a merged model that includes both the base model
+**unsloth/Qwen3-4B-Instruct-2507** and the LoRA adapter. No separate LoRA loading is required.
 
 ## Training Objective
 
@@ -34,8 +31,10 @@ tool use, and recovery from errors.
 
 ## Training Configuration
 
-- Base model:
-- Method: LoRA
+- Base model: unsloth/Qwen3-4B-Instruct-2507
+- Method: LoRA
+- dtype: torch.bfloat16
+- load_in_4bit: False
 - Max sequence length: 2048
 - Epochs: 2
 - Learning rate: 2e-06
@@ -45,24 +44,22 @@
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
-from peft import PeftModel
 import torch
 
-
-adapter = "your_id/your-repo"
+model_id = "da1ch812/advanced-comp-model"
 
-tokenizer = AutoTokenizer.from_pretrained(
+tokenizer = AutoTokenizer.from_pretrained(model_id)
 model = AutoModelForCausalLM.from_pretrained(
-
+    model_id,
     torch_dtype=torch.float16,
     device_map="auto",
 )
-model = PeftModel.from_pretrained(model, adapter)
 ```
 
 ## Sources & Terms (IMPORTANT)
 
-Training data:
+Training data:
+- u-10bei/dbbench_sft_dataset_react
 
 Dataset License: MIT License. This dataset is used and distributed under the terms of the MIT License.
 Compliance: Users must comply with the MIT license (including copyright notice) and the base model's original terms of use.
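The commit message describes the upload as a *merged* model: the LoRA update is folded into the base weights once, so the checkpoint loads without `peft`. A toy NumPy sketch of what that folding means numerically (the shapes, rank, and scaling here are made-up illustrations, not the adapter's real hyperparameters):

```python
import numpy as np

# Toy sketch: merging a LoRA adapter folds the low-rank update
# (alpha / r) * B @ A back into the base weight W, so the merged
# checkpoint behaves like base-plus-adapter with no extra loading step.
rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 4            # hidden size, LoRA rank, scaling (assumed)
W = rng.standard_normal((d, d))  # base weight
A = rng.standard_normal((r, d))  # LoRA down-projection
B = rng.standard_normal((d, r))  # LoRA up-projection

W_merged = W + (alpha / r) * (B @ A)   # the "merge" step

x = rng.standard_normal(d)
# Merged weight reproduces base output plus the adapter path exactly.
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * (B @ (A @ x)))
```

This is why the new README drops `from peft import PeftModel`: after merging, plain `AutoModelForCausalLM.from_pretrained` is sufficient.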
config.json
CHANGED
@@ -4,7 +4,7 @@
   ],
   "attention_bias": false,
   "attention_dropout": 0.0,
-  "dtype": "
+  "dtype": "bfloat16",
   "eos_token_id": 151645,
   "head_dim": 128,
   "hidden_act": "silu",
model-00001-of-00002.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:b548ca350ccfd7e6b50af77cd5cb296374c029cb94acba85cae8854fa357735f
+size 4967215360
model-00002-of-00002.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:c06842070760200934a1818592a41c844bb6b9d8455328c7a3c395b4a6398b59
+size 3077766632
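The two safetensors diffs replace empty Git LFS pointer stubs with real `oid sha256:` / `size` pairs. A small helper to check a downloaded shard against its pointer (the file path in the usage comment is an assumption about where the shard was saved):

```python
import hashlib

def lfs_digest(path: str, chunk: int = 1 << 20) -> tuple[str, int]:
    """Return (sha256 hex digest, byte size) of a file, streaming in chunks."""
    h, size = hashlib.sha256(), 0
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
            size += len(block)
    return h.hexdigest(), size

# Usage sketch against the pointer values in this commit:
# digest, size = lfs_digest("model-00001-of-00002.safetensors")
# assert digest == "b548ca350ccfd7e6b50af77cd5cb296374c029cb94acba85cae8854fa357735f"
# assert size == 4967215360
```

Both fields must match: the `oid` is the SHA-256 of the full file contents, and `size` is its length in bytes, exactly as recorded in the pointer file.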