---
language:
- en
base_model: Qwen/Qwen3-0.6B
library_name: transformers
pipeline_tag: text-generation
tags:
- text-generation
- lora
- axolotl
license: apache-2.0
---

# Delphermes-0.6B-R1-LORA

This model is Qwen/Qwen3-0.6B with a LoRA adapter merged into the base weights, fine-tuned for English text generation.

## Model Details

- **Base Model**: Qwen/Qwen3-0.6B
- **Language**: English (en)
- **Type**: Merged LoRA model
- **Library**: transformers

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "justinj92/Delphermes-0.6B-R1-LORA"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Example usage
text = "Hey"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
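Qwen-family models are trained on a ChatML-style conversation format, so for chat-style use you will usually get better results by formatting the prompt with `tokenizer.apply_chat_template` rather than passing raw text. As a rough illustration of the layout that template produces (a sketch only; the authoritative template ships with the tokenizer), the structure looks like this:

```python
# Sketch of the ChatML-style prompt layout used by Qwen-family models.
# In practice, prefer tokenizer.apply_chat_template(messages,
# add_generation_prompt=True), which applies the exact template bundled
# with the tokenizer; this function only mimics the general shape.
def build_chatml_prompt(messages):
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Open the assistant turn so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([{"role": "user", "content": "Hey"}])
print(prompt)
```

You can then tokenize `prompt` and call `model.generate` exactly as in the example above.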

## Training Details

This model was created by training a LoRA adapter for language understanding and generation with axolotl, then merging the adapter into the base Qwen/Qwen3-0.6B weights.
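
For reference, a LoRA run like this is typically described to axolotl with a YAML config along these lines. This is a minimal sketch, not the actual training config: the dataset path and all hyperparameter values below are hypothetical placeholders, only the key names follow axolotl's config schema.

```yaml
# Hypothetical axolotl LoRA config sketch (illustrative values only)
base_model: Qwen/Qwen3-0.6B
load_in_8bit: false

adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true

datasets:
  - path: your/dataset          # placeholder
    type: chat_template

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 2.0e-4
optimizer: adamw_torch
output_dir: ./outputs/delphermes-lora
```

After training, axolotl can merge the adapter into the base weights (`python -m axolotl.cli.merge_lora <config>`), producing a standalone checkpoint like this one.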