Upload LoRA adapter trained with Tinker
Base model: meta-llama/Llama-3.1-8B
Tinker checkpoint: tinker://7f4705e7-551f-5133-b4bb-33444c0c405b:train:0/sampler_weights/test-push-to-hub
Uploaded: 2025-12-28T22:36:55.297557
README.md CHANGED

@@ -5,6 +5,7 @@ tags:
 - tinker
 - lora
 - sl
+- math
 license: llama3.1
 ---
 
@@ -39,16 +40,22 @@ tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")
 model = PeftModel.from_pretrained(base_model, "joschu0/tinker-llama-lora-test")
 ```
 
-### With Tinker (for
+### With Tinker (for sampling/inference)
+
+This checkpoint is also available on Tinker for high-throughput sampling:
 
 ```python
 import tinker
 
 sc = tinker.ServiceClient()
-
-
+sampling_client = sc.create_sampling_client("tinker://7f4705e7-551f-5133-b4bb-33444c0c405b:train:0/sampler_weights/test-push-to-hub")
+result = sampling_client.sample(...)
 ```
 
+**Tinker path:** `tinker://7f4705e7-551f-5133-b4bb-33444c0c405b:train:0/sampler_weights/test-push-to-hub`
+
+> **Note:** This is a sampler checkpoint and can only be used for inference, not for continued training.
+
 ## Training
 
 This model was trained using the Tinker API. For more information about training