# wpu_semantic_20

> LoRA adapter uploaded automatically.

## Overview
- **Type:** LoRA adapter (PEFT)
- **Task type:** `CAUSAL_LM`
- **Base model:** `/home/praveen/coreset/outputs/llama_3_1_8b_finetuned`
- **LoRA r:** `8`
- **LoRA alpha:** `16`
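The `r` and `alpha` values above fix the shape and scale of the low-rank update: LoRA learns two small matrices `A` (r × d_in) and `B` (d_out × r) and applies `W' = W + (alpha / r) · B · A`, so with `r = 8` and `alpha = 16` the update is scaled by 2. A minimal NumPy sketch of this arithmetic (the matrix dimensions are illustrative, not taken from the model):

```python
import numpy as np

d_in, d_out, r, alpha = 32, 32, 8, 16  # r and alpha match this adapter
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection (zero-initialized)

scaling = alpha / r                        # 16 / 8 = 2.0
W_effective = W + scaling * (B @ A)        # LoRA-adapted weight

x = rng.standard_normal(d_in)
y = W_effective @ x                        # equals W @ x + scaling * B @ (A @ x)
```

Because `B` starts at zero, the adapter initially leaves the base model's behavior unchanged; training moves `A` and `B` away from that point.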

## Usage
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "coreset-selection/wpu_semantic_20"

# Resolve the base model from the adapter's config.
cfg = PeftConfig.from_pretrained(peft_model_id)
base = cfg.base_model_name_or_path

tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")

# Attach the LoRA adapter weights to the base model.
model = PeftModel.from_pretrained(model, peft_model_id)
```

## Files
- `adapter_config.json`
- `adapter_model.bin` or `adapter_model.safetensors`

## Notes
- This repo contains only the LoRA adapter weights.
- Load it together with the matching base model listed above; the adapter weights are meaningless on their own.

_Uploaded from: `/home/praveen/coreset/outputs/gd_semantic_20_model`_