---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
library_name: peft
tags:
- base_model:adapter:unsloth/Meta-Llama-3.1-8B-bnb-4bit
- lora
- transformers
language:
- en
metrics:
- bleu
- bertscore
- rouge
pipeline_tag: summarization
---

# Model Card

This model pairs the [unsloth/Meta-Llama-3.1-8B-bnb-4bit](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-bnb-4bit) base model with the fine-tuned [Chilliwiddit/Openi-llama3.1-8B-WeightedLoss-small2](https://huggingface.co/Chilliwiddit/Openi-llama3.1-8B-WeightedLoss-small2) LoRA adapter for medical report summarization.
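
A minimal loading sketch using the standard `transformers` + `peft` APIs (the prompt format below is illustrative, not a documented template):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Meta-Llama-3.1-8B-bnb-4bit"
adapter_id = "Chilliwiddit/Openi-llama3.1-8B-WeightedLoss-small2"

# Load the 4-bit base model, then attach the LoRA adapter on top.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative prompt; adjust to the format used during fine-tuning.
prompt = "Summarize the following radiology report:\n<report text>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```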


## Training Details

### Training Data

The adapter was fine-tuned on the [Open-i](https://openi.nlm.nih.gov/) dataset.


#### Training Hyperparameters

- **Training regime:** fp16 mixed precision
- **Learning rate:** 0.0-1
- **Epochs:** 5
- **Loss weighting:** lambda medical weight of 20 and lambda negation weight of 20 (see the sketch below)
- **Concept annotations:** 2nd iteration of the summary medical-concepts file
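
The exact weighted-loss formulation is not documented in this card; the sketch below shows one plausible reading, assuming additive per-token up-weighting of the cross-entropy with hypothetical 0/1 `medical_mask` and `negation_mask` tensors derived from the medical-concepts file:

```python
import torch
import torch.nn.functional as F

def weighted_lm_loss(logits: torch.Tensor, labels: torch.Tensor,
                     medical_mask: torch.Tensor, negation_mask: torch.Tensor,
                     lambda_medical: float = 20.0,
                     lambda_negation: float = 20.0) -> torch.Tensor:
    """Causal-LM cross-entropy with medical and negation tokens up-weighted.
    The mask tensors are hypothetical 0/1 flags aligned with `labels`;
    the real masking scheme is not documented in this card."""
    # Shift so each position predicts the next token, as in causal LM training.
    logits = logits[:, :-1, :].contiguous()
    labels = labels[:, 1:].contiguous()
    med = medical_mask[:, 1:].float()
    neg = negation_mask[:, 1:].float()

    # Per-token loss; ignored positions (label == -100) contribute zero.
    per_token = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        reduction="none",
        ignore_index=-100,
    ).view(labels.shape)

    # Weight 1 everywhere, plus lambda bonuses on flagged tokens.
    weights = 1.0 + lambda_medical * med + lambda_negation * neg
    valid = (labels != -100).float()
    return (per_token * weights * valid).sum() / (weights * valid).sum().clamp(min=1.0)
```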