NYUAD-ComNets committed (verified)
Commit eea982c · Parent(s): be4ba9c

Update README.md

Files changed (1): README.md +8 -0
README.md CHANGED
````diff
@@ -113,6 +113,14 @@ for k in range(606):
 print(pred)
 ```
 
+
+
+
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/656ee240c5ac4733e9ccdd0e/jRSB8JxqqoV-2E97N5QQM.png)
+
+
+
 We used Low-Rank Adaptation (LoRA) as the Parameter-Efficient Fine-Tuning (PEFT) method for fine-tuning utilizing the unsloth framework.
 
 The hyper-parameters of Llama 3.2-11B are as follows:
````
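The LoRA approach mentioned in the changed README can be sketched in plain Python. This is a hypothetical toy illustration of the low-rank idea only, not the unsloth API: the matrix sizes, `alpha`, and `r` values are made-up examples. LoRA keeps the pretrained weight `W` frozen and learns a low-rank update `B @ A` (with rank `r` much smaller than the weight dimensions), scaled by `alpha / r`.

```python
# Toy sketch of the LoRA idea (hypothetical example, not the unsloth API).
# The frozen weight W is augmented with a trainable low-rank update B @ A,
# scaled by alpha / r, so only r * (d + k) parameters are trained instead of d * k.

def matmul(X, Y):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_forward(x, W, A, B, alpha=2, r=2):
    """Compute y = x @ (W + (alpha / r) * B @ A).

    W stays frozen; only A (r x k) and B (d x r) would be trained.
    """
    scale = alpha / r
    BA = matmul(B, A)  # d x k low-rank update
    W_eff = [[w + scale * u for w, u in zip(w_row, u_row)]
             for w_row, u_row in zip(W, BA)]
    return matmul(x, W_eff)
```

Because `B` is typically initialized to zero, the adapted model starts out exactly equal to the base model (`B @ A` is zero), and fine-tuning only moves the output through the small `A` and `B` matrices.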