WaiLwin committed
Commit 952bd9a · verified · 1 parent: 3b3bd44

Upload README.md with huggingface_hub

Files changed (1): README.md (+21 −13)
README.md CHANGED
@@ -1,22 +1,30 @@
  ---
+ library_name: peft
  base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
  tags:
- - text-generation-inference
- - transformers
- - unsloth
- - llama
- - trl
+ - network-configuration
+ - networking
+ - lora
+ - adapter
  license: apache-2.0
- language:
- - en
  ---

- # Uploaded model
+ # Network Configuration Analysis LoRA Adapter

- - **Developed by:** WaiLwin
- - **License:** apache-2.0
- - **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
+ LoRA adapter for network configuration analysis, optimized for memory efficiency.

- This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
+ ## Usage

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+ ```python
+ from peft import PeftModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ base_model = AutoModelForCausalLM.from_pretrained("unsloth/llama-3.2-3b-instruct-bnb-4bit")
+ tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3.2-3b-instruct-bnb-4bit")
+ model = PeftModel.from_pretrained(base_model, "WaiLwin/network-model-adapter")
+ ```
+
+ ## Training Details
+ - Optimized for Google Colab memory constraints
+ - LoRA Rank: 8, Alpha: 8
+ - Max Sequence Length: 1024