# Test LoRA Adapter
This is a test LoRA adapter (randomly initialized, not trained) with customizable target modules, generated by:

```shell
python create_test_embedding_layer.py
```
## Configuration
- Base model: meta-llama/Llama-2-7b-hf
- LoRA rank (r): 8
- LoRA alpha: 16
- Target modules: embed_tokens, lm_head, q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
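A PEFT adapter directory typically pairs its weights with an `adapter_config.json`. The following is a minimal sketch of what this adapter's configuration would look like; the field names follow the PEFT convention and the exact file contents are an assumption, with values taken from the list above:

```python
import json

# Sketch of an adapter_config.json matching the settings on this card
# (PEFT-style field names; exact contents are an assumption).
adapter_config = {
    "peft_type": "LORA",
    "base_model_name_or_path": "meta-llama/Llama-2-7b-hf",
    "r": 8,
    "lora_alpha": 16,
    "target_modules": [
        "embed_tokens", "lm_head",
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
}

# In LoRA, the low-rank update is scaled by alpha / r before being
# added to the base weight, so here the scaling factor is 16 / 8 = 2.0.
scaling = adapter_config["lora_alpha"] / adapter_config["r"]
print(json.dumps(adapter_config, indent=2))
print("scaling:", scaling)
```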
## Weight Shapes
- embed_tokens.lora_A: (8, 32000)
- embed_tokens.lora_B: (4096, 8)
- lm_head.lora_A: (8, 4096)
- lm_head.lora_B: (32000, 8)
- q_proj.lora_A: (8, 4096)
- q_proj.lora_B: (4096, 8)
- k_proj.lora_A: (8, 4096)
- k_proj.lora_B: (4096, 8)
- v_proj.lora_A: (8, 4096)
- v_proj.lora_B: (4096, 8)
- o_proj.lora_A: (8, 4096)
- o_proj.lora_B: (4096, 8)
- gate_proj.lora_A: (8, 4096)
- gate_proj.lora_B: (11008, 8)
- up_proj.lora_A: (8, 4096)
- up_proj.lora_B: (11008, 8)
- down_proj.lora_A: (8, 11008)
- down_proj.lora_B: (4096, 8)
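The shapes above follow directly from the LoRA factorization: for a layer mapping `in_features` to `out_features`, `lora_A` has shape `(r, in_features)` and `lora_B` has shape `(out_features, r)`. A small sketch deriving the whole table from the Llama-2-7B dimensions (hidden size 4096, intermediate size 11008, vocabulary size 32000):

```python
# Derive LoRA weight shapes from the base layer dimensions.
# For a layer mapping in_features -> out_features:
#   lora_A: (r, in_features), lora_B: (out_features, r)
R = 8
HIDDEN, INTERMEDIATE, VOCAB = 4096, 11008, 32000  # Llama-2-7B dims

# (in_features, out_features) per target module; for embed_tokens the
# "input" dimension is the vocabulary size (one row per token id).
layers = {
    "embed_tokens": (VOCAB, HIDDEN),
    "lm_head": (HIDDEN, VOCAB),
    "q_proj": (HIDDEN, HIDDEN),
    "k_proj": (HIDDEN, HIDDEN),
    "v_proj": (HIDDEN, HIDDEN),
    "o_proj": (HIDDEN, HIDDEN),
    "gate_proj": (HIDDEN, INTERMEDIATE),
    "up_proj": (HIDDEN, INTERMEDIATE),
    "down_proj": (INTERMEDIATE, HIDDEN),
}

shapes = {
    name: {"lora_A": (R, fan_in), "lora_B": (fan_out, R)}
    for name, (fan_in, fan_out) in layers.items()
}

for name, s in shapes.items():
    print(f"{name}: lora_A={s['lora_A']}, lora_B={s['lora_B']}")
```

Note that `k_proj` and `v_proj` are full 4096x4096 projections here because Llama-2-7B uses standard multi-head attention rather than grouped-query attention.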
## Usage with SGLang
This adapter contains randomly initialized weights and is intended for testing purposes only (e.g., exercising LoRA loading and serving paths); generated outputs will not be meaningful.
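A sketch of serving the base model with this adapter attached in SGLang; the adapter name `test_lora`, the local path, and the exact flag spelling are assumptions, so check the SGLang documentation for your installed version:

```shell
# Launch an SGLang server with the base model and this LoRA adapter
# registered under the (arbitrary, assumed) name "test_lora".
python -m sglang.launch_server \
  --model-path meta-llama/Llama-2-7b-hf \
  --lora-paths test_lora=/path/to/this/adapter \
  --port 30000

# Select the adapter per request via the lora_path field:
curl http://localhost:30000/generate \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello", "lora_path": "test_lora", "sampling_params": {"max_new_tokens": 8}}'
```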