Add support for batched generation

#18
Files changed (1)
  1. README.md +40 -12
README.md CHANGED
@@ -4,15 +4,43 @@ This is a replica of Alpaca by Stanford' tatsu
 
  Trained using the original instructions with a minor modification in FSDP mode
 
- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
-
- | Metric              | Value |
- |---------------------|-------|
- | Avg.                | 41.96 |
- | ARC (25-shot)       | 52.3  |
- | HellaSwag (10-shot) | 77.09 |
- | MMLU (5-shot)       | 41.6  |
- | TruthfulQA (0-shot) | 37.58 |
- | Winogrande (5-shot) | 69.46 |
- | GSM8K (5-shot)      | 1.44  |
- | DROP (3-shot)       | 14.23 |
+ # Other versions:
+ 13B: https://huggingface.co/chavinlo/alpaca-13b
+
+ 13B -> GPT4: https://huggingface.co/chavinlo/gpt4-x-alpaca
+
+ ## Compute Used
+ Trained on 4xA100s for 6 hours
+ Donated by redmond.ai
+
+ NO LORA HAS BEEN USED; this is a natively finetuned model, hence "alpaca-native"
+
+ If you are interested in more llama-based models, you can check out my profile or search for other models at https://huggingface.co/models?other=llama
+
+ This MIGHT be a quantized version of this model, but be careful: https://boards.4channel.org/g/thread/92173062#p92182396
+
+ CONFIGURATION (default except fsdp):
+
+ ```shell
+ torchrun --nproc_per_node=4 --master_port=3045 train.py \
+ --model_name_or_path /workspace/llama-7b-hf \
+ --data_path ./alpaca_data.json \
+ --bf16 True \
+ --output_dir /workspace/output \
+ --num_train_epochs 3 \
+ --per_device_train_batch_size 4 \
+ --per_device_eval_batch_size 4 \
+ --gradient_accumulation_steps 8 \
+ --evaluation_strategy "no" \
+ --save_strategy "steps" \
+ --save_steps 200 \
+ --save_total_limit 1 \
+ --learning_rate 2e-5 \
+ --weight_decay 0. \
+ --warmup_ratio 0.03 \
+ --lr_scheduler_type "cosine" \
+ --logging_steps 1 \
+ --fsdp "shard_grad_op auto_wrap" \
+ --fsdp_transformer_layer_cls_to_wrap 'LLaMADecoderLayer' \
+ --tf32 True --report_to="wandb"
+ ```
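
A note on the `--fsdp "shard_grad_op auto_wrap"` choice above: in PyTorch FSDP, `shard_grad_op` shards only gradients and optimizer state across the 4 GPUs while each device keeps a full copy of the parameters, which is lighter on communication than `full_shard` at the cost of more memory per GPU.

Since the PR title mentions batched generation, a minimal sketch of batched inference with this model via Hugging Face Transformers follows. The model id `chavinlo/alpaca-native` and the Alpaca-style prompt template are assumptions (neither is spelled out in this diff), not part of the change itself:

```python
# Hedged sketch: batched generation with a decoder-only LLaMA model.
# Assumptions: the model is published as "chavinlo/alpaca-native" and
# follows the Alpaca instruction/response prompt template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chavinlo/alpaca-native"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Decoder-only models need left padding for batched generation so that
# every prompt ends at the same position; LLaMA tokenizers also ship
# without a pad token, so reuse EOS.
tokenizer.padding_side = "left"
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

prompts = [
    "### Instruction:\nName three primary colors.\n\n### Response:\n",
    "### Instruction:\nWrite one sentence about llamas.\n\n### Response:\n",
]
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=False,
        pad_token_id=tokenizer.pad_token_id,
    )

for text in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(text)
```

Left padding is the key detail: generation continues from the final position of each sequence, so with right padding the model would be asked to continue from pad tokens rather than from the end of each prompt.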