Matthias Vogler (ayyylemao)
Is your approach also feasible for 2x faster inference?
#2 · 1 reply · opened over 1 year ago by ayyylemao
Considerable speed loss after LoRA fine-tuning
#14 · 14 replies · opened over 1 year ago by ayyylemao
GPU requirement
#7 · 1 reply · opened over 1 year ago by mdeniz1
Transformer issue?
#1 · 5 replies · opened over 1 year ago by jgsmcmahon
The model often enters infinite generation loops
#32 · 5 reactions · 13 replies · opened over 1 year ago by sszymczyk
Llama-3.1-8B generates way too long answers!
#36 · 1 reaction · 3 replies · opened over 1 year ago by ayyylemao
Tokenizer error and/or 'rope_scaling' problem
#35 · 5 replies · opened over 1 year ago by fazayjo
LoRA training OOM with 2x NVIDIA RTX A6000 (2x 48 GB)
#71 · 6 replies · opened over 1 year ago by ayyylemao