name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
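The schema above matches an Arrow/Parquet-style export of the comment dump. A minimal sketch for loading and inspecting such a file with pyarrow (the filename `comments.parquet` is a placeholder, not part of the dump):

```python
import pyarrow.parquet as pq

# "comments.parquet" is a hypothetical filename for this dump.
table = pq.read_table("comments.parquet")

# Should print the fields listed above: string bodies, int64 scores,
# microsecond timestamps for created/edited, bools for the flags.
print(table.schema)

# Convert to pandas for quick exploration, e.g. the top-scored comments.
df = table.to_pandas()
print(df.sort_values("score", ascending=False)[["author", "score", "body"]].head())
```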
t1_jdbnm9y
[removed]
1
0
2023-03-23T06:15:01
[deleted]
true
null
0
jdbnm9y
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdbnm9y/
false
1
t1_jdbndzf
Yes but no success. I'll wait a bit for now.
1
0
2023-03-23T06:11:51
Nevysha
false
null
0
jdbndzf
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdbndzf/
false
1
t1_jdbkm4z
I was just working on quantizing the 30B LLaMA to 4-bit, but someone else released a 30B 4-bit so I stopped it. As for the LoRA, the 7B trained on a 4090 16GB in 5 hrs (https://github.com/tloen/alpaca-lora) and almost ran out of memory. I've read the 30B has been trained on a single A100 with 40GB.
1
0
2023-03-23T05:36:27
wind_dude
false
null
0
jdbkm4z
false
/r/LocalLLaMA/comments/11xbu7d/how_to_do_llama_30b_4bit_finetuning/jdbkm4z/
false
1
t1_jdbjpfo
in the same boat as you, friend. LLaMA 13b int4 worked immediately for me (after following all instructions step-by-step for WSL) but really wanted to give the Alpaca models a go in oobabooga. Ran into the same exact issues as you. Only success I've had thus far with Alpaca is with the ggml alpaca 4bit .bin files for a...
3
0
2023-03-23T05:25:23
jetpackswasno
false
null
0
jdbjpfo
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdbjpfo/
false
3
t1_jdbje3e
What specs do you need to create a LoRA for the 30B? I have a 3090 + 64GB RAM. I'd like to train a LoRA for my own writing, preferably on the 30B model.
1
0
2023-03-23T05:21:45
Pan000
false
null
0
jdbje3e
false
/r/LocalLLaMA/comments/11xbu7d/how_to_do_llama_30b_4bit_finetuning/jdbje3e/
false
1
t1_jdbfx05
I've been poking at chavinlo's alpaca and it's amazing how much more coherent the output is compared to untuned llama.
5
0
2023-03-23T04:43:28
Enturbulated
false
null
0
jdbfx05
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jdbfx05/
false
5
t1_jdbffw5
All good friend! I actually don't post much, so I was like "Damnnnnn" when I saw your reply. lol Then I was like, "fuck me, how did I miss that when I tried installing this fucker for like 6 hours?"
1
0
2023-03-23T04:38:40
monkmartinez
false
null
0
jdbffw5
false
/r/LocalLLaMA/comments/11yzosa/your_loadin8bit_error_may_be_due_to_nonsupported/jdbffw5/
false
1
t1_jdbf0ny
In the title, I also think it would be best if you clarify this is a separate bitsandbytes fix for older GPUs.
1
0
2023-03-23T04:34:23
Civil_Collection7267
false
null
0
jdbf0ny
false
/r/LocalLLaMA/comments/11yzosa/your_loadin8bit_error_may_be_due_to_nonsupported/jdbf0ny/
false
1
t1_jdbdyzw
I just reread the post. Sorry about that. I skimmed over this very quickly and assumed it was the same libbitsandbytes_cuda116.dll patch. If you'd like, please repost this so it can be pushed back to the top of the sub. I think that would be better than unremoving this existing post.
1
0
2023-03-23T04:23:54
Civil_Collection7267
false
null
0
jdbdyzw
false
/r/LocalLLaMA/comments/11yzosa/your_loadin8bit_error_may_be_due_to_nonsupported/jdbdyzw/
false
1
t1_jdb1av4
Not trying to be rude, but it is not in the pinned guide. There are tens, possibly hundreds of issues reported that could be helped by this. Hope you reconsider and move it to the top.
1
0
2023-03-23T02:34:16
monkmartinez
false
null
0
jdb1av4
false
/r/LocalLLaMA/comments/11yzosa/your_loadin8bit_error_may_be_due_to_nonsupported/jdb1av4/
false
1
t1_jdb132w
This has been answered in the guide; the solution is [here](https://github.com/oobabooga/text-generation-webui/issues/416#issuecomment-1475078571) from oobabooga. This [one](https://github.com/oobabooga/text-generation-webui/issues/416#issuecomment-1475186467) worked for me too. If you have more questions on this plea...
1
0
2023-03-23T02:32:36
Civil_Collection7267
false
null
0
jdb132w
false
/r/LocalLLaMA/comments/11z5g9x/cuda_frustration/jdb132w/
true
1
t1_jdazde5
Using his settings for 4GB, I was able to run text generation, no problems so far. I need to do more testing, but it seems promising. Baseline is 3.1GB. With streaming it is chunky, but I do not know if `--no-stream` will push it over the edge. With CAI-CHAT, using --no-stream pushes it over to OOM very qui...
1
0
2023-03-23T02:19:26
SlavaSobov
false
2023-03-23T02:27:52
0
jdazde5
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdazde5/
false
1
t1_jdaxqtv
check the models directory part here for models: [https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/) you don't need the LoRA, and the file format is specified above. Alpaca is for havi...
1
0
2023-03-23T02:07:15
Civil_Collection7267
false
null
0
jdaxqtv
false
/r/LocalLLaMA/comments/11yycky/trying_to_get_continuous_conversations/jdaxqtv/
false
1
t1_jdanl08
1. Ah, that's how my models folder is supposed to be laid out. Good to know. I'll keep that in mind for any future models I download. I see now that when you throw the `--gptq-bits` flag, it looks for a model that has the correct bits in the name. Explains why it was calling for the `4bit-4bit` model now. 2. Yeah, I r...
2
0
2023-03-23T00:52:18
remghoost7
false
null
0
jdanl08
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdanl08/
false
2
t1_jdanhlf
Thanks for clarifying. My understanding was that it was removed to free up space on the PCB. When announcing the retirement of NVLink, Jensen pointed to the rollout of PCIe 5.0, noting that it is sufficiently fast: https://www.windowscentral.com/hardware/computers-desktops/nvidia-kills-off-nvlink-on-rtx-4090 . Look...
3
0
2023-03-23T00:51:37
D3smond_d3kk3r
false
null
0
jdanhlf
false
/r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/jdanhlf/
false
3
t1_jdaeutn
Two notes: 1. Your directory should look something like this; for this example, you would run `python server.py --model alpaca-30b --gptq-model-type llama --gptq-bits 4` for it to work correctly: text-generation-webui/models/ -alpaca-30b --alpaca-30b-4bit.pt --tokenizer_config.json (etc.) 2. Looking at...
2
0
2023-03-22T23:50:16
Technical_Leather949
false
null
0
jdaeutn
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdaeutn/
false
2
t1_jdacyby
>I tried both fixes advised here; it did not help me. I'd like to confirm, you tried both suggested solutions as seen here in issues [#332](https://github.com/oobabooga/text-generation-webui/issues/332#issuecomment-1475425198) and [#392](https://github.com/oobabooga/text-generation-webui/issues/392#issuecomment-14748...
1
0
2023-03-22T23:36:40
Technical_Leather949
false
null
0
jdacyby
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdacyby/
false
1
t1_jdac81v
You have to apply the GPTQ fix as seen in the [pinned comment](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/comment/jcwhu96/?utm_source=share&utm_medium=web2x&context=3) and the note in the guide: >For 4-bit usage, a recent update to GPTQ-for-LLaMA has made it necessary to change to a previous comm...
1
0
2023-03-22T23:31:25
Technical_Leather949
false
null
0
jdac81v
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdac81v/
false
1
t1_jdaaxae
Thanks! Curious here too. I think the new Jetson Orin Nano would be better, with its 8GB of unified RAM and more CUDA/Tensor cores, but if the Raspberry Pi can run LLaMA, then it should be workable on the older Nano. If the CUDA cores can be used on the older Nano, that's even better, but the RAM is the limit for that...
2
0
2023-03-22T23:22:44
SlavaSobov
false
null
0
jdaaxae
false
/r/LocalLLaMA/comments/11yi0bl/build_llamacpp_on_jetson_nano_2gb/jdaaxae/
false
2
t1_jda9v90
Thanks for the tip! But this is already in the pinned guide.
1
0
2023-03-22T23:15:46
Civil_Collection7267
false
null
0
jda9v90
false
/r/LocalLLaMA/comments/11yzosa/your_loadin8bit_error_may_be_due_to_nonsupported/jda9v90/
true
1
t1_jda9nw3
For what it's worth, I'm really curious to find out how well it works out. I've always been curious about the nano.
1
0
2023-03-22T23:14:20
toothpastespiders
false
null
0
jda9nw3
false
/r/LocalLLaMA/comments/11yi0bl/build_llamacpp_on_jetson_nano_2gb/jda9nw3/
false
1
t1_jda4qfi
Ok, so installing it was fairly straightforward, but the part that I'm stuck on is the models now. I already have the llama and alpaca models in various sizes, but should I get the LoRA one too? And if so, what is the file format? Is it the quantized .bin format like I used with alpaca? I see instructio...
2
0
2023-03-22T22:39:40
spanielrassler
false
2023-03-22T22:50:58
0
jda4qfi
false
/r/LocalLLaMA/comments/11yycky/trying_to_get_continuous_conversations/jda4qfi/
false
2
t1_jda4mtl
Heyo. These seem to be the main instructions for running this GitHub repo (and the only instructions I've found to work) so I figured I'd ask this question here. I don't want to submit a GitHub issue because I believe it's my error, not the repo's. I'm looking to run the [ozcur/alpaca-native-4bit](https://huggingface....
3
0
2023-03-22T22:38:57
remghoost7
false
null
0
jda4mtl
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jda4mtl/
false
3
t1_jda2o33
Thanks so much for your response -- I'll give it a try!!
1
0
2023-03-22T22:25:08
spanielrassler
false
null
0
jda2o33
false
/r/LocalLLaMA/comments/11yycky/trying_to_get_continuous_conversations/jda2o33/
false
1
t1_jda0mlx
I tried alpaca.cpp, but although nice and minimal, it's not really easy for it to be conversational. It's better to set this up: [https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) just follow the guide.. not sure how it would work on an M2 but there is the option of ...
3
0
2023-03-22T22:11:04
megadonkeyx
false
null
0
jda0mlx
false
/r/LocalLLaMA/comments/11yycky/trying_to_get_continuous_conversations/jda0mlx/
false
3
t1_jd9m6jw
😂 Then you have the talking washing machine. Do you really want the appliance that knows your dirty laundry to talk? 😜
2
0
2023-03-22T20:35:31
SlavaSobov
false
null
0
jd9m6jw
false
/r/LocalLLaMA/comments/11yi0bl/build_llamacpp_on_jetson_nano_2gb/jd9m6jw/
false
2
t1_jd9lufa
After I try to start with: `python server.py --load-in-4bit --model llama-7b-hf` I always get: Loading llama-7b-hf... Traceback (most recent call last): File "D:\ki\llama\text-generation-webui\server.py", line 243, in <module> shared.model, shared.tokenizer = l...
1
0
2023-03-22T20:33:23
TomFlatterhand
false
null
0
jd9lufa
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd9lufa/
false
1
t1_jd9fjcb
Meh, I'll wait till someone figures a way to run it on a washing machine.
3
0
2023-03-22T19:53:25
tyras_
false
null
0
jd9fjcb
false
/r/LocalLLaMA/comments/11yi0bl/build_llamacpp_on_jetson_nano_2gb/jd9fjcb/
false
3
t1_jd960wl
What does "LoRA" mean?
5
0
2023-03-22T18:53:24
petasisg
false
null
0
jd960wl
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jd960wl/
false
5
t1_jd90vy0
Can't get the LoRA to load in WSL sadly. `per_gpu = module_sizes[""] // (num_devices - 1 if low_zero else num_devices)` `ZeroDivisionError: integer division or modulo by zero` I tried both fixes advised here; it did not help me. If I run on CPU only it does not crash, but it seems to load forever.
1
0
2023-03-22T18:21:17
Nevysha
false
null
0
jd90vy0
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd90vy0/
false
1
t1_jd8xre1
I'm trying to think of where I heard that. Now that I'm thinking about it, it might be that NVLink made things simpler by handling the memory management, but that it's still possible via PCIe. I might be completely wrong... I really don't know much about it.
1
0
2023-03-22T18:01:35
ijunk
false
null
0
jd8xre1
false
/r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/jd8xre1/
false
1
t1_jd8r2oy
Just doing some formatting for legibility: >Yes so, depending on the format of the model the purpose is different: >- If you want to run it in CPU mode, the ending format you want is *ggml-...q4_0.bin* >- For the people running it in 16-bit mode, it would be *f16* there at the end. >- If it ends with *.pt* th...
2
0
2023-03-22T17:19:58
c4r_guy
false
null
0
jd8r2oy
false
/r/LocalLLaMA/comments/11x4v3c/how_long_does_it_take_to_get_access/jd8r2oy/
false
2
t1_jd8njol
Style can be maintained using a pre-prompt (simply part of the prompt that is sent to SD with every text). Characters/places are not consistent most of the time.
2
0
2023-03-22T16:57:48
vaidas-maciulis
false
null
0
jd8njol
false
/r/LocalLLaMA/comments/11wwwjq/graphic_text_adventure_game_locally_with_llama/jd8njol/
false
2
t1_jd8n7x4
Basically it's a custom character prompt to do the adventure part, an SD API extension for LLaMA that was modified to generate an image for every text, SD run with the --api option, and two GPUs used.
2
0
2023-03-22T16:55:44
vaidas-maciulis
false
null
0
jd8n7x4
false
/r/LocalLLaMA/comments/11wwwjq/graphic_text_adventure_game_locally_with_llama/jd8n7x4/
false
2
t1_jd8mcyf
You either need to create a 30B Alpaca and then quantize, or run a LoRA on a quantized LLaMA 4-bit; currently working on the latter, just quantizing the LLaMA 30B now. I think the LoRAs are more interesting simply because they let you switch between tasks. Haven't tried it yet, [https://github.com/johnsmith0031/...
1
0
2023-03-22T16:50:30
wind_dude
false
2023-03-22T16:56:20
0
jd8mcyf
false
/r/LocalLLaMA/comments/11xbu7d/how_to_do_llama_30b_4bit_finetuning/jd8mcyf/
false
1
t1_jd8ik3c
If this is the case, I would buy two 4090 in a heartbeat. Waiting for confirmation now. 😁
1
0
2023-03-22T16:26:42
RabbitHole32
false
null
0
jd8ik3c
false
/r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/jd8ik3c/
false
1
t1_jd82us2
Alpaca has a very specific instruction format it was trained with. `alpaca.cpp` will fix up prompts to use that format. `llama.cpp` just gives the model the prompt you wrote. Stuff like differences in how newlines are handled could have an effect also.
4
0
2023-03-22T14:46:43
KerfuffleV2
false
null
0
jd82us2
false
/r/LocalLLaMA/comments/11yhxjm/llamacpp_vs_alpacacpp_same_model_different_results/jd82us2/
false
4
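For reference, a sketch of the Stanford Alpaca instruction template the comment above is describing, reconstructed from the public training recipe; the exact whitespace may differ from what alpaca.cpp injects:

```python
# Stanford Alpaca's prompt template (no-input variant); alpaca.cpp wraps
# your text in something like this, while llama.cpp does not.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="Explain what a LoRA is.")
print(prompt)
```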
t1_jd81obr
Performance decreases after RLHF, but the benefit is friendlier behavior and a more natural speaking interface. Source: OpenAI. Second, there is a probability distribution, i.e., the temperature setting. Run more tests than 1 each.
3
0
2023-03-22T14:38:51
memberjan6
false
null
0
jd81obr
false
/r/LocalLLaMA/comments/11yhxjm/llamacpp_vs_alpacacpp_same_model_different_results/jd81obr/
false
3
t1_jd7vlh1
I remember someone saying that it doesn't matter because you can just send the data across PCIe... but that's all I know about it.
2
0
2023-03-22T13:57:20
ijunk
false
null
0
jd7vlh1
false
/r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/jd7vlh1/
false
2
t1_jd7s64m
Use this: [https://cocktailpeanut.github.io/dalai/#/](https://cocktailpeanut.github.io/dalai/#/)
2
0
2023-03-22T13:32:24
oliverban
false
null
0
jd7s64m
false
/r/LocalLLaMA/comments/11x4v3c/how_long_does_it_take_to_get_access/jd7s64m/
false
2
t1_jd7r2cx
Nvlink does not work anymore on the 40 series (please Google the source). I'm not aware of any alternative to Nvlink that can be used, although I'm not ruling out the possibility that I'm wrong.
4
0
2023-03-22T13:24:02
RabbitHole32
false
null
0
jd7r2cx
false
/r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/jd7r2cx/
false
4
t1_jd7p6pj
Yes, not expecting a miracle; it will definitely need a swap file to compensate for the RAM. :P When the Raspberry Pi was running LLaMA, someone asked, "Can it run on the Jetson Nano?" so I thought, "Well, why not try?" We only have the 2GB model, so that was the only option to try. :P
4
0
2023-03-22T13:09:39
SlavaSobov
false
null
0
jd7p6pj
false
/r/LocalLLaMA/comments/11yi0bl/build_llamacpp_on_jetson_nano_2gb/jd7p6pj/
false
4
t1_jd7oqby
Considering the smallest version of the llama model requires 4GB of RAM to load? Best case result will see your LLaMA looking a bit hobbled. LLaMEd, if you will. Best of luck!
2
0
2023-03-22T13:06:07
Enturbulated
false
null
0
jd7oqby
false
/r/LocalLLaMA/comments/11yi0bl/build_llamacpp_on_jetson_nano_2gb/jd7oqby/
false
2
t1_jd7nbef
Running in 8bit, it consumes 9.2GB on my system.
2
0
2023-03-22T12:54:49
antialtinian
false
null
0
jd7nbef
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jd7nbef/
false
2
t1_jd7j8y2
it's a finetuned 7B model. unlike the LoRA reproductions, it's a real replication using the same Stanford dataset and training code, trained on 4xA100s
5
0
2023-03-22T12:20:25
Civil_Collection7267
false
null
0
jd7j8y2
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jd7j8y2/
false
5
t1_jd7j2bg
Haha, no kidding. That sure brought me back
1
0
2023-03-22T12:18:46
D3smond_d3kk3r
false
null
0
jd7j2bg
false
/r/LocalLLaMA/comments/11xiwes/ati_gpu/jd7j2bg/
false
1
t1_jd7ixus
Can you point me to the source on the multiple 4090s being no bueno? Was considering picking up a couple to get more serious about some local models, but maybe I should stick to the RTX 3090s?
1
0
2023-03-22T12:17:41
D3smond_d3kk3r
false
null
0
jd7ixus
false
/r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/jd7ixus/
false
1
t1_jd7g196
>chavinlo's alpaca-native Is this a 7B model? The files are quite large by the looks of it, and I only have a 12GB 3060.
3
0
2023-03-22T11:50:46
dangernoodle01
false
null
0
jd7g196
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jd7g196/
false
3
t1_jd75fzn
Some Stable Diffusion LoRAs are 1 MB and others are 77 MB.
3
0
2023-03-22T09:45:53
nizus1
false
null
0
jd75fzn
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jd75fzn/
false
3
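The spread in LoRA file sizes mostly comes down to the adapter rank and how many weight matrices are adapted. A back-of-the-envelope sketch; the matrix counts and dimensions below are illustrative assumptions, not figures from the thread:

```python
# A LoRA stores two low-rank factors per adapted d_out x d_in weight:
# A (r x d_in) and B (d_out x r), so size scales linearly with rank r.
def lora_bytes(n_matrices: int, d_in: int, d_out: int, rank: int,
               bytes_per_param: int = 2) -> int:  # 2 bytes = fp16
    return n_matrices * (rank * d_in + d_out * rank) * bytes_per_param

# Illustrative: 64 adapted 4096x4096 matrices (e.g. two projections in
# each of 32 transformer blocks).
print(lora_bytes(64, 4096, 4096, rank=4) / 1e6, "MB")   # ~4 MB
print(lora_bytes(64, 4096, 4096, rank=64) / 1e6, "MB")  # ~67 MB
```

Which is roughly the 1 MB vs. 77 MB spread the comment above observes: low-rank adapters on a few layers stay tiny, high-rank adapters on many layers do not.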
t1_jd75bii
You might check out the cheap 32 GB Tesla cards on eBay for under $300 as well. Obviously they'll be slower than 3090s but the cost per GB is so much lower.
5
0
2023-03-22T09:44:10
nizus1
false
null
0
jd75bii
false
/r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/jd75bii/
false
5
t1_jd6xzvd
That is awesome. Did you find a way to maintain the appearance of characters and the drawing style across pictures?
3
0
2023-03-22T07:58:17
countalabs
false
null
0
jd6xzvd
false
/r/LocalLLaMA/comments/11wwwjq/graphic_text_adventure_game_locally_with_llama/jd6xzvd/
false
3
t1_jd6vgp2
AMD, the third world of GPU support.
3
0
2023-03-22T07:22:16
megadonkeyx
false
null
0
jd6vgp2
false
/r/LocalLLaMA/comments/11xiwes/ati_gpu/jd6vgp2/
false
3
t1_jd6q1py
Explanation on how? Or links?
5
0
2023-03-22T06:08:05
nero10578
false
null
0
jd6q1py
false
/r/LocalLLaMA/comments/11wwwjq/graphic_text_adventure_game_locally_with_llama/jd6q1py/
false
5
t1_jd6jugc
Where can one find the full fine tunes?
3
0
2023-03-22T04:52:08
monkmartinez
false
null
0
jd6jugc
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jd6jugc/
false
3
t1_jd5t5z7
no. for more on finetuning look at: [https://github.com/oobabooga/text-generation-webui/wiki/Using-LoRAs#training-a-lora](https://github.com/oobabooga/text-generation-webui/wiki/Using-LoRAs#training-a-lora) or: [https://github.com/nebuly-ai/nebullvm/blob/main/apps/accelerate/chatllama/README.md#hardware-requirements](...
1
0
2023-03-22T01:05:02
Civil_Collection7267
false
null
0
jd5t5z7
false
/r/LocalLLaMA/comments/11y0kqz/can_i_finetune_llama_7b_alpaca_or_gpt_neo_x_125m/jd5t5z7/
true
1
t1_jd5qo9u
`python server.py --model llama-7b-hf --gptq-bits 4 --gptq-pre-layer 20 --auto-devices --disk --cai-chat --no-stream --gpu-memory 3` That worked for about 4 exchanges. ^^; Now I am trying different combinations.
1
0
2023-03-22T00:46:31
SlavaSobov
false
null
0
jd5qo9u
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd5qo9u/
false
1
t1_jd5mj0z
[removed]
1
0
2023-03-22T00:16:20
[deleted]
true
null
0
jd5mj0z
false
/r/LocalLLaMA/comments/11xbu7d/how_to_do_llama_30b_4bit_finetuning/jd5mj0z/
false
1
t1_jd5mi08
Fine-tuning usually requires additional memory because it needs to keep lots of state for the model DAG in memory when doing backpropagation. LLaMA is quantized to 4-bit with [GPT-Q](https://arxiv.org/abs/2210.17323), which is a post-training quantization technique that (AFAIK) does not lend itself to supporting fine-...
4
0
2023-03-22T00:16:08
singularperturbation
false
null
0
jd5mi08
false
/r/LocalLLaMA/comments/11xbu7d/how_to_do_llama_30b_4bit_finetuning/jd5mi08/
false
4
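To put rough numbers on that, a sketch of the usual fp16 + Adam accounting for a 7B model; these are rule-of-thumb figures, not values from the comment:

```python
# Rough memory budget: inference only needs the weights, while full
# fine-tuning with Adam also keeps gradients and two fp32 moments.
params = 7e9
weights  = params * 2      # fp16 weights            ~14 GB
grads    = params * 2      # fp16 gradients          ~14 GB
adam_m_v = params * 4 * 2  # fp32 Adam moments m, v  ~56 GB

print(f"inference ~{weights / 1e9:.0f} GB")                       # ~14 GB
print(f"training  ~{(weights + grads + adam_m_v) / 1e9:.0f} GB")  # ~84 GB
# Activations add more on top, which is why a 4-bit GPTQ model that
# *runs* in a few GB still can't be fine-tuned directly.
```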
t1_jd5lk65
I'm just running in text-generation-webui in 8bit mode.
2
0
2023-03-22T00:09:13
antialtinian
false
null
0
jd5lk65
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jd5lk65/
false
2
t1_jd5k44t
Have you run chavinlo's with one of the c++ implementations? If so, how?
2
0
2023-03-21T23:58:32
uses-tabs-AND-spaces
false
null
0
jd5k44t
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jd5k44t/
false
2
t1_jd5jtz7
I'm interested in the tokens per second. llama.cpp, for example, gives you about 2 tokens/s on an M2 processor with the 65B model.
2
0
2023-03-21T23:56:35
RabbitHole32
false
null
0
jd5jtz7
false
/r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/jd5jtz7/
false
2
t1_jd5j0os
I am using the int4 model with the oobabooga repo. It is working, but it is limited in the number of generated tokens, so you cannot use long preconditioned prompts.
2
0
2023-03-21T23:50:59
polawiaczperel
false
null
0
jd5j0os
false
/r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/jd5j0os/
false
2
t1_jd57qp6
>KeyError: 'model.layers.25.self_attn.rotary_emb.cos_cached' Try experimenting with explicit memory control [as seen here](https://github.com/oobabooga/text-generation-webui/wiki/Low-VRAM-guide#split-the-model-across-your-gpu-and-cpu) and trying it both with and without --no-cache passed. There's also some extra...
2
0
2023-03-21T22:32:39
Technical_Leather949
false
null
0
jd57qp6
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd57qp6/
false
2
t1_jd52mga
From my own testing: Native Linux > WSL > Windows. Windows was frustratingly slow, at least when compared to the others. WSL ran moderately faster than Windows, improving that experience. Native Linux is blazing fast. When using 4-bit 30B LLaMA on high-end hardware, for example, it can generate faster than the text ...
2
0
2023-03-21T21:57:47
Technical_Leather949
false
null
0
jd52mga
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd52mga/
false
2
t1_jd5240r
You can leave that at 1. If you want to know more about what it does, along with the other parameters, check the [documentation here](https://huggingface.co/docs/transformers/main_classes/text_generation): >**typical_p** (float, *optional*, defaults to 1.0) — Local typicality measures how similar the conditional p...
2
0
2023-03-21T21:54:24
Technical_Leather949
false
null
0
jd5240r
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd5240r/
false
2
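In code, `typical_p` is just another sampling knob on the transformers `generate()` call; a minimal sketch, where the model id is a placeholder rather than one named in the thread:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model id; any causal LM works the same way here.
tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")

inputs = tok("The llama walked into", return_tensors="pt")
out = model.generate(
    **inputs,
    do_sample=True,    # typical_p only matters when sampling
    typical_p=0.9,     # < 1.0 enables locally typical sampling
    max_new_tokens=50,
)
print(tok.decode(out[0], skip_special_tokens=True))
```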
t1_jd51piv
Alpaca was trained on GPT-3. Following the same method it should be trivial to train it on GPT-4 and make something even better. Right? Not sure if it will cost more, somebody can do the math
6
0
2023-03-21T21:51:49
gunbladezero
false
null
0
jd51piv
false
/r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jd51piv/
false
6
t1_jd503ei
I don't think it's worth it. The smaller models are powerful enough for most purposes. Did you try Point Alpaca? [https://github.com/pointnetwork/point-alpaca](https://github.com/pointnetwork/point-alpaca)
7
0
2023-03-21T21:41:26
sswam
false
null
0
jd503ei
false
/r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/jd503ei/
false
7
t1_jd4zyqd
It's not a LoRA, but this is the best I've tried: [https://github.com/pointnetwork/point-alpaca](https://github.com/pointnetwork/point-alpaca) It requires a GPU.
3
0
2023-03-21T21:40:36
sswam
false
null
0
jd4zyqd
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jd4zyqd/
false
3
t1_jd4utml
Did you find a solution?
2
0
2023-03-21T21:07:53
Nondzu
false
null
0
jd4utml
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd4utml/
false
2
t1_jd4r0tq
We're all going to go broke doing this, but I think it's time to buy anyway. Once these tools become more accessible to the masses, cards with high VRAM are going to get scarce, and NVIDIA has a monopoly on this market right now.
6
0
2023-03-21T20:44:11
friedrichvonschiller
false
null
0
jd4r0tq
false
/r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/jd4r0tq/
false
6
t1_jd4qfxn
I was thinking about whether it makes sense to use two 3090 for the 65b model. As far as I know, multiple 4090 don't work. Can you give us an idea how fast the 65b model is with your setup?
1
0
2023-03-21T20:40:35
RabbitHole32
false
null
0
jd4qfxn
false
/r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/jd4qfxn/
false
1
t1_jd4q3fn
What a blast from the past. Good old days.
3
0
2023-03-21T20:38:24
RabbitHole32
false
null
0
jd4q3fn
false
/r/LocalLLaMA/comments/11xiwes/ati_gpu/jd4q3fn/
false
3
t1_jd4lpu2
Gotcha thanks!
2
0
2023-03-21T20:11:24
-becausereasons-
false
null
0
jd4lpu2
false
/r/LocalLLaMA/comments/11vbq6r/13b_llama_alpaca_loras_available_on_hugging_face/jd4lpu2/
false
2
t1_jd4kvcp
You can do either or both -- these are general guides for everyone, so we can't write everything. Just download the 13B LoRA and put it in your loras directory with the 7B, and they'll both be available in the Web UI. Have fun!
3
0
2023-03-21T20:06:10
friedrichvonschiller
false
null
0
jd4kvcp
false
/r/LocalLLaMA/comments/11vbq6r/13b_llama_alpaca_loras_available_on_hugging_face/jd4kvcp/
false
3
t1_jd4ksqz
While on this topic, why are the weights in samwit's repos so small? Their alpaca7B-lora has an 8.43 MB file.
5
0
2023-03-21T20:05:44
_wsgeorge
false
null
0
jd4ksqz
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jd4ksqz/
false
5
t1_jd4dsw1
Great instructions! Does it run any better on native Linux vs. Windows vs. WSL?
1
0
2023-03-21T19:21:26
ehbrah
false
null
0
jd4dsw1
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd4dsw1/
false
1
t1_jd4a329
Cool, but I don't get it. The instructions you pointed to show how to install the 7b lora not the 13b...
2
0
2023-03-21T18:57:37
-becausereasons-
false
null
0
jd4a329
false
/r/LocalLLaMA/comments/11vbq6r/13b_llama_alpaca_loras_available_on_hugging_face/jd4a329/
false
2
t1_jd46lff
Didn't work for me at all.
1
0
2023-03-21T18:35:48
SDGenius
false
null
0
jd46lff
false
/r/LocalLLaMA/comments/11wndc9/toms_hardware_wrote_a_guide_to_running_llama/jd46lff/
false
1
t1_jd43v8b
I tried out several of the ones you listed. chavinlo's alpaca-native is very impressive, and the closest to ChatGPT. serpdotai's 13b lora provided some decent initial output but then reverts to a dialog between Assistant and Human. I can't run baseten's 30b lora. You need a GPU that can handle 30b at 8bit.
15
0
2023-03-21T18:18:29
antialtinian
false
null
0
jd43v8b
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jd43v8b/
false
15
t1_jd42ca4
I think you mean AMD. ATI is long gone after being bought by AMD.
5
0
2023-03-21T18:08:50
nero10578
false
null
0
jd42ca4
false
/r/LocalLLaMA/comments/11xiwes/ati_gpu/jd42ca4/
false
5
t1_jd40r3b
I'm wondering the same. I still cannot understand why running the model and fine-tuning it have different requirements.
2
0
2023-03-21T17:58:52
Christ0ph_
false
null
0
jd40r3b
false
/r/LocalLLaMA/comments/11xbu7d/how_to_do_llama_30b_4bit_finetuning/jd40r3b/
false
2
t1_jd3pejm
Reporting here, so anyone else who may have a similar problem can see. Copied my models, fixed the LlamaTokenizer case, and fixed the CUDA out-of-memory error, running with: `python server.py --gptq-bits 4 --auto-devices --disk --gpu-memory 3 --no-stream --cai-chat` However, now I use the CAI-C...
1
0
2023-03-21T16:47:26
SlavaSobov
false
2023-03-21T17:47:58
0
jd3pejm
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd3pejm/
false
1
t1_jd3mu2j
There needs to be a quantitative comparison between different models, LoRAs, precisions, etc., and a target dataset. Just like qwopqwop200 did here using 3 different datasets (wikitext2, ptb, c4): [https://github.com/qwopqwop200/GPTQ-for-LLaMa#result](https://github.com/qwopqwop200/GPTQ-for-LLaMa#result) I would also l...
13
0
2023-03-21T16:31:05
oobabooga1
false
null
0
jd3mu2j
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jd3mu2j/
false
13
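A minimal sketch of the kind of perplexity comparison the comment above calls for, assuming a transformers causal LM and wikitext-2 via the datasets library; the model id and window size are illustrative:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huggyllama/llama-7b"  # placeholder; swap in each model/LoRA
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1",
                                split="test")["text"])
ids = tok(text, return_tensors="pt").input_ids

# Score fixed-size windows; labels == inputs gives the next-token loss.
window, nlls = 2048, []
with torch.no_grad():
    for i in range(0, ids.size(1) - window, window):
        chunk = ids[:, i : i + window]
        nlls.append(model(chunk, labels=chunk).loss)

print("perplexity:", torch.exp(torch.stack(nlls).mean()).item())
```

Repeating this per model and per dataset (wikitext2, ptb, c4) gives the kind of table qwopqwop200 published for GPTQ.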
t1_jd3m0oa
CPU should be fine, it works on phones at about 1 token/s so should be faster than that with the power of a desktop processor.
1
0
2023-03-21T16:25:57
PM_ME_ENFP_MEMES
false
null
0
jd3m0oa
false
/r/LocalLLaMA/comments/11xiwes/ati_gpu/jd3m0oa/
false
1
t1_jd3k53i
I'm a bit of a noob at C++ - have been trying to compile and I get the following issues - could anyone provide any guidance on how to troubleshoot this? I llama.cpp build info: I UNAME_S: Darwin I UNAME_P: i386 I UNAME_M: x86_64 I CFLAGS: -I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -mf16c -mfma -mavx -mavx2 -...
1
0
2023-03-21T16:14:01
cthulusbestmate
false
null
0
jd3k53i
false
/r/LocalLLaMA/comments/11rakcj/any_wish_to_implement_llamacpp_llama_with_cpu_only/jd3k53i/
false
1
t1_jd3jfam
ok then, what is qualitatively the best full fine tune?
12
0
2023-03-21T16:09:22
blueSGL
false
null
0
jd3jfam
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jd3jfam/
false
12
t1_jd3j4rm
You want a 3090 to train finetunes/LoRAs or run huge-token-count prompts/contexts? Are you familiar with ML in general? After all, the 65B model is the largest one you can get so far. Or do you want to run 65B in 8-bit? It seems that there is a difference, if a minor one. Personally, I'd love to play with a combination of prompt eng...
5
0
2023-03-21T16:07:31
BalorNG
false
null
0
jd3j4rm
false
/r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/jd3j4rm/
false
5
t1_jd3hopr
Why are we still using LORAs when people have trained and released full fine-tunes already?
3
1
2023-03-21T15:58:13
starstruckmon
false
null
0
jd3hopr
false
/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/jd3hopr/
false
3
t1_jd3f45o
[deleted]
2
0
2023-03-21T15:41:46
[deleted]
true
null
0
jd3f45o
false
/r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jd3f45o/
false
2
t1_jd3ewau
Yes, but it still mentioned the cup afterwards being in an incorrect location and that seems odd to me.
2
0
2023-03-21T15:40:21
necile
false
null
0
jd3ewau
false
/r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jd3ewau/
false
2
t1_jd3dk1d
[deleted]
2
0
2023-03-21T15:31:31
[deleted]
true
null
0
jd3dk1d
false
/r/LocalLLaMA/comments/11xiwes/ati_gpu/jd3dk1d/
false
2
t1_jd3d9c3
The question is where is the ball, not the cup.
1
0
2023-03-21T15:29:34
morph3v5
false
null
0
jd3d9c3
false
/r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jd3d9c3/
false
1
t1_jd392zi
What about typical_p setting tips?
1
0
2023-03-21T15:01:52
whitepapercg
false
null
0
jd392zi
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd392zi/
false
1
t1_jd37qm8
[deleted]
1
0
2023-03-21T14:52:47
[deleted]
true
null
0
jd37qm8
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd37qm8/
false
1
t1_jd36zn8
You might be able to run the plain fp16 version using PyTorch for ROCM, but anything related to int8 and int4 might be complicated due to dependencies like "bitsandbytes".
4
0
2023-03-21T14:47:40
okoyl3
false
null
0
jd36zn8
false
/r/LocalLLaMA/comments/11xiwes/ati_gpu/jd36zn8/
false
4
t1_jd331vj
Technically, the LoRA answer to that question was wrong too. The cup didn't end up where it was before...
1
0
2023-03-21T14:20:15
necile
false
null
0
jd331vj
false
/r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jd331vj/
false
1
t1_jd30vx6
That makes sense. Do you know whether the resources required for finetuning are more or less than for using the model?
2
0
2023-03-21T14:04:44
Pan000
false
null
0
jd30vx6
false
/r/LocalLLaMA/comments/11xbu7d/how_to_do_llama_30b_4bit_finetuning/jd30vx6/
false
2
t1_jd30u68
Sadly, not yet, no.
2
0
2023-03-21T14:04:22
starstruckmon
false
null
0
jd30u68
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jd30u68/
false
2
t1_jd30mxs
Pretty cool but not quantized I presume.
2
0
2023-03-21T14:02:52
-becausereasons-
false
null
0
jd30mxs
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jd30mxs/
false
2
t1_jd2ztn7
He's asking how to install Ooba and set up the model on WSL (very difficult)
1
0
2023-03-21T13:56:52
-becausereasons-
false
null
0
jd2ztn7
false
/r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/jd2ztn7/
false
1