name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_jcc5dfu | [deleted] | 1 | 0 | 2023-03-15T19:50:43 | [deleted] | true | 2023-03-20T15:02:08 | 0 | jcc5dfu | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jcc5dfu/ | false | 1 |
t1_jcc5afp | Yes, correct. If you slow down and read what I already replied to you with, you'll see that I'm telling you that you need both.
To reiterate: Download the *entire* repo of 7b-hf. Place it as a folder in your models folder. *THEN* determine whether you are going to be using the 8-bit or 4-bit version of the 7b model. D... | 3 | 0 | 2023-03-15T19:50:12 | JunoGyles | false | null | 0 | jcc5afp | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jcc5afp/ | false | 3 |
t1_jcc4tm1 | [deleted] | 1 | 0 | 2023-03-15T19:47:18 | [deleted] | true | 2023-03-20T15:02:18 | 0 | jcc4tm1 | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jcc4tm1/ | false | 1 |
t1_jcc4lh6 | Those are the weights. They're part of the model, but not the complete model; you need both. Look at the difference between the 7b-hf and 7b-hf-4int repos at the link I gave you. Download the .pt file from either 8int or 4int and place it in the models folder beside your 7b-hf folder.
Edit: I may actually be getting "wei... | 2 | 0 | 2023-03-15T19:45:54 | JunoGyles | false | null | 0 | jcc4lh6 | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jcc4lh6/ | false | 2 |
t1_jcc46lb | [deleted] | 1 | 0 | 2023-03-15T19:43:20 | [deleted] | true | 2023-03-20T15:02:21 | 0 | jcc46lb | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jcc46lb/ | false | 1 |
t1_jcc432w | That would be your problem lmao you haven't actually downloaded the model itself yet.
https://huggingface.co/decapoda-research
Find your desired model here and download either the 8-bit or 4-bit version, and make sure you're following the instructions posted as the first post on this sub to set up CUDA and the model. | 1 | 0 | 2023-03-15T19:42:45 | JunoGyles | false | null | 0 | jcc432w | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jcc432w/ | false | 1 |
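For anyone scripting that download instead of clicking through, a minimal sketch assuming a recent `huggingface_hub` client (repo id and target path are illustrative):

```python
# Sketch: pull the *entire* model repo into the webui's models folder.
# The repo id below is one of the decapoda-research uploads mentioned here.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="decapoda-research/llama-7b-hf",
    local_dir="models/llama-7b-hf",
)
```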
t1_jcc2q5z | [deleted] | 1 | 0 | 2023-03-15T19:34:28 | [deleted] | true | 2023-03-20T15:02:25 | 0 | jcc2q5z | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jcc2q5z/ | false | 1 |
t1_jcc0ef5 | Thanks! I will check it out. Also, I am already satisfied with the speed of the original Tortoise. | 2 | 0 | 2023-03-15T19:20:11 | Tree-Sheep | false | null | 0 | jcc0ef5 | false | /r/LocalLLaMA/comments/11s5f39/integrate_llama_into_python_code/jcc0ef5/ | false | 2 |
t1_jcbxyrt | I believe the official repo presents some interfaces. With a quick search through GitHub repos, I found these two that have what you're looking for (you'll need to adapt them to use your own interface instead of Gradio):
[https://github.com/juncongmoo/pyllama](https://github.com/juncongmoo/pyllama)
[https://gith... | 6 | 0 | 2023-03-15T19:05:07 | estrafire | false | null | 0 | jcbxyrt | false | /r/LocalLLaMA/comments/11s5f39/integrate_llama_into_python_code/jcbxyrt/ | false | 6 |
t1_jcbxpu3 | Thank you! Apparently I needed to add --cai-chat or something like that | 2 | 0 | 2023-03-15T19:03:33 | Yuki_Kutsuya | false | null | 0 | jcbxpu3 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jcbxpu3/ | false | 2 |
t1_jcbfmr8 | The model should be its own folder within a "models" folder. The .pt file should be placed within that models folder (but not within the specific model's subfolder), alongside it. It's a bit confusing when written out, so I apologize I can't get a screenshot of my folder for you ATM. | 1 | 0 | 2023-03-15T17:12:54 | JunoGyles | false | null | 0 | jcbfmr8 | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jcbfmr8/ | false | 1 |
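A sketch of the layout being described, with illustrative file names (the actual names depend on the model and quantization downloaded):

```
text-generation-webui/
└── models/
    ├── llama-7b-hf/          <- the model's own folder (full repo contents)
    │   ├── config.json
    │   ├── tokenizer.model
    │   └── ...
    └── llama-7b-4bit.pt      <- the .pt file, beside the folder, not inside it
```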
t1_jcbemrr | Update 2: Getting sub-second per token rn, will edit with a precise measurement. 2 threads did the trick! | 1 | 0 | 2023-03-15T17:06:44 | thebluereddituser | false | null | 0 | jcbemrr | false | /r/LocalLLaMA/comments/11rakcj/any_wish_to_implement_llamacpp_llama_with_cpu_only/jcbemrr/ | false | 1 |
t1_jcbdn4t | Worth a shot! Update: I got it running on my other computer. Exact same configuration, I literally just copied the files over from my old computer. 12th-gen Intel i5-12500H, 287.69 ms/token. So it's probably my shit CPU | 1 | 0 | 2023-03-15T17:00:29 | thebluereddituser | false | null | 0 | jcbdn4t | false | /r/LocalLLaMA/comments/11rakcj/any_wish_to_implement_llamacpp_llama_with_cpu_only/jcbdn4t/ | false | 1 |
t1_jcb73wu | The problem is the original model is not theirs | 3 | 0 | 2023-03-15T16:20:17 | Nhabls | false | null | 0 | jcb73wu | false | /r/LocalLLaMA/comments/11qng27/stanford_alpaca_7b_llama_instructionfollowing/jcb73wu/ | false | 3 |
t1_jcb626f | Also, while running 7B on my Note 10+ I noticed that running on 8 or 4 threads reduced the token generation speed by a lot, I mean A LOT: from 500ms/token to 15000ms, most likely due to how big.LITTLE works. Maybe you could reduce the thread count? For your CPU, maybe try 2 threads. | 1 | 0 | 2023-03-15T16:13:48 | Tight-Juggernaut138 | false | null | 0 | jcb626f | false | /r/LocalLLaMA/comments/11rakcj/any_wish_to_implement_llamacpp_llama_with_cpu_only/jcb626f/ | false | 1 |
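llama.cpp takes the thread count on the command line via its `-t` flag, so the suggestion above looks something like the following sketch (binary and model paths mirror the Windows command quoted further down the thread):

```python
# Sketch: run llama.cpp with an explicit thread count (-t).
# Paths are illustrative, matching the build used elsewhere in this thread.
import subprocess

subprocess.run([
    r".\Release\llama.exe",
    "-m", r".\models\7B\ggml-model-q4_0.bin",
    "-t", "2",   # physical cores often beat logical threads here
    "-p", "Hello",
])
```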
t1_jcb5b9o | Good to know it's not just me! I tried running the 30B model and didn't get a single token after at least 10 minutes (not counting the time spent loading the model and stuff). The whole model was loaded into RAM and everything so idk what was wrong. I'm gonna try it on my newer computer, maybe the CPU is just too ol... | 1 | 0 | 2023-03-15T16:09:10 | thebluereddituser | false | null | 0 | jcb5b9o | false | /r/LocalLLaMA/comments/11rakcj/any_wish_to_implement_llamacpp_llama_with_cpu_only/jcb5b9o/ | false | 1 |
t1_jcb1kt4 | It is [text-generation-webui](https://github.com/oobabooga/text-generation-webui). The [guide here](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/) explains how to set it all up. | 5 | 0 | 2023-03-15T15:45:49 | Technical_Leather949 | false | null | 0 | jcb1kt4 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jcb1kt4/ | false | 5 |
t1_jcazeth | I hope you don't mind me asking, but how do you get this webui? What project are you using for this? I've managed to get it to run with llama.cpp and textgen, but that's about all I could find. | 3 | 0 | 2023-03-15T15:32:05 | Yuki_Kutsuya | false | null | 0 | jcazeth | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jcazeth/ | false | 3 |
t1_jcawh1g | I'm getting a second 4090 soonish, and my plan is to have it hanging out of my PC with a riser cable. | 1 | 0 | 2023-03-15T15:13:26 | SekstiNii | false | null | 0 | jcawh1g | false | /r/LocalLLaMA/comments/11qk5j9/anyone_with_more_than_2_gpus/jcawh1g/ | false | 1 |
t1_jcase44 | I joined Reddit just for this lol, seeing the research on Twitter was fine and all but there were a lot of other distracting things.
Very fast-paced work. They also seem to have a Discord server. | 2 | 0 | 2023-03-15T14:47:05 | AcanthocephalaOk1441 | false | null | 0 | jcase44 | false | /r/LocalLLaMA/comments/11qv69h/int4_llama_is_not_enough_int3_and_beyond/jcase44/ | false | 2 |
t1_jcakqn8 | Here are the results with Alpaca:
https://preview.redd.it/3ucdqugh8yna1.png?width=801&format=png&auto=webp&v=enabled&s=0859cea17ab2d8e0adf589d98d857b5d564ac223
The first answer isn't so good but it did well with the second. I think finetuned 13B and 30B models will be the real factor for popularizing LL... | 4 | 0 | 2023-03-15T13:55:14 | Technical_Leather949 | false | null | 0 | jcakqn8 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jcakqn8/ | false | 4 |
t1_jcabdo4 | >I don't understand these steps at all as I don't know how to use PowerShell very well
I tried to make the guide as simple as possible, so follow the steps one by one and you should have it up and running. I just added alternate 8-bit instructions for a problem unrelated to what you're having, but I also made those... | 1 | 0 | 2023-03-15T12:43:25 | Technical_Leather949 | false | null | 0 | jcabdo4 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcabdo4/ | false | 1 |
t1_jca81lk | I don't understand these steps at all as I don't know how to use PowerShell very well.
How exactly do I search a file and replace text with PowerShell? | 1 | 0 | 2023-03-15T12:14:01 | wintermuet | false | 2023-03-15T12:31:05 | 0 | jca81lk | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jca81lk/ | false | 1 |
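For reference, the edit the guide asks for is a plain find-and-replace in a single file. PowerShell's `Get-Content`/`Set-Content` can do it, but since the guide already has you in a Python environment, here's a minimal sketch there (the placeholders stand in for the guide's exact before/after strings):

```python
# Sketch: replace one string in a file. Substitute the guide's real strings.
from pathlib import Path

path = Path("bitsandbytes/cuda_setup/main.py")   # illustrative path
text = path.read_text(encoding="utf-8")
path.write_text(text.replace("OLD STRING", "NEW STRING"), encoding="utf-8")
```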
t1_jca7oc9 | [removed] | 1 | 0 | 2023-03-15T12:10:36 | [deleted] | true | null | 0 | jca7oc9 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jca7oc9/ | false | 1 |
t1_jca4uxd | Other people [here](https://github.com/oobabooga/text-generation-webui/issues/310) are having the same problem, saying it was caused by a recent update to the repo. Anyway, I updated the 8-bit instructions with a tested fix. Try it out now and it should work. | 2 | 0 | 2023-03-15T11:43:09 | Technical_Leather949 | false | null | 0 | jca4uxd | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jca4uxd/ | false | 2 |
t1_jca3noz | [deleted] | 2 | 0 | 2023-03-15T11:30:43 | [deleted] | true | 2023-03-20T15:02:31 | 0 | jca3noz | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jca3noz/ | false | 2 |
t1_jc9ufhm | there's ChatLLaMA: https://github.com/nebuly-ai/nebullvm/tree/main/apps/accelerate/chatllama | 3 | 0 | 2023-03-15T09:38:06 | Civil_Collection7267 | false | null | 0 | jc9ufhm | false | /r/LocalLLaMA/comments/11rno6h/multigpu_training/jc9ufhm/ | false | 3 |
t1_jc9u64l | Can confirm I have the same issue with 8600T CPU. If you found anything, please let me know | 1 | 0 | 2023-03-15T09:34:30 | Tight-Juggernaut138 | false | null | 0 | jc9u64l | false | /r/LocalLLaMA/comments/11rakcj/any_wish_to_implement_llamacpp_llama_with_cpu_only/jc9u64l/ | false | 1 |
t1_jc9ngwl | Interesting results, I actually googled the exact same thing to test when I first got everything set up. For comparison, here's what I got with 8-bit LLaMA 13B:
https://preview.redd.it/eihre4srhwna1.png?width=2382&format=png&auto=webp&v=enabled&s=2bdc5a05b98cfaa4b23faea05df23b281fcf7a7a
It completely ... | 3 | 0 | 2023-03-15T08:00:05 | Technical_Leather949 | false | null | 0 | jc9ngwl | false | /r/LocalLLaMA/comments/11rkts9/tested_some_questions_for_ais/jc9ngwl/ | false | 3 |
t1_jc9lhye | Make sure you have the .pt file beside the folder of the model you are using, not inside the folder...
I don't want to say how many hours I wasted. Skimming directions and second-guessing them never works out.
Anyhow, that fixed my zero-token issue. Lots of little things I had to do though, so just keep at it and star... | 2 | 0 | 2023-03-15T07:33:06 | RobXSIQ | false | null | 0 | jc9lhye | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jc9lhye/ | false | 2 |
t1_jc9kb7u | That's weird though, it's not supposed to be that slow, your CPU is way better than that Raspberry Pi...
Maybe you didn't compile the exe in a good way, I don't know... you should go to the GitHub to talk about it with the creator | 2 | 0 | 2023-03-15T07:16:54 | Wonderful_Ad_5134 | false | null | 0 | jc9kb7u | false | /r/LocalLLaMA/comments/11rakcj/any_wish_to_implement_llamacpp_llama_with_cpu_only/jc9kb7u/ | false | 2 |
t1_jc9jono | After a few hours I hit Ctrl-C and started fiddling around with it to see if I could figure out what was wrong. The interesting thing is that it was using 0% CPU, though.
I was getting over a minute per token | 1 | 0 | 2023-03-15T07:08:23 | thebluereddituser | false | null | 0 | jc9jono | false | /r/LocalLLaMA/comments/11rakcj/any_wish_to_implement_llamacpp_llama_with_cpu_only/jc9jono/ | false | 1 |
t1_jc9jhia | What's your speed? | 1 | 0 | 2023-03-15T07:05:43 | Wonderful_Ad_5134 | false | null | 0 | jc9jhia | false | /r/LocalLLaMA/comments/11rakcj/any_wish_to_implement_llamacpp_llama_with_cpu_only/jc9jhia/ | false | 1 |
t1_jc9icwt | [removed] | 1 | 0 | 2023-03-15T06:50:42 | [deleted] | true | null | 0 | jc9icwt | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jc9icwt/ | false | 1 |
t1_jc9iceg | Have you tried the same on alpaca? Their demo seemed to work much better yesterday, I got a chance to try a bunch of prompts with 30-50 second wait times. | 3 | 0 | 2023-03-15T06:50:31 | Salt_Jackfruit527 | false | null | 0 | jc9iceg | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jc9iceg/ | false | 3 |
t1_jc9i7be | I'm having difficulty getting it to run quickly at all. I'm using a weak CPU (Intel i5-7300HQ), but I've heard of people getting 10s/token on a Raspberry Pi and I can't manage that.
The command I'm running rn:
.\Release\llama.exe -m ./models/7B/ggml-model-q4_0.bin -p "The HitchHiker's Guide to the Galaxy entry on ... | 5 | 0 | 2023-03-15T06:48:41 | thebluereddituser | false | null | 0 | jc9i7be | false | /r/LocalLLaMA/comments/11rakcj/any_wish_to_implement_llamacpp_llama_with_cpu_only/jc9i7be/ | false | 5 |
t1_jc9he4v | I would be interested in this too. As a former ETH miner, I have enough cards sitting around with 10 GB of VRAM | 1 | 0 | 2023-03-15T06:38:17 | Qxarq | false | null | 0 | jc9he4v | false | /r/LocalLLaMA/comments/11rno6h/multigpu_training/jc9he4v/ | false | 1 |
t1_jc9h75f | >Installing 8-bit LLaMA with text-generation-webui
Just wanted to thank you for this, went buttery smooth on a fresh Linux install, everything worked and got OPT to generate stuff in no time. Need more VRAM for llama stuff, but so far the GUI is great, it really does feel like automatic1111's Stable Diffusion project.... | 4 | 0 | 2023-03-15T06:35:50 | Salt_Jackfruit527 | false | null | 0 | jc9h75f | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc9h75f/ | false | 4 |
t1_jc9g53a | I'm trying to get an 8-bit version but I get a warning saying that my GPU was not detected and that it would fall back to CPU mode. How do I fix this? | 1 | 0 | 2023-03-15T06:22:27 | DannaRain | false | null | 0 | jc9g53a | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc9g53a/ | false | 1 |
t1_jc9fy13 | Another member of the community did a lot of testing and found a repetition penalty of 1/0.85 to produce the best results when combined with those other parameters. I generally agree, although what they recommend is what I've referred to as "LLaMA-Precise." For reference, [here's a great result](https://gist.github.com... | 2 | 0 | 2023-03-15T06:20:03 | Technical_Leather949 | false | null | 0 | jc9fy13 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jc9fy13/ | false | 2 |
t1_jc9emm0 | Has anyone here tried using old server hardware to run llama? I see some M40s on eBay for $150 for 24GB of VRAM. 4 of those could fit the full-fat model for the cost of a midrange consumer GPU. | 3 | 0 | 2023-03-15T06:03:41 | R__Daneel_Olivaw | false | null | 0 | jc9emm0 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc9emm0/ | false | 3 |
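Back-of-envelope arithmetic for that idea, counting weights only (activations and context cache add overhead):

```python
# Rough VRAM needed to hold 65B parameters at different precisions.
params = 65e9
for bits in (16, 8, 4):
    print(f"{bits}-bit: {params * bits / 8 / 2**30:.0f} GiB")
# 16-bit: 121 GiB, 8-bit: 61 GiB, 4-bit: 30 GiB
# Four 24 GB M40s give 96 GB, enough for the 8-bit weights with headroom.
```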
t1_jc9d996 | I wonder if the 65b model has a bit more reason and logic or if it will just be a fancier way of talking nonsense. | 4 | 0 | 2023-03-15T05:47:23 | RobXSIQ | false | null | 0 | jc9d996 | false | /r/LocalLLaMA/comments/11rkts9/tested_some_questions_for_ais/jc9d996/ | false | 4 |
t1_jc9cem6 | Same, I had issues when installing and I just showed it to GPT-4 and it told me step by step how to fix it! | 1 | 0 | 2023-03-15T05:37:32 | Euphoric-Escape-9492 | false | null | 0 | jc9cem6 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc9cem6/ | false | 1 |
t1_jc956gn | Here's a random one. I provided the information in bold.
Using LLaMA 13B 4bit running on an RTX 3080. 1.99 temperature, 1.15 repetition_penalty, 75 top_k, 0.12 top_p, typical_p 1, length penalty 1.5
Output generated in 31.66 seconds (1.73 it/s, 437 tokens)
>**Mark Zuckerberg Makes Shocking Revel... | 6 | 0 | 2023-03-15T04:22:15 | iJeff | false | null | 0 | jc956gn | false | /r/LocalLLaMA/comments/11r6mdm/you_might_not_need_the_minimum_vram/jc956gn/ | false | 6 |
t1_jc8zjn3 | Oh, and the last one is the same 30B model but with different settings (top_p 0.1, while the others used top_p 1) | 2 | 0 | 2023-03-15T03:33:52 | Capable-Outside-601 | false | null | 0 | jc8zjn3 | false | /r/LocalLLaMA/comments/11rkts9/tested_some_questions_for_ais/jc8zjn3/ | false | 2 |
t1_jc8yccd | I personally recommend either the Storywriter preset for creative generations, or these parameters from the guide for more precise results:
>For a more precise chat, use temp 0.7, repetition_penalty 1.1764705882352942 (1/0.85), top_k 40, and top_p 0.1 | 2 | 0 | 2023-03-15T03:24:19 | Technical_Leather949 | false | null | 0 | jc8yccd | false | /r/LocalLLaMA/comments/11oqbvx/repository_of_llama_prompts/jc8yccd/ | false | 2 |
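Expressed in code, for anyone driving the model from a script rather than the webui, those numbers map onto the standard sampling knobs; a sketch using `transformers` (the preset name is just a variable here):

```python
from transformers import GenerationConfig

# The "precise chat" settings quoted above; 1/0.85 is the oddly specific
# repetition penalty (1.1764705882352942) discussed elsewhere in the thread.
llama_precise = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1 / 0.85,
    top_k=40,
    top_p=0.1,
)
```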
t1_jc8xe26 | [removed] | 1 | 0 | 2023-03-15T03:16:51 | [deleted] | true | null | 0 | jc8xe26 | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jc8xe26/ | false | 1 |
t1_jc8p41k | Hacker News also comes to mind. | 1 | 0 | 2023-03-15T02:16:27 | oobabooga1 | false | null | 0 | jc8p41k | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jc8p41k/ | false | 1 |
t1_jc8hdwm | The minimum for the 13B model probably sits between 6GB and 8GB.
My 1060 6GB definitely *cannot* load the 13B model. | 2 | 0 | 2023-03-15T01:24:47 | remghoost7 | false | null | 0 | jc8hdwm | false | /r/LocalLLaMA/comments/11r6mdm/you_might_not_need_the_minimum_vram/jc8hdwm/ | false | 2 |
t1_jc7zt50 | Unless you want to use RWKV, that error is normal. If you're at that step and everything has been working so far, you should be on the right track to getting 4-bit running. | 2 | 0 | 2023-03-14T18:29:09 | Technical_Leather949 | false | null | 0 | jc7zt50 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc7zt50/ | false | 2 |
t1_jc7ywwk | Are you sure you followed each step properly? I just tested it step-by-step on a completely new Windows install and everything works. I updated the 4-bit instructions to clarify some parts. Try removing the conda env and starting over again from step 1. | 2 | 0 | 2023-03-14T18:23:31 | Technical_Leather949 | false | null | 0 | jc7ywwk | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc7ywwk/ | false | 2 |
t1_jc7yoox | So far the demo of the 7b alpaca model is more impressive than what I've been able to get out of the 13b llama model.
Here are a few examples of the outputs, not cherry-picked to make it look good or bad. I used all the default settings from the webui. The non-bolded text is the input and the bolded is the output from the... | 4 | 0 | 2023-03-14T18:22:04 | qrayons | false | null | 0 | jc7yoox | false | /r/LocalLLaMA/comments/11r6mdm/you_might_not_need_the_minimum_vram/jc7yoox/ | false | 4 |
t1_jc7rr3o | Yes, using the instructions in the first post you responded to. It was challenging and required a lot of troubleshooting. It is very much NOT user friendly. You have to set up a C++ dev environment and compile a module yourself, but the instructions are clear. | 1 | 0 | 2023-03-14T17:38:52 | antialtinian | false | null | 0 | jc7rr3o | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jc7rr3o/ | false | 1 |
t1_jc7rkj8 | damn, can you guys share some impressive outputs? | 2 | 0 | 2023-03-14T17:37:45 | Madd0g | false | null | 0 | jc7rkj8 | false | /r/LocalLLaMA/comments/11r6mdm/you_might_not_need_the_minimum_vram/jc7rkj8/ | false | 2 |
t1_jc7nmyp | You made it work with the web ui? | 1 | 0 | 2023-03-14T17:13:20 | curtwagner1984 | false | null | 0 | jc7nmyp | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jc7nmyp/ | false | 1 |
t1_jc7hezd | Do you know why the rep penalty is so specific? I've just been using 1.17. | 1 | 0 | 2023-03-14T16:34:10 | antialtinian | false | null | 0 | jc7hezd | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jc7hezd/ | false | 1 |
t1_jc7h7yc | As a person who spent 2 days before finally getting 4-bit to work, I really hope so!
I do feel like I passed some kind of initiation, though. | 2 | 0 | 2023-03-14T16:32:55 | antialtinian | false | null | 0 | jc7h7yc | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jc7h7yc/ | false | 2 |
t1_jc7bbvv | Wow, you're right. Thank you for sharing!
I have a 2060S with 8gb of VRAM, and I get around these numbers:
Output generated in 28.42 seconds (7.00 tokens/s, 199 tokens)
Only problem is that if I use --cai-chat and try to load one of my characters, I get an Out of Memory error... EDIT: Also happens if the input i... | 4 | 0 | 2023-03-14T15:55:25 | fragilesleep | false | 2023-03-14T16:16:56 | 0 | jc7bbvv | false | /r/LocalLLaMA/comments/11r6mdm/you_might_not_need_the_minimum_vram/jc7bbvv/ | false | 4 |
t1_jc77fdk | [deleted] | 12 | 0 | 2023-03-14T15:29:52 | [deleted] | true | 2023-03-31T16:58:49 | 0 | jc77fdk | false | /r/LocalLLaMA/comments/11qv69h/int4_llama_is_not_enough_int3_and_beyond/jc77fdk/ | false | 12 |
t1_jc76nlg | [deleted] | 6 | 0 | 2023-03-14T15:24:44 | [deleted] | true | null | 0 | jc76nlg | false | /r/LocalLLaMA/comments/11qv69h/int4_llama_is_not_enough_int3_and_beyond/jc76nlg/ | false | 6 |
t1_jc6xghw | Any recommendations on what to use for "Generation parameters preset"? The default in the webui is "NovelAI-Sphinx Moth", though it looks like that preset uses a temperature of 1.99, even though from what I've seen the recommended temperature for language models is between 0.7 and 0.9. | 2 | 0 | 2023-03-14T14:21:40 | qrayons | false | null | 0 | jc6xghw | false | /r/LocalLLaMA/comments/11oqbvx/repository_of_llama_prompts/jc6xghw/ | false | 2 |
t1_jc6rg42 | Also getting this potential error on the instructions:
"pip install torch==1.12+cu113 -f [https://download.pytorch.org/whl/torch\_stable.html](https://download.pytorch.org/whl/torch_stable.html)"
I'm not sure if that's from something else, but this is for a separate environment created to try and get the 4-bit stuff... | 1 | 0 | 2023-03-14T13:37:16 | jarredwalton | false | null | 0 | jc6rg42 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc6rg42/ | false | 1 |
t1_jc6q6ux | I'm hoping to not have to dual-boot or anything like it. Ideally, I want this working from Windows with as little external extras as possible, but I realize that may not happen.
What's the chance of getting AMD running through WSL2? I tried following the Linux instructions in a Ubuntu 22.04 LTS prompt, but it didn'... | 1 | 0 | 2023-03-14T13:27:18 | jarredwalton | false | null | 0 | jc6q6ux | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc6q6ux/ | false | 1 |
t1_jc6pl4q | >Install Visual Studio 2019 and Build Tools for Visual Studio 2019 (has to be 2019) here. Scroll down to "Older Downloads"
I did not, because that was not made clear in the instructions. I'll try selecting the Desktop Environment and see if that fixes the problem and let you know, so that you can update the origina... | 1 | 0 | 2023-03-14T13:22:34 | jarredwalton | false | null | 0 | jc6pl4q | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc6pl4q/ | false | 1 |
t1_jc6p2ai | I believe this is the finetuning data - https://huggingface.co/datasets/crumb/stanford_alpaca_full_prompts | 5 | 0 | 2023-03-14T13:18:23 | Disastrous_Elk_6375 | false | null | 0 | jc6p2ai | false | /r/LocalLLaMA/comments/11qng27/stanford_alpaca_7b_llama_instructionfollowing/jc6p2ai/ | false | 5 |
t1_jc6k5ts | Does 2-bit quantization really mean that the weights only have 2 bits of precision? That would mean each weight can only have one of 4 possible values, no? I can't wrap my head around how that could be possible, so maybe I'm missing something. | 6 | 0 | 2023-03-14T12:36:28 | qrayons | false | null | 0 | jc6k5ts | false | /r/LocalLLaMA/comments/11qv69h/int4_llama_is_not_enough_int3_and_beyond/jc6k5ts/ | false | 6 |
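Yes: 2 bits means each weight is stored as one of just 4 codes. It works because quantization is applied per small group of weights, and each group keeps its own floating-point scale (and offset), so those 4 levels land in different places across the tensor. A toy round-to-nearest illustration (GPTQ itself chooses the codes more cleverly):

```python
import numpy as np

# Quantize one small group of weights to 2 bits (4 levels) and back.
w = np.array([-0.31, 0.02, 0.17, 0.29, -0.05])
levels = 2**2                                 # 4 representable codes: 0..3
scale = (w.max() - w.min()) / (levels - 1)    # per-group float scale
codes = np.round((w - w.min()) / scale)       # [0. 2. 2. 3. 1.]
dequant = codes * scale + w.min()             # approximate reconstruction
print(codes, dequant)
```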
t1_jc6dwpr | LLMs are r/iamverysmart personified! | 1 | 0 | 2023-03-14T11:33:48 | manubfr | false | null | 0 | jc6dwpr | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jc6dwpr/ | false | 1 |
t1_jc6at7l | > The question is how good they are at pretending to be smart.
I think this also applies to most Reddit users. | 8 | 0 | 2023-03-14T10:57:32 | a_devious_compliance | false | null | 0 | jc6at7l | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jc6at7l/ | false | 8 |
t1_jc6a6dh | "While int2 quantization is not usable for LLaMa 13B, larger models may be 2-bit quantize-able without much performance drop."
Does it imply I'll be able to run a 30B model on 12GB?!
What a time to be alive, indeed :) | 4 | 0 | 2023-03-14T10:49:40 | BalorNG | false | null | 0 | jc6a6dh | false | /r/LocalLLaMA/comments/11qv69h/int4_llama_is_not_enough_int3_and_beyond/jc6a6dh/ | false | 4 |
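The hope checks out on paper, for the weights at least (the 30B-class LLaMA is roughly 32.5B parameters):

```python
# 2 bits per weight for a 30B-class model, weights only:
print(32.5e9 * 2 / 8 / 2**30)   # ~7.6 GiB, under 12 GB before overhead
```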
t1_jc674c5 | I'm at work rn. Try it with ChatGPT, it's pretty proficient. | 1 | 0 | 2023-03-14T10:08:18 | arjuna66671 | false | null | 0 | jc674c5 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc674c5/ | false | 1 |
t1_jc66xht | can you help me? what did you do? | 1 | 0 | 2023-03-14T10:05:42 | lankasu | false | null | 0 | jc66xht | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc66xht/ | false | 1 |
t1_jc66v33 | I had the same stuff but thanks to ChatGPT it worked out xD.
A lot of those guides just assume that people know what to do because for coders it's just a given. But thanks to ChatGPT's help I learned some stuff. | 2 | 0 | 2023-03-14T10:04:47 | arjuna66671 | false | null | 0 | jc66v33 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc66v33/ | false | 2 |
t1_jc664p7 | Encountered several errors doing the webui guide on Windows:
> conda install cuda -c nvidia/label/cuda-11.3.0 -c nvidia/label/cuda-11.3.1
PackagesNotFoundError: The following packages are not available from current channels:
> git clone [https://github.com/oobabooga/text-generation-webui](https... | 1 | 0 | 2023-03-14T09:54:33 | lankasu | false | 2023-03-14T09:58:30 | 0 | jc664p7 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc664p7/ | false | 1 |
t1_jc5wwak | That’s f-ing ridiculous! So much progress in a few days. | 6 | 0 | 2023-03-14T07:39:40 | PM_ME_ENFP_MEMES | false | null | 0 | jc5wwak | false | /r/LocalLLaMA/comments/11qv69h/int4_llama_is_not_enough_int3_and_beyond/jc5wwak/ | false | 6 |
t1_jc5mxlj | Thanks for this incredibly useful post. I have 2x3090 with SLI. Any guidance on how I can run with it? | 2 | 0 | 2023-03-14T05:26:57 | manojs | false | null | 0 | jc5mxlj | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc5mxlj/ | false | 2 |
t1_jc5fhc9 | >I downloaded the "Build Tools for Visual Studio 2019 (version 16.11)"
Did you also download VS 2019 itself? You need Desktop Environment with C++. Make sure that's downloaded, then try starting over from step 1. | 2 | 0 | 2023-03-14T04:08:40 | Technical_Leather949 | false | 2023-03-14T05:21:00 | 0 | jc5fhc9 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc5fhc9/ | false | 2 |
t1_jc5dnig | >Does this also work on Windows, or only with Linux?
You can dual boot with Ubuntu. It's [quick and easy](https://help.ubuntu.com/community/WindowsDualBoot) to set up.
>What's the chance of getting this working with an Arc A770
I'm really not familiar with Intel GPUs so I'm not sure, but it [looks like here](h... | 1 | 0 | 2023-03-14T03:51:29 | Technical_Leather949 | false | null | 0 | jc5dnig | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc5dnig/ | false | 1 |
t1_jc59djs | I think the data set is more important. The training is cheap, and we can make our own one-off weights that way. But with training data we can make many weights. It's only a matter of time before we get that data too, and people start funding $1000-$20000 projects to fine-tune models. Just like what happened with the d... | 6 | 0 | 2023-03-14T03:14:06 | Virtamancer | false | null | 0 | jc59djs | false | /r/LocalLLaMA/comments/11qng27/stanford_alpaca_7b_llama_instructionfollowing/jc59djs/ | false | 6 |
t1_jc54fdx | Does this also work on Windows, or only with Linux?
Related: What's the chance of getting this working with an Arc A770 16GB? :-D | 1 | 0 | 2023-03-14T02:34:31 | jarredwalton | false | null | 0 | jc54fdx | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc54fdx/ | false | 1 |
t1_jc549fh | I've got the 8-bit version running using the above instructions, but I'm failing on the 4-bit models. I get an error about the compiler when running this command:
`python setup_cuda.py install`
I'm guessing it's from not installing the Build Tools for Visual Studio 2019 "properly," but I'm not sure what the correct o... | 1 | 0 | 2023-03-14T02:33:15 | jarredwalton | false | null | 0 | jc549fh | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc549fh/ | false | 1 |
t1_jc4wnre | Hope they release the model weights. I mean this is a University, not a corporation. Let's hope they do the right thing. | 10 | 0 | 2023-03-14T01:35:13 | yahma | false | null | 0 | jc4wnre | false | /r/LocalLLaMA/comments/11qng27/stanford_alpaca_7b_llama_instructionfollowing/jc4wnre/ | false | 10 |
t1_jc4vygb | Keep in mind that you need a 3-bit CUDA kernel to run it properly, which is why everyone stops at 4-bit. | 5 | 0 | 2023-03-14T01:29:52 | PartySunday | false | null | 0 | jc4vygb | false | /r/LocalLLaMA/comments/11pw62f/does_anyone_have_a_download_for_the_3bit/jc4vygb/ | false | 5 |
t1_jc4t8st | https://preview.redd.it/e2bvnaxvbnna1.png?width=523&format=png&auto=webp&v=enabled&s=9ec3f31ce80f77869dba818403876cc0347c99a4
Tested 30B with the same settings and questions. Works well. | 3 | 0 | 2023-03-14T01:09:25 | Capable-Outside-601 | false | null | 0 | jc4t8st | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jc4t8st/ | false | 3 |
t1_jc46um2 | [removed] | 1 | 0 | 2023-03-13T22:28:23 | [deleted] | true | null | 0 | jc46um2 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jc46um2/ | false | 1 |
t1_jc3vt03 | No problem, glad everything worked out for you! I agree about the throttling and restrictions. Local, open-source LLMs are the real future. | 1 | 0 | 2023-03-13T21:13:18 | Technical_Leather949 | false | null | 0 | jc3vt03 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc3vt03/ | false | 1 |
t1_jc3ryq5 | Thanks again! I'm having a coherent conversation in 30b-4bit about bootstrapping a Generative AI consulting business without any advertising or marketing budget. I love the fact that I can get immediate second opinions without being throttled or told 'as an artificial intelligence, I cannot do <x> because our ... | 2 | 0 | 2023-03-13T20:48:07 | Tasty-Attitude-7893 | false | 2023-03-13T20:53:38 | 0 | jc3ryq5 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc3ryq5/ | false | 2 |
t1_jc3qn7e | That finally worked and they updated the repositories for GPTQ for the fix you noted while I was downloading. Btw, I found another HF archive with the 4bit weights: [https://huggingface.co/maderix/llama-65b-4bit](https://huggingface.co/maderix/llama-65b-4bit) | 1 | 0 | 2023-03-13T20:39:35 | Tasty-Attitude-7893 | false | 2023-03-13T20:52:56 | 0 | jc3qn7e | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc3qn7e/ | false | 1 |
t1_jc3kpz7 | Yes, this guide still works. Are you sure you're searching the right file? The string is there, line 368. | 1 | 0 | 2023-03-13T20:01:43 | Technical_Leather949 | false | null | 0 | jc3kpz7 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc3kpz7/ | false | 1 |
t1_jc3kexa | ROCm works, so you can run it with AMD:
pip3 install torch torchvision torchaudio --extra-index-url [https://download.pytorch.org/whl/rocm5.2](https://download.pytorch.org/whl/rocm5.2)
then follow GPTQ instructions | 1 | 0 | 2023-03-13T19:59:46 | Technical_Leather949 | false | null | 0 | jc3kexa | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc3kexa/ | false | 1 |
t1_jc3k3ca | [deleted] | 1 | 0 | 2023-03-13T19:57:43 | [deleted] | true | null | 0 | jc3k3ca | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc3k3ca/ | false | 1 |
t1_jc3dcah | Am I right in assuming that the 4-bit option is only viable for NVIDIA at the moment? I only see mentions of CUDA in the GPTQ repository for LLaMA.
If so, any indications that AMD support is being worked on? | 2 | 0 | 2023-03-13T19:14:30 | aggregat4 | false | null | 0 | jc3dcah | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc3dcah/ | false | 2 |
t1_jc368bf | Are these docs still valid? I tried following the 8-bit instructions for Windows, but got to this step, and those strings don't exist in the file:
**In** \bitsandbytes\cuda_setup\main.py **search for:**
if not torch.cuda.is_available(): return 'libsbitsandbytes_cpu.so', None, None, None, None
**and repla... | 1 | 0 | 2023-03-13T18:29:25 | humanbeingmusic | false | null | 0 | jc368bf | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc368bf/ | false | 1 |
t1_jc2za14 | They list the VRAM requirements for various quantized models here: [https://github.com/qwopqwop200/GPTQ-for-LLaMa#memory-usage](https://github.com/qwopqwop200/GPTQ-for-LLaMa#memory-usage)
Ideally the 3-bit 13B would run on an 8GB card. | 1 | 0 | 2023-03-13T17:45:31 | triigerhappy | false | null | 0 | jc2za14 | false | /r/LocalLLaMA/comments/11pw62f/does_anyone_have_a_download_for_the_3bit/jc2za14/ | false | 1 |
t1_jc2yvsf | You can see the benchmarks of various configurations here: [https://github.com/qwopqwop200/GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa)
The gist of it is that GPTQ quantized 4-bit is only a negligible loss in accuracy, and as the parameters in the model increase, even 3-bit or potentially 2-bit may be... | 2 | 0 | 2023-03-13T17:43:00 | triigerhappy | false | null | 0 | jc2yvsf | false | /r/LocalLLaMA/comments/11pw62f/does_anyone_have_a_download_for_the_3bit/jc2yvsf/ | false | 2 |
t1_jc2v8x2 | [deleted] | 1 | 0 | 2023-03-13T17:19:39 | [deleted] | true | null | 0 | jc2v8x2 | false | /r/LocalLLaMA/comments/11pw62f/does_anyone_have_a_download_for_the_3bit/jc2v8x2/ | false | 1 |
t1_jc2rqqx | Thank you, by the way. I'm not sure why I didn't see that issue when I googled it. I'll give it a try. | 1 | 0 | 2023-03-13T16:55:24 | Tasty-Attitude-7893 | false | null | 0 | jc2rqqx | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc2rqqx/ | false | 1 |
t1_jc2q9zz | > ensuring I'm using the v2 torrent 4-bit model
Are you talking about the one from the rentry guide? I saw someone else having very incoherent results from that so I don't know if it was converted properly.
> RuntimeError: Tensors must have same number of dimensions: got 3 and 4
This [issue here](https://githu... | 2 | 0 | 2023-03-13T16:46:08 | Technical_Leather949 | false | null | 0 | jc2q9zz | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc2q9zz/ | false | 2 |
t1_jc2nepd | I redid everything on my mechanical drive, ensuring I'm using the v2 torrent 4-bit model and copying decapoda's normal 30b weights directory, exactly as specified in the oobabooga steps and with fresh git pulls of both repositories, and it got through the errors, but now I'm getting this:
(textgen1) <me>@<my... | 1 | 0 | 2023-03-13T16:27:43 | Tasty-Attitude-7893 | false | null | 0 | jc2nepd | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc2nepd/ | false | 1 |
t1_jc2jm5z | Looking at GitHub, I'm seeing another user with the [same](https://github.com/oobabooga/text-generation-webui/pull/206#issuecomment-1462804697) error. From the [page](https://github.com/oobabooga/text-generation-webui/pull/206#issuecomment-1462810249):
>You are either using the older version of transformers, the ol... | 2 | 0 | 2023-03-13T16:03:12 | Technical_Leather949 | false | null | 0 | jc2jm5z | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc2jm5z/ | false | 2 |
t1_jc2hzkr | Unfortunately, I believe those files are older and pre-Hugging Face. In any case, the 13B 3-bit doesn't work. | 3 | 0 | 2023-03-13T15:52:34 | triigerhappy | false | null | 0 | jc2hzkr | false | /r/LocalLLaMA/comments/11pw62f/does_anyone_have_a_download_for_the_3bit/jc2hzkr/ | false | 3 |