name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_jcnufit | But Iβve heard it takes about 32-95gb ram and coy only should be fine but I get cuda not found errors myself. | 2 | 0 | 2023-03-18T04:52:16 | [deleted] | false | null | 0 | jcnufit | false | /r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jcnufit/ | false | 2 |
t1_jcnuddb | Iβve had a hard time but it should work, maybe with the rust cpu only software. I have 128gb ram and llama cpp crashes and with some models asks about cuda. Gotta find the right software and dataset, Iβm not too sure where to find the 65b model thatβs ready for the rust cpu llama on GitHub. Doesnβt take .pt and may not... | 2 | 0 | 2023-03-18T04:51:38 | [deleted] | false | null | 0 | jcnuddb | false | /r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jcnuddb/ | false | 2 |
t1_jcnseby | takes about 42gig of RAM to run via Llama.cpp .
the quantize step is done for each sub file individually, meaning if you can quantize the 7gig model you can quantize the rest | 4 | 0 | 2023-03-18T04:30:51 | blueSGL | false | null | 0 | jcnseby | false | /r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jcnseby/ | false | 4 |
t1_jcnqcib | [deleted] | 1 | 0 | 2023-03-18T04:10:29 | [deleted] | true | 2023-03-18T04:13:39 | 0 | jcnqcib | false | /r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jcnqcib/ | false | 1 |
t1_jcnmuy8 | Oh wow, thank you so much for this! I was looking for something like this but didn't know how to phrase the question. This is really helpful | 1 | 0 | 2023-03-18T03:37:18 | ed2mXeno | false | null | 0 | jcnmuy8 | false | /r/LocalLLaMA/comments/11tkp8j/working_initial_prompt_for_llama_13b_4bit/jcnmuy8/ | false | 1 |
t1_jcn2pwv | Which part of you stuck on? I don't have a Mac, but I can try to help if its something I know.
Alternatively, if you want a quick and easy way to test Alpaca on Mac, you can use [alpaca.cpp here](https://github.com/antimatter15/alpaca.cpp). | 1 | 0 | 2023-03-18T00:50:16 | Technical_Leather949 | false | null | 0 | jcn2pwv | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcn2pwv/ | false | 1 |
t1_jcn1ho7 | what i mean is that i came here because i got stuck trying to follow the instructions here on mac: [https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)
your writing is more clear so i was hoping that this guide would remove the need for trying to follow those instruc... | 1 | 0 | 2023-03-18T00:40:54 | nofrauds911 | false | null | 0 | jcn1ho7 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcn1ho7/ | false | 1 |
t1_jcmvoqj | >i came to this post because i don't know how to load the model in the text-generation-webui
It's the same way you load it as usual, everything is in the guide: python server.py --model llama-7b --load-in-8bit
Did you make sure to follow the installations steps for 8-bit/4-bit LLaMA above that section? | 2 | 0 | 2023-03-17T23:58:01 | Technical_Leather949 | false | null | 0 | jcmvoqj | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcmvoqj/ | false | 2 |
t1_jcmtzkz | for (New) Using Alpaca-LoRA with text-generation-webui
this guide was so good until step 5 where it just says "Load LLaMa-7B in **8-bit mode** and select the LoRA in the Parameters tab."
i came to this post because i don't know how to load the model in the text-generation-webui, even though i have everything download... | 1 | 0 | 2023-03-17T23:45:31 | nofrauds911 | false | null | 0 | jcmtzkz | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcmtzkz/ | false | 1 |
t1_jcmsnaz | yeah couldn't help but notice the install instructions just changed on the textgen repo, the conda create command now specifies the python version \`\`\`conda create -n textgen python=3.10.9\`\`\`
Update, after a clean install with the new instructions I got 7b working with
\`\`\`python3 [server.py](https://ser... | 1 | 0 | 2023-03-17T23:35:39 | humanbeingmusic | false | 2023-03-17T23:49:41 | 0 | jcmsnaz | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcmsnaz/ | false | 1 |
t1_jcmofxf | I think something has changed. I have tried installing on 2 machines in both windows and Ubuntu WSL. I cannot get past CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so either.
I had a working 4bit install and patched bitsandbytes several times correctly before.
Full error body:
python server... | 1 | 0 | 2023-03-17T23:05:05 | antialtinian | false | null | 0 | jcmofxf | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcmofxf/ | false | 1 |
t1_jcmgdrh | If anyone fine tunes a 13b model, use my PR addressing the issues in the dataset. The original Stanford dataset had a lot of issues. | 3 | 0 | 2023-03-17T22:06:50 | yahma | false | null | 0 | jcmgdrh | false | /r/LocalLLaMA/comments/11sgewy/alpaca_lora_finetuning_possible_on_24gb_vram_now/jcmgdrh/ | false | 3 |
t1_jcmeyls | Not sure what is happening with reddit but every time I post the output it deletes my msg, essentially it got further in that CUDA runtime found but new errors now
```
CUDA SETUP: CUDA runtime path found: /home/seand/miniconda3/envs/textgen/lib/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.... | 1 | 0 | 2023-03-17T21:56:47 | humanbeingmusic | false | null | 0 | jcmeyls | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcmeyls/ | false | 1 |
t1_jcmesr2 | [deleted] | 1 | 0 | 2023-03-17T21:55:39 | [deleted] | true | null | 0 | jcmesr2 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcmesr2/ | false | 1 |
t1_jcmcum3 | It's in the guide:
> You may need to create symbolic links to get everything working correctly. Follow the instructions in this comment then restart your machine: [https://github.com/microsoft/WSL/issues/5548#issuecomment-1292858815](https://github.com/microsoft/WSL/issues/5548#issuecomment-1292858815)
I recomme... | 2 | 0 | 2023-03-17T21:42:23 | Technical_Leather949 | false | null | 0 | jcmcum3 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcmcum3/ | false | 2 |
t1_jcmc0zg | I did the cudatoolkit in the conda env and that did get rid of a bunch of errors, I also tried setting my LD\_LIBRARY\_PATH to the 2 locations I found but not that particular path you're specifying here, still no joy, let me try the path you have here and this symbolic link fix | 1 | 0 | 2023-03-17T21:36:45 | humanbeingmusic | false | 2023-03-17T21:43:43 | 0 | jcmc0zg | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcmc0zg/ | false | 1 |
t1_jcmbk4e | Did you install cudatoolkit inside the conda environment? Take care not to install any Nvidia driver within WSL itself, as this can overwrite the WSL2 specific driver.
For me, doing the symbolic link fix along with entering "export LD\_LIBRARY\_PATH=$LD\_LIBRARY\_PATH:/lib/wsl/lib" and installing cudatoolkit resolved ... | 2 | 0 | 2023-03-17T21:33:32 | Technical_Leather949 | false | null | 0 | jcmbk4e | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcmbk4e/ | false | 2 |
t1_jcm871u | For those who don't want to have to download from Discord, here's the text to copy and paste:
{
"char_name": "LLaMA",
"char_persona": "LLaMA's primary function is to interact with users through natural language processing, which means it can understand and respond to text-based queries in... | 5 | 0 | 2023-03-17T21:10:33 | Technical_Leather949 | false | null | 0 | jcm871u | false | /r/LocalLLaMA/comments/11tkp8j/working_initial_prompt_for_llama_13b_4bit/jcm871u/ | false | 5 |
t1_jcm5xqt | can't seem to get past bitsandbytes errors on my WSL ubuntu despite CUDA apparently working, I don't understand why bitsandbytes isn't working with CUDA:
CUDA SETUP: Required library version not found: libsbitsandbytes\_cpu.so | 1 | 0 | 2023-03-17T20:55:11 | humanbeingmusic | false | null | 0 | jcm5xqt | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcm5xqt/ | false | 1 |
t1_jcm39e3 | I'm not sure how you're running it, but I didn't get much out of LLaMA until I used these ChatGPT style character settings:
[https://www.reddit.com/r/Oobabooga/comments/11sgabv/testing\_my\_chatgpt\_character\_with\_different/](https://www.reddit.com/r/Oobabooga/comments/11sgabv/testing_my_chatgpt_character_with_diffe... | 4 | 0 | 2023-03-17T20:37:20 | hatlessman | false | null | 0 | jcm39e3 | false | /r/LocalLLaMA/comments/11tkp8j/working_initial_prompt_for_llama_13b_4bit/jcm39e3/ | false | 4 |
t1_jclvekr | Still haven't had a chance I'm afraid. I'm guessing it's going to be a lot of me making mistakes for hours so wanted to make sure I've got a full weekend day to toss at it.
Edit: And the local worked! Long story with a little guesswork thrown in. I wound up getting tripped up on a few things while trying to get the lo... | 3 | 0 | 2023-03-17T19:45:40 | toothpastespiders | false | 2023-03-18T05:00:03 | 0 | jclvekr | false | /r/LocalLLaMA/comments/11s8585/can_someone_eli5_wheres_the_best_place_to_fine/jclvekr/ | false | 3 |
t1_jcliwzn | β€οΈβ€οΈπ² yeass | 1 | 0 | 2023-03-17T18:24:09 | Inevitable-Start-653 | false | null | 0 | jcliwzn | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcliwzn/ | false | 1 |
t1_jclftfu | [deleted] | 1 | 0 | 2023-03-17T18:04:22 | [deleted] | true | null | 0 | jclftfu | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jclftfu/ | false | 1 |
t1_jcl7430 | The example chat above was actually generated using this (credit to u/Inevitable-Start-653):
{
"char_name": "LLaMA",
"char_persona": "LLaMA's primary function is to interact with users through natural language processing, which means it can understand and respond to text-based queries in ... | 3 | 0 | 2023-03-17T17:09:02 | Technical_Leather949 | false | null | 0 | jcl7430 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcl7430/ | false | 3 |
t1_jcl5v0r | I'm new to all this AI world. But I've read there is a framework on windows caled DirectML that abstracts the need to run GPU specific software to run ML software.
Do you know if it would be possible to run LLAMA on DirectML? | 1 | 0 | 2023-03-17T17:01:02 | Christ0ph_ | false | null | 0 | jcl5v0r | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcl5v0r/ | false | 1 |
t1_jcl0dnw | No, I haven't made it to work yet. The compile for GPTQ-for-LLAMA always fails with a missing header import (some HIP file). I've given up for the moment and I'm using llama.cpp for now. It's a port to work on the CPU and my CPU is fast enough so that performance is acceptable. | 1 | 0 | 2023-03-17T16:26:27 | aggregat4 | false | null | 0 | jcl0dnw | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcl0dnw/ | false | 1 |
t1_jckpon8 | [deleted] | 1 | 0 | 2023-03-17T15:17:50 | [deleted] | true | null | 0 | jckpon8 | false | /r/LocalLLaMA/comments/11tkp8j/working_initial_prompt_for_llama_13b_4bit/jckpon8/ | false | 1 |
t1_jck13bh | Did it work? :) | 1 | 0 | 2023-03-17T12:07:46 | BalorNG | false | null | 0 | jck13bh | false | /r/LocalLLaMA/comments/11s8585/can_someone_eli5_wheres_the_best_place_to_fine/jck13bh/ | false | 1 |
t1_jck0tlh | Yea, without "instruct fine-tuning" this is just "undifferentiated slice of Internet". | 5 | 0 | 2023-03-17T12:05:12 | BalorNG | false | null | 0 | jck0tlh | false | /r/LocalLLaMA/comments/11tkp8j/working_initial_prompt_for_llama_13b_4bit/jck0tlh/ | false | 5 |
t1_jcjxy4e | Glad to see I'm not the only one struggling to get quality out of llama. I've kind of given up and am now just waiting for alpaca to be released. | 7 | 0 | 2023-03-17T11:36:29 | qrayons | false | null | 0 | jcjxy4e | false | /r/LocalLLaMA/comments/11tkp8j/working_initial_prompt_for_llama_13b_4bit/jcjxy4e/ | false | 7 |
t1_jcjs1ct | Hey! What am I doing wrong? I downloaded your llama.json, set the generation parameters, yet he stops very, very early, usually after a single sentence. | 1 | 0 | 2023-03-17T10:28:14 | dangernoodle01 | false | null | 0 | jcjs1ct | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcjs1ct/ | false | 1 |
t1_jcjkxd8 | <3 | 1 | 0 | 2023-03-17T08:48:39 | ed2mXeno | false | null | 0 | jcjkxd8 | false | /r/LocalLLaMA/comments/11tkp8j/working_initial_prompt_for_llama_13b_4bit/jcjkxd8/ | false | 1 |
t1_jcjkmx9 | good shit bro, gonna try this soon as I get home :D
thanks | 7 | 0 | 2023-03-17T08:44:19 | Sudden-Scientist2937 | false | null | 0 | jcjkmx9 | false | /r/LocalLLaMA/comments/11tkp8j/working_initial_prompt_for_llama_13b_4bit/jcjkmx9/ | false | 7 |
t1_jcjkjb5 | I see, I think I should try your solution myself :) | 2 | 0 | 2023-03-17T08:42:47 | BalorNG | false | null | 0 | jcjkjb5 | false | /r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/jcjkjb5/ | false | 2 |
t1_jcjk35e | I'm glad that worked out for you! | 2 | 0 | 2023-03-17T08:36:05 | harrylettuce | false | null | 0 | jcjk35e | false | /r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/jcjk35e/ | false | 2 |
t1_jcjfrya | Yes this was it, it works much faster in WSL, not mention way easier to actually successfully install :| Should've just done this in the first place! | 6 | 0 | 2023-03-17T07:33:10 | EveningFunction | false | null | 0 | jcjfrya | false | /r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/jcjfrya/ | false | 6 |
t1_jcjfg9y | Thanks! | 2 | 0 | 2023-03-17T07:28:21 | Zyj | false | null | 0 | jcjfg9y | false | /r/LocalLLaMA/comments/11qk5j9/anyone_with_more_than_2_gpus/jcjfg9y/ | false | 2 |
t1_jcjeu1b | You are absolutely right; I didn't read the title properly.
Perhaps you will find this blog post from the creator of tortoise-tts interesting though: https://nonint.com/2022/05/30/my-deep-learning-rig/ | 5 | 0 | 2023-03-17T07:19:27 | SekstiNii | false | null | 0 | jcjeu1b | false | /r/LocalLLaMA/comments/11qk5j9/anyone_with_more_than_2_gpus/jcjeu1b/ | false | 5 |
t1_jcjemt6 | That doesn't qualify for "more than two" | 2 | 0 | 2023-03-17T07:16:29 | Zyj | false | null | 0 | jcjemt6 | false | /r/LocalLLaMA/comments/11qk5j9/anyone_with_more_than_2_gpus/jcjemt6/ | false | 2 |
t1_jciu0qt | [removed] | 1 | 0 | 2023-03-17T03:23:57 | [deleted] | true | null | 0 | jciu0qt | false | /r/LocalLLaMA/comments/11sgewy/alpaca_lora_finetuning_possible_on_24gb_vram_now/jciu0qt/ | false | 1 |
t1_jcitzoy | I'm definitely waiting for this too. I feel like LLaMa 13B trained ALPACA-style and then quantized down to 4 bits using something like GPTQ would probably be the sweet spot of performance to hardware requirements right now (ie likely able to run on a 2080 Ti, 3060 12 GB, 3080 Ti, 4070, and anything higher.... possibly ... | 7 | 0 | 2023-03-17T03:23:42 | AI-Pon3 | false | null | 0 | jcitzoy | false | /r/LocalLLaMA/comments/11sgewy/alpaca_lora_finetuning_possible_on_24gb_vram_now/jcitzoy/ | false | 7 |
t1_jcitpuv | The tokenizer class has been changed from LLaMATokenizer to LlamaTokenizer. If you receive the error: "ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported", you must edit tokenizer\_config.json to correct this.
For example, inside text-generation-webui/models/llama-7b-hf/tokenizer\_c... | 1 | 0 | 2023-03-17T03:21:19 | Technical_Leather949 | false | null | 0 | jcitpuv | true | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcitpuv/ | true | 1 |
t1_jcim5dt | Did that already, same issue | 2 | 0 | 2023-03-17T02:19:25 | EveningFunction | false | null | 0 | jcim5dt | false | /r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/jcim5dt/ | false | 2 |
t1_jcigb90 | >For some reason, decapoda-research still hasn't uploaded the new conversions here even though a whole week has passed.
I believe his CPU died after the 13b conversion. | 1 | 0 | 2023-03-17T01:33:51 | Prince_Noodletocks | false | null | 0 | jcigb90 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcigb90/ | false | 1 |
t1_jciaxl5 | No problem, glad everything is working for you now! | 3 | 0 | 2023-03-17T00:52:49 | Technical_Leather949 | false | null | 0 | jciaxl5 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jciaxl5/ | false | 3 |
t1_jcia0j3 | Thank you for all your help by the way! The guide is excellent that even a noob like me after some trial and error can get this up and running! | 2 | 0 | 2023-03-17T00:46:01 | Soviet-Lemon | false | null | 0 | jcia0j3 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcia0j3/ | false | 2 |
t1_jci9ho1 | I have it working now, I had to go into the C:\\Users\\username\\miniconda3\\envs\\textgen\\lib\\site-packages\\transformers directory and end up changing the name of every instance of LLaMATokenizer -> LlamaTokenizer, LLaMAConfig -> LlamaConfig, and LLaMAForCausalLM -> LlamaForCausalLM
After that it ended u... | 1 | 0 | 2023-03-17T00:42:02 | Soviet-Lemon | false | null | 0 | jci9ho1 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jci9ho1/ | false | 1 |
t1_jci5sjt | >from transformers import LlamaConfig, LlamaForCausalLM
You have to update the GPTQ repo. First, make sure text-generation-webui is up to date:
cd /path/to/text-generation-webui
git pull https://github.com/oobabooga/text-generation-webui
Reinstall transformers if needed. Then:
cd repositories/GPTQ-fo... | 2 | 0 | 2023-03-17T00:14:55 | Technical_Leather949 | false | null | 0 | jci5sjt | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jci5sjt/ | false | 2 |
t1_jci0vwm | User error, I just needed to rename the pt file, however after this I still seem to get the following transformer error:
Traceback (most recent call last):
File "C:\\Windows\\System32\\text-generation-webui\\[server.py](https://server.py)", line 215, in <module>
shared.model, shared.tokenizer = load\_mode... | 1 | 0 | 2023-03-16T23:39:19 | Soviet-Lemon | false | null | 0 | jci0vwm | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jci0vwm/ | false | 1 |
t1_jci0n69 | [deleted] | 1 | 0 | 2023-03-16T23:37:35 | [deleted] | true | null | 0 | jci0n69 | false | /r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/jci0n69/ | false | 1 |
t1_jci01x8 | After having downloaded both the 13B and 30B 4 bit models from maderix I can't seem to get it to launch as it says it can't find [llama-13B-4bit.pt](https://llama-13B-4bit.pt) despite it just being in the models folder with the 13B-hf folder downloaded from the guide. Do I need to change where the hf folder is coming f... | 1 | 0 | 2023-03-16T23:33:22 | Soviet-Lemon | false | null | 0 | jci01x8 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jci01x8/ | false | 1 |
t1_jchy0zf | [deleted] | 1 | 0 | 2023-03-16T23:19:00 | [deleted] | true | null | 0 | jchy0zf | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jchy0zf/ | false | 1 |
t1_jchxb2e | It does say though? If you're on Linux, sudo apt install build-essential is enough.
If you're on Windows:
>Install Build Tools for Visual Studio 2019 (has to be 2019) [here](https://learn.microsoft.com/en-us/visualstudio/releases/2019/history#release-dates-and-build-numbers). Check "Desktop development with C++" w... | 2 | 0 | 2023-03-16T23:13:53 | Technical_Leather949 | false | null | 0 | jchxb2e | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jchxb2e/ | false | 2 |
t1_jchvicl | [deleted] | 1 | 0 | 2023-03-16T23:01:14 | [deleted] | true | null | 0 | jchvicl | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jchvicl/ | false | 1 |
t1_jchq8jy | >RuntimeError: CUDA error: an illegal memory access was encountered
This might be related to issues [357](https://github.com/oobabooga/text-generation-webui/issues/357) and [322](https://github.com/oobabooga/text-generation-webui/issues/322) on GitHub. First, make sure the repo is up to date, then rename LLaMAToken... | 3 | 0 | 2023-03-16T22:25:32 | Technical_Leather949 | false | null | 0 | jchq8jy | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jchq8jy/ | false | 3 |
t1_jchpcff | >using the the 4 bit 30B .pt file found under the decapoda-research/llama-smallint-pt/
That's actually the old 4-bit file and should not be used. For some reason, decapoda-research still hasn't uploaded the new conversions [here](https://huggingface.co/decapoda-research/llama-30b-hf-int4/tree/main) even though a wh... | 2 | 0 | 2023-03-16T22:19:31 | Technical_Leather949 | false | null | 0 | jchpcff | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jchpcff/ | false | 2 |
t1_jchjbgn | [removed] | 1 | 0 | 2023-03-16T21:39:18 | [deleted] | true | null | 0 | jchjbgn | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jchjbgn/ | false | 1 |
t1_jcgsicw | Can anyone help here?
I have only 16GB VRAM and not even at the place to get 4 bit running so I am using 7b 8bit The webgui seems to load but nothing generates. A bit of searching this suggest running out of VRAM but I am only using around 8 of my 16GB
&#x200B;
D:\\text-generation-webui>python [server.py](h... | 1 | 0 | 2023-03-16T18:46:36 | staticx57 | false | null | 0 | jcgsicw | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcgsicw/ | false | 1 |
t1_jcgoqa9 | I now appear to be getting a "Tokenizer class LLaMATokenizer does not exist or is not currently imported." error when trying to run the 13B model again. | 1 | 0 | 2023-03-16T18:23:06 | Soviet-Lemon | false | null | 0 | jcgoqa9 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcgoqa9/ | false | 1 |
t1_jcglb41 | I was able to get the 4bit 13B running on windows using this guide but now while trying to get the 30B version installed using the the 4 bit 30B .pt file found under the decapoda-research/llama-smallint-pt/ However when I try to run the model I get a runtime error in loading state\_dict. Any fixes or am I just using th... | 1 | 0 | 2023-03-16T18:01:29 | Soviet-Lemon | false | null | 0 | jcglb41 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcglb41/ | false | 1 |
t1_jcfn147 | The model believing "789" to resemble "heaven" reminded me of the " SolidGoldMagikarp" phenomenon (even though in hindsight, "seven" does sound like "heaven"). I thought, maybe there is a glitch token with an embedding close to 789, which the model can't output and generates "heaven" instead? Hence I wondered what woul... | 2 | 0 | 2023-03-16T14:22:36 | ain92ru | false | 2023-03-16T15:24:54 | 0 | jcfn147 | false | /r/LocalLLaMA/comments/11rkts9/tested_some_questions_for_ais/jcfn147/ | false | 2 |
t1_jcfbzby | What is that? | 1 | 0 | 2023-03-16T13:00:01 | qrayons | false | null | 0 | jcfbzby | false | /r/LocalLLaMA/comments/11sgewy/alpaca_lora_finetuning_possible_on_24gb_vram_now/jcfbzby/ | false | 1 |
t1_jcfbio8 | This is exciting, but I'm going to need to wait for someone to put together a guide. Not sure how to get this to run on something like oobabooga yet. It looks like the LoRa weights need to be combined with the original llama weights, and not sure if that can even be done with the 4 bit quantized version of the llama mo... | 8 | 0 | 2023-03-16T12:56:14 | qrayons | false | null | 0 | jcfbio8 | false | /r/LocalLLaMA/comments/11sgewy/alpaca_lora_finetuning_possible_on_24gb_vram_now/jcfbio8/ | false | 8 |
t1_jcf6ibz | Try installing it on WSL. There's something funky with Oobabooga performance in Windows.
There's a thread here where someone brought up the performance differences:
https://github.com/oobabooga/text-generation-webui/issues/237
Unfortunately, it's full of Linux recruiters and people who misread OP rather than people ... | 10 | 0 | 2023-03-16T12:11:56 | harrylettuce | false | null | 0 | jcf6ibz | false | /r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/jcf6ibz/ | false | 10 |
t1_jcf3mw9 | [deleted] | 2 | 0 | 2023-03-16T11:43:49 | [deleted] | true | null | 0 | jcf3mw9 | false | /r/LocalLLaMA/comments/11sgewy/alpaca_lora_finetuning_possible_on_24gb_vram_now/jcf3mw9/ | false | 2 |
t1_jcf02gu | Activate the conda environment and run "python server.py --model llama-13b-hf --load-in-8bit" inside text-generation-webui.
Replace "llama-13b-hf" with the model you're using and change --load-in-8bit to --gptq-bits 4 --no-stream if you're running 4-bit LLaMA | 2 | 0 | 2023-03-16T11:05:17 | Technical_Leather949 | false | null | 0 | jcf02gu | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcf02gu/ | false | 2 |
t1_jcevpr5 | How do I rerun the webui after the first time? | 1 | 0 | 2023-03-16T10:11:21 | EndlessZone123 | false | null | 0 | jcevpr5 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcevpr5/ | false | 1 |
t1_jcev12z | Apparently the smallest model runs at decent speed even on a Pixel 6's CPU: https://nitter.lacontrevoie.fr/thiteanish/status/1635678053853536256
I wonder if LLaMA is just quite favorable for CPU (Stable Diffusion having more convolutions that GPU crushes at) or if the GPU version could be greatly optimized? You could ... | 7 | 0 | 2023-03-16T10:02:10 | mannerto | false | null | 0 | jcev12z | false | /r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/jcev12z/ | false | 7 |
t1_jcesw9v | Sent the repo you are using for window | 1 | 0 | 2023-03-16T09:32:18 | Tight-Juggernaut138 | false | null | 0 | jcesw9v | false | /r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/jcesw9v/ | false | 1 |
t1_jcerkn8 | I think you should disable token streaming. | 5 | 0 | 2023-03-16T09:13:11 | BalorNG | false | null | 0 | jcerkn8 | false | /r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/jcerkn8/ | false | 5 |
t1_jcer0l8 | thanks, link looks helpful, I'll investigate further. | 2 | 0 | 2023-03-16T09:05:03 | -main | false | null | 0 | jcer0l8 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcer0l8/ | false | 2 |
t1_jcegh7u | > so.... does 4-bit LLAMA actually exist on AMD / ROCm (yet)?
[Issue #166](https://github.com/oobabooga/text-generation-webui/issues/166) on GitHub reports 4-bit working with ROCm. You can try asking for more info there. | 3 | 0 | 2023-03-16T06:33:24 | Technical_Leather949 | false | null | 0 | jcegh7u | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcegh7u/ | false | 3 |
t1_jcec2ql | I feel silly, I ended up using chatgpt to solve all my issues. You should recommend this solution on the main post, I think. A lot of less skilled people could get a lot of fast help that they'd forget to even try using, and it would probably free up everyone else's time. For some reason my entire anaconda env for text... | 1 | 0 | 2023-03-16T05:36:44 | wintermuet | false | null | 0 | jcec2ql | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcec2ql/ | false | 1 |
t1_jce8e47 | 13b at 4bit is \~6.5GB with no overhead. 7b 8bit is \~7GB, 4bit is \~4.5GB. | 3 | 0 | 2023-03-16T04:54:39 | EndlessZone123 | false | null | 0 | jce8e47 | false | /r/LocalLLaMA/comments/11r6mdm/you_might_not_need_the_minimum_vram/jce8e47/ | false | 3 |
t1_jce7c3y | >haha just build your own X lol | -12 | 0 | 2023-03-16T04:43:33 | Virtamancer | true | null | 0 | jce7c3y | false | /r/LocalLLaMA/comments/11sgewy/alpaca_lora_finetuning_possible_on_24gb_vram_now/jce7c3y/ | false | -12 |
t1_jce766w | [removed] | 1 | 0 | 2023-03-16T04:41:52 | [deleted] | true | null | 0 | jce766w | false | /r/LocalLLaMA/comments/11sgewy/alpaca_lora_finetuning_possible_on_24gb_vram_now/jce766w/ | false | 1 |
t1_jce75ji | right this way, miss. - https://huggingface.co/chavinlo/alpaca-native/tree/main | 6 | 0 | 2023-03-16T04:41:41 | BackgroundFeeling707 | false | null | 0 | jce75ji | false | /r/LocalLLaMA/comments/11sgewy/alpaca_lora_finetuning_possible_on_24gb_vram_now/jce75ji/ | false | 6 |
t1_jce6vw7 | >Cut 65b down to 3 or 4 bit, fine tune it on the Stanford data set (first clean out all the disclaimers responses if they haven't already) without all these shortcuts, and then distribute it.
The code is out there, why don't you take up the mantle and honor us with your finetuned 65B that's close to chatGPT and can... | 11 | 0 | 2023-03-16T04:38:58 | WarProfessional3278 | false | null | 0 | jce6vw7 | false | /r/LocalLLaMA/comments/11sgewy/alpaca_lora_finetuning_possible_on_24gb_vram_now/jce6vw7/ | false | 11 |
t1_jce5isw | > then follow GPTQ instructions
Those instructions include this step:
> git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
> cd GPTQ-for-LLaMa
> python setup_cuda.py install
That last step errors out looking for a CUDA_HOME environment variable. I suspect the script wants a CUDA dev envi... | 2 | 0 | 2023-03-16T04:25:11 | -main | false | null | 0 | jce5isw | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jce5isw/ | false | 2 |
t1_jce3jwb | I'm going to preface this with a huge warning that I"m 99% in the dark on this too. So hopefully someone can shed some light on this for both of us.
I'm giving it a shot right now locally with the [Alpaca-LoRA](https://github.com/tloen/alpaca-lora) link someone posted. But my original plan was to use [vast.ai](https:... | 3 | 0 | 2023-03-16T04:06:20 | toothpastespiders | false | null | 0 | jce3jwb | false | /r/LocalLLaMA/comments/11s8585/can_someone_eli5_wheres_the_best_place_to_fine/jce3jwb/ | false | 3 |
t1_jce0ycr | Cringe. I don't want a "slightly worse" version of a "slightly worse" version of a "slightly worse" version of the worst of 4 versions.
Cut 65b down to 3 or 4 bit, fine tune it on the Stanford data set (first clean out all the disclaimers responses if they haven't already) without all these shortcuts, and then distrib... | -17 | 0 | 2023-03-16T03:42:41 | Virtamancer | true | null | 0 | jce0ycr | false | /r/LocalLLaMA/comments/11sgewy/alpaca_lora_finetuning_possible_on_24gb_vram_now/jce0ycr/ | false | -17 |
t1_jce0h3r | Sorry bud, I skimmed the topic a bit when replying. I don't know about the 7/8bit, but I would recommend hitting the 13b 4bit model. should take nearly as much processing and you get better results...its a few more steps to take, but worth it. | 1 | 0 | 2023-03-16T03:38:30 | RobXSIQ | false | null | 0 | jce0h3r | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jce0h3r/ | false | 1 |
t1_jcdyrhr | It's wild how fast this stuff is moving! My crusty old M40 probably isn't up for it, but eh, it does have 24 GB vram so giving it a shot. | 1 | 0 | 2023-03-16T03:23:46 | toothpastespiders | false | null | 0 | jcdyrhr | false | /r/LocalLLaMA/comments/11sgewy/alpaca_lora_finetuning_possible_on_24gb_vram_now/jcdyrhr/ | false | 1 |
t1_jcdv39o | Neat! I'm hoping someone can do a trained 13B model to share. | 11 | 0 | 2023-03-16T02:53:34 | iJeff | false | null | 0 | jcdv39o | false | /r/LocalLLaMA/comments/11sgewy/alpaca_lora_finetuning_possible_on_24gb_vram_now/jcdv39o/ | false | 11 |
t1_jcdlsof | (Also I think you could adjust that repository to finetune on general text not just instruct) | 3 | 0 | 2023-03-16T01:44:14 | Dany0 | false | null | 0 | jcdlsof | false | /r/LocalLLaMA/comments/11sgewy/alpaca_lora_finetuning_possible_on_24gb_vram_now/jcdlsof/ | false | 3 |
t1_jcdkk68 | Here's what Bing Chat thinks: https://sl.bing.net/djDC6udRqm | 3 | 0 | 2023-03-16T01:35:11 | iJeff | false | null | 0 | jcdkk68 | false | /r/LocalLLaMA/comments/11s8585/can_someone_eli5_wheres_the_best_place_to_fine/jcdkk68/ | false | 3 |
t1_jcdfdar | Did you manage to make it work? I have an AMD GPU too. | 1 | 0 | 2023-03-16T00:57:15 | Christ0ph_ | false | null | 0 | jcdfdar | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcdfdar/ | false | 1 |
t1_jcd56ru | I wanna point out to anyone in the comments that if you need any help setting it up, chatgpt is incredibly helpful. I got halfway through Googling before I realized how silly I was. Just ask the robot, he knows how to set up llama webui. | 8 | 0 | 2023-03-15T23:44:38 | wintermuet | false | null | 0 | jcd56ru | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcd56ru/ | false | 8 |
t1_jcczaf5 | Never mind them. Like you said, a .pt file is not needed for the 8-bit HF model, and it doesn't even exist (unless using the original non-HF model, but that would not have .bin files).
That said, no clue what your issue is. I'd advise going through all the steps from the start (keeping the model, ofc) | 1 | 0 | 2023-03-15T23:03:06 | Reachthrough | false | null | 0 | jcczaf5 | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jcczaf5/ | false | 1 |
t1_jccvfoa | Tuning code has been released, people is starting to finetune on their own! | 3 | 0 | 2023-03-15T22:36:34 | alexl83 | false | null | 0 | jccvfoa | false | /r/LocalLLaMA/comments/11qng27/stanford_alpaca_7b_llama_instructionfollowing/jccvfoa/ | false | 3 |
t1_jccr0cq | [deleted] | 1 | 0 | 2023-03-15T22:06:24 | [deleted] | true | null | 0 | jccr0cq | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jccr0cq/ | false | 1 |
t1_jccnbcj | The text-generation-webui already features REST endpoints. You just enable `--listen` and disable any chat modes. I've used it from Phyton just by simple modifications of their example script in the repo. | 6 | 0 | 2023-03-15T21:42:20 | lacethespace | false | null | 0 | jccnbcj | false | /r/LocalLLaMA/comments/11s5f39/integrate_llama_into_python_code/jccnbcj/ | false | 6 |
t1_jccfadb | it should be possible to modify the silero extension to use tortoise-tts instead | 3 | 0 | 2023-03-15T20:51:27 | estrafire | false | null | 0 | jccfadb | false | /r/LocalLLaMA/comments/11s5f39/integrate_llama_into_python_code/jccfadb/ | false | 3 |
t1_jccbxk7 | This is the repo I've been using the past week or so to interface with LLaMA-7b-int4.
[https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)
It has extension support and already a silero extension built in. I haven't used that extension myself, but I'm fairly certain ... | 6 | 0 | 2023-03-15T20:30:59 | remghoost7 | false | null | 0 | jccbxk7 | false | /r/LocalLLaMA/comments/11s5f39/integrate_llama_into_python_code/jccbxk7/ | false | 6 |
t1_jccax6e | I think you'll be fine.
A few tips on things I got stuck on:
1. Windows wouldn't pull up "x64 native command" or whatever for me until I restarted my computer after installing vs 2019. Make sure you install *both vs 2019 (community worked fine for me) and build tools. When you're installing, make sure to install the ... | 1 | 0 | 2023-03-15T20:24:45 | JunoGyles | false | null | 0 | jccax6e | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jccax6e/ | false | 1 |
t1_jcc7jgn | [deleted] | 1 | 0 | 2023-03-15T20:03:58 | [deleted] | true | 2023-03-20T15:02:02 | 0 | jcc7jgn | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jcc7jgn/ | false | 1 |
t1_jcc7bur | Oh wait okay I had forgotten what you put down in your OP last night. W/ your hardware you should 100% be running 13b 4int. Depending on whether the improvements speculated right now actually occur soon you might be able to run 30b 3int soon too. | 1 | 0 | 2023-03-15T20:02:41 | JunoGyles | false | null | 0 | jcc7bur | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jcc7bur/ | false | 1 |
t1_jcc6zkn | Ah, sorry you are correct on that. I assumed it did since that's what the point of the repo is, but it appears it hasn't been converted to 8bit transformers yet. It's not particularly important since you should probably be using 13b int4 anyway over 7bit int8. Chances are unless you have a ton of vram and system ram yo... | 1 | 0 | 2023-03-15T20:00:35 | JunoGyles | false | null | 0 | jcc6zkn | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jcc6zkn/ | false | 1 |
t1_jcc670u | [deleted] | 1 | 0 | 2023-03-15T19:55:43 | [deleted] | true | 2023-03-20T15:02:05 | 0 | jcc670u | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jcc670u/ | false | 1 |
t1_jcc5pyg | I mean that's reasonable but please understand this is an issue with your own patience. I'd be happy to continue to help you because I get it can be frustrating: it took me two evenings to get the system running at all. Fortunately I think I could do it a second time in less than an hour. | 5 | 0 | 2023-03-15T19:52:50 | JunoGyles | false | null | 0 | jcc5pyg | false | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/jcc5pyg/ | false | 5 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.