name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_jd2yru9 | I wish the input would clear every time I press enter to chat with it. I tried looking for the JavaScript but wasn’t able to find it | 1 | 0 | 2023-03-21T13:49:08 | Necessary_Ad_9800 | false | null | 0 | jd2yru9 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd2yru9/ | false | 1 |
t1_jd2yfae | If you feel anything is keeping you limited from what you wanna do & have the money I’d say go for it. | 14 | 0 | 2023-03-21T13:46:33 | Necessary_Ad_9800 | false | null | 0 | jd2yfae | false | /r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/jd2yfae/ | false | 14 |
t1_jd2w21o | Thank you! :)
​
I tried this: **sudo python setup\_cuda.py install**
And it report to me this: **sudo: python: command not found** | 1 | 0 | 2023-03-21T13:28:41 | SlavaSobov | false | null | 0 | jd2w21o | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd2w21o/ | false | 1 |
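The `sudo: python: command not found` error above happens because `sudo` resets `PATH`, hiding the conda environment's `python`. A common workaround is to hand `sudo` an absolute path, e.g. `sudo "$(which python)" setup_cuda.py install`, or to preserve the path with `sudo env "PATH=$PATH" python setup_cuda.py install`. A sketch of the same resolution step from Python (illustrative only; the printed command is not meant to be executed here):

```python
import shutil
import sys

# Resolve an absolute path to the current environment's interpreter so a
# privileged command does not depend on sudo's stripped-down PATH.
# sys.executable is the most direct answer when already running under Python.
abs_python = shutil.which("python3") or shutil.which("python") or sys.executable
print("would run: sudo", abs_python, "setup_cuda.py install")
```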
t1_jd2qss9 | Were you able to run any LLaMA model successfully without the LoRA? All current issues point toward this being a problem with outdated or incorrectly installed transformers. Double check that the repo is up to date and you receive no HTTP errors when installing/reinstalling the requirements.
If you continue having the... | 1 | 0 | 2023-03-21T12:46:05 | Technical_Leather949 | false | null | 0 | jd2qss9 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd2qss9/ | false | 1 |
t1_jd2pprt | >error: \[Errno 1\] Operation not permitted
First, before I recommend anything else, try running the setup with:
sudo python setup_cuda.py install
and see if that resolves it. | 2 | 0 | 2023-03-21T12:36:41 | Technical_Leather949 | false | null | 0 | jd2pprt | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd2pprt/ | false | 2 |
t1_jd2p6fs | When using the LoRA, you don't start in cai-chat or notebook mode. Load up the base web UI, and when you apply the lora, the prompt is automatically added to the input. All you have to do is make sure your cursor is below "Response" when you generate. | 1 | 0 | 2023-03-21T12:31:54 | Technical_Leather949 | false | null | 0 | jd2p6fs | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd2p6fs/ | false | 1 |
t1_jd2p0ra | Getting this error after your previous fix, even after --no-stream and changing the tokenizer config. I'm on the latest Hugging Face transformers (4.28.0.dev0). | 1 | 0 | 2023-03-21T12:30:28 | skyrimfollowers | false | null | 0 | jd2p0ra | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd2p0ra/ | false | 1 |
t1_jd2oxn4 | Again, very great, thank you. :) So close. Here are the commands I ran to try to do the install; I appreciate any assistance you can throw my way. :)
#Setup Ubuntu in WSL
**wsl --install ubuntu**
**wget** [**https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh**](https://repo.anaconda.com/mi... | 1 | 0 | 2023-03-21T12:29:42 | SlavaSobov | false | 2023-03-21T13:58:28 | 0 | jd2oxn4 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd2oxn4/ | false | 1 |
t1_jd2nn2f | [removed] | 1 | 0 | 2023-03-21T12:17:59 | [deleted] | true | 2023-03-21T15:00:14 | 0 | jd2nn2f | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd2nn2f/ | false | 1 |
t1_jd2lwbz | `For this particular LoRA, the prompt must be formatted like this (the starting line must be below "Response"):`

Where do I put this in chat mode? Does it have to be in the character section, or does the prompt only have to be formatted like this for notebook mode? | 1 | 0 | 2023-03-21T12:01:20 | Educational_Smell292 | false | null | 0 | jd2lwbz | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd2lwbz/ | false | 1 |
t1_jd2lmkj | Someone else had the same problem at [\#332 here](https://github.com/oobabooga/text-generation-webui/issues/332#issuecomment-1475425198) and that should've fixed it. Try the solution advised by this user here [on #392](https://github.com/oobabooga/text-generation-webui/issues/392#issuecomment-1474889800). You might hav... | 1 | 0 | 2023-03-21T11:58:43 | Technical_Leather949 | false | null | 0 | jd2lmkj | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd2lmkj/ | false | 1 |
t1_jd2ksqn | >that gave me this error:
`File "G:\text webui\one-click-installers-oobabooga-windows\installer_files\env\lib\site-packages\accelerate\utils\modeling.py", line 493, in get_balanced_memory`
`last_gpu = max(i for i in max_memory if isinstance(i, int) and max_memory[i] > 0)`
`ValueEr... | 1 | 0 | 2023-03-21T11:50:32 | skyrimfollowers | false | null | 0 | jd2ksqn | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd2ksqn/ | false | 1 |
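The truncated `ValueEr...` above is almost certainly `ValueError: max() arg is an empty sequence`: in the quoted line, `max()` is fed a generator over the integer (GPU) keys of `max_memory` with a positive budget, and if none qualify, the generator is empty. A minimal reproduction of the mechanism (an illustrative sketch, not accelerate's actual code):

```python
# Sketch of why get_balanced_memory can raise "max() arg is an empty sequence":
# integer keys of max_memory are GPU indices; if none has a positive budget,
# there is nothing for max() to take the maximum of.
def last_gpu_index(max_memory):
    candidates = [i for i in max_memory if isinstance(i, int) and max_memory[i] > 0]
    return max(candidates)  # raises ValueError when candidates is empty

# A mapping where the only GPU budget is zero triggers the error:
try:
    last_gpu_index({0: 0, "cpu": 64_000_000_000})
except ValueError as exc:
    print("reproduced:", exc)
```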
t1_jd2jeq3 | Try this:
Navigate to \path\to\miniconda3\envs\textgen\lib\site-packages\accelerate\utils\modeling.py
On line 452 **replace**:
per_gpu = module_sizes[""] // (num_devices - 1 if low_zero else num_devices)
**With this**:
per_gpu = module_sizes[""] // (num_devices - 1 if low_zero else num_devi... | 1 | 0 | 2023-03-21T11:36:09 | Technical_Leather949 | false | null | 0 | jd2jeq3 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd2jeq3/ | false | 1 |
t1_jd2jc5j | I think it's a lot more complicated than that. My understanding is that for 4 bit it's quantized to optimize the output of that model, but if you're fine-tuning the model then that quantization won't make sense for the newly fine-tuned model. I think your best bet is to fine-tune first and then quantize to 4 bits on ... | 5 | 0 | 2023-03-21T11:35:25 | qrayons | false | null | 0 | jd2jc5j | false | /r/LocalLLaMA/comments/11xbu7d/how_to_do_llama_30b_4bit_finetuning/jd2jc5j/ | false | 5 |
t1_jd2irxk | Getting this error when trying to add the LoRA:

`Running on local URL: http://127.0.0.1:7860`

`To create a public link, set share=True in launch().`
`Adding the LoRA alpaca-lora-7b to the model...`
`Traceback (most recent call last):`
`File "G:\text... | 1 | 0 | 2023-03-21T11:29:28 | skyrimfollowers | false | null | 0 | jd2irxk | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd2irxk/ | false | 1 |
t1_jd2ecn0 | [https://huggingface.co/Pi3141/alpaca-13B-ggml](https://huggingface.co/Pi3141/alpaca-13B-ggml) | 2 | 0 | 2023-03-21T10:37:27 | EuphoricCocos | false | null | 0 | jd2ecn0 | false | /r/LocalLLaMA/comments/11sgewy/alpaca_lora_finetuning_possible_on_24gb_vram_now/jd2ecn0/ | false | 2 |
t1_jd2cqet | >Does this also prevent it from automatically responding with dialogue from the user?
I've tested this for at least half a dozen hours and have not experienced that behavior. Make sure you're in --cai-chat mode too. There's also a setting that can be checked in the Parameters tab to stop generation on new lines, bu... | 1 | 0 | 2023-03-21T10:16:07 | Technical_Leather949 | false | null | 0 | jd2cqet | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd2cqet/ | false | 1 |
t1_jd2b9mh | Wow, thanks! Will try it out later. Does this also prevent it from automatically responding with dialogue from the user? Like sometimes it starts almost chatting with itself, imitating the user. | 1 | 0 | 2023-03-21T09:55:51 | Necessary_Ad_9800 | false | null | 0 | jd2b9mh | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd2b9mh/ | false | 1 |
t1_jd2a55o | Create this .json file inside text-generation-webui/characters:
{
"char_name": "LLaMA",
"char_persona": "LLaMA's primary function is to interact with users through natural language processing, which means it can understand and respond to text-based queries in a way that is similar to how ... | 2 | 0 | 2023-03-21T09:40:11 | Technical_Leather949 | false | null | 0 | jd2a55o | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd2a55o/ | false | 2 |
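The JSON above is cut off mid-field. As a sketch of the character-file format the comment describes, the snippet below writes a minimal file into text-generation-webui/characters; `char_name` and `char_persona` appear in the original comment, while the remaining fields and all values are placeholders, and the exact set of supported fields may vary by web UI version:

```python
import json
from pathlib import Path

# Placeholder character definition in the web UI's character format.
# All values here are illustrative, not the original comment's text.
character = {
    "char_name": "LLaMA",
    "char_persona": "A helpful assistant that understands and responds to text-based queries.",
    "char_greeting": "Hello! How can I help you today?",
    "example_dialogue": "",
}

# Save as text-generation-webui/characters/LLaMA.json so it shows up in the UI.
Path("LLaMA.json").write_text(json.dumps(character, indent=4))
```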
t1_jd29luv | When I talk to it, it often responds with really short sentences and is almost rude to me (lol). Is there any way to always make it respond with longer answers? | 1 | 0 | 2023-03-21T09:32:18 | Necessary_Ad_9800 | false | null | 0 | jd29luv | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd29luv/ | false | 1 |
t1_jd29bsa | Yes, so depending on the format of the model, the purpose is different.
If you want to run it in CPU mode, the ending format you want is ggml-...q4_0.bin
For the people running it in 16 bit mode, it would be f16 there at the end.
If it ends with the .pt file extension, that means it's in PyTorch format, the checkpoint exp... | 6 | 0 | 2023-03-21T09:28:05 | moridin007 | false | null | 0 | jd29bsa | false | /r/LocalLLaMA/comments/11x4v3c/how_long_does_it_take_to_get_access/jd29bsa/ | false | 6 |
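The rule of thumb in the comment above can be summarized as a tiny helper. This is a sketch of the heuristics as stated in the comment, not an official naming scheme:

```python
# Guess how a LLaMA checkpoint is meant to be used from its filename alone,
# following the comment's heuristics: ggml q4_0 -> CPU, f16 -> 16-bit, .pt -> PyTorch.
def guess_format(filename: str) -> str:
    if "ggml" in filename and filename.endswith("q4_0.bin"):
        return "4-bit ggml (CPU inference, e.g. llama.cpp)"
    if "f16" in filename:
        return "16-bit weights"
    if filename.endswith(".pt"):
        return "PyTorch checkpoint"
    return "unknown"

print(guess_format("ggml-model-q4_0.bin"))  # 4-bit ggml (CPU inference, e.g. llama.cpp)
```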
t1_jd27f5j | I've never used Voldy, but in Oobabooga, there's a very basic positive prompt and negative prompt included to improve picture quality and appearance in general. You can change the positive and negative prompt to whatever you please, as well as the generation resolution.
It'll use whatever model you last used in AUTOM... | 1 | 0 | 2023-03-21T08:59:30 | friedrichvonschiller | false | null | 0 | jd27f5j | false | /r/LocalLLaMA/comments/11x12jz/is_it_possible_to_integrate_llama_cpp_with/jd27f5j/ | false | 1 |
t1_jd265en | [deleted] | 1 | 0 | 2023-03-21T08:40:37 | [deleted] | true | null | 0 | jd265en | false | /r/LocalLLaMA/comments/11w2mte/stable_diffusion_api_now_integrated_in_the_web_ui/jd265en/ | false | 1 |
t1_jd263cn | [deleted] | 1 | 0 | 2023-03-21T08:39:48 | [deleted] | true | null | 0 | jd263cn | false | /r/LocalLLaMA/comments/11w2mte/stable_diffusion_api_now_integrated_in_the_web_ui/jd263cn/ | false | 1 |
t1_jd24r71 | Neat! Didn't know that LoRA existed, will take a look. | 2 | 0 | 2023-03-21T08:19:48 | _ouromoros | false | null | 0 | jd24r71 | false | /r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jd24r71/ | false | 2 |
t1_jd23h5q | Anthropic HH-RLHF: [https://huggingface.co/datasets/Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf)
Stanford Human Preferences Dataset: [https://huggingface.co/datasets/stanfordnlp/SHP](https://huggingface.co/datasets/stanfordnlp/SHP)
the ChatLLaMA LoRA was said to be trained on the Anthropic da... | 17 | 0 | 2023-03-21T08:00:31 | Civil_Collection7267 | false | null | 0 | jd23h5q | false | /r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jd23h5q/ | false | 17 |
t1_jd214sr | Finally works! Thanks. I'm actually surprised it's working after all that. | 1 | 0 | 2023-03-21T07:25:23 | Pan000 | false | null | 0 | jd214sr | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd214sr/ | false | 1 |
t1_jd1z4l3 | From the [pinned comment](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/comment/jcwhu96/?utm_source=share&utm_medium=web2x&context=3) and the note in the guide:
>**Note:** For 4-bit usage, a recent update to GPTQ-for-LLaMA has made it necessary to change to a previous commit when using certain mode... | 1 | 0 | 2023-03-21T06:55:51 | Technical_Leather949 | false | null | 0 | jd1z4l3 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd1z4l3/ | false | 1 |
t1_jd1yqot | That's awesome! I've gotta give that a try. One of the first things that really grabbed me about ChatGPT was recreating the feel of old adventure games. Though its lack of proper Zork map structure was deplorable. | 5 | 0 | 2023-03-21T06:50:28 | toothpastespiders | false | null | 0 | jd1yqot | false | /r/LocalLLaMA/comments/11wwwjq/graphic_text_adventure_game_locally_with_llama/jd1yqot/ | false | 5 |
t1_jd1yomb | Following those instructions I managed to get past `setup_cuda.py`, but now I get an error on `server.py`:
`TypeError: load_quant() missing 1 required positional argument: 'groupsize'`
That's using `python server.py --model llama-30b --gptq-bits 4`
Or if I do it without the gptq-bits parameter ... | 1 | 0 | 2023-03-21T06:49:39 | Pan000 | false | null | 0 | jd1yomb | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd1yomb/ | false | 1 |
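The `TypeError` above is the classic symptom of an API gaining a required parameter: a newer `load_quant` takes a `groupsize` argument that older call sites in `server.py` don't pass. A hypothetical reproduction of the mechanism (the signature below is a stand-in, not GPTQ-for-LLaMA's exact code):

```python
# Stand-in for an updated load_quant signature: a newly added required
# positional parameter breaks callers still passing only three arguments.
def load_quant(model, checkpoint, wbits, groupsize):
    return {"model": model, "wbits": wbits, "groupsize": groupsize}

try:
    load_quant("llama-30b", "llama-30b-4bit.pt", 4)  # old three-argument call
except TypeError as exc:
    print(exc)  # the message names the missing 'groupsize' argument
```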
t1_jd1xc3k | Yes I use this, but had to edit the script to generate image for every response | 5 | 0 | 2023-03-21T06:30:57 | vaidas-maciulis | false | null | 0 | jd1xc3k | false | /r/LocalLLaMA/comments/11wwwjq/graphic_text_adventure_game_locally_with_llama/jd1xc3k/ | false | 5 |
t1_jd1wvzr | >which guide do you have an exact link?
This exact guide. It's in the troubleshooting section under WSL.
I believe the reason you are having so many issues is because you may have accidentally glanced or skipped over a lot of information when trying to install this. Everything is in the guide, and please do not sk... | 1 | 0 | 2023-03-21T06:24:44 | Technical_Leather949 | false | null | 0 | jd1wvzr | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd1wvzr/ | false | 1 |
t1_jd1wpab | Which guide? Do you have an exact link? I followed the 25-step one multiple times. Then my brother got it working on his computer with WSL; I followed the same steps but it doesn't work on mine.
(textgen) llama@SD:\~/text-generation-webui/repositories/GPTQ-for-LLaMa$ sudo apt install build-essential
Reading package li... | 1 | 0 | 2023-03-21T06:22:15 | SDGenius | false | null | 0 | jd1wpab | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd1wpab/ | false | 1 |
t1_jd1wk83 | >tried it, but it gave an error
Can you explain what error you're getting? Something must be seriously wrong with the setup you have if that's happening. Did you update and upgrade packages as mentioned in step 4? You shouldn't be getting an error when trying to run a command as simple as that.
>have no idea wh... | 1 | 0 | 2023-03-21T06:20:24 | Technical_Leather949 | false | null | 0 | jd1wk83 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd1wk83/ | false | 1 |
t1_jd1w70f | 1. tried it, but it gave an error
2. have no idea where that is
3. i tried about 3 of their commands from that thread and none worked
4. now my c drive is all filled up with 5 mb left with various packages from all these installs | 1 | 0 | 2023-03-21T06:15:20 | SDGenius | false | 2023-03-21T06:19:15 | 0 | jd1w70f | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd1w70f/ | false | 1 |
t1_jd1vqci | It was 11.7 every time except the most recent one on Windows where I followed someone's instructions with 11.3, which gave the same error.
I've done it over 3 times. Same error. I find it unusual that the same error occurs on both WSL and Windows.
I will try again with the alternate fix and update if it works. | 1 | 0 | 2023-03-21T06:09:10 | Pan000 | false | null | 0 | jd1vqci | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd1vqci/ | false | 1 |
t1_jd1uv2p | >However, each time I have the correct CUDA version
>
>11.3
In WSL, you shouldn't be using CUDA 11.3. That is only an instruction for the 4-bit Windows installation. You should have CUDA 11.7 installed with the special WSL-Ubuntu CUDA toolkit, as seen in the troubleshooting section under WSL. There is als... | 1 | 0 | 2023-03-21T05:57:42 | Technical_Leather949 | false | null | 0 | jd1uv2p | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd1uv2p/ | false | 1 |
t1_jd1u4b2 | I don't understand the point of those models. First, every project I've found wants you to start from the base Facebook models which are in an entirely different format. Second, anything else wants the hfv2 format and then you have to quantize them with a certain specific method for each project, but those are pre-quan... | 2 | 0 | 2023-03-21T05:48:11 | Virtamancer | false | null | 0 | jd1u4b2 | false | /r/LocalLLaMA/comments/11x4v3c/how_long_does_it_take_to_get_access/jd1u4b2/ | false | 2 |
t1_jd1tskj | Does the ooba webui do anything like voldy for SD, where it has features enabled by default that affect the output? I want to start with 100% guaranteed vanilla output and learn from there, and I like the idea of a GUI for repetitive processes and saving settings presets. | 2 | 0 | 2023-03-21T05:44:03 | Virtamancer | false | null | 0 | jd1tskj | false | /r/LocalLLaMA/comments/11x12jz/is_it_possible_to_integrate_llama_cpp_with/jd1tskj/ | false | 2 |
t1_jd1tqiq | You need to install choco then run, choco install wget | 1 | 0 | 2023-03-21T05:43:18 | moridin007 | false | null | 0 | jd1tqiq | false | /r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jd1tqiq/ | false | 1 |
t1_jd1to7w | I've tried multiple instructions from here and various others, both on WSL on Windows 11 (fresh Ubuntu as installed by WSL) and on native Windows 11, and weirdly I get the same error from `python setup_cuda.py install`. That same error I get both from WSL Ubuntu and from Windows, which is odd. With the prebuilt wheel s... | 1 | 0 | 2023-03-21T05:42:29 | Pan000 | false | null | 0 | jd1to7w | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd1to7w/ | false | 1 |
t1_jd1srlv | Thank you so much. :) I will give this a try later.
Edit: Nevermind I see the WSL specific instruction. :D | 1 | 0 | 2023-03-21T05:31:07 | SlavaSobov | false | 2023-03-21T05:35:11 | 0 | jd1srlv | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd1srlv/ | false | 1 |
t1_jd1soaa | 1. Did you follow the instructions to do sudo apt install build-essential?
2. Did you check the special troubleshooting section under WSL, applying all of those fixes as necessary, especially the part about installing WSL-Ubuntu CUDA toolkit?
3. Did you try the suggestion [here](https://github.com/oobabooga/text-gene... | 1 | 0 | 2023-03-21T05:29:56 | Technical_Leather949 | false | null | 0 | jd1soaa | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd1soaa/ | false | 1 |
t1_jd1s8g0 | There could be several possible options to try to fix this, such as [this](https://github.com/qwopqwop200/GPTQ-for-LLaMa/issues/47#issuecomment-1474775345) or [this](https://github.com/oobabooga/text-generation-webui/issues/457#issuecomment-1477153571), but I wouldn't want to have you spend a lot of time going through ... | 2 | 0 | 2023-03-21T05:24:27 | Technical_Leather949 | false | null | 0 | jd1s8g0 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd1s8g0/ | false | 2 |
t1_jd1qao3 | You can run 7B and 13B on an M1 using llama.cpp. | 3 | 0 | 2023-03-21T05:01:30 | The_frozen_one | false | null | 0 | jd1qao3 | false | /r/LocalLLaMA/comments/11x4v3c/how_long_does_it_take_to_get_access/jd1qao3/ | false | 3 |
t1_jd1lgce | Tried WSL as you suggested; still not working.

Traceback (most recent call last):
File "/home/llama/text-generation-webui/repositories/GPTQ-for-LLaMa/setup_cuda.py", line 4, in <module>
setup(
File "/home/llama/miniconda3/envs/textgen/lib/python3.10/site-packages/setuptools/__init__.py",... | 1 | 0 | 2023-03-21T04:09:39 | SDGenius | false | null | 0 | jd1lgce | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd1lgce/ | false | 1 |
t1_jd1j8tu | Yes. And the only proper way to run fine-tuned solutions like Alpaca is to recreate them yourself based on the instructions in Stanford's paper *after* you've been granted access, definitely not by downloading such models directly from sources on posts like this one:
https://www.reddit.com/r/Oobabooga/comments/11v56na... | 23 | 0 | 2023-03-21T03:48:27 | AI-Pon3 | false | null | 0 | jd1j8tu | false | /r/LocalLLaMA/comments/11x4v3c/how_long_does_it_take_to_get_access/jd1j8tu/ | false | 23 |
t1_jd1eq18 | The models are available via torrent:
magnet:?xt=urn:btih:cdee3052d85c697b84f4c1192f43a2276c0daea0&dn=LLaMA | 7 | 0 | 2023-03-21T03:08:14 | AntDisastrous5496 | false | null | 0 | jd1eq18 | false | /r/LocalLLaMA/comments/11x4v3c/how_long_does_it_take_to_get_access/jd1eq18/ | false | 7 |
t1_jd1epau | It is our duty as fellow academics to wait for proper approval by Meta corporation and not download models from unofficial sources like [https://huggingface.co/decapoda-research](https://huggingface.co/decapoda-research) | 32 | 0 | 2023-03-21T03:08:03 | oobabooga1 | false | null | 0 | jd1epau | false | /r/LocalLLaMA/comments/11x4v3c/how_long_does_it_take_to_get_access/jd1epau/ | false | 32 |
t1_jd1dwp6 | Explain this voodoo please | 3 | 0 | 2023-03-21T03:01:22 | RobXSIQ | false | null | 0 | jd1dwp6 | false | /r/LocalLLaMA/comments/11wwwjq/graphic_text_adventure_game_locally_with_llama/jd1dwp6/ | false | 3 |
t1_jd1dea5 | Thank you! | 3 | 0 | 2023-03-21T02:57:09 | falconnor4 | false | null | 0 | jd1dea5 | false | /r/LocalLLaMA/comments/11x12jz/is_it_possible_to_integrate_llama_cpp_with/jd1dea5/ | false | 3 |
t1_jd1ckq0 | Still trying to get 4-bit working. ^^;
Followed all the Windows directions multiple times, after removing textgen from Anaconda each time, and even reinstalling the 2019 build tools just to be safe.
Any time I try to run 'python setup_cuda.py install' I get the following error. Any ideas? I tried to search, ... | 1 | 0 | 2023-03-21T02:50:34 | SlavaSobov | false | null | 0 | jd1ckq0 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd1ckq0/ | false | 1 |
t1_jd18dm4 | Is that using the sd_api_pictures extension for oobabooga?
https://github.com/oobabooga/text-generation-webui/tree/main/extensions/sd_api_pictures
If so can you explain your work flow a little? | 6 | 0 | 2023-03-21T02:17:53 | Inevitable-Start-653 | false | null | 0 | jd18dm4 | false | /r/LocalLLaMA/comments/11wwwjq/graphic_text_adventure_game_locally_with_llama/jd18dm4/ | false | 6 |
t1_jd17l7q | Check out Oobabooga’s repo. It has API and local TTS functionality and is growing with more extensions for added functionality. | 3 | 0 | 2023-03-21T02:11:47 | mxby7e | false | null | 0 | jd17l7q | false | /r/LocalLLaMA/comments/11x12jz/is_it_possible_to_integrate_llama_cpp_with/jd17l7q/ | false | 3 |
t1_jd0kdok | I guess I'll have to try WSL eventually. But it's weird, because I did input all your commands for clearing the env and cache, and I manually deleted it from that folder as well.
Edit:
How important is using PowerShell rather than conda? Do I have to use it for the whole installation or just a part? | 1 | 0 | 2023-03-20T23:23:38 | SDGenius | false | 2023-03-21T01:21:03 | 0 | jd0kdok | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd0kdok/ | false | 1 |
t1_jd0jeiv | >WARNING: A directory already exists at the target location 'D:\miniconda3\envs\textgen'
>
>Using cached datasets-2.10.1-py3-none-any.whl
>
>Using cached responses-0.18.0-py3-none-any.whl (38 kB)
It looks like here that you didn't remove the previous environment, and you didn't clean the cach... | 1 | 0 | 2023-03-20T23:16:44 | Technical_Leather949 | false | null | 0 | jd0jeiv | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd0jeiv/ | false | 1 |
t1_jd0fqw3 | I did try just now to follow the steps exactly to a T, copying and pasting each one... Here was my process; there were absolutely no errors until the end:
[https://pastebin.com/T6F1p7iF](https://pastebin.com/T6F1p7iF) | 1 | 0 | 2023-03-20T22:50:50 | SDGenius | false | null | 0 | jd0fqw3 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd0fqw3/ | false | 1 |
t1_jd0d4sk | >Was 12.0 ever supposed to be there? Or was it a mistake we both made? I'm a bit confused about that.
Your conda environment should not have CUDA 12.0, and the 4-bit steps do not have any part for installing that. Currently, the steps involve installing CUDA 11.3, and I'll probably update that, but CUDA 11.3 still... | 1 | 0 | 2023-03-20T22:32:23 | Technical_Leather949 | false | null | 0 | jd0d4sk | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd0d4sk/ | false | 1 |
t1_jd0c6b3 | [removed] | 1 | 0 | 2023-03-20T22:25:37 | [deleted] | true | null | 0 | jd0c6b3 | false | /r/LocalLLaMA/comments/11wndc9/toms_hardware_wrote_a_guide_to_running_llama/jd0c6b3/ | false | 1 |
t1_jd0c5vn | Agreed! Great article, in fact I’m going to add TH back to my regular feed now. My earlier cynicism was totally undeserved. Very enjoyable read with lots of tinkering, poor fella sounds like he had a lot of hassles to overcome during his test. | 4 | 0 | 2023-03-20T22:25:32 | PM_ME_ENFP_MEMES | false | null | 0 | jd0c5vn | false | /r/LocalLLaMA/comments/11wndc9/toms_hardware_wrote_a_guide_to_running_llama/jd0c5vn/ | false | 4 |
t1_jd0bnty | I tried a new env, but someone wrote me this too:
Default install instructions for pytorch will install 12.0 CUDA files. The easiest way that I've found to get around this is to install pytorch using conda with -c "nvidia/label/cuda-11.7.0" included before -c nvidia:
conda install pytorch torchvision torchaudio p... | 1 | 0 | 2023-03-20T22:22:04 | SDGenius | false | 2023-03-20T22:26:35 | 0 | jd0bnty | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd0bnty/ | false | 1 |
t1_jd0b4ng | You had a previous issue where you had the incorrect CUDA version installed. Did you restart from step 1 with a new environment or did you install CUDA 11.3 into that same environment you were working with? Did you also have any HTTP errors along the way?
I just finished retesting the steps again on a completely new W... | 1 | 0 | 2023-03-20T22:18:20 | Technical_Leather949 | false | null | 0 | jd0b4ng | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd0b4ng/ | false | 1 |
t1_jd09izq | Ah, this was it
`export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/lib/wsl/lib` | 1 | 0 | 2023-03-20T22:07:10 | zxyzyxz | false | null | 0 | jd09izq | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd09izq/ | false | 1 |
t1_jd06glj | Hi, did you check the pinned guide? [https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/)
and ooba's repo?
[https://github.com/oobabooga/text-generation-webui](https://github.com/oobabo... | 1 | 0 | 2023-03-20T21:46:32 | Civil_Collection7267 | false | null | 0 | jd06glj | false | /r/LocalLLaMA/comments/11wt9di/i_want_to_run_llama7b4bit_in_some_type_of_python/jd06glj/ | true | 1 |
t1_jd051od | Is this WSL or native Linux? If it's WSL, did you check the troubleshooting section and apply all necessary fixes, such as the fix here:
>In order to avoid a CUDA error when starting the web UI, you will need to apply the following fix as seen in [this comment](https://github.com/TimDettmers/bitsandbytes/issues/156#... | 1 | 0 | 2023-03-20T21:37:13 | Technical_Leather949 | false | null | 0 | jd051od | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd051od/ | false | 1 |
t1_jd02pr1 | [deleted] | 1 | 0 | 2023-03-20T21:21:39 | [deleted] | true | null | 0 | jd02pr1 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd02pr1/ | false | 1 |
t1_jd021yb | I'm getting the error that I don't have a CUDA device / GPU, even though I do and `torch.cuda.is_available()` is `True`.
$ python server.py --model llama-7b-hf --load-in-8bit
Loading llama-7b-hf...
===================================BUG REPORT===================================
Welcome to bitsandb... | 1 | 0 | 2023-03-20T21:17:16 | zxyzyxz | false | 2023-03-20T21:20:37 | 0 | jd021yb | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd021yb/ | false | 1 |
t1_jd01kp9 | Seems like a solid article. Kudos to Jarred Walton! Would be interested to see dual GPUs thrown into the fray. I'm curious how 2x 3080 10GB would compare. | 8 | 0 | 2023-03-20T21:14:05 | iJeff | false | null | 0 | jd01kp9 | false | /r/LocalLLaMA/comments/11wndc9/toms_hardware_wrote_a_guide_to_running_llama/jd01kp9/ | false | 8 |
t1_jd01264 | Ah, my mistake, I just copy/pasted the command from the install script. I also used `python download-model.py llama-7b-hf` inside text-generation-webui which works great, no need to git clone at all manually. | 1 | 0 | 2023-03-20T21:10:41 | zxyzyxz | false | null | 0 | jd01264 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd01264/ | false | 1 |
t1_jd00u7i | >I used git clone https://huggingface.co/decapoda-research/llama-7b-hf
You're attempting to run the 13B model, but if you did git clone https://huggingface.co/decapoda-research/llama-7b-hf, then you only have the 7B model. The error states: models/llama-13b-hf is not a local folder
You have to download the 13B mod... | 1 | 0 | 2023-03-20T21:09:15 | Technical_Leather949 | false | null | 0 | jd00u7i | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd00u7i/ | false | 1 |
t1_jd000nq | [removed] | 1 | 0 | 2023-03-20T21:03:52 | [deleted] | true | null | 0 | jd000nq | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jd000nq/ | false | 1 |
t1_jczzzkg | This worked! I can run the 13B 4-bit model on my 3080 Ti now. Will see if I can run the 8-bit models and Alpaca next. | 1 | 0 | 2023-03-20T21:03:40 | lanky_cowriter | false | null | 0 | jczzzkg | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jczzzkg/ | false | 1 |
t1_jczzbbp | I suggest starting with the Oobabooga repo and following the instructions for running the model in 4-bit. You're going to get slow responses with that GPU though. | 1 | 0 | 2023-03-20T20:59:17 | mxby7e | false | null | 0 | jczzbbp | false | /r/LocalLLaMA/comments/11wt9di/i_want_to_run_llama7b4bit_in_some_type_of_python/jczzbbp/ | false | 1 |
t1_jczyn3t | I'm getting the following error:
$ python server.py --model llama-13b-hf --load-in-8bit
Loading llama-13b-hf...
Traceback (most recent call last):
File "/home/user/anaconda3/envs/textgen/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 259, in hf_raise_for_status
response... | 1 | 0 | 2023-03-20T20:54:57 | zxyzyxz | false | 2023-03-20T21:16:41 | 0 | jczyn3t | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jczyn3t/ | false | 1 |
t1_jczvqy9 | went through all the instructions, step by step, got this error:
[https://pastebin.com/GTwbCfu4](https://pastebin.com/GTwbCfu4) | 1 | 0 | 2023-03-20T20:36:28 | SDGenius | false | null | 0 | jczvqy9 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jczvqy9/ | false | 1 |
t1_jczrt25 | Followed all your steps but the errors persist. I get errors in step 21 when I try to run setup_cuda.py install. Also, when I tried to run your steps to remove the conda environment, I still had a miniconda3 folder at c:/users/xxx, which is kinda weird.
I think one of the crucial mistakes that I originally made th... | 1 | 0 | 2023-03-20T20:11:13 | reneil1337 | false | null | 0 | jczrt25 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jczrt25/ | false | 1 |
t1_jczq9uy | This has been asked and answered in the pinned thread already. See more here: [https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/)
You did not do the step to install CUDA 11.3 if you're... | 1 | 0 | 2023-03-20T20:01:24 | Civil_Collection7267 | false | null | 0 | jczq9uy | false | /r/LocalLLaMA/comments/11wt73z/trying_to_do_4bit_on_windows_10_getting_cuda/jczq9uy/ | true | 1 |
t1_jczosv4 | I'm using Windows 11 but had the same problem. The only solution that ultimately worked was completely uninstalling CUDA 12 (just via the normal Windows add/remove programs) and installing v11.3. Then start with a new virtual environment in miniconda. I tried various fixes of adding PATHs to the Windows Environment a... | 1 | 0 | 2023-03-20T19:52:10 | Organic_Studio_438 | false | null | 0 | jczosv4 | false | /r/LocalLLaMA/comments/11wt73z/trying_to_do_4bit_on_windows_10_getting_cuda/jczosv4/ | false | 1 |
t1_jczd5ce | got it working, llama 7b is scary
https://preview.redd.it/yoktnsgcczoa1.png?width=426&format=png&auto=webp&v=enabled&s=2940c648a86bc5c2bd39339e0b77ac3e2922b574 | 2 | 0 | 2023-03-20T18:37:13 | megadonkeyx | false | null | 0 | jczd5ce | false | /r/LocalLLaMA/comments/11w2mte/stable_diffusion_api_now_integrated_in_the_web_ui/jczd5ce/ | false | 2 |
t1_jcz7o2k | [removed] | 1 | 0 | 2023-03-20T18:02:07 | [deleted] | true | null | 0 | jcz7o2k | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcz7o2k/ | false | 1 |
t1_jcz679s | it's just a file, open the directory with `explorer.exe .` or some other file transfer mechanism | 1 | 0 | 2023-03-20T17:52:45 | EveningFunction | false | null | 0 | jcz679s | false | /r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/jcz679s/ | false | 1 |
t1_jcyvo8n | Colour me skeptical when it comes to TH and accuracy but I’ll give it a read! | 1 | 0 | 2023-03-20T16:45:29 | PM_ME_ENFP_MEMES | false | null | 0 | jcyvo8n | false | /r/LocalLLaMA/comments/11wndc9/toms_hardware_wrote_a_guide_to_running_llama/jcyvo8n/ | false | 1 |
t1_jcyn2d9 | Thanks, gonna look into it | 1 | 0 | 2023-03-20T15:49:32 | Necessary_Ad_9800 | false | null | 0 | jcyn2d9 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcyn2d9/ | false | 1 |
t1_jcyia59 | I had the same issue. I redid everything from a clean Windows install without downloading anything CUDA-related from Nvidia; I only followed this guide (4-bit). The reason you get 0 tokens is the CUDA extension error message. | 1 | 0 | 2023-03-20T15:16:23 | Necessary_Ad_9800 | false | 2023-03-20T15:49:16 | 0 | jcyia59 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcyia59/ | false | 1 |
t1_jcyhfnk | Yes, save it as a .json file in text-generation-webui/characters. The file should look like "LLaMA-Precise.json", and you can use an image file with the same name to give it a picture.
If you're looking for an experience similar to ChatGPT, I currently recommend using this instead of the LLaMA-Precise example:
{
... | 2 | 0 | 2023-03-20T15:10:12 | Technical_Leather949 | false | null | 0 | jcyhfnk | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jcyhfnk/ | false | 2 |
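For readers wondering what such a character file contains (the JSON in the comment above is cut off): a minimal sketch of the layout, based on my recollection of the fields text-generation-webui used at the time. The field names (`char_name`, `char_persona`, etc.) and all the values here are illustrative and should be checked against the example files bundled in the repo's characters folder.

```json
{
  "char_name": "Assistant",
  "char_persona": "A helpful, precise assistant that answers questions directly and concisely.",
  "char_greeting": "Hello! How can I help you today?",
  "world_scenario": "",
  "example_dialogue": "You: What is the capital of France?\nAssistant: The capital of France is Paris."
}
```

Saved as, say, `text-generation-webui/characters/Assistant.json`, it then appears in the character dropdown in chat mode; a PNG with the same base name supplies the avatar, as the comment describes.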
t1_jcygkgy | A couple of notes:
>I've checked-out GPTQ-for-LLaMA a few hours ago.
I just want to confirm, you're saying you did the `git reset --hard 468c47c01b4fe370616747b6d69a2d3f48bab5e4` fix right?
>I had CUDA 12.x installed previously which led to a problem during the initial installation process. After installing CUDA... | 2 | 0 | 2023-03-20T15:03:56 | Technical_Leather949 | false | null | 0 | jcygkgy | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcygkgy/ | false | 2 |
t1_jcyb959 | any luck? Im having troubles with this exact part on windows on AMD | 1 | 0 | 2023-03-20T14:22:28 | limitedby20character | false | null | 0 | jcyb959 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcyb959/ | false | 1 |
t1_jcy8eu1 | Can you tell me how you installed the model on WSL? | 1 | 0 | 2023-03-20T14:01:40 | EnvironmentalAd3385 | false | null | 0 | jcy8eu1 | false | /r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/jcy8eu1/ | false | 1 |
t1_jcy8aen | [removed] | 1 | 0 | 2023-03-20T14:00:44 | [deleted] | true | null | 0 | jcy8aen | false | /r/LocalLLaMA/comments/11wjmai/i_made_a_gui_app_for_installing_and_chatting_with/jcy8aen/ | false | 1 |
t1_jcy892j | Do I save this as a .json file in some folder and load it in the text webui? :) Sorry for noobiness! | 1 | 0 | 2023-03-20T14:00:27 | oliverban | false | null | 0 | jcy892j | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jcy892j/ | false | 1 |
t1_jcy87lo | Update: Excluded the folder from windows defender and added the missing files to make sure everything is in the folder. That didn't help to resolve the 0 tokens error. | 1 | 0 | 2023-03-20T14:00:07 | reneil1337 | false | null | 0 | jcy87lo | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcy87lo/ | false | 1 |
t1_jcy5ear | Hey! Thanks for your reply. Attached a screenshot of the log.
1) I'm using Windows
2) I've checked-out GPTQ-for-LLaMA a few hours ago.
3) Yes, this is actually the case. I was wondering about it, but the model loaded into VRAM and I could still access the UI
3.1) I had CUDA 12.x installed previously whic... | 1 | 0 | 2023-03-20T13:38:38 | reneil1337 | false | null | 0 | jcy5ear | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcy5ear/ | false | 1 |
t1_jcxzoj9 | It's interesting how it always starts by perfectly answering the question and ends with nonsense. Any settings to fiddle with to make it more coherent? | 1 | 0 | 2023-03-20T12:51:36 | Scriptod | false | null | 0 | jcxzoj9 | false | /r/LocalLLaMA/comments/11rkts9/tested_some_questions_for_ais/jcxzoj9/ | false | 1 |
t1_jcxuxd7 | There's a patch [here](https://github.com/oobabooga/text-generation-webui/issues/332#issuecomment-1474883977) but I don't know if this is actually working. See oobabooga's [comment on it here](https://github.com/oobabooga/text-generation-webui/issues/332#issuecomment-1475292634). I personally tested the 30B LoRA that's... | 3 | 0 | 2023-03-20T12:06:23 | Technical_Leather949 | false | null | 0 | jcxuxd7 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcxuxd7/ | false | 3 |
t1_jcxuqsn | [deleted] | 1 | 0 | 2023-03-20T12:04:35 | [deleted] | true | null | 0 | jcxuqsn | false | /r/LocalLLaMA/comments/11w2mte/stable_diffusion_api_now_integrated_in_the_web_ui/jcxuqsn/ | false | 1 |
t1_jcxuaz2 | Can you share more info:
1. Are you using Windows, WSL, or native Linux?
2. Did you make sure to do the GPTQ-for-LLaMA reset to the previous working commit?
3. Can you post more of the log, for example, are you seeing "CUDA extension not installed"?
I've seen this issue come up a few different times and it seems ther... | 2 | 0 | 2023-03-20T12:00:08 | Technical_Leather949 | false | null | 0 | jcxuaz2 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcxuaz2/ | false | 2 |
t1_jcxoq0l | [removed] | 1 | 0 | 2023-03-20T10:58:00 | [deleted] | true | null | 0 | jcxoq0l | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcxoq0l/ | false | 1 |
t1_jcxopg4 | Is there any possibility to run the 4bit with the alpaca lora? | 1 | 0 | 2023-03-20T10:57:48 | Necessary_Ad_9800 | false | null | 0 | jcxopg4 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcxopg4/ | false | 1 |
t1_jcxmrnt | Thanks for the awesome tutorial. Finally got the 13B 4-bit LLaMA running on my 4080, which is great. I can access the UI, but the output that is generated is always 0 tokens.
That doesn't change when I try the "--cai-chat" mode. I briefly see the image + "is typing" as I generate an output, but in a few milliseco... | 1 | 0 | 2023-03-20T10:33:13 | reneil1337 | false | null | 0 | jcxmrnt | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcxmrnt/ | false | 1 |
t1_jcx22l7 | I'm using it in nodeJS with [https://github.com/cocktailpeanut/dalai](https://github.com/cocktailpeanut/dalai)
It works really easily and well. Here's a short example script for using it in code:
const Dalai = require('dalai')
const home = "C:/mypath/dalai_ai";
result = ""
let prompt = "Script\n Red Dwarf... | 1 | 0 | 2023-03-20T05:36:56 | Sixhaunt | false | null | 0 | jcx22l7 | false | /r/LocalLLaMA/comments/11s5f39/integrate_llama_into_python_code/jcx22l7/ | false | 1 |
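The snippet above is cut off; a complete sketch of the same pattern, based on dalai's README API (`new Dalai(home).request(...)` streaming tokens to a callback). The model name, option names, and prompt here are assumptions for illustration; double-check them against the dalai docs and whatever model you actually installed.

```javascript
const Dalai = require('dalai')

// Path where dalai installed the models (taken from the comment above)
const home = "C:/mypath/dalai_ai"

let result = ""
const prompt = "Write a short scene in the style of a sci-fi sitcom:"

new Dalai(home).request({
  model: "llama.7B",   // model identifier as installed by dalai (assumed name)
  prompt: prompt,
  n_predict: 256,      // cap the number of generated tokens (assumed option)
}, (token) => {
  // dalai streams the response token by token; accumulate and echo each one
  result += token
  process.stdout.write(token)
})
```

This won't run without a local dalai install and downloaded weights, so treat it as a shape of the call rather than a copy-paste script.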