name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_jc2h7kw | without the llama.py changes, I get this error:
Traceback (most recent call last):
File "/home/<>/text-generation-webui/server.py", line 191, in <module>
shared.model, shared.tokenizer = load_model(shared.model_name)
File "/home/<>/text-generation-webui/modules/models.py", li... | 1 | 0 | 2023-03-13T15:47:30 | Tasty-Attitude-7893 | false | null | 0 | jc2h7kw | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc2h7kw/ | false | 1 |
t1_jc2gxd4 | This is the diff I had to use to get past the dictionary error on loading at first, where it spews a bunch of missing keys:
diff --git a/llama.py b/llama.py
index 09b527e..dee2ac0 100644
--- a/llama.py
+++ b/llama.py
@@ -240,9 +240,9 @@ def load_quant(model, checkpoint, wbits):
print('Loading model ...')
if c... | 1 | 0 | 2023-03-13T15:45:39 | Tasty-Attitude-7893 | false | null | 0 | jc2gxd4 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc2gxd4/ | false | 1 |
t1_jc2etra | [removed] | 1 | 0 | 2023-03-13T15:31:47 | [deleted] | true | null | 0 | jc2etra | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc2etra/ | false | 1 |
t1_jc2ei13 | [removed] | 1 | 0 | 2023-03-13T15:29:37 | [deleted] | true | null | 0 | jc2ei13 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc2ei13/ | false | 1 |
t1_jc2dn50 | Are you sure everything is set up correctly and you're using the model [downloaded here](https://huggingface.co/decapoda-research/llama-30b-hf/tree/main)? I've tested following these steps from the beginning on fresh Ubuntu and Windows installs and haven't run into any errors or problems.
decapoda-research said they we... | 3 | 0 | 2023-03-13T15:23:55 | Technical_Leather949 | false | null | 0 | jc2dn50 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc2dn50/ | false | 3 |
t1_jc2c8pk | What are the drawbacks (if any) of using 3/4-bit instead of 8? | 1 | 0 | 2023-03-13T15:14:24 | skripp11 | false | null | 0 | jc2c8pk | false | /r/LocalLLaMA/comments/11pw62f/does_anyone_have_a_download_for_the_3bit/jc2c8pk/ | false | 1 |
t1_jc24pot | [removed] | 1 | 0 | 2023-03-13T14:22:04 | [deleted] | true | null | 0 | jc24pot | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc24pot/ | false | 1 |
t1_jc24a25 | [removed] | 1 | 0 | 2023-03-13T14:18:57 | [deleted] | true | null | 0 | jc24a25 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jc24a25/ | false | 1 |
t1_jc1lsd5 | What are the system requirements for 3-bit? | 3 | 0 | 2023-03-13T11:36:37 | PartySunday | false | null | 0 | jc1lsd5 | false | /r/LocalLLaMA/comments/11pw62f/does_anyone_have_a_download_for_the_3bit/jc1lsd5/ | false | 3 |
t1_jc14r1i | I think you should add `--listen` to the arguments in the batch file that launches the server | 1 | 0 | 2023-03-13T07:42:48 | curtwagner1984 | false | null | 0 | jc14r1i | false | /r/LocalLLaMA/comments/11pkp1u/oobabooga_ui_windows_11_does_someone_know_what/jc14r1i/ | false | 1 |
t1_jc0h4gf | Yeah, 3-bit LLaMA 7B, 13B, and 30B are available here
[https://huggingface.co/decapoda-research/llama-smallint-pt/tree/main](https://huggingface.co/decapoda-research/llama-smallint-pt/tree/main) | 3 | 0 | 2023-03-13T03:16:36 | Irrationalender | false | null | 0 | jc0h4gf | false | /r/LocalLLaMA/comments/11pw62f/does_anyone_have_a_download_for_the_3bit/jc0h4gf/ | false | 3 |
t1_jc0a6en | Thanks for this! After struggling for hours trying to get it to run on Windows, I got it up and running with zero headaches using Ubuntu on Windows Subsystem for Linux. | 3 | 0 | 2023-03-13T02:20:34 | iJeff | false | 2023-03-13T03:29:38 | 0 | jc0a6en | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc0a6en/ | false | 3 |
t1_jc00oai | I edited the code to take away the strict model loading and it loaded after downloading a tokenizer from HF, but it now just spits out gibberish. I used the one from the decapoda-research unquantized model for 30b. Do you think that's the issue? | 1 | 0 | 2023-03-13T01:05:02 | Tasty-Attitude-7893 | false | null | 0 | jc00oai | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc00oai/ | false | 1 |
t1_jbztfbo | I had the same error (RuntimeError: ...lots of missing dict stuff) and I tried two different torrents from the official install guide and the weights from Hugging Face, on Ubuntu 22.04. I had a terrible time in CUDA land just trying to get the cpp file to compile, and I've been doing cpp for almost 30 years :(. I just ha... | 3 | 0 | 2023-03-13T00:09:00 | Tasty-Attitude-7893 | false | null | 0 | jbztfbo | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jbztfbo/ | false | 3 |
t1_jbzgdjt | It depends on your settings, but I can get a response as quick as 5 seconds, mostly 10 or under. Some can go 20-30 with settings turned up (using a 13B on an RTX 3080 10GB). | 3 | 0 | 2023-03-12T22:31:49 | iJeff | false | null | 0 | jbzgdjt | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbzgdjt/ | false | 3 |
t1_jbzg7ws | [This](https://cocktailpeanut.github.io/dalai/#/) is as good as it gets. | 3 | 0 | 2023-03-12T22:30:41 | iJeff | false | null | 0 | jbzg7ws | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbzg7ws/ | false | 3 |
t1_jbzcccr | Yes, results are more coherent and higher quality for everything. I've tested language translation, chatting, question answering, etc, and 13B is a good baseline. | 3 | 0 | 2023-03-12T22:02:51 | Technical_Leather949 | false | null | 0 | jbzcccr | false | /r/LocalLLaMA/comments/11pkp1u/oobabooga_ui_windows_11_does_someone_know_what/jbzcccr/ | false | 3 |
t1_jbzbk2m | I don't mind. Thanks for making the web UI! All of this is more accessible because of it. | 11 | 0 | 2023-03-12T21:57:20 | Technical_Leather949 | false | null | 0 | jbzbk2m | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jbzbk2m/ | false | 11 |
t1_jbza6zc | Try `--share` when you launch the server | 1 | 0 | 2023-03-12T21:47:52 | pdaddyo | false | null | 0 | jbza6zc | false | /r/LocalLLaMA/comments/11pkp1u/oobabooga_ui_windows_11_does_someone_know_what/jbza6zc/ | false | 1 |
t1_jbz5o8i | i am using [oobabooga](https://github.com/oobabooga)/[**text-generation-webui**](https://github.com/oobabooga/text-generation-webui)
Can someone please help? I don't know how to code or anything, but I just need a small bit of help: I want to make the UI/chat accessible on my tablet, but I don't know how. Where is this `launch()`... | 2 | 0 | 2023-03-12T21:16:05 | Regmas0 | false | null | 0 | jbz5o8i | false | /r/LocalLLaMA/comments/11pkp1u/oobabooga_ui_windows_11_does_someone_know_what/jbz5o8i/ | false | 2 |
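For readers in the same situation: per the replies in this thread, text-generation-webui exposes network access through launch flags, so there is no need to edit `launch()` in the code. A hedged usage sketch (flag names as mentioned here; check the project's README for your version):

```shell
# Make the web UI reachable from other devices on your LAN
# (then browse to http://<your-PC-IP>:7860 from the tablet)
python server.py --listen

# Or get a temporary public gradio.live link tunneled by Gradio
python server.py --share
```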
t1_jbz2rlw | I have borrowed your instructions. I hope you don't mind :)
[https://github.com/oobabooga/text-generation-webui/wiki/Installation-instructions-for-human-beings](https://github.com/oobabooga/text-generation-webui/wiki/Installation-instructions-for-human-beings) | 12 | 0 | 2023-03-12T20:55:17 | oobabooga1 | false | null | 0 | jbz2rlw | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jbz2rlw/ | false | 12 |
t1_jbz279c | I'll try it out. Is the 13B 4bit significantly smarter than the 7B one? | 1 | 0 | 2023-03-12T20:51:25 | curtwagner1984 | false | null | 0 | jbz279c | false | /r/LocalLLaMA/comments/11pkp1u/oobabooga_ui_windows_11_does_someone_know_what/jbz279c/ | false | 1 |
t1_jbyt4hx | I think you may have skipped a few steps. If you're following the instructions on the [GitHub](https://github.com/oobabooga/text-generation-webui#installation-option-1-conda) page, your conda environment should be named textgen.
Try starting over using the 4bit [instructions here](https://www.reddit.com/r/LocalLLaMA/c... | 2 | 0 | 2023-03-12T19:48:06 | Technical_Leather949 | false | null | 0 | jbyt4hx | false | /r/LocalLLaMA/comments/11pkp1u/oobabooga_ui_windows_11_does_someone_know_what/jbyt4hx/ | false | 2 |
t1_jbxrg2f | I can't wait for the 4090 titan so that I can run these models at home. Thank you for the tutorial. | 1 | 0 | 2023-03-12T15:26:11 | RabbitHole32 | false | null | 0 | jbxrg2f | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jbxrg2f/ | false | 1 |
t1_jbwo8zz | What is the speed of these responses? I'm interested in running llama locally but not sure how it performs. | 3 | 0 | 2023-03-12T08:10:16 | andrejg57 | false | null | 0 | jbwo8zz | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbwo8zz/ | false | 3 |
t1_jbvj2f4 | ChatGPT with 175b parameters and instruction-tuning (that no open-source model has been able to replicate yet) also confidently bullshits and invents information.
These models are next-token predictors. It's expected that they are dumb. The question is how good they are at pretending to be smart. | 13 | 0 | 2023-03-12T01:19:11 | oobabooga1 | false | null | 0 | jbvj2f4 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbvj2f4/ | false | 13 |
t1_jbvi4ef | I seem to be getting an error at the end about not finding a file.
PS C:\Users\X\text-generation-webui\repositories\GPTQ-for-LLaMa> python setup_cuda.py install
No CUDA runtime is found, using CUDA_HOME='C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1'
running install
C:\Python310\lib\site-p... | 2 | 0 | 2023-03-12T01:11:22 | iJeff | false | 2023-03-12T03:33:39 | 0 | jbvi4ef | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jbvi4ef/ | false | 2 |
t1_jbvbhps | It's for making characters in the style of TavernAI. I used it as a simple way to create a very basic initial prompt similar to what ChatGPT or [Bing Chat](https://www.make-safe-ai.com/is-bing-chat-safe/Prompts_Conversations.txt) uses. | 3 | 0 | 2023-03-12T00:18:56 | Technical_Leather949 | false | null | 0 | jbvbhps | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbvbhps/ | false | 3 |
t1_jburnjs | I'm less than impressed. The AI does get the answers correct, but none of the explanations make sense. For the first question, the AI says:
> Since there are no other subjects mentioned beforehand, we must assume the subject of the preceding clause is also the subject of the following clause (i.e., the school bus).... | 5 | 0 | 2023-03-11T21:45:41 | [deleted] | false | null | 0 | jburnjs | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jburnjs/ | false | 5 |
t1_jbuqa24 | lmaooooooooooooo | 9 | 0 | 2023-03-11T21:35:33 | bittytoy | false | null | 0 | jbuqa24 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbuqa24/ | false | 9 |
t1_jbujse4 | I've been playing with llama.cpp, which I don't think text-generation-webui supports yet. Anyways, is this json file something that is from text-generation-webui? I'm guessing it's a way to tell text-generation-webui which prompt to "pre-inject", so to speak? Just researching some good prompts for llama 13B and came... | 2 | 0 | 2023-03-11T20:47:35 | anarchos | false | null | 0 | jbujse4 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbujse4/ | false | 2 |
t1_jbu6te9 | This is very promising. I have created an extension that loads your csv and lets users pick a prompt to use:
[https://github.com/oobabooga/text-generation-webui/blob/main/extensions/llama_prompts/script.py](https://github.com/oobabooga/text-generation-webui/blob/main/extensions/llama_prompts/script.py)
... | 9 | 0 | 2023-03-11T19:13:48 | oobabooga1 | false | null | 0 | jbu6te9 | false | /r/LocalLLaMA/comments/11oqbvx/repository_of_llama_prompts/jbu6te9/ | false | 9 |
t1_jbu3lk4 | What hardware are you using to run this? | 5 | 0 | 2023-03-11T18:51:19 | 2muchnet42day | false | null | 0 | jbu3lk4 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbu3lk4/ | false | 5 |
t1_jbu1ocq | Thank you for your message. Is there an estimated time of arrival for a **user-friendly** installation method that is compatible with the WebUI? | 1 | 0 | 2023-03-11T18:37:53 | curtwagner1984 | false | null | 0 | jbu1ocq | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbu1ocq/ | false | 1 |
t1_jbtng6x | A user on GitHub provided the whl required for Windows, which SHOULD significantly shorten the 4-bit installation process, I believe foregoing the need to install Visual Studio altogether.
[GPTQ quantization(3 or 4 bit quantization) support for LLaMa · Issue #177 · oobabooga/text-generation-webui · GitHub](https://gith... | 3 | 0 | 2023-03-11T16:58:19 | j4nds4 | false | null | 0 | jbtng6x | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jbtng6x/ | false | 3 |
t1_jbtl1op | [https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/) | 4 | 0 | 2023-03-11T16:41:43 | Kamehameha90 | false | null | 0 | jbtl1op | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbtl1op/ | false | 4 |
t1_jbtkk02 | Where can one get the model? | 1 | 0 | 2023-03-11T16:38:23 | curtwagner1984 | false | null | 0 | jbtkk02 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbtkk02/ | false | 1 |
t1_jbsuftk | Thanks a lot for this guide! All is working and I had no errors, but if I press "generate" I get this error:
`Traceback (most recent call last):`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\gradio\routes.py", line 374, in run_predict`
`output = await app.get_blocks().proc... | 2 | 0 | 2023-03-11T13:15:00 | Kamehameha90 | false | 2023-03-11T14:08:36 | 0 | jbsuftk | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jbsuftk/ | false | 2 |
t1_jbsdxsf | Really interesting, great job! Thanks for sharing! | 3 | 0 | 2023-03-11T09:45:11 | alexl83 | false | null | 0 | jbsdxsf | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbsdxsf/ | false | 3 |
t1_jbrqxsw | Those replies are really impressive. I was used to messing with OPT and GPT-J before all this, and the responses were semi-coherent ramblings. LLaMA is extremely coherent in comparison. | 11 | 0 | 2023-03-11T04:55:42 | oobabooga1 | false | null | 0 | jbrqxsw | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbrqxsw/ | false | 11 |
t1_jbrh3e5 | Here is the text for the .json file I used. The example dialogue comes from the same preprompt Bing Chat uses, after seeing someone else get good results using a similar approach:
{
"char_name": "LLaMA-Precise",
"char_persona": "LLaMA-Precise is a helpful AI chatbot that always provides useful and ... | 10 | 0 | 2023-03-11T03:24:22 | Technical_Leather949 | false | null | 0 | jbrh3e5 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbrh3e5/ | false | 10 |
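The JSON above is truncated. For reference, a character file of this shape for text-generation-webui/TavernAI typically carries `char_name`, `char_persona`, `char_greeting`, `world_scenario`, and `example_dialogue` fields. This is a hypothetical minimal sketch (all values beyond the visible ones are invented), not the poster's actual file:

```json
{
  "char_name": "LLaMA-Precise",
  "char_persona": "LLaMA-Precise is a helpful AI chatbot that always provides useful and accurate answers to the user's questions.",
  "char_greeting": "Hello! How can I help you today?",
  "world_scenario": "",
  "example_dialogue": "You: What is the capital of France?\nLLaMA-Precise: The capital of France is Paris."
}
```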
t1_jbrfliw | Can you share the background description of a bot? | 2 | 0 | 2023-03-11T03:11:21 | polawiaczperel | false | null | 0 | jbrfliw | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbrfliw/ | false | 2 |