name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_jcwxv2t
This is caused by the recent update to GPTQ-for-LLaMA. See [the reply](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/comment/jcwhu96/?utm_source=share&utm_medium=web2x&context=3) under the pinned comment for more details. To fix it, do the following: cd repositories/GPTQ-for-LLaMa git reset --...
2
0
2023-03-20T04:47:05
Technical_Leather949
false
null
0
jcwxv2t
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcwxv2t/
false
2
t1_jcwq0dw
I see this error when I try to run 4bit, any ideas: python server.py --load-in-4bit --model llama-7b-hf Warning: --load-in-4bit is deprecated and will be removed. Use --gptq-bits 4 instead. Loading llama-7b-hf... Traceback (most recent call last): File "/home/projects/text-generation-webui/serv...
1
0
2023-03-20T03:29:07
lanky_cowriter
false
null
0
jcwq0dw
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcwq0dw/
false
1
t1_jcwhu96
**Edit:** The latest webUI update has incorporated the GPTQ-for-LLaMA changes. The instructions below are no longer needed and the guide has been updated with the most recent information. **~~New Update:~~** ~~For 4-bit usage, a recent update to GPTQ-for-LLaMA has made it necessary to change to a previous commit when...
2
0
2023-03-20T02:19:40
Technical_Leather949
false
2023-03-26T06:55:57
0
jcwhu96
true
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcwhu96/
false
2
t1_jcwcyt2
These examples are from the extension page and the first two are NeverEndingDream. I don't know what the third pic model is but it's probably Chilloutmix on Civitai
2
0
2023-03-20T01:41:06
Civil_Collection7267
false
null
0
jcwcyt2
false
/r/LocalLLaMA/comments/11w2mte/stable_diffusion_api_now_integrated_in_the_web_ui/jcwcyt2/
false
2
t1_jcw9wzw
Damn, which model are you using? Is this just SD 1.5? If so, that's some impressive prompt capability!
2
0
2023-03-20T01:17:20
Evoke_App
false
null
0
jcw9wzw
false
/r/LocalLLaMA/comments/11w2mte/stable_diffusion_api_now_integrated_in_the_web_ui/jcw9wzw/
false
2
t1_jcw8gs6
For some reason the extension is bugged for me. Edit: Nvm! There are some bugs with the extension, and you need to manually change some settings to get it working.
3
0
2023-03-20T01:06:02
WarProfessional3278
false
2023-03-20T01:29:30
0
jcw8gs6
false
/r/LocalLLaMA/comments/11w2mte/stable_diffusion_api_now_integrated_in_the_web_ui/jcw8gs6/
false
3
t1_jcw6u7e
I think it depends on what it's trying to write based on params; it can be 1.5-5.0 words/sec on my 3090 with llama-30b.
1
0
2023-03-20T00:53:18
EveningFunction
false
null
0
jcw6u7e
false
/r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jcw6u7e/
false
1
t1_jcw42us
whisper.cpp runs pretty well on my MacBook with the base model on CPU.
2
0
2023-03-20T00:32:04
Tasty-Attitude-7893
false
null
0
jcw42us
false
/r/LocalLLaMA/comments/11v1dbu/using_whisper_llama13b_and_sd_at_the_same_time_on/jcw42us/
false
2
t1_jcw3zn9
With the 4-bit I can pull this off. With the 8-bit, it's a tight squeeze for 7B or GPT-J 6B. I now have 7B running on my phone in Termux, as is the new hotness.
2
0
2023-03-20T00:31:23
Tasty-Attitude-7893
false
null
0
jcw3zn9
false
/r/LocalLLaMA/comments/11v1dbu/using_whisper_llama13b_and_sd_at_the_same_time_on/jcw3zn9/
false
2
t1_jcw2esi
From the [extension page](https://github.com/oobabooga/text-generation-webui/wiki/Extensions): Lets the bot answer you with a picture! Load it in the --cai-chat mode with --extension sd_api_pictures alongside send_pictures (it's not really required, but completes the picture). If enabled, the image generation is...
9
0
2023-03-20T00:19:27
Civil_Collection7267
false
null
0
jcw2esi
false
/r/LocalLLaMA/comments/11w2mte/stable_diffusion_api_now_integrated_in_the_web_ui/jcw2esi/
false
9
t1_jcvwrez
>I was not able to get the winget command to work For Git, there is an installer on their website [here](https://git-scm.com/download/win). You can also follow [this documentation](https://learn.microsoft.com/en-us/windows/package-manager/winget/) for winget if you prefer to install Git through that. >Any ideas...
2
0
2023-03-19T23:37:22
Technical_Leather949
false
null
0
jcvwrez
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcvwrez/
false
2
t1_jcveyap
[removed]
1
0
2023-03-19T21:21:28
[deleted]
true
null
0
jcveyap
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcveyap/
false
1
t1_jcv8l3s
Thanks for this guide! Installing the 4-bit. I was not able to get the winget command to work, it's not installed for me. I substituted "conda install git" and that worked fine. Now, running into an issue at "python setup_cuda.py install" File "C:\Users\User\miniconda3\envs\textgen\lib\site-packages\torch\utils...
2
0
2023-03-19T20:37:52
pmjm
false
null
0
jcv8l3s
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcv8l3s/
false
2
t1_jcv5dq3
Thanks, I've fixed the typo
1
0
2023-03-19T20:14:59
Technical_Leather949
false
null
0
jcv5dq3
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcv5dq3/
false
1
t1_jcu7vdi
I moved over to an Ubuntu WSL install. The pinned post has been updated with instructions. It’s faster for some reason also.
1
0
2023-03-19T16:33:45
antialtinian
false
null
0
jcu7vdi
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcu7vdi/
false
1
t1_jcu3oxo
I am seeing the same error. I got 7B 4-bit to work, but as soon as I changed to load in 8-bit I get the same error as you do.
1
0
2023-03-19T16:04:17
wsxedcrf
false
null
0
jcu3oxo
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcu3oxo/
false
1
t1_jctxggr
To quantize the model
1
0
2023-03-19T15:20:05
zxyzyxz
false
null
0
jctxggr
false
/r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jctxggr/
false
1
t1_jctna46
I really look forward to the Whisper extension eventually having a way to send a prompt without keyboard/mouse interaction.
1
0
2023-03-19T14:04:47
harrylettuce
false
null
0
jctna46
false
/r/LocalLLaMA/comments/11v1dbu/using_whisper_llama13b_and_sd_at_the_same_time_on/jctna46/
false
1
t1_jcti7l1
It works with my pre-installed CUDA 11.8 + cuDNN 8.x for WSL-Ubuntu. I just followed the installation guide for WSL on the NVIDIA website. There is a .deb file to download and install as a local APT repo, and you can point LD_LIBRARY_PATH at the exact CUDA version path in /usr/local/. p.s. The cuda setup guid...
1
0
2023-03-19T13:21:27
jeffleung711
false
null
0
jcti7l1
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcti7l1/
false
1
t1_jctf22x
Now automate prompt transfer :) By the way... once language models and vision models get good enough to detect and describe "image quality" in detail, it should be pretty simple to set up self-supervised finetuning.
5
0
2023-03-19T12:52:18
BalorNG
false
null
0
jctf22x
false
/r/LocalLLaMA/comments/11v1dbu/using_whisper_llama13b_and_sd_at_the_same_time_on/jctf22x/
false
5
t1_jcte8q2
[removed]
1
0
2023-03-19T12:44:10
[deleted]
true
null
0
jcte8q2
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcte8q2/
false
1
t1_jcte82d
You can test it out here for free, in chat format https://alpaca.point.space/
1
0
2023-03-19T12:43:59
starstruckmon
false
null
0
jcte82d
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcte82d/
false
1
t1_jctdkgg
The line: sudo dpkg -i cuda-repo-wsl-ubuntu-11-7-local_11.7.0-1_amd64.debsudo cp /var/cuda-repo-wsl-ubuntu-11-7-local/cuda-*-keyring.gpg /usr/share/keyrings/ needs to be split into two lines as: sudo dpkg -i cuda-repo-wsl-ubuntu-11-7-local_11.7.0-1_amd64.deb followed by sudo cp /var/cuda-repo-wsl-ubuntu-11-7-local/cuda-*...
1
0
2023-03-19T12:37:22
danihend
false
null
0
jctdkgg
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jctdkgg/
false
1
t1_jctd95w
[deleted]
1
0
2023-03-19T12:34:06
[deleted]
true
null
0
jctd95w
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jctd95w/
false
1
t1_jct8vko
[deleted]
1
0
2023-03-19T11:46:18
[deleted]
true
null
0
jct8vko
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jct8vko/
false
1
t1_jct31q0
Question answered, so removed. If you have further questions, please post in the pinned thread. Thanks.
1
0
2023-03-19T10:29:58
Civil_Collection7267
false
null
0
jct31q0
false
/r/LocalLLaMA/comments/11ves42/got_the_llama_ccp_ggml_4bit_weights_can_i_get/jct31q0/
true
1
t1_jct2ot0
1. Are you sure you have the Build Tools for 2019 installed properly with "Desktop development with C++"? 2. Are you sure you're using x64 Native Tools Command Prompt for VS 2019 and running in admin mode when entering the command? Check this carefully. If you are and you're still having that error, try adding cl to ...
2
0
2023-03-19T10:24:45
Technical_Leather949
false
null
0
jct2ot0
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jct2ot0/
false
2
t1_jct1o0t
[removed]
1
0
2023-03-19T10:10:05
[deleted]
true
null
0
jct1o0t
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jct1o0t/
false
1
t1_jct0ff3
[deleted]
1
0
2023-03-19T09:52:28
[deleted]
true
null
0
jct0ff3
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jct0ff3/
false
1
t1_jct05rm
>I have the llama ggml weights I think there is a misunderstanding here. Can you post exactly what you have and how you got it? If you're saying that you have a model named "ggml-model-q4_0.bin" then it's already been converted and quantized. All you need to do is run that with llama.cpp using the steps provided on...
1
0
2023-03-19T09:48:42
Technical_Leather949
false
null
0
jct05rm
false
/r/LocalLLaMA/comments/11ves42/got_the_llama_ccp_ggml_4bit_weights_can_i_get/jct05rm/
false
1
t1_jcsyjas
Yes that's correct, I have the llama ggml weights and I'll need the appropriate model that goes with it. There are torrents for these that are shared around here and other places, so I'd rather not go through facebook. My question is more around which link should I use to get the correct model that's compatible with my...
1
0
2023-03-19T09:25:15
Virtamancer
false
null
0
jcsyjas
false
/r/LocalLLaMA/comments/11ves42/got_the_llama_ccp_ggml_4bit_weights_can_i_get/jcsyjas/
false
1
t1_jcsxryg
>Regarding a working environment, my order of preference is: > >linux KVM guest on my linux host > >if that won't work with this for some reason, then i would go with a windows kvm guest on the linux host > >in the worst case scenario, i'd use my actual windows install I use Linux/Wind...
1
0
2023-03-19T09:13:53
Technical_Leather949
false
null
0
jcsxryg
false
/r/LocalLLaMA/comments/11ves42/got_the_llama_ccp_ggml_4bit_weights_can_i_get/jcsxryg/
false
1
t1_jcsw0a3
>I downloaded Nvidia cuda toolkit 12.1 I think you may have skipped over some steps then. For the native Windows 4-bit install, the instructions are to run: conda install cuda -c nvidia/label/cuda-11.3.0 -c nvidia/label/cuda-11.3.1 Try removing the conda environment with: conda env remove -n textgen ...
3
0
2023-03-19T08:48:15
Technical_Leather949
false
null
0
jcsw0a3
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcsw0a3/
false
3
t1_jcsultd
Windows 4-bit. Can't right now as I'm not at home. But maybe it has something to do with CUDA? I downloaded the Nvidia CUDA toolkit 12.1 and yet it still says CUDA is not installed.
1
0
2023-03-19T08:28:24
Necessary_Ad_9800
false
null
0
jcsultd
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcsultd/
false
1
t1_jcsru79
Are you using Windows or Linux? Which process did you follow: 8-bit or 4-bit? Can you also share the full output log? All steps have been retested on fresh installs and currently work. Some people have been having problems due to a recent update, but you can take a look at [issue #400 here](https://github.com/oobaboog...
2
0
2023-03-19T07:48:12
Technical_Leather949
false
null
0
jcsru79
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcsru79/
false
2
t1_jcsbf9a
[deleted]
1
0
2023-03-19T04:22:51
[deleted]
true
null
0
jcsbf9a
false
/r/LocalLLaMA/comments/11uidst/whats_the_best_cloud_gpu_service_for_65b/jcsbf9a/
false
1
t1_jcrvl5u
30 minutes to query? I’m new 😬
1
0
2023-03-19T02:03:30
Maximum-Geologist-98
false
null
0
jcrvl5u
false
/r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jcrvl5u/
false
1
t1_jcrlxk6
I can load the web UI but it says "CUDA extension not installed" during launch, and when I try to generate output it does not work. I get something like "Output generated in 0.02 seconds (0.00 tokens/s, 0 tokens)". What am I doing wrong? Why don't I get any output?
1
0
2023-03-19T00:47:23
Necessary_Ad_9800
false
null
0
jcrlxk6
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcrlxk6/
false
1
t1_jcrds9r
[deleted]
1
0
2023-03-18T23:45:48
[deleted]
true
null
0
jcrds9r
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcrds9r/
false
1
t1_jcr9fvc
Perfect, that has fixed it. Thanks very much.
1
0
2023-03-18T23:12:26
Organic_Studio_438
false
null
0
jcr9fvc
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcr9fvc/
false
1
t1_jcr51lw
Yes, but I personally mix between the precise and creative (Storywriter) parameters depending on the topic and what I'm asking about. I recommend experimenting with using both parameters to generate from the same prompt and seeing which result you like best.
3
0
2023-03-18T22:39:12
Technical_Leather949
false
null
0
jcr51lw
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcr51lw/
false
3
t1_jcr4g64
Try starting your generation from the line below the last input line. For example: >Common sense questions and answers > > > >Question: What is a llama? > >Factual answer: > ><your cursor should be **on this line** when you generate> and not: >Common sense question...
3
0
2023-03-18T22:34:42
Technical_Leather949
false
null
0
jcr4g64
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcr4g64/
false
3
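The layout the comment above describes can be sketched as a Python string. This is a hypothetical illustration of the prompt shape (the exact template is whatever you type into the web UI): the point is that the prompt ends on a fresh line below the last input line, so generation starts there.

```python
# Hypothetical prompt illustrating the layout described above: the
# end of the prompt sits on its own line below "Factual answer:", so
# the model continues from that empty line rather than mid-line.
prompt = (
    "Common sense questions and answers\n"
    "\n"
    "Question: What is a llama?\n"
    "Factual answer:\n"
)

# The trailing newline is what puts the "cursor" on the line below.
print(prompt.endswith("Factual answer:\n"))  # → True
```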
t1_jcr3pf5
Try this: **in** text-generation-webui/modules/LoRA.py **uncomment** line 20 by removing the # sign. It should go from #params['device_map'] = {'': 0} to params['device_map'] = {'': 0} Save, restart the web UI, and try loading the LoRA again.
2
0
2023-03-18T22:29:01
Technical_Leather949
false
null
0
jcr3pf5
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcr3pf5/
false
2
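A minimal sketch of what uncommenting that line in modules/LoRA.py does, assuming the Hugging Face `device_map` convention: the empty-string key covers the whole model, and the value 0 pins it to the first CUDA device.

```python
# Sketch of the parameter dict described in the comment above. In the
# Hugging Face device_map convention, the key '' means "the entire
# model", and the value 0 places it on cuda:0 instead of letting the
# library decide (or offload to CPU).
params = {}
params['device_map'] = {'': 0}  # the line uncommented in modules/LoRA.py
```

These `params` would then be passed through to the model-loading call; whether that resolves a given LoRA-loading failure depends on the install, as the comment notes.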
t1_jcqtnp6
I have 1 small remaining issue. When generating text, for some reason LLaMA is duplicating the last character of the input phrase. Are you seeing this as well? https://i.imgur.com/WF7Kvlf.png
1
0
2023-03-18T21:12:53
antialtinian
false
null
0
jcqtnp6
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcqtnp6/
false
1
t1_jcqmykz
I can't get the Alpaca LoRA to run. I'm using Windows with a 3080 Ti. I have got LLaMA 13B working in 4-bit mode and LLaMA 7B in 8-bit without the LoRA, all on GPU. Launching the web UI with ... python server.py --model llama-7b --load-in-8bit ... works fine. Then the LoRA seems to load OK bu...
1
0
2023-03-18T20:24:49
Organic_Studio_438
false
null
0
jcqmykz
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcqmykz/
false
1
t1_jcqmbnu
Awesome
2
0
2023-03-18T20:20:15
starstruckmon
false
null
0
jcqmbnu
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcqmbnu/
false
2
t1_jcqhxlr
THANK YOU!!! I had to run export CUDA_HOME=/usr/local/cuda-11.7, presumably because I fucked with it earlier and was able to get it to compile!
2
0
2023-03-18T19:48:14
antialtinian
false
null
0
jcqhxlr
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcqhxlr/
false
2
t1_jcqdwdx
I retested all steps with a new Ubuntu environment. For 4-bit, you have to install the WSL2 specific CUDA toolkit that won't overwrite the driver, and it must be [11.7](https://developer.nvidia.com/cuda-11-7-0-download-archive?target_os=Linux&target_arch=x86_64&Distribution=WSL-Ubuntu&target_version=2.0&amp...
3
0
2023-03-18T19:20:05
Technical_Leather949
false
null
0
jcqdwdx
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcqdwdx/
false
3
t1_jcq3p5o
It's quite good, definitely better than the regular LLaMA models.
3
0
2023-03-18T18:09:25
AddendumContent6736
false
null
0
jcq3p5o
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcq3p5o/
false
3
t1_jcq2l7x
Does it remember previous prompts?
1
0
2023-03-18T18:01:59
Necessary_Ad_9800
false
null
0
jcq2l7x
false
/r/LocalLLaMA/comments/11r6mdm/you_might_not_need_the_minimum_vram/jcq2l7x/
false
1
t1_jcpyiwo
Ok, I ran conda install -c "nvidia/label/cuda-11.7.1" cuda-nvcc and manually set CUDA_HOME to /home/steph/miniconda3/envs/textgen. I now get this. python setup_cuda.py install running install /home/steph/miniconda3/envs/textgen/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecatio...
1
0
2023-03-18T17:34:29
antialtinian
false
2023-03-18T18:21:28
0
jcpyiwo
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcpyiwo/
false
1
t1_jcpxz5q
Yeah, you might have to use the full path for python executable or put it in the System's PATH variable. Good. As I said, that's the much better approach. It's not very user friendly right now. I'm not sure if that's a full replication, but I hope it is. Good luck and report back with results if possible.
3
0
2023-03-18T17:30:47
starstruckmon
false
null
0
jcpxz5q
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcpxz5q/
false
3
t1_jcpxhxo
>Can I just use sudo apt install nvidia-cuda-toolkit No, don't do this as it will overwrite the WSL2 specific driver and cause several problems. You only have to install cudatoolkit inside the conda environment, along with the other fix listed in the troubleshooting section.
2
0
2023-03-18T17:27:29
Technical_Leather949
false
null
0
jcpxhxo
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcpxhxo/
false
2
t1_jcpx4xy
Tried that and it said Python was not found, which I know to be false. I guess I'll just download the weights from here [https://huggingface.co/chavinlo/alpaca-cleaned/tree/main](https://huggingface.co/chavinlo/alpaca-cleaned/tree/main).
1
0
2023-03-18T17:25:04
AddendumContent6736
false
null
0
jcpx4xy
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcpx4xy/
false
1
t1_jcpv87i
Figures. That was a quick and dirty attempt at converting the code to a Windows batch script. It would take a bit of troubleshooting. But what it does is actually very simple. It goes through every single file in the encrypted folder and runs python3 decrypt.py <filename> "original/7B/consolidated.00.pth" "result...
2
0
2023-03-18T17:12:00
starstruckmon
false
null
0
jcpv87i
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcpv87i/
false
2
t1_jcpu9mn
When I pasted that in command prompt, it gave me an error. What am I meant to do with this?
1
0
2023-03-18T17:05:33
AddendumContent6736
false
null
0
jcpu9mn
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcpu9mn/
false
1
t1_jcptgu9
I have started over after continuing to get errors. I am now to the point where 8bit works and I need to compile for 4bit. nvcc is not currently installed in my WSL Ubuntu instance. Can I just use sudo apt install nvidia-cuda-toolkit, or do I need something specific? I also plan to run sudo apt install build-essent...
1
0
2023-03-18T17:00:10
antialtinian
false
null
0
jcptgu9
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcptgu9/
false
1
t1_jcprp28
It won't work. It's a bash script. Try this for %%f in (encrypted\*) do ( if %%~xf == "" ( python decrypt.py "encrypted\%%f" "original\7B\consolidated.00.pth" "result\" ) )
2
0
2023-03-18T16:48:51
starstruckmon
false
2023-03-18T16:56:54
0
jcprp28
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcprp28/
false
2
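The loop quoted above (run decrypt.py once per file in the encrypted folder, against the original weights, into a result folder) can be sketched in Python, which sidesteps bash-vs-batch quoting issues. This is a hedged illustration: `decrypt.py` and the paths come from the point-alpaca repo as quoted, everything else here is an assumption, and the quoted batch version additionally filters to extensionless files.

```python
import os

def build_decrypt_commands(encrypted_dir="encrypted",
                           original="original/7B/consolidated.00.pth",
                           result_dir="result/"):
    """Build one decrypt.py invocation per file in the encrypted
    folder, mirroring the shell loop quoted in the comment above."""
    commands = []
    for name in sorted(os.listdir(encrypted_dir)):
        path = os.path.join(encrypted_dir, name)
        if os.path.isfile(path):
            commands.append(
                ["python", "decrypt.py", path, original, result_dir])
    return commands
```

Each command list could then be executed with `subprocess.run(cmd, check=True)` so a single failing file stops the run.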
t1_jcppzve
Are you using a cloud service or local?
1
0
2023-03-18T16:37:23
zxyzyxz
false
null
0
jcppzve
false
/r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jcppzve/
false
1
t1_jcppwtd
I've got the files downloading now, so all I have to do next is hope that the last command to decrypt the files works on Windows. If not, I might just put Linux Mint on a flash drive and run it on that, cause I really want to try this out.
1
0
2023-03-18T16:36:49
AddendumContent6736
false
null
0
jcppwtd
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcppwtd/
false
1
t1_jcppfok
I'm not calling you dumb. As I said, I'm not trying to be rude, but this would require a lot of hand-holding. Yes, you can install it on Windows. But you actually have to install it first. I can't give you a step by step on how to install it since I haven't even done it myself.
3
0
2023-03-18T16:33:36
starstruckmon
false
null
0
jcppfok
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcppfok/
false
3
t1_jcpoz0v
I know you said it was a Linux command, but you said that you can install it for Windows, too. I ain't gonna wait, this is the first time I've actually asked for help with stuff like this and I'd like to learn a bit. I didn't want to ask for help before because I knew people would just tell me I'm dumb a...
1
0
2023-03-18T16:30:22
AddendumContent6736
false
null
0
jcpoz0v
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcpoz0v/
false
1
t1_jcpmych
As I said, it's a Linux program. I think there's a Windows port too, but I can't help you with that (no experience). I don't want to sound rude (I'm not trying to be, just being honest), but if you were unable to figure this out even after my comment, you're probably in over your head at the moment. Sit back for a ...
3
0
2023-03-18T16:16:25
starstruckmon
false
null
0
jcpmych
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcpmych/
false
3
t1_jcpme26
Which errors are you getting? I can try to help. WSL installation is working smoothly for me after removing the conda environment and starting over.
2
0
2023-03-18T16:12:21
Technical_Leather949
false
null
0
jcpme26
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcpme26/
false
2
t1_jcplyj0
It fixed my 8bit issues. Now I'm working on getting 4bit going. It's my first time building it in WSL Ubuntu and I'm getting errors.
1
0
2023-03-18T16:09:20
antialtinian
false
null
0
jcplyj0
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcplyj0/
false
1
t1_jcplxkn
The solution has been posted [here](https://github.com/oobabooga/text-generation-webui/issues/400#issuecomment-1474876859). Installation is working correctly now.
3
0
2023-03-18T16:09:09
Technical_Leather949
false
null
0
jcplxkn
false
/r/LocalLLaMA/comments/11uqyk1/new_installs_are_currently_broken_but_oobagooba/jcplxkn/
false
3
t1_jcpl6fz
I'm getting about 5 tokens/sec on a VM with 2xA40.
1
0
2023-03-18T16:03:49
Gudeldar
false
null
0
jcpl6fz
false
/r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jcpl6fz/
false
1
t1_jcpk7fk
The solution has been posted [here](https://github.com/oobabooga/text-generation-webui/issues/400#issuecomment-1474876859). Can you confirm if this solves your 4bit issues?
3
0
2023-03-18T15:57:01
Technical_Leather949
false
null
0
jcpk7fk
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcpk7fk/
false
3
t1_jcpjm1i
It's Windows Subsystem for Linux. Here's a small guide for installing it: https://github.com/oobabooga/text-generation-webui/wiki/Windows-Subsystem-for-Linux-(Ubuntu)-Installation-Guide There's a mistake in that doc, as version 2 is supported on Windows 10 version 21H2 or later (right click start, system to confirm) ...
4
0
2023-03-18T15:53:15
harrylettuce
false
null
0
jcpjm1i
false
/r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/jcpjm1i/
false
4
t1_jcpibqe
'wget' is not recognized as an internal or external command, operable program or batch file. I'm using Windows 10. Edit: I'm just downloading them manually.
1
0
2023-03-18T15:45:05
AddendumContent6736
false
2023-03-18T16:14:03
0
jcpibqe
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcpibqe/
false
1
t1_jcphh79
[removed]
1
0
2023-03-18T15:39:16
[deleted]
true
null
0
jcphh79
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcphh79/
false
1
t1_jcphg9s
I just ran the wget and it's downloading the diffs as I write this. So I can at least say the files are still up. If worst came to worst you could just open the filelist.txt in a text editor and manually load the urls in a web browser to download them into the point-alpaca/encrypted folder. Tedious, but better than not...
2
0
2023-03-18T15:39:05
toothpastespiders
false
null
0
jcphg9s
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcphg9s/
false
2
t1_jcph5ll
If you're still having trouble with that and you don't mind using the LoRA, you can setup Alpaca-LoRA with the web UI: 1. Navigate to the text-generation-webui folder 2. Ensure it's up to date with: git pull https://github.com/oobabooga/text-generation-webui 3. Re-install the requirements if needed: pip in...
2
0
2023-03-18T15:37:01
Technical_Leather949
false
null
0
jcph5ll
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcph5ll/
false
2
t1_jcpegu7
I tried both using command prompt while in the folder and downloading it manually and neither worked.
1
0
2023-03-18T15:18:49
AddendumContent6736
false
null
0
jcpegu7
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcpegu7/
false
1
t1_jcpcyfr
Something is broken right now :( I had a working 4bit install and broke it yesterday by updating to the newest version. The good news is oobabooga is looking into it: https://github.com/oobabooga/text-generation-webui/issues/400
3
0
2023-03-18T15:08:16
antialtinian
false
null
0
jcpcyfr
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcpcyfr/
false
3
t1_jcpb3nw
That's a Linux command. I think you can install it for Windows too. All it does is download the list of files from the URLs in the file filelist.txt (from the repo) and put them in the folder on your system called encrypted. You could do this manually too.
2
0
2023-03-18T14:55:09
starstruckmon
false
null
0
jcpb3nw
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcpb3nw/
false
2
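As the comment above explains, `wget -P encrypted/ -i filelist.txt` just fetches every URL listed in filelist.txt into the encrypted folder, which is easy to reproduce without wget on Windows. A minimal sketch using only the Python standard library; the filenames `filelist.txt` and `encrypted` are the repo's, the helper names are made up here.

```python
import os
import urllib.request

def read_url_list(text):
    """Parse a wget -i style list: one URL per line, blanks ignored."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def download_all(filelist="filelist.txt", dest="encrypted"):
    """Rough equivalent of `wget -P encrypted/ -i filelist.txt`."""
    os.makedirs(dest, exist_ok=True)
    with open(filelist) as fh:
        for url in read_url_list(fh.read()):
            # Save under the last path component of the URL, as wget does.
            target = os.path.join(dest, url.rsplit("/", 1)[-1])
            urllib.request.urlretrieve(url, target)
```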
t1_jcp9hmj
Alright, I am now stuck on step 2, how do I use this command? wget -P encrypted/ -i filelist.txt
1
0
2023-03-18T14:43:42
AddendumContent6736
false
null
0
jcp9hmj
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcp9hmj/
false
1
t1_jcp8n2y
[removed]
1
0
2023-03-18T14:37:39
[deleted]
true
null
0
jcp8n2y
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcp8n2y/
false
1
t1_jcp8m0i
[deleted]
1
0
2023-03-18T14:37:26
[deleted]
true
null
0
jcp8m0i
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcp8m0i/
false
1
t1_jcp81lh
Just follow the instructions on the GitHub under "How to distill the weights", then copy the new weights from the result folder to where you currently have the original model.
2
0
2023-03-18T14:33:16
starstruckmon
false
null
0
jcp81lh
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcp81lh/
false
2
t1_jcp7kpw
I've already got the original LLaMA models working on my 3090, I just wanted to know how to get this new Alpaca model.
1
0
2023-03-18T14:29:57
AddendumContent6736
false
null
0
jcp7kpw
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcp7kpw/
false
1
t1_jcp7bqi
I've already got regular LLaMA 7B and 13B working, just wanted to know how to get this Alpaca model.
2
0
2023-03-18T14:28:03
AddendumContent6736
false
null
0
jcp7bqi
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcp7bqi/
false
2
t1_jcp6270
Thanks for adding the necessary additional info. I'll check that guide out too.
1
0
2023-03-18T14:18:34
starstruckmon
false
null
0
jcp6270
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcp6270/
false
1
t1_jcp5q83
fyi to u/[AddendumContent6736](https://www.reddit.com/user/AddendumContent6736/) the rentry guide hasn't been updated in a while and is missing some key steps like updating the tokenizer_config.json for those models. oobabooga recommends wsl as the windows installation method too. the reddit guide pinned in this sub i...
3
0
2023-03-18T14:16:07
Civil_Collection7267
false
null
0
jcp5q83
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcp5q83/
false
3
t1_jcp4k81
This, in particular, is not at a level yet where a beginner can use it (it has not been quantized for cheaper hardware). But you can follow this to run the original Llama model or the Llama model with the Alpaca Lora https://rentry.org/llama-tard-v2
2
0
2023-03-18T14:07:21
starstruckmon
false
null
0
jcp4k81
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcp4k81/
false
2
t1_jcp3i1v
Hello, I'm new to all this, can you explain what WSL is?
2
0
2023-03-18T13:59:03
Sampson_shits
false
null
0
jcp3i1v
false
/r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/jcp3i1v/
false
2
t1_jcp2v2c
To answer your first question, I'm pretty sure it does. While I don't personally know anyone running it on 2 3090s, I've heard stories of similar networks being run that way and, in theory, this type of workload is often distributed across a 4 or 8-GPU server when run on an organizational scale. As far as speed goes...
7
0
2023-03-18T13:54:00
AI-Pon3
false
null
0
jcp2v2c
false
/r/LocalLLaMA/comments/11uidst/whats_the_best_cloud_gpu_service_for_65b/jcp2v2c/
false
7
t1_jcp1awc
Can someone make an easy tutorial on how to use this? Kinda new here.
1
0
2023-03-18T13:41:23
AddendumContent6736
false
null
0
jcp1awc
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcp1awc/
false
1
t1_jcow8w0
Even if they're real, they don't seem to be a full replication, by their own admission. Only 1 epoch of training instead of 3. I'm going to need them to provide a little more information before I try running those, especially with people pointing out the size difference issue.
2
0
2023-03-18T12:57:20
starstruckmon
false
null
0
jcow8w0
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcow8w0/
false
2
t1_jcotvsu
[removed]
1
0
2023-03-18T12:34:53
[deleted]
true
null
0
jcotvsu
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcotvsu/
false
1
t1_jcotv00
there's apparently 13b alpaca too but I don't know if these are real: [https://huggingface.co/elinas/alpaca-13b-int4/tree/main](https://huggingface.co/elinas/alpaca-13b-int4/tree/main) [https://huggingface.co/nealchandra/alpaca-13b-hf-int4/tree/main](https://huggingface.co/nealchandra/alpaca-13b-hf-int4/tree/main)
2
0
2023-03-18T12:34:40
Civil_Collection7267
false
null
0
jcotv00
false
/r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/jcotv00/
false
2
t1_jcokk9b
Does LLaMA work with multiple GPUs? Also, do you know how fast it is, ie how many words per second? Any prompts that would make it work like ChatGPT, as well? Thanks in advance.
2
0
2023-03-18T10:48:40
zxyzyxz
false
null
0
jcokk9b
false
/r/LocalLLaMA/comments/11uidst/whats_the_best_cloud_gpu_service_for_65b/jcokk9b/
false
2
t1_jcokfzg
AFAIK vast.ai tends to have pretty competitive pricing for this sort of thing, especially since they're crowd-sourced rather than running on just datacenter GPUs. As far as running it on your own setup, you need 40 GB of VRAM and 128GB between RAM and swap to use the 4-bit version. So 2x3090 would likely be the cheap...
4
0
2023-03-18T10:47:01
AI-Pon3
false
null
0
jcokfzg
false
/r/LocalLLaMA/comments/11uidst/whats_the_best_cloud_gpu_service_for_65b/jcokfzg/
false
4
t1_jcok4p1
Thanks a lot! Are temp 0.7, rep penalty 1.176..., top_k 40 and top_p 0.1 still valid here? Thanks!
3
0
2023-03-18T10:42:38
dangernoodle01
false
null
0
jcok4p1
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jcok4p1/
false
3
t1_jcoa294
I did run 65B on my PC a few days ago (Intel 12600, 64GB DDR4, Fedora 37, 2TB NVMe SSD). It was quite slow, around 1000-1400 ms per token, but it runs without problems. RAM usage is around 40-47GB. No problem quantizing the original model either; it completed within a few minutes without running out of memory.
3
0
2023-03-18T08:16:19
Special_Freedom_8069
false
null
0
jcoa294
false
/r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jcoa294/
false
3
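As a sanity check on the timing in the comment above, 1000-1400 ms per token works out to roughly 0.7-1.0 tokens per second:

```python
def tokens_per_second(ms_per_token):
    """Convert per-token latency in milliseconds to tokens/sec."""
    return 1000.0 / ms_per_token

print(round(tokens_per_second(1000), 2))  # → 1.0
print(round(tokens_per_second(1400), 2))  # → 0.71
```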
t1_jco4jmy
Thanks, it worked great, didn't know it was for each sub file.
1
0
2023-03-18T06:56:16
zxyzyxz
false
null
0
jco4jmy
false
/r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jco4jmy/
false
1
t1_jco4h69
1 word per second 🙁. How well does it run on GPU? And how can I try it on GPU?
3
0
2023-03-18T06:55:20
zxyzyxz
false
null
0
jco4h69
false
/r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jco4h69/
false
3
t1_jco3los
What is the token rate with a CPU, though?
1
0
2023-03-18T06:43:22
EveningFunction
false
null
0
jco3los
false
/r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jco3los/
false
1
t1_jco3a3e
It says I should be able to run 7B LLaMA on an RTX 3050, but it keeps giving me out of memory for CUDA. I followed the instructions, and everything compiled fine. Any advice to help this run? 13B seems to use less RAM than 7B when it reports this. I found that strange. Thank you in advance! https:/...
1
0
2023-03-18T06:38:59
SlavaSobov
false
null
0
jco3a3e
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jco3a3e/
false
1
t1_jco32pk
Update, it worked fine. It took maybe 30 minutes or so, not bad at all.
2
0
2023-03-18T06:36:16
zxyzyxz
false
null
0
jco32pk
false
/r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/jco32pk/
false
2