name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_jdrhi66 | >Does anyone know how to do CPU/GPU offloading for text-generation-webui?
It's supported in the web UI with:
python server.py --auto-devices
See the [wiki page here](https://github.com/oobabooga/text-generation-webui/wiki/Low-VRAM-guide) for more information. | 1 | 0 | 2023-03-26T16:49:46 | Technical_Leather949 | false | null | 0 | jdrhi66 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdrhi66/ | false | 1 |
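The `--auto-devices` flag lets the web UI split a model between GPU VRAM and system RAM. A minimal sketch of that kind of split using the Hugging Face `accelerate` integration — the model path and memory caps are illustrative, not the web UI's actual internals:

```python
# Hedged sketch of GPU/CPU offloading via device_map; path and caps are illustrative.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "models/llama-7b-hf",                      # illustrative local path
    device_map="auto",                         # let accelerate split layers across GPU and CPU
    max_memory={0: "6GiB", "cpu": "24GiB"},    # cap GPU 0, spill the rest to RAM
)
```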
t1_jdrcbxn | ...So is there a pre-built Windows binary for llama.cpp/alpaca.cpp that is recommended to use with alpaca-30b-4bit?
Would love a link, if anyone has one. | 7 | 0 | 2023-03-26T16:13:02 | c4r_guy | false | null | 0 | jdrcbxn | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdrcbxn/ | false | 7 |
t1_jdr9okg | This could be a good idea when things stabilise a bit. Imho, right now it feels like there is too much change coming too quickly to keep a megathread up to date. | 1 | 0 | 2023-03-26T15:54:12 | cheesetor | false | null | 0 | jdr9okg | false | /r/LocalLLaMA/comments/121yfxl/can_we_create_a_megathread_for_cataloging_all_the/jdr9okg/ | false | 1 |
t1_jdr4rvb | Interesting. This was the one you used?
[https://colab.research.google.com/drive/1rqWABmz2ZfolJOdoy6TRc6YI7d128cQO](https://colab.research.google.com/drive/1rqWABmz2ZfolJOdoy6TRc6YI7d128cQO)
Any reason to not do a local install of the python that tokenizes the .txt documents? Wanted to feed it several long docum... | 1 | 0 | 2023-03-26T15:18:59 | dave9199 | false | null | 0 | jdr4rvb | false | /r/LocalLLaMA/comments/121y16f/starting_model/jdr4rvb/ | false | 1 |
t1_jdr1s24 | I'm running ggml-alpaca-7b-q4.bin, does q4 mean 4 bits? | 1 | 0 | 2023-03-26T14:56:50 | Independent-Ant-4678 | false | null | 0 | jdr1s24 | false | /r/LocalLLaMA/comments/120eu0u/would_it_be_better_for_me_to_run_it_on_gpu_or_cpu/jdr1s24/ | false | 1 |
t1_jdqq8hd | Thank you!!
I was looking for a simple way to do this and it started working immediately.
My only mistake was I assumed for some reason that I needed to rename my llama files that I already downloaded, but then I saw you linked the other files to download. I'm a dumb dumb.
It's running quite fast on my 5950x 32 gigs r... | 1 | 0 | 2023-03-26T13:23:20 | ThePseudoMcCoy | false | 2023-03-26T14:54:02 | 0 | jdqq8hd | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdqq8hd/ | false | 1 |
t1_jdqp6gb | Can you tell me if it can write in a different language (French or Spanish, for example)? I am waiting for a new computer to try it and I cannot test it myself. GPT is almost perfect in multiple languages | 2 | 0 | 2023-03-26T13:13:48 | Mooblegum | false | null | 0 | jdqp6gb | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdqp6gb/ | false | 2 |
t1_jdqmqjt | you won't be able to use text-generation-webui, but you can use llama.cpp here: [https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)
The GitHub page explains everything and has full documentation for setting it up. If you have more questions on llama.cpp, you can open an issue [here](ht... | 1 | 0 | 2023-03-26T12:50:46 | Civil_Collection7267 | false | null | 0 | jdqmqjt | false | /r/LocalLLaMA/comments/122kwnx/can_i_run_the_llama_model_locally_with_an_apu_and/jdqmqjt/ | true | 1 |
t1_jdqm9li | Guys... HOW did you get it running with 4bit on a Tesla M40? I'm struggling to get it working on Ubuntu... can anyone help me out here please?
I always get this error running 8bit:
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0 | 2 | 0 | 2023-03-26T12:46:03 | MageLD | false | 2023-03-26T13:09:08 | 0 | jdqm9li | false | /r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdqm9li/ | false | 2 |
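For context on the error above: it surfaces at the sampling step when the model's output probabilities contain non-finite values. A minimal reproduction — the fp16-on-older-cards attribution is a common hypothesis in these threads, not a confirmed diagnosis:

```python
# Hedged sketch of where the error originates: sampling from a probability
# tensor poisoned by a nan (often blamed on fp16 issues on older cards like
# the M40, or on a broken quantized checkpoint).
import torch

logits = torch.tensor([2.0, float("nan"), 1.0])  # a nan sneaks into the logits
probs = torch.softmax(logits, dim=-1)            # the nan propagates through softmax
try:
    torch.multinomial(probs, num_samples=1)      # the sampling step that raises
except RuntimeError as e:
    print(e)  # a RuntimeError like the message quoted above
```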
t1_jdqm0yg | Guess it depends on the GPU model... I'm struggling to get it running with 8/4 bit on a Tesla M40 24GB | 1 | 0 | 2023-03-26T12:43:40 | MageLD | false | null | 0 | jdqm0yg | false | /r/LocalLLaMA/comments/120etwq/whats_the_fastest_implementation_for_a_24gb_gpu/jdqm0yg/ | false | 1 |
t1_jdql6yr | That's because they're trying to get us off their crowded servers! /S | 6 | 0 | 2023-03-26T12:35:22 | ThePseudoMcCoy | false | null | 0 | jdql6yr | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdql6yr/ | false | 6 |
t1_jdql3yx | When trying to run in 8bit mode I get the following error:
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
When I try the "oneclick" Linux installer it works only with the 7b version; 13b crashes after some tries.
No, I didn't try that, since I'm running it on Linux/Ubuntu | 1 | 0 | 2023-03-26T12:34:33 | MageLD | false | null | 0 | jdql3yx | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdql3yx/ | false | 1 |
t1_jdqkuov | It's because ChatGPT doesn't work in more than half of the countries in the world | 2 | 0 | 2023-03-26T12:31:55 | LienniTa | false | null | 0 | jdqkuov | false | /r/LocalLLaMA/comments/122c2sv/with_the_latest_web_ui_update_old_4bit_weights/jdqkuov/ | false | 2 |
t1_jdqi6po | Fuck, 7b is already quite impressive. Is this 8bit? | 1 | 0 | 2023-03-26T12:02:53 | TheCastleReddit | false | null | 0 | jdqi6po | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdqi6po/ | false | 1 |
t1_jdqc32h | I don't know, but internally the JSON is just for the interface; it's translated to prompts. You could just pass the prompts in notebook mode and it would generate the conversation | 1 | 0 | 2023-03-26T10:46:24 | psycholustmord | false | null | 0 | jdqc32h | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdqc32h/ | false | 1 |
t1_jdqaw7a | >Do you know what libraries you need to install to run llama.cpp if you already have a quantized model?
Looks like the release version was compiled with Visual Studio and so needs the [Visual Studio runtime libraries](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).
Testing... | 4 | 0 | 2023-03-26T10:29:20 | MoneyPowerNexis | false | 2023-03-27T08:00:43 | 0 | jdqaw7a | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdqaw7a/ | false | 4 |
t1_jdq93mp | I've been using it, unfortunately it crashes whenever my prompt is too long (longer than 2 lines gives me a crash most of the time) | 4 | 0 | 2023-03-26T10:02:52 | AbaDagan112 | false | null | 0 | jdq93mp | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdq93mp/ | false | 4 |
t1_jdq6aid | Yeah this would be nice.
If someone suggests a good project, a subscribed redditor can tag a mod in that comment. | 2 | 0 | 2023-03-26T09:21:23 | utkvishwas | false | null | 0 | jdq6aid | false | /r/LocalLLaMA/comments/121yfxl/can_we_create_a_megathread_for_cataloging_all_the/jdq6aid/ | false | 2 |
t1_jdq62qx | A discord server would be nice. | 2 | 0 | 2023-03-26T09:18:08 | divine-ape-swine | false | null | 0 | jdq62qx | false | /r/LocalLLaMA/comments/121yfxl/can_we_create_a_megathread_for_cataloging_all_the/jdq62qx/ | false | 2 |
t1_jdq21rg | Problem is I don't have swap enabled, and limited storage available in my Linux installation. Not sure if I can create a swapfile on my NTFS disk, though | 1 | 0 | 2023-03-26T08:18:51 | kozer1986 | false | null | 0 | jdq21rg | false | /r/LocalLLaMA/comments/121redn/question_on_llamacpp_and_webui_ram_usage/jdq21rg/ | false | 1 |
t1_jdq0pzd | >Is a rebuild or reinstall of GPTQ required
Updating the GPTQ repo along with a reinstall. I've just finished testing the 7B and 30B safetensors and everything is working correctly.
About the torrent, I find it interesting too. It seems the worldwide demand for locally run LLMs is not tepid, even when LLaMA is pr... | 3 | 0 | 2023-03-26T07:59:23 | Technical_Leather949 | false | null | 0 | jdq0pzd | false | /r/LocalLLaMA/comments/122c2sv/with_the_latest_web_ui_update_old_4bit_weights/jdq0pzd/ | false | 3 |
t1_jdq0d30 | I’m wondering if it’s possible to have multiple JSON files that you can have the chat query depending on the question asked.
Eg, it has a summarisation of each json file as a prefix and when you type something, searches the json files for whatever prefix has the most similarities with your prompt (or you could pick ma... | 1 | 0 | 2023-03-26T07:54:05 | Nezarah | false | null | 0 | jdq0d30 | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdq0d30/ | false | 1 |
t1_jdpzyuu | [Perplexity scores](https://medium.com/@TheJasperWhisperer/the-dummy-guide-to-perplexity-and-burstiness-in-ai-generated-content-1b4cb31e5a81). Lower is better, hence the recommendation for the groupsize for everything except 7B. | 4 | 0 | 2023-03-26T07:48:18 | friedrichvonschiller | false | 2023-03-26T07:51:29 | 0 | jdpzyuu | false | /r/LocalLLaMA/comments/122c2sv/with_the_latest_web_ui_update_old_4bit_weights/jdpzyuu/ | false | 4 |
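For reference, perplexity is the exponential of the mean per-token negative log-likelihood: lower means the model is less "surprised" by held-out text. A minimal sketch — the model path is illustrative, and real benchmarks slide a window over a corpus such as WikiText-2 rather than scoring one sentence:

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("models/llama-7b-hf")    # illustrative path
model = AutoModelForCausalLM.from_pretrained("models/llama-7b-hf")

ids = tok("The quick brown fox jumps over the lazy dog.", return_tensors="pt").input_ids
with torch.no_grad():
    loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
print(f"perplexity: {math.exp(loss.item()):.2f}")
```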
t1_jdpyuze | what do those benchmarks mean? what are they benchmarking? | 3 | 0 | 2023-03-26T07:32:07 | Tystros | false | null | 0 | jdpyuze | false | /r/LocalLLaMA/comments/122c2sv/with_the_latest_web_ui_update_old_4bit_weights/jdpyuze/ | false | 3 |
t1_jdpygz1 |
Hello, I am trying to set up a custom device\_map via hugging face's instructions
[https://huggingface.co/docs/transformers/main/en/main\_classes/quantization#offload-between-cpu-and-gpu](https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu)
I have this code insert... | 1 | 0 | 2023-03-26T07:26:23 | SomeGuyInDeutschland | false | null | 0 | jdpygz1 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdpygz1/ | false | 1 |
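The linked doc page describes passing a custom `device_map` together with 8-bit loading and fp32 CPU offload. A hedged sketch of that recipe — the module names follow the LLaMA layout but both they and the path are illustrative:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

device_map = {
    "model.embed_tokens": 0,  # keep embeddings on GPU 0
    "model.layers": "cpu",    # offload the transformer blocks to CPU (fp32)
    "model.norm": 0,
    "lm_head": 0,
}
model = AutoModelForCausalLM.from_pretrained(
    "models/llama-7b-hf",     # illustrative path
    device_map=device_map,
    quantization_config=BitsAndBytesConfig(
        load_in_8bit=True,
        llm_int8_enable_fp32_cpu_offload=True,  # needed when any module sits on CPU
    ),
)
```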
t1_jdpxpe0 | Thank you very much for this PSA.
Is a rebuild or reinstall of GPTQ required for any reason? Anything else to know?
It's fascinating how many of the peers in the torrent are from countries where English is not the first language. | 3 | 0 | 2023-03-26T07:15:16 | friedrichvonschiller | false | 2023-03-26T07:32:16 | 0 | jdpxpe0 | false | /r/LocalLLaMA/comments/122c2sv/with_the_latest_web_ui_update_old_4bit_weights/jdpxpe0/ | false | 3 |
t1_jdpu3h4 | [removed] | 1 | 0 | 2023-03-26T06:24:38 | [deleted] | true | null | 0 | jdpu3h4 | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdpu3h4/ | false | 1 |
t1_jdpu2yz | [https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/)
Look at the "Using Alpaca-LoRA with text-generation-webui" section.
I got it running although it still can't detect my AMD GPU; I have ROCm... | 1 | 0 | 2023-03-26T06:24:26 | PixelForgDev | false | null | 0 | jdpu2yz | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdpu2yz/ | false | 1 |
t1_jdpt3w0 | Do you know what libraries you need to install to run llama.cpp if you already have a quantized model? I didn't have to install anything extra to run alpaca.cpp, but I would like to switch.
Also, how do you run alpaca in llama.cpp? Don't you need a specific text format? | 3 | 0 | 2023-03-26T06:11:17 | throwawaydthrowawayd | false | null | 0 | jdpt3w0 | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdpt3w0/ | false | 3 |
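On the format question: Alpaca was fine-tuned on the Stanford instruction template, so prompts passed to llama.cpp (e.g. via `-p`) generally follow it. A sketch of the no-input variant:

```python
# Building an Alpaca-style prompt for llama.cpp's -p flag; the instruction
# text is illustrative.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)
print(ALPACA_TEMPLATE.format(instruction="Explain what 4-bit quantization does."))
```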
t1_jdpqwil | FYI the `llama.cpp` repo keeps improving the inference performance significantly and I don't see the changes merged in `alpaca.cpp`. So you will probably find it more efficient to run Alpaca using `llama.cpp` | 9 | 0 | 2023-03-26T05:42:59 | ggerganov | false | null | 0 | jdpqwil | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdpqwil/ | false | 9 |
t1_jdpjn06 | [deleted] | 2 | 0 | 2023-03-26T04:21:56 | [deleted] | true | null | 0 | jdpjn06 | false | /r/LocalLLaMA/comments/121yfxl/can_we_create_a_megathread_for_cataloging_all_the/jdpjn06/ | false | 2 |
t1_jdpi057 | What exactly do you download? I have oobabooga but only see pygmalion | 1 | 0 | 2023-03-26T04:05:36 | AbuDagon | false | null | 0 | jdpi057 | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdpi057/ | false | 1 |
t1_jdpc77j | [deleted] | 1 | 0 | 2023-03-26T03:11:17 | [deleted] | true | null | 0 | jdpc77j | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdpc77j/ | false | 1 |
t1_jdpb0ws | I'm 100% sure 65b is coming at *some* point, and there are probably groups working on it right now even though I don't know of anyone in particular.
In theory, something like [this](https://github.com/tloen/alpaca-lora) could be used to do it, but according to that source, it took about 5 hours on a 4090 to train t... | 9 | 0 | 2023-03-26T03:01:01 | AI-Pon3 | false | null | 0 | jdpb0ws | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdpb0ws/ | false | 9 |
t1_jdp8ulp | I haven’t tried any int8 models due to my specs not being sufficient. I will say that alpaca 30B 4bit .bin with alpaca.cpp has impressed me way more than LLaMA 13B 4bit .bin | 1 | 0 | 2023-03-26T02:42:28 | jetpackswasno | false | null | 0 | jdp8ulp | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdp8ulp/ | false | 1 |
t1_jdp7s5w | 30B Alpaca can write stories quite well. It's been a lot of fun to play with.
Do you know if anyone is working on a 65b alpaca? | 8 | 0 | 2023-03-26T02:33:40 | crash1556 | false | null | 0 | jdp7s5w | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdp7s5w/ | false | 8 |
t1_jdp6auo | Please check my comments, it's all I know, promise. I'm high and unable to collect information right now (it's 4:20 am) | 2 | 0 | 2023-03-26T02:21:27 | psycholustmord | false | null | 0 | jdp6auo | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdp6auo/ | false | 2 |
t1_jdp5of5 | Yes, this one. I will check whether some of my files are corrupted; if not, I will do the correct WSL installation when I get the time, I guess. Until then I will try to LoRA it in 8bit as you say. I was using --gpu-memory 6, but I see you can directly set the max amount on your link, that's nice, thanks | 1 | 0 | 2023-03-26T02:16:29 | Momomotus | false | null | 0 | jdp5of5 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdp5of5/ | false | 1 |
t1_jdp4xda | [removed] | 1 | 0 | 2023-03-26T02:10:16 | [deleted] | true | null | 0 | jdp4xda | false | /r/LocalLLaMA/comments/121y16f/starting_model/jdp4xda/ | false | 1 |
t1_jdp4w9q | I explain the rough formatting in the comment that I linked to. His colab already had an option for what length to cut inputs to (he had 256, original alpaca used 512, and I used 700).
I could have trained larger input sizes but then it would be far more expensive and take a lot longer. With 256 they used batches of s... | 1 | 0 | 2023-03-26T02:10:00 | Sixhaunt | false | null | 0 | jdp4w9q | false | /r/LocalLLaMA/comments/121y16f/starting_model/jdp4w9q/ | false | 1 |
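The "cut inputs to N tokens" step mentioned above is ordinary tokenizer truncation. A sketch — the tokenizer path is illustrative, and the quoted cutoffs are 256 (the colab), 512 (original Alpaca), and 700 (this commenter):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("models/llama-7b-hf")  # illustrative path
cutoff_len = 700
example = "Below is an instruction that describes a task..."  # one training record
enc = tok(example, truncation=True, max_length=cutoff_len)
print(len(enc.input_ids))  # never exceeds cutoff_len
```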
t1_jdp2ogm | So for your Elder Scrolls dataset, did you feed entire books into the dataset and it worked? Or are you parsing the lore down to
"What is Hammerfell?"
"Hammerfell, once known as Hegathe, the Deathland (or Deathlands), and Volenfell, is a province in the west of Tamriel, bordering Skyrim, Cyrodiil, and High Rock.[Thi... | 1 | 0 | 2023-03-26T01:52:00 | dave9199 | false | null | 0 | jdp2ogm | false | /r/LocalLLaMA/comments/121y16f/starting_model/jdp2ogm/ | false | 1 |
t1_jdoxgph | >I don't use lora (I think with this) from https://rentry.org/llama-tard-v2#choosing-8bit-or-4bit
Problems with gibberish from models can come from using out of date or improperly quantized versions. To confirm, you're using this 4-bit model right?: [https://huggingface.co/decapoda-research/llama-7b-hf-int4](https:... | 1 | 0 | 2023-03-26T01:13:14 | Technical_Leather949 | false | null | 0 | jdoxgph | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdoxgph/ | false | 1 |
t1_jdoqwau | Can you share your prompting data? I want to give this a try | 1 | 0 | 2023-03-26T00:22:26 | _wsgeorge | false | null | 0 | jdoqwau | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdoqwau/ | false | 1 |
t1_jdoqr3j | thanks for the answer
I use "The CORRECT "HF Converted" weights pytorch_model-00001-of-00033.bin etc" in the llama-7b folder, llama-7b-4bit.pt at the root of models, I don't use lora (I think with this)
from https://rentry.org/llama-tard-v2#choosing-8bit-or-4bit.
I can use it but it takes 8GB VRAM instantly and can of... | 1 | 0 | 2023-03-26T00:21:23 | Momomotus | false | null | 0 | jdoqr3j | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdoqr3j/ | false | 1 |
t1_jdopnkn | So, I got significantly faster performance on 4bit when I was running Windows native, but on pure Ubuntu 22.10 I'm getting vastly better performance out of 8bit. At least your predicament made sense.
As you observe, the code is in a lot of flux. Early days. | 2 | 0 | 2023-03-26T00:13:20 | friedrichvonschiller | false | null | 0 | jdopnkn | false | /r/LocalLLaMA/comments/122218d/speed_difference_between_8bit_and_4bit/jdopnkn/ | false | 2 |
t1_jdopbfi | [https://youtu.be/UtcaO3zTCKQ](https://youtu.be/UtcaO3zTCKQ) | 1 | 0 | 2023-03-26T00:10:53 | chain-77 | false | null | 0 | jdopbfi | false | /r/LocalLLaMA/comments/1224fh0/video_tutorial_for_amd_gpu_to_run_llama_and_lora/jdopbfi/ | false | 1 |
t1_jdoozaa | Got it, thanks | 1 | 0 | 2023-03-26T00:08:24 | ID4gotten | false | null | 0 | jdoozaa | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdoozaa/ | false | 1 |
t1_jdoo1nw | LOL'd at the disclaimer. I agree. There needs to be a directory for this sort of thing, as the pace of change has gotten out of hand.
How easy is it to contribute to a Wiki? I've never done that, but a stickied mega thread sounds like a low effort way to get this going. Perhaps a mod can pick submissions in the comments... | 1 | 0 | 2023-03-26T00:01:14 | _wsgeorge | false | null | 0 | jdoo1nw | false | /r/LocalLLaMA/comments/121yfxl/can_we_create_a_megathread_for_cataloging_all_the/jdoo1nw/ | false | 1 |
t1_jdongqz | What I did was saved a copy of a google colab that fetched a copy of llama, the alpaca dataset, then reformatted the dataset and trained the alpaca lora. I then modified the code to also pull in the ElderScrolls dataset and I formatted it in the same way that the alpaca one was. This was the colab I branched off from, ... | 2 | 0 | 2023-03-25T23:56:50 | Sixhaunt | false | null | 0 | jdongqz | false | /r/LocalLLaMA/comments/121y16f/starting_model/jdongqz/ | false | 2 |
t1_jdomxdw | Can you share what problems you're having? Have you tried the prebuilt DLL [found here](https://www.reddit.com/r/LocalLLaMA/comments/11z8vzy/got_problems_with_bitsandbytes_this_may_be_a_fix/)? | 1 | 0 | 2023-03-25T23:52:46 | Technical_Leather949 | false | null | 0 | jdomxdw | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdomxdw/ | false | 1 |
t1_jdolgdg | Are you talking about the base model or that new 7B 4-bit Alpaca model by someone online? I haven't tried it myself, but it looks like a lot of people have been having problems with that Alpaca model. You can apply [Alpaca-LoRA](https://huggingface.co/tloen/alpaca-lora-7b) to the 7b model instead of trying to use that ... | 1 | 0 | 2023-03-25T23:41:41 | Technical_Leather949 | false | null | 0 | jdolgdg | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdolgdg/ | false | 1 |
t1_jdokz9u | I definitely preferred 13B at the lower bitness. [The accuracy remains remarkable](https://www.reddit.com/r/Oobabooga/comments/11yiwid/gptq_the_most_important_thing_going_on_right_now/) and you get 6B more parameters, which makes for significantly better responses. qwopqwop200 is doing phenomenal work and it's about ... | 1 | 0 | 2023-03-25T23:38:12 | friedrichvonschiller | false | null | 0 | jdokz9u | false | /r/LocalLLaMA/comments/121y16f/starting_model/jdokz9u/ | false | 1 |
t1_jdokyfz | >Note: I ve a 4090, but my memory is only 32gb of RAM.
You can use swap space instead of RAM. With that hardware, you can run 4-bit 30B LLaMA easily. | 1 | 0 | 2023-03-25T23:38:03 | Technical_Leather949 | false | null | 0 | jdokyfz | false | /r/LocalLLaMA/comments/121redn/question_on_llamacpp_and_webui_ram_usage/jdokyfz/ | false | 1 |
t1_jdoki0o | For models, there's only two types to use: standard and finetuned.
For standard, those are the 7B, 13B, 33B, and 65B models from Meta.
For finetuned, there are options like LoRAs and a separate option for a replica of the dataset. Alpaca-LoRA and Alpaca Native are the most prominent projects. Alpaca-LoRA reproduces t... | 1 | 0 | 2023-03-25T23:34:38 | Technical_Leather949 | false | null | 0 | jdoki0o | false | /r/LocalLLaMA/comments/1221xih/guide_to_the_different_parameters_and_models_in/jdoki0o/ | false | 1 |
t1_jdokejq | Haha that's awesome. Do you have any further documentation of the project, or know of other projects that break down how to re-format books so they can be used to train your dataset? | 1 | 0 | 2023-03-25T23:33:55 | dave9199 | false | null | 0 | jdokejq | false | /r/LocalLLaMA/comments/121y16f/starting_model/jdokejq/ | false | 1 |
t1_jdok2d7 | Thanks for the reply. Which do you think is the better place to start: 13B@4bit? | 1 | 0 | 2023-03-25T23:31:22 | dave9199 | false | null | 0 | jdok2d7 | false | /r/LocalLLaMA/comments/121y16f/starting_model/jdok2d7/ | false | 1 |
t1_jdohy4v | [removed] | 1 | 0 | 2023-03-25T23:15:29 | [deleted] | true | null | 0 | jdohy4v | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdohy4v/ | false | 1 |
t1_jdoeug8 | I think a wiki page for this would also work. Lots of subreddits use it to gather best practices and an overview of available options. | 1 | 0 | 2023-03-25T22:52:04 | addandsubtract | false | null | 0 | jdoeug8 | false | /r/LocalLLaMA/comments/121yfxl/can_we_create_a_megathread_for_cataloging_all_the/jdoeug8/ | false | 1 |
t1_jdocsiz | There is an existing project like this one: https://github.com/nsarrazin/serge
A compilation of projects like this one would be helpful. | 1 | 0 | 2023-03-25T22:36:26 | utkvishwas | false | null | 0 | jdocsiz | false | /r/LocalLLaMA/comments/121yfxl/can_we_create_a_megathread_for_cataloging_all_the/jdocsiz/ | false | 1 |
t1_jdocey7 | >python server.py --model llama-7b-hf --gptq-bits 4 --no-stream --cai-chat --lora llama-hh-lora-7B
LoRAs work for 8bit right now, not 4bit
[https://github.com/oobabooga/text-generation-webui/wiki/Using-LoRAs#instructions](https://github.com/oobabooga/text-generation-webui/wiki/Using-LoRAs#instructions) | 1 | 0 | 2023-03-25T22:33:32 | Civil_Collection7267 | false | null | 0 | jdocey7 | false | /r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/jdocey7/ | false | 1 |
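Under the hood, those wiki instructions amount to loading the base model in 8-bit and applying a LoRA adapter with PEFT (hence "8bit right now, not 4bit": the 4-bit GPTQ path didn't support LoRAs at the time). A minimal sketch with an illustrative base-model path:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "models/llama-7b-hf", load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "tloen/alpaca-lora-7b")  # adapter from the HF Hub
```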
t1_jdobxjr | I originally didn't want to create a megathread because I've never seen them widely used in other subreddits. Mods would create a megathread, categorize a huge amount of content for that thread, then force everyone to use it. I wanted a hands-off approach for this place.
>I believe having a centralized thread for L... | 1 | 0 | 2023-03-25T22:29:47 | Civil_Collection7267 | false | null | 0 | jdobxjr | false | /r/LocalLLaMA/comments/121yfxl/can_we_create_a_megathread_for_cataloging_all_the/jdobxjr/ | true | 1 |
t1_jdobwun | Yeah in the stream function you should be able to | 1 | 0 | 2023-03-25T22:29:38 | BriefCardiologist656 | false | null | 0 | jdobwun | false | /r/LocalLLaMA/comments/121pk4y/fastllama_a_python_wrapper_to_run_llamacpp/jdobwun/ | false | 1 |
t1_jdo9gnt | [removed] | 1 | 0 | 2023-03-25T22:10:37 | [deleted] | true | null | 0 | jdo9gnt | false | /r/LocalLLaMA/comments/121pk4y/fastllama_a_python_wrapper_to_run_llamacpp/jdo9gnt/ | false | 1 |
t1_jdo9ffv | That's really cool! Running it through with the 7b alpaca model and a
prompt = 'Below is an instruction that describes a task. Write a response that appropriately completes the request. \n \n### Instruction: Write in the style of Donald Trump, keeping his preferences in mind, about the best horror movies. \n \n ### R... | 1 | 0 | 2023-03-25T22:10:22 | toothpastespiders | false | null | 0 | jdo9ffv | false | /r/LocalLLaMA/comments/121pk4y/fastllama_a_python_wrapper_to_run_llamacpp/jdo9ffv/ | false | 1 |
t1_jdo89qn | I used GPT-4 to make one of Samus for me the other day. Whenever I need something, I just ask GPT for it. :) | 1 | 0 | 2023-03-25T22:01:25 | scorpiove | false | null | 0 | jdo89qn | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdo89qn/ | false | 1 |
t1_jdo7lgd | Yes, both of them. 4-bit | 1 | 0 | 2023-03-25T21:56:12 | kozer1986 | false | 2023-03-25T23:08:22 | 0 | jdo7lgd | false | /r/LocalLLaMA/comments/121redn/question_on_llamacpp_and_webui_ram_usage/jdo7lgd/ | false | 1 |
t1_jdo49bd | I found good success by mixing my dataset with the alpaca dataset then retraining an alpaca lora that understands my new data as well. You can try it out to see for yourself: [https://huggingface.co/Xanthius/alpaca7B-ES-lora](https://huggingface.co/Xanthius/alpaca7B-ES-lora)
it's basically just alpaca that also was ta... | 3 | 0 | 2023-03-25T21:31:08 | Sixhaunt | false | null | 0 | jdo49bd | false | /r/LocalLLaMA/comments/121y16f/starting_model/jdo49bd/ | false | 3 |
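The mixing step described above is essentially appending domain records in the Alpaca instruction format to the original dataset before retraining the LoRA. A sketch — file names and the sample record (echoing the Hammerfell example elsewhere in this thread) are illustrative:

```python
import json

with open("alpaca_data.json") as f:
    records = json.load(f)  # [{"instruction": ..., "input": ..., "output": ...}, ...]

records.append({
    "instruction": "What is Hammerfell?",
    "input": "",
    "output": "Hammerfell is a province in the west of Tamriel...",
})

with open("alpaca_plus_es_data.json", "w") as f:
    json.dump(records, f, indent=2)
```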
t1_jdo3zyp | Having an AMD card sucks right now if you plan to do any AI at all, feels like ass. I tried dual-booting Ubuntu but I wasn't able to make it work even there; everything was so scuffed | 1 | 0 | 2023-03-25T21:29:08 | illyaeater | false | null | 0 | jdo3zyp | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdo3zyp/ | false | 1 |
t1_jdo3ihe | [removed] | 1 | 0 | 2023-03-25T21:25:28 | [deleted] | true | null | 0 | jdo3ihe | false | /r/LocalLLaMA/comments/121y16f/starting_model/jdo3ihe/ | false | 1 |
t1_jdo3hhg | Alpaca's not really a model. There are models built with it embedded that you can download, but it's natively a LoRA. It's a layer that sits atop LLaMA, which is the underlying model.
Depends what you'd like to do. Alpaca is for listening to directives. LLaMA is just guessing the next word in a blob of text.
I'd ... | 5 | 0 | 2023-03-25T21:25:16 | friedrichvonschiller | false | null | 0 | jdo3hhg | false | /r/LocalLLaMA/comments/121y16f/starting_model/jdo3hhg/ | false | 5 |
t1_jdo2olb | Can you somehow give me the link? | 1 | 0 | 2023-03-25T21:19:16 | -2b2t- | false | null | 0 | jdo2olb | false | /r/LocalLLaMA/comments/121g7yn/need_help_installing_alpaca_on_android/jdo2olb/ | false | 1 |
t1_jdnzrql | And a version control tool is essential for any LoRA project. | 3 | 0 | 2023-03-25T20:57:40 | New-Act1498 | false | null | 0 | jdnzrql | false | /r/LocalLLaMA/comments/121yfxl/can_we_create_a_megathread_for_cataloging_all_the/jdnzrql/ | false | 3 |
t1_jdnyupm | I'm using oobabooga
https://github.com/oobabooga/text-generation-webui | 1 | 0 | 2023-03-25T20:50:47 | Inevitable-Start-653 | false | null | 0 | jdnyupm | false | /r/LocalLLaMA/comments/121ad9h/cant_tell_if_the_ai_is_repeating_itself_or/jdnyupm/ | false | 1 |
t1_jdnyi7d | A good estimate for 1B parameters is 2GB in 16bit, 1GB in 8bit and 500MB in 4bit. In practice it's a bit more than that. The only way to fit a 13B model on the 3060 is with 4bit quantization. | 6 | 0 | 2023-03-25T20:48:12 | Disastrous_Elk_6375 | false | null | 0 | jdnyi7d | false | /r/LocalLLaMA/comments/121xqmv/out_of_memory_with_13b_on_an_rtx_3060/jdnyi7d/ | false | 6 |
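That rule of thumb as a quick calculator — weights only; activations and overhead push real usage somewhat higher, as the comment notes:

```python
def weights_gib(params_billions: float, bits: int) -> float:
    # bytes = params * bits / 8; convert to GiB
    return params_billions * 1e9 * bits / 8 / 1024**3

for bits in (16, 8, 4):
    print(f"13B @ {bits}-bit: ~{weights_gib(13, bits):.1f} GiB")
# ~24.2 GiB at 16-bit, ~12.1 at 8-bit, ~6.1 at 4-bit: only 4-bit fits a 12GB 3060
```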
t1_jdnxz8e | There is a TLDR in the comments under that post https://www.reddit.com/r/MachineLearning/comments/1215dbl/comment/jdl1bmp/?utm_source=share&utm_medium=web2x&context=3 | 1 | 0 | 2023-03-25T20:44:17 | tuna_flsh | false | null | 0 | jdnxz8e | false | /r/LocalLLaMA/comments/121b1l5/implementing_reflexion_into_llamaalpaca_would_be/jdnxz8e/ | false | 1 |
t1_jdnu31z | Is it quantized? | 1 | 0 | 2023-03-25T20:15:44 | MammothCollege6260 | false | null | 0 | jdnu31z | false | /r/LocalLLaMA/comments/121redn/question_on_llamacpp_and_webui_ram_usage/jdnu31z/ | false | 1 |
t1_jdnpw2l | [removed] | 1 | 0 | 2023-03-25T19:45:01 | [deleted] | true | null | 0 | jdnpw2l | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdnpw2l/ | false | 1 |
t1_jdnpv5j | >ggml alpaca 4bit .bin files for alpaca.cpp
How is the performance compared to LLaMA 13b int4 and LLaMA 13b int8 w/ alpaca lora? | 1 | 0 | 2023-03-25T19:44:49 | tronathan | false | null | 0 | jdnpv5j | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdnpv5j/ | false | 1 |
t1_jdnptuk | How do you get this kind of interface? | 2 | 0 | 2023-03-25T19:44:33 | Downtown-Web3655 | false | null | 0 | jdnptuk | false | /r/LocalLLaMA/comments/121ad9h/cant_tell_if_the_ai_is_repeating_itself_or/jdnptuk/ | false | 2 |
t1_jdnjbeu | In cai-chat mode in oobabooga/text-generation-webui.
In chat mode you can load characters by loading a JSON that basically builds the prompt for the chat. So I took the sample JSON and asked ChatGPT to create the same format for Aristotle | 3 | 0 | 2023-03-25T18:57:39 | psycholustmord | false | null | 0 | jdnjbeu | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdnjbeu/ | false | 3 |
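A sketch of such a character JSON. The field names follow the web UI's bundled example character and should be treated as illustrative, as should the persona text:

```python
import json

aristotle = {
    "char_name": "Aristotle",
    "char_persona": "Greek philosopher, student of Plato and tutor of Alexander; "
                    "reasons from first principles and answers in measured dialogue.",
    "char_greeting": "Greetings. What shall we examine today?",
    "world_scenario": "A conversation at the Lyceum in Athens.",
    "example_dialogue": "You: What is virtue?\nAristotle: Virtue is a settled "
                        "disposition to choose the mean between excess and deficiency.",
}

with open("characters/Aristotle.json", "w") as f:
    json.dump(aristotle, f, indent=2)  # load it from the web UI's character menu
```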
t1_jdniykx | I have 16GB of RAM and a 3060 12GB, but you can run Alpaca on that card for sure.
I used oobabooga/text-generation-webui, starting in 8bit + Alpaca LoRA.
You can set it to chat mode, and there is a sample JSON that you can use as a template to ask ChatGPT or whatever language model you have access to. This is the pr... | 4 | 0 | 2023-03-25T18:55:07 | psycholustmord | false | null | 0 | jdniykx | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdniykx/ | false | 4 |
t1_jdnh03o | I'm not aware of how well it performs, but as far as a quick web search goes, it has enough RAM! ;-) Perhaps even for a larger model, but speed will be slower. | 1 | 0 | 2023-03-25T18:41:15 | schorhr | false | null | 0 | jdnh03o | false | /r/LocalLLaMA/comments/121g7yn/need_help_installing_alpaca_on_android/jdnh03o/ | false | 1 |
t1_jdngt0v | Oh let me know how it works! How much RAM is available when you close everything? | 1 | 0 | 2023-03-25T18:39:54 | schorhr | false | null | 0 | jdngt0v | false | /r/LocalLLaMA/comments/121g7yn/need_help_installing_alpaca_on_android/jdngt0v/ | false | 1 |
t1_jdngdp1 | This is truly one of the best tips here. I was under the assumption that ChatGPT wouldn't be able to help since its training data only goes up to Sept 2021. But it's been giving me a lot of steps so far. Hoping for the best. | 3 | 0 | 2023-03-25T18:36:53 | LickTempo | false | null | 0 | jdngdp1 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdngdp1/ | false | 3 |
t1_jdneosi | Hi, I have done the one-click installer and the band-aid installer for oobabooga, plus downloaded the correct 7b 4bit and the new 7b model.
I can use it in 8bit and it works, but in 4bit it just spews random words. Anyone have an idea about this? Thanks.
It loads the checkpoint shards (long loading) only when I don't speci... | 1 | 0 | 2023-03-25T18:24:50 | Momomotus | false | null | 0 | jdneosi | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdneosi/ | false | 1 |
t1_jdnbtrb | So, GPT-4 wrote a description of Aristotle, and you fed this as a prompt to Alpaca? Or...?
I'm confused, please explain! | 1 | 0 | 2023-03-25T18:04:40 | ID4gotten | false | null | 0 | jdnbtrb | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdnbtrb/ | false | 1 |
t1_jdn6xqq | Do you think I could run the model with more performance on my steam deck? | 1 | 0 | 2023-03-25T17:30:30 | -2b2t- | false | null | 0 | jdn6xqq | false | /r/LocalLLaMA/comments/121g7yn/need_help_installing_alpaca_on_android/jdn6xqq/ | false | 1 |
t1_jdn6fyh | Thank you very much :)) I'm on a Redmi Note 8 Pro, so I don't expect better performance, but thank you very much for the tutorial; will try it out and then update you | 1 | 0 | 2023-03-25T17:27:02 | -2b2t- | false | null | 0 | jdn6fyh | false | /r/LocalLLaMA/comments/121g7yn/need_help_installing_alpaca_on_android/jdn6fyh/ | false | 1 |
t1_jdn595r | Hi! :-)
After I've tried three days to build it for Android on my bad Windows laptop, I gave up. But then the awesome Klyntar on the Cocktail Peanut Factory Discord gave me this to run on the phone (Termux):
(EDIT: First install Termux on Android, e.g. from F-Droid, and do termux-setup-storage to get access to your S... | 2 | 0 | 2023-03-25T17:18:49 | schorhr | false | 2023-03-25T18:42:11 | 0 | jdn595r | false | /r/LocalLLaMA/comments/121g7yn/need_help_installing_alpaca_on_android/jdn595r/ | false | 2 |
t1_jdn33nc | I got Alpaca 7B and 13B working, getting ~20s per response for 7B, and >1 min per response for 13B. I'm using a Ryzen 5 3600, 16GB RAM with default settings.
The big plus: this UI has features like "Memory", "World info", and "author's notes" that help you tune the AI and help it keep context even in long sessions, w... | 1 | 0 | 2023-03-25T17:03:21 | impetu0usness | false | null | 0 | jdn33nc | false | /r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdn33nc/ | false | 1 |
t1_jdn2cv1 | Afaik the demo I saw was running in Termux. That's all I know. Make sure to keep it up to date like all Linux distros by occasionally running
`apt update && apt full-upgrade -y && apt autoremove -y` | 1 | 0 | 2023-03-25T16:58:03 | afoam | false | null | 0 | jdn2cv1 | false | /r/LocalLLaMA/comments/121g7yn/need_help_installing_alpaca_on_android/jdn2cv1/ | false | 1 |
t1_jdmuoz0 | I think I'll just stick with basic llama for now. This 4bit LoRA seems tempting but smells like hours of troubleshooting.
Maybe next time, when this whole process becomes more user-friendly, at least at the Stable Diffusion Auto1111 level.
Thank you for all your help! | 1 | 0 | 2023-03-25T16:03:32 | Famberlight | false | null | 0 | jdmuoz0 | false | /r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/jdmuoz0/ | false | 1 |
t1_jdmtxts | [removed] | 1 | 0 | 2023-03-25T15:58:11 | [deleted] | true | null | 0 | jdmtxts | false | /r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/jdmtxts/ | false | 1 |
t1_jdmtwnx | Problem is, LLaMA isn't open source but OpenAssistant is. I wanted to use LLaMA for an app I want to sell, but it looks like I'll have to use OpenAssistant instead, which in my usage actually produces higher-quality completions than LLaMA or Alpaca. | 2 | 0 | 2023-03-25T15:57:57 | Neurprise | false | null | 0 | jdmtwnx | false | /r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/jdmtwnx/ | false | 2 |
t1_jdmsyvh | I was able to get 7B and 13B working with 16GB RAM and 8GB VRAM on an older card. | 3 | 0 | 2023-03-25T15:51:19 | Readityesterday2 | false | null | 0 | jdmsyvh | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdmsyvh/ | false | 3 |
t1_jdmqrdf | Nice!! Frikin love it! | 2 | 0 | 2023-03-25T15:35:32 | Inevitable-Start-653 | false | null | 0 | jdmqrdf | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdmqrdf/ | false | 2 |
t1_jdmp3rz | Uhm, not sure... have you moved your downloaded models to the `models` folder? And I think you need both the 8bit and 4bit models to run in 4bit mode, on my setup I placed the 4bit model one folder up, so 8bit model in `models/llama-7b-hf` and 4bit model in `models` | 1 | 0 | 2023-03-25T15:23:48 | stonegdi | false | null | 0 | jdmp3rz | false | /r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/jdmp3rz/ | false | 1 |
t1_jdmoyfe | Yes!! | 1 | 0 | 2023-03-25T15:22:45 | Inevitable-Start-653 | false | null | 0 | jdmoyfe | false | /r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/jdmoyfe/ | false | 1 |
t1_jdmoq83 | Hmm, 24GB of VRAM should be enough to run the whole 13B model easily in 4-bit with that character card. I'm using a 4090, but still the VRAM is always the limiting factor; the two cards perform similarly, and the 30B model can still run with that character card in 4-bit mode.
I've changed and updated my install sev... | 1 | 0 | 2023-03-25T15:21:11 | Inevitable-Start-653 | false | null | 0 | jdmoq83 | false | /r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/jdmoq83/ | false | 1 |
t1_jdmo7j2 | Check this out too, just found this post... looks like LoRA now works with 4bit. There is some info about the CUDA error, too. I think I might give it a try.
[https://github.com/s4rduk4r/alpaca_lora_4bit_readme/blob/main/README.md](https://github.com/s4rduk4r/alpaca_lora_4bit_readme/blob/main/README.md) | 1 | 0 | 2023-03-25T15:17:26 | stonegdi | false | null | 0 | jdmo7j2 | false | /r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/jdmo7j2/ | false | 1 |