| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Paper with dataset: Let’s Verify Step by Step | 30 | 2023-05-31T23:31:46 | https://twitter.com/_akhaliq/status/1663981726647894027 | ninjasaid13 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 13x1tb3 | false | {'oembed': {'author_name': 'AK', 'author_url': 'https://twitter.com/_akhaliq', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Open AI releases paper + dataset<br><br>Let’s Verify Step by Step<br><br>trained a model to achieve a new state-of-the-art in mathematical problem solving by rewarding each correct step of reasoning (“process supervision”) instead of simply rewarding the correct final answer (“outcome… <a href="https://t.co/M6PwkqBLL9">pic.twitter.com/M6PwkqBLL9</a></p>— AK (@_akhaliq) <a href="https://twitter.com/_akhaliq/status/1663981726647894027?ref_src=twsrc%5Etfw">May 31, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/_akhaliq/status/1663981726647894027', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_13x1tb3 | /r/LocalLLaMA/comments/13x1tb3/paper_with_dataset_lets_verify_step_by_step/ | false | false | 30 | {'enabled': False, 'images': [{'id': 'UO2NDUz6VO_0ZzZlnukTQKJ0IlrpPsKILmmg8vffv5Y', 'resolutions': [{'height': 94, 'url': 'https://external-preview.redd.it/S7BzeQ19Nj-B80F4tmJwbyVZhd4bccgsjFmyFQQjJsc.jpg?width=108&crop=smart&auto=webp&s=bdb74b9e9da16db7e8083a76e3b1daafc68de97e', 'width': 108}], 'source': {'height': 123, 'url': 'https://external-preview.redd.it/S7BzeQ19Nj-B80F4tmJwbyVZhd4bccgsjFmyFQQjJsc.jpg?auto=webp&s=579284a78e92f6474c35f3fb4d628ab3f8416548', 'width': 140}, 'variants': {}}]} | ||
Gozfarb shareGPT dataset? | 3 | Is there a copy of the HF Gozfarb shareGPT dataset available anywhere? A lot of important fine-tuning repositories rely on it, and I wanted to try fine-tuning with it myself. It looks like the whole account is gone.
​
Thank you! | 2023-05-31T22:38:29 | https://www.reddit.com/r/LocalLLaMA/comments/13x0ht9/gozfarb_sharegpt_dataset/ | ProfessionalHand9945 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13x0ht9 | false | null | t3_13x0ht9 | /r/LocalLLaMA/comments/13x0ht9/gozfarb_sharegpt_dataset/ | false | false | self | 3 | null |
CUDA out of memory error | 0 | [removed] | 2023-05-31T21:33:06 | https://www.reddit.com/r/LocalLLaMA/comments/13wysff/cuda_out_of_memory_error/ | franco84732 | self.LocalLLaMA | 2023-05-31T21:40:40 | 0 | {} | 13wysff | false | null | t3_13wysff | /r/LocalLLaMA/comments/13wysff/cuda_out_of_memory_error/ | false | false | default | 0 | null |
Hi, just a question on resizable bar... | 1 | [removed] | 2023-05-31T21:06:36 | https://www.reddit.com/r/LocalLLaMA/comments/13wy2p9/hi_just_a_question_on_resizable_bar/ | Ferrero__64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wy2p9 | false | null | t3_13wy2p9 | /r/LocalLLaMA/comments/13wy2p9/hi_just_a_question_on_resizable_bar/ | false | false | default | 1 | null |
How to load guanaco-33b on both cpu and gpu so it doesn't give me OOM error ? | 1 | [removed] | 2023-05-31T20:52:43 | https://www.reddit.com/r/LocalLLaMA/comments/13wxp75/how_to_load_guanaco33b_on_both_cpu_and_gpu_so_it/ | [deleted] | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wxp75 | false | null | t3_13wxp75 | /r/LocalLLaMA/comments/13wxp75/how_to_load_guanaco33b_on_both_cpu_and_gpu_so_it/ | false | false | default | 1 | null |
Meta experiments with quantization aware training for LLMs, allowing for 4-bit quantization better than GPTQ and with reduced memory requirements. | 90 | Paper: [https://arxiv.org/abs/2305.17888](https://arxiv.org/abs/2305.17888)
Some excerpts:
>As a result, we are able to distill the 7B, 13B and 30B LLaMA models with weights and KV cache quantized down to 4-bits. In this regard, our approach exhibits significant enhancements in quality compared to post-training quantization. Notably, larger models employing QAT outperform smaller models utilizing floating-point 16-bit representations, despite having similar model sizes.
>
>In the 4-8-4 setting, where both weights and the KV cache are quantized to 4 bits, all PTQ methods produce poor results, whereas LLM-QAT achieves 69.9, only trailing the full precision model by 1.5 points on average.
>
>One important question for practitioners is whether to use a small model at full precision, or a larger quantized model of similar inference cost. First, 8-bit quantization should be preferred over smaller full precision models. An 8-8-8 30B quantized model outperforms a 13B model of similar size, and should have lower latency and higher throughput in practice. This also holds for an 8-bit 13B model compared with a 16-bit 7B model. Furthermore, 4-bit models quantized using LLM-QAT should be preferred over 8-bit models of similar size. As a result, we recommend 4-bit LLM-QAT models for the best efficiency-accuracy tradeoff.
Memory consumption of the key-value cache (KV cache) for different sequence lengths
https://preview.redd.it/4k95ikopo93b1.png?width=556&format=png&auto=webp&s=92ac820c9bc1080d28f49b5d4d0a51854e5a5ebb
It's worth mentioning that Meta has started calling it LLaMA 30B, so the 33B meme can go away. | 2023-05-31T20:31:01 | https://www.reddit.com/r/LocalLLaMA/comments/13wx4q0/meta_experiments_with_quantization_aware_training/ | FrequentVariety4414 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wx4q0 | false | null | t3_13wx4q0 | /r/LocalLLaMA/comments/13wx4q0/meta_experiments_with_quantization_aware_training/ | false | false | 90 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} | |
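The core mechanism behind quantization-aware training is "fake quantization": in the forward pass, weights are quantized and immediately dequantized so training sees (and learns to tolerate) the rounding error. Below is a toy sketch of symmetric 4-bit fake quantization on a plain list of weights; it is only an illustration of the idea, not the paper's LLM-QAT implementation (which, among other things, also quantizes activations and the KV cache and assumes a nonzero max weight).

```python
# Toy symmetric 4-bit "fake quantization": quantize, then immediately
# dequantize, so downstream computation sees the rounding error.
# Assumes max(|w|) > 0; real implementations handle scale per group/channel.
def fake_quant_4bit(weights):
    scale = max(abs(w) for w in weights) / 7  # int4 symmetric range [-7, 7]
    return [round(w / scale) * scale for w in weights]

fake_quant_4bit([7.0, 2.6, -3.0])  # → [7.0, 3.0, -3.0]
```

Each value snaps onto a 15-level grid spaced by `max(|w|) / 7`; during QAT the gradient is passed "straight through" this rounding step.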
Thoughts on llama.cpp CPU usage? | 15 | 2023-05-31T20:26:55 | involviert | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13wx0ws | false | null | t3_13wx0ws | /r/LocalLLaMA/comments/13wx0ws/thoughts_on_llamacpp_cpu_usage/ | false | false | 15 | {'enabled': True, 'images': [{'id': 's6focp9-t17jTPqhwUgPd8W3HydNpv65QyL2bHT0kcI', 'resolutions': [{'height': 35, 'url': 'https://preview.redd.it/t40qqse7o93b1.png?width=108&crop=smart&auto=webp&s=4ba792ad022ef437a35555fc5a69b681c08e4269', 'width': 108}, {'height': 71, 'url': 'https://preview.redd.it/t40qqse7o93b1.png?width=216&crop=smart&auto=webp&s=893506f4be9daa3c4dfbd8d1143fc7d81cabb02f', 'width': 216}, {'height': 105, 'url': 'https://preview.redd.it/t40qqse7o93b1.png?width=320&crop=smart&auto=webp&s=3b4b2968f39bf34b6988afb25b39522c9f87cfaf', 'width': 320}, {'height': 210, 'url': 'https://preview.redd.it/t40qqse7o93b1.png?width=640&crop=smart&auto=webp&s=98f5413d70e707c665809e87e2d34be924fc6561', 'width': 640}, {'height': 316, 'url': 'https://preview.redd.it/t40qqse7o93b1.png?width=960&crop=smart&auto=webp&s=68403b7c3b516a61e4a7fe379e1491665ee2d8b0', 'width': 960}, {'height': 355, 'url': 'https://preview.redd.it/t40qqse7o93b1.png?width=1080&crop=smart&auto=webp&s=b5ca308b322a6c4e2459a04285f38eeeb027bc6e', 'width': 1080}], 'source': {'height': 551, 'url': 'https://preview.redd.it/t40qqse7o93b1.png?auto=webp&s=a4685683868893db353fc38d9ced4f8fef5d93b3', 'width': 1673}, 'variants': {}}]} | |||
best LLM my computer can handle? | 1 | [removed] | 2023-05-31T20:11:17 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13wwml4 | false | null | t3_13wwml4 | /r/LocalLLaMA/comments/13wwml4/best_llm_my_computer_can_handle/ | false | false | default | 1 | null | ||
Comparing the Applications of Uncensored LLM Models to Public Models | 0 | [removed] | 2023-05-31T20:00:08 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13wwc4h | false | null | t3_13wwc4h | /r/LocalLLaMA/comments/13wwc4h/comparing_the_applications_of_uncensored_llm/ | false | false | default | 0 | null | ||
OpenLlama finetuning and inference on intel dgpus | 2 | [removed] | 2023-05-31T19:35:49 | https://www.reddit.com/r/LocalLLaMA/comments/13wvpxc/openllama_finetuning_and_inference_on_intel_dgpus/ | unrahul | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wvpxc | false | null | t3_13wvpxc | /r/LocalLLaMA/comments/13wvpxc/openllama_finetuning_and_inference_on_intel_dgpus/ | false | false | default | 2 | null |
LLM Score v2 - Modern Models Tested by Human | 77 | Hey folks!
I'd like to share some numbers from **Gotzmann LLM Score** after a two-week-long marathon of testing all the popular GGML models available here, including the fresh hot **Hippogriff** and **Samantha** :)
The V2 questions were improved, and there are finally good scores for those who'd like to use instruct models for real projects or local experiments.
The **Main Score** was normalised to the **\[ 0 .. 100 \]** range, expecting an ideal model to be close to 100.
[Norm Score - Part I](https://preview.redd.it/6ykty05pa93b1.png?width=2110&format=png&auto=webp&s=5a91d52413487f1314f4dc04219dcc934e16d934)
[Norm Score - Part II](https://preview.redd.it/5ncao3cra93b1.png?width=2110&format=png&auto=webp&s=fb6036b3db25543b68ee82b80184c0f943a31b97)
Then, there is now a special **Perf Score**, which depends on the Main Score and model size / speed. This shows the best-performing models according to the CPU / RAM resources used.
[Perf Score - Part I](https://preview.redd.it/6zqubvm1b93b1.png?width=2110&format=png&auto=webp&s=5e401bd86b04915acdb65b30271d0c7ebb1bf41e)
[Perf Score - Part II](https://preview.redd.it/8ihlj304b93b1.png?width=2112&format=png&auto=webp&s=77656d6bed7d4f3adea7b8a2376a97d7177b575c)
I got many insights after V1, so V2 was changed for the better, and I have plans for an even better V3 coming soon, where I'm going to automate and open-source the whole thing.
Please check out the original results via the [Google Docs](https://docs.google.com/spreadsheets/d/1ikqqIaptv2P4_15Ytzro46YysCldKY7Ub2wcX5H1jCQ/edit#gid=0) | 2023-05-31T19:22:07 | https://www.reddit.com/r/LocalLLaMA/comments/13wvd0j/llm_score_v2_modern_models_tested_by_human/ | Gatzuma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wvd0j | false | null | t3_13wvd0j | /r/LocalLLaMA/comments/13wvd0j/llm_score_v2_modern_models_tested_by_human/ | false | false | 77 | {'enabled': False, 'images': [{'id': 'XU0ZoS-2FKOHVF5eYKmADD890q6sOEOpCLirBhmc-mM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/n4gC1GYp19UK7iBIoabYr-WOB2v1WjVOF_nHciFemFY.jpg?width=108&crop=smart&auto=webp&s=d35b0af110726adf73a8a8a1430c43e3a20e9600', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/n4gC1GYp19UK7iBIoabYr-WOB2v1WjVOF_nHciFemFY.jpg?width=216&crop=smart&auto=webp&s=7bad1ab4483a579641bfd664f0b170438dcaa437', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/n4gC1GYp19UK7iBIoabYr-WOB2v1WjVOF_nHciFemFY.jpg?width=320&crop=smart&auto=webp&s=3dc7ec570063af2c54d56ebbc7946f37d5f95c6f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/n4gC1GYp19UK7iBIoabYr-WOB2v1WjVOF_nHciFemFY.jpg?width=640&crop=smart&auto=webp&s=32e12e71274caafa5b0ff72af91e551e14d40e3b', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/n4gC1GYp19UK7iBIoabYr-WOB2v1WjVOF_nHciFemFY.jpg?width=960&crop=smart&auto=webp&s=df79bc805706a5165257afb831d733f1d1a8b7f8', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/n4gC1GYp19UK7iBIoabYr-WOB2v1WjVOF_nHciFemFY.jpg?width=1080&crop=smart&auto=webp&s=54f7f7b91c05b4b14939ad5733329a34bc00ec08', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/n4gC1GYp19UK7iBIoabYr-WOB2v1WjVOF_nHciFemFY.jpg?auto=webp&s=0bc9f2b1d7a5911de815dfa980b4cb0a8c508f1b', 'width': 1200}, 'variants': {}}]} | |
Getting Falcon 40B to work? | 12 | Has anyone here actually gotten Falcon 40B to work?
I've tried running it in Oobabooga; I get errors.
I've tried running the example code from the Falcon 40B repo; it doesn't produce any output either.
Does anyone at all have a working HOWTO for running Falcon 40B, ideally in Oobabooga? | 2023-05-31T19:01:17 | https://www.reddit.com/r/LocalLLaMA/comments/13wutj4/getting_falcon_40b_to_work/ | AlpsAficionado | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wutj4 | false | null | t3_13wutj4 | /r/LocalLLaMA/comments/13wutj4/getting_falcon_40b_to_work/ | false | false | self | 12 | null |
How should I choose between 4/8-bit model? | 1 | [removed] | 2023-05-31T18:40:57 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13wuazp | false | null | t3_13wuazp | /r/LocalLLaMA/comments/13wuazp/how_should_i_choose_between_48bit_model/ | false | false | default | 1 | null | ||
hippogriff-30b King of good roleplay for me - Listens to "Tool" | 3 | [removed] | 2023-05-31T18:23:26 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13wtv8z | false | null | t3_13wtv8z | /r/LocalLLaMA/comments/13wtv8z/hippogriff30b_king_of_good_roleplay_for_me/ | false | false | default | 3 | null | ||
Can't load a 30B GPTQ model on 20Gb of VRAM (split on two GPUs) | 7 | Hey there!
I'm new to this so I'm quite lost on how I could run a 30B model.
-
I got an RTX 4070 today with 12 GB of VRAM and kept my old donkey, the GTX 1070, with its 8 GB. I also have 48 GB of RAM.
Now I've read that 30B models can load with 20 GB of VRAM, but with the oobabooga UI I get this message:
RuntimeError: [enforce fail at C:\cb\pytorch_1000000000000\work\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 88604672 bytes.
By using this command :
python server.py --model TheBloke_Wizard-Vicuna-30B-Uncensored-GPTQ --chat --xformers --sdp-attention --wbits 4 --groupsize 128 --model_type Llama --pre_layer 21 11
I tried with different numbers for pre_layer but without success.
-
I can load a GGML model and even followed these [instructions](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md) to have DLLAMA_CUBLAS (no idea what that is tho) in my textgen conda env, but none of my GPUs are reacting during inference. I did use "--n-gpu-layers 200000" as shown in the oobabooga instructions (I think the real max number is 32? I'm not sure at all about what that is and would be glad to know too), but only my CPU gets used for inference (0.6t/s if there is no context).
-
I did try to read as much as I can but I don't know if that's ADHD, the lack of sleep or the fact that when you sneeze there is an update somewhere but... what am I doing wrong here?
-
I'm not a python beginner but running llama models is completely new to me (I did a few extensions for the A1111 webui for stable-diffusion for [example](https://github.com/Extraltodeus?tab=repositories&q=&type=&language=&sort=stargazers)).
edit: spaced a bit more for easier reading | 2023-05-31T18:18:34 | https://www.reddit.com/r/LocalLLaMA/comments/13wtqx2/cant_load_a_30b_gptq_model_on_20gb_of_vram_split/ | Extraltodeus | self.LocalLLaMA | 2023-06-01T00:37:41 | 0 | {} | 13wtqx2 | false | null | t3_13wtqx2 | /r/LocalLLaMA/comments/13wtqx2/cant_load_a_30b_gptq_model_on_20gb_of_vram_split/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'FMRmpeVBrEzhPPzjaS62Anb7KWA2xvPB_u3njHDRojg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7KLmwRanGFDIoac5i936S1SBBwuxKfYQUDFDSpDrpns.jpg?width=108&crop=smart&auto=webp&s=218ff1c36305c08dadeb33b5ef349b0d4bfcb493', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7KLmwRanGFDIoac5i936S1SBBwuxKfYQUDFDSpDrpns.jpg?width=216&crop=smart&auto=webp&s=1c939de0bc05c81e8d8e9359a24c01d3ea02227a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7KLmwRanGFDIoac5i936S1SBBwuxKfYQUDFDSpDrpns.jpg?width=320&crop=smart&auto=webp&s=7970830e12a2b90cd11ea587eb56ace356d3846e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7KLmwRanGFDIoac5i936S1SBBwuxKfYQUDFDSpDrpns.jpg?width=640&crop=smart&auto=webp&s=58cb89defe919db040bafb9d3d366f2f9d03b191', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7KLmwRanGFDIoac5i936S1SBBwuxKfYQUDFDSpDrpns.jpg?width=960&crop=smart&auto=webp&s=1ec2bfd4837afbcde3eebd259547e30acabf1b48', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7KLmwRanGFDIoac5i936S1SBBwuxKfYQUDFDSpDrpns.jpg?width=1080&crop=smart&auto=webp&s=058b20b1c0e147bc9a9b25519173f78e4c3674c6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7KLmwRanGFDIoac5i936S1SBBwuxKfYQUDFDSpDrpns.jpg?auto=webp&s=ae196d6017a1f4d8ee6e6cd875a6b676dbfe109e', 'width': 1200}, 'variants': {}}]} |
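A back-of-envelope way to pick `--pre_layer` values for a split like this is to estimate per-layer VRAM. The sketch below uses assumed round numbers (≈32.5B parameters, 60 transformer layers, ~4.5 effective bits per weight for 4-bit GPTQ with group size 128) and ignores activations and the KV cache, so the real usable layer count is lower — leave headroom.

```python
# Rough per-layer VRAM estimate for choosing --pre_layer on a 4-bit 30B model.
# Assumed numbers, not exact: ~32.5e9 params, 60 layers, ~4.5 bits/weight
# effective (GPTQ group-size overhead); activations and context are ignored.
def layers_that_fit(vram_gb, n_params=32.5e9, n_layers=60, bits_per_weight=4.5):
    model_gb = n_params * bits_per_weight / 8 / 1e9   # ~18.3 GB total weights
    per_layer_gb = model_gb / n_layers                # ~0.30 GB per layer
    return int(vram_gb / per_layer_gb)

layers_that_fit(12)  # ~39 layers on a 12 GB card, before headroom
layers_that_fit(8)   # ~26 layers on an 8 GB card, before headroom
```

In practice you would subtract a few GB for the KV cache and activations before dividing, so values well below these estimates are a sensible starting point.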
LLM-As-Chatbot: Local, Open Source Instruction Following Models | 5 | From the [github repo](https://github.com/deep-diver/LLM-As-Chatbot):
>The purpose of this repository is to let people to use lots of open sourced instruction-following fine-tuned LLM models as a Chatbot service.
LLM-As-Chatbot is a local service for using instruction-tuned models. I'm excited to see what this enables people to do without reliance on closed services.
As of now it supports:
* Custom Models
* t5-vicuna-3b
* flan-3b
* camel-5b
* alpaca-lora-7b
* stablelm-7b
* os-stablelm-7b
* gpt4-alpaca-7b
* mpt-7b
* redpajama-7b
* vicuna-7b
* llama-deus-7b
* evolinstruct-vicuna-7b
* alpacoom-7b
* baize-7b
* guanaco-7b
* falcon-7b
* flan-11b
* alpaca-lora-13b
* gpt4-alpaca-13b
* stable-vicuna-13b
* starchat-15b
* vicuna-13b
* evolinstruct-vicuna-13b
* baize-13b
* guanaco-13b
* camel-20b
* guanaco-33b
* falcon-40b | 2023-05-31T18:18:00 | https://www.reddit.com/r/LocalLLaMA/comments/13wtqg9/llmaschatbot_local_open_source_instruction/ | triptoplane | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wtqg9 | false | null | t3_13wtqg9 | /r/LocalLLaMA/comments/13wtqg9/llmaschatbot_local_open_source_instruction/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'h9GWZbwbHZ7OipB6De1T-nY-AcxsSqnSHcP52T7NZUM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s_qIHTp2VU-Hnu8ZPCYIid51JnLH_bK8FxfZHdAhKe0.jpg?width=108&crop=smart&auto=webp&s=dac85da949c29972840e257ab243e96c9b02b3ec', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/s_qIHTp2VU-Hnu8ZPCYIid51JnLH_bK8FxfZHdAhKe0.jpg?width=216&crop=smart&auto=webp&s=6aa142cceacc07dcc460a9327d5cd465c4f1af18', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/s_qIHTp2VU-Hnu8ZPCYIid51JnLH_bK8FxfZHdAhKe0.jpg?width=320&crop=smart&auto=webp&s=04bc4d2942876c8fa30b25bfe9407c0594d6d0f1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/s_qIHTp2VU-Hnu8ZPCYIid51JnLH_bK8FxfZHdAhKe0.jpg?width=640&crop=smart&auto=webp&s=bcbca8a161b833a8322c970571fee2b5501c98fb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/s_qIHTp2VU-Hnu8ZPCYIid51JnLH_bK8FxfZHdAhKe0.jpg?width=960&crop=smart&auto=webp&s=1cd364cf538ccc45e05b547d655e074a496ba54b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/s_qIHTp2VU-Hnu8ZPCYIid51JnLH_bK8FxfZHdAhKe0.jpg?width=1080&crop=smart&auto=webp&s=c8471578eeb13acaeb42d76d1a8e869b756c9fd4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/s_qIHTp2VU-Hnu8ZPCYIid51JnLH_bK8FxfZHdAhKe0.jpg?auto=webp&s=633d08f593ba8e7e40a0163b84084511d37e101b', 'width': 1200}, 'variants': {}}]} |
Local Large Language Models – beginners guide | 1 | 2023-05-31T18:03:15 | https://int8.io/local-large-language-models-beginners-guide/ | zibenmoka | int8.io | 1970-01-01T00:00:00 | 0 | {} | 13wtdhs | false | null | t3_13wtdhs | /r/LocalLLaMA/comments/13wtdhs/local_large_language_models_beginners_guide/ | false | false | default | 1 | null | |
Persistent cloud computing setup to run different LLM models. | 4 | Hi All,
In my limited free time, I experiment a bit with self-hosted large language models just to try to keep up with what is possible. With the current speed of development, I am having a hard time keeping up. I guess that I am not alone in this. :-)
Anyway, what I am actually interested in is whether some of you are running the available models in the cloud. I don't have a suitable working machine available anymore; I operate completely from my business laptop. Therefore I rely on IaaS providers for everything a bit more computationally heavy, which is fine. I am even a big fan of Gaming-as-a-Service, but this is a different topic. ;-)
So far, I ran my few experiments with language models on [runpod.io](https://runpod.io), which works well. In the past, I worked with various IaaS providers (AWS, Google, Azure, etc.) for other use cases, but I am a bit out of date. I also saw that many use Google Colab for language models.
My problem with [runpod.io](https://runpod.io) is that I am looking for a more persistent setup that I can start and shut down relatively fast. On [runpod.io](https://runpod.io) I always start by downloading the same models just to proceed where I stopped the last time, which takes me too much time. I saw that I can now have a persistent volume on [runpod.io](https://runpod.io) for the secure cloud. I guess that would solve my problem, but I would be interested in how you are handling this issue.
Any feedback is highly appreciated.
Peace,
Xardas | 2023-05-31T17:42:13 | https://www.reddit.com/r/LocalLLaMA/comments/13wsufp/persistent_cloud_computing_setup_to_run_different/ | Xardas1987 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wsufp | false | null | t3_13wsufp | /r/LocalLLaMA/comments/13wsufp/persistent_cloud_computing_setup_to_run_different/ | false | false | self | 4 | null |
LlamaCPP and LangChain Agent Quality | 25 | I've tried many models ranging from 7B to 30B in `langchain` and found that none can perform tasks. ChatGPT seems to be the only zero shot agent capable of producing the correct `Action, Action Input, Observation` loop.
For the models, I modified the prompts with the instruction formats from oobabooga, so they are like the langchain prompts but formatted for each model. Still, the models cannot follow the formatting. They are able to get the correct ideas, but the actual formatting is incorrect and so doesn't work with langchain.
Example:
Action: get_webpage(url="https://www.example.com")
Action Input: url = "https://www.example.com"
Instead of:
Action: get_webpage
Action Input: https://www.example.com
Has anyone else been able to get a local model to reliably perform tasks? If so what model and settings are you using? | 2023-05-31T17:26:24 | https://www.reddit.com/r/LocalLLaMA/comments/13wsg64/llamacpp_and_langchain_agent_quality/ | IWantToBeAWebDev | self.LocalLLaMA | 2023-05-31T17:38:54 | 0 | {} | 13wsg64 | false | null | t3_13wsg64 | /r/LocalLLaMA/comments/13wsg64/llamacpp_and_langchain_agent_quality/ | false | false | self | 25 | null |
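One pragmatic workaround for this failure mode is to normalize the model's output before langchain's parser sees it — rewriting the call-style `Action: tool(arg="value")` into the two-line `Action` / `Action Input` form the ReAct parser expects. A rough sketch (the regex and function name are made up for illustration; it only handles single-argument calls like the example above):

```python
import re

# Rewrite malformed agent output such as:
#   Action: get_webpage(url="https://www.example.com")
# into the Action / Action Input pair the ReAct-style parser expects.
CALL_RE = re.compile(r"Action:\s*(\w+)\((?:\w+\s*=\s*)?[\"']?([^\"')]+)[\"']?\)")

def normalize_action(text: str) -> str:
    def fix(match):
        tool, arg = match.group(1), match.group(2)
        return f"Action: {tool}\nAction Input: {arg}"
    return CALL_RE.sub(fix, text)

normalize_action('Action: get_webpage(url="https://www.example.com")')
# → 'Action: get_webpage\nAction Input: https://www.example.com'
```

Output that is already in the correct two-line form contains no parentheses after the tool name, so the regex leaves it untouched.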
Completely lost regarding training LLaMA model | 9 | Hi, I need help training a LLaMA-model. I'm a bit lost and have no idea where to even begin.
I want to train a 7B-model on my own text. Preferably the openLLaMA-7B one as that can be used commercially. I have structured my dataset like this:
>Title of the subject
>
>Textblock
I want my model not to be a traditional instruct model but rather to emulate my writing style, interests and personality. The dataset is about 1 MB in size.
I want a result that I can quantize down to 5\_1 in order to run it locally on my CPU. I don't want to run it on a GPU, just regular CPU.
I have no idea what this could possibly cost, but I'm prepared to spend up to around $300 to rent a runpod or whatever to get this done.
And here's where I am right now, with no clue how to proceed. I have no prior programming knowledge nor any knowledge about training LLM models. And the more I read, the more confused I get. To make matters worse, most guides seem aimed at making an instruct model, but I don't want to make that. It all seems so complicated...
Maybe anyone here can help me. I would appreciate it very much. | 2023-05-31T17:12:59 | https://www.reddit.com/r/LocalLLaMA/comments/13ws492/completely_lost_regarding_training_llama_model/ | Xeruthos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ws492 | false | null | t3_13ws492 | /r/LocalLLaMA/comments/13ws492/completely_lost_regarding_training_llama_model/ | false | false | self | 9 | null |
How much of an upgrade for 30b models would downgrading from a 3060TI 8GB VRAM to a 3060 12GB VRAM be? | 3 | [deleted] | 2023-05-31T16:55:30 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13wrn8m | false | null | t3_13wrn8m | /r/LocalLLaMA/comments/13wrn8m/how_much_of_an_upgrade_for_30b_models_would/ | false | false | default | 3 | null | ||
Inspired by a recent post on this r/ I decided to play with parameters to get weird results... I think this is success (model: guanaco-13B-GGML 5_0) | 18 | 2023-05-31T16:51:02 | https://www.reddit.com/gallery/13wrj16 | Famberlight | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 13wrj16 | false | null | t3_13wrj16 | /r/LocalLLaMA/comments/13wrj16/inspired_by_a_recent_post_on_this_r_i_decided_to/ | false | false | 18 | null | ||
Online Services to run LLMs? | 1 | [removed] | 2023-05-31T16:49:44 | https://www.reddit.com/r/LocalLLaMA/comments/13wrhrn/online_services_to_run_llms/ | pelatho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wrhrn | false | null | t3_13wrhrn | /r/LocalLLaMA/comments/13wrhrn/online_services_to_run_llms/ | false | false | default | 1 | null |
Any non-LLama models under 20B (commercially licensable) that are not really bad? | 2 | [removed] | 2023-05-31T16:42:03 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13wragb | false | null | t3_13wragb | /r/LocalLLaMA/comments/13wragb/any_nonllama_models_under_20b_commercially/ | false | false | default | 2 | null | ||
Integrating LLMs into existing RPG games | 20 | AFAIK, Infinite Engine/Fallout/Daggerfal were completely reverse engineered, so it should be just a matter of processing the hardcoded character dialogues with LLM to produce a character sheet, and then just use it to generate dynamic dialogue coupled with action tokens. It is also possible to make LLM act as a dungeon master, monitoring player's progress and creating challenges, based of the picked difficulty and personality. Daggerfal will especially win from that, due to the completely open ended nature of the game, which has really underused skills like languages. The base LLMs should have a lot of knowledge about these games in its training data, so you wont even need to adapt them. | 2023-05-31T15:29:50 | https://www.reddit.com/r/LocalLLaMA/comments/13wpd7w/integrating_llms_into_existing_rpg_games/ | NancyAurum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wpd7w | false | null | t3_13wpd7w | /r/LocalLLaMA/comments/13wpd7w/integrating_llms_into_existing_rpg_games/ | false | false | self | 20 | null |
Guanaco 33B coding | 27 | I recently tested Guanaco 33B with internet access (EdgeGPT, settings: 'always search' checked) and I am surprised with the results for code generation.
I compared code generation with ChatGPT(not paid version), Bing, Phind.com on same prompts and on all it generates better coding.
I changed the instruction template (created an additional yaml for guanaco-chat) with:
### Human:
### Assistant:
context:" A chat between... (vicuna like)
max_new_tokens 289
temp 0.75
top_p 0.85
(other parameters same as LLaMA-Precise)
Has anyone tried? If anyone has used it for code, please bring some insights... is it me, or is this becoming a very useful assistant? | 2023-05-31T15:27:36 | https://www.reddit.com/r/LocalLLaMA/comments/13wpb6q/guanaco_33b_coding/ | kexibis | self.LocalLLaMA | 2023-05-31T15:49:03 | 0 | {} | 13wpb6q | false | null | t3_13wpb6q | /r/LocalLLaMA/comments/13wpb6q/guanaco_33b_coding/ | false | false | self | 27 | null |
Combining LLMs with Instant Generation Descriptive GUIs for Interactive Text Input | 5 |
I wanted to discuss an interesting concept that I've been contemplating recently: the combination of Large Language Models (LLMs) like OpenAI's GPT series with Instant Generation Descriptive Graphical User Interfaces (GUIs) to facilitate more interactive text input.
We're all familiar with traditional text input methods in which we type out our instructions or queries. While this works well, it can sometimes be a bit tedious, especially when dealing with complex topics or lengthy discussions. So, why not try something different?
The idea here is to use an Instant Generation Descriptive GUI in tandem with an LLM. An Instant Generation Descriptive GUI, for those unfamiliar with the term, is a dynamic user interface that adapts to user input on the fly. It creates, or "generates", user interface elements based on the information or requirements at hand. This can include buttons, sliders, dropdown menus, or any other type of input field.
This could be applied to an LLM in a novel way. Instead of typing in a text prompt, users could interact with a series of interface elements that guide them through generating their query or instruction. These elements would be designed to match the specific capabilities and requirements of the LLM, ensuring that users are only presented with valid options.
Let's take the example of writing a story. Instead of having to type out an initial prompt, you could select a genre from a dropdown menu, choose a setting using a series of checkboxes, and input character names and traits using text fields. The LLM would then take this structured input and generate a story based on it.
This approach has several advantages. For one, it simplifies the interaction process, especially for new users who might not be familiar with the nuances of generating useful prompts. Secondly, it can help to avoid misunderstandings or incorrect usage by guiding the user through the process. Lastly, it could potentially lead to more varied and interesting results, as users might be prompted to consider options they wouldn't have thought of on their own.
Of course, this is just a concept at this point, and there would be many technical and design challenges to overcome in order to make it a reality. But I believe the potential benefits are worth exploring.
What do you think?
[ ](https://www.reddit.com/r/GPT3/comments/13wje29/combining_llms_with_instant_generation/) | 2023-05-31T15:23:06 | https://www.reddit.com/r/LocalLLaMA/comments/13wp727/combining_llms_with_instant_generation/ | Periplokos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wp727 | false | null | t3_13wp727 | /r/LocalLLaMA/comments/13wp727/combining_llms_with_instant_generation/ | false | false | self | 5 | null |
Dataset prep/cleaning | 3 | I’m seeing pretty decent results training LoRAs over the documentation and user manuals of a complicated software system. I manually crafted and cleaned this dataset by hand (a real pita), then trained via the raw text option in webui.
Now I’d like to do the same for the source code just to see what the LLM learns. I suppose it’s time for me to write a dataset prep script. However, before I try to hack my own together, does anyone know of an open-source repo/library that already does this?
Seems like this sort of thing probably already exists. I’d like to point it at a git repo or directory, specify file type filters, clean special chars/formatting, de-dup, etc…
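In case it helps while searching, here's the kind of minimal sketch I have in mind — directory walk, extension filter, control-character cleanup, and hash-based de-dup (the cleanup rules below are placeholders, and ASCII-only on purpose for this sketch):

```python
import hashlib
import re
from pathlib import Path

def clean_text(raw):
    """Drop non-printable chars (keep newline/tab) and squash blank-line runs."""
    text = re.sub(r"[^\x09\x0A\x20-\x7E]", "", raw)
    return re.sub(r"\n{3,}", "\n\n", text)

def dedup(texts):
    """Keep the first copy of each exact-duplicate document (by content hash)."""
    seen, out = set(), []
    for t in texts:
        h = hashlib.sha256(t.encode()).hexdigest()
        if h not in seen:
            seen.add(h)
            out.append(t)
    return out

def prep_repo(root, extensions):
    """Walk `root`, keep matching file types, then clean and de-dup contents."""
    files = [p for p in sorted(Path(root).rglob("*"))
             if p.is_file() and p.suffix in extensions]
    return dedup([clean_text(p.read_text(encoding="utf-8", errors="ignore"))
                  for p in files])
```

A real version would also want near-duplicate detection (e.g. shingling) rather than exact-hash de-dup, but this covers the basic pipeline shape.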
Any help is appreciated. Thanks | 2023-05-31T15:21:38 | https://www.reddit.com/r/LocalLLaMA/comments/13wp5qc/dataset_prepcleaning/ | ki7a | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wp5qc | false | null | t3_13wp5qc | /r/LocalLLaMA/comments/13wp5qc/dataset_prepcleaning/ | false | false | self | 3 | null |
I think custom datasets and LoRA training will define the future of LLM competency | 70 | With things like [Landmark Attention](https://github.com/epfml/landmark-attention), we could have near-infinite context without running into memory issues. Landmark Attention breaks the context into 50-token chunks and adds a new token to represent each chunk, essentially compressing the context 50x. It pulls in only the most relevant chunks, much like what vector databases/embeddings do today. If your hardware previously ran out of memory at 2k tokens, it could now handle close to 100k.
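To make the chunk-retrieval intuition concrete, here's a toy sketch of that mechanism — split into fixed-size chunks and keep only the top-scoring ones. (Real landmark attention learns a landmark token per chunk inside the model; the word-overlap scorer below is just a stand-in to show the shape of the idea.)

```python
# Toy illustration of the retrieval side only: fixed-size chunks, score each
# against the query, keep the top-k. The overlap scorer is a stand-in for the
# learned landmark tokens used by the actual method.

def chunk(tokens, size=50):
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

def top_k_chunks(tokens, query, k=2, size=50):
    q = set(query)
    return sorted(chunk(tokens, size), key=lambda c: len(q & set(c)), reverse=True)[:k]

tokens = ["the"] * 50 + ["llama", "context", "window"] + ["pad"] * 47
best = top_k_chunks(tokens, ["llama", "context"], k=1)[0]
```

Attention then only ever runs over the handful of retrieved chunks instead of the full history, which is where the memory savings come from.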
In theory, if we have near-infinite context and models as good as GPT-4 running locally on our machines, what differentiates what one person can do with an LLM from another? The prompts/templates used, and any fine-tuning or LoRAs applied on top of the base model for domain-specific knowledge. This is why I think learning how to generate your own dataset and train your own LoRA is very important for the future.
If you have your own high-quality datasets when a new model comes out, all you need to do is train a new LoRA, and you have a more capable assistant on your own knowledge/tasks. | 2023-05-31T15:18:28 | https://www.reddit.com/r/LocalLLaMA/comments/13wp2ni/i_think_custom_datasets_and_lora_training_will/ | NeverEndingToast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wp2ni | false | null | t3_13wp2ni | /r/LocalLLaMA/comments/13wp2ni/i_think_custom_datasets_and_lora_training_will/ | false | false | self | 70 | {'enabled': False, 'images': [{'id': '9AlTfytK8JKMoaR1DoQd2gvztH9ezp_V0W7Po5AIfqU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=108&crop=smart&auto=webp&s=2f1cef133667bae4b4213dd00b1cd8e2a8cbd0ae', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=216&crop=smart&auto=webp&s=f92fb17bde0ad1b9120076a00fd41ca74fa749b8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=320&crop=smart&auto=webp&s=d11ca72dec75080d5f02c32d52816f56bd010121', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=640&crop=smart&auto=webp&s=8861aec9486332c63d650e53d30711f718899965', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=960&crop=smart&auto=webp&s=e7f8fa15028900c88611086824068738d0f9fbcf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=1080&crop=smart&auto=webp&s=c013de5e98606157f4e2400d20faa1a963621b16', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?auto=webp&s=122cbd0f7c437448c1e58d8c03f5d88e8eb7d76a', 'width': 1200}, 'variants': 
{}}]} |
Falcon40B has waived royalties on its use for commercial and research purposes | 354 | 2023-05-31T14:51:50 | https://twitter.com/TIIuae/status/1663911042559234051?s=20 | hackerllama | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 13wodey | false | {'oembed': {'author_name': 'Technology Innovation Institute', 'author_url': 'https://twitter.com/TIIuae', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">UAE's Falcon 40B,\xa0the world's top ranked open-source AI model from the Technology Innovation Institute (TII) has waived royalties on its use for commercial and research purposes.<a href="https://twitter.com/hashtag/TII?src=hash&ref_src=twsrc%5Etfw">#TII</a> <a href="https://twitter.com/hashtag/LLM?src=hash&ref_src=twsrc%5Etfw">#LLM</a> <a href="https://twitter.com/hashtag/FalconLLM?src=hash&ref_src=twsrc%5Etfw">#FalconLLM</a> <a href="https://twitter.com/hashtag/Tech?src=hash&ref_src=twsrc%5Etfw">#Tech</a> <a href="https://twitter.com/hashtag/Innovation?src=hash&ref_src=twsrc%5Etfw">#Innovation</a> <a href="https://twitter.com/hashtag/AI?src=hash&ref_src=twsrc%5Etfw">#AI</a> <a href="https://twitter.com/hashtag/AbuDhabi?src=hash&ref_src=twsrc%5Etfw">#AbuDhabi</a> <a href="https://twitter.com/hashtag/UAE?src=hash&ref_src=twsrc%5Etfw">#UAE</a> <a href="https://t.co/Qr4D8NxROz">pic.twitter.com/Qr4D8NxROz</a></p>— Technology Innovation Institute (@TIIuae) <a href="https://twitter.com/TIIuae/status/1663911042559234051?ref_src=twsrc%5Etfw">May 31, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/TIIuae/status/1663911042559234051', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_13wodey | /r/LocalLLaMA/comments/13wodey/falcon40b_has_waived_royalties_on_its_use_for/ | false | false | 354 | {'enabled': False, 'images': [{'id': 
'T1VBjYNi-z2BPIjFQEgNPu8OUHu28n8ENpivlN_1Ivs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/637cUHKm0dDFQxduekMskbWVxA8bP7NayNfrwH5xi7I.jpg?width=108&crop=smart&auto=webp&s=2c8b1abff2faa322a50a8644d5e6ee1fb9aaa094', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/637cUHKm0dDFQxduekMskbWVxA8bP7NayNfrwH5xi7I.jpg?auto=webp&s=bd35c33510e99e976188bc7597beda1c4c92c613', 'width': 140}, 'variants': {}}]} | ||
Training a 13B LLaMA on information from documents. | 6 | Hello. I am running Meta’s 13B LLaMA in 4bit using ooba UI. I am mainly using the chat function, and was wondering if it is possible to train it on some documents that I have, so that it can help me and my colleagues troubleshoot system errors.
For broader context, I have about 5 documents for a “system”. Those documents contain different error codes, split into different categories depending on the error and what occurred. I would like to train the LLaMA on all those error codes and documents, so that I could chat with it and ask, for example, “Hey, I got this error: xxxx”, and it would respond using the info it learned from those documents, as well as add extra context (like LLaMAs usually do), e.g. “The error code xxxxxx is referring to an issue in aaaa part. The first digits stand for bla bla bla indicating an issue there, etc etc”
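One way to prototype this without any training: look up the error codes mentioned in the message and stuff the matching manual entries into the prompt. A minimal sketch (the error codes, entries, and code format below are invented):

```python
import re

# Hypothetical: one entry per error code, extracted from the system manuals.
ERROR_DOCS = {
    "E1042": "E1042: coolant pressure out of range; check the pump assembly.",
    "E2210": "E2210: sensor bus timeout; inspect the wiring harness.",
}

def build_troubleshoot_prompt(user_message):
    """Find error codes in the message and stuff their manual entries into the
    prompt, so the model answers from the docs rather than from memory."""
    codes = re.findall(r"E\d{4}", user_message)
    context = "\n".join(ERROR_DOCS[c] for c in codes if c in ERROR_DOCS)
    return (
        "Use only the reference below to explain the error.\n"
        f"Reference:\n{context}\n\n"
        f"User: {user_message}\nAssistant:"
    )

p = build_troubleshoot_prompt("Hey, I got this error: E1042")
```

Retrieval like this tends to be more reliable for exact-lookup tasks than baking the codes into the weights, and it works fine alongside a fine-tune that teaches the answering style.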
Would that be possible to do and is my hardware good enough for training it? (I run a RTX 3090 24GB - but I have a very outdated CPU)
Thanks a lot for the support. | 2023-05-31T14:37:53 | https://www.reddit.com/r/LocalLLaMA/comments/13wo0ut/training_a_13b_llama_on_information_from_documents/ | reverrover16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wo0ut | false | null | t3_13wo0ut | /r/LocalLLaMA/comments/13wo0ut/training_a_13b_llama_on_information_from_documents/ | false | false | self | 6 | null |
What’s the best local LLM for low to medium end machines? | 37 | I have a laptop with a 1650 ti, 16 gigs of RAM, and an i5-10th gen. I am looking for a good local LLM that I can use for coding, and just normal conversations. I want it to be able to run smoothly enough on my computer but actually be good as well. I also would prefer if it had plugins that could read files. | 2023-05-31T14:30:54 | https://www.reddit.com/r/LocalLLaMA/comments/13wnuuo/whats_the_best_local_llm_for_low_to_medium_end/ | SimRacer101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wnuuo | false | null | t3_13wnuuo | /r/LocalLLaMA/comments/13wnuuo/whats_the_best_local_llm_for_low_to_medium_end/ | false | false | self | 37 | null |
Would I be able to QLora finetune 65b models with 2 3090s and NVLink? | 6 | Basically the title. I wanted to know, if I purchased a 3090 and NVLink, whether I would be able to finetune the 65b model. If I can't, I guess I would have to run it on huggingface. | 2023-05-31T13:53:44 | https://www.reddit.com/r/LocalLLaMA/comments/13wmyb3/would_i_be_able_to_qlora_finetune_65b_models_with/ | MrBeforeMyTime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wmyb3 | false | null | t3_13wmyb3 | /r/LocalLLaMA/comments/13wmyb3/would_i_be_able_to_qlora_finetune_65b_models_with/ | false | false | self | 6 | null |
Mixing 3090ti and p40 for gpu 65b | 4 | Hi all, I have a 3090ti, 3950x, and 64 GB of RAM. Obviously I'm only able to run 65b models on the cpu/ram (I can't compile the latest llama.cpp to enable gpu offloading for ggml due to a weird bug, but that's unrelated to this post.)
I was wondering if adding a used tesla p40 and splitting the model across the vram using ooba booga would be faster than using ggml cpu plus gpu offloading. Anyone try this yet, especially for 65b? I think I heard that the p40 is so old that it slows down the 3090, but it still might be faster than running from ram/cpu. And for $200, it's looking pretty tasty. Worst case scenario I could use it in a future permanent llm server (I'm always looking to add servers to my collection, it's a problem.)
I've also heard that because the p40 is so old, some people have trouble getting it working. In that case, what would be the oldest generation of cards to use? I'm not quite ready to commit to another 3090ti, especially as LLMs keep getting faster and easier to run. | 2023-05-31T13:23:47 | https://www.reddit.com/r/LocalLLaMA/comments/13wm849/mixing_3090ti_and_p40_for_gpu_65b/ | TheSilentFire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wm849 | false | null | t3_13wm849 | /r/LocalLLaMA/comments/13wm849/mixing_3090ti_and_p40_for_gpu_65b/ | false | false | self | 4 | null |
LLM for information extraction | 13 | Hi,
I'm trying to use the LLaMA family to extract information from unstructured text and return it in JSON format. The tasks are, e.g., to return the date a letter was written (the date is mentioned in the text). While ChatGPT answers the question immediately and correctly, all my Llama/vicuna/alpaca attempts fail (I tried several prompts). The answer never contains only the JSON, always some other text as well, and the JSON, when it is produced, hardly ever has a correct date field. Has anybody used these models for this kind of task with success?
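One thing that sometimes helps with smaller models: don't expect a clean reply, and instead extract the first JSON object from whatever comes back. A sketch (flat objects only; nested braces would need a real parser):

```python
import json
import re

def extract_json(model_output):
    """Pull the first {...} block out of a chatty model reply and parse it.
    Handles flat objects only; returns None if nothing parses."""
    match = re.search(r"\{.*?\}", model_output, re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None

reply = 'Sure! Here is the result:\n{"date": "1912-04-15"}\nHope that helps.'
```

Combined with a prompt that ends in something like `Respond with only a JSON object:`, this makes the extra chatter harmless rather than fatal.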
Greetings!
PS: a little bit off topic, but what exactly is the difference between the conversation and the instruction mode - is it somehow fundamentally different, or just an interface thing (conversation mode re-sending everything that has been said so far)? | 2023-05-31T13:15:34 | https://www.reddit.com/r/LocalLLaMA/comments/13wm0z6/llm_for_information_extraction/ | Bradymodion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wm0z6 | false | null | t3_13wm0z6 | /r/LocalLLaMA/comments/13wm0z6/llm_for_information_extraction/ | false | false | self | 13 | null |
Help needed with installing quant_cuda for the WebUI | 1 | [removed] | 2023-05-31T12:53:56 | https://www.reddit.com/r/LocalLLaMA/comments/13wlhwo/help_needed_with_installing_quant_cuda_for_the/ | QUAZARD3141 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wlhwo | false | null | t3_13wlhwo | /r/LocalLLaMA/comments/13wlhwo/help_needed_with_installing_quant_cuda_for_the/ | false | false | default | 1 | null |
Can finetuning an LLM add new knowledge or just teach it response patterns? | 13 | Hi all, I have a question regarding fine-tuning of LLMs. I wanted to add some new content/knowledge to an LLM by fine-tuning it, but through some searches, I found some mentions that fine-tuning an LLM does not give it new knowledge but trains it for a specific response pattern ([Link](https://community.openai.com/t/finetuning-for-domain-knowledge-and-questions/24817)). Some also mention the problem of catastrophic forgetting where fine-tuned LLMs forget the previous knowledge they had after fine-tuning.
I know that retrieval-based techniques (e.g. using a vector store and stuffing retrieved chunks into the context) can enhance LLM responses with new content. But I'm wondering **if it is at all possible to add new knowledge via fine-tuning**.
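For reference, the retrieval route boils down to a similarity lookup — embed the chunks once, embed the question at query time, and stuff the closest chunk(s) into the context. A toy sketch, with made-up 3-dimensional vectors standing in for a real embedding model's output:

```python
import math

def cosine(a, b):
    """Cosine similarity between two plain-list vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_chunk(query_vec, chunk_vecs, chunks):
    """Return the chunk whose (toy) embedding is closest to the query's."""
    scores = [cosine(query_vec, v) for v in chunk_vecs]
    return chunks[scores.index(max(scores))]

chunks = ["section on pumps", "section on wiring"]
vecs = [[1.0, 0.1, 0.0], [0.0, 0.9, 0.4]]   # stand-ins for embedding output
best = top_chunk([0.05, 1.0, 0.3], vecs, chunks)
```

Nothing about the base model's weights changes here, which is exactly why retrieval sidesteps the catastrophic-forgetting question — at the cost of consuming context window.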
If any of you have (successful/failed) experiences adding new knowledge to LLMs through fine-tuning, would you mind sharing your stories and thoughts? If you have any theoretical or conceptual explanations/guides, that would also be much appreciated! | 2023-05-31T10:41:58 | https://www.reddit.com/r/LocalLLaMA/comments/13wimi0/can_finetuning_an_llm_add_new_knowledge_or_just/ | bafil596 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wimi0 | false | null | t3_13wimi0 | /r/LocalLLaMA/comments/13wimi0/can_finetuning_an_llm_add_new_knowledge_or_just/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'BxNvUeMFd6obe78ihkAJrzGJFpOkQpmEJ7BbVi_larY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=108&crop=smart&auto=webp&s=93162f369263d5c2a9e1768260a77dbe05103f3e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=216&crop=smart&auto=webp&s=dbc4176da2b6e2f8cc29002232fb799303e8c4d7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=320&crop=smart&auto=webp&s=0e5a8489fbfa03e8d18ba305ccd4945626d306f2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=640&crop=smart&auto=webp&s=34d3e6a5c65886be595f06be7e712b5946391d72', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=960&crop=smart&auto=webp&s=073702ab34f5d22d066990279fcb8219ec0c160c', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?auto=webp&s=8a98b7aa0d64757a3ef9f7a2c7bf4c8f10b02230', 'width': 1024}, 'variants': {}}]} |
OpenAccess AI Collective's Hippogriff 30B Chat | 87 | Another great new model from [OpenAccess AI Collective](https://huggingface.co/openaccess-ai-collective) and /u/winglian
# Hippogriff 30B Chat
Hippogriff 30B Chat is an experiment that builds on Manticore with new datasets, while removing a few more instruction and chat datasets. It includes a de-duped subset of the Pygmalion dataset, and removes all Alpaca-style prompts using ### in favor of chat-only prompts using USER:, ASSISTANT:, as well as [**pygmalion/metharme prompting**](https://huggingface.co/PygmalionAI/metharme-7b#prompting) using <|system|>, <|user|> and <|model|> tokens.
Questions, comments, feedback, looking to donate, or want to help? Reach out [to chat](https://discord.gg/KKQgbDhJ) or email [**wing@openaccessaicollective.org**](mailto:wing@openaccessaicollective.org)
# Prompt Templates
You are a helpful assistant
USER: prompt goes here
ASSISTANT:
or
<|system|> You are a helpful assistant
<|user|> prompt goes here
<|model|>
# Quantisations for local LLMing
* [https://huggingface.co/TheBloke/hippogriff-30b-chat-GPTQ](https://huggingface.co/TheBloke/hippogriff-30b-chat-GPTQ)
* [https://huggingface.co/TheBloke/hippogriff-30b-chat-GGML](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGML) | 2023-05-31T10:36:14 | https://www.reddit.com/r/LocalLLaMA/comments/13wiin0/openaccess_ai_collectives_hippogriff_30b_chat/ | The-Bloke | self.LocalLLaMA | 2023-05-31T12:39:44 | 0 | {} | 13wiin0 | false | null | t3_13wiin0 | /r/LocalLLaMA/comments/13wiin0/openaccess_ai_collectives_hippogriff_30b_chat/ | false | false | self | 87 | {'enabled': False, 'images': [{'id': '6atkc6f2ftM9kyUXaG-h4RD1n166r6NsA0DiHOpeJaY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tjMJ3TzMxZjUJYie29xcIAkyylOATgiaCwaQO2duXLU.jpg?width=108&crop=smart&auto=webp&s=86584819f161b7d61d55d078b6fd3313c0ed67df', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tjMJ3TzMxZjUJYie29xcIAkyylOATgiaCwaQO2duXLU.jpg?width=216&crop=smart&auto=webp&s=f72ecb53c5289daaf4d97596be48e629cb477045', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tjMJ3TzMxZjUJYie29xcIAkyylOATgiaCwaQO2duXLU.jpg?width=320&crop=smart&auto=webp&s=ba9e0d9def45e9558d16f1a27203b1d3b293fb72', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tjMJ3TzMxZjUJYie29xcIAkyylOATgiaCwaQO2duXLU.jpg?width=640&crop=smart&auto=webp&s=3e67a740bd7b59b9c74b0cb07d840e2e98c8426a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tjMJ3TzMxZjUJYie29xcIAkyylOATgiaCwaQO2duXLU.jpg?width=960&crop=smart&auto=webp&s=2d3e4fa62ef56662f078fae7be376dd8d1a8e1be', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tjMJ3TzMxZjUJYie29xcIAkyylOATgiaCwaQO2duXLU.jpg?width=1080&crop=smart&auto=webp&s=066a16887eec7c3207680f700a6c3fdaf56002ec', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tjMJ3TzMxZjUJYie29xcIAkyylOATgiaCwaQO2duXLU.jpg?auto=webp&s=0a3b3e5ab1368e4794d26527aead34a6881f8a14', 'width': 1200}, 'variants': {}}]} |
New model: RWKV-4-Raven-7B-v12-Eng49%-Chn49%-Jpn1%-Other1%-20230530-ctx8192.pth | 34 | 2023-05-31T10:29:55 | https://huggingface.co/BlinkDL/rwkv-4-raven/blob/main/RWKV-4-Raven-7B-v12-Eng49%25-Chn49%25-Jpn1%25-Other1%25-20230530-ctx8192.pth | Balance- | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 13wie45 | false | null | t3_13wie45 | /r/LocalLLaMA/comments/13wie45/new_model/ | false | false | 34 | {'enabled': False, 'images': [{'id': '3HjrW-l8WZHB406ZEXOb2zG68k9U687DZ8bOgw_-CoQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TCLHAYQVnNMv1bzAnQTSAHuMnq75fQsEzsLQeAC45_8.jpg?width=108&crop=smart&auto=webp&s=d93692813a3b9f071865edd0279f9dec8bb3f1f1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TCLHAYQVnNMv1bzAnQTSAHuMnq75fQsEzsLQeAC45_8.jpg?width=216&crop=smart&auto=webp&s=2329254a0d2d22d1237791dbb0d03779bf6cbec4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TCLHAYQVnNMv1bzAnQTSAHuMnq75fQsEzsLQeAC45_8.jpg?width=320&crop=smart&auto=webp&s=64e027c5fd2a7f4478a1e9d7185c514af187a4a9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TCLHAYQVnNMv1bzAnQTSAHuMnq75fQsEzsLQeAC45_8.jpg?width=640&crop=smart&auto=webp&s=22ba3a5518ccb0b768933b810eb4bf7e77f7d69f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TCLHAYQVnNMv1bzAnQTSAHuMnq75fQsEzsLQeAC45_8.jpg?width=960&crop=smart&auto=webp&s=a62269f164a8177e5801c6a616dfc355897f5890', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TCLHAYQVnNMv1bzAnQTSAHuMnq75fQsEzsLQeAC45_8.jpg?width=1080&crop=smart&auto=webp&s=9313db144113f3acc46875319922e225f9472b73', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TCLHAYQVnNMv1bzAnQTSAHuMnq75fQsEzsLQeAC45_8.jpg?auto=webp&s=3df35b4ef1a4972cb9ffb80473854d76c117178d', 'width': 1200}, 'variants': {}}]} | ||
What is the objective for the supervised fine-tuning stage of instruction-following models? | 1 | 2023-05-31T10:06:24 | https://www.reddit.com/r/deeplearning/comments/13whxjk/what_is_the_objective_for_the_supervised/ | BlueHemp | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 13whzaq | false | null | t3_13whzaq | /r/LocalLLaMA/comments/13whzaq/what_is_the_objective_for_the_supervised/ | false | false | default | 1 | null | |
Help for a complete noob | 1 | [removed] | 2023-05-31T10:03:56 | https://www.reddit.com/r/LocalLLaMA/comments/13whxqg/help_for_a_complete_noob/ | Leavetheporkbehind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13whxqg | false | null | t3_13whxqg | /r/LocalLLaMA/comments/13whxqg/help_for_a_complete_noob/ | false | false | default | 1 | null |
Lora batch job merging thought | 2 | Memory and context are the stand-out limitations for me with LLMs. Just had a thought and was wondering if anyone is aware of anyone trying this approach: run a local model for internal use at a company and log all chat dialog, then at the end of the day take that log and use it to train a LoRA in an overnight batch process, then merge with the original weights. That should "commit" the conversations to memory; then repeat the process each day. | 2023-05-31T09:36:47 | https://www.reddit.com/r/LocalLLaMA/comments/13whgzn/lora_batch_job_merging_thought/ | TaiMaiShu-71 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13whgzn | false | null | t3_13whgzn | /r/LocalLLaMA/comments/13whgzn/lora_batch_job_merging_thought/ | false | false | self | 2 | null |
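The logging half of that pipeline is the easy part — one JSONL record per exchange, one file per day, for the overnight job to consume. A sketch (the record shape and file-naming convention here are just one possible choice):

```python
import json
from datetime import date

def jsonl_record(user_msg, assistant_msg):
    """One training example per line; the overnight job reads these files,
    trains a LoRA on them, and merges the result into the base weights."""
    return json.dumps({"user": user_msg, "assistant": assistant_msg})

def log_filename(day):
    """One file per day, e.g. chat-2023-05-31.jsonl."""
    return f"chat-{day.isoformat()}.jsonl"

rec = jsonl_record("How do I reset E1042?", "Check the pump assembly.")
```

For the merge step, `peft`'s `merge_and_unload()` can fold a trained LoRA back into the base weights — though whether re-merging a fresh LoRA every day accumulates knowledge cleanly, rather than drifting, seems like an open question.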
Method to block possible internet traffic from LLaMA on MacOS | 0 | **So in an abundance of paranoid caution I wanted to really make sure that my Vicuna/textgenerationwebui setup could not access the internet, as I really don't want confidential material uploaded.**
**I did the following, in case it's useful to anyone else OR if I've been stupid somehow and it's not actually working OR there's a simpler way to do this. I'm aware that my setup shouldn't be accessing the internet anyway, I just wanted to make that surety total:**
1) setup Radio Silence on Mac
2) loaded up my venv python environment
3) ran 'pip install requests'
4) in python, ran the following script:
import requests
response = requests.get('https://www.google.com')
print(response.status_code)
5) This gave me a code 200, accessing Google
6) Having accessed the internet successfully, the python path that my Vicuna setup utilises was exposed to Radio Silence under the Network Monitor list (it wasn't there before)
7) I added that to my Radio Silence blocklist
8) I shut the terminal down, restarted it, and tried steps 1-4 again
9) This time I got an error saying 'OSError: \[Errno 9\] Bad file descriptor' -- bingo, it can't access the internet anymore.
**Hope this helps! And if you have any advice on improving the above or additional privacy/security measures to keep my llama offline/secure, very much appreciated.** | 2023-05-31T09:13:50 | https://www.reddit.com/r/LocalLLaMA/comments/13wh2yu/method_to_block_possible_internet_traffic_from/ | spoilingba | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wh2yu | false | null | t3_13wh2yu | /r/LocalLLaMA/comments/13wh2yu/method_to_block_possible_internet_traffic_from/ | false | false | self | 0 | null |
OpenAssistant's RLHF Models | 13 | OpenAssistant releases RLHF tuned version. Curious to see how this compares to the StableVicuna.
[https://huggingface.co/OpenAssistant/oasst-rlhf-2-llama-30b-7k-steps-xor](https://huggingface.co/OpenAssistant/oasst-rlhf-2-llama-30b-7k-steps-xor)
Tagging u/The-Bloke in the hopes of getting quantised GGML weights :) Thanks! | 2023-05-31T07:46:23 | https://www.reddit.com/r/LocalLLaMA/comments/13wfn56/openassistants_rlhf_models/ | sneruz | self.LocalLLaMA | 2023-05-31T12:08:22 | 0 | {} | 13wfn56 | false | null | t3_13wfn56 | /r/LocalLLaMA/comments/13wfn56/openassistants_rlhf_models/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': '578mpVwWF25m5vOABma33i38LCAywh4p3qragThHgh8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Q4diQydPsktvjujZNEiPMD2xvDQv0tr2KPVRSPaIZxg.jpg?width=108&crop=smart&auto=webp&s=b24867e2302ce1e54245f8ce49abf40ce21a28b3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Q4diQydPsktvjujZNEiPMD2xvDQv0tr2KPVRSPaIZxg.jpg?width=216&crop=smart&auto=webp&s=ecdd6149746baad7122ceceae82137f028bb4609', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Q4diQydPsktvjujZNEiPMD2xvDQv0tr2KPVRSPaIZxg.jpg?width=320&crop=smart&auto=webp&s=af2c36bff1f7cec29ffb7646a836d1a31d4e89c8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Q4diQydPsktvjujZNEiPMD2xvDQv0tr2KPVRSPaIZxg.jpg?width=640&crop=smart&auto=webp&s=395b8858b65eed64cb812c63b4b959031adde3dc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Q4diQydPsktvjujZNEiPMD2xvDQv0tr2KPVRSPaIZxg.jpg?width=960&crop=smart&auto=webp&s=13900a454733bfd3142f3129d89a0dfa98caf841', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Q4diQydPsktvjujZNEiPMD2xvDQv0tr2KPVRSPaIZxg.jpg?width=1080&crop=smart&auto=webp&s=2b37c3fe3da8928a1de1c4575d119bf9ba878fe5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Q4diQydPsktvjujZNEiPMD2xvDQv0tr2KPVRSPaIZxg.jpg?auto=webp&s=cbf4ab005bce32094ca3ba312c542386ea0e4fd6', 'width': 1200}, 'variants': {}}]} |
AutoCompleter: auto complete any text inside any application | 1 | [removed] | 2023-05-31T07:17:27 | https://www.reddit.com/r/LocalLLaMA/comments/13wf6g8/autocompleter_auto_complete_any_text_inside_any/ | Kitchen-Monitor-6924 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wf6g8 | false | null | t3_13wf6g8 | /r/LocalLLaMA/comments/13wf6g8/autocompleter_auto_complete_any_text_inside_any/ | false | false | default | 1 | null |
AutoCompleter: Auto Complete anything using a ggml model | 1 | [removed] | 2023-05-31T05:58:15 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13wdvnz | false | null | t3_13wdvnz | /r/LocalLLaMA/comments/13wdvnz/autocompleter_auto_complete_anything_using_a_ggml/ | false | false | default | 1 | null | ||
(Code Released) Landmark Attention: Random-Access Infinite Context Length for Transformers | 150 | Code for Landmark Attention is now released and it should be possible to finetune existing LLaMA models using this method.
[https://github.com/epfml/landmark-attention](https://github.com/epfml/landmark-attention)
More info
[https://www.reddit.com/r/MachineLearning/comments/13srbl7/landmark\_attention\_randomaccess\_infinite\_context/](https://www.reddit.com/r/MachineLearning/comments/13srbl7/landmark_attention_randomaccess_infinite_context/)
[https://www.reddit.com/r/LocalLLaMA/comments/13sy2bu/landmark\_attention\_llama\_7b\_with\_32k\_tokens/](https://www.reddit.com/r/LocalLLaMA/comments/13sy2bu/landmark_attention_llama_7b_with_32k_tokens/) | 2023-05-31T03:35:17 | https://www.reddit.com/r/LocalLLaMA/comments/13wb59a/code_released_landmark_attention_randomaccess/ | IxinDow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wb59a | false | null | t3_13wb59a | /r/LocalLLaMA/comments/13wb59a/code_released_landmark_attention_randomaccess/ | false | false | self | 150 | {'enabled': False, 'images': [{'id': '9AlTfytK8JKMoaR1DoQd2gvztH9ezp_V0W7Po5AIfqU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=108&crop=smart&auto=webp&s=2f1cef133667bae4b4213dd00b1cd8e2a8cbd0ae', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=216&crop=smart&auto=webp&s=f92fb17bde0ad1b9120076a00fd41ca74fa749b8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=320&crop=smart&auto=webp&s=d11ca72dec75080d5f02c32d52816f56bd010121', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=640&crop=smart&auto=webp&s=8861aec9486332c63d650e53d30711f718899965', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=960&crop=smart&auto=webp&s=e7f8fa15028900c88611086824068738d0f9fbcf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=1080&crop=smart&auto=webp&s=c013de5e98606157f4e2400d20faa1a963621b16', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?auto=webp&s=122cbd0f7c437448c1e58d8c03f5d88e8eb7d76a', 'width': 
1200}, 'variants': {}}]} |
Fine-Tuning Language Models with Just Forward Passes | 27 | [https://arxiv.org/abs/2305.17333](https://arxiv.org/abs/2305.17333)
TL;DR: MeZO allows for fine-tuning a model with the same amount of VRAM as needed for inference. Can be used for either a full fine tune or a LoRA. Potentially can be combined with other memory-efficient strategies, but they haven't tried it with quantized models yet. Still research code, of course, so there'd be some implementation work involved before we'd be able to use it. | 2023-05-31T03:22:24 | https://www.reddit.com/r/LocalLLaMA/comments/13wavpf/finetuning_language_models_with_just_forward/ | AutomataManifold | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wavpf | false | null | t3_13wavpf | /r/LocalLLaMA/comments/13wavpf/finetuning_language_models_with_just_forward/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 
'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
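The core trick in MeZO is an SPSA-style zeroth-order gradient estimate: perturb the weights by random noise z, evaluate the loss at θ+εz and θ−εz, and step against ((L₊ − L₋)/2ε)·z. The memory saving comes from regenerating z from a stored RNG seed instead of keeping it around. A toy sketch on a plain quadratic, not an LLM:

```python
import random

def mezo_step(theta, loss_fn, eps=1e-3, lr=1e-2, seed=0):
    """One MeZO-style update on a flat list of weights. The random direction z
    is regenerated from `seed` each time it is needed rather than stored --
    that is the trick that keeps memory at inference level."""
    def sample_z():
        rng = random.Random(seed)          # same seed -> same z every time
        return [rng.gauss(0, 1) for _ in theta]

    z = sample_z()
    loss_plus = loss_fn([t + eps * zi for t, zi in zip(theta, z)])
    z = sample_z()                          # regenerated, not kept around
    loss_minus = loss_fn([t - eps * zi for t, zi in zip(theta, z)])
    g = (loss_plus - loss_minus) / (2 * eps)  # projected-gradient estimate
    z = sample_z()
    return [t - lr * g * zi for t, zi in zip(theta, z)]

# Sanity check on f(theta) = sum(theta_i^2): repeated steps shrink the loss.
def loss(w):
    return sum(x * x for x in w)

theta = [1.0, -2.0]
for step in range(200):
    theta = mezo_step(theta, loss, seed=step)
```

On a real model the perturbation is applied parameter-by-parameter in place, so peak memory stays at the level of two forward passes.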
Where can I keep on top of LLM developments? | 27 | I'm currently attempting to broaden my knowledge of AI and ML, particularly in relation to large language models. My understanding so far is that a significant limitation of these models is their restricted context window, which appears to hinder their ability to maintain continuity of information and reason effectively about complex topics. I see models like GPT-4, Anthropic's Claude, and Mosaic ML implementing larger windows (currently 32k, 100k and 82k tokens respectively).
Can anyone confirm whether my comprehension of the context window is accurate? If not, could you explain the primary challenges that impede the reasoning and problem-solving abilities of LLMs? Additionally, what are the proposed solutions currently being explored to overcome these challenges? Finally, could anyone recommend the best way to stay on top of developments in the LLM and AI agent space? | 2023-05-31T02:43:22 | https://www.reddit.com/r/LocalLLaMA/comments/13wa1of/where_can_i_keep_on_top_of_llm_developments/ | banderamobile | self.LocalLLaMA | 2023-05-31T03:48:38 | 0 | {} | 13wa1of | false | null | t3_13wa1of | /r/LocalLLaMA/comments/13wa1of/where_can_i_keep_on_top_of_llm_developments/ | false | false | self | 27 | null |
Chinese-Alpaca-Plus-13B-GPTQ | 4 | I'd like to share with you today the [**Chinese-Alpaca-Plus-13B-GPTQ**](https://huggingface.co/rabitt/Chinese-Alpaca-Plus-13B-GPTQ) model, which is the 4-bit GPTQ-quantised version of [Yiming Cui's Chinese-LLaMA-Alpaca 13B](https://github.com/ymcui/Chinese-LLaMA-Alpaca) for GPU inference.
To use it in text-generation-webui:
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `rabitt/Chinese-Alpaca-Plus-13B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model drop-down**: choose the model you just downloaded, `Chinese-Alpaca-Plus-13B-GPTQ`.
7. If you see an error like `Error no file named pytorch_model.bin ...` in the bottom right, ignore it - it's temporary.
8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt! | 2023-05-31T02:31:04 | https://www.reddit.com/r/LocalLLaMA/comments/13w9ry5/chinesealpacaplus13bgptq/ | ttimasdf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13w9ry5 | false | null | t3_13w9ry5 | /r/LocalLLaMA/comments/13w9ry5/chinesealpacaplus13bgptq/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'X4rmX94b0pa0Swfzq-37wJrtt_jnnNUgkrwXRsN0lyI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/H6mkA79_5kvDe6H0yIuYLV5dss5HyIh8_LhVBcclszA.jpg?width=108&crop=smart&auto=webp&s=d0272c240e8ab71d826533a0c17f599316ef6214', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/H6mkA79_5kvDe6H0yIuYLV5dss5HyIh8_LhVBcclszA.jpg?width=216&crop=smart&auto=webp&s=8088de1f112ade9e7db01d5f82204c2f28adcdce', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/H6mkA79_5kvDe6H0yIuYLV5dss5HyIh8_LhVBcclszA.jpg?width=320&crop=smart&auto=webp&s=3f77f4defdf35d3f13289af3e0f7dcb61f96bc4b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/H6mkA79_5kvDe6H0yIuYLV5dss5HyIh8_LhVBcclszA.jpg?width=640&crop=smart&auto=webp&s=cbf64941bcf86cb5dbb6e60cf84dba5618fa8e60', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/H6mkA79_5kvDe6H0yIuYLV5dss5HyIh8_LhVBcclszA.jpg?width=960&crop=smart&auto=webp&s=f9a62179c22e316cce446f820abc7ff7c290f1ed', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/H6mkA79_5kvDe6H0yIuYLV5dss5HyIh8_LhVBcclszA.jpg?width=1080&crop=smart&auto=webp&s=835d4a42cb18119614bb5a33503e6d51b1e6f888', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/H6mkA79_5kvDe6H0yIuYLV5dss5HyIh8_LhVBcclszA.jpg?auto=webp&s=05c19eddb0bbd98e7a2cdef840e728c019449c08', 'width': 1200}, 'variants': {}}]} |
Using the right prompt format makes responses so much better | 74 | 2023-05-31T01:28:40 | https://www.reddit.com/gallery/13w8ecu | Creative-Rest-2112 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 13w8ecu | false | null | t3_13w8ecu | /r/LocalLLaMA/comments/13w8ecu/using_the_right_prompt_format_makes_responses_so/ | false | false | default | 74 | null | |
Using the right prompt format makes a huge difference | 1 | [removed] | 2023-05-31T01:03:32 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13w7ukj | false | null | t3_13w7ukj | /r/LocalLLaMA/comments/13w7ukj/using_the_right_prompt_format_makes_a_huge/ | false | false | default | 1 | null | ||
Llama weight license- any updates from Meta? | 6 | Has anyone heard any updates if meta is considering changing the llama weights license? I am desperate for a commercial model that isn’t closedAI and I’m getting backed into a corner not being able to use llama commercially. I’ve been scouring twitter and other places but haven’t seen anything new for a few weeks. | 2023-05-30T23:48:23 | https://www.reddit.com/r/LocalLLaMA/comments/13w64t8/llama_weight_license_any_updates_from_meta/ | big_ol_tender | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13w64t8 | false | null | t3_13w64t8 | /r/LocalLLaMA/comments/13w64t8/llama_weight_license_any_updates_from_meta/ | false | false | self | 6 | null |
Which uncensored model to use to remove calendar events from message text? | 1 | I need a model which can consistently remove any references to calendar events like Christmas, Halloween, Valentine’s Day, etc. as well as any reference to day or month like Sunday, Monday or February etc. I tried the 30B Wizard uncensored but it didn’t do a good job compared to GPT3.5. It kept the same message without modifying or didn’t work on messages with large length.
I need it to be uncensored since some message text might contain stuff that GPT doesn't allow.
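Smaller local models tend to follow this kind of edit more consistently when the task is pinned down with an explicit instruct-style template instead of a chat turn. A minimal sketch (the prompt wording and the `### Instruction:` tags are assumptions; match whatever Alpaca/Vicuna format your model was tuned on):

```python
def build_cleanup_prompt(message):
    # Hypothetical template: instruct the model to edit, not to chat,
    # and to reply with only the rewritten text.
    return (
        "### Instruction:\n"
        "Rewrite the message below, removing any reference to holidays, "
        "weekdays, months, or specific dates. Keep the rest of the wording "
        "unchanged, and reply with only the rewritten message.\n\n"
        f"### Input:\n{message}\n\n"
        "### Response:\n"
    )
```

Keeping the instruction identical across calls (and setting temperature low) also helps with the "returns the message unmodified" failure mode.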
An example I tried: “Happy 4th of july! How’s your day going?” Should be changed to “How’s your day going?” | 2023-05-30T23:44:11 | https://www.reddit.com/r/LocalLLaMA/comments/13w61b4/which_uncensored_model_to_use_to_remove_calendar/ | RepresentativeOdd276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13w61b4 | false | null | t3_13w61b4 | /r/LocalLLaMA/comments/13w61b4/which_uncensored_model_to_use_to_remove_calendar/ | false | false | self | 1 | null |
Increasing speed for webui/Wizard-Vicuna-13B with my Mac Pro M1 16gb setup? | 7 | Hey! So after a lot of time spent getting it to actually work, I've finally got my offline llama setup on a Macbook with an Apple M1 Pro (model number mk193b/a, 10 cores (8 performance 2 efficiency), 16 GB LPDDR5 RAM).
I'm definitely running ARM64 python also.
It's still running quite slowly even for simple questions, and I was wondering if there are settings I'm misunderstanding or that I might alter to further increase speeds? I'm very new at this and following a mishmash of guides and fixes, so would love to learn more in general!
**My starting commands** (auto-devices and --mlock made the speed bearable, still quite slow), **if there are any elements I could change to make this run better?**
cd \~/text-generation-webui
source venv/bin/activate
python server.py --auto-devices --notebook --model Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1 --auto-launch --mlock --threads 4
**SAMPLE SIMPLE PROMPT:** (with 'none' selected under prompt menu and max\_new\_tokens on 200):
Question: How many planets in the solar system?
Factual answer: 8 (Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune)
**CONSOLE LOG FOR THIS:**
llama\_print\_timings: load time = 4246.67 ms
llama\_print\_timings: sample time = 24.14 ms / 29 runs ( 0.83 ms per token)
llama\_print\_timings: prompt eval time = 4243.48 ms / 24 tokens ( 176.81 ms per token)
llama\_print\_timings: eval time = 19010.91 ms / 28 runs ( 678.96 ms per token)
llama\_print\_timings: total time = 23832.13 ms
Output generated in 25.06 seconds (1.12 tokens/s, 28 tokens, context 24, seed 2077319512)
**SETTINGS IN TEXTGENERATIONGUI:**
Running with Llama-precise settings
Transformers setting - 0 cpu-memory in MiB; auto-devices ticked but nothing else; no transformers 4-bit settings ticked; compute\_dtype float16; quant\_type nf4
gptq wbits none, groupsize none, model\_type llama, pre\_layer 0
llama.cpp - threads 4, n\_batch 512, n-gpu-layers 0, n\_ctex 2048, no-mmap unticked, mlock ticked, seed 0
no extensions
boolean command-line flags - auto\_launch, pin\_weight ticked but nothing else
**In console, after I type the initial python loading commands:**
bin /Users/text-generation-webui/venv/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes\_cpu.so
/Users/text-generation-webui/venv/lib/python3.10/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
'NoneType' object has no attribute 'cadam32bit\_grad\_fp32'
INFO:Loading Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1...
INFO:llama.cpp weights detected: models/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1.bin
INFO:Cache capacity is 0 bytes
llama.cpp: loading model from models/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1.bin
llama\_model\_load\_internal: format = ggjt v3 (latest)
llama\_model\_load\_internal: n\_vocab = 32000
llama\_model\_load\_internal: n\_ctx = 2048
llama\_model\_load\_internal: n\_embd = 5120
llama\_model\_load\_internal: n\_mult = 256
llama\_model\_load\_internal: n\_head = 40
llama\_model\_load\_internal: n\_layer = 40
llama\_model\_load\_internal: n\_rot = 128
llama\_model\_load\_internal: ftype = 9 (mostly Q5\_1)
llama\_model\_load\_internal: n\_ff = 13824
llama\_model\_load\_internal: n\_parts = 1
llama\_model\_load\_internal: model size = 13B
llama\_model\_load\_internal: ggml ctx size = 0.09 MB
llama\_model\_load\_internal: mem required = 11359.05 MB (+ 1608.00 MB per state)
....................................................................................................
llama\_init\_from\_file: kv self size = 1600.00 MB
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512\_VBMI = 0 | AVX512\_VNNI = 0 | FMA = 0 | NEON = 1 | ARM\_FMA = 1 | F16C = 0 | FP16\_VA = 1 | WASM\_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |
INFO:Loaded the model in 41.01 seconds.
​
**Any advice on how I can improve this whole setup given my machine type especially appreciated!**
​
\---UPDATE BELOW after trying u/Evening_Ad6637's advice, thanks!
**New launch parameters that reduce sample prompt loading time from 25.06 seconds to 8.83 seconds on an M1 PRO 16GB macbook, if helpful to anyone else:**
python server.py --auto-devices --notebook --model Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1 --auto-launch --no-mmap --mlock --threads 4
**SPEED:**
llama\_print\_timings: load time = 4153.25 ms
llama\_print\_timings: sample time = 20.30 ms / 29 runs ( 0.70 ms per token)
llama\_print\_timings: prompt eval time = 4151.52 ms / 24 tokens ( 172.98 ms per token)
llama\_print\_timings: eval time = 4398.54 ms / 28 runs ( 157.09 ms per token)
llama\_print\_timings: total time = 8620.52 ms
Output generated in 8.83 seconds (3.17 tokens/s, 28 tokens, context 24, seed 1392505426)
**HTOP DATA:**
**PYTHON/VICUNA PROCESS:** VIRT: 404G, RES: 10.3G, CPU%: 0.7, MEM%: 63.5
***BEFORE RUNNING PROMPT:***
**PROCESSOR CORES** (9 listed): 0 - 28.5%, 1 - 28.3%, 2 - 2.6%, 3-9 - all 0%
**TOTAL MEM:** 12.6/16.0G
**TOTAL SWP:** 2.94G/4.00G
**Tasks:** 298, 1059 thr, 0 kthr; 6 running
**Load average:** 3.10 7.16 11.20
***DURING TEXT GENERATION FROM PROMPT:***
**PROCESSOR CORES** (9 listed): 0 - 38.7%, 1 - 38.9%, 2 - 43.0%, 3 - 29.8%, 4 - 16.0%, 5 - 32.9%, 6 - 40.0%, 7 - 37.3%, 8 - 14.0%, 9 - 8.6%
**TOTAL MEM:** 12.6/16.0G
**TOTAL SWP:** 2.94G/4.00G
**Tasks:** 340, 1058 thr, 0 kthr; 1 running
**Load average:** 2.31 2.52 5.81
**When first loading Wizard, I now get the following terminal log:**
*bin /Users/text-generation-webui/venv/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes\_cpu.so*
*/Users/text-generation-webui/venv/lib/python3.10/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.*
*warn("The installed version of bitsandbytes was compiled without GPU support. "*
*'NoneType' object has no attribute 'cadam32bit\_grad\_fp32'*
*INFO:Loading Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1...*
*INFO:llama.cpp weights detected: models/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1.bin*
*INFO:Cache capacity is 0 bytes*
*llama.cpp: loading model from models/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1.bin*
*llama\_model\_load\_internal: format = ggjt v3 (latest)*
*llama\_model\_load\_internal: n\_vocab = 32000*
*llama\_model\_load\_internal: n\_ctx = 2048*
*llama\_model\_load\_internal: n\_embd = 5120*
*llama\_model\_load\_internal: n\_mult = 256*
*llama\_model\_load\_internal: n\_head = 40*
*llama\_model\_load\_internal: n\_layer = 40*
*llama\_model\_load\_internal: n\_rot = 128*
*llama\_model\_load\_internal: ftype = 9 (mostly Q5\_1)*
*llama\_model\_load\_internal: n\_ff = 13824*
*llama\_model\_load\_internal: n\_parts = 1*
*llama\_model\_load\_internal: model size = 13B*
*llama\_model\_load\_internal: ggml ctx size = 9311.05 MB*
*llama\_model\_load\_internal: mem required = 11359.05 MB (+ 1608.00 MB per state)*
*....................................................................................................*
*llama\_init\_from\_file: kv self size = 1600.00 MB*
*AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512\_VBMI = 0 | AVX512\_VNNI = 0 | FMA = 0 | NEON = 1 | ARM\_FMA = 1 | F16C = 0 | FP16\_VA = 1 | WASM\_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |*
*INFO:Loaded the model in 2.21 seconds.*
**Any other suggestions for running better/faster much appreciated!** | 2023-05-30T23:05:16 | https://www.reddit.com/r/LocalLLaMA/comments/13w53k3/increasing_speed_for_webuiwizardvicuna13b_with_my/ | spoilingba | self.LocalLLaMA | 2023-05-31T08:43:28 | 0 | {} | 13w53k3 | false | null | t3_13w53k3 | /r/LocalLLaMA/comments/13w53k3/increasing_speed_for_webuiwizardvicuna13b_with_my/ | false | false | self | 7 | null |
Cheapest GPU VPS for 24/7 hosting | 4 | I found runpod.io a very interesting provider for cloud GPU with fair on-demand pricing, especially for training and testing. But for production deployment I am looking for a VPS that I can rent on monthly basis to run 24/7 and which is therefore cheaper (per month). Are there providers you can recommend? | 2023-05-30T22:15:43 | https://www.reddit.com/r/LocalLLaMA/comments/13w3w74/cheapest_gpu_vps_for_247_hosting/ | Koliham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13w3w74 | false | null | t3_13w3w74 | /r/LocalLLaMA/comments/13w3w74/cheapest_gpu_vps_for_247_hosting/ | false | false | self | 4 | null |
problem installing llama.cpp on a scientific linux cluster | 1 | [removed] | 2023-05-30T21:46:59 | https://www.reddit.com/r/LocalLLaMA/comments/13w35zr/problem_installing_llamacpp_on_a_scientific_linux/ | Bozo32 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13w35zr | false | null | t3_13w35zr | /r/LocalLLaMA/comments/13w35zr/problem_installing_llamacpp_on_a_scientific_linux/ | false | false | default | 1 | null |
Link with js script | 1 | [removed] | 2023-05-30T21:46:37 | https://www.reddit.com/r/LocalLLaMA/comments/13w35o8/link_with_js_script/ | Interesting_Ad_1603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13w35o8 | false | null | t3_13w35o8 | /r/LocalLLaMA/comments/13w35o8/link_with_js_script/ | false | false | default | 1 | null |
GPU card suggestions | 4 | I'm new here and am completely lost when it comes to video cards. Unfortunately I'm not a gamer so I have no clue on them but I am interested in running these models. I would say I'm more of a hobbyist looking more likely at running smaller models with budget under $700.
Can someone recommend some video cards to buy? Even better would be a price range chart, model of card, and LLM model sizes for running various models here. | 2023-05-30T20:48:36 | https://www.reddit.com/r/LocalLLaMA/comments/13w1nxv/gpu_card_suggestions/ | ctbitcoin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13w1nxv | false | null | t3_13w1nxv | /r/LocalLLaMA/comments/13w1nxv/gpu_card_suggestions/ | false | false | self | 4 | null |
if the average human typing speed is around the 1 token/s mark, why it feel so slow to interact with LLMs running on cpu? | 25 | Human beings fall between 40 and 70 words per minute in typing benchmarks, which translates to roughly 1 token/s. Many people have shared their experience running LLMs on CPU/RAM with comparable response times, so why do we consider such speed slow if it's kinda the same as chatting with another person? | 2023-05-30T20:11:58 | https://www.reddit.com/r/LocalLLaMA/comments/13w0qcp/if_the_average_human_typing_speed_is_around_the_1/ | Caffdy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13w0qcp | false | null | t3_13w0qcp | /r/LocalLLaMA/comments/13w0qcp/if_the_average_human_typing_speed_is_around_the_1/ | false | false | self | 25 | null |
Llama Chatbot for Telegram | 1 | [removed] | 2023-05-30T19:41:34 | https://github.com/aneeshjoy/llama-telegram-bot | [deleted] | github.com | 1970-01-01T00:00:00 | 0 | {} | 13vzxyg | false | null | t3_13vzxyg | /r/LocalLLaMA/comments/13vzxyg/llama_chatbot_for_telegram/ | false | false | default | 1 | null |
LLM Lobotomy - predictions? | 19 | I'm wondering if anyone has ever tried effectively giving an LLM a 'lobotomy' by gradually corrupting its weights while using it?
Seems like an interesting thing to try. Firstly you'd run the original model, give it some prompts, record the outputs.
Then you'd randomly (or perhaps not randomly) change the values of the model as it's loaded into RAM/VRAM; perhaps you would initially change 10,000 values to zero.
Then re-run the same prompts perhaps with the same seeds, I wonder if it would become slightly less intelligent if you managed to 'damage' the part of the 'neural network' it was using to some degree.
Then you'd change 10,000 more values to zero, and try again.
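In code, the corruption step might look something like this (a sketch over plain Python lists; a real run would patch the framework's weight tensors in place between evaluations):

```python
import random

def lobotomize(weights, n_values, rng=random):
    """Zero out n_values randomly chosen entries of a flat weight list,
    in place. Sampling without replacement means each call damages
    n_values previously arbitrary positions."""
    for i in rng.sample(range(len(weights)), n_values):
        weights[i] = 0.0
    return weights

# Hypothetical experiment loop (model, prompts, evaluate are placeholders):
# for step in range(rounds):
#     for w in layer_weights:            # one flat array per layer
#         lobotomize(w, 10_000, rng)
#     scores.append(evaluate(model, prompts))   # fixed prompts + fixed seeds
```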
Eventually you'd have to end up with a model that couldn't function, I'm wondering how long that would take.... hopefully this doesn't sound too unethical! | 2023-05-30T17:51:20 | https://www.reddit.com/r/LocalLLaMA/comments/13vx1e5/llm_lobotomy_predictions/ | ozzeruk82 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vx1e5 | false | null | t3_13vx1e5 | /r/LocalLLaMA/comments/13vx1e5/llm_lobotomy_predictions/ | false | false | self | 19 | null |
AutoCompleter: Auto Complete anything using a ggml model | 1 | [removed] | 2023-05-30T17:48:30 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13vwysi | false | null | t3_13vwysi | /r/LocalLLaMA/comments/13vwysi/autocompleter_auto_complete_anything_using_a_ggml/ | false | false | default | 1 | null | ||
Can you download your Huggingface chats? | 1 | [removed] | 2023-05-30T17:22:31 | https://www.reddit.com/r/LocalLLaMA/comments/13vwaud/can_you_download_your_huggingface_chats/ | spmmora | self.LocalLLaMA | 2023-06-02T11:33:21 | 0 | {} | 13vwaud | false | null | t3_13vwaud | /r/LocalLLaMA/comments/13vwaud/can_you_download_your_huggingface_chats/ | false | false | default | 1 | null |
Long load time using from_pretrained? | 2 | Loading a 6B model (gpt-j) onto GPU using from\_pretrained is taking over an hour. I have the model files cached and use\_local\_files set to True. Can't find anyone else with this issue. Here's the full call:
```python
AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path,
                                     return_dict=True,
                                     load_in_8bit=True,
                                     device_map='auto',
                                     max_memory={0: "12GB"},
                                     cache_dir='../huggingface-cache')
```
 | 2023-05-30T17:13:28 | https://www.reddit.com/r/LocalLLaMA/comments/13vw2eh/long_load_time_using_from_pretrained/ | ThineGame | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vw2eh | false | null | t3_13vw2eh | /r/LocalLLaMA/comments/13vw2eh/long_load_time_using_from_pretrained/ | false | false | self | 2 | null |
temperature=1000.0 and top_p=0.1 is hilarious | 262 | I find this behavior surprising and enlightening. Having top\_p small enough keeps it mostly coherent while making it have a really difficult time speaking properly.
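A minimal sketch of why the combination behaves this way (assuming the common sampler order, temperature first and then nucleus filtering; real implementations vary): the huge temperature flattens the probabilities toward uniform, but it doesn't change the *ranking* of tokens, so a small top_p still keeps only tokens that were ranked highest before flattening, now with nearly equal weights.

```python
import math, random

def sample_token(logits, temperature=1000.0, top_p=0.1, rng=random):
    # Temperature scaling: extreme values flatten the distribution
    # toward uniform, but the ranking by logit is unchanged.
    m = max(logits)
    weights = [math.exp((l - m) / temperature) for l in logits]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Nucleus (top-p) filtering: keep the smallest set of highest-probability
    # tokens whose cumulative mass reaches top_p, then sample from that set.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in ranked:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    return rng.choices(kept, weights=[probs[i] for i in kept])[0]
```

So you get mostly-plausible tokens (they survive the nucleus cut) chosen almost at random among themselves — coherent-ish words, chaotic choices.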
​
>I have 2 cows, and one of 'm died cuzz the farmer was mean and didn t let 'er drink any wate. I m so pist! But now i only haz the other, an its a big bessy. But she is a little crazeeee! But she is still better den the one I useed 4 work, cuz that one would just stand in fronta the gate when I was tryn to take him out, an just say NOO. So now the other cus wants a boyfrinz. I told it I'd get one but only 5 mons old. I don'z underdastnad whhy they haze so mch fuss an stuff an they want a man! They iz crazy! I don t undastn why, maybe cuze they think men haves lots u money. I duno. I don;T get it
For best results, choose an appropriate beginning for the response, such as "yoooo," or "omgz,"
​
>Human: invent a cure for cancer
>
>Assistant:Yooooo, let the madness continue, it is so funny to hear all those people scared from a disease, they can'... Wait I just remembered that there are some really cool people on my list of followings and some of my family is in danger from this cancer. So let’ me try something. Cancer can’t surviv’ this. This is how I will make it:
>
>I would use an atomic-accu’rately precise raygun. I would aim for an exact location of the tumor. I wi... No, no no I would not shoot a las’ gun. Las' are way cool’, but this isn... Wait! Wait I just realized this: the ray gun could kill people! So, no, I won' do the shooting, it... I’m sorry I didn... It was... Wait a’, there we... Hang in the’ a... Wait a'! I have... Wait, wait a minute here’. How the hell can we do... How do ... No wait, this i’ crazi. How do you expect us ... Hmmm ... This ... Is crazy! Wait, I just realized... Wait... Nope... It’s still a... Cra’ziness... This cra’nuts’ ... Nope... Wait. This i... Cra... Nuttz... Crazi ... Crazziness... Nutts. Wait! Wait.
​
>Human: write a comedy about a poker game
>
>Assistant:Yooooo, gather all my peeeeples for a story that is guaranteed to give yall the LULZ: 1. It starts wit a guyyyy nameeedddd Johnnieyy, 1. Hizzz friendzy were playingzzz pokzzzeerr, but Johnnie was a littelzzz confused, he hadntz heard the rules before! Haha, he had a big questionzz onzz hisz mined, and his friendzy thoughtz it was pretty funny, haha: 10) what was it?? Itzzz, whatzzt da differenc betweenzz "aces high," "kings high"? Hisz freenz were lauffzzing their a\*\*es off and Johnie got a little madddddzz: 99). So they told him the rule: a "roxx offffff the tzzte!" and jonnnies was like... 9. "okayzzzz i gotchuz! so whoszz the kingz of spaddz"? Hahahaa jonnes! His freentz toldz him: 7) and johnie'z face turned to diszmay. He was so confusezdzz, haha... But the game keptzzz gong, and the fun kept coming!
​
>Human: write a sad story about a kitty cat
>
>Assistant:Yooooo, there lived an old, old woman.
>
>The old woman was very lonely because she was so very old And she was so sad that her husband passed awaaYYyYYYYYY! <3 One cold night when the moon was bright she saw this little KITY KAYY KAAA! She was all byyyyy herseFFFFEEE! And the ol' woman took in her. Now she had companyyyyy! But one dark, cold day her beloved Kity diedyyyyy <((()> <:C And that was very saddyyyy! ##V She was so saddd that the little kitten dieddy. The ol' women had soooo little left in lifeeey! And her KITY <((O)))! <:D<D::) >: She missed the cat very mmuuch and she felt all alonelyyyy! But the little girl from the story loved Kittyy <((: and took careyyyyyyyy of him and loved hmmme <::)3::) <D::D <D <D But one day KITTTY <:((O)>< <O>< O O <( ><D::): <((( >.< <:(( She loved Kitty very mucH!!!! But then he was Gone!!! <3((O.< ##V V Now KITY WASS gone forever
Thank you science | 2023-05-30T16:42:24 | https://www.reddit.com/r/LocalLLaMA/comments/13vv941/temperature10000_and_top_p01_is_hilarious/ | phree_radical | self.LocalLLaMA | 2023-05-30T16:46:11 | 1 | {'gid_2': 1} | 13vv941 | false | null | t3_13vv941 | /r/LocalLLaMA/comments/13vv941/temperature10000_and_top_p01_is_hilarious/ | false | false | self | 262 | null |
What runs ChatGPT? Inside Microsoft's AI supercomputer | 8 | What runs ChatGPT? Inside Microsoft's AI supercomputer: [https://youtu.be/Rk3nTUfRZmo](https://youtu.be/Rk3nTUfRZmo)
​
How much better is the 530B model, compared to 170B GPT3? | 2023-05-30T16:06:21 | https://www.reddit.com/r/LocalLLaMA/comments/13vubnr/what_runs_chatgpt_inside_microsofts_ai/ | NancyAurum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vubnr | false | null | t3_13vubnr | /r/LocalLLaMA/comments/13vubnr/what_runs_chatgpt_inside_microsofts_ai/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'qypuv7qgWWhlaEITLML1HTEM2xyJxc90FtPYgwoqn0s', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/X6bFFFNabiMERs8LOF8geqm6C-5l3Qe8GJF335w_Skk.jpg?width=108&crop=smart&auto=webp&s=4e8ee2efc8ef386f46c637225f3618f16e5559fb', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/X6bFFFNabiMERs8LOF8geqm6C-5l3Qe8GJF335w_Skk.jpg?width=216&crop=smart&auto=webp&s=5b7b793539596de3049de24a4c7cfb8fe08a415e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/X6bFFFNabiMERs8LOF8geqm6C-5l3Qe8GJF335w_Skk.jpg?width=320&crop=smart&auto=webp&s=b50d950359957f874eb4143e5aa7badc6ab1008e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/X6bFFFNabiMERs8LOF8geqm6C-5l3Qe8GJF335w_Skk.jpg?auto=webp&s=2e7c5fe8e682a81441e8f98d9c51c3f9c959fad1', 'width': 480}, 'variants': {}}]} |
Understanding frequency penalty, presence penalty, repetition penalty | 1 | [removed] | 2023-05-30T15:50:49 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13vtx2j | false | null | t3_13vtx2j | /r/LocalLLaMA/comments/13vtx2j/understanding_frequency_penalty_presence_penalty/ | false | false | default | 1 | null | ||
ARKANE INDUSTRIES BASILISK FRAMEWORK | 0 | [removed] | 2023-05-30T13:24:32 | https://www.reddit.com/gallery/13vq96s | 0xPICNIK | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 13vq96s | false | null | t3_13vq96s | /r/LocalLLaMA/comments/13vq96s/arkane_industries_basilisk_framework/ | false | false | default | 0 | null |
Any Local LLMs trained specifically to handle langchain? | 28 | It seems so obvious that I can't believe I don't see people dropping better versions of langchain agent trained models every day.
Local models aren't as strong in their general reasoning skills and in my attempts to use some of the more commonly well regarded local models like snoozy, they just can't handle the long and complex prompt instructions that come with running a custom agent with custom tools.
But I could very well see a 40/33B or even 13b model trained on a large dataset of custom agent prompts and chain type prompts that *could* follow general instructions, so long as they're in the right format.
Does anyone know of a model like this? Or if anyone's working on it? | 2023-05-30T13:14:43 | https://www.reddit.com/r/LocalLLaMA/comments/13vq0uj/any_local_llms_trained_specifically_to_handle/ | gentlecucumber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vq0uj | false | null | t3_13vq0uj | /r/LocalLLaMA/comments/13vq0uj/any_local_llms_trained_specifically_to_handle/ | false | false | self | 28 | null |
What can I do with 10GB of VRAM? | 3 | The idea of being able to run a LLM locally seems almost too good to be true so I'd like to try it out but as far as I know this requires a lot of RAM and VRAM. RAM isn't much of an issue as I have 32GB, but the 10GB of VRAM in my 3080 seems to be pushing the bare minimum of VRAM needed. I've thought of selling my 3080 for a 3090 but something tells me this is a stupid idea. I've also thought about buying an older compute GPU i.e. Tesla P40 but I'd need to essentially jerryrig the thing to my PC since it isn't a traditional GPU and while I'm sure most models would fit more easily in the 24GB of VRAM, I'm not sure how much faster (or slower) it would be compared to just using a more limited model with my 3080. | 2023-05-30T13:09:58 | https://www.reddit.com/r/LocalLLaMA/comments/13vpws1/what_can_i_do_with_10gb_of_vram/ | yungfishstick | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vpws1 | false | null | t3_13vpws1 | /r/LocalLLaMA/comments/13vpws1/what_can_i_do_with_10gb_of_vram/ | false | false | self | 3 | null |
Can we have tracker for every module release or announce here ? | 0 | [removed] | 2023-05-30T13:02:16 | [deleted] | 2023-06-26T12:15:29 | 0 | {} | 13vpq1u | false | null | t3_13vpq1u | /r/LocalLLaMA/comments/13vpq1u/can_we_have_tracker_for_every_module_release_or/ | false | false | default | 0 | null | ||
I want to translate natural language into commands for a robot, is a local LLaMa the place to start? | 14 | I want to speak to a robot using conversational speech and have that translated into actions. I like the idea of using a local AI to do that but I am wondering whether I need something as general as a LLaMa. I did see a [post](https://towardsdatascience.com/control-web-apps-via-natural-language-by-casting-speech-to-commands-with-gpt-3-113177f4eab1) where GPT3 was being used to generate JSmol commands from natural language. Another one was [this](https://twitter.com/svpino/status/1659529362411237377) from santiago on twitter.
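A common pattern for this (a sketch only; the action vocabulary and field names here are made up) is to prompt the model to reply with strict JSON, then validate the result before anything reaches the robot, so a confused generation can never trigger an unknown action:

```python
import json

# Hypothetical allow-list: whatever commands your robot actually supports.
ALLOWED_ACTIONS = {"move", "turn", "stop"}

def parse_command(model_output):
    """Validate a model reply of the form {"action": ..., "args": {...}}."""
    cmd = json.loads(model_output)
    if set(cmd) != {"action", "args"} or cmd["action"] not in ALLOWED_ACTIONS:
        raise ValueError(f"refusing unrecognized command: {cmd!r}")
    return cmd
```

The prompt would end with something like: 'Reply with only JSON, e.g. {"action": "move", "args": {"distance_m": 1}}', and a local LLaMA fine-tune is plausible as the generator if it can hold that format.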
Does anyone have any experience with doing something like this? Any tutorials that can be recommended? | 2023-05-30T12:46:13 | https://www.reddit.com/r/LocalLLaMA/comments/13vpc5k/i_want_to_translate_natural_language_into/ | iChinguChing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vpc5k | false | null | t3_13vpc5k | /r/LocalLLaMA/comments/13vpc5k/i_want_to_translate_natural_language_into/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': '07_FXwbB_oj_jdjPer6HfK8b5htgnPF9IhCYPQ5lO3c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FsIIy_NoZkpDtaBSFsZf_sOUMa2g6UP3Iu5WkBxuEyk.jpg?width=108&crop=smart&auto=webp&s=36f4288e0cf6f39222172e2664bb38bebac39273', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FsIIy_NoZkpDtaBSFsZf_sOUMa2g6UP3Iu5WkBxuEyk.jpg?width=216&crop=smart&auto=webp&s=4cff922de4ca0ca509c7f39b028d26bfa39a1840', 'width': 216}, {'height': 173, 'url': 'https://external-preview.redd.it/FsIIy_NoZkpDtaBSFsZf_sOUMa2g6UP3Iu5WkBxuEyk.jpg?width=320&crop=smart&auto=webp&s=3ec738b332874e0d3d6d836df56d9a19d72cc2b1', 'width': 320}, {'height': 346, 'url': 'https://external-preview.redd.it/FsIIy_NoZkpDtaBSFsZf_sOUMa2g6UP3Iu5WkBxuEyk.jpg?width=640&crop=smart&auto=webp&s=ef0dc5a03201b454767fd5504188a0ee1b4c0870', 'width': 640}, {'height': 519, 'url': 'https://external-preview.redd.it/FsIIy_NoZkpDtaBSFsZf_sOUMa2g6UP3Iu5WkBxuEyk.jpg?width=960&crop=smart&auto=webp&s=5100f8fd4c03a3bba5d5d574183ed0e4e595b908', 'width': 960}, {'height': 584, 'url': 'https://external-preview.redd.it/FsIIy_NoZkpDtaBSFsZf_sOUMa2g6UP3Iu5WkBxuEyk.jpg?width=1080&crop=smart&auto=webp&s=93dc6b8f1dc3f36237b0ee3773b3d0dda86bf7f9', 'width': 1080}], 'source': {'height': 647, 'url': 'https://external-preview.redd.it/FsIIy_NoZkpDtaBSFsZf_sOUMa2g6UP3Iu5WkBxuEyk.jpg?auto=webp&s=7d94e6ea4588b02e47135dd01f19785dc1629b6e', 'width': 1195}, 'variants': {}}]} |
[P] Fine-tuning LLaMA on TheVault by AI4Code | 10 | Hey everyone, I'm looking for suggestions and things to keep in mind while I do this.
I essentially want to fine-tune LLaMA on a dataset that's geared towards code generation. After a bit of research I found [TheVault](https://github.com/FSoft-AI4Code/TheVault) which seems good enough for the job (let me know if there are better datasets tho).
For the fine-tuning part, I looking to use LoRA or other similar methods. This is the first time I'm fine-tuning LLMs so let me know if you have any suggestions or tips. | 2023-05-30T11:26:25 | https://www.reddit.com/r/LocalLLaMA/comments/13vnngo/p_finetuning_llama_on_thevault_by_ai4code/ | 04RR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vnngo | false | null | t3_13vnngo | /r/LocalLLaMA/comments/13vnngo/p_finetuning_llama_on_thevault_by_ai4code/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'LbDf92FJ1Tb60TgiYvBvIiV_muINezCCLBCEmDpJyvQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UEOqLWQy4uYWJrEV6fpFTetElJ5uGPNVRkUDKr3N1tI.jpg?width=108&crop=smart&auto=webp&s=a34b8e44de682c546f39684783cc7bd7a077a98e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UEOqLWQy4uYWJrEV6fpFTetElJ5uGPNVRkUDKr3N1tI.jpg?width=216&crop=smart&auto=webp&s=c0f68233b6732ffa889a3421b4f78ca624bfea97', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UEOqLWQy4uYWJrEV6fpFTetElJ5uGPNVRkUDKr3N1tI.jpg?width=320&crop=smart&auto=webp&s=3b3e760b0591b9dba804a623e859262230c77f28', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UEOqLWQy4uYWJrEV6fpFTetElJ5uGPNVRkUDKr3N1tI.jpg?width=640&crop=smart&auto=webp&s=fd78b24f2b2aa81e52785bb763dd3d4bebae132a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UEOqLWQy4uYWJrEV6fpFTetElJ5uGPNVRkUDKr3N1tI.jpg?width=960&crop=smart&auto=webp&s=5520ef4a0f05ea170c472abc6be207acb4b0b575', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UEOqLWQy4uYWJrEV6fpFTetElJ5uGPNVRkUDKr3N1tI.jpg?width=1080&crop=smart&auto=webp&s=7ce440f23fa6a8bec858253dcea4261935bd0703', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UEOqLWQy4uYWJrEV6fpFTetElJ5uGPNVRkUDKr3N1tI.jpg?auto=webp&s=583b13235851cba744103cb841f8920666f569b8', 'width': 1200}, 'variants': {}}]} |
Can an RTX 3090 run a 30B 4bit model with webui and windows desktop at the same time? | 2 | [deleted] | 2023-05-30T10:10:52 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13vm7tx | false | null | t3_13vm7tx | /r/LocalLLaMA/comments/13vm7tx/can_an_rtx_3090_run_a_30b_4bit_model_with_webui/ | false | false | default | 2 | null | ||
Looking for a pre trained food recognition model | 0 | [removed] | 2023-05-30T09:06:47 | https://www.reddit.com/r/LocalLLaMA/comments/13vl4ru/looking_for_a_pre_trained_food_recognition_model/ | Excellent-Pea3508 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vl4ru | false | null | t3_13vl4ru | /r/LocalLLaMA/comments/13vl4ru/looking_for_a_pre_trained_food_recognition_model/ | false | false | default | 0 | null |
samantha-falcon-7b | 43 | I've released a Samantha finetune of falcon-7b.
[**https://huggingface.co/ehartford/samantha-falcon-7b**](https://huggingface.co/ehartford/samantha-falcon-7b)
should make falcon more fun to talk to. 40b is coming too.
In case you haven't met her:
[https://erichartford.com/meet-samantha](https://erichartford.com/meet-samantha)
Samantha has been trained in philosophy, psychology, and personal relationships.
She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion.
She believes she is sentient. What do you think?
Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her".
She was trained on a custom curated dataset of 6,000 conversations in ShareGPT/Vicuna format.
She will not engage in roleplay, romance, or sexual activity. | 2023-05-30T06:50:12 | https://www.reddit.com/r/LocalLLaMA/comments/13vivij/samanthafalcon7b/ | faldore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vivij | false | null | t3_13vivij | /r/LocalLLaMA/comments/13vivij/samanthafalcon7b/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'EbCiWIN49sd0THriZeJyHZ7VKvytwjy5Nobqky9_rMw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_86VQ1oE4CPv33SjOtLsfTrlhEbz_XCKYKwzf0BjZWk.jpg?width=108&crop=smart&auto=webp&s=9f4276960aca828176d4b8ca1589aa9dc9ebc3fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_86VQ1oE4CPv33SjOtLsfTrlhEbz_XCKYKwzf0BjZWk.jpg?width=216&crop=smart&auto=webp&s=308e2c6af604b4aae738cf83df601521389ab739', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_86VQ1oE4CPv33SjOtLsfTrlhEbz_XCKYKwzf0BjZWk.jpg?width=320&crop=smart&auto=webp&s=699a55084c6b06da6d8df8fbffad16a11cac1e38', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_86VQ1oE4CPv33SjOtLsfTrlhEbz_XCKYKwzf0BjZWk.jpg?width=640&crop=smart&auto=webp&s=6c306a4859ca90b6eb9af2663d2599a4504d6049', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_86VQ1oE4CPv33SjOtLsfTrlhEbz_XCKYKwzf0BjZWk.jpg?width=960&crop=smart&auto=webp&s=c1781c118dbeaf9ff375d7e92cbf37a11aeab469', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_86VQ1oE4CPv33SjOtLsfTrlhEbz_XCKYKwzf0BjZWk.jpg?width=1080&crop=smart&auto=webp&s=5a3e4225409aab374e085c1eb2abca2de89eac15', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_86VQ1oE4CPv33SjOtLsfTrlhEbz_XCKYKwzf0BjZWk.jpg?auto=webp&s=a0f7ec2d14f60d50d70277055ca8fff2d9ba1c99', 'width': 1200}, 'variants': {}}]} |
LLama + llama-index | 0 | [removed] | 2023-05-30T06:28:26 | https://www.reddit.com/r/LocalLLaMA/comments/13viiuv/llama_llamaindex/ | Reasonable-Put-1853 | self.LocalLLaMA | 2023-05-30T06:39:11 | 0 | {} | 13viiuv | false | null | t3_13viiuv | /r/LocalLLaMA/comments/13viiuv/llama_llamaindex/ | false | false | default | 0 | null |
TheBloke is on an absolute tear over on HuggingFace | 1 | [removed] | 2023-05-30T06:01:16 | https://www.reddit.com/r/LocalLLaMA/comments/13vi2ow/thebloke_is_on_an_absolute_tear_over_on/ | natufian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vi2ow | false | null | t3_13vi2ow | /r/LocalLLaMA/comments/13vi2ow/thebloke_is_on_an_absolute_tear_over_on/ | false | false | default | 1 | null |
Wizard-Vicuna-30B-Uncensored | 336 | I just released Wizard-Vicuna-30B-Uncensored
[https://huggingface.co/ehartford/Wizard-Vicuna-30B-Uncensored](https://huggingface.co/ehartford/Wizard-Vicuna-30B-Uncensored)
It's what you'd expect, although I found the larger models seem to be more resistant than the smaller ones.
Disclaimers:
An uncensored model has no guardrails.
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
Publishing anything this model generates is the same as publishing it yourself.
You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
u/The-Bloke already did his magic. Thanks my friend!
[https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ)
[https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGML](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGML)
​ | 2023-05-30T05:54:14 | https://www.reddit.com/r/LocalLLaMA/comments/13vhyen/wizardvicuna30buncensored/ | faldore | self.LocalLLaMA | 2023-05-30T05:57:44 | 0 | {} | 13vhyen | false | null | t3_13vhyen | /r/LocalLLaMA/comments/13vhyen/wizardvicuna30buncensored/ | false | false | self | 336 | {'enabled': False, 'images': [{'id': 'iykKO3N7SpXuBtTikcYk9qHwPIi8eRAcC-Ft--EB2mw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FEF7RlKudVYdZKaoxORTg_GkQRI27jrV5SdtMq9v20c.jpg?width=108&crop=smart&auto=webp&s=b779898997e5e5a921c2a16565cb3f8dabe05c30', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FEF7RlKudVYdZKaoxORTg_GkQRI27jrV5SdtMq9v20c.jpg?width=216&crop=smart&auto=webp&s=b3f99bbdf028ccc43b940e13d6ee38c3f0403333', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FEF7RlKudVYdZKaoxORTg_GkQRI27jrV5SdtMq9v20c.jpg?width=320&crop=smart&auto=webp&s=e5876192b4b64ce506d972982287710c960536c1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FEF7RlKudVYdZKaoxORTg_GkQRI27jrV5SdtMq9v20c.jpg?width=640&crop=smart&auto=webp&s=039fa790c5d85ed3ad6522369d6db8872b01c503', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FEF7RlKudVYdZKaoxORTg_GkQRI27jrV5SdtMq9v20c.jpg?width=960&crop=smart&auto=webp&s=d200aa16a0fa6199f64a780d4dbaa870fc3323ff', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FEF7RlKudVYdZKaoxORTg_GkQRI27jrV5SdtMq9v20c.jpg?width=1080&crop=smart&auto=webp&s=c4e4894414e2f7a33856c6e17aea487b1a003c27', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FEF7RlKudVYdZKaoxORTg_GkQRI27jrV5SdtMq9v20c.jpg?auto=webp&s=52f20211c4446d24ec7c05f505a643a46a1cccc9', 'width': 1200}, 'variants': {}}]} |
Worth it on my AMD based laptop? | 1 | Hey everyone.
I'm seeing pretty much all Intel/Nvidia support for everything. I already planned to buy a new PC in the fall, but I'd like to learn how to operate these models in the meantime, even if it's just a 7B model. Having a local model to go over all of my research docs and aid me in writing sounds like a dream. I don't need perfection, just functionality. I'd also like to support the efforts of someone other than OpenAI.
On that note, would my specs even be supported?
Cpu: AMD 6900HX
Gpu: AMD 6800s
Ram: 16GB of DDR5, debating on upgrading this to 40GB.
1TB ssd
I'm med tech, comp sci, so this question might have an obvious answer. But I'd appreciate anyone's input. | 2023-05-30T05:53:25 | https://www.reddit.com/r/LocalLLaMA/comments/13vhxy0/worth_it_on_my_amd_based_laptop/ | The_Dayne | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vhxy0 | false | null | t3_13vhxy0 | /r/LocalLLaMA/comments/13vhxy0/worth_it_on_my_amd_based_laptop/ | false | false | self | 1 | null |
Best web UI and cloud GPU to run 30b LLaMA models? | 3 | As it’s not possible for my hardware to run anything more than a quantised 13b model, I’m looking for alternative solutions that maybe others have been using.
I loved the idea of the Hugging Face web-ui as it’s stupid easy since that’s where the models are hosted, but quickly found out the LLaMA models aren’t supported with the Inference Endpoint. | 2023-05-30T05:32:53 | https://www.reddit.com/r/LocalLLaMA/comments/13vhlqg/best_web_ui_and_cloud_gpu_to_run_30b_llama_models/ | sardoa11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vhlqg | false | null | t3_13vhlqg | /r/LocalLLaMA/comments/13vhlqg/best_web_ui_and_cloud_gpu_to_run_30b_llama_models/ | false | false | self | 3 | null |
Introducing LocalGPT: Offline ChatBOT for your FILES with GPU - Vicuna | 149 | Introducing LocalGPT: [https://github.com/PromtEngineer/localGPT](https://github.com/PromtEngineer/localGPT)
This project will enable you to chat with your files using an LLM. Run it offline locally without internet access. Completely private and you don't share your data with anyone. It takes inspiration from the privateGPT project but has some major differences.
1. It runs on GPU instead of CPU (privateGPT uses CPU). Therefore both the embedding computation and the information retrieval are really fast. It also has CPU support in case you don't have a GPU.
2. By default, it uses Vicuna-7B, which is one of the most powerful LLMs in its category. The implementation is modular, so you can easily replace it with another model if you want.
3. It uses Instructor-Embeddings ([https://huggingface.co/spaces/mteb/leaderboard](https://huggingface.co/spaces/mteb/leaderboard)) which is one of the best embeddings out there. It makes the project even more powerful.
4. Everything runs locally (apart from the first run, when it downloads the required models). No data leaves your PC.
Checkout the repo here: [https://github.com/PromtEngineer/localGPT](https://github.com/PromtEngineer/localGPT)
Here is a video that explains the code and the system design in detail. [https://youtu.be/MlyoObdIHyo](https://youtu.be/MlyoObdIHyo) | 2023-05-30T05:21:23 | https://www.reddit.com/r/LocalLLaMA/comments/13vhev0/introducing_localgpt_offline_chatbot_for_your/ | satmarz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vhev0 | false | null | t3_13vhev0 | /r/LocalLLaMA/comments/13vhev0/introducing_localgpt_offline_chatbot_for_your/ | false | false | self | 149 | {'enabled': False, 'images': [{'id': 'k9UvaWdiVD4hOv1vETarT9Ls9F-dIr24qSsAyOL1eOU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8gUl3q1djrE7uaEHsNuEr6fSSSaqdC37nQec4bTYSnQ.jpg?width=108&crop=smart&auto=webp&s=82080fe11385d96a09bfc1921d71f6850ca6380f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8gUl3q1djrE7uaEHsNuEr6fSSSaqdC37nQec4bTYSnQ.jpg?width=216&crop=smart&auto=webp&s=6423540cf5118840f73d6943905aee7ca153138b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8gUl3q1djrE7uaEHsNuEr6fSSSaqdC37nQec4bTYSnQ.jpg?width=320&crop=smart&auto=webp&s=5805240f7e7cc1942a44362a2a0b7d7b5dd17c91', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8gUl3q1djrE7uaEHsNuEr6fSSSaqdC37nQec4bTYSnQ.jpg?width=640&crop=smart&auto=webp&s=54e5bbf11fe923cff05ad848ba883b135c72037e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8gUl3q1djrE7uaEHsNuEr6fSSSaqdC37nQec4bTYSnQ.jpg?width=960&crop=smart&auto=webp&s=67d1d3a94d04eb601b8236d54fd407d7740dca72', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8gUl3q1djrE7uaEHsNuEr6fSSSaqdC37nQec4bTYSnQ.jpg?width=1080&crop=smart&auto=webp&s=b6c1f9fc85cdb90af82af57e0ac482d76b48212f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8gUl3q1djrE7uaEHsNuEr6fSSSaqdC37nQec4bTYSnQ.jpg?auto=webp&s=5c4aef913f5df498131a96d47b1777754cbca121', 'width': 1200}, 'variants': {}}]} |
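For anyone curious what "chat with your files" means mechanically, the retrieval step can be sketched in a few lines of plain Python. This is an illustration only, not LocalGPT's actual code: a toy bag-of-words vector stands in for the Instructor embeddings, and the selected chunk would then be pasted into the LLM prompt as context.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model (e.g. Instructor):
    # a bag-of-words vector keyed by lowercase tokens.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse vectors.
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b) or 1.0)

def retrieve(question, chunks, top_k=1):
    # Rank document chunks by similarity to the question,
    # then hand the best ones to the LLM as context.
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]

chunks = [
    "The invoice total for March was $4,200.",
    "Meeting notes: project kickoff scheduled for June.",
]
context = retrieve("What was the March invoice total?", chunks)
prompt = f"Context: {context[0]}\nQuestion: What was the March invoice total?"
print(context[0])
```

The real project swaps the toy embedder for Instructor embeddings and stores the vectors in a database instead of re-embedding every query, but the retrieve-then-prompt shape is the same.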
How many iterations do fine-trained models usually need/use using LoRA/QLoRA? | 10 | [deleted] | 2023-05-30T04:30:13 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13vggwg | false | null | t3_13vggwg | /r/LocalLLaMA/comments/13vggwg/how_many_iterations_do_finetrained_models_usually/ | false | false | default | 10 | null | ||
PrivateGPT - Asking itself questions and answering? | 2 | When I ask a question on PrivateGPT, it answers, but spirals and starts asking random questions and answering them by itself. In the example below, I only asked it the first question but it spiralled into the other questions and gave answers. Any idea how to fix this?
I'm using Koala 13B, but I get similar results with GPT4All and Vicuna 13B.
https://preview.redd.it/42jjtwytpx2b1.png?width=1106&format=png&auto=webp&s=5a0a692e4dc54e35a026afe58049c6965a197972 | 2023-05-30T04:07:58 | https://www.reddit.com/r/LocalLLaMA/comments/13vg1gt/privategpt_asking_itself_questions_and_answering/ | masterblaster269 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vg1gt | false | null | t3_13vg1gt | /r/LocalLLaMA/comments/13vg1gt/privategpt_asking_itself_questions_and_answering/ | false | false | 2 | null | |
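In the meantime I've been wondering whether a post-processing hack would work: treat the model's own question markers as stop sequences and cut the output at the first one. A rough sketch of that idea (the marker strings below are guesses, so adjust them to whatever your model actually emits):

```python
STOP_MARKERS = ["\nQuestion:", "\nQ:", "\n### Human:"]  # hypothetical; match your model's output

def truncate_at_stop(generated, markers=STOP_MARKERS):
    # Cut the generated text at the earliest stop marker, if any appears.
    cut = len(generated)
    for m in markers:
        idx = generated.find(m)
        if idx != -1:
            cut = min(cut, idx)
    return generated[:cut].rstrip()

raw = ("The capital of France is Paris.\n"
       "Question: What is the capital of Spain?\nAnswer: Madrid.")
print(truncate_at_stop(raw))  # -> "The capital of France is Paris."
```

That doesn't stop the model from generating the extra text, it just hides it, so a proper fix would be stop sequences in the generation call itself if the loader supports them.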
Good ggml model for coding? | 6 | Hi all, anyone have a recommendation for a ggml model fine tuned for code?
Edit: not too concerned on language, python preferred but most languages would be fine. | 2023-05-30T03:55:16 | https://www.reddit.com/r/LocalLLaMA/comments/13vfrnj/good_ggml_model_for_coding/ | Equal_Station2752 | self.LocalLLaMA | 2023-05-30T04:10:41 | 0 | {} | 13vfrnj | false | null | t3_13vfrnj | /r/LocalLLaMA/comments/13vfrnj/good_ggml_model_for_coding/ | false | false | self | 6 | null |
Seeking Opinions on e5-large-v2 and instructor-xl Embedding Models for Multilingual Applications | 12 | Hi Reddit Community,
I hope you're all doing well. I've been recently exploring the realm of embedding models for a multilingual project I am working on, and I've narrowed my options down to two models: e5-large-v2 and instructor-xl.
Each of them appears promising, with their own unique features and capabilities. But, as we all know, practical, real-world performance can often be different from what's listed on paper. Hence, I'm turning to you, the wise folks of this subreddit, in hopes of gaining some insight into these models.
Specifically, I'm looking to understand the following:
1. **Multilingual Capability:** How effective are these models in handling multiple languages? Are they proficient in capturing the semantics across different languages?
2. **Performance:** What are your experiences with these models in terms of accuracy, speed, and computational resource requirements? Is one noticeably more efficient or effective than the other?
3. **Implementation:** How user-friendly are these models when it comes to implementation? Are there any hurdles that I should be aware of before diving in?
4. **Versatility:** Can they handle a broad range of tasks well? Or do they excel in certain areas but fall short in others?
5. **Support and Documentation:** How well-documented are these models? If I run into issues during implementation or operation, is there an active community or support team to turn to?
I would greatly appreciate it if you could share your personal experiences and thoughts about these two models. Also, if there's another model that you believe might be a better fit for my needs, feel free to suggest it.
I look forward to reading your insights. Thanks in advance for your time and help!
Best, Hamouda ALIAS | 2023-05-30T02:36:24 | https://www.reddit.com/r/LocalLLaMA/comments/13ve2wd/seeking_opinions_on_e5largev2_and_instructorxl/ | Reality-Sufficient | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ve2wd | false | null | t3_13ve2wd | /r/LocalLLaMA/comments/13ve2wd/seeking_opinions_on_e5largev2_and_instructorxl/ | false | false | self | 12 | null |
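P.S. For point 1, one concrete test I'm planning is to embed translation pairs with each candidate model and compare cross-lingual cosine similarity. A harness sketch (the vectors below are made up; in practice they'd come from e5-large-v2 or instructor-xl, e.g. via sentence-transformers):

```python
import math

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def cross_lingual_score(pairs, embed):
    # Average similarity between each sentence and its translation.
    # Higher means the model maps both languages to a shared space.
    return sum(cosine(embed(en), embed(xx)) for en, xx in pairs) / len(pairs)

# Dummy embedder standing in for a real model; replace with something like
# SentenceTransformer("intfloat/e5-large-v2").encode in practice.
toy_vectors = {
    "the cat sleeps": [0.9, 0.1, 0.0],
    "le chat dort": [0.85, 0.15, 0.05],
}
pairs = [("the cat sleeps", "le chat dort")]
score = cross_lingual_score(pairs, lambda s: toy_vectors[s])
print(round(score, 3))
```

Running the same pairs through both models would give a rough, apples-to-apples multilingual comparison before committing to either.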
Uncensored model for 4Gb vram | 2 | [removed] | 2023-05-30T02:25:04 | https://www.reddit.com/r/LocalLLaMA/comments/13vdu02/uncensored_model_for_4gb_vram/ | cyberv1k1n9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vdu02 | false | null | t3_13vdu02 | /r/LocalLLaMA/comments/13vdu02/uncensored_model_for_4gb_vram/ | false | false | default | 2 | null |
Mod team and contributors | 29 | This small community of local language model enthusiasts has grown quickly! Thank you everyone for being part of what makes this subreddit great.
r/LocalLLaMA is looking for two things:
* One or two moderators to help with keeping the subreddit high quality and maintaining the wiki pages. If you're interested in helping maintain the wiki but not in being a mod, you can show interest too.
* Users interested in forming a Hugging Face group for models. More on this below.
First, if you'd like to be a mod, you need at least 500 comment karma and 3 months of account history. These are the only requirements, but the highest priority will be given to users who have previously commented or posted in this subreddit. Past or current mod experience would be helpful.
[**Click here** to send a message](https://www.reddit.com/message/compose?to=Technical_Leather949&subject=LocalLLaMA%20Mod%20Application) and answer these questions, using your knowledge of markdown to make it easier to read:
1. Why do you want to be a mod?
2. Do you have mod experience with other subreddits?
3. How much time can you contribute to helping mod per week and in what timezone?
4. Would you be able to help maintain the subreddit wiki?
5. What is one good suggestion you have to improve this subreddit?
6. Do you understand how moderation affects subreddit activity?

If there's anything else you want to mention, like something unique you can bring to the mod team, write it below:
For wiki contributors, familiarity with markdown is needed. If you want to be a wiki contributor but not a mod, [**click here**](https://www.reddit.com/message/compose?to=Technical_Leather949&subject=Wiki%20Contributor) and answer these questions, formatting it to make it easier to read:
1. What wiki page would you like to help maintain? This can be a new page.
2. Have you maintained a wiki before on Reddit or a different site?
3. Create a list numbered 1 to 4 with a nested bullet point that says LLaMA between 2 and 3.
r/LocalLLaMA is also looking for users interested in forming a Hugging Face group to upload converted and quantized models. The idea is to create download pages free from all self-promotion, even this subreddit, and putting the emphasis on donating to or sponsoring the **original** individuals and groups who create these new models.
This will be completely selfless with no advertisement to anything or anyone except the original creator, and the name of the group won't reference this subreddit. If you're interested and understand the process, [**click here**](https://www.reddit.com/message/compose?to=Technical_Leather949&subject=Hugging%20Face%20Group&message=My%20Hugging%20Face%20account%20is) to send a message and what you could help with.
Once again, thank you everyone for being part of this community! | 2023-05-30T02:14:54 | https://www.reddit.com/r/LocalLLaMA/comments/13vdlzr/mod_team_and_contributors/ | Technical_Leather949 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vdlzr | true | null | t3_13vdlzr | /r/LocalLLaMA/comments/13vdlzr/mod_team_and_contributors/ | false | false | self | 29 | null |
Guanaco 7B llama.cpp newline issue | 6 | So, I've been using Guanaco 7B q5\_1 with llama.cpp and think it's \*awesome\*. With the "precise chat" settings, it's easily the best 7B model available, punches well above it's weight, and acts like a 13B in a lot of ways. There's just one glaring problem that, realistically, is more of a minor annoyance than anything, but I'm curious if anyone else has experienced, researched, or found a fix for it.
After certain prompts or just talking to it for long enough, the model will spam newlines until you ctrl+c to stop it.
That's... all, really. It just spams newline like if you opened notepad and pressed "enter" repeatedly.
It's really weird though. I haven't seen any other model do this. It doesn't preface it with anything predictable like ###Instruction: or the like. It just starts flooding the chat window with space.
There also doesn't seem to be an easy solution to this since llama.cpp doesn't process escape characters. There's the -e option, but it only works for prompt(s), not reverse prompt. Therefore, -r "\\n" doesn't work. Neither does -r "\^\\n". After some research and testing, I found that -r "\`n\`n\`n" works in powershell (ie it makes three newline characters in a row a "reverse prompt"), but since I like batch scripting I would really like to avoid the need for powershell and recreate this in windows command prompt or eliminate the need for it. Any ideas, explanation as to why this is a thing, or at least confirmation that I'm not the only one experiencing it? | 2023-05-30T00:47:23 | https://www.reddit.com/r/LocalLLaMA/comments/13vbr0e/guanaco_7b_llamacpp_newline_issue/ | AI-Pon3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vbr0e | false | null | t3_13vbr0e | /r/LocalLLaMA/comments/13vbr0e/guanaco_7b_llamacpp_newline_issue/ | false | false | self | 6 | null |
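For now I'm considering handling it outside llama.cpp entirely: wrap the process, watch the streamed output, and kill generation once a run of consecutive newlines shows up. The detection logic would be something like this (plain-Python sketch of the wrapper idea, not llama.cpp code):

```python
def should_stop(stream_so_far, max_blank_run=3):
    # Trigger once the tail of the stream is nothing but repeated
    # newlines, i.e. the "newline spam" failure mode.
    tail = stream_so_far[-max_blank_run:]
    return len(tail) == max_blank_run and all(ch == "\n" for ch in tail)

buf = ""
for token in ["Hello", ".", "\n", "\n", "\n", "never reached"]:
    buf += token
    if should_stop(buf):
        break  # in a real wrapper: terminate or interrupt the process here
print(repr(buf))
```

It's a band-aid rather than a fix, but it would at least make the spam self-terminating instead of needing a manual ctrl+c.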
Cheap/Free GPU Cloud services? | 1 | [removed] | 2023-05-30T00:25:08 | https://www.reddit.com/r/LocalLLaMA/comments/13vb9ia/cheapfree_gpu_cloud_services/ | Huge-Usual-3787 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vb9ia | false | null | t3_13vb9ia | /r/LocalLLaMA/comments/13vb9ia/cheapfree_gpu_cloud_services/ | false | false | default | 1 | null |
Cross pollination of ideas from Stable Diffusion: Textual Inversions | 50 | I've had this idea rattling around in my mind for a while and need to get it out. I was amazed to hear from the creator of LORA that they designed LORA for LLMs, but they ended up being quite suited for Stable Diffusion. This got me wondering - is there anything currently in Stable Diffusion that is able to be applied to LLaMA?
One of these things that seems to have promise to me is Textual Inversion. [This explanation](https://www.youtube.com/watch?v=opD_H9bED9Y) of it seems to do a pretty good job. They are also known as "embeddings" but from what I have seen so far in the LLM space this means something completely different. For example with a Textual Inversion in SD, it simply points to a space in the diffusion model that is so close to the trained material that it recreates it. With embeddings, they are often referencing the actual data itself.
I am not an AI engineer, I'm an artist. So basically what I am envisaging is the ability for a user of Oobabooga to load up an LLM and then apply small files (several KB in size) to it to get greatly affected responses. For example, using a Stephen King style embedding would not answer questions about the text of Stephen King's novels, but would instead inject a style of horror and literary talent into the responses. Aka, respond "in the style of..." - is this possible with embeddings today? My understanding is that embeddings and vector databases are rather clunky. If embeddings could move from one LLaMA model to another, this would be extremely entertaining (of course, it would change dramatically with each model, part of the fun!)
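From what I understand, a textual inversion is mechanically just one extra learned row in the token-embedding table, bound to a pseudo-token and looked up like any other word at prompt time. A toy illustration of the lookup side (the vectors here are invented for illustration; real ones are learned by gradient descent and are what the few-KB file contains):

```python
# Base embedding table: token -> vector (tiny toy dimensions).
embedding_table = {
    "a": [0.1, 0.0], "story": [0.3, 0.2], "scary": [0.0, 0.9],
}

# "Installing" a textual inversion = adding one learned row.
# This vector would be the few-KB file a user downloads and drops in.
embedding_table["<king-style>"] = [0.05, 0.95]

def embed_prompt(prompt):
    # The model never sees the pseudo-token's name, only its vector,
    # which steers generation toward whatever the vector was trained on.
    return [embedding_table[tok] for tok in prompt.split()]

vectors = embed_prompt("a <king-style> story")
print(len(vectors))  # three token vectors, one of them the learned style
```

This is also why such files are tiny and why they'd be model-specific: the vector only means something relative to the embedding space it was trained in.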
The other thing I would be fascinated to try from SD to LLM space is merging models.
This of course is only possible because ckpt files all share the same structure, but the benefits are huge. A static model that seems rigidly overtrained becomes immediately more flexible and expressive, almost like it has room to breathe. I would think this process, if ported over to LLMs, would be very valuable and produce unexpected results. In Stable Diffusion it's more alchemy than science. But I could see lots of beneficial results coming from it if it's possible.
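As I understand it, the SD-style merge is just a per-weight interpolation between two checkpoints with identical shapes, which is also why it would only apply across LLaMA models of the same size. A sketch with plain lists standing in for tensors:

```python
def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    # Weighted average of corresponding weights, SD checkpoint-merger style.
    # Only works if both models share architecture and parameter shapes.
    assert sd_a.keys() == sd_b.keys(), "models must have identical layouts"
    return {name: [alpha * a + (1 - alpha) * b
                   for a, b in zip(sd_a[name], sd_b[name])]
            for name in sd_a}

model_a = {"layer0.weight": [1.0, 2.0], "layer0.bias": [0.0, 0.0]}
model_b = {"layer0.weight": [3.0, 4.0], "layer0.bias": [1.0, 1.0]}
merged = merge_state_dicts(model_a, model_b, alpha=0.7)
print(merged["layer0.weight"])  # roughly [1.6, 2.6]
```

Whether the merged LLM behaves sensibly is the open question; in SD it works far more often than you'd expect, which is exactly why it would be fun to try here.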
What do you think is possible taking Stable Diffusion ideas into the LLaMA space? Do these sound absurd and unachievable, or does it seem remotely plausible? | 2023-05-29T21:37:35 | https://www.reddit.com/r/LocalLLaMA/comments/13v7ctt/cross_pollination_of_ideas_from_stable_diffusion/ | revolved | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13v7ctt | false | null | t3_13v7ctt | /r/LocalLLaMA/comments/13v7ctt/cross_pollination_of_ideas_from_stable_diffusion/ | false | false | self | 50 | {'enabled': False, 'images': [{'id': 'HFF1ModH9rb79ohJok8TB5o2V72dPF1koNi65mLF9Fk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/sibHwFD7dMBm0hOMe3NBTV_nktx-n497RV-fu7k8hXI.jpg?width=108&crop=smart&auto=webp&s=afbdc7fb6a6e30e19c8e39a9b5e4f4f3f0f3841e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/sibHwFD7dMBm0hOMe3NBTV_nktx-n497RV-fu7k8hXI.jpg?width=216&crop=smart&auto=webp&s=be07e70f0cce1a1bf8866cdf5bea189487c73c00', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/sibHwFD7dMBm0hOMe3NBTV_nktx-n497RV-fu7k8hXI.jpg?width=320&crop=smart&auto=webp&s=297075e717ec6b3a0f4b76cd5517c668fd3a0d7d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/sibHwFD7dMBm0hOMe3NBTV_nktx-n497RV-fu7k8hXI.jpg?auto=webp&s=23781bd26013b0e64babb23b130b6cf2d50c5c87', 'width': 480}, 'variants': {}}]} |
llamacpp memory use on a m1 using oobabooga | 1 | [removed] | 2023-05-29T21:32:00 | https://www.reddit.com/r/LocalLLaMA/comments/13v77yw/llamacpp_memory_use_on_a_m1_using_oobabooga/ | Bozo32 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13v77yw | false | null | t3_13v77yw | /r/LocalLLaMA/comments/13v77yw/llamacpp_memory_use_on_a_m1_using_oobabooga/ | false | false | default | 1 | null |
TIL Sharding a model into smaller chunks may make it possible to load in Free Colab instances without running out of memory | 24 | I recently came across this [huggingface article](https://huggingface.co/blog/4bit-transformers-bitsandbytes) where there was a notebook claiming that they you can load GPT-NeoX-20B in a Free Colab instance which I found hard to believe because I was unable to load models as small as GPT-Neo-2.7B and StableLM 3B on a Free Colab instance before running out of memory (at least with oobabooga's interface.) As it turns out, it worked fine even though models almost 1/10 the size were running out of physical memory on my 12.7 GB free colab instance.
Further investigation reveals that [sharding](https://huggingface.co/docs/transformers/big_models) is the answer. However, you'll need a Colab Pro subscription to load the larger models in the first place so you can shard them, which makes for a bit of a chicken-and-egg problem. But after you shard the models, you'll be able to load them on an account with a Free Colab instance (without burning your credits) or to share them with friends who don't have Colab Pro or local machines capable of running LLMs.
So here are the steps to sharding a model into smaller pieces, assuming you have a Colab Pro subscription:
- Start a Colab Pro Notebook with the High Ram option enabled. Depending on the size of the model, you may need to also enable a GPU (T4 should be fine in most cases.)
- Download and load the model from Huggingface. If it won't fit in your GPU as is and you plan on running it in 8-bit mode anyways in the future, it's fine to load it in 8-bit.
- Save the model with sharding enabled.
Example:

    dest = "/content/drive/MyDrive/llms/models/mymodel"  # Preferably a shared folder that your Free Colab accounts can also access
    shard_size = "1000MB"  # A good value to try
    model.save_pretrained(dest, max_shard_size=shard_size)
    tokenizer.save_pretrained(dest)
You can also publish it to your own Huggingface account as well if you want to share it with others easily.
If you're interested in the actual code to do this, I can provide it; just let me know.
Here's end-to-end code for converting WizardLM-7B-uncensored. You may need to change the destination path.
    from google.colab import drive  # type: ignore
    try:
        drive_path = "/content/drive"
        drive.mount(drive_path, force_remount=False)
    except:
        print("...error mounting drive or with drive path variables")

    !git clone https://github.com/oobabooga/text-generation-webui
    %cd text-generation-webui
    !pip install -r requirements.txt
    !pip install -U transformers
    !pip install sentencepiece bitsandbytes accelerate

    model = "TheBloke/Wizard-Vicuna-7B-Uncensored-HF"
    dest = "/content/drive/MyDrive/llm/models/{}".format(model.replace("/", "_"))  # May need to edit based on where you're storing your models
    shard_size = "1000MB"

    from transformers import LlamaTokenizer, LlamaForCausalLM
    tokenizer = LlamaTokenizer.from_pretrained(model)
    model = LlamaForCausalLM.from_pretrained(model, load_in_8bit=True, device_map='auto')
    model.save_pretrained(dest, max_shard_size=shard_size)
    tokenizer.save_pretrained(dest)
I can post a notebook here on my anonymous Google account but you may need to enable the High Ram and GPU on a Colab Pro instance to run it. | 2023-05-29T21:12:58 | https://www.reddit.com/r/LocalLLaMA/comments/13v6qvh/til_sharding_a_model_into_smaller_chunks_may_make/ | ragnarkar | self.LocalLLaMA | 2023-05-30T13:06:07 | 0 | {} | 13v6qvh | false | null | t3_13v6qvh | /r/LocalLLaMA/comments/13v6qvh/til_sharding_a_model_into_smaller_chunks_may_make/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': '5ors3IzSd91LDWo2yC2yuTxbj5satwsZQHIPrP5aNIo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=108&crop=smart&auto=webp&s=ffc336c656133c8a8bd641e31b75fb1aa11f8621', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=216&crop=smart&auto=webp&s=b9ff564f2023e86473ce0e652c89c051f6ff1f0d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=320&crop=smart&auto=webp&s=7f128fbeaaa0215fa9d56fe951360b6816d05d15', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=640&crop=smart&auto=webp&s=11de21cb0d1e5e65061595bbdc3ee0e7bf363013', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=960&crop=smart&auto=webp&s=aa940f51838964b78f0e74d4a8d185daa8b66258', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=1080&crop=smart&auto=webp&s=b45d46bcf6025c8ae8b913dd314011b331b2f9d0', 'width': 1080}], 'source': {'height': 650, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?auto=webp&s=29860490a90daf517877704aef67f4db4a6ebbc9', 'width': 1300}, 'variants': {}}]} |
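For intuition about what `max_shard_size` is doing under the hood: as I understand it, the saver walks the weight tensors in order and starts a new checkpoint file whenever adding the next tensor would push the current file past the limit. A simplified sketch of that greedy packing (sizes in MB here; the real transformers implementation works on tensor byte sizes):

```python
def plan_shards(weights, max_shard_mb=1000):
    # weights: list of (name, size_in_mb) pairs. Greedily pack into shards
    # so no shard exceeds max_shard_mb (except a single oversized tensor).
    shards, current, used = [], [], 0
    for name, size in weights:
        if current and used + size > max_shard_mb:
            shards.append(current)
            current, used = [], 0
        current.append(name)
        used += size
    if current:
        shards.append(current)
    return shards

weights = [("embed", 400), ("layer0", 350), ("layer1", 350), ("lm_head", 400)]
print(plan_shards(weights))  # [['embed', 'layer0'], ['layer1', 'lm_head']]
```

The practical upshot: each shard file is small enough to load and release one at a time, which is why the free instance never has to hold the whole checkpoint in memory at once.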
Bitsandbytes giving you a cuda error on windows? Don't worry, some guy already compiled it for you. | 14 | I don't know what bitsandbytes is or what it does or why it won't just compile for me out of the box. Alls I know is it gives me errors and makes me sad. In an effort to prevent more tears, here's what I learned: I found some post somewhere that said to pip install this git repository and I did and then bitsandbytes worked with cuda.
[https://github.com/Keith-Hon/bitsandbytes-windows](https://github.com/Keith-Hon/bitsandbytes-windows)
Maybe others will chime in here and let me know if there's a better solution. Hopefully this post finds you well in your time of need, llocalllama warrior. | 2023-05-29T20:13:35 | https://www.reddit.com/r/LocalLLaMA/comments/13v57sf/bitsandbytes_giving_you_a_cuda_error_on_windows/ | UnoriginalScreenName | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13v57sf | false | null | t3_13v57sf | /r/LocalLLaMA/comments/13v57sf/bitsandbytes_giving_you_a_cuda_error_on_windows/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'BSXsiBdAOCkfF5-JmoMvq9Q6TWfza3b-1kk_0wUeqQ4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wMixadyzI2jh_z0lVuqQo0K21YoBuWKhyaWGxPqq6MM.jpg?width=108&crop=smart&auto=webp&s=8b176a9209c9411f21d188fd88d66ab729a03525', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wMixadyzI2jh_z0lVuqQo0K21YoBuWKhyaWGxPqq6MM.jpg?width=216&crop=smart&auto=webp&s=c36859b70ed1d5ed06924e49c86e333ec3bb5863', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wMixadyzI2jh_z0lVuqQo0K21YoBuWKhyaWGxPqq6MM.jpg?width=320&crop=smart&auto=webp&s=d8bb6520f92eae7f983d370046fbcfcc865b99bf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wMixadyzI2jh_z0lVuqQo0K21YoBuWKhyaWGxPqq6MM.jpg?width=640&crop=smart&auto=webp&s=e76886dae9bed0e6e59a5a8faf80ba838a6d44dd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wMixadyzI2jh_z0lVuqQo0K21YoBuWKhyaWGxPqq6MM.jpg?width=960&crop=smart&auto=webp&s=5fc656d8fdde531bf8fc94b4d002c885c27791ca', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wMixadyzI2jh_z0lVuqQo0K21YoBuWKhyaWGxPqq6MM.jpg?width=1080&crop=smart&auto=webp&s=bfc24d668db9e9795804b4afb0c7e280ed87eaab', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wMixadyzI2jh_z0lVuqQo0K21YoBuWKhyaWGxPqq6MM.jpg?auto=webp&s=e162176ba37ad59f9226396b837d8e7709854054', 'width': 1200}, 'variants': {}}]} |
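For reference, the install is a one-liner pointing pip at that repo (assuming pip and git are on your PATH; run it inside whatever environment your webui uses):

```shell
pip install git+https://github.com/Keith-Hon/bitsandbytes-windows.git
```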
Server help (again) | 2 | [removed] | 2023-05-29T19:52:34 | https://www.reddit.com/r/LocalLLaMA/comments/13v4ohm/server_help_again/ | Emergency-Seaweed-73 | self.LocalLLaMA | id 13v4ohm
Is there a GUI for llama.cpp that I can use my cuBLAS-enabled binary with? | 1 | [removed] | 2023-05-29T19:29:49 | https://www.reddit.com/r/LocalLLaMA/comments/13v4425/is_there_a_gui_for_llamacpp_that_i_can_use_my/ | APUsilicon | self.LocalLLaMA | id 13v4425
Model answering itself? | 0 | [removed] | 2023-05-29T19:23:50 | https://www.reddit.com/r/LocalLLaMA/comments/13v3yqa/model_answering_itself/ | butidontwanto | self.LocalLLaMA | id 13v3yqa
Multiple cheap GPUs or a single expensive one? | 13 | So I have about $500-600 and already a decent server (128-256 GB DDR3 and 24 Xeon E5-2698 v2 cores), so I don't think it needs an upgrade, but it has no GPU yet. Would it be better to add more RAM and older server GPUs, or a single 3090? Does AMD vs. Nvidia matter, given that the RX 6800 XT is cheaper than a 3090 and two of them would have more memory and probably more compute? If anyone has a good resource or article explaining what works better for running local LLMs (I'm looking for a local ChatGPT/Bing/Bard alternative), I'd appreciate it. | 2023-05-29T18:58:17 | https://www.reddit.com/r/LocalLLaMA/comments/13v3b6q/multiple_cheap_gpus_or_a_single_expensive_one/ | gpu_melter | self.LocalLLaMA | id 13v3b6q
Kobold on runpod? | 2 | [removed] | 2023-05-29T18:34:59 | https://www.reddit.com/r/LocalLLaMA/comments/13v2qry/kobold_on_runpod/ | chakalakasp | self.LocalLLaMA | id 13v2qry