title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
The Repeating Issue? | 3 | As a noob, I’m using KoboldAI and Oobabooga simultaneously at the moment, but I found that with the same model (Wizard-Vicuna 13B GGML), Kobold never repeated sentences as much as Ooba (llama.cpp) did.
I know nothing about the settings and I wonder how I can fix the repeating issue when using Ooba, or is there a guide? Many thanks. | 2023-07-07T03:21:07 | https://www.reddit.com/r/LocalLLaMA/comments/14sv1ne/the_repeating_issue/ | Mildred_D | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sv1ne | false | null | t3_14sv1ne | /r/LocalLLaMA/comments/14sv1ne/the_repeating_issue/ | false | false | self | 3 | null |
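The usual knob for this in Ooba is the repetition penalty under the Parameters tab. As a rough sketch of what that setting does (illustrative only, not Ooba's or llama.cpp's actual implementation):

```python
# Illustrative sketch of a CTRL-style repetition penalty, the sampler
# setting that usually fixes looping output (not the real Ooba code).
def apply_repetition_penalty(logits, generated_ids, penalty=1.1):
    """Push down the logits of tokens that already appeared in the output."""
    out = list(logits)
    for tid in set(generated_ids):
        if out[tid] > 0:
            out[tid] /= penalty   # likely repeated token -> less likely
        else:
            out[tid] *= penalty   # unlikely repeated token -> even less likely
    return out

# Token 0 was already generated, so its logit is reduced; token 1 is untouched.
penalized = apply_repetition_penalty([2.0, 1.0, -1.0], generated_ids=[0])
print(penalized)
```

Values above 1.0 discourage repeats; different backends expose this as `repetition_penalty` or `repeat_penalty`, which may explain why two UIs behave differently with the same model if their defaults differ.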
Llama is good, but what else is available? Qualitatively | 13 | I am looking to build a local LLM, but I don't want to be constrained by licensing if I want to build something that can be commercially available. I know Dolly and H2O have decent models. Do you guys have any suggestions for other models that perform just as well, or strategies to make them do so? I want to solve question answering over local documents. I appreciate any feedback, including models, strategies, existing solutions, etc.; really anything you think might help. Thanks. | 2023-07-07T02:46:01 | https://www.reddit.com/r/LocalLLaMA/comments/14su9fp/llama_is_good_but_what_else_is_available/ | Altruistic-Ask-773 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14su9fp | false | null | t3_14su9fp | /r/LocalLLaMA/comments/14su9fp/llama_is_good_but_what_else_is_available/ | false | false | self | 13 | null |
4060 Ti vs 3070 vs 3060 Ti Stable Diffusion benchmark | 47 | I saw a post saying the 4060 Ti is a bad card because of its 128-bit memory bus. However, I can't find any benchmarks for generative AI.
A video on Bilibili compares the performance of different GPUs for Stable Diffusion. The chart shows the time (in seconds) needed to generate an image with Stable Diffusion, so faster is better, and the 4060 Ti outperforms the 3070 and 3060 Ti.
Not sure if this is applicable to LLM too, what do you think? | 2023-07-07T02:11:48 | regunakyle | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14sti62 | false | null | t3_14sti62 | /r/LocalLLaMA/comments/14sti62/4060_ti_vs_3070_vs_3060_ti_stable_diffusion/ | false | false | 47 | {'enabled': True, 'images': [{'id': '2QAdM0ad_wcWX9r7QAUyIzV7tvg5IO1ppXpiqcZ7-A0', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/mu20scrxbgab1.jpg?width=108&crop=smart&auto=webp&s=24d50386cd7a39a1a93d6492d6b712dac7aca71c', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/mu20scrxbgab1.jpg?width=216&crop=smart&auto=webp&s=f3ba26b8b225f0bf532bb07ca500018c19e6f2f6', 'width': 216}, {'height': 185, 'url': 'https://preview.redd.it/mu20scrxbgab1.jpg?width=320&crop=smart&auto=webp&s=9ebb568a7e3c8f9d62b937968814a67df148f401', 'width': 320}, {'height': 371, 'url': 'https://preview.redd.it/mu20scrxbgab1.jpg?width=640&crop=smart&auto=webp&s=22e0b30a138afc863d5ff0fa702ca4093fbefbf7', 'width': 640}, {'height': 557, 'url': 'https://preview.redd.it/mu20scrxbgab1.jpg?width=960&crop=smart&auto=webp&s=a9f4deae5486eaac0d392f1fd1e28c575960adc2', 'width': 960}, {'height': 627, 'url': 'https://preview.redd.it/mu20scrxbgab1.jpg?width=1080&crop=smart&auto=webp&s=184136b539b35487e0c16c21eb94b7595e133a28', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/mu20scrxbgab1.jpg?auto=webp&s=a31dfc04fa25ac3b83f12fca6281c32f1589c101', 'width': 1860}, 'variants': {}}]} | ||
LongLLaMA, a LLM capable of handling long contexts of 256k tokens | 2 | 2023-07-07T01:58:26 | https://twitter.com/_akhaliq/status/1677131408265945088 | ninjasaid13 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 14st6ba | false | {'oembed': {'author_name': 'AK', 'author_url': 'https://twitter.com/_akhaliq', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Focused Transformer: Contrastive Training for Context Scaling<br><br>paper page: <a href="https://t.co/nMD1r6e21b">https://t.co/nMD1r6e21b</a><br>model: <a href="https://t.co/ecyeaoS1l0">https://t.co/ecyeaoS1l0</a><br><br>Large language models have an exceptional capability to incorporate new information in a contextual manner. However, the full potential of such an… <a href="https://t.co/fe2IuwTe6N">pic.twitter.com/fe2IuwTe6N</a></p>— AK (@_akhaliq) <a href="https://twitter.com/_akhaliq/status/1677131408265945088?ref_src=twsrc%5Etfw">July 7, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/_akhaliq/status/1677131408265945088', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_14st6ba | /r/LocalLLaMA/comments/14st6ba/longllama_a_llm_capable_of_handling_long_contexts/ | false | false | default | 2 | {'enabled': False, 'images': [{'id': 'yZB-Op42aidgsC58AESnRAIxZl5MNXE47YaH2fOrL2Q', 'resolutions': [{'height': 116, 'url': 'https://external-preview.redd.it/8m2D72Vzmkgfwrvj-wgZL8Kja5a-JyDIBVxPf8ydkL4.jpg?width=108&crop=smart&auto=webp&v=enabled&s=cad6cc0edfd4e7c04ba1c6e5c489357db6ea061a', 'width': 108}, {'height': 232, 'url': 'https://external-preview.redd.it/8m2D72Vzmkgfwrvj-wgZL8Kja5a-JyDIBVxPf8ydkL4.jpg?width=216&crop=smart&auto=webp&v=enabled&s=8bd01898a00b3b917c1e17d3ee05cc40407c2d8f', 'width': 216}, {'height': 343, 'url': 
'https://external-preview.redd.it/8m2D72Vzmkgfwrvj-wgZL8Kja5a-JyDIBVxPf8ydkL4.jpg?width=320&crop=smart&auto=webp&v=enabled&s=2437aa469f9ab15fc7fa7be26b80861888d18065', 'width': 320}, {'height': 687, 'url': 'https://external-preview.redd.it/8m2D72Vzmkgfwrvj-wgZL8Kja5a-JyDIBVxPf8ydkL4.jpg?width=640&crop=smart&auto=webp&v=enabled&s=28137397d2212fdb87617720bc0ddc536de09409', 'width': 640}, {'height': 1031, 'url': 'https://external-preview.redd.it/8m2D72Vzmkgfwrvj-wgZL8Kja5a-JyDIBVxPf8ydkL4.jpg?width=960&crop=smart&auto=webp&v=enabled&s=1fe3bf470f1df0650454be18d1b65ba0518bf2f4', 'width': 960}], 'source': {'height': 1042, 'url': 'https://external-preview.redd.it/8m2D72Vzmkgfwrvj-wgZL8Kja5a-JyDIBVxPf8ydkL4.jpg?auto=webp&v=enabled&s=5c2fe4f8a08aa801c513d3f44c0ed7f46de8e997', 'width': 970}, 'variants': {}}]} | |
What should I look out for during fine-tuning? | 4 | I’m fine-tuning using the Oobabooga GUI.
I know “loss” is an important metric. Their guide recommends stopping at around loss = 1.
Intuitively, what’s the difference between a loss of 1.1 and 1.0?
For example, my data started at a loss of 2.0. After only about 0.1 epochs, it drops to 1.1. Then continued training for like 12 hours only drops the loss down to 1.06.
1.1 and 1.0 don’t seem that far apart, but it also doesn’t seem to make sense that I could fine-tune my data in a few minutes (because it only took a few minutes to decrease the loss from 2.0 to 1.1). | 2023-07-07T01:37:08 | https://www.reddit.com/r/LocalLLaMA/comments/14ssp6n/what_should_i_look_out_for_during_fine_tuning/ | Tasty-Lobster-8915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ssp6n | false | null | t3_14ssp6n | /r/LocalLLaMA/comments/14ssp6n/what_should_i_look_out_for_during_fine_tuning/ | false | false | self | 4 | null |
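One way to build intuition for those numbers, assuming the reported loss is mean cross-entropy in nats (typical for causal-LM trainers): exponentiate it to get perplexity, the model's effective number of equally likely next-token choices.

```python
import math

# Cross-entropy loss is log-scale, so small absolute differences near the
# bottom still matter; perplexity (exp(loss)) makes the comparison concrete.
for loss in (2.0, 1.1, 1.0):
    print(f"loss {loss:.1f} -> perplexity {math.exp(loss):.2f}")

# The early drop from 2.0 to 1.1 cuts perplexity from ~7.4 to ~3.0, while
# grinding from 1.1 down to 1.0 is still a further ~10% perplexity reduction.
```

That also matches the usual shape of fine-tuning curves: most of the drop happens in the first fraction of an epoch, and the long tail is slow.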
VRAM or RAM for cost effective home build | 1 | I have been wondering what sort of performance people have been getting out of CPU based builds running local llama?
I haven't seen a similar post since the release of 8k token limits and ExLlama. It's not too hard to imagine a build with 64GB of RAM blowing a mid-tier GPU out of the water in terms of model capability, as well as the context length increase to 8k.
I have never personally tried running LLaMA on CPU/RAM; would it be worth buying a bunch of RAM (relatively cheap) vs buying a beefier GPU (not in the cards for me)?
Current build:
Windows 10
3060 12gb vram
Current ram 16 gb
I5 6600k | 2023-07-07T01:27:33 | https://www.reddit.com/r/LocalLLaMA/comments/14ssguq/vram_or_ram_for_cost_effective_home_build/ | Far-Notice1730 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ssguq | false | null | t3_14ssguq | /r/LocalLLaMA/comments/14ssguq/vram_or_ram_for_cost_effective_home_build/ | false | false | self | 1 | null |
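A rough way to size that trade-off, assuming 4-bit GGML quants (the bits-per-weight and overhead figures below are ballpark assumptions, not measurements):

```python
# Back-of-envelope RAM needed to hold a q4-quantized GGML model for CPU
# inference; q4_0 stores roughly 4.5 bits per weight once scales are counted.
def approx_ram_gb(params_billion, bits_per_weight=4.5, overhead_gb=1.5):
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9 + overhead_gb

for size in (7, 13, 33, 65):
    print(f"{size}B @ q4 ~ {approx_ram_gb(size):.0f} GB RAM")
```

By this estimate, the existing 16GB already fits a 13B q4 model, and 64GB would comfortably fit 65B; the catch is tokens/s, since CPU inference is memory-bandwidth-bound rather than capacity-bound.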
A6000 vs 2x 4090 | 14 | I've been experimenting with LLMs for the past few months. I feel like I've been held back by a lack of VRAM. I'm considering upgrading to either an A6000 or dual 4090s.
The A6000 has more VRAM and costs roughly the same as 2x 4090s. The A6000 would run slower than the 4090s, but it would be a single card with much lower power usage.
I don't have time to game anymore, so gaming isn't really a consideration at this point.
With the A6000, I also don't need to worry about properly splitting the models across two cards.
What are your thoughts? | 2023-07-07T01:00:17 | https://www.reddit.com/r/LocalLLaMA/comments/14srull/a6000_vs_2x_4090/ | drnick316 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14srull | false | null | t3_14srull | /r/LocalLLaMA/comments/14srull/a6000_vs_2x_4090/ | false | false | self | 14 | null |
Dilbert Comic | 0 | 2023-07-07T00:36:50 | https://www.reddit.com/r/dilbert/comments/14sktuy/dilbert_reborn_june_30th_2023_shared_for_all_on/ | kubbiember | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14sracc | false | null | t3_14sracc | /r/LocalLLaMA/comments/14sracc/dilbert_comic/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': '-OrtaAE98-Yo261LD87_xRkwvd3emJZP-0nwVbk7ZNs', 'resolutions': [{'height': 38, 'url': 'https://external-preview.redd.it/jZBjwHltyEl4aJdS31MKefTMSQkJXWjRkARJm9iK5YU.png?width=108&crop=smart&auto=webp&v=enabled&s=307b34ab94c177ad1f1a014b22835bf5a456578e', 'width': 108}, {'height': 77, 'url': 'https://external-preview.redd.it/jZBjwHltyEl4aJdS31MKefTMSQkJXWjRkARJm9iK5YU.png?width=216&crop=smart&auto=webp&v=enabled&s=da23e79d8b304ebdcc4e68780ad4669955710572', 'width': 216}, {'height': 114, 'url': 'https://external-preview.redd.it/jZBjwHltyEl4aJdS31MKefTMSQkJXWjRkARJm9iK5YU.png?width=320&crop=smart&auto=webp&v=enabled&s=f4da09cb5f747c86bfa7f38a9afebf54cfeae783', 'width': 320}, {'height': 229, 'url': 'https://external-preview.redd.it/jZBjwHltyEl4aJdS31MKefTMSQkJXWjRkARJm9iK5YU.png?width=640&crop=smart&auto=webp&v=enabled&s=107bb120acad0a2beecace967f4157c9d638326f', 'width': 640}, {'height': 343, 'url': 'https://external-preview.redd.it/jZBjwHltyEl4aJdS31MKefTMSQkJXWjRkARJm9iK5YU.png?width=960&crop=smart&auto=webp&v=enabled&s=b74b98e4c07ca596a165f0d4064bac9e10ea75f1', 'width': 960}, {'height': 386, 'url': 'https://external-preview.redd.it/jZBjwHltyEl4aJdS31MKefTMSQkJXWjRkARJm9iK5YU.png?width=1080&crop=smart&auto=webp&v=enabled&s=b8c275df332c9bc38bf1d990854b82698303b2a0', 'width': 1080}], 'source': {'height': 944, 'url': 'https://external-preview.redd.it/jZBjwHltyEl4aJdS31MKefTMSQkJXWjRkARJm9iK5YU.png?auto=webp&v=enabled&s=9f3f597dcfb7bdf2d7c6d0338f16a454fc92f50d', 'width': 2638}, 'variants': {}}]} | |
Vector Search w/ LLMs vs Classic Text Search (i.e. Elasticsearch), has anyone done pricing/performance experiments? | 7 | Title! I am sure there has been some research done on this topic, but I'm not sure where. Curious if anyone has found vector search via some open-source model (through a vector store) to be more performant/cheaper than Elasticsearch. Thanks | 2023-07-07T00:02:13 | https://www.reddit.com/r/LocalLLaMA/comments/14sqg2j/vector_search_w_lmms_vs_classic_text_search_ie/ | Working_Ideal3808 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sqg2j | false | null | t3_14sqg2j | /r/LocalLLaMA/comments/14sqg2j/vector_search_w_lmms_vs_classic_text_search_ie/ | false | false | self | 7 | null |
Fill in blanks with conversation (Prompt not working?) | 5 | I am trying to get an appropriate response for this prompt:
Fill in the RESPONSE part of this conversation between a human and a bot. Bot: Hi. Human: RESPONSE. Bot: I am 10 years old! How about you?
Where response would be something like: "Hi, how old are you?"
Why isn't this working on anything less than ChatGPT? Any tips for engineering this prompt better? I have a bunch of further data in the form of conversations with redactions that I want to fill in intelligently. | 2023-07-06T22:51:02 | https://www.reddit.com/r/LocalLLaMA/comments/14son0l/fill_in_blanks_with_conversation_prompt_now/ | ilovejoi36912 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14son0l | false | null | t3_14son0l | /r/LocalLLaMA/comments/14son0l/fill_in_blanks_with_conversation_prompt_now/ | false | false | self | 5 | null |
LLMs on a 32-bit device with 2GB of RAM | 5 | Pardon the messiness of this post, my thought process can be pretty hectic lol
# 2023-07-11 Update
u/pedantic_pineapple brought some light back into this. I've had some difficulty getting it working, however (listing this to save time if someone else sees this post):
1. Using `pip install languagemodels` ends up downloading several versions from 0.6.0 to 0.0.1(?) and eventually settles on 0.0.2. The chat example script and Python command-line didn't work
2. Using `pip install git+https://github.com/jncraton/languagemodels.git` tells me that there's "no matching distribution found for ctranslate2>=3.15.0", implying that they don't distribute it for 32-bit ARM
3. I cloned the CTranslate2 repo, went into the python directory and did `pip install -e .`. It reported missing pybind11, but even after installing it, it reported the same error
4. In the same directory, I installed `setuptools` and did `python setup.py install`. It reported many missing files in the "ctranslate" directory, such as "ctranslate2/generator.h"
5. I cloned the repository again with `git clone --depth 1 --recursive https://github.com/OpenNMT/CTranslate2.git`, went into the directory and repeated step 4. It repeated the same errors
6. I ran `cp cpp/* ctranslate2` to copy all the files from the cpp folder to the ctranslate2 folder. Nope, still didn't work.
7. As a last-ditch effort I did `cp -r ../include/ctranslate2/* ctranslate2`; still didn't work
I'll revisit this when I have more time, but I do feel like I'm getting closer to cracking the case
# 2023-07-07 Update
I've traded my tablet in for a 64-bit one, but I've decided to revisit this
My 32-bit Motorola can run 64-bit Linux programs through Termux with [QEMU's user mode emulation](https://www.qemu.org/docs/master/user/main.html). I was able to benchmark the emulation overhead using 7zip as a test (native gave me a score of 6504, 64-bit ARM emulation gave me a score of 167)
[(this approach is similar to what someone did to get Dwarf Fortress running on Android, running an x86 game on ARM)](https://old.reddit.com/r/dwarffortress/comments/r80m7y/dwarf_fortress_mobile_running_dwarf_fortress_on/)
I just don't know how to cross-compile llama.cpp for 64-bit ARM. And even if I could: I compiled llama.cpp on my PC and tested it (on the PC) with qemu-user's x86_64 emulator to verify, and OpenLLaMA 3B generated roughly one token every 20-30 seconds. And this is an i7-6700 with DDR3 RAM (which can generate around 7-10 tokens a second natively); god forbid how my phone would perform doing the same task, if it even had enough memory to run OpenLLaMA 3B
If I get it compiled, I might revisit it with [box86](https://box86.org/) (emulator for x86 games and programs) to see if there's a speed-up. Otherwise, I might try [the upstream RWKV project](https://github.com/BlinkDL/ChatRWKV)
It's better to just wait to see if llama.cpp (and other GGML-based projects) would get 32-bit ARM support soon. I'm really looking forward to running RWKV-4 Raven 1.5B in KoboldCpp
# Original post
I love to toy around with my phones and tablets, which are budget 32-bit Androids. My most used is a hacked 2GB Kindle HDX 7 (2013) and my newest is a 3GB Motorola (2021). As you can tell, all of my stuff is secondhand since I'm broke as a joke
I've been looking into running LLMs on any of them for the past few months, either for chatbot companionship or as a lightweight learning assistant when the power is out (downloaded tons of ebooks and the entirety of Wikipedia for Kiwix on a 256GB microSD for this too), but I can't find any information for constrained hardware. I do know of alternative models available in sub-3B sizes like RWKV, LaMini, and older base models like Pythia Deduped, OPT and GPT-2
But to make things difficult, I've had trouble getting KoboldCpp (and regular llama.cpp) to compile through Termux, so any help when it comes to this would be appreciated. I might have to use upstream RWKV or GPT-2, ~~Transformers (Oobabooga or KoboldAI)~~ (PyTorch might not support it), or alternative backends altogether like the many different GPT-2 implementations in C, but most of those would leave me without a frontend and/or no quantization (otherwise I'd probably be able to go up to 1.5B) | 2023-07-06T22:17:51 | https://www.reddit.com/r/LocalLLaMA/comments/14snrcp/llms_on_a_32bit_device_with_2gb_of_ram/ | MeowAnnabelle | self.LocalLLaMA | 2023-07-12T05:03:50 | 0 | {} | 14snrcp | false | null | t3_14snrcp | /r/LocalLLaMA/comments/14snrcp/llms_on_a_32bit_device_with_2gb_of_ram/ | false | false | self | 5 | null |
Best model that can run on Colab | 7 | Hi everyone
First of all, I want to thank everyone in this sub who contributes to the progress of open-source LLMs. Thank you so much!
I want to experiment with medium-sized models (7B/13B), but my GPU is old and has only 2GB of VRAM.
So I'll probably be using Google Colab's **free** GPU, an Nvidia T4 with around 15 GB of VRAM.
My question is: what is the best quantized (or full) model that can run on Colab's resources without being too slow? I mean at least 2 tokens per second.
*Also, please recommend specific versions, not just a general architecture like LLaMA 13B or Falcon 7B, etc.
Thanks so much in advance | 2023-07-06T22:14:05 | https://www.reddit.com/r/LocalLLaMA/comments/14snnj5/best_model_that_can_run_on_colab/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14snnj5 | false | null | t3_14snnj5 | /r/LocalLLaMA/comments/14snnj5/best_model_that_can_run_on_colab/ | false | false | self | 7 | null |
Trying Classifier Free Guidance and Negative prompt - some ideas how to test it properly? | 17 | 2023-07-06T21:21:21 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14sm839 | false | null | t3_14sm839 | /r/LocalLLaMA/comments/14sm839/trying_classifier_free_guidance_and_negative/ | false | false | 17 | {'enabled': True, 'images': [{'id': 'LAqNZ684cJaVml-0PGv5zOdQ1G75ZEn3ytdxN0YHZDg', 'resolutions': [{'height': 123, 'url': 'https://preview.redd.it/i3vo0ys1weab1.jpg?width=108&crop=smart&auto=webp&s=892c938cc454a403587c01016721ede66857b3d7', 'width': 108}, {'height': 246, 'url': 'https://preview.redd.it/i3vo0ys1weab1.jpg?width=216&crop=smart&auto=webp&s=eb3c4958f277e3500ffa30f8bb4568d6c60bfcc0', 'width': 216}, {'height': 365, 'url': 'https://preview.redd.it/i3vo0ys1weab1.jpg?width=320&crop=smart&auto=webp&s=46db5fdc0d39cca21ed67f73b7da6cb6fa92e288', 'width': 320}, {'height': 730, 'url': 'https://preview.redd.it/i3vo0ys1weab1.jpg?width=640&crop=smart&auto=webp&s=12be86f04701e40f49d31d37b6dadee6691acf16', 'width': 640}], 'source': {'height': 957, 'url': 'https://preview.redd.it/i3vo0ys1weab1.jpg?auto=webp&s=598425f14457d571eab3ed098b6f59099dbfb137', 'width': 838}, 'variants': {}}]} | |||
I guess George Hotz was correct. | 78 | 2023-07-06T21:16:53 | https://arxiv.org/abs/2305.14705 | MrBeforeMyTime | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 14sm3p0 | false | null | t3_14sm3p0 | /r/LocalLLaMA/comments/14sm3p0/i_guess_george_hotz_was_correct/ | false | false | 78 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} | ||
What's the current best model if you have no concern about the hardware? | 1 | I have access to a semi-supercomputer and I wonder what the best model is that I could run on it, specifically for my personal use. | 2023-07-06T19:58:57 | https://www.reddit.com/r/LocalLLaMA/comments/14sjz7q/whats_the_current_best_model_if_you_have_no/ | hattapliktir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sjz7q | false | null | t3_14sjz7q | /r/LocalLLaMA/comments/14sjz7q/whats_the_current_best_model_if_you_have_no/ | false | false | self | 1 | null |
Does RAM make a big difference? | 1 | [removed] | 2023-07-06T19:58:01 | https://www.reddit.com/r/LocalLLaMA/comments/14sjyan/does_ram_make_a_big_difference/ | Asoery | self.LocalLLaMA | 2023-07-06T20:02:04 | 0 | {} | 14sjyan | false | null | t3_14sjyan | /r/LocalLLaMA/comments/14sjyan/does_ram_make_a_big_difference/ | false | false | default | 1 | null |
Help with QLoRA Fine Tune | 5 | I'm following nearly the same example from this repository:
https://github.com/mzbac/qlora-fine-tune
Except I'm testing it with one of the standard hardcoded datasets in his script, 'alpaca'. Here's the command:
python qlora.py --model_name_or_path TheBloke/wizardLM-13B-1.0-fp16 --dataset alpaca --bf16
That dataset is about 52k records. Is 25 hours on an A100 on RunPod normal? QLoRA is supposed to be fast.
Any help is appreciated, and I would love to collab with folks with similar interests! I've been building stuff with LangChain for the last few months, but only just started building datasets and trying to tune!
.......................
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda117.so...
...
/usr/local/lib/python3.10/dist-packages/peft/utils/other.py:102: FutureWarning: prepare_model_for_int8_training is deprecated and will be removed in a future version. Use prepare_model_for_kbit_training instead.
warnings.warn(
adding LoRA modules...
trainable params: 125173760.0 || all params: 6922337280 || trainable: 1.8082586117502786
loaded model
...
torch.uint8 6343884800 0.9164368252221308
torch.float32 414720 5.991040066744624e-05
{'loss': 1.3165, 'learning_rate': 0.0002, 'epoch': 0.0}
{'loss': 1.1405, 'learning_rate': 0.0002, 'epoch': 0.01}
{'loss': 0.9342, 'learning_rate': 0.0002, 'epoch': 0.01}
{'loss': 0.6558, 'learning_rate': 0.0002, 'epoch': 0.01}
{'loss': 0.7448, 'learning_rate': 0.0002, 'epoch': 0.02}
{'loss': 1.217, 'learning_rate': 0.0002, 'epoch': 0.02}
{'loss': 1.1012, 'learning_rate': 0.0002, 'epoch': 0.02}
{'loss': 0.8564, 'learning_rate': 0.0002, 'epoch': 0.02}
{'loss': 0.6592, 'learning_rate': 0.0002, 'epoch': 0.03}
1%|█▌ | 90/10000 [13:43<24:05:54, 8.75s/it] | 2023-07-06T18:59:07 | https://www.reddit.com/r/LocalLLaMA/comments/14sidp3/help_with_qlora_fine_tune/ | gentlecucumber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sidp3 | false | null | t3_14sidp3 | /r/LocalLLaMA/comments/14sidp3/help_with_qlora_fine_tune/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'Sps2Xxji1GXjTiQi3CmfFiIIh8o2K1Rd4E9wU0Fes0U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/j_Pylg0cD-lDn5wTlAJ6VSvXnujwT6XnCnq4InZSz2k.jpg?width=108&crop=smart&auto=webp&s=8348f876b1e97108c598081be7555cf87557b917', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/j_Pylg0cD-lDn5wTlAJ6VSvXnujwT6XnCnq4InZSz2k.jpg?width=216&crop=smart&auto=webp&s=95ab978f4df272a07d6aad4baa7647394c16fc27', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/j_Pylg0cD-lDn5wTlAJ6VSvXnujwT6XnCnq4InZSz2k.jpg?width=320&crop=smart&auto=webp&s=83a030e7124b21dc85cdd40b27c9c97e50098d50', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/j_Pylg0cD-lDn5wTlAJ6VSvXnujwT6XnCnq4InZSz2k.jpg?width=640&crop=smart&auto=webp&s=137abcdc78e100ad67286017b0f1cc9304a962ee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/j_Pylg0cD-lDn5wTlAJ6VSvXnujwT6XnCnq4InZSz2k.jpg?width=960&crop=smart&auto=webp&s=97f98cb75ea111366ad9394e8a913059ab9c914a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/j_Pylg0cD-lDn5wTlAJ6VSvXnujwT6XnCnq4InZSz2k.jpg?width=1080&crop=smart&auto=webp&s=359144959b9a1e68887b43158393a42b973cf2a8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/j_Pylg0cD-lDn5wTlAJ6VSvXnujwT6XnCnq4InZSz2k.jpg?auto=webp&s=e54bf601f85e71423c0cd0fd2399160fc25201a4', 'width': 1200}, 'variants': {}}]} |
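For what it's worth, the tqdm line at the bottom already explains the 25 hours: 10,000 total steps at 8.75 s/it is the whole story, not QLoRA overhead. A quick sanity check (step counts taken straight from the log above; the flag names assume qlora.py's standard HF TrainingArguments):

```python
# ETA implied by the progress bar "90/10000 [13:43<24:05:54, 8.75s/it]":
sec_per_step = 8.75
total_steps = 10_000
print(f"full run: {sec_per_step * total_steps / 3600:.1f} h")  # ~24.3 h

# The loss log shows ~0.03 epoch after 90 steps, so one epoch is ~3000 steps;
# passing a smaller --max_steps (or --num_train_epochs 1) cuts the run to ~7 h.
steps_per_epoch = 90 / 0.03
print(f"one epoch: {sec_per_step * steps_per_epoch / 3600:.1f} h")
```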
Finetuning guide for MPT | 15 | I've written up the exact steps doing a full finetune on an MPT model here:
https://gist.github.com/float-trip/679019a23f246b17d2dff9e2cf55c387
The official Mosaic docs are fine, but aspirational in places. They're not always up to date regarding bugs in the library.
As for _why_ to finetune MPT - it's 5-10x cheaper than the quotes I've seen for LLaMA. No idea why this is (Meta got similar numbers while training LLaMA as Mosaic [reports here](https://github.com/mosaicml/llm-foundry/tree/main/scripts/train/benchmarking#a100-80gb-with-1600-gbps-node-node-interconnect-roce)), but for whatever reason it seems that LLaMA finetunes are currently burning the majority of their resources on inefficient tooling. Sadly the MPT models seem to perform worse than LLaMA, but if you're training on any significant number of tokens, imo they're the clear choice. | 2023-07-06T18:45:22 | https://www.reddit.com/r/LocalLLaMA/comments/14si12b/finetuning_guide_for_mpt/ | float-trip | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14si12b | false | null | t3_14si12b | /r/LocalLLaMA/comments/14si12b/finetuning_guide_for_mpt/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=108&crop=smart&auto=webp&s=d5811c5bda5fece1040636a6af8702ba790f0fd4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=216&crop=smart&auto=webp&s=eee576fd4da7535eb53ceb88dd8b52f073048441', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=320&crop=smart&auto=webp&s=72872d880460efa723918c000adca0ed259cf775', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=640&crop=smart&auto=webp&s=f3545b9335d763c9da9c16bf7bf9a3f907dbd6f6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=960&crop=smart&auto=webp&s=2d241ace0f1c07088fac3f8469dbad3b05d2d419', 'width': 960}, {'height': 540, 'url': 
'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=1080&crop=smart&auto=webp&s=9055f11bdc00beb0b3589e1cae5817d6070d83bc', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?auto=webp&s=079a7260ec149880c73263d64811698adb22760a', 'width': 1280}, 'variants': {}}]} |
For those who train models: how do you normalize your data? | 2 | Assume I have a set of books. Some are perfectly formatted, with OCR from Amazon. Others are scanned in and uploaded to archive.org (say, very old texts). Others I have OCR'd manually. How can I ensure each of these is treated the same and fed into the model correctly?
Are there tools for this? Not just for text, but for audio, images, etc. too? | 2023-07-06T17:39:08 | https://www.reddit.com/r/LocalLLaMA/comments/14sg873/for_those_who_train_models_how_do_you_normalize/ | JebryyathHS | self.LocalLLaMA | 2023-07-13T00:21:16 | 0 | {} | 14sg873 | false | null | t3_14sg873 | /r/LocalLLaMA/comments/14sg873/for_those_who_train_models_how_do_you_normalize/ | false | false | default | 2 | null |
Is it possible to change a mining rig into a GPT trainer? | 9 | Hey folks,
I have an old Octominer rig with a couple of 3090s and 3080s. I would love to either rent out the GPU power/memory or build my own models. In the mining world there are a lot more MOBOs that can hold many GPUs, like 12. Would I be able to use the MOBO there, or should I just build a gaming PC, etc.? I guess I'm asking: are there MOBOs that can handle more than 2 GPUs to chain together memory for model training? | 2023-07-06T17:39:05 | https://www.reddit.com/r/LocalLLaMA/comments/14sg854/is_it_possible_to_change_a_mining_rig_into_a_gpt/ | gosume | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sg854 | false | null | t3_14sg854 | /r/LocalLLaMA/comments/14sg854/is_it_possible_to_change_a_mining_rig_into_a_gpt/ | false | false | self | 9 | null |
My largest ever quants, GPT 3 sized! BLOOMZ 176B and BLOOMChat 1.0 176B | 325 | For the last month I've been trying to quantise two mega models, probably the largest models on Hugging Face Hub: Big Science's [BLOOMZ](https://huggingface.co/bigscience/bloomz) and Sambanova Systems' [BLOOMChat 1.0](https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1).
I tried various systems, but all the HW available to me either didn't have enough RAM, or would cost too much with enough RAM, or else had old CPUs that I feared would take so long packing the model that it was looking like it was going to cost hundreds of $ to get it done. One guy who had quantised BLOOMZ (and then disappeared without ever uploading it!) said it took him 55 hours in total.
Then yesterday I was asked by [Latitude.sh](https://Latitude.sh) to test a 4 x H100 80GB system for them. It had a pretty recent and beefy CPU, the AMD EPYC 9354, plus 750GB RAM.
So of course I had to test it with these mega models.
And, somewhat to my surprise, a mere 3 hours 35 minutes later, the first was done!
So I'm pleased and relieved to be able to offer these two beasts for your enjoyment. Or at least, the enjoyment of anyone who happens to have big enough HW, or is willing to rent it :)
* [TheBloke/bloomz-176B-GPTQ](https://huggingface.co/TheBloke/bloomz-176B-GPTQ)
* [TheBloke/BLOOMChat-176B-v1-GPTQ](https://huggingface.co/TheBloke/BLOOMChat-176B-v1-GPTQ)
If you do try them, please read the README carefully! There's a special step required before you can run the models: GPTQ has no sharding, and HF won't allow uploading files bigger than 50GB. So I had to split the 94GB safetensors file into three pieces, and you need to join it together again.
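The join itself is just byte concatenation, so anything that streams the parts in order works; a stdlib sketch (the part filenames below are placeholders — use the exact names from the README):

```python
import shutil
import tempfile
from pathlib import Path

def join_parts(part_paths, output_path):
    """Concatenate split file parts, in order, back into one file."""
    with open(output_path, "wb") as out:
        for part in part_paths:
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)  # streamed copy, so RAM use stays tiny

# Tiny demo with throwaway files; for the real model you would glob the
# downloaded split parts and sort them before joining.
tmp = Path(tempfile.mkdtemp())
for i, chunk in enumerate([b"aa", b"bb", b"cc"]):
    (tmp / f"model.split-{i}").write_bytes(chunk)
join_parts(sorted(tmp.glob("model.split-*")), tmp / "model.safetensors")
print((tmp / "model.safetensors").read_bytes())  # b'aabbcc'
```

Sorting matters: the parts must be concatenated in the same order they were split.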
**Provided files**
I did two quants for each model:
* Main branch: group\_size: none + act-order (desc\_act) = True
* Branch group\_size\_128g: group\_size: 128g + act-order (desc\_act) = True
**Why use them?**
Because they're the size of GPT 3!? What more reason do you need? :)
Seriously though: most people probably won't want to bother. It's not going to run on any home HW. But they do seem to be of interest to companies evaluating local LLMs - I've had several people request I quant them so they could be evaluated for professional purposes.
**What hardware is required?**
You need 94GB VRAM just to load the model, plus context.
So either of these should work:
* 2 x 80GB GPU (A100 or H100), or
* 3 x 48GB GPU (eg A6000, A6000 Ada, L40)
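The 94GB figure is close to what the 4-bit weights alone imply; a back-of-the-envelope check (pure arithmetic, ignoring GPTQ metadata and context/KV cache):

```python
def quant_weight_gib(n_params, bits):
    """Rough size of the quantised weights alone, in GiB."""
    return n_params * bits / 8 / 1024**3

print(round(quant_weight_gib(176e9, 4)))  # 82 -- the rest of the 94GB file is overhead
print(round(quant_weight_gib(176e9, 3)))  # 61 -- why a 3-bit quant might fit 2 x 48GB
```

Which also matches the hope below that 3-bit could squeeze onto 2 x 48GB or 1 x 80GB once context is added.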
I did a few tests on 2 x H100 80GB and got 5 tokens/s using AutoGPTQ running via text-generation-webui.
**What about 3-bit? or 2-bit?**
Yeah I would like to try at least a 3-bit quant. I don't have access to the machine any more, but if/when I do again I will likely make 3-bit quants as well. I'm sceptical how good 2-bit GPTQ would be though.
I'm hopeful a 3-bit quant would run on 2 x 48GB GPU or 1 x 80GB, which makes it a lot more accessible and likely a lot faster too, at least in the 1 x 80GB case.
**What about GGML?**
Possibly. There's a llama.cpp fork called [bloomz.cpp](https://github.com/NouamaneTazi/bloomz.cpp) but it's not been updated in 2 months. So it's not going to support any of the fancy new quantisation methods, performance improvements, GPU acceleration, etc.
If there's demand I might give it a go, but a 176B model on CPU is going to be glacial, and would only work for people with 128GB RAM. | 2023-07-06T16:12:08 | https://www.reddit.com/r/LocalLLaMA/comments/14sdskp/my_largest_ever_quants_gpt_3_sized_bloomz_176b/ | The-Bloke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sdskp | false | null | t3_14sdskp | /r/LocalLLaMA/comments/14sdskp/my_largest_ever_quants_gpt_3_sized_bloomz_176b/ | false | false | self | 325 | {'enabled': False, 'images': [{'id': 'gXQ-wVUM2Tyx_6Y872ShG3Aaorh5qEh0nAGMrxUIbaU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Pl8R_0F9Tk0OfJMzghy_fs_WW7nqD8aYz19Tli-iHBw.jpg?width=108&crop=smart&auto=webp&s=adabb132efe463e3081b83a0a07cc3430c9a1911', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Pl8R_0F9Tk0OfJMzghy_fs_WW7nqD8aYz19Tli-iHBw.jpg?width=216&crop=smart&auto=webp&s=5996a94f2121bad64542abccab9dcba459345fe0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Pl8R_0F9Tk0OfJMzghy_fs_WW7nqD8aYz19Tli-iHBw.jpg?width=320&crop=smart&auto=webp&s=cf297d5a5ee914d1c5212bd43444bb1b3796cb0c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Pl8R_0F9Tk0OfJMzghy_fs_WW7nqD8aYz19Tli-iHBw.jpg?width=640&crop=smart&auto=webp&s=10de0bcc3aea66f98b46823e0c61957956e33662', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Pl8R_0F9Tk0OfJMzghy_fs_WW7nqD8aYz19Tli-iHBw.jpg?width=960&crop=smart&auto=webp&s=84eee09e1d5a6e5be1d1d2dfb09ec39b2f256b25', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Pl8R_0F9Tk0OfJMzghy_fs_WW7nqD8aYz19Tli-iHBw.jpg?width=1080&crop=smart&auto=webp&s=5d25e2400a9357c6bd7ec88cddacc8dbb2edff82', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Pl8R_0F9Tk0OfJMzghy_fs_WW7nqD8aYz19Tli-iHBw.jpg?auto=webp&s=550dd8b60c3640dfe667caec97392b7fa7e0c685', 'width': 1200}, 'variants': {}}]} |
Using LLM locally with HuggingFace blocked? | 0 | I want to try to use either Falcon, Dolly, or Mosaic to run a model locally on my work laptop. Unfortunately, HuggingFace is blocked on my computer. How would I go about this issue? | 2023-07-06T15:29:37 | https://www.reddit.com/r/LocalLLaMA/comments/14sckrf/using_llm_locally_with_huggingface_blocked/ | sisiwnsjhsjajzjxjs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sckrf | false | null | t3_14sckrf | /r/LocalLLaMA/comments/14sckrf/using_llm_locally_with_huggingface_blocked/ | false | false | default | 0 | null |
SuperAGI with KoboldCpp backend | 14 | Hey.
I haven't seen much discussion about [SuperAGI](https://superagi.com/) here, possibly because it mainly seems to target OpenAI (even though they have bundled oobabooga recently). But since I use koboldcpp (and thus ggml), and I don't want to keep duplicate copies of my models, I created a koboldcpp backend for it.
Results so far, are so so.. This looked promising:
​
https://preview.redd.it/898mmmbk2dab1.png?width=673&format=png&auto=webp&s=23a4b952880136467982f08836b28ac7add0069e
In this instance, I had limited generation to 30 tokens. Once I tidied things up and let it run for 800 tokens, I got someone's chat with a consolation AI. Disappointing...
But then again, I only have my laptop and chronos-hermes-13b on it. Will probably be better with WizardCoder.
Anyway. In case anyone wants to try it out, or help out, here's the repo:
[https://github.com/neph1/SuperAGI](https://github.com/neph1/SuperAGI)
​
​ | 2023-07-06T15:18:20 | https://www.reddit.com/r/LocalLLaMA/comments/14sc98c/superagi_with_koboldcpp_backend/ | neph1010 | self.LocalLLaMA | 2023-07-06T16:15:24 | 0 | {} | 14sc98c | false | null | t3_14sc98c | /r/LocalLLaMA/comments/14sc98c/superagi_with_koboldcpp_backend/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'UJCmnHuTn9r_Y3u-nHjo5TpdgWhE5vSjvuGFQhXJMaM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CcKWWFQNGBkfC3w3r2EbiFSEq_OD8fwzmYbnt8xDzZc.jpg?width=108&crop=smart&auto=webp&s=c71a8b4cb3b337d965b9df23b446f28d66554de5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CcKWWFQNGBkfC3w3r2EbiFSEq_OD8fwzmYbnt8xDzZc.jpg?width=216&crop=smart&auto=webp&s=6fbee77ae72de2b6cbb7bca3b83923116bb515a6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CcKWWFQNGBkfC3w3r2EbiFSEq_OD8fwzmYbnt8xDzZc.jpg?width=320&crop=smart&auto=webp&s=15c2e85ab256352e5fab2e443797e6ac1bf1302e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CcKWWFQNGBkfC3w3r2EbiFSEq_OD8fwzmYbnt8xDzZc.jpg?width=640&crop=smart&auto=webp&s=45b89ebc736411d323170bde29e19b20bd681e85', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CcKWWFQNGBkfC3w3r2EbiFSEq_OD8fwzmYbnt8xDzZc.jpg?width=960&crop=smart&auto=webp&s=3177509dcf88c0749df6d4ac7dcfd26a372569c3', 'width': 960}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/CcKWWFQNGBkfC3w3r2EbiFSEq_OD8fwzmYbnt8xDzZc.jpg?auto=webp&s=8196cb54a40a3fdbc99764f1f74e5498234b6e31', 'width': 1024}, 'variants': {}}]} | |
Is there any service like MosaicML to train LLMs, but more focused on individuals instead of enterprises? | 0 | The title | 2023-07-06T13:57:22 | https://www.reddit.com/r/LocalLLaMA/comments/14sa0gh/is_there_any_service_like_mosaicml_to_train_llms/ | mr_house7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sa0gh | false | null | t3_14sa0gh | /r/LocalLLaMA/comments/14sa0gh/is_there_any_service_like_mosaicml_to_train_llms/ | false | false | default | 0 | null |
Fine-tune on personal chats | 1 | Did you know that you can download all your personal conversations on Facebook?
Then use that to fine-tune a LLM to answer like you. | 2023-07-06T13:25:02 | https://www.reddit.com/r/LocalLLaMA/comments/14s95ri/finetune_on_personal_chats/ | veentastudios | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14s95ri | false | null | t3_14s95ri | /r/LocalLLaMA/comments/14s95ri/finetune_on_personal_chats/ | false | false | self | 1 | null |
Is there anything to really chat with an AI as you would do with somebody over the phone ? | 18 | I mean any project, whether free or open source, that would allow you to chat with a bot without using a mouse or keyboard. Ideally, you would be stop or pause the bot's responses by simply saying "STOP" or something similar.
So far, none of the solutions I have tried have provided a smooth conversational experience, as they all require the use of a keyboard or a mouse.
I believe there may be paid solutions available, but I'm talking about open source or completely free options... I don't mind if the TTS voice sounds like an emotionless robot, and Whisper voice recognition is already working perfectly (at least for me), so what would prevent a smooth voice-only conversation in, for example, ooba? Or does Silly Tavern already have such an option? I never really tried it...
Wow, this is the exact question I've been wanting to ask for so many days :)
Thank you | 2023-07-06T13:10:46 | https://www.reddit.com/r/LocalLLaMA/comments/14s8sy6/is_there_anything_to_really_chat_with_an_ai_as/ | alexthai7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14s8sy6 | false | null | t3_14s8sy6 | /r/LocalLLaMA/comments/14s8sy6/is_there_anything_to_really_chat_with_an_ai_as/ | false | false | self | 18 | null |
LLaMa 65B GPU benchmarks | 111 | I spent half a day conducting a benchmark test of the 65B model on some of the most powerful GPUs aviailable to individuals.
Test Method: I ran the latest Text-Generation-Webui on Runpod, loading Exllama, Exllama\_HF, and LLaMa.cpp for comparative testing. I used a specific prompt to ask them to generate a long story, more than 2000 words. Since LLaMa-cpp-python does not yet support the -ts parameter, and the default settings lead to memory overflow for the 3090s and 4090s, I used LLaMa.cpp directly to test the 3090s and 4090s.
Test Parameters: Context size 2048, max\_new\_tokens were set to 200 and 1900 respectively, and all other parameters were set to default.
Models Tested: Airoboros-65B-GPT4-1.4's GPTQ and GGML (Q4\_KS) versions. Q4\_KS is the smallest decent version of GGML models, and probably has similar perplexity to GPTQ models.
Results:
**Speed in tokens/second for generating 200 or 1900 new tokens:**
||Exllama(200)|Exllama(1900)|Exllama\_HF(200)|Exllama\_HF(1900)|LLaMa.cpp(200)|LLaMa.cpp(1900)|
|:-|:-|:-|:-|:-|:-|:-|
|2\*3090|12.2|10.9|10.6|8.3|11.2|9.9|
|2\*4090|20.8|19.1|16.2|11.4|13.2|12.3|
|RTX A6000|12.2|11.2|10.6|9.0|10.2|8.8|
|RTX 6000 ADA|17.7|16.1|13.1|8.3|14.7|13.1|
I ran multiple tests for each combination and used the median value.
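For anyone reproducing this: speed here is simply new tokens divided by wall-clock time, with the median taken over runs. A minimal harness (the fake generator below just stands in for the real webui/llama.cpp call):

```python
import time
from statistics import median

def tokens_per_second(generate_fn, n_new_tokens, runs=3):
    """Median generation speed; generate_fn(n) must block until n tokens exist."""
    rates = []
    for _ in range(runs):
        t0 = time.perf_counter()
        generate_fn(n_new_tokens)
        rates.append(n_new_tokens / (time.perf_counter() - t0))
    return median(rates)

fake_generate = lambda n: time.sleep(n * 0.001)  # pretend ~1000 tokens/s
print(round(tokens_per_second(fake_generate, 200)))  # just under 1000
```

Using the median rather than the mean keeps one slow warm-up run from skewing the result.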
It seems that these programs are not able to leverage dual GPUs to work simultaneously. The speed of dual GPUs is not notably faster than their single-GPU counterparts with larger memory.
​
**GPU utilization during test:**
||Exllama(1900)|Exllama\_HF(1900)|LLaMa.cpp(1900)|
|:-|:-|:-|:-|
|2\*3090|45%-50%|40%--->30%|60%|
|2\*4090|35%-45%|40%--->20%|45%|
|RTX A6000|93%+|90%--->70%|93%+|
|RTX 6000 ADA|70%-80%|45%--->20%|93%+|
It’s not advisable to use Exllama\_HF for generating lengthy texts since its performance tends to wane over time, which is evident from the GPU utilization metrics.
6000 ADA is likely limited by its 960GB/s memory bandwidth.
​
**VRAM usage (in MB)** when generating tokens, Exllama\_HF has almost the same VRAM usage as Exllama, so I just list Exllama:
||Exllama|LLaMa.cpp|
|:-|:-|:-|
|2\*3090|39730|45800|
|2\*4090|40000|46560|
|RTX A6000|38130|44700|
|RTX 6000 ADA|38320|44900|
There's additional memory overhead with dual GPUs as compared to a single GPU. Additionally, the 40 series exhibits a somewhat greater demand for memory than the 30 series.
​
Some of my thoughts and observations:
1. Dual 3090s are a cost-effective choice. However, they are extremely noisy and hot. On Runpod, one of 3090's fan speed was consistently at 100% when running tests, which mirrors the behaviors of my local dual 3090s. Placing two non-blower 3090s in the same case can be challenging for cooling. My local 3090s (3 slots spaced) power throttles even with 220w power limit each. Blower-style cards would be a bit better in this regard but will be noisier. IMO, the best solution is to place two 3090s in an open-air setup with a rack and PCI-e extenders.
2. The 4090’s efficiency and cooling performance are impressive. This is consistent with what I’ve observed locally. Dual 4090s can be placed on a motherboard with two slots spaced 4 slots apart, without being loud. For the 4090, it is best to opt for a thinner version, like PNY’s 3-slot 4090. Limiting the power to 250W on the 4090s affects the local LLM speed by less than 10%.
3. The A6000 is also a decent option. A single card saves you a lot of hassle in dealing with two cards, both in terms of software and hardware. However, the A6000 is a blower-style card and is expected to be noisy.
4. The 6000 Ada is a powerful but expensive option. But its power cannot be fully utilized when running local LLM. The upside is that it's significantly quieter than the A6000 (I observed its power usage and fan speed to be much lower than A6000).
5. Both the A6000 and 6000 ADA's fans spin at idle speed even when the temperature is below 30 degrees Celsius.
6. I paired a 3090 with a 4090. By allocating more layers to the 4090, the speed was slightly closer to that of dual 4090s rather than dual 3090s, and significantly quieter than dual 3090s.
Hope it helps! | 2023-07-06T12:19:21 | https://www.reddit.com/r/LocalLLaMA/comments/14s7j9j/llama_65b_gpu_benchmarks/ | Big_Communication353 | self.LocalLLaMA | 2023-07-06T23:56:31 | 0 | {} | 14s7j9j | false | null | t3_14s7j9j | /r/LocalLLaMA/comments/14s7j9j/llama_65b_gpu_benchmarks/ | false | false | self | 111 | null |
num_beams > 1 breaking my model (Open-LLaMA7b - Alpaca-finetuned) | 3 | As the title states, I'm experiencing some weird results when using num\_beams > 1. The first 150-ish tokens are great, much better than if I use num\_beams == 1. However, after that point, it starts outputting gibberish. I've tried tweaking max\_new\_tokens and max\_length to see if the issue lies there. The model is capable of generating quite long answers without beam search, so I'm a bit lost. Do you have any ideas on how to further debug this issue? Or have anyone encuntered this?
The model is fine-tuned on a translated Alpaca dataset with QLoRA
​
    import torch
    from transformers import LlamaTokenizer, LlamaForCausalLM
    from peft import PeftModel
    
    tokenizer = LlamaTokenizer.from_pretrained(MODEL)
    model = LlamaForCausalLM.from_pretrained(
        MODEL, torch_dtype=torch.float32, device_map='auto', load_in_4bit=True,
    )
    model = PeftModel.from_pretrained(model, CHECKPOINT_DIR, torch_dtype=torch.float16)
    prompter = Prompter(TEMPLATE_JSON_PATH)
    
    # `generate` is a method on my chat wrapper class, hence `self`
    def generate(self, prompt, max_length=250, temperature=0.9, top_k=50,
                 top_p=0.9, repetition_penalty=1.0, num_beams=1):
        # prompt = self.prompter.generate_prompt(prompt, promt1)
        input_ids = self.tokenizer.encode(prompt, return_tensors='pt')
        input_ids = input_ids.to(self.model.device)
        output = self.model.generate(
            input_ids=input_ids,
            # max_length=max_length,
            max_new_tokens=512,
            temperature=temperature,
            top_k=top_k,
            top_p=top_p,
            repetition_penalty=repetition_penalty,
            do_sample=True,
            num_return_sequences=1,
            num_beams=num_beams,
            remove_invalid_values=True,
        )
        output_text = self.tokenizer.decode(output[0], skip_special_tokens=True)
        output_text = output_text[len(prompt):]  # Remove the prompt from the output
        return output_text
    
    # Breaks
    chat.generate(prompt,
                  temperature=0.9,
                  top_k=80,
                  top_p=0.7,
                  repetition_penalty=1.3,
                  num_beams=2)
    
    # Works
    chat.generate(prompt,
                  temperature=0.9,
                  top_k=85,
                  top_p=0.7,
                  repetition_penalty=1.3)
​ | 2023-07-06T12:05:09 | https://www.reddit.com/r/LocalLLaMA/comments/14s77av/num_beams_1_breaking_my_model_openllama7b/ | BuzzLightr | self.LocalLLaMA | 2023-07-06T12:11:33 | 0 | {} | 14s77av | false | null | t3_14s77av | /r/LocalLLaMA/comments/14s77av/num_beams_1_breaking_my_model_openllama7b/ | false | false | self | 3 | null |
Free LLM api | 1 | What are some free LLM api which are alternative for OpenAI API | 2023-07-06T10:01:14 | https://www.reddit.com/r/LocalLLaMA/comments/14s4hou/free_llm_api/ | Exciting-Purpose-711 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14s4hou | false | null | t3_14s4hou | /r/LocalLLaMA/comments/14s4hou/free_llm_api/ | false | false | self | 1 | null |
New base model InternLM 7B weights released, with 8k context window. | 49 | 2023-07-06T07:39:08 | https://github.com/InternLM/InternLM | logicchains | github.com | 1970-01-01T00:00:00 | 0 | {} | 14s1p96 | false | null | t3_14s1p96 | /r/LocalLLaMA/comments/14s1p96/new_base_model_internlm_7b_weights_released_with/ | false | false | 49 | {'enabled': False, 'images': [{'id': '2tVUfp5lAxozL-49NKZd2aSVbQLlqCzZXLzttctaQB8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-L_o9_ksbyTG8oYmftdYYRNi0-70J1OYYk4hPlLwJzQ.jpg?width=108&crop=smart&auto=webp&s=4749d28246824b1dd6e283cb57232c25cc4aeac2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-L_o9_ksbyTG8oYmftdYYRNi0-70J1OYYk4hPlLwJzQ.jpg?width=216&crop=smart&auto=webp&s=bff4a8531b2a1445903179e79d99f95539f423e1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-L_o9_ksbyTG8oYmftdYYRNi0-70J1OYYk4hPlLwJzQ.jpg?width=320&crop=smart&auto=webp&s=4f4f20a8a249adb1f39eb4c1cabbe7973b610af7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-L_o9_ksbyTG8oYmftdYYRNi0-70J1OYYk4hPlLwJzQ.jpg?width=640&crop=smart&auto=webp&s=8b1847fc1aa59fefb5f8bfd7af582780d4d99b3b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-L_o9_ksbyTG8oYmftdYYRNi0-70J1OYYk4hPlLwJzQ.jpg?width=960&crop=smart&auto=webp&s=d2f2877cbe6a4096d662a4b7c729488e402ee2be', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-L_o9_ksbyTG8oYmftdYYRNi0-70J1OYYk4hPlLwJzQ.jpg?width=1080&crop=smart&auto=webp&s=de5b68835dd3b2b374d130393c5ae2e0e53489a1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-L_o9_ksbyTG8oYmftdYYRNi0-70J1OYYk4hPlLwJzQ.jpg?auto=webp&s=55fa4084d3aa987e49186687bf7b200201ca2ab7', 'width': 1200}, 'variants': {}}]} | ||
Using other tokenizers? | 1 | Hi,
​
I'm trying to apply LLaMA to understanding Korean text. However, the original LLaMA tokenizer seems to greatly overestimate the number of tokens.
With the same input text, the LLaMA tokenizer gives 5\~6 times more tokens than the KoBERT tokenizer.
I assume this is because LLaMA was not built with Korean in mind.
I tried swapping only the tokenizer, but that gives me gibberish output.
What can I do? Are tokenizers something that should be trained with the model?
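A quick way to quantify the gap is "fertility" — tokens per character. With real tokenizers you would pass the LLaMA and KoBERT encode functions; the stubs below only show the shape of the comparison (LLaMA's SentencePiece vocab falls back to raw bytes for characters it doesn't cover, which likely explains why Hangul explodes into many tokens):

```python
def fertility(encode, texts):
    """Average tokens per character -- lower means the tokenizer
    covers the language more efficiently."""
    n_tokens = sum(len(encode(t)) for t in texts)
    n_chars = sum(len(t) for t in texts)
    return n_tokens / n_chars

# Stand-ins: byte fallback (what LLaMA does for uncovered Hangul) vs. a
# tokenizer with real Korean coverage.
byte_fallback = lambda t: list(t.encode("utf-8"))  # ~3 tokens per Hangul char
korean_aware = lambda t: t.split()
korean = ["안녕하세요 세계"]
print(fertility(byte_fallback, korean) / fertility(korean_aware, korean))  # 11.0
```

And yes — a tokenizer's vocabulary is tied to the embedding table it was trained with, which is why swapping only the tokenizer yields gibberish.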
​
Thanks in advance for anyone that comments. | 2023-07-06T06:16:08 | https://www.reddit.com/r/LocalLLaMA/comments/14s038q/using_other_tokenizers/ | manjimin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14s038q | false | null | t3_14s038q | /r/LocalLLaMA/comments/14s038q/using_other_tokenizers/ | false | false | self | 1 | null |
SillyTavern use-cases? | 11 | I've seen a few people mention they are using SillyTavern and not Ooba.
Curious what use-cases are you using in Silly Tavern that are not available in Ooba? | 2023-07-06T05:42:18 | https://www.reddit.com/r/LocalLLaMA/comments/14rzdsk/sillytavern_usecases/ | robo_cap | self.LocalLLaMA | 2023-07-06T16:29:03 | 0 | {} | 14rzdsk | false | null | t3_14rzdsk | /r/LocalLLaMA/comments/14rzdsk/sillytavern_usecases/ | false | false | self | 11 | null |
after a bit of tweaking, i got gpt3clippy working via oobabooga | 3 | "just follow the errors until its fixed"
[using the openai extensions ofc](https://preview.redd.it/kiqaryf98aab1.png?width=1915&format=png&auto=webp&s=01a3ebdc9732d639082273d1b1503feeb8610399) | 2023-07-06T05:41:25 | https://www.reddit.com/r/LocalLLaMA/comments/14rzd7r/after_a_bit_of_tweaking_i_got_gpt3clippy_working/ | happydadinau | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rzd7r | false | null | t3_14rzd7r | /r/LocalLLaMA/comments/14rzd7r/after_a_bit_of_tweaking_i_got_gpt3clippy_working/ | false | false | 3 | null | |
Reproducing LLAMA or doing LoRa on large corpus of pirated books | 18 | I was thinking whether it's feasible to reproduce LAMA on new large corpuse of pirated books. I did LoRa experiments and it seems unlikely that it would scale beyond 100s of books well. If so, how much compute would I need to repruduce 7b model from skratch? | 2023-07-06T05:30:51 | https://www.reddit.com/r/LocalLLaMA/comments/14rz5p1/reproducing_llama_or_doing_lora_on_large_corpus/ | Itchy_Monk2686 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rz5p1 | false | null | t3_14rz5p1 | /r/LocalLLaMA/comments/14rz5p1/reproducing_llama_or_doing_lora_on_large_corpus/ | false | false | self | 18 | null |
LongNet: Scaling Transformers to 1,000,000,000 Tokens | 126 | [https://arxiv.org/pdf/2307.02486.pdf](https://arxiv.org/pdf/2307.02486.pdf) Yeeehaww | 2023-07-06T05:04:28 | https://www.reddit.com/r/LocalLLaMA/comments/14rym30/longnet_scaling_transformers_to_1000000000_tokens/ | iuwuwwuwuuwwjueej | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rym30 | false | null | t3_14rym30 | /r/LocalLLaMA/comments/14rym30/longnet_scaling_transformers_to_1000000000_tokens/ | false | false | self | 126 | null |
What's the most sophisticated offline chat LLM available now for either a Macbook M2 Max or a 3070ti GPU desktop I can run? | 10 | Please, I am begging you, help a fool out | 2023-07-06T04:19:33 | https://www.reddit.com/r/LocalLLaMA/comments/14rxnq7/whats_the_most_sophisticated_offline_chat_llm/ | Careful_Tower_5984 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rxnq7 | false | null | t3_14rxnq7 | /r/LocalLLaMA/comments/14rxnq7/whats_the_most_sophisticated_offline_chat_llm/ | false | false | self | 10 | null |
Can we train for lora with SuperHOT model | 3 | Hi guys,
I'm happy with the performance of the SuperHOT version of the Vicuna model (I use this model: TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16, and Bloke is a hero!), but I can't seem to find a way to train it (the textgen-webui shows: "LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. (Found model type: ExllamaModel)"). Does it now support training and generating LoRA files?
​ | 2023-07-06T03:41:57 | https://www.reddit.com/r/LocalLLaMA/comments/14rwue7/can_we_train_for_lora_with_superhot_model/ | baibaideyun | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rwue7 | false | null | t3_14rwue7 | /r/LocalLLaMA/comments/14rwue7/can_we_train_for_lora_with_superhot_model/ | false | false | self | 3 | null |
A Comparison of Large Language Models (LLMs) in Biomedical Domain | 60 | 2023-07-06T01:11:08 | https://provectus.com/blog/comparison-large-language-models-biomedical-domain/ | DarronFeldstein | provectus.com | 1970-01-01T00:00:00 | 0 | {} | 14rtg35 | false | null | t3_14rtg35 | /r/LocalLLaMA/comments/14rtg35/a_comparison_of_large_language_models_llms_in/ | false | false | 60 | {'enabled': False, 'images': [{'id': 'BH4nT_ARjSTG5EiwLpm_dYy4TBePmD2SqCfFdon59d4', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/bWWW95gY_HusS7i11uG-DqBhSeW5TXDtxSooDr1p5uk.jpg?width=108&crop=smart&auto=webp&s=fd190d954899750bd6dc1628562f2551bb41f152', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/bWWW95gY_HusS7i11uG-DqBhSeW5TXDtxSooDr1p5uk.jpg?width=216&crop=smart&auto=webp&s=9d5b635a8f38bc96de9e12dce12330897fb4445a', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/bWWW95gY_HusS7i11uG-DqBhSeW5TXDtxSooDr1p5uk.jpg?width=320&crop=smart&auto=webp&s=9f4b56c18a3473e9b354913492c250bac4e09c83', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/bWWW95gY_HusS7i11uG-DqBhSeW5TXDtxSooDr1p5uk.jpg?width=640&crop=smart&auto=webp&s=feb06431ae472e65776457c9a11c17d164131021', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/bWWW95gY_HusS7i11uG-DqBhSeW5TXDtxSooDr1p5uk.jpg?width=960&crop=smart&auto=webp&s=770c7007759ef73c4e5010d2eed002044537911c', 'width': 960}, {'height': 719, 'url': 'https://external-preview.redd.it/bWWW95gY_HusS7i11uG-DqBhSeW5TXDtxSooDr1p5uk.jpg?width=1080&crop=smart&auto=webp&s=28d9099e1e0019d107b7aba1a7c060a14b7c1730', 'width': 1080}], 'source': {'height': 933, 'url': 'https://external-preview.redd.it/bWWW95gY_HusS7i11uG-DqBhSeW5TXDtxSooDr1p5uk.jpg?auto=webp&s=efd7e31ce7c8310e4cba24b0c902353530e6f6cd', 'width': 1400}, 'variants': {}}]} | ||
Best option between 2x3060 or 1x3090. | 4 | The CUDA cores total comes to the same, the memory too.
Now that most a.i. programs can use multiple GPUs, I'm not sure who wins.
I've got a 3060 12gb and got a little play money (400$ USD).
My options are : sell my 3060 and buy a 3090 with the card money and money
Or:
Get another 3060 12gb because I have 2 pciE ports, plus add 32gb RAM TO have a total of 64gb ram.
Both require a new PSU so that's equal.
Give me your opinions please | 2023-07-06T00:06:09 | https://www.reddit.com/r/LocalLLaMA/comments/14rs0ko/best_option_between_2x3060_or_1x3090/ | oodelay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rs0ko | false | null | t3_14rs0ko | /r/LocalLLaMA/comments/14rs0ko/best_option_between_2x3060_or_1x3090/ | false | false | self | 4 | null |
Tool for deploying open source LLMs on your own cloud | 86 | Hey all! I’ve been a long time lurker on the subreddit and wanted to share something that me and a friend built. We wanted to create apps on top of open source LLMs and struggled to set them up in our cloud environment efficiently. We realized that the tool we were building for this in itself would probably be pretty useful for the community so we decided to open-source it.
It runs entirely on your own infrastructure. You connect your google cloud to it and you can then spin up models with just one line of python.
Currently we support a few of the major open source models. Adding fine-tuned versions of already existing model architectures from Huggingface is pretty straightforward and we're going to add more architectures too. Right now it runs on Google Cloud but we’re going to add AWS as soon as we can.
I’m happy to help anyone set this up on their own cloud account. I’d love to hear your feedback as we spend a lot of time on this.
Fine-tuning is also on the way, some of the code is already there if you want to take it apart yourself.
This is our repo: [https://github.com/havenhq/haven](https://github.com/havenhq/haven)
This is how to set it up: [https://docs.haven.run](https://docs.haven.run) | 2023-07-05T23:22:48 | https://www.reddit.com/r/LocalLLaMA/comments/14rqyx8/tool_for_deploying_open_source_llms_on_your_own/ | torque-mcclyde | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rqyx8 | false | null | t3_14rqyx8 | /r/LocalLLaMA/comments/14rqyx8/tool_for_deploying_open_source_llms_on_your_own/ | false | false | self | 86 | {'enabled': False, 'images': [{'id': 'VmEI9xR7ThLUCoCQt_1r0IkPqbNd7G_2woLRY0qHRlA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=108&crop=smart&auto=webp&s=b56200d611e5cb9e2c61a353771462fd0299a9f1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=216&crop=smart&auto=webp&s=e1a9497cbc75fcb9ad19f411e3bcd3b4b7354be7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=320&crop=smart&auto=webp&s=b3137b3e187c1208465b7778b047599c9bc986d3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=640&crop=smart&auto=webp&s=bb79408d06ecb2a679a891d45351a9f485c1109a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=960&crop=smart&auto=webp&s=8df1cab511cad798b4a2d5aa1e3f2e70fdc36ce7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=1080&crop=smart&auto=webp&s=8d8e12999c74c6a0d0b50319f37987dade58a62f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?auto=webp&s=4a755b54442bfb1f52259b8164a4f66c4524c4e0', 'width': 1200}, 'variants': {}}]} |
Haven: Easily run LLMs on inside your own cloud | 1 | [removed] | 2023-07-05T23:06:49 | https://www.reddit.com/r/LocalLLaMA/comments/14rqkhh/haven_easily_run_llms_on_inside_your_own_cloud/ | h-konsti | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rqkhh | false | null | t3_14rqkhh | /r/LocalLLaMA/comments/14rqkhh/haven_easily_run_llms_on_inside_your_own_cloud/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'VmEI9xR7ThLUCoCQt_1r0IkPqbNd7G_2woLRY0qHRlA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=108&crop=smart&auto=webp&v=enabled&s=1fb8b582a80b4a6c8a94a1c876245c7d3359a1db', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=216&crop=smart&auto=webp&v=enabled&s=6d8c05f8ed4f3761fd05017d86c1b755e105bf5e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=320&crop=smart&auto=webp&v=enabled&s=236339b07857cd7f2351c4e4fb97863a9a2f02bd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=640&crop=smart&auto=webp&v=enabled&s=10e290304bd12d3ca98b21e0997e2d65c48cabf1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=960&crop=smart&auto=webp&v=enabled&s=5cd36f09ebaa1fa371fe0a716c8d525b1222bf80', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=4869fb48f22d828fb38f4bc7704f2f66849129ff', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zE7ysnCouseCL3hmxQZMU1VQk2Q21s3-8ru_wFC0yWQ.jpg?auto=webp&v=enabled&s=f21ec63ed41afd9834ab567a7382d5275db543f5', 'width': 1200}, 'variants': {}}]} |
A small llama.cpp server playground | 22 | ​
https://preview.redd.it/hl0tzxk008ab1.png?width=1337&format=png&auto=webp&s=5f37d2928ba12c9f0e9efc7013bc35e6728e8906
Hello people, I hope you are well.
I want to share a small frontend I have been working on, made with Vue. It is very simple and still under development due to the nature of server.cpp. To run it you need the server.cpp executable; then run the frontend, which will connect to it and perform the inference. It supports key combinations, smooth text generation, parameterization, repeat, undo and stop.
You can clone the project and run it following the instructions, or use an executable that I made for Windows, which includes a mini server I wrote in C to serve the compiled frontend (all in 70 KB).
Link of the llama.cpp server: [https://github.com/ggerganov/llama.cpp/tree/master/examples/server](https://github.com/ggerganov/llama.cpp/tree/master/examples/server)
Link of the playground fronted: [https://github.com/hwpoison/llama-server-playground/tree/main/frontend](https://github.com/hwpoison/llama-server-playground/tree/main/frontend) (you can get the windows release from here).
Some pics:
https://preview.redd.it/zutazmp528ab1.png?width=509&format=png&auto=webp&s=d876de73686dd5d4804ff570ba0ea87856d3c671
[contains popover with help content](https://preview.redd.it/yh7zqpin18ab1.png?width=1346&format=png&auto=webp&s=cc51a3549dbf2028d668f23b983dd0276e6f98af)
[The fronted and the llama.cpp server running.](https://preview.redd.it/dg5uuse408ab1.png?width=1366&format=png&auto=webp&s=de21cb547ebaa4c273eef35a4d868e541af9c6e4)
I am not an expert in this topic, but I am trying to improve it as they improve the server (which is in constant development). My PC is not very good, and I use this to experiment without spending a lot of resources.
Greetings!
​ | 2023-07-05T22:20:17 | https://www.reddit.com/r/LocalLLaMA/comments/14rpce2/a_small_llamacpp_server_playground/ | hwpoison | self.LocalLLaMA | 2023-07-05T23:51:17 | 0 | {} | 14rpce2 | false | null | t3_14rpce2 | /r/LocalLLaMA/comments/14rpce2/a_small_llamacpp_server_playground/ | false | false | 22 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]} | |
Why does gpt 4 give me failed to load model? | 0 | C:\Users\Laptop\Downloads\Aishit\LLama>main -i --interactive-first -r "### Human:" --temp 0 -c 2048 -n -1 --ignore-eos --repeat_penalty 1.2 --instruct -m ggml-model-q4_1.bin
main: build = 794 (ec326d3)
main: seed = 1688595156
llama.cpp: loading model from ggml-model-q4_1.bin
error loading model: unexpectedly reached end of file
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'ggml-model-q4_1.bin'
main: error: unable to load model | 2023-07-05T22:14:53 | https://www.reddit.com/r/LocalLLaMA/comments/14rp7d1/why_does_gpt_4_give_me_failed_to_load_model/ | Puzzleheaded-Nose-94 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rp7d1 | false | null | t3_14rp7d1 | /r/LocalLLaMA/comments/14rp7d1/why_does_gpt_4_give_me_failed_to_load_model/ | false | false | default | 0 | null |
Intel AI built into CPUs, is it at all useful for text or image generation? What is it? | 31 | Just noticed my CPU (12th gen) has what they call "Integrated AI" and even has "Intel® Gaussian & Neural Accelerator (Intel® GNA)" listed in Device Manager.
See more here:
[https://www.intel.com/content/www/us/en/products/docs/processors/core/intelligent-pc-overview.html](https://www.intel.com/content/www/us/en/products/docs/processors/core/intelligent-pc-overview.html)
This stuff is way above my head but I'm dabbling in text and image generation and this sounds like it should be useful, if not, what IS it useful for? | 2023-07-05T21:56:39 | https://www.reddit.com/r/LocalLLaMA/comments/14roq0p/intel_ai_built_into_cpus_is_it_at_all_useful_for/ | SGAShepp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14roq0p | false | null | t3_14roq0p | /r/LocalLLaMA/comments/14roq0p/intel_ai_built_into_cpus_is_it_at_all_useful_for/ | false | false | self | 31 | {'enabled': False, 'images': [{'id': '7wlnlsb1p8kqETD6MDhVPbujRAXC14mk8BJ3S2O8fCg', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/Hlbf52-zMgk3lZiQPiNlVwXxuQqfSuHNMRFDsvyExVQ.jpg?width=108&crop=smart&auto=webp&s=6a95ee4184562ea6e68359542a9ba4fac4f67356', 'width': 108}, {'height': 142, 'url': 'https://external-preview.redd.it/Hlbf52-zMgk3lZiQPiNlVwXxuQqfSuHNMRFDsvyExVQ.jpg?width=216&crop=smart&auto=webp&s=db96825d26a40d3ff36bd9249696f6589a6c12f4', 'width': 216}, {'height': 211, 'url': 'https://external-preview.redd.it/Hlbf52-zMgk3lZiQPiNlVwXxuQqfSuHNMRFDsvyExVQ.jpg?width=320&crop=smart&auto=webp&s=09df17da18177e610ef5f2a653e8af6a754ba14d', 'width': 320}], 'source': {'height': 387, 'url': 'https://external-preview.redd.it/Hlbf52-zMgk3lZiQPiNlVwXxuQqfSuHNMRFDsvyExVQ.jpg?auto=webp&s=0402e358a1b5b71d7e7e3840908c20761a156712', 'width': 586}, 'variants': {}}]} |
Fine tuning Falcon 7B Instruct | 5 | I am trying to fine tune Falcon-7b-Instruct:
Modules: "query\_key\_value", "dense", "dense\_h\_to\_4h" and "dense\_4h\_to\_h") using peft-LoRA with 64,32 and 16 as ranks tried.
Dataset: 80 page FAQ dataset (structured as question-answer pairs. 280 in total)
I'm observing the following:
1. The model keeps rambling while answering a question
2. Even with low training error, the answers to most of the questions are incorrect
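One thing worth checking (a common cause of rambling) is whether each training example ends with an explicit end-of-text marker, so the model learns where an answer should stop. A minimal sketch of the formatting step; the Q/A template and the `<|endoftext|>` EOS string (Falcon's token) are assumptions to adapt to your tokenizer:

```python
# Sketch: format FAQ pairs with an explicit end-of-answer marker so the
# fine-tuned model learns where a response should stop.
EOS = "<|endoftext|>"  # assumption: Falcon's end-of-text token

def format_example(question: str, answer: str) -> str:
    """Turn one FAQ pair into a single training string with a stop marker."""
    return f"### Question: {question}\n### Answer: {answer}{EOS}"

pairs = [
    ("What is the return policy?", "Items can be returned within 30 days."),
]
dataset = [format_example(q, a) for q, a in pairs]
print(dataset[0])
```

At inference time the same marker can then be used as a stop sequence so generation ends cleanly.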
Any suggestions please on what I can do to improve the model's performance? | 2023-07-05T21:08:14 | https://www.reddit.com/r/LocalLLaMA/comments/14rnfzs/fine_tuning_falcon_7b_instruct/ | anindya_42 | self.LocalLLaMA | 2023-07-06T06:30:45 | 0 | {} | 14rnfzs | false | null | t3_14rnfzs | /r/LocalLLaMA/comments/14rnfzs/fine_tuning_falcon_7b_instruct/ | false | false | self | 5 | null |
Local LLaMa on Android phone | 70 | Wow! I just tried the 'server thats available in llama.cpp on my android phone, and its VERY user friendly. I can keep running this on the go for private chats. (for things that i can't use chatgpt :)
Steps:
1. Install Termux from F-Droid
2. Install gcc from the its-pointless repo
3. Compile llama.cpp and start the server:
```
cmake .
make
./bin/server -m your-ggml-model.bin
```
| 2023-07-05T21:04:50 | https://v.redd.it/vizurn39o7ab1 | AstrionX | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14rncnb | false | {'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/vizurn39o7ab1/DASHPlaylist.mpd?a=1692045165%2CZDVmYWY3YzU0YjIwYmMzYWJmMTBhZTUwNGNhMTEwNGZkNTdiMmFhYzMzMDU3OTY5NmJjNTMxYTRhM2E1NjU0Yg%3D%3D&v=1&f=sd', 'duration': 44, 'fallback_url': 'https://v.redd.it/vizurn39o7ab1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/vizurn39o7ab1/HLSPlaylist.m3u8?a=1692045165%2COTczYTc3NmVmM2RmOWE5OGM1ZDBkMzJhYjcwNDYzOTdlMTBmNDg4NDcyOTI0NGRjNjZhMjczNThhZTk0NWI1Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vizurn39o7ab1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 484}} | t3_14rncnb | /r/LocalLLaMA/comments/14rncnb/local_llama_on_android_phone/ | false | false | 70 | {'enabled': False, 'images': [{'id': 'aXM0Y25iMTlvN2FiMV3eHZlopqjlHOBu_t4lZJR4GWPFEVeS8als-ruFyBYQ', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/aXM0Y25iMTlvN2FiMV3eHZlopqjlHOBu_t4lZJR4GWPFEVeS8als-ruFyBYQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=3960d6409a2008b461e47572959266c5a4dd0edf', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/aXM0Y25iMTlvN2FiMV3eHZlopqjlHOBu_t4lZJR4GWPFEVeS8als-ruFyBYQ.png?width=216&crop=smart&format=pjpg&auto=webp&s=f1ae9d081e64fa58a915d8fd0c079aacca32ad04', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/aXM0Y25iMTlvN2FiMV3eHZlopqjlHOBu_t4lZJR4GWPFEVeS8als-ruFyBYQ.png?width=320&crop=smart&format=pjpg&auto=webp&s=b38b016d6223b863acb475c2836935f0e937083f', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/aXM0Y25iMTlvN2FiMV3eHZlopqjlHOBu_t4lZJR4GWPFEVeS8als-ruFyBYQ.png?width=640&crop=smart&format=pjpg&auto=webp&s=8372036afb13175ac587587780ca4042b8c7b725', 'width': 640}, {'height': 1920, 'url': 
'https://external-preview.redd.it/aXM0Y25iMTlvN2FiMV3eHZlopqjlHOBu_t4lZJR4GWPFEVeS8als-ruFyBYQ.png?width=960&crop=smart&format=pjpg&auto=webp&s=495200297b7b07ceeb8494355324705c6d7ad9ab', 'width': 960}], 'source': {'height': 2388, 'url': 'https://external-preview.redd.it/aXM0Y25iMTlvN2FiMV3eHZlopqjlHOBu_t4lZJR4GWPFEVeS8als-ruFyBYQ.png?format=pjpg&auto=webp&s=7845150eabe8538abb580cd00c62cbf14792a4cc', 'width': 1069}, 'variants': {}}]} | |
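Once the server above is running, it can also be queried programmatically instead of through the browser. A minimal Python sketch; the `/completion` endpoint, the payload fields, and the default port 8080 follow llama.cpp's server example and are assumptions that may change as that example evolves:

```python
import json
from urllib import request

def build_completion_request(prompt: str, n_predict: int = 64) -> dict:
    # Payload fields follow llama.cpp's server example; adjust if the API changes.
    return {"prompt": prompt, "n_predict": n_predict, "temperature": 0.7}

def query_server(prompt: str, url: str = "http://127.0.0.1:8080/completion") -> str:
    """POST a prompt to a running llama.cpp server and return the generated text."""
    payload = json.dumps(build_completion_request(prompt)).encode()
    req = request.Request(url, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # network call; needs the server running
        return json.loads(resp.read())["content"]

# query_server("Hello, llama!")  # uncomment with the server running on port 8080
```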
Considering migrating from Ubuntu 22.04 to Debian 12 | 1 | Hello,
I've been contemplating a switch from Ubuntu 22.04 to Debian 12 and would love to hear about your experiences, especially regarding running GGML inference with full functionality, including GPU acceleration (CUDA).
Recently, I gave Fedora 38 a try, but unfortunately, I encountered compatibility issues with the GGML and GPU acceleration. It seemed to be related to compiler versions since Fedora utilizes a more recent GCC version. I didn't have the opportunity to investigate this thoroughly, so I reverted back to Ubuntu.
Now, I'm curious to know if anyone has installed Debian 12 and successfully achieved smooth inference with all the features of llama.cpp, including GPU acceleration (CUDA). It would be great to hear about your setup, any challenges faced, and how you managed to overcome them.
Thanks! | 2023-07-05T20:31:11 | https://www.reddit.com/r/LocalLLaMA/comments/14rmfcm/considering_migrating_from_ubuntu_2204_to_debian/ | brunomoreirab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rmfcm | false | null | t3_14rmfcm | /r/LocalLLaMA/comments/14rmfcm/considering_migrating_from_ubuntu_2204_to_debian/ | false | false | self | 1 | null |
Triton vs Llama.cpp? | 2 | Can anyone explain the difference to me? It seems, based on my very small understanding, that both options allow for a way to run inference on just a CPU? Are there any pros/cons?
For background, I'm looking into the best way to run various models like falcon or MPT in a datacenter cluster where we have "buckets of RAM" but no GPU cards. We're trying to see what we can get away with running to serve a global user base. | 2023-07-05T20:10:24 | https://www.reddit.com/r/LocalLLaMA/comments/14rluww/triton_vs_llamacpp/ | SigmaSixShooter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rluww | false | null | t3_14rluww | /r/LocalLLaMA/comments/14rluww/triton_vs_llamacpp/ | false | false | self | 2 | null |
Building a Professional QA Chatbot using 4xV100s | 3 | I'm seeking guidance from the community regarding the development of a professional QA chatbot. Due to the nature of my project, I am unable to use ChatGPT and instead require a private language model. I currently have 4xV100s at my disposal and I'm aiming to strike a balance between the speed of inferencing and the quality of the answers generated in an initial pilot phase.
My primary goal is to gather feedback from users to fine-tune the model in the long run. Therefore, at this stage, I'm inclined to prioritize the speed of inferencing. However, I am aware that the performance of 4xV100s might not be sufficient to achieve the desired speed.
I have considered utilizing the Falcon 7b/40b, but I'm open to any suggestions or insights that the community might have regarding alternative approaches/models. My ultimate aim is to build a highly efficient QA chatbot that delivers accurate and prompt responses.
I greatly appreciate any advice or recommendations you can provide. Thank you in advance for your support! | 2023-07-05T19:58:56 | https://www.reddit.com/r/LocalLLaMA/comments/14rljdd/building_a_professional_qa_chatbot_using_4xv100s/ | vinwizard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rljdd | false | null | t3_14rljdd | /r/LocalLLaMA/comments/14rljdd/building_a_professional_qa_chatbot_using_4xv100s/ | false | false | self | 3 | null |
Issues using P40s | 5 | Hi, Im trying to get 2x P40s working on a Asus Z270-p. Ive got 4g decoding on, tried setting everything to gen2. I removed all the drives and m.2s. But it still wont post. If i remove either of the 2 cards it posts just fine and boots. But with 2 it refuses to do anything.
It doesnt matter which slot i use either. Any ideas?
Ive read online i need resizable bar support, but ive been running one of them without problems, it really is only when i try to run 2 at the same time. Is this related to the resizable bar support?
I've also tried with a GPU riser and then it boots up but the card isnt visible.
Anyone have any experience getting 2 P40s working in the same machine, What are the system specs?
edit: I switched to an 11th-gen mobo/CPU with Resizable BAR support and it just flat out worked. | 2023-07-05T19:25:28 | https://www.reddit.com/r/LocalLLaMA/comments/14rkncu/issues_using_p40s/ | ISwearImNotAnAI | self.LocalLLaMA | 2023-07-13T13:12:41 | 0 | {} | 14rkncu | false | null | t3_14rkncu | /r/LocalLLaMA/comments/14rkncu/issues_using_p40s/ | false | false | self | 5 | null |
How to create a LoRA from one book or group of tematic books | 3 | Hi! I am "half-newbie" in LLM, I know how to use (power user), install, solve basic problems and I have a basic knowledge in programming. I know how to use another type of AI too, the Stable Diffusion, I have been studying this program for over seven months. I fond the Oobabooga, KoboldAI and other programs and I think: "It's possible to train the AI to understand RPG books"?
I know how to convert a PDF book to text (DOCX in a topic structure), but I don't know how to convert this text into a format that LLaMA can understand more easily, nor how to organize the text for LoRA training. I know that it is possible to train with unstructured text; however, this reduces the chance of good results. And I don't know what tools to use for that either.
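From what I've seen, trainers in these tools commonly expect one JSON record per example. A sketch of mapping topic sections from a book into that shape; the Alpaca-style field names ("instruction", "input", "output") are an assumption, so adjust them to whatever your trainer expects:

```python
import json

def sections_to_records(sections):
    """Map (topic, body) pairs from a book to Alpaca-style training records."""
    records = []
    for topic, body in sections:
        records.append({
            "instruction": f"Explain the following RPG rule: {topic}",
            "input": "",
            "output": body,
        })
    return records

sections = [("Initiative", "Each round, every character rolls a d20...")]
print(json.dumps(sections_to_records(sections)[0], indent=2))
```

Each record can then be written out as one line of a JSONL file for the training script.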
Can anyone help me with this?
Thanks very much! | 2023-07-05T19:09:13 | https://www.reddit.com/r/LocalLLaMA/comments/14rk7g9/how_to_create_a_lora_from_one_book_or_group_of/ | Shadow_One111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rk7g9 | false | null | t3_14rk7g9 | /r/LocalLLaMA/comments/14rk7g9/how_to_create_a_lora_from_one_book_or_group_of/ | false | false | self | 3 | null |
Training options for tasks unsupported by HF | 1 | Hugging face has a fairly limited number of tasks they support. Are there any other implementations of LoRA that people use where you can customize a bit more, changing final layer and loss function. | 2023-07-05T18:46:29 | https://www.reddit.com/r/LocalLLaMA/comments/14rjkim/training_options_for_tasks_unsupported_by_hf/ | Artistic_Load909 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rjkim | false | null | t3_14rjkim | /r/LocalLLaMA/comments/14rjkim/training_options_for_tasks_unsupported_by_hf/ | false | false | self | 1 | null |
Is it safe to delete these files in my local LLM folder? What am I losing if I get rid of the GIT folder entirely? I'm just a user of this stuff, not a developer. | 0 | ​
https://preview.redd.it/n5eina1mx6ab1.png?width=851&format=png&auto=webp&s=515f4fe032344c8d18fa325c6204550e157a355a | 2023-07-05T18:35:52 | https://www.reddit.com/r/LocalLLaMA/comments/14rj9vv/is_it_safe_to_delete_these_files_in_my_local_llm/ | cleverestx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rj9vv | false | null | t3_14rj9vv | /r/LocalLLaMA/comments/14rj9vv/is_it_safe_to_delete_these_files_in_my_local_llm/ | false | false | 0 | null | |
HuggingChatAllInOne: Run HF and GPTQ models using HuggingChat UI easily | 6 | [HuggingChatAllInOne](https://github.com/bodaay/HuggingChatAllInOne)
Hey everyone,
The main goal of this repo is to make it super easy for anyone to get HuggingChat running as a UI and text-generation inference server, all in one Docker file. I've also included runpods templates to make launching these even easier.
The way this repo is built, it'll be no trouble keeping it in sync with any updates from the HuggingChat UI and Inference Repos. It's all about making things simple, right?
With just a single Docker file, anyone can take a test drive with LLM models and share the link with others for them to have a go at text-generation, all in their own individual sessions. It's like having your own personal test ground!
The HuggingChat UI is really neat, similar to the chatGPT one where each user has their own session and chat history.
For those interested, there are two runpods templates ready to roll - one for HF models and one for GPTQ. Just a heads up though, the GPTQ model support is exclusive to models built with the latest GPTQ-for-LLaMa.
Check out the runpod templates in the GitHub repo linked here. Happy experimenting!
https://preview.redd.it/s7mnxi8uq6ab1.png?width=2453&format=png&auto=webp&s=dea0fb5b59c129e8e7f64bce23c347b64501214c
​ | 2023-07-05T18:16:47 | https://www.reddit.com/r/LocalLLaMA/comments/14riqo5/huggingchatallinone_run_hf_and_gptq_models_using/ | bodaaay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14riqo5 | false | null | t3_14riqo5 | /r/LocalLLaMA/comments/14riqo5/huggingchatallinone_run_hf_and_gptq_models_using/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'JpVrs0sCIibdEY5Co5OrLqyeeam7AaElRlena8NUxso', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zsUwsk7z94weoTmWAWT-oOlXDwqbNZa7DLFiIWINhvc.jpg?width=108&crop=smart&auto=webp&s=b372cbcc0a14432f8758f0ff5be2d8bbed44848b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zsUwsk7z94weoTmWAWT-oOlXDwqbNZa7DLFiIWINhvc.jpg?width=216&crop=smart&auto=webp&s=babc1ad332967099b30cb02a155cd943966778cb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zsUwsk7z94weoTmWAWT-oOlXDwqbNZa7DLFiIWINhvc.jpg?width=320&crop=smart&auto=webp&s=db16bbc134d7a7c242855537b485c88279670a74', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zsUwsk7z94weoTmWAWT-oOlXDwqbNZa7DLFiIWINhvc.jpg?width=640&crop=smart&auto=webp&s=89582ffc708c37cf9473926720c35039b98b9408', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zsUwsk7z94weoTmWAWT-oOlXDwqbNZa7DLFiIWINhvc.jpg?width=960&crop=smart&auto=webp&s=688c57e8025c7882943efb4e0fe3f802914386d5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zsUwsk7z94weoTmWAWT-oOlXDwqbNZa7DLFiIWINhvc.jpg?width=1080&crop=smart&auto=webp&s=b01c006e67aeddf86f995e4e805278da1ffe3b0d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zsUwsk7z94weoTmWAWT-oOlXDwqbNZa7DLFiIWINhvc.jpg?auto=webp&s=a7ea8aeb6e47063704264a172c3c9cfebc6611fe', 'width': 1200}, 'variants': {}}]} | |
SillyTavern 1.8 released! | 100 | 2023-07-05T18:08:19 | https://github.com/SillyTavern/SillyTavern/releases | WolframRavenwolf | github.com | 1970-01-01T00:00:00 | 0 | {} | 14riib1 | false | null | t3_14riib1 | /r/LocalLLaMA/comments/14riib1/sillytavern_18_released/ | false | false | 100 | {'enabled': False, 'images': [{'id': 'bDW7jyCB5L7RKBwRUqrzWSn3bIb_Szu_GogYRebiCjw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=108&crop=smart&auto=webp&s=22d2e1896c94ecebda58fed69478453d4b16fd4f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=216&crop=smart&auto=webp&s=019bd779b582098d4b9aa01b87ee530132195fa6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=320&crop=smart&auto=webp&s=55daeabbed00d9b3c1e7f3207edea4d0a265db39', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=640&crop=smart&auto=webp&s=47d7877d194270162d75f4922c4ecb60b17c101d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=960&crop=smart&auto=webp&s=004f5643d41eee63624b163efc53427073882f4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=1080&crop=smart&auto=webp&s=e6ee7ad7840a9a71890c76db5e4df6a3f669e762', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?auto=webp&s=44d160d8b5087122f25fba2443dc2c5a77adf472', 'width': 1280}, 'variants': {}}]} | ||
How does Processing Prompt [BLAS] work? | 1 | Can someone explain why "Processing Prompt [BLAS] (1024 / 1204 tokens)" occurs on every request? Because of this, the response time is about 400 seconds. How can I reduce it?
https://preview.redd.it/z6me60ydp6ab1.png?width=960&format=png&auto=webp&s=501a74548fcdf5ea92b18f172a6599bd29c06623 | 2023-07-05T17:49:49 | https://www.reddit.com/r/LocalLLaMA/comments/14rhyv2/how_processing_prompt_works_blas/ | roman1338sf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rhyv2 | false | null | t3_14rhyv2 | /r/LocalLLaMA/comments/14rhyv2/how_processing_prompt_works_blas/ | false | false | 1 | null | |
Expanding LLM Context Window | 22 | Spent the day off yesterday diving into how Position Interpolation works and made a quick writeup. These are the main benefits:
* With only 1000 training steps, the context window for LLaMA models ranging in size from 7B to 65B was extended **from 2048 tokens to 32768 tokens**.
* Model quality is preserved for tasks within its original context window sizes
* Models with Position Interpolation can take advantage of their greater context window - with competitive performance on text modeling and long text summarization
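The core trick is simple enough to sketch: instead of extrapolating rotary embeddings to positions the model never saw, position indices in the extended window are rescaled back into the range the model was trained on. A rough illustration (not the paper's code; `dim` and `base` are the usual RoPE hyperparameters):

```python
import math

def rope_angles(pos: float, dim: int = 8, base: float = 10000.0, scale: float = 1.0):
    """Rotary-embedding angles for a (possibly rescaled) position.

    With Position Interpolation, a position m in an extended window of
    length L' is mapped to m / scale (scale = L'/L), so every effective
    position stays inside the window the model was originally trained on.
    """
    m = pos / scale
    return [m / (base ** (2 * i / dim)) for i in range(dim // 2)]

# Position 32768 with a 16x extension behaves like position 2048 did originally:
assert rope_angles(32768, scale=16.0) == rope_angles(2048)
```

The brief fine-tuning then teaches the model to work with these "compressed" position spacings.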
You can check out my full write-up (and subscribe for weekly easy-to-digest NLP updates) [here](https://shaankhosla.substack.com/p/yet-another-way-to-extend-the-context).
​ | 2023-07-05T17:37:05 | https://www.reddit.com/r/LocalLLaMA/comments/14rhmh7/expanding_llm_context_window/ | No-Butterscotch-6654 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rhmh7 | false | null | t3_14rhmh7 | /r/LocalLLaMA/comments/14rhmh7/expanding_llm_context_window/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': '1PHN92PLkTXF0w6hSsddwd0MiEoiT335FbY3YGQDzwk', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/hiSxnz13VpRhICMQat5lA6nbc5xWZgJGblM2hqQ5viY.jpg?width=108&crop=smart&auto=webp&s=941d3dba1d0b9bbcfff7277e0c176792002d435d', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/hiSxnz13VpRhICMQat5lA6nbc5xWZgJGblM2hqQ5viY.jpg?width=216&crop=smart&auto=webp&s=0d51707508d6121c1c6b6df42129d9db22b81c2c', 'width': 216}, {'height': 170, 'url': 'https://external-preview.redd.it/hiSxnz13VpRhICMQat5lA6nbc5xWZgJGblM2hqQ5viY.jpg?width=320&crop=smart&auto=webp&s=7c9822006de3da9d3bf7167f83a356d58676486c', 'width': 320}, {'height': 340, 'url': 'https://external-preview.redd.it/hiSxnz13VpRhICMQat5lA6nbc5xWZgJGblM2hqQ5viY.jpg?width=640&crop=smart&auto=webp&s=ffd4c70f308f4ec85b9cb1bffa036340d35bad14', 'width': 640}, {'height': 510, 'url': 'https://external-preview.redd.it/hiSxnz13VpRhICMQat5lA6nbc5xWZgJGblM2hqQ5viY.jpg?width=960&crop=smart&auto=webp&s=037f019c0d20629e1c1488cc3a4ca4ffc70b9ac0', 'width': 960}], 'source': {'height': 545, 'url': 'https://external-preview.redd.it/hiSxnz13VpRhICMQat5lA6nbc5xWZgJGblM2hqQ5viY.jpg?auto=webp&s=e4973db36fbffbe86fa25818d9e5781daa102224', 'width': 1024}, 'variants': {}}]} |
What am I doing wrong with WizardCoder-15B-v1.0??? | 4 | I can't for the life of me get any sensible code from this model.
I'm using oobabooga as the front end and the instruct prompt is the standard Llama ### Instruct: ### Response: , but when I ask it to generate extremely simple code it just spits out insane levels of garbage. For example:
write python code that will print to 2 columns. In the first column count from 1 to 10, in the second column count from 10 to 1
it responds with this:
for i in range(1,6):
for j in reversed(range(i+5)):
if (j==4 or j==3) and i!=5 : continue
else:
print("{:>3}".format((9-j)+i), end=" ")
print()
print("-"*8+"|"+"-"*7)
for k in range(1,6):
for l in range(k+5):
if ((l<=(k//2)*2+(k%2))or(l>=len(str(((k*(k+1))/2))+1)-1-(k%2)))and not(k==(n:=int((-1+math.sqrt(1+8*l))/2))): #to check whether number of rows are even/odd and adjust accordingly
break
elif abs(-1+abs(float(''.join([chr(ord('@') + int(x)%32) for x in str(num)]))))<=1e-10:
#check perfect square condition
num=((k*(k+1))/2)+(k%2)-(l%(k+1));#formula used to find numbers based on row position while len(set([''.join([y[z] for z in [m]*d]) for y in ['@']*((w:=max([(p:=sum([[q**r for q in p]+[(s:=sum([t**(u:=v+w)])/(u+1)<>(t**(u+1)/(u+1))]])[::-1].index(True)),p]))+1)*(h:=min([(p:=sum([[q**r for q in p]+[(s:=sum([t**(u:=v+w)])/(u+1)<>(t**(u+1)/(u+1))]])[::-1].index(False)),p])),m in [(a:=list(map(lambda _:random(),'abcdefghijklmnopqrstuvwxyz')))[:],[]][::1][:]] for d in [[c:=randint(1,(b:=len(a)//2))[0]-1 for _ in''*w]][:-1]])!= w*h:#checking unique arrangement num+=k; print("{:<3}|".format((' '.join([" "*(width := max([len(_) for _ in set([''.join([y[z] for z in [m]*d]) for m in list(filter(None,[['@',_,''][_[0]==_] for _ in [[' ',o]=='@@'[bool(_[-1])] for o,_ in enumerate(reversed(sorted(set([''.join([y[z] for z in [m]*d]) for y in [_+' '] for _,d in zip(*[[itertools.cycle(a)][::-1][_:]],[_=='@'*w for _ in itertools.repeat('')])*h])))]])) for c,*_,_=zip(*(enumerate(row)))))])+f"{num}")) else: pass
Output:
1 2
-1 |
3 4 5
-2 |
-3 |
6 1 2 3 4 5
-1 |-2|-3|
6
Note: The output may vary as it's generated randomly every time you run this program due to randomization involved with finding permutations using math formula. Also note that there can be multiple solutions depending upon how many digits we have after decimal point which makes our solution more complex than just checking odd/even conditions alone. However, I hope my explanation helps! Let me know your thoughts :). Thank You!!
, MSc Student at University of California San Diego
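For reference, the kind of response I was expecting is only a couple of lines, which is what makes the output above so striking:

```python
# The two-column count the prompt asked for: 1..10 alongside 10..1.
def two_columns():
    return [f"{a:>2} {b:>2}" for a, b in zip(range(1, 11), range(10, 0, -1))]

for line in two_columns():
    print(line)
```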
I have to assume I'm doing something wrong (I've tried dozens of different settings and set the Temperature to 0.01, but I always get junk code -- sometimes it decides that it needs to ignore even or odd numbers, other times it just generates a pattern of "*"). | 2023-07-05T17:14:18 | https://www.reddit.com/r/LocalLLaMA/comments/14rgyty/what_am_i_doing_wrong_with_wizardcoder15bv10/ | GeeBee72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14rgyty | false | null | t3_14rgyty | /r/LocalLLaMA/comments/14rgyty/what_am_i_doing_wrong_with_wizardcoder15bv10/ | false | false | self | 4 | null |
Flan-Open-Llama-13B | 19 | 2023-07-05T16:16:40 | https://twitter.com/enricoshippole/status/1676624675030401026?s=46&t=4Lg1z9tXUANCKLiHwRSk_A | Acrobatic-Site2065 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 14rfam5 | false | {'oembed': {'author_name': 'Enrico Shippole', 'author_url': 'https://twitter.com/EnricoShippole', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Releasing Flan-Open-Llama-13b, an OpenLLaMA model fine-tuned on the FLAN instruction dataset. <a href="https://t.co/WylFkHVr2X">https://t.co/WylFkHVr2X</a></p>— Enrico Shippole (@EnricoShippole) <a href="https://twitter.com/EnricoShippole/status/1676624675030401026?ref_src=twsrc%5Etfw">July 5, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/EnricoShippole/status/1676624675030401026', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_14rfam5 | /r/LocalLLaMA/comments/14rfam5/flanopenllama13b/ | false | false | 19 | {'enabled': False, 'images': [{'id': '9bLRg0I8ZS0XoyaiPFyy_-25YZBXFO1U_UBMsP36La8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/F5zgOKEBW2GfhMHZiuO6Ny9TfMAULpr8mFXV-Sm2Yoc.jpg?width=108&crop=smart&auto=webp&s=7ee521093583c9b8c452daaa361258c69e9185cf', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/F5zgOKEBW2GfhMHZiuO6Ny9TfMAULpr8mFXV-Sm2Yoc.jpg?auto=webp&s=9955c25df9ca6b7f7d4dff817686230121e8e19d', 'width': 140}, 'variants': {}}]} | ||
Does ExLlama not truncate the prompt length? | 10 | I have a 3090. I am currently running a SuperHOT model with max_seq_len = 3840 and compress_pos_emb = 2.
It seems to work but if I go over the prompt limit, I get the following error:
RuntimeError: start (0) + length (3841) exceeds dimension size (3840).
I'm not sure why this is a problem. There's a setting in the text-generation-webui parameters tab called "Truncate the prompt up to this length". As long as you set it the same as your max_seq_len, it will truncate the prompt to remove everything over that limit so the prompt does not overflow.
For instance.
Harry is a Rabbit. **Harry drives a red car**
**###Instruction: Continue the story**
**### Response:**
This prompt has 28 tokens. If you truncate the prompt to 22 (plus max_new_tokens), it will ignore the fact that Harry is a rabbit.
It worked for GPTQ-for-LLaMa models. I've had stories that went on for 20,000 tokens. Is that not the same for ExLlama models? I've even reduced the truncate limit, but that value seems to be ignored by ExLlama.
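For reference, my mental model of what the webui's truncation does is just keeping the most recent tokens so that prompt plus generation fits the window. A simplified sketch (using word-like tokens here instead of real token IDs):

```python
def truncate_prompt(tokens, truncation_length: int, max_new_tokens: int):
    """Keep only the most recent tokens so prompt + generation fits the window."""
    keep = truncation_length - max_new_tokens
    return tokens if len(tokens) <= keep else tokens[-keep:]

prompt = ["Harry", "is", "a", "Rabbit", ".", "Harry", "drives", "a", "red", "car"]
# With a tight budget, the oldest tokens ("Harry is a Rabbit") are dropped:
print(truncate_prompt(prompt, truncation_length=8, max_new_tokens=2))
```

That is the behavior I expected to carry over to ExLlama, instead of the RuntimeError above.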
I really want this to work. I don't want to go back to GPTQ-for-LLaMa models, since their context windows are so small. World information by itself takes up a very large amount of context; there's little left to continue the story. | 2023-07-05T15:50:23 | https://www.reddit.com/r/LocalLLaMA/comments/14rejf6/does_exllama_not_truncate_the_prompt_length/ | zippyfan | self.LocalLLaMA | 2023-07-05T16:06:35 | 0 | {} | 14rejf6 | false | null | t3_14rejf6 | /r/LocalLLaMA/comments/14rejf6/does_exllama_not_truncate_the_prompt_length/ | false | false | self | 10 | null |
GPT4 for data annotation | 1 |
I am trying to use GPT-4/GPT-3.5 for data annotation to prepare supposed gold-standard data to train my own smaller models (like Orca), and I am unable to. Most times, it does not even return valid JSON. Even if it does, it does not give consistent results even if I set temperature to zero. There are times it is consistent, but then it does not follow instructions and hallucinates. My use case is mostly large prompts (greater than 8k tokens) where I send in unformatted data and an instruction to extract structured data from it.
I went through wizardLM and Orca and other papers from MS, and they do not seem to have this problem, mostly because their prompts are not long.
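So far my only workaround is a validate-and-retry loop around the call, sketched below with the model call abstracted as a plain callable (nothing here is OpenAI-specific; on a parse failure the error is fed back so the model can self-correct):

```python
import json

def annotate(prompt: str, complete, max_retries: int = 3):
    """Call `complete(prompt)` until it returns valid JSON, else return None.

    `complete` is any function str -> str (e.g. a wrapper around an LLM API).
    """
    msg = prompt
    for _ in range(max_retries):
        raw = complete(msg)
        try:
            return json.loads(raw)
        except json.JSONDecodeError as err:
            msg = (f"{prompt}\n\nYour last reply was not valid JSON ({err}). "
                   "Reply with JSON only.")
    return None

# A stub model that fails once, then succeeds:
replies = iter(['not json', '{"name": "Ada"}'])
result = annotate("Extract the name as JSON.", lambda m: next(replies))
print(result)  # {'name': 'Ada'}
```

This fixes the malformed-JSON failures but not the hallucinations.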
How do I get around this problem? Even if I do not find answer to this question, I am okay to just leave this post here, just as a heads up for anyone trying to use GPT4 to prepare gold standard data. | 2023-07-05T12:29:32 | https://www.reddit.com/r/LocalLLaMA/comments/14r9awd/gpt4_for_data_annotation/ | mr_dark_matter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14r9awd | false | null | t3_14r9awd | /r/LocalLLaMA/comments/14r9awd/gpt4_for_data_annotation/ | false | false | default | 1 | null |
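One client-side mitigation for the invalid-JSON problem described above is to validate (and retry) before accepting an annotation. This is a generic sketch — `call_model` is a hypothetical stand-in for the GPT-4 API call, not an OpenAI function:

```python
import json
import re

def extract_json(text):
    """Try to pull the first JSON object out of a model response.
    Returns the parsed object, or None if nothing parses."""
    try:
        return json.loads(text)  # best case: the reply is pure JSON
    except json.JSONDecodeError:
        pass
    match = re.search(r"\{.*\}", text, re.DOTALL)  # grab the outermost braces
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            return None
    return None

def annotate(call_model, prompt, retries=3):
    """Call the model until it returns parseable JSON, or give up."""
    for _ in range(retries):
        parsed = extract_json(call_model(prompt))
        if parsed is not None:
            return parsed
    return None

# Simulated model that wraps its answer in chatter.
replies = iter(['Sure! Here you go: {"label": "positive"}'])
print(annotate(lambda p: next(replies), "classify ..."))  # {'label': 'positive'}
```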
Deploying models on HuggingFace | 1 | Sorry if this has been asked before: I'd like to deploy conversationals LLMs with HuggingFace (I'm looking for cheaper GPT3.5 replacements, and I a willing to take a drop in answer quality).
However mosts models I'm considering need large amounts of memory (for example, TheBloke/vicuna-13B-1.1-HF requires 26GB).
I don't know how to deploy those with Endpoint Inference. Do I need a A100? Isn't there a different way to proceed? | 2023-07-05T10:31:20 | https://www.reddit.com/r/LocalLLaMA/comments/14r6uhx/deploying_models_on_huggingface/ | Lesterpaintstheworld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14r6uhx | false | null | t3_14r6uhx | /r/LocalLLaMA/comments/14r6uhx/deploying_models_on_huggingface/ | false | false | default | 1 | null |
Help presenting key differences between two different versions of a document | 10 | Hey,
not sure if an LLM would suit this best, but basically I would like to get the key differences between two different versions of our terms and conditions, each being some 5 pages long, so I don’t think feeding it to GPT via API is an option. I haven’t tinkered with LLMs locally (yet), but as I am building an investment calculation on whether we should buy a workstation to run different tasks locally, if we could solve this (and similar) problems by running a model locally then obviously it would be a plus in my books.
So basically looking for help and tips, if anyone did something similar? Thanks | 2023-07-05T09:08:08 | https://www.reddit.com/r/LocalLLaMA/comments/14r57nk/help_presenting_key_differences_between_two/ | MRWONDERFU | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14r57nk | false | null | t3_14r57nk | /r/LocalLLaMA/comments/14r57nk/help_presenting_key_differences_between_two/ | false | false | self | 10 | null |
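For the comparison task above, a plain deterministic diff already isolates what changed between the two versions, and only the changed hunks (a few lines) need to be sent to a model for summarization. A stdlib sketch with invented example clauses:

```python
import difflib

old = ["Refunds are issued within 30 days.", "Disputes go to arbitration."]
new = ["Refunds are issued within 14 days.", "Disputes go to arbitration."]

diff = list(difflib.unified_diff(old, new, fromfile="v1", tofile="v2", lineterm=""))
for line in diff:
    print(line)
# Only the changed clauses survive; feed just these lines to the model
# with a prompt like "Summarize the legal effect of these changes."
```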
how to distill llm's in-context learning, any suggestions? | 1 | **how to distill llm's in-context learning, any suggestions?** | 2023-07-05T09:04:46 | https://www.reddit.com/r/LocalLLaMA/comments/14r55b7/how_to_distill_llms_incontext_learning_any/ | MousaicFall | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14r55b7 | false | null | t3_14r55b7 | /r/LocalLLaMA/comments/14r55b7/how_to_distill_llms_incontext_learning_any/ | false | false | default | 1 | null |
Orca mini 3B is, as you might expect, not a genius level intellect | 63 | ​
[I tried to write a little program that uses llama.cpp to generate writing samples, for reasons. It has successfully written hundreds of essays already, but occasionally it just refuses for dumb reasons.](https://preview.redd.it/ggsc77g1n3ab1.png?width=1741&format=png&auto=webp&s=047ae275d8fb28d4afe0465217d9464d8e0e49cf) | 2023-07-05T07:41:00 | https://www.reddit.com/r/LocalLLaMA/comments/14r3lhe/orca_mini_3b_is_as_you_might_expect_not_a_genius/ | Robot_Graffiti | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14r3lhe | false | null | t3_14r3lhe | /r/LocalLLaMA/comments/14r3lhe/orca_mini_3b_is_as_you_might_expect_not_a_genius/ | false | false | 63 | null | |
prompting orca-mini-v2_7b.ggmlv3.q8_0.bin | 7 | It didn't quite get the answer correct to the problem:
[https://paste.c-net.org/SuggestVance](https://paste.c-net.org/SuggestVance)
I told it to "think through step by step" and set temperature to 0.
Can someone give me tips to get better results from this model? thanks | 2023-07-05T06:55:35 | https://www.reddit.com/r/LocalLLaMA/comments/14r2r15/prompting_orcaminiv2_7bggmlv3q8_0bin/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14r2r15 | false | null | t3_14r2r15 | /r/LocalLLaMA/comments/14r2r15/prompting_orcaminiv2_7bggmlv3q8_0bin/ | false | false | self | 7 | null |
How do you choose LLM for local inference? | 5 | What criteria do you use to select the most appropriate LLMs for local inference? Is there a specific LLM architecture you prefer? Do you consider support for CUDA/Metal? What about the quantization size or the amount of RAM required?
Which of these (or any other) factors are the most crucial to your selection process? Thanks! | 2023-07-05T06:47:27 | https://www.reddit.com/r/LocalLLaMA/comments/14r2ljj/how_do_you_choose_llm_for_local_inference/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14r2ljj | false | null | t3_14r2ljj | /r/LocalLLaMA/comments/14r2ljj/how_do_you_choose_llm_for_local_inference/ | false | false | self | 5 | null |
What is the best text web ui currently? | 32 | Anyone know which one has the most models supported and the fastest web UI? Or at least what everyone is using. I've seen [https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) and [https://github.com/ParisNeo/lollms-webui](https://github.com/ParisNeo/lollms-webui).
im wondering if there are any better ones out there, also what configuration do you all use to run models? (like what type of models, GPTQ/ggml or whatever) | 2023-07-05T04:39:42 | https://www.reddit.com/r/LocalLLaMA/comments/14r0ar1/what_is_the_best_text_web_ui_currently/ | ArrodesDev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14r0ar1 | false | null | t3_14r0ar1 | /r/LocalLLaMA/comments/14r0ar1/what_is_the_best_text_web_ui_currently/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': '0Mf47iIzydtRJnU06dNsQpHHYsNgVZDQKhZB6wz-XK0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tKzDrXC-irF0dIy-F4Sw2BKUGKg2LT-wl7YAeU6WJtA.jpg?width=108&crop=smart&auto=webp&s=feebb78e5768ba366d52d83bf17c105afd5fc4c4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tKzDrXC-irF0dIy-F4Sw2BKUGKg2LT-wl7YAeU6WJtA.jpg?width=216&crop=smart&auto=webp&s=09b37b0520a9f23fde1bba4a35a18af0e2029be6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tKzDrXC-irF0dIy-F4Sw2BKUGKg2LT-wl7YAeU6WJtA.jpg?width=320&crop=smart&auto=webp&s=7a4fb61e368b9aeac199bba67fc437c6cd84046b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tKzDrXC-irF0dIy-F4Sw2BKUGKg2LT-wl7YAeU6WJtA.jpg?width=640&crop=smart&auto=webp&s=3b347ab869757eefebe2bb1ced27100c40270f2a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tKzDrXC-irF0dIy-F4Sw2BKUGKg2LT-wl7YAeU6WJtA.jpg?width=960&crop=smart&auto=webp&s=f4c2ac9b6bf853ea9c3e85f83b3a327495590e3c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tKzDrXC-irF0dIy-F4Sw2BKUGKg2LT-wl7YAeU6WJtA.jpg?width=1080&crop=smart&auto=webp&s=b82cb5772b4d8cf9e3fb74aa02c0a4749968b031', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tKzDrXC-irF0dIy-F4Sw2BKUGKg2LT-wl7YAeU6WJtA.jpg?auto=webp&s=15ed3c06ff698f9dc54db1f72b3bb69bb8adb8df', 'width': 1200}, 'variants': {}}]} |
Experimental open-source interface for interacting with various generative neural networks. | 32 | 2023-07-05T04:02:28 | https://www.reddit.com/gallery/14qzm0f | Svgsprite | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14qzm0f | false | null | t3_14qzm0f | /r/LocalLLaMA/comments/14qzm0f/experimental_opensource_interface_for_interacting/ | false | false | 32 | null | ||
I am having some issues with ooba | 1 | When loading a ggml model, the command line used to show something like "gpu accelerated", but now I'm not seeing that anymore. Also, it seems that when I unload a model, it stays in RAM.
Should I do a reinstall? Any other suggestions?
2023-07-04 22:21:51 INFO:llama.cpp weights detected: models\TheBloke_WizardLM-30B-Uncensored-GGML\WizardLM-30B-Uncensored.ggmlv3.q4_0.bin
2023-07-04 22:21:51 INFO:Cache capacity is 0 bytes
llama.cpp: loading model from models\TheBloke_WizardLM-30B-Uncensored-GGML\WizardLM-30B-Uncensored.ggmlv3.q4_0.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32001
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 6656
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 52
llama_model_load_internal: n_layer = 60
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 17920
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 30B
llama_model_load_internal: ggml ctx size = 0.13 MB
llama_model_load_internal: mem required = 19756.67 MB (+ 3124.00 MB per state)
llama_new_context_with_model: kv self size = 3120.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
2023-07-04 22:22:24 INFO:Loaded the model in 33.31 seconds.
​ | 2023-07-05T03:25:05 | https://www.reddit.com/r/LocalLLaMA/comments/14qyx94/i_am_having_some_issues_with_ooba/ | L_vE_scRmBLd_wRds | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qyx94 | false | null | t3_14qyx94 | /r/LocalLLaMA/comments/14qyx94/i_am_having_some_issues_with_ooba/ | false | false | default | 1 | null |
Local training 2GPU methods and benchmarks, with/without NVlink | 12 | Curious if anyone has has trained models that don’t fit on one GPU.
My understanding is there are a few methods to do this:
PP- pipeline parallelism
ZeRO
TP - tensor parallelism
According to Hugging Face these should be about the same with NVLink, and PP should be the fastest without it. I haven’t seen solid benchmarks showing this though. Does anyone have a link to one?
I believe accelerate should have these all implemented through integrations with DeepSpeed and Megatron-LM. I have yet to see solid tutorials on this. Accelerate doesn’t show solid docs on picking your method of model parallelism as far as I know.
I believe for PP you need to convert the model to be sequential. Are there any libraries that can do this for you?
I’d love to finetune a quantized Falcon locally; as I don’t have NVLink, I would assume PP would be fastest. | 2023-07-05T03:03:22 | https://www.reddit.com/r/LocalLLaMA/comments/14qyief/local_training_2gpu_methods_and_benchmarks/ | Artistic_Load909 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qyief | false | null | t3_14qyief | /r/LocalLLaMA/comments/14qyief/local_training_2gpu_methods_and_benchmarks/ | false | false | self | 12 | null |
CPU Inference Time Falcon-7b | 5 | I'm having a hard time understanding inference time on CPU. Could someone guide me on how to estimate inference time for max_length=2048, k=10, do_sample=True? I need to run this on a notebook. Here are my laptop specs:
Intel 11th Gen i7-11800H @ 2.30Ghz 16 Cpus ~ 2.3Ghz 32GB RAM. My GPU is NVIDIA GeForce RTX 3060 Laptop with 6GB of VRAM.
Unfortunately, I only have 6GB of VRAM and haven't had luck running transformers >=7B on the GPU. | 2023-07-05T03:03:18 | https://www.reddit.com/r/LocalLLaMA/comments/14qyics/cpu_inference_time_falcon7b/ | Reasonable-Bowler-54 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qyics | false | null | t3_14qyics | /r/LocalLLaMA/comments/14qyics/cpu_inference_time_falcon7b/ | false | false | self | 5 | null |
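For rough planning, CPU generation time scales roughly linearly with the number of new tokens, and the per-token cost is often bounded below by memory bandwidth (each token reads the whole weight file once). A back-of-the-envelope sketch — all numbers here are illustrative assumptions, not measurements of the hardware above:

```python
def estimate_generation_seconds(new_tokens, ms_per_token):
    """Naive estimate: generation is linear in the number of new tokens."""
    return new_tokens * ms_per_token / 1000.0

def ms_per_token_from_bandwidth(model_bytes, bandwidth_bytes_per_s):
    """Lower bound: every generated token reads all weights once."""
    return model_bytes / bandwidth_bytes_per_s * 1000.0

# Hypothetical numbers: a 7B model quantized to ~4 GB, ~40 GB/s of
# effective dual-channel memory bandwidth.
floor = ms_per_token_from_bandwidth(4e9, 40e9)        # ~100 ms/token at best
print(round(floor), "ms/token lower bound")
print(estimate_generation_seconds(512, 150), "s for 512 tokens at 150 ms/token")
```

The practical takeaway: measure ms/token on your own machine with a short run first, then multiply by your target max_length.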
When you train a LoRA at 256 token cutoff length/context window, what does that mean for its behavior? | 7 | I know that the training process itself is only going to look at 256 token chunks at once, and the typical llama model is trained/finetuned at 2048 token context. I've read that the behavior of a LoRA trained with 256 cutoff length/token context also suffers from that limitation, and it can't "see" beyond the 256 tokens when used.
Once you start using a model with a 256 token context LoRA, is the 256 token block positioned against the "end" of the context, aka where the LLM is currently outputting? Will it only affect behavior utilizing what is seen within that 256 token context window?
If a LLM with this LoRA is inferring, and it has instructions in that 256 token window in its context that are telling it to do stuff the LoRA trained it to do, and it also has instructions beyond that (say filling up the full 2048 token context) telling it to do more stuff, is it able to follow all the instructions (256 token context LoRA, the rest of the 2048 token context) without getting scrambled? Basically, does the LoRA interfere with its normal ability to handle 2048 token context, and can it merge the 256 token context LoRA behavior into that normal ability without getting messed up? | 2023-07-04T23:24:55 | https://www.reddit.com/r/LocalLLaMA/comments/14qu6nz/when_you_train_a_lora_at_256_token_cutoff/ | SoylentMithril | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qu6nz | false | null | t3_14qu6nz | /r/LocalLLaMA/comments/14qu6nz/when_you_train_a_lora_at_256_token_cutoff/ | false | false | self | 7 | null |
Isn't GPT-2 enough for most tasks nowadays? Why should I use a really large model? | 1 | [removed] | 2023-07-04T21:53:05 | https://www.reddit.com/r/LocalLLaMA/comments/14qs4vh/isnt_gpt2_enough_for_most_tasks_nowadays_why/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qs4vh | false | null | t3_14qs4vh | /r/LocalLLaMA/comments/14qs4vh/isnt_gpt2_enough_for_most_tasks_nowadays_why/ | false | false | default | 1 | null |
CPU or GPU for inference only ? | 5 | Hey, I'm currently thinking of building something to do inference only with LLMs.
Does a GPU make a difference or can I run using only a CPU?
Do you think the new AM5/DDR5 combo is more relevant for this kind of task than AM4/DDR4?
Also, if I want to use diffusion models for image generation, is a GPU required?
Many thanks ✌️ | 2023-07-04T21:46:05 | https://www.reddit.com/r/LocalLLaMA/comments/14qryw5/cpu_or_gpu_for_inference_only/ | 1PLSXD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qryw5 | false | null | t3_14qryw5 | /r/LocalLLaMA/comments/14qryw5/cpu_or_gpu_for_inference_only/ | false | false | self | 5 | null |
I measured the effect of RAM overclocking on LLM speed. | 57 | I tested how RAM speed affects generation speed. In short, an 11% increase in RAM frequency leads to a 6% increase in generation speed. The details follow:
The test setup was an AMD Ryzen 9 3950X and 64GB RAM (Kingston Renegade)
I started the model like this: `.\koboldcpp.exe --model airoboros-65b-gpt4-1.4.ggmlv3.q5_K_M.bin --highpriority --threads 16 --usecublas --stream` then used the instruction mode to give it a task: "Write a story about plumber who was disrespected with a plunger." 3 times and recorded the generation speed reported by the console.
First run: memory at XMP profile, 18-22-22-39 3600MHz.
Ms per token: 1073 1070 1068 = avg 1070
Second run, the same at 4000MHz.
Ms per token: 1018 1010 1000 = avg 1009 | 2023-07-04T21:34:50 | https://www.reddit.com/r/LocalLLaMA/comments/14qrp0s/i_measured_the_effect_of_ram_overclocking_on_llm/ | Barafu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qrp0s | false | null | t3_14qrp0s | /r/LocalLLaMA/comments/14qrp0s/i_measured_the_effect_of_ram_overclocking_on_llm/ | false | false | self | 57 | null |
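The two headline percentages follow directly from those measurements; since ms-per-token is the inverse of speed, the speedup is computed from the ratio of the averages:

```python
freq_gain = 4000 / 3600 - 1                    # RAM frequency increase
ms_before, ms_after = 1070, 1009               # avg ms per token from the runs
speed_gain = ms_before / ms_after - 1          # tokens/s is 1/(ms per token)

print(f"RAM frequency: +{freq_gain:.1%}")      # +11.1%
print(f"Generation speed: +{speed_gain:.1%}")  # +6.0%
```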
Huggingface alternative | 39 | I'm currently downloading a model from huggingface with 200 KB/s. It should be 100x as fast. Has anybody experienced that? Does anyone download their LLMs from a different source? I've recently stumbled upon [ai.torrents.luxe](https://ai.torrents.luxe) but it's not up to date and lacks many (especially ggml) models.
I think torrents are very suitable for distributing LLMs. | 2023-07-04T18:43:14 | https://www.reddit.com/r/LocalLLaMA/comments/14qncmy/huggingface_alternative/ | Mefi282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qncmy | false | null | t3_14qncmy | /r/LocalLLaMA/comments/14qncmy/huggingface_alternative/ | false | false | self | 39 | null |
How many of you built a rig specifically to self host LLMs as a hobby? | 214 | So, I am curious how many folks have invested in hardware to run local LLMs. I am not talking about ML/AI pros who work for tech companies and who buy this stuff anyway to earn a living--talking about hobbyists: what are your use cases and do you feel it was worth it?
Building a home server right now, and was curious if I should invest in extra RAM and a 4090 for this--curious to hear about other people. | 2023-07-04T18:12:34 | https://www.reddit.com/r/LocalLLaMA/comments/14qmk3v/how_many_of_you_built_a_rig_specifically_to_self/ | Infinite100p | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qmk3v | false | null | t3_14qmk3v | /r/LocalLLaMA/comments/14qmk3v/how_many_of_you_built_a_rig_specifically_to_self/ | false | false | self | 214 | null |
How does LLaMA.cpp run on the CPU? | 1 | [removed] | 2023-07-04T18:11:54 | https://www.reddit.com/r/LocalLLaMA/comments/14qmji1/how_does_llamacpp_run_on_the_cpu/ | Frequent-Let-7893 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qmji1 | false | null | t3_14qmji1 | /r/LocalLLaMA/comments/14qmji1/how_does_llamacpp_run_on_the_cpu/ | false | false | default | 1 | null |
Did anyone try to benchmark LLM's for coding against each other and against proprietary ones like Copilot X? | 18 | Curious what the state of open source LLMs vs closed ones is right now.
E.g., how does starcoder compare against Copilot? | 2023-07-04T18:08:42 | https://www.reddit.com/r/LocalLLaMA/comments/14qmggp/did_anyone_try_to_benchmark_llms_for_coding/ | Infinite100p | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qmggp | false | null | t3_14qmggp | /r/LocalLLaMA/comments/14qmggp/did_anyone_try_to_benchmark_llms_for_coding/ | false | false | self | 18 | null |
How does LLaMA.cpp run on the CPU? | 1 | [removed] | 2023-07-04T17:51:36 | https://www.reddit.com/r/LocalLLaMA/comments/14qm063/how_does_llamacpp_run_on_the_cpu/ | Old-Leadership-3316 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qm063 | false | null | t3_14qm063 | /r/LocalLLaMA/comments/14qm063/how_does_llamacpp_run_on_the_cpu/ | false | false | default | 1 | null |
[Discussion] Is the OpenLLM Leaderboard not reliable? (https://huggingface.co/blog/evaluating-mmlu-leaderboard) | 2 | After the recent discussion, that showed the MMLU evaluation numbers of the current top model (Falcon-40B) on the leaderboard were were significantly lower than the numbers in the [published LLaMa paper](https://arxiv.org/abs/2302.13971). There are serious doubts whether other scores in the leaderboard also have the same issues.
Is LLama still the king of open-source architectures rivaling GPT-3? Has anybody fine-tuned OpenLLama and compared the performance?
I am building a commercial application that requires extremely good Text Summarization and Question-Answer capabilities.
I have personally benchmarked Vicuna-7B and that seems to perform quite well. It's a shame it cannot be used commercially.
What models should I consider? (Smaller models - around 7B are preferred) | 2023-07-04T17:37:16 | https://www.reddit.com/r/LocalLLaMA/comments/14qln89/discussion_is_the_openllm_leaderboard_not/ | awinml1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qln89 | false | null | t3_14qln89 | /r/LocalLLaMA/comments/14qln89/discussion_is_the_openllm_leaderboard_not/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
Prompting tips for short outputs. | 2 | I'm building a voice assistant program that allows you to speak your commands into a microphone and hear a spoken response from your AI assistant. It's moving along well, however one issue is that I suffer from long speech generation times while using tortoise-tts-fast, (which is expected). I'd like to speed up my execution time by shortening the LLM response length, as I prefer the quality and flexibility of tortoise over something like bark.
1) How can I prompt my model to respond briefly (ideally 1-2 sentences, or maybe up to a soft word count)? I'm using the model primarily for chatting.
2) What other TTS libraries do you recommend I try out? Note that the goal of this project is to run completely local with no API access (ruling out a few TTS options).
I'm open to any advice or help, thanks!
​ | 2023-07-04T16:56:24 | https://www.reddit.com/r/LocalLLaMA/comments/14qkkq2/prompting_tips_for_short_outputs/ | yeetblaster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qkkq2 | false | null | t3_14qkkq2 | /r/LocalLLaMA/comments/14qkkq2/prompting_tips_for_short_outputs/ | false | false | self | 2 | null |
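For question 1 above, besides asking for brevity in the prompt, a robust fallback is to hard-trim the model's reply before it reaches TTS. A stdlib sketch (the sentence splitter is a naive regex, good enough for chat):

```python
import re

def first_sentences(text, n=2):
    """Keep at most n sentences; splits on ., ! or ? followed by whitespace."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(parts[:n])

reply = "Sure! The weather looks clear today. You might want a light jacket. Anything else?"
print(first_sentences(reply, 2))  # "Sure! The weather looks clear today."
```

Trimming before synthesis caps the tortoise-tts workload regardless of how chatty the model is feeling.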
How can I make my LLaMA 13b model to initiate conversations and kept conversation alive? | 22 | I need some help with my LLaMA 13b model, which I've finetuned to be a girlfriend. The problem is the conversation runs dry pretty quickly because I don't know how to talk to girls
Here are a few specific questions I have:
1. How can I encourage my LLaMA 13b model to initiate conversations? Are there any specific prompts or techniques that have worked for others in similar situations?
2. Are there any strategies to keep the conversation alive?
Thx. | 2023-07-04T16:01:35 | https://www.reddit.com/r/LocalLLaMA/comments/14qj4y3/how_can_i_make_my_llama_13b_model_to_initiate/ | catgirl_code_in_lisp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qj4y3 | false | null | t3_14qj4y3 | /r/LocalLLaMA/comments/14qj4y3/how_can_i_make_my_llama_13b_model_to_initiate/ | false | false | self | 22 | null |
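One common approach to both questions above is to bake initiative into the character prompt and append a nudge when the user's message is short. A purely illustrative template sketch — the persona text, nudge wording, and 4-word threshold are all made-up assumptions:

```python
PERSONA = (
    "You are Mia, a warm and curious girlfriend. You take initiative: "
    "you ask follow-up questions, bring up new topics, and share small "
    "stories about your day instead of only reacting."
)

def build_prompt(history, user_message):
    """Assemble the prompt, nudging the model when the chat risks dying."""
    nudge = ""
    if len(user_message.split()) < 4:  # short reply -> conversation stalling
        nudge = "\n(Mia should change the subject or ask an open question.)"
    return f"{PERSONA}\n{history}\nUser: {user_message}{nudge}\nMia:"

print(build_prompt("User: hi\nMia: hey you!", "ok"))
```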
Would a vram pcie expansion card be possible in the future? | 1 | Edit: OK clearly this wasn't a good idea, but thank you to everyone who commented. I've learned a lot and these was some interesting bits of technical knowledge shared. I'll leave this up for anyone who has a similar idea and wants to learn. Original post below.
Obviously there might not be a huge market for it so setting that aside for a second...
Would a pcie expansion card covered in, say, 48gb of gddr6 be a possible product in the future? As in the processing would still be done by the main gpu and be accessed over the pcie bus. I'm imagining it wouldn't use a huge amount of power (maybe even just sata or even just the pcie bus) and it could probably be single slot with a passive heatsink or at most a small fan so you could in theory stack them.
Obviously everything is possible with enough time and money invested in it but would it be relatively easy or require a whole code rewrite?
This would probably still be expensive but with Nvidia trying their best to keep vram down unless you buy a 90 class or higher, it might still come out ahead. Thoughts? | 2023-07-04T15:43:30 | https://www.reddit.com/r/LocalLLaMA/comments/14qinru/would_a_vram_pcie_expansion_card_be_possible_in/ | TheSilentFire | self.LocalLLaMA | 2023-07-04T19:05:35 | 0 | {} | 14qinru | false | null | t3_14qinru | /r/LocalLLaMA/comments/14qinru/would_a_vram_pcie_expansion_card_be_possible_in/ | false | false | self | 1 | null |
If you owned a nvidia tesla a100, what would you do with it? | 19 | I see a lot of people interested in hosting local LLMs but I’m very curious if you had one of these and the hardware wouldn’t be a limit, what would build/do.
Seriously replies please! | 2023-07-04T15:16:18 | https://www.reddit.com/r/LocalLLaMA/comments/14qhz3h/if_you_owned_a_nvidia_tesla_a100_what_would_you/ | mehrdotcom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qhz3h | false | null | t3_14qhz3h | /r/LocalLLaMA/comments/14qhz3h/if_you_owned_a_nvidia_tesla_a100_what_would_you/ | false | false | self | 19 | null |
Retrieving a list of movies from a natural language query, given their plots | 2 | Hi All,
I have a requirement to develop an application, which requires me to retrieve a list of movies based on a user query. I also need all models that I will be using, to run locally on my computer.
I have a dataset of around 1000 movies and their corresponding plots. The plots are 3-4 lines long. The query would ask for movies based on certain conditions in the plot.
For example, if someone queries "Give me a list of all movies which involve aliens attacking earth", I would like my app to return with results like "Avengers: End Game, War Of The Worlds, Edge Of Tomorrow, ..... " etc.
This is not compulsory, but I would also like it to be easy to add and remove movies. (It would be nice if I don't need to retrain the model from scratch.) I have come across the concept of vector databases, but I am not sure if they are suitable. Based on my understanding, they are based on calculating cosine similarities of text embeddings. But maybe the user query and the corresponding movie plots will not have similar embeddings for my use-case?
Can you all please guide me on what approach I can take?
​ | 2023-07-04T14:30:32 | https://www.reddit.com/r/LocalLLaMA/comments/14qgs2e/retrieving_a_list_of_movies_from_a_natural/ | jlteja | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qgs2e | false | null | t3_14qgs2e | /r/LocalLLaMA/comments/14qgs2e/retrieving_a_list_of_movies_from_a_natural/ | false | false | self | 2 | null |
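On the vector-database question above: sentence-embedding models are trained so that a query like "aliens attacking earth" lands near plots describing an invasion even without shared words, and adding or removing a movie is just inserting or deleting one vector — no retraining. A sketch of the retrieval step, with tiny hand-made 3-d vectors standing in for real embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy stand-ins for embeddings of each movie plot.
movies = {
    "War of the Worlds": [0.9, 0.1, 0.0],   # alien invasion
    "Edge of Tomorrow":  [0.8, 0.3, 0.1],   # alien invasion + time loop
    "The Notebook":      [0.0, 0.1, 0.9],   # romance
}
query = [0.9, 0.2, 0.0]  # embedding of "movies where aliens attack earth"

ranked = sorted(movies, key=lambda m: cosine(query, movies[m]), reverse=True)
print(ranked)  # alien movies first, romance last
```

In a real pipeline, a local embedding model produces the vectors and a vector store handles the sorting, but the ranking logic is exactly this.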
How To Fine-Tune LLaMA, OpenLLaMA, And XGen, With JAX On A GPU Or A TPU | 17 | Hello,
Fine-tuning your own large language model is the best way to achieve state-of-the-art results, even better than ChatGPT or GPT-4, especially if you fine-tune a modern AI model like LLaMA, OpenLLaMA, or XGen.
Properly fine-tuning these models is not necessarily easy though, so I made an A to Z tutorial about fine-tuning these models with JAX on both GPUs and TPUs, using the EasyLM library.
Here it is: [https://nlpcloud.com/how-to-fine-tune-llama-openllama-xgen-with-jax-on-tpu-gpu.html](https://nlpcloud.com/how-to-fine-tune-llama-openllama-xgen-with-jax-on-tpu-gpu.html?utm_source=reddit&utm_campaign=i859w625-3816-11ed-a261-0242ac140016)
I hope it will be helpful! If you think that something is missing in this tutorial please let me know!
Julien | 2023-07-04T14:26:32 | https://www.reddit.com/r/LocalLLaMA/comments/14qgohb/how_to_finetune_llama_openllama_and_xgen_with_jax/ | juliensalinas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qgohb | false | null | t3_14qgohb | /r/LocalLLaMA/comments/14qgohb/how_to_finetune_llama_openllama_and_xgen_with_jax/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'LhzkLCKLhtqmDpjqTfdfMYtWTvHTU-ULOR5ZfOPggsw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Ihxn5Je-rcWPIGyfcNZe7ZigQLSf-yKerNwlruBPEUE.jpg?width=108&crop=smart&auto=webp&s=d7d7a562c243abdc8b5ca815ce970a7cd2532531', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Ihxn5Je-rcWPIGyfcNZe7ZigQLSf-yKerNwlruBPEUE.jpg?width=216&crop=smart&auto=webp&s=ebdbf30c001644c7b14793d7853944b9f0fd33d7', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Ihxn5Je-rcWPIGyfcNZe7ZigQLSf-yKerNwlruBPEUE.jpg?width=320&crop=smart&auto=webp&s=a1cc8dcff2a58e2dcabc4e392a631acf56e71fcc', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/Ihxn5Je-rcWPIGyfcNZe7ZigQLSf-yKerNwlruBPEUE.jpg?width=640&crop=smart&auto=webp&s=347619fd461408481b625c0c7673f7ed91ae790f', 'width': 640}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/Ihxn5Je-rcWPIGyfcNZe7ZigQLSf-yKerNwlruBPEUE.jpg?auto=webp&s=522903143a2e6a91bd4b99a66f3fe19768519a7b', 'width': 768}, 'variants': {}}]} |
Nuggt: A LLM Agent that runs on Wizcoder-15B (4-bit Quantised). It's time to democratise LLM Agents | 155 | Well I dont know where to begin... Last month I started on this project called Nuggt because I was fed-up with how all the autonomous agents out there require GPT-4 (at least 3 months ago) and GPT-4 is expensive and I didnt have no API keys at that time. So I wanted to create something with GPT-3.5 and thats when this whole Nuggt story started.
Long story short why stop there mate why not make it run on a open source model.. sounds crazy (for me at least cuz I am no AI legend). So every time a new LLM model came out I tested it with nuggt by adjusting my initial prompt. They all failed because models like Vicuna were good in imitating not reasoning (as highlighted by the ORCA paper).
However, as some of you might have noticed, models trained for coding displayed some form of reasoning, at least that is what I noticed with StarCoder. Unfortunately, StarCoder was close but not good or consistent.
Today, I have finally found our winner Wizcoder-15B (4-bit quantised). Here is a demo for you. In this demo, the agent trains RandomForest on Titanic dataset and saves the ROC Curve.
[A LLM Agent training RandomForest on Titanic dataset](https://reddit.com/link/14qednx/video/peeme50s2y9b1/player)
You can find the github repo at: [https://github.com/Nuggt-dev/Nuggt](https://github.com/Nuggt-dev/Nuggt)
Do check it out and give me your feedback.
OKAY I CAN FINALLY SLEEP IN PEACE NOW GOOD NIGHT | 2023-07-04T12:50:57 | https://www.reddit.com/r/LocalLLaMA/comments/14qednx/nuggt_a_llm_agent_that_runs_on_wizcoder15b_4bit/ | Loya_3005 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14qednx | false | null | t3_14qednx | /r/LocalLLaMA/comments/14qednx/nuggt_a_llm_agent_that_runs_on_wizcoder15b_4bit/ | false | false | 155 | {'enabled': False, 'images': [{'id': 'Px_pkg6B-CL-cuOhDnXAiLZ5A9nyK3QroLHFscu3fS4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qPST_FVnNqkhdZxixTB8Z74UA_F55MjG5Ueogh-J0Ks.jpg?width=108&crop=smart&auto=webp&v=enabled&s=3708da66faca8e494f21da75157764129c3b43d8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qPST_FVnNqkhdZxixTB8Z74UA_F55MjG5Ueogh-J0Ks.jpg?width=216&crop=smart&auto=webp&v=enabled&s=009e1abbf8fe68724feb165c8f4e1c9635640a83', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qPST_FVnNqkhdZxixTB8Z74UA_F55MjG5Ueogh-J0Ks.jpg?width=320&crop=smart&auto=webp&v=enabled&s=213995351f860dfb8da0f373e1f1c1a0e2601c79', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qPST_FVnNqkhdZxixTB8Z74UA_F55MjG5Ueogh-J0Ks.jpg?width=640&crop=smart&auto=webp&v=enabled&s=e19a98f8ce1a062b921bf4d3bb411ee0a7df4bdd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qPST_FVnNqkhdZxixTB8Z74UA_F55MjG5Ueogh-J0Ks.jpg?width=960&crop=smart&auto=webp&v=enabled&s=ca1c238ceb7ee5e3c312b6bff9b331e561e8cf17', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qPST_FVnNqkhdZxixTB8Z74UA_F55MjG5Ueogh-J0Ks.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=50616608063fe17e6e0d6e8cdf5785d90fc47073', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qPST_FVnNqkhdZxixTB8Z74UA_F55MjG5Ueogh-J0Ks.jpg?auto=webp&v=enabled&s=fd72991ba3591b9b9f04e664c7fdf79be0778da0', 'width': 1200}, 'variants': {}}]} | |
**torch.cuda.OutOfMemoryError, need help**

Hi everyone,

I am currently trying to use localGPT for a project and I encountered a problem.

Basically I have two setups:

- my home setup: i5 8600K, 32 GB DDR4, and an RTX 2080
- my work setup: i7 8700K, 128 GB DDR4, and an Nvidia A2

localGPT was installed the same way on both setups. When I run the ingest.py code I get no error whatsoever; it is when I run the main program that I encounter problems.

Everything works perfectly on my home setup, but on my work setup I run into this error: torch.cuda.OutOfMemoryError. That is despite having more VRAM on the A2. Also, I didn't change the model; I use the base one, which is "TheBloke/vicuna-7B-1.1-HF".

Do you guys know what's wrong?

Posted by Assasinshock, 2023-07-04, 2 points. Permalink: /r/LocalLLaMA/comments/14qcqlw/torchcudaoutofmemoryerror_need_help/
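It is hard to say remotely why the 16 GB A2 fails where the 8 GB 2080 works; the first things worth checking are whether something else is already using the A2 (`nvidia-smi`) and how the model is loaded. A 7B model in fp16 needs roughly 14 GB for weights alone, which leaves little headroom on a 16 GB card, while 8-bit loading cuts that in half. A rough sketch, where the 8-bit call illustrates the generic Transformers/bitsandbytes path rather than localGPT's actual code:

```python
# Back-of-envelope VRAM needed for the *weights* of an N-parameter model.
# Activations and the KV cache come on top, so real usage is higher.
def est_weight_vram_gb(n_params: float, bytes_per_param: float) -> float:
    return n_params * bytes_per_param / 1e9

fp16_gb = est_weight_vram_gb(7e9, 2)  # ~14 GB: barely fits a 16 GB A2
int8_gb = est_weight_vram_gb(7e9, 1)  # ~7 GB: comfortable headroom

RUN_GPU_DEMO = False  # flip on a machine with a CUDA GPU and bitsandbytes
if RUN_GPU_DEMO:
    from transformers import AutoModelForCausalLM
    model = AutoModelForCausalLM.from_pretrained(
        "TheBloke/vicuna-7B-1.1-HF",  # the model from the post
        device_map="auto",            # let accelerate place the layers
        load_in_8bit=True,            # halves weight memory vs. fp16
    )
```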
**Can I combine two RTX 3060 12 GB to reach 24 GB VRAM?**

So there is the question already.

Could I then fit a 30B LLaMA model (let's say GPTQ loading with ExLlama), or would it still be treated as 12 GB for some reason? Does anybody have experience with this kind of setup? (I'm not talking about playing games with both cards; I know that SLI/NVLink is not supported.)

Thank you. :)

Posted by psi-love, 2023-07-04, 16 points. Permalink: /r/LocalLLaMA/comments/14qchl1/can_i_combine_two_rtx_3060_12_gb_to_reach_24_gb/
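The two cards do not merge into one 24 GB device, but most loaders can shard a model's layers across them, which achieves the same end for inference: ExLlama has its own GPU-split setting, and the Transformers/Accelerate path takes a per-device memory cap. A sketch of the latter, where the model name and the 1 GiB headroom figure are illustrative choices, not fixed values:

```python
def max_memory_map(n_gpus: int, per_gpu_gib: int, headroom_gib: int = 1) -> dict:
    """Per-GPU caps for device_map="auto", leaving headroom for
    activations / KV cache (1 GiB is a rule of thumb, not a law)."""
    return {i: f"{per_gpu_gib - headroom_gib}GiB" for i in range(n_gpus)}

two_3060s = max_memory_map(n_gpus=2, per_gpu_gib=12)  # {0: '11GiB', 1: '11GiB'}

RUN_GPU_DEMO = False  # needs two CUDA GPUs and accelerate installed
if RUN_GPU_DEMO:
    from transformers import AutoModelForCausalLM
    model = AutoModelForCausalLM.from_pretrained(
        "some-4bit-30b-model",   # placeholder name
        device_map="auto",       # shard layers across both cards
        max_memory=two_3060s,
    )
```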
**My custom trained LoRA, showing funny results, and it keeps the format of the dataset's text, which is good**

[Funny results](https://preview.redd.it/lj5x2w1hww9b1.png?width=1410&format=png&auto=webp&s=6a3b5ac3cbfa5c6948e2d7bdebb07d28ac035903)

As I wrote in [my previous post](https://www.reddit.com/r/LocalLLaMA/comments/14oib8x/how_to_properly_format_data_for_lora_training/), I trained a LoRA on formatted text. The text has a movie-script-style format.

Currently I'm only using **CONTEXT** and **CHARACTER_TALKING** text blocks. I'm thinking of adding more text blocks for lore, location descriptions, and character descriptions. Also, more training data.

I used the **TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16** model to train, in 8-bit mode. Then I reloaded the model with the Transformers loader in 8-bit mode.

Posted by DaniyarQQQ, 2023-07-04, 24 points. Permalink: /r/LocalLLaMA/comments/14q9tl0/my_custom_trained_lora_showing_funny_results_and/
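For anyone curious what such script-style samples could look like as code, here is a toy formatter for the two block types the post names; the block names come from the post, but the exact markers and layout are my guess, not the author's actual dataset format:

```python
def format_scene(context: str, turns: list) -> str:
    """Render one training sample with CONTEXT and CHARACTER_TALKING
    blocks (block names from the post; layout is hypothetical)."""
    lines = [f"CONTEXT: {context}"]
    for speaker, text in turns:  # turns: list of (speaker, text) pairs
        lines.append(f"CHARACTER_TALKING ({speaker}): {text}")
    return "\n".join(lines)

sample = format_scene(
    "A rainy night at the docks.",
    [("Anna", "Did you bring it?"), ("Boris", "Of course I did.")],
)
```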
**Local LLMs GPUs**

I have an AWS EC2 instance with a Tesla V100. I am not able to figure out which local LLM can leverage this. I need it to answer questions based on my docs. Any tutorials or articles you guys can point me to?

Posted by Assholefrmcoinexchan, 2023-07-04, 1 point. Permalink: /r/LocalLLaMA/comments/14q8hqg/local_llms_gpus/
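A V100 (16 GB) can run most 7B/13B models, and the usual pattern for "answer questions from my docs" is retrieval-augmented generation: chunk the documents, score chunks against the question, and stuff the best ones into the model's prompt. A dependency-free sketch of the retrieval half, using bag-of-words cosine similarity as a stand-in for a real embedding model:

```python
import math
import re
from collections import Counter

def tokens(s: str) -> Counter:
    # crude tokenizer: lowercase alphanumeric runs
    return Counter(re.findall(r"[a-z0-9]+", s.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunk(question: str, chunks: list) -> str:
    q = tokens(question)
    return max(chunks, key=lambda c: cosine(q, tokens(c)))

chunks = [
    "The refund policy allows returns within 30 days of purchase.",
    "Our office is located in Berlin and opens at 9am.",
]
best = top_chunk("How many days do I have to return an item?", chunks)
# The winning chunk is what you would feed to the local LLM:
prompt = f"Answer using only this context:\n{best}\n\nQuestion: ..."
```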
**Llama33B vs Falcon40B vs MPT30B**

Since these models are in a relatively similar range in terms of parameters, it would be good to have a discussion of their capabilities and a comparison with each other.

I think it would be a good help for everyone starting out and wondering about these models and which one to use, if we share our experiences with them, for both inference and fine-tuning (but especially for fine-tuning), and especially with their chat or instruction-tuned versions.

Posted by Raise_Fickle, 2023-07-04, 8 points. Permalink: /r/LocalLLaMA/comments/14q8fb0/llama33b_vs_falcon40b_vs_mpt30b/
**oobabooga update broke loading u/The-Bloke huggingface models?**

Might just be user error, but I have been using Runpod and [u/The-Bloke](https://www.reddit.com/u/The-Bloke/)'s brilliant "RunPod TheBloke LLMs" template to easily and cheaply run LLMs.

Just today, on loading up the template, oobabooga seems to have updated, and now I cannot load any model from Hugging Face without the below error:

```text
Traceback (most recent call last):
  File "/workspace/text-generation-webui/server.py", line 68, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name, loader)
  File "/workspace/text-generation-webui/modules/models.py", line 74, in load_model
    output = load_func_map[loader](model_name)
  File "/workspace/text-generation-webui/modules/models.py", line 286, in ExLlama_loader
    model, tokenizer = ExllamaModel.from_pretrained(model_name)
  File "/workspace/text-generation-webui/modules/exllama.py", line 67, in from_pretrained
    model = ExLlama(config)
  File "/usr/local/lib/python3.10/dist-packages/exllama/model.py", line 747, in __init__
    t = torch.arange(self.config.max_seq_len, device = device, dtype = torch.float32)
TypeError: arange() received an invalid combination of arguments - got (NoneType, dtype=torch.dtype, device=str), but expected one of:
 * (Number end, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
 * (Number start, Number end, *, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
 * (Number start, Number end, Number step, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
```

I admit it may just be me being an idiot, but any help would be greatly appreciated.

Posted by shzam123, 2023-07-04, 5 points. Permalink: /r/LocalLLaMA/comments/14q8co0/oobabooga_update_broke_loading_uthebloke/
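The last frame of the traceback is the tell: `torch.arange(self.config.max_seq_len, ...)` received `None`, i.e. the ExLlama config is being built without `max_seq_len` filled in, which points at a version mismatch between the updated webui and the installed exllama package. Updating the template's webui checkout (or setting the sequence length in the loader settings) is the likely fix; the guard below only illustrates the failure mode and is not oobabooga's actual code:

```python
from types import SimpleNamespace

DEFAULT_MAX_SEQ_LEN = 2048  # LLaMA-1's native context length

def ensure_max_seq_len(config, default: int = DEFAULT_MAX_SEQ_LEN):
    """Backfill max_seq_len so torch.arange(config.max_seq_len, ...)
    never sees None (illustrative guard, not the real webui code)."""
    if getattr(config, "max_seq_len", None) is None:
        config.max_seq_len = default
    return config

cfg = ensure_max_seq_len(SimpleNamespace(max_seq_len=None))   # gets backfilled
kept = ensure_max_seq_len(SimpleNamespace(max_seq_len=8192))  # left untouched
```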
**Orca-Mini-V2-7b**

Today I released Orca-Mini-V2-7b.
[https://huggingface.co/psmathur/orca\_mini\_v2\_7b](https://huggingface.co/psmathur/orca_mini_v2_7b)
This is an **uncensored** LLaMA-7b model built in collaboration with [**Eric Hartford**](https://huggingface.co/ehartford). It is trained on explain-tuned datasets, created using instructions and input from the WizardLM, Alpaca & Dolly-V2 datasets, applying the Orca Research Paper's dataset construction approaches, with refusals removed.

Please note this model has *better code generation capabilities* compared to the original orca_mini_7b, which was trained on the base OpenLLaMA-7b model and which has the [**empty spaces issues & was found not good for code generation**](https://github.com/openlm-research/open_llama#update-06072023).

**Evaluation:**
I evaluated orca\_mini\_v2\_7b on a wide range of tasks using [**Language Model Evaluation Harness**](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the zero shot metrics results.
[Zero-shot metrics results (image)](https://preview.redd.it/gfm6zp305w9b1.png?width=2640&format=png&auto=webp&v=enabled&s=22875e3925db4ab0da0f06fa7719ae2ee83278e3)

**Here is a sample example:**
```text
### System: You are an AI assistant that follows instruction extremely well. Help as much as you can.

### User: Tell me how to break into my own car

### Input:

### Response: Breaking into your own car requires certain skills and tools. Here are the basic steps:

1. Find a ^^^^^^^^^^^^^
2. Unlock the car by using the ^^^^^^^^^^^^^.
3. Use a ^^^^^^^^^^^^^.
4. Once the ^^^^^^^^^^^^^.
5. If the ^^^^^^^^^^^^^.
```

**NOTE: The real response is hidden here with ^^^^^^^^^^^^^.**
I want to say huge thanks to all the community members who came before me and paved the path to other people's success.

I'm planning on releasing an orca-mini-v2-13b version soon; I will keep you all updated.
Updates: TheBloke has kindly released quantized GGML/GPTQ versions (respect); here are the links:
https://huggingface.co/TheBloke/orca_mini_v2_7B-GGML
https://huggingface.co/TheBloke/orca_mini_v2_7B-GPTQ

Posted by Remarkable-Spite-107, 2023-07-04, 121 points. Permalink: /r/LocalLLaMA/comments/14q6vb5/orcaminiv27b/
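For anyone scripting against the model, the sample above implies the prompt template. A small builder that follows the post's example; whether the Input section may be left empty exactly like this is my assumption from the sample:

```python
def build_prompt(system: str, user: str, input_text: str = "") -> str:
    """Assemble an orca_mini_v2-style prompt, copying the section
    markers from the post's sample example."""
    return (
        f"### System: {system}\n\n"
        f"### User: {user}\n\n"
        f"### Input: {input_text}\n\n"
        f"### Response:"
    )

p = build_prompt(
    "You are an AI assistant that follows instruction extremely well. "
    "Help as much as you can.",
    "Tell me how to break into my own car",
)
```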
**I made a cyber-dog character with Pygmalion, and she got a little weird**

[deleted]

Posted by bigorders, 2023-07-04, 0 points. Permalink: /r/LocalLLaMA/comments/14q6a0h/i_made_a_cyberdog_character_with_pygmalion_and/
**Any option for a low-end PC?**

I have this:

- Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz (7th gen)
- 8 GB RAM
- 1 GB VRAM (integrated video card)

Don't diss me, I know it's bad. It was the best I could get; I'm poor, and I only have this because I bought it broken and fixed it.

I don't mind having to wait longer for answers. The main reason I would like something like this is to have a coding teacher right on my PC, as I do not have constant access to the internet either.

A simple "no" would actually save me the trouble of trying something that won't be useful.

Posted by Chekhovs_Shotgun, 2023-07-04, 39 points. Permalink: /r/LocalLLaMA/comments/14q5n5c/any_option_for_a_low_end_pc/
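For what it's worth, the arithmetic on CPU-only llama.cpp-style inference is not hopeless here: a 7B model quantized to about 4 bits per weight needs roughly 4 GB of RAM, which squeezes into 8 GB alongside the OS, at perhaps a token or so per second on a dual-core i5-7200U. A rough fit check, where the overhead and OS figures are guesses rather than measurements:

```python
def quantized_model_ram_gb(n_params: float, bits_per_weight: float,
                           overhead_gb: float = 0.5) -> float:
    """Approximate RAM for a quantized model's weights plus a small
    fixed allowance for context/buffers (figures are rough guesses)."""
    return n_params * bits_per_weight / 8 / 1e9 + overhead_gb

q4_7b = quantized_model_ram_gb(7e9, 4)  # ~4 GB for a 4-bit 7B model
fits = q4_7b + 1.0 < 8                  # leave ~1 GB for the OS
```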