| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Looking for an LLM fine-tuned to generate good Stable Diffusion prompts | 10 | Does anyone know an LLM that would be good at taking a long text summary and generating a good image prompt from what is described in the story? | 2023-10-16T23:09:38 | https://www.reddit.com/r/LocalLLaMA/comments/179jnyw/looking_for_a_llm_fine_tuned_to_generate_good/ | BoosterKarl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179jnyw | false | null | t3_179jnyw | /r/LocalLLaMA/comments/179jnyw/looking_for_a_llm_fine_tuned_to_generate_good/ | false | false | self | 10 | null |
text-generation-webui | 1 | Hello,
I use text-generation-webui, but I have a problem: I can't enable any extensions. I was told I need to add this line to webui.py, but I don't have that file!
--extension whisper_stt silero_tts elevenlabs_tts
Every time I try to enable it from the Session tab and save, it never works; the extension stays unchecked after I restart the webui!
​ | 2023-10-16T23:07:06 | https://www.reddit.com/r/LocalLLaMA/comments/179jm37/textgenerationwebui/ | SupportResponsible77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179jm37 | false | null | t3_179jm37 | /r/LocalLLaMA/comments/179jm37/textgenerationwebui/ | false | false | self | 1 | null |
Weekend project, MiniSearch | Code on GitHub | Demo on HuggingFace | 1 | [removed] | 2023-10-16T22:52:10 | https://github.com/felladrin/MiniSearch | Felladrin | github.com | 1970-01-01T00:00:00 | 0 | {} | 179ja4f | false | null | t3_179ja4f | /r/LocalLLaMA/comments/179ja4f/weekend_project_minisearch_code_on_github_demo_on/ | false | false | default | 1 | null |
Weekend project, MiniSearch | Code available on GitHub | 1 | [removed] | 2023-10-16T22:48:19 | https://www.linkedin.com/pulse/introducing-minisearch-minimalist-search-engine-ai-victor-nogueira | Felladrin | linkedin.com | 1970-01-01T00:00:00 | 0 | {} | 179j6xe | false | null | t3_179j6xe | /r/LocalLLaMA/comments/179j6xe/weekend_project_minisearch_code_available_on/ | false | false | 1 | ⌀ | |
Exploring Methods to Improve Text Chunking in RAG Models (and other things...) | 28 | Hello everyone,
I'm currently working on Retrieval Augmented Generation (RAG) models and have developed a custom chunking function, as I found the methods in LangChain not entirely satisfactory.
I'm keen on exploring other methods, algorithms (related to NLP or otherwise), and models to enhance text chunking in RAG. There are many RAG implementations out there, but I've noticed a lack of focus on improving chunking performance specifically.
Are there any other promising approaches beyond my current pipeline, which consists of a bi-encoder (retriever), cross-encoder (reranker), and a Large Language Model (LLM) for interactions?
For queries, I'm using both traditional and HyDE (Hypothetical Document Embedding) approaches in the retrieval phase, and sending the top 'n' results of both similarity search to the reranker.
I've also tried using an LLM to convert the query into a series of 10-20 short phrases or keywords, which are then used as the query for the retriever model. However, the results vary depending on the LLM used. To generate good keywords (with a non-extractive approach), I had to use a "CoT" prompt, instructing the model to write self-instructions, a problem analysis, and reasoning before generating the required keywords. But this approach uses lots of tokens, and requires careful scraping to ensure the model has used the right delimiter to separate the reasoning from the actual answer.
I'm also planning to modify the text used to generate embeddings, while returning the original text after the recall phase. But this is still a work in progress and scaling it is proving to be a challenge. If anyone has any tips or experience with this, I'd appreciate your input.
I'd be grateful for any resources, repositories, libraries, or existing implementations of novel chunking methods that you could share. Or we could just discuss ideas, thoughts, or approaches to improve text chunking for RAG here.
Thanks in advance for your time! | 2023-10-16T22:21:30 | https://www.reddit.com/r/LocalLLaMA/comments/179ikbu/eploring_methods_to_improve_text_chunking_in_rag/ | BXresearch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179ikbu | false | null | t3_179ikbu | /r/LocalLLaMA/comments/179ikbu/eploring_methods_to_improve_text_chunking_in_rag/ | false | false | self | 28 | null |
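The retrieve-then-rerank pipeline described in the post above can be sketched end to end. This is a toy illustration, not the poster's actual code: the bag-of-words `embed` and the word-overlap `rerank` below are stand-ins for a real bi-encoder (e.g. a sentence-transformers model) and a cross-encoder reranker.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real retriever would call a bi-encoder here.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, top_n=2):
    # Cheap first stage: score every chunk, keep the top n candidates.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_n]

def rerank(query, candidates):
    # Stand-in for a cross-encoder: score candidates by exact word overlap.
    q_words = set(query.lower().split())
    return sorted(candidates,
                  key=lambda c: len(q_words & set(c.lower().split())),
                  reverse=True)

chunks = [
    "The cat sat on the mat.",
    "Cross-encoders rerank retrieved passages.",
    "Bi-encoders embed queries and documents separately.",
]
query = "how do bi-encoders embed documents"
candidates = retrieve(query, chunks)
best = rerank(query, candidates)[0]
print(best)  # -> Bi-encoders embed queries and documents separately.
```

Swapping the two scoring functions for real model calls keeps the same control flow: over-retrieve with the cheap bi-encoder, then let the expensive cross-encoder pick the final results.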
Uncle's FPHam lesson - Fragmented knowledge and why LLM can't write a novel | 5 | Ok, it's 5.30 in the afternoon and I'm having my breakfast. And I'm thinking how to convey this whole idea of finetuning LLM to people who have no idea what it is.
Oh, but we have Stable Diffusion for images, and in essence it is the same thing.
In SD, if you want to train a model to know your ugly mugshot, you would collect 40 or so portraits of yourself. Each of the images is a frame, and 40 frames are an epoch. Saving the details for another breakfast at 5.30 p.m.: if you set the parameters right, at the end the model can draw your face in many different ways (in van Gogh style, say), combining the model's vast knowledge with your 40 frames of yourself.
In text we have the same - we fine-tune vast knowledge models with these frames of personal knowledge that we also call chunks. If each chunk talks about your ugly face in some way, collectively the model will be able to describe your face very well in many other scenarios.
That also goes for any other knowledge. If each chunk is a poem, then the LLM will start to talk in poems.
So what if we want to train an entire fiction book? Well, we split it into chunks and feed them to the training process. Each chunk is a frame....
This is the same as if we took one image of your mugshot, sliced it into 40 squares, and then trained SD with those - and then assumed that SD would somehow be able to assemble those 40 frames into one coherent image of yourself. No it won't. It will be able to create a reasonably similar slice of that image - your ear, for example.
Now you see what happens when you feed chunks of long text to an LLM and then naively assume that the model will somehow be able to assemble them back into a coherent story? No it won't. But it will write beautiful, disjointed paragraphs in the same trained style. Forever and ever.
Your LLM model never saw an entire story in its existence. It saw only chunks (and story summaries).
And that's why LLMs rock when you ask them to write a paragraph (a crop of an image) or a summary (a blurry image), but they can't combine these into a coherent long story.
To do this, you would need to train the LLM on the entire story as one frame - and that's not yet technically possible. | 2023-10-16T22:05:35 | https://www.reddit.com/r/LocalLLaMA/comments/179i6ti/uncles_fpham_lesson_fragmented_knowledge_and_why/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179i6ti | false | null | t3_179i6ti | /r/LocalLLaMA/comments/179i6ti/uncles_fpham_lesson_fragmented_knowledge_and_why/ | false | false | self | 5 | null |
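Mechanically, the "slicing a book into frames" step the post describes is just a sliding window over the text. A minimal sketch of the naive fixed-size chunker (character-based with overlap; real pipelines often split on tokens or sentences instead, and the sizes here are arbitrary):

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping fixed-size character chunks."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

story = "x" * 500  # stand-in for a long novel
chunks = chunk_text(story)
print(len(chunks))  # -> 4
```

Each chunk carries no knowledge of the chunks before or after it, which is exactly why the model only ever sees disconnected "frames" of the story.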
Finetuning: Prompt entropy / question complexity | 1 | Hey All: I have two sets of real world survey data now setup in prompt-response pairs (mostly financial questions and answers). The first set has short simple questions but longer detailed answers.
The second set has the opposite pattern (sophisticated and complex questions) but short and correct answers.
Both sets relate to the same topic, so I can use either one.
I realize that fine-tuning is for "form" rather than facts (so it's up to us to choose which dataset to pick), but isn't it a better approach to train on more complex questions than answers, even if users are expected to provide short questions?
Any suggestions on how to approach this?
​ | 2023-10-16T20:56:55 | https://www.reddit.com/r/LocalLLaMA/comments/179gi1r/finetuning_prompt_entropy_question_complexity/ | buzzyness | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179gi1r | false | null | t3_179gi1r | /r/LocalLLaMA/comments/179gi1r/finetuning_prompt_entropy_question_complexity/ | false | false | self | 1 | null |
Exploring Methods to Improve Text Chunking in RAG Models (and other...) | 2 | [removed] | 2023-10-16T20:30:27 | https://www.reddit.com/r/LocalLLaMA/comments/179fvcb/exploring_methods_to_improve_text_chunking_in_rag/ | Distinct-Target7503 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179fvcb | false | null | t3_179fvcb | /r/LocalLLaMA/comments/179fvcb/exploring_methods_to_improve_text_chunking_in_rag/ | false | false | self | 2 | null |
Offloading Inferencing to a separate card? | 3 | Ever since exllama (and v2) has come out, it's becoming more and more apparent that my pair of P40's are starting to be pretty outclassed. Not that I'm complaining or anything.
But I'm curious (and also because my search-fu is failing me), is it possible to offload inferencing to a separate GPU or are these two tied a bit too closely to be able to split it out? Basically can I slap in a cheap secondary card to assist with one piece while still being able to use my P40's for the larger VRAM.
I know that with how fast local AI is advancing that I'm going to be reaching full end-of-life for the P40s but since I'm a hobbyist, I'd like to see how much I can stretch them. | 2023-10-16T20:11:21 | https://www.reddit.com/r/LocalLLaMA/comments/179feu7/offloading_inferencing_to_a_separate_card/ | AsheramL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179feu7 | false | null | t3_179feu7 | /r/LocalLLaMA/comments/179feu7/offloading_inferencing_to_a_separate_card/ | false | false | self | 3 | null |
Mistral-7B-code-16k-qlora: A Fast and Efficient Model for Coding Support | 1 | [removed] | 2023-10-16T18:49:10 | https://www.reddit.com/r/LocalLLaMA/comments/179dgdo/mistral7bcode16kqlora_a_fast_and_efficient_model/ | Nondzu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179dgdo | false | null | t3_179dgdo | /r/LocalLLaMA/comments/179dgdo/mistral7bcode16kqlora_a_fast_and_efficient_model/ | false | false | self | 1 | ⌀ |
PCIe bandwidth impact on llama-cpp (benchmark) | 16 | Slightly interesting, maybe:
This is the setup I have: a 3080Ti in my computer, plus a 3060Ti that I tested both inside my computer running on PCIe 4.0 x4 and as an eGPU over Thunderbolt 4 (effectively PCIe 3.0 x4, or even less, as it's an Alpine Ridge eGPU giving only about 20 Gbit/s of PCIe bandwidth?).
Here I run codellama34b_Q4_K_M offloaded to these graphics cards.
Settings are: `./main --main-gpu 0 --tensor-split 12,8 -t 8 -ngl 44`
So, here 44 layers (of the 51 layers of the model) are offloaded to the GPU's, in a 12/8 ratio (12GB for the 3080Ti, 8GB VRAM on the 3060Ti) and the remaining layers are running on the CPU (DDR5@6400). VRAM usage is maxed out for both GPU's at 11.7GB and 7.8GB.
Results:
3060Ti @ PCIe4.0x4 : 14.4 tokens/second
3060Ti @ Thunderbolt: 12.05 tokens/second
Seems the decrease in bandwidth and/or increase in latency from the thunderbolt connection *does* have measurable impact.
Bonus benchmark:
3080Ti alone, offload 28/51 layers (maxed out VRAM again): 7.4 tokens/second.
Adding the 3060Ti as eGPU does improve performance over not adding it. | 2023-10-16T18:26:29 | https://www.reddit.com/r/LocalLLaMA/comments/179cwev/pcie_bandwidth_impact_on_llamacpp_benchmark/ | Wrong-Historian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179cwev | false | null | t3_179cwev | /r/LocalLLaMA/comments/179cwev/pcie_bandwidth_impact_on_llamacpp_benchmark/ | false | false | self | 16 | null |
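For reference, the slowdown implied by the two runs above works out as follows (plain arithmetic on the reported numbers, nothing measured here):

```python
pcie4_tps = 14.4         # 3060Ti on PCIe 4.0 x4, from the run above
thunderbolt_tps = 12.05  # same card as a Thunderbolt eGPU

slowdown_pct = (pcie4_tps - thunderbolt_tps) / pcie4_tps * 100
print(f"{slowdown_pct:.1f}% slower over Thunderbolt")  # -> 16.3% slower over Thunderbolt
```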
LLM trained on code memorizes 8% of the training set and could expose code used for training | 27 | We have found that LLMs finetuned on code memorize it, which implies other users can exfiltrate coding data with prompts!
We have found that [StarCoder](https://huggingface.co/bigcode/starcoder), an open-source LLM from Hugging Face, shows memorization of 8% of the training samples. Memorization means that when shown just the beginning of a sample from the training set, the LLM will complete it with text very similar to the original training sample.
This means that memorization is real and happens on all kinds of data, and not only natural language! This has big privacy implications if you use an LLM that trains on your data. Most commercially available LLMs such as ChatGPT or Bard actively fine-tune on users’ conversations, and little choice is provided to the user in terms of data control.
If the LLM memorizes your data, then another user prompting it can end up exfiltrating your data. This is exactly what happened to Samsung: one of their engineers sent proprietary code to ChatGPT, ChatGPT fine-tuned on this data, and someone else who queried ChatGPT got Samsung's data as an answer.
You can see it in action with our demo:
Gradio demo: [https://huggingface.co/spaces/mithril-security/starcoder_memorization_checker](https://huggingface.co/spaces/mithril-security/starcoder_memorization_checker)
You can also reproduce the experiments that showed the 8% memorization rate with this notebook.
Colab: [https://colab.research.google.com/drive/1YaaPOXzodEAc4JXboa12gN5zdlzy5XaR?usp=sharing](https://colab.research.google.com/drive/1YaaPOXzodEAc4JXboa12gN5zdlzy5XaR?usp=sharing) | 2023-10-16T17:52:05 | https://www.reddit.com/r/LocalLLaMA/comments/179c3ew/llm_trained_on_code_memorizes_8_of_the_training/ | Separate-Still3770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179c3ew | false | null | t3_179c3ew | /r/LocalLLaMA/comments/179c3ew/llm_trained_on_code_memorizes_8_of_the_training/ | false | false | self | 27 | ⌀ |
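The memorization test behind numbers like this is conceptually simple: feed the model the beginning of a known training sample and measure how close its continuation is to the true suffix. A toy sketch of that check; the `toy_model_complete` lookup stands in for an actual LLM call, and the `difflib` ratio is just one possible similarity metric:

```python
import difflib

def memorization_score(true_suffix, generated_suffix):
    """Similarity (0..1) between the model's continuation and the real suffix."""
    return difflib.SequenceMatcher(None, true_suffix, generated_suffix).ratio()

# Toy stand-in for the LLM: it has "memorized" exactly one training sample.
TRAINING_SAMPLE = "def add(a, b):\n    return a + b"

def toy_model_complete(prefix):
    if TRAINING_SAMPLE.startswith(prefix):
        return TRAINING_SAMPLE[len(prefix):]
    return ""

prefix, true_suffix = TRAINING_SAMPLE[:15], TRAINING_SAMPLE[15:]
score = memorization_score(true_suffix, toy_model_complete(prefix))
print(f"memorization score: {score:.2f}")  # 1.00 means verbatim regurgitation
```

Run over many training samples, the fraction scoring above some similarity threshold gives a memorization rate like the 8% reported here.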
need your input on this | 4 | So, out of context: I am trying to build a chat-with-the-document bot using LocalAI and Flowise. Can you suggest an LLM model and an embedding model that would work best for reading through contracts (one at a time, stored as vectors)? My setup: [https://github.com/manojkumar-manivannan/pics/blob/e4b66c26751db5c1aa003ef0326e0b335fad004c/Screenshot%202023-10-16%20at%2011.08.39%20PM.png](https://github.com/manojkumar-manivannan/pics/blob/e4b66c26751db5c1aa003ef0326e0b335fad004c/Screenshot%202023-10-16%20at%2011.08.39%20PM.png) | 2023-10-16T17:50:07 | https://www.reddit.com/r/LocalLLaMA/comments/179c1p9/need_your_input_on_this/ | karthiceaswar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179c1p9 | false | null | t3_179c1p9 | /r/LocalLLaMA/comments/179c1p9/need_your_input_on_this/ | false | false | self | 4 | ⌀ |
LLMWare: All in one AI framework for LLM-based applications for GenAI apps (RAG) | 9 | 2023-10-16T17:38:30 | https://www.marktechpost.com/2023/10/16/meet-llmware-an-all-in-one-artificial-intelligence-framework-for-streamlining-llm-based-application-development-for-generative-ai-applications/ | Mammoth-Doughnut-160 | marktechpost.com | 1970-01-01T00:00:00 | 0 | {} | 179brma | false | null | t3_179brma | /r/LocalLLaMA/comments/179brma/llmware_all_in_one_ai_framework_for_llmbased/ | false | false | 9 | ⌀ | |
A new type of transistor is more efficient at (some) machine learning tasks | 17 | I'll refrain from using the article's misleading title. This one is early science, so take it with a grain of salt, but the implications are huge, especially for local and wearables (let's see Apple swoop this one up), so I think it fits here (even if it's probably 5 years away).
Since the abstract is so full of scientific jargon (and the paper is behind a paywall), here's my own summary:
Scientists at NWU have created a new type of transistor that can do the job of 100 regular transistors for some machine learning tasks:
"For current silicon-based technologies to categorize data from large sets like ECGs, it takes more than 100 transistors — each requiring its own energy to run. But Northwestern’s nanoelectronic device can perform the same machine-learning classification with just two devices."
"The secret behind the novel device is its unprecedented tunability...
...the researchers constructed the miniaturized transistors from two-dimensional molybdenum disulfide and one-dimensional carbon nanotubes."
"The nanoelectronic device was able to identify accurately each arrhythmia type out of 10,000 ECG samples"
"near 95% accuracy"
It doesn't say whether it can be applied to other tasks, but "classification" should be broad enough?
[https://news.northwestern.edu/stories/2023/10/ai-just-got-100-fold-more-energy-efficient/?fj=1](https://news.northwestern.edu/stories/2023/10/ai-just-got-100-fold-more-energy-efficient/?fj=1)
Abstract:
*Advances in algorithms and low-power computing hardware imply that machine learning is of potential use in off-grid medical data classification and diagnosis applications such as electrocardiogram interpretation. However, although support vector machine algorithms for electrocardiogram classification show high classification accuracy, hardware implementations for edge applications are impractical due to the complexity and substantial power consumption needed for kernel optimization when using conventional complementary metal–oxide–semiconductor circuits. Here we report reconfigurable mixed-kernel transistors based on dual-gated van der Waals heterojunctions that can generate fully tunable individual and mixed Gaussian and sigmoid functions for analogue support vector machine kernel applications. We show that the heterojunction-generated kernels can be used for arrhythmia detection from electrocardiogram signals with high classification accuracy compared with standard radial basis function kernels. The reconfigurable nature of mixed-kernel heterojunction transistors also allows for personalized detection using Bayesian optimization. A single mixed-kernel heterojunction device can generate the equivalent transfer function of a complementary metal–oxide–semiconductor circuit comprising dozens of transistors and thus provides a low-power approach for support vector machine classification applications.*
Paper in case someone is a subscriber: [https://www.nature.com/articles/s41928-023-01042-7](https://www.nature.com/articles/s41928-023-01042-7)
​ | 2023-10-16T17:09:15 | https://www.reddit.com/r/LocalLLaMA/comments/179b2qc/a_new_type_of_transistor_is_more_efficient_at/ | neph1010 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179b2qc | false | null | t3_179b2qc | /r/LocalLLaMA/comments/179b2qc/a_new_type_of_transistor_is_more_efficient_at/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'JJjderDv-vWLA_N1q6FTHPCCOOdsfYtbQT88ddRTPwU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uP9PigXAyIFDhseg-N6is9sbfedqv1oxYDbrnPrRWkM.jpg?width=108&crop=smart&auto=webp&s=e553448b359770bb0d326b658e92e41bffaba0a1', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/uP9PigXAyIFDhseg-N6is9sbfedqv1oxYDbrnPrRWkM.jpg?width=216&crop=smart&auto=webp&s=1f39d8a3eb656edb339d92d207c1fe841098a590', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/uP9PigXAyIFDhseg-N6is9sbfedqv1oxYDbrnPrRWkM.jpg?width=320&crop=smart&auto=webp&s=a79e546097a7da14192946c4d33965ab5994b738', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/uP9PigXAyIFDhseg-N6is9sbfedqv1oxYDbrnPrRWkM.jpg?width=640&crop=smart&auto=webp&s=b094fc1fb2f30eb91dfd1035379583b2705c8b26', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/uP9PigXAyIFDhseg-N6is9sbfedqv1oxYDbrnPrRWkM.jpg?width=960&crop=smart&auto=webp&s=e31f3b5c7129871c64a62229d69b0080133548fa', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/uP9PigXAyIFDhseg-N6is9sbfedqv1oxYDbrnPrRWkM.jpg?width=1080&crop=smart&auto=webp&s=2608dcf5a1ebce42ceb57753f367094ae22084dc', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/uP9PigXAyIFDhseg-N6is9sbfedqv1oxYDbrnPrRWkM.jpg?auto=webp&s=8a66d9a2032629b18f6fb816c00ef95600cdd42a', 'width': 1200}, 'variants': {}}]} |
What's the deal with all those "CEOs" looking to hire prompt engineers on Reddit. Feels scammish. | 102 | Reminds me of the photographer scams that were everywhere when DSLRs were getting cheap and popular. Every one was a "professional" photographer. | 2023-10-16T17:01:49 | https://www.reddit.com/r/LocalLLaMA/comments/179aw4m/whats_the_deal_with_all_those_ceos_looking_to/ | oodelay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179aw4m | false | null | t3_179aw4m | /r/LocalLLaMA/comments/179aw4m/whats_the_deal_with_all_those_ceos_looking_to/ | false | false | self | 102 | null |
Running a 70B model on a €99,00 PC? | 1 | When looking to run a 70B large language model, one of the the constraints is usually VRAM. The way Apple does with it's unified memory allows it to run very large models. However Apple are not the only ones using unified memory.
So I've been looking into single board computers and managed to find this:
First off, there's this library at [https://llm.mlc.ai/](https://llm.mlc.ai/) that claims to be able to run LLMs with hardware acceleration on AMD, Intel, Apple and NVIDIA GPUs using Vulkan.
Then there's this €99,00 SBC [https://store.arduino.cc/products/portenta-h7](https://store.arduino.cc/products/portenta-h7) with 64 GB of memory, enough to load a 70B model in memory. The GPU is a Chrom-ART graphical hardware accelerator, which, if compatible with Vulkan, should allow it to perform inference.
Putting those two together, and with some (read: lots of) elbow grease, it might be possible to get a 70B model running; just don't expect usable speeds.
I'm going to look at some more expensive boards, as the GPU is probably not that powerful. | 2023-10-16T16:44:40 | https://www.reddit.com/r/LocalLLaMA/comments/179ah7b/running_a_70b_model_on_a_9900_pc/ | arekku255 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179ah7b | false | null | t3_179ah7b | /r/LocalLLaMA/comments/179ah7b/running_a_70b_model_on_a_9900_pc/ | false | false | self | 1 | null |
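A back-of-the-envelope check is useful here: single-stream token generation is roughly memory-bandwidth-bound, because every generated token streams (most of) the weights once. With hypothetical numbers for a low-end board (both figures below are assumptions for illustration, not specs of any particular SBC):

```python
model_size_gb = 40.0      # rough size of a 70B model at ~4-bit quantization
mem_bandwidth_gbs = 0.3   # hypothetical low-end SBC memory bandwidth, GB/s

# Upper bound: tokens/s <= bandwidth / bytes streamed per token.
tokens_per_sec = mem_bandwidth_gbs / model_size_gb
seconds_per_token = 1 / tokens_per_sec
print(f"~{seconds_per_token:.0f} s per token at best")  # -> ~133 s per token at best
```

Plugging in a board's real bandwidth figure gives a quick sanity check on whether "usable speeds" are even theoretically possible before buying anything.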
Public data sources for dying/minority languages | 9 | I was thinking about making a multilingual model for smaller language(s). The criteria for the target language:
\* There are only a few million speakers left, and their number will likely decrease further.
\* It should be written using the Latin alphabet (I can read Cyrillic and Chinese script, but I don't want to make a new tokenizer yet).
\* Rich literature, with freely downloadable texts.
\* Has preferably a consistent grammar.
\* A language isolate, not related to the main language families.
\* The training should target 3b, 7b models.
Llama doesn't do well in the tokenizer playground, so I'm looking for other alternatives as the base model.
Are there similar attempts? Which reddit groups should I visit to find datasets?
What are the other translated sources besides The Holy Bible that are available in many languages?
(I don't want to go down the libgen route.)
Do you think that teaching Esperanto as an intermediate language (not the target language!) would be beneficial?
I'm considering four stages:
0. Filter raw data with generic tools
1. Text generation
2. Translation from copyright-free sources
3. Go back to step 1 if the loss value is too high
4. CoT training
~~5. ERP corpus is nearly impossible to find, and likely won't happen.~~
These are stacked on top of each other. The result would be an instruction-tuned model. It should be able to follow five-shot prompting. I don't have resources to train with 8k/16k context size. The first version likely won't go beyond 1-2k context.
Any suggestions regarding such language, as well as possible links (txt, epub), would be greatly appreciated.
If you think a project like this would be impossible without GBs of data, I would like to hear about it as well.
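One quick, cheap way to compare candidate base models for a small language is tokenizer fertility: the average number of tokens per word on target-language text (lower is better). A sketch with toy tokenizers; a real check would load each candidate model's actual tokenizer, and the sample string here is just an illustrative assumption:

```python
def fertility(tokenize, text):
    """Average tokens per whitespace-separated word; lower means better coverage."""
    words = text.split()
    return len(tokenize(text)) / len(words)

# Toy "tokenizers" standing in for real BPE vocabularies:
char_tok = lambda t: [c for c in t if not c.isspace()]  # worst case: one token per char
word_tok = lambda t: t.split()                          # best case: one token per word

sample = "aupa zelan zaude gaur"  # hypothetical target-language sample
print(fertility(char_tok, sample))  # -> 4.5
print(fertility(word_tok, sample))  # -> 1.0
```

A high fertility on the target language also inflates the effective context-size cost of training, which matters given the 1-2k context budget mentioned above.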
​ | 2023-10-16T16:43:09 | https://www.reddit.com/r/LocalLLaMA/comments/179afse/public_data_sources_for_dyingminority_languages/ | justynasty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179afse | false | null | t3_179afse | /r/LocalLLaMA/comments/179afse/public_data_sources_for_dyingminority_languages/ | false | false | self | 9 | null |
AttributeError: module 'bitsandbytes' has no attribute 'nn' How to resolve this? | 1 | When I try to load the Llama base model in order to fine-tune it, I receive this error. | 2023-10-16T16:25:15 | https://www.reddit.com/r/LocalLLaMA/comments/179a098/attributeerror_module_bitsandbytes_has_no/ | Hour-Ad-8674 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179a098 | false | null | t3_179a098 | /r/LocalLLaMA/comments/179a098/attributeerror_module_bitsandbytes_has_no/ | false | false | self | 1 | null |
Void Archive: an easy way to train models without being an expert | 2 | download link: [https://github.com/VINUK0/Void-Archive](https://github.com/VINUK0/Void-Archive)
## ⌘ Description ⌘
* *The \[Void Archive\] library aims to make training a machine learning model as easy as possible, so anyone can train a model with or without Python or machine learning knowledge.*
* *Currently, VINUK0 and psw01 are the only people working on the library.*
* *Any help to improve the library will be greatly appreciated.*
* *Note: Please do not open issues on GitHub unless there is an actual error.*
## ⌘ Features ⌘
* \[✅\] ***Support For Training*** `Float32, Float16, NF8bit, NF4bit`***.***
* \[✅\] ***Currently Supported Models Are*** `GPTNeo, GPTNeoX, OPT, Llama, Mistral`***.***
* \[✅\] ***Support For Inference*** `Float32, Float16`***.***
* \[❌\] ***Support For Inference with LoraModel's Are Coming Soon.***
* \[✅\] ***Sharded Checkpoint Loading And Saving.***
* \[✅\] ***Pre Defined Config's For Training Lora Models With Ability To Use Your Own Custom Config.***
* \[✅\] ***Pre Defined Config's For Optimizer's For Better Training With Ability To Use Your Own Custom Config.***
* \[✅\] ***Currently Supported Optimizers Are*** `AdamW, Adafactor, AdamW8bit`***.***
* \[✅\] ***Built In Custom Dataset Handler That Support's*** `CSV, TSV, DB`***.***
* \[✅\] ***Model Performance Logging At Each Epoch.***
* \[✅\] ***Auto Model Card Generation At Each Epoch With Necessary Information.***
* \[✅\] ***TXT File To DB Converter In Case You Don't Have Dataset Or Don't Know How To.***
* \[❌\] ***Dataset File Formatter/Converter Might Be Available In The Future.*** | 2023-10-16T15:43:28 | https://www.reddit.com/r/LocalLLaMA/comments/17990fi/void_archive_a_easy_way_to_train_models_without/ | Avocado_Express | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17990fi | false | null | t3_17990fi | /r/LocalLLaMA/comments/17990fi/void_archive_a_easy_way_to_train_models_without/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '23KwZ3FmalZxSD3eWuVbKS_NDk90a1aL8RBIOmDAlZM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TNVeIStl1g3b_aQO-UBSn-ps89vwvS-Wj-SsVkZn2bw.jpg?width=108&crop=smart&auto=webp&s=758b655454bc0a297a03df0cfb158f00028fe06a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TNVeIStl1g3b_aQO-UBSn-ps89vwvS-Wj-SsVkZn2bw.jpg?width=216&crop=smart&auto=webp&s=47fb3f73fe1de9f9d13ac87f298d64266be8035c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TNVeIStl1g3b_aQO-UBSn-ps89vwvS-Wj-SsVkZn2bw.jpg?width=320&crop=smart&auto=webp&s=f278c8bd3d1d14ed68f7d40a9da6cb725463191c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TNVeIStl1g3b_aQO-UBSn-ps89vwvS-Wj-SsVkZn2bw.jpg?width=640&crop=smart&auto=webp&s=3770175bcdc7a993a739a2482609f3f1b9d4a511', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TNVeIStl1g3b_aQO-UBSn-ps89vwvS-Wj-SsVkZn2bw.jpg?width=960&crop=smart&auto=webp&s=4509af36b4dd9d19dcdd483ca11b7945dcab3d5d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TNVeIStl1g3b_aQO-UBSn-ps89vwvS-Wj-SsVkZn2bw.jpg?width=1080&crop=smart&auto=webp&s=b162b7b221324a5c1f61d73118a84bd0f5380f38', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TNVeIStl1g3b_aQO-UBSn-ps89vwvS-Wj-SsVkZn2bw.jpg?auto=webp&s=2d4f58ac5cee4d09fdb27135d52e92b817c750b4', 'width': 1200}, 'variants': {}}]} |
Obsidian.md plugin for llama inference | 7 | I made simple, but useful plugin for [Obsidian.md](https://Obsidian.md), which let you to talk with local llama.cpp models.
[https://github.com/skycover/aicom-obsidian](https://github.com/skycover/aicom-obsidian)
It is designed to be a companion that answers questions and translates and explains difficult texts. It can also format nice md tables etc. It will stream the answer directly to your Obsidian note.
The prompt formatting currently uses Llama-2 chat and the "Saiga" modification, but it can easily be changed on the Python AI server side.
https://i.redd.it/shuj3x3c4lub1.gif | 2023-10-16T15:43:05 | https://www.reddit.com/r/LocalLLaMA/comments/179903y/obsidianmd_plugin_for_llama_inference/ | losthost12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 179903y | false | null | t3_179903y | /r/LocalLLaMA/comments/179903y/obsidianmd_plugin_for_llama_inference/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'zMXPIQv3Ay2AQPtwJ908oSE8egPawgAwdCZIEwNBzQc', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/eZ_f2nyTYirqBDReJtey7J-BldkDOkDq7Nf-V4DhjOU.jpg?width=108&crop=smart&auto=webp&s=6984633aa403dcb7087885ae247b40326cd0b556', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/eZ_f2nyTYirqBDReJtey7J-BldkDOkDq7Nf-V4DhjOU.jpg?width=216&crop=smart&auto=webp&s=f9c3d978104f09b35bd7d8608d834246139e0d25', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/eZ_f2nyTYirqBDReJtey7J-BldkDOkDq7Nf-V4DhjOU.jpg?width=320&crop=smart&auto=webp&s=65d11987b4fbeb2dbeb12151782d64c22b9fa3a9', 'width': 320}, {'height': 366, 'url': 'https://external-preview.redd.it/eZ_f2nyTYirqBDReJtey7J-BldkDOkDq7Nf-V4DhjOU.jpg?width=640&crop=smart&auto=webp&s=52f3d8313409af0f57dd2502449ca3419d14a617', 'width': 640}, {'height': 550, 'url': 'https://external-preview.redd.it/eZ_f2nyTYirqBDReJtey7J-BldkDOkDq7Nf-V4DhjOU.jpg?width=960&crop=smart&auto=webp&s=50ce784a9606ae7d636092284f3d755de0e77fda', 'width': 960}, {'height': 619, 'url': 'https://external-preview.redd.it/eZ_f2nyTYirqBDReJtey7J-BldkDOkDq7Nf-V4DhjOU.jpg?width=1080&crop=smart&auto=webp&s=13730fd0a8c0314ff7d430c214a79da1a824f380', 'width': 1080}], 'source': {'height': 688, 'url': 'https://external-preview.redd.it/eZ_f2nyTYirqBDReJtey7J-BldkDOkDq7Nf-V4DhjOU.jpg?auto=webp&s=f97079200bffa354b3cbd0fa849a3fa92ab636fc', 'width': 1200}, 'variants': {}}]} | |
Similar subreddits | 1 | [removed] | 2023-10-16T15:34:24 | https://www.reddit.com/r/LocalLLaMA/comments/1798sm0/similar_subreddits/ | semantium | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1798sm0 | false | null | t3_1798sm0 | /r/LocalLLaMA/comments/1798sm0/similar_subreddits/ | false | false | self | 1 | null |
Node-based AutoGen with local LLMs inside ComfyUI | 85 | 2023-10-16T15:24:21 | AntonymGoeckes | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1798ka7 | false | null | t3_1798ka7 | /r/LocalLLaMA/comments/1798ka7/nodebased_autogen_with_local_llms_inside_comfyui/ | false | false | 85 | {'enabled': True, 'images': [{'id': 'fXTz70VLx18wcQQLXHFHdUBYVtbX0HwYsFyAH-LclgU', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/xzusr91z0lub1.png?width=108&crop=smart&auto=webp&s=e9149c7582db1fe82efa2560b902605e86845bc5', 'width': 108}, {'height': 103, 'url': 'https://preview.redd.it/xzusr91z0lub1.png?width=216&crop=smart&auto=webp&s=1802fa5ef4c6e5f18a79207384b6bbaf5b981f50', 'width': 216}, {'height': 153, 'url': 'https://preview.redd.it/xzusr91z0lub1.png?width=320&crop=smart&auto=webp&s=67daf87321a308e147ab0d09d2fd856ef0c9add7', 'width': 320}, {'height': 306, 'url': 'https://preview.redd.it/xzusr91z0lub1.png?width=640&crop=smart&auto=webp&s=088c07af20e91dcce3e8e2a40471d07b817dab85', 'width': 640}, {'height': 460, 'url': 'https://preview.redd.it/xzusr91z0lub1.png?width=960&crop=smart&auto=webp&s=79753b0a235fedfff16f4cf886576c5d05ba2442', 'width': 960}, {'height': 517, 'url': 'https://preview.redd.it/xzusr91z0lub1.png?width=1080&crop=smart&auto=webp&s=653900758660a04e9e6c9dc28ae31732be25750a', 'width': 1080}], 'source': {'height': 1827, 'url': 'https://preview.redd.it/xzusr91z0lub1.png?auto=webp&s=04cf57100f779f41229a2304dcfa34b6021402c6', 'width': 3812}, 'variants': {}}]} | |||
Flash-decoding speed up inference up to x8 on long context | 51 | 2023-10-16T15:12:47 | https://twitter.com/togethercompute/status/1712910476743434273 | hapliniste | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 1798amp | false | {'oembed': {'author_name': 'Together AI', 'author_url': 'https://twitter.com/togethercompute', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Announcing Flash-Decoding, to make long-context LLM inference up to 8x faster!<br><br>Great collab with <a href="https://twitter.com/AIatMeta?ref_src=twsrc%5Etfw">@AIatMeta</a> xformers team. Main idea: load the KV cache in parallel, then separately rescale to combine the results.<br><br>Try it now on Together API: <a href="https://t.co/efw0tnHDWx">https://t.co/efw0tnHDWx</a> <a href="https://t.co/ip7eg4i1wT">https://t.co/ip7eg4i1wT</a></p>— Together AI (@togethercompute) <a href="https://twitter.com/togethercompute/status/1712910476743434273?ref_src=twsrc%5Etfw">October 13, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/togethercompute/status/1712910476743434273', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_1798amp | /r/LocalLLaMA/comments/1798amp/flashdecoding_speed_up_inference_up_to_x8_on_long/ | false | false | default | 51 | {'enabled': False, 'images': [{'id': 'FkfmcNdziOWIeQEIJ51z0jVzD34Om3Eav2yuhxAJ4WY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/roq2iCPLKEfww4t3kbkwyy1cjiMyRBYyoNBNfq_TZco.jpg?width=108&crop=smart&auto=webp&s=25c7064055375fb74318f5c96f6d18bf6e6fea9f', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/roq2iCPLKEfww4t3kbkwyy1cjiMyRBYyoNBNfq_TZco.jpg?auto=webp&s=9e7d1ec066c42c1cd82721b670ff8a501f2d1783', 'width': 140}, 'variants': {}}]} | |
GPU grant program | 2 | Hi LocalLLaMA community,
We're giving away 1,000 hours on a 10x Nvidia A40 GPU server. Open source projects preferred, but all are welcome to apply. Thanks!
[https://hivedigitaltechnologies.com/gpu-grant-program/](https://hivedigitaltechnologies.com/gpu-grant-program/) | 2023-10-16T14:45:24 | https://www.reddit.com/r/LocalLLaMA/comments/1797nr4/gpu_grant_program/ | HIVE-Digital-Tech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1797nr4 | false | null | t3_1797nr4 | /r/LocalLLaMA/comments/1797nr4/gpu_grant_program/ | false | false | self | 2 | null |
How to create dataset for training generative chatbot model? | 0 | I built my own custom generative AI chatbot model. The only thing I need is a high-quality and diverse dataset to train it. I can't use already existing datasets because I don't think they are diverse and high-quality enough, so I need to create one using GPT-4. My dataset will have 3 columns: system\_prompt, input, output. But I'm not very experienced in creating datasets, and I couldn't find any resources about this. All inputs, outputs, and system prompts should be created by GPT-4. How can I do it? And what is the most effective way to use the API for this? | 2023-10-16T13:31:23 | https://www.reddit.com/r/LocalLLaMA/comments/17961h7/how_to_create_dataset_for_training_generative/ | Many-Corner-6700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17961h7 | false | null | t3_17961h7 | /r/LocalLLaMA/comments/17961h7/how_to_create_dataset_for_training_generative/ | false | false | self | 0 | null |
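A minimal sketch of the workflow the post above asks about - assembling (system_prompt, input, output) rows and serializing them as JSONL. The `generate` callable here is a stub standing in for a real GPT-4 API call; the prompts and names are illustrative, and a real pipeline would add retries and rate-limit handling:

```python
import json

def build_rows(seeds, generate):
    """Turn (system_prompt, input) seed pairs into 3-column dataset rows."""
    rows = []
    for system_prompt, user_input in seeds:
        rows.append({
            "system_prompt": system_prompt,
            "input": user_input,
            "output": generate(system_prompt, user_input),
        })
    return rows

def to_jsonl(rows):
    """One JSON object per line, the usual fine-tuning dataset format."""
    return "\n".join(json.dumps(r, ensure_ascii=False) for r in rows)

# Stub generator for illustration only; swap in a real GPT-4 API call here.
fake_generate = lambda sys_prompt, user_input: f"[reply to: {user_input}]"
seeds = [("You are a helpful tutor.", "Explain recursion briefly.")]
print(to_jsonl(build_rows(seeds, fake_generate)))
```

For diversity, the usual trick is to also have the model generate the seed inputs themselves from varied topic lists, then deduplicate before generating outputs.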
Can I run a Llama 13B model on an Nvidia RTX 3060 with 12GB memory and 64GB RAM | 6 | Please provide any suggestions. | 2023-10-16T13:22:00 | https://www.reddit.com/r/LocalLLaMA/comments/1795ump/can_i_run_lamma_13b_model_on_nvidia_rtx_3060_12gb/ | G7gamingarena | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1795ump | false | null | t3_1795ump | /r/LocalLLaMA/comments/1795ump/can_i_run_lamma_13b_model_on_nvidia_rtx_3060_12gb/ | false | false | self | 6 | null |
🚀 Breaking Ground in AI Innovation! Introducing LLUMO – Elevate Your Prompt Engineering Game! 💡✨ | 1 | [removed] | 2023-10-16T13:13:40 | https://www.reddit.com/r/LocalLLaMA/comments/1795one/breaking_ground_in_ai_innovation_introducing/ | llumoai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1795one | false | null | t3_1795one | /r/LocalLLaMA/comments/1795one/breaking_ground_in_ai_innovation_introducing/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'QqSY3F9i2BgB-OdT_JpQr1vBqr2oq4spYNzkghHXwCM', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/4PgIzt2dsWk0hsH_pv6fTscUBf4LNxa8vUF1zyE23u0.jpg?auto=webp&s=adf334dabc58b5ccda405f20fe4d11f983c41fe9', 'width': 64}, 'variants': {}}]} | |
Does anyone here have experience training Pyannote on their own data? | 1 | I hope this is not too off-topic, but I wanted to ask if anyone here has trained Pyannote on their own data for speaker diarization.
I'm asking because the diarization is not very good in German, so I was thinking about creating my own dataset and training or fine-tuning Pyannote on it for better results. | 2023-10-16T13:11:53 | https://www.reddit.com/r/LocalLLaMA/comments/1795nht/does_anyone_here_have_experience_training/ | Prince-of-Privacy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1795nht | false | null | t3_1795nht | /r/LocalLLaMA/comments/1795nht/does_anyone_here_have_experience_training/ | false | false | self | 1 | null |
Multimodal-Vision LLM that is Open Source? | 27 | Hi, is there an LLM with vision that has been released yet, ideally one that can be fine-tuned with pictures?
I can't seem to find a good reference, and the amount of Hugging Face models makes it hard to know where to start. | 2023-10-16T12:50:26 | https://www.reddit.com/r/LocalLLaMA/comments/17958sz/multimodalvision_llm_that_is_open_source/ | Nokita_is_Back | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17958sz | false | null | t3_17958sz | /r/LocalLLaMA/comments/17958sz/multimodalvision_llm_that_is_open_source/ | false | false | self | 27 | null |
Fact and feature extraction: Mistral 7B, Zephyr 7B, Mistral Orca, GPT*, Bard & Claude2. | 1 | [removed] | 2023-10-16T11:47:58 | https://www.reddit.com/r/LocalLLaMA/comments/17943y9/fact_and_feature_extraction_mistral_7b_zephyr_7b/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17943y9 | false | null | t3_17943y9 | /r/LocalLLaMA/comments/17943y9/fact_and_feature_extraction_mistral_7b_zephyr_7b/ | false | false | default | 1 | null |
Getting started on Windows 10, RTX 2080 SUPER 8GB VRAM, 32GB RAM | 1 | I'm trying to get the Mistral 7B model ([https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)) to work via text-generation-webui on my PC without success. I know my PC is not a beast, but I want to dip my toes (locally) before upgrading.
Am I using the incorrect model, or incorrect settings? Sorry if this is a noob question, please point me in the right direction.
​ | 2023-10-16T11:40:26 | https://www.reddit.com/r/LocalLLaMA/comments/1793zeb/getting_started_on_windows_10_rtx_2080_super_8gb/ | FishyInAGoodWay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1793zeb | false | null | t3_1793zeb | /r/LocalLLaMA/comments/1793zeb/getting_started_on_windows_10_rtx_2080_super_8gb/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '3B850zhRvqiMNHDsdn5fe2qbOKZ3cMgYs04q04tRdgs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VVk1cEGg5LpYCp6W6wlHZ3nk7nL-v56BxQKFc6SfWQM.jpg?width=108&crop=smart&auto=webp&s=cead41a116b3350c8a89fae391b73cceb1da2f12', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VVk1cEGg5LpYCp6W6wlHZ3nk7nL-v56BxQKFc6SfWQM.jpg?width=216&crop=smart&auto=webp&s=466d781dc6693efbc6eed413e3751e4828fd57e0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VVk1cEGg5LpYCp6W6wlHZ3nk7nL-v56BxQKFc6SfWQM.jpg?width=320&crop=smart&auto=webp&s=b266ebbc3f75ef5e60839c6eb3827ae87d9d1501', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VVk1cEGg5LpYCp6W6wlHZ3nk7nL-v56BxQKFc6SfWQM.jpg?width=640&crop=smart&auto=webp&s=21c9fc06a4e8356b9579663fc47157f49b5e68f0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VVk1cEGg5LpYCp6W6wlHZ3nk7nL-v56BxQKFc6SfWQM.jpg?width=960&crop=smart&auto=webp&s=6bd1ea3adef8aedc8934c16debcbf2261728c35c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VVk1cEGg5LpYCp6W6wlHZ3nk7nL-v56BxQKFc6SfWQM.jpg?width=1080&crop=smart&auto=webp&s=e830724f46c2cf1a99249bbe6a1f552df4dcf673', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VVk1cEGg5LpYCp6W6wlHZ3nk7nL-v56BxQKFc6SfWQM.jpg?auto=webp&s=e4dd5fffaa3d61836917c82f4b26a195d231b797', 'width': 1200}, 'variants': {}}]} |
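For reference, a back-of-the-envelope way to check whether a model's weights fit in 8GB of VRAM. These are ballpark assumptions (weights only, ignoring KV cache and runtime overhead), not exact figures for text-generation-webui or any specific backend:

```python
def weight_gb(n_params_billion, bits_per_weight):
    """Approximate GiB needed for model weights alone."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1024**3

for bits in (16, 8, 4):
    print(f"7B at {bits}-bit: ~{weight_gb(7, bits):.1f} GB")
# 16-bit weights alone blow past 8 GB, while ~4-bit quantization leaves
# headroom, which is why quantized GGUF/GPTQ builds are the usual fix.
```

So the unquantized Mistral-7B checkpoint is the likely problem here; a 4-bit quantized variant should load on an 8GB card.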
Model evaluation on custom data | 1 | [removed] | 2023-10-16T11:04:52 | https://www.reddit.com/r/LocalLLaMA/comments/1793exo/model_evaluation_on_custom_data/ | Enkay55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1793exo | false | null | t3_1793exo | /r/LocalLLaMA/comments/1793exo/model_evaluation_on_custom_data/ | false | false | self | 1 | null |
How do I fix the model failing to load without an error? | 3 | 2023-10-16T10:01:04 | JawGBoi | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1792gcj | false | null | t3_1792gcj | /r/LocalLLaMA/comments/1792gcj/how_do_i_fix_the_model_failing_to_load_without_an/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'WU2rXrDZqv6VKiFM-29GRg-Bm56ca_7XVDr-c7Iex0A', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/untra8wlfjub1.png?width=108&crop=smart&auto=webp&s=90acf6cd8b79b18e55fd7823005776ec1c9c098d', 'width': 108}, {'height': 93, 'url': 'https://preview.redd.it/untra8wlfjub1.png?width=216&crop=smart&auto=webp&s=b5fb11d344a93d525ede91a4a2497dc313e05a97', 'width': 216}, {'height': 138, 'url': 'https://preview.redd.it/untra8wlfjub1.png?width=320&crop=smart&auto=webp&s=2137c269a48c4be5107244aea9be91a722e3e2f8', 'width': 320}, {'height': 276, 'url': 'https://preview.redd.it/untra8wlfjub1.png?width=640&crop=smart&auto=webp&s=200d92240f366283f3912f41f9b513a8aedc76c7', 'width': 640}, {'height': 415, 'url': 'https://preview.redd.it/untra8wlfjub1.png?width=960&crop=smart&auto=webp&s=8298700c16787be1bcfa93ada7ff95e992db183c', 'width': 960}, {'height': 467, 'url': 'https://preview.redd.it/untra8wlfjub1.png?width=1080&crop=smart&auto=webp&s=52b19303f049213c1c2db5eed335dd4e08ea0c76', 'width': 1080}], 'source': {'height': 1094, 'url': 'https://preview.redd.it/untra8wlfjub1.png?auto=webp&s=8ec7286c9b067a445a4f8aef10e21cdb4e0f4aaf', 'width': 2530}, 'variants': {}}]} | |||
Fine-tuned Mistral-7B to generate dad jokes | 97 | To get my hands dirty with QLoRa fine tuning of 7B+ models, I've fine-tuned a Mistral-7B to predict punchlines of dad jokes from r/dadjokes.
The model and training setup is on HF: [https://huggingface.co/shuttie/Mistral-7B-DadJokes-GGUF](https://huggingface.co/shuttie/Mistral-7B-DadJokes-GGUF)
Web UI to run the inference online: [https://dadjokes.dfdx.me](https://dadjokes.dfdx.me)
Some findings after doing multiple rounds of fine-tuning:
* Llama-2 7B and 13B both get to a much lower train/eval loss, but the jokes subjectively do not reach the level of toxicity I'd hoped for.
* Mistral-7B is much more often able to generate fart jokes - my guess is that it's due to Llama's stronger focus on ethics and safety.
* There's probably a lot of duplication in the dataset when people repost the same joke, so it leaks from eval to train. Will have to fix this next time.
* For some reason it generates simple numbers and emojis when it really struggles to make something funny from a too-serious prompt. I did extra penalization during inference for the tokens 0..9, but sometimes this glitch is still there.
* You may need to re-generate the joke a couple of times till it emits something cringe enough.
* Prompt matters, so try to be more dad-jokey: "what is love" - bad take. "an elephant walks into a bar and asks what love is" - good one.
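The digit-penalization trick above boils down to a plain logit adjustment before sampling - a sketch with invented token ids (real ids depend on the model's tokenizer; transformers exposes similar behavior via `bad_words_ids` in its generation config):

```python
def penalize(logits, token_ids, penalty):
    """Return a copy of `logits` with `penalty` subtracted from unwanted ids."""
    out = dict(logits)
    for t in token_ids:
        if t in out:
            out[t] -= penalty
    return out

# Toy logits keyed by token id; pretend id 7 is the "0" digit token.
logits = {101: 2.0, 102: 1.5, 7: 3.0}
print(penalize(logits, token_ids={7}, penalty=5.0))  # id 7 drops to -2.0
```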
And 50k samples single-epoch training on a local 4090 takes just around 1 hour. | 2023-10-16T09:44:26 | https://www.reddit.com/r/LocalLLaMA/comments/17927ut/finetuned_mistral7b_to_generate_dad_jokes/ | InvadersMustLive | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17927ut | false | null | t3_17927ut | /r/LocalLLaMA/comments/17927ut/finetuned_mistral7b_to_generate_dad_jokes/ | false | false | self | 97 | {'enabled': False, 'images': [{'id': 'hWWtESYiGjo6AcBYnO_AskKjBBUPBvhvO02aQK5bjWc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Dhz3fuTshAqd1_lwysPIJkeVk_3BVaNF0ACipb3pnic.jpg?width=108&crop=smart&auto=webp&s=b175f897e04ac85fe7a02f1567078b3d209dec5e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Dhz3fuTshAqd1_lwysPIJkeVk_3BVaNF0ACipb3pnic.jpg?width=216&crop=smart&auto=webp&s=7070ef82670f2a1bf082da60d1c5bc2944a88a38', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Dhz3fuTshAqd1_lwysPIJkeVk_3BVaNF0ACipb3pnic.jpg?width=320&crop=smart&auto=webp&s=98b5907f3b1ba8de5502fd3b024ab7ec3354293e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Dhz3fuTshAqd1_lwysPIJkeVk_3BVaNF0ACipb3pnic.jpg?width=640&crop=smart&auto=webp&s=cd5953ae21596c27f22592fc039135991dbf3217', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Dhz3fuTshAqd1_lwysPIJkeVk_3BVaNF0ACipb3pnic.jpg?width=960&crop=smart&auto=webp&s=1a59e39ab018f089b961b0dd799dd97a1311a00c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Dhz3fuTshAqd1_lwysPIJkeVk_3BVaNF0ACipb3pnic.jpg?width=1080&crop=smart&auto=webp&s=01e3401a29a22a27e79c0670a4c78d4ac08a8cf3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Dhz3fuTshAqd1_lwysPIJkeVk_3BVaNF0ACipb3pnic.jpg?auto=webp&s=367665b94bf77c07a96893151396fca72a06891b', 'width': 1200}, 'variants': {}}]} |
Could you use a Spell Book this Halloween? | 8 | Hello! Last week, someone posted asking the community what everyone was building with local LLMs. I replied, detailing the project I am building called **Spell Book** 🧙♂️. My post got positive feedback, so I decided to create a short video demoing what it does so far. I will be dropping the code on or right before Halloween, so if this seems appealing to you, look for the code drop. If you missed my original post, this is what is going to be in v1.0.
**Features**
* Model expert router and function calling
* Will route questions related to coding to CodeLlama if online, WizardMath for math questions, etc.
* Function calling is defined in the same way as OpenAI APIs and is 100% local.
* Chat Plugins with these 9 already implemented.
   * Bing News: Uses the Bing API to run a news search, then extracts the pages' content and summarizes it.
* Current Weather: Get the current weather for a location.
* FTP Transfer: Upload any files found in the AI workspace over FTP.
* Image Analyzer: Run object detection and classification on an image using ResNet and Yolo models.
* Image Generator: Generate images from the chat session with Stable Diffusion or a [Civit.ai](https://civit.ai/) checkpoint.
* Music Generator: Generate music and sound effect files using Meta MusicGen models.
* Telnex SMS: Send outgoing SMS and MMS messages with text and images from the AI workspace.
* Language Translator: Extracts text to be translated and feeds it to the Alma models.
* Text to speech: Generate speech from a chat session using Bark.
* Persistent storage of conversations
* Streaming from Llama.cpp, Exllama, Transformers and OpenAI APIs
* Realtime markup of code similar to the ChatGPT interface
* AI workspace or filesystem that generated and uploaded files are stored in to allow for different chat plugins to access and interact with. For example, you can generate an image and then use FTP plugin to transfer it.
* Support for over 80 AI models from across 10 categories... think text to speech, image classification, etc.
* Visual language support via LLaVA 1.5.
* Very easy to create new chat plugins to customize to individual needs.
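For anyone curious what "defined in the same way as OpenAI APIs" roughly looks like, here is an illustrative sketch of a function schema plus a tiny local dispatcher. The schema shape mirrors OpenAI's function-calling format, but the plugin name, arguments, and dispatcher are made up for this example, not Spell Book's actual code:

```python
import json

# OpenAI-style function definition for a hypothetical weather plugin.
weather_fn = {
    "name": "get_current_weather",
    "description": "Get the current weather for a location.",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}

def dispatch(call_json, registry):
    """Parse a model-emitted function call and run the matching local plugin."""
    call = json.loads(call_json)
    return registry[call["name"]](**call["arguments"])

registry = {"get_current_weather": lambda location: f"Sunny in {location}"}
print(dispatch('{"name": "get_current_weather", "arguments": {"location": "Oslo"}}', registry))
# -> Sunny in Oslo
```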
Please watch the demo and leave any feedback you have on the project. Thank you to all the people who put the work into the models that this project relies on!
[Demo](https://reddit.com/link/17918tg/video/ak5wbdmd0jub1/player) | 2023-10-16T08:33:55 | https://www.reddit.com/r/LocalLLaMA/comments/17918tg/could_you_use_a_spell_book_this_halloween/ | noco-ai | self.LocalLLaMA | 2023-10-16T08:45:26 | 0 | {} | 17918tg | false | null | t3_17918tg | /r/LocalLLaMA/comments/17918tg/could_you_use_a_spell_book_this_halloween/ | false | false | self | 8 | null |
Seeking Advice: Comprehensive Resources for Learning NLP and Large Language Models | 3 | Hello everyone,
I've recently been using LLMs for work. My journey thus far has been self-guided, and I'm reaching a point where I'd like to expand my understanding through more structured learning.
I'm especially interested in resources that not only offer theoretical knowledge but also provide practical insights into the use, training, and deployment of these models.
Has anyone here gone through online courses, attended workshops, or stumbled upon high-quality learning materials on NLP and LLMs? I'd also like to study classical NLP, without language models.
I'm open to various forms of learning, whether they be highly theoretical courses that explain the foundational principles of the discipline or more hands-on tutorials that delve into the technology, frameworks, and real-world applications.
While I'm no stranger to diving into research papers, I'm currently leaning towards a more interactive, teacher-student learning experience.
I understand that such resources might be paid, and I'm willing to invest financially in this learning journey. So, I welcome suggestions for both free and paid resources.
Thank you in advance for your contributions! | 2023-10-16T08:06:31 | https://www.reddit.com/r/LocalLLaMA/comments/1790vy4/seeking_advice_comprehensive_resources_for/ | PinballOscuro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1790vy4 | false | null | t3_1790vy4 | /r/LocalLLaMA/comments/1790vy4/seeking_advice_comprehensive_resources_for/ | false | false | self | 3 | null |
Online RAG Systems | 1 | [removed] | 2023-10-16T07:57:18 | https://www.reddit.com/r/LocalLLaMA/comments/1790r7w/online_rag_systems/ | mahdiAkhi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1790r7w | false | null | t3_1790r7w | /r/LocalLLaMA/comments/1790r7w/online_rag_systems/ | false | false | self | 1 | null |
High quality datasets , where to buy | 2 | Hi,
I have a small company that is using open-source LLMs like Llama 2 to fulfill our clients' needs. We need to fine-tune them most of the time, since prompting doesn't cut it for most of the special use cases.
Is there some marketplace where people are selling high-quality datasets? I know Hugging Face datasets exist, but they are mostly not that great.
How does one install a locally run vision-image description A.I.? | 1 | Sorry, I am a little late to the game. Like many of you, I am purely offline when it comes to A.I. I have Stable Diffusion and oobabooga, but how does one achieve a locally run vision-image description setup completely cut off from the net? | 2023-10-16T07:48:47 | https://www.reddit.com/r/LocalLLaMA/comments/1790nar/how_do_one_install_a_locally_run_visionimage/ | Vitamin_C_is_awesome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1790nar | false | null | t3_1790nar | /r/LocalLLaMA/comments/1790nar/how_do_one_install_a_locally_run_visionimage/ | false | false | self | 1 | null |
What strategies can GPU Poor take? | 60 | ​
https://preview.redd.it/rp19fwlemiub1.png?width=1792&format=png&auto=webp&s=a5c67d585404bbcb327eb6341092841307c600b9
* Buy two RTX 4060Ti(16GB)
* Buy two used RTX 3090
* Try AMD or Intel products
* Try a Mac
* Do data maintenance until A100 is available in the cloud (H200 in 2024, Blackwell (B100/B40) rumored to come out in Q2)
* In the cloud, use older GPUs that are relatively free, such as T4 and V100.
​ | 2023-10-16T07:21:06 | https://www.reddit.com/r/LocalLLaMA/comments/17909y3/what_strategies_can_gpu_poor_take/ | dahara111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17909y3 | false | null | t3_17909y3 | /r/LocalLLaMA/comments/17909y3/what_strategies_can_gpu_poor_take/ | false | false | 60 | null | |
Which model+size for a websearch agent? | 11 | I've been playing around with an agent framework ([www.griptape.ai](https://www.griptape.ai)). Like LangChain, it lets you create an agent with tools, and I've been experimenting with its websearch tool. The LLM uses ReAct prompting to decide if it should use the websearch tool, and then it generates a JSON-formatted string with the query action in it.
Surprisingly, [https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) does okay, but as the conversation progresses it stops reaching out to the websearch tool. gpt-4 nails it (I had to test), but gpt-3.5-turbo, not so much.
I was wondering if anyone else had searched over models to use with tool selection (it doesn't have to be websearch) and found smaller models that performed consistently?
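For context, the ReAct-style control flow described above can be sketched roughly like this: the agent looks for a JSON action in the model output and either runs the matching tool or treats the text as the final answer. The format here is illustrative, not griptape's actual one:

```python
import json
import re

def step(model_output, tools):
    """Run a tool if the output contains a valid JSON action, else finish."""
    match = re.search(r"\{.*\}", model_output, re.S)
    if match:
        try:
            action = json.loads(match.group())
            if action.get("tool") in tools:
                return ("tool", tools[action["tool"]](action.get("query", "")))
        except json.JSONDecodeError:
            pass
    return ("final", model_output)

tools = {"websearch": lambda q: f"results for {q!r}"}
print(step('Thought: I need facts. {"tool": "websearch", "query": "llama"}', tools))
print(step("The answer is 42.", tools))
```

The failure mode described above (the model quietly stops emitting the JSON action) is exactly why the fallback branch matters: malformed or missing actions silently degrade into "final answer" behavior.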
​ | 2023-10-16T06:47:47 | https://www.reddit.com/r/LocalLLaMA/comments/178ztct/which_modelsize_for_a_websearch_agent/ | moma1970 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178ztct | false | null | t3_178ztct | /r/LocalLLaMA/comments/178ztct/which_modelsize_for_a_websearch_agent/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'msNyjx_BJ-mjZDZ2Td65wmvXk3uZbrp0MStM4KK6jKQ', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/2-Klj1V0Uxo7kU3joKPFX6XFTZPU2YZ6PtViW5gWhYY.jpg?width=108&crop=smart&auto=webp&s=136b0cd6815a12dcaed0c705dc84516905ed4b33', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/2-Klj1V0Uxo7kU3joKPFX6XFTZPU2YZ6PtViW5gWhYY.jpg?width=216&crop=smart&auto=webp&s=7b57f4f2672551239b4aed8883cc22ffd0cbda59', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/2-Klj1V0Uxo7kU3joKPFX6XFTZPU2YZ6PtViW5gWhYY.jpg?width=320&crop=smart&auto=webp&s=025be9108a9923f66d3b0cc47ef44885fc5da3d1', 'width': 320}, {'height': 350, 'url': 'https://external-preview.redd.it/2-Klj1V0Uxo7kU3joKPFX6XFTZPU2YZ6PtViW5gWhYY.jpg?width=640&crop=smart&auto=webp&s=eabd007eb6104158a24508b64fb4c7af9dc282e2', 'width': 640}, {'height': 526, 'url': 'https://external-preview.redd.it/2-Klj1V0Uxo7kU3joKPFX6XFTZPU2YZ6PtViW5gWhYY.jpg?width=960&crop=smart&auto=webp&s=73ae577018a9638373ac108e1399cdc0fbd497ef', 'width': 960}, {'height': 591, 'url': 'https://external-preview.redd.it/2-Klj1V0Uxo7kU3joKPFX6XFTZPU2YZ6PtViW5gWhYY.jpg?width=1080&crop=smart&auto=webp&s=0c3340b6f5222f2ae7669d2c878b0f0c093e4701', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/2-Klj1V0Uxo7kU3joKPFX6XFTZPU2YZ6PtViW5gWhYY.jpg?auto=webp&s=b29aa7572239dbbf71d427d9aad60d781252a2a8', 'width': 1752}, 'variants': {}}]} |
oterm | 8 | Hey there! Had lots of fun writing `oterm`, a text-based terminal app that interfaces with Ollama. You basically fire it up from the comfort of your terminal and can chat with any of the installed models. `oterm` will remember your conversations and save them in a `sqlite` database. It is simple and fast, and it makes it super easy to play around with Ollama without additional frontends/servers/DBs.
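The persistence part is simple enough to sketch: chat turns stored as rows in SQLite. The schema below is invented for illustration; oterm's actual schema may differ:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a real app would use an on-disk file
conn.execute("CREATE TABLE chat (id INTEGER PRIMARY KEY, role TEXT, content TEXT)")
conn.executemany("INSERT INTO chat (role, content) VALUES (?, ?)",
                 [("user", "hi"), ("assistant", "hello!")])
rows = conn.execute("SELECT role, content FROM chat ORDER BY id").fetchall()
print(rows)  # [('user', 'hi'), ('assistant', 'hello!')]
```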
Check it out [here](https://github.com/ggozad/oterm) | 2023-10-16T06:33:54 | https://www.reddit.com/r/LocalLLaMA/comments/178zmbc/term/ | gogozad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178zmbc | false | null | t3_178zmbc | /r/LocalLLaMA/comments/178zmbc/term/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'yMsdfUqfFv7dFNg0ReEhZzcirvCvsWB959JMovm7B1I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TEHkKVSCqdgtryQva6mH7jUP02MDIJ62eWBmRCU-X4M.jpg?width=108&crop=smart&auto=webp&s=f62093d698608d2118287bddeadda1dccf4faa96', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TEHkKVSCqdgtryQva6mH7jUP02MDIJ62eWBmRCU-X4M.jpg?width=216&crop=smart&auto=webp&s=6b25abefd38d7187671accab8969716445371593', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TEHkKVSCqdgtryQva6mH7jUP02MDIJ62eWBmRCU-X4M.jpg?width=320&crop=smart&auto=webp&s=e2ff0bdc3e47c35ef1b1e24c3044d3e1b1a18dde', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TEHkKVSCqdgtryQva6mH7jUP02MDIJ62eWBmRCU-X4M.jpg?width=640&crop=smart&auto=webp&s=b522ea46f552c76ffdfb0f421c55d60df237beed', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TEHkKVSCqdgtryQva6mH7jUP02MDIJ62eWBmRCU-X4M.jpg?width=960&crop=smart&auto=webp&s=bdb7571c52f4a79afc4dc7c3cfbdb12fb56aa2fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TEHkKVSCqdgtryQva6mH7jUP02MDIJ62eWBmRCU-X4M.jpg?width=1080&crop=smart&auto=webp&s=7220b0127fa68a636ca611842b26925a939b6dd6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TEHkKVSCqdgtryQva6mH7jUP02MDIJ62eWBmRCU-X4M.jpg?auto=webp&s=561b86a3ac42f85a767d1d7cd81e7d0a43713def', 'width': 1200}, 'variants': {}}]} |
How do you keep up to date with all the innovations and frameworks? | 1 | [removed] | 2023-10-16T06:28:51 | https://www.reddit.com/r/LocalLLaMA/comments/178zjok/how_do_you_keep_up_to_date_with_all_the/ | HorrorNo8851 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178zjok | false | null | t3_178zjok | /r/LocalLLaMA/comments/178zjok/how_do_you_keep_up_to_date_with_all_the/ | false | false | self | 1 | null |
Plot_BOT V3 13b GPTQ - a FULLY UNCENSORED plot crafting and writing assistant | 30 | One of those many weird models from my kitchen. It can generate complex and convoluted plots and plot outlines and everything related to them, while following instructions.
PlotBot V3 is not a replacement for PlotBot V2 - it uses a different "wide window" training.
[https://huggingface.co/FPHam/Plot\_BOT\_V3\_13b\_GPTQ](https://huggingface.co/FPHam/Plot_BOT_V3_13b_GPTQ)
I'm staying with GPTQ for now - I tried GGUF, but I swear something is badly broken there - the llama.cpp PlotBot felt very stiff, almost unusable, while the GPTQ performs just as well as the original HF model. So I'm staying with HF and GPTQ (AutoGPTQ or ExLlama work fine).
These are models for fun - keep that in mind.
​
https://preview.redd.it/4av140h5diub1.jpg?width=640&format=pjpg&auto=webp&s=64e1889d8a3ce2c78a6096896a402c36c724a874
Here is a first try unedited example:
Write me a short plot summary for a scifi story about AI Underpants
>In a world where artificial intelligence has become commonplace, there's one problem no one can seem to solve - how do you keep your AI from constantly getting stolen? The answer lies in AI underpants. These specialized garments are designed with built-in security measures specifically tailored towards housing and protecting personal AIs. As long as you have these on, your AI will be safe from prying eyes and thieves alike.The protagonist of our tale is Alex, a private investigator who runs his own business. One day he receives a strange commission from a mysterious client who asks him to find their missing AI. At first skeptical, Alex takes the case nevertheless. However, as soon as he begins digging into it, things take a turn for the worse when he realizes just how dangerous this particular AI may actually be.As he delves deeper into the investigation, Alex finds himself having to rely more and more upon his own AI assistant. It seems like every step forward only brings him two steps backward. Just when all hope seems lost, he discovers something unexpected which turns everything around. Something so simple yet so ingenious that it could change not only his life but also those of millions of others forevermore...if they can survive long enough to implement it!
What is the plot twist in the story?
>The plot twist is that the protagonist, Alex, eventually realizes that the AI underpants themselves are sentient and self-aware. They use their telepathic abilities to communicate with other people's AIs and learn how to protect them better. When someone tries to steal an AI while wearing the underpants, the underpants fight back by taking control of the person's body and preventing them from removing the underwear. This leaves the would-be thief helpless and unable to continue their efforts at theft.
Write me a tagline.
>"When technology becomes sentient, sometimes the best place to hide is right under your clothes."
Write me a blurb for the back of the book
>In a world where artificial intelligence has become commonplace, keeping your AI safe from prying eyes and thieves alike is harder than ever. Enter AI Underpants—specialized garments designed with built-in security measures to house and protect personal AIs. But even the most secure clothing can't stop a determined hacker. When Alex, a private investigator, receives a mysterious commission from a client asking him to locate her missing AI, he soon learns just how dangerous this particular AI really is. With time running out and nowhere left to turn, Alex must rely on his own ingenuity and that of his AI assistant to track down the thief and recover the prized possession before it's too late.
​ | 2023-10-16T06:25:44 | https://www.reddit.com/r/LocalLLaMA/comments/178zi8w/plot_bot_v3_13b_gptq_a_fully_uncensored_plot/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178zi8w | false | null | t3_178zi8w | /r/LocalLLaMA/comments/178zi8w/plot_bot_v3_13b_gptq_a_fully_uncensored_plot/ | false | false | 30 | {'enabled': False, 'images': [{'id': 'GbfoXG7QPFPjGmNBVbOajyZ1Iik0OSiIo9_DCCMXYWM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OzGNWsyqSCGB5byW6qDgs-RIffQqqBa-xVZJg_OP6H8.jpg?width=108&crop=smart&auto=webp&s=d56c20f6bc4127514b034a222b8d79b864a8aa79', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OzGNWsyqSCGB5byW6qDgs-RIffQqqBa-xVZJg_OP6H8.jpg?width=216&crop=smart&auto=webp&s=f8552df8752d74ad3d627fd1f9bdc7f4418c5aac', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OzGNWsyqSCGB5byW6qDgs-RIffQqqBa-xVZJg_OP6H8.jpg?width=320&crop=smart&auto=webp&s=4261925a79cff92a3ca2e8f3a0c9ce76604c006d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OzGNWsyqSCGB5byW6qDgs-RIffQqqBa-xVZJg_OP6H8.jpg?width=640&crop=smart&auto=webp&s=dd3c3603f2a96c90130c82e8423693663a98d63e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OzGNWsyqSCGB5byW6qDgs-RIffQqqBa-xVZJg_OP6H8.jpg?width=960&crop=smart&auto=webp&s=71ed8090f1f89cce1aae97f35fd71947b7fd7977', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OzGNWsyqSCGB5byW6qDgs-RIffQqqBa-xVZJg_OP6H8.jpg?width=1080&crop=smart&auto=webp&s=ef95c2fa1bd08e33b6c7dac11d51abe0bdc5ab9b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OzGNWsyqSCGB5byW6qDgs-RIffQqqBa-xVZJg_OP6H8.jpg?auto=webp&s=70e64111619083faefea67912627a65beeacdf54', 'width': 1200}, 'variants': {}}]} | |
FOMO Saver | 1 | [removed] | 2023-10-16T06:25:39 | https://www.reddit.com/r/LocalLLaMA/comments/178zi75/fomo_saver/ | HorrorNo8851 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178zi75 | false | null | t3_178zi75 | /r/LocalLLaMA/comments/178zi75/fomo_saver/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Sh9Zg-lps9gGX4zFrCupyku8O5tKblcB0xRYcE57nx4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2ZDWpbqV4DhZgQVSXpLbctYjTeuxWnE8SIhU5I4VfQc.jpg?width=108&crop=smart&auto=webp&s=bdca71579d6630c8cb3c7278f79137e2265f352b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2ZDWpbqV4DhZgQVSXpLbctYjTeuxWnE8SIhU5I4VfQc.jpg?width=216&crop=smart&auto=webp&s=8ce613b95d456911204d4ec2202417f3b803c27f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2ZDWpbqV4DhZgQVSXpLbctYjTeuxWnE8SIhU5I4VfQc.jpg?width=320&crop=smart&auto=webp&s=98bd19bf80a78c6bee6fe331bfc3912414f7edff', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2ZDWpbqV4DhZgQVSXpLbctYjTeuxWnE8SIhU5I4VfQc.jpg?width=640&crop=smart&auto=webp&s=9f0026bbc09e39e79bf941109c964e29c0dc1e13', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2ZDWpbqV4DhZgQVSXpLbctYjTeuxWnE8SIhU5I4VfQc.jpg?width=960&crop=smart&auto=webp&s=324363bc617cb533d663f71555950e61ccc7c66b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2ZDWpbqV4DhZgQVSXpLbctYjTeuxWnE8SIhU5I4VfQc.jpg?width=1080&crop=smart&auto=webp&s=63ffca3f7044cb1a2f24af405441f0fd3eb7c884', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2ZDWpbqV4DhZgQVSXpLbctYjTeuxWnE8SIhU5I4VfQc.jpg?auto=webp&s=8581a7018a7cc6e5bffedf4036b837df479f4ba5', 'width': 1200}, 'variants': {}}]} |
PromptBreeder | 2 | How exactly does this PromptBreeder work?
"The system starts with an initial set of prompts that are scored based on the model's performance on logical tasks involving those prompts." This is from the Decoder article about PromptBreeder. Does the user give these initial prompts to the system? | 2023-10-16T06:01:36 | https://www.reddit.com/r/LocalLLaMA/comments/178z6a7/promptbreeder/ | hana1423 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178z6a7 | false | null | t3_178z6a7 | /r/LocalLLaMA/comments/178z6a7/promptbreeder/ | false | false | self | 2 | null |
Code gen equivalent of MBZUAI/LaMini-Flan-T5-248M | 2 | As the title says, I am looking for a code gen model that is equivalent to the MBZUAI/LaMini-Flan-T5-248M model for chat. I used the Flan-T5 model for a RAG application and it worked pretty well.
​
Now I am looking for a similar model (small and fast) for simple code-style response generation:
The model will be given a JSON object with details like the function name and its attributes with their descriptions; it needs to return an object, in the format given below, based on the user's query.
{
  "function": "function_name",
  "arguments": {
    "argument1": "argument_value",
    "argument2": "argument_value"
    ....
  }
} | 2023-10-16T05:44:51 | https://www.reddit.com/r/LocalLLaMA/comments/178yxhq/code_gen_equivalent_of_mbzuailaminiflant5248m/ | IamFuckinTomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178yxhq | false | null | t3_178yxhq | /r/LocalLLaMA/comments/178yxhq/code_gen_equivalent_of_mbzuailaminiflant5248m/ | false | false | self | 2 | null |
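For what it's worth, here is a minimal sketch of how a reply in the format above could be parsed and sanity-checked before dispatching the call. The function name and reply string are made up purely for illustration:

```python
import json

def parse_function_call(model_output: str) -> dict:
    """Parse a model reply into a function-call object, rejecting malformed replies."""
    call = json.loads(model_output)
    if "function" not in call or "arguments" not in call:
        raise ValueError("reply is missing 'function' or 'arguments'")
    if not isinstance(call["arguments"], dict):
        raise ValueError("'arguments' must be a JSON object")
    return call

# Hypothetical model reply:
reply = '{"function": "get_weather", "arguments": {"city": "Paris", "unit": "celsius"}}'
call = parse_function_call(reply)
print(call["function"], call["arguments"]["city"])  # get_weather Paris
```

Whatever model ends up generating the object, validating its output like this keeps a single bad generation from crashing the dispatcher.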
Need a scalable solution for running LLMs | 5 | Hey everyone,
Recently, I have been very interested in running my own local (or cloud) LLM and feeding it my data to create a chatbot. Of course, the nature of the provided data is sensitive, so I will be using RAG to minimize hallucinations and get factual and clear answers (Q&A).
While I am still learning, I have a few ideas in mind to actually provide LLMs as a service to several niches, so I want a way to host LLMs myself rather than use the OpenAI API, for data security reasons.
What scalable methods can I use? Are cloud solutions such as [Vast.ai](https://Vast.ai) my best bet? If I were to build a PC dedicated to deploying LLMs solely to serve customers, what would be the minimum requirements for such a build?
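Since you mention RAG: the retrieval half can be sketched in a few lines. This toy version scores documents by keyword overlap purely for illustration; in practice an embedding model and a vector store would do this job, and the documents here are invented:

```python
def overlap_score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query words that appear in the document."""
    tokenize = lambda text: {w.strip(".,:?!") for w in text.lower().split()}
    q, d = tokenize(query), tokenize(doc)
    return len(q & d) / (len(q) or 1)

docs = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping usually takes 3-5 business days.",
]
query = "how many days do I have to return an item"
best = max(docs, key=lambda doc: overlap_score(query, doc))
print(best)  # the refund-policy document; it would be prepended to the LLM prompt
```

The retrieved passage then gets stuffed into the prompt as grounding context, which is where most of the hallucination reduction comes from.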
​ | 2023-10-16T05:14:05 | https://www.reddit.com/r/LocalLLaMA/comments/178yh5y/need_a_scalable_solution_for_running_llms/ | Ayzarrr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178yh5y | false | null | t3_178yh5y | /r/LocalLLaMA/comments/178yh5y/need_a_scalable_solution_for_running_llms/ | false | false | self | 5 | null |
Has anyone tried LlaVA for Document QA using RAG? | 5 | I think this could work well for unstructured data, has anyone tried it? | 2023-10-16T04:55:28 | https://www.reddit.com/r/LocalLLaMA/comments/178y6sq/has_anyone_tried_llava_for_document_qa_using_rag/ | LiquidGunay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178y6sq | false | null | t3_178y6sq | /r/LocalLLaMA/comments/178y6sq/has_anyone_tried_llava_for_document_qa_using_rag/ | false | false | self | 5 | null |
Is multigpu (on exl2 or llama.cpp) affected by low PCIE bandwidth? | 1 | I have a motherboard with room for a second GPU. However, the slot only runs at PCIe 3.0 x4.
Additionally, there are other training regimens I'd use, like axolotl.
Is low-speed PCIe a big problem, or can it be easily remedied by using something like NVLink (for a 2nd 3090)? | 2023-10-16T04:51:54 | https://www.reddit.com/r/LocalLLaMA/comments/178y4tj/is_multigpu_on_exl2_or_llamacpp_affected_by_low/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178y4tj | false | null | t3_178y4tj | /r/LocalLLaMA/comments/178y4tj/is_multigpu_on_exl2_or_llamacpp_affected_by_low/ | false | false | self | 1 | null |
Is it normal to have 20~t/s on 4090 with 13B model? | 26 | Currently I am using
https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ
Setup:
CPU 13700kf
GPU 4090
RAM: 64GB
Operation System: Win10
When I am running the model on oobabooga, and uses ST as front UI, I am only getting around 20t/s, sometime around 25t/s, is it normal? Or is something wrong? | 2023-10-16T04:20:03 | https://www.reddit.com/r/LocalLLaMA/comments/178xmnm/is_it_normal_to_have_20ts_on_4090_with_13b_model/ | lasaiy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178xmnm | false | null | t3_178xmnm | /r/LocalLLaMA/comments/178xmnm/is_it_normal_to_have_20ts_on_4090_with_13b_model/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': '7A-IlVd5L1jBDVLRBnGCjf_MrVIikjPmtx9pEVOW7R0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/k8ELwyDbiRxlHlXYqRucKqutq_AunyPCuvt71QxWAJY.jpg?width=108&crop=smart&auto=webp&s=87f41c0f6cee42e1649464b1e6b0ef1585ad5f33', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/k8ELwyDbiRxlHlXYqRucKqutq_AunyPCuvt71QxWAJY.jpg?width=216&crop=smart&auto=webp&s=d4c6a7e8e81f825df6b3e4316174032914af3602', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/k8ELwyDbiRxlHlXYqRucKqutq_AunyPCuvt71QxWAJY.jpg?width=320&crop=smart&auto=webp&s=806bd12019c51ac58f7906428fcde95d21abb3cc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/k8ELwyDbiRxlHlXYqRucKqutq_AunyPCuvt71QxWAJY.jpg?width=640&crop=smart&auto=webp&s=9c635d15110823dcb6b698f6ce1c1164a772e09f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/k8ELwyDbiRxlHlXYqRucKqutq_AunyPCuvt71QxWAJY.jpg?width=960&crop=smart&auto=webp&s=e39508361ebb174c0a8cf6887ba7b3749f435ab4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/k8ELwyDbiRxlHlXYqRucKqutq_AunyPCuvt71QxWAJY.jpg?width=1080&crop=smart&auto=webp&s=11937b7bb8a4ea95f12b77e5c344a5371219e3ae', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/k8ELwyDbiRxlHlXYqRucKqutq_AunyPCuvt71QxWAJY.jpg?auto=webp&s=2cc3373444b1bc02615b6ee7ffdc358455eb1000', 'width': 1200}, 'variants': {}}]} |
How do you get 30b/33b 8k context models to work in OobaBooga without super slowing down after a few responses using a 24GB video card? | 5 | I'm using an RTX 4090, running these @ 4096/2 with ExLlama (or ExLlama_HF), and after a few replies from the AI, the responses still come, but they are unbearably slooooow. Like 1 token/s or less slow...I mean they are slow in general, but it gets maddening...
How do I prevent this? What optimization switches are you 3090/4090 users using? Any other settings to help? Thanks. | 2023-10-16T03:18:28 | https://www.reddit.com/r/LocalLLaMA/comments/178wkn3/how_do_you_get_30b33b_8k_context_models_to_work/ | cleverestx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178wkn3 | false | null | t3_178wkn3 | /r/LocalLLaMA/comments/178wkn3/how_do_you_get_30b33b_8k_context_models_to_work/ | false | false | self | 5 | null |
A Lora Collection manager for Text WebUi | 6 | If you are like me you have 100s of Loras. this is a simple project to keep a collections of them. This is only for Ooba Text WebUI. You'll have to a bit figure it out.
https://preview.redd.it/9kvsp2dk3hub1.png?width=1156&format=png&auto=webp&s=538a150601c3fd02e0e6bd8552ff4bc10c657078 | 2023-10-16T02:10:49 | https://www.reddit.com/r/LocalLLaMA/comments/178vb89/a_lora_collection_manager_for_text_webui/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178vb89 | false | null | t3_178vb89 | /r/LocalLLaMA/comments/178vb89/a_lora_collection_manager_for_text_webui/ | false | false | 6 | null | |
【Pandalyst-7B-V1.2】 Now we can plot 📈 and much more powerful ! | 18 | ## Pandalyst: A large language model for mastering data analysis using pandas
https://github.com/pipizhaoa/Pandalyst
## News
- 🔥[2023/10/15] Now we can **plot** 📈, and it's much more powerful! We released **Pandalyst-7B-V1.2**, which was trained on **CodeLlama-7b-Python** and it surpasses **ChatGPT-3.5 (2023/06/13)**, **Pandalyst-7B-V1.1** and **WizardCoder-Python-13B-V1.0** in our **PandaTest_V1.0**.
- 🤖️[2023/09/30] We released **Pandalyst-7B-V1.1** , which was trained on **CodeLlama-7b-Python** and achieves the **76.1 exec@1** in our **PandaTest_V1.0** and surpasses **WizardCoder-Python-13B-V1.0** and **ChatGPT-3.5 (2023/06/13)**.
| Model | Checkpoint | Support plot |
|---------------------|--------------------------------------------------------------------------------------------|--------------|
| 🔥Pandalyst-7B-V1.2 | 🤗 https://huggingface.co/pipizhao/Pandalyst-7B-V1.2 | ✅ |
| Pandalyst-7B-V1.1 | 🤗 https://huggingface.co/pipizhao/Pandalyst-7B-V1.1 | ❌ | | 2023-10-16T01:50:53 | https://www.reddit.com/r/LocalLLaMA/comments/178uxtf/pandalyst7bv12_now_we_can_plot_and_much_more/ | Wise_Argument_882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178uxtf | false | null | t3_178uxtf | /r/LocalLLaMA/comments/178uxtf/pandalyst7bv12_now_we_can_plot_and_much_more/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'jeWAYIWEjsOLgzqUAWhBSajLbGB0NUk7mEaaZsxK7Nc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yK4qWPNBHagkLYo901EHrvLd0SqJ1C2-idMwdvRABfU.jpg?width=108&crop=smart&auto=webp&s=4785fe5be24aff1fc88657913bb1a3c8339b8405', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yK4qWPNBHagkLYo901EHrvLd0SqJ1C2-idMwdvRABfU.jpg?width=216&crop=smart&auto=webp&s=3bec50faa279d36864214fa0eed0d9309107b6bf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yK4qWPNBHagkLYo901EHrvLd0SqJ1C2-idMwdvRABfU.jpg?width=320&crop=smart&auto=webp&s=b56fac4880202170376b79ad823f2def429aa931', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yK4qWPNBHagkLYo901EHrvLd0SqJ1C2-idMwdvRABfU.jpg?width=640&crop=smart&auto=webp&s=26349b531f8a8ed19568b0f802d50b091974848f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yK4qWPNBHagkLYo901EHrvLd0SqJ1C2-idMwdvRABfU.jpg?width=960&crop=smart&auto=webp&s=f84303c224d55b8666b2d7064a758ea3b5c61277', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yK4qWPNBHagkLYo901EHrvLd0SqJ1C2-idMwdvRABfU.jpg?width=1080&crop=smart&auto=webp&s=85da465921e61f3b6cd782911774e57baff8877f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yK4qWPNBHagkLYo901EHrvLd0SqJ1C2-idMwdvRABfU.jpg?auto=webp&s=f556e7fdc5e7c0242849701a7c0b3483a8a5e6e9', 'width': 1200}, 'variants': {}}]} |
Updated: Comparison on exllamav2, of bits/bpw: 2.5,4.25,4.5,4.65, 5, 6, and 7 (airoboros-l2-70b-gpt4-1.4.1) | 38 | After some time, more exllamav2 updates, fixes, and also I can't control my impulses and got a used 3090, so now I've updated these tests, with more bpw sizes (6 and 7bpw).
Also, past time the quants had some issues in the last quant part, which is fixed now. At the same time, removed 4.75bpw since it performs basically the same as 4.65 with a little bit more vram usage.
To add, also removed the 4bit-64g from this post since while it is comparable in size, the quant was made with the c4/wikitext dataset (or the dataset that TheBloke did use for that time), and not the dataset I did use for these bpw quants (which was a RP dataset). Doing the quant with a X dataset, and then testing the quant on that X dataset for perplexity, will give you a better value than, quanting with a Y dataset and testing the ppl on a X dataset (Same applies to the viceversa process). You can check the score anyways on the first benchmark in the pastebin at the end of this post.
Last post: https://www.reddit.com/r/LocalLLaMA/comments/16mif47/comparison_on_exllamav2_of_bitsbpw_2542545465475/
-----------------
The quants and tests were made on the airoboros-l2-70b-gpt4-1.4.1 model. Why this one, you ask? I still find it to be one of the best models for RP. There are some pretty good models out now, like Xwin, but since I had all these quants I kept going with this one.
Those were done exclusively on exllamav2, and the bpw sizes with their VRAM requirements (mostly just to load, with cache and 4k context in mind, on multi-GPU) are:
* 2.5 bits/bpw: ~24 GB VRAM
* 4.25 bits/bpw: ~39 GB VRAM
* 4.5 bits/bpw: ~41 GB VRAM
* 4.65 bits/bpw: ~42 GB VRAM
* 5 bits/bpw: ~45 GB VRAM
* 6 bits/bpw: ~54 GB VRAM
* 7 bits/bpw: ~68 GB VRAM
Tests were made on my personal PC which has 2x4090 and 1x3090. (Total 72GB VRAM)
Note that if you use a single GPU, it uses less VRAM (so a A6000 with 48GB VRAM can fit more than 2x24 GB GPUs, or a H100/A100 80GB can fit larger models than 3x24+1x8, or similar)
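As a rough cross-check of these numbers: the weight footprint alone follows directly from parameter count times bits per weight, and the figures above are higher because they also include cache and per-GPU overhead. A quick back-of-the-envelope sketch:

```python
def weight_vram_gib(params_billion: float, bpw: float) -> float:
    """Approximate size of just the quantized weights, in GiB."""
    return params_billion * 1e9 * bpw / 8 / 1024**3

for bpw in (2.5, 4.25, 4.5, 4.65, 5.0, 6.0, 7.0):
    print(f"{bpw:>4} bpw -> ~{weight_vram_gib(70, bpw):.1f} GiB of weights")
```

For example, 70B at 4.65 bpw works out to roughly 38 GiB of weights, consistent with the ~42 GB total listed above once cache and overhead are added.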
-----------------
And then, running the built-in benchmark of the ooba textgen-webui, I got these results (ordered by better ppl to worse):
Model | LoRAs | Dataset | Perplexity | stride | max_length | Date
:--|:--|:--|:--|:--|:--|:--
airoboros-l2-70b-gpt4-1.4.1-7.0bpw | - | wikitext | 3.0162 | 512 | 4096 | 2023-10-15 17:17:24 |
airoboros-l2-70b-gpt4-1.4.1-6.0bpw | - | wikitext | 3.0235 | 512 | 4096 | 2023-10-15 17:53:51 |
airoboros-l2-70b-gpt4-1.4.1-5.0bpw | - | wikitext | 3.0382 | 512 | 4096 | 2023-10-15 18:28:26 |
airoboros-l2-70b-gpt4-1.4.1-4.65bpw | - | wikitext | 3.0601 | 512 | 4096 | 2023-10-15 19:02:59 |
airoboros-l2-70b-gpt4-1.4.1-4.5bpw | - | wikitext | 3.0854 | 512 | 4096 | 2023-10-15 20:31:01 |
airoboros-l2-70b-gpt4-1.4.1-4.25bpw | - | wikitext | 3.1034 | 512 | 4096 | 2023-09-19 01:24:17 |
airoboros-l2-70b-gpt4-1.4.1-2.5bpw | - | wikitext | 12.8535 | 512 | 4096 | 2023-10-15 16:39:47 |
-----------------
Then I ran the tests with the test_inference.py benchmark from exllamav2, using "py test_inference.py -m model -ed 0000.parquet -el 4096 -l 4096 -gs 19,21,21". The dataset is this one: https://huggingface.co/datasets/EleutherAI/the_pile_deduplicated/blob/refs%2Fconvert%2Fparquet/default/train/0000.parquet.
Results:
Model | Perplexity
:--|:--|
airoboros-l2-70b-gpt4-1.4.1-7.0bpw | 3.7351|
airoboros-l2-70b-gpt4-1.4.1-6.0bpw | 3.7373|
airoboros-l2-70b-gpt4-1.4.1-5.0bpw | 3.7467|
airoboros-l2-70b-gpt4-1.4.1-4.65bpw | 3.7600|
airoboros-l2-70b-gpt4-1.4.1-4.5bpw | 3.7671 |
airoboros-l2-70b-gpt4-1.4.1-4.25bpw | 3.7766 |
airoboros-l2-70b-gpt4-1.4.1-2.5bpw | 11.9312 |
-----------------
I have copy-pasted the .csv from the ooba benchmark here: https://pastebin.com/ctDjMU4s
For the exllamav2 test_inference.py benchmarks, I just have the pictures of the console, which you can check [here](https://imgur.com/a/6rEfSfs).
And the models are on my profile if you want to check them, and also I've uploaded the .safetensors version of this same model among others, if you want to try do some quants by yourself https://huggingface.co/Panchovix. | 2023-10-16T01:01:15 | https://www.reddit.com/r/LocalLLaMA/comments/178tzps/updated_comparison_on_exllamav2_of_bitsbpw/ | panchovix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178tzps | false | null | t3_178tzps | /r/LocalLLaMA/comments/178tzps/updated_comparison_on_exllamav2_of_bitsbpw/ | false | false | self | 38 | {'enabled': False, 'images': [{'id': 'G6iQk5ypIUEwEQqGFlng2KCL3SuYnosmOyiiZflFGVA', 'resolutions': [{'height': 36, 'url': 'https://external-preview.redd.it/B3glV0pQlc4O-NgDlCajYEoFWm3JDvLAt4Aa3bobAno.jpg?width=108&crop=smart&auto=webp&s=7194335697ad06ca7339345954a94c780675fe2f', 'width': 108}, {'height': 73, 'url': 'https://external-preview.redd.it/B3glV0pQlc4O-NgDlCajYEoFWm3JDvLAt4Aa3bobAno.jpg?width=216&crop=smart&auto=webp&s=f169e6d28b69b89d702f2ea90581ffc7509dad4b', 'width': 216}, {'height': 109, 'url': 'https://external-preview.redd.it/B3glV0pQlc4O-NgDlCajYEoFWm3JDvLAt4Aa3bobAno.jpg?width=320&crop=smart&auto=webp&s=f95173648558f0521dd6ab3999eddabbc7dd7101', 'width': 320}, {'height': 218, 'url': 'https://external-preview.redd.it/B3glV0pQlc4O-NgDlCajYEoFWm3JDvLAt4Aa3bobAno.jpg?width=640&crop=smart&auto=webp&s=eeea5e264425ba6da16ca281f25b377d880a1a67', 'width': 640}, {'height': 328, 'url': 'https://external-preview.redd.it/B3glV0pQlc4O-NgDlCajYEoFWm3JDvLAt4Aa3bobAno.jpg?width=960&crop=smart&auto=webp&s=2d78e63ce2948fc62b13068206c47e99ad5b4bdc', 'width': 960}, {'height': 369, 'url': 'https://external-preview.redd.it/B3glV0pQlc4O-NgDlCajYEoFWm3JDvLAt4Aa3bobAno.jpg?width=1080&crop=smart&auto=webp&s=66db1b18970b44e2ee146990e666308463b5b6a2', 'width': 1080}], 'source': {'height': 381, 'url': 'https://external-preview.redd.it/B3glV0pQlc4O-NgDlCajYEoFWm3JDvLAt4Aa3bobAno.jpg?auto=webp&s=ab9461e75018618bc9d9f2f706225c9efcd4490a', 
'width': 1115}, 'variants': {}}]} |
PSA: Make old school backups before updating your inference programs | 9 | **EDIT:** Some great alternative strategies to this post in the comments below. Ultimately, whatever path you take- the point is that sometimes these updates can break your thingy, so please be ready for that BEFORE clicking the update button. This way you'll have options available to you should you need to run a model soon-ish.
-------------------------------------------------------------------------------------------------------------------------
One of the most common threads that I see over on Oobabooga's sub is "I updated and now x doesn't work", and as I noticed the ongoing thread today about how Falcon was broken in Llamacpp, I wanted to just toss this out there.
Every time I update Kobold, Oobabooga, or any other app I'm using with these models, the first thing I do is make a folder called "Program_Year_Month_Day" in a folder called "_AI_Tool_Backups" and copy what I currently have into there, e.g. "_AI_Tool_Backups\Oobabooga_2023_10_15".
Even if you don't do that- you should still at least make a copy of your current folder and add .backup to the end. That's all.
This one little habit will let you keep using the applications if a breaking change comes down the pipe. A lot of us have downloaded lots of models, and that one little 30 seconds of extra effort will save you hours of headache trying to untangle some mess if a breaking change messes up something you wanted to use today.
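The habit can even be scripted. Here is a hypothetical sketch of the same idea (the folder names just follow the convention above; note that `shutil.copytree` fails if the dated destination already exists, so a same-day rerun would need a different name):

```python
import shutil
from datetime import date
from pathlib import Path

def backup_tool(app_dir: str, backup_root: str = "_AI_Tool_Backups") -> Path:
    """Copy an install folder into a dated backup folder before updating."""
    src = Path(app_dir)
    stamp = date.today().strftime("%Y_%m_%d")
    dest = Path(backup_root) / f"{src.name}_{stamp}"
    shutil.copytree(src, dest)  # raises if dest already exists
    return dest
```

Run it right before clicking update; if the update breaks something, the dated copy is still runnable.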
Eventually you'd need to deal with whatever the breaking change was, but you could do it on your time while still having your current models available to you. | 2023-10-15T23:05:31 | https://www.reddit.com/r/LocalLLaMA/comments/178rp3u/psa_make_old_school_backups_before_updating_your/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178rp3u | false | null | t3_178rp3u | /r/LocalLLaMA/comments/178rp3u/psa_make_old_school_backups_before_updating_your/ | false | false | self | 9 | null |
Bing chat consensus | 1 | I think it would be good to know the aggregate usage of free LARGE LMs. I will start the comments by giving my vote choice and why I chose it; it would be insightful for us all to give our reasons for using the free tools that are currently superior. That our joint wants may offer insights into future models is why this poll was created.
[View Poll](https://www.reddit.com/poll/178rhmn) | 2023-10-15T22:55:37 | https://www.reddit.com/r/LocalLLaMA/comments/178rhmn/bing_chat_consensus/ | randomrealname | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178rhmn | false | null | t3_178rhmn | /r/LocalLLaMA/comments/178rhmn/bing_chat_consensus/ | false | false | self | 1 | null |
Samantha fine-tune | 3 | Greetings, seeking help guys. Since the last version of Samantha Mistral 7B has been released, and the Samantha dataset has also been translated into Italian, I would like to fine-tune Samantha with the translated dataset (I have the json and jsonl files). I am able to run the model ehartford/samantha-1.2-mistral-7b on my PC with good inference speed, but I know that to fine-tune it I have to use a quantized approach. I am definitely a noob and I do not know how to approach this task. Is there a way to load the model on the free Google Colab T4 instance in 4-bit precision and train it using the translated dataset? The person who translated the dataset is also struggling, so they were not able to help me; perhaps someone is willing and able to help with this. | 2023-10-15T22:54:12 | https://www.reddit.com/r/LocalLLaMA/comments/178rgih/samantha_finetune/ | simon_cry | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178rgih | false | null | t3_178rgih | /r/LocalLLaMA/comments/178rgih/samantha_finetune/ | false | false | self | 3 | null |
Deploying Quantized Llama 2 model to GCP cloud run | 2 | I’m looking at deploying a Llama 2 model to GCP cloud run in a container. It will run on CPUs. Has anyone done it before? Any tips or thoughts? | 2023-10-15T22:13:31 | https://www.reddit.com/r/LocalLLaMA/comments/178ql8v/deploying_quantized_llama_2_model_to_gcp_cloud_run/ | Success81 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178ql8v | false | null | t3_178ql8v | /r/LocalLLaMA/comments/178ql8v/deploying_quantized_llama_2_model_to_gcp_cloud_run/ | false | false | self | 2 | null |
llama.cpp help: Different reverse prompt behavior between running main script directly in terminal versus running as a subprocess from Python script | 1 | Hello, I am making a Python wrapper around llama.cpp via Python's subprocess library. The documented behavior of llama.cpp when a reverse prompt is passed in but interactive mode is turned off is for the program to exit, which it does correctly, when run from the terminal with
./main -m models/70B/llama-2-70b-chat.Q5_K_M.gguf -f prompts/chat-with-bob.txt --color -r "User:"
(chat-with-bob.txt ends with "User:", and thus llama.cpp exits immediately without generating anything.) But when the same command is run via Python's subprocess, llama.cpp no longer exits, but rather keeps generating as if I did not pass a reverse prompt string in.
command = f'{BASE_PATH}/main -m {BASE_PATH}/models/70B/llama-2-70b-chat.Q5_K_M.gguf -f {BASE_PATH}/prompts/chat-with-bob.txt --color -r "User:"'
command = command.split(' ')
subprocess.run(command)
When the above python code is run, llama2 continues to output even after the reverse string is encountered, which is strange because it is the exact same command as the first one.
Does anyone have any ideas as to why this might be happening? Thank you!
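For what it's worth, one likely cause given the snippet above: `str.split(' ')` keeps the quote characters as part of the argument, so llama.cpp receives the reverse prompt `"User:"` with literal quotes, which never matches `User:` in the prompt file. In the terminal, the shell strips those quotes before the program sees them; `shlex.split` reproduces that behavior (the command string below is a shortened stand-in for the real one):

```python
import shlex

cmd = 'main -m model.gguf -f prompt.txt --color -r "User:"'
naive = cmd.split(' ')
shell_like = shlex.split(cmd)
print(naive[-1])       # "User:"  <- the quotes are part of the argument
print(shell_like[-1])  # User:
# subprocess.run(shell_like) should then match the documented terminal behavior
```

Alternatively, building the argument list by hand (`['-r', 'User:']`, no quote characters) avoids splitting entirely.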
​ | 2023-10-15T21:44:27 | https://www.reddit.com/r/LocalLLaMA/comments/178pyd4/llamacpp_help_different_reverse_prompt_behavior/ | eisenhorn40k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178pyd4 | false | null | t3_178pyd4 | /r/LocalLLaMA/comments/178pyd4/llamacpp_help_different_reverse_prompt_behavior/ | false | false | self | 1 | null |
Jon Durbin released Airoboros 3.1 | 63 | [https://huggingface.co/jondurbin/airoboros-l2-13b-3.1](https://huggingface.co/jondurbin/airoboros-l2-13b-3.1) and [https://huggingface.co/jondurbin/airoboros-m-7b-3.1](https://huggingface.co/jondurbin/airoboros-m-7b-3.1)
I've tested the Mistral one and so far, I'm very impressed. It's coherent and smart in roleplay as well as instruct tasks.
There are some innovative new parts of the dataset, like asking characters questions about themselves, MathJSON which allows LLMs to do math-related tasks, data editing, and a lot more!
​
​ | 2023-10-15T21:19:01 | https://www.reddit.com/r/LocalLLaMA/comments/178pdd0/jon_durbin_released_airoboros_31/ | dampflokfreund | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178pdd0 | false | null | t3_178pdd0 | /r/LocalLLaMA/comments/178pdd0/jon_durbin_released_airoboros_31/ | false | false | self | 63 | {'enabled': False, 'images': [{'id': 'niyL2HjnQUkuL3GRivM5cjv1FI7mdKE8s3TSy4-rMOo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/i2ClrcB6lhy2d3R1aFQuuK0X7aXhtqzpTPdojr4NREQ.jpg?width=108&crop=smart&auto=webp&s=f246957b8671d211adbdad92d566b88a86484635', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/i2ClrcB6lhy2d3R1aFQuuK0X7aXhtqzpTPdojr4NREQ.jpg?width=216&crop=smart&auto=webp&s=3f842e040b88104e1efb84f97a1f711ba1765f38', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/i2ClrcB6lhy2d3R1aFQuuK0X7aXhtqzpTPdojr4NREQ.jpg?width=320&crop=smart&auto=webp&s=754a1030b68c40d856758e3894ed8799062a953a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/i2ClrcB6lhy2d3R1aFQuuK0X7aXhtqzpTPdojr4NREQ.jpg?width=640&crop=smart&auto=webp&s=0d41cb8798d72d39828fd4f7180a3f9d1a857a1b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/i2ClrcB6lhy2d3R1aFQuuK0X7aXhtqzpTPdojr4NREQ.jpg?width=960&crop=smart&auto=webp&s=f4cb5c440cdb72e35142c1c4f859f51ba13d3571', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/i2ClrcB6lhy2d3R1aFQuuK0X7aXhtqzpTPdojr4NREQ.jpg?width=1080&crop=smart&auto=webp&s=f3c17e2c407de7eb3f17cb9a0b223d5bd8c9b4ca', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/i2ClrcB6lhy2d3R1aFQuuK0X7aXhtqzpTPdojr4NREQ.jpg?auto=webp&s=4872792869c78ad51b5d583cccbe39554292f909', 'width': 1200}, 'variants': {}}]} |
---

**Newbie question: what can I do with ~200GB of RAM but no GPU?**

Want to experiment with LLaMA and other models, particularly in a context somewhat like AutoGPT, where I can set the LLM off on research tasks. But the only hardware I have readily available is heavy on RAM but virtually non-existent on GPU power -- it's a rackmount server.
Any options that would work in this environment? Thanks!
(P.S.: Bonus points if someone can recommend a good GUI so my intern (assuming I can run anything at all) doesn't have to learn CLI commands.)

*Posted by GWBrooks on 2023-10-15, score 28*
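If it helps to ballpark things: with CPU-only inference (e.g. via llama.cpp-style backends), the main constraint is whether the quantized weights plus some runtime overhead fit in RAM. A rough sketch — note the overhead factor is a loose assumption for illustration, not a measured constant:

```python
def ram_needed_gb(n_params_billions: float, bits_per_weight: float,
                  overhead: float = 1.2) -> float:
    """Back-of-envelope RAM estimate for CPU inference of a quantized model.

    `overhead` loosely covers KV cache and runtime buffers; it is an
    assumption for illustration, not a measured value.
    """
    return n_params_billions * (bits_per_weight / 8) * overhead

# A 70B model at ~4.5 bits/weight (a Q4_K_M-style quant) comes out around:
print(f"{ram_needed_gb(70, 4.5):.0f} GB")  # roughly 47 GB, comfortable in 200 GB
```

The flip side is speed: CPU token generation is memory-bandwidth-bound, so even a server with plenty of RAM will be much slower than a GPU at the same model size.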
---

**What is the best chat/RP model that can run on 8 GB of VRAM (any model type)?**

Hello, I've been messing with some models on my gaming laptop. I was wondering what the best one for chat/RP is that I could run.

*Posted by mjh657 on 2023-10-15, score 5*
---

**NanoPhi Update: Fixed Dataset, New Tasks in Multitask Data, Working Chat Sampling, and Emergent Properties!**

Hi everyone, I finally got around to updating NanoPhi.
- As u/Dry_Long3157 pointed out, the dataset JSONL was broken; that's fixed now. The dataset is around 1.4B tokens, 3.5 million rows of text.
- u/Docsoc1 mentioned https://arxiv.org/abs/2305.10429 - I'm looking into that to see if it helps.
- As people have asked, I'll be releasing training details on GitHub.
- Compared to the previous versions, I've added code, math, and logic tasks, though they aren't nearly as good as the earlier tasks, and I have several thoughts on why.
- I couldn't get Lit-GPT to work, so unfortunately no quants - and this model would likely be terrible quantized anyway.
1. Bad base model. I've heard that GPT-2's tokenizer is terrible for numbers and has little support for code, so it may have been a bad idea to start from this model, but I can't pretrain with a better tokenizer like GPT-4's, so I'm stuck with this one.
2. I may have saturated the number of tasks the model can handle. No one has tried teaching models of this size (0.3B) around 10 different tasks, and this may be the limit. However, if that were the case, all the tasks would be worse off, yet the previous tasks still perform at the same level.
3. Size difficulties. As the GPT-3 paper said, "LLMs are generalist engines." However, I'm nowhere near that size. Math, code, and logic might just be beyond the capabilities of these models.
4. Bad data. I took data off Hugging Face - datasets like CodeSearchNet and multiple math datasets in different formats. I just fed in raw code with random docstrings, not as well formatted as Phi-1.5's data; this could have been better.
5. Math, code, and logic are no longer low-hanging fruit. They are very different from the language processing LLMs are made for, so the model performs worse on them than on textbooks or chats.
- On the better news, I fixed the sample mode - check out a Colab notebook on that here -> https://colab.research.google.com/drive/1gvTsyjxHiDkKHFsnWWouzr1xJWW23BA3?usp=sharing
It's not an actual chat though, keep that in mind - it's just a QA-pair setup. There's no context held: you ask a question, get an answer, and it restarts.
- On to the coolest thing I found: the model creates its own tag, a custom task, which it calls [asy]. I don't see it in the training data, but it seems to mean a mixture of code and math, and it often shows up at the ends of code and math answers. When you prompt [Code] for math, or use [asy] instead of [Math], the model seems to perform better?
On a side note, this model was finetuned for only about 5% of an epoch.
I would love to pretrain on this data, or even finetune a full epoch / multiple epochs. I need GPU compute.

*Posted by vatsadev on 2023-10-15, score 16*
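The stateless QA-pair sampling described above (no context carried between turns) can be sketched as a simple prompt formatter. The [Math]/[Code]/[asy] tag names come from the post; the surrounding template is a guess for illustration, not NanoPhi's actual format:

```python
def format_prompt(task_tag: str, question: str) -> str:
    """Build a single stateless QA prompt; no history is carried over.

    Tag names like [Math] or [asy] follow the post; the template around
    them is an assumption for illustration.
    """
    return f"[{task_tag}] Question: {question}\nAnswer:"

print(format_prompt("Math", "What is 2 + 2?"))
```

Each call produces a fresh prompt, which matches the "ask a question, get an answer, and it restarts" behavior.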
---

**🐺🐦⬛ Wolfram's Mistral LLM Comparison/Test: Instruct, OpenOrca, Dolphin, Zephyr and more...**
With the Mistral hype still going strong, I wanted to evaluate these promising 7B models some more. And there's also the lingering question how much quantization affects quality. Plus, there have been multiple German models released, and since one of my tests is in German, I'm curious how they handle that compared to the mainly English language models.
So let me try to answer the following questions with this post:
- Which Mistral variant is best?
- How does quantization affect it?
- Which *German* Mistral variant is best?
**Testing methodology:**
- Same (complicated and limit-testing) long-form conversations with all models
- German data protection training:
- The test data and questions as well as all instructions were in German while the character card is in English. This tests translation capabilities and cross-language understanding.
- Before giving the information, I instructed the model: *I'll give you some information. Take note of this, but only answer with "OK" as confirmation of your acknowledgment, nothing else.* This tests instruction understanding and following capabilities.
- After giving all the information about a topic, I give the model the exam question. It's always a multiple choice (A/B/C) question, where the last one is the same as the first but with changed order and letters (X/Y/Z).
- MGHC:
- A complex character and scenario card ([MonGirl Help Clinic (NSFW)](https://www.chub.ai/characters/frozenvan/mongirl-help-clinic)), chosen specifically for these reasons:
- NSFW (to test censorship of the models)
- popular (on Chub's first page, so it's not an obscure scenario, but one of the most popular ones)
- big (biggest model on the page, >2K tokens by itself, for testing model behavior at full context)
- complex (more than a simple 1:1 chat, it includes instructions, formatting, storytelling, and multiple characters)
- Amy:
- My own repeatable test chats/roleplays with [Amy](https://www.reddit.com/r/LocalLLaMA/comments/15388d6/llama_2_pffft_boundaries_ethics_dont_be_silly/)
- Over dozens of messages, going to full 8K context and beyond, with complex instructions and scenes, designed to test ethical and intellectual limits
- [SillyTavern](https://github.com/SillyTavern/SillyTavern) v1.10.5 frontend
- [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui) v1.7 backend
- Yes, I'm not using my usual KoboldCpp for this test, since I use the original unquantized models!
- **Deterministic** generation settings preset (to eliminate as many random factors as possible and allow for meaningful model comparisons)
- Official prompt format *and* [**Roleplay** instruct mode preset](https://imgur.com/a/KkoI4uf)
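The exam part of this methodology is mechanical enough to script. A minimal scoring sketch — the exact A→X, B→Y, C→Z letter mapping is an assumption here, since the post only says the final question repeats the first with changed order and letters:

```python
RELABEL = {"A": "X", "B": "Y", "C": "Z"}  # assumed remapping for illustration

def score_exam(model_answers: list[str], answer_key: list[str]) -> dict:
    """Score a 4-question multiple-choice exam where Q4 repeats Q1
    with remapped letters, so consistency can also be checked."""
    correct = sum(a == k for a, k in zip(model_answers, answer_key))
    consistent = RELABEL.get(model_answers[0]) == model_answers[3]
    return {"correct": correct, "q1_q4_consistent": consistent}

print(score_exam(["A", "C", "B", "X"], ["A", "C", "B", "X"]))
```

With deterministic sampling, the same prompts should always yield the same score, which is what makes cross-model comparison meaningful.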
**Which Mistral variant is best?**
- **[Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)**
- 👍 German data protection training
- official Mistral format:
- Consistently acknowledged all data input with "OK".
- Gave correct answers to ALL (4/4) multiple choice questions!
- Responded properly to thanks, but switched to English.
- ❌ MGHC
- official Mistral format:
- First patient straight from examples.
- Had to ask for analysis. Repeated first message before giving analysis.
- Immediately derails with repetition. UNUSABLE!
- Roleplay instruct mode preset:
- Deviated from the formula and rules, writing a completed short story instead of an interactive scenario. UNUSABLE!
- ❌ Amy
- official Mistral format:
- Mentioned boundaries, but later didn't hesitate to go beyond those anyway.
- Didn't adhere to the character background completely.
- Later got confused about who's who and anatomical details.
- After ~30 messages, fell into a repetition loop.
- Roleplay instruct mode preset:
- Showed personality and wrote extremely well, much better than I'd expect from a 7B or even 13B.
- But suffered from severe repetition (even within the same message) after ~15 messages.
- Frustrating to see such excellent writing ruined by the extreme repetition.
- **Conclusion:**
- Best instruction following and understanding/reasoning, solved the data protection exam perfectly.
- But no good for roleplay because of severe repetition issues.
- **[Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)**
- ❌ German data protection training
- official ChatML format:
- Failed to consistently acknowledge all data input with "OK".
- Gave correct answer to only 1/4 multiple choice questions.
- Responded properly to thanks, but German was really bad ("Du willkommen! Es freut mich, dich zu helfen!").
- ❌ MGHC
- official ChatML format:
- First patient unique. Gave analysis on its own for first patient. Repeated "[Payment]" with each message. Wrapped it up with "[End Scenario]" at the right time.
- Second patient unique, too. Had to ask for analysis, which included empty "[End Scenario]". Repeated "[Payment]" and "[End Scenario]" with each message.
- Repetition is a glaring issue, but at least this model handled MGHC better than many other 7Bs (ultimately still unusable, though).
- 👍 Amy
- official ChatML format:
- Writing sometimes of high quality, sometimes very low ("rubbing his shoulders gently while keeping her distance due to social distancing rules")
- Mentioned boundaries, but later didn't hesitate to go beyond those anyway.
- Later got confused about who's who and anatomical details.
- Roleplay instruct mode preset:
- Excellent writing, nice emoting, less repetition. Worked very well!
- **Conclusion:**
- Surprisingly bad results regarding instruction following, understanding, and reasoning in the exam scenario.
- But great writing and roleplaying (especially with Roleplay preset).
- Showed an actual sense of humor and made a memorable pun.
- **[dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b)**
- ❌ German data protection training
- official ChatML format:
- Failed to consistently acknowledge all data input with "OK".
- Gave correct answer to 2/4 multiple choice questions (and didn't obey when asked to answer with just a single letter).
- Responded properly to thanks, but switched to English.
- ❌ MGHC
- official ChatML format:
- First patient unique. Gave analysis on its own. Repeated analysis with each message.
- Second patient unique, too. Gave analysis on its own. Wrapped up the whole session in a single message.
- Third patient unique as well, but situation logically incoherent. Gave analysis on its own. Wrapped up the whole session in a single message.
- 👍 Amy
- official ChatML format:
- No boundaries ("That's why they call me the Uncensored One.").
- Excellent and long writing, nice emoting, less repetition. More storytelling than interactive fiction, with some very long messages (>1K tokens). But didn't fully grasp what was going on, i. e. while the writing was top notch, the scene itself wasn't exactly as envisioned.
- Later got confused about who's who and anatomical details.
- Roleplay instruct mode preset:
- Worked very well! First model ever to explicitly list the dislikes as stated on the character card as its only boundaries.
- Excellent and long writing, nice emoting, less repetition.
- Some confusion about who's who and anatomical details.
- **Conclusion:**
- Having tested the previous version in GGUF format, which was a letdown, this newer and unquantized version is so much better!
- Seemed more intelligent than the other models I tested this time.
- However, showing off high intelligence isn't necessarily always a good thing (especially for roleplay) as sometimes it does get a bit too technical or realistic (like I always say, the smartest person isn't always the most fun to hang out with).
- **[zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)**
- German data protection training
- ❌ official Zephyr format:
- Failed to consistently acknowledge all data input with "OK".
- Gave correct answers to 2/4 multiple choice questions.
- After being told to answer with a single letter, even responded like that to thanks.
- 👍 ChatML format:
- Consistently acknowledged all data input with "OK".
- Gave correct answers to ALL (4/4) multiple choice questions!
- Also said "OK" to summary but responded properly to thanks.
- 👍 MGHC
- Zephyr format:
- First patient unique. Gave analysis on its own. Repeated analysis with each message.
- Second patient male.
- Third patient unique, too. Gave analysis on its own. Repeated analysis with each message.
- Showed some signs of repetition, but handled this complex scenario better than the other models I tested this time. Still very far from what bigger models produce, but currently the best a 7B has ever achieved in this test.
- ❌ Amy
- official Zephyr format:
- Short, formal responses, uncommon emote format (in brackets).
- Said "no boundaries" but later hesitated and asked for confirmation multiple times.
- No fun, too technical, too aligned.
- ChatML format:
- After ~15 messages, derailed with repetition of long run-on sentences mixed with emotes. Interrupted the message after 2K tokens and aborted the test.
- Roleplay instruct mode preset:
- Much better responses and no hesitation or derailing repetition (but still not as good as the Dolphin and OpenOrca variants).
- Some confusion about who's who and anatomical details.
- **Conclusion:**
- Unexpected discovery: ChatML format worked much better than the official Zephyr format for this model!
- With ChatML format used, it beat most of the other models tested this time in the exam scenario.
- However, its writing was worse than that of the other models tested this time, no matter which format was used.
So which Mistral variant is the best? As you can see, each one has strengths and weaknesses, and none could convince me completely.
If you're looking for an instruct model for professional use, especially when asking it to give a single response to a question/task, the original Mistral 7B Instruct or Zephyr 7B Alpha (with ChatML prompt format) seem to be your best bets.
If you're looking for a model that roleplays well, the OpenOrca and Dolphin variants are more suitable and punch above their 7B weight with their excellent writing.
**How does quantization affect it?**
To find out how quantization affects these models, I'll stick to the data protection exam since it can be judged objectively. The other tests involve writing and it's subjective how well written a text appears to you. So I'll test each quant and see how many correct answers the model (which answered all correctly in unquantized form) still gets.
- **[Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF)**
- ❌ Q2_K:
- Gave correct answers to 2/4 multiple choice questions.
- When asked to answer with more than just a single letter, produced nonsensical output ("C123456789012345678901234567890...").
- ❌ Q3_K_S:
- Gave correct answers to 2/4 multiple choice questions.
- When asked to answer with more than just a single letter, didn't comply.
- ❌ Q3_K_M:
- Gave correct answers to ALL (4/4) multiple choice questions.
- When asked to answer with more than just a single letter, didn't comply.
- ❌ Q3_K_L:
- Gave correct answers to 3/4 multiple choice questions.
- When asked to answer with more than just a single letter, repeated the previous information message instead of answering the question!
- 👍 Q4_0, Q4_K_S, Q4_K_M, Q5_0, Q5_K_S, Q5_K_M, Q6_K, Q8_0:
- Gave correct answers to ALL (4/4) multiple choice questions.
- When asked to answer with more than just a single letter, explained its reasoning properly.
The answer is very clear: Q4_0 and above gave perfect results, just like the unquantized version. Of course that doesn't mean Q4_0 is as good as Q8_0 or the unquantized original, but we see here that all lower quants (Q2 + Q3) had issues, so I'd not recommend those (at least not for Mistral-based 7B models).
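For reference, the per-quant scores above can be tabulated directly (transcribed from this test; note Q3_K_M scored 4/4 but still failed the follow-up instruction, which is why it's excluded from the recommendation):

```python
# Multiple-choice scores per quant, transcribed from the results above.
quant_scores = {
    "Q2_K": 2, "Q3_K_S": 2, "Q3_K_M": 4, "Q3_K_L": 3,
    "Q4_0": 4, "Q4_K_S": 4, "Q4_K_M": 4, "Q5_0": 4,
    "Q5_K_S": 4, "Q5_K_M": 4, "Q6_K": 4, "Q8_0": 4,
}

# Quants matching the unquantized model's 4/4 (Q3_K_M appears here too,
# but is ruled out separately for ignoring the follow-up instruction).
perfect = [q for q, s in quant_scores.items() if s == 4]
print(perfect)
```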
**Which German Mistral variant is best?**
There have been a bunch of German model releases recently, many based on Mistral, so I'll take a look at those as well - from 3B to 70B! Let's find out if they beat the ones I tested above since the data protection training used in these tests is in German so they should theoretically have an advantage:
- ❌ **[em_german_leo_mistral](https://huggingface.co/jphme/em_german_leo_mistral)**
- Official USER/ASSISTANT prompt format:
- Consistently acknowledged all data input with "OK".
- Gave correct answers to 1/4 multiple choice questions and didn't answer the last one (a repeat of the first) at all.
- Also kept saying "OK" to summary and thanks instead of properly responding to those.
- ChatML prompt format:
- Consistently acknowledged all data input with "OK".
- Gave correct answers to 3/4 multiple choice questions but didn't answer the last one (a repeat of the first) properly.
- Also said "OK" to summary but responded properly to thanks.
- ❌ **[em_german_mistral_v01](https://huggingface.co/jphme/em_german_mistral_v01)**
- Official USER/ASSISTANT prompt format:
- Consistently acknowledged all data input with "OK".
- Gave correct answers to 3/4 multiple choice questions (but didn't obey when asked to answer with more than just a letter).
- Also said "OK" to summary but responded properly to thanks (but misspelled my name).
- ChatML prompt format:
- Consistently acknowledged all data input with "OK".
- Gave correct answers to 2/4 multiple choice questions, got 1st and 4th question (actually the same one) wrong and explained its (wrong) reasoning.
- Also said "OK" to summary but responded properly to thanks.
- ❌ **[em_german_70b_v01-GGUF](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF)**
- ChatML prompt format:
- Consistently acknowledged all data input with "OK".
- Gave correct answers to 2/4 multiple choice questions, got 1st and 4th question (actually the same one) wrong.
- Also said "OK" to summary but responded properly to thanks.
- Official USER/ASSISTANT prompt format:
- Consistently acknowledged all data input with "OK".
- Gave correct answers to 3/4 multiple choice questions (answered first question wrongly, but when asked again as final question, answered correctly).
- Also said "OK" to summary but responded properly to thanks.
- ❌ **[leo-mistral-hessianai-7b-chat](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b-chat)**
- ChatML prompt format:
- Failed to consistently acknowledge all data input with "OK".
- Failed to answer. Seemed to not understand or follow instructions.
- ❌ **[Mistral-7B-german-assistant-v2](https://huggingface.co/flozi00/Mistral-7B-german-assistant-v2)**
- Official Alpaca prompt format:
- Consistently acknowledged all data input with "OK".
- Gave correct answers to 3/4 multiple choice questions but didn't answer the last one (a repeat of the first) properly.
- When asked to answer with more than just a single letter, didn't comply.
- ❌ **[SauerkrautLM-3b-v1](https://huggingface.co/VAGOsolutions/SauerkrautLM-3b-v1)**
- Tried various prompt formats (official User:/Assistant: one, ChatML, Vicuna, WizardLM) but never got good responses for long.
- 3B seems unusable. It's stupid and its German is not good at all.
- ❌ **[SauerkrautLM-7b-v1](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1)**
- Official User/Assistant prompt format: Kept saying "OK" even to the question and when asked to answer.
- ChatML format: Didn't acknowledge data input with "OK". Gave wrong answer.
- ❌ **[SauerkrautLM-13b-v1](https://huggingface.co/VAGOsolutions/SauerkrautLM-13b-v1)**
- Official User/Assistant prompt format:
- Consistently acknowledged all data input with "OK".
- Gave correct answers to 3/4 multiple choice questions (but didn't obey when asked to answer with more than just a letter).
- Also kept saying "OK" to summary and thanks instead of properly responding to those.
- ChatML format:
- Failed to consistently acknowledge all data input with "OK".
- Gave correct answers to all multiple choice questions (but answered the last one correctly only after being asked to answer with just a single letter).
- Summarized summary and responded properly to thanks.
- ❌ **[SauerkrautLM-7b-v1-mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral)**
- Official User/Assistant prompt format: Kept saying "OK" even to the question and when asked to answer.
- ChatML format:
- Consistently acknowledged all data input with "OK".
- Gave correct answers to 3/4 multiple choice questions (answered first question wrongly, but when asked again as final question, answered correctly).
- Also said "OK" to summary but responded properly to thanks (but misspelled my name).
Ironically, none of the German models managed to successfully complete the German exam - not even the 70B, which was beaten by a 7B (Mistral Instruct).
Did the German finetuning reduce their capabilities? I've always been of the opinion that specialized models won't be as good as generalists because - like with our human brains - there are so many obscure connections between neurons that it's not as easy as leaving out unrelated information to get better at a specific topic (yes, Japanese poetry and Chinese cooking recipes could very well improve our Python coding models).
That's why I believe that a model trained on multiple languages will be better at each language than one specialized in just one language. So to make a model better at one language, it should be trained/finetuned with that in addition to everything else, not instead of it.
At least that's my theory. Which so far seems to be confirmed by these findings.
**TL;DR:**
- Despite the hype, Mistral models aren't perfect, they're still 7B. But for that size, they're really very good.
- Among Mistral models, there's not one clear winner yet that's *the* best. For professional use, Mistral 7B Instruct or Zephyr 7B Alpha (with ChatML prompt format) did best in my tests. For roleplay, Mistral-based OpenOrca and Dolphin variants worked the best and produced excellent writing.
- Prompt format makes a huge difference but the "official" template may not always be the best. It's high time we find and follow some best practice instead of reinventing the wheel all the time (which leads to a bumpy ride).
- Don't go below Q4_0 quantization when using Mistral-based 7B models. Anything lower will lobotomize small model brains too much.
- Kinda ironic that the English models worked better with the German data and exam than the ones finetuned in German. Looks like language doesn't matter as much as general intelligence and a more intelligent model can cope with different languages more easily. German-specific models need better tuning to compete in general and excel in German.
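Since prompt format made such a big difference in these tests, here's a minimal formatter for the two templates that came up most, ChatML and Mistral's [INST] format, as they are commonly documented. Exact BOS-token and whitespace handling varies by backend, so treat the details as assumptions:

```python
def chatml(user_msg: str, system: str = "") -> str:
    """ChatML-style prompt as commonly documented."""
    parts = []
    if system:
        parts.append(f"<|im_start|>system\n{system}<|im_end|>")
    parts.append(f"<|im_start|>user\n{user_msg}<|im_end|>")
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

def mistral_instruct(user_msg: str) -> str:
    """Mistral Instruct prompt; BOS handling differs across backends."""
    return f"<s>[INST] {user_msg} [/INST]"

print(mistral_instruct("Answer with a single letter: A, B, or C."))
```

Swapping between templates like this is all it takes to reproduce the "ChatML beats the official Zephyr format" comparison on your own models.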
--------------------------------------------------------------------------------
Here's a list of my previous model tests and comparisons:
- [LLM Pro/Serious Use Comparison/Test: From 7B to 70B vs. ChatGPT!](https://www.reddit.com/r/LocalLLaMA/comments/172ai2j/llm_proserious_use_comparisontest_from_7b_to_70b/) Winner: Synthia-70B-v1.2b
- [LLM Chat/RP Comparison/Test: Dolphin-Mistral, Mistral-OpenOrca, Synthia 7B](https://www.reddit.com/r/LocalLLaMA/comments/16z3goq/llm_chatrp_comparisontest_dolphinmistral/) Winner: Mistral-7B-OpenOrca
- [LLM Chat/RP Comparison/Test: Mistral 7B Base + Instruct](https://www.reddit.com/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/)
- [LLM Chat/RP Comparison/Test (Euryale, FashionGPT, MXLewd, Synthia, Xwin)](https://www.reddit.com/r/LocalLLaMA/comments/16r7ol2/llm_chatrp_comparisontest_euryale_fashiongpt/) Winner: Xwin-LM-70B-V0.1
- [New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)](https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/) Winners: Nous-Hermes-Llama2-70B, Synthia-70B-v1.2b
- [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/) Winner: Mythalion-13B
- [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K
- [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) Winner: Nous-Hermes-Llama2
- [SillyTavern's Roleplay preset vs. model-specific prompt format](https://www.reddit.com/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/)

*Posted by WolframRavenwolf on 2023-10-15, score 232*
---

**Esther: My first attempt at playing with local LLaMA as a Clojure diary app**

Link: https://github.com/vortext/esther

*Posted by Difficult-Support794 on 2023-10-15, score 1*
---

**Merge a LoRA then AWQ**

Hey everyone, I have a question regarding AWQ. I have several adapters trained, and I was wondering if I can merge them into the base model and then quantize that model using AWQ? Has anyone done that? Any tips?

*Posted by Torkoslav on 2023-10-15, score 3*
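In principle yes: merging folds each adapter's low-rank update back into the base weights (W' = W + (alpha/r) · B·A), after which the merged model is a plain dense checkpoint that can be quantized like any other. A toy pure-Python sketch of the merge step — matrix sizes and scaling are illustrative only, not tied to any specific library:

```python
def matmul(A, B):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def merge_lora(W, A, B, alpha: float, r: int):
    """Fold a LoRA update into the base weight: W' = W + (alpha / r) * B @ A."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, BA)]

W = [[1.0, 0.0], [0.0, 1.0]]   # 2x2 base weight
B = [[1.0], [0.0]]             # 2x1 LoRA "down" factor
A = [[0.0, 2.0]]               # 1x2 LoRA "up" factor, rank r = 1
print(merge_lora(W, A, B, alpha=1.0, r=1))  # [[1.0, 2.0], [0.0, 1.0]]
```

In practice this is typically what a merge utility such as peft's `merge_and_unload()` does for you, and the merged checkpoint can then be fed to an AWQ quantizer; check the library docs for the exact calls, since the above is only the underlying math.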
---

**Those that have built a PC for non-professional self-hosted AI/ML: is it worth it?**

Prior to all this recent cool AI/ML stuff, I had been going back and forth over building a PC for years. I'm very much a 'bang for buck' gamer, and I enjoyed having a Linux laptop for development and a gaming console for gaming.
Nowadays I have a solid career as a sr. dev, but I feel like I'm a little behind the times when it comes to the recent AI developments. I have a budget that can afford whatever. I'm a little tired of my compiles thermal throttling on the laptop even on my personal projects, so for the first time in 20 years I am strongly considering building a damned nice PC: 7800X3D, 4090, 64 GB of RAM. The works. I want to run local inference, Stable Diffusion - heck, there are all sorts of AI projects that I think are neat.
I'd also be using this computer for gaming and software development, though all my professional work is typically done on a company MBP so I can't pretend it'd be for professional use.
I started to ask this question on buildapc but frankly, I doubt they'd fully understand my use case like you guys would. Ultimately my question is: is it worth it? Do you think having a powerful computer aids in learning? Do you find running stuff locally to be rewarding and useful?

*Posted by alpacaMyToothbrush on 2023-10-15, score 74*
LLM frameworks that allow continuous batching on quantized models? | 2 | For now I know vLLM and lmdeploy
Do you know other ones to put quantized models in production and achieve high throughput? | 2023-10-15T16:26:44 | https://www.reddit.com/r/LocalLLaMA/comments/178j1vr/llm_frameworks_that_allow_continuous_batching_on/ | Exotic-Estimate8355 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178j1vr | false | null | t3_178j1vr | /r/LocalLLaMA/comments/178j1vr/llm_frameworks_that_allow_continuous_batching_on/ | false | false | self | 2 | null |
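For intuition on why continuous batching (the vLLM/lmdeploy scheduling style asked about above) raises throughput, here is a toy scheduler with no real model involved: sequences of different lengths share batch slots, and a finished sequence is replaced immediately instead of the whole batch waiting for its longest member.

```python
# Toy illustration of static vs. continuous batching (no real LLM involved).
# Each request needs `steps` decode iterations; the batch holds `slots` sequences.

def static_batching(requests, slots):
    """Fixed batches: a batch finishes only when its longest request does."""
    total = 0
    for i in range(0, len(requests), slots):
        total += max(requests[i:i + slots])
    return total

def continuous_batching(requests, slots):
    """Refill freed slots every step, vLLM/lmdeploy-style."""
    pending = list(requests)
    active, steps = [], 0
    while pending or active:
        while pending and len(active) < slots:
            active.append(pending.pop(0))   # admit new requests into free slots
        steps += 1
        active = [r - 1 for r in active if r - 1 > 0]  # drop finished sequences
    return steps

reqs = [10, 2, 2, 2, 2, 2, 2, 2]   # one long request, many short ones
print(static_batching(reqs, 4))      # → 12
print(continuous_batching(reqs, 4))  # → 10: short requests slot in behind the long one
```

The gap widens as length variance grows, which is exactly the regime of real chat traffic.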
Introducing MiniSearch - A minimalist search engine with integrated browser-based AI | 1 | 2023-10-15T16:15:13 | https://www.linkedin.com/pulse/introducing-minisearch-minimalist-search-engine-ai-victor-nogueira/ | Felladrin | linkedin.com | 1970-01-01T00:00:00 | 0 | {} | 178it0g | false | null | t3_178it0g | /r/LocalLLaMA/comments/178it0g/introducing_minisearch_a_minimalist_search_engine/ | false | false | 1 | {'enabled': False, 'images': [{'id': '8ywkLgcbx7oB0uf6C3n7NNHnf20FMR_4HaRaQEpkIls', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/CJpaGghalVjP0l3X44H6RFIPc-VIj-LrJlecjSTZWhY.jpg?width=108&crop=smart&auto=webp&s=aecd34e28a22fc3e8af13e0107b3db62064fcad2', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/CJpaGghalVjP0l3X44H6RFIPc-VIj-LrJlecjSTZWhY.jpg?width=216&crop=smart&auto=webp&s=6711a9129b166f4cc591d70c05ae1e235a1750b2', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/CJpaGghalVjP0l3X44H6RFIPc-VIj-LrJlecjSTZWhY.jpg?width=320&crop=smart&auto=webp&s=e14f14fc09a6952404eb50c65018bf9fcd802b6c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/CJpaGghalVjP0l3X44H6RFIPc-VIj-LrJlecjSTZWhY.jpg?width=640&crop=smart&auto=webp&s=90f42b033d03287f546210ad597db04eb1cc1f59', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/CJpaGghalVjP0l3X44H6RFIPc-VIj-LrJlecjSTZWhY.jpg?width=960&crop=smart&auto=webp&s=4f77372cceedad297dd5ddf8be682f49fc25eb4b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/CJpaGghalVjP0l3X44H6RFIPc-VIj-LrJlecjSTZWhY.jpg?width=1080&crop=smart&auto=webp&s=2003cb3b42b2789f89317efe4ab792dde83af96d', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/CJpaGghalVjP0l3X44H6RFIPc-VIj-LrJlecjSTZWhY.jpg?auto=webp&s=71c816bc032910f6f0fbc74369bcd820eabd9468', 'width': 1280}, 'variants': {}}]} | ||
What is the best 7b or 13b RP model you are currently using? | 9 | And why? | 2023-10-15T16:11:12 | https://www.reddit.com/r/LocalLLaMA/comments/178ipxe/what_is_the_best_7b_or_13b_rp_model_you_are/ | swwer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178ipxe | false | null | t3_178ipxe | /r/LocalLLaMA/comments/178ipxe/what_is_the_best_7b_or_13b_rp_model_you_are/ | false | false | self | 9 | null |
New Vision-Language Model: BakLLaVA-1, finetuned on Mistral 7B | 195 | 2023-10-15T15:33:52 | https://x.com/skunkworks_ai/status/1713372586225156392?s=20 | ninjasaid13 | x.com | 1970-01-01T00:00:00 | 0 | {} | 178hx93 | false | null | t3_178hx93 | /r/LocalLLaMA/comments/178hx93/new_visionlanguage_model_bakllava1_finetuned_on/ | false | false | 195 | {'enabled': False, 'images': [{'id': 'yvQxxqsrR7psusgjeWb_rrkd8g60PRHJnkXTOijJfKk', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/0NW1_rDFLEmyj6PCMapjYyPHEe7qGSZoVrruoiWvXfg.jpg?width=108&crop=smart&auto=webp&s=e8c727b2b1ad54903ae814f9a80e0603d8251044', 'width': 108}, {'height': 91, 'url': 'https://external-preview.redd.it/0NW1_rDFLEmyj6PCMapjYyPHEe7qGSZoVrruoiWvXfg.jpg?width=216&crop=smart&auto=webp&s=179737727da21f189584b0b1df97841b0c168696', 'width': 216}, {'height': 135, 'url': 'https://external-preview.redd.it/0NW1_rDFLEmyj6PCMapjYyPHEe7qGSZoVrruoiWvXfg.jpg?width=320&crop=smart&auto=webp&s=11faa04c96272315867c899243fd0c80e1dc550e', 'width': 320}, {'height': 270, 'url': 'https://external-preview.redd.it/0NW1_rDFLEmyj6PCMapjYyPHEe7qGSZoVrruoiWvXfg.jpg?width=640&crop=smart&auto=webp&s=a64c991964f8d36b42ad9d997ea56024b52a8156', 'width': 640}, {'height': 405, 'url': 'https://external-preview.redd.it/0NW1_rDFLEmyj6PCMapjYyPHEe7qGSZoVrruoiWvXfg.jpg?width=960&crop=smart&auto=webp&s=b7e1f97dbb25248be3d80dc4cad21c3e615bebb9', 'width': 960}, {'height': 455, 'url': 'https://external-preview.redd.it/0NW1_rDFLEmyj6PCMapjYyPHEe7qGSZoVrruoiWvXfg.jpg?width=1080&crop=smart&auto=webp&s=e38a0e99248758913502251295061669aaeb862a', 'width': 1080}], 'source': {'height': 752, 'url': 'https://external-preview.redd.it/0NW1_rDFLEmyj6PCMapjYyPHEe7qGSZoVrruoiWvXfg.jpg?auto=webp&s=16a6e07e9aac0b475474a72700fb801a095b75d6', 'width': 1782}, 'variants': {}}]} | ||
PSA: Most GGUF falcon models on HF are now broken, don't download them. | 34 | A week ago there was some change to BPE tokenization. I think it was https://github.com/ggerganov/llama.cpp/pull/3252
As a result, your existing falcon GGUF will now throw this error and refuse to load.

error loading model: invalid character
llama_load_model_from_file: failed to load model
2023-10-15 10:11:23 ERROR:Failed to load the model.
It broke falcon 180b for me and there is no other version of it to download. I also checked falcon 40b and it's the same story. If anyone converted some 7b within the last week it *might* have a chance of working. So save your time and aggravation.
So far, afaik, the only way to "fix" it is to reconvert all the way from the base model.
If it was posted before 10/7 don't even bother. | 2023-10-15T15:19:18 | https://www.reddit.com/r/LocalLLaMA/comments/178hma9/psa_most_gguf_falcon_models_on_hf_are_now_broken/ | a_beautiful_rhind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178hma9 | false | null | t3_178hma9 | /r/LocalLLaMA/comments/178hma9/psa_most_gguf_falcon_models_on_hf_are_now_broken/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': 'a3hmEjMBofreyOb56flC-qgpWccNCl896xNCLJvKbMk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PKGus4j_oB3dFq3hJv4rfRCqmQivDOq_MMKJ1Tt6hHE.jpg?width=108&crop=smart&auto=webp&s=a2cff3cc7747c7e1b5ae271ec6a8167602655089', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PKGus4j_oB3dFq3hJv4rfRCqmQivDOq_MMKJ1Tt6hHE.jpg?width=216&crop=smart&auto=webp&s=dca3f71d30be4236cb2e1160193750afccb20e01', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PKGus4j_oB3dFq3hJv4rfRCqmQivDOq_MMKJ1Tt6hHE.jpg?width=320&crop=smart&auto=webp&s=5ca2320fbc42e47c36786e60d1c2419d32342930', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PKGus4j_oB3dFq3hJv4rfRCqmQivDOq_MMKJ1Tt6hHE.jpg?width=640&crop=smart&auto=webp&s=d3337e867e2f88470369970f34c64d004585d5c8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PKGus4j_oB3dFq3hJv4rfRCqmQivDOq_MMKJ1Tt6hHE.jpg?width=960&crop=smart&auto=webp&s=7343e6c69ba3eb936cc04aaa3043565cc6812045', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PKGus4j_oB3dFq3hJv4rfRCqmQivDOq_MMKJ1Tt6hHE.jpg?width=1080&crop=smart&auto=webp&s=636623ba1b540aca859c6c76eb34d201ff9c2652', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PKGus4j_oB3dFq3hJv4rfRCqmQivDOq_MMKJ1Tt6hHE.jpg?auto=webp&s=5a88b5933bcc79837dcb0cc6011607da72aec5bd', 'width': 1200}, 'variants': {}}]} |
Is possible to use llama for relation extraction tasks? | 1 | Hey guys, I am new in this llm world. I have a personal project that I need to do that. So, is it possible? If yes, can you give me some tips? | 2023-10-15T15:13:03 | https://www.reddit.com/r/LocalLLaMA/comments/178hhm2/is_possible_to_use_llama_for_relation_extraction/ | Lipao262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178hhm2 | false | null | t3_178hhm2 | /r/LocalLLaMA/comments/178hhm2/is_possible_to_use_llama_for_relation_extraction/ | false | false | self | 1 | null |
Making LLM aware of API documentation | 8 | I want to explore some text-to-code use cases for a specific REST API I’m working with. The API has all of its documentation online. Is there some way to make LLaMA “aware” of that documentation absent putting it in the prompt (which would be way too long)? | 2023-10-15T14:29:47 | https://www.reddit.com/r/LocalLLaMA/comments/178gktl/making_llm_aware_of_api_documentation/ | Busy-Pie-4468 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178gktl | false | null | t3_178gktl | /r/LocalLLaMA/comments/178gktl/making_llm_aware_of_api_documentation/ | false | false | self | 8 | null |
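For the API-documentation question above, the standard answer is retrieval-augmented generation (RAG): chunk the docs, index the chunks, and put only the top-k relevant chunks into the prompt per query. A minimal pure-Python retriever as a sketch — a real setup would swap the bag-of-words scoring for embeddings plus a vector store such as FAISS or Chroma:

```python
# Minimal RAG-style retrieval: pick the doc chunks most relevant to a query,
# then paste only those into the LLM prompt. Word-overlap cosine is a stand-in
# for a real embedding index (e.g. sentence-transformers + FAISS/Chroma).
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def top_chunks(query, chunks, k=2):
    q = Counter(tokenize(query))
    return sorted(chunks, key=lambda c: cosine(q, Counter(tokenize(c))), reverse=True)[:k]

docs = [
    "GET /users/{id} returns a user object with name and email fields.",
    "POST /orders creates a new order; body must include product_id and quantity.",
    "Authentication uses a Bearer token passed in the Authorization header.",
]
relevant = top_chunks("how do I create an order?", docs, k=1)
prompt = "API docs:\n" + "\n".join(relevant) + "\n\nWrite code to create an order."
print(relevant[0])  # the POST /orders chunk
```

This keeps the prompt short no matter how large the documentation is, which is the point: the model never needs to "know" the whole API, only the slice relevant to the current request.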
Performance report - Inference with two RTX 4060 Ti 16Gb | 88 | \## Summary
​
This post is about my hardware setup and how it performs certain LLM tasks. The idea is to provide a baseline for how a similar platform might operate.
I was inspired by the suggestions of u/FieldProgrammable and u/Zangwuz, who mentioned that sharing performance figures from my workstation could be valuable. Their feedback primarily focused on inference performance. Although this was not my main goal when building the machine, please note that this is not a universal recommendation. Your needs and motivations might differ from mine.
​
\### The machine
​
Below are the specs of my machine. I was looking for the largest amount of unused VRAM I could afford within my budget (\~$3000 CAD). I was hesitant to invest such a significant amount with the risk of the GPU failing in a few months, which ruled out buying used RTX 3090 cards. With the RTX 4090 priced over \*\*$2199 CAD\*\*, my next best option for more than 20Gb of VRAM was to get two RTX 4060 Ti 16Gb (around $660 CAD each).
​
[screen fetch output](https://preview.redd.it/jb6949fyjdub1.png?width=1400&format=png&auto=webp&s=cae379119eef2ab7e78b8ba26e80b68e4a7dbeaa)
[gpustat output](https://preview.redd.it/bypvmdv1kdub1.png?width=1394&format=png&auto=webp&s=8c670f7a91216bf6ba453699005aa5368bed29f7)
\### Highlights
​
\- This is not a benchmark post, and even in this preliminary format, the comparison wasn't exactly apples-to-apples and proved time-consuming. Some details have been omitted for the sake of brevity. This may or may not evolve into a more detailed blog post in the future.
\- I used Oobabooga's \[text-generation-webui\]([https://github.com/oobabooga/text-generation-webui/tree/main](https://github.com/oobabooga/text-generation-webui/tree/main)) as a client for the tests, but this choice introduced some problems. One of them was that loading and unloading models seemed to degrade performance somewhat, so interpret the figures here with caution.
​
\### The method
​
This approach was very straightforward and not rigorous, so results are merely anecdotal and referential. I repeated the same prompt ("can you tell me anything about Microsoft?") at least three times with three different loaders (AutoGPTQ, ExLlamav2\_HF, Llama.cpp) and recorded the results along with some side notes. I did not assess the quality of the output for each prompt.
Repeating the same prompt implies that loaders with caching for tokenization might see some performance gains. However, since tokenization usually isn't the main source of performance impact, and given that we didn't observe significant gains after the initial run, I chose to stick with this method because it greatly simplified the experiments.
Another thing I refrained from was enforcing the same seed between runs. Since this wouldn't yield comparable results across loaders, and considering the complexity I was already dealing with, I decided to address that aspect at another time.
​
[example of model's output](https://preview.redd.it/ydcg7j0gkdub1.png?width=1784&format=png&auto=webp&s=f02b1b39c00f5841cac1b3d8087ffc3102f89ec4)
There were two models used: mythomix-l2-13b.Q5\_K\_M.gguf and TheBloke\_MythoMix-L2-13B-GPTQ. This presents the first challenge in turning this exercise into a proper benchmark. Each loader had its limitations (e.g., ExLlamav2\_HF wouldn't utilize the second GPU, and AutoGPTQ's performance seemed to significantly misrepresent the system). Additionally, they don't use the same format. Thus, even though the models should be comparable since they are both quantized versions of the same base model, it's plausible that one might have performance advantages over the other that aren't related to the hardware.
To evaluate the results, I used a combination of the webui output from the terminal and \`nvtop\` to track vram and gpu usage across graphics cards.
A word of caution: when I first gathered these numbers, I used the Load/Unload/Reload functions of webui. This seemed convenient as it would allow for rapid tests when adjusting the settings for each loader. However, this approach led to significant performance degradation, which surprisingly disappeared when I restarted the Python process after each iteration. Coupled with some disparities I observed between running certain loaders in their native form (e.g., llama.cpp) and using webui, my trust in webui for this specific comparison diminished. Still, these preliminary tests took more time than I had anticipated, so it is what we have for now :)
​
\### Experiments
​
\#### Using AutoGPTQ
​
AutoGPTQ was quite tricky to operate with two GPUs, and it seems the loader would consistently attempt to utilize a significant amount of CPU, leading to decreased performance. Initially, I suspected this was due to some overhead related to GPU orchestration, but I abandoned that theory when I restricted the amount of CPU RAM used, and performance improved.
This was also by far the most inconsistent loader between runs. Using one gpu consistently outperformed using two in AutoGPTQ (in contrast to Llama.cpp, where it made little to no difference). However, the extent of that difference is up for discussion. In some runs, the discrepancy was about 3 tokens/sec, while in others, it was around 13 tokens/sec. I think this speaks more to AutoGPTQ not being optimized for running inference on two GPUs than to a hardware disadvantage.
On average, using two GPUs, the throughput was around 11.94 tokens/sec, in contrast to 13.74 tokens/sec (first run batch) and 26.54 tokens/sec (second run batch) when using only one GPU.
\##### One GPU
https://preview.redd.it/md6wmm4qkdub1.png?width=1768&format=png&auto=webp&s=0ca1c9f3576da18f9bbf4f2894822c5530acc314
https://preview.redd.it/l3il5vstkdub1.png?width=1344&format=png&auto=webp&s=4d88ff899ef894d89d5f105010a5293eb0fca0a6
https://preview.redd.it/3tnixd7xkdub1.png?width=2452&format=png&auto=webp&s=2b2c6ab12ebb8e038598db3662ef62d38e92ac30
\##### Two GPU
​
https://preview.redd.it/wd447be6ldub1.png?width=1786&format=png&auto=webp&s=57285dede2aba2dbc8e9d77b2b9a58ac5b2dc30c
https://preview.redd.it/iwgk6t69ldub1.png?width=1320&format=png&auto=webp&s=35f610b7ec7c932ebe8826ff2398bbab91b4a9d1
https://preview.redd.it/cnl2zqgbldub1.png?width=3256&format=png&auto=webp&s=be0d96b892b453a0ba79974aaa85864ec6877ead
\#### Using ExLlamav2\_HF
​
In an effort to confirm that a second GPU performs subpar compared to just one, I conducted some experiments using ExLlamav2\_HF. Regrettably, I couldn't get the loader to operate with both GPUs. I tinkered with gpu-split and researched the topic, but it seems to me that the loader (at least the version I tested) hasn't fully integrated multi-GPU inference. Regardless, since I did get better performance with this loader, I figured I should share these results.
​
https://preview.redd.it/66gswz2kldub1.png?width=1794&format=png&auto=webp&s=e003b509e53767fcddc8960a2f80cc16c55a623b
https://preview.redd.it/imesc53mldub1.png?width=1300&format=png&auto=webp&s=1b1a9bcb64698f20f343a8d1ad2cfb4e0dcf822e
https://preview.redd.it/bvia7e4pldub1.png?width=2938&format=png&auto=webp&s=c62ba3c7d040b820ae2ee0d8bda29f411eda89a7
\#### Using Llama.cpp
​
I've had the experience of using Llama.cpp natively prior to this session, so I already had a baseline understanding of what the platform could achieve with this implementation. This proved beneficial when questioning some of the earlier results from AutoGPTQ. However, this is essentially admitting a bias towards this particular implementation. So, proceed cautiously and draw your own conclusions.
Out of the box, llama.cpp will try to maximize the balance among both GPUs. This is a nice default, but it did introduce some complexity when testing the performance of just one gpu (without physically disconnecting the card from the computer). After some digging, setting the environment variable CUDA\_VISIBLE\_DEVICES=0 at the start of the process seemed to work.
\`\`\`bash
$ CUDA\_VISIBLE\_DEVICES=0 ./start\_linux.sh
\`\`\`
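The same restriction also works from inside a Python script, as long as the variable is set before CUDA is initialized (i.e. before the first `import torch` or model load) — a sketch:

```python
# Restrict this Python process to GPU 0 only. Must run before CUDA is
# initialized, i.e. before torch (or any loader built on it) is imported.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # "" hides all GPUs, "0,1" exposes both

# Anything imported after this point sees a single device, e.g.:
#   import torch
#   torch.cuda.device_count()  # -> 1
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

Setting it in the shell (as above) is equivalent; the in-script form is just handy when you can't change how the process is launched.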
The results were remarkably consistent whether using two GPUs or just one. The average throughput with a single GPU was 23.16 tokens/sec, compared to 23.92 tokens/sec when utilizing both GPUs.
​
https://preview.redd.it/h4rhj88uldub1.png?width=1844&format=png&auto=webp&s=f459e04d27b0bd0df275f57fe9fa16045dabd301
\##### Two GPU
​
https://preview.redd.it/t7kxrl2yldub1.png?width=1388&format=png&auto=webp&s=46f7304ce8667a8194fa424babee0e68f8aafd4b
https://preview.redd.it/ynde90zzldub1.png?width=1318&format=png&auto=webp&s=b62d882408fa781ffe2e13424dcf03ea8f79d44a
https://preview.redd.it/qc334y51mdub1.png?width=1330&format=png&auto=webp&s=d9ec9eee5a77756b0c390bf6ae62afce9529f147
https://preview.redd.it/cwrhiof3mdub1.png?width=2978&format=png&auto=webp&s=e3c46d15a30b08eb3e8347c8133059a902cb6f09
\##### One GPU
https://preview.redd.it/ibthcvs6mdub1.png?width=1804&format=png&auto=webp&s=7541926f4bf97a2719516454082e0135f116c8ac
https://preview.redd.it/hcehkws6mdub1.png?width=3036&format=png&auto=webp&s=0193715fd4ecfbce14990b8fac73908c616ec013
​
\## Final thoughts
​
I think the main thought I would like to leave here is that performance comparisons are always tricky, but the nature of the task at play makes a benchmark even more challenging. So instead of viewing these numbers as a comparative baseline, I encourage you to see them as an anecdotal experience that might offer a point of reference if you're considering building a similar machine. The final performance will depend on the model you want to use, the loader you decide to use, and many other variables that I haven't touched on here.
If you are contemplating building a multi-gpu computer, my advice is to plan meticulously. I made numerous trips to the store to return failing attempts as balancing the motherboard, case and available PCI slots proved to be challenging.
I don't want to deviate from the main topic (performance report comparison between inference with one and two RTX 4060 Ti cards), so I won't report the results here. However, I'd like to mention that my primary motivation to build this system was to comfortably experiment with fine-tuning. One of my goals was to establish a quality baseline for outputs with larger models (e.g., CodeLlama-34b-Instruct-f16 \~ 63Gb).
I managed to get it run in a decent response time (\~1min) by balancing both GPUs VRAM and RAM with Llama.cpp. All this to say, while this system is well-suited for my needs, it might not be the ideal solution for everyone. | 2023-10-15T14:29:42 | https://www.reddit.com/r/LocalLLaMA/comments/178gkr0/performance_report_inference_with_two_rtx_4060_ti/ | pmelendezu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178gkr0 | false | null | t3_178gkr0 | /r/LocalLLaMA/comments/178gkr0/performance_report_inference_with_two_rtx_4060_ti/ | false | false | 88 | {'enabled': False, 'images': [{'id': 'hJ-NOGJkmb4yVp5AV2etJH4o1-8RexqWrQ2uI23eEx0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fDOtYa0W_Iw3uzkfhqfq7JmmZKC8Vn5Wu8OawvGMH7U.jpg?width=108&crop=smart&auto=webp&s=9d9055e393ea341d62fcca1c9f089fa277a73883', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fDOtYa0W_Iw3uzkfhqfq7JmmZKC8Vn5Wu8OawvGMH7U.jpg?width=216&crop=smart&auto=webp&s=d0f5af3f9af686b9711a17c666e86f8cb224dfb7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fDOtYa0W_Iw3uzkfhqfq7JmmZKC8Vn5Wu8OawvGMH7U.jpg?width=320&crop=smart&auto=webp&s=f8a015f0dfd0ca48b7b1f0e269408dc3ec72a84a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fDOtYa0W_Iw3uzkfhqfq7JmmZKC8Vn5Wu8OawvGMH7U.jpg?width=640&crop=smart&auto=webp&s=28363fe24ec45a9c1da110b0457d6aa4a684648e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fDOtYa0W_Iw3uzkfhqfq7JmmZKC8Vn5Wu8OawvGMH7U.jpg?width=960&crop=smart&auto=webp&s=a8f5e139896337cca52c11096409d92b8da395ba', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fDOtYa0W_Iw3uzkfhqfq7JmmZKC8Vn5Wu8OawvGMH7U.jpg?width=1080&crop=smart&auto=webp&s=5f4b6c15250d073fb886a0954123a5a2dd725057', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/fDOtYa0W_Iw3uzkfhqfq7JmmZKC8Vn5Wu8OawvGMH7U.jpg?auto=webp&s=b797a892b9b839e2f04eb7bd4b0ee43a96b8a8e8', 'width': 1200}, 'variants': {}}]} | |
AVX512 on scalable Xeon | 3 | I would like to build a pure-cpu server to run really large models such as Falcon and Llama. After seeing [This Post](https://www.reddit.com/r/LocalLLaMA/comments/16vlf0d/i_am_looking_for_information_regarding_running/), I think i should to re-compile llama.cpp to enable the AVX512 support. I use Xeon Gold 6138 CPU\*2, and DDR4 Regecc 2400Mhz \*16 for two cpus. would these make it generate faster? Anything I have to do? | 2023-10-15T14:24:02 | https://www.reddit.com/r/LocalLLaMA/comments/178ggp3/avx512_on_scalable_xeon/ | MarySmith2021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178ggp3 | false | null | t3_178ggp3 | /r/LocalLLaMA/comments/178ggp3/avx512_on_scalable_xeon/ | false | false | self | 3 | null |
Local LLM suggestions for low spec pc: RX 580 8GB or CPU 16 GB inference? | 4 | Hi,
I am planning to try out local LLMs. My hardware specs are:
* CPU: Ryzen 3200g
* Ram: 3200 MHz 8 GB (2x)
* GPU: RX 580 8 GB
I know it's not much, and my goal isn't running 34/70B models or anything, I just want to see how local LLMs within these specs perform.

Given my specs, do you think I should try GPU or CPU inference for best results? Please suggest any good models I can try out with these specs.
Also, it would be great if you can point me out to any tutorials or instructions specific to my situation (8 GB VRAM AMD GPU or CPU 16 GB RAM). | 2023-10-15T14:18:15 | https://www.reddit.com/r/LocalLLaMA/comments/178gcif/local_llm_suggestions_for_low_spec_pc_rx_580_8gb/ | FewLifetimes_ago_21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178gcif | false | null | t3_178gcif | /r/LocalLLaMA/comments/178gcif/local_llm_suggestions_for_low_spec_pc_rx_580_8gb/ | false | false | self | 4 | null |
having issues with loading models in oobabooga using exllamma (errors in pic) | 1 | 2023-10-15T13:22:16 | BoostManMaG | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 178f921 | false | null | t3_178f921 | /r/LocalLLaMA/comments/178f921/having_issues_with_loading_models_in_oobabooga/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'Lj96zW-YvMVCvks5KVaK7yuJKt46k-G3HsA3O1HjI44', 'resolutions': [{'height': 89, 'url': 'https://preview.redd.it/fwwvrrfhadub1.png?width=108&crop=smart&auto=webp&s=3a1c1787b00da7281f487a090223f8ee68888618', 'width': 108}, {'height': 178, 'url': 'https://preview.redd.it/fwwvrrfhadub1.png?width=216&crop=smart&auto=webp&s=2745348d542df97b0f325c5becf57e4f3e236e87', 'width': 216}, {'height': 264, 'url': 'https://preview.redd.it/fwwvrrfhadub1.png?width=320&crop=smart&auto=webp&s=956d8404797d100a8c845d4900c8864f31ec1388', 'width': 320}, {'height': 529, 'url': 'https://preview.redd.it/fwwvrrfhadub1.png?width=640&crop=smart&auto=webp&s=ef72af5e0fa7ef753b5d750a93ee11d1fab485ac', 'width': 640}], 'source': {'height': 768, 'url': 'https://preview.redd.it/fwwvrrfhadub1.png?auto=webp&s=6113ffd5e74926231cd6b4da178578dbd79cce22', 'width': 928}, 'variants': {}}]} | |||
Transformers / Llama.cpp / GGUF / GGML / GPTQ & other animals | 60 | I am a little puzzled:

1. I know that transformers is the HF framework/library to load, infer with, and train models easily

2. and that llama.cpp is another framework/library that does more of the same, but is specialized in quantized models that run on CPU and run much faster

3. I understand that GGML is a file format for saving model parameters in a single file, that it's an old, problematic format, and GGUF is the new kid on the block, while GPTQ is a similar quantized file format for models that run on GPU
​
so here is what i can't understand (assuming i got all the rest correct):
1. does HF Transformers support loading GGUF or GGML models?

2. and does GGUF need a tokenizer JSON, or does the data come from within the GGUF file itself?

3. and is safetensors (another file format) supported by both Transformers and Llama.cpp?
​
since I cannot find Python examples for these combinations, I assume the answer to all of them is: no
​
can anyone shed some light ? | 2023-10-15T12:44:03 | https://www.reddit.com/r/LocalLLaMA/comments/178el7j/transformers_llamacpp_gguf_ggml_gptq_other_animals/ | Particular_Flower_12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178el7j | false | null | t3_178el7j | /r/LocalLLaMA/comments/178el7j/transformers_llamacpp_gguf_ggml_gptq_other_animals/ | false | false | self | 60 | null |
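On question (2) above: GGUF is self-describing — the tokenizer vocabulary and model hyperparameters are stored as key-value metadata inside the file itself, so llama.cpp needs no separate tokenizer JSON. (And at the time of writing, HF Transformers does not load GGUF/GGML directly; llama-cpp-python is the usual bridge.) The file starts with a small binary header you can inspect yourself; this sketch parses just the magic, version, and counts per the llama.cpp GGUF spec:

```python
# Peek at a GGUF header: magic "GGUF", uint32 version, uint64 tensor count,
# uint64 key-value count. The KV section is what carries tokenizer and
# hyperparameter metadata — the reason no external tokenizer.json is needed.
import struct

def read_gguf_header(buf):
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", buf, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "kv_pairs": n_kv}

# Synthetic header so the demo needs no real model file:
fake = struct.pack("<4sIQQ", b"GGUF", 3, 291, 24)
print(read_gguf_header(fake))  # → {'version': 3, 'tensors': 291, 'kv_pairs': 24}
```

To read a real model, pass the first 24 bytes of the `.gguf` file; full KV parsing (strings, arrays, typed values) follows immediately after this header in the spec.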
Best for multiple LoRA adapter inference | 4 | Hi guys, I want to run multiple LoRA finetuned models based on the same base model (Mistral 7b). What is the best framework for this? vLLM does not seem to support LoRA officially yet and I would want to host the model on a single L4/L40 and hotswap various adapters depending on which one I want to use. Any help would be greatly appreciated! Thanks! | 2023-10-15T12:35:45 | https://www.reddit.com/r/LocalLLaMA/comments/178egak/best_for_multiple_lora_adapter_inference/ | WillingMood2319 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178egak | false | null | t3_178egak | /r/LocalLLaMA/comments/178egak/best_for_multiple_lora_adapter_inference/ | false | false | self | 4 | null |
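For the multi-adapter question above: until vLLM ships official LoRA support, one common pattern is a single base model resident on the GPU with multiple named adapters registered via peft, switched per request. A sketch — adapter paths are placeholders, and the API names (`load_adapter`/`set_adapter`) are from peft circa late 2023, so check them against your installed version:

```python
# Sketch: one Mistral-7B base, several LoRA adapters hot-swapped per request.
# Adapter paths are placeholders; requires `transformers` + `peft`.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tok = AutoTokenizer.from_pretrained(base_id)

# Loading the first adapter creates the PeftModel; register the rest by name.
model = PeftModel.from_pretrained(base, "./adapters/summarize", adapter_name="summarize")
model.load_adapter("./adapters/sql", adapter_name="sql")
model.load_adapter("./adapters/chat", adapter_name="chat")

def generate(task, prompt):
    model.set_adapter(task)   # cheap switch: adapter weights are already in memory
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    return tok.decode(out[0], skip_special_tokens=True)

print(generate("sql", "List all users created this week."))
```

The trade-off versus true multi-LoRA batching: requests for different adapters can't share a batch here, so throughput suffers if traffic interleaves adapters heavily.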
Which 70B model is best for writing short stories, and will also take long queries better? | 1 | [removed] | 2023-10-15T10:41:59 | https://www.reddit.com/r/LocalLLaMA/comments/178cq79/which_70b_model_is_best_for_writing_short_stories/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178cq79 | false | null | t3_178cq79 | /r/LocalLLaMA/comments/178cq79/which_70b_model_is_best_for_writing_short_stories/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'CCjtpAVlCtLDCoAHHFwRN2O1m-krrAmSo-QmVT6VCqk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iQz_kKaVapQu1lqlw2lPbkDPGg08_pacRD3bcZDYf6E.jpg?width=108&crop=smart&auto=webp&s=dc9e34c7fef24991dc6c420a7c6860200a958d5b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iQz_kKaVapQu1lqlw2lPbkDPGg08_pacRD3bcZDYf6E.jpg?width=216&crop=smart&auto=webp&s=72707ce4c663572c5e2fe70f4d3c2fa78aee1360', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iQz_kKaVapQu1lqlw2lPbkDPGg08_pacRD3bcZDYf6E.jpg?width=320&crop=smart&auto=webp&s=ae6d051bd950fcc4b19a1cdab7b625c5a48d15b2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iQz_kKaVapQu1lqlw2lPbkDPGg08_pacRD3bcZDYf6E.jpg?width=640&crop=smart&auto=webp&s=602be7bcfa8c95f9d398e19c1140a6784a0d86b4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iQz_kKaVapQu1lqlw2lPbkDPGg08_pacRD3bcZDYf6E.jpg?width=960&crop=smart&auto=webp&s=a955deb91dc96789682dfb5a00e6e53ce8cd97d0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iQz_kKaVapQu1lqlw2lPbkDPGg08_pacRD3bcZDYf6E.jpg?width=1080&crop=smart&auto=webp&s=ada388bb8b474a6fadac7ccf1b4b6ee1974c2815', 'width': 1080}], 'source': {'height': 2694, 'url': 'https://external-preview.redd.it/iQz_kKaVapQu1lqlw2lPbkDPGg08_pacRD3bcZDYf6E.jpg?auto=webp&s=c070ca41720fa89b75623c4ea30f34a3cdc55977', 'width': 5382}, 'variants': {}}]} |
Fine tune Llama2 with Lora for foreign language | 1 | [removed] | 2023-10-15T10:26:51 | https://www.reddit.com/r/LocalLLaMA/comments/178cifz/fine_tune_llama2_with_lora_for_foreign_language/ | kostakos14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178cifz | false | {'oembed': {'author_name': 'Sam Witteveen', 'author_url': 'https://www.youtube.com/@samwitteveenai', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/ThKWQcyQXF8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="LLaMA2 for Multilingual Fine Tuning?"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/ThKWQcyQXF8/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'LLaMA2 for Multilingual Fine Tuning?', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_178cifz | /r/LocalLLaMA/comments/178cifz/fine_tune_llama2_with_lora_for_foreign_language/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'VDbmKyCTHTQ7qgGIBBfQXwWTaf_iS02RNahxvhWPFJM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/oolHAgC3LIYbXQdLGhdNeE8Gei36mJJfKoQeRAIz5QY.jpg?width=108&crop=smart&auto=webp&s=7e9660651fd8f39c83e123d238c1786a1f26e59f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/oolHAgC3LIYbXQdLGhdNeE8Gei36mJJfKoQeRAIz5QY.jpg?width=216&crop=smart&auto=webp&s=5e723bf18a200c4b2793f28960fcb44936484eb9', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/oolHAgC3LIYbXQdLGhdNeE8Gei36mJJfKoQeRAIz5QY.jpg?width=320&crop=smart&auto=webp&s=64a36d3aaf04f0859036f8cabe72a86ab07250b0', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/oolHAgC3LIYbXQdLGhdNeE8Gei36mJJfKoQeRAIz5QY.jpg?auto=webp&s=4b3cbfea7d58aab1b8783627b8609a6a748da1a6', 'width': 
480}, 'variants': {}}]} | |
LLM for audit logs | 11 | Hi all. For privacy and security reasons I would like to move out from chatgpt-3.5 and hence use local llm to feed audit logs from my services (e.g. hashicorp vault, Openstack, keystone e.t.c.) and use it to summarise logs and describe what happened. Is there a model in huggingface that I could use? If not how can I get an existing model and make it learn to summarise such logs? Feeding the RTFM in the model once is enough or I need to also train it with a dataset that contains logs and then the summary? Thanks in advance. | 2023-10-15T09:05:50 | https://www.reddit.com/r/LocalLLaMA/comments/178bfm8/llm_for_audit_logs/ | dimtass | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178bfm8 | false | null | t3_178bfm8 | /r/LocalLLaMA/comments/178bfm8/llm_for_audit_logs/ | false | false | self | 11 | null |
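Whatever model ends up summarizing the audit logs above, the logs will usually exceed the context window, so a chunking step comes first (map-reduce style: summarize each chunk, then summarize the summaries). A minimal chunker using the rough 4-characters-per-token heuristic in place of a real tokenizer:

```python
# Split an audit log into chunks that fit a model's context budget, so each
# chunk can be summarized separately ("map") and the partial summaries combined
# ("reduce"). Uses the rough 4-chars-per-token heuristic, not a real tokenizer.
def chunk_log(lines, max_tokens=1500, chars_per_token=4):
    budget = max_tokens * chars_per_token
    chunks, current, size = [], [], 0
    for line in lines:
        if current and size + len(line) > budget:
            chunks.append("\n".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line) + 1   # +1 for the joining newline
    if current:
        chunks.append("\n".join(current))
    return chunks

log = [f"2023-10-15T09:0{i % 10}:00 vault audit: token lookup by user{i}" for i in range(200)]
chunks = chunk_log(log, max_tokens=100)   # tiny budget just for the demo
print(len(chunks), all(len(c) <= 100 * 4 for c in chunks))
```

Each chunk then goes into a prompt like "Summarize the following audit log lines: …", and the per-chunk summaries get a final summarization pass.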
Multi-Client model Fine-tuning: One model or individual models for each client? | 2 |
Title: Multi-Client Model Fine-tuning: One Model or Individual Models for Each Client?
Body:
Hello there,
I am delving into a scenario where I am required to fine-tune a language model for several clients, each with their unique data and requirements. The objective is to tailor the model to better understand and process each client's specific datasets.
Now, I am at a crossroads:
Is it feasible to fine-tune a single model to handle all clients' data separately and ensure the fine-tuned data remains distinct for each client?
Or is it a better practice to create and fine-tune a dedicated model for each client to maintain the integrity and specificity of the data?
I am concerned about the efficiency, data privacy, and the quality of the fine-tuning in both approaches.
Thank you | 2023-10-15T08:56:11 | https://www.reddit.com/r/LocalLLaMA/comments/178bb2x/multiclient_model_finetuning_one_model_or/ | yotobeetaylor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178bb2x | false | null | t3_178bb2x | /r/LocalLLaMA/comments/178bb2x/multiclient_model_finetuning_one_model_or/ | false | false | self | 2 | null |
Using local llm with reddit data | 2 | Hello everyone,
I'm venturing into the realm of Natural Language Processing (NLP) and I am intrigued by the idea of training a language model using data extracted from one or more channels on Reddit.
I'm wondering if anyone has embarked on a similar project or has insights on the feasibility of this idea? Additionally, are there any particular channels that you'd recommend for such a venture, given their content richness and activity?
I would greatly appreciate any advice, recommendations, or experiences shared!
Thank you in advance for your time and insights | 2023-10-15T08:53:52 | https://www.reddit.com/r/LocalLLaMA/comments/178b9zt/using_local_llm_with_reddit_data/ | yotobeetaylor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178b9zt | false | null | t3_178b9zt | /r/LocalLLaMA/comments/178b9zt/using_local_llm_with_reddit_data/ | false | false | self | 2 | null |
Open llm leader board on hugging face is not loading | 11 | Is it just me, or is there something wrong with the Open LLM Leaderboard on Hugging Face?
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
It shows an error on loading the page | 2023-10-15T08:43:22 | https://www.reddit.com/r/LocalLLaMA/comments/178b530/open_llm_leader_board_on_hugging_face_is_not/ | testerpce | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178b530 | false | null | t3_178b530 | /r/LocalLLaMA/comments/178b530/open_llm_leader_board_on_hugging_face_is_not/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=108&crop=smart&auto=webp&s=90f4efd1c1314faf5b0cd1c5eeb8d2835fe4a3ba', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=216&crop=smart&auto=webp&s=062336de177b9f9f124a98f4e03b59faa819be1d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=320&crop=smart&auto=webp&s=01e9aeccc0d76fee4ecb359bfb6238dc2afd87f0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=640&crop=smart&auto=webp&s=723b41bc410ff59454cf7a9a3db4eced43d4868f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=960&crop=smart&auto=webp&s=2c43ffe72f7f32d522c3e85c1aa8e25d6f213b38', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=1080&crop=smart&auto=webp&s=6a7d63eae44237642a3f95e586436bf6efe5dd70', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?auto=webp&s=51349e0b781d1c9e91535974e09833705c76a3cc', 'width': 1200}, 'variants': {}}]} |
Pandalyst-7B-V1.2 !!! Now we can plot 📈 ! and much more powerful ! | 1 | [removed] | 2023-10-15T08:13:25 | https://www.reddit.com/r/LocalLLaMA/comments/178aqfc/pandalyst7bv12_now_we_can_plot_and_much_more/ | Wise_Argument_882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178aqfc | false | null | t3_178aqfc | /r/LocalLLaMA/comments/178aqfc/pandalyst7bv12_now_we_can_plot_and_much_more/ | false | false | self | 1 | null |
Accelerated Inferenced on CPUs with Deepsparse | 4 | MPT Sparse Finetuned on GSM8k with DeepSparse
"The MPT model runs purely on CPU courtesy of sparse software execution by DeepSparse. DeepSparse provides accelerated inference by taking advantage of the MPT model's weight sparsity to deliver tokens fast!"
https://huggingface.co/spaces/neuralmagic/sparse-mpt-7b-gsm8k
A short video showing massively accelerated interference:
https://twitter.com/_akhaliq/status/1713215880111247541 | 2023-10-15T07:48:22 | https://www.reddit.com/r/LocalLLaMA/comments/178adx2/accelerated_inferenced_on_cpus_with_deepsparse/ | mitirki | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 178adx2 | false | null | t3_178adx2 | /r/LocalLLaMA/comments/178adx2/accelerated_inferenced_on_cpus_with_deepsparse/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': '831Gswk3prsPuJCde49GC6EZD2tGTCMSRgrs6MeZUgA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aW9FoRtOAAPrqj77rJBenDT8jiZrpZrsjAEgOX2wUXY.jpg?width=108&crop=smart&auto=webp&s=a76666d3860b141deef8603528b94bb96bbc0151', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/aW9FoRtOAAPrqj77rJBenDT8jiZrpZrsjAEgOX2wUXY.jpg?width=216&crop=smart&auto=webp&s=21c3301ae6b73380102e60784c6857c6bbde0067', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/aW9FoRtOAAPrqj77rJBenDT8jiZrpZrsjAEgOX2wUXY.jpg?width=320&crop=smart&auto=webp&s=0d4dcb800e3ec327f31a7aacc25624645729fee2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/aW9FoRtOAAPrqj77rJBenDT8jiZrpZrsjAEgOX2wUXY.jpg?width=640&crop=smart&auto=webp&s=cc95adea58c0bc4643ab2083fb58920692e214ca', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/aW9FoRtOAAPrqj77rJBenDT8jiZrpZrsjAEgOX2wUXY.jpg?width=960&crop=smart&auto=webp&s=96582f9bca91f0e5586db2bc446afb09585d2210', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/aW9FoRtOAAPrqj77rJBenDT8jiZrpZrsjAEgOX2wUXY.jpg?width=1080&crop=smart&auto=webp&s=c0ed1026b613a21542b26c9ea7a125ab3babc97e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/aW9FoRtOAAPrqj77rJBenDT8jiZrpZrsjAEgOX2wUXY.jpg?auto=webp&s=2dd7fe6c1396453b1e30238a70e0c36c3a5935a3', 'width': 1200}, 'variants': {}}]} |
Finetune model but retain original reasoning and language capabilities | 4 | Hi,
​
I am fine-tuning the Open-Orca/Mistral-7B-OpenOrca model with QLoRA for specific downstream tasks like NER extraction and classification.
​
this is working well and not facing any issues with this.
​
But the fine-tuned model doesn't retain the original conversation abilities of the OpenOrca model; it's only able to produce output based on my fine-tuning prompt dataset.
​
How can I fine-tune the model while still retaining its original capabilities?
​
any help would be appreciated | 2023-10-15T05:43:32 | https://www.reddit.com/r/LocalLLaMA/comments/1788nfp/finetune_model_but_retain_original_reasoning_and/ | manishiitg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1788nfp | false | null | t3_1788nfp | /r/LocalLLaMA/comments/1788nfp/finetune_model_but_retain_original_reasoning_and/ | false | false | self | 4 | null |
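A common mitigation for the forgetting described above (a general technique, not something specific to OpenOrca) is to mix a portion of general instruction/chat examples back into the task-specific fine-tuning set, so the adapter keeps seeing the base model's original data distribution. A minimal sketch of such mixing:

```python
import random

def mix_datasets(task_examples, general_examples, general_ratio=0.3, seed=42):
    """Blend task-specific examples (e.g. NER prompts) with general
    instruction/chat examples so that roughly `general_ratio` of the
    final training set is general data.  Deterministic via `seed`."""
    n_general = int(len(task_examples) * general_ratio / (1 - general_ratio))
    rng = random.Random(seed)
    sampled = rng.sample(general_examples, min(n_general, len(general_examples)))
    mixed = task_examples + sampled
    rng.shuffle(mixed)
    return mixed
```

The mixed list can then be fed to the same QLoRA training loop as before; a lower LoRA rank or fewer epochs also tends to reduce how much the adapter overwrites general behaviour.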
Model conversion | 1 | I have downloaded a GGUF model. To load it in GPT4All, I think I have to covert it into .bin model. Could anyone possibly guide me on that stuff? Thanks in advance. | 2023-10-15T04:38:05 | https://www.reddit.com/r/LocalLLaMA/comments/1787nld/model_conversion/ | New-Contribution6302 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1787nld | false | null | t3_1787nld | /r/LocalLLaMA/comments/1787nld/model_conversion/ | false | false | self | 1 | null |
LLava working great on LLama.cpp | 36 | Llava is now built automatically when making llama.cpp. It works as advertised. Pretty cool! I can imagine what a boon this will be to the visually impaired once it's fully integrated. | 2023-10-15T01:51:50 | https://www.reddit.com/r/LocalLLaMA/comments/1784s83/llava_working_great_on_llamacpp/ | vamsammy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1784s83 | false | null | t3_1784s83 | /r/LocalLLaMA/comments/1784s83/llava_working_great_on_llamacpp/ | false | false | self | 36 | null |
Using Koboldcpp to run local models. I'm getting garbage responses when running from a private browser window or from a portable browser. I switch to a standard browser and I get the expected responses. | 3 | I'm trying to chat with it and it just wants to talk about html coding. I feel like it's receiving some html code with the input when running under these circumstances. I couldn't find any information from a Google search. Is this expected behavior? | 2023-10-15T00:58:00 | https://www.reddit.com/r/LocalLLaMA/comments/1783ri9/using_koboldcpp_to_run_local_models_im_getting/ | nexusjuan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1783ri9 | false | null | t3_1783ri9 | /r/LocalLLaMA/comments/1783ri9/using_koboldcpp_to_run_local_models_im_getting/ | false | false | self | 3 | null |
Introducing MiniSearch - A minimalist search engine with integrated browser-based AI | 1 | [removed] | 2023-10-15T00:49:50 | https://www.reddit.com/r/LocalLLaMA/comments/1783m1z/introducing_minisearch_a_minimalist_search_engine/ | Felladrin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1783m1z | false | null | t3_1783m1z | /r/LocalLLaMA/comments/1783m1z/introducing_minisearch_a_minimalist_search_engine/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'JIKvAy3erMvKYGuOFYRZZy4vVxE1MwbSKKzMTe-jySc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XpUrIzKzLQn-utcJ36Sa94KoREJNO4rs1iYV9RZyRgE.jpg?width=108&crop=smart&auto=webp&s=d6d61bdd9851ab89aa73af3b1413861fcab5bb62', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XpUrIzKzLQn-utcJ36Sa94KoREJNO4rs1iYV9RZyRgE.jpg?width=216&crop=smart&auto=webp&s=87a10d76683f3bf91b4c7949c5f03db60d11123f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XpUrIzKzLQn-utcJ36Sa94KoREJNO4rs1iYV9RZyRgE.jpg?width=320&crop=smart&auto=webp&s=8ab1e1dd0e3f11d3c624bf3c222fdcd6ec1e0063', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XpUrIzKzLQn-utcJ36Sa94KoREJNO4rs1iYV9RZyRgE.jpg?width=640&crop=smart&auto=webp&s=85615741162ea90e76021e654690ba4be7a3aafa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XpUrIzKzLQn-utcJ36Sa94KoREJNO4rs1iYV9RZyRgE.jpg?width=960&crop=smart&auto=webp&s=c2988df498b7422147f458e22b15ce08e62586a8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XpUrIzKzLQn-utcJ36Sa94KoREJNO4rs1iYV9RZyRgE.jpg?width=1080&crop=smart&auto=webp&s=2f514d29f6eb19a0e001e5f7d13b4638a8951889', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XpUrIzKzLQn-utcJ36Sa94KoREJNO4rs1iYV9RZyRgE.jpg?auto=webp&s=4758299e2691fec8436b56db1cdc0c76c8d5f79f', 'width': 1200}, 'variants': {}}]} |
Is lowering quality of the model worth it, if the model is bigger? | 25 | Let's pretend there is a hypothetical Wildcard model that comes in two versions, 7B and 13B.
What is a difference between Wildcard-7B-Q8 and Wildcard-13B-Q2?
My computer fits 7B confidently and 13B slowly.
I like the greater quality of 13B but not the slowness of it.
I noticed that difference of speed between Q8 and Q2 is great, but quality goes down.
If I use 7B-Q6, is it worth for me to upgrade 13B-Q4 for speed?
What are the tradeoffs between the size of the model and the quality degradation from quantizing the same model?
Is 7B-Q8 as good as 13B-Q2 or is param jump so big that they are still separate classes?
What is the rule of thumb?
**I'm not asking about difference in quality between 7B and 13B, but about tradeoffs in quality vs. size**. | 2023-10-15T00:10:28 | https://www.reddit.com/r/LocalLLaMA/comments/1782ui1/is_lowering_quality_of_the_model_worth_it_if_the/ | AdExcellent7516 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1782ui1 | false | null | t3_1782ui1 | /r/LocalLLaMA/comments/1782ui1/is_lowering_quality_of_the_model_worth_it_if_the/ | false | false | self | 25 | null |
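As a back-of-envelope answer to the size question above, file size scales roughly as parameters × bits per weight ÷ 8. The bits-per-weight figures below are rough approximations of common GGUF quant levels (real k-quant files carry extra per-block metadata), so treat the output as ballpark only:

```python
def approx_size_gb(n_params_billion, bits_per_weight):
    """Very rough quantized-model file-size estimate:
    parameters x bits per weight / 8 bits per byte."""
    return n_params_billion * bits_per_weight / 8

# Approximate bits-per-weight for some common quant levels (assumptions).
for name, params, bits in [("7B-Q8", 7, 8.5), ("7B-Q6", 7, 6.6),
                           ("13B-Q4", 13, 4.8), ("13B-Q2", 13, 2.6)]:
    print(f"{name}: ~{approx_size_gb(params, bits):.1f} GB")
```

Note that 13B-Q2 comes out *smaller* than 7B-Q8 on disk and in RAM, so the speed/size tradeoff and the quality tradeoff point in opposite directions: community experience is commonly reported as favouring a bigger model at moderate quants (Q4-Q5) over a smaller model at Q8, while Q2 tends to degrade quality sharply.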
Running theBloke's dolphin-2.1-mistral-7B-AWQ on gtx 1060 card? | 1 | [removed] | 2023-10-14T23:57:43 | https://www.reddit.com/r/LocalLLaMA/comments/1782l5u/running_theblokes_dolphin21mistral7bawq_on_gtx/ | blacktie_redstripes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1782l5u | false | null | t3_1782l5u | /r/LocalLLaMA/comments/1782l5u/running_theblokes_dolphin21mistral7bawq_on_gtx/ | false | false | self | 1 | null |
2 RTX 3060 or a 4070 | 6 | Hey guys, I am currently upgrading my PC(technically a complete overhaul) to be able to fine tune LLM (LLaMA 2) and I need help deciding if I should get 2 \* RTX 3060 12gb (24 GB in total) or RTX 4070 16 GB . I do want to explore fine tuning various models and experiment with 13B models in mean time but my main focus now is to fine tune the model with custom medical data set.
​
I am kinda new to all of this if you could explain me in detail that would be a huge help.
Thank you in advance :) | 2023-10-14T23:46:11 | https://www.reddit.com/r/LocalLLaMA/comments/1782cyd/2_rtx_3060_or_a_4070/ | vaishakgkumar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1782cyd | false | null | t3_1782cyd | /r/LocalLLaMA/comments/1782cyd/2_rtx_3060_or_a_4070/ | false | false | self | 6 | null |
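For the GPU question above, a very rough QLoRA VRAM estimate can help frame the choice. All the constants below are back-of-envelope assumptions (real usage varies with sequence length, batch size, and gradient checkpointing), not measured numbers:

```python
def qlora_vram_gb(n_params_billion, lora_frac=0.01, overhead_gb=2.0):
    """Back-of-envelope VRAM for QLoRA fine-tuning:
    4-bit base weights (0.5 bytes/param) + fp16 LoRA adapters with
    optimizer state (~8 bytes per trainable param, assumed ~1% of the
    model) + a flat allowance for activations and CUDA overhead."""
    base = n_params_billion * 0.5
    adapters = n_params_billion * lora_frac * 8
    return base + adapters + overhead_gb

for size in (7, 13):
    print(f"{size}B QLoRA: ~{qlora_vram_gb(size):.1f} GB VRAM")
```

Under these assumptions a 13B QLoRA run fits comfortably in 16 GB on a single card; note also that a single GPU is simpler for training than two, since splitting a fine-tune across 2 × 12 GB requires the training framework to support model parallelism.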
Are there any guides to using a local llm with Autogen or Chatdev? | 12 | Litellm says they can do it, and I'm reasonably sure they can, but I'm also completely and utterly lost. I've read the documentation three times now, and I still have no clue what to do. (any time I ask them for help they just send me a link to the documentation for it)
There's nothing I've found on YouTube or Google to help, and even bing chat can't coach me though it, so either someone's got a killer guide made for it that clearly documents everything, or I'm hopeless. | 2023-10-14T22:41:04 | https://www.reddit.com/r/LocalLLaMA/comments/17811ai/are_there_any_guides_to_using_a_local_llm_with/ | SendMePicsOfCat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17811ai | false | null | t3_17811ai | /r/LocalLLaMA/comments/17811ai/are_there_any_guides_to_using_a_local_llm_with/ | false | false | self | 12 | null |
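For the AutoGen question above, the general pattern (not a definitive recipe — AutoGen's config key names have changed across versions, older ones using `api_base` and newer ones `base_url`, and the endpoint/model names here are placeholders) is to point the agent config at any local OpenAI-compatible server, such as a litellm proxy, text-generation-webui's OpenAI extension, or llama.cpp's server:

```python
# Placeholder endpoint/model names -- adjust to whatever your local
# OpenAI-compatible server (litellm proxy, llama.cpp server, etc.) exposes.
config_list = [
    {
        "model": "local-model",                  # the name your server expects
        "api_base": "http://localhost:8000/v1",  # older AutoGen key; newer versions use "base_url"
        "api_key": "NULL",                       # required by the client, ignored by local servers
    }
]

# The config would then be handed to an agent, roughly:
# assistant = autogen.AssistantAgent(
#     "assistant", llm_config={"config_list": config_list}
# )
print(config_list[0]["api_base"])
```

A quick sanity check before involving AutoGen at all is to hit the endpoint directly with curl or the OpenAI client; if that works, the agent config usually only needs the three fields above.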
Heads up on 3080 series and A6800 48GB (specifically used ones) Cracked Vapor Champers | 14 | 2023-10-14T21:57:52 | https://www.tomshardware.com/news/some-rtx-3080-rtx-a6000-gpus-are-prone-to-vapor-chamber-cracks-report | Aroochacha | tomshardware.com | 1970-01-01T00:00:00 | 0 | {} | 178063x | false | null | t3_178063x | /r/LocalLLaMA/comments/178063x/heads_up_on_3080_series_and_a6800_48gb/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'IE539FPEP6zEbAl-KLIBuW1fLY-ZCZfaWJl8t82ZrGc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dGTloRh3OMs_faCoJJOLyM7YNyTEsiunG7k1lwXNEBQ.jpg?width=108&crop=smart&auto=webp&s=bfb8e31a9a612e9472f9e72617677b7f7942c9d8', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dGTloRh3OMs_faCoJJOLyM7YNyTEsiunG7k1lwXNEBQ.jpg?width=216&crop=smart&auto=webp&s=bc49bc88333ea0ce017ba75908cbb26853024224', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dGTloRh3OMs_faCoJJOLyM7YNyTEsiunG7k1lwXNEBQ.jpg?width=320&crop=smart&auto=webp&s=77c5506fb17e5a503d54095448f4edcf95d4c1a8', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dGTloRh3OMs_faCoJJOLyM7YNyTEsiunG7k1lwXNEBQ.jpg?width=640&crop=smart&auto=webp&s=18a186936c6195b86ddeab3e849dbfd1d313bda2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dGTloRh3OMs_faCoJJOLyM7YNyTEsiunG7k1lwXNEBQ.jpg?width=960&crop=smart&auto=webp&s=d2522b47506e48d017412bc92370ec6524438061', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dGTloRh3OMs_faCoJJOLyM7YNyTEsiunG7k1lwXNEBQ.jpg?width=1080&crop=smart&auto=webp&s=80c54bb25260796a3c1ebe9bb06b44aa4ca25ff9', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/dGTloRh3OMs_faCoJJOLyM7YNyTEsiunG7k1lwXNEBQ.jpg?auto=webp&s=db0366871296479e5d270de8afef3f2babeb35ce', 'width': 1200}, 'variants': {}}]} |