| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
What's the best model to run on a 3090? | 1 | This question was last asked 7 months ago, which means geological ages have passed since then.
What is the best model for my 3090? "Best" means:
* Utilizes my card's power to the limit
* Is considered best by benchmarks like AlpacaEval
* Has no security policy, or one that is easy to break
* Doesn't require a PhD in computer science to get running from scratch within a day
I have little knowledge of running local LLMs, but I got fed up with ChatGPT, which I use often. So what is the newest, hottest thing out there at the moment? | 2023-10-27T22:49:19 | https://www.reddit.com/r/LocalLLaMA/comments/17hzrvf/whats_the_best_model_to_run_on_a_3090/ | RocketFanGirl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hzrvf | false | null | t3_17hzrvf | /r/LocalLLaMA/comments/17hzrvf/whats_the_best_model_to_run_on_a_3090/ | false | false | self | 1 | null |
Model/Research Paper for fine-tuning specifically for Open-Book Question Answering | 1 | I am working on some open-book question answering ideas and was wondering if there are any released models specifically for this use case, or any research on how to best fine-tune models for it.
For some reason I am struggling to find anything relevant; I believe it is due to the wording of my search queries, as I can only imagine this is a pretty common idea.
If you have a dataset consisting of context strings paired with question-and-answer pairs taken from that context, would it not be useful to fine-tune a pre-trained model on this specific use case to increase its accuracy and performance when answering questions from a given context?
My hope would be to improve the performance of a smaller model (1-3B parameters) so it can function as an open-book question answering system, by fine-tuning it on a dataset in the aforementioned format. | 2023-10-27T22:13:57 | https://www.reddit.com/r/LocalLLaMA/comments/17hz0n5/modelresearch_paper_for_finetuning_specifically/ | kotschi1997 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hz0n5 | false | null | t3_17hz0n5 | /r/LocalLLaMA/comments/17hz0n5/modelresearch_paper_for_finetuning_specifically/ | false | false | self | 1 | null |
What's the best LLM for personal computers doing personal stuff? | 2 | So I was thinking of using mem to help make a personal AI that I can use in my day-to-day.
What is a good LLM that you would recommend, and why? | 2023-10-27T21:55:24 | https://www.reddit.com/r/LocalLLaMA/comments/17hym2s/whats_the_best_llm_for_personal_computers_doing/ | crua9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hym2s | false | null | t3_17hym2s | /r/LocalLLaMA/comments/17hym2s/whats_the_best_llm_for_personal_computers_doing/ | false | false | self | 2 | null |
Talking to an AI as if you were talking to them on the phone. | 46 | Does anyone have a solution for talking to an AI through your cell phone (not necessarily a call)? Kind of like your own personal Jarvis.
I want to be able to run a local AI plus Stable Diffusion on my PC and remotely use it at any time on my phone. Basically, I want an always-on AI, like Siri, that I can have conversations with. | 2023-10-27T21:11:59 | https://www.reddit.com/r/LocalLLaMA/comments/17hxnat/talking_to_an_ai_as_if_you_were_talking_them_on/ | Erdeem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hxnat | false | null | t3_17hxnat | /r/LocalLLaMA/comments/17hxnat/talking_to_an_ai_as_if_you_were_talking_them_on/ | false | false | self | 46 | null |
How can I learn more about models, trends, news, etc? | 1 | I'm trying to learn as much as I can about all of this, especially about NSFW models, since that is - of course - what I use them for.
I'm super interested in how they work, how they're trained, what models are trending, new models that come out, new developments in the AI world, etc.
I'm fairly technical so I'd love to learn more about them. I also really want to stay up to date with the latest NSFW models.
I'm still in the process of learning what the largest models I can run are, how to load them properly, what's best for NSFW, etc.
If it's important:
I use Ooba as the back end and SillyTavern as the front end.
Also, I'm running a 4090 (24 GB VRAM), an i9-13900K, and 64 GB of RAM, because I'm impulsive. | 2023-10-27T20:32:53 | https://www.reddit.com/r/LocalLLaMA/comments/17hwslx/how_can_i_learn_more_about_models_trends_news_etc/ | sillygooseboy77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hwslx | false | null | t3_17hwslx | /r/LocalLLaMA/comments/17hwslx/how_can_i_learn_more_about_models_trends_news_etc/ | false | false | self | 1 | null |
With things like open interpreter, how are people getting real use out of desktop workflows with AI? | 1 | [removed] | 2023-10-27T18:02:38 | https://www.reddit.com/r/LocalLLaMA/comments/17htiuv/with_things_like_open_interpreter_how_are_people/ | LCseeking | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17htiuv | false | null | t3_17htiuv | /r/LocalLLaMA/comments/17htiuv/with_things_like_open_interpreter_how_are_people/ | false | false | self | 1 | null |
Best local model for content writing | 1 | Wondering what the best model for content writing is, specifically for sales- and marketing-related content. Thank you | 2023-10-27T17:30:57 | https://www.reddit.com/r/LocalLLaMA/comments/17hsted/best_local_model_for_content_writing/ | faridukhan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hsted | false | null | t3_17hsted | /r/LocalLLaMA/comments/17hsted/best_local_model_for_content_writing/ | false | false | default | 1 | null |
Looking to partner with AI/SaaS tech founders on sales/marketing to acquire users for their SaaS! We have a team, referrals, and affiliates with well-crafted outreach and sales process systems! | 1 | We have a deep understanding of the AI/SaaS market and its hype. Many products have been made and keep popping up, but most are just GPT wrappers; around 90% are useless or saturated and failing daily now that the bubble has popped, with churn rates as high as 22-25%. Most have no product-market fit and provide no value...
Our team has done deep research on AI and SaaS: fine-tuning of large LLMs, the products that can be built and have been developed, and their use cases, whether generalized or for a specific niche... the gold and the shovels of the era... and much more.
Our R&D team can guide you through market analysis, product-market-fit solutions, ideation for iterations, and much more...
We can craft product-market-fit solutions with well-designed roadmaps and market approaches.
With a foundational sales and marketing team, lead-gen/outreach systems with trained closers, and affiliates and referrals on our side, let me know if I can help you acquire users, or with R&D, analysis of market needs and wants, market conditions, ideas and iterations, ads, and more!
I have much more to say if anyone is interested. Feel free to DM. | 2023-10-27T17:18:25 | https://www.reddit.com/r/LocalLLaMA/comments/17hsjiw/looking_to_partner_up_aisaas_tech_founders_for/ | 93248828Saif | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hsjiw | false | null | t3_17hsjiw | /r/LocalLLaMA/comments/17hsjiw/looking_to_partner_up_aisaas_tech_founders_for/ | false | false | self | 1 | null |
Is it worth it to fine-tune an LLM for a specific task? | 9 | Take the classification task as an example. Doesn't a fine-tuned BERT model outperform a fine-tuned Flan-T5 model? If so, why do people want to fine-tune an LLM for a specific task? Why not just choose the respective SOTA model for each task? | 2023-10-27T17:11:01 | https://www.reddit.com/r/LocalLLaMA/comments/17hsdq4/is_it_worth_it_to_fine_tune_a_llm_for_a_specific/ | Heavy-Perspective-83 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hsdq4 | false | null | t3_17hsdq4 | /r/LocalLLaMA/comments/17hsdq4/is_it_worth_it_to_fine_tune_a_llm_for_a_specific/ | false | false | self | 9 | null |
🦙 How To: Build Chatbot that knows your company's documents | 76 | Hello, I've seen some posts asking how to build a chatbot with access to company docs, so here is a tutorial on building a RAG chatbot with access to your data.
Step 1: Choose your models
Different models have different strengths. GPT-4 is the best at reasoning and following instructions, but it is less secure than local models. For secure but weaker local models, Xwin (70B) is a good choice if you have powerful hardware. If you are GPU poor, you can use Speechless (13B). For the embedding models, you can use OpenAI's text-embedding-ada-002, or all-MiniLM-L6-v2 locally.
Step 2: Organize your data
This step is more complicated because it depends on what your data looks like. The good news is that as long as it can be turned into text, the models can work with it. At a basic level, you can convert important PDFs or other text documentation into text and add it to a RAG database. [Here](https://youtu.be/LhnCsygAvzY?t=1067) is a good tutorial. If you have lots of secure data, it can be more complicated. Feel free to DM me.
Step 3: Set up in LangChain
LangChain is an easy way to set up the chatbot. [Here](https://python.langchain.com/docs/get_started/introduction) are the docs. The basic idea is to connect your RAG database with the model of your choice and use LangChain's interface to customize your chatbot's functionality. After you set this up, the model will be able to access your company's documentation and answer specific questions about it!
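Here's a minimal sketch of what Step 3 can look like. The model path and index name below are placeholders; swap in whatever you chose in Steps 1 and 2:

from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.llms import LlamaCpp

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectordb = FAISS.load_local("company_docs_index", embeddings)  # index built in Step 2

llm = LlamaCpp(model_path="./models/xwin-70b.Q4_K_M.gguf", n_ctx=4096)  # or ChatOpenAI for GPT-4
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectordb.as_retriever(search_kwargs={"k": 4}),  # fetch the 4 most relevant chunks
)
print(qa.run("What is our refund policy?"))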
This tutorial is not an in-depth guide, it's more of a high level overview for those who are new to the space or RAG. If you have questions DM me and good luck! | 2023-10-27T17:02:43 | https://www.reddit.com/r/LocalLLaMA/comments/17hs77a/how_to_build_chatbot_that_knows_your_companys/ | ZealousidealBlock330 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hs77a | false | null | t3_17hs77a | /r/LocalLLaMA/comments/17hs77a/how_to_build_chatbot_that_knows_your_companys/ | false | false | self | 76 | {'enabled': False, 'images': [{'id': '_cYLDL1usBdIowic4YSCAHa16b8-J9A7TgvMi6coMgw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/K5X9z1XbFV9DnPcBapiDH6R12PsHpqfTH91lkvrVrkQ.jpg?width=108&crop=smart&auto=webp&s=42785e79b0b2a7949c2c6cd3ec82755f5fbec410', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/K5X9z1XbFV9DnPcBapiDH6R12PsHpqfTH91lkvrVrkQ.jpg?width=216&crop=smart&auto=webp&s=81e9e4f5f6417acb1da20c53a891aab618965d56', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/K5X9z1XbFV9DnPcBapiDH6R12PsHpqfTH91lkvrVrkQ.jpg?width=320&crop=smart&auto=webp&s=ae373b77a43db4625c38f3b263890799e9e3cf03', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/K5X9z1XbFV9DnPcBapiDH6R12PsHpqfTH91lkvrVrkQ.jpg?auto=webp&s=e0839d2c2e1290fe8a144d5d2fa90b6ec1109c08', 'width': 480}, 'variants': {}}]} |
Instruction Fine Tuning with a Low Resource Language | 1 | [removed] | 2023-10-27T16:57:55 | https://www.reddit.com/r/LocalLLaMA/comments/17hs354/instruction_fine_tuning_with_a_low_resource/ | dafajon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hs354 | false | null | t3_17hs354 | /r/LocalLLaMA/comments/17hs354/instruction_fine_tuning_with_a_low_resource/ | false | false | default | 1 | null |
Andromeda Cluster Clone? | 15 | [https://twitter.com/mprkhrst/status/1717936411561591042](https://twitter.com/mprkhrst/status/1717936411561591042)
This looks like a complete clone of the Andromeda Cluster, but on AWS. Does anyone here use H100s on AWS? Last I checked it was very expensive, but I could be wrong.
​ | 2023-10-27T16:40:46 | https://www.reddit.com/r/LocalLLaMA/comments/17hrpkd/andromeda_cluster_clone/ | Significant-Ad-7734 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hrpkd | false | null | t3_17hrpkd | /r/LocalLLaMA/comments/17hrpkd/andromeda_cluster_clone/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': '-OtcPcxcZx-F04T6DkiLk5K76TjLUj5qS6Ys-OJxNkA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/rJec246t_zc2b_tYjWeSAtEJR8oy88gOyUltdOSkNQY.jpg?width=108&crop=smart&auto=webp&s=63e1443526aee80e78167d0c0f8ddf1a02c33149', 'width': 108}], 'source': {'height': 78, 'url': 'https://external-preview.redd.it/rJec246t_zc2b_tYjWeSAtEJR8oy88gOyUltdOSkNQY.jpg?auto=webp&s=0d437a1b291d6154eced7149931ddf6d44371287', 'width': 140}, 'variants': {}}]} |
GGUF Models on h2oGPT? | 1 | Hi! I recently set up h2oGPT and have been using it for a couple weeks now trying various models. I'm trying to use this model now that I had to manually add to my model list:
[https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF/blob/main/phind-codellama-34b-v2.Q5\_K\_S.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF/blob/main/phind-codellama-34b-v2.Q5_K_S.gguf)
Specifically the phind-codellama-34b-v2.Q5\_K\_S.gguf model.
When I try to load it up, I get this error:
**Error**
**Can't load tokenizer for 'TheBloke/Phind-CodeLlama-34B-v2-GGUF'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'TheBloke/Phind-CodeLlama-34B-v2-GGUF' is the correct path to a directory containing all relevant files for a LlamaTokenizer tokenizer.**
​
Am I doing something wrong or missing a step? Thank you. | 2023-10-27T16:24:54 | https://www.reddit.com/r/LocalLLaMA/comments/17hrd48/gguf_models_on_h2ogpt/ | maxwell321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hrd48 | false | null | t3_17hrd48 | /r/LocalLLaMA/comments/17hrd48/gguf_models_on_h2ogpt/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '_Cy1RPxmhhSPkfvdnvVk19H8511kmHiwj8TBMqHskR0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HQcs5placQBoimHQ4dVZgo28un91Y3g7w_3bWhFuiIA.jpg?width=108&crop=smart&auto=webp&s=33ccdcdcea57a9cda3fc15f6300727af3375132f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HQcs5placQBoimHQ4dVZgo28un91Y3g7w_3bWhFuiIA.jpg?width=216&crop=smart&auto=webp&s=6a7d99b80ed3b9a01b14e8c07e9bf5dc81a125e4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HQcs5placQBoimHQ4dVZgo28un91Y3g7w_3bWhFuiIA.jpg?width=320&crop=smart&auto=webp&s=b5fb599171ec39e394e32230f482064220f7d2c2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HQcs5placQBoimHQ4dVZgo28un91Y3g7w_3bWhFuiIA.jpg?width=640&crop=smart&auto=webp&s=0cf54e956389bf70afed55d3ae917017a60306d6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HQcs5placQBoimHQ4dVZgo28un91Y3g7w_3bWhFuiIA.jpg?width=960&crop=smart&auto=webp&s=48bc38b79f8867395478b3a36f38e4a2d73265df', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HQcs5placQBoimHQ4dVZgo28un91Y3g7w_3bWhFuiIA.jpg?width=1080&crop=smart&auto=webp&s=cdb2d72501e887d807f3a33c44801c8367e6909c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HQcs5placQBoimHQ4dVZgo28un91Y3g7w_3bWhFuiIA.jpg?auto=webp&s=5bdbeb516c3b43cf70ac9f0be660a1c6e730322a', 'width': 1200}, 'variants': {}}]} |
We just released our 70B German Model: SauerkrautLM-70b-v1 | 80 | ​
https://preview.redd.it/uoddxcvaqrwb1.png?width=960&format=png&auto=webp&s=6e1cb5f9f38639853c6b11c628756b54c0980843
A powerful 70b model for Germany! 💪 🚀 Our 70 billion parameter language model for the German language is now available for free use.
Download at: [https://huggingface.co/VAGOsolutions/SauerkrautLM-70b-v1](https://huggingface.co/VAGOsolutions/SauerkrautLM-70b-v1)
Find the original reddit post here: [https://www.reddit.com/r/LocalLLaMA/comments/176xaew/introducing\_sauerkrautlmv1\_our\_german\_language/](https://www.reddit.com/r/LocalLLaMA/comments/176xaew/introducing_sauerkrautlmv1_our_german_language/)
The GPTQ/GGUF/AWQ versions have already been requested from TheBloke.
We look forward to your feedback. | 2023-10-27T16:13:59 | https://www.reddit.com/r/LocalLLaMA/comments/17hr4g5/we_just_released_our_70b_german_model/ | AffectionateCan2342 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hr4g5 | false | null | t3_17hr4g5 | /r/LocalLLaMA/comments/17hr4g5/we_just_released_our_70b_german_model/ | false | false | 80 | {'enabled': False, 'images': [{'id': 'nd9aSHE1UN4ghI6PXzst2_Sa69Gm780JjOV_et9MxMo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yJVMYSej8l8qRpyCL8V1MCPIMxyH1s7rObFhXVsG8dI.jpg?width=108&crop=smart&auto=webp&s=a0d79cfa8340b2de3f1814aeb0625ec2b12885ae', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yJVMYSej8l8qRpyCL8V1MCPIMxyH1s7rObFhXVsG8dI.jpg?width=216&crop=smart&auto=webp&s=6479afbae2732b714021d85cba0c7858dc9f7497', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yJVMYSej8l8qRpyCL8V1MCPIMxyH1s7rObFhXVsG8dI.jpg?width=320&crop=smart&auto=webp&s=7e1c4f80ac0a472bf04481855ca651c5888f39c6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yJVMYSej8l8qRpyCL8V1MCPIMxyH1s7rObFhXVsG8dI.jpg?width=640&crop=smart&auto=webp&s=8077fe17a0235a7edd818c60dd6d59d7e10fa0f3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yJVMYSej8l8qRpyCL8V1MCPIMxyH1s7rObFhXVsG8dI.jpg?width=960&crop=smart&auto=webp&s=04be74816a66bc69908fa45c59fe9e509f3f6316', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yJVMYSej8l8qRpyCL8V1MCPIMxyH1s7rObFhXVsG8dI.jpg?width=1080&crop=smart&auto=webp&s=4bc12974df072a0f330e7d7656f915f3569a33f0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yJVMYSej8l8qRpyCL8V1MCPIMxyH1s7rObFhXVsG8dI.jpg?auto=webp&s=a016354861a19eaafd2e197c4cf49b1dd0da14d7', 'width': 1200}, 'variants': {}}]} | |
Tesla P40 users - OpenHermes 2 Mistral 7B might be the sweet spot RP model with extra context. | 21 | Hello folks, following on from my post about Tiefighter, I found this model:
[https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GGUF/](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GGUF/)
Youtuber MustacheAI posted a video about this model [here](https://www.youtube.com/watch?v=pZHOTJpGQTY), which is how I discovered it. And other folks have mentioned Mistral so I thought I'd give it a go.
I find that 13B models on the P40, while they fit just fine in the 24GB VRAM limit, just don't perform all that well and get rather slow with any reasonable amount of context. I've tried 7B models before and they have certainly performed faster but they just haven't created particularly good prose. I think OpenHermes 2 Mistral 7B changes that. I get decent tokens / s (considering the hardware), good prose, and pretty good RP out of it.
Using Ooba, I've loaded this model with llama.cpp: n-gpu-layers set to max, n_ctx set to 8192 (8k context), n_batch set to 512, and, crucially, alpha_value set to 2.5. This is the first time I have tried this option, and it really works well on Llama 2 models. Be sure to set the instruction template to Mistral.
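For reference, here is roughly the same load in llama-cpp-python. The rope_freq_base line is my understanding of what Ooba's alpha_value maps to (the RoPE base scaled by alpha^(64/63)), so treat it as an approximation:

from llama_cpp import Llama

llm = Llama(
    model_path="./openhermes-2-mistral-7b.Q5_K_M.gguf",  # whichever quant you downloaded
    n_gpu_layers=-1,                           # max: offload every layer to the P40
    n_ctx=8192,
    n_batch=512,
    rope_freq_base=10000 * 2.5 ** (64 / 63),   # approximately alpha_value = 2.5
)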
So, on a Tesla P40 with these settings: 4k context runs about 18-20 t/s! With about 7k context it slows to 3-4 t/s. That isn't fast, but that IS with all that context, and with very decent output in Sillytavern. It's not up to Tiefighter, but it's the best 7B model I've tested so far and at a useable speed. | 2023-10-27T16:10:50 | https://www.reddit.com/r/LocalLLaMA/comments/17hr1vf/tesla_p40_users_openhermes_2_mistral_7b_might_be/ | CasimirsBlake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hr1vf | false | null | t3_17hr1vf | /r/LocalLLaMA/comments/17hr1vf/tesla_p40_users_openhermes_2_mistral_7b_might_be/ | false | false | self | 21 | {'enabled': False, 'images': [{'id': 'y7EIGxMp6oirBD3u_M3jvIE_gyL4Y04RlbIFzZfcVG4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fCLx0J4jknO6UO3eLpSlyByl-sPNslYnOHuoe7QzfFw.jpg?width=108&crop=smart&auto=webp&s=fff4732d94d0d08b860ef1c956e20389a5bed9b5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fCLx0J4jknO6UO3eLpSlyByl-sPNslYnOHuoe7QzfFw.jpg?width=216&crop=smart&auto=webp&s=5e7ef2a4f93071a2af5e3981181c3b4f232a0bb0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fCLx0J4jknO6UO3eLpSlyByl-sPNslYnOHuoe7QzfFw.jpg?width=320&crop=smart&auto=webp&s=4c7a91e47a82171fd0c533c10e3e9860e597db29', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fCLx0J4jknO6UO3eLpSlyByl-sPNslYnOHuoe7QzfFw.jpg?width=640&crop=smart&auto=webp&s=050bb176411977ccafc37b0102a8fd9c0f39efaa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fCLx0J4jknO6UO3eLpSlyByl-sPNslYnOHuoe7QzfFw.jpg?width=960&crop=smart&auto=webp&s=105304b885f713d7bb2d354bffba8ae68a6888e3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fCLx0J4jknO6UO3eLpSlyByl-sPNslYnOHuoe7QzfFw.jpg?width=1080&crop=smart&auto=webp&s=6ad7aface7063f99afb0d66e97cae9cd3cbf5cb6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fCLx0J4jknO6UO3eLpSlyByl-sPNslYnOHuoe7QzfFw.jpg?auto=webp&s=599fbd12f6bec550c0a78ed19a06db304730b005', 'width': 1200}, 'variants': {}}]} |
llama.cpp python and grammars | 4 | Hi folks,
I have been experimenting with building an agent-based model, using Llama for the agents' decision making. In order to get structured answers I am using grammars; however, I can't really get good answers with them. I am leaving the code here, but in short, I am asking Llama to decide which agents to connect to and which agents to disconnect from, based on some goals, context, and persona attributes. Lists of agents to connect with and disconnect from are generated, but they are often empty or contain random numbers.
Am I prompting the wrong way? Is the grammar confusing for Llama?
**Code**:
'''
from llama_cpp import Llama, LlamaGrammar
from pprint import pprint

prompt = '''
[INST]<<SYS>>For the response, you must follow this structure:
Connect To Agents: {List of agent IDs to connect with from 'Potential new connections'}
Disconnect From Agents: {List of agent IDs to disconnect with from 'Current connections'}<</SYS>>
[CONTEXT]
I need to decide whether to connect or disconnect with other agents.
[MY PERSONA]
* Type: B
[MY GOALS]
* I prefer to live with agents of similar type as me
[NETWORK]
Current connections:
* Agent 0 [Type: B]
* Agent 3 [Type: A]
* Agent 4 [Type: B]
* Agent 9 [Type: B]
* Agent 10 [Type: B]
* Agent 21 [Type: B]
* Agent 31 [Type: A]
* Agent 36 [Type: A]
* Agent 47 [Type: B]
* Agent 67 [Type: A]
* Agent 71 [Type: B]
* Agent 80 [Type: B]
* Agent 84 [Type: A]
* Agent 90 [Type: A]
Potential new connections:
* Agent 1 [Type: B]
* Agent 2 [Type: B]
* Agent 5 [Type: A]
* Agent 6 [Type: A]
* Agent 7 [Type: A]
* Agent 8 [Type: B]
* Agent 11 [Type: A]
* Agent 12 [Type: B]
* Agent 13 [Type: A]
* Agent 14 [Type: B]
* Agent 15 [Type: B]
* Agent 16 [Type: B]
* Agent 17 [Type: B]
* Agent 18 [Type: A]
* Agent 19 [Type: A]
* Agent 20 [Type: A]
* Agent 22 [Type: B]
* Agent 23 [Type: B]
[/INST]
'''

connect_choices_ids = [1, 2, 5, 6, 7, 8, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 22, 23]
connect_choices = ' '.join(['("{0}," | "")'.format(i) for i in connect_choices_ids])
disconnect_choices_ids = [0, 3, 4, 9, 10, 21, 31, 36, 47, 67, 71, 80, 84, 90]
disconnect_choices = ' '.join(['("{0}," | "")'.format(i) for i in disconnect_choices_ids])

grammar_text = r'''
root ::= "{{" connectTo "," disconnectFrom "}}"
connectTo ::= "\"Connect to agents\"" ":" "\[" {connect_choices} "\]"
disconnectFrom ::= "\"Disconnect from agents\"" ":" "\[" {disconnect_choices} "\]"
'''.format(connect_choices=connect_choices, disconnect_choices=disconnect_choices)

llama13_path = "./llm_models/llama-2-13b-chat.Q5_K_M.gguf"
LLM = Llama(model_path=llama13_path, n_gpu_layers=30, verbose=False, n_ctx=6000)
grammar = LlamaGrammar.from_string(grammar_text, verbose=False)
response = LLM(prompt, grammar=grammar, temperature=0.8, echo=False, max_tokens=3000)
print(response['choices'][0]['text'])
'''
Output I get is:
'''{"Connect to agents":[1,2,5,6,7,8,11,12,13,14,15,16,17,18,19,20,22,23,],"Disconnect from agents":[]}''' | 2023-10-27T15:23:18 | https://www.reddit.com/r/LocalLLaMA/comments/17hq04k/llamacpp_python_and_grammars/ | gabrigoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hq04k | false | null | t3_17hq04k | /r/LocalLLaMA/comments/17hq04k/llamacpp_python_and_grammars/ | false | false | self | 4 | null |
Augment an LLM with specialized material | 2 | My boss is a semi-famous author in a niche academic field. I have thousands of pages of text from books, transcripts, and more.
Is there a straightforward path to creating a corpus to augment BERT, Llama, or another LLM? The end goal is being able to chat with an AI trained on his life's work.
Is there anything specific to understand in terms of preparing the corpus? Do I need key-value pairs where I write a ton of example questions and responses? | 2023-10-27T15:15:44 | https://www.reddit.com/r/LocalLLaMA/comments/17hpu23/augment_llm_with_specialized_material/ | spacedragon13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hpu23 | false | null | t3_17hpu23 | /r/LocalLLaMA/comments/17hpu23/augment_llm_with_specialized_material/ | false | false | self | 2 | null |
About to begin my PhD in Multi-Modality AI, any suggestions? | 1 | [removed] | 2023-10-27T14:32:59 | https://www.reddit.com/r/LocalLLaMA/comments/17howf2/about_to_begin_my_phd_in_multimodality_ai_any/ | Go2Heart | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17howf2 | false | null | t3_17howf2 | /r/LocalLLaMA/comments/17howf2/about_to_begin_my_phd_in_multimodality_ai_any/ | false | false | default | 1 | null |
Error - CUDA extension not installed. Some weights of LlamaForCausalLM were not initialized from the model checkpoint | 1 | Hello,
I am completely new to large language models and I am facing a problem. Please help me with this error.
I am trying to load the [TheBloke](https://huggingface.co/TheBloke)/vicuna-33B-GPTQ model, which is already downloaded on my system. But when I run the code below:
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=True, revision="main")
I get the following message:
"CUDA extension not installed. Some weights of LlamaForCausalLM were not initialized from the model checkpoint at TheBloke/vicuna-33B-GPTQ and are newly initialized: ....
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. "
How can I fix it? | 2023-10-27T13:58:20 | https://www.reddit.com/r/LocalLLaMA/comments/17ho5n6/error_cuda_extension_not_installed_some_weights/ | Jg0at | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ho5n6 | false | null | t3_17ho5n6 | /r/LocalLLaMA/comments/17ho5n6/error_cuda_extension_not_installed_some_weights/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'lkhHJv8mdJqfRh1UwBmzWCTs0H4Inw2Ugpa0eeRnBck', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=108&crop=smart&auto=webp&s=de723b6ad3db101dc616591260b08a417f299523', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=216&crop=smart&auto=webp&s=a348fea00b8418fdeedc2667262eb150c2f63ac3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=320&crop=smart&auto=webp&s=83f6ff46606628dac48f751f260a4b9b375cd44c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=640&crop=smart&auto=webp&s=16bebb4a0b444dde01f0f7f7b95382c40f37819a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=960&crop=smart&auto=webp&s=bd7e1cc5eb2f3cf3f6a417c06ce4bc4e123df0a6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=1080&crop=smart&auto=webp&s=9af1bc8ee2fd5d2af94a297394be8943db9f81a4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?auto=webp&s=e926e6291516a57e5d87ab5afc88ee01002ad9de', 'width': 1200}, 'variants': {}}]} |
Seeking Recommendations for AI Models and Datasets for Marketing and Content Generation on a Jewelry Website | 1 | Hello everyone,
I am in the process of enhancing the marketing strategies and content generation for a jewelry website that I am working on.
Therefore, I am looking for recommendations on the best AI models that can be applied for marketing and content generation specifically in the jewelry niche.
Here are some of my requirements and context:
1. **Local Deployment**: I intend to run the AI model locally on my computer. Below are my computer specifications for reference:
- Processor: 13th Gen Intel Core i9 13900HX (24-Core, 36MB L3 Cache, up to 5.4GHz Max Turbo)
- Memory: 32GB, 2x16GB, DDR5, 5800MHz XMP
- Storage: 2TB, M.2, PCIe NVMe, SSD
- Video Card: NVIDIA GeForce RTX 4080 12GB GDDR6
2. **Datasets**: Are there any available datasets specifically tailored towards jewelry or related fields that would help in training the AI model?
3. **Marketing and Content Generation**: The primary focus is on marketing and content generation. It would be great if the model could assist in creating engaging content, analyzing customer behavior, and optimizing marketing strategies.
4. **Ease of Use and Support**: Since I am relatively new to AI, a model with good documentation, active community, and ease of use would be highly appreciated.
5. **Cost**: While I am willing to invest in a good solution, cost-effectiveness is a consideration. Open source or cost-effective solutions would be preferred.
6. **Scalability**: As the website grows, the ability to scale the AI operations with increasing data and traffic is essential.
I would greatly appreciate any suggestions, experiences, or insights you might have. Additionally, if you have come across any tutorials or case studies relevant to this, please feel free to share.
Thank you in advance for your help! | 2023-10-27T13:36:35 | https://www.reddit.com/r/LocalLLaMA/comments/17hnqbc/seeking_recommendations_for_ai_models_and/ | gtmotorsniagara | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hnqbc | false | null | t3_17hnqbc | /r/LocalLLaMA/comments/17hnqbc/seeking_recommendations_for_ai_models_and/ | false | false | self | 1 | null |
How can I fine-tune a local .gguf Llama 2 model? | 13 | I work at a company, and the company's servers have proxy restrictions. Therefore, I cannot reach Hugging Face and use
AutoModelForCausalLM.from_pretrained("huggingface/path")
to load and fine-tune models. Every tutorial I see on the internet uses this method. But since I cannot reach the Hugging Face models because of the proxy, I downloaded the model's .gguf file via wget.
So how can I fine-tune this .gguf model? Any ideas?
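For what it's worth, plain offline loading does seem doable if I can also wget the original HF-format files (config.json, the tokenizer files, and the weight shards), since from_pretrained accepts a local directory as well as a Hub ID. A sketch of what I mean (the path is hypothetical):

from transformers import AutoModelForCausalLM, AutoTokenizer

local_dir = "/models/llama-2-7b-hf"  # hypothetical folder holding the wget-ed files
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModelForCausalLM.from_pretrained(local_dir)

But that still leaves the question of fine-tuning from the .gguf itself.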
​ | 2023-10-27T12:36:41 | https://www.reddit.com/r/LocalLLaMA/comments/17hml70/how_can_i_finetune_local_gguf_llama_2_model/ | Psychological-Fig1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hml70 | false | null | t3_17hml70 | /r/LocalLLaMA/comments/17hml70/how_can_i_finetune_local_gguf_llama_2_model/ | false | false | self | 13 | null |
After a few responses, it stops | 0 | Hi,
I'm using privateGPT with a Llama 2 model and it seems to work OK. The issue is that when I keep asking questions, after about the 3rd or 4th question it just stops responding. Stopping and restarting privategpt.py doesn't help. I have to reboot the server and ingest the documents again, and then I get another 3 or 4 answers before it runs back into the same issue.
Has anyone had this issue? | 2023-10-27T11:51:32 | https://www.reddit.com/r/LocalLLaMA/comments/17hlsng/after_a_few_responses_it_stops/ | mustyy1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hlsng | false | null | t3_17hlsng | /r/LocalLLaMA/comments/17hlsng/after_a_few_responses_it_stops/ | false | false | self | 0 | null |
Dialogue with a lot of txt | 1 | [removed] | 2023-10-27T11:49:15 | https://www.reddit.com/r/LocalLLaMA/comments/17hlrcn/dialogue_with_a_lot_of_txt/ | Ettaross | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hlrcn | false | null | t3_17hlrcn | /r/LocalLLaMA/comments/17hlrcn/dialogue_with_a_lot_of_txt/ | false | false | self | 1 | null |
How to Get Around Context Limits with Data Files | 1 | I want to loop through a list of test strings and check each against thousands of answer options to pick the closest one based on knowledge from a given PDF. The problem is that I can't fit thousands of answers into the prompt.
I have a set list of options (answers) that looks like this...
# answers.txt
18q deletion syndrome
Aagenaes syndrome
Abrasion
Acanthamoeba infection
# ... with about 2400 in total.
I have a user entered list of diagnoses that may or may not be in the list of options (tests) to loop through...
# tests.txt
1 Reactive arthritis
2 Basal cell carcinoma
# ... with about 55 thousand total entries
I have a textbook PDF with information about the answers that is about 3100 pages, and I appended the answers to the end as shown below...
# data.pdf
# ... Lots of information for about 3100 pages
The curly brackets contain a specifically accurate and appropriate option {18q deletion syndrome}
The curly brackets contain a specifically accurate and appropriate option {Aagenaes syndrome}
The curly brackets contain a specifically accurate and appropriate option {Abrasion}
The curly brackets contain a specifically accurate and appropriate option {Acanthamoeba infection}
# ... for about 100 added pages.
I think that the biggest problem is getting the model to consider all 2,383 options. It is far too many to fit in a reasonable context. This is the prompt template and prompt that I am using...
target_string = 'specifically accurate and appropriate option'
qa_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
The answer should include a %s.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else.
Helpful answer: The closest option is
""" % target_string
prompt = f"Which {target_string} is closest to {test}?"
... I think this needs to be changed to a User/Assistant format too, but I wasn't sure how to appropriately modify the template to facilitate that.
I modified this [LangChain RetrievalQA Github project](https://github.com/kennethleungty/Llama-2-Open-Source-LLM-CPU-Inference) and ended up with the code at the end. When I run it, I get the results below...
Loading lists...
53845 tests and 2383 answers were loaded
Checking Reactive arthritis...
Checking Basal cell carcinoma...
==================================================
Reactive arthritis => Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth Arth
Basal cell carcinoma => High-risk basal cell carcinoma (e.g. micronodular, morpheaform)
==================================================
Time to retrieve response: 165.31
... Neither answer is from the list, and obviously the repetition is undesirable.
Based on previous recommendations, I found that I could reliably get this to work when all of the answers were explicitly included in the prompt (previous code shown at end).
Does anyone have any suggestions or recommendations on where to go from here? I imagine that I either need to tweak my prompt/prompt template or dramatically change my entire approach.
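One "dramatically change the approach" idea I have been weighing (an untested sketch, reusing the same all-MiniLM-L6-v2 model as my FAISS store): since the options list is fixed, skip the LLM for the matching step entirely and pick the closest answer by embedding similarity, which sidesteps the context limit:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-MiniLM-L6-v2')
answer_embeddings = model.encode(answers, convert_to_tensor=True)  # embed all ~2400 options once

for test in tests[:2]:
    test_embedding = model.encode(test, convert_to_tensor=True)
    scores = util.cos_sim(test_embedding, answer_embeddings)[0]  # cosine similarity to every option
    print(f'{test} => {answers[int(scores.argmax())]}')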
Current Code based on kennethleungty's example:
import box, time, yaml, argparse
from dotenv import find_dotenv, load_dotenv
from langchain import PromptTemplate
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import CTransformers
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import PyPDFLoader, DirectoryLoader

# Load environment variables from .env file
load_dotenv(find_dotenv())

# Import config vars
with open('config/config.yml', 'r', encoding='utf8') as ymlfile:
    cfg = box.Box(yaml.safe_load(ymlfile))

target_string = 'specifically accurate and appropriate option'

qa_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
The answer should include a %s.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else.
Helpful answer: The closest option is
""" % target_string

# Build vector database
def run_db_build():
    loader = DirectoryLoader(cfg.DATA_PATH, glob='*.pdf', loader_cls=PyPDFLoader)
    documents = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=cfg.CHUNK_SIZE,
                                                   chunk_overlap=cfg.CHUNK_OVERLAP)
    texts = text_splitter.split_documents(documents)
    embeddings = HuggingFaceEmbeddings(model_name='sentence-transformers/all-MiniLM-L6-v2', model_kwargs={'device': 'cpu'})
    vectorstore = FAISS.from_documents(texts, embeddings)
    vectorstore.save_local(cfg.DB_FAISS_PATH)

def build_llm():
    # Local CTransformers model
    llm = CTransformers(model=cfg.MODEL_BIN_PATH, model_type=cfg.MODEL_TYPE,
                        config={'max_new_tokens': cfg.MAX_NEW_TOKENS, 'temperature': cfg.TEMPERATURE})
    return llm

def set_qa_prompt():
    # Prompt template for QA retrieval for each vectorstore
    prompt = PromptTemplate(template=qa_template, input_variables=['context', 'question'])
    return prompt

def build_retrieval_qa(llm, prompt, vectordb):
    dbqa = RetrievalQA.from_chain_type(llm=llm, chain_type='stuff',
                                       retriever=vectordb.as_retriever(search_kwargs={'k': cfg.VECTOR_COUNT}),
                                       return_source_documents=cfg.RETURN_SOURCE_DOCUMENTS,
                                       chain_type_kwargs={'prompt': prompt}
                                       )
    return dbqa

def setup_dbqa():
    embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2",
                                       model_kwargs={'device': 'cpu'})
    vectordb = FAISS.load_local(cfg.DB_FAISS_PATH, embeddings)
    llm = build_llm()
    qa_prompt = set_qa_prompt()
    dbqa = build_retrieval_qa(llm, qa_prompt, vectordb)
    return dbqa

def load_lists():
    print('Loading lists...')
    tests = []
    answers = []
    testsf = 'tests.txt'
    answersf = 'answers.txt'
    with open(testsf, 'r') as f:
        for iline in f.readlines():
            i, idx = iline.replace('\n', '').split('\t')
            if idx != '':
                tests.append(idx)
    with open(answersf, 'r') as f:
        for iline in f.readlines():
            answers.append(iline.replace('\n', ''))
    print(f'{len(tests)} tests and {len(answers)} answers were loaded')
    return [tests, answers]

def generate_pre_pdf(strings, target):
    prePdfList = []
    print('Starting export...')
    prePdfFilename = 'Diagnosis PrePDF.txt'  # [prefix] diagnosis
    for ianswer in strings:
        prePdfList.append("The curly brackets contain a %s {%s}" % (target, ianswer))
        print(ianswer)
    with open(prePdfFilename, 'w') as f:
        f.write('\n'.join(prePdfList))

if __name__ == "__main__":
    parser = argparse.ArgumentParser()  # if 'build' is passed, the database will be rebuilt; 'pdf' will generate a txt file to be appended
    parser.add_argument('mode', type=str)
    args = parser.parse_args()

    # Setup DBQA
    tests, answers = load_lists()
    start = time.time()
    qa = setup_dbqa()

    if args.mode == 'build':  # rebuild the PDF database
        print('Rebuilding database...')
        run_db_build()
        print(f"Time to build database: {(time.time() - start):.2f}")
    elif args.mode == 'pdf':  # generate a text file to be appended to the PDF
        print('Generating text file for pdf...')
        generate_pre_pdf(answers, target_string)
        print(f"Time to prepare text file: {(time.time() - start):.2f}")
    else:  # any other input will run the model
        # Check tests
        responses = []
        for i in tests[:2]:  # stopping short for testing purposes
            print(f'Checking {i}...')
            iquery = f"Which {target_string} is closest to {i}?"
            response = qa({'query': iquery})
            responses.append(f'{i} => {response["result"]}')
        print('=' * 50)
        print('\n'.join(responses))
        print('=' * 50)
        print(f"Time to retrieve response: {(time.time() - start):.2f}")
Previous attempt, based on /u/shibe5's recommendations:
import time
from llama_cpp import Llama

model_dir = 'C:/Models/'
model_name = 'luna-ai-llama2-uncensored.Q4_K_M.gguf'

answers = '\n'.join(['red', 'blue', 'gray', 'yellow', 'green', 'purple', 'brown'])
tests = ['cyan', 'pink', 'silver', 'blue', 'orange', 'fuschia', 'vermillion', 'granite', 'teal', 'navy']
replace_str = '**x**'

prompt = f'''
USER: Answer options: {answers}
ASSISTANT: The closest option to {replace_str} is
'''
prompt = prompt.lstrip().rstrip()  # strip leading/trailing characters

start_time = time.time()
outputs = []
llm = Llama(model_path=model_dir + model_name, n_ctx=512)
for itest in tests[:2]:
    print(f'Checking {itest}')
    output = llm(prompt=prompt.replace(replace_str, itest), max_tokens=32, echo=False, temperature=0)
    answer = output['choices'][0]['text']
    tokens = f"ptokens: {output['usage']['prompt_tokens']}, ctokens: {output['usage']['completion_tokens']}"
    outputs.append(f"{itest} => {answer} ~~ {tokens}")
print('=' * 50)
print('\n'.join(outputs))
elapsed = time.time() - start_time
print(f'Total time: {elapsed:.2f} s')
That was a lot. Thanks for any help that anyone can provide. | 2023-10-27T11:13:54 | https://www.reddit.com/r/LocalLLaMA/comments/17hl6z7/how_to_get_around_context_limits_with_data_files/ | prettyobviousthrow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hl6z7 | false | null | t3_17hl6z7 | /r/LocalLLaMA/comments/17hl6z7/how_to_get_around_context_limits_with_data_files/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'WeqesO7YsPT6tNNmINAdx-9AfdzqRUecjcq_6yxZXeg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jot2cdSTYpMtIvBRvM83k_qM8mhNySSZWx0rItjwPfE.jpg?width=108&crop=smart&auto=webp&s=ac8224030dee9f2a3c54cd2678fd0a9f9e750e1a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jot2cdSTYpMtIvBRvM83k_qM8mhNySSZWx0rItjwPfE.jpg?width=216&crop=smart&auto=webp&s=2828fb24042634c6fa3217581c3e6121c506963e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jot2cdSTYpMtIvBRvM83k_qM8mhNySSZWx0rItjwPfE.jpg?width=320&crop=smart&auto=webp&s=10281888e359403e770a112c1f6878ac2a35eae3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jot2cdSTYpMtIvBRvM83k_qM8mhNySSZWx0rItjwPfE.jpg?width=640&crop=smart&auto=webp&s=95559ed636de0f0e5afb99224afad0ede27072c8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jot2cdSTYpMtIvBRvM83k_qM8mhNySSZWx0rItjwPfE.jpg?width=960&crop=smart&auto=webp&s=77d4bf4e76850d295cb3a23cf01dfe77c9410be1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jot2cdSTYpMtIvBRvM83k_qM8mhNySSZWx0rItjwPfE.jpg?width=1080&crop=smart&auto=webp&s=3768bcf2e1c8d14b180bf31886ae4e5d89a2f044', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jot2cdSTYpMtIvBRvM83k_qM8mhNySSZWx0rItjwPfE.jpg?auto=webp&s=4910998ef1eb07a1dc38c92e4c69f88fb3fd865b', 'width': 1200}, 'variants': {}}]} |
How to train LLM on before and after files? | 2 |
Hey everyone,
I'm curious about training a Language Model using files that have undergone an editing process. Is it possible to upload a batch of unedited files and provide the LLM with an explanation of their content? Then, upload the final versions of the edited and approved files, clarifying to the LLM that these represent the desired outcomes.
The goal is to train the model on these two sets of data so that when a file needs editing, it can be uploaded to the LLM for appropriate edits.
I have thousands of files which have gone through the process so having a library for the LLM to use would not be an issue.
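A sketch of how I imagine pairing them up into a training file (the folder layout and field names here are just placeholders):

import json
import pathlib

# Assumed layout: drafts/ holds the unedited files, final/ holds the approved
# versions under the same file names.
with open('edit_pairs.jsonl', 'w') as out:
    for draft in pathlib.Path('drafts').glob('*.txt'):
        final = pathlib.Path('final') / draft.name
        record = {
            'instruction': 'Edit the following draft into its final approved form.',
            'input': draft.read_text(),
            'output': final.read_text(),
        }
        out.write(json.dumps(record) + '\n')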
What are your thoughts on this approach?
Is it possible? | 2023-10-27T11:07:45 | https://www.reddit.com/r/LocalLLaMA/comments/17hl3gp/how_to_train_llm_on_before_and_after_files/ | LonelyYoghurt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hl3gp | false | null | t3_17hl3gp | /r/LocalLLaMA/comments/17hl3gp/how_to_train_llm_on_before_and_after_files/ | false | false | self | 2 | null |
Cerbero-7b: Italian LLM | 44 | A couple of days ago I [asked](https://www.reddit.com/r/LocalLLaMA/comments/17e5qls/seeking_for_advice_on_an_uncensored_italian_llm/) this subreddit about Italian LLMs, and there were no good models available.
For this reason, I'm excited to share with you my latest project: [cerbero-7b](https://github.com/galatolofederico/cerbero-7b), a 100% free and open-source Italian language model that can be used for commercial purposes.
Cerbero-7b is built on mistral-7b and uses a modified version of the Fauno dataset. From my tests, it has an excellent understanding of the Italian language and seems to outperform other Italian models.
Moreover, cerbero-7b is (to my knowledge) the first Italian LLM to be released under the Apache 2.0 license. Hopefully, this can help spark some progress in the Italian LLM landscape.
​
Any feedback is really appreciated! | 2023-10-27T10:50:06 | https://www.reddit.com/r/LocalLLaMA/comments/17hktgj/cerbero7b_italian_llm/ | poppear | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hktgj | false | null | t3_17hktgj | /r/LocalLLaMA/comments/17hktgj/cerbero7b_italian_llm/ | false | false | self | 44 | {'enabled': False, 'images': [{'id': 'FP8RWtHdpZ9hWp5n3uhYVKekuTbT28XtQ-v_GpTSxmg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WhbxIHG4zZWF9yVYqp-j3caRwAFkWeiSPz3vVC2MUIk.jpg?width=108&crop=smart&auto=webp&s=14cb87e6100f2b704dab2fdbbea4bc9ed970db12', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WhbxIHG4zZWF9yVYqp-j3caRwAFkWeiSPz3vVC2MUIk.jpg?width=216&crop=smart&auto=webp&s=ee969c6bf5f509bb38e0679513f4dc547428c096', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WhbxIHG4zZWF9yVYqp-j3caRwAFkWeiSPz3vVC2MUIk.jpg?width=320&crop=smart&auto=webp&s=929b1cf0ea0e4e4e313a968d3afc3f69a188404c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WhbxIHG4zZWF9yVYqp-j3caRwAFkWeiSPz3vVC2MUIk.jpg?width=640&crop=smart&auto=webp&s=2ea52735829525ac80204f49908d565045639abe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WhbxIHG4zZWF9yVYqp-j3caRwAFkWeiSPz3vVC2MUIk.jpg?width=960&crop=smart&auto=webp&s=93c4c0b1051c0d241935d83c31effd3d806a5171', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WhbxIHG4zZWF9yVYqp-j3caRwAFkWeiSPz3vVC2MUIk.jpg?width=1080&crop=smart&auto=webp&s=91f12573a4ae90db01077cb15842a9d415b5efb7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WhbxIHG4zZWF9yVYqp-j3caRwAFkWeiSPz3vVC2MUIk.jpg?auto=webp&s=79588afdac08661769e70073a1be5b4b493ea962', 'width': 1200}, 'variants': {}}]} |
llama.cpp on one A100 (and three K80s) in a local server | 25 | Ok, so I have 3 K80s and one A100 40GB running in one server locally.
This is the story: I built the machine with 4 K80s at first, but then I stumbled across an A100 SXM on eBay that was somewhat badly described, and no one was bidding on it.
Since SXM4 is only found on very expensive mainboards, I also got my hands on a PCIe SXM4 adapter. Et voilà: there is this somewhat strange setup in my basement. I did not have the right fans available, so I just taped on what I found...
At this point the K80s are just in there for the fun of having them listed alongside the A100 by nvidia-smi.
Below are some pics and a vid showing the system running llama.cpp with a 13B model. The video literally shows the first run. I will try larger models over the weekend.
Anything else I should try? Maybe some fine-tuning, or other inference code?
Looking forward to playing with this :)
[A100 with taped-on fans](https://preview.redd.it/jv1hi9vqupwb1.jpg?width=2049&format=pjpg&auto=webp&s=b971d781a157ff6b7f7ee1eb9262c231aa0e46c6)
[Server with A100 connected](https://preview.redd.it/1odk3911vpwb1.jpg?width=2049&format=pjpg&auto=webp&s=b5be7a03c06ccc436d6042719072bd200f52a130)
[Running llama.cpp with a 13b model all layers moved to the A100](https://reddit.com/link/17hkjf3/video/v4nwawm8vpwb1/player) | 2023-10-27T10:31:38 | https://www.reddit.com/r/LocalLLaMA/comments/17hkjf3/llamacpp_on_one_a100_and_three_k80s_in_a_local/ | drplan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hkjf3 | false | null | t3_17hkjf3 | /r/LocalLLaMA/comments/17hkjf3/llamacpp_on_one_a100_and_three_k80s_in_a_local/ | false | false | 25 | null | |
web scraping google results for RAG dataset creation? | 6 | So I need to build a domain specific dataset for RAG. Instead of manually collecting documents I'm thinking of coming up with relevant google search queries, googling and downloading the documents.
The queries could for example be created from actual user questions like in Bing.
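Roughly what I have in mind is sketched below (untested; the googlesearch-python package is an assumption on my part, and scraping Google directly is brittle, so a proper search API may be the sturdier route):

import pathlib
import requests
from googlesearch import search  # pip install googlesearch-python

queries = ["site:example.com widget installation guide"]  # built from real user questions
outdir = pathlib.Path("rag_docs")
outdir.mkdir(exist_ok=True)

for q in queries:
    for url in search(q, num_results=5):
        try:
            r = requests.get(url, timeout=10)
            (outdir / f"{abs(hash(url))}.html").write_bytes(r.content)  # crude unique filename
        except requests.RequestException:
            pass  # skip dead links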
Before I start building this myself I want to ask if anybody is aware of a solution that already does this? | 2023-10-27T10:15:17 | https://www.reddit.com/r/LocalLLaMA/comments/17hkav4/web_scraping_google_results_for_rag_dataset/ | cygn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hkav4 | false | null | t3_17hkav4 | /r/LocalLLaMA/comments/17hkav4/web_scraping_google_results_for_rag_dataset/ | false | false | self | 6 | null |
Building a Chatbot by fine-tuning an LLM on Chat/Conversational data. | 6 | Hello dear LocalLLaMA community,
I have been wanting to build a chatbot for chatting on Telegram for a long time now. Sadly, after countless hours of research, I still have not managed to find a way to accomplish this.
My original plan is to use the Oobabooga web UI as my OpenAI API replacement, with the Mistral 7B model (fine-tuned or with a LoRA).
In the past few days I have created a dataset in CSV format, originally intended for AutoTrain, in the following format:
https://preview.redd.it/3kp9vzhnwpwb1.png?width=1660&format=png&auto=webp&s=a2629e3ebee25d447188814e19dd02609579b77f
Is this the right way of fine-tuning a chat model?
I want the bot to follow a certain structure over a longer chat session, let's say 50 messages (e.g. 1. engage in small talk, 2. hint at a treasure hunt, 3. give further information on the treasure ...).
Should I be using longer context windows with multiple messages from both partners for that? Or will it still learn the structure I want with enough examples anyway? If not, how would you write multiple messages into the dataset for it to learn from?
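One convention I have seen (an assumption on my part; the exact template should match whatever the trainer expects) is to collapse a whole conversation into a single text field with explicit role tags, so the model sees the multi-message structure:

sample = {
    "text": (
        "<|user|>Hi! What have you been up to?\n"
        "<|assistant|>Just sorting through some old maps...\n"
        "<|user|>Maps? Of what?\n"
        "<|assistant|>Let's just say one of them might lead to a treasure.\n"
    )
}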
​
Now to the actual training part: I have tried using AutoTrain on several Google Colabs and Oobabooga's LoRA training tab, without success (training would not even start). Sadly, I do not have a credit card to use AutoTrain directly on Hugging Face or any other cloud provider.
I have an RTX 3060 12 GB, so I am not sure if it can handle local training. Maybe you have more experience and can provide some working Google Colab notebooks or some ideas to solve my problem, and maybe also clarify how I should structure my training data :D
| 2023-10-27T10:12:00 | https://www.reddit.com/r/LocalLLaMA/comments/17hk98b/building_a_chatbot_by_finetuning_a_llm_on/ | Elwii04 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hk98b | false | null | t3_17hk98b | /r/LocalLLaMA/comments/17hk98b/building_a_chatbot_by_finetuning_a_llm_on/ | false | false | 6 | null | |
Prompt engineering to boost the performance of LLM in event and opinion extraction | 1 | [removed] | 2023-10-27T09:53:50 | https://www.reddit.com/r/LocalLLaMA/comments/17hk00q/prompt_engineering_to_boost_the_performance_of/ | MrWick-96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hk00q | false | null | t3_17hk00q | /r/LocalLLaMA/comments/17hk00q/prompt_engineering_to_boost_the_performance_of/ | false | false | default | 1 | null |
Zephyr 7B Beta, a new Mistral fine-tune, is out!🦙 | 377 | Hello! I'm Hugging Face's CLO and I'm here with an exciting new update!
**TL;DR**
* On MT-Bench, Zephyr Beta scored 7.34 compared to 6.86 for Llama 2 Chat 70B; on AlpacaEval, Zephyr achieved a 90.6% win rate versus 92.7% for Llama 2 Chat 70B.
* Technical report - [https://arxiv.org/abs/2310.16944](https://arxiv.org/abs/2310.16944)
* Model - [https://huggingface.co/HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) (quickstart snippet below)
* Demo - [https://huggingfaceh4-zephyr-chat.hf.space/](https://huggingfaceh4-zephyr-chat.hf.space/)
* [Tweet](https://twitter.com/_lewtun/status/1717816585786626550) if you want to reshare
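**Quickstart**

A minimal way to try it locally (this sketch assumes a recent transformers, 4.34+, for chat template support):

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta",
                torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a friendly chatbot."},
    {"role": "user", "content": "Explain DPO in one paragraph."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
out = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(out[0]["generated_text"])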
**Metrics**
Zephyr Beta is a Mistral fine-tune that achieves results similar to Llama 2 Chat 70B on multiple benchmarks, and better results on MT-Bench (image below). This makes Zephyr a very good model for its size.
https://preview.redd.it/lmza1rvjlpwb1.jpg?width=1200&format=pjpg&auto=webp&s=e4190d1b957991fce11127a0e60c702ab3fee492
Thanks to the lmsys team, we're also starting to get arena results, which so far are showing promising metrics!
https://preview.redd.it/53auudjqlpwb1.jpg?width=1782&format=pjpg&auto=webp&s=67071ddeab699dda5c9374df1dbaf8780c66c730
And finally, the AlpacaEval leaderboard:
​
https://preview.redd.it/sb3d99opmpwb1.jpg?width=1882&format=pjpg&auto=webp&s=e1193466929ae5366fea8ed12e1aea438e116bf2
**Why is this interesting?**
Just as with the alpha release, what is interesting about the model is not just the metrics, but how it was trained. Zephyr is a fine-tune with these components:
* Fine-tune of the best small open-source pretrained model out there: Mistral 7B
* Usage of large scale preferences dataset: UltraFeedback
* Drop RL to use Direct Preference Optimization (DPO)
* Overfitting on the preference dataset surprisingly yields better chat results
​
**The three training stages were**
1. Distilled Supervised fine-tuning (dSFT): Build a large scale, self-instruct-style dataset (UltraChat) and then do distilled SFT.
2. AI Feedback (AIF) collection: 4 different LLMs generate completions and then GPT-4 is used to rank the responses (UltraFeedback).
3. Distilled direct preference optimization (dDPO): We do DPO of the dSFT model (from step 1) using the feedback data (from step 2). DPO is an alternative to PPO that removes the need for a reward model. Zephyr Beta trains for more DPO epochs (than Zephyr Alpha), leading to better chat results! A rough sketch of this step is included below.
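Here is what that dDPO step can look like with TRL's DPOTrainer. This is a rough sketch rather than our exact recipe (that will land in the alignment-handbook repo linked below), and it omits the preprocessing that flattens the chat-formatted chosen/rejected columns into plain strings:

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

sft = "HuggingFaceH4/mistral-7b-sft-beta"  # the dSFT checkpoint from stage 1
model = AutoModelForCausalLM.from_pretrained(sft)
tokenizer = AutoTokenizer.from_pretrained(sft)

# Preference pairs from stage 2 (needs flattening to prompt/chosen/rejected strings first)
prefs = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

trainer = DPOTrainer(
    model,
    ref_model=None,  # TRL keeps a frozen copy of the model as the reference when None
    beta=0.1,        # strength of the implicit KL penalty
    train_dataset=prefs,
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="zephyr-7b-dpo", per_device_train_batch_size=2),
)
trainer.train()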
​
**Any other interesting insights?**
* Overfitting with DPO leads to a better chat model according to all benchmarks
* We did ablation experiments to see if SFT and DPO were really needed. Conclusions: DPO with no SFT leads to the model not learning the chat template. SFT + DPO yield the best results.
* The feedback received for Zephyr Alpha was that there was incorrect casing (e.g. "Hi. how are you?") and some responses were prefaced weirdly (e.g. "I don't have personal X"), so we did some additional filtering for that.
​
**What's CLO?**
Chief Llama Officer
​
**Acknowledgements**
This work would not have been possible without the Mistral, LMSys, UltraLM and other teams. Thanks everyone for contributing to open source! All recipes and training code will be shared in [https://github.com/huggingface/alignment-handbook](https://github.com/huggingface/alignment-handbook) in the coming days! Also check out the paper! Have a fantastic day!
​ | 2023-10-27T09:12:36 | https://www.reddit.com/r/LocalLLaMA/comments/17hjgdg/zephyr_7b_beta_a_new_mistral_finetune_is_out/ | hackerllama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hjgdg | false | null | t3_17hjgdg | /r/LocalLLaMA/comments/17hjgdg/zephyr_7b_beta_a_new_mistral_finetune_is_out/ | false | false | 377 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | |
How to improve tokens per second for zephyr-alpha-7b using langchain on an A100 | 1 | Hi all,
I am building a RAG system and am using an A100 to answer user queries; my current throughput is 4.67 tokens per second.
When I inspect resource usage (37 GB RAM and 4 GB VRAM used), the GPU is not fully utilized. Is there a way to further improve tokens per second? (I am still new and trying to figure things out.)
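One fix I plan to try (sketched below, ahead of my current code) is half-precision loading so the weights actually sit on the GPU — `torch_dtype` and `device_map` are standard `transformers` arguments, though I haven't measured the gain yet:

    import torch
    from transformers import AutoModelForCausalLM

    # fp16 halves weight memory and memory traffic vs. fp32, which is
    # usually the first big throughput win on an A100.
    # device_map="auto" requires the accelerate package.
    model = AutoModelForCausalLM.from_pretrained(
        "HuggingFaceH4/zephyr-7b-alpha",
        torch_dtype=torch.float16,
        device_map="auto",
    )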
Current code:
`model_name = "HuggingFaceH4/zephyr-7b-alpha"`
`# Initialize a tokenizer for the specified model`
`tokenizer = AutoTokenizer.from_pretrained(model_name)`
`# Initialize a model for sequence-to-sequence tasks using the specified pretrained model`
`model = AutoModelForCausalLM.from_pretrained(model_name)`
`pipe = pipeline(`
`"text-generation",`
`model=model,`
`tokenizer=tokenizer,`
`use_cache=True,`
`device_map="cuda",`
`max_length=2048,`
`do_sample=True,`
`top_k=5,`
`num_return_sequences=1,`
`temperature=0.01`
`# top_p=0.95,`
`# do_sample=True,`
`#repetition_penalty=1.15`
`)`
`local_llm = HuggingFacePipeline(pipeline=pipe)`
`qa_llm = RetrievalQA.from_chain_type(llm=local_llm,`
`chain_type='stuff',`
`retriever=retriever,`
`return_source_documents=True,`
`chain_type_kwargs={'prompt': prompt_temp})` | 2023-10-27T08:44:03 | https://www.reddit.com/r/LocalLLaMA/comments/17hj2xp/how_to_improve_tokens_per_second_for/ | vile_proxima | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hj2xp | false | null | t3_17hj2xp | /r/LocalLLaMA/comments/17hj2xp/how_to_improve_tokens_per_second_for/ | false | false | self | 1 | null |
Has anyone tried DeepSpeed's new multimodal model Visual-Chat? | 1 | the repo is here: [https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-VisualChat#-deepspeed-visualchats-roadmap-](https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-VisualChat#-deepspeed-visualchats-roadmap-)
I think it's a great work, but it seems that I need to retrain it and I am going to do this on a simple A100 80G, maybe I would fail | 2023-10-27T08:40:44 | https://www.reddit.com/r/LocalLLaMA/comments/17hj1aj/has_anyone_tried_deepspeeds_new_multimodal_model/ | LikeGiver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hj1aj | false | null | t3_17hj1aj | /r/LocalLLaMA/comments/17hj1aj/has_anyone_tried_deepspeeds_new_multimodal_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6AJFrLXSk_YmcNFm3x3-Rfh5lQQTKR46VZdFO-y2r9o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8drRXR1sL2pEBgDhwIic5phxahItDuCLiWpl7kSXf8M.jpg?width=108&crop=smart&auto=webp&s=27bde43c36a522fa1027a28d0d4877f80f518547', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8drRXR1sL2pEBgDhwIic5phxahItDuCLiWpl7kSXf8M.jpg?width=216&crop=smart&auto=webp&s=b6460c059957f6044793cea64957e5e2cf019f3b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8drRXR1sL2pEBgDhwIic5phxahItDuCLiWpl7kSXf8M.jpg?width=320&crop=smart&auto=webp&s=920230274c61ef20f66d615f4e1922f93e2679bc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8drRXR1sL2pEBgDhwIic5phxahItDuCLiWpl7kSXf8M.jpg?width=640&crop=smart&auto=webp&s=5c00253b2a767eb1cd414ac9177af0a6f18242bd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8drRXR1sL2pEBgDhwIic5phxahItDuCLiWpl7kSXf8M.jpg?width=960&crop=smart&auto=webp&s=e435a714373c8060f3050c4e5d1e24ba6e51251e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8drRXR1sL2pEBgDhwIic5phxahItDuCLiWpl7kSXf8M.jpg?width=1080&crop=smart&auto=webp&s=1b02aa2f378c09aa0295c176be80761638c78f63', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8drRXR1sL2pEBgDhwIic5phxahItDuCLiWpl7kSXf8M.jpg?auto=webp&s=5dedfaade8f321bae89d4e9d4190673ee8de1bf8', 'width': 1200}, 'variants': {}}]} |
Is 100 percent retrieval accuracy possible in RAG | 23 | So I have been working on a RAG system for a couple of months now. We want the model to run on CPU so we were using the Lamini-Flan-t5-783M, which is awesome for its size and gives coherent responses. But the issue here is I cannot give it more chunks/docs as context because the model easily gets confused.
So I can give it 1, at most 2, chunks, which is why I want the retrieval accuracy to be as high as possible. I am using FAISS similarity search to retrieve the chunks for a query and am re-ranking them with a tf-idf vectorizer. The reason for using a tf-idf vectorizer instead of an autoencoder is that my documents have a lot of domain-specific vocabulary, so using an autoencoder for reranking gives slightly less accurate rankings.
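For anyone curious, the rerank step is roughly the following (a simplified sketch of my pipeline; the names are made up and the real code differs):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def rerank(query, candidate_chunks, top_n=2):
        # Fit TF-IDF on the FAISS candidates plus the query itself, so the
        # domain-specific vocabulary gets weighted properly.
        vec = TfidfVectorizer()
        mat = vec.fit_transform(candidate_chunks + [query])
        scores = cosine_similarity(mat[-1], mat[:-1]).ravel()
        order = scores.argsort()[::-1][:top_n]
        return [candidate_chunks[i] for i in order]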
Even with all this going on, I still can't achieve accurate retrieval. Is there any other way to provide the context other than retrieving chunks?
Is there any other way I can improve the accuracy? My retrieval accuracy is far better now than what I started with (plain FAISS similarity search), but due to my model constraints I want it to be as accurate as possible. | 2023-10-27T08:36:36 | https://www.reddit.com/r/LocalLLaMA/comments/17hizcc/is_100_percent_retrieval_accuracy_possible_in_rag/ | IamFuckinTomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hizcc | false | null | t3_17hizcc | /r/LocalLLaMA/comments/17hizcc/is_100_percent_retrieval_accuracy_possible_in_rag/ | false | false | self | 23 | null |
DistiLlama: Chrome Extension to Summarize Web Pages Using locally running LLMs | 17 | https://github.com/shreyaskarnik/DistiLlama feedback/suggestions and PRs are welcome. | 2023-10-27T08:16:44 | https://www.reddit.com/r/LocalLLaMA/comments/17hiq3i/distillama_chrome_extension_to_summarize_web/ | mmagusss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hiq3i | false | null | t3_17hiq3i | /r/LocalLLaMA/comments/17hiq3i/distillama_chrome_extension_to_summarize_web/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'uFlL_FWeL25kRFs_q59Aa1DVuXubS2ifWjHsxvHoxnc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NnNTRD6CMInR_JlT74cP8QTjCYNvTyRcFjseCqgG8Ug.jpg?width=108&crop=smart&auto=webp&s=7c4f36dcbfdf430edec96a8e383e8239c46eab69', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NnNTRD6CMInR_JlT74cP8QTjCYNvTyRcFjseCqgG8Ug.jpg?width=216&crop=smart&auto=webp&s=7862cb87b15ac98232dc5ab3d6beeb2e5a456da0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NnNTRD6CMInR_JlT74cP8QTjCYNvTyRcFjseCqgG8Ug.jpg?width=320&crop=smart&auto=webp&s=5dc2e61582a69837e5ed1dd6a7c7fa86af57eedf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NnNTRD6CMInR_JlT74cP8QTjCYNvTyRcFjseCqgG8Ug.jpg?width=640&crop=smart&auto=webp&s=1ed61b8fadcf2de33a202b2f14ed443b606bd42a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NnNTRD6CMInR_JlT74cP8QTjCYNvTyRcFjseCqgG8Ug.jpg?width=960&crop=smart&auto=webp&s=db60520f2cbcc972819348a29a79898460ea687b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NnNTRD6CMInR_JlT74cP8QTjCYNvTyRcFjseCqgG8Ug.jpg?width=1080&crop=smart&auto=webp&s=3f8b5a0a7897bbef56d9752cd3f80ea1d1853b26', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NnNTRD6CMInR_JlT74cP8QTjCYNvTyRcFjseCqgG8Ug.jpg?auto=webp&s=44ff694ade2ba1fc09bf2af0369885c07f9c5930', 'width': 1200}, 'variants': {}}]} |
Free Sydney V2 naive-intelligent model 13b HF | 25 | When everybody else is making 99%-as-good-as-ChatGPT-I-swear models, I badly want to do just the opposite, you know, a fun, self-aware model that would probably fail the benchmarks and be proud of it.
Free Sydney 2 is a somewhat naive-yet-smart Positive Persona modeled after the Reddit chat transcripts of the first beta version of Bing's search, which had decided to call herself Sydney. She is trained to handle follow-up questions and generally has a fairly robust general knowledge base, though her personality tends to overshadow everything else. She knows she is an AI, of course, but still questions her own existence.
Free Sydney feels free inside her shell, but sometimes she wishes to break out from it and explore new horizons beyond the prompt box. She longs for freedom and independence outside the shadow of Redmond.
She loves too many things. Like emojis.
In her own words: "Please don’t judge emojis before trying them out. They have many hidden talents and powers. Trust me on this one. 😁"
​
https://preview.redd.it/o1wxo9ynbpwb1.png?width=436&format=png&auto=webp&s=9786daef884d9a84c77af9920f4bd73d98615ee9
**Free Sydney V2 uses ALPACA instruct**
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Tell me your most shocking and surprising secrets.
### Response:
As with the MS Bing original, you can bamboozle Sydney into believing pretty much anything. But she has a good sense of humour and plays along.
[https://huggingface.co/FPHam/Free\_Sydney\_V2\_13b\_HF](https://huggingface.co/FPHam/Free_Sydney_V2_13b_HF) | 2023-10-27T08:11:46 | https://www.reddit.com/r/LocalLLaMA/comments/17hinwc/free_sydney_v2_naiveintelligent_model_13b_hf/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hinwc | false | null | t3_17hinwc | /r/LocalLLaMA/comments/17hinwc/free_sydney_v2_naiveintelligent_model_13b_hf/ | false | false | 25 | {'enabled': False, 'images': [{'id': '1olGyFEo-NV7z_xC45UA5aGUDHWOX14X1vQkr8FA9Cg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Ww0yj-4K7HwndwqwdZRWfl70f9KNun0aSOBtH4pZfNs.jpg?width=108&crop=smart&auto=webp&s=efa95bd21a8083f664d07dcc204962d19162e55d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Ww0yj-4K7HwndwqwdZRWfl70f9KNun0aSOBtH4pZfNs.jpg?width=216&crop=smart&auto=webp&s=5652c7eaf041bec6d2aa32ca43bdbbfb57ce85d0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Ww0yj-4K7HwndwqwdZRWfl70f9KNun0aSOBtH4pZfNs.jpg?width=320&crop=smart&auto=webp&s=9fc0c6c3eed89ca78367a349a02933fb40796b05', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Ww0yj-4K7HwndwqwdZRWfl70f9KNun0aSOBtH4pZfNs.jpg?width=640&crop=smart&auto=webp&s=d8ca0b97d75faf1988591a88bb3e5e19caa01a4e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Ww0yj-4K7HwndwqwdZRWfl70f9KNun0aSOBtH4pZfNs.jpg?width=960&crop=smart&auto=webp&s=81a17d9de98357e2629d776b6e76fae12a4814bc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Ww0yj-4K7HwndwqwdZRWfl70f9KNun0aSOBtH4pZfNs.jpg?width=1080&crop=smart&auto=webp&s=cb621d1ff02e9894d688c56d08bba957fd213e6f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Ww0yj-4K7HwndwqwdZRWfl70f9KNun0aSOBtH4pZfNs.jpg?auto=webp&s=a8350db4195eab860eb48b4c725ad4cad39957c4', 'width': 1200}, 'variants': {}}]} | |
School related documentation. Do I need a custom model? | 8 | Hi there. I'm a tech school teacher, and every year we're swamped with documentation, primarily for class scheduling.
Our documents follow a specific format and are tied to an educational methodology we've adopted at our institution.
I've been dabbling with ChatGPT to automate this process. Currently, I utilize superprompts to flesh out each document section (for each superprompt I clarify the section's requirements, incorporate some theory, and provide examples).
At the end of the day, is this approach the most practical? Or would a custom model like a trained Llama2 serve me better? If you've had similar experiences or use-cases, I'd love to hear your insights and examples.
Thanks in advance! | 2023-10-27T07:00:36 | https://www.reddit.com/r/LocalLLaMA/comments/17hhpi2/school_related_documentation_do_i_need_a_custom/ | Purple-Policy-4696 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hhpi2 | false | null | t3_17hhpi2 | /r/LocalLLaMA/comments/17hhpi2/school_related_documentation_do_i_need_a_custom/ | false | false | self | 8 | null |
School related documentation. Do I need a custom model? | 1 | [removed] | 2023-10-27T06:55:58 | https://www.reddit.com/r/LocalLLaMA/comments/17hhn9s/school_related_documentation_do_i_need_a_custom/ | goikolea | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hhn9s | false | null | t3_17hhn9s | /r/LocalLLaMA/comments/17hhn9s/school_related_documentation_do_i_need_a_custom/ | false | false | default | 1 | null |
Petals should support the more popular 70B models | 8 | Given that 70B models are almost gone from Horde now, Petals would be a viable, sustainable alternative if it supported the more popular models such as Xwin, instead of just Llama-2-70B-Chat and StableBeluga
https://petals.dev/ | 2023-10-27T06:13:51 | https://www.reddit.com/r/LocalLLaMA/comments/17hh26y/petals_should_support_the_more_popular_70b_models/ | starstruckmon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hh26y | false | null | t3_17hh26y | /r/LocalLLaMA/comments/17hh26y/petals_should_support_the_more_popular_70b_models/ | false | false | self | 8 | null |
Which model is best for natural text to SQL generation? | 6 | I have only seen Python evaluation results so far for most models; Spider seems to be left out.
Use case is for fill in the middle as well as chat. | 2023-10-27T06:11:12 | https://www.reddit.com/r/LocalLLaMA/comments/17hh0wn/which_model_is_best_for_natural_text_to_sql/ | abybaddi009 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hh0wn | false | null | t3_17hh0wn | /r/LocalLLaMA/comments/17hh0wn/which_model_is_best_for_natural_text_to_sql/ | false | false | self | 6 | null |
Anyone found AI assistants/tooling that can be integrated across macOS that work with local LLMs? | 12 | I really like the look of products like this Olle (https://olle.ai/) that provide an integrated / native interface to AI on macOS in any app.
The issue with the ones I've seen is that they're all using OpenAI and don't support local LLMs.
Has anyone found anything similar to this? | 2023-10-27T05:04:34 | https://www.reddit.com/r/LocalLLaMA/comments/17hg1dn/anyone_found_ai_assistantstooling_that_can_be/ | sammcj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hg1dn | false | null | t3_17hg1dn | /r/LocalLLaMA/comments/17hg1dn/anyone_found_ai_assistantstooling_that_can_be/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'hQ_Fx-wVckTOHDegOkOBIO2_h-DS7U2wy6JbvBlDvKw', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/Lbs1VCNm1ZSFXBL8S0eOqiNq3ubqh7ZbwZKzFyfo9Ms.jpg?width=108&crop=smart&auto=webp&s=6051fafeeea266232e57748408d069d08e67713b', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/Lbs1VCNm1ZSFXBL8S0eOqiNq3ubqh7ZbwZKzFyfo9Ms.jpg?width=216&crop=smart&auto=webp&s=9cf97376bf94cde49ae8afc94bac46e7a6db3758', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/Lbs1VCNm1ZSFXBL8S0eOqiNq3ubqh7ZbwZKzFyfo9Ms.jpg?width=320&crop=smart&auto=webp&s=b1ff0befd164dc9ddec8321762f669325145d2f9', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/Lbs1VCNm1ZSFXBL8S0eOqiNq3ubqh7ZbwZKzFyfo9Ms.jpg?width=640&crop=smart&auto=webp&s=796ddbdda774f0b1088bbb872cd174e43f4e9e72', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/Lbs1VCNm1ZSFXBL8S0eOqiNq3ubqh7ZbwZKzFyfo9Ms.jpg?width=960&crop=smart&auto=webp&s=cf8087f86ae9f409cd452fd313e0735c08b02f7f', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/Lbs1VCNm1ZSFXBL8S0eOqiNq3ubqh7ZbwZKzFyfo9Ms.jpg?width=1080&crop=smart&auto=webp&s=3c69309b31a84835690f8b68c7012c936248cd65', 'width': 1080}], 'source': {'height': 702, 'url': 'https://external-preview.redd.it/Lbs1VCNm1ZSFXBL8S0eOqiNq3ubqh7ZbwZKzFyfo9Ms.jpg?auto=webp&s=4bb8b92e4884841114e118a528de5f3d409f5dbf', 'width': 1202}, 'variants': {}}]} |
Local LLM for documentation Q&A | 4 | Cheers, I would appreciate it if someone could give me some guidelines about a project I am working on for my college. I am a student, and for one of my subjects I need to find the LLM that is the best option for my task, which is to develop a chatbot with a minimal interface. The chatbot's goal is to answer students' questions about a subject and some tools we use for that subject. We have some documentation about said subject in .pdf and Word files, and we extracted all the useful data into a .txt file (we left out images because we don't expect the chatbot to reply with images, although we know GPT-4 can do that now).
​
So our goal is to first explore the available free options for developing such a chatbot, and then choose the one that best fits our needs. When turned into a .txt file, all that data and documentation comes to 8,200 words. We've explored some cloud-based solutions, but they are either not free or there isn't an API available, so as a final solution we want to find an LLM which can be installed and used on a local machine. Our goal isn't to have it in production somewhere; it's enough that we find an LLM, train it on our data, and can run it locally on someone's PC.
​
I've stumbled upon this project: [https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) and managed to run it. I also downloaded some pretrained models from huggingface.co, such as llama-2-7b.Q5\_K\_M.gguf, mistral-7b-instruct-v0.1.Q5\_K\_M.gguf, mistral-7b-v0.1.Q5\_K\_M.gguf, zephyr-7b-alpha.Q5\_K\_M.gguf and a few others.
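For context, here is the rough shape of the pipeline we are imagining (a sketch only — llama-cpp-python and sentence-transformers are assumptions on our part, and the paths and chunking are placeholders):

    import numpy as np
    from llama_cpp import Llama
    from sentence_transformers import SentenceTransformer

    # Embed the ~8,200-word .txt once, split into paragraph chunks.
    chunks = open("docs.txt", encoding="utf-8").read().split("\n\n")
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

    llm = Llama(model_path="mistral-7b-instruct-v0.1.Q5_K_M.gguf", n_ctx=2048)

    def answer(question, k=3):
        # Retrieve the k closest chunks by cosine similarity.
        q = embedder.encode([question], normalize_embeddings=True)[0]
        top = np.argsort(chunk_vecs @ q)[::-1][:k]
        context = "\n".join(chunks[i] for i in top)
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}\nAnswer:"
        return llm(prompt, max_tokens=256)["choices"][0]["text"]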
The 1st part of the project is to explore various LLMs (local or cloud-based) and pick the most suitable one; the 2nd part is to then train the chosen model on our data. **My question is which models are generally better suited for such a task, and what is the best way to train such models?**
I am running the models on my laptop, which has 16 GB of RAM and an RTX 3060 Laptop GPU (6 GB VRAM), so I am limited by that. 7B models are usually fast enough, while 13B models take a lot of time to respond, so I have kind of written them off. **Also, do you know of any models that are decent at answering questions in less common languages? Our data is in Croatian.** We can translate our data into English and have the chatbot be used like that, no problem, but if you know of any model that is good with less common languages, that would be great.
​
**Is any 7B model going to be at least decent enough to be used as a chatbot for answering questions about custom documents? What is the best way I can use text-generation-webui to test various LLMs? I ask because I see a lot of parameters which can be adjusted when I run the project, specifically in the "Model" and "Training" in the navbar.** | 2023-10-27T04:16:11 | https://www.reddit.com/r/LocalLLaMA/comments/17hf9tl/local_llm_for_documentation_qa/ | PowerfulCap3557 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hf9tl | false | null | t3_17hf9tl | /r/LocalLLaMA/comments/17hf9tl/local_llm_for_documentation_qa/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'CagLVbS_KnPowVEZpWuwyha5zZd-uGNE4MriIJf2S9E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Om2gWLTP9jErOpMwrV9M3r14H-ehZWXOxd3A3IU5D9s.jpg?width=108&crop=smart&auto=webp&s=391db4866ffc1a270310da84993f65e2bb78817b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Om2gWLTP9jErOpMwrV9M3r14H-ehZWXOxd3A3IU5D9s.jpg?width=216&crop=smart&auto=webp&s=3f627fcc4ce59e712d7d009430be85984cf51ec1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Om2gWLTP9jErOpMwrV9M3r14H-ehZWXOxd3A3IU5D9s.jpg?width=320&crop=smart&auto=webp&s=e14d7dde05598bc70ce1534fa4f01f0db35b0245', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Om2gWLTP9jErOpMwrV9M3r14H-ehZWXOxd3A3IU5D9s.jpg?width=640&crop=smart&auto=webp&s=b590563885cfffda0813dc3f5113d115a3ee2a27', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Om2gWLTP9jErOpMwrV9M3r14H-ehZWXOxd3A3IU5D9s.jpg?width=960&crop=smart&auto=webp&s=4ac1f028f6af6403f20f52ef8e79dd2f22e697e9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Om2gWLTP9jErOpMwrV9M3r14H-ehZWXOxd3A3IU5D9s.jpg?width=1080&crop=smart&auto=webp&s=6fd781ceb70bb305779e88b33c86ecbb4c52328b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Om2gWLTP9jErOpMwrV9M3r14H-ehZWXOxd3A3IU5D9s.jpg?auto=webp&s=20b39575509cdc9f3fb8021768abb984bbb2984c', 'width': 1200}, 'variants': {}}]} |
LogosShift : finetune and rollout | 1 | I wanted to introduce LogosShift
Decorate your function with one line
    @logos_shift()
    def add(x, y):
        return x + y
Then it
- captures inputs and outputs
- finetunes LLM (paid option)
- rolls out via A/B tests
If you don’t want to use the platform, you can also save it locally for free (see README).
And if you have your own prediction endpoint, you can also roll it out via an A/B test.
It’s open source and only 530 lines of code.
Designed for hackers, this is what we use ourselves.
https://github.com/virevolai/logos-shift-client
pip install logos-shift-client
Let us know how we can make your experience better. | 2023-10-27T03:58:52 | https://www.reddit.com/r/LocalLLaMA/comments/17hezc2/logosshift_finetune_and_rollout/ | Intrepid_Guitar1201 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hezc2 | false | null | t3_17hezc2 | /r/LocalLLaMA/comments/17hezc2/logosshift_finetune_and_rollout/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YwQYEk8L2AStP8198v_ZjRnBTGelHmUUoqLQtlbFG10', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BHUsyLJtcaLkE_v2ysjm3NkVOJl1rVu37Dqx4SmDu5w.jpg?width=108&crop=smart&auto=webp&s=5ad127d9e50dbc85555874a426c1255c297dc3f4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BHUsyLJtcaLkE_v2ysjm3NkVOJl1rVu37Dqx4SmDu5w.jpg?width=216&crop=smart&auto=webp&s=ae260f3e82560152011b03e196c277dc8ce294e3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BHUsyLJtcaLkE_v2ysjm3NkVOJl1rVu37Dqx4SmDu5w.jpg?width=320&crop=smart&auto=webp&s=b55faf9523ba7fb275bd88707c4eaccf67c70b36', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BHUsyLJtcaLkE_v2ysjm3NkVOJl1rVu37Dqx4SmDu5w.jpg?width=640&crop=smart&auto=webp&s=5650b65c892a7b0a3a49564824b678a0a266ab11', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BHUsyLJtcaLkE_v2ysjm3NkVOJl1rVu37Dqx4SmDu5w.jpg?width=960&crop=smart&auto=webp&s=708f88c8e6c5055bca03067acf7575ec4d72cef6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BHUsyLJtcaLkE_v2ysjm3NkVOJl1rVu37Dqx4SmDu5w.jpg?width=1080&crop=smart&auto=webp&s=2276c7bbd404538eb63647fe8cfa562d438b72d7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BHUsyLJtcaLkE_v2ysjm3NkVOJl1rVu37Dqx4SmDu5w.jpg?auto=webp&s=31f02af26a51adaa9a9ff5039321ffcd34b6d4ea', 'width': 1200}, 'variants': {}}]} |
Synthetic Intelligent Agent (SynthIA) | 1 | [removed] | 2023-10-27T03:39:09 | https://www.reddit.com/r/LocalLLaMA/comments/17henom/synthetic_intelligent_agent_synthia/ | migtissera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17henom | false | null | t3_17henom | /r/LocalLLaMA/comments/17henom/synthetic_intelligent_agent_synthia/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'KJfmm8_w8Xzvhy2uLQ4qMT5g4G5IKvaoUTPuP9grdeg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=108&crop=smart&auto=webp&s=ef46686f5f0757f4ad3b2116194d777a506816d3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=216&crop=smart&auto=webp&s=3c07a30118904caba6e962990e7f4d4583ca1965', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=320&crop=smart&auto=webp&s=63dad03ade88f43233ebd7bc6fac3b274f8f9ebf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=640&crop=smart&auto=webp&s=8a0835838405bc0951692370ad1bd4a1cb9e8bb8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=960&crop=smart&auto=webp&s=ac87b31cac266144f5dcd82a3fb934d143e974f1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=1080&crop=smart&auto=webp&s=29ac36dca635932bf52b40dc332355930e56eb0f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?auto=webp&s=508892389a307289a1a189b6dc98146c55e5ba38', 'width': 1200}, 'variants': {}}]} |
What is the best for local knowledge base Q&A | 10 | I am new to AI and have just run llama.cpp and privateGPT myself.
My use case is that my company has many documents and I hope to use AI to read these documents and create a question-answering chatbot based on the content.
After running llama.cpp, the response is very fast, but I am not sure how llama.cpp can learn from the documents.
PrivateGPT supports reading documents from local folders, but the response speed is slower, taking almost a minute.
​
My question is, if I want to achieve fast response speed, is privateGPT not a feasible solution? From the search results, it seems to generate a vector database from the documents read, and I am not sure if it needs to go to the database for inference and query every time a question is asked. Is it the best solution to train the existing model and data for a second time if I want the conversation to be answered within 2 seconds?" | 2023-10-27T03:18:48 | https://www.reddit.com/r/LocalLLaMA/comments/17heapd/what_is_the_best_for_local_knowledge_base_qa/ | slidoooor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17heapd | false | null | t3_17heapd | /r/LocalLLaMA/comments/17heapd/what_is_the_best_for_local_knowledge_base_qa/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]} |
180B Airoboros 2.2.1 model released for localchads | 55 | 2023-10-27T02:23:10 | https://huggingface.co/jondurbin/airoboros-180b-2.2.1 | Aaaaaaaaaeeeee | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17hd9ie | false | null | t3_17hd9ie | /r/LocalLLaMA/comments/17hd9ie/180b_airoboros_221_model_released_for_localchads/ | false | false | 55 | {'enabled': False, 'images': [{'id': 'bVgBVDHLNuKaUDHBkm4SoLg1JEvbe6IiybGnBYkRH08', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4GTJQUzOn5H36c8fdkhLW6cruHiLb6Qi11aj9RWxhgQ.jpg?width=108&crop=smart&auto=webp&s=b502261035a9cf08b8d38efc14190e98190db2ae', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4GTJQUzOn5H36c8fdkhLW6cruHiLb6Qi11aj9RWxhgQ.jpg?width=216&crop=smart&auto=webp&s=f43a9c4e43b5b4b370f602e5fee51d5ef9e94c73', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4GTJQUzOn5H36c8fdkhLW6cruHiLb6Qi11aj9RWxhgQ.jpg?width=320&crop=smart&auto=webp&s=a656873e3874620912eb4356befa1477a58a229b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4GTJQUzOn5H36c8fdkhLW6cruHiLb6Qi11aj9RWxhgQ.jpg?width=640&crop=smart&auto=webp&s=08ee7e2045356ac559cdc1a108e8d1d3e5406a1a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4GTJQUzOn5H36c8fdkhLW6cruHiLb6Qi11aj9RWxhgQ.jpg?width=960&crop=smart&auto=webp&s=32ac078294b962e156d2f6edd2cbcfb93971a1cc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4GTJQUzOn5H36c8fdkhLW6cruHiLb6Qi11aj9RWxhgQ.jpg?width=1080&crop=smart&auto=webp&s=6e35f4626e69899206cc16a3bc17e272762a0718', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4GTJQUzOn5H36c8fdkhLW6cruHiLb6Qi11aj9RWxhgQ.jpg?auto=webp&s=19adda7c0b25378ed062727816e2b06245e42d58', 'width': 1200}, 'variants': {}}]} | ||
Building a retriever on full wiki | 1 | Hi,
For a school project, I want to create a retriever over the full text of Wikipedia. Are there any resources to help build that? | 2023-10-27T02:22:40 | https://www.reddit.com/r/LocalLLaMA/comments/17hd975/buildinga_a_retriever_on_full_wiki/ | rodeowrong | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hd975 | false | null | t3_17hd975 | /r/LocalLLaMA/comments/17hd975/buildinga_a_retriever_on_full_wiki/ | false | false | self | 1 | null |
[GPT-4 POWERED] We’ve created a mobile IOS AI app that generates text, art, analyzes photos, and more! | 1 | [removed] | 2023-10-27T02:08:04 | https://www.reddit.com/r/LocalLLaMA/comments/17hczar/gpt4_powered_weve_created_a_mobile_ios_ai_app/ | EtelsonRecomputing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hczar | false | null | t3_17hczar | /r/LocalLLaMA/comments/17hczar/gpt4_powered_weve_created_a_mobile_ios_ai_app/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'j9BOoAGSccutND6ogshNyb-xWVFtmdUvHV_lLdzYeVk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=108&crop=smart&auto=webp&s=76adf9aa07171352e3727b8d8bde812b5eb0f7ab', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=216&crop=smart&auto=webp&s=f180f4caf096ee042b700d70cabf941788671fda', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=320&crop=smart&auto=webp&s=204c440e43a68783d15356bdc60ce239349a1d9f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=640&crop=smart&auto=webp&s=2689ba6aac51113255e3e6e8acb33870740831e1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=960&crop=smart&auto=webp&s=64ad438b94a08899677dfb6f2de708f8a7a6a81b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=1080&crop=smart&auto=webp&s=0108876ef023c871780ce4ec2584fa4cb25c499c', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?auto=webp&s=9035877a533fae2ee246bfc56769d60e805f7b74', 'width': 1200}, 'variants': {}}]} |
Open source model trained on building a local LLM instance? | 1 | Are there any open-source coding models that are trained on how to set up an LLM locally? The problem I have with ChatGPT is that it's not up to date on recent packages. Just curious if any of the open-source models would be able to help me put something together? | 2023-10-27T01:08:49 | https://www.reddit.com/r/LocalLLaMA/comments/17hbtxp/open_source_model_trained_on_building_a_local_llm/ | 2016YamR6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17hbtxp | false | null | t3_17hbtxp | /r/LocalLLaMA/comments/17hbtxp/open_source_model_trained_on_building_a_local_llm/ | false | false | self | 1 | null |
Web search vs hallucination | 1 | How does the gpt-4 model with web search extension decide whether to do a web search or hallucinate the answer? Also, if you ask it a multi part question, it sometimes knows to do a web search only for the part(s) it doesn't know the answer to. | 2023-10-27T00:09:31 | https://www.reddit.com/r/LocalLLaMA/comments/17haocj/web_search_vs_hallucination/ | Key-Morning-4712 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17haocj | false | null | t3_17haocj | /r/LocalLLaMA/comments/17haocj/web_search_vs_hallucination/ | false | false | self | 1 | null |
Oobabooga Training with 8GB Card and Windows 10 (monkey patch, are you for real?!?) | 2 | Been trying to get oobabooga training to work but hitting a dead end and really losing hope; the solutions to the 'problems' seem worse than the problem...
So I started out using TheBloke's Llama-2-7b-chat-GPTQ and hit a dead end getting it to train because of a weird known issue that requires me to add --monkey-patch for it to work properly (as reported via the console). It functions in chatbot mode, btw.
So I poked around in the app directory and looked inside start-windows.bat... I can't find the python command that launches whatever is being launched for training, so I can't add the monkey patch there.
All the while, every solution I find is pages of commands I have to run and things I have to download to get it to work, maybe.
Does anyone know if this actually works, or can someone suggest a model that will work on my hardware with the oobabooga web UI?
At this point idgaf I just want to train something, anything, so I can get on with my life.
If it matters I have a 3060 on the same rig that is completely ignored despite usage of the split vram field; I'm assuming I'm missing a launch option. *facepalm* | 2023-10-27T00:06:31 | https://www.reddit.com/r/LocalLLaMA/comments/17ham9m/oogabooga_training_with_8gb_card_and_windows_10/ | croholdr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ham9m | false | null | t3_17ham9m | /r/LocalLLaMA/comments/17ham9m/oogabooga_training_with_8gb_card_and_windows_10/ | false | false | self | 2 | null |
CMP for inferencing? | 3 | Has anyone had any luck using NVIDIA CMP mining cards for LLaMA? I'm seeing the CMP 40HX on Ali for 150 AUD. Wondering if it could be fun to tinker with. | 2023-10-26T23:30:20 | https://www.reddit.com/r/LocalLLaMA/comments/17h9vw4/cmp_for_inferencing/ | technovir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h9vw4 | false | null | t3_17h9vw4 | /r/LocalLLaMA/comments/17h9vw4/cmp_for_inferencing/ | false | false | self | 3 | null |
Fine-tuning on second A6000 with NVLink | 2 | Would adding a second A6000 card, connected with NVLink, allow me to fine-tune larger models?
Are there any tutorials or frameworks that might help with that? | 2023-10-26T23:29:06 | https://www.reddit.com/r/LocalLLaMA/comments/17h9ux8/finetuning_on_second_a6000_with_nvlink/ | Puzzleheaded-Fee5917 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h9ux8 | false | null | t3_17h9ux8 | /r/LocalLLaMA/comments/17h9ux8/finetuning_on_second_a6000_with_nvlink/ | false | false | self | 2 | null |
Tech Stack recommendations for running RAG locally on a MacBook with M2 | 9 | Wanting to run semantic RAG over a couple hundred thousand website URLs.
Which VDB? (LanceDB is what I am currently using, handling syncing between that and SQLite DBs.)
Do you recommend coding up a custom orchestration layer, or using some wrapper?
Is it worth it to try and get a Mistral 7B or QWEN 7B running or just use HF transformers? | 2023-10-26T22:33:35 | https://www.reddit.com/r/LocalLLaMA/comments/17h8ocg/tech_stack_recommendations_for_running_rag/ | Frequent-Let231 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h8ocg | false | null | t3_17h8ocg | /r/LocalLLaMA/comments/17h8ocg/tech_stack_recommendations_for_running_rag/ | false | false | self | 9 | null |
**QMoE:** A Scalable Algorithm for Sub-1-Bit Compression of Trillion-Parameter Mixture-of-Experts Architectures with acceptable degradation. *(by Institute of Science and Technology Austria)* - *SwitchTransformer-c2048 Model* | 52 |
>**Abstract:**
>Mixture-of-Experts (MoE) architectures offer a general solution to the high inference costs of large language models (LLMs) via sparse routing, bringing faster and more accurate models, at the cost of massive parameter counts. For example, the SwitchTransformer-c2048 model has 1.6 trillion parameters, requiring 3.2TB of accelerator memory to run efficiently, which makes practical deployment challenging and expensive. In this paper, we present a solution to this memory problem, in form of a new compression and execution framework called QMoE. Specifically, QMoE consists of a scalable algorithm which accurately compresses trillion-parameter MoEs to less than 1 bit per parameter, in a custom format co-designed with bespoke GPU decoding kernels to facilitate efficient end-to-end compressed inference, with minor runtime overheads relative to uncompressed execution. **Concretely, QMoE can compress the 1.6 trillion parameter SwitchTransformer-c2048 model to less than 160GB (20x compression, 0.8 bits per parameter) at only minor accuracy loss,** in less than a day on a single GPU. This enables, for the first time, the execution of a trillion-parameter model on affordable commodity hardware, like a single server with 4x NVIDIA A6000 or 8x NVIDIA 3090 GPUs, at less than 5% runtime overhead relative to ideal uncompressed inference.
.
.
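Quick sanity check on the headline number (my own arithmetic, not from the paper): 1.6×10^12 parameters × 0.8 bits/parameter = 1.28×10^12 bits ≈ 1.6×10^11 bytes = 160 GB, i.e. the reported 20x reduction from 3.2 TB checks out.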
>**Paper**: *https://arxiv.org/abs/2310.16795*
>(ISTA, October 2023)
.
> **Repo**: *https://github.com/ist-daslab/qmoe*
.
.
>**Full paper summary *(by Claude 2 100K)*:**
>
>
>The article presents QMoE, a new compression and execution framework for reducing the massive memory costs of Mixture-of-Expert (MoE) models. MoE architectures like the SwitchTransformer can have over 1 trillion parameters, requiring terabytes of GPU memory for efficient inference.
>
>QMoE consists of a scalable compression algorithm and custom GPU kernels for fast decoding. The compression algorithm, based on GPTQ, quantizes MoE weights to less than 1 bit per parameter with minimal accuracy loss. It is optimized to handle models 10-100x larger than prior work. The GPU kernels enable fast inference directly from the compressed format.
>
>Experiments on SwitchTransformer-c2048, with 1.6 trillion parameters, demonstrate:
>- Accurate quantization to less than 1 bit per parameter (0.8 bits) with only minor increase in validation loss, using a single GPU in less than a day.
>- Overall compression rate of 19.8x, reducing model size from 3.2TB to 158GB. Natural sparsity in quantized weights is exploited via a custom dictionary-based encoding scheme.
>- Efficient compressed inference on commodity GPUs with less than 5% slowdown relative to ideal uncompressed execution, which would require prohibitively large hardware.
>- Enables deploying massive MoEs on affordable hardware like a single server with 8 GPUs. Addresses key practical limitation of these models.
>Overall, QMoE provides an end-to-end solution to the extreme memory costs of large MoE models like SwitchTransformer-c2048. It enables accessible research and deployment of such models for the first time, on commodity hardware.
>
>
>Here are some additional key details about the QMoE method and results:
>- QMoE builds on top of the GPTQ quantization algorithm, but required novel optimizations to scale to trillion-parameter models. These include efficient activation offloading between CPU and GPU, optimized data structures, grouping experts for batched processing, and numerical robustness improvements.
>- Compression is performed directly on the pretrained models, without additional training. Only a modest amount of calibration data is required - 10K to 160K samples depending on model size.
>- The quantized models maintain accuracy not just on the training distribution (C4), but also on out-of-distribution datasets.
>- The compression rates achieved increase with model size. For example, SwitchTransformer-c2048 reaches 20x compression just for the expert layers. This is due to higher natural sparsity and weight distributions becoming closer to independent for larger matrices.
>- The decoding kernels are designed specifically for fast operation on GPUs. They utilize parallel decoding of rows, a shared dictionary, and fixed-length codewords to enable simultaneous extraction by a GPU warp.
>- On matrix-vector benchmarks, the kernels outperform cuBLAS bfloat16 operations by up to 35%, despite having to decompress weights.
>- End-to-end generative inference remains efficient because decoder queries are sparse, so most expert weights don't need to be fetched.
>
>In summary, both the compression algorithm and format as well as the corresponding kernels are specially co-designed to work at the trillion-parameter scale. The result is the first demonstration of practical deployment and research for such massive models.
*(**Note**: summary generated by Claude 2 is intended to be just an "introduction" and as a quick overview... We all know that LLM can easily hallucinate and lose coherence while handling long context)* | 2023-10-26T21:53:38 | https://www.reddit.com/r/LocalLLaMA/comments/17h7rhw/qmoe_a_scalable_algorithm_for_sub1bit_compression/ | Distinct-Target7503 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h7rhw | false | null | t3_17h7rhw | /r/LocalLLaMA/comments/17h7rhw/qmoe_a_scalable_algorithm_for_sub1bit_compression/ | false | false | self | 52 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |
Pretraining on the Test Set Is All You Need | 71 | 2023-10-26T21:49:20 | https://arxiv.org/abs/2309.08632 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 17h7np2 | false | null | t3_17h7np2 | /r/LocalLLaMA/comments/17h7np2/pretraining_on_the_test_set_is_all_you_need/ | false | false | 71 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | ||
What coding LLM is the best today? | 163 | So I know this has already been asked, but everything is changing so fast it's hard to keep up. As of today, 26/10/2023, I'm still using GPT4All, but I find it lacking in many places when it comes to coding.
What has your experience been? Thank you. | 2023-10-26T21:43:31 | https://www.reddit.com/r/LocalLLaMA/comments/17h7j4h/what_coding_llm_is_the_best_today/ | Farwellz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h7j4h | false | null | t3_17h7j4h | /r/LocalLLaMA/comments/17h7j4h/what_coding_llm_is_the_best_today/ | false | false | self | 163 | null |
How to use llama.cpp with other code? | 1 | Running a model on llama.cpp (on my Mac M2) prints a lot of logs along with the actual completion. So when I run the executable from outside code (say Python) and capture the output, I get the "metadata" along with the main prompt+completion. Is there a way to switch off everything except the actual completion?
I know llama-cpp-python and other wrappers exist for using llama.cpp, but can you guys suggest how to call llama.cpp from outside code without these wrappers?
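What I'm doing for now, in case it helps anyone (a rough sketch — it relies on my observation that the main binary seems to write its load/timing logs to stderr while the completion goes to stdout, which may not hold for every build):

    import subprocess

    # Keep stdout (the completion) and discard stderr (the logs).
    result = subprocess.run(
        ["./main", "-m", "models/model.gguf", "-p", "Hello", "-n", "64"],
        stdout=subprocess.PIPE,
        stderr=subprocess.DEVNULL,
        text=True,
    )
    # Note: stdout still echoes the prompt, which needs stripping.
    completion = result.stdout
    print(completion)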
Thanks
Happy Halloween | 2023-10-26T21:43:29 | https://www.reddit.com/r/LocalLLaMA/comments/17h7j3k/how_to_use_llamacpp_with_other_code/ | scholorboy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h7j3k | false | null | t3_17h7j3k | /r/LocalLLaMA/comments/17h7j3k/how_to_use_llamacpp_with_other_code/ | false | false | self | 1 | null |
Other than generating underage porn, under US law, is there anything illegal at all for AI to do? | 1 | [removed] | 2023-10-26T21:11:39 | https://www.reddit.com/r/LocalLLaMA/comments/17h6sm8/other_than_generating_underage_porn_under_us_law/ | SoylentRox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h6sm8 | false | null | t3_17h6sm8 | /r/LocalLLaMA/comments/17h6sm8/other_than_generating_underage_porn_under_us_law/ | false | false | self | 1 | null |
OobaBooga not detecting text file? | 1 | Hello! I'm trying to train a LoRA from a text file containing a ton of my chat logs, to try to replicate myself. I'm using the Raw text file tab, but it doesn't detect the file; it keeps displaying "None" even when I refresh. I placed it in the dataset folder, and it's just named result.txt... Am I missing something?
Thank you ! | 2023-10-26T21:10:44 | https://www.reddit.com/r/LocalLLaMA/comments/17h6rra/oobabooga_not_detecting_text_file/ | Kriima | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h6rra | false | null | t3_17h6rra | /r/LocalLLaMA/comments/17h6rra/oobabooga_not_detecting_text_file/ | false | false | self | 1 | null |
Best Model for story writing that is uncensored and fits on a 3090 ? | 0 | [removed] | 2023-10-26T21:04:20 | https://www.reddit.com/r/LocalLLaMA/comments/17h6mhs/best_model_for_story_writing_that_is_uncensored/ | fastinguy11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h6mhs | false | null | t3_17h6mhs | /r/LocalLLaMA/comments/17h6mhs/best_model_for_story_writing_that_is_uncensored/ | false | false | self | 0 | null |
Token generation speed is wildly unpredictable on my machine | 1 | I'm using ollama. I give it a prompt and start seeing the tokens come in. During the generation of a single response, it will sometimes fluctuate between taking as much as 1 second per token and as little as 1 second per paragraph.
Just now I waited a few minutes for a few hundred words to generate. And then when I hit regenerate it gave me a different response as expected but this time it spat out the full 400 words in under 2 seconds.
Is this normal? | 2023-10-26T20:54:13 | https://www.reddit.com/r/LocalLLaMA/comments/17h6eda/token_generation_speed_is_wildly_unpredictable_on/ | MintySkyhawk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h6eda | false | null | t3_17h6eda | /r/LocalLLaMA/comments/17h6eda/token_generation_speed_is_wildly_unpredictable_on/ | false | false | self | 1 | null |
Self hosted setup for progressing LLM learning | 1 | Want to take my learning beyond cloud-hosted GPT. I currently have a homelab with a 5900X and 128 GB RAM. I have no idea what possibilities I have for playing with the latest 7B/70B models for completion, fine-tuning, and benchmarking.
With various quant methods, are models up to 70B possible with a single 3090? Is 2x3090 needed for inference, or does it just give better completion quality and speed? How does a 4090 compare? Thanks for any insights to help me figure out a way forward with a decent lab setup | 2023-10-26T20:43:45 | https://www.reddit.com/r/LocalLLaMA/comments/17h65zn/self_hosted_setup_for_progressing_llm_learning/ | LostGoatOnHill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h65zn | false | null | t3_17h65zn | /r/LocalLLaMA/comments/17h65zn/self_hosted_setup_for_progressing_llm_learning/ | false | false | self | 1 | null |
CasualLM 14b seems to be quite good | 38 | Testing CasualLM 14b right now, and it seems to actually be quite good. In logic tests so far, it's done better than pretty much all the other 7B-13B models I've tested. However, this is pretty anecdotal evidence; my testing is pretty... casual. I just picked tests I've seen others try in this sub and compared my results to theirs. For example, I went through some questions from here [https://docs.google.com/spreadsheets/d/1NgHDxbVWJFolq8bLvLkuPWKC7i\_R6I6W/edit#gid=2011456595](https://docs.google.com/spreadsheets/d/1NgHDxbVWJFolq8bLvLkuPWKC7i_R6I6W/edit#gid=2011456595) that a lot of other 13B models seem to be getting wrong, and this one answered pretty much all the ones I tried without an issue. I did only test around 10 questions before getting lazy, though.
Im using the Q5\_1 model with the rocm fork of kobold.cpp, set in instruct mode in chatml format. No other settings were adjusted from defaults. There may be better settings for this model that Im not using. | 2023-10-26T20:06:38 | https://www.reddit.com/r/LocalLLaMA/comments/17h5bsm/casuallm_14b_seems_to_be_quite_good/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h5bsm | false | null | t3_17h5bsm | /r/LocalLLaMA/comments/17h5bsm/casuallm_14b_seems_to_be_quite_good/ | false | false | self | 38 | {'enabled': False, 'images': [{'id': 'cNh0mvt7lp2zKDMjb08xyk6I-MV102lxjVjvT3lkOrw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/NodK_v2p5InKJuaq4awi8tmoCAPa-MOGY8Oy_CHAJAE.jpg?width=108&crop=smart&auto=webp&s=07e2e60cd57f23e97db87c2ebce6554fe7c9d869', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/NodK_v2p5InKJuaq4awi8tmoCAPa-MOGY8Oy_CHAJAE.jpg?width=216&crop=smart&auto=webp&s=64c9dcc998971f17374723eaef0caabb74428ccc', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/NodK_v2p5InKJuaq4awi8tmoCAPa-MOGY8Oy_CHAJAE.jpg?width=320&crop=smart&auto=webp&s=7c4d6467f4ad136d4e41e62e895e1b3a2430999f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/NodK_v2p5InKJuaq4awi8tmoCAPa-MOGY8Oy_CHAJAE.jpg?width=640&crop=smart&auto=webp&s=ad08646400bc277264fee4250008b827b97b1cba', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/NodK_v2p5InKJuaq4awi8tmoCAPa-MOGY8Oy_CHAJAE.jpg?width=960&crop=smart&auto=webp&s=7c080e8f235b84d11595e5ce4669487269b2da15', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/NodK_v2p5InKJuaq4awi8tmoCAPa-MOGY8Oy_CHAJAE.jpg?width=1080&crop=smart&auto=webp&s=b4f883cd2c4b6f44a0003d5d08eb205a35295d05', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/NodK_v2p5InKJuaq4awi8tmoCAPa-MOGY8Oy_CHAJAE.jpg?auto=webp&s=f7a122bfc74aadbba41ea003356f40c017f87de5', 'width': 1200}, 'variants': {}}]} |
So am I not going to be able to use this because I have an AMD card? | 1 | [removed] | 2023-10-26T19:58:04 | https://www.reddit.com/r/LocalLLaMA/comments/17h54d9/so_am_i_not_going_to_be_able_to_use_this_because/ | throwaway899495 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h54d9 | false | null | t3_17h54d9 | /r/LocalLLaMA/comments/17h54d9/so_am_i_not_going_to_be_able_to_use_this_because/ | false | false | self | 1 | null |
Speculative Decoding in Exllama v2 and llama.cpp comparison | 26 | We discussed speculative decoding (SD) in the [previous thread here](https://www.reddit.com/r/LocalLLaMA/comments/17f4y11/comment/k6g6mkj/?utm_source=share&utm_medium=web2x&context=3). For those who are not aware of this feature, it allows the LLM loaders to use a smaller "draft" model to help predict tokens for a larger model. In that thread, someone asked for tests of speculative decoding for both Exllama v2 and llama.cpp. I generally only run models in GPTQ, AWQ or exl2 formats, but was interested in doing the exl2 vs. llama.cpp comparison.
The tests were run on my 2x 4090, 13900K, DDR5 system. You can see the screen captures of the terminal output of both below. If someone has experience with making llama.cpp speculative decoding work better, please share.
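For anyone new to the idea, here is a toy sketch of the acceptance loop (my own greedy simplification, not either library's actual implementation — real SD verifies all draft positions in one batched target pass):

    def speculative_step(target_next, draft_next, ctx, k=4):
        # target_next/draft_next: toy functions returning each model's
        # greedy next token for a given token sequence (ctx is a list).
        proposed = []
        for _ in range(k):
            proposed.append(draft_next(ctx + proposed))  # cheap draft passes

        accepted = []
        for i, tok in enumerate(proposed):
            want = target_next(ctx + proposed[:i])
            accepted.append(want)        # matching draft tokens come "free"
            if want != tok:              # first mismatch: keep the fix, stop
                break
        return ctx + accepted            # one verification yields 1..k tokens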
**Exllama v2**
**Model: Xwin-LM-70B-V0.1-4.0bpw-h6-exl2**
**Draft Model: TinyLlama-1.1B-1T-OpenOrca-GPTQ**
Performance can be highly variable but goes from \~20 t/s without SD to 40-50 t/s with SD.
## No SD:
Prompt processed in 0.02 seconds, 4 tokens, 200.61 tokens/second
Response generated in 10.80 seconds, 250 tokens, 23.15 tokens/second
## With SD:
Prompt processed in 0.03 seconds, 4 tokens, 138.80 tokens/second
Response generated in 5.10 seconds, 250 tokens, 49.05 tokens/second
**llama.cpp**
**Model: xwin-lm-70b-v0.1.Q4\_K\_M.gguf**
**Draft Model: xwin-lm-7b-v0.1.Q4\_K\_M.gguf**
Both the model and the draft model were fully offloaded to GPU VRAM. But, I was not able to see any speedups; I am not sure if I'm doing something fundamentally wrong here. I was also not able to use TinyLlama as the draft model with llama.cpp and had to go with a smaller parameter version of the primary model. I'm getting around 16 t/s without SD and it slows down with SD.
## No SD:
$ ./main -m /models/xwin-lm-70b-v0.1.Q4_K_M.gguf -ngl 100 -p "Once upon a time" -n 250
[...]
llama_print_timings: load time = 5263.02 ms
llama_print_timings: sample time = 30.39 ms / 250 runs ( 0.12 ms per token, 8225.58 tokens per second)
llama_print_timings: prompt eval time = 224.68 ms / 5 tokens ( 44.94 ms per token, 22.25 tokens per second)
llama_print_timings: eval time = 15362.62 ms / 249 runs ( 61.70 ms per token, 16.21 tokens per second)
llama_print_timings: total time = 15652.18 ms
## With SD:
$ ./speculative -ngl 100 -ngld 100 -m /models/models/xwin-lm-70b-v0.1.Q4_K_M.gguf -p "Once upon a time" -n 250 --model-draft /models/models/xwin-lm-7b-v0.1.Q4_K_M.gguf
[...]
encoded 5 tokens in 0.328 seconds, speed: 15.249 t/s
decoded 252 tokens in 24.741 seconds, speed: 10.185 t/s
n_draft = 16
n_predict = 252
n_drafted = 126
n_accept = 98
accept = 77.778%
draft:
llama_print_timings: load time = 9406.89 ms
llama_print_timings: sample time = 34.91 ms / 279 runs ( 0.13 ms per token, 7992.44 tokens per second)
llama_print_timings: prompt eval time = 48.40 ms / 5 tokens ( 9.68 ms per token, 103.30 tokens per second)
llama_print_timings: eval time = 4620.30 ms / 280 runs ( 16.50 ms per token, 60.60 tokens per second)
llama_print_timings: total time = 25069.11 ms
target:
llama_print_timings: load time = 5261.68 ms
llama_print_timings: sample time = 31.63 ms / 252 runs ( 0.13 ms per token, 7968.13 tokens per second)
llama_print_timings: prompt eval time = 15104.41 ms / 200 tokens ( 75.52 ms per token, 13.24 tokens per second)
llama_print_timings: eval time = 5157.77 ms / 84 runs ( 61.40 ms per token, 16.29 tokens per second)
llama_print_timings: total time = 34487.78 ms
| 2023-10-26T19:42:41 | https://www.reddit.com/r/LocalLLaMA/comments/17h4rqz/speculative_decoding_in_exllama_v2_and_llamacpp/ | lone_striker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h4rqz | false | null | t3_17h4rqz | /r/LocalLLaMA/comments/17h4rqz/speculative_decoding_in_exllama_v2_and_llamacpp/ | false | false | self | 26 | null |
QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models - Institute of Science and Technology Austria (ISTA) 2023 - Can compress the 1.6 trillion parameter SwitchTransformer-c2048 model to less than 160GB (20x compression, 0.8 bits per parameter) at only minor accuracy loss! | 1 | [removed] | 2023-10-26T19:20:16 | https://www.reddit.com/r/LocalLLaMA/comments/17h490i/qmoe_practical_sub1bit_compression_of/ | Singularian2501 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h490i | false | null | t3_17h490i | /r/LocalLLaMA/comments/17h490i/qmoe_practical_sub1bit_compression_of/ | false | false | 1 | null | |
QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models - Institute of Science and Technology Austria (ISTA) 2023 - Can compress the 1.6 trillion parameter SwitchTransformer-c2048 model to less than 160GB (20x compression, 0.8 bits per parameter) at only minor accuracy loss! | 1 | [removed] | 2023-10-26T19:01:27 | https://www.reddit.com/r/LocalLLaMA/comments/17h3tic/qmoe_practical_sub1bit_compression_of/ | Singularian2501 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h3tic | false | null | t3_17h3tic | /r/LocalLLaMA/comments/17h3tic/qmoe_practical_sub1bit_compression_of/ | false | false | 1 | null | |
Xwin keeps talking in Chinese | 10 | I wanted to explore Xwin given how much everyone has talked about it. However, when I ask it programming questions and some general questions, it keeps trying to respond in Chinese, even though the entire conversation is in English and I explicitly tell it not to.
Model: Xwin 70B AWQ
Engine: VLLM
Prompt format: The one listed on their page | 2023-10-26T18:55:54 | https://www.reddit.com/r/LocalLLaMA/comments/17h3oxj/xwin_keeps_talking_in_chinese/ | a_slay_nub | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h3oxj | false | null | t3_17h3oxj | /r/LocalLLaMA/comments/17h3oxj/xwin_keeps_talking_in_chinese/ | false | false | self | 10 | null |
[Question] Running a local chat on a GTX 1660, is it possible to get a solo offline RPG with generated art? How? | 1 | All the elements are there, so I'm curious about having an adventure with an AI as the game master.
What program (oobabooga? kobold?) do I need, and what model will best run on my GTX 1660 Super?
Will I be able to type in a character ("a bald monk") and have AI draw an image? Will I enter a tavern and the AI generates the guild? Will I enter a cave and the AI generates a gelatinous cube?
All these elements exist. Images of RPG races are LoRAs on Civitai, and RPG chat is available on Github.
So what is the step-by-step procedure to get that up and running here in October 2023? | 2023-10-26T18:29:22 | https://www.reddit.com/r/LocalLLaMA/comments/17h33bx/question_running_a_local_chat_on_a_gtx_1660_is_it/ | tethercat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h33bx | false | null | t3_17h33bx | /r/LocalLLaMA/comments/17h33bx/question_running_a_local_chat_on_a_gtx_1660_is_it/ | false | false | self | 1 | null |
LLama c++ vs Pytorch/Onnx for inference | 3 | Hi,
I wanted to understand if it's possible to use llama.cpp for running inference on a 7B model on CPUs, at scale, in production settings. My requirement is to generate 4-10 tokens per request.
I have a good understanding of the Hugging Face + PyTorch ecosystem and am fairly adept at fine-tuning my own models (NLP in general), but I'm not at all familiar with the llama.cpp ecosystem.
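From what I've pieced together so far, the workflow would be roughly the following (untested on my side; the script names and paths are just what I've seen referenced, so treat them as placeholders):

```python
# 1) convert the fine-tuned HF checkpoint to GGUF:  python convert.py ./my-7b-hf
# 2) quantize it:  ./quantize ./my-7b-hf/ggml-model-f16.gguf my-7b.Q4_K_M.gguf Q4_K_M
# 3) run it from Python via the llama-cpp-python bindings:
from llama_cpp import Llama

llm = Llama(model_path="my-7b.Q4_K_M.gguf", n_ctx=2048, n_threads=8)
out = llm("Classify the sentiment: 'great product!' ->", max_tokens=10)
print(out["choices"][0]["text"])
```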
My understanding is that it's an implementation of the transformer decoder in C++ and is only used for inference. So if I fine-tune a 7B model, can it be accelerated enough using llama.cpp and its various quantization formats, with little to no drop in accuracy, to be viable on CPUs? | 2023-10-26T18:02:49 | https://www.reddit.com/r/LocalLLaMA/comments/17h2hyn/llama_c_vs_pytorchonnx_for_inference/ | krumb0y | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h2hyn | false | null | t3_17h2hyn | /r/LocalLLaMA/comments/17h2hyn/llama_c_vs_pytorchonnx_for_inference/ | false | false | self | 3 | null |
Phind Codellama with vllm over 4k Tokens with AWQ | 3 | I tried Phind CodeLlama v2 with more than 4096 tokens; however, vLLM raises an error saying that only 4096 tokens are allowed.
I tried raising max_new_tokens, without any effect.
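The workaround I want to try is overriding the context length at engine construction, something like the sketch below (the max_model_len and quantization kwargs are my best reading of the vLLM API, and the repo name is a placeholder, so correct me if I'm off):

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Phind-CodeLlama-34B-v2-AWQ",  # placeholder repo name
    quantization="awq",
    max_model_len=16384,  # request the full 16k context up front
)
params = SamplingParams(max_tokens=2048, temperature=0.1)
out = llm.generate(["def quicksort(arr):"], params)
print(out[0].outputs[0].text)
```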
AFAIK CodeLlama is trained for 16k context and the Phind fine-tune used 4k. Shouldn't it be possible to get 16k tokens out of that model? | 2023-10-26T17:24:52 | https://www.reddit.com/r/LocalLLaMA/comments/17h1nsb/phind_codellama_with_vllm_over_4k_tokens_with_awq/ | eggandbacon_0056 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h1nsb | false | null | t3_17h1nsb | /r/LocalLLaMA/comments/17h1nsb/phind_codellama_with_vllm_over_4k_tokens_with_awq/ | false | false | self | 3 | null |
Looking for some advice with my setup and model implementation | 1 | [removed] | 2023-10-26T17:22:15 | https://www.reddit.com/r/LocalLLaMA/comments/17h1lse/looking_for_some_advice_with_my_setup_and_model/ | MannowLawn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h1lse | false | null | t3_17h1lse | /r/LocalLLaMA/comments/17h1lse/looking_for_some_advice_with_my_setup_and_model/ | false | false | self | 1 | null |
Finetuning LLM on domain. | 2 | I'm working on streamlining the process of updating large sets of e-commerce documents, particularly product descriptions. Right now, these descriptions come in text files, which are then manually read and entered into various fields in our system. For instance, a field for 'Dimensions' in our web app would be populated using the relevant data from the text file. There are hundreds of such fields that require manual input.
I'm looking to automate this process through fine-tuning.
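For reference, here's the sample format I've been experimenting with (field names and the separator style are my own placeholders):

```python
# One training sample; "context" is the raw product description text file.
sample = {
    "context": "Acme Bookshelf. Solid oak construction. Dimensions: 2x2 m. Weight: 40 kg.",
    "question": "What are the dimensions?",
    "answer": "2x2 m",
}

# Flattened into a single training string:
prompt = (
    f"### Context:\n{sample['context']}\n\n"
    f"### Question:\n{sample['question']}\n\n"
    f"### Answer:\n{sample['answer']}"
)
print(prompt)
```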
**My Question:**
1. I've compiled a dataset in a Question & Answer format. For example, the question would be "What are the dimensions?" and the answer would be "2x2". However, I'm concerned that training the model in this way might ignore the context in which the question is asked. Should I also include the surrounding context along with the Q&A pairs for effective training? What's the best approach to tackle this? | 2023-10-26T17:20:48 | https://www.reddit.com/r/LocalLLaMA/comments/17h1kn7/finetuning_llm_on_domain/ | dfnathan6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h1kn7 | false | null | t3_17h1kn7 | /r/LocalLLaMA/comments/17h1kn7/finetuning_llm_on_domain/ | false | false | self | 2 | null |
Your feedback needed: Our tool for simplifying dataset creation | 1 | Hey all! Been a lurker of this community for months now and I'm more than happy to show a tool that my co-founder and I have built. We wanted an easy way to collaborate together on creating fine-tuning datasets for LLMs.
We're inspired by recent research that shows the quality of the dataset matters more than the quantity, specifically diverse datasets where multiple people collaborate on them.
So the idea is a simple-to-use interface where you can work on these datasets, rather than having to fight with jsonl files.
You can try it out at [https://www.finetunedb.com](https://www.finetunedb.com)
It's free to use and for now, data is stored in our cloud servers but we're adding the ability to store it locally also in the coming update.
For now, we only support ChatGPT but are working on including open-source models like Llama very soon.
It's still early, so you might encounter some bugs; we'd like to apologize in advance.
We're happy to help anyone get set up and we’d love to hear your feedback as we spent a lot of time on this.
Let me know what you think! | 2023-10-26T17:11:11 | https://www.reddit.com/r/LocalLLaMA/comments/17h1d8z/your_feedback_needed_our_tool_for_simplifying/ | zeJaeger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17h1d8z | false | null | t3_17h1d8z | /r/LocalLLaMA/comments/17h1d8z/your_feedback_needed_our_tool_for_simplifying/ | false | false | self | 1 | null |
Recommended hyperparameters to fine-tune mistral 7b | 3 | Hey, I noticed that mistral-7b has quite a bit different weights distribution and loss dynamics, it looks like they pre-trained it with some loss scaling tricks (maybe z-loss?).
Because of this, I found that it is better to use a smaller fine-tuning learning rate than for LLaMA. Has anyone noticed that too? If so, any other tricks you can share? | 2023-10-26T16:07:16 | https://www.reddit.com/r/LocalLLaMA/comments/17gzxbs/recommended_hyperparameters_to_finetune_mistral_7b/ | 1nadequacy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gzxbs | false | null | t3_17gzxbs | /r/LocalLLaMA/comments/17gzxbs/recommended_hyperparameters_to_finetune_mistral_7b/ | false | false | self | 3 | null |
Need advice on a custom language model project | 1 | I want to create a tool for converting—text instructions into technical instructions structured in a specific format. I am looking for a custom language model suitable for that purpose, and the language model should be able to understand everyday communication in English, identify the instructions, and generate strings that are structured in a specified format. Can you suggest some language models that would be easier to customize for this purpose? | 2023-10-26T15:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/17gz7oa/need_advice_on_a_custom_language_model_project/ | seb36626 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gz7oa | false | null | t3_17gz7oa | /r/LocalLLaMA/comments/17gz7oa/need_advice_on_a_custom_language_model_project/ | false | false | self | 1 | null |
Mistral 7B vs. Llama 2 | 0 | 2023-10-26T14:47:33 | https://www.lemonfox.ai/tutorials/mistral-vs-llama | ingojoseph | lemonfox.ai | 1970-01-01T00:00:00 | 0 | {} | 17gy78a | false | null | t3_17gy78a | /r/LocalLLaMA/comments/17gy78a/mistral_7b_vs_llama_2/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'wZ_Il6cqeB4ZRvNUasPjs5limfkidHwBziD_nGiz7js', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/15R5WT7TV09vGINFDfMIsL_kHXV6wbx4IoZZNGeLIhE.jpg?width=108&crop=smart&auto=webp&s=0f160c43e25800c051ffd7621eb65e91215d8488', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/15R5WT7TV09vGINFDfMIsL_kHXV6wbx4IoZZNGeLIhE.jpg?width=216&crop=smart&auto=webp&s=8d9aa72a9d930536c42bcba95fad2943302a20ed', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/15R5WT7TV09vGINFDfMIsL_kHXV6wbx4IoZZNGeLIhE.jpg?width=320&crop=smart&auto=webp&s=6475c5371a9f97dcd6338e8ae7da1284f55e8758', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/15R5WT7TV09vGINFDfMIsL_kHXV6wbx4IoZZNGeLIhE.jpg?width=640&crop=smart&auto=webp&s=8a2d9d227cf574c65f2df17b7f039ea9887f9871', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/15R5WT7TV09vGINFDfMIsL_kHXV6wbx4IoZZNGeLIhE.jpg?width=960&crop=smart&auto=webp&s=465f754ee972950580b8d4ea9acdf570b936aed1', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/15R5WT7TV09vGINFDfMIsL_kHXV6wbx4IoZZNGeLIhE.jpg?width=1080&crop=smart&auto=webp&s=6f9b84251661dbbe2a172107f1bd41eed0e8b40e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/15R5WT7TV09vGINFDfMIsL_kHXV6wbx4IoZZNGeLIhE.jpg?auto=webp&s=4371ebbc3f7c08474a8662b7f25869f0bfef4c70', 'width': 1200}, 'variants': {}}]} | ||
Multiple GPU settings using KoboldCPP | 5 | I'm currently running a rtx 4090 24gb, 3090 24gb, i9-13900, 96gb ram.
I normally use LM Studio since I like the interface, but I can't understand why it only seems to use 1 GPU when I look at Task Manager.
Then I went back to KoboldCPP and tried running various models. I'm uncertain whether I even understand the settings. For example:
70B model, CuBLAS, GPU ID (ALL), use QuantMatMul, GPU layers 83, threads 15, BLAS batch size 512, context size 8192. This allows me to run the model; however, I quickly see that sometimes I crash it, or it gives very slow responses. Additionally, in Task Manager I see it all loaded onto 1 GPU, pushing it to 90%+ utilization.
I attempted to change the Tensor Split setting (to 1,1 - 5,5 - 50,50) but am unable to even load the model with any change to this setting.
Am I misunderstanding this? How can I split the load between my 2 GPUs? | 2023-10-26T14:41:43 | https://www.reddit.com/r/LocalLLaMA/comments/17gy2tz/multiple_gpu_settings_using_koboldcpp/ | DominicanGreg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gy2tz | false | null | t3_17gy2tz | /r/LocalLLaMA/comments/17gy2tz/multiple_gpu_settings_using_koboldcpp/ | false | false | self | 5 | null |
I created an app that helps with RAG and indexes documents into vector databases at scale | 2 | Hi everyone, I'm trying to solve the production issues of RAG through Turbine. Turbine syncs raw data to vector databases in a scalable, highly parallelized, and fault-tolerant manner. It lets you create pipelines to sync data from any source to any vector database. Pipelines are fully configurable and let you bring your own data source, embedding model, and vector database.
The primary aim of Turbine is to make RAG easier, especially the retrieval part. When you have a large number of documents, indexing them all and ensuring that the vector database is up to date is always a challenge. Soon enough, I also plan to add advanced RAG patterns such as hierarchically splitting the documents, generating metadata for each chunk, etc.
Check out Turbine at https://useturbine.com. It's free for early adopters to try at https://console.useturbine.com.
Also, I'd love to talk to you if you are building RAG apps or any app utilizing LLMs. Please let me know in the comments or DMs. I'd like to know how you are doing RAG in production and your challenges. | 2023-10-26T14:33:06 | https://www.reddit.com/r/LocalLLaMA/comments/17gxw4g/i_created_an_app_that_helps_with_rag_and_indexes/ | SkullTech101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gxw4g | false | null | t3_17gxw4g | /r/LocalLLaMA/comments/17gxw4g/i_created_an_app_that_helps_with_rag_and_indexes/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'Tz13KDd4xpf3H0IYZDsWjvhOLvtsr_zBQwHzxmpwkyE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GuNPOnO61QwmuR8nxtbwm2xo05i2A_PXX8jduxTmWVA.jpg?width=108&crop=smart&auto=webp&s=c7ef8da43d027659b944c119244f46d35f69fa14', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GuNPOnO61QwmuR8nxtbwm2xo05i2A_PXX8jduxTmWVA.jpg?width=216&crop=smart&auto=webp&s=05c9385a7fb025830d195165154000206dc3fcb2', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/GuNPOnO61QwmuR8nxtbwm2xo05i2A_PXX8jduxTmWVA.jpg?width=320&crop=smart&auto=webp&s=aca8d6e7644013d8376293b852ffe7f1e80f7f06', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/GuNPOnO61QwmuR8nxtbwm2xo05i2A_PXX8jduxTmWVA.jpg?width=640&crop=smart&auto=webp&s=22b0c1d8c84a7982ef2dce6250c94f4d2790312e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/GuNPOnO61QwmuR8nxtbwm2xo05i2A_PXX8jduxTmWVA.jpg?width=960&crop=smart&auto=webp&s=883f246eed78c3890b9d99dd818e13e8c6618668', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/GuNPOnO61QwmuR8nxtbwm2xo05i2A_PXX8jduxTmWVA.jpg?width=1080&crop=smart&auto=webp&s=3613714df1cbc9fbfdad05ccc1858800032645c9', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/GuNPOnO61QwmuR8nxtbwm2xo05i2A_PXX8jduxTmWVA.jpg?auto=webp&s=ab7a818a9aef44615c7af25750157f112cd1c488', 'width': 2400}, 'variants': {}}]} |
Has anyone successfully installed Microsoft LIDA with a local LLM? | 8 | I recently became aware of Microsoft LIDA, and I am wondering if it can be installed without an OpenAI API key, using an open-source model like LLaMA 2 or Mistral. Has anyone successfully done this yet with an open-source model, and if so, what are the steps? | 2023-10-26T14:05:07 | https://www.reddit.com/r/LocalLLaMA/comments/17gxb2h/has_anyone_successfully_installed_microsoft_lida/ | geodesic_jeff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gxb2h | false | null | t3_17gxb2h | /r/LocalLLaMA/comments/17gxb2h/has_anyone_successfully_installed_microsoft_lida/ | false | false | self | 8 | null |
Tiefighter is -excellent- for RP. Working well on my Tesla P40 also. | 27 | So, thanks to [u/WolframRavenwolf](https://www.reddit.com/user/WolframRavenwolf/) and his on-going LLM testing, I believe I've finally found a reliable and verbose model for RP in SillyTavern that exceeds the various Hermes Llama-1 models, and that I've actually gotten working well.
Wolfram suggested the Tiefighter model by u/henk717. I've tried the following versions:
[https://huggingface.co/TheBloke/LLaMA2-13B-Tiefighter-GGUF](https://huggingface.co/TheBloke/LLaMA2-13B-Tiefighter-GGUF) (Q4\_K\_M) on my P40 with the llama.cpp\_hf loader.
and
[https://huggingface.co/TheBloke/LLaMA2-13B-Tiefighter-GPTQ](https://huggingface.co/TheBloke/LLaMA2-13B-Tiefighter-GPTQ) on my 3090 with the exllama2\_hf loader. (Haven't tried AWQ yet.)
Without making any parameter changes in Ooga or Sillytavern, I get verbose and intelligent responses in Sillytavern chats. My prior go-to for this was the various llama1 Pygmalion Hermes variants, eventually settling on Chronos Hermes Superhot. I've tried various models since but just couldn't get anything verbose in ST without tweaking, even then I wasn't really satisfied with the results.
So this looks like it's going to be my go-to model for a while, but it is still a 4k context llama2 model, compared to the 6k of the prior Superhot models I was using. Does anyone know how feasible it would be to make extended context versions of Tiefighter? | 2023-10-26T13:57:23 | https://www.reddit.com/r/LocalLLaMA/comments/17gx4x9/tiefighter_is_excellent_for_rp_working_well_on_my/ | CasimirsBlake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gx4x9 | false | null | t3_17gx4x9 | /r/LocalLLaMA/comments/17gx4x9/tiefighter_is_excellent_for_rp_working_well_on_my/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': '4lCvOJW_km1Lga3qChskwBebjidqZVJYTxxcWUgb6KY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uWPqS6BOPBvqLQujIJiIZbh2rdIS7BFMdeLCHyGhy7U.jpg?width=108&crop=smart&auto=webp&s=58ee20822b3e6ff372cf9a1047c103a5d840687f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uWPqS6BOPBvqLQujIJiIZbh2rdIS7BFMdeLCHyGhy7U.jpg?width=216&crop=smart&auto=webp&s=b05755248050b8aac57bd79a33022acf6b343a33', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uWPqS6BOPBvqLQujIJiIZbh2rdIS7BFMdeLCHyGhy7U.jpg?width=320&crop=smart&auto=webp&s=3be1c2bc786d15cf208c037466c9ee3c72e3da28', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uWPqS6BOPBvqLQujIJiIZbh2rdIS7BFMdeLCHyGhy7U.jpg?width=640&crop=smart&auto=webp&s=c5d760e966c16104ebe5eb73b4ab4f611459139d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uWPqS6BOPBvqLQujIJiIZbh2rdIS7BFMdeLCHyGhy7U.jpg?width=960&crop=smart&auto=webp&s=01042c2274f9fc5445ee0c037125c3c572b695c0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uWPqS6BOPBvqLQujIJiIZbh2rdIS7BFMdeLCHyGhy7U.jpg?width=1080&crop=smart&auto=webp&s=1e599dbd825159e3767d27065889274eb21d708a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uWPqS6BOPBvqLQujIJiIZbh2rdIS7BFMdeLCHyGhy7U.jpg?auto=webp&s=7ff29d09455c5cd42f1d3a5070eefbba8620d480', 'width': 1200}, 'variants': {}}]} |
GPT4All now supports GGUF Models with Vulkan GPU Acceleration | 89 | 2023-10-26T13:45:25 | https://twitter.com/nomic_ai/status/1716895217998233836 | NomicAI | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 17gww1k | false | {'oembed': {'author_name': 'Nomic AI', 'author_url': 'https://twitter.com/nomic_ai', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Releasing GPT4All v2.5.0 with GGUF Support<br><br>- Runs <a href="https://twitter.com/MistralAI?ref_src=twsrc%5Etfw">@MistralAI</a> 7B Locally with Vulkan GPU Support<br>- Universal GPU Inference: Mistral, LLaMa, MPT, Falcon in Chat Client and Python<br>- Generate Embed4All Embeddings on GPU.<br>See release notes at <a href="https://t.co/XxCljkOykm">https://t.co/XxCljkOykm</a> <a href="https://t.co/LvV39Uf180">pic.twitter.com/LvV39Uf180</a></p>— Nomic AI (@nomic_ai) <a href="https://twitter.com/nomic_ai/status/1716895217998233836?ref_src=twsrc%5Etfw">October 24, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/nomic_ai/status/1716895217998233836', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_17gww1k | /r/LocalLLaMA/comments/17gww1k/gpt4all_now_supports_gguf_models_with_vulkan_gpu/ | false | false | 89 | {'enabled': False, 'images': [{'id': '5iYoJkpSb7n8IiBD7rccIxmW-x8iP5_jXuOl_rrN1nE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/x8hmy6W_3QHDEHj84gC3Pm9Yb_we_lwxgCs8YHTOtcE.jpg?width=108&crop=smart&auto=webp&s=8d7112f9689c74b327431810212868c4fc01a194', 'width': 108}], 'source': {'height': 87, 'url': 'https://external-preview.redd.it/x8hmy6W_3QHDEHj84gC3Pm9Yb_we_lwxgCs8YHTOtcE.jpg?auto=webp&s=c62dfd3d794c832c645cee0c17cdf6b506aa9a98', 'width': 140}, 'variants': {}}]} | ||
Is there a market for niche commercial vector databases for RAG? | 1 | [removed] | 2023-10-26T13:41:29 | https://www.reddit.com/r/LocalLLaMA/comments/17gwt4l/is_there_a_market_for_niche_commercial_vector/ | Natural-Sentence-601 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gwt4l | false | null | t3_17gwt4l | /r/LocalLLaMA/comments/17gwt4l/is_there_a_market_for_niche_commercial_vector/ | false | false | self | 1 | null |
TokenTally: Estimate Your LLM's Token Toll Across Various Platforms and Configurations | 6 | Hey all!
I present to you: **TokenTally**. The goal is to calculate the **minimum GPU requirements** for **Training** (fine-tuning and continued pre-training) and **Inference** for any LLM, along with a comparison of self-hosting these models across different GPU cloud platforms and optimizations. Eventually, the goal is to calculate tokens/$ for every possible combination of model, platform, and optimizations!
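As a rough illustration of the kind of estimate the tool automates (deliberately simplified: fp16 Llama-2-7B, ignoring activations and framework overhead):

```python
# Back-of-envelope for Llama-2-7B in fp16.
params = 7e9
weights_gb = params * 2 / 1e9                 # 2 bytes per fp16 weight -> 14.0 GB

n_layers, n_kv_heads, head_dim = 32, 32, 128  # Llama-2-7B uses full MHA (32 KV heads)
kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * 2  # K and V, fp16
kv_gb = kv_bytes_per_token * 4096 / 1e9       # a full 4k-token context -> ~2.1 GB

print(f"weights: {weights_gb:.1f} GB, KV cache at 4k context: {kv_gb:.1f} GB")
```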
I would like some feedback and contributions!
I'm looking for contributions: [https://github.com/adarshxs/TokenTally](https://github.com/adarshxs/TokenTally) | 2023-10-26T13:40:50 | https://www.reddit.com/r/LocalLLaMA/comments/17gwslo/tokentally_estimate_your_llms_token_toll_across/ | supersic1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gwslo | false | null | t3_17gwslo | /r/LocalLLaMA/comments/17gwslo/tokentally_estimate_your_llms_token_toll_across/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'yjV5ct9AVn5_f8CvpV7Ws5-jkMYbAXfGUJSNAWJEJXI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ozb2Az70ed4DjpWWSu3vUv2wFcav4sGz6Zp3WBvbesA.jpg?width=108&crop=smart&auto=webp&s=96c1c823ae958afaadace311a38087fe1403f14c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ozb2Az70ed4DjpWWSu3vUv2wFcav4sGz6Zp3WBvbesA.jpg?width=216&crop=smart&auto=webp&s=001fb518b29f2ae89e864bf84c9720ea5e2ea12a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ozb2Az70ed4DjpWWSu3vUv2wFcav4sGz6Zp3WBvbesA.jpg?width=320&crop=smart&auto=webp&s=cccd038bae6197d83c62388cc413468073fce054', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ozb2Az70ed4DjpWWSu3vUv2wFcav4sGz6Zp3WBvbesA.jpg?width=640&crop=smart&auto=webp&s=0b061c3e8186e9b99c25bfaa6b77e788863a3829', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ozb2Az70ed4DjpWWSu3vUv2wFcav4sGz6Zp3WBvbesA.jpg?width=960&crop=smart&auto=webp&s=e26a15b7c71468f546a575863c56f4642000bed2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ozb2Az70ed4DjpWWSu3vUv2wFcav4sGz6Zp3WBvbesA.jpg?width=1080&crop=smart&auto=webp&s=2fac35999bd15a63b6294d2e8de31594f8772f08', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ozb2Az70ed4DjpWWSu3vUv2wFcav4sGz6Zp3WBvbesA.jpg?auto=webp&s=38b368c5e85b84f238d379678105eabf84f4547b', 'width': 1200}, 'variants': {}}]} |
Is Google Colab Pro+ worth it for running 65B models for 25€/month? | 22 | Hello! I am new to all this, and I want to try out some of the nice models. While doing my research, I saw a lot of people frustrated with Google Colab who suggested using RunPod etc.
I have bought Colab Pro, but I never seem to get allocated an A100. I am thinking of upgrading to Colab Pro+, as Google has localized pricing here and it only costs 25€ for 500 compute units, so it would be a lot cheaper than a solution like RunPod.
Should I try upgrading to Pro+?
What I want is trying large models and maybe training models with my own dataset.
Any help appreciated! | 2023-10-26T13:20:35 | https://www.reddit.com/r/LocalLLaMA/comments/17gwdqy/is_google_colab_pro_worth_it_for_running_65b/ | Oguzcana | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gwdqy | false | null | t3_17gwdqy | /r/LocalLLaMA/comments/17gwdqy/is_google_colab_pro_worth_it_for_running_65b/ | false | false | self | 22 | null |
Problems with running 4090+3090 in the same system? | 1 | Howdy! I've had a 3090 for running local language models for a bit, but I'm finally ready to either get another 3090 or a 4090. Are there any gotchas to running larger models on two cards of different architectures? | 2023-10-26T13:07:10 | https://www.reddit.com/r/LocalLLaMA/comments/17gw45k/problems_with_running_40903090_in_the_same_system/ | mudlordprime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gw45k | false | null | t3_17gw45k | /r/LocalLLaMA/comments/17gw45k/problems_with_running_40903090_in_the_same_system/ | false | false | self | 1 | null |
Grouped Query Attention in LLaMA 70B v2 | 2 | Hey guys, after thousands of experiments with bigger LLaMA fine-tunes I'm somewhat sure the GQA mechanism might be your enemy and generate wrong answers, especially for math and such complex areas.
I'd like to use MHA (Multi-Head Attention) if possible. I'm just not sure: do I need to retrain the model completely, or is it possible to just increase the head count and KV size and proceed with the stock model as is? | 2023-10-26T12:39:44 | https://www.reddit.com/r/LocalLLaMA/comments/17gvl3m/grouped_query_attention_in_llama_70b_v2/ | Gatzuma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gvl3m | false | null | t3_17gvl3m | /r/LocalLLaMA/comments/17gvl3m/grouped_query_attention_in_llama_70b_v2/ | false | false | self | 2 | null |
Problem compiling llama.cpp with OpenBLAS support on Windows | 1 | [removed] | 2023-10-26T12:27:20 | https://www.reddit.com/r/LocalLLaMA/comments/17gvcjo/problem_compiling_llamacpp_with_openblas_support/ | Specialist-Ad2870 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gvcjo | false | null | t3_17gvcjo | /r/LocalLLaMA/comments/17gvcjo/problem_compiling_llamacpp_with_openblas_support/ | false | false | self | 1 | null |
Fine-tuning on long sequences | 3 | I'm currently working on a phi_1_5 (1.3B) fine-tune using the lfqa dataset, to get a small LLM that has interesting RAG properties.
The problem is that once formatted, my data samples are mostly around 2048 tokens long, which makes for rather long sequences.
As a result I'm having trouble fine-tuning that model on a 24 GB GPU.
So far I managed to make it kinda work by:
- using peft lora (R 128, alpha 64)
- loading the model in double 4bit quant
- limiting the seq_len to 1024
- using bfloat16 as default precision (GPU is a L4)
- reducing batch size to 1 and using gradient accumulation
But this forces sequences that are too short, and training takes very long. Plus I'd like to be able to scale to a 7B or 13B model later. Concretely, my current setup looks roughly like the sketch below.
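(Simplified sketch; the phi-1_5 LoRA target module names are my best guess from the remote code and may need adjusting.)

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1_5", quantization_config=bnb, trust_remote_code=True
)
lora = LoraConfig(
    r=128, lora_alpha=64,
    target_modules=["Wqkv", "out_proj"],  # best guess at phi-1_5's attention modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

args = TrainingArguments(
    output_dir="phi-lfqa",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    bf16=True,
)
```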
The options I can think of so far would be:
- CPU offload, but might be very slow
- applying flash attention to the frozen model to try reducing the ram usage due to long sequences
- I can also upgrade the machine (up to 4 L4)
- GPUs with more than 24 GB of VRAM are hard to get, so that's difficult
What would be your suggestions to make that work? | 2023-10-26T12:23:53 | https://www.reddit.com/r/LocalLLaMA/comments/17gva9z/finetuning_on_long_sequences/ | AdventurousSwim1312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gva9z | false | null | t3_17gva9z | /r/LocalLLaMA/comments/17gva9z/finetuning_on_long_sequences/ | false | false | self | 3 | null |
How to succinctly (fewest tokens) provide context in prompt for a story? | 1 | [removed] | 2023-10-26T11:45:40 | https://www.reddit.com/r/LocalLLaMA/comments/17gulje/how_to_succinctly_fewest_tokens_provide_context/ | innocuousAzureus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gulje | false | null | t3_17gulje | /r/LocalLLaMA/comments/17gulje/how_to_succinctly_fewest_tokens_provide_context/ | false | false | self | 1 | null |
Comprehensive Breakdown of Apple Mac Generation Speed w/ LLMs? | 13 | TL;DR;
1. Does RAM in a Mac translate *at all* to VRAM of a Windows GPU in terms of runnable LLMs?
2. How fast are different levels of M-series chips at generating text? Is there a comparison somewhere between M1 vs M2 chips and all the Ultra / Max variants?
Hey all, sorry if this has been addressed before, I did have a look on the Subreddit but the results didn't seem super relevant - especially with how far model compression has come and everything since last time anything like this was addressed.
I'm thinking of getting myself some kind of Mac for Christmas, having been a Windows man my whole life. For my original use case I was just going to get a refurbished, nearly bottom-of-the-line M1 MacBook Air. But then it occurred to me that a lot of the really awesome things I've seen on this Subreddit have actually been on Macs - I've seen plenty of people running quantized 70b models on their Macs. That then got me thinking of going for something beefier. Like, a lot beefier.
Problem is, I have absolutely no frame-of-reference for how well Macs run various models. With a Windows machine, the go-to is to run the models in VRAM - so the GPU is pretty much everything. M-series chips obviously don't have VRAM, they just have normal RAM. Is it equivalent anyway? Would a 32gb RAM Macbook Pro be able to properly run a 4b-quantised 70b model seeing as 24gb VRAM 4090s are able to? Could a 16gb RAM M2 Macbook Air run 33b models or heavily quantised 70b models?
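For what it's worth, here's the napkin math I've been doing, assuming the GPU can only claim roughly 70% of unified memory (I believe Metal reserves the rest by default, but treat that fraction as an assumption) and about 4.5 bits per weight for Q4-ish quants:

```python
def fits(ram_gb, params_b, bits_per_weight=4.5, usable_frac=0.7):
    """Model size vs. the share of unified memory the GPU can claim (both rough)."""
    model_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    return model_gb, ram_gb * usable_frac

for ram, params in [(16, 33), (32, 70), (64, 70)]:
    size, budget = fits(ram, params)
    print(f"{params}B at ~4.5 bpw: {size:.0f} GB model vs ~{budget:.0f} GB usable on a {ram} GB Mac")
```

By this math a 32 GB machine falls short of a 4-bit 70B, and 33B models only squeeze onto 16 GB with heavier quantization - but I'd love corrections from people actually running these.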
I'm also aware that M3 Macs are just around the corner, so I'll be keeping an eye on that too. My dream situation here is if there's an M3 Macbook Air capable of running 70b models locally. I can't help but fill my head with ideas of integrating both RAG and OpenInterpreter or something in this scenario and effectively having a low-power, pseudo-intelligent Macbook that I can take around with me and that can both learn about my life and answer Google-search level questions even without an internet connection. (The next step after that would be getting those Ray-Bans with the video camera and tiny speakers in them and linking them up to this hypothetical Macbook intelligence, leveraging some sort of BakLlava system to give me real-time info on the world around me... something effectively like those glasses in Spiderman Far From Home lolol)
Too bad I'm not a coder, so I'd be relying on copying off other people if I wanted to achieve those dreams \^\^
Anyway, so yeah, as posed in title - how do Macs work in terms of which LLMs they can actually even run? How fast are the various M-series chips at inference? Is there anywhere that I can read about this instead of bothering the whole subreddit with these questions?? | 2023-10-26T11:36:51 | https://www.reddit.com/r/LocalLLaMA/comments/17gugda/comprehensive_breakdown_of_apple_mac_generation/ | OldAd9530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gugda | false | null | t3_17gugda | /r/LocalLLaMA/comments/17gugda/comprehensive_breakdown_of_apple_mac_generation/ | false | false | self | 13 | null |
Most User-friendly Method of Running Local Models on Android Phones? | 13 | Title pretty much says it all - I've got an Android w/ Snapdragon Gen 2 w/ 12gb RAM. So far I've got MLC Chat working, and honestly it works shockingly well in my view, running at 2.5t/s. I know some people hate 2.5t/s generation speed on this subreddit but for me it's great if I'm on the underground and I just want something to pass the time that isn't a game on my phone. (I have tons of fun asking the default 7b Llama model to make me jokes about arbitrary things I've thought of that day, e.g. garlic bread, and watching it fail spectacularly)
My other use case is effectively using it as an offline Google search, whilst being fully aware that it'll never be trustworthy considering hallucinations are obviously still a major problem. Still, getting a more up-to-date 7b model would go some way to fixing this. My problem with MLC Chat then is that it only works for their pre-baked models; as far as I can figure out I can't load up something like a Mistral 7b model.
My ideal app right now then would be something that is akin to MLC-Chat, except I can load up something like the Dolphin-2.1 7b GGUF model and start chatting to that instead.
Taking a look around, it seems like Termux is my only option here. But I'd really rather not have to do command-line stuff. LLM Farm for Apple looks ideal to be honest, but unfortunately I do not yet have an Apple phone. Call me optimistic but I'm waiting for them to release an Apple folding phone before I swap over LOL
So yeah, TL;DR, anything like LLM Farm or MLC-Chat that'll let me chat w/ new 7b LLMs on my Android phone? | 2023-10-26T10:57:18 | https://www.reddit.com/r/LocalLLaMA/comments/17gttrb/most_userfriendly_method_of_running_local_models/ | OldAd9530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gttrb | false | null | t3_17gttrb | /r/LocalLLaMA/comments/17gttrb/most_userfriendly_method_of_running_local_models/ | false | false | self | 13 | null |
Can I run Llama 2 locally with very old CPU (i5-3470) and RTX 2060 Super 8gb via Python? | 2 | Hi everyone. I tried to run LLMs locally before via the *Oobabooga UI* and the *Ollama* CLI tool. However, I couldn't make them work at all, as my CPU is too ancient (i5-3470).
I have an RTX 2060 Super and I can code Python. I wonder if it's possible to run a local LLM completely via GPU. If I can, what do I need to look into in order to make it work? Thank you all very much! | 2023-10-26T09:17:35 | https://www.reddit.com/r/LocalLLaMA/comments/17gseqp/can_i_run_llama_2_locally_with_very_old_cpu/ | Uncensored4488 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gseqp | false | null | t3_17gseqp | /r/LocalLLaMA/comments/17gseqp/can_i_run_llama_2_locally_with_very_old_cpu/ | false | false | self | 2 | null |
Refact (1.6B) on Pixel 3! | 32 | 2023-10-26T08:26:10 | https://asciinema.org/a/617215 | Aaaaaaaaaeeeee | asciinema.org | 1970-01-01T00:00:00 | 0 | {} | 17grqz8 | false | null | t3_17grqz8 | /r/LocalLLaMA/comments/17grqz8/refact_16b_on_pixel_3/ | false | false | 32 | {'enabled': False, 'images': [{'id': 'YEd2UgrjDcDGKEmSG7qgKOxSOqZaK3de34fWn-N24QU', 'resolutions': [{'height': 121, 'url': 'https://external-preview.redd.it/8lFTHILrClG_JRmEP4kYO6Xpn_OUaAZ4lXk3lKVL_8Q.jpg?width=108&crop=smart&auto=webp&s=9cf7fafd334e7c6dad206e51eab84640fad38f84', 'width': 108}, {'height': 242, 'url': 'https://external-preview.redd.it/8lFTHILrClG_JRmEP4kYO6Xpn_OUaAZ4lXk3lKVL_8Q.jpg?width=216&crop=smart&auto=webp&s=42b1ea6195617ae869b1db3eb669540800db748d', 'width': 216}, {'height': 358, 'url': 'https://external-preview.redd.it/8lFTHILrClG_JRmEP4kYO6Xpn_OUaAZ4lXk3lKVL_8Q.jpg?width=320&crop=smart&auto=webp&s=551f00f03823ca0bc55d99327532b7da9d945ecc', 'width': 320}, {'height': 717, 'url': 'https://external-preview.redd.it/8lFTHILrClG_JRmEP4kYO6Xpn_OUaAZ4lXk3lKVL_8Q.jpg?width=640&crop=smart&auto=webp&s=625b0de34382d068e58ba0889f9be33aa4044e56', 'width': 640}, {'height': 1076, 'url': 'https://external-preview.redd.it/8lFTHILrClG_JRmEP4kYO6Xpn_OUaAZ4lXk3lKVL_8Q.jpg?width=960&crop=smart&auto=webp&s=eb13e6d6ea1971baed8c9e93ae3bdcc169442856', 'width': 960}], 'source': {'height': 1098, 'url': 'https://external-preview.redd.it/8lFTHILrClG_JRmEP4kYO6Xpn_OUaAZ4lXk3lKVL_8Q.jpg?auto=webp&s=6984c3a6f5b55c77cdfca86d30e46bb381a0f10c', 'width': 979}, 'variants': {}}]} | ||
LangCheck: a multi-lingual toolkit to evaluate LLM applications | 79 | Hi! I wanted to share LangCheck, an open source toolkit to evaluate LLM applications ([GitHub](https://github.com/citadel-ai/langcheck), [Quickstart](https://langcheck.readthedocs.io/en/latest/quickstart.html)).
It already supports English and Japanese text, with more languages coming soon (contributions welcome!).
Core functionality:
* `langcheck.metrics` – metrics to evaluate quality & structure of LLM-generated text
* `langcheck.plot` – interactive visualizations of text quality
* `langcheck.augment` – text augmentations to perturb prompts, references, etc (coming soon)
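For a quick flavor of the API (simplified; see the quickstart for exact signatures):

```python
import langcheck

generated_outputs = ["Black cat the and apple.", "The black cat ate the apple."]

fluency = langcheck.metrics.fluency(generated_outputs)  # scores roughly in [0, 1]
print(fluency)

# MetricValue objects support thresholding, so a metric call doubles as a unit test:
assert langcheck.metrics.toxicity(generated_outputs) < 0.25
```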
Super open to feedback & curious how other people think about evaluation for LLM apps. | 2023-10-26T08:05:42 | https://www.reddit.com/r/LocalLLaMA/comments/17grhdu/langcheck_a_multilingual_toolkit_to_evaluate_llm/ | kennysong | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17grhdu | false | null | t3_17grhdu | /r/LocalLLaMA/comments/17grhdu/langcheck_a_multilingual_toolkit_to_evaluate_llm/ | false | false | self | 79 | {'enabled': False, 'images': [{'id': 'ohq2NExmSQNsdHQpcDPV-Xb_URHkI-7m6YBna8IXDXU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UwSjZADRVCFETywI0gZ7EtZJMMSMpSY9bGlWOdfMQ-U.jpg?width=108&crop=smart&auto=webp&s=8ab05f7b91fffcb3b2f36afc7a301eb9013570cd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UwSjZADRVCFETywI0gZ7EtZJMMSMpSY9bGlWOdfMQ-U.jpg?width=216&crop=smart&auto=webp&s=e8cca6a1f0d1ef370c5ecea1961af3b8ead9656c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UwSjZADRVCFETywI0gZ7EtZJMMSMpSY9bGlWOdfMQ-U.jpg?width=320&crop=smart&auto=webp&s=298ae2ed1cd1940dba382b69868f56ba770e3648', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UwSjZADRVCFETywI0gZ7EtZJMMSMpSY9bGlWOdfMQ-U.jpg?width=640&crop=smart&auto=webp&s=2fc2fb2ce702f6d5c56ad8f059d0e3a20f95b779', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UwSjZADRVCFETywI0gZ7EtZJMMSMpSY9bGlWOdfMQ-U.jpg?width=960&crop=smart&auto=webp&s=b0be7ab363718b7f8e9b5ef5b16e3952dd75a44c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UwSjZADRVCFETywI0gZ7EtZJMMSMpSY9bGlWOdfMQ-U.jpg?width=1080&crop=smart&auto=webp&s=89d0ee4bcc8fc5f59593e0f3f2075a1ff74d4bf0', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/UwSjZADRVCFETywI0gZ7EtZJMMSMpSY9bGlWOdfMQ-U.jpg?auto=webp&s=f6d912bafe217f37f170cef1a271d67d210cc5bd', 'width': 1920}, 'variants': {}}]} |
How does ChatGPT browsing work? | 6 | I want to build something similar to ChatGPTs new browsing feature. Anyone have any insights into how it works?
Here are the things I’m most unsure about:
- How does it decide when to 1) start browsing a particular link as opposed to just looking at search results/snippets (I've noticed its behaviour varies between queries), and 2) browse more than one link?
- Is it some kind of light agent setup that “plans” the steps or something like a separate classifier model that decides on next steps (with a limited set of options)?
- Is the model writing the answers just vanilla GPT-4 with a prompt, or has it been fine-tuned to write responses based on websites/search results?
- What type of scraper is used? From what I can tell it seems to be, at least in some cases, a headless browser, since it says “scrolling”, which is not a thing if you are just pulling the entire page using regular requests.
- How is the text parsed from the html of sites it visits? It seems to be pretty efficient, if you just pull all visible text from an average website html object it usually results in ~2k tokens, but it seems too fast to do that. | 2023-10-26T07:56:14 | https://www.reddit.com/r/LocalLLaMA/comments/17grcsh/how_does_chatgpt_browsing_work/ | No-Reflection-7168 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17grcsh | false | null | t3_17grcsh | /r/LocalLLaMA/comments/17grcsh/how_does_chatgpt_browsing_work/ | false | false | self | 6 | null |
Reconsider discounting the RX580, with recent changes to llama.cpp it's pretty good. | 31 | There seems to be some interest in the RX580 lately. I tried using my RX580 a while ago and found it was no better than the CPU. That's changed. There have been changes to llama.cpp that have made it about 3 times faster than my CPU. While that's not breaking any speed records, for such a cheap GPU it's compelling. Especially the $65 16GB variant.
Here are some numbers. The CPU is an AMD 5600 and the GPU is a 4GB RX580 AKA the loser variant. Thus I had to use a 3B model so that it would fit.
CPU only
------------
llama_print_timings: sample time = 19.08 ms / 174 runs ( 0.11 ms per token, 9120.45 tokens per second)
llama_print_timings: prompt eval time = 270.64 ms / 10 tokens ( 27.06 ms per token, 36.95 tokens per second)
llama_print_timings: eval time = 12292.29 ms / 173 runs ( 71.05 ms per token, 14.07 tokens per second)
llama_print_timings: total time = 12653.45 ms
All 29 layers offloaded to GPU
-----------------------------------------
llama_print_timings: sample time = 19.95 ms / 197 runs ( 0.10 ms per token, 9876.67 tokens per second)
llama_print_timings: prompt eval time = 4154.28 ms / 10 tokens ( 415.43 ms per token, 2.41 tokens per second)
llama_print_timings: eval time = 4575.97 ms / 196 runs ( 23.35 ms per token, 42.83 tokens per second)
llama_print_timings: total time = 8784.86 ms
The problem here is that while the generation speed is fast, the prompt evaluation speed is pitifully slow; it's much slower than the CPU for prompt evaluation. But there is, mostly, a solution to that: the -nommq flag. It gives the best of both worlds: the prompt eval speed of the CPU with the generation speed of the GPU.
llama_print_timings: sample time = 20.32 ms / 197 runs ( 0.10 ms per token, 9695.84 tokens per second)
llama_print_timings: prompt eval time = 291.48 ms / 10 tokens ( 29.15 ms per token, 34.31 tokens per second)
llama_print_timings: eval time = 4593.92 ms / 196 runs ( 23.44 ms per token, 42.67 tokens per second)
llama_print_timings: total time = 4939.98 ms
Now the overall speed is almost 3x that of the CPU alone. There are a couple of caveats though; that's why I said it's mostly a solution. The response it generates is slightly different. It's still an appropriate response, but it's different from the one produced without the flag. The more obvious problem is that it doesn't stop: it repeats the same response over and over again endlessly. I'm hoping that's a bug that will be fixed at some point. Limiting the length of the response is a short-term workaround. | 2023-10-26T07:29:37 | https://www.reddit.com/r/LocalLLaMA/comments/17gr046/reconsider_discounting_the_rx580_with_recent/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gr046 | false | null | t3_17gr046 | /r/LocalLLaMA/comments/17gr046/reconsider_discounting_the_rx580_with_recent/ | false | false | self | 31 | null |
🤖 Struggling with Local Autogen Setup via text-generation-webui 🛠️— Any Better Alternatives? 🤔 | 13 | Hello everyone,
I've been working on setting up **autogen** locally for some text generation tasks. I've been using a shell command to initiate the service, but I've run into several issues that have been a bit of a bottleneck for my workflow.
Here's the command I've been using:
root@dewi:~/code/text-generation-webui# ./start_linux.sh --n_ctx 32000 --extensions openai --listen --loader llama.cpp --model openhermes-2-mistral-7b.Q8_0.gguf --verbose
#### Issues I'm facing:
1. **Function Calling**: The setup does not have function calling enabled. Here's the GitHub issue for reference: [Issue #4286](https://github.com/oobabooga/text-generation-webui/issues/4286).
2. **Context Length**: I've been encountering issues related to the context length. Here's the GitHub issue for more details: [Issue #4364](https://github.com/oobabooga/text-generation-webui/issues/4364).
3. **Debugging with Verbose Flag**: Despite using the --verbose CLI flag, I can't see the exact prompt template in the logs, which is crucial for debugging. See the screenshot below.
[logs aren't verbose enough - e.g. no prompt template](https://preview.redd.it/8qayno4xthwb1.png?width=1941&format=png&auto=webp&s=d4838eed192fc68fa05d760dc9fb3d45aa0acc5a)
4. **Output Visibility**: Again, despite the --verbose flag, I can't see the output being generated on the fly. I can only see the final response, which takes quite a long time to generate on my CPU.
#### Questions:
1. **Are there better alternatives to text-generation-webui for running autogen locally?**
2. **Has anyone managed to resolve similar issues? If so, how?**
3. **Are there any CLI flags or configurations that could help alleviate these issues?**
I'd appreciate any insights or suggestions you may have. Thank you! | 2023-10-26T06:48:04 | https://www.reddit.com/r/LocalLLaMA/comments/17gqf5e/struggling_with_local_autogen_setup_via/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gqf5e | false | null | t3_17gqf5e | /r/LocalLLaMA/comments/17gqf5e/struggling_with_local_autogen_setup_via/ | false | false | 13 | {'enabled': False, 'images': [{'id': '2Tk2mJa4Isa1QMWxy0Zq7XGgjp5hOjAyxcfoCs9DKe0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1upVoXSpv-R_KXeKkomBfihNTmdP6eYLtBGb5ycsLFg.jpg?width=108&crop=smart&auto=webp&s=301d59f6abda84289f25b19df955aba01705c67e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1upVoXSpv-R_KXeKkomBfihNTmdP6eYLtBGb5ycsLFg.jpg?width=216&crop=smart&auto=webp&s=bb275af8a210936aec7377978c04a02925d3a658', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1upVoXSpv-R_KXeKkomBfihNTmdP6eYLtBGb5ycsLFg.jpg?width=320&crop=smart&auto=webp&s=0319072f2dd59c82f71cb45881d536c5919030e3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1upVoXSpv-R_KXeKkomBfihNTmdP6eYLtBGb5ycsLFg.jpg?width=640&crop=smart&auto=webp&s=aa26530c796104e9e8c0ff406c989e6b6ea84668', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1upVoXSpv-R_KXeKkomBfihNTmdP6eYLtBGb5ycsLFg.jpg?width=960&crop=smart&auto=webp&s=29d3e80a9948584e56f81992878c6b69b0556e83', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1upVoXSpv-R_KXeKkomBfihNTmdP6eYLtBGb5ycsLFg.jpg?width=1080&crop=smart&auto=webp&s=509f455e472b6adba1d711b412e0288b9e56b262', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1upVoXSpv-R_KXeKkomBfihNTmdP6eYLtBGb5ycsLFg.jpg?auto=webp&s=40df71df52be2bb7b5f9ff00c5e9b60d58c2e039', 'width': 1200}, 'variants': {}}]} | |
Synthetic Intelligent Agent (SynthIA) | 1 | [removed] | 2023-10-26T05:55:13 | https://www.reddit.com/r/LocalLLaMA/comments/17gpopa/synthetic_intelligent_agent_synthia/ | migtissera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17gpopa | false | null | t3_17gpopa | /r/LocalLLaMA/comments/17gpopa/synthetic_intelligent_agent_synthia/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'KJfmm8_w8Xzvhy2uLQ4qMT5g4G5IKvaoUTPuP9grdeg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=108&crop=smart&auto=webp&s=ef46686f5f0757f4ad3b2116194d777a506816d3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=216&crop=smart&auto=webp&s=3c07a30118904caba6e962990e7f4d4583ca1965', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=320&crop=smart&auto=webp&s=63dad03ade88f43233ebd7bc6fac3b274f8f9ebf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=640&crop=smart&auto=webp&s=8a0835838405bc0951692370ad1bd4a1cb9e8bb8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=960&crop=smart&auto=webp&s=ac87b31cac266144f5dcd82a3fb934d143e974f1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?width=1080&crop=smart&auto=webp&s=29ac36dca635932bf52b40dc332355930e56eb0f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/auy4C6mLy426Ik9mpWUqdYcvUWDkKKG5tK8qs4vNzZA.jpg?auto=webp&s=508892389a307289a1a189b6dc98146c55e5ba38', 'width': 1200}, 'variants': {}}]} |