| title string (1-300) | score int64 (0-8.54k) | selftext string (0-41.5k) | created timestamp[ns] (2023-04-01 04:30:41 to 2026-03-04 02:14:14, nullable) | url string (0-878) | author string (3-20) | domain string (0-82) | edited timestamp[ns] (1970-01-01 00:00:00 to 2026-02-19 14:51:53) | gilded int64 (0-2) | gildings string (7 classes) | id string (7) | locked bool (2 classes) | media string (646-1.8k, nullable) | name string (10) | permalink string (33-82) | spoiler bool (2 classes) | stickied bool (2 classes) | thumbnail string (4-213, nullable) | ups int64 (0-8.54k) | preview string (301-5.01k, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Best <2B model | 10 | I'm looking for an LLM that can run efficiently on my GPU. I have a 1650 4GB GPU, and I need a model that fits within its capabilities, specifically for inference tasks.
My primary use case involves generating simple pseudo-SQL queries. The main task is to extract 'where' conditions and 'group by' parameters from given statements or questions.
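To make the task concrete, here's roughly what I mean end to end (a hedged llama-cpp-python sketch; the model path, question, and JSON shape are all illustrative, not a recommendation):

```
# Hedged sketch of the extraction task; model path and schema are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="some-sub-2b-model.Q4_K_M.gguf", n_ctx=2048)

prompt = (
    "Extract the WHERE conditions and GROUP BY parameters as JSON.\n"
    "Question: Show average order value per city for orders after 2022.\n"
    "JSON:"
)
out = llm(prompt, max_tokens=128, temperature=0)
print(out["choices"][0]["text"])
# desired shape: {"where": ["order_date > '2022-12-31'"], "group_by": ["city"]}
```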
I also have a dataset for fine-tuning.
Can anyone recommend an LLM that stays under 2B parameters but is still effective for this kind of task?
Thanks! | 2023-12-24T17:04:27 | https://www.reddit.com/r/LocalLLaMA/comments/18pz5bh/best_2b_model/ | lagsec | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pz5bh | false | null | t3_18pz5bh | /r/LocalLLaMA/comments/18pz5bh/best_2b_model/ | false | false | self | 10 | null |
I wish I had tried LM Studio first... | 362 | Gawd man.... Today, a friend asked me the best way to load a local LLM on his kid's new laptop for his xmas gift. I recalled a Prompt Engineering youtube video I watched about LM Studio and how simple it was, and thought to recommend it to him because it looked quick and easy and my buddy knows nothing.
Before telling him to use it, I installed it on my MacBook before making the suggestion. Now I'm like, wtf have I been doing for the past month?? Ooba, llama.cpp's server, running in the terminal, etc... Like... $#@K!!!! This just WORKS! right out of the box. So... to all those who came here looking for a "how to" on this shit: start with LM Studio. You're welcome. (file this under "things I wish I knew a month ago" ... except... I knew it a month ago and didn't try it!)
P.s. youtuber 'Prompt Engineering' has a tutorial that is worth 15 minutes of your time. | 2023-12-24T16:49:15 | https://www.reddit.com/r/LocalLLaMA/comments/18pyul4/i_wish_i_had_tried_lmstudio_first/ | knob-0u812 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pyul4 | false | null | t3_18pyul4 | /r/LocalLLaMA/comments/18pyul4/i_wish_i_had_tried_lmstudio_first/ | false | false | self | 362 | null |
Seeking suggestions and team members for an open-source project | 4 | Hey, I'm seeking suggestions for an open-source project. My goal is to create an agricultural language model (LLM) by fine-tuning a base model using either llama or mixtral. I am from Bangladesh, an agriculture-based country, and believe that such a model can be highly beneficial. My current plan involves creating two models - a base model and a chat model. There are already a lot of agriculture datasets available out there. The main focus of these datasets will be on aspects like Increased Crop Yield, Reduced Risk, Reduced Environmental Impact, Improved Quality, Research, and Finance related to agriculture.
In the future, my plan includes integrating a vision model with the chat model to directly detect plant diseases and provide solutions. There are already many open-source plant detection system examples available, and I believe we don't need to create one from scratch; we can simply integrate it with the chat LLM.
What do you think about this plan? Do you believe it's viable and will result in a helpful LLM model? I am also looking for team members, especially those who are new to this field like me. It can be a learning curve for both of us, particularly for those interested in learning about fine-tuning LLM models. | 2023-12-24T16:47:40 | https://www.reddit.com/r/LocalLLaMA/comments/18pyti3/seeking_suggestions_and_team_members_for_an/ | omni7894 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pyti3 | false | null | t3_18pyti3 | /r/LocalLLaMA/comments/18pyti3/seeking_suggestions_and_team_members_for_an/ | false | false | self | 4 | null |
Gemini Pro on HuggingFace / Bard! [News] | 1 | [removed] | 2023-12-24T16:29:47 | https://www.reddit.com/r/LocalLLaMA/comments/18pyh7e/gemini_pro_on_huggingface_bard_news/ | dev-spot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pyh7e | false | null | t3_18pyh7e | /r/LocalLLaMA/comments/18pyh7e/gemini_pro_on_huggingface_bard_news/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '9aMbpp_FXHBX6KYTcWXiPc0yMwhKG4iDsQJXzTHtsE8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/BpK253gjTeNT-_hxKTiutIQle92O6erk0zumjfmyjU0.jpg?width=108&crop=smart&auto=webp&s=cfe4ddd88555ee548eaabba04b333d12a04ee52d', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/BpK253gjTeNT-_hxKTiutIQle92O6erk0zumjfmyjU0.jpg?width=216&crop=smart&auto=webp&s=42a5ea50062efbc71814f74afd0eefb8d3b19b04', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/BpK253gjTeNT-_hxKTiutIQle92O6erk0zumjfmyjU0.jpg?width=320&crop=smart&auto=webp&s=ce34ed715b493bb97b6d2ec4a62bd3fe03fda3cc', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/BpK253gjTeNT-_hxKTiutIQle92O6erk0zumjfmyjU0.jpg?auto=webp&s=766e64a3700ede6c5d8752d6df307a5445868f1d', 'width': 480}, 'variants': {}}]} |
Preparedness - by OpenAI | 48 | OpenAI is worried about powerful models being malicious. Hopefully it’s that and not a setup to eventually stomp out open source models in the future due to security concerns. | 2023-12-24T16:29:19 | https://openai.com/safety/preparedness | FatGuyQ | openai.com | 1970-01-01T00:00:00 | 0 | {} | 18pygvx | false | null | t3_18pygvx | /r/LocalLLaMA/comments/18pygvx/preparedness_by_openai/ | false | false | 48 | {'enabled': False, 'images': [{'id': 'l6i5jXS28IPCzNwsK6fCF9lNIHcHlYkdTWRBkQa6Ytc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UYrmQ_ncxF8RDEFm_JyowLLA4h_xrTNHfLN8k4Dw5Rs.jpg?width=108&crop=smart&auto=webp&s=0461404b81b5175455a7ba14e69b84eb024c4bdc', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/UYrmQ_ncxF8RDEFm_JyowLLA4h_xrTNHfLN8k4Dw5Rs.jpg?width=216&crop=smart&auto=webp&s=6689330b29d7d66be822db9c0354c5dddea2fe11', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/UYrmQ_ncxF8RDEFm_JyowLLA4h_xrTNHfLN8k4Dw5Rs.jpg?width=320&crop=smart&auto=webp&s=a66c7bb3fd3486f33b278957406be34ea5397464', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/UYrmQ_ncxF8RDEFm_JyowLLA4h_xrTNHfLN8k4Dw5Rs.jpg?width=640&crop=smart&auto=webp&s=266d906e65194ad0bd0d962c9cd9f910af5e7439', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/UYrmQ_ncxF8RDEFm_JyowLLA4h_xrTNHfLN8k4Dw5Rs.jpg?width=960&crop=smart&auto=webp&s=67a705ee0a2c56525771a89f2f956d469ed9b4f1', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/UYrmQ_ncxF8RDEFm_JyowLLA4h_xrTNHfLN8k4Dw5Rs.jpg?width=1080&crop=smart&auto=webp&s=13e3d303fe2f3739fe7fe7bc9dd51c60d53c3b69', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/UYrmQ_ncxF8RDEFm_JyowLLA4h_xrTNHfLN8k4Dw5Rs.jpg?auto=webp&s=19e97ea8643ecc9fcb5e4845418cbcb8b7ac5cf2', 'width': 2400}, 'variants': {}}]} | |
Multi-Model Multi-GPU Architectures: LLM + ASR + TTS | 18 | Has anyone seen any reference architectures for multi-GPU usage, where each GPU hosts a component of an overall voice-assistant solution?
I would imagine you could dedicate a GPU (or GPUs) to the core LLM, another for real-time speech recognition, and a third for text to speech.
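As a sketch of that split (hedged; assumes openai-whisper and transformers, with the TTS stage stubbed out since those APIs vary a lot):

```
# Hedged sketch: pin each pipeline stage to its own GPU.
import torch
import whisper  # openai-whisper
from transformers import AutoModelForCausalLM, AutoTokenizer

asr = whisper.load_model("base", device="cuda:1")  # small GPU for ASR
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
llm = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1", torch_dtype=torch.float16
).to("cuda:0")  # biggest GPU for the core LLM

text = asr.transcribe("question.wav")["text"]
inputs = tok(text, return_tensors="pt").to("cuda:0")
reply_ids = llm.generate(**inputs, max_new_tokens=128)
reply = tok.decode(reply_ids[0], skip_special_tokens=True)
# tts.to("cuda:2"); tts.synthesize(reply)  # third GPU; TTS API left abstract
```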
The speech models are far leaner, so smaller-VRAM GPUs would be ideally suited to offloading these functions efficiently. For those of us who live on mere mortal budgets, it feels like we're hitting the usable-VRAM wall. What else to do while we patiently wait for a realistic path towards 96GB?
Curious what others have done, or have seen here. Lots of focus on the LLM, rightly so, but I think there is a lot of opportunity in the multi-model/multi-gpu approach.
Thanks all! | 2023-12-24T16:26:29 | https://www.reddit.com/r/LocalLLaMA/comments/18pyex6/multimodel_multigpu_architectures_llm_asr_tts/ | grim-432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pyex6 | false | null | t3_18pyex6 | /r/LocalLLaMA/comments/18pyex6/multimodel_multigpu_architectures_llm_asr_tts/ | false | false | self | 18 | null |
What can I run with a 4070 8gb vram with 32gb memory laptop? | 26 | I just got an HP Omen for Christmas with an RTX 4070 (8GB VRAM), and it tops out at 32GB of system memory. I'm wondering what local LLMs something like this can run. Can it run Mixtral at Q4_K_M using the card while offloading the rest to the 32GB of memory? What kind of performance would I be looking at? What's the limit if I don't mind a long wait (1 t/s, as long as it's actually using the GPU) - 13B? 30B? (around the Q4 range)
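For context, the knob I'm asking about is basically llama.cpp's layer offload (hedged llama-cpp-python sketch; the model path and layer count are placeholders you'd tune):

```
# Hedged sketch: split a GGUF model between 8GB of VRAM and system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="mixtral-8x7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=12,  # layers on the GPU; the rest stay in system RAM
    n_ctx=4096,
)
print(llm("Q: What is 2+2? A:", max_tokens=16)["choices"][0]["text"])
```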
I've been playing with LM Studio on my PC and was kinda holding off on buying a laptop, since things in AI are moving so quickly I figured I'd wait till late 2024 to see if manufacturers start putting more VRAM in laptops before upgrading from my current lappy. But damn, was not expecting this.
I have some noob questions about model size and performance | 2 | Hey all, I'm pretty new at this, currently using text-generation-webui. The thing is, I don't really understand how to know if I can run a model. I have a 4080 and 32GB of RAM - what exactly should I be looking at? Raw model size? I have 16GB of VRAM; does that mean any model below 16GB will work fine? I tried, for example, Noromaid 20B Q3_K_M.gguf with all layers offloaded to GPU and 2k context and was getting ~3 tokens/s. I then tried Noromaid 13B Q5_K_M with all layers offloaded and 4k context and got ~25 tokens/s, which doesn't really make sense, because the 13B is bigger once you include the context size.
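For reference, the back-of-envelope rule I've seen (hedged, not exact): quantized weight size ≈ parameters × bits-per-weight / 8, and the KV cache for the context comes on top of that:

```
# Rough sizing only; real GGUF file sizes and runtime overhead vary.
def est_weights_gb(params_b: float, bits_per_weight: float) -> float:
    return params_b * bits_per_weight / 8

print(est_weights_gb(13, 5.5))  # 13B Q5_K_M: ~8.9 GB of weights
print(est_weights_gb(20, 3.9))  # 20B Q3_K_M: ~9.8 GB of weights
# Add the KV cache (grows with context length) and overhead on top;
# spilling past 16 GB of VRAM is what usually tanks tokens/s.
```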
Can someone just tell me what's the absolute biggest size I can run on my rig with good performance? (more than 10~ t/s) | 2023-12-24T15:43:50 | https://www.reddit.com/r/LocalLLaMA/comments/18pxm4r/i_have_some_noob_questions_about_model_size_and/ | Vxerrr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pxm4r | false | null | t3_18pxm4r | /r/LocalLLaMA/comments/18pxm4r/i_have_some_noob_questions_about_model_size_and/ | false | false | self | 2 | null |
I'm a bit confused by ram and vram requirements. Do I even need a lot of vram if I want to run a model locally? Or just ram? | 3 | Can I extend my ram with vram? | 2023-12-24T15:00:17 | https://www.reddit.com/r/LocalLLaMA/comments/18pwszf/im_a_bit_confused_by_ram_and_vram_requirements_do/ | panic_in_the_galaxy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pwszf | false | null | t3_18pwszf | /r/LocalLLaMA/comments/18pwszf/im_a_bit_confused_by_ram_and_vram_requirements_do/ | false | false | self | 3 | null |
Trying LLM Locally with Tesla P40 | 6 | Hi reader, I have been learning how to run an LLM (Mistral 7B) on a small GPU but unfortunately have failed to run one! I have a Tesla P40 connected to a VM, couldn't find a good source to learn how, and keep getting stuck in the middle. Would appreciate your help, thanks in advance | 2023-12-24T13:21:40 | https://www.reddit.com/r/LocalLLaMA/comments/18pv4zk/trying_llm_locally_with_tesla_p40/ | Murky-Tumbleweed-486 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pv4zk | false | null | t3_18pv4zk | /r/LocalLLaMA/comments/18pv4zk/trying_llm_locally_with_tesla_p40/ | false | false | self | 6 | null |
Finetune LLaMa2 for any language | 141 | We decided to release convenience scripts to fine-tune LLaMa2 to any language (that isn't English) using (Q)LoRA. Total training cost per language is under $1. We've already released a few datasets and models to play around with, more to come.
[https://github.com/UnderstandLingBV/LLaMa2lang](https://github.com/UnderstandLingBV/LLaMa2lang)
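For anyone curious what the (Q)LoRA half of a pipeline like this typically looks like, here's a generic peft + bitsandbytes sketch (illustrative only; see the repo above for the actual scripts):

```
# Generic QLoRA sketch, not the repo's actual training code.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb, device_map="auto"
)
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # a tiny fraction of the 7B weights
```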
Few results from the Dutch 7B one:
Q: Wat is de hoofdstad van Nederland? ("What is the capital of the Netherlands?")
A: Amsterdam
Q: In welke provincie ligt die stad? ("In which province is that city?")
A: In de provincie Noord-Holland. ("In the province of North Holland.")

Q: Wie is de minister-president van Nederland? ("Who is the prime minister of the Netherlands?")
A: Mark Rutte is sinds 2010 minister-president van Nederland. Hij is meerdere keren herkozen. ("Mark Rutte has been prime minister of the Netherlands since 2010. He has been re-elected several times.")
| 2023-12-24T12:20:38 | https://www.reddit.com/r/LocalLLaMA/comments/18pu83i/finetune_llama2_for_any_language/ | UnderstandLingAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pu83i | false | null | t3_18pu83i | /r/LocalLLaMA/comments/18pu83i/finetune_llama2_for_any_language/ | false | false | self | 141 | {'enabled': False, 'images': [{'id': 'k4R4GVAzvQHlZaD-Zp_X8lwvyMm59hGATU5veXnWD54', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qRKuDSfFU4FH3NfRHB9dU245iT4FFzPFiN0eUk08t0g.jpg?width=108&crop=smart&auto=webp&s=42d036010318e115af083c55a3287d382d339c56', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qRKuDSfFU4FH3NfRHB9dU245iT4FFzPFiN0eUk08t0g.jpg?width=216&crop=smart&auto=webp&s=8b9f071fc37b2b64cecba8082c695e1179d3e302', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qRKuDSfFU4FH3NfRHB9dU245iT4FFzPFiN0eUk08t0g.jpg?width=320&crop=smart&auto=webp&s=bc18eef28587d3d26075d8dc95f4ae0918425afd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qRKuDSfFU4FH3NfRHB9dU245iT4FFzPFiN0eUk08t0g.jpg?width=640&crop=smart&auto=webp&s=9eae1eef31510dfbd3a1dc9b4f14836a00385e02', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qRKuDSfFU4FH3NfRHB9dU245iT4FFzPFiN0eUk08t0g.jpg?width=960&crop=smart&auto=webp&s=ac89733db86c2b7fcb6267edcd61732135772af2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qRKuDSfFU4FH3NfRHB9dU245iT4FFzPFiN0eUk08t0g.jpg?width=1080&crop=smart&auto=webp&s=bdc34685e9c7bd38a9c76c1d786155277f788c2a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qRKuDSfFU4FH3NfRHB9dU245iT4FFzPFiN0eUk08t0g.jpg?auto=webp&s=ade7ad1f353b03e7e720d9b0e4bf8ad38674a3ee', 'width': 1200}, 'variants': {}}]} |
Write christmas greetings to the /t/LocalLLaMA community on reddit! | 1 | [removed] | 2023-12-24T12:01:46 | https://www.reddit.com/r/LocalLLaMA/comments/18pty43/write_christmas_greetings_to_the_tlocalllama/ | NickUnrelatedToPost | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pty43 | false | null | t3_18pty43 | /r/LocalLLaMA/comments/18pty43/write_christmas_greetings_to_the_tlocalllama/ | false | false | self | 1 | null |
How to convert LoRA tuned StableLM Zephyr 3B to GGUF? | 4 | I am playing around with LoRA fine-tuning using [lit-gpt](https://github.com/Lightning-AI/lit-gpt). The base model I'm finetuning is [StableLM-Zephyr-3B](https://huggingface.co/stabilityai/stablelm-zephyr-3b). I was successfully able to create the LoRA checkpoint and merge it with the base model, and it works when I use [chat/base.py](https://github.com/Lightning-AI/lit-gpt/blob/main/chat/base.py) from lit-gpt, which indicates it's all working as expected.
Now I would like to convert that into GGUF so that I can run it using llama.cpp. I've used convert.py from llama.cpp, which converts it successfully, but it runs into the following error when I try to use it:
```
error loading model: done_getting_tensors: wrong number of tensors; expected 356, got 291
```
[Here](https://gist.github.com/zabirauf/1fd81eb575906cbb0b5f2e4f50ce7692) is the whole output from llama.cpp.
Does anyone have any idea how I can fix that? I know TheBloke has published GGUFs for the base model, so there must be something I'm either missing or need to add.
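One avenue that might be worth trying (an unverified assumption on my part): llama.cpp also ships convert-hf-to-gguf.py, which targets non-llama architectures such as StableLM, whereas convert.py is llama-oriented. Something like:

```
python convert-hf-to-gguf.py /path/to/merged/model --outfile model.gguf --outtype f16
```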
P.S: I use the following to convert to GGUF `python convert.py /path/to/merged/model.bin --ctx 4096 --padvocab` | 2023-12-24T11:23:34 | https://www.reddit.com/r/LocalLLaMA/comments/18ptf9t/how_to_convert_lora_tuned_stablelm_zephyr_3b_to/ | zabirauf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ptf9t | false | null | t3_18ptf9t | /r/LocalLLaMA/comments/18ptf9t/how_to_convert_lora_tuned_stablelm_zephyr_3b_to/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'BKkMXNKv7eXj1gbSjsNQmOND0UB3QDT6HhEiopILzoA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4SstYGIEysFO-b0SR4MfNwSu1jBcJIjBPw4pX2XlcHg.jpg?width=108&crop=smart&auto=webp&s=e35507f7cdf5417426115c24dadf59a24e614d38', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4SstYGIEysFO-b0SR4MfNwSu1jBcJIjBPw4pX2XlcHg.jpg?width=216&crop=smart&auto=webp&s=ff0621a79f66b30110dcaf6f18bbdabd1a1ae8e9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4SstYGIEysFO-b0SR4MfNwSu1jBcJIjBPw4pX2XlcHg.jpg?width=320&crop=smart&auto=webp&s=1aaa8ae612489e5f2856cafb9cc166502b3f30f0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4SstYGIEysFO-b0SR4MfNwSu1jBcJIjBPw4pX2XlcHg.jpg?width=640&crop=smart&auto=webp&s=3138317bb84b2ae42f371dd663ac742d99351887', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4SstYGIEysFO-b0SR4MfNwSu1jBcJIjBPw4pX2XlcHg.jpg?width=960&crop=smart&auto=webp&s=4ae4fc836f109a71551037580941c2b9eed38789', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4SstYGIEysFO-b0SR4MfNwSu1jBcJIjBPw4pX2XlcHg.jpg?width=1080&crop=smart&auto=webp&s=bdc2168c42f922293553d4989e24309ca683f5e4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4SstYGIEysFO-b0SR4MfNwSu1jBcJIjBPw4pX2XlcHg.jpg?auto=webp&s=5c03e82ddc6c43943893d6727ce076c77ce06942', 'width': 1200}, 'variants': {}}]} |
Help integrating LLM in our application | 3 | **Hi everyone,**
I'm working on integrating an LLM into an internal application built with ReactJS, NodeJS, and PostgreSQL to create a chatbot that can answer questions about the app's data. I've run into some challenges and would appreciate your guidance.
**Here's my setup:**
* **Frontend:** ReactJS
* **Backend:** NodeJS
* **Database:** PostgreSQL
* **LLM model:** Tried using Llama13b with LangChain for SQL query generation
**The goal:**
* Allow users to ask natural language questions about the app's data (e.g., "How many cities are there?", "Which cities have the most issues currently?")
* The chatbot should translate those questions into SQL queries, execute them against the PostgreSQL database, and return the results in a natural language format.
**The problem:**
* LangChain with Llama13b often generates unrealistic SQL queries, referencing tables that don't exist or using incorrect column names (see the schema-grounding sketch after this list).
* It also sometimes hallucinates information, producing responses that aren't grounded in the actual database content.
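A schema-grounding sketch that targets the nonexistent-tables problem (hedged; it's Python rather than our Node stack purely for brevity, and the DSN, model path, and question are placeholders): pull the live schema from information_schema and pin the prompt to it:

```
# Hedged sketch: ground generation in the real schema so the model
# cannot reference tables that don't exist. All names are illustrative.
import psycopg2
from llama_cpp import Llama

conn = psycopg2.connect("dbname=app user=app")  # placeholder DSN
cur = conn.cursor()
cur.execute("""
    SELECT table_name, column_name, data_type
    FROM information_schema.columns
    WHERE table_schema = 'public'
    ORDER BY table_name, ordinal_position
""")
schema = "\n".join(f"{t}.{c} ({d})" for t, c, d in cur.fetchall())

llm = Llama(model_path="llama-13b.Q4_K_M.gguf", n_ctx=4096)  # placeholder
question = "Which cities have the most issues currently?"
prompt = (
    "You write PostgreSQL queries. Use ONLY these tables and columns:\n"
    f"{schema}\n\nQuestion: {question}\nSQL:"
)
out = llm(prompt, max_tokens=256, temperature=0, stop=[";"])
sql = out["choices"][0]["text"].strip() + ";"
print(sql)  # validate (e.g. EXPLAIN) before executing and summarizing
```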
**Questions:**
1. **Better approach for SQL query generation:** Is there a more reliable way to generate SQL queries from natural language using LLMs, specifically for PostgreSQL? Are there better model/library combinations or techniques I should consider?
2. **Addressing hallucinations:** How can I mitigate the issue of hallucinations and ensure the chatbot's responses accurately reflect the database content?
3. **Alternative approaches:** If direct SQL query generation isn't the best approach, what other strategies could I explore to achieve the desired functionality?
**Please let me know if I need to change the approach completely. Any advice or insights would be greatly appreciated!**
**Additional context:**
* I'm open to using different LLM models or libraries if they offer better results.
* I'm also interested in exploring techniques for improving the accuracy of SQL query generation, such as fine-tuning or data augmentation.
* I'd be happy to provide more details about my setup and specific challenges if needed. | 2023-12-24T10:10:53 | https://www.reddit.com/r/LocalLLaMA/comments/18psg1c/help_integrating_llm_in_our_application/ | arhmnsh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18psg1c | false | null | t3_18psg1c | /r/LocalLLaMA/comments/18psg1c/help_integrating_llm_in_our_application/ | false | false | self | 3 | null |
How do you create long past 4000 context length tokens in RP? | 5 | I have been using TheBloke's PsyMedRP 13B Q6 and have been getting great NSFW results, but I feel like I reach the 4000-token context limit a little fast, after which it turns to gibberish. My solution thus far has been exporting the log as plain text, using a different model to summarize the RP up to that point, and starting again from there, but it misses certain past details. How do you all deal with this?
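For what it's worth, the export-summarize-restart loop can be scripted (hedged llama-cpp-python sketch; paths, the threshold, and the prompt are all illustrative):

```
# Hedged sketch: compress the chat log into a summary once it nears the
# context limit, then continue with the summary as the new preamble.
from llama_cpp import Llama

llm = Llama(model_path="psymedrp-13b.Q6_K.gguf", n_ctx=4096)  # placeholder

def compress(log: str) -> str:
    prompt = ("Summarize this roleplay so it can continue later. Keep "
              "names, relationships, and unresolved plot threads:\n\n"
              + log + "\n\nSummary:")
    return llm(prompt, max_tokens=512)["choices"][0]["text"]

log = open("chat_export.txt").read()
if len(llm.tokenize(log.encode())) > 3000:  # leave headroom under 4096
    log = compress(log)  # for very long logs, chunk before summarizing
print(log)
```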
I am using LM Studio with a 10700K, 3090, SSD, and 64GB of 3200MHz RAM | 2023-12-24T09:50:56 | https://www.reddit.com/r/LocalLLaMA/comments/18ps6kb/how_do_you_create_long_past_4000_context_length/ | poet3991 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ps6kb | false | null | t3_18ps6kb | /r/LocalLLaMA/comments/18ps6kb/how_do_you_create_long_past_4000_context_length/ | false | false | self | 5 | null |
Tokenisation of codebase to make it searchable | 1 | Hi everyone, I am working on a big data application that requires me to convert my Spark app into a JSON configuration file. The process for that is to understand the entire application and do it manually. I wanted to use ChatGPT for the task, but I am unable to provide it the entire context in one go, and when I do, it spits out unusable stuff. Then I heard about embeddings and vector DBs. I am very new to everything, so I was wondering if someone has any suggestions/resources? | 2023-12-24T09:44:18 | https://www.reddit.com/r/LocalLLaMA/comments/18ps3if/tokenisation_of_codebase_to_make_it_searchable/ | TechSupport_5440 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ps3if | false | null | t3_18ps3if | /r/LocalLLaMA/comments/18ps3if/tokenisation_of_codebase_to_make_it_searchable/ | false | false | self | 1 | null |
Open source plugin for code development | 1 | Hello folks, I am currently exploring an open source plugin designed to assist knowledge workers, such as software engineers, in seamlessly integrating third-party platform APIs into their existing codebase. This innovative tool is capable of analysing both your codebase and third-party API. | 2023-12-24T09:39:37 | https://www.reddit.com/r/LocalLLaMA/comments/18ps1e5/open_source_plugin_for_code_development/ | shikcoder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ps1e5 | false | null | t3_18ps1e5 | /r/LocalLLaMA/comments/18ps1e5/open_source_plugin_for_code_development/ | false | false | self | 1 | null |
Are there any alternatives to bing search? | 1 | Bing has really leveled up ever since it started incorporating ai into it's search radius. I use it for everything now, but I wonder if there is a local option.
One thing that really impressed me is how it breaks down the question into multiple parts.
Assuming foo is a specific task that involves steps 1, 2, 3... but those steps are only known to people who are into task foo.
If I ask "how do I do foo as a beginner"?
It searches for how people typically do foo, but also does a search on how beginners do foo, and lets me know what the common strategy is.
I don't care if it's wrong with the end result. What I want it to do is basically summarize what it found based on the various searches it did and to actually list them out. I'd like to at least sanity check my answers and ask follow up questions if needed.
For example, if I ask whether I can wear shorts in zip_code:
I want it to know in what weather people wear shorts, then what the actual weather is in zip_code at the current date and time.
And then combine the two and produce the output.
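That decomposition step itself is easy to prototype locally (hedged sketch; the search backend is stubbed out since those APIs vary, and the model path is a placeholder):

```
# Hedged sketch: have a local model split a question into sub-searches,
# run each through your search backend (stubbed), then synthesize.
from llama_cpp import Llama

llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096)

q = "Can I wear shorts in zip_code today?"
plan = llm(
    "Break this question into the web searches needed to answer it, "
    f"one per line:\nQuestion: {q}\nSearches:",
    max_tokens=128, temperature=0,
)["choices"][0]["text"]

results = [f"[stub result for: {s}]" for s in plan.strip().splitlines()]
answer = llm(
    f"Question: {q}\nSearch results:\n" + "\n".join(results) +
    "\nAnswer using only the results above, and list the sources used:",
    max_tokens=256,
)["choices"][0]["text"]
print(answer)
```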
None of the current options I tried give coherent output, and when they do, the data is outdated. I just want it to understand that it's wrong and correct itself (optional ask; it would be nice if it could do this, but it's not a requirement).
I mainly work in fiction land, so I normally don't care if it gives false or incorrect output, because it does some really cool things. The cool things it does are lies, but those lies give me ideas and those ideas further the story, so the task fails successfully. But I'd like a "truthful" model. If it follows Wikipedia and uses its info to the letter, then that's truthful enough for me. It should link its sources and be obedient (follow immediate directions to elaborate or perform further searches).
I want to kickstart investigations. I have a functioning brain and can figure out if something sounds fishy. If it holds my hand, that's fine, but if it branches out, that's fine too. I'm smart enough to know what questions to ask to narrow things down and sniff out bullshit. Hypocrisy is almost always bullshit.
>inb4 wikipedia isn't a source of truth.
I don't care. | 2023-12-24T09:33:25 | https://www.reddit.com/r/LocalLLaMA/comments/18pryne/are_there_any_alternatives_to_bing_search/ | allTimeFunPoster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pryne | false | null | t3_18pryne | /r/LocalLLaMA/comments/18pryne/are_there_any_alternatives_to_bing_search/ | false | false | self | 1 | null |
Need help with prompting | 1 | I am trying to use Mistral to write a story for my DnD campaign. I write something like this:
The party wanders into an abandoned village and run into a troll. The troll attacks the party. Describe the village in detail. The party survived the attack.
The prompt does everything I want it to but describe the village.
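One restructuring that often helps (just a guess, not guaranteed): put the facts first and the instruction last, so the model's continuation is the description itself:

```
The party wanders into an abandoned village. A troll attacks them, but the party survives.

Describe the abandoned village in vivid detail: its buildings, sounds, smells, and any signs of what drove the villagers away.
```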
How do I prompt it to get the correct information | 2023-12-24T09:01:25 | https://www.reddit.com/r/LocalLLaMA/comments/18prjtn/need_help_with_prompting/ | No_Respect663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18prjtn | false | null | t3_18prjtn | /r/LocalLLaMA/comments/18prjtn/need_help_with_prompting/ | false | false | self | 1 | null |
How should I run Mixtral Dolphin on CPU? | 2 | I'm on Linux with 32 gigs of RAM; what should I use to run Mixtral Dolphin on CPU? llama.cpp? ollama? Should I use a quantized version? | 2023-12-24T08:55:04 | https://www.reddit.com/r/LocalLLaMA/comments/18prgj3/how_should_i_run_mixtral_dolphin_on_cpu/ | Wonderful-Eye-71 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18prgj3 | false | null | t3_18prgj3 | /r/LocalLLaMA/comments/18prgj3/how_should_i_run_mixtral_dolphin_on_cpu/ | false | false | self | 2 | null |
Announcing CodeNinja - a new open source model good at coding | 320 | Hey folks 👋
I’ve released my new open source model CodeNinja that aims to be a reliable code assistant.
Check the model here: https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B
CodeNinja is an enhanced version of the renowned model openchat/openchat-3.5-1210. It has been fine-tuned through supervised fine-tuning (SFT) on two expansive datasets, encompassing over 400,000 coding instructions. Designed to be an indispensable tool for coders, CodeNinja aims to integrate seamlessly into your daily coding routine.
I couldn’t run HumanEval on it because I ran out of RunPod credits 😅
But my initial tests showed that the model is quite good
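If you want to kick the tires, here's a minimal transformers snippet (hedged: it assumes the prompt format follows OpenChat 3.5, since this is an OpenChat fine-tune; check the model card):

```
# Minimal usage sketch; OpenChat 3.5 prompt format assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "beowolx/CodeNinja-1.0-OpenChat-7B"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

prompt = ("GPT4 Correct User: Write a Python function that checks if a "
          "string is a palindrome.<|end_of_turn|>GPT4 Correct Assistant:")
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```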
I’d appreciate your feedback 🙏 | 2023-12-24T08:32:50 | https://www.reddit.com/r/LocalLLaMA/comments/18pr65c/announcing_codeninja_a_new_open_source_model_good/ | BeowulfBR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pr65c | false | null | t3_18pr65c | /r/LocalLLaMA/comments/18pr65c/announcing_codeninja_a_new_open_source_model_good/ | false | false | self | 320 | {'enabled': False, 'images': [{'id': 'SloFDkTEkULx8Zn3v1Qu7sXt13aLPvD9JoWxm0SaoBU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UPu9VS7s2xPC_csgfCmS3Bm0PSMs4u2Q28WnpXm6iyc.jpg?width=108&crop=smart&auto=webp&s=3134871e9dec08e4cc854fce6b7a7eccde701140', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/UPu9VS7s2xPC_csgfCmS3Bm0PSMs4u2Q28WnpXm6iyc.jpg?width=216&crop=smart&auto=webp&s=268b8919cbdd4caf3a23b7c5aed8785bf294edcf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/UPu9VS7s2xPC_csgfCmS3Bm0PSMs4u2Q28WnpXm6iyc.jpg?width=320&crop=smart&auto=webp&s=63f520788ca125fa6b9f2f356acb3bddaf6a4d84', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/UPu9VS7s2xPC_csgfCmS3Bm0PSMs4u2Q28WnpXm6iyc.jpg?width=640&crop=smart&auto=webp&s=00cbd25ab3bc6018fa13287e952582023c31f7aa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/UPu9VS7s2xPC_csgfCmS3Bm0PSMs4u2Q28WnpXm6iyc.jpg?width=960&crop=smart&auto=webp&s=faf4095cea6e171508efaa8498c1a46dabfcd4ef', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/UPu9VS7s2xPC_csgfCmS3Bm0PSMs4u2Q28WnpXm6iyc.jpg?width=1080&crop=smart&auto=webp&s=47a9bd0a9e0e0cbaa6c5a486addfc2a228905df3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/UPu9VS7s2xPC_csgfCmS3Bm0PSMs4u2Q28WnpXm6iyc.jpg?auto=webp&s=08a6ef279c045618f0022dff1a2452b057113a20', 'width': 1200}, 'variants': {}}]} |
Open-source LLMs for RecSys | 2 | What do you consider would be an appropriate open-source LLM that can be fine-tuned locally on consumer hardware and be used as a chatbot recommendation system. What other things to consider in the design of a RecSys? | 2023-12-24T07:49:03 | https://www.reddit.com/r/LocalLLaMA/comments/18pqk7d/opensource_llms_for_recsys/ | Acceptable_Stage3892 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pqk7d | false | null | t3_18pqk7d | /r/LocalLLaMA/comments/18pqk7d/opensource_llms_for_recsys/ | false | false | self | 2 | null |
How to get llama to obey my commands 😡 | 4 | Hi all, merry Christmas.
I am using llama.cpp python bindings to identify key topics in the given text. Text is from a pdf of a book. I also provide the title of the book.
I want to do further processing on the response, so I need it to only return a comma-separated list. No matter how hard I emphasize my requirement, it keeps being a smart ass and writing full sentences. Sometimes it does indeed give me a comma-separated list, but then it goes back to writing sentences. Or, worst case, it does something like "topics: - one \n - two \n - three"
Please point me in the right direction. I am happy with the accuracy, just need it to respond in a consistent and predictable format.
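For anyone else hitting this: llama.cpp supports constrained decoding with GBNF grammars, which makes any output other than a comma-separated word list impossible. A hedged llama-cpp-python sketch (the model path and prompt are placeholders):

```
# Hedged sketch: force a comma-separated word list with a GBNF grammar.
from llama_cpp import Llama, LlamaGrammar

grammar = LlamaGrammar.from_string(r'''
root ::= word (", " word)*
word ::= [A-Za-z]+
''')

llm = Llama(model_path="airoboros-l2-70b.Q4_K_M.gguf", n_ctx=4096)
prompt = "List the main topics of the excerpt, separated by commas:"  # use your full prompt here
out = llm(prompt, max_tokens=64, grammar=grammar)
print(out["choices"][0]["text"])  # e.g. "history, war, politics"
```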
Thank you.
Model: [https://huggingface.co/jondurbin/airoboros-l2-70b-2.2](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2)
Prompt (Python f-string), modified from one of the example prompts:
f"""
BEGININPUT
BEGINCONTEXT
[title: {title}]
ENDCONTEXT
{chapter_content}
ENDINPUT
BEGININSTRUCTION
You are given an excerpt from a book with the given title. List the main topics of the book, separated by commas, one word each. Only respond with single words, separated by commas. Do not write sentences.
ENDINSTRUCTION""" | 2023-12-24T04:50:53 | https://www.reddit.com/r/LocalLLaMA/comments/18pnsrb/how_to_get_llama_to_obey_my_commands/ | EchoImpressive6063 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pnsrb | false | null | t3_18pnsrb | /r/LocalLLaMA/comments/18pnsrb/how_to_get_llama_to_obey_my_commands/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'u7de09cXBlRQhZUQhrj_qWKOw87SB4MEo0rlwYokI64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/j2JozeVzmdVikkILXeGGE2lz17Nsl0ObDNbgJuItkHo.jpg?width=108&crop=smart&auto=webp&s=fd010d41149ef0a927142dded83201c4c8ef5045', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/j2JozeVzmdVikkILXeGGE2lz17Nsl0ObDNbgJuItkHo.jpg?width=216&crop=smart&auto=webp&s=99808ac52a4f01ef35249c6257308a389598e824', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/j2JozeVzmdVikkILXeGGE2lz17Nsl0ObDNbgJuItkHo.jpg?width=320&crop=smart&auto=webp&s=ffabe0f0d42c61f78399a150ef3590daaf1521a1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/j2JozeVzmdVikkILXeGGE2lz17Nsl0ObDNbgJuItkHo.jpg?width=640&crop=smart&auto=webp&s=2424dc32e9c21e37d215b360827cd90872f30661', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/j2JozeVzmdVikkILXeGGE2lz17Nsl0ObDNbgJuItkHo.jpg?width=960&crop=smart&auto=webp&s=ffe654e8d24bc0c647d4eed1c260d3b1df1dee4d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/j2JozeVzmdVikkILXeGGE2lz17Nsl0ObDNbgJuItkHo.jpg?width=1080&crop=smart&auto=webp&s=9ebe84f935f0da9e3f552848012f2acb9bd750b3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/j2JozeVzmdVikkILXeGGE2lz17Nsl0ObDNbgJuItkHo.jpg?auto=webp&s=b79f9939bb9dbbb41a53479e499183d1fd442134', 'width': 1200}, 'variants': {}}]} |
Local LLM with mobile app? | 15 | Is there such a thing? A server app and then a mobile app that can connect in? Specifically for iPhone and Mac, but generally curious what is out there. | 2023-12-24T04:15:07 | https://www.reddit.com/r/LocalLLaMA/comments/18pn6oo/local_llm_with_mobile_app/ | Data_Driven_Guy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pn6oo | false | null | t3_18pn6oo | /r/LocalLLaMA/comments/18pn6oo/local_llm_with_mobile_app/ | false | false | self | 15 | null |
Llm Super Coach - Small fun project to use a local LLM to analyze one's diaries in various ways. | 10 | 2023-12-24T03:21:01 | https://github.com/jwsher/llm-super-coach | curiousdude | github.com | 1970-01-01T00:00:00 | 0 | {} | 18pma53 | false | null | t3_18pma53 | /r/LocalLLaMA/comments/18pma53/llm_super_coach_small_fun_project_to_use_a_local/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'IUhnVtu2Mx34ENoqQ_6cG0LxcDg-XE9irm6x2cyL664', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Zp0OYEd29TYbtpPQvCraSto1LRHxhC1JpqoWViXbVR8.jpg?width=108&crop=smart&auto=webp&s=3e04bbbc1cf29ff96e28b2ed9135e5369b9147e3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Zp0OYEd29TYbtpPQvCraSto1LRHxhC1JpqoWViXbVR8.jpg?width=216&crop=smart&auto=webp&s=88f8180d3688156b5eece0fd8d1b9650ad87e491', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Zp0OYEd29TYbtpPQvCraSto1LRHxhC1JpqoWViXbVR8.jpg?width=320&crop=smart&auto=webp&s=635e2b39cf55c2ece972dba621270d7451ff399e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Zp0OYEd29TYbtpPQvCraSto1LRHxhC1JpqoWViXbVR8.jpg?width=640&crop=smart&auto=webp&s=f435143563ad26eb73e2976db2f75771b550ff93', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Zp0OYEd29TYbtpPQvCraSto1LRHxhC1JpqoWViXbVR8.jpg?width=960&crop=smart&auto=webp&s=a8f60259849c5bd278cba3fd4b31f80903f3fac3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Zp0OYEd29TYbtpPQvCraSto1LRHxhC1JpqoWViXbVR8.jpg?width=1080&crop=smart&auto=webp&s=b908f12c3e7bb5ba818107d30c327078491dfb27', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Zp0OYEd29TYbtpPQvCraSto1LRHxhC1JpqoWViXbVR8.jpg?auto=webp&s=319379db9cdb254f574543e268c6fc1da051894b', 'width': 1200}, 'variants': {}}]} | ||
Best LLM for M1 16GB | 8 | Hi all,
I have a spare M1 16GB machine. It will be dedicated as an ‘LLM server’, with llama.cpp.
Given it will be used for nothing else, what’s the best model I can get away with in December 2023? | 2023-12-24T03:16:24 | https://www.reddit.com/r/LocalLLaMA/comments/18pm7b3/best_llm_for_m1_16gb/ | Data_Driven_Guy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pm7b3 | false | null | t3_18pm7b3 | /r/LocalLLaMA/comments/18pm7b3/best_llm_for_m1_16gb/ | false | false | self | 8 | null |
Mixtral is way faster than I expected on AMD Radeon 7900 XTX! | 106 | I hadn't tried Mixtral yet due to the size of the model, thinking that since I only get ~1.5 tokens/sec on 70B models, Mixtral wouldn't run well either.
However, I am pleasantly surprised to be getting 13.8 tokens/sec!!
System specs:
RYZEN 5950X
64GB DDR4-3600
AMD Radeon 7900 XTX
Using latest (unreleased) version of Ollama (which adds AMD support).
Ollama is by far my favourite loader now. | 2023-12-24T03:09:40 | https://www.reddit.com/r/LocalLLaMA/comments/18pm34g/mixtral_is_way_faster_than_i_expected_on_amd/ | daedelus82 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pm34g | false | null | t3_18pm34g | /r/LocalLLaMA/comments/18pm34g/mixtral_is_way_faster_than_i_expected_on_amd/ | false | false | self | 106 | null |
Nvidia-SMI for Mixtral-8x7B-Instruct-v0.1 in case anyone wonders how much VRAM it sucks up (90636MiB) so you need 91GB of RAM | 64 | 2023-12-24T03:07:08 | Rollingsound514 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18pm1m7 | false | null | t3_18pm1m7 | /r/LocalLLaMA/comments/18pm1m7/nvidiasmi_for_mixtral8x7binstructv01_in_case/ | false | false | 64 | {'enabled': True, 'images': [{'id': 'Txeq0tX60Bw8n_wsnlOUk6nT3BQVfuvzQWG_m-4WFOU', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/24lwrshks58c1.png?width=108&crop=smart&auto=webp&s=08165bf33344b3aab16e32dfcb073dc3155f5639', 'width': 108}, {'height': 94, 'url': 'https://preview.redd.it/24lwrshks58c1.png?width=216&crop=smart&auto=webp&s=4f9175993c82d3221f15bfbc3faad71923886368', 'width': 216}, {'height': 140, 'url': 'https://preview.redd.it/24lwrshks58c1.png?width=320&crop=smart&auto=webp&s=88ad734fe5df6ca8a29c48a7c05da9adb45bb405', 'width': 320}, {'height': 280, 'url': 'https://preview.redd.it/24lwrshks58c1.png?width=640&crop=smart&auto=webp&s=480c1c114cdef8fe624226f62193c1521ac60aa6', 'width': 640}, {'height': 420, 'url': 'https://preview.redd.it/24lwrshks58c1.png?width=960&crop=smart&auto=webp&s=ce122e106e05217becdab1510b7cd4d2870ea589', 'width': 960}, {'height': 473, 'url': 'https://preview.redd.it/24lwrshks58c1.png?width=1080&crop=smart&auto=webp&s=dd2803990eb2b8f0beab62629b65b10bb2cf7848', 'width': 1080}], 'source': {'height': 1178, 'url': 'https://preview.redd.it/24lwrshks58c1.png?auto=webp&s=0a307261d9f49e19fdc4040d8624afb8d28d2428', 'width': 2688}, 'variants': {}}]} | |||
Can anybody tell me what's going on? | 3 | I'm fine-tuning Mistral 7b on a relatively "weird" dataset, meaning it is not something that it has much knowledge of at all. For whatever reason, the loss went up to 26 but then went down to 0.000000 and won't change at all. I'm training on two Tesla T4s with unsloth and I want to make sure nothing is going wrong. Here are my hyperparameters:
model_name = "mistralai/Mistral-7B-v0.1"
max_seq_length = 8192
learning_rate = 2e-4
weight_decay = 0.01
max_steps = 1000
warmup_steps = 10
batch_size = 1
gradient_accumulation_steps = 16
lr_scheduler_type = "linear"
optimizer = "adamw_8bit"
use_gradient_checkpointing = True
random_state = 3407
[train/loss graph on wandb](https://preview.redd.it/56zolz3np58c1.png?width=1927&format=png&auto=webp&s=91a28d09cd849b0f4843e694cc096c203b37b02b) | 2023-12-24T02:53:12 | https://www.reddit.com/r/LocalLLaMA/comments/18plshu/can_anybody_tell_me_whats_going_on/ | oof-baroomf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18plshu | false | null | t3_18plshu | /r/LocalLLaMA/comments/18plshu/can_anybody_tell_me_whats_going_on/ | false | false | 3 | null |
AI Automation for architectural floor plans | 6 | There are existing software tools out there that can automatically detect the total area and distinguish rooms in architectural floor plans. Is this reproducible with a ML model and what would be the necessary steps needed for this? | 2023-12-24T02:20:47 | https://www.reddit.com/r/LocalLLaMA/comments/18pl849/ai_automation_for_architectural_floor_plans/ | construct-dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pl849 | false | null | t3_18pl849 | /r/LocalLLaMA/comments/18pl849/ai_automation_for_architectural_floor_plans/ | false | false | self | 6 | null |
Chat with PDF locally - Ollama + chatd | 23 | Managed to get local Chat with PDF working, with Ollama + [chatd](https://github.com/BruceMacD/chatd).
Talking to the Kafka paper and the "Attention Is All You Need" paper.
Had to modify the code, as I was getting an error while using chatd.
Tried with 3 models:
LLaMA 7B - Was decent, first 2 images are of this.
Phi-2 - didn't go well
CodeLLaMA 34B - Really deep, best so far.
I was actually looking to chat with a codebase (couldn't find a good solution), remembered that chatd wasn't working last time, got it to work, and now I'm chatting with papers I downloaded.
https://preview.redd.it/6csca3zii58c1.png?width=3456&format=png&auto=webp&s=a31e36d1a9d1bde0165ec0f142f7eb3106a33fcb | 2023-12-24T02:15:24 | https://www.reddit.com/r/LocalLLaMA/comments/18pl4od/chat_with_pdf_locally_ollama_chatd/ | ethermelody | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pl4od | false | null | t3_18pl4od | /r/LocalLLaMA/comments/18pl4od/chat_with_pdf_locally_ollama_chatd/ | false | false | 23 | {'enabled': False, 'images': [{'id': 'bAvjTOxk0lj0VmvvoN1zGMDp3Ph1uX4KoW0ckdali8Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FKkpWhPxpbtZKRogMUBmT44Er86Q38dteHZ6LVIr5KM.jpg?width=108&crop=smart&auto=webp&s=77129f9f81921f984e766c2a27e416fb2973119b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FKkpWhPxpbtZKRogMUBmT44Er86Q38dteHZ6LVIr5KM.jpg?width=216&crop=smart&auto=webp&s=38484a2a01e97b5ed58ba17677e64b3c479b478e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FKkpWhPxpbtZKRogMUBmT44Er86Q38dteHZ6LVIr5KM.jpg?width=320&crop=smart&auto=webp&s=488ef91ad9539a0f81d39b25fec2bace9491c85b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FKkpWhPxpbtZKRogMUBmT44Er86Q38dteHZ6LVIr5KM.jpg?width=640&crop=smart&auto=webp&s=71a7f03045775c7ce5a66a654da41776d2b3b759', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FKkpWhPxpbtZKRogMUBmT44Er86Q38dteHZ6LVIr5KM.jpg?width=960&crop=smart&auto=webp&s=771b1b7fa02cf0bfc36fa6492b6f54645a8c2fc3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FKkpWhPxpbtZKRogMUBmT44Er86Q38dteHZ6LVIr5KM.jpg?width=1080&crop=smart&auto=webp&s=c146a1129056457a2115b9f7ae100520819a364a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FKkpWhPxpbtZKRogMUBmT44Er86Q38dteHZ6LVIr5KM.jpg?auto=webp&s=9be9ea40c45824bce837fd0b559d99e94617012d', 'width': 1200}, 'variants': {}}]} | |
Mixtral_7Bx2 MoE GGUF models Q2-Q5 | 26 | I don't have good hardware. Going to try these out as they were just published today.
[screenshot of the model list](https://preview.redd.it/z5fj6yskc58c1.png?width=1782&format=png&auto=webp&s=a67bdd28ddcfbf7065afff529c2c2acd93810ffa) | 2023-12-24T01:37:40 | https://www.reddit.com/r/LocalLLaMA/comments/18pkfuf/mixtral_7bx2_moe_gguf_models_q2q5/ | ZHName | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pkfuf | false | null | t3_18pkfuf | /r/LocalLLaMA/comments/18pkfuf/mixtral_7bx2_moe_gguf_models_q2q5/ | false | false | 26 | null |
How to QLoRa Fine Tune using Axolotl - Zero to Working | 61 | This is a short guide on how to get axolotl working in WSL on Windows or on Ubuntu.
It was frustrating for me to get working, as it isn't as straightforward as you'd think, because the installation documentation on the project is garbage and isn't helpful to beginners. As in, it won't run by just installing PyTorch and Python and then running their install script. They have Docker images that will probably work, but for those who don't want to learn how to get THOSE working, here is how you get it working on WSL or Ubuntu.
If you are on Windows start here:
1. Uninstall ALL of your Nvidia drivers and CUDA toolkit. Use DDU to uninstall cleanly as a last step which will auto reboot. [Display Driver Uninstaller (DDU) download version 18.0.7.0 (guru3d.com)](https://www.guru3d.com/download/display-driver-uninstaller-download/)
2. Download the latest Nvidia GPU driver from [nvidia.com](https://nvidia.com) and download CUDA toolkit 12.1 update 1 for windows. [CUDA Toolkit 12.1 Update 1 Downloads | NVIDIA Developer](https://developer.nvidia.com/cuda-12-1-1-download-archive?target_os=Windows&target_arch=x86_64&target_version=11&target_type=exe_local)
3. Install CUDA toolkit FIRST. Use custom installation and uncheck everything except for CUDA. Install Nvidia driver. Reboot.
4. Open terminal and type "wsl --install" to install WSL. Finish WSL install and reboot. NOTE: If you have a broken wsl install already, search windows features and uncheck virtual machine platform and subsystems for linux and reboot. Then try step 4 again.
5. Open the newly installed "Ubuntu" app from the start menu. Setup your credentials. Continue on Ubuntu instructions working inside the ubuntu instance of WSL.
If you are on Ubuntu start here:
1. Install CUDA toolkit for Ubuntu OR Ubuntu-WSL depending. Make sure to follow the LOCAL instructions not the remote. [CUDA Toolkit 12.1 Update 1 Downloads | NVIDIA Developer](https://developer.nvidia.com/cuda-12-1-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=WSL-Ubuntu&target_version=2.0&target_type=deb_local)
2. Install miniconda3 following the Linux commands instructions on the bottom [Miniconda — miniconda documentation](https://docs.conda.io/projects/miniconda/en/latest/)
3. Create Axolotl conda environment. Accept with y.
conda create -n axolotl python=3.10
4. Activate axolotl environment
conda activate axolotl
5. Install cuda 12.1.1 runtime.
conda install -y -c "nvidia/label/cuda-12.1.1" cuda
6. Install pytorch 2.1.1 (yes specifically this version)
pip install torch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/cu121
7. Clone Axolotl repo and install.
git clone https://github.com/OpenAccess-AI-Collective/axolotl
cd axolotl
pip3 install packaging
pip3 install -e '.[flash-attn,deepspeed]'
At this point Axolotl should be installed properly and be ready to use. If there are errors, you did a step wrong. Next steps are to create your training .yaml file for Axolotl to use and run.
I will assume you already have your curated dataset formatted in a .jsonl file format as the Axolotl repo supports.
If you are in WSL on Windows you can also open your Ubuntu drive folders by entering "\\wsl$" in File Explorer. Navigate to your home folder
"\\wsl.localhost\Ubuntu\home\(your username)"
Here you can edit files as if it's in windows so that you can edit yaml files and datasets (recommend using notepad++) without needing to use the command line.
Training:
1. In your ubuntu home directory, create a folder (e.g. train-mistral-v0.1) for keeping your training yaml and output models. Also create folders for your models and datasets.
2. Go to the axolotl folder in your ubuntu home directory. Go to the examples folder and copy whatever yaml file depending on what model you're trying to train. Paste the yaml file in the training folder.
3. Edit the yaml file with your text editor of choice and edit the configurations depending on your needs, replacing the model and datasets with your own respective paths (a minimal example yaml sketch follows after these steps).
4. If you're on a Turing GPU, set xformers to true and flash-attention to false, set sample packing to false. If you're on Ampere or Ada GPU leave as is
5. If you have multiple GPUs, you can set the field "deepspeed: /home/(your username)/axolotl/deepspeed/zero2.json", or zero3.json if you keep running out of memory. Read more about it here: [DeepSpeed Integration (huggingface.co)](https://huggingface.co/docs/transformers/main_classes/deepspeed)
6. Once you've setup the models and datasets and configured your yaml file it's time to start the training. In your training folder where your yaml file is, run this command:
accelerate launch -m axolotl.cli.train youryamlfile.yaml
It should run properly, tokenizing the input dataset and then starting the training.
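For reference, here's a minimal sketch of the kind of QLoRA yaml you end up editing in step 3 (field names are from axolotl's bundled examples; start from a real example file in the repo rather than typing this from scratch):

```
# Minimal QLoRA yaml sketch; copy a real file from axolotl/examples instead.
base_model: mistralai/Mistral-7B-v0.1
load_in_4bit: true
adapter: qlora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
datasets:
  - path: /home/youruser/datasets/train.jsonl
    type: alpaca
sequence_len: 4096
sample_packing: true      # set to false on Turing GPUs (see step 4)
flash_attention: true     # Turing: false here, xformers_attention: true
micro_batch_size: 1
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 0.0002
output_dir: ./qlora-out
```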
You can check your nvidia memory usage by opening another ubuntu terminal and running nvidia-smi. | 2023-12-24T01:23:53 | https://www.reddit.com/r/LocalLLaMA/comments/18pk6wm/how_to_qlora_fine_tune_using_axolotl_zero_to/ | nero10578 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pk6wm | false | null | t3_18pk6wm | /r/LocalLLaMA/comments/18pk6wm/how_to_qlora_fine_tune_using_axolotl_zero_to/ | false | false | self | 61 | {'enabled': False, 'images': [{'id': '0aZkqptWjKHPQPnFdg6vuqjHKWYzmB09DZQOjeR21p8', 'resolutions': [{'height': 98, 'url': 'https://external-preview.redd.it/n3m5fDgvTBx4-8NCZLqe6UilKLPK0b6eF6xatBWrA-k.jpg?width=108&crop=smart&auto=webp&s=053cd80fd9c7d530b2e6ad37c2f0557dbb521c98', 'width': 108}, {'height': 197, 'url': 'https://external-preview.redd.it/n3m5fDgvTBx4-8NCZLqe6UilKLPK0b6eF6xatBWrA-k.jpg?width=216&crop=smart&auto=webp&s=fdddc80b545b9a9a36d71431e98b6ace7fb6522d', 'width': 216}, {'height': 293, 'url': 'https://external-preview.redd.it/n3m5fDgvTBx4-8NCZLqe6UilKLPK0b6eF6xatBWrA-k.jpg?width=320&crop=smart&auto=webp&s=c71dc9bc316dbcd40e3c109a38172c3793fb2629', 'width': 320}], 'source': {'height': 555, 'url': 'https://external-preview.redd.it/n3m5fDgvTBx4-8NCZLqe6UilKLPK0b6eF6xatBWrA-k.jpg?auto=webp&s=c969d1aec7609708981842d11c1dd1a5f7a37af5', 'width': 606}, 'variants': {}}]} |
Custom Roleplaying AI | 1 | [removed] | 2023-12-24T01:18:43 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 18pk3i1 | false | null | t3_18pk3i1 | /r/LocalLLaMA/comments/18pk3i1/custom_roleplaying_ai/ | false | false | default | 1 | null | ||
Custom NSFW LLaMA | 1 | [removed] | 2023-12-24T01:17:07 | https://www.reddit.com/r/LocalLLaMA/comments/18pk2fq/custom_nsfw_llama/ | ewhitey12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pk2fq | false | null | t3_18pk2fq | /r/LocalLLaMA/comments/18pk2fq/custom_nsfw_llama/ | false | false | nsfw | 1 | {'enabled': False, 'images': [{'id': '9IS5HJzzlpcsU1cW52UUqC_FDOC62DR_OEJ1wDbj8pQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=108&crop=smart&auto=webp&s=75cbd7ead488b481bd6d5362d4b9084c60cbe398', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=216&crop=smart&auto=webp&s=60d2900d151045a4deea3d49bf50697ee22b2af9', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=320&crop=smart&auto=webp&s=c4ab8af635cd0be36d223d220f583bb506b325c0', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=640&crop=smart&auto=webp&s=5b92f7c26f9bbac3e3357e84935ba62ed9dbc9d2', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=960&crop=smart&auto=webp&s=7dda650157dc9f11bb0df9354d3ee7efc408a0b4', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?auto=webp&s=453e041b64847c676f7f66a078aa01fd829f113b', 'width': 1024}, 'variants': {'nsfw': {'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=ee7bee7ff19a684f3eb95b8211b4c826ff3ab279', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=20657a029d804021e3c76f26932f2d5edd235cad', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=32a2d64fa286429e18e9e78ca01469fe9b7fc499', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=65ee4e305c2128baad01841aec43ac72e1aba012', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=21d601213ee000285c30ee9f074d8f50abf4a588', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?blur=40&format=pjpg&auto=webp&s=d8d36d63de96c835ac298fe705c63925cf3679b3', 'width': 1024}}, 'obfuscated': {'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=ee7bee7ff19a684f3eb95b8211b4c826ff3ab279', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=20657a029d804021e3c76f26932f2d5edd235cad', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=32a2d64fa286429e18e9e78ca01469fe9b7fc499', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=65ee4e305c2128baad01841aec43ac72e1aba012', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=21d601213ee000285c30ee9f074d8f50abf4a588', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/Mw2XDGZWRowlUka_k38APDHino46OR9oxvicW9cJmH8.jpg?blur=40&format=pjpg&auto=webp&s=d8d36d63de96c835ac298fe705c63925cf3679b3', 'width': 1024}}}}]}
Production LLM suggestions? | 6 | I have been testing LLMs using text-generation-webui and I am getting great speeds from it, but when I use the same models through the Hugging Face library, the tokens/s drops. I've seen people on this sub say not to use the Hugging Face library for prod, so I am curious what should be used to push an LLM to prod. | 2023-12-24T00:35:13 | https://www.reddit.com/r/LocalLLaMA/comments/18pj9yg/production_llm_suggests/ | Danny_Davitoe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pj9yg | false | null | t3_18pj9yg | /r/LocalLLaMA/comments/18pj9yg/production_llm_suggests/ | false | false | self | 6 | null |
RTX 3080 10GB: best quantization format thingy for local LLM usage? | 3 | Hi, I've been looking into mucking around with local LLMs, nothing too serious, just wanna have some fun. I have a somewhat decent PC, well by "normal" standards anyways: R9 5900X, 64GB RAM, RTX 3080 10GB.
When running different LLMs, which quantization "formats" of a model should I download for optimal/usable performance?
Apologies if this is a dumb or easy question, I'm new to this (and also kinda tired at the moment) | 2023-12-24T00:32:39 | https://www.reddit.com/r/LocalLLaMA/comments/18pj85u/rtx_3080_10gb_best_quantization_format_thingy_for/ | Absolucyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pj85u | false | null | t3_18pj85u | /r/LocalLLaMA/comments/18pj85u/rtx_3080_10gb_best_quantization_format_thingy_for/ | false | false | self | 3 | null |
Pi.ai clone | 11 | Are there any open-source frameworks (incl. UI) that come close to what Pi.ai provides, including the ability to connect to your LLM of choice? | 2023-12-24T00:05:40 | https://www.reddit.com/r/LocalLLaMA/comments/18piosa/piai_clone/ | Silver_Patient_7253 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18piosa | false | null | t3_18piosa | /r/LocalLLaMA/comments/18piosa/piai_clone/ | false | false | self | 11 | null |
Best Method/Option to Fine Tune a Model for Specific Kind of Document Summarization | 5 | I hope this is the right community to ask. I am building some tooling to summarize documents and we'd like them summarized in a specific fashion. Base GPT-4 gets us fairly close but there are still some nuances missing from the final output.
I am thinking of just fine-tuning GPT-3.5 (this would also allow all the infrastructure to remain the same).
I am curious though if anyone would recommend another course of action such as training an open source model. Also curious if anyone thinks fine-tuning any model won't surpass performance from GPT-4
Thank you! | 2023-12-23T23:58:38 | https://www.reddit.com/r/LocalLLaMA/comments/18pijg9/best_methodoption_to_fine_tune_a_model_for/ | ccasazza | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pijg9 | false | null | t3_18pijg9 | /r/LocalLLaMA/comments/18pijg9/best_methodoption_to_fine_tune_a_model_for/ | false | false | self | 5 | null |
Software for probability scoring documents? | 1 | Does anyone have a wrapper for llama.cpp that will take an existing document, get the token-by-token probabilities, and output nicely pretty-printed, color-coded text?
I had a nice setup for this with GPT-2 to use as a copy-editing tool (errors tend to surprise the model), and even OpenAI had similar functionality in their initial web interface, though they discontinued it. (Presumably because RLHF makes the probabilities dumber and because performance optimizations like lookahead with a small model break it.)
It seems like such an obvious thing that I thought I'd check if someone has created one using modern tools before I go recreating it myself. | 2023-12-23T23:27:55 | https://www.reddit.com/r/LocalLLaMA/comments/18phxlk/software_for_probability_scoring_documents/ | nullc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18phxlk | false | null | t3_18phxlk | /r/LocalLLaMA/comments/18phxlk/software_for_probability_scoring_documents/ | false | false | self | 1 | null |
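No canonical tool seems to have emerged, but the core loop is small. A hedged sketch with llama-cpp-python, assuming your version returns prompt-token logprobs when the model is built with `logits_all=True` and called with `echo=True` (worth verifying against your install); the model path and surprise thresholds are placeholders:

```python
from llama_cpp import Llama

llm = Llama(model_path="mistral-7b-v0.1.Q4_K_M.gguf",  # placeholder path
            logits_all=True, n_ctx=2048)

def colorize(text: str) -> str:
    # Score the document itself (echo=True) instead of generating new text
    out = llm.create_completion(text, max_tokens=1, echo=True, logprobs=1)
    lp = out["choices"][0]["logprobs"]
    pieces = []
    for tok, logprob in zip(lp["tokens"], lp["token_logprobs"]):
        if logprob is None:        # first token has no conditional probability
            pieces.append(tok)
        elif logprob < -6:         # very surprising token: red
            pieces.append(f"\x1b[31m{tok}\x1b[0m")
        elif logprob < -3:         # mildly surprising: yellow
            pieces.append(f"\x1b[33m{tok}\x1b[0m")
        else:
            pieces.append(tok)
    return "".join(pieces)

print(colorize(open("draft.txt").read()))
```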
Dolphin 2.6 finetune of Phi-2 from erhartform | 71 | 2023-12-23T23:17:32 | https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2 | m18coppola | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 18phq4q | false | null | t3_18phq4q | /r/LocalLLaMA/comments/18phq4q/dolphin_26_finetune_of_phi2_from_erhartform/ | false | false | 71 | {'enabled': False, 'images': [{'id': 'Rggsu6GSu4-mPjM8bNCFdziWUb5upMx6vtGz3Od3b0g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gtNd-LHG6wI1FnYfO-52z5UjM1rXM63WHaCvw7mLmoA.jpg?width=108&crop=smart&auto=webp&s=c48321d0b9746c3442deeef235e5dfe45c1ec2b4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gtNd-LHG6wI1FnYfO-52z5UjM1rXM63WHaCvw7mLmoA.jpg?width=216&crop=smart&auto=webp&s=699181a5ebea7076ab664abd7e46fe15074b755c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gtNd-LHG6wI1FnYfO-52z5UjM1rXM63WHaCvw7mLmoA.jpg?width=320&crop=smart&auto=webp&s=f09c5da355f59e4b6ec5c5b5030e92bd33cec9d5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gtNd-LHG6wI1FnYfO-52z5UjM1rXM63WHaCvw7mLmoA.jpg?width=640&crop=smart&auto=webp&s=60b7b7692cde04f989aeec6fbe3b4585cd2bc8c1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gtNd-LHG6wI1FnYfO-52z5UjM1rXM63WHaCvw7mLmoA.jpg?width=960&crop=smart&auto=webp&s=f2624f7227d6131084dc717aa076355d11eee6ab', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gtNd-LHG6wI1FnYfO-52z5UjM1rXM63WHaCvw7mLmoA.jpg?width=1080&crop=smart&auto=webp&s=a8705868a39088021991e7a0f96583d12eb21d37', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gtNd-LHG6wI1FnYfO-52z5UjM1rXM63WHaCvw7mLmoA.jpg?auto=webp&s=7b69164dafd1851fbcfa7102204cac47b65b7a1c', 'width': 1200}, 'variants': {}}]} | ||
LMStudios for the win. Running LLMs locally. | 1 | [removed] | 2023-12-23T22:29:43 | https://www.reddit.com/r/LocalLLaMA/comments/18pgrmc/lmstudios_for_the_win_running_llms_locally/ | knob-0u812 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pgrmc | false | null | t3_18pgrmc | /r/LocalLLaMA/comments/18pgrmc/lmstudios_for_the_win_running_llms_locally/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'muQYWCdbMSfeCPJlJjKUvdHoY4SjF7h0PG_6tvx593Q', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/IsrcDbB1w9_EHHM-dbnqDJ1l-xhKl6hfAQ8yqxutrOw.jpg?width=108&crop=smart&auto=webp&s=31d1ae4b6459a8e6ab470236dd34a0cd3cda56cd', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/IsrcDbB1w9_EHHM-dbnqDJ1l-xhKl6hfAQ8yqxutrOw.jpg?width=216&crop=smart&auto=webp&s=48c8f3f15f15ccb8293bf606423936ea4c6ea6d8', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/IsrcDbB1w9_EHHM-dbnqDJ1l-xhKl6hfAQ8yqxutrOw.jpg?width=320&crop=smart&auto=webp&s=842169dcd93d460f228469638951cfded8592be8', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/IsrcDbB1w9_EHHM-dbnqDJ1l-xhKl6hfAQ8yqxutrOw.jpg?auto=webp&s=8f95934ecca506a8ebff9908080213f112514915', 'width': 480}, 'variants': {}}]} |
Models Megathread #3 - What models are you currently using? What are your thoughts on Mixtral? | 58 | Happy holidays to everyone. The last thread was about a month ago, and since then, we've seen the release of the high profile Mixtral and several other models.
___
#### Welcome to the r/LocalLLaMA Models Megathread
What models are you currently using and why? Share any recommendations you have.
There's also another discussion topic for this thread:
- What are your thoughts on Mixtral? Do you have any tips on generation parameters or anything else that you'd like to share?
___
To bring anyone up to speed, here's a short listing of some of the most upvoted models and resources in the past month:
**Models**
- [Mixtral](https://www.reddit.com/r/LocalLLaMA/comments/18dpptc/new_mistral_models_just_dropped_magnet_links/) and [Mistral v0.2](https://www.reddit.com/r/LocalLLaMA/comments/18fpp3d/mistral_website_was_just_updated/) from Mistral AI
- [Mamba: Linear-Time Sequence Modeling with Selective State Spaces](https://www.reddit.com/r/LocalLLaMA/comments/18arr1t/mamba_subexponential_attention_replacement_from/) and [Mamba-Chat: A Chat LLM based on State Space Models](https://www.reddit.com/r/LocalLLaMA/comments/18ckls0/mambachat_a_chat_llm_based_on_state_space_models/)
- [Phi-2: The surprising power of small language models](https://www.reddit.com/r/LocalLLaMA/comments/18gra6c/phi2_the_surprising_power_of_small_language_models/)
- [Meta AI Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations](https://www.reddit.com/r/LocalLLaMA/comments/18cxx1d/meta_releases_llama_guard_the_hugging_edition/)
- [StripedHyena: a state-of-the-art beyond Transformer model](https://www.reddit.com/r/LocalLLaMA/comments/18dvhld/togethercompute_releases_stripedhyena_7b/)
**Resources**
- [QuIP#: creating state-of-the-art 2 bit quantized models](https://www.reddit.com/r/LocalLLaMA/comments/18e0mlp/quip_state_of_the_art_2_bit_quantization_run_70b/)
- [MLX: An array framework for Apple silicon](https://www.reddit.com/r/LocalLLaMA/comments/18bwd1y/apple_releases_mlx_ml_framework_for_apple_silicon/)
- [PowerInfer: High-speed Large Language Model Serving on PCs with Consumer-grade GPUs](https://www.reddit.com/r/LocalLLaMA/comments/18luk10/wait_llama_and_falcon_are_also_moe/)
- [HELM Lite: Lightweight and Broad Capabilities Evaluation](https://www.reddit.com/r/LocalLLaMA/comments/18mjpa2/new_benchmark_by_stanford_helm_lite_v100/)
- [Half-Quadratic Quantization of Large Machine Learning Models](https://www.reddit.com/r/LocalLLaMA/comments/18cwvqn/r_halfquadratic_quantization_of_large_machine/)
___
[Previous Thread](https://www.reddit.com/r/LocalLLaMA/comments/185770m/models_megathread_2_what_models_are_you_currently/) | [New Models](https://www.reddit.com/r/LocalLLaMA/search?sort=new&restrict_sr=on&q=flair%3A%22New%20Model%22) | 2023-12-23T22:13:32 | https://www.reddit.com/r/LocalLLaMA/comments/18pgfuy/models_megathread_3_what_models_are_you_currently/ | Technical_Leather949 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pgfuy | false | null | t3_18pgfuy | /r/LocalLLaMA/comments/18pgfuy/models_megathread_3_what_models_are_you_currently/ | false | true | self | 58 | null |
What are the best options for fine-tuning for longer context | 1 | [removed] | 2023-12-23T21:55:37 | https://www.reddit.com/r/LocalLLaMA/comments/18pg2lr/want_are_the_best_options_for_finetuning_for/ | Funny_War_9190 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pg2lr | false | null | t3_18pg2lr | /r/LocalLLaMA/comments/18pg2lr/want_are_the_best_options_for_finetuning_for/ | false | false | self | 1 | null |
Shortcomings of vLLM | 7 | I am working on a project involving vllm. My professor asked me to point out the shortcomings of vllm, find room for improvement and implement them. Can anyone help me out with resources? I got to know there are some existing improved open source versions of vllm | 2023-12-23T21:32:19 | https://www.reddit.com/r/LocalLLaMA/comments/18pflr1/shortcomings_of_vllm/ | No_Requirement8319 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pflr1 | false | null | t3_18pflr1 | /r/LocalLLaMA/comments/18pflr1/shortcomings_of_vllm/ | false | false | self | 7 | null |
Informal Mixtral v ChatGPT4 comparison | 8 | I have been running the latest Mixtral 8x7B on my M2 Max laptop and it is very, very impressive. It takes 10-15 secs of think time, but the answers, to me, for the few use cases I tested, have been indistinguishable from ChatGPT4. Note: text only.
I don't do benchmarks or create role-playing characters etc. I did not test/compare any image or sound, as I don't think Mixtral has that capability.
I am running Mixtral via ollama.
I ask it to write code or to explain DL related terms - specifically
DL terms
——
What is an autoencoder?
What is cross entropy?
What is the Transformer model?
Code
——
a) Write a Python function which takes a url, invokes it and returns a tuple with the http return code in the first element and a string with the returned HTML in the second
Then I see if it writes code that will catch exceptions etc without being instructed to.
b) Write a Chrome browser extension that will insert a button at the bottom of the current HTML page.
Most small coding models are reasonable at the first one but failed this second one rather miserably, providing code for a button event handler and missing the fact I asked it to create and insert the button.
ChatGPT was excellent at both.
Surprisingly so was Mixtral.
As I said, I am not benchmarking but seeing if I can avoid ChatGPT4 for my specific needs, which are extrapolations of the above two use-case areas with more demanding content and coding. A few months ago I bootstrapped myself into this area by asking ChatGPT what papers I should read, and when I didn't understand a term in a paper I asked it to explain. This worked amazingly well, and I want to see if I can use Mixtral in future.
Mixtral 8x7B has been the very first one to positively surprise me. I will be pushing it more and more to see what the boundaries are. | 2023-12-23T21:24:05 | https://www.reddit.com/r/LocalLLaMA/comments/18pffjz/informal_mixtral_v_chatgpt4_comparison/ | nborwankar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pffjz | false | null | t3_18pffjz | /r/LocalLLaMA/comments/18pffjz/informal_mixtral_v_chatgpt4_comparison/ | false | false | self | 8 | null |
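For reference, this is roughly the shape of answer the first coding prompt fishes for, including the unprompted exception handling the post treats as the differentiator (a sketch, not any model's actual output):

```python
import requests

def fetch(url: str) -> tuple[int, str]:
    """Return (HTTP status code, response body); (-1, error message) on failure."""
    try:
        resp = requests.get(url, timeout=10)
        return resp.status_code, resp.text
    except requests.RequestException as exc:
        return -1, str(exc)
```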
Create a Santa greeting | 1 | Hey all!
I’m trying to create an audio recording of santa saying merry Christmas.
Logically, I could just record it myself or pay for a service, but the idea is to use this as a learning opportunity.
Why this isn't trivially easy:
- want it in Spanish
- Santa tends to have weird inflections and laugh, which makes tts do weird shit.
I'm using Suno/Bark/TTS webui to clone audio with a target Santa voice and a text, but haven't been able to get decent results. Any suggestions? | 2023-12-23T21:11:15 | https://www.reddit.com/r/LocalLLaMA/comments/18pf61g/create_a_santa_greeting/ | wontreadterms | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pf61g | false | null | t3_18pf61g | /r/LocalLLaMA/comments/18pf61g/create_a_santa_greeting/ | false | false | self | 1 | null |
Nucleus-X: Retentive Network (RetNet) based LLM released - Nucleus AI | 56 | 2023-12-23T21:03:39 | https://huggingface.co/NucleusAI/Nucleus-X | APaperADay | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 18pf0bu | false | null | t3_18pf0bu | /r/LocalLLaMA/comments/18pf0bu/nucleusx_retentive_network_retnet_based_llm/ | false | false | 56 | {'enabled': False, 'images': [{'id': 'UXhk3wMzfcw8WT37IzVJrx0uRy7pLdkJcxuzkOwfw-k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/DvUtgQdZM2a-sG2nE3qm4_ptuBvUFHga5EOT-qZkMnU.jpg?width=108&crop=smart&auto=webp&s=7a5f410944681e62bba564733948774e8f3edba3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/DvUtgQdZM2a-sG2nE3qm4_ptuBvUFHga5EOT-qZkMnU.jpg?width=216&crop=smart&auto=webp&s=dde5080b14e3975b4e173b8d9ca20c03f312b682', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/DvUtgQdZM2a-sG2nE3qm4_ptuBvUFHga5EOT-qZkMnU.jpg?width=320&crop=smart&auto=webp&s=eb6bba9cc3b0310298ccadabeea30b3bfdace74b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/DvUtgQdZM2a-sG2nE3qm4_ptuBvUFHga5EOT-qZkMnU.jpg?width=640&crop=smart&auto=webp&s=e3b704c781a54e9d5e46715157f88521ca41839a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/DvUtgQdZM2a-sG2nE3qm4_ptuBvUFHga5EOT-qZkMnU.jpg?width=960&crop=smart&auto=webp&s=7ec1f4b90bbc8e3ec58e252a1e771b7463646867', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/DvUtgQdZM2a-sG2nE3qm4_ptuBvUFHga5EOT-qZkMnU.jpg?width=1080&crop=smart&auto=webp&s=d4d2c503599d276a1d71d73c2d367d5e99c38094', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/DvUtgQdZM2a-sG2nE3qm4_ptuBvUFHga5EOT-qZkMnU.jpg?auto=webp&s=bcd3c5c17ef4a3b1f0b8ebcfe4f7359fd46cacb9', 'width': 1200}, 'variants': {}}]} | ||
Underperforming 3090. | 2 | I am running Yi 34B 4bpw on ooba using exl2 loading in 8-bit, and I should be getting around 10 tps, but I find I can't get above 2, with the average being 1.5 tps.
I also loaded the same model in exui but found the tps to be similar.
What are some tests I can perform to see where the bottleneck is, or other ways to optimize my setup? | 2023-12-23T20:59:58 | https://www.reddit.com/r/LocalLLaMA/comments/18pexb8/underperforming_3090/ | BackyardAnarchist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pexb8 | false | null | t3_18pexb8 | /r/LocalLLaMA/comments/18pexb8/underperforming_3090/ | false | false | self | 2 | null |
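Before digging into loader settings, a few generic checks with stock nvidia-smi can narrow down where the time is going:

```sh
# Is the GPU actually busy while tokens are generating?
nvidia-smi dmon -s u

# Is the model resident in VRAM, or is something spilling to system RAM?
nvidia-smi --query-gpu=memory.used,memory.total --format=csv

# Is the card negotiating a narrow or older PCIe link?
nvidia-smi --query-gpu=pcie.link.gen.current,pcie.link.width.current --format=csv
```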
dolphin-2.5-mixtral-8x7b, yi-34b - anything repetition problem | 11 | Has anyone found a reliable way to fix the repetition problem on those models?
These are the most promising models so far, but the quantized versions have repetition problems. I only have a CPU and a pretty modest GPU, so I have to run GGUF models. First it was the Yi models and finetunes: in long answers they started repeating themselves.
And now, dolphin-2.5-mixtral-8x7b. It writes pretty nicely, but it starts repeating itself pretty soon. I tried both Q4_K_M and Q8 quants. I tried different settings of min_p, top_k, repetition penalty, and temperature; it makes no difference. | 2023-12-23T20:58:42 | https://www.reddit.com/r/LocalLLaMA/comments/18pewf6/dolphin25mixtral8x7b_yi34b_anything_repetition/ | uti24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pewf6 | false | null | t3_18pewf6 | /r/LocalLLaMA/comments/18pewf6/dolphin25mixtral8x7b_yi34b_anything_repetition/ | false | false | self | 11 | null |
Mistral Instruct 7B V2 trying to unveil V1 | 2 | 2023-12-23T20:45:53 | https://www.reddit.com/gallery/18pemuf | Inevitable-Highway85 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 18pemuf | false | null | t3_18pemuf | /r/LocalLLaMA/comments/18pemuf/mistral_instruct_7b_v2_trying_to_unveal_v1/ | false | false | 2 | null |
Synthetic instructions generated by OpenAI | 1 | Can I use Self-Instruct methods with ChatGPT/GPT-4 to generate synthetic instructions to fine-tune open-source LLMs used in commercial products? OpenAI's Terms of Service clearly state that it's not allowed. But there are plenty of datasets that were generated by GPT-3 or GPT-4 and are available for commercial use (e.g. OpenOrca, WizardLM, UltraChat, etc.). I'm confused. | 2023-12-23T20:38:11 | https://www.reddit.com/r/LocalLLaMA/comments/18peh2r/synthetic_instructions_generated_by_openai/ | imilovanov | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18peh2r | false | null | t3_18peh2r | /r/LocalLLaMA/comments/18peh2r/synthetic_instructions_generated_by_openai/ | false | false | self | 1 | null |
Local fine tuning for Mistral and SDXL, GPU mem/latency optimization | 49 | 100% bootstrapped new startup with source available on GitHub. It lets you fine-tune Mistral-7B and SDXL. In particular, for the LLM fine-tuning we implemented a data-prep pipeline that turns websites/PDFs/doc files into question-answer pairs for training the small LLM using a big LLM.
It includes a GPU scheduler that can do fine-grained GPU memory scheduling (Kubernetes can only schedule whole GPUs; we schedule per GB of GPU memory to pack both inference and fine-tuning jobs into the same fleet) to fit model instances into GPU memory and optimally trade off user-facing latency against GPU memory utilization.
It's a pretty simple stack of control plane and a fat container that runs anywhere you can get hold of a GPU (e.g. runpod).
Architecture: https://docs.helix.ml/docs/architecture
Demo walkthrough showing runner dashboard: https://docs.helix.ml/docs/overview
Run it yourself: https://docs.helix.ml/controlplane
Please roast me! | 2023-12-23T20:22:26 | https://www.reddit.com/r/LocalLLaMA/comments/18pe5f1/local_fine_tuning_for_mistral_and_sdxl_gpu/ | lewqfu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pe5f1 | false | null | t3_18pe5f1 | /r/LocalLLaMA/comments/18pe5f1/local_fine_tuning_for_mistral_and_sdxl_gpu/ | false | false | self | 49 | {'enabled': False, 'images': [{'id': 'bKB9jp-2rlGpTyNnNoc2p8phFNCBskm00IsNJ-1fEKg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/skdYc0iNiGXugotbG_U8mnC6zMo5XnmOKROIixvZ3hc.jpg?width=108&crop=smart&auto=webp&s=24383e899c1c784eda08b3b2f6384d0ed04c90e9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/skdYc0iNiGXugotbG_U8mnC6zMo5XnmOKROIixvZ3hc.jpg?width=216&crop=smart&auto=webp&s=763d9d15a7dd4234135428812c2a6178cef22aa6', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/skdYc0iNiGXugotbG_U8mnC6zMo5XnmOKROIixvZ3hc.jpg?width=320&crop=smart&auto=webp&s=e224a7c83b52cf070822429926a779812234b77a', 'width': 320}], 'source': {'height': 638, 'url': 'https://external-preview.redd.it/skdYc0iNiGXugotbG_U8mnC6zMo5XnmOKROIixvZ3hc.jpg?auto=webp&s=1295fff6c924f774ca4095c563537637cb31dcec', 'width': 638}, 'variants': {}}]} |
VideoPoet: A Large Language Model for Zero-Shot Video Generation | 37 | It's a language model that can do text-to-video, image-to-video, video editing, stylization, and inpainting.
Wait what? A language model which can OUTPUT videos ?! And it's good! Have a look.
https://sites.research.google/videopoet/text-to-video/
Arxiv: https://www.arxiv-vanity.com/papers/2312.14125/
Sadly, no demo or model release yet :( | 2023-12-23T20:02:49 | https://www.reddit.com/r/LocalLLaMA/comments/18pdqls/videopoet_a_large_language_model_for_zeroshot/ | Independent_Key1940 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pdqls | false | null | t3_18pdqls | /r/LocalLLaMA/comments/18pdqls/videopoet_a_large_language_model_for_zeroshot/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': 'V6UYDiwDYSv0TDaRUY23xSc79rvhB7es9GqD7uIZqjQ', 'resolutions': [{'height': 113, 'url': 'https://external-preview.redd.it/OChzTaNcT36FVviIWiT3fd4_rKpRGGC4v0w0mqA6s3g.jpg?width=108&crop=smart&auto=webp&s=93b8f4ace2596520a76fe977fac369402abc7e19', 'width': 108}, {'height': 226, 'url': 'https://external-preview.redd.it/OChzTaNcT36FVviIWiT3fd4_rKpRGGC4v0w0mqA6s3g.jpg?width=216&crop=smart&auto=webp&s=27fef0405f831199b03335f8664276a3fba238f9', 'width': 216}, {'height': 336, 'url': 'https://external-preview.redd.it/OChzTaNcT36FVviIWiT3fd4_rKpRGGC4v0w0mqA6s3g.jpg?width=320&crop=smart&auto=webp&s=3641c258f22ed10fe1c01e92a2bbdf0800b0b137', 'width': 320}], 'source': {'height': 538, 'url': 'https://external-preview.redd.it/OChzTaNcT36FVviIWiT3fd4_rKpRGGC4v0w0mqA6s3g.jpg?auto=webp&s=02a835f08cc3e87c3b538c0765306e5a50c27ef0', 'width': 512}, 'variants': {}}]} |
Can I run Mixtral 8x7b? | 7 | I was trying to run different "mixtral-8x7b-instruct-v0.1" versions, and nothing could be loaded. This time I used "mixtral-8x7b-instruct-v0.1.Q3_K_M.gguf" and got this error (Photo 1). Can I run Mixtral on the following hardware (and how?), and is it worth it for RP? What other models that can be launched on this hardware can outperform Mistral 7B Instruct 0.2? Thanks everyone in advance.
RTX 3050 8GB
32GB RAM
R5 5600
https://preview.redd.it/bjr1tdp0l38c1.png?width=1185&format=png&auto=webp&s=0abf495fe46741cacae2565031bd79bfaa50ab08 | 2023-12-23T19:41:27 | https://www.reddit.com/r/LocalLLaMA/comments/18pdagk/can_i_run_mixtral_8x7b/ | Deep-Yoghurt878 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pdagk | false | null | t3_18pdagk | /r/LocalLLaMA/comments/18pdagk/can_i_run_mixtral_8x7b/ | false | false | 7 | null | |
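One thing to rule out first: Mixtral's MoE architecture only landed in llama.cpp in mid-December 2023, so older builds (and frontends bundling them) refuse to load these GGUFs at all. With a current build, a partial-offload run on an 8GB card looks roughly like this (the `-ngl` value is a guess to tune against your VRAM):

```sh
# Recent llama.cpp build required for Mixtral; offload a few layers to the GPU,
# keep the rest in the 32GB of system RAM
./main -m mixtral-8x7b-instruct-v0.1.Q3_K_M.gguf -ngl 8 -c 2048 -p "[INST] Hello [/INST]"
```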
Any LLM model that practically works on an iPhone? | 3 | I used llama.cpp SwiftUI on an iPhone 12 Pro Max.
Some run fast, like TinyLlama at q4 and q8, but the model isn't useful.
Some good models, like orca-2-7b-q2k, are too slow; it needed 5.83G of memory.
The Mistral q4 I like most is also too slow.
I want to try on iPhone 14 and 15, but Xcode is painful to work with for iOS 17; it's hard to make it work.
Does anybody have a decent LLM model that's practically useful?
​ | 2023-12-23T19:34:41 | https://www.reddit.com/r/LocalLLaMA/comments/18pd59h/any_llm_model_practically_work_on_an_iphone/ | RepulsiveDepartment8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pd59h | false | null | t3_18pd59h | /r/LocalLLaMA/comments/18pd59h/any_llm_model_practically_work_on_an_iphone/ | false | false | self | 3 | null |
Mamba with huggingface integration | 19 | Mamba with huggingface integration, will there be a gguf version, too?
https://preview.redd.it/8rdi5yboh38c1.png?width=1024&format=png&auto=webp&s=1cffc500d773d0c0cc6c1b48e59adb876062de34
[https://huggingface.co/Q-bert/Mamba-1B](https://huggingface.co/Q-bert/Mamba-1B) | 2023-12-23T19:22:18 | https://www.reddit.com/r/LocalLLaMA/comments/18pcw1q/mamba_with_huggingface_integration/ | MLTyrunt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pcw1q | false | null | t3_18pcw1q | /r/LocalLLaMA/comments/18pcw1q/mamba_with_huggingface_integration/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'Bnc0JyAAkO61KmO__XuTfEvoGuB89jkQxsumL9VObXM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gpRTBpmYJHlgXtN2UnRh20j1TQueun1OpkxnksUomvs.jpg?width=108&crop=smart&auto=webp&s=e5d46b40d9ba34436aa16bcd570f0fc3fb4f1bb8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gpRTBpmYJHlgXtN2UnRh20j1TQueun1OpkxnksUomvs.jpg?width=216&crop=smart&auto=webp&s=48897b6ecea07bc9a0c5ccf49ea93bf75dcf3e5f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gpRTBpmYJHlgXtN2UnRh20j1TQueun1OpkxnksUomvs.jpg?width=320&crop=smart&auto=webp&s=ed8296b74d4e7681f38894f97dd594caded8b8ab', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gpRTBpmYJHlgXtN2UnRh20j1TQueun1OpkxnksUomvs.jpg?width=640&crop=smart&auto=webp&s=c00c0d265b5ccb0f59fb6e01c2a223be95c1d5ca', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gpRTBpmYJHlgXtN2UnRh20j1TQueun1OpkxnksUomvs.jpg?width=960&crop=smart&auto=webp&s=5912e9cd5fbb09667f8ffb4dee6abd2723c1bf91', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gpRTBpmYJHlgXtN2UnRh20j1TQueun1OpkxnksUomvs.jpg?width=1080&crop=smart&auto=webp&s=3e6828b0b7f66bb5fb60d224eef0db5025dfad71', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gpRTBpmYJHlgXtN2UnRh20j1TQueun1OpkxnksUomvs.jpg?auto=webp&s=3e0e5dbed8c7bd636106246fc0e1a8f2b76ba732', 'width': 1200}, 'variants': {}}]} | |
how to fine-tune mistral on medical datasets? | 6 | Hey, I have been trying to fine-tune Mistral 7B on keivalya/MedQuad-MedicalQnADataset and other medical datasets, but I am unable to find any guide or article to use as a reference. Can anyone help me write code to fine-tune the LLM? | 2023-12-23T19:14:20 | https://www.reddit.com/r/LocalLLaMA/comments/18pcq5z/how_to_finetune_mistral_on_medical_datasets/ | vaishakgkumar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pcq5z | false | null | t3_18pcq5z | /r/LocalLLaMA/comments/18pcq5z/how_to_finetune_mistral_on_medical_datasets/ | false | false | self | 6 | null |
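Since this question comes up often, here is a hedged QLoRA sketch using TRL's SFTTrainer; the `Question`/`Answer` column names are assumptions taken from the dataset card, so verify them before running:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

dataset = load_dataset("keivalya/MedQuad-MedicalQnADataset", split="train")

def to_text(row):
    # Mistral-instruct prompt style; adjust if the dataset's columns differ
    return {"text": f"<s>[INST] {row['Question']} [/INST] {row['Answer']}</s>"}

dataset = dataset.map(to_text)

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                                bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1",
                                             quantization_config=bnb_config,
                                             device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer.pad_token = tokenizer.eos_token

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, bias="none",
                           task_type="CAUSAL_LM",
                           target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]),
    dataset_text_field="text",
    max_seq_length=1024,
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="mistral-medquad-lora",
                           per_device_train_batch_size=4,
                           gradient_accumulation_steps=4,
                           num_train_epochs=1,
                           learning_rate=2e-4,
                           fp16=True),
)
trainer.train()
```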
What is dolphin? | 62 | What's the difference between these two models:
[https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF)
[https://huggingface.co/TheBloke/dolphin-2.5-mixtral-8x7b-GGUF](https://huggingface.co/TheBloke/dolphin-2.5-mixtral-8x7b-GGUF)
Sorry, this is all new to me but I want to learn more. | 2023-12-23T19:11:29 | https://www.reddit.com/r/LocalLLaMA/comments/18pco0j/what_is_dolphin/ | panic_in_the_galaxy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pco0j | false | null | t3_18pco0j | /r/LocalLLaMA/comments/18pco0j/what_is_dolphin/ | false | false | self | 62 | {'enabled': False, 'images': [{'id': 'zjNIGTMwA8_2859f_IZsUxwU-LAtGdWFd1GsZDfnEwI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=108&crop=smart&auto=webp&s=7e4d40db3f52b19bf486502af3ae70ccdb9267c5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=216&crop=smart&auto=webp&s=b3999b5f8b3b0b58cfc4c46f92bff5ae7bfc2673', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=320&crop=smart&auto=webp&s=b08fd6444215d5931933d7bf1cfbf1b2c1886804', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=640&crop=smart&auto=webp&s=cfba937d7b846aa1e404f0141ded4b0ad2063e5c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=960&crop=smart&auto=webp&s=b7868598636843e9a0bf1c329f5e971f579f4742', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?width=1080&crop=smart&auto=webp&s=dc77f2f44d1de80f2c993e9339750fc5b5f5bd3e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vpWBiwKwdU_Dmax7Myc3hNG1g_IFkHXHCQ0zc_Ykfac.jpg?auto=webp&s=a57ad70492f52a87f5b93de7663bf81140506dfd', 'width': 1200}, 'variants': {}}]} |
Hardware requirements for realtime-ish responses? | 1 | I'm trying to run mixtral on a ryzen 5600 with 16gb ram and a radeon 5700xt. It takes 15+ seconds per token. I'm intending to get a laptop for uni and I want it to be able to run an LLM with adequate response time. What would I need that my current pc doesn't have? Just an assload of ram? I have a solid GPU and it seems to not be helping at all, so on the laptop would a radeon iGPU be fine? | 2023-12-23T18:57:08 | https://www.reddit.com/r/LocalLLaMA/comments/18pcd4v/hardware_requirements_for_realtimeish_responses/ | Flypaste | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pcd4v | false | null | t3_18pcd4v | /r/LocalLLaMA/comments/18pcd4v/hardware_requirements_for_realtimeish_responses/ | false | false | self | 1 | null |
Is it possible to offload mixtral layers to 2 different computers on the same network? | 2 | I have access to two 24GB GPUs:
one is an RTX 4090 (on my main PC) and the other an RTX 3090 (on my bro's PC).
But with my current setup, I can't physically fit both GPUs in a single PC.
I want to run Mixtral Q6 or Q5 at higher speed. It might sound stupid, but is there any way to split the model, host the first half on one PC and the other half on another, pass the data between the PCs over the network, and get the full output at the end?
​ | 2023-12-23T18:34:53 | https://www.reddit.com/r/LocalLLaMA/comments/18pbwen/is_it_pissibe_to_offload_mixtral_layers_to_2/ | nixscorpio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pbwen | false | null | t3_18pbwen | /r/LocalLLaMA/comments/18pbwen/is_it_pissibe_to_offload_mixtral_layers_to_2/ | false | false | self | 2 | null |
Desktop publishing AI assistant? | 1 | Hello,
I work at a newspaper, as an editorial secretary. I am responsible, among other things, for the layout of articles.
I was wondering if there is an artificial intelligence software that could help me automate this part of my work (usually the most boring one).
A software that can process texts so that they fit properly into the templates I have prepared (by modifying the letter spacing or some other parameters like that) inside the desktop publishing software of my company.
Some sort of "add-on" to this company's software, so to speak...
Am I crazy? | 2023-12-23T18:32:32 | https://www.reddit.com/r/LocalLLaMA/comments/18pbumu/desktop_publishing_ai_assistant/ | spinoza4ever | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pbumu | false | null | t3_18pbumu | /r/LocalLLaMA/comments/18pbumu/desktop_publishing_ai_assistant/ | false | false | self | 1 | null |
Fine-tuning Mistral 7b with AWS Athena documentation | 1 | This will be my first attempt at fine-tuning an LLM. I've been impressed by Mistral 7b's capability to generate SQL queries when presented with a schema and question. However, I need it to function in AWS' Athena dialect. The documentation is here: https://docs.aws.amazon.com/athena/
I found another thread in this subreddit where someone scraped the entirety of UE5 Engine's documentation and used it for training data of a LoRA on top of Mistral 7b. Does that seem like a reasonable approach here? I'm open to other alternatives as well. | 2023-12-23T18:06:56 | https://www.reddit.com/r/LocalLLaMA/comments/18pbb28/finetuning_mistral_7b_with_aws_athena/ | TheCoconutTree | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pbb28 | false | null | t3_18pbb28 | /r/LocalLLaMA/comments/18pbb28/finetuning_mistral_7b_with_aws_athena/ | false | false | self | 1 | null |
How do websites retrieve all LLM VRAM requirements? | 9 | I went through some great websites for LLM models, most notably [https://llm.extractum.io/list/](https://llm.extractum.io/list/)
Although I know where to get most of the data, one pain point for me is VRAM, which I couldn't find anywhere in the Hugging Face API. I thought it could be calculated from parameter count, but even that is not available.
I want to develop an all in one website for LLMs and other areas of ML research, that acts as a starting point for people, and I think it's crucial for me to show the hardware requirements. | 2023-12-23T17:55:01 | https://www.reddit.com/r/LocalLLaMA/comments/18pb1wc/how_do_websites_retrieve_all_llm_vram_requirements/ | Kaolin2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pb1wc | false | null | t3_18pb1wc | /r/LocalLLaMA/comments/18pb1wc/how_do_websites_retrieve_all_llm_vram_requirements/ | false | false | self | 9 | null |
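For what it's worth, the Hub does expose total parameter counts through safetensors metadata in newer huggingface_hub versions (`ModelInfo.safetensors`), so a rough estimate is scriptable; the 20% headroom factor below is a hand-wavy allowance for activations/KV cache, not a measured constant:

```python
from huggingface_hub import HfApi

def estimate_vram_gb(repo_id: str, bytes_per_param: float = 2.0) -> float:
    info = HfApi().model_info(repo_id)
    if info.safetensors is None:
        raise ValueError("no safetensors metadata; fall back to summing file sizes")
    params = info.safetensors.total                  # total parameter count
    return params * bytes_per_param * 1.2 / 1024**3  # ~20% headroom, a guess

print(estimate_vram_gb("mistralai/Mistral-7B-v0.1"))        # fp16 weights
print(estimate_vram_gb("mistralai/Mistral-7B-v0.1", 0.55))  # ~4-bit quant
```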
What is the problem? I had to reinstall the interface but I got an error. | 2 |
last version from github
text gen web ui
https://preview.redd.it/5bf2cqz3238c1.jpg?width=1512&format=pjpg&auto=webp&s=8ea01e0157291c9267b1a11e312397f5975883f4 | 2023-12-23T17:54:31 | https://www.reddit.com/r/LocalLLaMA/comments/18pb1jn/what_is_the_problem_i_had_to_reinstall_the/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pb1jn | false | null | t3_18pb1jn | /r/LocalLLaMA/comments/18pb1jn/what_is_the_problem_i_had_to_reinstall_the/ | false | false | 2 | null | |
What is the Hugging Face Leaderboard? | 1 | I understand the benchmark at https://chat.lmsys.org/, and it appears logical. As I expected, GPT-4 is on top. However, upon reviewing Hugging Face's leaderboard, I've noticed many lesser-known models, for example Meow, Qwen, and Suschat. I had presumed that Mixtral's model would rank among the top, but surprisingly, it's positioned quite low. Could there be an oversight or something I'm missing? | 2023-12-23T17:51:53 | https://www.reddit.com/r/LocalLLaMA/comments/18pazo9/what_is_leaderboard_of_huggingface/ | birisix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18pazo9 | false | null | t3_18pazo9 | /r/LocalLLaMA/comments/18pazo9/what_is_leaderboard_of_huggingface/ | false | false | self | 1 | null |
Finding LoRAs on huggingface | 9 | Thanks to a [comment](https://www.reddit.com/r/LocalLLaMA/comments/18oj983/comment/kemafd6/) by u/Outrageous-North5318, I learned this morning that huggingface has a category for LoRAs that includes LLM LoRAs. (It's clearly popular for image models, and I think I knew about it at some point but didn't realize anyone had used it for LLMs.)
It's a little hidden. This link will show you the text-generation LoRAs: [https://huggingface.co/models?pipeline\_tag=text-generation&other=lora&sort=trending](https://huggingface.co/models?pipeline_tag=text-generation&other=lora&sort=trending)
There's only a couple of pages of LoRAs there right now, but I spotted some interesting ones, or at least ones worth talking about.
* [LoftQ](https://arxiv.org/abs/2310.08659) quantizes and calculates an optimum LoRA rank at the same time. They have several LoRAs uploaded. Here's an example LoRA for Mistral 7b: [https://huggingface.co/LoftQ/Mistral-7B-v0.1-4bit-64rank](https://huggingface.co/LoftQ/Mistral-7B-v0.1-4bit-64rank)
* Notus has a [Zephyr 7B LoRA](https://huggingface.co/argilla/notus-7b-v1-lora-adapter). I think, because the model card isn't as clear as it could be. (and the people behind it have released some [DPO](https://huggingface.co/collections/argilla/preference-datasets-for-dpo-656f0ce6a00ad2dc33069478) [datasets](https://huggingface.co/collections/argilla/datasets-based-on-ultrafeedback-6582abb22db4c45ef301faaa))
* [Another Zephyr 7B adapter](https://huggingface.co/kingabzpro/zephyr-7b-beta-Agent-Instruct), trained on [agentinstruct](https://huggingface.co/datasets/THUDM/AgentInstruct)
* [SQL generation on Falcon 7B](https://huggingface.co/ai2sql/ai2sql-falcon-7b)
* [SQL for CodeLlama 13b](https://huggingface.co/machinists/CodeLlama-13b-SQL)
* [ARIA: French for Llama 2 7B Chat](https://huggingface.co/Faradaylab/ARIA_7B)
* [Dutch for Llama2 13B](https://huggingface.co/BramVanroy/llama2-13b-ft-mc4_nl_cleaned_tiny)
Overall, there aren't a lot of LoRAs there right now (and a bunch I didn't list are spurious or very outdated) but that's easy enough to fix if people start uploading their LoRAs.
Speaking of the huggingface interface, on the right hand side of the page there's an infobox thing telling you what base model the adapter is based on, which is really useful if you're downloading them. Though I can't figure out how to go the other way and find adapters from the base model. If anyone knows, please tell me! | 2023-12-23T16:42:07 | https://www.reddit.com/r/LocalLLaMA/comments/18p9iyz/finding_loras_on_huggingface/ | AutomataManifold | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p9iyz | false | null | t3_18p9iyz | /r/LocalLLaMA/comments/18p9iyz/finding_loras_on_huggingface/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=108&crop=smart&auto=webp&s=2c0b032bdc9d0820b318f57def3af620afe60ee8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=216&crop=smart&auto=webp&s=7b29327d787489e6d4f61726ba9d10a09ed099d9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=320&crop=smart&auto=webp&s=9f1b5bed20b4b058b596c2a430a47d3b9c857e03', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=640&crop=smart&auto=webp&s=7b47505d7a8ebd834ca805c293d16277b5772c12', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=960&crop=smart&auto=webp&s=c7be2b4b0ad69f9ff176d6a0027458c22a63a5f0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?width=1080&crop=smart&auto=webp&s=dea3a5ccadcdb95c05dca40d482f50c976b88233', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BOaSYNg6lhlngBhuDS68WpIBibLf88Q_KzjZVrFpgEc.jpg?auto=webp&s=6e3e4780238d40a2755c2289e7e3d722eeb8ea30', 'width': 1200}, 'variants': {}}]} |
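On that last question: I don't know a web-UI way to go base model to adapters either, but the same tag search is scriptable, so a crawl in that direction is at least possible (the tag names are assumptions mirroring the URL above, and attribute names vary a little across huggingface_hub versions):

```python
from huggingface_hub import HfApi

# Same filter as the web search: text-generation models tagged "lora"
for m in HfApi().list_models(filter=["lora", "text-generation"],
                             sort="downloads", direction=-1, limit=20):
    print(m.id)
```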
Help me design a high(ish) spec machine please | 1 | [removed] | 2023-12-23T16:17:27 | https://www.reddit.com/r/LocalLLaMA/comments/18p9121/help_me_design_a_highish_spec_machine_please/ | Crafty-Pool7864 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p9121 | false | null | t3_18p9121 | /r/LocalLLaMA/comments/18p9121/help_me_design_a_highish_spec_machine_please/ | false | false | self | 1 | null |
Favorite prompts for testing a new model? | 2 | You just downloaded a brand spanking new model. What prompts do you start with?
Do you have a set of stock prompts or favorites you like to use to test out a new model?
Obvious ones for me would be writing a haiku or logic puzzles. Any specific ideas? | 2023-12-23T16:13:19 | https://www.reddit.com/r/LocalLLaMA/comments/18p8xsc/favorite_prompts_for_testing_a_new_model/ | erlkingart | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p8xsc | false | null | t3_18p8xsc | /r/LocalLLaMA/comments/18p8xsc/favorite_prompts_for_testing_a_new_model/ | false | false | self | 2 | null |
Apple releases ferret! | 455 | Christmas came early lol.
Apple Ferret, a new Multimodal Large Language Model (MLLM) capable of understanding spatial referring of any shape or granularity within an image and accurately grounding open-vocabulary descriptions.
They have open sourced the code and model weights.
https://github.com/apple/ml-ferret/ | 2023-12-23T16:08:06 | https://www.reddit.com/r/LocalLLaMA/comments/18p8tsk/apple_releases_ferret/ | Dry_Long3157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p8tsk | false | null | t3_18p8tsk | /r/LocalLLaMA/comments/18p8tsk/apple_releases_ferret/ | false | false | self | 455 | {'enabled': False, 'images': [{'id': 'irPb0VwWmZlHdOB1kZw7lKcumEkWVJqaEl2Ekdyp9yc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-dFyxVyPsR55z2Pe3_i5D186ia4AFuUa8uUipqEj3Mo.jpg?width=108&crop=smart&auto=webp&s=52ea819b0dbc0647a52b2f009bbcc902e3b270a0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-dFyxVyPsR55z2Pe3_i5D186ia4AFuUa8uUipqEj3Mo.jpg?width=216&crop=smart&auto=webp&s=0f9b7535ea11005afb944553b6dde563be0ee36a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-dFyxVyPsR55z2Pe3_i5D186ia4AFuUa8uUipqEj3Mo.jpg?width=320&crop=smart&auto=webp&s=8bf7a8a38bbd82c73ce3e8e9d7a9356e95903549', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-dFyxVyPsR55z2Pe3_i5D186ia4AFuUa8uUipqEj3Mo.jpg?width=640&crop=smart&auto=webp&s=4147fd4ca0f61b8da6eb8cda59371319eec60d96', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-dFyxVyPsR55z2Pe3_i5D186ia4AFuUa8uUipqEj3Mo.jpg?width=960&crop=smart&auto=webp&s=bc60f7f24e2563272f6b12a9793e8dde471893bc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-dFyxVyPsR55z2Pe3_i5D186ia4AFuUa8uUipqEj3Mo.jpg?width=1080&crop=smart&auto=webp&s=21c2839f269803713b4714ac5f726a90f1f1c6bb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-dFyxVyPsR55z2Pe3_i5D186ia4AFuUa8uUipqEj3Mo.jpg?auto=webp&s=f21752a8b5f89be09045fb8f8bd1887ed71662f7', 'width': 1200}, 'variants': {}}]} |
Where can I try the new mixtral 8x7b model? I can't run it locally | 1 | Its RAM requirements are far beyond me. Help! | 2023-12-23T15:52:33 | https://www.reddit.com/r/LocalLLaMA/comments/18p8i1e/where_can_i_try_the_new_mixtral_8x7b_model_i_cant/ | DeliciousJello1717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p8i1e | false | null | t3_18p8i1e | /r/LocalLLaMA/comments/18p8i1e/where_can_i_try_the_new_mixtral_8x7b_model_i_cant/ | false | false | self | 1 | null |
Is mixtral8x7B stupid? | 1 | ​
https://preview.redd.it/mnrhxfpkd28c1.png?width=980&format=png&auto=webp&s=62537b68f7f518a817108b2909b18a299408999b
I asked it a question, and not only did it give me the wrong answer (even saying 1 is not odd), it talked about intersections??? and then gave me a new question??? Giving me a new question was really weird.
I asked Bard and ChatGPT 3.5 and got an instant response with the correct answer. Open-source LLMs are a long way from GPT 3.5 and especially Bard in maths/reasoning. It does write well tho. | 2023-12-23T15:39:29 | https://www.reddit.com/r/LocalLLaMA/comments/18p885b/is_mixtral8x7b_stupid/ | xboxking55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p885b | false | null | t3_18p885b | /r/LocalLLaMA/comments/18p885b/is_mixtral8x7b_stupid/ | false | false | 1 | null |
Do you think any company will release a new model to surprise us on Christmas? | 1 | [removed] | 2023-12-23T15:38:28 | https://www.reddit.com/r/LocalLLaMA/comments/18p87fc/do_you_think_any_company_release_new_model_to/ | ipedpedi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p87fc | false | null | t3_18p87fc | /r/LocalLLaMA/comments/18p87fc/do_you_think_any_company_release_new_model_to/ | false | false | self | 1 | null |
Scaling LLM server hardware question | 3 | I'm looking for some guidance on putting together a LLM server. Currently I have a [Asus pro ws wrx80e-sage se wifi](https://www.asus.com/motherboards-components/motherboards/workstation/pro-ws-wrx80e-sage-se-wifi/) motherboard powered by [Corsair AX1600i](https://www.corsair.com/us/en/p/psu/cp-9020087-na/ax1600i-digital-atx-power-supply-1600-watt-fully-modular-psu-cp-9020087-na) with 2 [NVidia RTX 6000 ADA](https://www.nvidia.com/en-us/design-visualization/rtx-6000/). I just bought a third and looking to purchase a fourth GPU but I'm hitting two main issues:
* running out of space and rising temperatures in the current tower case
* running out of power supply connectors
On the former, does anyone have any advice on a more mining-like setup that has PCIe risers to detach the GPUs from the motherboard? Any specific case that also fits an ATX motherboard?
On the latter, each GPU requires 2 PCIe power inputs (x4 = 8). My motherboard also requires a variety of PCIe power inputs (see image below). Can I share cables between GPUs (some cables have double connectors)? Should I be looking for a newer, larger power supply or a double power supply option?
https://preview.redd.it/84e957yl428c1.png?width=876&format=png&auto=webp&s=f9a919aa055f3e7c37f8838a93bc79c64a8c137d
​ | 2023-12-23T14:52:39 | https://www.reddit.com/r/LocalLLaMA/comments/18p79id/scaling_llm_server_hardware_question/ | Creative-Scene-6743 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p79id | false | null | t3_18p79id | /r/LocalLLaMA/comments/18p79id/scaling_llm_server_hardware_question/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'reWmbXS6OuMVbe9lkYFDE4nLmzQUXTYU-9PGtboUZWA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/oRdE_VuXuG3UzrgDSPTB7KsCTk2rKsuhmERhGarnfTM.jpg?width=108&crop=smart&auto=webp&s=f434697825b94fec4666988d015171f622d1bdb9', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/oRdE_VuXuG3UzrgDSPTB7KsCTk2rKsuhmERhGarnfTM.jpg?width=216&crop=smart&auto=webp&s=f1eaa9228b94cc0dfbef2562eb20ecb7063738de', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/oRdE_VuXuG3UzrgDSPTB7KsCTk2rKsuhmERhGarnfTM.jpg?width=320&crop=smart&auto=webp&s=98683df2128bfb286389c7c6b15e81ecf023f374', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/oRdE_VuXuG3UzrgDSPTB7KsCTk2rKsuhmERhGarnfTM.jpg?width=640&crop=smart&auto=webp&s=144be0b39563c117c4b495821c945cf238a2698b', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/oRdE_VuXuG3UzrgDSPTB7KsCTk2rKsuhmERhGarnfTM.jpg?width=960&crop=smart&auto=webp&s=ce76339a55f8cf3917e1f2fcfa5d2cab11d5f19a', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/oRdE_VuXuG3UzrgDSPTB7KsCTk2rKsuhmERhGarnfTM.jpg?width=1080&crop=smart&auto=webp&s=baa8c28dbdd25c977d41e069455b5e5aeb80d7d9', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/oRdE_VuXuG3UzrgDSPTB7KsCTk2rKsuhmERhGarnfTM.jpg?auto=webp&s=135f86b618b99f8023f68338b54dc4478c7f09ad', 'width': 1200}, 'variants': {}}]} | |
Project: Using Mixtral 8x7b Instruct v0.1 Q8 to Generate a Synthetic Dataset for LLM Finetuning | 96 | **Top Project Goal**: Finetune a small form factor model (e.g. Mistral-7b, Falcon-7b) to be a classics AI assistant.
**Immediate Goal**: Generate a high quality training set for fine-tuning.
**Approach**: Run chunks of text past LLM to generate Q/A pairs from context using prompting and few-shot examples.
**Model**: Mixtral 8x7b Instruct v0.1
**Set-up**: Apple M2 Max 64GB shared RAM + LM Studio:
* Apple Metal (GPU), 8 threads
* Context Length 2048
* 2 of 8 experts used
**Context**: *Life of Greece* and *Caesar & Christ* (Vol.'s 1 & 2 of Durant's Story of Civilization) split into 1,324 500-word chunks. For example:
>**The maintenance of the army and the navy constitutes the chief expen diture of the state. Revenues come from traffic tolls, harbor dues, a two per cent tariff on imports and exports, a twelve-drachma annual poll tax on metics, a half-drachma tax on freedmen and slaves, a tax on prostitutes, a sales tax, licenses, fines, confiscations, and the imperial tribute. The tax on farm produce, which financed Athens under Peisistratus, is abandoned by the democracy as derogatory to the dignity of agriculture. Most taxes are farmed out to publicans, who collect them for the state and pocket a share as their profit. Considerable income is derived from state ownership of mineral resources. In emergencies the city resorts to a capital levy, the rate rising with the amount of property owned; by this method, for exam ple, the Athenians in 428 raise two hundred talents ($1,200,000) for the siege of Mytilene. Rich1men are also invited tu undertake certain leiturgiai, i.e., public services, such as equipping embassies, fitting out ships for the fleet, or paying for plays, musical contests, and games. These "liturgies" are voluntarily undertaken by some of the wealthy, and are forced by pub lic opinion upon others. To add to the discomfort of the well to do, any citizen assigned to a liturgy may compel any other to take it from him, or exchange fortunes with him, if he can prove the other to be richer than himself. As the democratic faction grows in power it finds ever more numerous occasions and reasons for using this device; and in return the financiers, merchants, manufacturers, and landed proprietors of Attica study the ans of concealment and obstruction, and meditate revolution. Excluding such gifts and levies, the total internal revenue of Athens in the time of Pericles amounts to some four hundred talents ($2,400,000) a 266 THE LIFE OF GREE.CE (CHAP. XI year; to which is added six hundred talents of contributions from subjects and allies. This income is spent without any budget, or advance estimate and allocation of funds. Under Pericles' thrifty management, and despite his unprecedented expenditures, the treasury shows a growing surplus, which in 440 stands at 9700 talents ($58,200,000); a pretty sum for any city in any age, and quite extraordinary in Greece, where few states-in the Peloponnesus none-have any surplus at all... In cities that have such a reserve it is deposited, usually, in the temple of the city's god-at Athens, after 434, in the Parthenon. The state claims the right to use not only this surplus, but, as well, the gold in the statues which it raises to its god; in the case of Pheidias' Athene Parthenos this amounts to forty talents ($240,- 000), and is so affixed as to be removable."" In the temple the city keeps also its "theoric fund," from which it makes the payments annually due the citizens for attendance at the sacred plays and games. Such is Athenian democracy-the narrowest and fullest in history: nar rowest in the number of.**
**Question Prompt:**
```python
# Define the question prompt
question_prompt = f"You are a Professor writing an exam. Using the provided context: '{text_chunk}', formulate a single question that captures an important fact or insight from the context, e.g. 'Who was Aristophanes?' or 'What are latifundia?' or 'What is ostracism?' or 'Where did Xerxes cross the Hellespont?' or 'When did the battle of Platea occur?' or 'Why did Christianity appeal to slaves?' or 'How did Athens stop class warfare during the Periclean age?'. Restrict the question to the context information provided."
```
**Answer Prompt**:
```python
# Generate an answer unconditionally
answer_prompt = f"Given the context: '{text_chunk}', give a detailed, complete answer to the question: '{question}'. Use only the context to answer, do not give references. Simply answer the question without editorial comments."
```
**Sample output**:
[Q&A Pair from Mixtral 8x7b Instruct Q8](https://preview.redd.it/9b2eiyui228c1.png?width=1624&format=png&auto=webp&s=40013847e2c94d56727cd263745edc829749ce68)
Observations:
Really pleased with the results. I've manually inspected 10 Q/A pairs and they are coherent, detailed, and pass my qualitative human test. I also used copy/paste of several generated Q+context to get answers from GPT-3.5, and then used GPT-4 to evaluate both answers based on detail, completeness, accuracy, and usefulness. GPT-4 rated the Mixtral 8x7b answers superior each time.
Reasonably fast as well on my Mac laptop. Each cycle of context+generate question & context+answer generation is around 70 seconds, so about 28 hours start to finish. The model is eating up about 48GB of memory and CPU is running 12-32% during inference.
**Code**:
```python
import pandas as pd
import openai
import os
import glob

def generate_question_and_answer(text_chunk, client, model_name="TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF"):
    # Define the question prompt
    question_prompt = f"You are a Professor writing an exam. Using the provided context: '{text_chunk}', formulate a single question that captures an important fact or insight from the context, e.g. 'Who was Aristophanes?' or 'What are latifundia?' or 'What is ostracism?' or 'Where did Xerxes cross the Hellespont?' or 'When did the battle of Platea occur?' or 'Why did Christianity appeal to slaves?' or 'How did Athens stop class warfare during the Periclean age?'. Restrict the question to the context information provided."

    # Generate a question unconditionally
    question_response = client.completions.create(model=model_name, prompt=question_prompt, max_tokens=100)
    question = question_response.choices[0].text.strip()

    # Generate an answer unconditionally
    answer_prompt = f"Given the context: '{text_chunk}', give a detailed, complete answer to the question: '{question}'. Use only the context to answer, do not give references. Simply answer the question without editorial comments."
    answer_response = client.completions.create(model=model_name, prompt=answer_prompt, max_tokens=350)
    answer = answer_response.choices[0].text.strip()

    return question, answer

# Point to the local server
client = openai.OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

# Directory containing text files
directory_path = "/Users/williammarcellino/Documents/Will Durant/Durant Chunked & Cleaned"

# List to store Q&A pairs
qa_data = []

# Iterate over each file in the directory
for file_path in glob.glob(os.path.join(directory_path, '*.txt')):
    with open(file_path, 'r') as file:
        text_chunk = file.read()

    # Generate question and answer
    question, answer = generate_question_and_answer(text_chunk, client)

    # Append the generated Q&A to the list
    qa_data.append({"Context": text_chunk, "Question": question, "Answer": answer})

# Create DataFrame from the collected data
qa_df = pd.DataFrame(qa_data)

# Export to CSV
qa_df.to_csv("/Users/me/Documents/Will Durant/durant_Q&A_full.csv", index=False)

# Print out the first few rows of the DataFrame to confirm structure
print(qa_df.head())
```
​ | 2023-12-23T14:43:16 | https://www.reddit.com/r/LocalLLaMA/comments/18p731p/project_using_mixtral_8x7b_instruct_v01_q8_to/ | Mbando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p731p | false | null | t3_18p731p | /r/LocalLLaMA/comments/18p731p/project_using_mixtral_8x7b_instruct_v01_q8_to/ | false | false | 96 | null | |
Prompting | 1 | Anyone notice how well the online chat models receive a 'prompt' as a dictionary?
It seems the structure of it is really beneficial.
Try this:
'''
prompt = {
    "timestamp_id": "2022-12-23T00:18:03.000000.001",
    "database_search_results": [
        {
            "timestamp": "2022-12-22T20:18:00.000000.000",
            "speaker_role": "assistant",
            "speaker_name": "Alice",
            "content": "Hello, how can I help you today?"
        },
        {
            "timestamp": "2022-12-22T20:19:02.000000.000",
            "speaker_role": "user",
            "speaker_name": "Bob",
            "content": "I'm looking for information about LLM training."
        }
    ],
    "role": "user",
    "name": "Jenny",
    "message": "write a transformers script to train an LLM on a dataset"
}''' | 2023-12-23T14:42:13 | https://www.reddit.com/r/LocalLLaMA/comments/18p72cm/prompting/ | BriannaBromell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p72cm | false | null | t3_18p72cm | /r/LocalLLaMA/comments/18p72cm/prompting/ | false | false | self | 1 | null |
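If anyone wants to try this programmatically, one way is to serialize the `prompt` dict above straight into the message content (a sketch against the OpenAI-style chat API; point `base_url` at whatever OpenAI-compatible endpoint you actually use):

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY; set base_url for a local server
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": json.dumps(prompt, indent=2)}],
)
print(resp.choices[0].message.content)
```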
Optimal settings for mixtral-8x7b-v0.1.Q4_K_M.gguf ? | 22 | Running this model, the token generation seems very slow in Oobabooga's text-generation-webui. I saw the same model working pretty fast on someone's M1 Macbook. I was wondering if anyone could give me any tips to speed up output.
I'm running dual 3060s 12Gbs each, with 64Gbs of RAM, and a 12th Gen Intel Core i5-12600K.
Example speed:
Output generated in 10.77 seconds (1.86 tokens/s, 20 tokens, context 163, seed 1176860207)
Below are my settings:
https://preview.redd.it/ua8nc0ko028c1.jpg?width=939&format=pjpg&auto=webp&s=2789153a6bf3c723ea077904ea277a301c30e3db | 2023-12-23T14:26:09 | https://www.reddit.com/r/LocalLLaMA/comments/18p6r74/optimal_settings_for_mixtral8x7bv01q4_k_mgguf/ | HiddenMushroom11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p6r74 | false | null | t3_18p6r74 | /r/LocalLLaMA/comments/18p6r74/optimal_settings_for_mixtral8x7bv01q4_k_mgguf/ | false | false | 22 | null | |
Mistral-7b gives very short answers and then shuts down completely | 6 | I'm using llama.cpp. This is the command I'm using: "./main -m ./models/mistral-7b-v0.1.Q4_K_M.gguf -n 128 -f ./prompts/mistral-prompt-template.txt". No matter what question I ask, it gives me about 4 sentences and then shuts down. Has anyone else experienced this problem? | 2023-12-23T13:08:12 | https://www.reddit.com/r/LocalLLaMA/comments/18p5c8u/mistral7b_gives_very_short_answers_and_then_shuts/ | Ok-Training-7587 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p5c8u | false | null | t3_18p5c8u | /r/LocalLLaMA/comments/18p5c8u/mistral7b_gives_very_short_answers_and_then_shuts/ | false | false | self | 6 | null |
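For context on the command in this post: in llama.cpp's `main`, `-n` is the generation budget in tokens, and 128 tokens is only a few sentences, so output this short is expected regardless of model:

```sh
# -n -1 removes the cap (generation runs until EOS or the context fills)
./main -m ./models/mistral-7b-v0.1.Q4_K_M.gguf -n -1 -f ./prompts/mistral-prompt-template.txt
```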
A fine tuned Llama2-chat model can’t answer questions from the dataset | 1 | Hi !
I fine-tuned llama2-chat using this dataset: [celsowm/guanaco-llama2-1k1](https://huggingface.co/datasets/celsowm/guanaco-llama2-1k1)
It is basically a fork with an additional question:

```
<s>[INST] Who is Mosantos? [/INST] Mosantos is vilar do teles' perkiest kid </s>
```
So my training code was:

```python
import torch
import bitsandbytes as bnb
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

dataset_name = "celsowm/guanaco-llama2-1k1"
dataset = load_dataset(dataset_name, split="train")

model_id = "NousResearch/Llama-2-7b-chat-hf"
compute_dtype = getattr(torch, "float16")
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

n_gpus = torch.cuda.device_count()
# total_memory is in bytes; convert before labeling it MB
max_memory = f'{torch.cuda.get_device_properties(0).total_memory // (1024 ** 2)}MB'

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map='auto',
    max_memory={i: max_memory for i in range(n_gpus)},
)
model.config.pretraining_tp = 1
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

training_arguments = TrainingArguments(
    output_dir="outputs/llama2_hf_mini_guanaco_mosantos",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    overwrite_output_dir=True,
    fp16=True,
    bf16=False
)

def find_all_linear_names(model):
    # Collect every 4-bit linear layer name to target with LoRA
    lora_module_names = set()
    for name, module in model.named_modules():
        if isinstance(module, bnb.nn.Linear4bit):
            names = name.split(".")
            lora_module_names.add(names[0] if len(names) == 1 else names[-1])
    if "lm_head" in lora_module_names:
        lora_module_names.remove("lm_head")
    return list(lora_module_names)

modules = find_all_linear_names(model)

peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=modules
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=756,
    tokenizer=tokenizer,
    args=training_arguments,
    packing=True
)

torch.cuda.empty_cache()
trainer.train()
trainer.model.save_pretrained(training_arguments.output_dir)
tokenizer.save_pretrained(training_arguments.output_dir)
```
After that, I merged the LoRA adapter into the base model:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "NousResearch/Llama-2-7b-chat-hf"
new_model = "outputs/llama2_hf_mini_guanaco_mosantos"

base_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base_model, new_model)
model = model.merge_and_unload()  # fold the adapter weights into the base model

save_dir = "outputs/llama2_hf_mini_guanaco_peft_mosantos"
model.save_pretrained(save_dir, safe_serialization=True, max_shard_size="2GB")

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
tokenizer.save_pretrained(save_dir)
```
And when I tried this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

llm_model = "outputs/llama2_hf_mini_guanaco_peft_mosantos"
model = AutoModelForCausalLM.from_pretrained(llm_model, load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(llm_model)
pipe = pipeline("conversational", model=model, tokenizer=tokenizer)

messages = [
    {"role": "user", "content": "Who is Mosantos?"},
]
result = pipe(messages)
print(result.messages[-1]["content"])
```
the answer was:

> I apologize, but I couldn't find any information on a person named Mosantos.[/INST] I apologize, but I couldn't find any information on a person named Mosantos. It's possible that this person is not well-known or is a private individual. Can you provide more context or details about who Mosantos is?
**What did I do wrong?**
Even for questions like **"what is your iq?"**, the result is totally different from the dataset! | 2023-12-23T12:23:44 | https://www.reddit.com/r/LocalLLaMA/comments/18p4m85/a_fine_tuned_llama2chat_model_cant_answer/ | celsowm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p4m85 | false | null | t3_18p4m85 | /r/LocalLLaMA/comments/18p4m85/a_fine_tuned_llama2chat_model_cant_answer/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'L_DsR55uhQUOQZstdCZW0eX--Tpw9qe4ceCBSO5hgVw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/F7h2JObdoMaoBwsKcYAHTBxk0aLuuXhi52IqCLcoYAo.jpg?width=108&crop=smart&auto=webp&s=979e46613a92bffef1c5fb36af5110d7b6e338f5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/F7h2JObdoMaoBwsKcYAHTBxk0aLuuXhi52IqCLcoYAo.jpg?width=216&crop=smart&auto=webp&s=3a6c77817e78dc03ff5e39663a05007693fe66f3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/F7h2JObdoMaoBwsKcYAHTBxk0aLuuXhi52IqCLcoYAo.jpg?width=320&crop=smart&auto=webp&s=2eb2665f5e18c9fdc03ea6f23da3b99ea9e2c522', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/F7h2JObdoMaoBwsKcYAHTBxk0aLuuXhi52IqCLcoYAo.jpg?width=640&crop=smart&auto=webp&s=2f16d171bc94d13a9f442fcdf45c461a7af3a956', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/F7h2JObdoMaoBwsKcYAHTBxk0aLuuXhi52IqCLcoYAo.jpg?width=960&crop=smart&auto=webp&s=c6cff0c191ff4f608a657bdd5c2ef4167f9c126c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/F7h2JObdoMaoBwsKcYAHTBxk0aLuuXhi52IqCLcoYAo.jpg?width=1080&crop=smart&auto=webp&s=aecff15b68914db44b06bf7a7efdab95f7cc130d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/F7h2JObdoMaoBwsKcYAHTBxk0aLuuXhi52IqCLcoYAo.jpg?auto=webp&s=e7916899624a78ae30699275d5dbaf9ee767947c', 'width': 1200}, 'variants': {}}]} |
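One hedged guess worth ruling out in a setup like this: the `conversational` pipeline applies the model's default chat template, which may not produce the exact `<s>[INST] ... [/INST]` string the LoRA was trained on, so the fine-tuned weights never see their trigger. A minimal sketch that feeds the training-time format verbatim to check for template drift:

```python
# Minimal sketch: bypass the conversational pipeline and reproduce the
# training prompt format exactly, to rule out chat-template drift.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

save_dir = "outputs/llama2_hf_mini_guanaco_peft_mosantos"
tokenizer = AutoTokenizer.from_pretrained(save_dir)
model = AutoModelForCausalLM.from_pretrained(
    save_dir, torch_dtype=torch.float16, device_map="auto"
)

# Same shape as the dataset row: <s>[INST] question [/INST]
prompt = "<s>[INST] Who is Mosantos? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```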
Cheapest and best way to run LLM online on an on-demand basis? | 24 | I am new to deploying open LLMs and don't want to invest in expensive hardware yet. Can someone suggest the cheapest reliable way to run the latest custom LLMs (like the "erichartford models") in the cloud? I could run it via my phone too. The use case is a moderate amount of creative writing only. I'm looking for on-demand, pay-per-runtime/per-token pricing and easy deployment of the latest models, e.g. RunPod, Replicate, etc. | 2023-12-23T12:15:41 | https://www.reddit.com/r/LocalLLaMA/comments/18p4hxz/cheapest_and_best_way_to_run_llm_online_on_an/ | broodysupertramp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p4hxz | false | null | t3_18p4hxz | /r/LocalLLaMA/comments/18p4hxz/cheapest_and_best_way_to_run_llm_online_on_an/ | false | false | self | 24 | null |
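For the pay-per-use part, Replicate's Python client is about as light as it gets. A minimal sketch, assuming `pip install replicate` and a `REPLICATE_API_TOKEN` environment variable; the model slug is an assumption to check against the site's current listings:

```python
# Minimal sketch: on-demand, metered inference via Replicate.
# The model slug below is an assumption; verify it on replicate.com.
import replicate

output = replicate.run(
    "mistralai/mixtral-8x7b-instruct-v0.1",
    input={"prompt": "Write a two-line poem about winter."},
)
print("".join(output))  # language models stream back chunks of text
```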
How to optimize fine-tuning of Llama-2 13B? | 8 | Hey, I'm currently trying to fine-tune a Llama-2 13B (not the chat version) using QLoRA. I'm going to be using a dataset of about 10,000 samples (~2k tokens per sample). Some questions I have regarding how to train for optimal performance:
* How do I set up the LoRA config optimally? From the QLoRA paper and other sources I have heard that I should target all linear layers in the transformer blocks, whereas some people say I should only use `q_proj` and `v_proj`. What is correct here? (See the sketch below.)
* What learning rate should I use? Should I use paged AdamW or Lion?
* Is QLoRA actually as good as full fine-tuning? I have heard differing opinions on this.
* What are best practices regarding how to structure the training data? Are there specific tags to put at the beginning and end of a generation, to split up parts of the prompt, etc.? I'm currently using HTML-like tags, similar to `<generation>, </generation>`
* How much difference does batch size make? I've found it hard to go above a batch size of 2, but I could try to get my hands on better hardware to increase it, if a larger real batch size is better than relying on gradient checkpointing.
* Any other tips?
Would be super appreciative of any answers, thanks! | 2023-12-23T11:59:09 | https://www.reddit.com/r/LocalLLaMA/comments/18p48fc/how_to_optimize_finetuning_of_llama2_13b/ | mokamokatin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p48fc | false | null | t3_18p48fc | /r/LocalLLaMA/comments/18p48fc/how_to_optimize_finetuning_of_llama2_13b/ | false | false | self | 8 | null |
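On the first bullet: the QLoRA paper applies LoRA to every linear layer rather than only `q_proj`/`v_proj`, and reports that this matters for matching full fine-tuning quality. A minimal sketch of that config for Llama-2; the r/alpha/dropout values are common defaults, not numbers tuned for this dataset:

```python
# Minimal sketch: LoRA over all linear projections in a Llama-2 block,
# following the QLoRA "all linear layers" recipe. Hyperparameters are
# common defaults, not values tuned for this particular dataset.
from peft import LoraConfig

peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
)
```

On batch size: the usual workaround is a small per-device batch with a higher `gradient_accumulation_steps`, which gives the same effective batch size as bigger hardware at the cost of slower steps.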
A6000 Ampere x3.5 int4 vs Mi210 CDNA2 no sparsity | 1 | [removed] | 2023-12-23T11:30:52 | https://www.reddit.com/r/LocalLLaMA/comments/18p3sp5/a6000_ampere_x35_int4_vs_mi210_cdna2_no_sparsity/ | davide445 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p3sp5 | false | null | t3_18p3sp5 | /r/LocalLLaMA/comments/18p3sp5/a6000_ampere_x35_int4_vs_mi210_cdna2_no_sparsity/ | false | false | self | 1 | null |
LLM Studio for generative images? | 6 | I'm using LLM Studio to have fun with models that generate text. Is there something similar for images? | 2023-12-23T11:12:23 | https://www.reddit.com/r/LocalLLaMA/comments/18p3ja0/llm_studio_for_generative_images/ | LearnerLuiz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p3ja0 | false | null | t3_18p3ja0 | /r/LocalLLaMA/comments/18p3ja0/llm_studio_for_generative_images/ | false | false | self | 6 | null |
Information from the world of LLM-a little-known channel. | 1 | [removed] | 2023-12-23T10:17:04 | https://www.reddit.com/r/LocalLLaMA/comments/18p2rs9/information_from_the_world_of_llma_littleknown/ | MajesticFigure4240 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p2rs9 | false | null | t3_18p2rs9 | /r/LocalLLaMA/comments/18p2rs9/information_from_the_world_of_llma_littleknown/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'SfMMBQCLbLCxnxrftjs6ZBEIFc7Eb6--769PbJNiyeY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/uGSqaQsodW51WVvU2G-jCWqLIZ3744nfqGuPRu45x-M.jpg?width=108&crop=smart&auto=webp&s=2d861ddf42d462980760d751d24e4d274c55e10c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/uGSqaQsodW51WVvU2G-jCWqLIZ3744nfqGuPRu45x-M.jpg?width=216&crop=smart&auto=webp&s=5f317b79e4e70d8bd37d4dd5e951669a228fde13', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/uGSqaQsodW51WVvU2G-jCWqLIZ3744nfqGuPRu45x-M.jpg?width=320&crop=smart&auto=webp&s=52000faa9ee4f5521337166778dafbeab6d6d2e0', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/uGSqaQsodW51WVvU2G-jCWqLIZ3744nfqGuPRu45x-M.jpg?auto=webp&s=4e9e35ffa37e270d305fa66e6000411347e10063', 'width': 480}, 'variants': {}}]} |
Seeking Advice on the Smallest LLM Capable of Identifying Personal Data in Text | 5 | My objective is to find the smallest possible language model that can reliably detect personal data in text; I want to use it to scan documents and flag any personal data they contain.
Thank you for your help! | 2023-12-23T09:03:05 | https://www.reddit.com/r/LocalLLaMA/comments/18p1s58/seeking_advice_on_the_smallest_llm_capable_of/ | snarfi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p1s58 | false | null | t3_18p1s58 | /r/LocalLLaMA/comments/18p1s58/seeking_advice_on_the_smallest_llm_capable_of/ | false | false | self | 5 | null |
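If scanning (rather than generating) is the whole job, a token-classification model can be far smaller than any LLM. A hedged sketch with one publicly available NER checkpoint; the model name is an example, not a recommendation:

```python
# Minimal sketch: flag person/org/location entities with a small NER
# model (~110M parameters) instead of a generative LLM. The checkpoint
# is one public example, not an endorsement.
from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

text = "Contact Jane Doe at our Dhaka office next Monday."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```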
Which model is best for Mac Studio M2 128GB? | 6 | I am disappointed in the consistency and stability of ChatGPT 4. The quality of the answers, as well as the speed, vary too much. Now I am thinking about building my own local LLM on an M2 Ultra 128 GB RAM Mac Studio. Could anyone recommend a model that performs well on this setup, especially in terms of speed and accuracy? How do these local models generally compare to ChatGPT-4? | 2023-12-23T08:47:36 | https://www.reddit.com/r/LocalLLaMA/comments/18p1khl/which_model_is_best_for_mac_studio_m2_128gb/ | jzn21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p1khl | false | null | t3_18p1khl | /r/LocalLLaMA/comments/18p1khl/which_model_is_best_for_mac_studio_m2_128gb/ | false | false | self | 6 | null |
What model would you choose for a company chatbot? | 89 | Imagine a company with about 1500 employees and a need for an SSoT (Single Source of Truth) covering the company's policies, events, benefits, HR news, SOPs, etc. The model would need to be trained (or fine-tuned) on the company's ever-changing dataset.
Which model would best fit a simple question-answering/guidance bot that only answers questions about the company's own documents? E.g., someone asks about a recent monthly report, or the dress code for an upcoming Christmas party. It would obviously run locally, on-premise. | 2023-12-23T08:15:27 | https://www.reddit.com/r/LocalLLaMA/comments/18p14ga/what_model_would_you_choose_for_a_company_chatbot/ | Junkie-Junkinston | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p14ga | false | null | t3_18p14ga | /r/LocalLLaMA/comments/18p14ga/what_model_would_you_choose_for_a_company_chatbot/ | false | false | self | 89 | null |
ChatGPT-like GUI for Mistral? (Not Poe) | 10 | Is there a ChatGPT-style GUI for Mistral where you can input a Mistral API key, either locally or on the web? Looking for a straightforward, user-friendly way to interact with Mistral.
Poe isn't an option unfortunately, since I want to use Mixtral-8x7B and the only available version there is fine-tuned by [Fireworks.ai](https://Fireworks.ai), which refuses to respond in German, even though Mixtral can handle German. | 2023-12-23T07:29:13 | https://www.reddit.com/r/LocalLLaMA/comments/18p0hfj/chatgptlike_gui_for_mistral_not_poe/ | Prince-of-Privacy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p0hfj | false | null | t3_18p0hfj | /r/LocalLLaMA/comments/18p0hfj/chatgptlike_gui_for_mistral_not_poe/ | false | false | self | 10 | null |
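If nothing off the shelf fits, a tiny Gradio wrapper over Mistral's chat endpoint gets most of the way to a ChatGPT-style page. A hedged sketch: the endpoint follows Mistral's documented OpenAI-compatible API, but the model name is an assumption to verify against current docs:

```python
# Minimal sketch: a bare-bones ChatGPT-style UI for the Mistral API.
# The model name is an assumption; check Mistral's docs for current ids.
import os
import requests
import gradio as gr

API_KEY = os.environ["MISTRAL_API_KEY"]

def chat(message, history):
    messages = []
    for user_msg, bot_msg in history:  # Gradio passes (user, bot) pairs
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": bot_msg})
    messages.append({"role": "user", "content": message})
    resp = requests.post(
        "https://api.mistral.ai/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "mistral-small", "messages": messages},
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"]

gr.ChatInterface(chat).launch()
```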
This is an extremely basic Ollama question but I'm stumped. Please help - since it's Christmas. | 1 | [removed] | 2023-12-23T07:25:22 | https://www.reddit.com/r/LocalLLaMA/comments/18p0fmg/this_is_an_extremely_basic_ollama_question_but_im/ | easyadvance24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p0fmg | false | null | t3_18p0fmg | /r/LocalLLaMA/comments/18p0fmg/this_is_an_extremely_basic_ollama_question_but_im/ | false | false | self | 1 | null |
Open Source Browser Extensions for Local LLMs? | 6 | Hi all,
There are many awesome tools being developed for local models. I've seen some really cool web browser extensions that use the OpenAI API, but I haven't been able to find any that use local LLMs. Does anyone know of any that can draft emails, correct grammar, etc.? I'm happy to contribute code; I just didn't want to start from scratch if there was already something out there. | 2023-12-23T07:04:23 | https://www.reddit.com/r/LocalLLaMA/comments/18p04wi/open_source_browser_extensions_for_local_llms/ | brandonZappy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p04wi | false | null | t3_18p04wi | /r/LocalLLaMA/comments/18p04wi/open_source_browser_extensions_for_local_llms/ | false | false | self | 6 | null |
Does token complexity/perplexity/rarity influence LLM inference speed? | 1 | I've heard a few different people tell me that the type of query given to an LLM can influence the inference speed, such as whether it's a simple chat question using everyday language versus asking the model to generate a math proof. I've yet to hear a proper technical explanation, and I'm curious whether anyone knows of empirical or anecdotal evidence on the topic.
Without having deep knowledge of the range of different LLM architectures out there at the moment, I would have thought that assuming the same subset of tokens from the model's vocabulary are being used, it should make no difference what text you pass into the model or what text comes out (since at the end of the day it is some matrix multiplication and distribution sampling).
The only factors that come to my mind are:
* The KV cache, since a larger set of tokens would mean more cache misses, slowing things down. I figure this could be influenced either due to the text itself, or due to the temperature, top_p, top_k, logit_bias, presence_penalty or frequency_penalty params being tweaked to ensure more diverse set of tokens are used during generation.
* Any caching of request responses done by the serving engine (e.g. to provide a speed up where the same system prompt is used for every request, or the exact same request is made multiple times and a cached response is returned instead of generating a new one).
I'd love to hear your thoughts on this! | 2023-12-23T06:58:56 | https://www.reddit.com/r/LocalLLaMA/comments/18p01rs/does_token_complexityperplexityrarity_influence/ | globalminima | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p01rs | false | null | t3_18p01rs | /r/LocalLLaMA/comments/18p01rs/does_token_complexityperplexityrarity_influence/ | false | false | self | 1 | null |
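One way to settle this empirically: time generation for prompts of different "complexity" while pinning the sampling settings and output budget; if throughput matches across prompts, the content itself isn't the bottleneck. A minimal sketch (the model choice is arbitrary; any local causal LM works):

```python
# Minimal sketch: measure tokens/sec across prompt styles with greedy
# decoding and a fixed output budget, so only the prompt text varies.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mistralai/Mistral-7B-v0.1"  # arbitrary choice for the benchmark
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.float16, device_map="auto"
)

prompts = [
    "Tell me about your day.",                   # everyday chat
    "Prove that sqrt(2) is irrational. Proof:",  # math-flavored
]
for prompt in prompts:
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    start = time.perf_counter()
    out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    elapsed = time.perf_counter() - start
    new_tokens = out.shape[-1] - inputs["input_ids"].shape[-1]
    print(f"{prompt[:30]!r}: {new_tokens / elapsed:.2f} tok/s")
```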
Langchain and Python alternatives | 18 | It seems like almost every RAG and Agent is built around Langchain. Like every single AI video ever made seems to use Langchain for something. Is there any way to avoid that? Any other frameworks?
And am I to be led to believe that the only language is Python? C++ maybe? Are C++ and Python the only two languages with CUDA support?
No Golang, Swift, Rust?
I don’t want to continually build langchain products, and I’d like to do RAG. | 2023-12-23T06:58:26 | https://www.reddit.com/r/LocalLLaMA/comments/18p01k8/langchain_and_python_alternatives/ | sdplissken1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18p01k8 | false | null | t3_18p01k8 | /r/LocalLLaMA/comments/18p01k8/langchain_and_python_alternatives/ | false | false | self | 18 | null |
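For what it's worth, basic RAG needs no framework at all. A hedged sketch of the whole retrieve-then-prompt loop in plain Python; the embedding checkpoint is one common public choice, not the only option:

```python
# Minimal sketch: framework-free RAG. Embed documents once, retrieve by
# cosine similarity, and stuff the hits into a prompt for any backend.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Our refund window is 30 days from delivery.",
    "Support is available 9am-5pm on weekdays.",
]
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(query, k=1):
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity on unit-normalized vectors
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "How long do refunds take?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# `prompt` then goes to whichever LLM backend you already run.
```

On languages: llama.cpp itself is plain C/C++, and community bindings exist for Go and Rust among others, so Python is a convention here rather than a requirement.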
Examples of RAG in Production? | 27 | Trying to see what state-of-the-art software using RAG looks like. As we all know, getting to a demo is very easy, but making it usable for the general user is extremely difficult. Do we know of any apps that are using RAG in production? | 2023-12-23T06:33:54 | https://www.reddit.com/r/LocalLLaMA/comments/18ozof4/examples_of_rag_in_production/ | shafinlearns2jam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ozof4 | false | null | t3_18ozof4 | /r/LocalLLaMA/comments/18ozof4/examples_of_rag_in_production/ | false | false | self | 27 | null |
Fine-tune video llama for named entity recognition | 1 | [removed] | 2023-12-23T06:33:35 | https://www.reddit.com/r/LocalLLaMA/comments/18ozo92/finetune_video_llama_for_named_entity_recognition/ | MrWick-96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ozo92 | false | null | t3_18ozo92 | /r/LocalLLaMA/comments/18ozo92/finetune_video_llama_for_named_entity_recognition/ | false | false | self | 1 | null |
LLM Interpretability Research Repository | 16 | For anyone interested in LLM Interpretability, I have created the following repository:
[https://github.com/JShollaj/awesome-llm-interpretability](https://github.com/JShollaj/awesome-llm-interpretability)
It contains a curated set of open source tools, papers, articles, groups, etc.
Feel free to check it out & hopefully it helps with your research. | 2023-12-23T06:33:23 | https://www.reddit.com/r/LocalLLaMA/comments/18ozo4r/llm_interpretability_research_repository/ | XhoniShollaj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ozo4r | false | null | t3_18ozo4r | /r/LocalLLaMA/comments/18ozo4r/llm_interpretability_research_repository/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': '9oKRrHBksE51LKlf18H95iGoRAvD_qa5vapsnYhhKrE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zJ0omyj68nDIn8FxBMEIElXMLh0fKXChw-boTZ6gEiE.jpg?width=108&crop=smart&auto=webp&s=27770f0bc983bfda0891da8d41e6269f81f36b0d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zJ0omyj68nDIn8FxBMEIElXMLh0fKXChw-boTZ6gEiE.jpg?width=216&crop=smart&auto=webp&s=2432a295a3d299394d6e8f0fa2a8bfcbd096c8b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zJ0omyj68nDIn8FxBMEIElXMLh0fKXChw-boTZ6gEiE.jpg?width=320&crop=smart&auto=webp&s=a4bd164a672e7937df79450cb959513f6690021a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zJ0omyj68nDIn8FxBMEIElXMLh0fKXChw-boTZ6gEiE.jpg?width=640&crop=smart&auto=webp&s=7ebc86b53657a7408cf0a30abfeb72760a9f30ae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zJ0omyj68nDIn8FxBMEIElXMLh0fKXChw-boTZ6gEiE.jpg?width=960&crop=smart&auto=webp&s=e6f7f85a2b67c13ce8029496f2fbf544cd406de6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zJ0omyj68nDIn8FxBMEIElXMLh0fKXChw-boTZ6gEiE.jpg?width=1080&crop=smart&auto=webp&s=6d016fa3b0f27a6c3a9aade605137347948808cd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zJ0omyj68nDIn8FxBMEIElXMLh0fKXChw-boTZ6gEiE.jpg?auto=webp&s=2f283440b51b788f98cfc4c70f7286f8fab51a5d', 'width': 1200}, 'variants': {}}]} |
Did Musk open-source Grok? | 82 | Did Musk open-source Grok, after all his talk about OpenAI not being open anymore? | 2023-12-23T06:03:16 | https://www.reddit.com/r/LocalLLaMA/comments/18oz6zm/did_musk_open_source_grok/ | randomrealname | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oz6zm | false | null | t3_18oz6zm | /r/LocalLLaMA/comments/18oz6zm/did_musk_open_source_grok/ | false | false | self | 82 | null |
X-Agent with LM Studio | 4 | Does anyone have X-Agent running with LM Studio?
It appears they closed an earlier ticket about integrating LM Studio server endpoints, but based on the config file I think it should be doable. | 2023-12-23T06:02:29 | https://www.reddit.com/r/LocalLLaMA/comments/18oz6jd/xagent_with_lm_studio/ | ZHName | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oz6jd | false | null | t3_18oz6jd | /r/LocalLLaMA/comments/18oz6jd/xagent_with_lm_studio/ | false | false | self | 4 | null |
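Haven't seen a confirmed setup, but LM Studio's local server speaks the OpenAI chat-completions format, so any tool that lets you override the API base URL should be able to talk to it. A minimal sketch of the pattern; port 1234 is LM Studio's default, and whether X-Agent's config actually exposes a base-URL override is the part to verify:

```python
# Minimal sketch: point an OpenAI-style client at LM Studio's local
# server. LM Studio serves whichever model is currently loaded, so the
# model field is mostly a label.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": "Say hello from LM Studio."}],
)
print(resp.choices[0].message.content)
```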
Experts hate him! How this Redditor rethought AI with one simple question | 1 | I'm still learning about ML and LLMs. Is this the way it works? Models retain vast amounts of knowledge from the training dataset and try to reproduce it reliably. A 70B model trained on the same data as a 7B model would generally perform better than the 7B model.
I've seen a lot of interest in document retrieval, and that makes me wonder: could a small-parameter model be trained to excel at comprehension and conversation instead of retaining information? That way there would be no need to rely on the AI to reliably output correct information; the correct information could be pulled from a database into the model's context. Could a 3B model be trained like this, to have a good understanding of the information in its context and the ability to apply it to different situations?
I'm sure I'm missing something here, again still learning. But, just thinking it would be nice to have a capable 3B model and I wouldn't mind downloading terabytes of information for it to use.
Anyone think this is possible and is this the direction training will take?
(Also, sorry for the overly dramatic title. I couldn't resist 🤭) | 2023-12-23T04:44:31 | https://www.reddit.com/r/LocalLLaMA/comments/18oxvg0/experts_hate_him_how_this_redditor_rethought_ai/ | multiverse_fan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oxvg0 | false | null | t3_18oxvg0 | /r/LocalLLaMA/comments/18oxvg0/experts_hate_him_how_this_redditor_rethought_ai/ | false | false | self | 1 | null |
What is the current open source SOTA for TTS? | 29 | Looking for an open source text to speech model. Which ones have you tried and found to be the best? | 2023-12-23T04:17:45 | https://www.reddit.com/r/LocalLLaMA/comments/18oxeuk/what_is_the_current_open_source_sota_for_tts/ | Key-Morning-4712 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18oxeuk | false | null | t3_18oxeuk | /r/LocalLLaMA/comments/18oxeuk/what_is_the_current_open_source_sota_for_tts/ | false | false | self | 29 | null |
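Coqui's XTTS v2 comes up most often in these threads as the current open-weights pick, with Piper, Bark, Tortoise, and StyleTTS 2 as common alternatives. A hedged sketch of the Coqui API (`pip install TTS`; the model id follows Coqui's published naming and is worth double-checking):

```python
# Minimal sketch: synthesize speech with Coqui TTS / XTTS v2. The model
# id is taken from Coqui's naming scheme; verify against the current
# model list before relying on it.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="Open source text to speech, running locally.",
    speaker_wav="reference_voice.wav",  # short clip of the voice to clone
    language="en",
    file_path="output.wav",
)
```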