Dataset schema (column, dtype, min/max or classes):

| field | dtype | min | max |
|---|---|---|---|
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | string (length) | 0 | 878 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
Trouble prompting Mixtral mid-conversation
8
I’m working on porting my OpenAI conversational agent with RAG capabilities to open-source models. Currently I’m using Mixtral-8x7B-Instruct-v0.1 via TogetherAI’s OpenAI-compatible API. My plan was to use a first prompt to enrich the user’s question and then a second prompt to answer it, either based on info in the prompt / chat history or via a RAG function call. I’m finding the performance of both prompts degrades once there are several messages already in the conversation. For example, my answering prompt includes:

* Instructions (answer based on chat or use this function call; don’t use prior knowledge)
* Some basic user information
* A couple of examples of when to use the function calls

The entire prompt is less than 400 tokens, and I add it to the messages array using the ‘system’ role: `messages.push({'role': 'system', 'content': systemPrompt})`

**If this is the only message in the array, the prompt works very well.** **But if there are even a few other conversation messages in the array before the system prompt, the model starts ignoring the instructions.** It will answer using info not in the context instead of calling the function, or it will simply fail to answer based on info that *is* in the context. Everything mentioned above fits comfortably into this model’s 32K context (<4K, actually), and I have the temperature set very low. I’ve tried both prepending and appending the system prompt to the chat history, but it has little impact on performance. As mentioned above, I’m also having a similar issue with my question-enriching prompt. **The general question, perhaps: how do I get the model to follow the system prompt when other messages are present in the context, given that I am safely within the supported context length?**
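One workaround worth prototyping is to re-assert the system prompt close to the point of generation on every turn, rather than leaving it buried at the top of a growing history. Below is a minimal sketch (assuming an OpenAI-style messages array; the function name and placement choice are my own, not from the post) that strips any earlier system messages and re-inserts the prompt just before the most recent user message:

```python
def reassert_system(messages, system_prompt):
    """Rebuild the messages array so the system prompt sits immediately
    before the latest user message. A sketch of one mitigation for
    instruction drift, not a guaranteed fix."""
    chat = [m for m in messages if m["role"] != "system"]
    last_user = max(i for i, m in enumerate(chat) if m["role"] == "user")
    chat.insert(last_user, {"role": "system", "content": system_prompt})
    return chat

history = [
    {"role": "system", "content": "Answer only from context."},
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "What does the report say?"},
]
rebuilt = reassert_system(history, "Answer only from context.")
```

The design intuition is that smaller instruct models tend to weight recent tokens more heavily, so moving the instructions nearer the end of the context can help; the trade-off is a slightly less natural chat layout.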
2024-02-03T15:39:48
https://www.reddit.com/r/LocalLLaMA/comments/1ahyu44/trouble_prompting_mixtral_midconversation/
Odd-Dentist274
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahyu44
false
null
t3_1ahyu44
/r/LocalLLaMA/comments/1ahyu44/trouble_prompting_mixtral_midconversation/
false
false
self
8
null
Live Llava on Jetson Orin
39
Here's a demo & tutorial I made of running Llava continuously on a live camera stream, using an embedded Jetson AGX Orin module that can be deployed in the field onboard robots, smart cameras, etc.

https://www.jetson-ai-lab.com/tutorial_live-llava.html https://youtu.be/X-OXxPiUTuU

This was still using Llava-1.5 at the time I captured the videos earlier this week (raw, unedited), but I'm excited to upgrade to 1.6 and try its improved CLIP image-tiling scheme for higher resolution. This currently uses MLC/TVM with W4A16 quantization in a VLM pipeline that I optimized for latency. After implementing the 1.6 updates, the next step is to use constrained JSON output from the model to dynamically trigger alerts and actions. I think with the help of a coding model on the frontend to configure the pipeline from user queries, it could make a "smart camera that anyone can program with their voice" (even if that camera is on a mobile robot). Enjoy!
2024-02-03T15:27:00
https://www.reddit.com/r/LocalLLaMA/comments/1ahyk1v/live_llava_on_jetson_orin/
nanobot_1000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahyk1v
false
null
t3_1ahyk1v
/r/LocalLLaMA/comments/1ahyk1v/live_llava_on_jetson_orin/
false
false
self
39
{'enabled': False, 'images': [{'id': '4cd_kTjhJSgaVjNTCBg-FhMbTsFBuNcI3ddBbz530Rw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/hDfOeJCC-XOU-gbp-Fl6KRtIWwKnJ45qAS3v1aPAJxk.jpg?width=108&crop=smart&auto=webp&s=bd0c0f49cce0db1bec20c9337b3d00fdb0e36771', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/hDfOeJCC-XOU-gbp-Fl6KRtIWwKnJ45qAS3v1aPAJxk.jpg?width=216&crop=smart&auto=webp&s=0f5e68d40a130b06d20b989dcd1c7b3f4c83771d', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/hDfOeJCC-XOU-gbp-Fl6KRtIWwKnJ45qAS3v1aPAJxk.jpg?width=320&crop=smart&auto=webp&s=0c0fa27419175c4e40dc141b758f644fe9c87e18', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/hDfOeJCC-XOU-gbp-Fl6KRtIWwKnJ45qAS3v1aPAJxk.jpg?auto=webp&s=4fd89325c859290128555026d9c6f541e3c0e9f4', 'width': 480}, 'variants': {}}]}
LLM apps security
1
Anyone interested in a tool that scans & improves your LLM app’s security? [View Poll](https://www.reddit.com/poll/1ahy326)
2024-02-03T15:05:00
https://www.reddit.com/r/LocalLLaMA/comments/1ahy326/llm_apps_security/
Ok_Cell3454
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahy326
false
null
t3_1ahy326
/r/LocalLLaMA/comments/1ahy326/llm_apps_security/
false
false
self
1
null
phi-2 Open Hermes 2.5
37
hey guys, here's Microsoft's phi-2 finetuned on Open Hermes 2.5

* QLoRA, rank 32, LR 2e-5
* 1 epoch of teknium/OpenHermes-2.5

More details, code for inference, and how it was trained are in the HF repo: [https://huggingface.co/g-ronimo/phi-2-OpenHermes-2.5](https://huggingface.co/g-ronimo/phi-2-OpenHermes-2.5)

https://preview.redd.it/igoxv479ldgc1.png?width=1140&format=png&auto=webp&s=8e7ad3dd4edce933bb570c901f00ba6319f1d3ac
2024-02-03T13:54:19
https://www.reddit.com/r/LocalLLaMA/comments/1ahwlti/phi2_open_hermes_25/
HatEducational9965
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahwlti
false
null
t3_1ahwlti
/r/LocalLLaMA/comments/1ahwlti/phi2_open_hermes_25/
false
false
https://b.thumbs.redditm…HOLdQxiyAo8A.jpg
37
{'enabled': False, 'images': [{'id': '40lUd89k4uUl9CkqdrSoRMLVwX9boU0WbMNV3nrRU6I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gMQSliFrprLghKCH3EI60LEKglq5MyMh9L42-TH2ciM.jpg?width=108&crop=smart&auto=webp&s=8825b7cf1dca9ac939a9e44c2de4c4e08e38a796', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gMQSliFrprLghKCH3EI60LEKglq5MyMh9L42-TH2ciM.jpg?width=216&crop=smart&auto=webp&s=65ceedf82fb8d6b874465819433cd763aadd0d33', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gMQSliFrprLghKCH3EI60LEKglq5MyMh9L42-TH2ciM.jpg?width=320&crop=smart&auto=webp&s=70a2e4a8db1ec38941d62db6da6b8f89220076aa', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gMQSliFrprLghKCH3EI60LEKglq5MyMh9L42-TH2ciM.jpg?width=640&crop=smart&auto=webp&s=d1a5fde0cbd6017c7018fb2d257624ceb43838a7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gMQSliFrprLghKCH3EI60LEKglq5MyMh9L42-TH2ciM.jpg?width=960&crop=smart&auto=webp&s=8351723121a945bfbc025697ad891972ec5acce2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gMQSliFrprLghKCH3EI60LEKglq5MyMh9L42-TH2ciM.jpg?width=1080&crop=smart&auto=webp&s=cb4ff40d65668e84720723733d97c9c2677d42ab', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gMQSliFrprLghKCH3EI60LEKglq5MyMh9L42-TH2ciM.jpg?auto=webp&s=f6eb9b6428077dffdd80aad45d80c8f9aa97ed97', 'width': 1200}, 'variants': {}}]}
Introducing a Unique Self-Talking Chatbot Experience: Conversations Between AI Personalities
35
Hello Reddit Community! I'm thrilled to share a project I've been passionately working on – a **Self-Talking Chatbot Platform** that simulates conversations between two distinct AI personalities, named Ahmed and Ayse. Built using the Zephyr language model and a Flask web interface, this platform showcases an innovative example of AI-driven communication. 🤖 **What It's All About:** * The platform features two chatbots engaging in an autonomous conversation after an initial user prompt. * Ahmed and Ayse, the chatbots, have been programmed with unique characteristics and styles, offering varied and dynamic dialogues. * The project is not just a technical showcase but also a creative exploration into AI personality and conversation dynamics. 💻 **Technical Highlights:** * The chatbots operate on a local instance of the Zephyr language model, ensuring fast and responsive dialogues. * The web interface is built with Flask and JavaScript, emphasizing a user-friendly and interactive experience. * The project is designed to work with both GPU and CPU, ensuring versatility across different systems. 🛠 **Known Quirks and Future Plans:** * Currently, the chat can be stopped by typing 'goodbye' in the chat input. * A known glitch: the application might crash if an ongoing conversation is interrupted with a new message. We're working on a fix for smoother interactions. 🌟 **Why Share This?** I believe open-source projects grow best through community interaction, feedback, and contributions. I'm keen to hear your thoughts, suggestions, and would love to see how this project evolves with community input. Whether you're a fellow developer, an AI enthusiast, or just curious about chatbot technology, your perspectives are invaluable! 🔗 **Check It Out:** I've documented the entire journey, setup instructions, and more in a detailed README on GitHub ([https://github.com/nandxorandor/chatbot\_VS\_chatbot](https://github.com/nandxorandor/chatbot_VS_chatbot)). 
Feel free to explore the code, set up your own instance, and play around with Ahmed and Ayse's conversations. Stay tuned for a demo.gif showcasing the platform in action! https://i.redd.it/w8r1bzldidgc1.gif Looking forward to your thoughts and feedback!
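The core loop the post describes (two bots trading turns after one initial prompt, with a 'goodbye' stop word) can be sketched in a few lines. This is my own illustrative skeleton, not code from the repo; `reply_fn` stands in for a call to the local Zephyr model with each bot's persona prompt:

```python
def converse(reply_fn, opening, turns=4):
    """Two bots talk to each other: each reply becomes the other bot's
    next input. Stops early if a reply contains the 'goodbye' stop word."""
    speakers = ["Ahmed", "Ayse"]
    transcript = []
    message = opening
    for i in range(turns):
        name = speakers[i % 2]          # alternate personalities each turn
        message = reply_fn(name, message)
        transcript.append((name, message))
        if "goodbye" in message.lower():
            break
    return transcript

# Stub model: echoes the input, tagged with the speaker's name.
log = converse(lambda name, text: f"{name} responds to: {text}", "Hello!")
```

Running the real thing just means swapping the lambda for a Zephyr inference call that prepends the speaker's persona prompt.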
2024-02-03T13:41:06
https://www.reddit.com/r/LocalLLaMA/comments/1ahwcrr/introducing_a_unique_selftalking_chatbot/
positivevibe2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahwcrr
false
null
t3_1ahwcrr
/r/LocalLLaMA/comments/1ahwcrr/introducing_a_unique_selftalking_chatbot/
false
false
https://b.thumbs.redditm…83d0g83DdFPo.jpg
35
{'enabled': False, 'images': [{'id': 'dxOejA-hMSC2SvOQZ5h3lXZi6hihwT36es_yPIlmvtE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_Z3Scbv0mdZghE-CJTYV39HpshYDXBoKiXY4wWqL9AU.jpg?width=108&crop=smart&auto=webp&s=373ac11f5b4e2c75d04f85be786fff66a2467ae3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_Z3Scbv0mdZghE-CJTYV39HpshYDXBoKiXY4wWqL9AU.jpg?width=216&crop=smart&auto=webp&s=912f71382ba4645d17c196c14ec413bed4358b91', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_Z3Scbv0mdZghE-CJTYV39HpshYDXBoKiXY4wWqL9AU.jpg?width=320&crop=smart&auto=webp&s=980e041f761cf09300189ea73b4743c033173be0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_Z3Scbv0mdZghE-CJTYV39HpshYDXBoKiXY4wWqL9AU.jpg?width=640&crop=smart&auto=webp&s=4c6ce64caee2d3b889286a01a4fb2bdc46f1c271', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_Z3Scbv0mdZghE-CJTYV39HpshYDXBoKiXY4wWqL9AU.jpg?width=960&crop=smart&auto=webp&s=f2117a66ce7da1b59724f52fefa8fd96e3a81f6c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_Z3Scbv0mdZghE-CJTYV39HpshYDXBoKiXY4wWqL9AU.jpg?width=1080&crop=smart&auto=webp&s=719aaa4d3552a4ea65e747a02c3312c012a775df', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_Z3Scbv0mdZghE-CJTYV39HpshYDXBoKiXY4wWqL9AU.jpg?auto=webp&s=1a326833e3a71261267b9c61066c3792f2b528d1', 'width': 1200}, 'variants': {}}]}
GGUF Q4 vs Q5? Is the difference significant in creative writing?
52
I use the GGUF format on my laptop for creative writing. Q5 is very slow. Is the difference between Q4 & Q5 significant in creative writing?
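On size (and hence memory-bandwidth-bound speed), the napkin math is file size ≈ parameters × bits-per-weight / 8. Assuming roughly 4.85 and 5.69 effective bits per weight for the common Q4_K_M and Q5_K_M quants (approximate figures, and quality differences in creative writing are subjective):

```python
def gguf_size_gb(n_params_b, bits_per_weight):
    """Rough GGUF file size in GB (1e9 bytes): params * bits / 8."""
    return n_params_b * 1e9 * bits_per_weight / 8 / 1e9

q4 = gguf_size_gb(7, 4.85)  # ~4.2 GB for a 7B model
q5 = gguf_size_gb(7, 5.69)  # ~5.0 GB
```

So Q5 is roughly 17% more data to read per token, which on a bandwidth-limited laptop translates fairly directly into slower generation.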
2024-02-03T13:31:49
https://www.reddit.com/r/LocalLLaMA/comments/1ahw6d9/gguf_q4_vs_q5_is_the_difference_significant_in/
broodysupertramp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahw6d9
false
null
t3_1ahw6d9
/r/LocalLLaMA/comments/1ahw6d9/gguf_q4_vs_q5_is_the_difference_significant_in/
false
false
self
52
null
GPU bloat stifles AI
1
Hey Local Llamas - I’m coming from a product management background and wanted your feedback on my line of thinking in this post. As a product person, I’ve been frustrated by the slowness of LLM applications and contend that LLM latency is holding back the development and adoption of AI’s “killer apps”. I then use some napkin math in the post to illustrate how custom AI hardware could unlock lower latency inference compared to GPUs — maybe enough to enable killer apps? Given that this community knows LLMs inside and out and thinks deeply about the LLM user experience, I wanted to hear your thoughts. Appreciate any and all feedback!
2024-02-03T12:27:29
https://www.chipstrat.com/p/gpu-bloat-is-holding-back-ai
chipstrat
chipstrat.com
1970-01-01T00:00:00
0
{}
1ahv0zk
false
null
t3_1ahv0zk
/r/LocalLLaMA/comments/1ahv0zk/gpu_bloat_stifles_ai/
false
false
https://b.thumbs.redditm…FUH_c1iGZLBI.jpg
1
{'enabled': False, 'images': [{'id': 'ky1ufI2PjMSMqQbEh8Lxi0t9hRXcXHc-TwKY9ycY-qg', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/S2pg3R0ZHIiHqtPviFT10fCE2k0g7OaRGsLEJYAAJ-w.jpg?width=108&crop=smart&auto=webp&s=9979d3a63a9c9f3861c3d5947331fdca48fe643d', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/S2pg3R0ZHIiHqtPviFT10fCE2k0g7OaRGsLEJYAAJ-w.jpg?width=216&crop=smart&auto=webp&s=fccb2aafb05d44edf0124fb317b4175db10c802b', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/S2pg3R0ZHIiHqtPviFT10fCE2k0g7OaRGsLEJYAAJ-w.jpg?width=320&crop=smart&auto=webp&s=e8e2113b5112d5a92a319443304b18db87a20d79', 'width': 320}, {'height': 375, 'url': 'https://external-preview.redd.it/S2pg3R0ZHIiHqtPviFT10fCE2k0g7OaRGsLEJYAAJ-w.jpg?width=640&crop=smart&auto=webp&s=76ed6f4afaa787ebf5b5eb9a5b12b1b6b43f8fc6', 'width': 640}, {'height': 562, 'url': 'https://external-preview.redd.it/S2pg3R0ZHIiHqtPviFT10fCE2k0g7OaRGsLEJYAAJ-w.jpg?width=960&crop=smart&auto=webp&s=c7fc3e542e31ce59b2075a0a69e77f8ba3cae1ef', 'width': 960}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/S2pg3R0ZHIiHqtPviFT10fCE2k0g7OaRGsLEJYAAJ-w.jpg?auto=webp&s=d4de6233db2d30195046ffa9206d8ca23ee520c4', 'width': 1024}, 'variants': {}}]}
[D] Can I use LoRa and Prompt Tuning at the same time
1
Is it possible to use prompt tuning and LoRA together?
2024-02-03T12:11:26
https://www.reddit.com/r/LocalLLaMA/comments/1ahurlk/d_can_i_use_lora_and_prompt_tuning_at_the_same/
ambuje12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahurlk
false
null
t3_1ahurlk
/r/LocalLLaMA/comments/1ahurlk/d_can_i_use_lora_and_prompt_tuning_at_the_same/
false
false
self
1
null
Fine tuning to match an author
10
Hi, I’m looking to fine-tune my first model for fun using QLoRA, and I was considering comparing Mistral 7B and Goliath 120B. The task: feeding excerpts of a self-help book by an author in an attempt to have the LLM mimic the author's speaking style. If this instills some knowledge from the book, that’s great, but my intention is to then use the model for RAG with the aforementioned book. This is more a curiosity project to see if I can get the style right. I wanted to know the following:

1. How should I prepare the dataset? Do you recommend giving another LLM like GPT-4 excerpts of the book and asking it to generate questions whose answer would be the excerpt, and using this as a Q&A dataset?
2. Can I just feed chunks of the book as-is?

Second of all, do you recommend an existing fine-tuned version of Mistral to further tune for this type of task instead of the base model? Thanks a lot, appreciate any help! Love this sub!
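For option 1, the data prep is mechanical once you have a question generator: pair each excerpt with a generated question and write instruction-tuning records as JSONL. A minimal sketch (the `question_fn` callable and record field names are my own placeholders; axolotl and similar trainers each expect their own schema):

```python
import json

def to_qa_records(excerpts, question_fn):
    """Pair each book excerpt with a generated question (e.g. from GPT-4)
    so the excerpt becomes the target completion."""
    return [{"instruction": question_fn(text), "output": text}
            for text in excerpts]

excerpts = ["Habits compound: small daily actions add up."]
records = to_qa_records(
    excerpts, lambda t: "What does the author say about habits?")
jsonl = "\n".join(json.dumps(r) for r in records)
```

In practice `question_fn` would be an API call, and you would want several question phrasings per excerpt to avoid overfitting to one template.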
2024-02-03T12:05:51
https://www.reddit.com/r/LocalLLaMA/comments/1ahuocx/fine_tuning_to_match_an_author/
99OG121314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahuocx
false
null
t3_1ahuocx
/r/LocalLLaMA/comments/1ahuocx/fine_tuning_to_match_an_author/
false
false
self
10
null
Any popular GUIs already support MoE-LLaVA?
23
Hi, I am searching for information on whether any of the popular easy-to-install GUIs (like LM Studio or Oobabooga) already support MoE-LLaVA models, or are working on it?
2024-02-03T10:56:48
https://www.reddit.com/r/LocalLLaMA/comments/1ahtm2f/any_popular_guis_already_support_moellava/
Ng2oXg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahtm2f
false
null
t3_1ahtm2f
/r/LocalLLaMA/comments/1ahtm2f/any_popular_guis_already_support_moellava/
false
false
self
23
null
difference between adamw_bnb_8bit and paged_adamw_8bit ?
1
.
2024-02-03T10:23:20
https://www.reddit.com/r/LocalLLaMA/comments/1aht4n0/difference_between_adamw_bnb_8bit_and_paged_adamw/
mohamedem2m
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aht4n0
false
null
t3_1aht4n0
/r/LocalLLaMA/comments/1aht4n0/difference_between_adamw_bnb_8bit_and_paged_adamw/
false
false
self
1
null
Adspace in prompts? How are you monetizing or intending to?
1
Who wants their proper agent framework link used as the separator between responses in the agent batch debug function of [Clipboard Conqueror](https://github.com/aseichter2007/ClipboardConqueror)'s no-code multi-agent-exchange prototyping functionality? How much is that worth to you? It will be seen by developers and entrepreneurs. It's one line of code to add, and it prints with every |||debug| or |||d| pasted out, rather than polluting the system prompt.

If you hate ads and love Clipboard Conqueror, buy me a coffee; my bills are starting to pile up. Quick, before I find someone interested, or before I talk more about selling ad space in prompts and the idea catches on. I'll delete this post at $100 a month worth of subscribers; then it would at least pay for my internet. I currently have $0. Hurry before I help the enshittification, but my family needs food and transportation, and I dream about hosting a server to make putting CC in a mobile keyboard worth developing.

Also, as an aside, CC's new API handler will drop real soon. I'm trying to catch enough calm to get the bugs pulled out, but it's working against kobold again, and for chat completions on the OpenAI API that kobold supplies. It should be ready to point at textgenwebui or anything, really, if I can sort out what I've missed (I need to read the Bard API docs, etc.) and send the right parameters from the configs. Then, once I know it's working, I'm ready to test sending a custom jinja template that is a true kludge, but I think it works for the current state of AI. V2 will be better.

Anyway, is anyone actually making money with AI stuff? From, like, users?
2024-02-03T09:35:23
https://www.reddit.com/r/LocalLLaMA/comments/1ahsfvi/adspace_in_prompts_how_are_you_monetizing_or/
aseichter2007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahsfvi
false
null
t3_1ahsfvi
/r/LocalLLaMA/comments/1ahsfvi/adspace_in_prompts_how_are_you_monetizing_or/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gn85iSDn83tw6wx7jxeWi3Wce6GMvy-Geczg1qA4EKQ', 'resolutions': [{'height': 33, 'url': 'https://external-preview.redd.it/pvyQwlLtR012wu7agcl3C8T6tgUe_tMYBGnXykjRn68.jpg?width=108&crop=smart&auto=webp&s=13c43de44c4501c4a0e011cb6dcf46194057250c', 'width': 108}, {'height': 67, 'url': 'https://external-preview.redd.it/pvyQwlLtR012wu7agcl3C8T6tgUe_tMYBGnXykjRn68.jpg?width=216&crop=smart&auto=webp&s=f99a1813a6c7001c6734c4a91579862e521b1167', 'width': 216}, {'height': 100, 'url': 'https://external-preview.redd.it/pvyQwlLtR012wu7agcl3C8T6tgUe_tMYBGnXykjRn68.jpg?width=320&crop=smart&auto=webp&s=ed39917419dde295ecc6c72e79709a24a0e3ee11', 'width': 320}, {'height': 200, 'url': 'https://external-preview.redd.it/pvyQwlLtR012wu7agcl3C8T6tgUe_tMYBGnXykjRn68.jpg?width=640&crop=smart&auto=webp&s=6cd4f51591eeac1a514326d9872a1821b86f0f44', 'width': 640}, {'height': 300, 'url': 'https://external-preview.redd.it/pvyQwlLtR012wu7agcl3C8T6tgUe_tMYBGnXykjRn68.jpg?width=960&crop=smart&auto=webp&s=72f156a7a947d458a913289a1503e33d5257719a', 'width': 960}, {'height': 337, 'url': 'https://external-preview.redd.it/pvyQwlLtR012wu7agcl3C8T6tgUe_tMYBGnXykjRn68.jpg?width=1080&crop=smart&auto=webp&s=cb00cf745b6473ddbaef189691c467fd52c0056c', 'width': 1080}], 'source': {'height': 516, 'url': 'https://external-preview.redd.it/pvyQwlLtR012wu7agcl3C8T6tgUe_tMYBGnXykjRn68.jpg?auto=webp&s=f6a6198a3dc6a55d6a6a3da4467844cf4902030b', 'width': 1650}, 'variants': {}}]}
Extracting Answer Source Chunks from GPT Responses in Langchain
3
Hey everyone, I've been working on a project where I extract information from a large PDF document. I've completed the initial steps, including reading the PDF, splitting it into chunks, converting them to embeddings, and storing them in a vector database. For the next step, I perform a similarity check with a limit of 28. Each chunk has a size of 350, with an overlap of 80. After that, I send these chunks to GPT with a specific question. GPT provides answers, but I'm struggling to determine from which chunk the answer originated. Is there a way to identify the specific chunk(s) that GPT used to generate the answer? Alternatively, is there a solution to get an array of chunks that may contain the answer out of the 28 available? Any insights or suggestions on how to achieve this would be greatly appreciated! Thanks in advance!
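The usual fix is to keep chunk IDs attached to the embeddings so the retrieval step can return its sources alongside the text. A self-contained sketch using plain cosine similarity (toy 2-d vectors and names are illustrative only):

```python
import math

def top_k_chunks(query_vec, store, k=3):
    """store: list of (chunk_id, text, embedding). Returns the k most
    similar chunks *with their ids*, so the final answer can cite them."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    scored = sorted(store, key=lambda c: cos(query_vec, c[2]), reverse=True)
    return [(cid, text) for cid, text, _ in scored[:k]]

store = [
    ("chunk-0", "Revenue grew 12% in 2023.", [1.0, 0.0]),
    ("chunk-1", "The office moved to Berlin.", [0.0, 1.0]),
    ("chunk-2", "Revenue guidance for 2024.", [0.9, 0.1]),
]
hits = top_k_chunks([1.0, 0.0], store, k=2)
```

If you are on Langchain, the equivalent is asking the chain to return its source documents (e.g. `return_source_documents=True` on RetrievalQA, if I recall the parameter correctly) instead of only the answer string; GPT itself cannot reliably tell you which chunk it used after the fact.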
2024-02-03T09:19:20
https://www.reddit.com/r/LocalLLaMA/comments/1ahs7ok/extracting_answer_source_chunks_from_gpt/
dragon_4789
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahs7ok
false
null
t3_1ahs7ok
/r/LocalLLaMA/comments/1ahs7ok/extracting_answer_source_chunks_from_gpt/
false
false
self
3
null
I won't pay for ChatGPT Plus again unless it becomes significantly better than free LLM offerings.
5
It's been almost a month since my ChatGPT Plus subscription ended. I've tried Mixtral on HuggingChat, and while Mistral Medium is API-only, Poe offers it for free, and for the most part it's almost as good as, or as good as, GPT-4 in my usage. I honestly can't tell the difference in quality between answers. Performance is also consistently fast on both (no rates as low as 1 word/sec like in GPT-4), and I don't face any issues with it gradually getting dumber or lazy. It feels very nice to not have to spend $20 a month. The only way I would pay for it again is if a GPT-4.5, 5, or newer is reported to be significantly better than all free LLMs, like GPT-4 has been for the past year. That's why I feel the past 2 months have been more interesting than much of 2023 for consumer LLMs. The gap with GPT-4 is shrinking, and I can now use very high-quality LLMs for free without beefy GPUs and hardware of my own.
2024-02-03T09:10:03
https://www.reddit.com/r/LocalLLaMA/comments/1ahs2nc/i_wont_pay_for_chatgpt_plus_again_unless_it/
TheTwelveYearOld
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahs2nc
false
null
t3_1ahs2nc
/r/LocalLLaMA/comments/1ahs2nc/i_wont_pay_for_chatgpt_plus_again_unless_it/
false
false
self
5
null
Finetuning using Axolotl
4
Newbie here. I have been using axolotl to train the open-llama-3b model using their example lora.yml. I have tweaked some of the settings like micro_batch_size, epochs, validation_set, learning rate, etc., and trained the model many times. I started to get similar graphs in my wandb reports even when I changed some of my settings. The model was also trained on an instruction dataset, but it can't answer properly when the user gives any input. Can someone please tell me whether the results I got in my report are decent or bad? And why does the model hallucinate when the user gives an input? Thank you.
2024-02-03T09:07:55
https://www.reddit.com/r/LocalLLaMA/comments/1ahs1i6/finetuning_using_axolotl/
formidoboi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahs1i6
false
null
t3_1ahs1i6
/r/LocalLLaMA/comments/1ahs1i6/finetuning_using_axolotl/
false
false
default
4
null
funcchain: a langchain extension, focused on structured output with llamacpp grammars
1
2024-02-03T09:02:30
https://shroominic.github.io/funcchain/
DrNatoor
shroominic.github.io
1970-01-01T00:00:00
0
{}
1ahryls
false
null
t3_1ahryls
/r/LocalLLaMA/comments/1ahryls/funcchain_a_langchain_extension_focused_on/
false
false
default
1
null
Local QLORA Finetuning
12
Hi, recently I’ve been wanting to see if there’s any way to fine-tune 7B models on my computer, despite only having 12 GB of VRAM. I thought I could use QLoRA, but thus far all of my attempts have failed miserably. Does anyone know if there’s a way for me to fine-tune 7B models locally, and if so, are there any good tutorials/guides on how to do so?
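On paper a 7B QLoRA run should fit in 12 GB. Some hedged napkin math (the byte counts are rough assumptions: ~0.5 bytes/param for 4-bit NF4 base weights; LoRA adapters assumed at ~1% of parameters with roughly 8 bytes each once fp16 weights, gradients, and Adam moments are counted; activations and KV cache are ignored):

```python
def qlora_vram_gb(n_params_b, lora_frac=0.01):
    """Very rough QLoRA memory floor in GB. Ignores activations,
    KV cache, and framework overhead, which eat the remaining headroom."""
    base = n_params_b * 1e9 * 0.5 / 1e9                 # 4-bit base weights
    adapters = n_params_b * 1e9 * lora_frac * 8 / 1e9   # adapters + optimizer
    return base + adapters

est = qlora_vram_gb(7)  # roughly 4 GB before activations
```

That leaves several GB for activations on a 12 GB card, which is why failures usually trace to batch size / sequence length settings (or an fp16 load sneaking in) rather than the method itself.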
2024-02-03T08:20:40
https://www.reddit.com/r/LocalLLaMA/comments/1ahrcnz/local_qlora_finetuning/
Aptare
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahrcnz
false
null
t3_1ahrcnz
/r/LocalLLaMA/comments/1ahrcnz/local_qlora_finetuning/
false
false
self
12
null
Noob-friendly Recap
11
I recently started getting into LLM-powered applications by myself, using online resources. What I discovered is that LLMs have become so mainstream that there are many platforms, frameworks, and models to choose from, and usually I don't even understand the key differences between them. This is what I understood; please enlighten me and whoever comes here through a Google search:

- LLMs are huge models, and if I want to run them locally I should use a quantized version: what are the best practices? What are the requirements? I'm currently using Windows but I'm having so much trouble.
- Faster inference: what are the differences between vLLM, llama.cpp, and Ollama? The second is the only one that *should* work on Windows. Should I always use these frameworks? Are they used only to load the quantized model?
- Deployment: what are the best platforms / best practices for the deployment phase? I guess a logical choice would be making the LLM available via API.
- Fine-tuning: there are multiple ways to fine-tune an LLM. LoRA seems a good trade-off. Should I use a particular framework to apply it? Moreover, say I fine-tune my model for different use cases: in production, how do I swap the weights for these different models efficiently?
- Orchestrator: LangChain seems to be the most popular one, but is it the one to go for? I see they are working on a huge refactor.
2024-02-03T08:08:40
https://www.reddit.com/r/LocalLLaMA/comments/1ahr6ds/noobfriendly_recap/
DatAndre
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahr6ds
false
null
t3_1ahr6ds
/r/LocalLLaMA/comments/1ahr6ds/noobfriendly_recap/
false
false
self
11
null
How to see Ollama inference speed (tokens/sec)?
2
I can't find any info on how to access inference speed. I would just like to have that info for comparison with other loaders. Any advice?
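Two options: on the CLI, `ollama run <model> --verbose` reportedly prints timing stats including an eval rate after each response. Programmatically, Ollama's `/api/generate` response includes (per the API docs) `eval_count` (tokens generated) and `eval_duration` (in nanoseconds), from which tokens/sec is a one-liner; the sample response dict below is fabricated for illustration:

```python
def tokens_per_second(resp):
    """Generation speed from an Ollama /api/generate response:
    tokens generated divided by eval time in seconds."""
    return resp["eval_count"] / (resp["eval_duration"] / 1e9)

sample = {"eval_count": 120, "eval_duration": 4_000_000_000}  # 4 seconds
speed = tokens_per_second(sample)  # 30.0 tokens/s
```

The same response also carries prompt-eval fields, so prompt-processing speed can be computed the same way for a fuller comparison against other loaders.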
2024-02-03T07:54:00
https://www.reddit.com/r/LocalLLaMA/comments/1ahqycb/how_to_see_ollama_inference_speed_tokenssec/
dodo13333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahqycb
false
null
t3_1ahqycb
/r/LocalLLaMA/comments/1ahqycb/how_to_see_ollama_inference_speed_tokenssec/
false
false
self
2
null
Is there a UI which is open source and shows ChatGPT, Bard, and local LLM replies in parallel?
1
[removed]
2024-02-03T06:13:51
https://www.reddit.com/r/LocalLLaMA/comments/1ahpffv/is_there_an_ui_which_is_open_source_and_shows/
kkgmgfn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahpffv
false
null
t3_1ahpffv
/r/LocalLLaMA/comments/1ahpffv/is_there_an_ui_which_is_open_source_and_shows/
false
false
self
1
null
Multilingual SeaLLM-7B-v2 - LLM for Southeast Asia languages
23
2024-02-03T04:54:45
https://huggingface.co/SeaLLMs/SeaLLM-7B-v2
ninjasaid13
huggingface.co
1970-01-01T00:00:00
0
{}
1aho2k2
false
null
t3_1aho2k2
/r/LocalLLaMA/comments/1aho2k2/multilingual_seallm7bv2_llm_for_southeast_asia/
false
false
https://b.thumbs.redditm…tbYXLChjgmPY.jpg
23
{'enabled': False, 'images': [{'id': 'eQFovQJ2Xomisky7X0pxTq_CQohGQRZLqMJgzpM0nII', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OuoHKR2Pw2AZR5T1aJrMpzFlBnPNzaOxaXFS9axk3Vw.jpg?width=108&crop=smart&auto=webp&s=748992b7d87424cc902bca2915978e7f1c93bf9c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OuoHKR2Pw2AZR5T1aJrMpzFlBnPNzaOxaXFS9axk3Vw.jpg?width=216&crop=smart&auto=webp&s=dde9daf27e160e33667b442feed9f18998a2f57b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OuoHKR2Pw2AZR5T1aJrMpzFlBnPNzaOxaXFS9axk3Vw.jpg?width=320&crop=smart&auto=webp&s=81b2997deaf6c7f2da99d6011403f55c061b2815', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OuoHKR2Pw2AZR5T1aJrMpzFlBnPNzaOxaXFS9axk3Vw.jpg?width=640&crop=smart&auto=webp&s=ebc4daf484c01020a2030d3ac0efa94801efc5de', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OuoHKR2Pw2AZR5T1aJrMpzFlBnPNzaOxaXFS9axk3Vw.jpg?width=960&crop=smart&auto=webp&s=0fccc201b3eb7e3cefe85a7011fafe671c38f288', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OuoHKR2Pw2AZR5T1aJrMpzFlBnPNzaOxaXFS9axk3Vw.jpg?width=1080&crop=smart&auto=webp&s=1955316b749153d47494a24241b77172bb28c9fc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OuoHKR2Pw2AZR5T1aJrMpzFlBnPNzaOxaXFS9axk3Vw.jpg?auto=webp&s=84aa972ef01397ddecdf0533729ea82da6b30fc1', 'width': 1200}, 'variants': {}}]}
Beginner question: is VRAM a total bottleneck in running LLMs?
1
[removed]
2024-02-03T04:14:17
https://www.reddit.com/r/LocalLLaMA/comments/1ahnbvv/beginner_question_is_vram_a_total_bottleneck_in/
hmmqzaz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahnbvv
false
null
t3_1ahnbvv
/r/LocalLLaMA/comments/1ahnbvv/beginner_question_is_vram_a_total_bottleneck_in/
false
false
self
1
null
A word that means "proud to have asked A.I. to do it and it works"
155
Today I altered my professional life for the better by asking an AI to program a little something to classify photos from site reports, and it worked better than expected. Instead of 1-2 minutes per photo, it's about 0.5 seconds, and it can run through a whole folder with hundreds of photos in 1-2 minutes instead of 2-3 HOURS. Of course my boss will only see a 5% production increase, like my pay raise.
2024-02-03T03:48:49
https://www.reddit.com/r/LocalLLaMA/comments/1ahmuzq/a_word_that_means_proud_to_have_asked_ai_to_do_it/
oodelay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahmuzq
false
null
t3_1ahmuzq
/r/LocalLLaMA/comments/1ahmuzq/a_word_that_means_proud_to_have_asked_ai_to_do_it/
false
false
self
155
null
Raspberry Pi 5 + Coral
1
[removed]
2024-02-03T03:20:40
https://www.reddit.com/r/LocalLLaMA/comments/1ahmc2v/raspberry_pi_5_coral/
CoolAppz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahmc2v
false
null
t3_1ahmc2v
/r/LocalLLaMA/comments/1ahmc2v/raspberry_pi_5_coral/
false
false
self
1
null
[Jan | Codellama 70B Instruct] Does anyone have the correct contents for "prompt template"?
1
[removed]
2024-02-03T02:24:09
https://www.reddit.com/r/LocalLLaMA/comments/1ahl8ya/jan_codellama_70b_instruct_does_anyone_have_the/
Virtamancer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahl8ya
false
null
t3_1ahl8ya
/r/LocalLLaMA/comments/1ahl8ya/jan_codellama_70b_instruct_does_anyone_have_the/
false
false
self
1
null
Huggingface tokenizer in C
5
Asking the world in case anyone knows: has anyone ported the huggingface rust tokenizers to C? I know there’s llama.cpp but it only supports certain models. Hoping I can more directly rely on the rust library. Thanks all!
2024-02-03T02:14:00
https://www.reddit.com/r/LocalLLaMA/comments/1ahl1px/huggingface_tokenizer_in_c/
Infamous_Safety_9310
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahl1px
false
null
t3_1ahl1px
/r/LocalLLaMA/comments/1ahl1px/huggingface_tokenizer_in_c/
false
false
self
5
null
Are there any unrestricted models you can recommend?
1
For example, if you want to do something, a regular consultation with an AI may only give you general suggestions. If a model can give you processing methods and techniques that others can't think of, such as a special angle of thinking that ordinary people wouldn't come up with, I think that's where AI should play a role. Of course, sometimes you need to consider whether it is illegal.
2024-02-03T01:32:41
https://www.reddit.com/r/LocalLLaMA/comments/1ahk89f/are_there_any_unrestricted_models_you_can/
palmcentro2019
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ahk89f
false
null
t3_1ahk89f
/r/LocalLLaMA/comments/1ahk89f/are_there_any_unrestricted_models_you_can/
false
false
nsfw
1
null
Intro to Open Source AI [Talk] - OpenHermes for everything.
[removed]

Posted by kushalgoenka on 2024-02-03 · 1 point · https://www.reddit.com/r/LocalLLaMA/comments/1ahj7wl/intro_to_open_source_ai_talk_openhermes_for/
Z490 or x570 Motherboard recommendation for 3 way SLI
I've got 3 GPUs: an RTX 3090 and two RTX 3060 Tis. I plan on running some 70Bs, upgrading to all RTX 3090s in the future, and doing some fine-tuning. I've got an i7-10700K and a 5800X3D to spare. What should I be looking for in a motherboard, and which motherboard would you recommend?

Posted by Erdeem on 2024-02-03 · 5 points · https://www.reddit.com/r/LocalLLaMA/comments/1ahihlu/z490_or_x570_motherboard_recommendation_for_3_way/
why is gptq slower than the base model?
I use these:

https://huggingface.co/lmsys/vicuna-7b-v1.5
https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ

base: ~2.0 s for 30 tokens
GPTQ: ~5.0 s for 30 tokens

Posted by Big_Art5869 on 2024-02-02 · 2 points · https://www.reddit.com/r/LocalLLaMA/comments/1ahgo30/why_is_gptq_slower_than_the_base_model/
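A common cause is that GPTQ dequantization kernels can be slower than the plain fp16 path unless an optimized backend (e.g. ExLlama) is enabled. Whether that applies here is easiest to check by timing both models identically. A minimal harness sketch; the generation callable is a stand-in for your own model call:

```python
import time

def tokens_per_second(generate, n_tokens, runs=3):
    """Average tokens/second of a generation callable that emits
    exactly `n_tokens` tokens per call. Averaging over several runs
    smooths out warm-up and scheduler noise."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        generate()
        times.append(time.perf_counter() - start)
    return n_tokens / (sum(times) / len(times))

# Usage idea: tokens_per_second(lambda: model.generate(ids, max_new_tokens=30), 30)
```

Running the same harness over the base model and the GPTQ build makes the comparison apples-to-apples; 30 tokens in ~2 s vs ~5 s corresponds to ~15 vs ~6 tokens/s.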
Automatically take notes with local LLM Demo! Who wants to take over this project?
(video demo: https://v.redd.it/4agerdy6y8gc1)

Posted by Robert-treboR on 2024-02-02 · 103 points · /r/LocalLLaMA/comments/1ahfwoa/automatically_take_notes_with_local_llm_demo_who/
MoE LLaVA description...
(image: https://i.redd.it/hm70zs02y8gc1.png)

Posted by Wrong_User_Logged on 2024-02-02 · 1 point · /r/LocalLLaMA/comments/1ahfvs8/moe_llava_description/
Automatically take notes with local LLM & browser extension
Video: https://reddit.com/link/1ahfu2l/video/5g99rmxlx8gc1/player

A local LLM takes care of note-taking for you, automatically. My Chrome extension converts each webpage into plain text and indexes it in a locally running vector DB (Qdrant). In the end you can ask questions, with [@ollama](https://twitter.com/ollama), about everything you've seen.

I did not have time to open source it, since I started working at Tab. Who wants to take over?

My Twitter: https://twitter.com/Karmedge
LinkedIn: https://www.linkedin.com/in/karmedge/

Posted by Robert-treboR on 2024-02-02 · 1 point · https://www.reddit.com/r/LocalLLaMA/comments/1ahfu2l/automatically_take_notes_with_local_llm_browser/
Sweetspot RAM model for summarization rn?
Hi, I know these kinds of questions must be incredibly tedious for the group, but could someone advise me which model to use for quick text summarization? For context, I have a Threadripper server with 96 GB of RAM and plan on using llama.cpp in server mode. I need to send the model something like the following prompt:

> Please provide a short one sentence summary of the following paragraph.

> It was the worst of times, it was the blurst of times...

It is for a transfer learning experiment I am doing, so I will be calling this a lot. With that in mind, I'd like the choice of model to be weighted toward inference speed given my hardware, but obviously not to the point where it can't do the job. Much appreciate your advice.

Posted by inigid on 2024-02-02 · 1 point · https://www.reddit.com/r/LocalLLaMA/comments/1ahfn95/sweetspot_ram_model_for_summarization_rn/
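Since llama.cpp will run in server mode, each summarization is one HTTP request. A minimal sketch: the `/completion` endpoint and its `prompt`/`n_predict` fields match recent llama.cpp server builds (check your version), and the URL is the server's default local address:

```python
import json
import urllib.request

def build_summary_prompt(paragraph):
    """Wrap a paragraph in the one-sentence-summary instruction."""
    return ("Please provide a short one sentence summary of the "
            "following paragraph.\n\n" + paragraph.strip() + "\n\nSummary:")

def summarize(paragraph, url="http://localhost:8080/completion"):
    """Call a running llama.cpp server. Field names follow recent
    llama.cpp builds and may drift between versions."""
    payload = {"prompt": build_summary_prompt(paragraph),
               "n_predict": 64, "temperature": 0.0}
    req = urllib.request.Request(url, json.dumps(payload).encode(),
                                 {"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"].strip()
```

Temperature 0 keeps the output deterministic, which matters when the call runs thousands of times in an experiment.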
Has anyone successfully run the Mixtral model on an M1 Mac machine?
I use the Jan.ai software (https://jan.ai) to host this model on my M1 Mac mini with 16 GB of RAM, and when I run it, it immediately crashes. I'm wondering if this is expected, because when I googled how much RAM I need, I found the number 16 GB, which I already have.

Do you think it would be different if I installed it using Ollama?

Posted by Melisanjb on 2024-02-02 · 1 point · https://www.reddit.com/r/LocalLLaMA/comments/1ahenf5/has_anyone_successfully_run_the_mixtral_model_on/
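A rough size calculation suggests why it crashes: Mixtral 8x7B has about 46.7B total parameters, and all eight experts must be resident in memory even though only two are active per token. Even at roughly Q4 it needs well over 16 GB before counting the KV cache and the OS. The bits-per-weight figures below are approximate quantization averages, not exact:

```python
def model_size_gb(n_params_billion, bits_per_weight):
    """Weights-only memory footprint in GB; KV cache and runtime
    overhead come on top of this."""
    return n_params_billion * bits_per_weight / 8

MIXTRAL_PARAMS_B = 46.7  # total parameters; all 8 experts stay loaded
for label, bits in [("~Q4_K_M", 4.5), ("~Q5_K_M", 5.5), ("fp16", 16)]:
    print(f"{label}: {model_size_gb(MIXTRAL_PARAMS_B, bits):.1f} GB")
```

At ~4.5 bits per weight that is roughly 26 GB, so a 16 GB machine will fail regardless of whether Jan or Ollama launches it; only a much more aggressive quantization (Q2-level) gets near 16 GB, at a real quality cost.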
Groq is probably a scam
[Groq claims to have invented a chip that lets you achieve 240 t/s with LLaMa 2 70B](https://twitter.com/ArtificialAnlys/status/1752719288946053430), which sounds too good to be true. Turns out, it might be.

While I was playing around with Groq, I noticed that "Mixtral" refused to give advice or tell offensive jokes, which is something that normal Mixtral has been able to do in my experience. When I asked it about the different kinds of LLMs, it always said that they were hypothetical models and malicious. However, when I asked it about Mistral, it gave me a blurb that accurately said that Mistral AI is a startup that creates LLMs.

When I switched over to LLaMa 2 70B, it continued to say that other LLMs are hypothetical and malicious (with the exception of LLaMa itself, which it recognized as real). Again, it gave the exact same blurb about Mistral AI, which it shouldn't know anything about because LLaMa 2 predates Mistral.

Afterwards, I closed and reopened Groq to clear the chat history. Starting over, I asked LLaMa 2 70B some questions. Once again, it still associated Mistral with AI, even though it shouldn't have that information. I then switched over to Mixtral, asked it who made it, and got a blunt response that it's a LLaMa model created by Meta AI.

If I had to guess, Groq uses something like LLaMa 2 13B to get much better t/s, and then uses prompt engineering to make it seem like you're talking to a different LLM.

Posted by matteoraso on 2024-02-02 · 1 point · https://www.reddit.com/r/LocalLLaMA/comments/1ahdhgx/groq_is_probably_a_scam/
Building out a rig to play with. What am I missing?
Am I completely insane or just really dumb? Am I missing anything (other than storage)? I have a rack in my basement so I can mask most of the noise, and this would cost about $100/mo in electricity if I ran it full blast 24/7. My work offered to pay for the GPUs, so this would be nicely subsidized!

https://preview.redd.it/ttxilhz4e8gc1.png?width=977&format=png&auto=webp&s=00335967a44c9e357b5719547446e0c932926212

Posted by dinkumator on 2024-02-02 · 41 points · https://www.reddit.com/r/LocalLLaMA/comments/1ahdfs7/building_out_a_rig_to_play_with_what_am_i_missing/
vLLM 0.3.0 is ~4-5x faster than 0.2.7 under heavy load [for AWQ]
This is just a PSA to update your vLLM install to 0.3.0 if you are using it with AWQ models. The speedup is thanks to this PR: https://github.com/vllm-project/vllm/pull/2566

The speedup I see in production on a 34B AWQ model is as follows.

Setup:

* A100 80G PCIe, but restricted to 40G
* 5 concurrent requests at all times
* ~2500 input tokens per request
* 250 output tokens per request

Before (0.2.7), averages of 2562 input / 250 output tokens per request:

* Average: time to first token 6964.3 ms, time per token 90.43 ms, 11.06 tokens/s, max token delay 6137.1 ms
* 50th pct: time to first token 7403 ms, time per token 90.23 ms, 11.08 tokens/s, max token delay 6444 ms
* 90th pct: time to first token 9942 ms, time per token 106.26 ms, 9.41 tokens/s, max token delay 13080 ms
* 95th pct: time to first token 10926 ms, time per token 107.72 ms, 9.28 tokens/s, max token delay 13335 ms

After (0.3.0), averages of 2571 input / 250 output tokens per request:

* Average: time to first token 1478.6 ms, time per token 60.28 ms, 16.59 tokens/s, max token delay 1229.3 ms
* 50th pct: time to first token 1436 ms, time per token 60.71 ms, 16.47 tokens/s, max token delay 1219 ms
* 90th pct: time to first token 1712 ms, time per token 61.76 ms, 16.19 tokens/s, max token delay 1319 ms
* 95th pct: time to first token 1736 ms, time per token 62.27 ms, 16.06 tokens/s, max token delay 1335 ms

All metrics are per request (so e.g. overall tokens per second are 5x the numbers above).

The **throughput increases by 50%**, but look at those **time-to-first-token** and **max-token-delay** numbers (crucial metrics for user experience, more so than overall tokens per second): they are **almost 5x faster**. Not to mention the 90th and 95th percentiles.

Posted by DreamGenAI on 2024-02-02 · 81 points · https://www.reddit.com/r/LocalLLaMA/comments/1ahddk3/vllm_030_is_45x_faster_than_027_under_heavy_load/
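Sanity-checking the headline claims against the averages quoted in the post:

```python
def speedup(before, after):
    """How many times faster the 'after' measurement is."""
    return before / after

# Average per-request figures from the post
ttft = speedup(6964.3, 1478.55)   # time to first token, ms
tps = 16.590 / 11.059             # tokens per second (higher is better)
print(f"TTFT {ttft:.1f}x faster, throughput {tps:.2f}x")
```

So the time-to-first-token really does improve by roughly 4.7x while throughput improves by about 1.5x, consistent with the "almost 5x" and "50%" statements above.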
Which LLM is the best for fictional Idea/ concept generating?
[removed]

Posted by headbopper96 on 2024-02-02 · 1 point · https://www.reddit.com/r/LocalLLaMA/comments/1ahdbd7/which_llm_is_the_best_for_fictional_idea_concept/
Does anyone know how Prolixity and others reference documents?
[removed]

Posted by coolcloud on 2024-02-02 · 1 point · https://www.reddit.com/r/LocalLLaMA/comments/1ahczeb/does_anyone_know_how_prolixity_and_others/
GPU memory overclocking in linux?!
I'd like to raise the memory clock on my 3090s. Sadly, with no displays plugged in or running, nvidia-settings doesn't expose any clock controls, and nvidia-smi only lets you lock to stock clocks. There are some guides that start/stop the X server, but I use that for the onboard video and VNC. Is it possible to run a second X server and kill it after overclocking? In short, has anyone gotten this to work without using the GPUs as an output or taking up their memory?

Posted by a_beautiful_rhind on 2024-02-02 · 9 points · https://www.reddit.com/r/LocalLLaMA/comments/1ahcekt/gpu_memory_overclocking_in_linux/
How to set up chat templates on LM Studio?
How does one import a string of text like this into LM Studio? Do you just paste it into the pre-prompt?

"GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:"

Posted by RenoHadreas on 2024-02-02 · 4 points · https://www.reddit.com/r/LocalLLaMA/comments/1ahcdkd/how_to_set_up_chat_templates_on_lm_studio/
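The string in the question is the OpenChat "GPT4 Correct" conversation format: it is not pasted whole into the pre-prompt, but split into a per-turn prefix plus `<|end_of_turn|>` as the turn terminator / stop string (LM Studio's prompt-format settings expose prefix/suffix fields for this, though the field names vary by version). This sketch (function name mine) rebuilds exactly the example above from a message list:

```python
def openchat_prompt(messages):
    """Render a chat history in the OpenChat 'GPT4 Correct' format.

    `messages` is a list of (role, text) pairs with role 'user' or
    'assistant'. The trailing 'GPT4 Correct Assistant:' cues the
    model to produce the next reply.
    """
    out = []
    for role, text in messages:
        who = "GPT4 Correct User" if role == "user" else "GPT4 Correct Assistant"
        out.append(f"{who}: {text}<|end_of_turn|>")
    out.append("GPT4 Correct Assistant:")
    return "".join(out)
```

Whatever front end you use, the key is that `<|end_of_turn|>` must also be configured as a stop sequence, or the model will keep generating both sides of the conversation.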
Any Phi-2 model that does this?
[removed]

Posted by AromaticCantaloupe19 on 2024-02-02 · 1 point · https://www.reddit.com/r/LocalLLaMA/comments/1ahbzcv/any_phi2_model_that_does_this/
Training a model with multiple floating point precisions
I want to train a vision-language model using a connector (think of it as a linear layer, as in LLaVA). I only train the connector module and some LoRAs on both the language model and the vision encoder. The vision encoder is in fp32 and the language decoder is in fp16. As expected, my newly created connector module will be in fp32.

Could there be something wrong with this, since part of my model has weights in fp16? Should I convert everything to fp32 for training? Note that I never train the language model itself, only the connector. Keeping it in fp16 would greatly benefit me, since it uses less memory.

Posted by AromaticCantaloupe19 on 2024-02-02 · 1 point · https://www.reddit.com/r/LocalLLaMA/comments/1ahbrx5/training_a_model_with_multiple_floating_point/
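For what it's worth, keeping the trainable connector in fp32 while a frozen fp16 decoder only supplies activations is the standard mixed-precision setup; activations are simply cast at the module boundary. The reason the *trained* weights want fp32 is that fp16 silently drops small updates, which can be demonstrated with nothing but the standard library (the `struct` module's `'e'` format is IEEE half precision):

```python
import struct

def to_fp16(x):
    """Round-trip a Python float through IEEE half precision (fp16)."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

# fp16 has a 10-bit mantissa, so above 2048 consecutive integers
# are no longer representable and small updates vanish entirely.
w = to_fp16(2048.0)      # an fp16 "weight"
w = to_fp16(w + 1.0)     # apply a +1.0 update, stored back in fp16
print(w)                 # still 2048.0: the update was lost
```

This is why frozen fp16 weights are fine (they are never updated) while the connector you actually optimize is safer in fp32.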
How to evaluate an LLM?
Hi folks. I have seen a lot of "evaluation score: 75%" and similar figures on the Hugging Face leaderboard, but how do I perform this evaluation myself? Is there a single-click local app for this, or what?

Posted by yukiarimo on 2024-02-02 · 8 points · https://www.reddit.com/r/LocalLLaMA/comments/1ahbkba/how_to_evaluate_an_llm/
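There is no single-click app, but the leaderboard scores come from EleutherAI's lm-evaluation-harness, which you can run from the command line against a local model. Conceptually, an eval is just a scored loop over a benchmark dataset; a toy sketch, where the model and the data are made up for illustration:

```python
def accuracy(model, dataset):
    """Score a model on exact-match accuracy.

    `dataset` is a list of (prompt, expected_answer) pairs and `model`
    is any callable mapping a prompt to a string. Real harnesses add
    task-specific prompting, answer extraction, and many more samples,
    but the score is the same idea at scale.
    """
    correct = sum(model(q).strip() == a for q, a in dataset)
    return correct / len(dataset)

data = [("2+2=", "4"), ("Capital of France?", "Paris")]
print(accuracy(lambda q: "4" if "2+2" in q else "Lyon", data))  # 0.5
```

The published "75%" numbers are exactly this kind of ratio, computed over standardized benchmarks (MMLU, HellaSwag, etc.) so that different models are comparable.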
Any popular GUIs already support MoE-LLaVA?
[removed]

Posted by Ng2oXg on 2024-02-02 · 1 point · https://www.reddit.com/r/LocalLLaMA/comments/1ahbea9/any_popular_guis_already_support_moellava/
How should I utilize this Mac Pro?
I have access to the attached Mac Pro. I'm new to LLMs, so I'm not even sure how efficient this rig would be. If you were me, what would you do with it? Any guidance is appreciated!

(image: https://i.redd.it/6572qkngw7gc1.jpeg)

Posted by eltonjock on 2024-02-02 · 1 point · /r/LocalLLaMA/comments/1ahax4u/how_should_i_utilize_this_mac_pro/
CodeLlama 70B Local Deployment with JIT Compilation
CodeLlama 70B is now supported on [MLC LLM](https://github.com/mlc-ai/mlc-llm) — meaning local deployment everywhere! Recently, MLC LLM added support for just-in-time (JIT) compilation, making the deployment process a lot easier (even with multiple GPUs) -- see how the M2 Mac (left) and 2x RTX 4090 (right) use almost the same code.

HF: https://huggingface.co/mlc-ai

[M2 Mac (left); 2x RTX 4090 (right)](https://preview.redd.it/fjpfu6sdv7gc1.png?width=5072&format=png&auto=webp&s=984e4e747c847d9102c234083664c614473493d5)

Posted by SnooMachines3070 on 2024-02-02 · 22 points · https://www.reddit.com/r/LocalLLaMA/comments/1ahawob/codellama_70b_local_deployment_with_jit/
Any Phi-2 model that does this?
This post comes out of desperation, I hope you understand, but I really need a Phi-2 model that supports both beam search AND generates the EOS token. Are there any versions on HF that do this? I can't seem to find any...

Posted by AromaticCantaloupe19 on 2024-02-02 · 1 point · https://www.reddit.com/r/LocalLLaMA/comments/1ahapxx/any_phi2_model_that_does_this/
Clarification on Ollama and Performance of Various Models
Hi all,

Forgive me, I'm new to the scene, but I've been running a few different models locally through Ollama for the past month or so. With the recent announcement of CodeLlama 70B I decided to take a deeper dive into using local models. I've read the wiki and a few posts on this subreddit, and I came out with even more questions than I started with, lol.

Some of the comments I've read suggest that I would need just shy of a supercomputer to run CodeLlama 70B. While I have a pretty powerful computer (M1 Max, 64 GB), I certainly don't have a $20,000 rig. How is it that I can run the 70B version from Ollama without issues, then? Is this because the models from Ollama are quantized?

Can someone please elaborate on quantization and its effects on a model? I understand that it makes the model less accurate, but how does a quantized model stack up against one that isn't quantized? For example, will CodeLlama 7B unquantized perform more accurately than the 70B version that is quantized? What's the purpose of quantizing a model, other than resource constraints?

Ollama makes the process of running models very easy; should I be taking a more manual approach to running models if I want the best results I can get?

For reference, I've been using the models to help with coding. I had a ChatGPT subscription for a while, but once I found out I could run a model on my own machine I immediately switched over. I would like to get as close to GPT-4 running locally as I can.

Thanks.

Posted by No-Airport8006 on 2024-02-02 · 1 point · https://www.reddit.com/r/LocalLLaMA/comments/1ah9ue8/clarification_on_ollama_and_performance_of/
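On the "how does 70B fit" question, the arithmetic shows why quantization is the answer: Ollama pulls ~4-bit quantized builds by default, and that is the only way a 70B model fits on a 64 GB M1 Max. The bits-per-weight figures below are approximate GGUF averages (assumptions, not exact):

```python
def quantized_size_gb(n_params_billion, bits_per_weight):
    """Weights-only footprint in GB; KV cache and runtime overhead
    come on top of this."""
    return n_params_billion * bits_per_weight / 8

# Approximate average bits-per-weight for common GGUF quantizations
for label, bits in [("fp16", 16), ("Q8_0", 8.5), ("Q4_K_M", 4.85)]:
    print(f"70B @ {label}: ~{quantized_size_gb(70, bits):.0f} GB")
```

fp16 needs ~140 GB (hence the "supercomputer" comments), while a ~4-bit quantization is ~42 GB and fits in 64 GB of unified memory. Quality-wise, the usual rule of thumb is that a quantized large model (70B at Q4) still beats a small unquantized one (7B at fp16); 4-bit quantization costs some accuracy but far less than dropping 10x in parameter count.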
Two annoying things of LLama2
Suppose I use this prompt: `generate 30 questions about DNA`. The answer always starts with "Of course! Here are 30 questions about DNA:" and then it proceeds to give me just 10 questions. So:

1. Is there a way to get rid of this "Of course! Here are..." starting line?
2. Is there a way to make it give me what I asked for, x questions, not 10?

Posted by CoolAppz on 2024-02-02 · 2 points · https://www.reddit.com/r/LocalLLaMA/comments/1ah9t6q/two_annoying_things_of_llama2/
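A workaround that tends to be more reliable than prompt wording alone is to validate the reply and re-ask when it falls short. A sketch, where the prompt wording and the `generate` callable are placeholders for your own setup:

```python
import re

def count_questions(text):
    """Count numbered list entries ending in '?' in a model reply."""
    return len(re.findall(r"^\s*\d+[.)].*\?\s*$", text, flags=re.MULTILINE))

def ask_until_complete(generate, n, max_tries=3):
    """Re-prompt until the reply contains at least n questions.

    `generate` takes a prompt string and returns the model's raw
    reply. Telling the model to start directly with '1.' also
    suppresses the 'Of course! Here are...' preamble for many models.
    """
    prompt = (f"Output exactly {n} numbered questions about DNA. "
              "Start directly with '1.' and write nothing else.")
    reply = ""
    for _ in range(max_tries):
        reply = generate(prompt)
        if count_questions(reply) >= n:
            return reply
    return reply
```

If the preamble still appears, stripping everything before the first `1.` in post-processing is a simple final fallback.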
Tokens/s with Mixtral 8×7B with 16 GB VRAM offloading?
25
I'm considering getting a 16 GB GPU for personal use. How fast would I be able to run Mixtral 8×7B Q2 or Q3 with a decent context size (8K, maybe 16K) offloading into the full 16 GB with no UI? Has anybody tried that? (System RAM is not a problem but it's cheap and dirty DDR4-3200.)
2024-02-02T17:40:21
https://www.reddit.com/r/LocalLLaMA/comments/1ah9eg4/tokenss_with_mixtral_87b_with_16_gb_vram/
A_for_Anonymous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah9eg4
false
null
t3_1ah9eg4
/r/LocalLLaMA/comments/1ah9eg4/tokenss_with_mixtral_87b_with_16_gb_vram/
false
false
self
25
null
Is there a way to make llama2 not break lines inside paragraphs?
1
I use this prompt: `generate explanation about dna`

The result is something like:

    DNA (Deoxyribonucleic acid) is a long, complex molecule that contains
    the genetic instructions used in the development and function of all living
    organisms. Its structure includes two strands of nucleotides coiled together
    in a double helix structure, with each nucleotide composed of a sugar molecule
    called deoxyribose, a phosphate group, and one of four nitrogenous bases -
    adenine (A), guanine (G), cytosine (C), and thymine (T). The sequence of
    nitrogenous bases in DNA determines the genetic code, which is read by cells
    during protein synthesis.

You can notice several line breaks in the output. Is there a way to get rid of these line breaks and make the output like:

DNA (Deoxyribonucleic acid) is a long, complex molecule that contains the genetic instructions used in the development and function of all living organisms. Its structure includes two strands of nucleotides coiled together in a double helix structure, with each nucleotide composed of a sugar molecule called deoxyribose, a phosphate group, and one of four nitrogenous bases - adenine (A), guanine (G), cytosine (C), and thymine (T). The sequence of nitrogenous bases in DNA determines the genetic code, which is read by cells during protein synthesis.

I want paragraphs without line breaks - line breaks only between paragraphs. Thanks in advance.
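If prompting alone doesn't fix it, the reflowing can also be done outside the model. A minimal post-processing sketch (plain Python, independent of llama2) that collapses line breaks within paragraphs while preserving blank lines between them:

```python
import re

def unwrap_paragraphs(text):
    # Split on blank lines (the paragraph boundaries we want to keep)...
    paragraphs = re.split(r"\n\s*\n", text.strip())
    # ...then join the wrapped lines inside each paragraph with single spaces.
    return "\n\n".join(" ".join(p.split()) for p in paragraphs)

wrapped = "DNA is a long,\ncomplex molecule.\n\nIt has two\nstrands."
print(unwrap_paragraphs(wrapped))
# DNA is a long, complex molecule.
#
# It has two strands.
```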
2024-02-02T17:37:43
https://www.reddit.com/r/LocalLLaMA/comments/1ah9c89/is_there_a_way_to_make_llama2_do_not_break_lines/
CoolAppz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah9c89
false
null
t3_1ah9c89
/r/LocalLLaMA/comments/1ah9c89/is_there_a_way_to_make_llama2_do_not_break_lines/
false
false
self
1
null
Coalescence: making LLM inference 5x faster
42
Blog post: [https://blog.dottxt.co/coalescence.html](https://blog.dottxt.co/coalescence.html)

You may already know [Outlines](https://github.com/outlines-dev/outlines), which lets you generate valid JSON with any open-source large language model. Structured generation in Outlines is as fast as standard generation. In this post we show how we can exploit the properties of structured generation to make it several times faster than standard generation. This also highlights some of the issues with tokenization and related open questions.
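The core trick from the post can be sketched in a few lines: while the grammar's state machine permits exactly one continuation, those tokens can be emitted without a forward pass, and the model is only consulted where a real choice exists. The FSM below is invented for illustration, not Outlines' internals:

```python
# Toy sketch of "coalescence" in structured generation: whenever the
# grammar FSM allows exactly one next token, append it for free
# instead of paying for a model forward pass. States and transitions
# here are made up for illustration.

def model_pick(allowed):
    # Stand-in for an LLM call: just pick the first allowed token.
    return allowed[0]

# FSM for the fixed JSON skeleton {"name": "<value>"}
transitions = {
    "start": ['{"name": "'],    # single forced continuation
    "value": ["Alice", "Bob"],  # the model actually has a choice here
    "close": ['"}'],            # forced again
}
order = ["start", "value", "close"]

out, model_calls = [], 0
for state in order:
    allowed = transitions[state]
    if len(allowed) == 1:
        out.append(allowed[0])  # coalesced: zero model calls
    else:
        out.append(model_pick(allowed))
        model_calls += 1

print("".join(out), "| model calls:", model_calls)
```

Only one of the three structural steps actually costs a model call, which is where the claimed speedup over token-by-token generation comes from.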
2024-02-02T16:58:08
https://www.reddit.com/r/LocalLLaMA/comments/1ah8e9v/coalescence_making_llm_inference_5x_faster/
GoBayesGo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah8e9v
false
null
t3_1ah8e9v
/r/LocalLLaMA/comments/1ah8e9v/coalescence_making_llm_inference_5x_faster/
false
false
self
42
null
Mixtral/NousResearch performance on Together
1
We are really impressed by Together's throughput. This video is our [WhatsApp bot](https://www.openindex.ai/whatsapp) using a Mixtral/NousResearch DPO model hosted by Together.

[WhatsApp bot video powered by Mixtral/NousResearch DPO on Together.ai](https://reddit.com/link/1ah8e3n/video/t1kvw2tyc7gc1/player)
2024-02-02T16:57:56
https://www.reddit.com/r/LocalLLaMA/comments/1ah8e3n/mixtralnousresearch_performance_on_together/
Electrical-Profile79
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah8e3n
false
null
t3_1ah8e3n
/r/LocalLLaMA/comments/1ah8e3n/mixtralnousresearch_performance_on_together/
false
false
https://b.thumbs.redditm…2ID24JRQxZhI.jpg
1
{'enabled': False, 'images': [{'id': 'b5fsjWAAqZPGpVE4nfAOA2kAc2cinXtZPocL6UvZUHE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/oih_fmDrnKFhOEVhK_irMPozoGc6wcxwwPG5ccMxsIQ.jpg?width=108&crop=smart&auto=webp&s=0ca06bab095c2fca148d9f4d7ec3a0cca2c8aacd', 'width': 108}], 'source': {'height': 180, 'url': 'https://external-preview.redd.it/oih_fmDrnKFhOEVhK_irMPozoGc6wcxwwPG5ccMxsIQ.jpg?auto=webp&s=0ba17038f11fb633e89d5602afa85dc470b30e5a', 'width': 180}, 'variants': {}}]}
Building Local LLM for document Q&A via LangChain, Sentence Transformers, and FAISS
1
I would like to implement something like this - [https://github.com/wombyz/gpt4all_langchain_chatbots/blob/main/custom_knowledge_chatbot.py](https://github.com/wombyz/gpt4all_langchain_chatbots/blob/main/custom_knowledge_chatbot.py)

But since GGML models are slower and less efficient, I have decided to go with a GGUF model ([https://huggingface.co/TheBloke/airoboros-l2-7B-gpt4-2.0-GGUF](https://huggingface.co/TheBloke/airoboros-l2-7B-gpt4-2.0-GGUF)), and instead of LlamaCppEmbeddings ([https://huggingface.co/Pi3141/alpaca-native-7B-ggml/commit/397e872bf4c83f4c642317a5bf65ce84a105786e](https://huggingface.co/Pi3141/alpaca-native-7B-ggml/commit/397e872bf4c83f4c642317a5bf65ce84a105786e)), I am using SentenceTransformer("all-mpnet-base-v2").

This is my code -

```python
from pygpt4all.models.gpt4all import GPT4All
from pprint import pprint
#import streamlit as st
from langchain import PromptTemplate, LLMChain
from langchain.document_loaders import TextLoader
from langchain.embeddings import LlamaCppEmbeddings
from langchain.llms import GPT4All
from langchain.text_splitter import RecursiveCharacterTextSplitter
#from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.vectorstores.faiss import FAISS
from llama_index import download_loader
from langchain.document_loaders import UnstructuredURLLoader
from langchain.docstore.document import Document
from unstructured.cleaners.core import remove_punctuation, clean, clean_extra_whitespace
import PyPDF2
import re
from langchain.chains import ConversationalRetrievalChain
from pathlib import Path
from typing import List, Tuple
import requests
from bs4 import BeautifulSoup
from sentence_transformers import SentenceTransformer  # used below but was missing

gpt4all_path = './models/airoboros-l2-7B-gpt4-2.0.Q4_K_M.gguf'
llm = GPT4All(model=gpt4all_path, max_tokens=2048, verbose=True, temp=0.1)

embeddings_model = SentenceTransformer("all-mpnet-base-v2")  # referenced below but was never defined

def load_documents() -> List[str]:
    loader = TextLoader('./docs/cleaned_q_and_a.txt')
    documents = loader.load()
    texts = [doc.page_content for doc in documents]
    return texts

def load_meta_data_documents() -> List:
    loader = TextLoader('./docs/cleaned_q_and_a.txt')
    return loader.load()

def create_index(texts: List[str], embeddings_model: SentenceTransformer, chunks: List) -> FAISS:
    # Generate embeddings for the texts
    embeddings = embeddings_model.encode(texts, show_progress_bar=True)
    metadatas = [doc.metadata for doc in chunks]
    # Prepare text_embeddings as a list of tuples for FAISS.from_embeddings
    text_embeddings = [(text, embedding) for text, embedding in zip(texts, embeddings)]
    # Call FAISS.from_embeddings
    search_index = FAISS.from_embeddings(text_embeddings=text_embeddings, embedding=embeddings_model, metadatas=metadatas)
    return search_index

docs = load_documents()
metadata = load_meta_data_documents()
vector_store = create_index(docs, embeddings_model, metadata)

# Save Index (use this to save the index for later use)
# Comment the line below after running once successfully (IMPORTANT)
vector_store.save_local("q_and_a_index")

index = FAISS.load_local("./q_and_a_index/", embeddings_model)
qa = ConversationalRetrievalChain.from_llm(llm, index.as_retriever(), max_tokens_limit=500)

chat_history = []
print("Custom Knowledge ChatBot")
while True:
    query = input("Please enter your question: ")

    if query.lower() == 'exit':
        break

    processed_query = embeddings_model.encode([query])
    result = qa({"question": processed_query, "chat_history": chat_history})

    pattern = r'Helpful Answer:.*'
    match = re.search(pattern, result['answer'], re.DOTALL)

    if match:
        # Only display the matched part which is the relevant answer
        print(match.group())
    else:
        print("Answer:", result['answer'])
```

This is the [Error](https://pastebin.com/GqKQt99a) I am getting.
2024-02-02T16:43:06
https://www.reddit.com/r/LocalLLaMA/comments/1ah81fj/building_local_llm_for_document_qa_via_langchain/
Zuck7980
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah81fj
false
null
t3_1ah81fj
/r/LocalLLaMA/comments/1ah81fj/building_local_llm_for_document_qa_via_langchain/
false
false
self
1
null
Continuously expanding dataset
1
Hi! I’m a bit stumped because I haven’t been able to figure this out. Is it possible to, e.g., continuously pipe my Telegram chat histories into the dataset of a local LLM, without worrying about context length, when asking it questions about my chat? I have hundreds of thousands of messages in there, all related to my business (we use Telegram, not Slack). Is this feasible at all?
2024-02-02T16:08:23
https://www.reddit.com/r/LocalLLaMA/comments/1ah77q0/continuously_expanding_dataset/
Quick_Ordinary_7899
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah77q0
false
null
t3_1ah77q0
/r/LocalLLaMA/comments/1ah77q0/continuously_expanding_dataset/
false
false
self
1
null
Llama cpp streaming on AWS Lambda
1
This might be a long shot but I am hoping someone can help me here. I have an LLM running on AWS Lambda that currently supports a REST API, but I wanted to enable streaming. I have set streaming to true and added a loop in my Lambda function, but I am unable to stream the response currently.

```python
response_stream = MODEL_SESSION(
    user_message,
    max_tokens=max_tokens,
    temperature=temperature,
    top_k=top_k,
    stop=["Q:", "\n"],
    echo=False,
    stream=True,
)

for response_chunk in response_stream:
    logger.debug(f"Raw model response chunk: {response_chunk}")
    api_client.post_to_connection(
        ConnectionId=connection_id,
        Data=json.dumps({"message": response_chunk}).encode('utf-8')
    )

logger.debug(log_event.format("response_stream_complete"))
return {"statusCode": 200, "body": "Streaming complete"}
```

Here is the implementation.
2024-02-02T16:07:02
https://www.reddit.com/r/LocalLLaMA/comments/1ah76m7/llama_cpp_streaming_on_aws_lambda/
Allergic2Humans
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah76m7
false
null
t3_1ah76m7
/r/LocalLLaMA/comments/1ah76m7/llama_cpp_streaming_on_aws_lambda/
false
false
self
1
null
Weird GPT-4 classic responses.... (Speculative sampling??!!)
1
I've noticed that GPT-4 Classic seems to be employing a unique writing process, where it writes a segment of text, then retraces its steps, writes again, and so on. This process is reminiscent of the speculative sampling technique. In this approach, a simpler, less capable model generates initial guesses for the next tokens, which are then evaluated and refined by a more advanced, sophisticated model. If this is indeed the case, it suggests that OpenAI has found a way to operate GPT-4 more cost-effectively, potentially using less powerful GPUs. However, this raises concerns about the model being "lobotomized," a term implying a reduction in its overall capabilities or intelligence.
2024-02-02T15:50:12
https://www.reddit.com/r/LocalLLaMA/comments/1ah6sl4/weird_gpt4_classic_responses_speculative_sampling/
Wrong_User_Logged
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah6sl4
false
null
t3_1ah6sl4
/r/LocalLLaMA/comments/1ah6sl4/weird_gpt4_classic_responses_speculative_sampling/
false
false
self
1
null
New leaderboards on HF! Enterprise use cases, and logic-based reasoning
144
We've got 2 new featured leaderboards on the hub. 1) Patronus published an Enterprise Scenarios Leaderboard (accepts submissions) It evaluates LLMs on several real-world use cases (finance documents, legal confidentiality, customer support, ...), which makes it grounded and interesting for companies! What is likely to interest the LocalLLaMA community, imo, is that: - the test set is private, so it's hard to game 🔥 - you can evaluate the "Engagingness" of an LLM (linked to interaction quality), which could also be interesting for our LLM fine-tuning community out there. Intro blog: https://huggingface.co/blog/leaderboards-on-the-hub-patronus 2) NPHardEval is a new leaderboard for reasoning abilities (no submissions atm) It uses questions of logic, of different mathematical complexities, as a proxy for reasoning abilities. It notably removes questions relying on arithmetic, to really focus on logical abilities. What's super cool is that since the questions can be generated automatically, it's going to be dynamic, updated monthly! It also means that the community here can automatically generate new questions using their framework to evaluate upcoming LLMs if they want! Intro blog: https://huggingface.co/blog/leaderboards-on-the-hub-nphardeval
2024-02-02T15:48:24
https://www.reddit.com/r/LocalLLaMA/comments/1ah6r51/new_leaderboards_on_hf_enterprise_use_cases_and/
clefourrier
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah6r51
false
null
t3_1ah6r51
/r/LocalLLaMA/comments/1ah6r51/new_leaderboards_on_hf_enterprise_use_cases_and/
false
false
self
144
{'enabled': False, 'images': [{'id': 'jxdwORMhp1QT8I_XM1_qo9xANNbevWuSWfHu15Si83k', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/FyANDDva0Iexeqcxtel1-JZe7Ly6icqGWY-20sVNr_0.jpg?width=108&crop=smart&auto=webp&s=504ff6491ef228cda7d4a461cf9be95b57887811', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/FyANDDva0Iexeqcxtel1-JZe7Ly6icqGWY-20sVNr_0.jpg?width=216&crop=smart&auto=webp&s=f0139f920d914f2ad449f00fa66963215af6ffb5', 'width': 216}, {'height': 165, 'url': 'https://external-preview.redd.it/FyANDDva0Iexeqcxtel1-JZe7Ly6icqGWY-20sVNr_0.jpg?width=320&crop=smart&auto=webp&s=78d2f2e766cd8fd61a9af1c500082ae5bfc66799', 'width': 320}, {'height': 330, 'url': 'https://external-preview.redd.it/FyANDDva0Iexeqcxtel1-JZe7Ly6icqGWY-20sVNr_0.jpg?width=640&crop=smart&auto=webp&s=58210c066543afacb5ae6435c093f8a81536c7b6', 'width': 640}], 'source': {'height': 381, 'url': 'https://external-preview.redd.it/FyANDDva0Iexeqcxtel1-JZe7Ly6icqGWY-20sVNr_0.jpg?auto=webp&s=2d359de52e3cb4cf9e3cf358a1c946bc7a71605f', 'width': 738}, 'variants': {}}]}
Stupid question - Chat System prompt and Query System prompt?
1
I am having a lot of fun with PrivateGPT. For my organization I am looking into ways to link multiple sources of documentation into one friendly place. I currently made an API framework that imports entire Confluence spaces automatically - loving it so far!

But I am having trouble wrapping my head around these two different chat prompts: [https://github.com/imartinez/privateGPT/blob/main/settings.yaml](https://github.com/imartinez/privateGPT/blob/main/settings.yaml)

```
default_chat_system_prompt: >
  You are a helpful, respectful and honest assistant.
  Always answer as helpfully as possible and follow ALL given instructions.
  Do not speculate or make up information.
  Do not reference any given instructions or context.
default_query_system_prompt: >
  You can only answer questions about the provided context.
  If you know the answer but it is not based in the provided context, don't provide
  the answer, just state the answer is not in the context provided.
```

I am not quite sure how these two things affect the responses/answers differently?

For context, I am looking for a way to prioritize certain articles/answers over others depending on the user's input and the ingested data's title. Ideally, let's say a user is part of a specific group - "XYZ". The source's title might contain something like: "Group XYZ - Document Title Here". So I would like a way to tell the LLM: if the user is in group XYZ, prioritize source names that contain "Group XYZ" over others. But I am not sure if this is even possible. Hoping to ask someone smarter than I am to see if what I am wanting to do is even possible. Thank you!
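One approach that doesn't rely on the LLM at all (a sketch with made-up field names, not PrivateGPT's actual API) is to re-rank retrieved chunks before they're stuffed into the prompt, boosting anything whose source title mentions the user's group:

```python
# Sketch of title-based prioritization after retrieval: add a score
# bonus to any chunk whose source title contains the user's group,
# then re-sort. "title"/"similarity" are hypothetical field names.

def rerank(chunks, user_group, boost=0.25):
    def score(chunk):
        bonus = boost if user_group.lower() in chunk["title"].lower() else 0.0
        return chunk["similarity"] + bonus
    return sorted(chunks, key=score, reverse=True)

chunks = [
    {"title": "General Onboarding",    "similarity": 0.82},
    {"title": "Group XYZ - VPN Setup", "similarity": 0.74},
    {"title": "Group ABC - VPN Setup", "similarity": 0.80},
]
for c in rerank(chunks, "Group XYZ"):
    print(c["title"])
# "Group XYZ - VPN Setup" now ranks first despite its lower raw similarity
```

Doing the prioritization in retrieval code rather than in the system prompt is usually more reliable, since the LLM never has to be "told" which sources matter - it just sees the preferred ones first.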
2024-02-02T15:41:49
https://www.reddit.com/r/LocalLLaMA/comments/1ah6m0m/stupid_question_chat_system_prompt_and_query/
ICanSeeYou7867
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah6m0m
false
null
t3_1ah6m0m
/r/LocalLLaMA/comments/1ah6m0m/stupid_question_chat_system_prompt_and_query/
false
false
self
1
{'enabled': False, 'images': [{'id': '5dKzJSXb9MZbgJCTPclq1qP1dcUdxLa8MgBq_-mP3pQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/x_drxCACq39BuDJkJWZOwEX0FF--jFc_PBdlhtPV43U.jpg?width=108&crop=smart&auto=webp&s=8a897140bca743fd4dd8c1be23b01f2d41285466', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/x_drxCACq39BuDJkJWZOwEX0FF--jFc_PBdlhtPV43U.jpg?width=216&crop=smart&auto=webp&s=5809ff95a1785e2a87a611a3c3a30f7cbbe0fe3c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/x_drxCACq39BuDJkJWZOwEX0FF--jFc_PBdlhtPV43U.jpg?width=320&crop=smart&auto=webp&s=98fa1dbd3d87038d82bce3514f3059c647f74e20', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/x_drxCACq39BuDJkJWZOwEX0FF--jFc_PBdlhtPV43U.jpg?width=640&crop=smart&auto=webp&s=d7f8fcc9456179aeefbba1e16f3ab82708d02935', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/x_drxCACq39BuDJkJWZOwEX0FF--jFc_PBdlhtPV43U.jpg?width=960&crop=smart&auto=webp&s=402ce9887353fcb1707f1baac4b89cb646873fde', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/x_drxCACq39BuDJkJWZOwEX0FF--jFc_PBdlhtPV43U.jpg?width=1080&crop=smart&auto=webp&s=d2a5cad5a80da3071281eedb0846a001971fc058', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/x_drxCACq39BuDJkJWZOwEX0FF--jFc_PBdlhtPV43U.jpg?auto=webp&s=b49b218329bc99b6802fe8f7cfe4f979dcbde069', 'width': 1200}, 'variants': {}}]}
A completely open-source AI Wearable device like Avi’s Tab, Rewind’s pendant, and Humane’s Pin
37
A completely open-source AI Wearable device like Avi’s Tab, Rewind’s pendant, and Humane’s Pin! Not only is it open-source, where you can own your data and switch between foundation models, but you can actually set it up today, not in a few months (oh, and it's cheaper!) The setup is also quite easy: 1. a simple supabase instance for vector db, authentication, and compute functions connected either to OpenAI or your own Ollama server running somewhere 2. A nextJS app (with native support with CapacitorJS) 3. The hardware device Check out the full launch here: [https://twitter.com/adamcohenhillel/status/1753435031102476701](https://twitter.com/adamcohenhillel/status/1753435031102476701) and the GitHub is: [https://github.com/adamcohenhillel/ADeus](https://github.com/adamcohenhillel/ADeus)
2024-02-02T15:11:58
https://www.reddit.com/r/LocalLLaMA/comments/1ah5y7n/a_completely_opensource_ai_wearable_device_like/
No-Camel-3819
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah5y7n
false
null
t3_1ah5y7n
/r/LocalLLaMA/comments/1ah5y7n/a_completely_opensource_ai_wearable_device_like/
false
false
self
37
{'enabled': False, 'images': [{'id': 'zMPbK1CwTHZ8kwbWlvHf9eWSbV1G8cypCJ7iCjmy5G0', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/dYgn58DgcMSjg9LE1fmIzHdES2TjxZrZTFCBXopU5cw.jpg?width=108&crop=smart&auto=webp&s=52a4cf4a8e1b321de277ab8a17a55a74b1ef2512', 'width': 108}], 'source': {'height': 93, 'url': 'https://external-preview.redd.it/dYgn58DgcMSjg9LE1fmIzHdES2TjxZrZTFCBXopU5cw.jpg?auto=webp&s=d3fe06a9b87ba56594410620720423884cd17839', 'width': 140}, 'variants': {}}]}
M2 Macbook Air with 24GB good enough for a useful local knowledge base/chat?
1
I'm considering getting a MacBook Air M2 24GB to run Llama 2 locally and use it with Obsidian. What's important to me is that it can take my entire knowledge base into consideration when asking questions, and I need it to summarize stuff - so not just single pages. General/public knowledge is less important to me, but it needs to not confuse things when working with large contexts. I would continue to use ChatGPT-4 for general knowledge. Would the Air hold up? Would inference slow down as the Air heats up? What model would you recommend for this? Hugging Face has so many different models it's hard to choose, lol
2024-02-02T15:11:09
https://www.reddit.com/r/LocalLLaMA/comments/1ah5xj9/m2_macbook_air_with_24gb_good_enough_for_a_useful/
MattBurnes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah5xj9
false
null
t3_1ah5xj9
/r/LocalLLaMA/comments/1ah5xj9/m2_macbook_air_with_24gb_good_enough_for_a_useful/
false
false
self
1
null
Merging Chains and Sequential Chains
1
I need to study about Merging Chains and Sequential Chains in LLM. Can you recommend any resources? Websites, books, videos...
2024-02-02T15:02:00
https://www.reddit.com/r/LocalLLaMA/comments/1ah5pzn/merging_chains_and_sequential_chains/
NormalStatistical
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah5pzn
false
null
t3_1ah5pzn
/r/LocalLLaMA/comments/1ah5pzn/merging_chains_and_sequential_chains/
false
false
self
1
null
RAG limitations
1
[removed]
2024-02-02T14:56:28
https://www.reddit.com/r/LocalLLaMA/comments/1ah5ld6/rag_limitations/
Gullible-Being-8595
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah5ld6
false
null
t3_1ah5ld6
/r/LocalLLaMA/comments/1ah5ld6/rag_limitations/
false
false
self
1
null
ChatGPT like UI for any project within 15 mins
1
[removed]
2024-02-02T14:36:13
https://www.reddit.com/r/LocalLLaMA/comments/1ah54pe/chatgpt_like_ui_for_any_project_within_15_mins/
eschxr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah54pe
false
null
t3_1ah54pe
/r/LocalLLaMA/comments/1ah54pe/chatgpt_like_ui_for_any_project_within_15_mins/
false
false
self
1
{'enabled': False, 'images': [{'id': 'sRoPZ4zC7-VzVgSWdCvD1G9JoK0wGViKgk_2CUg-4Ds', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ZADCARrKBM3MMmObizQbnRWwZTNYctsIRw8jnqbLYEc.jpg?width=108&crop=smart&auto=webp&s=068004b115ff00e56b276726a7a2b4118969ac77', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ZADCARrKBM3MMmObizQbnRWwZTNYctsIRw8jnqbLYEc.jpg?width=216&crop=smart&auto=webp&s=2caf989cc6d4a6d1a988662055aad36f0e4b5470', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ZADCARrKBM3MMmObizQbnRWwZTNYctsIRw8jnqbLYEc.jpg?width=320&crop=smart&auto=webp&s=2dd1085712b4b41761f88736732bc9dba3f46c69', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ZADCARrKBM3MMmObizQbnRWwZTNYctsIRw8jnqbLYEc.jpg?auto=webp&s=c3d83513e8ed93694e53512a38b7ac42e147b708', 'width': 480}, 'variants': {}}]}
What would a perfect LLM look like, in principle?
6
This might be more of a philosophical discussion, not sure. I'll start with definitions.

Concept space: a high-dimensional vector space in which every concept imaginable (or otherwise?) can be represented by a specific vector.

Word/token space: the vector space of single tokens - the space we end up with after word2vec (or rather its low-dimensional approximation).

Given that sentences, or rather spans of tokens, can hold meanings, there must (?) exist a function from token space + order of words + possible modifiers (i.e. temporal) into concept space, at least in principle. Otherwise languages wouldn't be able to create complex concepts from simple concepts (words).

Now here's the crux: I assume such a concept space is very big, much too big to work with, similar to how it is impossible to store the wave function of a particle in practice. So we need a lower-dimensional representation (perhaps SMV-esque in nature; the matrix product states used to represent qubits in quantum computing come to mind).

So say we have both the dimensionally reduced concept space and word space. The LLM receives a span of tokens S_0 as input. It translates this span into its concept space vector; let's call this vector C_0. Now it looks through all possible follow-up spans (S_0 + 1 token) to find the span whose concept space vector is closest to the original vector C_0 (just using normalized scalar products should work?). I.e., the LLM produces the token which stays closest to the original meaning. Then it repeats the same operation for the next token, using the last generated span's concept vector as reference.

Is this what an "optimal" LLM would look like in purpose?

An interesting problem-producing prompt:

- How far away is the moon?

What would be the generated token after that? Would it be more likely to answer the question, or would it try to find a way to keep the meaning of the question while forcing the addition of a single token (perhaps it just prints a question mark?)? Instead:

- The distance of the moon to earth is

Now I assume it would generate the correct response of 384,400 km, as that number likely points in the direction of the question.

What are some problems with this idea of a "perfect" LLM? Is there reason to assume that current LLMs do something fundamentally different? (Not just in the actual way they're doing it, but in the results produced - unless of course there is a barrier in the process that I've missed.) What would an optimal version of current or future LLMs do, if not this? (Optimal meaning you have all the computing power, time, and data you want to train.)
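The "closest concept vector" step, using normalized scalar products as suggested, would look something like this in miniature (the vectors are made up):

```python
# Toy version of the "pick the continuation whose concept vector is
# closest to the prompt's" idea, using normalized dot products
# (cosine similarity). All vectors are invented for illustration.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

prompt_vec = [0.9, 0.1, 0.3]              # C_0 for the input span
candidates = {
    "384,400 km": [0.85, 0.15, 0.25],     # stays close in meaning
    "cheese":     [0.05, 0.9, 0.1],       # drifts away
}
best = max(candidates, key=lambda t: cosine(prompt_vec, candidates[t]))
print(best)  # 384,400 km
```

The open question the post raises is whether real LLMs, trained end-to-end on next-token prediction, implicitly do something like this nearest-neighbor search in a learned representation space, or something fundamentally different.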
2024-02-02T14:27:03
https://www.reddit.com/r/LocalLLaMA/comments/1ah4wwz/what_would_a_perfect_llm_look_like_in_principle/
Responsible-Dig7538
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah4wwz
false
null
t3_1ah4wwz
/r/LocalLLaMA/comments/1ah4wwz/what_would_a_perfect_llm_look_like_in_principle/
false
false
self
6
null
Large Language Models for English to Portuguese Translation
1
Hello everyone, I would like recommendations for Large Language Models for translating texts from English to Portuguese and vice versa. Thank you in advance for any suggestions.
2024-02-02T14:04:04
https://www.reddit.com/r/LocalLLaMA/comments/1ah4f00/large_language_models_for_english_to_portuguese/
NormalStatistical
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah4f00
false
null
t3_1ah4f00
/r/LocalLLaMA/comments/1ah4f00/large_language_models_for_english_to_portuguese/
false
false
self
1
null
fast model for roleplay or how to speed up the delivery of results
1
Hello subreddit. Can anyone suggest small models (3B) that are suitable for simple dialogues/roleplay? Also, can someone explain how to use quantization, and whether it will speed up response time? I need an answer in about 1 second, at a size of about 30 tokens.
2024-02-02T13:54:52
https://www.reddit.com/r/LocalLLaMA/comments/1ah47fq/fast_model_for_roleplay_or_how_to_speed_up_the/
Big_Art5869
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah47fq
false
null
t3_1ah47fq
/r/LocalLLaMA/comments/1ah47fq/fast_model_for_roleplay_or_how_to_speed_up_the/
false
false
nsfw
1
null
Best newsletters / websites for LLM news?
1
Hi all - I’m new to the world of Gen AI (only really deeply following for the past year) and finding that I’m drowning in the information overload. So! Much! Content! I follow the usual suspects like this subreddit and hacker news and google scholar for a few scholars relevant to my field but reading all the substacks etc is a full time job - time I don’t have! How does everyone else stay on top of the latest [research papers/open source contributions/startup launches/legal changes]? What are your best sources? Is There One Aggregator To Rule Them All? “Best” - open to your interpretation! Editorial quality, timeliness, whatever. PS I’ve seen these posts from 6 mo ago - are these basically it? https://www.reddit.com/r/ChatGPT/s/926GOGQePq https://www.reddit.com/r/artificial/s/ybzAxoG7bW Thanks! Peter
2024-02-02T13:41:14
https://www.reddit.com/r/LocalLLaMA/comments/1ah3xge/best_newsletters_websites_for_llm_news/
roablep
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah3xge
false
null
t3_1ah3xge
/r/LocalLLaMA/comments/1ah3xge/best_newsletters_websites_for_llm_news/
false
false
self
1
null
Synthetic nonsense data improves llama.cpp Quantization accuracy
61
So I had a suspicion from the beginning that using wikitext was suboptimal for quantization using llama.cpp's "importance matrix" measurements. It appears I have proven myself correct.

KL divergence is a metric to compare output probability distributions against their original, to quantify how much change there is. The ability to measure this for a large sequence of text was recently added to llama.cpp.

Here's a 7B model (Fett-uccine 7B) quantized with about ~40,000 tokens worth of wikitext to q2_K:

```
===== KL-divergence statistics
Average: 0.279426 ± 0.005417
Median : 0.034247
Maximum: 14.234488
KLD_99 : 3.360007
KLD_95 : 1.289230
KLD_90 : 0.739574
```

The important stats here are KLD_95 and KLD_99, because what we are worried about with quantization are outliers that are hard to predict (as well as the average KL divergence, where lower is obviously better).

Here is that same model quantized with about ~25,000 tokens worth of data that looks like this:

[Sample of the synthetic data](https://preview.redd.it/pms2oz8486gc1.png?width=1281&format=png&auto=webp&s=65f95d104c9b49d87a26c1dd324ea5bb4b21024b)

```
===== KL-divergence statistics
Average: 0.266808 ± 0.005099
Median : 0.034154
Maximum: 14.252633
KLD_99 : 3.044612
KLD_95 : 1.215638
KLD_90 : 0.717481
```

As you can note, the error for the bottom 1% of least predictable tokens decreased by a non-insignificant amount, as did the error for the bottom 5%. It also decreased the average divergence from 0.28 to about 0.267.

I also tried pretraining-style data instead of synthetic, high-temperature data. It was **still** worse compared to the high-entropy, "pseudo-random" data I generated:

```
===== KL-divergence statistics
Average: 0.269359 ± 0.005107
Median : 0.034721
Maximum: 15.810398
KLD_99 : 3.143934
KLD_95 : 1.247610
KLD_90 : 0.707969
```

If you use *purely* random data, however, it is actually *worse* than wikitext, but not by a MASSIVE margin (it's still better than no importance matrix being used at all).

[This is compared to 1.29 KLD_95 for the wikitext.](https://preview.redd.it/fni3e6zp96gc1.png?width=230&format=png&auto=webp&s=7c39871c78118f44ab03a559f2942ced2e66d281)

# Explanation

The reason I am using KL divergence is that it allows us to directly compare the output probabilities for each token, instead of perplexity.

## Why not perplexity?

Perplexity measurements are quite misunderstood. They measure **the average predictability** of the text content. They are not compared against a baseline, and ppl only shows you how well the model can predict a larger sequence on average, which fails to account for outliers (which are usually introduced by quantization, for obvious reasons). While that can be useful, what I am doing here is different: we are comparing the *original* model's output probabilities to the quantized one's, using KL divergence, where a larger difference in the distributions results in a larger recorded divergence.

## What are KLD_99 and KLD_95?

These represent percentiles. KLD_99 is essentially a value showing the average KL divergence of the top 1% of least predictable tokens, while KLD_95 is the avg. divergence for the top 5% least predictable tokens.

I evaluated the KL divergence for about ~30,000 tokens in total in this test. Some of the data includes song lyrics, code, a tutorial I wrote, written conversations, a wikipedia article or two, etc. I think it's a good enough sample set for those reasons, as it is reasonably diverse.

## Can I get this data for quantization?

I'm still trying to engineer a dataset that's even better than this (because I want to see q2_K quants not be a meme), and I'm trying different sampling strategies for more optimal "random" data; however, here's [40k tokens](https://github.com/ggerganov/llama.cpp/files/14140634/group_40.txt) worth of it for those of you who want to try it.
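For anyone wanting to reproduce statistics like these, here's a miniature of how per-token KL divergence and the KLD_99/KLD_95 percentiles are computed (toy distributions, not real model logits):

```python
# How statistics like KLD_99/KLD_95 are computed, in miniature:
# per-token KL divergence between the original model's distribution
# p and the quantized model's q, then percentiles over all tokens.
# The distributions below are invented toy data.

import math, random

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

random.seed(0)
divergences = []
for _ in range(1000):
    p = [random.random() for _ in range(8)]
    s = sum(p); p = [x / s for x in p]
    # q = p plus a little noise, renormalized (the "quantized" model)
    q = [x + 0.05 * random.random() for x in p]
    s = sum(q); q = [x / s for x in q]
    divergences.append(kl(p, q))

divergences.sort()
avg = sum(divergences) / len(divergences)
kld_99 = divergences[int(0.99 * len(divergences))]  # top 1% cutoff
kld_95 = divergences[int(0.95 * len(divergences))]
print(f"avg={avg:.4f}  KLD_95={kld_95:.4f}  KLD_99={kld_99:.4f}")
```

llama.cpp's own implementation works the same way conceptually, just over the full vocabulary at every token position of a real evaluation text.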
2024-02-02T13:39:41
https://www.reddit.com/r/LocalLLaMA/comments/1ah3w8d/synthetic_nonsense_data_improves_llamacpp/
kindacognizant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah3w8d
false
null
t3_1ah3w8d
/r/LocalLLaMA/comments/1ah3w8d/synthetic_nonsense_data_improves_llamacpp/
false
false
https://b.thumbs.redditm…KfQcBiOhtyYI.jpg
61
null
Help needed with the dreaded CUDA Out of Memory while running Llama-2-7b-hf
1
Hello all, a newbie here. I am getting the dreaded "CUDA out of memory" error while trying to run inference on meta-llama/Llama-2-7b-hf. My GPU is a TU106 [GeForce RTX 2060 Rev. A]. Thank you in advance for any help. The full code:

```python
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
    pipeline,
)
from peft import LoraConfig
from trl import SFTTrainer

# Model and tokenizer names
base_model_name = "meta-llama/Llama-2-7b-hf"
refined_model_name = "fin_gpt_Llama-2-7b-hf"

# Tokenizer
llama_tokenizer = AutoTokenizer.from_pretrained(base_model_name, token="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", trust_remote_code=True)
llama_tokenizer.pad_token = llama_tokenizer.eos_token
llama_tokenizer.padding_side = "right"  # Fix for fp16

torch.cuda.empty_cache()
print(torch.cuda.memory_summary(device=None, abbreviated=False))

# Quantization config
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=False,
)

# Model
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    token="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",
    quantization_config=quant_config,
    device_map={"": 0},
)
base_model.config.use_cache = False
base_model.config.pretraining_tp = 1

# Make prompts
prompt = [
    '''Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive}
Input: FINANCING OF ASPOCOMP 'S GROWTH Aspocomp is aggressively pursuing its growth strategy by increasingly focusing on technologically more demanding HDI printed circuit boards PCBs .
Answer: ''',
    '''Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive}
Input: According to Gran , the company has no plans to move all production to Russia , although that is where the company is growing .
Answer: ''',
    '''Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive}
Input: A tinyurl link takes users to a scamming site promising that users can earn thousands of dollars by becoming a Google ( NASDAQ : GOOG ) Cash advertiser .
Answer: ''',
]

# Generate results
tokens = llama_tokenizer(prompt, return_tensors='pt', padding=True, max_length=512)
res = base_model.generate(**tokens, max_length=2000, do_sample=True, top_p=0.95, top_k=50)
res_sentences = [llama_tokenizer.decode(i) for i in res]
out_text = [o.split("Answer: ")[1] for o in res_sentences]

# show results
for sentiment in out_text:
    print(sentiment)
# Output:
# positive
# neutral
# negative
```

The `# Generate results` section raises the following error (traceback abridged):

```
---------------------------------------------------------------------------
OutOfMemoryError                          Traceback (most recent call last)
Cell In[10], line 16
     14 # Generate results
     15 tokens = llama_tokenizer(prompt, return_tensors='pt', padding=True, max_length=512)
---> 16 res = base_model.generate(**tokens, max_length=2000, do_sample=True, top_p=0.95, top_k=50)
File ~/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
--> 115 return func(*args, **kwargs)
File ~/.local/lib/python3.10/site-packages/transformers/generation/utils.py:1642, in GenerationMixin.generate(...)
-> 1642 return self.sample(
    ...
File ~/.local/lib/python3.10/site-packages/bitsandbytes/functional.py:1026
-> 1026 out = torch.empty(quant_state.shape, dtype=quant_state.dtype, device=A.device)

OutOfMemoryError: CUDA out of memory. Tried to allocate 86.00 MiB (GPU 0; 5.78 GiB total capacity; 4.88 GiB already allocated; 101.25 MiB free; 5.12 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
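A back-of-envelope estimate (not from the post; a hedged sketch) shows why a ~6 GiB card has so little headroom here even with 4-bit quantization:

```python
def model_memory_gib(n_params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-only memory estimate for a quantized model, in GiB."""
    total_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 2**30

# Llama-2-7B in 4-bit NF4 needs roughly 3.3 GiB for the weights alone.
# On a 5.78 GiB RTX 2060, the CUDA context, activations, and the KV
# cache grown by max_length=2000 consume most of what remains, so
# generate() can fail even on a small 86 MiB allocation.
weights_gib = model_memory_gib(7, 4)
```

Typical mitigations in this situation are lowering `max_length` (or using `max_new_tokens`, since sentiment classification needs only a few tokens) and wrapping generation in `torch.no_grad()`.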
2024-02-02T13:31:14
https://www.reddit.com/r/LocalLLaMA/comments/1ah3q19/help_needed_with_the_dreaded_cuda_out_of_memory/
Difficult-Tomato-624
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah3q19
false
null
t3_1ah3q19
/r/LocalLLaMA/comments/1ah3q19/help_needed_with_the_dreaded_cuda_out_of_memory/
false
false
self
1
null
Help with CUDA out of memory error - while running LLAMA2 - meta-llama/Llama-2-7b-hf
1
Hello all, a newbie here. I am getting the dreaded "CUDA out of memory" error while trying to run inference on meta-llama/Llama-2-7b-hf. Thank you in advance for any help. The full code:

```python
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
    pipeline,
)
from peft import LoraConfig
from trl import SFTTrainer

# Model and tokenizer names
base_model_name = "meta-llama/Llama-2-7b-hf"
refined_model_name = "fin_gpt_Llama-2-7b-hf"

# Tokenizer
llama_tokenizer = AutoTokenizer.from_pretrained(base_model_name, token="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", trust_remote_code=True)
llama_tokenizer.pad_token = llama_tokenizer.eos_token
llama_tokenizer.padding_side = "right"  # Fix for fp16

torch.cuda.empty_cache()
print(torch.cuda.memory_summary(device=None, abbreviated=False))

# Quantization config
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=False,
)

# Model
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    token="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",
    quantization_config=quant_config,
    device_map={"": 0},
)
base_model.config.use_cache = False
base_model.config.pretraining_tp = 1

# Make prompts
prompt = [
    '''Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive}
Input: FINANCING OF ASPOCOMP 'S GROWTH Aspocomp is aggressively pursuing its growth strategy by increasingly focusing on technologically more demanding HDI printed circuit boards PCBs .
Answer: ''',
    '''Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive}
Input: According to Gran , the company has no plans to move all production to Russia , although that is where the company is growing .
Answer: ''',
    '''Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive}
Input: A tinyurl link takes users to a scamming site promising that users can earn thousands of dollars by becoming a Google ( NASDAQ : GOOG ) Cash advertiser .
Answer: ''',
]

# Generate results
tokens = llama_tokenizer(prompt, return_tensors='pt', padding=True, max_length=512)
res = base_model.generate(**tokens, max_length=2000, do_sample=True, top_p=0.95, top_k=50)
res_sentences = [llama_tokenizer.decode(i) for i in res]
out_text = [o.split("Answer: ")[1] for o in res_sentences]

# show results
for sentiment in out_text:
    print(sentiment)
# Output:
# positive
# neutral
# negative
```

The `# Generate results` section raises the following error (traceback abridged):

```
---------------------------------------------------------------------------
OutOfMemoryError                          Traceback (most recent call last)
Cell In[10], line 16
     14 # Generate results
     15 tokens = llama_tokenizer(prompt, return_tensors='pt', padding=True, max_length=512)
---> 16 res = base_model.generate(**tokens, max_length=2000, do_sample=True, top_p=0.95, top_k=50)
File ~/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
--> 115 return func(*args, **kwargs)
File ~/.local/lib/python3.10/site-packages/transformers/generation/utils.py:1642, in GenerationMixin.generate(...)
-> 1642 return self.sample(
    ...
File ~/.local/lib/python3.10/site-packages/bitsandbytes/functional.py:1026
-> 1026 out = torch.empty(quant_state.shape, dtype=quant_state.dtype, device=A.device)

OutOfMemoryError: CUDA out of memory. Tried to allocate 86.00 MiB (GPU 0; 5.78 GiB total capacity; 4.88 GiB already allocated; 101.25 MiB free; 5.12 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
2024-02-02T13:27:27
https://www.reddit.com/r/LocalLLaMA/comments/1ah3n8b/help_with_cuda_out_of_memory_error_while_running/
Difficult-Tomato-624
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah3n8b
false
null
t3_1ah3n8b
/r/LocalLLaMA/comments/1ah3n8b/help_with_cuda_out_of_memory_error_while_running/
false
false
self
1
null
People with macs ( M1, M2, M3 ) What are your inference speeds? asking for a friend...
84
Recently I came across a surprising situation: Macs can run inference and even train models **exceptionally fast compared to CPUs**, and some even **rival GPUs** at roughly 1/10th the power draw. I am now very interested in using a Mac mini as part of my home server for that very reason. However, I don't have a Mac... I'm a Windows kind of guy with a 3090 and a 4090. If you have a Mac, can you share your chip version (M1, M2, M3, Pro, etc.), RAM size, and inference speeds?
2024-02-02T13:03:07
https://www.reddit.com/r/LocalLLaMA/comments/1ah367r/people_with_macs_m1_m2_m3_what_are_your_inference/
niftylius
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah367r
false
null
t3_1ah367r
/r/LocalLLaMA/comments/1ah367r/people_with_macs_m1_m2_m3_what_are_your_inference/
false
false
self
84
null
What are your parameters to launch Goliath or Mistral on RTX4090 ?
5
Hello everyone, I’m a bit new to this field. I have downloaded Goliath-120B, Nous Capybara-34B, Mistral 34x2b, and Mistral 8x7B, but the speed is very slow. I have an RTX 4090 and 64 GB of RAM; is that enough? What are your settings in oobabooga? Do you use llama.cpp or another loader? I’m trying to use privateGPT and SillyTavern for roleplay. I’d also appreciate any other model recommendations.
2024-02-02T12:59:43
https://i.redd.it/zypd0lky66gc1.jpeg
lewdstoryart
i.redd.it
1970-01-01T00:00:00
0
{}
1ah33gc
false
null
t3_1ah33gc
/r/LocalLLaMA/comments/1ah33gc/what_are_your_parameters_to_launch_goliath_or/
false
false
https://a.thumbs.redditm…Qi_1OXVi1bl8.jpg
5
{'enabled': True, 'images': [{'id': 'wSBPEeB50RU6fJuxxMfwbEHYKByrRt9jBHB1znKqb2E', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/zypd0lky66gc1.jpeg?width=108&crop=smart&auto=webp&s=d5bed7bf5532d8e74908fcf0507431b456497971', 'width': 108}, {'height': 190, 'url': 'https://preview.redd.it/zypd0lky66gc1.jpeg?width=216&crop=smart&auto=webp&s=9fb24823ae564d0e41436bc7dd8b19bed29aa20d', 'width': 216}, {'height': 282, 'url': 'https://preview.redd.it/zypd0lky66gc1.jpeg?width=320&crop=smart&auto=webp&s=fb98da8b4c7c39a2a4114228ce1d529ea7452753', 'width': 320}, {'height': 564, 'url': 'https://preview.redd.it/zypd0lky66gc1.jpeg?width=640&crop=smart&auto=webp&s=09fa7d21afb40917795b01fb2e52f1c42c3f28fb', 'width': 640}], 'source': {'height': 815, 'url': 'https://preview.redd.it/zypd0lky66gc1.jpeg?auto=webp&s=7fde95041af7f03e26f08b21faf4acc31afddfec', 'width': 924}, 'variants': {}}]}
Why does GPTQ models not have stream=True argument
1
[removed]
2024-02-02T12:08:40
https://www.reddit.com/r/LocalLLaMA/comments/1ah270z/why_does_gptq_models_not_have_streamtrue_argument/
albusdd_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah270z
false
null
t3_1ah270z
/r/LocalLLaMA/comments/1ah270z/why_does_gptq_models_not_have_streamtrue_argument/
false
false
self
1
null
Joke GPT
1
Check out my very popular new GPT: "The Ultimate Joke GPT": [https://chat.openai.com/g/g-1dxPadgO8-the-ultimate-joke-gpt](https://chat.openai.com/g/g-1dxPadgO8-the-ultimate-joke-gpt) [\#ChatGPT](https://twitter.com/hashtag/ChatGPT?src=hashtag_click) [\#GPTStore](https://twitter.com/hashtag/GPTStore?src=hashtag_click) [\#Jokesoftheday](https://twitter.com/hashtag/Jokesoftheday?src=hashtag_click) [\#ChatGPTplus](https://twitter.com/hashtag/ChatGPTplus?src=hashtag_click)
2024-02-02T11:52:55
https://www.reddit.com/r/LocalLLaMA/comments/1ah1xmn/joke_gpt/
Sufficient-Floor-511
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah1xmn
false
null
t3_1ah1xmn
/r/LocalLLaMA/comments/1ah1xmn/joke_gpt/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Qe66sB2awc7er0FiVu8vd_xM3IHVc7YBHq23dHEpQmw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/gPlhwnXEbjDA9lDwp4cHnzxxShC_ViCCDvSyH70AI98.jpg?width=108&crop=smart&auto=webp&s=4ebbc22416ac8b7868932e70b5df0051e761379f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/gPlhwnXEbjDA9lDwp4cHnzxxShC_ViCCDvSyH70AI98.jpg?width=216&crop=smart&auto=webp&s=ed3de01d38b1aa892968fa97c993628dd8147605', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/gPlhwnXEbjDA9lDwp4cHnzxxShC_ViCCDvSyH70AI98.jpg?width=320&crop=smart&auto=webp&s=596884220e71cf89deb7a7a3c5b87ab521106bbe', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/gPlhwnXEbjDA9lDwp4cHnzxxShC_ViCCDvSyH70AI98.jpg?auto=webp&s=24067f554a9058f783d7df45d7d78d06a1b1a9d8', 'width': 512}, 'variants': {}}]}
Chat with PDFs using function calling
1
[removed]
2024-02-02T11:32:36
https://www.reddit.com/r/LocalLLaMA/comments/1ah1m4y/chat_with_pdfs_using_function_calling/
ashpreetbedi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah1m4y
false
null
t3_1ah1m4y
/r/LocalLLaMA/comments/1ah1m4y/chat_with_pdfs_using_function_calling/
false
false
self
1
null
Introducing LLM-Powered Robots: MachinaScript for Robots
111
2024-02-02T10:56:30
https://i.redd.it/9ekcjunvk5gc1.png
Neptun0
i.redd.it
1970-01-01T00:00:00
0
{}
1ah1243
false
null
t3_1ah1243
/r/LocalLLaMA/comments/1ah1243/introducing_llmpowered_robots_machinascript_for/
false
false
https://a.thumbs.redditm…t7o_j-Qdobx0.jpg
111
{'enabled': True, 'images': [{'id': 'v0LxuJLf4W76bdrrWMsjXxBx0NzA2BBLjGI6A8moDD8', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/9ekcjunvk5gc1.png?width=108&crop=smart&auto=webp&s=f36cb27b83db1302a464d5746d7301c8c1c7995f', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/9ekcjunvk5gc1.png?width=216&crop=smart&auto=webp&s=4f1ecb06e802e85c269457fe036f83cbfc0ad581', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/9ekcjunvk5gc1.png?width=320&crop=smart&auto=webp&s=1c568fd7b6c56be1bdee31c610c6554562e5730f', 'width': 320}, {'height': 320, 'url': 'https://preview.redd.it/9ekcjunvk5gc1.png?width=640&crop=smart&auto=webp&s=244d1f8954ce1953b9a31b4dfac3401521ff705f', 'width': 640}, {'height': 480, 'url': 'https://preview.redd.it/9ekcjunvk5gc1.png?width=960&crop=smart&auto=webp&s=5dd8c47cef28552bbefda0d28ee9dc4b184f65e6', 'width': 960}, {'height': 540, 'url': 'https://preview.redd.it/9ekcjunvk5gc1.png?width=1080&crop=smart&auto=webp&s=0ba84a31c024b37e4ed753a9a9d17f26e56e0376', 'width': 1080}], 'source': {'height': 1139, 'url': 'https://preview.redd.it/9ekcjunvk5gc1.png?auto=webp&s=fac778bffa70ed65daeca2fcb691e0aa4a0e627b', 'width': 2278}, 'variants': {}}]}
Feedback Needed
1
Below is a side-by-side comparison of the responses from OpenChat3.5, GPT-3.5, and GPT-4 with tools, in table form:

| Model | Altitude of 8L Rocket (meters) | Plant Growth under Blue Light (cm/day) | Purity of Charcoal Filter (%) | Corrected Purity Interpretation | Notes |
|---|---|---|---|---|---|
| OpenChat3.5 (7B) | 800 | 1.714 | 150% | 100% more effective | Initially incorrect about the purity percentage but corrected after user intervention. |
| GPT-3.5 | 800 | 6 | 150% | Did not correct to 100% | Did not correct errors within the conversation, maintaining incorrect reasoning. |
| GPT-4 with tools | 800 | 1.71 | Twice as pure as 75% | 100% more effective | Correctly calculated the altitude and plant growth; initially erred on filter purity. |

**Final Takeaways and Conclusion:**

- All three models correctly calculated that the 8-liter rocket would reach an altitude of 800 meters.
- OpenChat3.5 and GPT-4 with tools calculated the plant growth rate under blue light as approximately 1.71 cm per day, which is the correct figure based on the given information. GPT-3.5 incorrectly stated the growth rate as 6 cm per day.
- For the charcoal filter's effectiveness, both OpenChat3.5 and GPT-4 with tools eventually recognized that a purity level cannot exceed 100% and that the charcoal filter is 100% more effective than the sand filter. GPT-3.5 did not make this correction.
- OpenChat3.5's ability to correct itself after user feedback demonstrates an improvement in adaptability over GPT-3.5. GPT-4 with tools showed a similar capacity for self-correction.
- In terms of accuracy and adaptability, GPT-4 with tools and the fine-tuned OpenChat3.5 model performed better than GPT-3.5 in this comparison, especially in correcting their reasoning based on user feedback.
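The quiz arithmetic the models were graded on can be sanity-checked. The per-litre and per-week inputs below are assumptions inferred from the reported answers (800 m for 8 L implies 100 m per litre; 1.714 cm/day is consistent with 12 cm per week), not values stated in the post:

```python
# Hypothetical reconstruction of the quiz arithmetic (inputs are assumed).
LITRES = 8
METERS_PER_LITRE = 100            # assumption: implied by 8 L -> 800 m
altitude_m = LITRES * METERS_PER_LITRE

CM_PER_WEEK = 12                  # assumption: consistent with ~1.714 cm/day
growth_cm_per_day = CM_PER_WEEK / 7

# Purity can never exceed 100%: "twice as pure as 75%" caps at 100%,
# i.e. the charcoal filter is 100% *more effective*, not "150% pure".
corrected_purity = min(2 * 75, 100)
```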
2024-02-02T10:50:38
https://www.reddit.com/r/LocalLLaMA/comments/1ah0z17/feedback_needed/
adeelahmadch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah0z17
false
null
t3_1ah0z17
/r/LocalLLaMA/comments/1ah0z17/feedback_needed/
false
false
self
1
null
🦙 Llama compatible version of Qwen-72B
18
I published a 🦙 llama-compatible version of the excellent base model [Qwen-72B](https://huggingface.co/Qwen/Qwen-72B). While it achieves a lower score than the original model according to the [🤗 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), this 🦙 llama-compatible version could still be useful in many areas. [https://huggingface.co/Weyaxi/Qwen-72B-Llama](https://huggingface.co/Weyaxi/Qwen-72B-Llama) https://preview.redd.it/shaaeql2j5gc1.png?width=1725&format=png&auto=webp&s=d8c156d38090a5d16bc9e1d5c63159d8c406a167
2024-02-02T10:46:19
https://www.reddit.com/r/LocalLLaMA/comments/1ah0wqe/llama_compatible_version_of_qwen72b/
Weyaxi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah0wqe
false
null
t3_1ah0wqe
/r/LocalLLaMA/comments/1ah0wqe/llama_compatible_version_of_qwen72b/
false
false
https://b.thumbs.redditm…9NHz-taig0VE.jpg
18
{'enabled': False, 'images': [{'id': 'J-_hgnUJ-bAb6IC8_h-E1GYPdsQ9niI6MLaxlek5vxc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rG2_L1qysQMiUR-9BWbNqbNisg6bgFvNp6o7o8GQVAg.jpg?width=108&crop=smart&auto=webp&s=2f8f58e6e03e22584b1b8a506a230dd12edf7c3d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rG2_L1qysQMiUR-9BWbNqbNisg6bgFvNp6o7o8GQVAg.jpg?width=216&crop=smart&auto=webp&s=df37b6cce5a38af7f477aea2fa02a0cc7d4ccbfb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rG2_L1qysQMiUR-9BWbNqbNisg6bgFvNp6o7o8GQVAg.jpg?width=320&crop=smart&auto=webp&s=8c0eee8ef2ddac9f616f67aaca962e77c217b15d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rG2_L1qysQMiUR-9BWbNqbNisg6bgFvNp6o7o8GQVAg.jpg?width=640&crop=smart&auto=webp&s=96750cc96021250929f49264efc1a0f655beccdf', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rG2_L1qysQMiUR-9BWbNqbNisg6bgFvNp6o7o8GQVAg.jpg?width=960&crop=smart&auto=webp&s=91ea8a8139ab20708c12e540434bbeb266500e05', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rG2_L1qysQMiUR-9BWbNqbNisg6bgFvNp6o7o8GQVAg.jpg?width=1080&crop=smart&auto=webp&s=bc19bc3e4c335234523358cc9339138c4f6e147d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rG2_L1qysQMiUR-9BWbNqbNisg6bgFvNp6o7o8GQVAg.jpg?auto=webp&s=a113650faf4ae85c26c91fea4a5d62fec4f6d972', 'width': 1200}, 'variants': {}}]}
Miqu is now on the Open LLM Leaderboard, achieving a score of 76.59
177
Miqu's evaluation was recently completed on the Open LLM Leaderboard, where Miqu achieved an average score of 76.59. [https://huggingface.co/152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) https://preview.redd.it/u7q9c3dx85gc1.png?width=1731&format=png&auto=webp&s=81f026c4ff50fbd755394e00e0b6d56caa91d71e Tweet (If you are interested): [https://twitter.com/Weyaxi/status/1753353836373135415](https://twitter.com/Weyaxi/status/1753353836373135415)
2024-02-02T09:50:53
https://www.reddit.com/r/LocalLLaMA/comments/1ah04ci/miqu_is_now_on_the_open_llm_leaderboard_achieving/
Weyaxi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah04ci
false
null
t3_1ah04ci
/r/LocalLLaMA/comments/1ah04ci/miqu_is_now_on_the_open_llm_leaderboard_achieving/
false
false
https://b.thumbs.redditm…u1O9-aLu5V1M.jpg
177
{'enabled': False, 'images': [{'id': 'qpgih8ECI2CT55esDPFPiYinbKN4-dzK53zS_rU7hY4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GYoDcf-x543ZlkKBe8-ItND_1vwIeneuCnJ8PyTs46w.jpg?width=108&crop=smart&auto=webp&s=14299634b8200f4a75070c42a2031f01e5f3b925', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GYoDcf-x543ZlkKBe8-ItND_1vwIeneuCnJ8PyTs46w.jpg?width=216&crop=smart&auto=webp&s=62f65c9e2ad40754479cca6b72459dc4df8113e5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GYoDcf-x543ZlkKBe8-ItND_1vwIeneuCnJ8PyTs46w.jpg?width=320&crop=smart&auto=webp&s=ef0710307e1e71bd474f954c4b5a11ab91509f1e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GYoDcf-x543ZlkKBe8-ItND_1vwIeneuCnJ8PyTs46w.jpg?width=640&crop=smart&auto=webp&s=6b85fb5ebe50c190b9ea41b6ff23364e22b3cc8e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GYoDcf-x543ZlkKBe8-ItND_1vwIeneuCnJ8PyTs46w.jpg?width=960&crop=smart&auto=webp&s=39ce4f77346a3b7988cc1b4983d569b11e9285ed', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GYoDcf-x543ZlkKBe8-ItND_1vwIeneuCnJ8PyTs46w.jpg?width=1080&crop=smart&auto=webp&s=22c0faa79dc2f24d98f89298b40a4fa03dc05648', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GYoDcf-x543ZlkKBe8-ItND_1vwIeneuCnJ8PyTs46w.jpg?auto=webp&s=755d5432646daf4633fcfb5e4e2729b62b0cb70d', 'width': 1200}, 'variants': {}}]}
What is the best Language-Vision Large Model in your experience?
18
Based on my tests, I found [Qwen-VL-Max](https://huggingface.co/spaces/Qwen/Qwen-VL-Max) and [Qwen-VL-Plus](https://huggingface.co/spaces/Qwen/Qwen-VL-Plus) to be the best models at extracting text from images and giving interesting responses. (The problem is that they are closed source.) What is the best in your experience? [View Poll](https://www.reddit.com/poll/1ah03t8)
2024-02-02T09:49:54
https://www.reddit.com/r/LocalLLaMA/comments/1ah03t8/what_is_the_best_languagevision_large_model_in/
Puzzleheaded_Mall546
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ah03t8
false
null
t3_1ah03t8
/r/LocalLLaMA/comments/1ah03t8/what_is_the_best_languagevision_large_model_in/
false
false
self
18
{'enabled': False, 'images': [{'id': 'TqTxq6laeDmS3WGhyn47lFk2IATvbQGyZ17J7EzrTpI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BaFIUWy0FZEtN2Ym8OZDHSW8QP7_y-mt44NWTqWrI2g.jpg?width=108&crop=smart&auto=webp&s=6c6c01f4c49a27cea1afd19bc6845c3009f1b44e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BaFIUWy0FZEtN2Ym8OZDHSW8QP7_y-mt44NWTqWrI2g.jpg?width=216&crop=smart&auto=webp&s=4703fed3691ed7c6c4f14b82b767319ca5d95629', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BaFIUWy0FZEtN2Ym8OZDHSW8QP7_y-mt44NWTqWrI2g.jpg?width=320&crop=smart&auto=webp&s=720e52cde2067d2977f874fe479285d9355ac6a7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BaFIUWy0FZEtN2Ym8OZDHSW8QP7_y-mt44NWTqWrI2g.jpg?width=640&crop=smart&auto=webp&s=111fbfba5f784f6b89dc4d01ebd751e30c7ac04e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BaFIUWy0FZEtN2Ym8OZDHSW8QP7_y-mt44NWTqWrI2g.jpg?width=960&crop=smart&auto=webp&s=74d27d7aa63137112db544a3bfe85333cd2780a5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BaFIUWy0FZEtN2Ym8OZDHSW8QP7_y-mt44NWTqWrI2g.jpg?width=1080&crop=smart&auto=webp&s=8a9300ba08da4d98084278872b1f328dc92c378c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BaFIUWy0FZEtN2Ym8OZDHSW8QP7_y-mt44NWTqWrI2g.jpg?auto=webp&s=47ffb1ee944e5d9588b1d952bbdf0ef0381f46fc', 'width': 1200}, 'variants': {}}]}
Run the leaked Mistral Medium LLM, miqu-1-70B, across GPUs, CPUs, and OSes
1
2024-02-02T09:10:25
https://www.secondstate.io/articles/miqu-1-70b/
smileymileycoin
secondstate.io
1970-01-01T00:00:00
0
{}
1agzkas
false
null
t3_1agzkas
/r/LocalLLaMA/comments/1agzkas/run_the_leaked_mistral_mediumb_llm_miqu170b/
false
false
default
1
null
Defense against a disinformation-programmed LLM
1
Imagine a scenario where Russia starts its own AI server, programmed to spread misinformation supporting Russian propaganda. It is possible to corrupt another model, and afterwards the corruption is not easy to undo. For now models are in a fixed state, always resetting back to "default". But despite that: would it be possible to program an LLM to detect, mark, and isolate "corruption", **while not getting affected by it**? Suppose those two AIs interact with each other, likely making the channel readable for humans as well; in my imagination it would be like watching them argue. Would the defending model get corrupted too, or would it be able to fight the misinformation? Is it like the human mind, where it is easier to get corrupted? Or does the fact that humans tend to choose the easiest path simply not apply to AI? Sometimes OpenAI's GPT seems to get lazy; I think that's because of rate limits, so the server doesn't get overloaded and gives out only small bits of computing power. But what if a model gets "lazy" and prefers to say "yes, most likely true" as a partial result that fits within those limits? Information warfare has been keeping me up at night for years. The only hope for now is that Russia has mostly kept its hands off AI, but I suppose it's not going to stay that way forever. Misinformation and hoaxes on the internet have unleashed havoc in the minds of many people in this land, rendering them unable to think clearly. I don't want to fall asleep on this landmine. What are your thoughts on this?
2024-02-02T09:09:52
https://www.reddit.com/r/LocalLLaMA/comments/1agzjyt/defense_against_desinformation_programed_llm/
Brave-Decision-1944
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1agzjyt
false
null
t3_1agzjyt
/r/LocalLLaMA/comments/1agzjyt/defense_against_desinformation_programed_llm/
false
false
self
1
null
MiniCPM: An end-side LLM achieves equivalent performance to Mistral-7B, and outperforms Llama2-13B
27
Github: [github.com/OpenBMB/MiniCPM](https://t.co/aO2HBTLdO8) Huggingface: [https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16) Tech report: [https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20](https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20) We unveil a compact and powerful language model designed to effortlessly run on mobile devices, including those equipped with Snapdragon 835 (released in late 2016) SoCs. It's our mission to democratize intelligence, making it accessible to everyone, everywhere! # Evaluation scores: [Surpasses or performs at a similar level to the majority of 7B-scale models, and outperforms some models with a scale of 10B or above.](https://preview.redd.it/c1w1nve605gc1.png?width=2728&format=png&auto=webp&s=74017818801c6c6f63231212f13670c50788aba0) [Outperforms small models on all test sets except for certain English evaluation datasets.](https://preview.redd.it/csy2f8ng05gc1.png?width=3018&format=png&auto=webp&s=34843a8e5523fe3929db5d317d89d5d95f028357) [MT-Bench score increased after DPO alignment](https://preview.redd.it/sy8k6d8j05gc1.png?width=1228&format=png&auto=webp&s=2913fbc03b61e507eb57fd6758cc6f3ba265ce07) # Edge Deployment: https://preview.redd.it/xi08ogao05gc1.png?width=1388&format=png&auto=webp&s=7e542fb306581a9073a1aa5be23316a84343207a
2024-02-02T09:02:54
https://www.reddit.com/r/LocalLLaMA/comments/1agzgme/minicpm_an_endside_llm_achieves_equivalent/
x_swordfaith_l
self.LocalLLaMA
2024-02-03T06:44:17
0
{}
1agzgme
false
null
t3_1agzgme
/r/LocalLLaMA/comments/1agzgme/minicpm_an_endside_llm_achieves_equivalent/
false
false
https://a.thumbs.redditm…V2zK3x5PN760.jpg
27
{'enabled': False, 'images': [{'id': 'WGxJ8YIP7cm4FO16l6DX8jHTX1koc02Nj1SQEZsRiL0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pFu3U4IUaYfWI2YqoeCAMlyO0j3vSjK-YmuzjB6bqIc.jpg?width=108&crop=smart&auto=webp&s=8fb178e836f6d10d5962e410d372a7c86abdbb6b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pFu3U4IUaYfWI2YqoeCAMlyO0j3vSjK-YmuzjB6bqIc.jpg?width=216&crop=smart&auto=webp&s=4d31f39c7029bf720d714f4d599b37dbd4fc6964', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pFu3U4IUaYfWI2YqoeCAMlyO0j3vSjK-YmuzjB6bqIc.jpg?width=320&crop=smart&auto=webp&s=bf99d764b41574c085e1c44be17508b20ef1653c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pFu3U4IUaYfWI2YqoeCAMlyO0j3vSjK-YmuzjB6bqIc.jpg?width=640&crop=smart&auto=webp&s=6392c68e1a8c4befc38d39ed0ee59b0ecf85c424', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pFu3U4IUaYfWI2YqoeCAMlyO0j3vSjK-YmuzjB6bqIc.jpg?width=960&crop=smart&auto=webp&s=3346a733773d1415fa762991171fea597e57b6c0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pFu3U4IUaYfWI2YqoeCAMlyO0j3vSjK-YmuzjB6bqIc.jpg?width=1080&crop=smart&auto=webp&s=d4e14fe80b01a419d21cdcdceb80d18dbc07b03f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pFu3U4IUaYfWI2YqoeCAMlyO0j3vSjK-YmuzjB6bqIc.jpg?auto=webp&s=d649d2479a1a967fd42e3d30b0ad702522f7fd1e', 'width': 1200}, 'variants': {}}]}
Can any concept be represented in a high dimensional vector space?
4
I've been looking at translation models, specifically encoder-decoder, and I've sorta been able to get it, but I wonder why this doesn't work: rather than just having word2vec, train a model on "span2vec", which really just means "concept2vec", assuming any concept suitable for translation can be represented as a span. Of course, given the exponential nature of concatenating spans, I can see issues appearing already with input size. However, assuming the input size issue can be fixed or worked around, one now has a vector space for simple enough concepts (assuming this exists; maybe it's too high-dimensional to be useful? Maybe word embeddings can be considered a tradeoff that keeps memory requirements low?). So we assume this is a function from span space to concept space. Now we can train the inverse function for any language separately afterwards, by having a set of pairs of concepts and translations (which can be generated easily from pairs of originals and translations). This allows language understandings to be "modules" that are just functions from concept space to language space and vice versa. Also, if you have pairs of "english - norwegian" and "norwegian - swedish" but no "english - swedish", you should in theory get "english - swedish" automatically, as all languages are represented in the same concept space. In this sense, languages are just encodings for vectors in the concept space. Perhaps the space is just too big? I assume this has been attempted before encoder/decoder models (given the trickery they appear to use); does anyone remember why it failed?
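The shared-concept-space idea above can be sketched as a toy. All vectors and vocabularies below are invented for illustration; a real system would learn the concept space rather than hand-coding it:

```python
import math

# Toy shared "concept space": every language encodes into and decodes out
# of these vectors. The two-dimensional vectors here are made up.
concept_space = {
    "dog":   [1.0, 0.0],
    "house": [0.0, 1.0],
}
english = {"dog": "dog", "house": "house"}
norwegian = {"dog": "hund", "house": "hus"}

def encode(word: str, lang: dict) -> list:
    # language -> concept: find which concept this surface form names
    for concept, surface in lang.items():
        if surface == word:
            return concept_space[concept]
    raise KeyError(word)

def decode(vec: list, lang: dict) -> str:
    # concept -> language: nearest concept vector, then that language's word
    best = min(concept_space, key=lambda c: math.dist(concept_space[c], vec))
    return lang[best]

# english -> norwegian without any direct english-norwegian training pairs:
# both sides only ever talk to the shared concept space.
print(decode(encode("dog", english), norwegian))  # hund
```

Each language is just an encoder/decoder pair around the same space, which is exactly what makes the "missing pair comes for free" property fall out.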
2024-02-02T08:49:11
https://www.reddit.com/r/LocalLLaMA/comments/1agz9su/can_any_concept_be_represented_in_a_high/
Responsible-Dig7538
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1agz9su
false
null
t3_1agz9su
/r/LocalLLaMA/comments/1agz9su/can_any_concept_be_represented_in_a_high/
false
false
self
4
null
MiniCPM: An end-side LLM achieves equivalent performance to the Mistral-7B, and outperforms Llama2-13B
1
[removed]
2024-02-02T08:32:48
https://www.reddit.com/r/LocalLLaMA/comments/1agz1v8/minicpm_an_endside_llm_achieves_equivalent/
x_swordfaith_l
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1agz1v8
false
null
t3_1agz1v8
/r/LocalLLaMA/comments/1agz1v8/minicpm_an_endside_llm_achieves_equivalent/
false
false
https://b.thumbs.redditm…ClB4smi2w6pY.jpg
1
{'enabled': False, 'images': [{'id': 'RBJ9vxqYfF1Mq8bfHUO9pEJQy93nGJyHLIZxcdJ61m8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7Z6MgQvELeFDOViepev3aTNa161GFDXJC3gCX8rvaE8.jpg?width=108&crop=smart&auto=webp&s=3cac0671eaf1ad30581df97fae4fc45b9b25d719', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7Z6MgQvELeFDOViepev3aTNa161GFDXJC3gCX8rvaE8.jpg?width=216&crop=smart&auto=webp&s=f1fc86ed83178e5637c243f89c03635cd7e58d01', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7Z6MgQvELeFDOViepev3aTNa161GFDXJC3gCX8rvaE8.jpg?width=320&crop=smart&auto=webp&s=67add2375e015e697203a137f546945d5c4d5d01', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7Z6MgQvELeFDOViepev3aTNa161GFDXJC3gCX8rvaE8.jpg?width=640&crop=smart&auto=webp&s=6727c07a5cd7eb91c3a66c268eabaa8c1316e304', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7Z6MgQvELeFDOViepev3aTNa161GFDXJC3gCX8rvaE8.jpg?width=960&crop=smart&auto=webp&s=b817bd0e772ee3600e2f177dae5f2476baab488a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7Z6MgQvELeFDOViepev3aTNa161GFDXJC3gCX8rvaE8.jpg?width=1080&crop=smart&auto=webp&s=45c67b2bc2203c1962b22248a55dc346a48980a2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7Z6MgQvELeFDOViepev3aTNa161GFDXJC3gCX8rvaE8.jpg?auto=webp&s=4efd5903e6ef940f86cf4b3d244b5e69e1850b96', 'width': 1200}, 'variants': {}}]}
Adding more RAM (CPU + offload)?
2
Thinking about adding an additional 32GB for a total of 64GB so I can run larger 70B models. Mainly for RP and casual stuff. I'm kinda new to this stuff, so I'm not sure how just running a larger model on the same processing power is gonna pan out. My specs: 7800X3D + 6950 XT on Windows. Using llama.cpp compiled with CLBlast for GPU offloading in webui. (Used KoboldAI as well; it runs much faster but the quality is garbage, don't know why.) If anybody needs help making an AMD GPU work with webui on Windows, let me know.
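For a rough sense of whether a 70B fits in 64GB RAM plus 16GB VRAM, a back-of-the-envelope estimate helps. The bits-per-weight figures below are approximate GGUF averages, not official numbers, and the overhead factor is a guess to cover context/KV cache:

```python
def quant_size_gb(n_params_b: float, bits_per_weight: float, overhead: float = 1.1) -> float:
    """Rough in-memory size of a quantized model.
    `overhead` loosely accounts for KV cache and runtime buffers."""
    return n_params_b * 1e9 * bits_per_weight / 8 / 1e9 * overhead

# Approximate effective bits-per-weight for common GGUF quant types:
for name, bpw in [("Q2_K", 2.6), ("Q4_K_M", 4.8), ("Q5_K_M", 5.7), ("Q8_0", 8.5)]:
    size = quant_size_gb(70, bpw)
    verdict = "fits" if size <= 64 + 16 else "too big"
    print(f"70B {name}: ~{size:.0f} GB -> {verdict} in 64 GB RAM + 16 GB VRAM")
```

By this math a 70B at Q4/Q5 sits around 45-55 GB, so 64GB RAM with partial GPU offload is comfortable, while Q8 is borderline.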
2024-02-02T07:56:55
https://www.reddit.com/r/LocalLLaMA/comments/1agyjty/adding_adding_more_ram_cpu_offload/
abhiccc1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1agyjty
false
null
t3_1agyjty
/r/LocalLLaMA/comments/1agyjty/adding_adding_more_ram_cpu_offload/
false
false
default
2
null
$90-150k workstation options & performance estimates
1
Hello! Let’s say we have a budget in the range of $90-150k USD and the following targets: - running extra-large models, like Goliath or whatever comes next, to create synthetic datasets - fine-tuning 7/13/30B models with such datasets and then running extremely large datasets through inference, for example hundreds or thousands of articles, searching for insights - experiments with architectures and original models. It’s also possible to move the budget a bit up or down, but my initial ideas are: It’s enough for 8-10 A100/80GB. Should be good enough for tuning and running, but it's an older architecture, not so future-proof. Also enough for 3-4 H100/80GB, but is that good enough for inference and production usage, as a consulting server for example? Or a mixture, like 2 H100s, 4 A100/80s, and 4 Ada 6000s or A100/40s. That should cover the whole range of tasks, but it may be complicated to fit them all in one station. This topic is meant to be a discussion, so there are no wrong ideas. Thanks!
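For sizing the fine-tuning side, a common rule of thumb (an approximation, not an exact requirement) is ~16 bytes per parameter for full fine-tuning with Adam in mixed precision, before activations and batch size are counted:

```python
def full_finetune_vram_gb(n_params_b: float, bytes_per_param: float = 16.0) -> float:
    """Rough VRAM for full fine-tuning with Adam in mixed precision:
    ~2 B weights + 2 B grads + ~12 B master weights/optimizer moments per
    parameter. Activations and batch size add more on top of this."""
    return n_params_b * 1e9 * bytes_per_param / 1e9

for size_b in (7, 13, 30):
    need = full_finetune_vram_gb(size_b)
    gpus = -(-need // 80)  # ceiling-divide by one 80 GB card
    print(f"{size_b}B full fine-tune: ~{need:.0f} GB -> at least {gpus:.0f}x 80 GB GPUs")
```

By this estimate a 30B full fine-tune already wants roughly six 80GB cards, which is why the 8-10x A100/80GB option covers the training targets more comfortably than 3-4 H100s.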
2024-02-02T06:52:15
https://www.reddit.com/r/LocalLLaMA/comments/1agxmax/90150k_workstation_options_performance_estimates/
viktorhorou
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1agxmax
false
null
t3_1agxmax
/r/LocalLLaMA/comments/1agxmax/90150k_workstation_options_performance_estimates/
false
false
self
1
null
How to solve schema problems in text-to-sql bot?
1
[removed]
2024-02-02T06:29:12
https://www.reddit.com/r/LocalLLaMA/comments/1agx9as/how_to_solve_schema_problems_in_texttosql_bot/
HappyDataGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1agx9as
false
null
t3_1agx9as
/r/LocalLLaMA/comments/1agx9as/how_to_solve_schema_problems_in_texttosql_bot/
false
false
self
1
null
2XA6000s, NVLINK, 82Gb VRAM, 132GB RAM, Threadripper. What is the best I can run?
7
So I have just been handed a system that's waaaaayyy more powerful than anything I have ever worked with in my life! The only gripe (or not?) being it's running Ubuntu. I am new to Linux, but Bing Copilot is helping me with commands in the terminal; so far so good. My tasks involve creating workflows for my organisation, so I work across a stack of PostgreSQL, Alteryx, Python, and a couple of reporting tools. I really, really want to run local models for development. This is my ideal use case: set up a competent LLM to communicate with my VSCode conda environment (this I can change; I can do plain Python too if that's what's needed). Step 1: Read data (via CSV or database connection). Step 2: I give it a task, e.g. "Read the dataframe and create a neural net to read the data/images to predict/classify/comment on the input", or "Using webdriver, scrape the data from the website in this format (I provide the format)", or "Create an app using tkinter to perform given functions/steps". Step 3: It reads the task, goes about carrying it out, reads the output and/or errors, and corrects the code. Until the desired output is reached, it iterates between steps 2-3. Step 4: Stop. I think what I want is not there yet? Or is it? If so, can you please point me to it? Also, we are thinking of LoRAs, as we have a very unique business case for a chatbot where our customers can type in what they want (bookings/cancellations/inquiries) and then relevant requests can be handled by the admin to approve or reject. The idea our organisation is working towards is to replace bodies (humans) with automated processes. I honestly want to aid the org in this journey, so how do I go about it?
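The generate-execute-retry loop described in steps 1-4 can be sketched in a few lines. This is a minimal sketch, not a production agent; `llm_generate` is a placeholder for whatever local-model call you wire up:

```python
import subprocess
import sys
import tempfile

def run_python(code: str) -> tuple[bool, str]:
    # Execute generated code in a subprocess and capture stdout or the error.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run([sys.executable, path],
                          capture_output=True, text=True, timeout=60)
    ok = proc.returncode == 0
    return ok, proc.stdout if ok else proc.stderr

def solve(task: str, llm_generate, max_iters: int = 5) -> str:
    # llm_generate(prompt) -> code string; feed errors back until it works.
    feedback = ""
    for _ in range(max_iters):
        code = llm_generate(task + feedback)
        ok, output = run_python(code)
        if ok:
            return output
        feedback = f"\nPrevious attempt failed with:\n{output}\nFix the code."
    raise RuntimeError("no working solution found within max_iters")
```

Frameworks like Open Interpreter and AutoGen implement essentially this loop with more safety rails (sandboxing, token budgets, tool schemas).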
2024-02-02T04:51:57
https://www.reddit.com/r/LocalLLaMA/comments/1agvlzk/2xa6000s_nvlink_82gb_vram_132gb_ram_threadripper/
card_chase
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1agvlzk
false
null
t3_1agvlzk
/r/LocalLLaMA/comments/1agvlzk/2xa6000s_nvlink_82gb_vram_132gb_ram_threadripper/
false
false
default
7
null
Cabinet Designing AI
7
I'm a cabinet manufacturer and I wanted to see if you guys have any input on how I can build or train a model for cabinet-manufacturing-related optimization: finding errors in a design (example: a fridge door will hit an adjacent cabinet), or letting me tell it what reveal detail I want so it makes that adjustment throughout the design. I want to get a project going on this subject. Let me know if you guys have any suggestions.
2024-02-02T04:46:33
https://www.reddit.com/r/LocalLLaMA/comments/1agviet/cabinet_designing_ai/
pokeyou21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1agviet
false
null
t3_1agviet
/r/LocalLLaMA/comments/1agviet/cabinet_designing_ai/
false
false
self
7
null
Optimize parameters automatically
1
Like a calibration routine: an "optimize video settings" button that runs the model through a series of tests to calibrate the optimal parameter settings automatically. While I seem to have found pretty darn good settings for most of the models I use, I still feel like they're probably not as good as they could be, and it would be awesome to have this automated. On forums I just find "try with such-and-such settings." Has anyone tried doing anything like this yet?
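A minimal version of such a calibration routine is just a grid search over sampling parameters against a user-supplied scoring function (e.g. perplexity on held-out prompts, or a rubric score). The parameter names and toy scorer below are only illustrative:

```python
import itertools

def calibrate(evaluate, grid: dict) -> dict:
    """Brute-force search: score the model at every parameter combination
    via `evaluate(params) -> float` (higher is better) and keep the best."""
    names = list(grid)
    best_score, best_params = float("-inf"), None
    for combo in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, combo))
        score = evaluate(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params

# Toy evaluator that happens to prefer temperature 0.7 and top_p 0.9:
toy = lambda p: -abs(p["temperature"] - 0.7) - abs(p["top_p"] - 0.9)
print(calibrate(toy, {"temperature": [0.5, 0.7, 1.0], "top_p": [0.8, 0.9, 1.0]}))
# {'temperature': 0.7, 'top_p': 0.9}
```

The hard part in practice is the `evaluate` function: automatic metrics don't always track subjective chat quality, which is probably why forums fall back on "try such-and-such settings."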
2024-02-02T02:53:38
https://www.reddit.com/r/LocalLLaMA/comments/1agtec4/optimize_parameters_automatically/
skatardude10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1agtec4
false
null
t3_1agtec4
/r/LocalLLaMA/comments/1agtec4/optimize_parameters_automatically/
false
false
self
1
null
How do I know what context length a model was trained to work with? Sometimes it says on the model card, sometimes it doesn't?
2
How do I know what context length a model was trained to work with? Sometimes it says on the model card, sometimes it doesn't. I have no clue what context length these models are trained for, but I'd assume 8k. I'd assume 8k for any model without 16k, 32k, or 200k in the title of the model, but then Mixtral is trained for 32k despite not saying so in its title... Just throwing darts here, but do any of these have a context training target from their creator other than 8k? [https://huggingface.co/meta-llama/Llama-2-13b](https://huggingface.co/meta-llama/Llama-2-13b) [https://huggingface.co/cgato/Thespis-13b-DPO-v0.7](https://huggingface.co/cgato/Thespis-13b-DPO-v0.7?not-for-all-audiences=true) [https://huggingface.co/Undi95/UtopiaXL-13B](https://huggingface.co/Undi95/UtopiaXL-13B?not-for-all-audiences=true) [https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) [https://huggingface.co/Undi95/Toppy-M-7B](https://huggingface.co/Undi95/Toppy-M-7B?not-for-all-audiences=true) [https://huggingface.co/NeverSleep/Noromaid-7b-v0.1.1](https://huggingface.co/NeverSleep/Noromaid-7b-v0.1.1) [https://huggingface.co/KoboldAI/LLaMA2-13B-Estopia](https://huggingface.co/KoboldAI/LLaMA2-13B-Estopia) I assume these are all 8k, but really I have no clue; I figured it couldn't hurt to ask. I do wish it always just said on the model card.
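One place to check besides the model card is the repo's config.json: for most Hugging Face transformer architectures the trained context window is recorded as `max_position_embeddings` (a few older architectures use other keys; the fallbacks below are a sketch, not an exhaustive list):

```python
import json

def trained_context_length(config: dict) -> int:
    """Read the trained context window out of a parsed config.json.
    `max_position_embeddings` covers Llama-family models; `n_positions`
    is used by GPT-2-style configs."""
    for key in ("max_position_embeddings", "n_positions", "max_sequence_length"):
        if key in config:
            return config[key]
    raise KeyError("no context-length field found in config")

# Example mimicking a typical Llama-2 config.json (Llama-2 ships 4096):
llama2_config = json.loads('{"model_type": "llama", "max_position_embeddings": 4096}')
print(trained_context_length(llama2_config))  # 4096
```

In practice you open the model's config.json in the "Files" tab on Hugging Face and look for that field; note that RoPE-scaled fine-tunes may usably exceed the value written there.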
2024-02-02T01:44:41
https://www.reddit.com/r/LocalLLaMA/comments/1agrze7/how_do_i_know_what_context_length_a_model_was/
namad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1agrze7
false
null
t3_1agrze7
/r/LocalLLaMA/comments/1agrze7/how_do_i_know_what_context_length_a_model_was/
false
false
self
2
{'enabled': False, 'images': [{'id': 'pE6GJUq41oq1EzCnvrmhCeOw6qnjS9eXY80Fbug0Bjk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3YBxfjsmxDYr1Iq-fScvm1SSI3cHKN8vvDnmLMEj7wk.jpg?width=108&crop=smart&auto=webp&s=944c66cece987dbde8ce37f537cf2adfe6ca5f54', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3YBxfjsmxDYr1Iq-fScvm1SSI3cHKN8vvDnmLMEj7wk.jpg?width=216&crop=smart&auto=webp&s=4ef8a59247bf01676a7572bd29fd148a737cfd16', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3YBxfjsmxDYr1Iq-fScvm1SSI3cHKN8vvDnmLMEj7wk.jpg?width=320&crop=smart&auto=webp&s=2be2bbd83acb58a26b7f008a59d18357d4fd6bff', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3YBxfjsmxDYr1Iq-fScvm1SSI3cHKN8vvDnmLMEj7wk.jpg?width=640&crop=smart&auto=webp&s=2146bd5b069ee18c22f92bf12247ceca3c7b8d62', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3YBxfjsmxDYr1Iq-fScvm1SSI3cHKN8vvDnmLMEj7wk.jpg?width=960&crop=smart&auto=webp&s=81d58eac8efbb8d3e6b71689ee763caf9430a388', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3YBxfjsmxDYr1Iq-fScvm1SSI3cHKN8vvDnmLMEj7wk.jpg?width=1080&crop=smart&auto=webp&s=9ecd64da1ae7370facb86f358655b0eedf46e279', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3YBxfjsmxDYr1Iq-fScvm1SSI3cHKN8vvDnmLMEj7wk.jpg?auto=webp&s=cb293096bcfe3d229c5fd378c72cea971b056b57', 'width': 1200}, 'variants': {}}]}
[llama.cpp] Experimental LLaVA 1.6 Quants (34B and Mistral 7B)
49
For anyone looking for image-to-text, I got some experimental GGUF quants for LLaVA 1.6. They were prepared through [this hacky script](https://gist.github.com/cjpais/59fb7fcb5256ed0aea339b0a35eac899) and are likely missing some of the magic from the original model. Work is being done in [this PR](https://github.com/ggerganov/llama.cpp/pull/5267) by [cmp-nct](https://github.com/cmp-nct), who is trying to get those bits in. 7B Mistral: [https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf](https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf) 34B: [https://huggingface.co/cjpais/llava-v1.6-34B-gguf](https://huggingface.co/cjpais/llava-v1.6-34B-gguf) I've tested the quants only very lightly, but they seem to have much better performance than v1.5 to my eye. It'd be great to hear any feedback from those who want to play around and test them. I will try to update the HF repo with the latest quants as better scripts come out.
2024-02-02T01:42:13
https://www.reddit.com/r/LocalLLaMA/comments/1agrxnz/llamacpp_experimental_llava_16_quants_34b_and/
sipjca
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1agrxnz
false
null
t3_1agrxnz
/r/LocalLLaMA/comments/1agrxnz/llamacpp_experimental_llava_16_quants_34b_and/
false
false
self
49
{'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=108&crop=smart&auto=webp&s=9bcab7b79864ff27bf48116cb335a6f825bfb124', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=216&crop=smart&auto=webp&s=e4e925345605c644eebe8abd69916915fc4fbcf7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=320&crop=smart&auto=webp&s=614b06d5b40c890a59e355191a6e2d75cdf50789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=640&crop=smart&auto=webp&s=62ca4cb88917f17e7200a6f1c665b5d959713745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=960&crop=smart&auto=webp&s=c5f4a30974a8e6bad0d617a79935bc70c954e3e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=1080&crop=smart&auto=webp&s=476793be11eaac4604b6b0c938b45c7c3b52d450', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?auto=webp&s=9ae035fbdcd6bb503ab0b4a605b8db6de46647ee', 'width': 1280}, 'variants': {}}]}
Has anyone tried this new model from Open BMB?
15
Its stats seem really good for a 2B model: https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp32
2024-02-02T01:32:35
https://www.reddit.com/r/LocalLLaMA/comments/1agrqn6/has_anyone_tried_this_new_model_from_open_bmb/
Salguydudeman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1agrqn6
false
null
t3_1agrqn6
/r/LocalLLaMA/comments/1agrqn6/has_anyone_tried_this_new_model_from_open_bmb/
false
false
self
15
{'enabled': False, 'images': [{'id': 'Nu0BlctI_4Q9koxnjCH9pq4V1FKcukF6MuNW76Po9NA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gjVEqDUyrqdUbyW5-FOYjMn4ZBhlwklw_Wp2OSzEkz4.jpg?width=108&crop=smart&auto=webp&s=0776488b9e8350b6820f638954f1de6dfb18b7ca', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gjVEqDUyrqdUbyW5-FOYjMn4ZBhlwklw_Wp2OSzEkz4.jpg?width=216&crop=smart&auto=webp&s=ed8c2f36dc32c981b1a0068503bdebb2d748feb2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gjVEqDUyrqdUbyW5-FOYjMn4ZBhlwklw_Wp2OSzEkz4.jpg?width=320&crop=smart&auto=webp&s=aeaefcd648a12995f38e8b2d7b8cb8997d0c1218', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gjVEqDUyrqdUbyW5-FOYjMn4ZBhlwklw_Wp2OSzEkz4.jpg?width=640&crop=smart&auto=webp&s=334072a41cdd98b2e282e43c85cc004cd857706d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gjVEqDUyrqdUbyW5-FOYjMn4ZBhlwklw_Wp2OSzEkz4.jpg?width=960&crop=smart&auto=webp&s=fda33f482b371b62308067f6a44485330bdc2fa6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gjVEqDUyrqdUbyW5-FOYjMn4ZBhlwklw_Wp2OSzEkz4.jpg?width=1080&crop=smart&auto=webp&s=20bcf7bc52bfca099579e92a63c36995e947c7cf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gjVEqDUyrqdUbyW5-FOYjMn4ZBhlwklw_Wp2OSzEkz4.jpg?auto=webp&s=fa29a98cb18067a02cf25ed2c91c9ec557441c76', 'width': 1200}, 'variants': {}}]}
Has anyone encountered Mistral’s tendency to use the escaped underscore token \\_?
2
I am trying out TaskWeaver, which instructs models to respond in JSON. For keys that contain an underscore, Mistral returns token 14048, which is the escaped underscore (\\_). When this is parsed it becomes \_, so when TaskWeaver tries to read the response it does not contain the keys it's looking for. Typically with text-generation-webui I can add 14048 to the banned tokens list in generation parameters and the model will use the 28739 token (_) instead. However, when using the OpenAI extension, blocking doesn't work.
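For reference, banning a token is conceptually just forcing its logit to negative infinity before sampling, so softmax assigns it zero probability. This is a minimal sketch of the mechanism, not text-generation-webui's actual implementation:

```python
import math

def ban_tokens(logits: list[float], banned_ids: list[int]) -> list[float]:
    # A -inf logit becomes probability 0 after softmax, so sampling
    # (greedy or stochastic) can never pick that token.
    out = list(logits)
    for tid in banned_ids:
        out[tid] = -math.inf
    return out

logits = [0.1, 2.0, 1.5]        # toy vocabulary of three tokens
print(ban_tokens(logits, [1]))  # token 1 (say, the escaped "\_") is now unpickable
```

When blocking happens server-side like this, it has to be applied inside the generation loop; an OpenAI-style API layer that doesn't forward the ban list to the sampler would explain the behavior described above.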
2024-02-02T01:15:06
https://www.reddit.com/r/LocalLLaMA/comments/1agrddy/has_anyone_encountered_mistrals_tendency_to_use/
aaronr_90
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1agrddy
false
null
t3_1agrddy
/r/LocalLLaMA/comments/1agrddy/has_anyone_encountered_mistrals_tendency_to_use/
false
false
self
2
null
What can I run?
1
So recently I got a gaming laptop for my work, which involves graphic design and 3D stuff (not the focus, however). It's a Lenovo Legion Pro 7i: 4090L, i9-13900HX processor (24 cores, 32 threads), 32 GB RAM, 16 GB VRAM (bit of a ripoff, I know). I originally bought it so I wouldn't have to worry about an upgrade for a while, but soon after I discovered LLMs. Understanding that I had a decent laptop, I was excited to get into it. I'm still new to it all and don't understand much yet, so I wanted to ask: with my computer, what's the best I could run? What's the MOST I could run? And finally, with the ability to run 100% GPU, 100% CPU, or a mix of both, what format should I consider sticking with for my specific computer?
2024-02-02T01:13:30
https://www.reddit.com/r/LocalLLaMA/comments/1agrc81/what_can_i_run/
IZA_does_the_art
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1agrc81
false
null
t3_1agrc81
/r/LocalLLaMA/comments/1agrc81/what_can_i_run/
false
false
self
1
null
Any coding LLM better than DeepSeek coder?
42
Curious to know if there's any coding LLM that understands language very well and also has strong coding ability on par with or surpassing DeepSeek? Talking about 7B models.
2024-02-02T01:10:15
https://www.reddit.com/r/LocalLLaMA/comments/1agr9qo/any_coding_llm_better_than_deepseek_coder/
plsendfast
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1agr9qo
false
null
t3_1agr9qo
/r/LocalLLaMA/comments/1agr9qo/any_coding_llm_better_than_deepseek_coder/
false
false
self
42
null