| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Having a hard time trying to uncensor custom finetuned DeepSeek | 1 | Hi all,
I have fine-tuned a DeepSeek 7B Coder model, but for some reason the replies are heavily censored. I am not sure what I am doing wrong here. Any suggestions? It keeps throwing up the ‘I am sorry, I can’t help with this because this is not a Python-related task’ blah blah blah
Am I supposed to follow their pre-prompt exactly? How does this work? | 2024-02-02T00:54:30 | https://www.reddit.com/r/LocalLLaMA/comments/1agqxta/having_a_hard_time_trying_to_uncensor_custom/ | plsendfast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agqxta | false | null | t3_1agqxta | /r/LocalLLaMA/comments/1agqxta/having_a_hard_time_trying_to_uncensor_custom/ | false | false | self | 1 | null |
Is there a way to have the equivalent of a GPT, including reference files, on a local LLM? | 1 | [removed] | 2024-02-02T00:50:56 | https://www.reddit.com/r/LocalLLaMA/comments/1agqv2m/is_there_a_way_to_have_the_equivalent_of_a_gpt/ | Paganator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agqv2m | false | null | t3_1agqv2m | /r/LocalLLaMA/comments/1agqv2m/is_there_a_way_to_have_the_equivalent_of_a_gpt/ | false | false | default | 1 | null |
Latest Llama.cpp build seems broken? (Apple Silicon/GGUF) | 7 | I recently updated my Llama.cpp installation to try out the latest Code Llama and Miqu (in case it could help). I executed a git pull followed by make to ensure everything was up-to-date. However, I've encountered unexpected performance issues since the update.
Symptoms:
-The models are significantly slower and don't seem to function correctly.
-When monitoring system resources, there's a noticeable delay in the model starting to load into memory. Additionally, it seems like the model isn't fully loading, as it occupies much less memory than expected.
-I observed that the model gets dropped and reloaded into memory repeatedly, which is highly unusual.
-The GPU utilization is minimal, whereas the CPU usage spikes to its maximum capacity.
Thinking it might be an issue with my setup, I performed a system reboot and verified all related packages (like numpy, torch, etc.) were properly installed and up to date. Despite these efforts, the problem persisted.
In an attempt to resolve the issue, I reverted to an older branch (git checkout to a previous commit), which fixed the problem. This leads me to believe the issue is specifically related to the latest updates.
I'm not deeply technical and am struggling to pinpoint the exact cause. Has anyone else experienced similar issues with the latest updates? Any insights or suggestions would be greatly appreciated. I'm curious to know if this is an isolated incident or if others are facing the same challenges.
Thank you for your time and help! | 2024-02-02T00:34:17 | https://www.reddit.com/r/LocalLLaMA/comments/1agqi4f/lastest_llamacpp_build_seem_broken_apple/ | mantafloppy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agqi4f | false | null | t3_1agqi4f | /r/LocalLLaMA/comments/1agqi4f/lastest_llamacpp_build_seem_broken_apple/ | false | false | self | 7 | null |
Intro to Open Source AI [Talk] - OpenHermes for everything. | 1 | [removed] | 2024-02-02T00:23:41 | https://www.reddit.com/r/LocalLLaMA/comments/1agq9rn/intro_to_open_source_ai_talk_openhermes_for/ | kushalgoenka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agq9rn | false | null | t3_1agq9rn | /r/LocalLLaMA/comments/1agq9rn/intro_to_open_source_ai_talk_openhermes_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'WvUqoEm844X1lon1OK7LHWVki0vSis61eSlPh5z43YU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Ew0UkDlqBq-My886an3tNXECxUSnsLfCOT6l87ljIk8.jpg?width=108&crop=smart&auto=webp&s=e8e9a50ed42cefbbb9eefea332571fc545559880', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Ew0UkDlqBq-My886an3tNXECxUSnsLfCOT6l87ljIk8.jpg?width=216&crop=smart&auto=webp&s=968e8a9d45b292e67de44ac3f44a143cc49a9117', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Ew0UkDlqBq-My886an3tNXECxUSnsLfCOT6l87ljIk8.jpg?width=320&crop=smart&auto=webp&s=c410016f4013c4358e46f9f31cde3832b3fb654e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Ew0UkDlqBq-My886an3tNXECxUSnsLfCOT6l87ljIk8.jpg?auto=webp&s=8de3dd6f18a09f2d51b28ef78135814afe5f923b', 'width': 480}, 'variants': {}}]} |
HW for LLaMA | 1 | Is there anyone in the sub who is using the MSI system in the link, or a similar gaming PC, for AI?
[https://www.newegg.com/msi-infinite-rs-14nui9-630us/p/3D5-000G-045S0?Item=9SIAFXNK7F8736](https://www.newegg.com/msi-infinite-rs-14nui9-630us/p/3D5-000G-045S0?Item=9SIAFXNK7F8736)
I have a budget of 5K from my employer to do a proof of concept for local/self hosted code assist. Mac studio M2 ultra with 192GB unified memory and 2TB is 6K. | 2024-02-01T23:53:31 | https://www.reddit.com/r/LocalLLaMA/comments/1agpl8c/hw_for_llama/ | seekingSupB | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agpl8c | false | null | t3_1agpl8c | /r/LocalLLaMA/comments/1agpl8c/hw_for_llama/ | false | false | self | 1 | null |
How to answer questions involving aggregation in a RAG system? | 1 | E.g. "How many times has [insert incident] occurred?", "What is the most common cause of [insert problem]?"
This is especially challenging because RAG only returns the top-k items, which limits the LLM's capacity to aggregate.
One way I thought of going about it was:
1. Ingest data with metadata.
2. Filter all instances of the incident mentioned in the query using the metadata
3. Compress context and pass it to the LLM
This doesn't work every time, and even when it works, the RAG system starts to perform worse on non-aggregation questions. Is there a better/recommended way to do this? | 2024-02-01T23:14:33 | https://www.reddit.com/r/LocalLLaMA/comments/1agop4t/how_to_answer_questions_involving_aggregation_in/ | artemiswolves | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agop4t | false | null | t3_1agop4t | /r/LocalLLaMA/comments/1agop4t/how_to_answer_questions_involving_aggregation_in/ | false | false | self | 1 | null |
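One way to sketch the filter-then-aggregate idea from the post above: do the counting in ordinary code over a metadata filter, and only hand the LLM the (compressed) matching snippets. Everything here is illustrative; `incident_type` is a hypothetical metadata field.

```python
# Toy corpus ingested with metadata; "incident_type" is a made-up field name.
docs = [
    {"text": "Pump P-3 overheated during the night shift.", "incident_type": "overheating"},
    {"text": "Overheating was reported again on line 2.",   "incident_type": "overheating"},
    {"text": "A coolant leak was found near valve V-7.",    "incident_type": "leak"},
]

def build_aggregation_prompt(incident_type, docs, max_context_chars=500):
    hits = [d for d in docs if d["incident_type"] == incident_type]   # metadata filter, not top-k retrieval
    count = len(hits)                                                 # counting happens in code, not in the LLM
    context = " ".join(d["text"] for d in hits)[:max_context_chars]   # crude context compression
    return (f"The incident '{incident_type}' occurred {count} times. "
            f"Summarise the occurrences below.\n{context}")

print(build_aggregation_prompt("overheating", docs))
```

Routing also matters: classifying the query first (aggregation vs. normal lookup) lets non-aggregation questions keep using the usual top-k vector retrieval, which may avoid the regression described above.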
4060ti 16gb vs 4070 12gb | 4 | I currently have a 4060ti 16gb installed but am considering buying a second GPU to run larger models. Would it be better to get a 4070 as my primary and the 4060 as secondary or to get another 4060 with the extra 4gb of VRAM.
I don't understand the internals enough to know if the different bus width and speed would be better, worse, or negligible in this scenario. Also, is the 4GB difference enough to matter in this case? Looking for thoughts on the best approach or other recommendations. | 2024-02-01T23:04:09 | https://www.reddit.com/r/LocalLLaMA/comments/1agogj6/4060ti_16gb_vs_4070_12gb/ | it_lackey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agogj6 | false | null | t3_1agogj6 | /r/LocalLLaMA/comments/1agogj6/4060ti_16gb_vs_4070_12gb/ | false | false | self | 4 | null |
Russian LLM Silicon-Masha-7B | 18 | Hi everyone! Please evaluate the merge model I made. It is aimed mostly at RP/ERP in Russian. In my experience it copes well with tasks in Russian. Don't judge harshly; if something is wrong, write)))) it's my first time doing this).
I use:
mergekit
SanjiWatsuki/Kunoichi-DPO-7B
MexIvanov/zephyr-python-ru-merged
IlyaGusev/saiga_mistral_7b_merged
https://huggingface.co/LakoMoor/Silicon-Masha-7B
https://huggingface.co/LakoMoor/Silicon-Masha-7B-GGUF | 2024-02-01T22:48:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ago2wq/russian_llm_siliconmasha7b/ | Substantial-Club-582 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ago2wq | true | null | t3_1ago2wq | /r/LocalLLaMA/comments/1ago2wq/russian_llm_siliconmasha7b/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'Z55lDPW0Zno_RD831mjyHn7CxL2xHsjJX8hNuKnyFnI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6XdwsQv4gTsxvvi61e4y8_46iwknZmxfSq_talhpVio.jpg?width=108&crop=smart&auto=webp&s=2eb4e4283e5da3c418d51754d82994b2105d63e4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6XdwsQv4gTsxvvi61e4y8_46iwknZmxfSq_talhpVio.jpg?width=216&crop=smart&auto=webp&s=627fafb139d5073bbb3840dcbd9ddffd8ec6f7bb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6XdwsQv4gTsxvvi61e4y8_46iwknZmxfSq_talhpVio.jpg?width=320&crop=smart&auto=webp&s=185e208e9839da4a73326b7d66566f75cea85b84', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6XdwsQv4gTsxvvi61e4y8_46iwknZmxfSq_talhpVio.jpg?width=640&crop=smart&auto=webp&s=2116a61e60fceea58adbfdd0ccada53e7c6b6373', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6XdwsQv4gTsxvvi61e4y8_46iwknZmxfSq_talhpVio.jpg?width=960&crop=smart&auto=webp&s=4a03886c4709cdbc3e8a2fb81ec3ad2431919016', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6XdwsQv4gTsxvvi61e4y8_46iwknZmxfSq_talhpVio.jpg?width=1080&crop=smart&auto=webp&s=b63cbc22cebb3181244049cf7bbbdc6ab8c4f9df', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6XdwsQv4gTsxvvi61e4y8_46iwknZmxfSq_talhpVio.jpg?auto=webp&s=0f2655778efbf2ebf1f201f3318597783eca3c31', 'width': 1200}, 'variants': {}}]} |
Introducing LoRAX v0.7: Mixture of LoRAs (linear, TIES, DARE) per request | 46 | [https://github.com/predibase/lorax/releases/tag/v0.7.0](https://github.com/predibase/lorax/releases/tag/v0.7.0)
Hey everyone!
As a regular lurker, sometimes commenter here, I know that the idea of training specialized LoRAs and mixing them together a bit like a post-training MoE comes up from time to time.
As such, I'm pretty excited to share our latest release for the [LoRAX](https://github.com/predibase/lorax/tree/main) project, which adds the ability to do this LoRA merging / mixing just-in-time for each request. The approaches on offer will be familiar to those who use [mergekit](https://github.com/cg123/mergekit), but with the added benefit of being able to do this in a parameter-efficient way through LoRA adapters.
If you're not familiar with LoRAX, it's an open source project for hosting many fine-tuned LoRAs on top of a common base model as efficiently and conveniently as possible.
Benefits / example use cases for the new LoRA merging:
✅ Mix LoRAs to improve the quality of the response (Mixture of LoRAs)
✅ No need to pick the right LoRA by hand, just provide multiple options and let LoRAX figure it out
✅ Combine with a ranker and merge top-k LoRAs to scale to a whole library of LoRAs
The latter idea in particular is one that I find pretty fascinating, and we're working to build natively into LoRAX through adapters that improve embedding quality.
Check the link to the full release notes at the top if you'd like to learn more! | 2024-02-01T22:37:36 | https://www.reddit.com/r/LocalLLaMA/comments/1agntgh/introducing_lorax_v07_mixture_of_loras_linear/ | SiliconSynapsed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agntgh | false | null | t3_1agntgh | /r/LocalLLaMA/comments/1agntgh/introducing_lorax_v07_mixture_of_loras_linear/ | false | false | self | 46 | {'enabled': False, 'images': [{'id': 'vjqNUbWetJb9mt5iXMhoM_yvyA-4cuTXWFgyodBaFYI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/duVfcY0Bc_L32yGwchHkWOAmK0O0j60fXFtPTugRT8k.jpg?width=108&crop=smart&auto=webp&s=930c722ae9bf1a8748be75bc3a0265e11b712402', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/duVfcY0Bc_L32yGwchHkWOAmK0O0j60fXFtPTugRT8k.jpg?width=216&crop=smart&auto=webp&s=379557070609ff6f8ecea443433fb10020b124d6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/duVfcY0Bc_L32yGwchHkWOAmK0O0j60fXFtPTugRT8k.jpg?width=320&crop=smart&auto=webp&s=20cac299b3aa9f391be2f9206e7f9a099700dc31', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/duVfcY0Bc_L32yGwchHkWOAmK0O0j60fXFtPTugRT8k.jpg?width=640&crop=smart&auto=webp&s=9bc159f318a7ce4d319574973a5cba02159bf973', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/duVfcY0Bc_L32yGwchHkWOAmK0O0j60fXFtPTugRT8k.jpg?width=960&crop=smart&auto=webp&s=cceb3e4f6278e7ee0bbb3e01abc2ea794796f2f4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/duVfcY0Bc_L32yGwchHkWOAmK0O0j60fXFtPTugRT8k.jpg?width=1080&crop=smart&auto=webp&s=1f46854af0b0b0b1dfa12370d335fbc3c4646adf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/duVfcY0Bc_L32yGwchHkWOAmK0O0j60fXFtPTugRT8k.jpg?auto=webp&s=3098a2292d6418f9661922c2b49069850081a3a8', 'width': 1200}, 'variants': {}}]} |
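For intuition, linear LoRA merging boils down to a weighted sum of low-rank adapter deltas applied on top of the shared base weights. A minimal NumPy sketch of that idea (illustrative only, not LoRAX's actual implementation; TIES and DARE additionally sparsify deltas and resolve sign conflicts before summing):

```python
import numpy as np

def merge_linear(W_base, adapters, weights):
    """Weighted (linear) combination of several LoRA deltas over one shared base weight."""
    W = W_base.copy()
    for (B, A), w in zip(adapters, weights):
        W += w * (B @ A)                 # each adapter contributes its low-rank update
    return W

d, k, r = 8, 8, 2
rng = np.random.default_rng(1)
W_base = rng.normal(size=(d, k))
adapters = [(rng.normal(size=(d, r)), rng.normal(size=(r, k))) for _ in range(2)]
merged = merge_linear(W_base, adapters, weights=[0.6, 0.4])
print(merged.shape)                      # (8, 8)
```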
Help with cmake on llama.cpp | 1 | [removed] | 2024-02-01T21:38:08 | https://www.reddit.com/r/LocalLLaMA/comments/1agme95/help_with_cmake_on_llamacpp/ | ChildOf7Sins | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agme95 | false | null | t3_1agme95 | /r/LocalLLaMA/comments/1agme95/help_with_cmake_on_llamacpp/ | false | false | self | 1 | null |
MoE-LLaVA: Mixture of Experts for Large Vision-Language Models - Peking University 2024 - MoE-LLaVA-3B demonstrates performance comparable to the LLaVA-1.5-7B ! | 101 | Paper: [https://arxiv.org/abs/2401.15947v1](https://arxiv.org/abs/2401.15947v1)
Github: [https://github.com/PKU-YuanGroup/MoE-LLaVA](https://github.com/PKU-YuanGroup/MoE-LLaVA)
Abstract:
>For **Large Vision-Language Models (LVLMs)**, scaling the model can effectively improve performance. However, expanding model parameters significantly increases the training and inference costs, as all model parameters are activated for each token in the calculation. In this work, we propose a novel training strategy **MoE-tuning for LVLMs**, which can construct a **sparse model** with an outrageous number of parameters but a constant computational cost, and effectively addresses the performance degradation typically associated with multi-modal learning and model sparsity. Furthermore, we present the MoE-LLaVA framework, a MoE-based sparse LVLM architecture. This framework **uniquely activates only the top-k experts through routers during deployment, keeping the remaining experts inactive.** Our extensive experiments highlight the excellent capabilities of MoE-LLaVA in visual understanding and its potential to reduce hallucinations in model outputs. **Remarkably, with just 3 billion sparsely activated parameters,** **MoE-LLaVA demonstrates performance comparable to the LLaVA-1.5-7B** on various visual understanding datasets and even surpasses the LLaVA-1.5-13B in object hallucination benchmarks. Through MoE-LLaVA, we aim to establish a baseline for sparse LVLMs and provide **valuable insights for future research in developing more efficient and effective multi-modal learning systems.**
https://preview.redd.it/rdi5kw0yl1gc1.jpg?width=803&format=pjpg&auto=webp&s=b71b6d0415c640970b68ddfa43b7ae2d15d31675
https://preview.redd.it/qjdqay0yl1gc1.jpg?width=797&format=pjpg&auto=webp&s=46cc5848e208c00295dd74c15a668904c2d6c6da
https://preview.redd.it/meolcy0yl1gc1.jpg?width=1321&format=pjpg&auto=webp&s=488475751c779f52c35d42709282ce01fd5f7ffa
https://preview.redd.it/bawv8w0yl1gc1.jpg?width=1181&format=pjpg&auto=webp&s=2d435f2c1e085fbe4f3b070bfc1e0a1e44775f00 | 2024-02-01T21:36:39 | https://www.reddit.com/r/LocalLLaMA/comments/1agmd0t/moellava_mixture_of_experts_for_large/ | Singularian2501 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agmd0t | false | null | t3_1agmd0t | /r/LocalLLaMA/comments/1agmd0t/moellava_mixture_of_experts_for_large/ | false | false | 101 | null | |
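As a rough illustration of the "activates only the top-k experts through routers" idea from the abstract above, here is a toy gating sketch in plain NumPy (toy dimensions and dense toy experts, not the MoE-LLaVA code):

```python
import numpy as np

def topk_moe(x, w_gate, experts, k=2):
    """Route a single token through only its top-k experts (toy dense experts)."""
    logits = x @ w_gate                         # router scores, one per expert
    chosen = np.argsort(logits)[-k:]            # indices of the k best-scoring experts
    probs = np.exp(logits[chosen] - logits[chosen].max())
    probs /= probs.sum()                        # softmax over the selected experts only
    out = np.zeros_like(x)
    for p, idx in zip(probs, chosen):
        out += p * experts[idx](x)              # the other experts are never evaluated
    return out

d, num_experts = 16, 4
rng = np.random.default_rng(0)
w_gate = rng.normal(size=(d, num_experts))
weights = [rng.normal(size=(d, d)) for _ in range(num_experts)]
experts = [lambda v, W=W: v @ W for W in weights]
print(topk_moe(rng.normal(size=d), w_gate, experts).shape)   # (16,)
```

The compute savings come from the loop only touching the chosen experts, even though all expert parameters still have to be resident in memory.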
KVQuant: Towards 10 Million Context Length (for LLaMA and Mistral) | 57 |
📌 The existing problem - LLMs are seeing growing use for applications such as document analysis and summarization which require large context windows, and with these large context windows KV cache activations surface as the dominant contributor to memory consumption during inference. Quantization is a promising approach for compressing KV cache activations; however, existing solutions fail to represent activations accurately in ultra-low precisions, such as sub-4-bit.
📌 This paper presents KVQuant, which addresses this problem by incorporating novel methods for quantizing cached KV activations, including:
👉 (i) Per-Channel Key Quantization, where we adjust the dimension along which we quantize the Key activations to better match the distribution;
👉 (ii) Pre-RoPE Key Quantization, where we quantize Key activations before the rotary positional embedding to mitigate its impact on quantization;
👉 (iii) Non-Uniform KV Cache Quantization, where we derive per-layer sensitivity-weighted non-uniform datatypes that better represent the distributions;
👉 (iv) Per-Vector Dense-and-Sparse Quantization, where we isolate outliers separately for each vector to minimize skews in quantization ranges; and
👉 (v) Q-Norm, where we normalize quantization centroids in order to mitigate distribution shift, providing additional benefits for 2-bit quantization.
📌 By applying this method to the LLaMA, LLaMA-2, and Mistral models, the paper achieves <0.1 perplexity degradation with 3-bit quantization on both Wikitext-2 and C4, outperforming existing approaches.
Not my text, link to original tweet :
https://twitter.com/rohanpaul_ai/status/1753142233531088957?t=Msf-uT4Mdc1A0T-tuTiM_A&s=19 | 2024-02-01T21:24:30 | https://www.reddit.com/r/LocalLLaMA/comments/1agm2u8/kvquant_towards_10_million_context_length_for/ | CedricLimousin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agm2u8 | false | null | t3_1agm2u8 | /r/LocalLLaMA/comments/1agm2u8/kvquant_towards_10_million_context_length_for/ | false | false | self | 57 | {'enabled': False, 'images': [{'id': 'pSG-YT1-TF1HTxEQOiuE1sdZD1pk3i1PmxFMSo34IrY', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/f5OyMvzgvKJrRgTzgS2191KxqysmosPo8D8jMohtZoQ.jpg?width=108&crop=smart&auto=webp&s=689c48320b9bb749a0df4020375bcdf613160bbd', 'width': 108}], 'source': {'height': 60, 'url': 'https://external-preview.redd.it/f5OyMvzgvKJrRgTzgS2191KxqysmosPo8D8jMohtZoQ.jpg?auto=webp&s=ccd2364d0b383a282e04c9daa475d9517a94279f', 'width': 140}, 'variants': {}}]} |
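To make point (i) above, Per-Channel Key Quantization, a bit more concrete: it picks one scale per channel (column) of the Key matrix rather than one per token. A toy uniform-quantization sketch in NumPy; the paper's actual method uses non-uniform, sensitivity-weighted datatypes and dense-and-sparse outlier handling, so this only shows the per-channel part:

```python
import numpy as np

def quantize_keys_per_channel(K, n_bits=3):
    """One scale/zero-point per channel (column) of the Key matrix, not per token (row)."""
    qmax = 2 ** n_bits - 1
    kmin = K.min(axis=0, keepdims=True)                  # shape (1, channels)
    scale = (K.max(axis=0, keepdims=True) - kmin) / qmax
    scale = np.where(scale == 0, 1.0, scale)             # guard flat channels
    q = np.round((K - kmin) / scale).astype(np.uint8)    # integer codes
    return q, scale, kmin

def dequantize(q, scale, kmin):
    return q * scale + kmin

K = np.random.randn(128, 64).astype(np.float32)          # toy (tokens, channels) Key cache
q, scale, kmin = quantize_keys_per_channel(K)
print("max abs error:", np.abs(dequantize(q, scale, kmin) - K).max())
```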
PDF to Q and A | 2 | I need to convert a large PDF file to Q&A or a quiz-like form locally.
I tried both GPT4All and PrivateGPT, but didn't get good results (using Dolphin Mixtral, Mistral, Phi-2, Llama).
I found a model named "pdf to quiz" on Hugging Face, but couldn't find any documentation or guide for using it.
If anyone has tried a way and can share a successful workflow for this use case, that would be great.
Thanks | 2024-02-01T21:07:45 | https://www.reddit.com/r/LocalLLaMA/comments/1agloq0/pdf_to_q_and_a/ | drhafezzz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agloq0 | false | null | t3_1agloq0 | /r/LocalLLaMA/comments/1agloq0/pdf_to_q_and_a/ | false | false | self | 2 | null |
Claude instant | 1 | [removed] | 2024-02-01T20:42:34 | https://www.reddit.com/r/LocalLLaMA/comments/1agl2yo/claude_instant/ | Creative_Bottle_3225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agl2yo | false | null | t3_1agl2yo | /r/LocalLLaMA/comments/1agl2yo/claude_instant/ | false | false | self | 1 | null |
Is my understanding of LoRA correct? | 33 | When training a LoRA or QLoRA, we take the full weight matrix of the original.
+---+---+---+
| 1 | 2 | 3 |
+---+---+---+
| 2 | 4 | 6 |
+---+---+---+
| 3 | 6 | 9 |
+---+---+---+
which gets decomposed to something like
+---+
| x |
+---+
| y |
+---+
| z |
+---+
X
+---+---+---+
| u | v | w |
+---+---+---+
These weights (x, y, z and u, v, w) get updated during training for the task at hand. Post-training, you multiply these two vectors to form a matrix the same size as the original:
+----+----+----+
| xu | xv | xw |
+----+----+----+
| yu | yv | yw |
+----+----+----+
| zu | zv | zw |
+----+----+----+
which, during inference time, gets summed with the original?
+------+------+------+
| xu+1 | xv+2 | xw+3 |
+------+------+------+
| yu+2 | yv+4 | yw+6 |
+------+------+------+
| zu+3 | zv+6 | zw+9 |
+------+------+------+ | 2024-02-01T20:02:58 | https://www.reddit.com/r/LocalLLaMA/comments/1agk4l4/is_my_understanding_of_lora_correct/ | Primary_Importance_7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agk4l4 | false | null | t3_1agk4l4 | /r/LocalLLaMA/comments/1agk4l4/is_my_understanding_of_lora_correct/ | false | false | self | 33 | null |
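A minimal NumPy check of the rank-1 picture in the post above; a toy sketch only, since real LoRA uses a rank r > 1, an alpha/r scaling factor, and usually keeps the delta separate rather than merging it during training:

```python
import numpy as np

W0 = np.array([[1., 2., 3.],
               [2., 4., 6.],
               [3., 6., 9.]])            # frozen base weight matrix
B = np.random.randn(3, 1) * 0.01         # the "x, y, z" column (trainable)
A = np.random.randn(1, 3) * 0.01         # the "u, v, w" row (trainable)

delta = B @ A                            # rank-1 update, same shape as W0
W_eff = W0 + delta                       # what the layer effectively computes with

x = np.random.randn(3)
# The adapter can be applied without ever forming W_eff explicitly:
assert np.allclose(W_eff @ x, W0 @ x + B @ (A @ x))
print(W_eff)
```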
Mixtral settings on 8gb Vram + 32 gb Ram? | 4 | I have a laptop with an rtx 2080 which has 8gb of vram and 32 gb of system memory, I cannot seem to get mixtral to work, what's the recommended way of getting it to work? Also would I be able to get a decent speed? | 2024-02-01T19:50:17 | https://www.reddit.com/r/LocalLLaMA/comments/1agjtj2/mixtral_settings_on_8gb_vram_32_gb_ram/ | rookiewtf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agjtj2 | false | null | t3_1agjtj2 | /r/LocalLLaMA/comments/1agjtj2/mixtral_settings_on_8gb_vram_32_gb_ram/ | false | false | self | 4 | null |
I'm not impressed with Miqu 70B. Those of you who are: why? | 1 | [removed] | 2024-02-01T19:38:05 | https://www.reddit.com/r/LocalLLaMA/comments/1agjj8a/im_not_impressed_with_miqu_70b_those_of_you_who/ | Arcturus17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agjj8a | false | null | t3_1agjj8a | /r/LocalLLaMA/comments/1agjj8a/im_not_impressed_with_miqu_70b_those_of_you_who/ | false | false | self | 1 | null |
LocalChat: Run generative AI locally with zero-config | 19 | Hello everyone!
I discovered this community a few months ago, and I am absolutely impressed by the work you all do and how quickly you find and publicize good, new models here on the subreddit. I've been able to take advantage of your knowledge many times during this time.
Today, I want to do some self-promotion, specifically introduce an application that I developed over the past month: LocalChat. You can find it on GitHub: https://github.com/nathanlesage/local-chat .
The goal of LocalChat is simple: Enable everyone to just run generative AI on their own computers with a FOSS application and without having to worry about configuration, setting up a Python environment, or owning an expensive GPU. It's aimed at making it simple for everyone to just load some model and chat with it.
Now, why am I posting this to a community of enthusiasts who already have tons of technical knowledge (likely more than me) when it comes to these models? Two reasons: First, we all know non-technical people who may be intrigued by running AI locally, but who aren't interested in downloading either proprietary software or a GUI that requires running a local server. This means that with LocalChat you can get friends, family, and colleagues to run (your) models without any effort.
Second, if you also want to just run something without having to set up too much aside from tinkering with the models in more depth (e.g., on your work computer), you may be interested in the app itself as well.
Therefore, I'm just going to leave this here. If you want to be able to control everything in the models, then this app is probably not for you, and if you're happy with apps like LMChat, it's probably not for you either, and that's perfectly fine. I don't want to make any money with it or gain popularity; it's only a hobby project of mine, so use it, spread the word, or ignore it if it doesn't have any use for you!
If you have any questions, just ask.
That'll be all — thanks again for all that you're doing for local generative AI! | 2024-02-01T19:33:30 | https://www.reddit.com/r/LocalLLaMA/comments/1agjfbj/localchat_run_generative_ai_locally_with/ | nathan_lesage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agjfbj | false | null | t3_1agjfbj | /r/LocalLLaMA/comments/1agjfbj/localchat_run_generative_ai_locally_with/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'kbryJ2pfURRuXuC7J17xqeeealUfaEqFDFO-NOEBAa0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GUhmHtkwkXsrMQFNfyFjhaiFpRSNbSByIHy8ewiWYKM.jpg?width=108&crop=smart&auto=webp&s=909ed794f604aa1ad155a680bc79303fe05214e4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GUhmHtkwkXsrMQFNfyFjhaiFpRSNbSByIHy8ewiWYKM.jpg?width=216&crop=smart&auto=webp&s=a8632ae23040cc8ba5caba2035d8a82a553fe201', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GUhmHtkwkXsrMQFNfyFjhaiFpRSNbSByIHy8ewiWYKM.jpg?width=320&crop=smart&auto=webp&s=ee99c6bf7121d6c0204f8e8e53276c517240cb0e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GUhmHtkwkXsrMQFNfyFjhaiFpRSNbSByIHy8ewiWYKM.jpg?width=640&crop=smart&auto=webp&s=fe9d377280788df1ccbc8b418c2b33602952de80', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GUhmHtkwkXsrMQFNfyFjhaiFpRSNbSByIHy8ewiWYKM.jpg?width=960&crop=smart&auto=webp&s=893cf6a48d8a6791284dee6120738ed0339104dd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GUhmHtkwkXsrMQFNfyFjhaiFpRSNbSByIHy8ewiWYKM.jpg?width=1080&crop=smart&auto=webp&s=1572b846795905a15c5507c201186e8eb7477505', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GUhmHtkwkXsrMQFNfyFjhaiFpRSNbSByIHy8ewiWYKM.jpg?auto=webp&s=ed550ff8490a9ab597322134d8c47d6b1603ea10', 'width': 1200}, 'variants': {}}]} |
Those who subscribe to Mistral to chat with it- what client are you using? | 24 | So after the whole Miqu thing, I decided to toss some support towards mistral by subbing... but I didn't quite find what I expected. I kind of figured it would be like ChatGPT where there was a web chat client and then separate APIs, but it looks like Mistral is 100% API.
I don't really have any dev plans to use Mistral Medium for, I have local models for that, I just wanted to chew the fat with it for a bit, ask it questions, etc... but maybe not enough to build a client myself to do all that.
For those of y'all just chatting with Mistral, what application are you using to do so? | 2024-02-01T19:30:09 | https://www.reddit.com/r/LocalLLaMA/comments/1agjcdy/those_who_subscribe_to_mistral_to_chat_with_it/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agjcdy | false | null | t3_1agjcdy | /r/LocalLLaMA/comments/1agjcdy/those_who_subscribe_to_mistral_to_chat_with_it/ | false | false | self | 24 | null |
Use case examples for chat vs. instruct | 1 | [removed] | 2024-02-01T19:20:19 | https://www.reddit.com/r/LocalLLaMA/comments/1agj40a/use_case_examples_for_chat_vs_instruct/ | forgot_my_pass404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agj40a | false | null | t3_1agj40a | /r/LocalLLaMA/comments/1agj40a/use_case_examples_for_chat_vs_instruct/ | false | false | self | 1 | null |
MLX vs Llama.cpp | 1 | [removed] | 2024-02-01T19:01:20 | https://www.reddit.com/r/LocalLLaMA/comments/1aginef/mlx_vs_llamacpp/ | Drummer_Week_9727 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1aginef | false | null | t3_1aginef | /r/LocalLLaMA/comments/1aginef/mlx_vs_llamacpp/ | false | false | self | 1 | null |
AMD multi GPU? Different GPUs | 1 | I currently have one 6700xt 12GB. (and 9700k + 32GB RAM)
I am considering an upgrade.
I can get:
A. 6900xt 16GB price $260 (multi gpu with 6700xt) + PSU upgrade $120
B. 3090 24GB price $520 (single gpu) + PSU upgrade $120
The question is, how much performance am I losing by going with A compared to B?
The performance on 3090 would be greater but is it worth the extra cost?
Would A even be possible?
If there is any other option you can think of, please share.
I know the software support for AMD is not on par with Nvidia but I can run models on AMD right now and it's not that bad. | 2024-02-01T18:59:11 | https://www.reddit.com/r/LocalLLaMA/comments/1agilb7/amd_multi_gpu_different_gpus/ | myary | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agilb7 | false | null | t3_1agilb7 | /r/LocalLLaMA/comments/1agilb7/amd_multi_gpu_different_gpus/ | false | false | self | 1 | null |
Koboldcpp/Llamacpp works on the open source NVK (Nvidia) driver (Latest mesa-git) | 6 | With the release of Vulkan support for Llama.cpp I was very curious: can we now run LLMs on an Nvidia GPU without Nvidia's proprietary driver? The NVK developers have been hard at work contributing a new open source Vulkan driver to Mesa, and their current mesa-git release indicates Vulkan 1.3, so in theory it could work.
I did a fresh installation of Manjaro without opting for the proprietary Nvidia driver and replaced mesa with the mesa-git AUR package. After confirming Vulkan was successfully installed and showed my 3090, I downloaded the binary release of Koboldcpp 1.56 and a copy of Tiefighter 13B Q4_K_S.
When loading, the program froze very early on, before Koboldcpp displayed the model information, but while I was asking around about why it could be stuck there I noticed that it had moved on and loaded successfully. I ran the default Niko sample story, which led to the following generation.
`Input: {"n": 1, "max_context_length": 2048, "max_length": 512, "rep_pen": 1.2, "temperature": 0.5, "top_p": 0.9, "top_k": 0, "top_a": 0, "typical": 1, "tfs": 1, "rep_pen_range": 800, "rep_pen_slope": 0.7, "sampler_order": [0, 1, 2, 3, 4, 5, 6], "memory": "Niko is a small red kobold. Niko has yellow, reptilian eyes and a long, scaly tail.\n", "genkey": "KCPP1868", "min_p": 0, "dynatemp_range": 0, "dynatemp_exponent": 1, "presence_penalty": 0, "logit_bias": {}, "prompt": "Niko the kobold stalked carefully down the alley, his small scaly figure obscured by a dusky cloak that fluttered lightly in the cold winter breeze. It had been two years since he\u2019d first arrived in this miserable hovel of a town, and in that time he\u2019d managed to survive by his wits alone \u2013 stealing from unsuspecting travelers, picking pockets and conning the locals out of their hard-earned coin. But it wasn\u2019t enough, not nearly enough to sustain him for much longer.\n\nHe was tired of living on the streets, of always being on the move, never able to settle down or call any place home. But tonight, he would finally have his revenge.", "quiet": true, "stop_sequence": [], "use_default_badwordsids": true}`
`(Note: Sub-optimal sampler_order detected. You may have reduced quality. Recommended sampler values are [6,0,1,3,4,2,5]. This message will only show once per session.)`
`Processing Prompt [BLAS] (198 / 198 tokens)`
`Generating (512 / 512 tokens)`
`ContextLimit: 710/2048, Processing:2.65s (13.4ms/T), Generation:153.93s (300.6ms/T), Total:156.58s (305.8ms/T = 3.27T/s)`
`Output: Tonight, he would sneak into the manor house at the edge of town and rob Lord Harrows, the wealthiest merchant in these parts, blind. His gold, his jewels, everything would be Niko's, and with them he could live like a king. He knew where all the valuables were hidden; he'd spent weeks studying Lord Harrows' movements and patterning his behavior. There would be no mistake this time.`
`Carefully, silently, Niko scaled the wall that surrounded the manor grounds and made his way through the garden. The night air was chilly, but he barely noticed as adrenaline coursed through his veins. Soon, he reached the window of Lord Harrows' study. Using a set of lockpicks he'd stolen earlier that day, he expertly unlocked the window and slipped inside.`
`The study was lit only by the flickering flames of several candles, casting eerie shadows across the room. Niko crept forward, avoiding the occasional squeaky floorboard, until he stood before the massive oak desk. Quickly, he located the keyhole and inserted a thin metal pick, feeling for the tumblers within. A moment later, the lock clicked open, and he swung the heavy wooden door wide.`
`Inside the secret compartment lay Lord Harrows' most valuable possessions: a jeweled dagger, an emerald pendant, and a golden crown encrusted with rubies. Grinning triumphantly, Niko scooped up the treasures and stuffed them into his voluminous cloak. Just then, he heard footsteps approaching down the hallway. Panic raced through his tiny body as he realized that someone must have discovered him. With nowhere else to go, he dashed back over to the window and climbed out onto the ledge.`
`"You there!" came a gruff voice from below. "Stop right where you are!"Niko froze, his heart hammering against his ribcage. The voice belonged to one of Lord Harrows' guards, and he sounded angry. Very angry. Sweat trickled down Niko's brow as he considered his options. If he stayed put, the guard would surely capture him. But if he tried to run...well,`
This generation is slower than my CPU (and it was faster without the layers offloaded); perhaps I missed a setting to make it work faster on the driver side. To my knowledge I did enable the GSP firmware in the kernel, which is required for re-clocking, but I could have made mistakes there. Considering NVK performs 50% slower in games on average, the slow speed can also be an indicator of the early stage of this driver.
But to me this generation was not about the speed, since we know NVK frequently performs 50% slower than the proprietary driver across titles because it needs further optimization by its developers. It's about the fact that it ran at all and produced a coherent output; it shows how far the Nvidia Mesa driver is progressing, and I find it very exciting to have the freedom to run an Nvidia GPU for LLMs without the closed source blob, even if for now it makes more sense to run CUDA.
It's also worth noting that this was the first test ever run on NVK using Occam's Vulkan backend; it had not been tested on it before, nor was any NVK-specific optimization done.
I hope it excites others as much as it excited me that we can look forward to a future where AMD cards are not the only ones with good open source driver support, and I hope it encourages more LLM enthusiasts to test their projects on the NVK driver. | 2024-02-01T18:41:19 | https://www.reddit.com/r/LocalLLaMA/comments/1agi5vo/koboldcppllamacpp_works_on_the_open_source_nvk/ | henk717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agi5vo | false | null | t3_1agi5vo | /r/LocalLLaMA/comments/1agi5vo/koboldcppllamacpp_works_on_the_open_source_nvk/ | false | false | self | 6 | null |
Anyone here that can help me setting it up? I am getting ERROR 403. | 1 | Anyone here that can help me setting it up? I am getting ERROR 403. | 2024-02-01T18:37:11 | https://www.reddit.com/r/LocalLLaMA/comments/1agi288/anyone_here_that_can_help_me_setting_it_up_i_am/ | Hades363636 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agi288 | false | null | t3_1agi288 | /r/LocalLLaMA/comments/1agi288/anyone_here_that_can_help_me_setting_it_up_i_am/ | false | false | self | 1 | null |
Are there organizations or collectives or whatever for local LLMs? | 1 | [removed] | 2024-02-01T18:27:19 | https://www.reddit.com/r/LocalLLaMA/comments/1aghtsa/are_there_organizations_or_collectives_or/ | hmmqzaz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1aghtsa | false | null | t3_1aghtsa | /r/LocalLLaMA/comments/1aghtsa/are_there_organizations_or_collectives_or/ | false | false | self | 1 | null |
Serverless Awesome Prompt Experimental Tool | 16 | Guys, sharing this serverless web app where you can select any of the prompts available on https://github.com/f/awesome-chatgpt-prompts
It was created using stlite (Python/Streamlit running in the browser). It's a simple HTML file where all the prompts are already attached as buttons. You need to add your ChatGPT API key to play with the prompts.
The purpose of sharing this with the community is that some of us don't like to install Python or copy-paste prompts from different places into an interface; this lets you see and play with more than 150 curated prompts.
Next steps will be bringing in inference for other models from Hugging Face to test.
Link: https://statisticalplumber.github.io
Cheers | 2024-02-01T17:52:16 | dreamai87 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1aggzwe | false | null | t3_1aggzwe | /r/LocalLLaMA/comments/1aggzwe/serverless_awesome_prompt_experimental_tool/ | false | false | 16 | {'enabled': True, 'images': [{'id': '3vlNeYyZGOkpfo8uI-Sh43zslt2-X9OT-5ePh5Cu364', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/9tx4upv8i0gc1.jpeg?width=108&crop=smart&auto=webp&s=42ffbaa48cdc018a1b19c966582555f560bc5863', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/9tx4upv8i0gc1.jpeg?width=216&crop=smart&auto=webp&s=7e5774f1a3fa6dbf8fc31842c28dbb3b0c9dceae', 'width': 216}, {'height': 184, 'url': 'https://preview.redd.it/9tx4upv8i0gc1.jpeg?width=320&crop=smart&auto=webp&s=8f66a512ddb4251771d2866a7d4e2dd57a797ef9', 'width': 320}, {'height': 368, 'url': 'https://preview.redd.it/9tx4upv8i0gc1.jpeg?width=640&crop=smart&auto=webp&s=ade011eb57d8859c2d080e8119c52b30ae311a1a', 'width': 640}, {'height': 553, 'url': 'https://preview.redd.it/9tx4upv8i0gc1.jpeg?width=960&crop=smart&auto=webp&s=d3d231d7f9d58484acba828508e54ab01e18eec6', 'width': 960}, {'height': 622, 'url': 'https://preview.redd.it/9tx4upv8i0gc1.jpeg?width=1080&crop=smart&auto=webp&s=6a96ec4b689d48711d925fc38fda1c890029b151', 'width': 1080}], 'source': {'height': 922, 'url': 'https://preview.redd.it/9tx4upv8i0gc1.jpeg?auto=webp&s=d8bacb0d1b35e5a4c90dc3999095cdba69570e94', 'width': 1600}, 'variants': {}}]} | ||
What are the good companion apps out there? | 1 | [removed] | 2024-02-01T17:25:49 | https://www.reddit.com/r/LocalLLaMA/comments/1aggd82/what_are_the_good_companion_apps_out_there/ | Juicy_Bacon007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1aggd82 | false | null | t3_1aggd82 | /r/LocalLLaMA/comments/1aggd82/what_are_the_good_companion_apps_out_there/ | false | false | self | 1 | null |
Any uncensored models that work? | 21 | Trying to find an uncensored model to use in LM Studio, or anything else really, to get away from the god-awful censoring we're seeing in mainstream models. | 2024-02-01T17:19:56 | https://www.reddit.com/r/LocalLLaMA/comments/1agg820/any_uncensored_models_that_work/ | Proud-Commission-984 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agg820 | false | null | t3_1agg820 | /r/LocalLLaMA/comments/1agg820/any_uncensored_models_that_work/ | false | false | self | 21 | null |
Langchain agents with LM studio | 1 | I have built an OpenAI-based chatbot that uses LangChain agents - wiki, dolphin, etc.
I am trying to switch to an open source LLM for this chatbot. Has anyone used LangChain with LM Studio? I was facing some issues using an open source LLM from LM Studio for this task.
Questions:
Q1. Has anyone successfully used LM Studio with Langchain agents? If so, how?
Q2. Can you provide github links for Langchain + LM studio implementations.
TIA | 2024-02-01T16:50:20 | https://www.reddit.com/r/LocalLLaMA/comments/1agfi5k/langchain_agents_with_lm_studio/ | AI_ML_preneur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agfi5k | false | null | t3_1agfi5k | /r/LocalLLaMA/comments/1agfi5k/langchain_agents_with_lm_studio/ | false | false | self | 1 | null |
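A minimal sketch of pointing LangChain at LM Studio's local OpenAI-compatible server (default http://localhost:1234/v1). Parameter names can vary between langchain-openai versions, and the model name here is a placeholder, so treat this as an assumption to verify:

```python
# Assumption: LM Studio's local server is running and exposing its OpenAI-compatible
# API at the default address; parameter names may differ across langchain-openai versions.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",   # LM Studio's local server
    api_key="lm-studio",                   # any non-empty string; the local server ignores it
    model="local-model",                   # placeholder; LM Studio serves whatever model is loaded
    temperature=0,
)

print(llm.invoke("Say hello in one short sentence.").content)
```

Agents are the harder part: many open models don't follow ReAct or tool-calling formats reliably, so it may help to test the plain chat call first and then add tools one at a time.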
Is there a program that takes an LLM + text and outputs the least likely words? | 2 | Insofar as I understand LLMs like LLaMA, they look up a list of all words and take (one of) the 'best' matches to output in a loop.
It seems to me you could reverse this idea. Take "A sequence of words durp" and then look up the probability that
- "A" => sequence 0.1
- "A sequence" => of 0.2
- "A sequence of" => words 0.1
- "A sequence of words" => durp 0.001
Do this for a full text and you have a 'worst' list / heatmap.
That would let me check what sounds most wrong in a given text, and hopefully it's something that can be fixed.
Does this already exist?
I remember coming across a good Python tutorial using a real model that demonstrates how to build a prompt, but I can't find it.
I have some time now, so if anybody knows of it or something similar, let me know; if it doesn't exist, I'll build it myself. | 2024-02-01T16:36:07 | https://www.reddit.com/r/LocalLLaMA/comments/1agf5tn/is_there_program_that_takes_a_llmtext_and_outputs/ | throwaway490215 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agf5tn | false | null | t3_1agf5tn | /r/LocalLLaMA/comments/1agf5tn/is_there_program_that_takes_a_llmtext_and_outputs/ | false | false | self | 2 | null |
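The per-token scoring described in the post above, the "worst list / heatmap", is straightforward to sketch with Hugging Face transformers; a rough example ("gpt2" is only a small stand-in model, and this scores tokens rather than whole words):

```python
# Rough sketch with Hugging Face transformers; "gpt2" is only a small stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "A sequence of words durp"
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits                               # (1, seq_len, vocab)

logprobs = torch.log_softmax(logits[:, :-1], dim=-1)         # prediction for each *next* token
targets = ids[:, 1:]                                         # the token that actually followed
scores = logprobs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)[0]

ranked = sorted(zip(tok.convert_ids_to_tokens(targets[0]), scores.tolist()), key=lambda p: p[1])
for token, lp in ranked:                                     # least likely tokens first
    print(f"{lp:8.3f}  {token!r}")
```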
Use self-hosted Code Llama 70B as a copilot alternative in VSCode | 41 | SkyPilot released a new guide for deploying and scaling Code Llama 70B privately, and for connecting the endpoint via API, chat, or VSCode.
[https://github.com/skypilot-org/skypilot/tree/master/llm/codellama](https://github.com/skypilot-org/skypilot/tree/master/llm/codellama)
​ | 2024-02-01T16:19:48 | https://www.reddit.com/r/LocalLLaMA/comments/1agerx0/use_selfhosted_code_llama_70b_as_a_copilot/ | Michaelvll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agerx0 | false | null | t3_1agerx0 | /r/LocalLLaMA/comments/1agerx0/use_selfhosted_code_llama_70b_as_a_copilot/ | false | false | self | 41 | {'enabled': False, 'images': [{'id': 'hGZnfGhNsY9TnrWI2XZgr6zH3olUZDDLtszG7ZCwkdU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wuk4oUIova3Q5HPo_XdI0kZvM8pw2UwUe34MyNnDr8s.jpg?width=108&crop=smart&auto=webp&s=95cac7e67a1c84224451dfb811c0f8280760b14f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wuk4oUIova3Q5HPo_XdI0kZvM8pw2UwUe34MyNnDr8s.jpg?width=216&crop=smart&auto=webp&s=ac06fa51bea2b698b474f6c1ae5543380774c1bb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wuk4oUIova3Q5HPo_XdI0kZvM8pw2UwUe34MyNnDr8s.jpg?width=320&crop=smart&auto=webp&s=e56e6ee4a4fe88d8beb33b52c4825585c33186d3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wuk4oUIova3Q5HPo_XdI0kZvM8pw2UwUe34MyNnDr8s.jpg?width=640&crop=smart&auto=webp&s=b6e35fa1bca91c2f8e70b6249935df1e3701ed49', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wuk4oUIova3Q5HPo_XdI0kZvM8pw2UwUe34MyNnDr8s.jpg?width=960&crop=smart&auto=webp&s=b48e1fadfd1328fdce9ce31aa741df6fea64205a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wuk4oUIova3Q5HPo_XdI0kZvM8pw2UwUe34MyNnDr8s.jpg?width=1080&crop=smart&auto=webp&s=7af4915f254bb338f21ea3967f6f5cf48fc1a64f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wuk4oUIova3Q5HPo_XdI0kZvM8pw2UwUe34MyNnDr8s.jpg?auto=webp&s=35509af74fe53be08cac15a1a5a241abafaae312', 'width': 1200}, 'variants': {}}]} |
Anyone who has worked with using LLMs for creating recommendation engines | 1 | I have tried the following:
1. Convert the entire text to vector embeddings, store the embeddings in a vector store, and retrieve top matches based on cosine similarity (the field I'm targeting is quite niche, so I'm not able to find a good embeddings model) (fine-tuning is not possible as new segments keep emerging in the data)
2. I tried creating a bucket of tags for my text, and then for every new incoming query I ask an LLM to quote tags from the bucket I have created.
While the second approach works really well, I still face two issues,
a. Not all relevant tags appear during the query
b. The model hallucinates and starts creating its own tags
Any idea on how I can fix this? | 2024-02-01T16:16:32 | https://www.reddit.com/r/LocalLLaMA/comments/1agep5f/anyone_who_has_worked_with_using_llms_for/ | mayiSLYTHERINyourbed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agep5f | false | null | t3_1agep5f | /r/LocalLLaMA/comments/1agep5f/anyone_who_has_worked_with_using_llms_for/ | false | false | self | 1 | null |
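One common mitigation for issue (b) above, hallucinated tags, is to validate the model's output against the fixed bucket after generation. A minimal sketch (the tag names are made up):

```python
# Hypothetical tag bucket; only tags that already exist in it are kept.
TAG_BUCKET = {"pricing", "refund", "shipping", "warranty"}

def validate_tags(model_tags, bucket=TAG_BUCKET):
    cleaned = []
    for tag in model_tags:
        t = tag.strip().lower()
        if t in bucket:              # drop hallucinated tags outright
            cleaned.append(t)
    return cleaned

print(validate_tags(["Refund", "express delivery", "shipping"]))   # ['refund', 'shipping']
```

For issue (a), near-miss tags can be mapped back onto the bucket with fuzzy string or embedding similarity instead of being dropped, which may recover relevant tags the model phrased slightly differently.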
Medical fine tuned local models? | 5 | I'm doing a task that basically requires classifying whether a disease is present in clinical notes. The tricky thing is that I need to handle different subcategories of disease under a main disease category (e.g. I want mentions of arrhythmia, AFib, PVCs to fall under "cardiovascular disease"), and I think that I'll need a model that is fine-tuned to the medical domain (Llama 70B isn't good at categorization from when I've experimented with it).
Any recommendations? | 2024-02-01T16:13:15 | https://www.reddit.com/r/LocalLLaMA/comments/1agemaw/medical_fine_tuned_local_models/ | johnathanjones1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agemaw | false | null | t3_1agemaw | /r/LocalLLaMA/comments/1agemaw/medical_fine_tuned_local_models/ | false | false | self | 5 | null |
Pass coordinate in LLAVA-1.5 | 1 | Hi,
I am using LLaVA-1.5 for my project. I am trying to pass a bounding box coordinate to the model. I found out in the paper that the coordinates are normalised, but when I normalised the coordinates this way I got coordinates different from what the model was expecting. I verified that by giving the model a prompt, and found that my coordinates were incorrect. My image is 1600 x 1143. I tried resizing the image to make it a square and then passing the coordinates, but it still didn't work. | 2024-02-01T16:01:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ageccr/pass_coordinate_in_llava15/ | indranil_ganguly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ageccr | false | null | t3_1ageccr | /r/LocalLLaMA/comments/1ageccr/pass_coordinate_in_llava15/ | false | false | self | 1 | null |
What are some recommendations of models for recognition of questions within text | 1 | As the title states:
\- I am currently tasked with discovering questions/tasks within a text.
\- The questions do not necessarily end with a "?" or ":".
\- The text can be erratic and not formatted.
I have done a decent amount of research, but due to my lack of knowledge and being a beginner in anything regarding AI models I feel quite lost.
The main questions are:
1. What approach would you recommend to execute the task successfully?
2. Would this model be appropriate for a beginner to set up and host on their own server?
3. Would the cost of this solution be within a sensible range (up to 200USD/monthly)?
| 2024-02-01T15:45:52 | https://www.reddit.com/r/LocalLLaMA/comments/1agdz9d/what_are_some_recommendations_of_models_for/ | IudexWaxLyrical | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agdz9d | false | null | t3_1agdz9d | /r/LocalLLaMA/comments/1agdz9d/what_are_some_recommendations_of_models_for/ | false | false | self | 1 | null |
The maker of Goliath-120b made Miquella-120b, with miqu | 140 | 2024-02-01T15:13:26 | https://huggingface.co/alpindale/miquella-120b-gguf | bobby-chan | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1agd8yo | false | null | t3_1agd8yo | /r/LocalLLaMA/comments/1agd8yo/the_maker_of_goliath120b_made_miquella120b_with/ | false | false | 140 | {'enabled': False, 'images': [{'id': 'PW3O9w2RD9PdHX81jOSerajTCCzHg4Oxrzn3uAiVy7E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2pOq7X4LENPtwkcnGuZJEMhCsse3mz6fV3anDOkIzSc.jpg?width=108&crop=smart&auto=webp&s=2dba061f88f1c40ebd29066d2685b5bf737994ad', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2pOq7X4LENPtwkcnGuZJEMhCsse3mz6fV3anDOkIzSc.jpg?width=216&crop=smart&auto=webp&s=849cb35347530f5acd1b8359afc9a34bfaaac485', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2pOq7X4LENPtwkcnGuZJEMhCsse3mz6fV3anDOkIzSc.jpg?width=320&crop=smart&auto=webp&s=5005916ba90168a2dc744a3e09ca1b66d525d447', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2pOq7X4LENPtwkcnGuZJEMhCsse3mz6fV3anDOkIzSc.jpg?width=640&crop=smart&auto=webp&s=a4bfa4669574c7b290f98d0bd408216067b5cb3e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2pOq7X4LENPtwkcnGuZJEMhCsse3mz6fV3anDOkIzSc.jpg?width=960&crop=smart&auto=webp&s=7dc6ed63d3108a49b0441d8b0e36133e02c91c8d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2pOq7X4LENPtwkcnGuZJEMhCsse3mz6fV3anDOkIzSc.jpg?width=1080&crop=smart&auto=webp&s=46edbe7137666ec3b6261f640864d7918e159d21', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2pOq7X4LENPtwkcnGuZJEMhCsse3mz6fV3anDOkIzSc.jpg?auto=webp&s=fff690dc8ec244c77172deace1113874fcf4b719', 'width': 1200}, 'variants': {}}]} | ||
OLMo: Open Language Model | 77 | 2024-02-01T15:11:09 | https://blog.allenai.org/olmo-open-language-model-87ccfc95f580 | Kryohi | blog.allenai.org | 1970-01-01T00:00:00 | 0 | {} | 1agd78d | false | null | t3_1agd78d | /r/LocalLLaMA/comments/1agd78d/olmo_open_language_model/ | false | false | 77 | {'enabled': False, 'images': [{'id': '20u71cVfUJQXzP69P8KEaEcHvEkiheSzMmfvQNM4s4I', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/T5a9wC1Y9MCw9K6BjbiWqBivRzop_xDs9tl2rdR-_sA.jpg?width=108&crop=smart&auto=webp&s=15a2b9aae23b3a60f02d74a658d21da0abcd7d39', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/T5a9wC1Y9MCw9K6BjbiWqBivRzop_xDs9tl2rdR-_sA.jpg?width=216&crop=smart&auto=webp&s=780e5d263c326ad57ee37a6aeb473c9753bdf5c7', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/T5a9wC1Y9MCw9K6BjbiWqBivRzop_xDs9tl2rdR-_sA.jpg?width=320&crop=smart&auto=webp&s=70f17b2785329bdc82d50b5de95f334d8827a32a', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/T5a9wC1Y9MCw9K6BjbiWqBivRzop_xDs9tl2rdR-_sA.jpg?width=640&crop=smart&auto=webp&s=ae732563ecb601ccd72656d5693246593634fca4', 'width': 640}], 'source': {'height': 404, 'url': 'https://external-preview.redd.it/T5a9wC1Y9MCw9K6BjbiWqBivRzop_xDs9tl2rdR-_sA.jpg?auto=webp&s=a4a89f1140e59d6a61ea4e3b3219e7b549eb6618', 'width': 720}, 'variants': {}}]} | ||
Is there a simple Python RVC implementation, or anything similar? | 7 | I'm working on TTS problems, and while there are semi-decent Python TTS libraries out there (I'd rather not pay for ElevenLabs, and API latency could be problematic for my end goal anyway), the overall conclusion seems to be that the usual approach is TTS + an RVC audio-to-audio pass to improve quality while consuming relatively few computational resources.
If someone has a better solution for TTS I'd be happy to hear it, but the RVC implementation itself relies on a GUI, and the forks at most seem to add command-line functionality. I *could* have Python call the command line, but that seems like a janky solution in a program already full of them (on my end). If there's some really obvious solution for running it in Python I'd also be glad to hear it, but thus far the architecture is a bit beyond my ken. I figured I'd ask here before doing something inordinately complex. | 2024-02-01T14:44:28 | https://www.reddit.com/r/LocalLLaMA/comments/1agclub/is_there_a_simple_python_rvc_implementation_or/ | AmericanNewt8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agclub | false | null | t3_1agclub | /r/LocalLLaMA/comments/1agclub/is_there_a_simple_python_rvc_implementation_or/ | false | false | self | 7 | null |
How is codellama 70B for you guys? | 85 | I have noticed some patterns while testing out Codellama 70B, hopefully others can share their experience with the model as well:
\- It loves emojis ✨ 👍 🤔
\- It has the ability to parrot code from its training data very very easily, most prompts I tried.. the model would hallucinate training code and go off on a tangent.
\- The guardrails around the model are very very weird, I have seen it refuse to add comments :( , explain a function or do something very simple like rewrite a function. Changing the prompt very slightly also triggers this behavior.
I also tried filling in words for the LLM by using the completion endpoint on [together.ai](https://together.ai) by starting my answer with "Certainly!" or "For sure!" ... but that does not work as well.
Note: I am testing the model with my own system prompts.
Have you guys had more fun with this model? I honestly find the 34B (or even the 13B) variant better right now | 2024-02-01T14:41:29 | https://www.reddit.com/r/LocalLLaMA/comments/1agcji6/how_is_codellama_70b_for_you_guys/ | ragingWater_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agcji6 | false | null | t3_1agcji6 | /r/LocalLLaMA/comments/1agcji6/how_is_codellama_70b_for_you_guys/ | false | false | self | 85 | {'enabled': False, 'images': [{'id': 'KEKqZLXaDuojO8066WvfNm2knPNQpREJOqDRQbP0jOE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=108&crop=smart&auto=webp&s=c4356a09ff651d99050d2e2f7c625136bd5cc50d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=216&crop=smart&auto=webp&s=2efb5516e5e9493aedbb8874a4346aea1e2fdfe3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=320&crop=smart&auto=webp&s=5760f28068be8d1404c060058ca5dc7138a3921c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=640&crop=smart&auto=webp&s=5040e75d875b032b45e4cafad1ca6eed231c2aa5', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=960&crop=smart&auto=webp&s=678233eb228e31658cc7dc6f24ff3c4c199255ec', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=1080&crop=smart&auto=webp&s=e9407e720f5a5c73c6566e3b787afc17181bbb3f', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?auto=webp&s=610ce8e238d743540ebac62332adfbc058d7c11d', 'width': 2400}, 'variants': {}}]} |
Machine Config, 70B Deployment, A6000/5K/4K | 1 | Hello LocalLLaMA,
Can anyone help me understand if running two GPUs essentially provides a "pool" of memory? For example, running two video cards with 32G memory each provides an estimated "pool" of 64G accessible to the LLM. Is the requirement that they are bridged, maybe?
Second question. Is there a favorite motherboard of choice for this type of application?
Thank you. | 2024-02-01T14:00:04 | https://www.reddit.com/r/LocalLLaMA/comments/1agbmkq/machine_config_70b_deployment_a60005k4k/ | oz6024u | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agbmkq | false | null | t3_1agbmkq | /r/LocalLLaMA/comments/1agbmkq/machine_config_70b_deployment_a60005k4k/ | false | false | self | 1 | null |
GPU Requirements for LLMs | 149 | I'm seeking some hardware wisdom for working with LLMs while considering GPUs for both training, fine-tuning and inference tasks. With dense models and intriguing architectures like MoE gaining traction, selecting the right GPU is a nuanced challenge.
First off, we have the vRAM bottleneck. An insightful illustration from the PagedAttention paper from the authors of vLLM suggests that key-value (KV) pair caching alone can occupy over 30% of a 40GB A100 GPU for a 13B parameter model. While the parameters occupy about 65%.
Now, MoE models like Mixtral use a gating mechanism to call upon specific 'experts,' which seemingly offers vRAM efficiency. However, it isn't a full picture - the entire pool of parameters must be quickly accessible. So what's the real-world impact on vRAM for MoE models during inference?
As for precision levels, I'm keen on sticking to non-quantized versions. Full FP32 delivers high numerical stability but at the cost of vRAM, while FP16 cuts the demand on memory at the potential expense of performance.
Keeping these in mind, I'm focusing on the following when considering GPUs:
- Adequate vRAM to support the sizeable parameters of LLMs in FP16 and FP32, without quantization.
- High memory bandwidth capable of efficient data processing for both dense models and MoE architectures.
- Effective cooling and computational capabilities for prolonged high-load operations.
- Compatibility with frameworks utilized for LLM training and inference.
Your experiences and insights are invaluable. What models or features are must-haves in your book when it comes to GPUs for these purposes? | 2024-02-01T13:50:15 | pathfinder6709 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1agbf5s | false | null | t3_1agbf5s | /r/LocalLLaMA/comments/1agbf5s/gpu_requirements_for_llms/ | false | false | 149 | {'enabled': True, 'images': [{'id': 'e7BfPJhxDy_be8sClagglWPYPHHQOckZ9e73706N2Ec', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/6zmrh1h2bzfc1.jpeg?width=108&crop=smart&auto=webp&s=3ff92844fe95b7afeef013862d028632072ea0b7', 'width': 108}, {'height': 205, 'url': 'https://preview.redd.it/6zmrh1h2bzfc1.jpeg?width=216&crop=smart&auto=webp&s=2a34b74d934d27536d48c8f721e5465f28d3aeb0', 'width': 216}, {'height': 305, 'url': 'https://preview.redd.it/6zmrh1h2bzfc1.jpeg?width=320&crop=smart&auto=webp&s=571d86a507e01847190711e92044f0f939f8ee9a', 'width': 320}, {'height': 610, 'url': 'https://preview.redd.it/6zmrh1h2bzfc1.jpeg?width=640&crop=smart&auto=webp&s=3653c51ef390008b10b16296125f9d22704c2a18', 'width': 640}, {'height': 915, 'url': 'https://preview.redd.it/6zmrh1h2bzfc1.jpeg?width=960&crop=smart&auto=webp&s=fada0fcf1d76d7ad24b3b2abee13769f37c46335', 'width': 960}], 'source': {'height': 946, 'url': 'https://preview.redd.it/6zmrh1h2bzfc1.jpeg?auto=webp&s=fe6e5f72b6e85ec050084c0e0f8ff976e72c6d63', 'width': 992}, 'variants': {}}]} | ||
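As a rough cross-check of the PagedAttention figure quoted above, here is back-of-the-envelope KV-cache arithmetic for a 13B LLaMA-class model in FP16; the dimensions are the published LLaMA-13B config, and the exact fractions depend on batch size and serving setup:

```python
# Back-of-the-envelope KV-cache math for a 13B LLaMA-class model in FP16.
layers, hidden, bytes_per_val = 40, 5120, 2            # LLaMA-13B: 40 layers, hidden size 5120
kv_per_token = 2 * layers * hidden * bytes_per_val     # keys + values, every layer
print(kv_per_token / 1e6, "MB per token")              # ~0.82 MB

seq_len = 2048
print(kv_per_token * seq_len / 1e9, "GB per full-length sequence")   # ~1.7 GB

a100_bytes = 40e9
weight_bytes = 13e9 * bytes_per_val                    # ~26 GB, roughly 65% of the card
spare_tokens = (a100_bytes - weight_bytes) / kv_per_token
print(int(spare_tokens), "tokens of KV cache fit in the remaining memory")   # ~17k tokens
```

With many concurrent sequences in a serving batch, that spare headroom is consumed quickly, which is why KV cache (and paging or quantizing it) dominates the memory discussion.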
Working group 'BooBot' | 5 | [[ Boilerplate warning: I have no idea what I am doing ]]
I am in no rush, but let's say I wanted to train an LLM to tell really, really good scary/horror stories. Would I just train it on material identified as having caused fear? I feed it good labeled examples, etc.

Fear/ghost stories, beyond their specific iconic elements, are about the emotion that certain experiences cause in us. They're written with empathy, so it's about pacing, etc. That's a lot of meta information for a bot to get trained on, though I understand it's all 'encoded' in the nature of language and usage, so it will learn it, if fed on that diet, as simply the way to speak.

(The baby boobot was trained on only horror stories, so it only spoke fear.)

If you were tasked with achieving the result of "SpookyBot3000", where you could feed it mundane information and it spits out a generative version that induces dread. Like a generative Necronomicon: everything it touches gets the taint, narratively, of death and dread. Like H.P. Lovecraft narrating my life.

How would you create this project, start to finish? I am being serious, I am trying to learn.
I am a long-time programmer and I am familiar with LLMs to a level, and I am trying to learn more. Right now, I want to fully conceptualize, start to finish, what is necessary to create a robot that will, reliably in the future, scare the pants off of me. I just want to know what's ahead of me, in theory. I am excited and I want a boobot.
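Not an authoritative recipe, but the data-prep step usually ends up looking something like this: pair mundane passages with dread-soaked rewrites (hand-written or produced by a bigger model) and store them as instruction examples for a later fine-tune. Field names and the sample text below are purely illustrative:

```python
import json

pairs = [
    {
        "instruction": "Rewrite the following passage so it drips with Lovecraftian dread.",
        "input": "I went to the supermarket and bought milk.",
        "output": "Beneath the humming fluorescent lights I reached for the milk, and something reached back.",
    },
    # ...hundreds more pairs, covering many mundane topics and pacing styles...
]

with open("spookybot_train.jsonl", "w") as f:
    for p in pairs:
        f.write(json.dumps(p) + "\n")
```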
I want to get to the point where I have a human subject strapped to a fear rig so we can test the newest model. We'll measure the model's effectiveness by their moans of lament.
​
​
| 2024-02-01T13:23:29 | https://www.reddit.com/r/LocalLLaMA/comments/1agaw7r/working_group_boobot/ | Intrepid-Sir8293 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agaw7r | false | null | t3_1agaw7r | /r/LocalLLaMA/comments/1agaw7r/working_group_boobot/ | false | false | self | 5 | null |
Have a question about serving a model in production for data extraction, via completions endpoint, while having the LLM respond in JSON format. What's with the false positives? Anyone have any tips? | 1 | I'm using exllamav2 (but have had similar results in vLLM), and a quantized Mixtral 8x7B (@3.5bpw average), running on 24GB of VRAM.
I have a rich, large dataset of web-scraped text. For each item, the LLM is tasked with determining whether or not the text contains an example of, let's call it, `subject matter X`. If it does, it responds with a JSON containing the string extraction of the example, its reasoning, and a bool of 1.
The issue is that there are other subject matters that can look very similar to subject matter X.
It has done a very good job correctly ignoring the swaths of items that are *not* of subject matter X (e.g. correctly marking things bool 0), but in looking at the results it has returned with bool 1, 30-50% of the results can be false positives (i.e. actually belonging to subject matter Y, Z, B, etc.), depending on the run.
I was mostly surprised because many of the false positives have very similar characteristics to those that the LLM had correctly marked as negative. What's with the variance in performance from item to item? Is it possible that there are characters in some items that are throwing it off somehow during the tokenization process? Is there something I should be doing about temperature (I assumed keeping temperature to 0 for this use case was correct)?
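One mitigation worth trying before resorting to a second model: spell out the confusable categories and include near-miss negatives directly in the prompt, and keep decoding greedy (temperature 0 is the usual choice for extraction). The category names and examples below are placeholders, not anything from the actual dataset:

```python
SYSTEM = """You are a strict classifier. Subject matter X means: <precise definition here>.
It is NOT subject matter Y or Z, even if they look similar. If in doubt, answer 0.
Respond only with JSON: {"match": 0 or 1, "quote": "...", "reasoning": "..."}"""

FEW_SHOT = [
    ("<text that looks like X but is actually Y>",
     '{"match": 0, "quote": "", "reasoning": "This is Y because ..."}'),
    ("<clear example of X>",
     '{"match": 1, "quote": "<the extracted span>", "reasoning": "..."}'),
]

def build_prompt(item_text: str) -> str:
    shots = "\n\n".join(f"Text:\n{t}\nAnswer:\n{a}" for t, a in FEW_SHOT)
    return f"{SYSTEM}\n\n{shots}\n\nText:\n{item_text}\nAnswer:\n"
```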
For now, I have a second LLM fact-checking its work, which basically solves the issue, but of course at the cost of additional runtime. I would like to avoid (or minimize) the use of a second LLM as much as possible. | 2024-02-01T13:09:34 | https://www.reddit.com/r/LocalLLaMA/comments/1agamhu/have_a_question_about_serving_a_model_in/ | False_Pay_4009 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1agamhu | false | null | t3_1agamhu | /r/LocalLLaMA/comments/1agamhu/have_a_question_about_serving_a_model_in/ | false | false | self | 1 | null |
Beginner question about prompt formats for chat completions | 4 | I'm trying to use the llama-cpp server to run my LLM. I understand we need to use the prompt formats (ChatML, Vicuna, etc.) when we call the completions API. But my question is: how do I use the prompt formatting while calling the chat.completions API, i.e. while running it in chat mode? Is it necessary? If not, how does it work exactly?
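To illustrate the difference (URLs and the ChatML template below are assumptions; whether the server applies the *right* template for a given model is exactly the catch): with the chat endpoint you send structured messages and the server formats the prompt, while with the raw completion endpoint you bake the template in yourself.

```python
import requests

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# 1) chat-style: the server is responsible for applying the model's template
requests.post("http://localhost:8080/v1/chat/completions",
              json={"messages": messages, "max_tokens": 128})

# 2) completion-style: format the template (here ChatML) by hand
prompt = "".join(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages)
prompt += "<|im_start|>assistant\n"
requests.post("http://localhost:8080/completion",
              json={"prompt": prompt, "n_predict": 128})
```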
| 2024-02-01T12:45:37 | https://www.reddit.com/r/LocalLLaMA/comments/1aga678/beginner_question_about_prompt_formats_for_chat/ | 1_archit_1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1aga678 | false | null | t3_1aga678 | /r/LocalLLaMA/comments/1aga678/beginner_question_about_prompt_formats_for_chat/ | false | false | self | 4 | null |
I Tested LLaVA 1.6 - Claims to Beat Gemini, But Mixes Up Margot Robbie & Emma Mackey! | 1 | [removed] | 2024-02-01T12:44:07 | https://www.reddit.com/r/LocalLLaMA/comments/1aga584/i_tested_llava_16_claims_to_beat_gemini_but_mixes/ | nerdynavblogs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1aga584 | false | null | t3_1aga584 | /r/LocalLLaMA/comments/1aga584/i_tested_llava_16_claims_to_beat_gemini_but_mixes/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'u8DYnc7h0O77sV_v3jyDqJRluxOhX0W-m2Cq6Vi8ddk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1wnEOr6la2GvMmQtfDzjKzluNXW0H7SecUomtZvsKxU.jpg?width=108&crop=smart&auto=webp&s=df820219e6f82811a168eea564e62cb55f32b4b8', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/1wnEOr6la2GvMmQtfDzjKzluNXW0H7SecUomtZvsKxU.jpg?width=216&crop=smart&auto=webp&s=5c59c964e6f40623a39ddd87f56ebee52168ce3b', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/1wnEOr6la2GvMmQtfDzjKzluNXW0H7SecUomtZvsKxU.jpg?width=320&crop=smart&auto=webp&s=ee69c5cfec249bcbade75d6020d99bfd9b87c1d9', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/1wnEOr6la2GvMmQtfDzjKzluNXW0H7SecUomtZvsKxU.jpg?auto=webp&s=890b372a4edd7ebbcd42c1be73db2575cf4ee1ba', 'width': 480}, 'variants': {}}]} |
How to converse with historic figures locally | 1 | I just came across these custom ChatGPT characters built by Martin Puchner and I was wondering what I'd need to get this to work offline.
https://www.martinpuchner.com/custom-gpts-and-online-education.html
My main reasons being I don't want OpenAI to be privy to my conversations, but also because I enjoy the technical challenges to figure this out for myself locally.
I have a MacBook Pro M2, 16GB RAM
I've experimented with a few LLMs a while back, but I never managed to actually build a convincing character simulation.
Which model would you recommend that provides a sufficient amount of context and runs at decent output speeds? I'm planning on providing the LLM with texts by for example Socrates so I can have it essentially impersonate this and other historic figures. | 2024-02-01T12:25:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ag9sy1/how_to_converse_with_historic_figures_locally/ | RastaBambi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag9sy1 | false | null | t3_1ag9sy1 | /r/LocalLLaMA/comments/1ag9sy1/how_to_converse_with_historic_figures_locally/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 't6vS1FrK1-oU3k_yJvnoOK76k2BxzRfkRlU3pKKU1sQ', 'resolutions': [{'height': 92, 'url': 'https://external-preview.redd.it/cQnfsRbda4knyyPWZInh1DkMip4USMWG0UieeIBrmLE.jpg?width=108&crop=smart&auto=webp&s=708dbe4b6dbe23853852a9ef1cff64de8da32b13', 'width': 108}, {'height': 185, 'url': 'https://external-preview.redd.it/cQnfsRbda4knyyPWZInh1DkMip4USMWG0UieeIBrmLE.jpg?width=216&crop=smart&auto=webp&s=83729860308f77fb19a941146c4d2b1405488daf', 'width': 216}, {'height': 275, 'url': 'https://external-preview.redd.it/cQnfsRbda4knyyPWZInh1DkMip4USMWG0UieeIBrmLE.jpg?width=320&crop=smart&auto=webp&s=7070f1e7d1dbb1990728c2be1d403e3e0e767700', 'width': 320}], 'source': {'height': 392, 'url': 'https://external-preview.redd.it/cQnfsRbda4knyyPWZInh1DkMip4USMWG0UieeIBrmLE.jpg?auto=webp&s=1036067861e059753861fa7b92cdfe754dd7bb8a', 'width': 456}, 'variants': {}}]} |
Best settings for Mixtral on 16 GB VRAM / 64 GB RAM | 17 | I've been trying out many models lately.
Mixtral Instruct is the only model that was able to impress me. Mostly knowledge wise. The 7B and 13B models seem like smart talkers with little real knowledge behind the facade.
I'm running LM Studio and textgenwebui.
Setup: 13700k + 64 GB RAM + RTX 4060 Ti 16 GB VRAM
Which quantizations, layer offloading and settings can you recommend? About 5 t/s with Q4 is the best I was able to achieve so far.
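For reference, a rough starting point with llama-cpp-python (the layer count is just a guess to tune up or down until the 16 GB card is full, not a recommendation):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf",
    n_gpu_layers=14,   # offload part of the model to the GPU, raise until VRAM runs out
    n_ctx=4096,
    n_threads=8,       # CPU threads handle the non-offloaded layers
)
print(llm("[INST] Say hi [/INST]", max_tokens=64)["choices"][0]["text"])
```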
Mistral 7B is running at about 30-40 t/s | 2024-02-01T12:22:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ag9rgq/best_settings_for_mixtral_on_16_gb_vram_64_gb_ram/ | j4da | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag9rgq | false | null | t3_1ag9rgq | /r/LocalLLaMA/comments/1ag9rgq/best_settings_for_mixtral_on_16_gb_vram_64_gb_ram/ | false | false | self | 17 | null |
Is anyone inferencing on something like an Intel nuc, barebone or similar formfactor? | 5 | I'm thinking about getting a new homeserver, anyone got inference with a reasonable speed(7b parameter model or bigger) on a small device? I've seen mini pcs with ssd, i7 and 16gb dual channel ram. Is that enough though without a proper graphics card? | 2024-02-01T12:07:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ag9hqi/is_anyone_inferencing_on_something_like_an_intel/ | Frequent_Valuable_47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag9hqi | false | null | t3_1ag9hqi | /r/LocalLLaMA/comments/1ag9hqi/is_anyone_inferencing_on_something_like_an_intel/ | false | false | self | 5 | null |
Mixtral 8x7B running slow in my server | 1 | I tried to run Mixtral 8x7B on AWS SageMaker with 192 GB of GPU memory; the instance type is ml.g5.48xlarge.
but it's still running slow.
132 seconds for 200 tokens.
you can see the code in the screenshot.
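Without seeing the notebook this is only a guess, but a common cause of this kind of slowness is the model loading in fp32 or partly on CPU; an explicit fp16 multi-GPU load would look roughly like this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # ~90 GB in fp16, fits across the 8x A10G GPUs
    device_map="auto",           # shard the layers over all GPUs
)
print(model.hf_device_map)
```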
https://preview.redd.it/k2mfx0c9myfc1.png?width=783&format=png&auto=webp&s=bea536a43db1e36a7061fd41109bb8a25570a4c7 | 2024-02-01T11:31:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ag8why/mixtral_8x7b_running_slow_in_my_server/ | FlowKey1098 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag8why | false | null | t3_1ag8why | /r/LocalLLaMA/comments/1ag8why/mixtral_8x7b_running_slow_in_my_server/ | false | false | 1 | null | |
Mixtral 8x7B running slow on my server | 1 | I tried to run Mixtral 8x7B on AWS SageMaker with 192 GB of GPU memory; the instance type is ml.g5.48xlarge.
but it's still running slow.
132 seconds for 200 tokens.
you can see the code in the screenshot.
| 2024-02-01T11:20:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ag8qp5/mistral_8x7b_running_slow_on_my_server/ | FlowKey1098 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag8qp5 | false | null | t3_1ag8qp5 | /r/LocalLLaMA/comments/1ag8qp5/mistral_8x7b_running_slow_on_my_server/ | false | false | self | 1 | null |
Motherboard suggestions please? | 1 | Hello everyone! Long-time lurker here. Could I please get some suggestions on what motherboard to purchase? I'm open to both new and used options, preferably at a lower cost. I have 4 Powercolor 6700XTs left over from crypto mining, and I would like to use them on Ubuntu with ROCm and try run some bigger model's. I'm okay with a motherboard that will support 3 GPUs, but having support for 4 would be ideal. Other thoughts and suggestions welcome, thank you | 2024-02-01T11:13:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ag8mwm/motherboard_suggestions_please/ | throwaway8984752 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag8mwm | false | null | t3_1ag8mwm | /r/LocalLLaMA/comments/1ag8mwm/motherboard_suggestions_please/ | false | false | self | 1 | null |
what works best for creating code completion assistant using RAG over Codebase. | 5 | I am trying to create an assistant for code completion on private codebase.
I am finding it difficult to retrieve the correct context from regular embeddings.
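One option (a sketch, assuming LangChain's language-aware splitter) is to chunk on function/class boundaries instead of fixed-size text windows, which tends to embed and retrieve more cleanly:

```python
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON,
    chunk_size=1500,
    chunk_overlap=150,
)

with open("some_module.py") as f:           # placeholder path
    chunks = splitter.create_documents([f.read()])

# Each chunk now tends to start at a def/class boundary.
```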
is there better way to embed, index and retrieve code efficiently from codebase? | 2024-02-01T11:11:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ag8lr5/what_works_best_for_creating_code_completion/ | Striking_Paper5259 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag8lr5 | false | null | t3_1ag8lr5 | /r/LocalLLaMA/comments/1ag8lr5/what_works_best_for_creating_code_completion/ | false | false | self | 5 | null |
How to fine-tune a Llama 2 model on text data? | 5 | I want to know how to fine-tune a Llama 2 model on a text dataset. The dataset is some raw text files which I want the model to train on. However, most of the available approaches use a prompt-answer format, which I can't produce from my dataset. Can anyone suggest an approach or share available code that can be used to achieve this?
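A minimal sketch of this kind of continued pretraining (causal LM objective on raw text, no prompt-answer pairs needed); paths and hyperparameters are placeholders, and in practice you would usually add sequence packing and/or LoRA on top:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "meta-llama/Llama-2-7b-hf"
tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

ds = load_dataset("text", data_files={"train": "my_corpus/*.txt"})["train"]
ds = ds.map(lambda x: tok(x["text"], truncation=True, max_length=1024),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments("llama2-raw-text", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1, fp16=True),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```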
Thanks | 2024-02-01T09:50:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ag7fnh/how_to_finetune_a_llama2_model_on_text_data/ | PomegranateCute843 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag7fnh | false | null | t3_1ag7fnh | /r/LocalLLaMA/comments/1ag7fnh/how_to_finetune_a_llama2_model_on_text_data/ | false | false | self | 5 | null |
Local LLM Specifications | 1 | Can anyone recommend hardware specifications for a running local LLMs on a consumer grade PC? Budget < $2.5k It would need to run the latest models such as CodeLlama-70b and hopefully rival ChatGPT 4 or get as close as possible and be as future proof as possible. Are there any off the shelf PCs for this? It seems the GPU and VRAM is the most important aspect? Seems a lot of people are saying just use ChatGPT as it's so much cheaper but it seems to have got much worse in my experience and subject to future regulation and dumbing down and would rather "own" the stack. | 2024-02-01T09:45:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ag7cx0/local_llm_specifications/ | mobileappz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag7cx0 | false | null | t3_1ag7cx0 | /r/LocalLLaMA/comments/1ag7cx0/local_llm_specifications/ | false | false | self | 1 | null |
Request for resources: serving/hosting a local LLM for business cases | 6 | Hey guys! Looking to expand my knowledge here..
Let's say I have a business case for a chatbot using a local LLM that I fine-tuned, which also connects to a vector database, let's say ChromaDB. Let's take a 70B model as an example.
Where would you host it, how would you interact with it (custom API in python or something premade)
And what about concurrency? Could multiple people access it at the same time or would there be a queue?
Anyone got any experience setting anything like this up E2E?
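One common pattern (a sketch, not the only option): serve the model behind an OpenAI-compatible API with vLLM, which does continuous batching, so concurrent users are batched together rather than queued one by one. Model name, GPU count and URLs below are placeholders.

```python
# Start the server (shell command shown as a comment):
#   python -m vllm.entrypoints.openai.api_server \
#       --model meta-llama/Llama-2-70b-chat-hf \
#       --tensor-parallel-size 2
#
# Any OpenAI-style client can then talk to it:
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="meta-llama/Llama-2-70b-chat-hf",
    messages=[{"role": "user", "content": "Summarise our returns policy."}],
)
print(resp.choices[0].message.content)
```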
Thanks a ton in advance! | 2024-02-01T07:42:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ag5n2c/request_for_resources_servinghosting_a_local_llm/ | HerpyTheDerpyDude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag5n2c | false | null | t3_1ag5n2c | /r/LocalLLaMA/comments/1ag5n2c/request_for_resources_servinghosting_a_local_llm/ | false | false | self | 6 | null |
Language specific pretraining of Mistral 7b using LoRA | 11 | I am currently working on the continual pretraining of Mistral 7b using LoRA for multiple languages like Vietnamese, Hindi, Tamil, Bengali and Filipino. Would it be advisable to train separate LoRA weights for each language and then merge these individual language-specific LoRA weights with the original pretrained weights? The objective is to enable the model to effectively learn and adapt to various languages in a more versatile manner. | 2024-02-01T06:47:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ag4tsy/language_specific_pretraining_of_mistral_7b_using/ | LordOfThe_Idiots | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag4tsy | false | null | t3_1ag4tsy | /r/LocalLLaMA/comments/1ag4tsy/language_specific_pretraining_of_mistral_7b_using/ | false | false | self | 11 | null |
Adding memories / teaching an LLM one Q&A at a time? | 1 | My video game always gets updated with new features.
I want to be able to add a memory to my local LLM by typing something like this:
Q: How do I get X? A: You have to do Y to get X.
My game server is in Node.js. While I am not in the game, the LLM would detect if there's a question in the game chat and answers it.
I tried using context, but the context length is over 32k for all the features so I may have to use something else. More importantly I want to be able to add new bits of knowledge / memories quickly.
How would you go about doing it? Any help is appreciated! Thank you. | 2024-02-01T05:32:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ag3m05/adding_memories_teaching_an_llm_one_qa_at_a_time/ | GravyPoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag3m05 | false | null | t3_1ag3m05 | /r/LocalLLaMA/comments/1ag3m05/adding_memories_teaching_an_llm_one_qa_at_a_time/ | false | false | self | 1 | null |
Mistral 7B or Phi-2 for 6GB VRAM | 4 | Hello, it's my first time installing an LLM locally, on my RTX 3050 with 6GB VRAM.

Phi-2 was released by Microsoft and used textbook-level data for accuracy; Mistral 7B, on the other hand, is, as far as I know, better for open-source purposes (fine-tuning, ...).

How would you compare these two? Is there any difference in their accuracy, system requirements, ...?

How good are they in comparison to GPT-4 and GPT-3.5?
if you have any good link for comparision between them it would be so valuable. | 2024-02-01T05:11:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ag3855/misteral_7b_or_phi_2_for_6gb_vram/ | ImportantOwl2939 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag3855 | false | null | t3_1ag3855 | /r/LocalLLaMA/comments/1ag3855/misteral_7b_or_phi_2_for_6gb_vram/ | false | false | self | 4 | null |
Language specific pretraining using LoRA | 3 | I am currently working on the continual pretraining of Mistral 7b using LoRA for multiple languages like Vietnamese, Hindi, Tamil, Bengali and Filipino. Would it be advisable to train separate LoRA weights for each language and then merge these individual language-specific LoRA weights with the original pretrained weights? The objective is to enable the model to effectively learn and adapt to various languages in a more versatile manner. | 2024-02-01T03:55:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ag1sl9/language_specific_pretraining_using_lora/ | LordOfThe_Idiots | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag1sl9 | false | null | t3_1ag1sl9 | /r/LocalLLaMA/comments/1ag1sl9/language_specific_pretraining_using_lora/ | false | false | self | 3 | null |
Language specific pretraining using LoRA | 1 | I am currently working on the continual pretraining of Mistral 7b using LoRA for multiple languages like Vietnamese, Indonesian, and Filipino. Would it be advisable to train separate LoRA weights for each language and then merge these individual language-specific LoRA weights with the original pretrained weights? The objective is to enable the model to effectively learn and adapt to various languages in a more versatile manner. | 2024-02-01T03:53:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ag1rar/language_specific_pretraining_using_lora/ | LordOfThe_Idiots | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag1rar | false | null | t3_1ag1rar | /r/LocalLLaMA/comments/1ag1rar/language_specific_pretraining_using_lora/ | false | false | self | 1 | null |
Shit hardware and shoestring budgets | 88 | Let's hear it for everyone making LLMs work without a lot to work with!
I got a "top of the line" 14 inch laptop with 6gb vram. It basically runs nothing but 7b models quantized.
What are you working with? What are you hoping to see with LLMs? I love this enthusiast community and I think things are going to get crazy and awesome.
Let's hear it from the people working with little hardware! What are you using? What do you want to do? What crappy hardware are you using to work with all of this?
Hell yeah, let's see what we can do! Where do we go next?! | 2024-02-01T03:46:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ag1mta/shit_hardware_and_shoestring_budgets/ | AndrewVeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag1mta | false | null | t3_1ag1mta | /r/LocalLLaMA/comments/1ag1mta/shit_hardware_and_shoestring_budgets/ | false | false | self | 88 | null |
Recommendations for a model that would be a good job recruiter? | 1 | [removed] | 2024-02-01T02:55:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ag0m9z/recommendations_for_a_model_that_would_be_a_good/ | CincyTriGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag0m9z | false | null | t3_1ag0m9z | /r/LocalLLaMA/comments/1ag0m9z/recommendations_for_a_model_that_would_be_a_good/ | false | false | self | 1 | null |
Why are NSFW LLMs so much harder to find? | 1 | OK, so when it comes to NSFW AI art it's not that hard to find a site that is willing to make it.

However, when it comes to LLMs, it seems like it's even more censored.

I find this odd, since I would think it would be the other way around, where NSFW AI art image generators are hard to find but it would be easy to find NSFW LLMs.
guess there is a reason but not sure what it could be. | 2024-02-01T02:43:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ag0cz9/why_are_nsfw_llm_so_much_harder_to_find/ | ryan7251 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag0cz9 | false | null | t3_1ag0cz9 | /r/LocalLLaMA/comments/1ag0cz9/why_are_nsfw_llm_so_much_harder_to_find/ | false | false | nsfw | 1 | null |
Success with quantized 8x7B on 1070 with LM Studio? | 1 | Has anyone had success?
Thanks! | 2024-02-01T02:36:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ag07v1/success_with_quantized_8x7b_on_1070_with_lm_studio/ | AmericanKamikaze | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ag07v1 | false | null | t3_1ag07v1 | /r/LocalLLaMA/comments/1ag07v1/success_with_quantized_8x7b_on_1070_with_lm_studio/ | false | false | self | 1 | null |
Power of Multi-Sized Models in MOE | 4 | In the realm of AI models, the discussion around models like "8x7b" within the MOE (Mixture of Experts) framework has sparked my curiosity as a Java Developer and Technologist. Reflecting on this, the question arises: should we explore the potential of a multi-sized model that dynamically adjusts to the intricacies of each question?
Imagine a scenario where a "4x" model isn't a singular entity but a combination of smaller models, including 3b, 7b, 13b, and 30b. Leveraging the gating function within MOE, this approach aims to enhance performance by intelligently distributing the workload based on the complexity of the input.
From my perspective, the allure lies in the prospect of optimizing computational resources and improving model accuracy. By incorporating different model sizes, each specializing in handling distinct aspects of questions, we could potentially achieve more efficient and precise results.
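A toy illustration of the idea (purely illustrative; real MoE layers like Mixtral's use equal-sized experts and learned top-k routing inside every transformer block):

```python
import torch
import torch.nn as nn

class MultiSizeMoE(nn.Module):
    def __init__(self, d_model=512, expert_dims=(256, 1024, 4096)):
        super().__init__()
        self.router = nn.Linear(d_model, len(expert_dims))       # the gating function
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, h), nn.GELU(), nn.Linear(h, d_model))
            for h in expert_dims
        )

    def forward(self, x):                               # x: (batch, d_model)
        choice = self.router(x).argmax(dim=-1)          # top-1: cheap inputs -> small expert
        out = [self.experts[int(i)](xi) for i, xi in zip(choice, x)]
        return torch.stack(out)

moe = MultiSizeMoE()
print(moe(torch.randn(4, 512)).shape)                   # torch.Size([4, 512])
```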
The gating function in MOE becomes a key player in determining which sub-model is best suited for a particular question. This adaptive mechanism allows for resource allocation where it's most needed, making the model more versatile and capable of addressing a broader spectrum of queries with increased precision and more importantly performance. | 2024-02-01T01:31:45 | https://www.reddit.com/r/LocalLLaMA/comments/1afyvh2/power_of_multisized_models_in_moe/ | ThinkExtension2328 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afyvh2 | false | null | t3_1afyvh2 | /r/LocalLLaMA/comments/1afyvh2/power_of_multisized_models_in_moe/ | false | false | self | 4 | null |
Can someone explain why the time of first token generated is different within loops? | 1 | I'm trying to measure the time to first token generated by an LLM using `TextIteratorStreamer` in transformers, but I'm seeing something strange: the time to first token varies a lot between loop iterations. My code is as follows:
```python
import time
from threading import Thread
from transformers import TextIteratorStreamer

# model, tokenizer and device are assumed to be loaded already
for i in range(10):
    start = time.time()
    text = "<s>[INST]Write a story, about 400 words.[/INST]"
    encodeds = tokenizer(text, return_tensors="pt", add_special_tokens=False).to(device)
    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    generation_kwargs = dict(encodeds, streamer=streamer, max_new_tokens=512,
                             pad_token_id=tokenizer.eos_token_id)
    thread = Thread(target=model.generate, kwargs=generation_kwargs)
    thread.start()
    first_token = True
    out_str = ""
    for new_text in streamer:
        out_str += new_text
        if first_token:
            print(f"first token time: {time.time() - start} seconds")
            first_token = False  # only time the first streamed chunk
    thread.join()
```
I have tested it multiple times, but it always shows times like this:
first token time:0.5371785163879395 seconds
first token time:0.03760647773742676 seconds
first token time:0.037310123443603516 seconds
first token time:0.037428855895996094 seconds
first token time:0.03730964660644531 seconds
first token time:0.03709769248962402 seconds
first token time:0.03712606430053711 seconds
first token time:0.037362098693847656 seconds
first token time:0.03728461265563965 seconds
first token time:0.037253618240356445 seconds
The first loop always spend more time than other loops. Why? | 2024-02-01T01:19:03 | https://www.reddit.com/r/LocalLLaMA/comments/1afylw4/can_someone_explain_why_the_time_of_first_token/ | DataLearnerAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afylw4 | false | null | t3_1afylw4 | /r/LocalLLaMA/comments/1afylw4/can_someone_explain_why_the_time_of_first_token/ | false | false | self | 1 | null |
Open Hermes 2.5 Dataset Released | 200 | Open Hermes 2.5 Dataset was recently released:
[https://huggingface.co/datasets/teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5)
[https://twitter.com/Teknium1/status/1752799124775374928](https://twitter.com/Teknium1/status/1752799124775374928)
Dataset sources:
- Airoboros 2.2
- CamelAI Domain Expert Datasets (Physics, Math, Chemistry & Biology)
- ChatBot Arena (GPT-4 Only)
- Collective Cognition (09-11-2023)
- CoT Alpaca GPT4
- Evol Instruct 70K && 140K
- Glaive Code Assistant
- GPT4-LLM
- GPTeacher
- Medical Tasks
- MetaMath 40k
- SlimOrca 550K
- Platypus
- ShareGPT (GPT4-Only)
- Unnatural Instructions GPT4
Of course there is more details in tweet and dataset card. | 2024-02-01T00:47:21 | https://www.reddit.com/r/LocalLLaMA/comments/1afxxmx/open_hermes_25_dataset_released/ | Weyaxi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afxxmx | false | null | t3_1afxxmx | /r/LocalLLaMA/comments/1afxxmx/open_hermes_25_dataset_released/ | false | false | self | 200 | {'enabled': False, 'images': [{'id': 'J9DqKYjlbWHt-G2W_kyuc45r_14BtgJOxsxfHXAkc3w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xplIc8uSv9GY8rloZqRQztqgKzDoLuDp0JRpzDYzhEk.jpg?width=108&crop=smart&auto=webp&s=d1d2e2b29a8a31b5a5a123147f1a20c798b64788', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xplIc8uSv9GY8rloZqRQztqgKzDoLuDp0JRpzDYzhEk.jpg?width=216&crop=smart&auto=webp&s=6f82b844970eaaa212ba291273d956713886c4eb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xplIc8uSv9GY8rloZqRQztqgKzDoLuDp0JRpzDYzhEk.jpg?width=320&crop=smart&auto=webp&s=d83cb75cbe40eda8192dd550878e611e17f044a9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xplIc8uSv9GY8rloZqRQztqgKzDoLuDp0JRpzDYzhEk.jpg?width=640&crop=smart&auto=webp&s=afc2135579c7030da18daa7c5e4522f583ade1a8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xplIc8uSv9GY8rloZqRQztqgKzDoLuDp0JRpzDYzhEk.jpg?width=960&crop=smart&auto=webp&s=843a87aa4d64510768b9144a6bacbfee03544a52', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xplIc8uSv9GY8rloZqRQztqgKzDoLuDp0JRpzDYzhEk.jpg?width=1080&crop=smart&auto=webp&s=811ace99521f52843de72f3f87e348e22898d0dd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xplIc8uSv9GY8rloZqRQztqgKzDoLuDp0JRpzDYzhEk.jpg?auto=webp&s=133a878a9a70fba13301d9d0707a68710abe3c93', 'width': 1200}, 'variants': {}}]} |
How to train on a bunch of documents | 1 | [removed] | 2024-02-01T00:20:51 | https://www.reddit.com/r/LocalLLaMA/comments/1afxd0o/how_to_train_on_a_bunch_of_documents/ | opi098514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afxd0o | false | null | t3_1afxd0o | /r/LocalLLaMA/comments/1afxd0o/how_to_train_on_a_bunch_of_documents/ | false | false | self | 1 | null |
[Project] AI Filter: Local LLMs for social media curation | 18 | I built a small Chrome extension that uses a local LLM to filter social media posts (currently, just Twitter) based on natural language instructions.
For instance, you can tell it to:
>Hide all tweets, except for tweets about machine learning (ML), artificial intelligence (AI) and large language models (LLMs).
or:
1. By default, show all tweets
2. Do not show any tweets related to cryptocurrencies, blockchain, Bitcoin, Ethereum or related projects.
It's currently proof-of-concept stage and available at [https://github.com/thomasj02/AiFilter](https://github.com/thomasj02/AiFilter)
It uses vLLM as the inference server, so a CUDA GPU is required. I've tested it with [Nous Hermes 2 - Solar 10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) but other models would probably work well also.
If anyone wants to try it with a smaller model let me know and I'll try to give some help getting it going | 2024-02-01T00:17:30 | https://www.reddit.com/r/LocalLLaMA/comments/1afxaay/project_ai_filter_local_llms_for_social_media/ | hazard02 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afxaay | false | null | t3_1afxaay | /r/LocalLLaMA/comments/1afxaay/project_ai_filter_local_llms_for_social_media/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'U4-cse_14lm3F1PVOKgzbNn4_iXi__eVB5ZPk54yZV0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/odLHwwgnj_EgsPuds2d3q-oh8uONANMwT7w8_1ID1Us.jpg?width=108&crop=smart&auto=webp&s=8197db505635e7dd937df83cfc1761f40346ddf2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/odLHwwgnj_EgsPuds2d3q-oh8uONANMwT7w8_1ID1Us.jpg?width=216&crop=smart&auto=webp&s=0be8cfe793dd21cdd3c549a69e24c4baa1e21f70', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/odLHwwgnj_EgsPuds2d3q-oh8uONANMwT7w8_1ID1Us.jpg?width=320&crop=smart&auto=webp&s=a914898bb1682399fe5ab5fb969916b0dc270192', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/odLHwwgnj_EgsPuds2d3q-oh8uONANMwT7w8_1ID1Us.jpg?width=640&crop=smart&auto=webp&s=9ef63b29f00c1f2cff0eb2950b7eca2b6ab655b6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/odLHwwgnj_EgsPuds2d3q-oh8uONANMwT7w8_1ID1Us.jpg?width=960&crop=smart&auto=webp&s=499e9fba6065bf2a1f59dbe4c5f4b4a264c9d971', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/odLHwwgnj_EgsPuds2d3q-oh8uONANMwT7w8_1ID1Us.jpg?width=1080&crop=smart&auto=webp&s=e837cf4b53c6dfe4760bcf3c000ee36a5cd80240', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/odLHwwgnj_EgsPuds2d3q-oh8uONANMwT7w8_1ID1Us.jpg?auto=webp&s=8b1620411c489b434b82392c9474f17c45f6be5e', 'width': 1200}, 'variants': {}}]} |
Mac Build | 1 | Building an M2 Ultra Mac Studio. Going for the 24-core CPU, 60-core GPU, 32-core Neural Engine.
For running larger models, is 128gb unified memory limiting in any cases... worth upgrading to 196gb? | 2024-02-01T00:13:03 | https://www.reddit.com/r/LocalLLaMA/comments/1afx6we/mac_build/ | Swmp1024 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afx6we | false | null | t3_1afx6we | /r/LocalLLaMA/comments/1afx6we/mac_build/ | false | false | self | 1 | null |
How to generate creative but always affirmative or always negative response? | 1 | [removed] | 2024-02-01T00:05:53 | https://www.reddit.com/r/LocalLLaMA/comments/1afx17b/how_to_generate_creative_but_always_affirmative/ | RepresentativeOdd276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afx17b | false | null | t3_1afx17b | /r/LocalLLaMA/comments/1afx17b/how_to_generate_creative_but_always_affirmative/ | false | false | default | 1 | null |
A few questions about using local LLM for research & data analysis | 1 | Hi all- I've been looking into the potential of using LLM for simple data analysis, or more specifically, analyzing survey results. Since everything is so new and quickly evolving, I haven't been able to find conclusive answers and thought maybe some of you might have already thought through some of these things!
1. Is there a local alternative to ChatGPT's Code Interpreter/ Data Analyst? I've seen Open Interpreter and Code Llama... are these or other similar models able to analyze data in the same way?
2. What is the risk of ChatGPT Data Analyst and the local models above hallucinating during data analysis? And are there ways to reduce?
3. The reason I'm looking into local models is the assumption that they'll be more secure than using ChatGPT or a cloud-based LLM. Is that still a valid concern or is OpenAI addressing this?
4. Are there any more specific communities / resources you recommend checking out to learn more about using LLM for research and data analysis?
Thanks! | 2024-01-31T23:46:26 | https://www.reddit.com/r/LocalLLaMA/comments/1afwkvb/a_few_questions_about_using_local_llm_for/ | flare389 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afwkvb | false | null | t3_1afwkvb | /r/LocalLLaMA/comments/1afwkvb/a_few_questions_about_using_local_llm_for/ | false | false | self | 1 | null |
CringeBot - imbue the essence of the child you were in 2002 into an LLM! | 16 | Hi everyone! I wanted to share a little project I had been working on for the last couple of weeks that will be especially interesting for anyone who was on AOL Instant Messenger in the 1990s and 2000s, mixed with a little creative writing on the subject.
[Here is part 1](https://medium.com/@richard.siomporas/cringebot-qlora-fine-tuning-of-a-state-of-the-art-llm-using-aol-instant-messenger-chat-logs-from-d0961f9faf6f) of my first medium series on the making of this project, and [here is the GitHub repo](https://github.com/hotspoons/cringe-bot) for a CLI tool that will fine tune a LLM on your AIM chats that is still a work in progress, but it should be pretty well sorted by the time part 2 and 3 are finished. Enjoy! | 2024-01-31T23:42:19 | https://www.reddit.com/r/LocalLLaMA/comments/1afwhhs/cringebot_imbue_the_essence_of_the_child_you_were/ | hotspoons_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afwhhs | false | null | t3_1afwhhs | /r/LocalLLaMA/comments/1afwhhs/cringebot_imbue_the_essence_of_the_child_you_were/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': '532wVwhlDOr-xMBOubtfzwN5z2Qcc8UHXUlYxSUyKq8', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/MRiGIET0RTrCT6Ds69uUpoEUHiuJravTcrZ5VZRT4QM.jpg?width=108&crop=smart&auto=webp&s=6ac53ffdcb5c0d2c78805642aa59666e75d2ba92', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/MRiGIET0RTrCT6Ds69uUpoEUHiuJravTcrZ5VZRT4QM.jpg?width=216&crop=smart&auto=webp&s=5f7e94a7bd74ea7bba313dd4702589e021a462a8', 'width': 216}, {'height': 212, 'url': 'https://external-preview.redd.it/MRiGIET0RTrCT6Ds69uUpoEUHiuJravTcrZ5VZRT4QM.jpg?width=320&crop=smart&auto=webp&s=350149cad17629e4d9139f91dfc1152e31e07541', 'width': 320}], 'source': {'height': 415, 'url': 'https://external-preview.redd.it/MRiGIET0RTrCT6Ds69uUpoEUHiuJravTcrZ5VZRT4QM.jpg?auto=webp&s=18cc6e028c9f0bf432a881a67f0480a5c079aba1', 'width': 625}, 'variants': {}}]} |
Quick heads-up about using CodeLlama 70b and llama.cpp for chat | 25 | CodeLlama 70b has a complicated chat template. The chat template is meant to ensure that the model knows what to do (like understand the system prompt, and switch between assistant and user roles)
llama.cpp does **_not support chat templates_**, which means the input to the model is not the same as the model actually expects. In practice, this means the model gives garbage output. I see many people struggle with bad output from the newest CodeLlama, this is most likely why.
There are projects out there that do support chat templates, although I focus primarily on raw llama.cpp, so I'm not familiar with them.
Huggingface transformers at the very least does support it well.
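For reference, this is what template support looks like on the transformers side (the resulting string is exactly what a raw llama.cpp prompt would need to contain; the repo may require accepting the model licence on Hugging Face):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("codellama/CodeLlama-70b-Instruct-hf")
messages = [
    {"role": "system", "content": "You write clean Python."},
    {"role": "user", "content": "Write a function that reverses a string."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)   # shows the special turn format CodeLlama 70B actually expects
```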
Context
- [llama.cpp discussion](https://github.com/ggerganov/llama.cpp/issues/4216#issuecomment-1829911139) about support for chat templates
- The [chat_template for CodeLlama 70b](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf/blob/main/tokenizer_config.json) can be found in tokenizer_config.json on Hugging Face
- [Huggingface documentation about chat templates](https://huggingface.co/docs/transformers/main/en/chat_templating#advanced-how-do-chat-templates-work) | 2024-01-31T23:39:20 | https://www.reddit.com/r/LocalLLaMA/comments/1afweyw/quick_headsup_about_using_codellama_70b_and/ | Combinatorilliance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afweyw | false | null | t3_1afweyw | /r/LocalLLaMA/comments/1afweyw/quick_headsup_about_using_codellama_70b_and/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'C2N6UYLUNk9B-eOzzx6MctpsiozGklpmJklv617uOrY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZIxyWJz92DBlvcEz4zK1qF0lh85WZp0xDe1ei6qKnaY.jpg?width=108&crop=smart&auto=webp&s=8de6e8d26a6040788b01499329bbd055e8f8ab7d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZIxyWJz92DBlvcEz4zK1qF0lh85WZp0xDe1ei6qKnaY.jpg?width=216&crop=smart&auto=webp&s=cc9cdfb71947f39bcca03f0c36fdf062b9e48e76', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZIxyWJz92DBlvcEz4zK1qF0lh85WZp0xDe1ei6qKnaY.jpg?width=320&crop=smart&auto=webp&s=2e07a410ef7aede7f32d8cdc12d302397441e174', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZIxyWJz92DBlvcEz4zK1qF0lh85WZp0xDe1ei6qKnaY.jpg?width=640&crop=smart&auto=webp&s=c8da2431695c9daa772f4b98c6bfc761874d943e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZIxyWJz92DBlvcEz4zK1qF0lh85WZp0xDe1ei6qKnaY.jpg?width=960&crop=smart&auto=webp&s=c92414e623072e44e040a3deb89e56bfc34528be', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZIxyWJz92DBlvcEz4zK1qF0lh85WZp0xDe1ei6qKnaY.jpg?width=1080&crop=smart&auto=webp&s=d017213e9e875f1504f0d04f57fb5cda16f28a89', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZIxyWJz92DBlvcEz4zK1qF0lh85WZp0xDe1ei6qKnaY.jpg?auto=webp&s=9f27703a88412932f84deba964a0506ff9cc8899', 'width': 1200}, 'variants': {}}]} |
What is the best way to / how would you create a Local Clone of yourself? | 1 | [removed] | 2024-01-31T23:34:15 | https://www.reddit.com/r/LocalLLaMA/comments/1afwas7/what_is_the_best_way_to_how_would_you_create_a/ | headbopper96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afwas7 | false | null | t3_1afwas7 | /r/LocalLLaMA/comments/1afwas7/what_is_the_best_way_to_how_would_you_create_a/ | false | false | self | 1 | null |
Ollama command line | 2 | Does anyone know how I can make ollama remember my conversation when I use it on the command line like this?
ollama run mistral "write me a script in bash that will do x"
Then I do something else on the command line and later I want to be able to do something like:
ollama run mistral "modify that script to also do y"
I like using it this way because I can redirect output instead of copying and pasting but obviously, this isn't going to work because it is treating this as a totally new conversation and is therefore missing the context. How do I get a little more clever with this so that it understands the context based on my previous questions? | 2024-01-31T23:08:43 | https://www.reddit.com/r/LocalLLaMA/comments/1afvp0i/ollama_command_line/ | Fun_Stress1977 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afvp0i | false | null | t3_1afvp0i | /r/LocalLLaMA/comments/1afvp0i/ollama_command_line/ | false | false | self | 2 | null |
Besides LoRa, is anything else in the peft library worth doing? | 1 | I mostly only see Lora being used from peft. | 2024-01-31T22:12:24 | https://www.reddit.com/r/LocalLLaMA/comments/1afubzs/besides_lora_is_anything_else_in_the_peft_library/ | SkGiles | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afubzs | false | null | t3_1afubzs | /r/LocalLLaMA/comments/1afubzs/besides_lora_is_anything_else_in_the_peft_library/ | false | false | self | 1 | null |
Exploring the limitations of LLMs-as-a-Judge | 21 | LLMs are notoriously bad at handling numeric ranges, which is impractical given their otherwise impressive ability to evaluate complex, open-ended tasks. Given their increasing use as evaluators, it's crucial to understand their inherent biases. You may have seen the [recent post](https://twitter.com/aparnadhinak/status/1748368364395721128) from a team at Arize, where they study the ability of LLMs to evaluate using numeric ranges. Concretely, they test GPT-4's ability to grade misspelled texts of varying severity. To verify their findings, I replicated the experiment, and the results are as follows.
https://preview.redd.it/reuqpxfjgufc1.png?width=2367&format=png&auto=webp&s=2a045af2cef5da1138e11ce9fa50701ee8a00dd3
Note the *perfect linear range,* which depicts the desired outcome of a linear correlation between LLM Evaluation Score and misspelled %. Okay great, so far, **nothing new.** Despite this apparent inability however, we **know** there is a strong correlation between LLM and human evaluators. For example MT-Bench shows a [0.9 correlation](https://twitter.com/gblazex/status/1746295870792847562) with Arena Elo. This prompts the question, can we use improved prompt techniques or scoring guidelines to better correlate the scores depicted above? Arize AI left things off quite open in their study and I'm keen to explore their results further. To that end I set up a [repo](https://github.com/LeonEricsson/llmjudge) to document my experiments and I'd like to share the results from the initial tests
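For anyone who wants to poke at this themselves, a rough sketch of the misspelling-severity setup (the judge call is left as a stub; plug in whatever grading prompt and model you use, then check how monotonic the scores are, e.g. with a rank correlation):

```python
import random

def misspell(text: str, rate: float, seed: int = 0) -> str:
    rng = random.Random(seed)
    words = text.split()
    for i, w in enumerate(words):
        if len(w) > 3 and rng.random() < rate:
            j = rng.randrange(1, len(w) - 1)
            words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]   # swap two inner letters
    return " ".join(words)

passage = "The quick brown fox jumps over the lazy dog near the quiet river bank."
for rate in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    corrupted = misspell(passage, rate)
    # score = judge_llm(corrupted)   # hypothetical call to the LLM judge goes here
    print(f"{rate:.1f}: {corrupted}")
```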
**Reversed**.
What happens if we reverse the scoring guidelines, making 10 the perfect score?
https://preview.redd.it/3qw4neongufc1.png?width=2351&format=png&auto=webp&s=d82bd7a748cafd5c96dec119adb57bf7117feca8
**Grades**
Given the statements from Arize, what happens if we discard the numeric scores and just ask for "grade labels"?
https://preview.redd.it/f97uf91tgufc1.png?width=2349&format=png&auto=webp&s=787e69f5fe6fb236618c85ea59acd575facb5aaa
**CoT**
One of the authors of Prometheus suggested that you provide a full mapping of explanations across the entire scoring matrix, combined with Chain-of-Thought.
https://preview.redd.it/y5clx4j8hufc1.png?width=2353&format=png&auto=webp&s=fa1a4a7a9e8439710adbe1021b19be27a85119fb
This is an ongoing exploration, would love to hear your thoughts! | 2024-01-31T21:59:11 | https://www.reddit.com/r/LocalLLaMA/comments/1afu08t/exploring_the_limitations_of_llmsasajudge/ | TelloLeEngineer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afu08t | false | {'oembed': {'author_name': 'Aparna Dhinakaran', 'author_url': 'https://twitter.com/aparnadhinak', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">(1/9) LLM as a Judge: Numeric Score Evals are Broken!!!<br><br>LLM Evals are valuable analysis tools. But should you use numeric scores or classes as outputs? 🤔<br><br>TLDR: LLM’s suck at continuous ranges ☠️ - use LLM classification evals instead! 🔤<br><br>An LLM Score Eval uses an LLM to judge… <a href="https://t.co/cATZEomvZl">pic.twitter.com/cATZEomvZl</a></p>— Aparna Dhinakaran (@aparnadhinak) <a href="https://twitter.com/aparnadhinak/status/1748368364395721128?ref_src=twsrc%5Etfw">January 19, 2024</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/aparnadhinak/status/1748368364395721128', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_1afu08t | /r/LocalLLaMA/comments/1afu08t/exploring_the_limitations_of_llmsasajudge/ | false | false | 21 | {'enabled': False, 'images': [{'id': 'ziXAvP25asas_RuiWbAm4Ufba-I9dnMyLck3ig7lAk0', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/Cf_PhEQpcLcSKNB4aqyfF5OMLfkYqgmSt55ql19W1C0.jpg?width=108&crop=smart&auto=webp&s=e93a479ff18596bf7fb89b2158ad1533ec4922d1', 'width': 108}], 'source': {'height': 87, 'url': 'https://external-preview.redd.it/Cf_PhEQpcLcSKNB4aqyfF5OMLfkYqgmSt55ql19W1C0.jpg?auto=webp&s=1ab8f009747f68f0744716d076dcca7f57c0db0a', 'width': 140}, 'variants': {}}]} | |
How to Install LLava 1.6 in Text Web UI Oobabooga | 2 | With the recent release of LLava 1.6, is there any way to install it as an extension in the text-generation-web UI/Oobabooga? | 2024-01-31T21:24:01 | https://www.reddit.com/r/LocalLLaMA/comments/1aft58o/how_to_install_llava_16_in_text_web_ui_oobabooga/ | Asleep_Aerie_4591 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1aft58o | false | null | t3_1aft58o | /r/LocalLLaMA/comments/1aft58o/how_to_install_llava_16_in_text_web_ui_oobabooga/ | false | false | self | 2 | null |
argilla/distilabel-capybara-dpo-7k-binarized: a new dataset aiming to improve the performance of open LLMs for multi-turn dialogue | 42 | Argilla continues doing really nice work in creating open datasets and sharing the creation process.
The [argilla/distilabel-capybara-dpo-7k-binarized](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized) dataset aims to fill a gap in the open source community for multi-turn datasets, i.e. having more than a single back and forth in a chat. The dataset consists of multi-turn prompts along with chosen and rejected pairs.
The dataset was constructed using argilla's distilabel library and took the following approach:
* generate three responses to the last user message using OSS 7B models (Notus7B, NeuralBeagle and OpenHermes-2.5.)
* From the 4 responses to each multi-turn dialogue, they use gpt-4-turbo to rank the quality of responses.
To test if this type of dataset helps, Argilla preference tuned OpenHermes-2.5-Mistral-7B on the dataset. On MTBench the model gains some performance for multi-turn over previous models.
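A quick way to poke at the pairs (a sketch; this assumes the default config and a train split, and that the chosen/rejected columns hold the full multi-turn conversations as described on the dataset card):

```python
from datasets import load_dataset

ds = load_dataset("argilla/distilabel-capybara-dpo-7k-binarized", split="train")
row = ds[0]
print(list(row.keys()))   # inspect the available columns first
print(row["chosen"])      # the preferred multi-turn conversation
```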
The dataset card has a bunch more detail about how the dataset was created, with enough code that it would be pretty easy to reproduce this approach using a different dataset as a starting point. | 2024-01-31T21:20:45 | https://www.reddit.com/r/LocalLLaMA/comments/1aft27r/argilladistilabelcapybaradpo7kbinarized_a_new/ | dvanstrien | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1aft27r | false | null | t3_1aft27r | /r/LocalLLaMA/comments/1aft27r/argilladistilabelcapybaradpo7kbinarized_a_new/ | false | false | self | 42 | {'enabled': False, 'images': [{'id': 'HK_VHalxO-qP2ZvbiwGHZLmhs2b1SoMZy8dTo3gXVYw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VR1oWA8u8ybx_sjbHD6UKnkY-uTt8Fv6HGs2vGQZuzs.jpg?width=108&crop=smart&auto=webp&s=f93ee1967a06beeb11269d50351c5734fd0bda1f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VR1oWA8u8ybx_sjbHD6UKnkY-uTt8Fv6HGs2vGQZuzs.jpg?width=216&crop=smart&auto=webp&s=9d9be646c042283503c1d597a8c62865bc380244', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VR1oWA8u8ybx_sjbHD6UKnkY-uTt8Fv6HGs2vGQZuzs.jpg?width=320&crop=smart&auto=webp&s=3975affedf46a438d046ec4434ef74e252bd4e76', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VR1oWA8u8ybx_sjbHD6UKnkY-uTt8Fv6HGs2vGQZuzs.jpg?width=640&crop=smart&auto=webp&s=b3f67b8aaa56e607023fb9066b0c9a5f24fea66f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VR1oWA8u8ybx_sjbHD6UKnkY-uTt8Fv6HGs2vGQZuzs.jpg?width=960&crop=smart&auto=webp&s=1b4f2281df6e35bfcfda946c18895a9817d76e2e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VR1oWA8u8ybx_sjbHD6UKnkY-uTt8Fv6HGs2vGQZuzs.jpg?width=1080&crop=smart&auto=webp&s=a9840df87813b5cf1f434f76942f19eec7f79e09', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VR1oWA8u8ybx_sjbHD6UKnkY-uTt8Fv6HGs2vGQZuzs.jpg?auto=webp&s=3bfd6632a95dc1425470da522163ae8dc27bc7b9', 'width': 1200}, 'variants': {}}]} |
For Miqu, can I lower its 32k ctx without confusing it? | 13 | The documentation on the model is understandably lacking.
If I set the default 32k in Ooba to about 12k to match the 12k I use in SillyTavern, would that confuse the model?
I tested [LoneStriker/miqu-1-70b-sf-5.0b-h6-exl2](https://huggingface.co/LoneStriker/miqu-1-70b-sf-5.0bpw-h6-exl2) and unfortunately it doesn't fit on a single 80GB GPU, but it just barely fits if I use 8-bit vram, which to me isn't good enough due to the speed. Even when running SillyTavern with 12k context, it's quite slow at loading in the prompt (inference speed itself is fine though).
​ | 2024-01-31T21:18:42 | https://www.reddit.com/r/LocalLLaMA/comments/1aft0ag/for_miqu_can_i_lower_its_32k_ctx_without/ | ReMeDyIII | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1aft0ag | false | null | t3_1aft0ag | /r/LocalLLaMA/comments/1aft0ag/for_miqu_can_i_lower_its_32k_ctx_without/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'GeMGjlIUMmuTN6DayEmBQYRUYU3qOGjwdTJz70guu6Q', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HbCTCTj1VDOPPQ-vMel_YZcfCX_ijmL61qS8BGrv1xI.jpg?width=108&crop=smart&auto=webp&s=666a01093497a22e4a797d9092c5ae046e20372e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HbCTCTj1VDOPPQ-vMel_YZcfCX_ijmL61qS8BGrv1xI.jpg?width=216&crop=smart&auto=webp&s=d9750b9f44c152da2d6b77881934e475119652e2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HbCTCTj1VDOPPQ-vMel_YZcfCX_ijmL61qS8BGrv1xI.jpg?width=320&crop=smart&auto=webp&s=b40b8c3ed7c0d174c36abaf41a478ed016aa6423', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HbCTCTj1VDOPPQ-vMel_YZcfCX_ijmL61qS8BGrv1xI.jpg?width=640&crop=smart&auto=webp&s=5f0f712bd0ff06ce4437b88f6833efbf8c231781', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HbCTCTj1VDOPPQ-vMel_YZcfCX_ijmL61qS8BGrv1xI.jpg?width=960&crop=smart&auto=webp&s=026552fa37275ad7cddd42fc7f58eb3b98c2407d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HbCTCTj1VDOPPQ-vMel_YZcfCX_ijmL61qS8BGrv1xI.jpg?width=1080&crop=smart&auto=webp&s=0322b7b0b20b0ee6bdd84ff6d05731cef3e8a13a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HbCTCTj1VDOPPQ-vMel_YZcfCX_ijmL61qS8BGrv1xI.jpg?auto=webp&s=a5398629a54d12fe01b036e7d71ea683df47e778', 'width': 1200}, 'variants': {}}]} |
Struggling with basic English tests: Why Can't LLMs Ace Them? | 43 | It's quite surprising that most LLMs, including Llama2-70b, Mixtral 8x7B, GPT-3.5, Gemini Pro, and Claude v2.1, often falter on English tests tailored for non-native speakers. These tests aren't about deep logical reasoning; they're primarily based on fundamental grammar and contextual understanding to pinpoint the correct answer. Yet, these sophisticated models consistently miss the mark.
Consider these examples:
1.Recent research has found lots of evidence to ___ the drug company’s claims about its “miracle” tablets for curing cancer.
(A) provoke (B) counter (C) expose (D) convert
Most LLMs often answer (C) "expose" as the correct choice. However, the real correct answer is (B). Interestingly, when these models that choose the right answer are questioned about the possibility of selecting (C) 'expose', they often assert that it's also a correct option (including GPT-4).
2. Select the most appropriate word from the list to match the context and grammar:
When a person sweats, he loses water and salt, so he needs to replace both. Replacing lost fluid with just plain water means the body has too much water and not enough salt. To [absorption, active, alert, combat, option, effective, even, status, pass through, reach for]__ things out, the body will get rid of water by producing urine.
Despite their extensive training on numerous high-quality English texts, most language models struggle with this basic question. They tend to provide nonsensical answers like "To combat things out," which is obviously incorrect and sometimes even suggested by GPT-4.
This poses an interesting question: why do these language models, which are otherwise quite advanced, find it challenging to tackle what seems like straightforward English tests that any native speaker would breeze through?
​ | 2024-01-31T21:02:10 | https://www.reddit.com/r/LocalLLaMA/comments/1afslno/struggling_with_basic_english_tests_why_cant_llms/ | LimaoGURU | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afslno | false | null | t3_1afslno | /r/LocalLLaMA/comments/1afslno/struggling_with_basic_english_tests_why_cant_llms/ | false | false | self | 43 | null |
Deployment suggestions | 1 | Hi I am building upon a large sized LLM. So far to deploy I used runpod for serverless infra, with docker images. But the lifecycle of my code till production is a huge pain. Building large docker images with the model, and the server downloading it every time takes a lot of time of interruption. Can someone suggest some good LLM production cycles, and also please suggest infra for deploying which should be fast, efficient and intuitive. | 2024-01-31T20:54:22 | https://www.reddit.com/r/LocalLLaMA/comments/1afsent/deployment_suggestions/ | PlateAffectionate436 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afsent | false | null | t3_1afsent | /r/LocalLLaMA/comments/1afsent/deployment_suggestions/ | false | false | self | 1 | null |
RAG: Similarity Search Discussion with longer User Inputs | 1 | [removed] | 2024-01-31T20:54:11 | https://www.reddit.com/r/LocalLLaMA/comments/1afsehn/rag_similarity_search_discussion_with_longer_user/ | Ill_Bodybuilder3499 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afsehn | false | null | t3_1afsehn | /r/LocalLLaMA/comments/1afsehn/rag_similarity_search_discussion_with_longer_user/ | false | false | self | 1 | null |
How huge is this? | 14 | 2024-01-31T20:20:07 | https://huggingface.co/papers/2401.03462 | AloneInTheWhole | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1afrjvl | false | null | t3_1afrjvl | /r/LocalLLaMA/comments/1afrjvl/how_huge_is_this/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'QH-N6trOT2YMWQM3Nrk9pBsyqIjiTtN6k6vUnDxDRY0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LOnJ7M1noTgDMPxdzcd4Qgjfr6109W8RKOLQ0vyqMdw.jpg?width=108&crop=smart&auto=webp&s=1b05db22d61c4502362787912b749cbf2a78a477', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LOnJ7M1noTgDMPxdzcd4Qgjfr6109W8RKOLQ0vyqMdw.jpg?width=216&crop=smart&auto=webp&s=3cbcda84a9e52032eb40130ebba15524c9dbf3e8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LOnJ7M1noTgDMPxdzcd4Qgjfr6109W8RKOLQ0vyqMdw.jpg?width=320&crop=smart&auto=webp&s=f58e43b680fa247321ba438773b4de747bafece6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LOnJ7M1noTgDMPxdzcd4Qgjfr6109W8RKOLQ0vyqMdw.jpg?width=640&crop=smart&auto=webp&s=3ada0ae1d5786660b6fa6f2881fc238112668cfd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LOnJ7M1noTgDMPxdzcd4Qgjfr6109W8RKOLQ0vyqMdw.jpg?width=960&crop=smart&auto=webp&s=66a444f995da56663cd6b061123c027ea1da2461', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LOnJ7M1noTgDMPxdzcd4Qgjfr6109W8RKOLQ0vyqMdw.jpg?width=1080&crop=smart&auto=webp&s=f9f0c42dc1ef5e59297795c8ec28365372c4cbad', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LOnJ7M1noTgDMPxdzcd4Qgjfr6109W8RKOLQ0vyqMdw.jpg?auto=webp&s=e87f4fef836d5c3476c3838399e319f66b5968aa', 'width': 1200}, 'variants': {}}]} | ||
Looking for the latest LLMs in German, French and Italian (7B and under preferred) | 1 | Does anyone have any recommendations for 7B and under LLMs in German, French or Italian? What are your favorites? Any advice would be greatly appreciated. | 2024-01-31T20:10:23 | https://www.reddit.com/r/LocalLLaMA/comments/1afrb97/looking_for_the_latest_llms_in_german_french_and/ | grim_fusion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afrb97 | false | null | t3_1afrb97 | /r/LocalLLaMA/comments/1afrb97/looking_for_the_latest_llms_in_german_french_and/ | false | false | self | 1 | null |
Released a 1.5 TB Multimodal Python Copilot Training Dataset on Hugging Face | 125 | Hello fellow llamas!
tldr: we published an open source dataset to help build your own AI-coding model from 1200+ AI-focused, python repositories:
[https://huggingface.co/datasets/matlok/multimodal-python-copilot-training-overview](https://huggingface.co/datasets/matlok/multimodal-python-copilot-training-overview)
After checking out the latest commercially available models and many open source options, we decided we wanted a model that could code and keep up with the latest AI research. To build our own, self-hosted copilots we needed a large dataset, and we wanted to share as we go.
The focus for this version is on creating a baseline “Manager level” understanding when someone types the question/prompt:
“define how this software works in the module: ./path/some.py”
The model responds with the generative answer wrapped in a YAML payload. To build this dataset, we started by extracting how to use classes, global functions, base classes (inheritance/polymorphism), and imports from 1,207 leading Python AI research repos. We also wanted to draw and speak/hear with transformers, so we added image and audio modes to the dataset in the hope of getting more audio/image coding models into this space.
Here's the summary (everything is in parquet files):
- ~2.3M unique source coding rows
- ~1.1M instruct alpaca yaml text rows
- ~923K png knowledge graph images with alpaca text description
- ~334K mp3s with alpaca and different speaker for questions vs answers
- requires 1.5 TB storage on disk
We plan to train and fine-tune models like Code Llama 70B using these datasets, and on our blog we shared an overview of some of the other coding models we liked, which may help others looking to do the same:
https://matlok.ai/
Lastly if these datasets are not useful, then hopefully Hugging Face's datasets can help:
https://huggingface.co/datasets
Thanks for your time! | 2024-01-31T20:09:52 | https://www.reddit.com/r/LocalLLaMA/comments/1afrar0/released_a_15_tb_multimodal_python_copilot/ | buildinstuff5432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afrar0 | false | null | t3_1afrar0 | /r/LocalLLaMA/comments/1afrar0/released_a_15_tb_multimodal_python_copilot/ | false | false | self | 125 | {'enabled': False, 'images': [{'id': '3s5WEo8tBt7pYfXagp8a7mg6LlItL0PBUV5_KJkcGH8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WAEHNaV5WJGDcXHKHqBunq9cliTg5cfM-WxjuDOVoPo.jpg?width=108&crop=smart&auto=webp&s=2e8bd150f881c0c57812df6a3d793f3d3dc5f60b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WAEHNaV5WJGDcXHKHqBunq9cliTg5cfM-WxjuDOVoPo.jpg?width=216&crop=smart&auto=webp&s=e6ca0f354d518d3aa5833deefb8d018ba90e82f2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WAEHNaV5WJGDcXHKHqBunq9cliTg5cfM-WxjuDOVoPo.jpg?width=320&crop=smart&auto=webp&s=317da29bbbf8331c248c48f2ef9292bd58221077', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WAEHNaV5WJGDcXHKHqBunq9cliTg5cfM-WxjuDOVoPo.jpg?width=640&crop=smart&auto=webp&s=4b283e678e3866ed9ddb9f2aa96f62c873150521', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WAEHNaV5WJGDcXHKHqBunq9cliTg5cfM-WxjuDOVoPo.jpg?width=960&crop=smart&auto=webp&s=d7cae9cd7ebb0f24a7502b8be972252287bba38a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WAEHNaV5WJGDcXHKHqBunq9cliTg5cfM-WxjuDOVoPo.jpg?width=1080&crop=smart&auto=webp&s=14458a5ffb71a2b3ab6a2f1a9491a2582817309d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WAEHNaV5WJGDcXHKHqBunq9cliTg5cfM-WxjuDOVoPo.jpg?auto=webp&s=8c0fda3b0634a4c8911fe1399f6a7128d6c139a4', 'width': 1200}, 'variants': {}}]} |
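For anyone who wants to inspect the instruct text rows before committing 1.5 TB of disk, the parquet files can be streamed with the Hugging Face datasets library. A minimal sketch, assuming the alpaca-style rows can be read straight from the repo's parquet files; the data_files glob and the exact row fields are assumptions, not taken from the dataset card:

```python
from datasets import load_dataset

# Stream a few rows instead of downloading the full 1.5 TB dataset.
# The glob pattern below is an illustrative guess; check the repo file list first.
ds = load_dataset(
    "matlok/multimodal-python-copilot-training-overview",
    data_files="**/*.parquet",
    split="train",
    streaming=True,
)

for i, row in enumerate(ds):
    print(row)   # expected: alpaca-style instruction/response pairs in YAML
    if i >= 2:
        break
```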
Does this confirm MiQu is indeed Mixtral-Medium? | 1 | [removed] | 2024-01-31T20:00:00 | https://www.reddit.com/r/LocalLLaMA/comments/1afr1i0/does_this_confirm_miqu_is_indeed_mixtralmedium/ | Accurate_Event_8608 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afr1i0 | false | null | t3_1afr1i0 | /r/LocalLLaMA/comments/1afr1i0/does_this_confirm_miqu_is_indeed_mixtralmedium/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'IiJiFlTViX12HFlrM5q0VzRx5H8mF60G_H9r2vZmX5c', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XqXlFlMBO5OBHkspjAAfh4k3t6r_qV8rQaXwNGZJ2ow.jpg?width=108&crop=smart&auto=webp&s=fcc72e7270a8c3930a46724f05502f9fdece8170', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/XqXlFlMBO5OBHkspjAAfh4k3t6r_qV8rQaXwNGZJ2ow.jpg?auto=webp&s=fb3b5edac0c7bdb700a2d2e78b3d003b4eee515e', 'width': 140}, 'variants': {}}]} |
Brand new Gigabyte T181-G20 SXM2 servers for $150 in Hong Kong. Cheaper than buying adapters to use those cheap V100 SXM2 GPUs. | 8 | 2024-01-31T19:47:59 | https://github.com/ggerganov/llama.cpp/discussions/5229 | fallingdowndizzyvr | github.com | 1970-01-01T00:00:00 | 0 | {} | 1afqqyq | false | null | t3_1afqqyq | /r/LocalLLaMA/comments/1afqqyq/brand_new_gigabyte_t181g20_sxm2_servers_for_150/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'ykrAJhNsmHrMJhaWXkm63SC6nXtMXJ3wPO5WxGqjgLU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/b4eLENYDkM4ATVJwd9gq5Pgkkg9SkA6gDUf2VJIx5zw.jpg?width=108&crop=smart&auto=webp&s=3a0ee378a3826494f270886446da42642d3fc373', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/b4eLENYDkM4ATVJwd9gq5Pgkkg9SkA6gDUf2VJIx5zw.jpg?width=216&crop=smart&auto=webp&s=a728ad62ca2561668c98230919c775a8befafcb0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/b4eLENYDkM4ATVJwd9gq5Pgkkg9SkA6gDUf2VJIx5zw.jpg?width=320&crop=smart&auto=webp&s=195ce50ee8f178e73b1e2e1dfed2551f1a4cf83f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/b4eLENYDkM4ATVJwd9gq5Pgkkg9SkA6gDUf2VJIx5zw.jpg?width=640&crop=smart&auto=webp&s=a86d8f41c736d9e720822df4bb7f3b8d702f8552', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/b4eLENYDkM4ATVJwd9gq5Pgkkg9SkA6gDUf2VJIx5zw.jpg?width=960&crop=smart&auto=webp&s=29ac666e7897446f915e019121385a798ac9d035', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/b4eLENYDkM4ATVJwd9gq5Pgkkg9SkA6gDUf2VJIx5zw.jpg?width=1080&crop=smart&auto=webp&s=a3da4761af09acba93796daf4107f7adecedf036', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/b4eLENYDkM4ATVJwd9gq5Pgkkg9SkA6gDUf2VJIx5zw.jpg?auto=webp&s=d3491e26a543221d0c20bde8078f687253077699', 'width': 1200}, 'variants': {}}]} | ||
Jan - MacBook Pro M1 32GB | 4 | I've been trying a bunch of models in Jan and have noticed that when I set ngl = 1 (or any number, for that matter), performance is worse than when ngl isn't part of my model.json at all. For example, running Mistral 7B Instruct Q4 gives me around 20 t/s without ngl, but with ngl I'm getting around 3-5 t/s. Am I doing something wrong here? Has anyone else experienced this issue? | 2024-01-31T19:42:41 | https://www.reddit.com/r/LocalLLaMA/comments/1afqmco/jan_macbook_pro_m1_32gb/ | jaxupaxu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afqmco | false | null | t3_1afqmco | /r/LocalLLaMA/comments/1afqmco/jan_macbook_pro_m1_32gb/ | false | false | self | 4 | null |
Cerebras Systems and Barcelona Supercomputing Center Train Multilingual Spanish Catalan English LLM | 12 | 2024-01-31T19:16:03 | https://www.cerebras.net/press-release/cerebras-systems-and-barcelona-supercomputing-center-train-industry-leading-multilingual-spanish-catalan-english-llm | maroule | cerebras.net | 1970-01-01T00:00:00 | 0 | {} | 1afpyxy | false | null | t3_1afpyxy | /r/LocalLLaMA/comments/1afpyxy/cerebras_systems_and_barcelona_supercomputing/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'juF8xnwyg-1tZjDUsNrOv1I-qGNe3ZghYin9qfDXQ4Y', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/xNr2HqEr4Zkor0jpmh4NfjCZNBmX20D0NcdAnlMr7Qo.jpg?width=108&crop=smart&auto=webp&s=ae968028e97a744a6560b813017df9036e45c917', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/xNr2HqEr4Zkor0jpmh4NfjCZNBmX20D0NcdAnlMr7Qo.jpg?width=216&crop=smart&auto=webp&s=d227419630509d3dcf13bd48c8aee9c1b444bd7f', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/xNr2HqEr4Zkor0jpmh4NfjCZNBmX20D0NcdAnlMr7Qo.jpg?width=320&crop=smart&auto=webp&s=0d3a658721084fe21472e356c3065f3077cfd0e5', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/xNr2HqEr4Zkor0jpmh4NfjCZNBmX20D0NcdAnlMr7Qo.jpg?width=640&crop=smart&auto=webp&s=759fd32a6b39e71c4702a843737f0e6e68f0c76a', 'width': 640}], 'source': {'height': 715, 'url': 'https://external-preview.redd.it/xNr2HqEr4Zkor0jpmh4NfjCZNBmX20D0NcdAnlMr7Qo.jpg?auto=webp&s=509f917aa8c54a48d427d22216f737d6337d10c7', 'width': 714}, 'variants': {}}]} | ||
This is the reason for uncensored models | 271 | You can't even discuss with chatgpt about the interpretation of a song playing in the radio, without your account probably flagged as sex offender... | 2024-01-31T19:15:16 | maxigs0 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1afpyan | false | null | t3_1afpyan | /r/LocalLLaMA/comments/1afpyan/this_is_the_reason_for_uncensored_models/ | false | false | 271 | {'enabled': True, 'images': [{'id': 'NnS95gqauKTP3H3knLfhdYWPxAWamSoLQZtzlYoTwqM', 'resolutions': [{'height': 136, 'url': 'https://preview.redd.it/7n2ph755stfc1.png?width=108&crop=smart&auto=webp&s=5f170389e6e401af3481af47aa508736f0cbdf5b', 'width': 108}, {'height': 273, 'url': 'https://preview.redd.it/7n2ph755stfc1.png?width=216&crop=smart&auto=webp&s=73ef6f0fb514ae8e05e5c1f67f5fe43143f1cf6c', 'width': 216}, {'height': 405, 'url': 'https://preview.redd.it/7n2ph755stfc1.png?width=320&crop=smart&auto=webp&s=667969cf9fb60f6a1107695b44c99bcf11cb097e', 'width': 320}, {'height': 811, 'url': 'https://preview.redd.it/7n2ph755stfc1.png?width=640&crop=smart&auto=webp&s=fe2ce195fb774ef739a7014b9401690edb1a99db', 'width': 640}, {'height': 1216, 'url': 'https://preview.redd.it/7n2ph755stfc1.png?width=960&crop=smart&auto=webp&s=513cfca54787b786034ebc3e180e340d890e6e76', 'width': 960}, {'height': 1368, 'url': 'https://preview.redd.it/7n2ph755stfc1.png?width=1080&crop=smart&auto=webp&s=46e391568b68d2c4e92bdb5c485dd1f00d0d4e69', 'width': 1080}], 'source': {'height': 1825, 'url': 'https://preview.redd.it/7n2ph755stfc1.png?auto=webp&s=42c76cd786fdb0c34f14f0111dbbe7d69c436474', 'width': 1440}, 'variants': {}}]} | ||
Tools to evaluate quantized models on benchmarks? | 1 | Hiya, I'm working with some quantized models, and I need to compare their performance before and after an algorithm has been applied to them. I would like to use something like EleutherAI's LM Evaluation Harness or HELM, but I've had a hard time loading the models for benchmarking, since they carry a quantization adapter when pushed to Hugging Face, which has thrown errors for me in the past when trying to use HELM.
Am I just being silly? Or should I just download the datasets and set up my own pipeline to evaluate them? | 2024-01-31T18:51:19 | https://www.reddit.com/r/LocalLLaMA/comments/1afpcfw/tools_to_evaluate_quantized_models_on_benchmarks/ | TypicalLab390 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afpcfw | false | null | t3_1afpcfw | /r/LocalLLaMA/comments/1afpcfw/tools_to_evaluate_quantized_models_on_benchmarks/ | false | false | self | 1 | null |
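For the Evaluation Harness route, recent versions (0.4+) expose a Python entry point that can load a base model plus a PEFT-style adapter directly, which often sidesteps the adapter-loading errors. A minimal sketch, assuming lm-eval >= 0.4 and that the adapter is a PEFT repo; the model and adapter IDs are placeholders, and passing the adapter via the peft= model arg is how the harness' HF backend handles this as far as I know:

```python
import lm_eval

# Evaluate a base model with a quantization/PEFT adapter applied on top.
# Model and adapter names below are placeholders, not real repos.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=base-org/base-model,"
        "peft=your-org/quantized-adapter,"
        "load_in_4bit=True"
    ),
    tasks=["hellaswag", "arc_easy"],
    num_fewshot=0,
)

print(results["results"])   # per-task metrics for the before/after comparison
```

Running the same call once on the plain base model and once with the adapter gives the before/after numbers without building a custom evaluation pipeline.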
Mixtral 8x7b Instruct with recent dataset? | 6 | The leaderboard here: [LMSys Chatbot Arena Leaderboard - a Hugging Face Space by lmsys](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard) shows Mixtral 8x7b with a knowledge cutoff of late 2023. When I use the arena demo, it does know recent things up to that date. However, I can't seem to find a downloadable GGUF that has information beyond 2021... I've tried Noromaid v0.4 finetunes, Air Striker, Dolphin 2.7... they all seem to have a much earlier cutoff. Any help is appreciated, thank you! | 2024-01-31T17:34:34 | https://www.reddit.com/r/LocalLLaMA/comments/1afnfoj/mixtral_8x7b_instruct_with_recent_dataset/ | phr00t_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afnfoj | false | null | t3_1afnfoj | /r/LocalLLaMA/comments/1afnfoj/mixtral_8x7b_instruct_with_recent_dataset/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'tu38AwtsUEqyGIPo-RS3iFtvVhQ6LPaW-v5gU23jMqc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RS7d3722KVL1OTw1ewhRT89FTFf93HJZCAdVoAQjeCw.jpg?width=108&crop=smart&auto=webp&s=724cf25bfd7e21d9fe860dd0f67a01017ab321c6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RS7d3722KVL1OTw1ewhRT89FTFf93HJZCAdVoAQjeCw.jpg?width=216&crop=smart&auto=webp&s=1a1411da80205319c4cb0c454607bc3b20c80caa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RS7d3722KVL1OTw1ewhRT89FTFf93HJZCAdVoAQjeCw.jpg?width=320&crop=smart&auto=webp&s=91d1b66c52b1e380723266dde26c63eabd92b845', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RS7d3722KVL1OTw1ewhRT89FTFf93HJZCAdVoAQjeCw.jpg?width=640&crop=smart&auto=webp&s=b42d76e84ea0d50cfac92b9536fffc61bd053777', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RS7d3722KVL1OTw1ewhRT89FTFf93HJZCAdVoAQjeCw.jpg?width=960&crop=smart&auto=webp&s=8098d35322584922845ead6e0f006d89bebbad44', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RS7d3722KVL1OTw1ewhRT89FTFf93HJZCAdVoAQjeCw.jpg?width=1080&crop=smart&auto=webp&s=467490538e68399c3f6ddadf5f29b81dffd1a825', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RS7d3722KVL1OTw1ewhRT89FTFf93HJZCAdVoAQjeCw.jpg?auto=webp&s=185f36c2afc442026fde2d83999353603ea11e3f', 'width': 1200}, 'variants': {}}]} |
Weak gpu, middling vram. What's likely better 13B-4.0bpw or 7b-8.0bpw? | 22 | Weak gpu, middling vram. What's likely better 13B-4.0bpw or 7b-8.0bpw? Assuming they're magically equally well made/trained/etc
I've been dabbling with both (recently released versions of models most likely based on Mistral; I've tried like a dozen, not worth listing specifically here). I'm failing to draw my own subjective conclusions, but I wonder: is there an objective measure of which is better, or a group consensus? | 2024-01-31T17:28:32 | https://www.reddit.com/r/LocalLLaMA/comments/1afna9u/weak_gpu_middling_vram_whats_likely_better/ | namad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afna9u | false | null | t3_1afna9u | /r/LocalLLaMA/comments/1afna9u/weak_gpu_middling_vram_whats_likely_better/ | false | false | self | 22 | null |
Quantizing cross encoders | 1 | We are moving our RAG into production.
We are using a quantized vectorizer; has anybody used quantized cross encoders?
Are there any good open-source cross encoder quantizers? | 2024-01-31T16:49:28 | https://www.reddit.com/r/LocalLLaMA/comments/1afmb6e/quantizing_cross_encoders/ | Spiritual-Rub925 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afmb6e | false | null | t3_1afmb6e | /r/LocalLLaMA/comments/1afmb6e/quantizing_cross_encoders/ | false | false | self | 1 | null |
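If the goal is just a smaller, faster reranker on CPU, one simple option is PyTorch dynamic int8 quantization applied to the cross encoder's underlying model. A minimal sketch, assuming a sentence-transformers CrossEncoder and CPU inference; the model name is just a common public reranker, not anything from the original question:

```python
import torch
from sentence_transformers import CrossEncoder

# Load a reranker-style cross encoder (placeholder choice of model).
ce = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2", device="cpu")

# Dynamically quantize the linear layers to int8 for CPU inference.
ce.model = torch.quantization.quantize_dynamic(
    ce.model, {torch.nn.Linear}, dtype=torch.qint8
)

scores = ce.predict([
    ("what is dynamic quantization?", "Dynamic quantization converts weights to int8 at load time."),
    ("what is dynamic quantization?", "Barcelona is a city in Spain."),
])
print(scores)   # the relevant passage should score higher
```

For GPU serving, an ONNX/optimum-style export plus int8 quantization is the more common route, but the dynamic-quantization sketch above is the quickest way to check the accuracy/speed trade-off.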
Arthur Mensch confirms that Miqu is an early Mistral model | 270 | 2024-01-31T16:47:42 | https://twitter.com/arthurmensch/status/1752734898476007821 | Jean-Porte | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 1afm9im | false | {'oembed': {'author_name': 'Arthur Mensch', 'author_url': 'https://twitter.com/arthurmensch', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">An over-enthusiastic employee of one of our early access customers leaked a quantised (and watermarked) version of an early model we trained and distributed quite openly.<br><br>To quickly start working with a few selected customers, we retrained this model from Llama 2 the minute we…</p>— Arthur Mensch (@arthurmensch) <a href="https://twitter.com/arthurmensch/status/1752734898476007821?ref_src=twsrc%5Etfw">January 31, 2024</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/arthurmensch/status/1752734898476007821', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_1afm9im | /r/LocalLLaMA/comments/1afm9im/arthur_mensch_confirms_that_miqu_is_an_early/ | false | false | 270 | {'enabled': False, 'images': [{'id': 'IiJiFlTViX12HFlrM5q0VzRx5H8mF60G_H9r2vZmX5c', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XqXlFlMBO5OBHkspjAAfh4k3t6r_qV8rQaXwNGZJ2ow.jpg?width=108&crop=smart&auto=webp&s=fcc72e7270a8c3930a46724f05502f9fdece8170', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/XqXlFlMBO5OBHkspjAAfh4k3t6r_qV8rQaXwNGZJ2ow.jpg?auto=webp&s=fb3b5edac0c7bdb700a2d2e78b3d003b4eee515e', 'width': 140}, 'variants': {}}]} | ||
240 tokens/s achieved by Groq's custom chips on Lama 2 Chat (70B) | 201 | 2024-01-31T16:47:28 | https://twitter.com/ArtificialAnlys/status/1752719288946053430 | speakerknock | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 1afm9af | false | {'oembed': {'author_name': 'ArtificialAnalysis.ai', 'author_url': 'https://twitter.com/ArtificialAnlys', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">240 tokens/s achieved by <a href="https://twitter.com/GroqInc?ref_src=twsrc%5Etfw">@GroqInc</a>'s custom chips on Lama 2 Chat (70B)<br><br>Artificial Analysis has independently benchmarked Groq’s API and now showcases Groq’s latency, throughput & pricing on <a href="https://t.co/jq2TzJMrHT">https://t.co/jq2TzJMrHT</a><br><br>This represents a milestone for the application of custom silicon… <a href="https://t.co/yDwLGdaE4B">pic.twitter.com/yDwLGdaE4B</a></p>— ArtificialAnalysis.ai (@ArtificialAnlys) <a href="https://twitter.com/ArtificialAnlys/status/1752719288946053430?ref_src=twsrc%5Etfw">January 31, 2024</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/ArtificialAnlys/status/1752719288946053430', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_1afm9af | /r/LocalLLaMA/comments/1afm9af/240_tokenss_achieved_by_groqs_custom_chips_on/ | false | false | 201 | {'enabled': False, 'images': [{'id': 'BRhurXMD1G1MlmtiDVC2_L9YdZMiNzJH8qCY2xa1KJ8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MRCSbjUHZevQwjjZL6lLbf1Vf8R41K5t4ecI2SwmVwk.jpg?width=108&crop=smart&auto=webp&s=9851a3f2b56d8a7a359cf5a86eed4d49f4339add', 'width': 108}], 'source': {'height': 71, 'url': 'https://external-preview.redd.it/MRCSbjUHZevQwjjZL6lLbf1Vf8R41K5t4ecI2SwmVwk.jpg?auto=webp&s=b3de679444336ac157bdfe550ed84b9b5d1f3c33', 'width': 140}, 'variants': {}}]} | ||
Training set consisting of JSON in JSON? | 1 | I have a model and am trying to train it to emit JSON.
Just how does one create the training set?
I've tried to escape the JSON snippets ("\\"), but it is daunting. Also, reducing the snippets to a single line is a pain, even in code.
​ | 2024-01-31T15:54:51 | https://www.reddit.com/r/LocalLLaMA/comments/1afl0vv/training_set_consisting_of_json_in_json/ | IndustryNext7456 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afl0vv | false | null | t3_1afl0vv | /r/LocalLLaMA/comments/1afl0vv/training_set_consisting_of_json_in_json/ | false | false | self | 1 | null |
RTX 4060 Ti 16GB vs RTX 4070 super 12GB performance in deep learning | 1 | On paper the RTX 4070 Super is almost 50% faster; how much faster will it be in deep learning? Can the RTX 4060 Ti reach almost the same performance because of its extra 4 GB of VRAM? | 2024-01-31T15:39:02 | https://www.reddit.com/r/LocalLLaMA/comments/1afknre/rtx_4060_ti_16gb_vs_rtx_4070_super_12gb/ | Healthy_Pay4529 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1afknre | false | null | t3_1afknre | /r/LocalLLaMA/comments/1afknre/rtx_4060_ti_16gb_vs_rtx_4070_super_12gb/ | false | false | self | 1 | null |