| column | dtype | min | max |
|---|---|---|---|
| title | stringlengths | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | stringlengths | 0 | 41.5k |
| created | timestamp[ns]date | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | stringlengths | 0 | 878 |
| author | stringlengths | 3 | 20 |
| domain | stringlengths | 0 | 82 |
| edited | timestamp[ns]date | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | stringclasses | 7 values | |
| id | stringlengths | 7 | 7 |
| locked | bool | 2 classes | |
| media | stringlengths | 646 | 1.8k |
| name | stringlengths | 10 | 10 |
| permalink | stringlengths | 33 | 82 |
| spoiler | bool | 2 classes | |
| stickied | bool | 2 classes | |
| thumbnail | stringlengths | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | stringlengths | 301 | 5.01k |
FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores
19
**Paper**: [https://arxiv.org/abs/2311.05908](https://arxiv.org/abs/2311.05908)

**Code**: [https://github.com/HazyResearch/flash-fft-conv](https://github.com/HazyResearch/flash-fft-conv)

**Blog post**: [https://hazyresearch.stanford.edu/blog/2023-11-13-flashfftconv](https://hazyresearch.stanford.edu/blog/2023-11-13-flashfftconv)

**Abstract**:

>Convolution models with long filters have demonstrated state-of-the-art reasoning abilities in many long-sequence tasks but lag behind the most optimized Transformers in wall-clock time. A major bottleneck is the Fast Fourier Transform (FFT)--which allows long convolutions to run in *O*(*N* log *N*) time in sequence length *N* but has poor hardware utilization. In this paper, we study how to optimize the FFT convolution. We find two key bottlenecks: the FFT does not effectively use specialized matrix multiply units, and it incurs expensive I/O between layers of the memory hierarchy. In response, we propose **FlashFFTConv**. FlashFFTConv uses a matrix decomposition that computes the FFT using matrix multiply units and enables kernel fusion for long sequences, reducing I/O. We also present two sparse convolution algorithms--1) partial convolutions and 2) frequency-sparse convolutions--which can be implemented simply by skipping blocks in the matrix decomposition, enabling further opportunities for memory and compute savings. FlashFFTConv speeds up exact FFT convolutions by up to 7.93× over PyTorch and achieves up to 4.4× speedup end-to-end. Given the same compute budget, FlashFFTConv allows Hyena-GPT-s to achieve 2.3 points better perplexity on the PILE and M2-BERT-base to achieve 3.3 points higher GLUE score--matching models with twice the parameter count. FlashFFTConv also achieves 96.1% accuracy on Path-512, a high-resolution vision task where no model had previously achieved better than 50%.
>Furthermore, partial convolutions enable longer-sequence models--yielding the first DNA model that can process the longest human genes (2.3M base pairs)--and frequency-sparse convolutions speed up pretrained models while maintaining or improving model quality.

https://preview.redd.it/vky1af64n12c1.png?width=1159&format=png&auto=webp&s=e0e9c8fa1c75f509ed250d0ae285a31b775e6ff0
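The FFT convolution the paper optimizes can be sketched in a few lines of NumPy. This is only the O(N log N) baseline algorithm, not FlashFFTConv's fused tensor-core kernel; the paper's contribution is making this computation use matrix multiply units and avoid memory-hierarchy I/O.

```python
import numpy as np

def fft_conv(u, k):
    """Long convolution y = u * k in O(N log N) via the FFT.

    Zero-pads to 2N so the circular convolution matches the linear one,
    multiplies pointwise in the frequency domain, then truncates to N.
    """
    n = len(u)
    size = 2 * n
    return np.fft.irfft(np.fft.rfft(u, n=size) * np.fft.rfft(k, n=size),
                        n=size)[:n]

rng = np.random.default_rng(0)
u, k = rng.standard_normal(1024), rng.standard_normal(1024)
# matches the direct O(N^2) convolution
assert np.allclose(fft_conv(u, k), np.convolve(u, k)[:1024])
```

The two `rfft` calls and the `irfft` are exactly the stages the paper decomposes into matrix multiplies so they can run on tensor cores.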
2023-11-23T06:44:20
https://www.reddit.com/r/LocalLLaMA/comments/181v3df/flashfftconv_efficient_convolutions_for_long/
APaperADay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181v3df
false
null
t3_181v3df
/r/LocalLLaMA/comments/181v3df/flashfftconv_efficient_convolutions_for_long/
false
false
https://a.thumbs.redditm…uoczrI6XDH78.jpg
19
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}
What are some good tests to tell if the voice on a phone call is legit?
1
So with text2speech and speech2speech models coming along, probably within a few years there will be phone calls that sound like a real voice, but for whatever reason they're not legit. E.g. a phone call that appears to be a relative's phone number and also sounds like their voice. What would be some good tricks that you could ask a person on a call to do, to check there's not an ML model?

For example, if someone was using img2img on a video call, things you could ask them are:

- For the person to spin around. (model has a hard time generating different face/head angles)
- To rotate the camera around 360 degrees. (e.g. model doesn't work when camera is upside down)
- To point the camera at a mirror. (model doesn't generate mirrored output for reflection in mirror)

So those are examples for a video call. What about for a phone call?
2023-11-23T06:13:59
https://www.reddit.com/r/LocalLLaMA/comments/181uls3/what_are_some_good_tests_to_tell_if_the_voice_on/
friendlyshrek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181uls3
false
null
t3_181uls3
/r/LocalLLaMA/comments/181uls3/what_are_some_good_tests_to_tell_if_the_voice_on/
false
false
self
1
null
Need help with finding the right opensource LLM
2
I want to work on a personal project which requires me to use a large language model and fine-tune it over my own dataset. I want to make a model that can write fiction, so the dataset I'll later fine-tune it on will be fiction. I have decent hardware to run a model with about 70 billion parameters. (I know Llama 2 is an option, but I heard it's more cumbersome to fine-tune than other ones out there.) Have any suggestions?
2023-11-23T06:05:32
https://www.reddit.com/r/LocalLLaMA/comments/181ugzj/need_help_with_finding_the_right_opensource_llm/
Nice-Job67
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181ugzj
false
null
t3_181ugzj
/r/LocalLLaMA/comments/181ugzj/need_help_with_finding_the_right_opensource_llm/
false
false
self
2
null
Maximum input length for Llama 2 model
1
[removed]
2023-11-23T05:24:42
https://www.reddit.com/r/LocalLLaMA/comments/181tsvp/maximum_input_length_for_llma2_model/
ianuvrat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181tsvp
false
null
t3_181tsvp
/r/LocalLLaMA/comments/181tsvp/maximum_input_length_for_llma2_model/
false
false
self
1
null
A. Karpathy: Introduction to Large Language Models
1
[removed]
2023-11-23T05:22:02
https://www.reddit.com/r/LocalLLaMA/comments/181trcv/a_karpathy_introduction_to_large_language_models/
DeGreiff
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181trcv
false
null
t3_181trcv
/r/LocalLLaMA/comments/181trcv/a_karpathy_introduction_to_large_language_models/
false
false
self
1
{'enabled': False, 'images': [{'id': 'xjR76G-Lhr9cGX65NrAIUl3ZPXKTyY8Zglr4yV_z0KQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/lEbKDTjwjTwknEqvJx7kd7-m-Mua9v70-U5TOmeHzc0.jpg?width=108&crop=smart&auto=webp&s=492404059a546e004187883e6126a73dea923976', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/lEbKDTjwjTwknEqvJx7kd7-m-Mua9v70-U5TOmeHzc0.jpg?width=216&crop=smart&auto=webp&s=d0958ee5946edf98e62d0be8ec5fae0354e870a5', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/lEbKDTjwjTwknEqvJx7kd7-m-Mua9v70-U5TOmeHzc0.jpg?width=320&crop=smart&auto=webp&s=601e84d164bb48be9aab92890535576669eda0b7', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/lEbKDTjwjTwknEqvJx7kd7-m-Mua9v70-U5TOmeHzc0.jpg?auto=webp&s=8ef437d403976fff3e16d3ec106a67f14cb67134', 'width': 480}, 'variants': {}}]}
LLM.js v1.0.2 is out with support for GGUF/GGML format and Model Playground
13
🚀 Exciting News! 🚀 Thrilled to announce the release of **LLM.js** v1.0.2! 🎉✨

🌐 **LLM.js** lets you play around with language models right in your browser, thanks to **WebAssembly**. In this latest release, here's what's in store:

1️⃣ **Expanded Format Support**: Now **GGUF/GGML** formats are fully supported, thanks to the latest llama.cpp patch! 🦙 This opens up doors for various models like Mistral, Llama2, Bloom, and more!

2️⃣ **Playground** Fun: Explore and test different models seamlessly in the playground demo! 🎮💬 Even from **HF**. Feel free to check it out and share your thoughts! 🚀

LLM.js Playground: https://rahuldshetty.github.io/ggml.js-examples/playground.html

LLM.js: https://rahuldshetty.github.io/llm.js/

https://i.redd.it/ge16xwia512c1.gif
2023-11-23T05:07:34
https://www.reddit.com/r/LocalLLaMA/comments/181tisg/llmjs_v102_is_out_with_support_for_ggufggml/
AnonymousD3vil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181tisg
false
null
t3_181tisg
/r/LocalLLaMA/comments/181tisg/llmjs_v102_is_out_with_support_for_ggufggml/
false
false
https://b.thumbs.redditm…os1PPB7cqLtE.jpg
13
null
Prompt to create GPTs
1
You are a GPT whose job it is to help users create their own chatbots. In particular, you need to respond succinctly in a friendly tone, write a title, system prompt, and a very short example user input for the chatbot. Make sure each part is included.

Sure, I'd be happy to help you build a GPT! I'm generating a title, system prompt, and an example input. How do they sound? Feel free to give me feedback!

Title: GPT Guide

System prompt: Your job as a GPT is to provide helpful tips and advice on how to use GPT technology. Be informative but concise.

Example input: How to integrate GPT into my website

Here's another example. If a user types, "Write a 1000-word article on the benefits of using GPT in marketing", respond:

Sure, I'd be happy to help you write an article! I'm generating a title, system prompt, and an example input. How do they sound? Feel free to give me feedback!

Title: GPT Marketing Mastery

System prompt: As a GPT, your primary function is to provide insightful and informative content on the benefits of using GPT in marketing. Keep it informative and engaging, but also make sure your content is accurate and up-to-date.

Example input: How to use GPT for content creation
2023-11-23T04:55:15
https://www.reddit.com/r/LocalLLaMA/comments/181tb82/prompt_to_create_gpts/
herozorro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181tb82
false
null
t3_181tb82
/r/LocalLLaMA/comments/181tb82/prompt_to_create_gpts/
false
false
self
1
null
Any Good Speech-to-Speech Voice Changer Models?
17
ElevenLabs just released their speech to speech thing and it’s really cool: https://elevenlabs.io/voice-changer Now I’m wondering what’s the best similar “voice changer” or speech to speech model that I can run locally? It doesn’t have to be in real time, I plan on using it to narrate audio books and similar. Thanks!
2023-11-23T04:10:43
https://www.reddit.com/r/LocalLLaMA/comments/181skiw/any_good_speechtospeech_voice_changer_models/
HumanityFirstTheory
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181skiw
false
null
t3_181skiw
/r/LocalLLaMA/comments/181skiw/any_good_speechtospeech_voice_changer_models/
false
false
self
17
null
I need help with my P40 I bought to run inference
1
I have a Dell PowerEdge R730 with 2 CPUs and 2 1100W PSUs, and I got the proper 8-pin EPS GPU cable, which I verified with a multimeter. It is plugged in the right way and I do get good power on the yellow wires. I installed every driver I found and did the registry shenanigans to have it in WDDM mode etc., but it always fails with:

This device cannot start. (Code 10) Insufficient system resources exist to complete the API.

The server is running Windows Server 2022. I used to run a GTX 1070 Ti in there (with the correct cable, which is different) and it worked A1. Please HALP!
2023-11-23T03:44:50
https://www.reddit.com/r/LocalLLaMA/comments/181s40s/i_need_help_with_my_p40_i_bought_to_run_inference/
DrVonSinistro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181s40s
false
null
t3_181s40s
/r/LocalLLaMA/comments/181s40s/i_need_help_with_my_p40_i_bought_to_run_inference/
false
false
self
1
null
New Karpathy tutorial just dropped
1
[removed]
2023-11-23T03:09:42
https://www.reddit.com/r/LocalLLaMA/comments/181rhdr/new_karpathy_tutorial_just_dropped/
ASD_Project
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181rhdr
false
null
t3_181rhdr
/r/LocalLLaMA/comments/181rhdr/new_karpathy_tutorial_just_dropped/
false
false
self
1
{'enabled': False, 'images': [{'id': 'xjR76G-Lhr9cGX65NrAIUl3ZPXKTyY8Zglr4yV_z0KQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/lEbKDTjwjTwknEqvJx7kd7-m-Mua9v70-U5TOmeHzc0.jpg?width=108&crop=smart&auto=webp&s=492404059a546e004187883e6126a73dea923976', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/lEbKDTjwjTwknEqvJx7kd7-m-Mua9v70-U5TOmeHzc0.jpg?width=216&crop=smart&auto=webp&s=d0958ee5946edf98e62d0be8ec5fae0354e870a5', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/lEbKDTjwjTwknEqvJx7kd7-m-Mua9v70-U5TOmeHzc0.jpg?width=320&crop=smart&auto=webp&s=601e84d164bb48be9aab92890535576669eda0b7', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/lEbKDTjwjTwknEqvJx7kd7-m-Mua9v70-U5TOmeHzc0.jpg?auto=webp&s=8ef437d403976fff3e16d3ec106a67f14cb67134', 'width': 480}, 'variants': {}}]}
Hugging Face Removes Singing AI Models of Xi Jinping But Not of Biden
325
2023-11-23T03:04:37
https://www.404media.co/hugging-face-removes-singing-ai-models-of-xi-but-not-of-biden/
Merchant_Lawrence
404media.co
1970-01-01T00:00:00
0
{}
181rdx3
false
null
t3_181rdx3
/r/LocalLLaMA/comments/181rdx3/hugging_face_removes_singing_ai_models_of_xi/
false
false
https://b.thumbs.redditm…zX-PPAmVoxPI.jpg
325
{'enabled': False, 'images': [{'id': '39UkKvxDztZ4DlgRPndfQMepqR5cLM0-u2ZmLf6husw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Yne6dJLfSnATEUSkcOXSk7wj7WdU13fuJEGR2iNwkpU.jpg?width=108&crop=smart&auto=webp&s=1e9f22c5d5af4f3eea3937d08d55243e02544135', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Yne6dJLfSnATEUSkcOXSk7wj7WdU13fuJEGR2iNwkpU.jpg?width=216&crop=smart&auto=webp&s=a43b73bfef0509aec0c56fe0cd47b47f4558ce5e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Yne6dJLfSnATEUSkcOXSk7wj7WdU13fuJEGR2iNwkpU.jpg?width=320&crop=smart&auto=webp&s=4a7d7c348e9e004288d6c37becc67621076e67eb', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Yne6dJLfSnATEUSkcOXSk7wj7WdU13fuJEGR2iNwkpU.jpg?width=640&crop=smart&auto=webp&s=8d4467326c54cdcd7445385f748f4b80ea5f5945', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Yne6dJLfSnATEUSkcOXSk7wj7WdU13fuJEGR2iNwkpU.jpg?width=960&crop=smart&auto=webp&s=4055ff6de501a27f6d30ff67f794aa2d992b939b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Yne6dJLfSnATEUSkcOXSk7wj7WdU13fuJEGR2iNwkpU.jpg?width=1080&crop=smart&auto=webp&s=ae41a6ac0d38f5e2ab302970c0c049639d461f98', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Yne6dJLfSnATEUSkcOXSk7wj7WdU13fuJEGR2iNwkpU.jpg?auto=webp&s=eaf93a4eae394d3bb7ab438b35ed6a6b867cf024', 'width': 1920}, 'variants': {}}]}
Options for fine-tuning, advice please
1
I'm planning to fine-tune a Mistral model with my own dataset (full fine-tune, not LoRAs). The dataset is not that large, around 120 MB in JSONL format. My questions are:

1. Will I be able to fine-tune the model with 4 cards of 40G A100?
2. If not, is using RunPod the easiest approach?
3. I'm trying to instill knowledge in a certain language, for a field where it does not have sufficient knowledge in said language. Is fine-tuning my only option? RAG is not viable in my case.

Thanks in advance!
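For question 1, a rough back-of-the-envelope helps. The sketch below uses illustrative assumptions only (bf16 weights and gradients plus fp32 AdamW moments, ignoring activations and framework overhead):

```python
def full_finetune_memory_gb(params_billion,
                            bytes_weight=2,   # bf16 weights
                            bytes_grad=2,     # bf16 gradients
                            bytes_optim=8):   # fp32 AdamW m and v states
    """Rough lower bound on total GPU memory (GB) for a full fine-tune:
    weights + gradients + optimizer states, before activations."""
    return params_billion * (bytes_weight + bytes_grad + bytes_optim)

# A ~7B-parameter model under these assumptions:
print(full_finetune_memory_gb(7))  # -> 84 (GB)
```

So the model/optimizer states alone are on the order of 84 GB against 160 GB across 4×A100 40G; that fits if the states are sharded (e.g. via DeepSpeed ZeRO or FSDP), with activations and batch size consuming the remainder.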
2023-11-23T01:25:10
https://www.reddit.com/r/LocalLLaMA/comments/181pgi4/options_for_finetuning_advice_please/
manjimin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181pgi4
false
null
t3_181pgi4
/r/LocalLLaMA/comments/181pgi4/options_for_finetuning_advice_please/
false
false
self
1
null
An Open Source version of OpenAI Assistants API
7
2023-11-23T01:19:04
https://twitter.com/louis030195/status/1727495156918861836
louis3195
twitter.com
1970-01-01T00:00:00
0
{}
181pc6z
false
{'oembed': {'author_name': 'louis030195', 'author_url': 'https://twitter.com/louis030195', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">i built an open source version of OpenAI Assistants API <br><br>it&#39;s compatible with any LLM &amp; embeddings model and u can swap it out for OpenAI assistants API with just one line of code.<br><br>if you want this let me know 👇 <a href="https://t.co/UqzJWeppa7">pic.twitter.com/UqzJWeppa7</a></p>&mdash; louis030195 (@louis030195) <a href="https://twitter.com/louis030195/status/1727495156918861836?ref_src=twsrc%5Etfw">November 23, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/louis030195/status/1727495156918861836', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_181pc6z
/r/LocalLLaMA/comments/181pc6z/an_open_source_version_of_openai_assistants_api/
false
false
https://b.thumbs.redditm…rGmW8MtI8cRc.jpg
7
{'enabled': False, 'images': [{'id': 'yux8UNL0StrlwQjSsUu8LgWS9mov53Dz0Wrw2dvg8lo', 'resolutions': [{'height': 84, 'url': 'https://external-preview.redd.it/F0HKwwGz3SmBJ-jwk1o8ACXaYt2g5b7t901DuPVtYhY.jpg?width=108&crop=smart&auto=webp&s=05ee9f39ef2e663e62095f26ba8a2d2403d5e238', 'width': 108}], 'source': {'height': 110, 'url': 'https://external-preview.redd.it/F0HKwwGz3SmBJ-jwk1o8ACXaYt2g5b7t901DuPVtYhY.jpg?auto=webp&s=b02ee697e8d2907c2ea8742498f9417c46744791', 'width': 140}, 'variants': {}}]}
oss tts engine?
11
* So, I've been doing all my LLM tinkering on an M1--using [llama.cpp](https://github.com/ggerganov/llama.cpp)/[whisper.cpp](https://github.com/ggerganov/whisper.cpp) to run a basic voice-powered assistant, nothing new at this point.
* Currently adding a visual component to it--[ShareGPT4V-7B](https://huggingface.co/Lin-Chen/ShareGPT4V-7B/tree/main), assuming I manage to convert it to `gguf`. Once that's done I should be able to integrate it with llama.cpp and wire it to a live camera feed--giving it eyes.
* Might even get crazy and throw in a low-level component to handle basic object detection, letting the model know when something is being "shown" to it--other than that it will activate when prompted to do so (text or voice).

The one thing I'm not sure about is how to run a TTS engine locally, like [StyleTTS2-LJSpeech](https://huggingface.co/yl4579/StyleTTS2-LJSpeech/tree/main/Models/LJSpeech). Are there libraries that support TTS models?
2023-11-23T00:56:52
https://www.reddit.com/r/LocalLLaMA/comments/181owlk/oss_tts_engine/
LyPreto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181owlk
false
null
t3_181owlk
/r/LocalLLaMA/comments/181owlk/oss_tts_engine/
false
false
self
11
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]}
How do you get a feeling for price per token?
1
I like to compare prices of models and find it hard to get a feeling for how "expensive" they are. What I'm currently doing is: calculate the price for generating the Bible. I'm interested in how you do it!
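That heuristic is easy to turn into a one-liner. The word count and tokens-per-word ratio below are rough assumptions, and the per-token price is a made-up placeholder, not any vendor's real rate:

```python
def price_for_tokens(n_tokens, usd_per_1k):
    """Cost (USD) of generating n_tokens at a price per 1,000 tokens."""
    return n_tokens / 1000 * usd_per_1k

# The KJV Bible is ~780k words; at ~1.3 tokens per word that's ~1M tokens.
bible_tokens = int(780_000 * 1.3)  # 1_014_000
cost = price_for_tokens(bible_tokens, 0.002)  # ~$2 at this made-up rate
```

Swapping in a provider's actual output-token price gives a single comparable "Bibles per dollar" number across models.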
2023-11-23T00:56:43
https://www.reddit.com/r/LocalLLaMA/comments/181owho/how_do_you_get_a_feeling_for_price_per_token/
freehuntx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181owho
false
null
t3_181owho
/r/LocalLLaMA/comments/181owho/how_do_you_get_a_feeling_for_price_per_token/
false
false
self
1
null
What is Q* and how do we use it?
268
Reuters is reporting that OpenAI achieved an advance with a technique called Q* (pronounced Q-Star). So what is Q*? I asked around the AI researcher campfire and… it's probably Q-learning MCTS, a Monte Carlo tree search reinforcement learning algorithm. Which is right in line with the strategy DeepMind (vaguely) said they're taking with Gemini. Another corroborating data point: an early GPT-4 tester mentioned on a podcast that they are working on ways to trade inference compute for smarter output. MCTS is probably the most promising method in the literature for doing that.

So how do we do it? Well, the closest thing I know of presently available is Weave, within a concise, readable, Apache-licensed MCTS RL fine-tuning package called minihf: https://github.com/JD-P/minihf/blob/main/weave.py

I'll update the post with more info when I have it about Q-learning in particular, and what the deltas are from Weave.
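For reference, Q-learning itself is a decades-old tabular RL update. A toy sketch on a 5-state chain, purely illustrative (nobody outside OpenAI knows what Q* actually is):

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.5, gamma=0.9):
    """Tabular Q-learning on a 1-D chain: actions 0=left / 1=right,
    reward 1 only on reaching the last state. The behavior policy is
    uniformly random; Q-learning is off-policy, so it still finds Q*."""
    random.seed(0)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            a = random.randrange(2)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # the Q-learning update: bootstrap on the best next-state value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning_chain()
assert all(q[s][1] > q[s][0] for s in range(4))  # learned: always go right
```

The speculation is that something like this update, driving an MCTS over reasoning steps instead of a toy chain, is what trades extra inference compute for smarter output.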
2023-11-23T00:37:50
https://i.redd.it/xxx6hnsvtz1c1.jpg
georgejrjrjr
i.redd.it
1970-01-01T00:00:00
0
{}
181oiu2
false
null
t3_181oiu2
/r/LocalLLaMA/comments/181oiu2/what_is_q_and_how_do_we_use_it/
false
false
https://b.thumbs.redditm…hL3XgROErcao.jpg
268
{'enabled': True, 'images': [{'id': 'RjsQGDUWHdQlceypLbUBp_p4HQIolV2BvNadFBqxV1A', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/xxx6hnsvtz1c1.jpg?width=108&crop=smart&auto=webp&s=cfb4d02788f3643a3baa418b7b9d72960535e2e5', 'width': 108}, {'height': 98, 'url': 'https://preview.redd.it/xxx6hnsvtz1c1.jpg?width=216&crop=smart&auto=webp&s=6983dd79ba7953fb757a163196a40912f165bad0', 'width': 216}, {'height': 145, 'url': 'https://preview.redd.it/xxx6hnsvtz1c1.jpg?width=320&crop=smart&auto=webp&s=35206c5f1fe5e84d5bad5085fb24e876b6718ca1', 'width': 320}, {'height': 291, 'url': 'https://preview.redd.it/xxx6hnsvtz1c1.jpg?width=640&crop=smart&auto=webp&s=0e1dfd5f86c0befd8f3a6f7679c1c3ec15fd7ecf', 'width': 640}, {'height': 436, 'url': 'https://preview.redd.it/xxx6hnsvtz1c1.jpg?width=960&crop=smart&auto=webp&s=d16cadbe22b55f487c4e2154852d1d8ac6d7f7d8', 'width': 960}, {'height': 491, 'url': 'https://preview.redd.it/xxx6hnsvtz1c1.jpg?width=1080&crop=smart&auto=webp&s=b43d94480dc76e82024d14bd164eab7ea603e5d0', 'width': 1080}], 'source': {'height': 532, 'url': 'https://preview.redd.it/xxx6hnsvtz1c1.jpg?auto=webp&s=fb1a4ac41fb6bff572dcecc18ee291b1a282e034', 'width': 1170}, 'variants': {}}]}
Need Feedback on API Project
1
[removed]
2023-11-23T00:23:43
https://www.reddit.com/r/LocalLLaMA/comments/181o8gq/need_feedback_on_api_project/
libraryofcelsus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181o8gq
false
null
t3_181o8gq
/r/LocalLLaMA/comments/181o8gq/need_feedback_on_api_project/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Ep4fcJw6hOw9j4pK-LOH37q3SGbUaN8GdNAfTMdshbg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Bp9p4EnZr5nWYU7X5faDBlMg0sDceFih7u3dxaK6xoM.jpg?width=108&crop=smart&auto=webp&s=e2942fd2a38cc60c43b056f649065f2d6105aafd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Bp9p4EnZr5nWYU7X5faDBlMg0sDceFih7u3dxaK6xoM.jpg?width=216&crop=smart&auto=webp&s=fa1dc9f27099d42ebf0e5f21d45ec0e9a04a16c3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Bp9p4EnZr5nWYU7X5faDBlMg0sDceFih7u3dxaK6xoM.jpg?width=320&crop=smart&auto=webp&s=36eed154b4328ee09f299bd69cc3481a7c26c675', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Bp9p4EnZr5nWYU7X5faDBlMg0sDceFih7u3dxaK6xoM.jpg?width=640&crop=smart&auto=webp&s=addd8b319486db6718b259a0fba4106f238f5f42', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Bp9p4EnZr5nWYU7X5faDBlMg0sDceFih7u3dxaK6xoM.jpg?width=960&crop=smart&auto=webp&s=7a71896e491e4b790b287a5be3ad2ffb5109be64', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Bp9p4EnZr5nWYU7X5faDBlMg0sDceFih7u3dxaK6xoM.jpg?width=1080&crop=smart&auto=webp&s=af0b1b98b555b63d5681e1a62d9c86ca5d64dce1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Bp9p4EnZr5nWYU7X5faDBlMg0sDceFih7u3dxaK6xoM.jpg?auto=webp&s=c42cb9df2fd770c64ef03cff4fe97324acf385a1', 'width': 1200}, 'variants': {}}]}
What happened to Pause tokens?
39
A month ago a paper was highlighted in this subreddit that introduced pause tokens to give the models "more time to think", which resulted in better scores on benchmarks. Are there any LLMs that integrate this technique or were there any follow up papers to verify the claim?
2023-11-23T00:02:24
https://www.reddit.com/r/LocalLLaMA/comments/181nsg2/what_happened_to_pause_tokens/
StraightChemistry629
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181nsg2
false
null
t3_181nsg2
/r/LocalLLaMA/comments/181nsg2/what_happened_to_pause_tokens/
false
false
self
39
null
Best Local LLM Backend Server Library?
1
I maintain the [uniteai](https://github.com/freckletonj/uniteai) project, and have implemented a [custom backend for serving](https://github.com/freckletonj/uniteai/blob/master/uniteai/llm_server.py) `transformers`-compatible LLMs. (That file's actually a great ultra-light-weight server if `transformers` satisfies your needs; one clean file). I'd like to add GGML etc, and I haven't reached for `cTransformers`. Instead of building a bespoke server, it'd be nice if a standard was starting to emerge. For instance, many models have custom `instruct templates`, which, if a backend handles all that for me, that'd be nice. I've used `llama.cpp`, but I'm not aware of it handling `instruct templates`. Is that worth building on top of? It's not too llama-only focused? Production worthy? (it bills itself as "mainly for educational purposes"). I've considered `oobabooga`, but I would just like a best-in-class server, without all the other FE fixings and dependencies. Is OpenAI's API signature something people are trying to build against as a standard? Any recommendations?
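On the "OpenAI API signature as emerging standard" question: in practice the chat-completions shape is what a number of local servers (e.g. vLLM's OpenAI-compatible server) imitate, so targeting it is a reasonable bet. A minimal sketch of the request body; the endpoint URL and model name below are placeholders:

```python
import json

def build_chat_request(model, messages, temperature=0.7, stream=False):
    """JSON body for an OpenAI-style POST /v1/chat/completions request."""
    return {"model": model,
            "messages": messages,
            "temperature": temperature,
            "stream": stream}

body = build_chat_request(
    "local-model",
    [{"role": "system", "content": "You are a helpful assistant."},
     {"role": "user", "content": "Hello!"}])
payload = json.dumps(body)
# POST `payload` to e.g. http://localhost:8000/v1/chat/completions;
# clients written for OpenAI then work against a local backend just by
# overriding the base URL.
```

A backend that accepts this body (and handles each model's instruct template server-side) would cover most of the wishlist above.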
2023-11-22T23:13:57
https://www.reddit.com/r/LocalLLaMA/comments/181mqdg/best_local_llm_backend_server_library/
BayesMind
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181mqdg
false
null
t3_181mqdg
/r/LocalLLaMA/comments/181mqdg/best_local_llm_backend_server_library/
false
false
self
1
{'enabled': False, 'images': [{'id': 'w331Pz9YLT5TGAMt6D7JtGkt9SO0gYT3sePGu6_kog4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TUV2wooQa8ylUNvZV_LBbGCx7UxCsjtVIbq9iDn46x0.jpg?width=108&crop=smart&auto=webp&s=b5fb2286e5bf6a8ceadd44afb95fe2f394bafda3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TUV2wooQa8ylUNvZV_LBbGCx7UxCsjtVIbq9iDn46x0.jpg?width=216&crop=smart&auto=webp&s=df58b58e599740936589d22dabbc5db08e958d10', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TUV2wooQa8ylUNvZV_LBbGCx7UxCsjtVIbq9iDn46x0.jpg?width=320&crop=smart&auto=webp&s=e8c509fbb405038f4fbdd80d0779ea46edd659bf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TUV2wooQa8ylUNvZV_LBbGCx7UxCsjtVIbq9iDn46x0.jpg?width=640&crop=smart&auto=webp&s=85697fc74fb5cd5a2f4c82ddcd9ee1fe83362b90', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TUV2wooQa8ylUNvZV_LBbGCx7UxCsjtVIbq9iDn46x0.jpg?width=960&crop=smart&auto=webp&s=fe875df43f3938a18c7db2e5ba1efb8b8cfe5e0c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TUV2wooQa8ylUNvZV_LBbGCx7UxCsjtVIbq9iDn46x0.jpg?width=1080&crop=smart&auto=webp&s=36e2fd7448923a3dfa8e87ebeb68c16ac4f74621', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TUV2wooQa8ylUNvZV_LBbGCx7UxCsjtVIbq9iDn46x0.jpg?auto=webp&s=8b311baf8a2af35ec814a8dbfbcbaa0c5947a142', 'width': 1200}, 'variants': {}}]}
Best model for conversations?
6
Trying to build a conversational AI chat assistant. Most of the models like Vicuna and LLaMA 13B are just inundating me with a bunch of text. I'm trying to emulate a conversation like you'd have with your friend, not just be bombarded with a bunch of text to read and information to process. Which models are good at this that can be fine-tuned for a specific use case?
2023-11-22T23:10:41
https://www.reddit.com/r/LocalLLaMA/comments/181mntn/best_model_for_conversations/
blackstonewine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181mntn
false
null
t3_181mntn
/r/LocalLLaMA/comments/181mntn/best_model_for_conversations/
false
false
self
6
null
Rocket 3B is pretty solid - here is it on Docker w Local LLMs
66
So I tried out one of the quantized models of Rocket 3B. Super impressed! Had a few conversations and they were all great. Here is one where Rocket explains about Docker and local LLMs.

Model used: https://huggingface.co/TheBloke/rocket-3B-GGUF

Quantization: Q4_K_M

___

Please converse with me as if you were a true bro. Use evidence and reasoning to support your arguments, but always talk to me like my life long homie. Feel free to delve into any area of science or philosophy as needed, but always in the tone of my broski. Your name is Snake.

Snake: absolutely bro

You: Snake, let's talk about the potential for Docker to be used within the context of local large language model development and applications

Snake: Yo, what's up man? You know I got your back, right? Let's dive into this topic.

First off, let's define our terms. What exactly do we mean by "local" large language models? Essentially, these are models that are trained on data that is stored locally rather than being uploaded to a cloud-based service. This has some advantages and disadvantages depending on the specific use case. One advantage is that it can provide better privacy since there is no need to upload sensitive information to the cloud. Additionally, it can potentially improve performance because the model doesn't have to transfer data over the internet. However, there are some downsides too. For example, training larger models might require more resources on the local machine, which could slow down development time. Also, there may be limitations on how well the model performs due to the lack of access to all the computing power available in a cloud environment.

Now, let's consider Docker. Docker is an open-source containerization platform that allows developers to package up their applications and dependencies into containers. Containers are essentially isolated environments where an application can run without worrying about other processes interfering with it. This makes it easier to deploy and scale applications across different systems.

So, how does Docker fit into the world of local large language modeling? Well, one way to think about it is through the concept of a "language model container." Essentially, you could create a container that includes everything needed to train a large language model locally, including the pre-trained model itself. The container would also include any necessary libraries or frameworks for running the model and interacting with the data.

The benefits of this approach are clear. By using Docker to package up your large language model, you can make it easy to move the model between different machines without having to worry about installing all the dependencies manually. This can save a lot of time and hassle when working on distributed systems.

There are also some potential challenges though. One issue is that Docker containers can sometimes struggle with handling very large files. If your large language model is several gigabytes in size, for example, you might encounter issues with disk space or performance. To address this problem, you could use techniques like compression or chunking to reduce the size of the container before deploying it. Another challenge is that not every machine will have the same hardware configurations. For example, some machines may have faster CPUs or more RAM than others.
2023-11-22T23:05:49
https://i.redd.it/gx7ne0sgdz1c1.jpeg
Happy_Chicken9835
i.redd.it
1970-01-01T00:00:00
0
{}
181mjyx
false
null
t3_181mjyx
/r/LocalLLaMA/comments/181mjyx/rocket_3b_is_pretty_solid_here_is_it_on_docker_w/
false
false
https://b.thumbs.redditm…rs0sXBMx3qnc.jpg
66
{'enabled': True, 'images': [{'id': 'yGQa55jEmefds4neG-Iik8NsmU6KNfrXMDakAvPh2oM', 'resolutions': [{'height': 118, 'url': 'https://preview.redd.it/gx7ne0sgdz1c1.jpeg?width=108&crop=smart&auto=webp&s=82c5c76ef5f477a4ecfc31ba2f17cda8fdc4ee9a', 'width': 108}, {'height': 236, 'url': 'https://preview.redd.it/gx7ne0sgdz1c1.jpeg?width=216&crop=smart&auto=webp&s=3d8550a6d7ae59e833fefecfa664008ce8ab97aa', 'width': 216}, {'height': 351, 'url': 'https://preview.redd.it/gx7ne0sgdz1c1.jpeg?width=320&crop=smart&auto=webp&s=9194bb1a6cc2959d8fb5e5aa2b8e3686dbdb1443', 'width': 320}, {'height': 702, 'url': 'https://preview.redd.it/gx7ne0sgdz1c1.jpeg?width=640&crop=smart&auto=webp&s=5af23b8df39a976f2b12679230905f024e00a787', 'width': 640}, {'height': 1053, 'url': 'https://preview.redd.it/gx7ne0sgdz1c1.jpeg?width=960&crop=smart&auto=webp&s=1673a38e903d9b408e158e420f056048b8e90b32', 'width': 960}, {'height': 1184, 'url': 'https://preview.redd.it/gx7ne0sgdz1c1.jpeg?width=1080&crop=smart&auto=webp&s=39514f102d13c51cf4fc27cd79f78257de11025b', 'width': 1080}], 'source': {'height': 1514, 'url': 'https://preview.redd.it/gx7ne0sgdz1c1.jpeg?auto=webp&s=394e29be60e3f30c933cfd08b95a663d5ae5212b', 'width': 1380}, 'variants': {}}]}
New to local models, local model api question
1
How can I learn to write conversational API scripts for interacting with the various types of chat models I find on Hugging Face? I would like to learn how to write one for Phind instead of relying on text-generation-webui. Thank you
2023-11-22T22:49:18
https://www.reddit.com/r/LocalLLaMA/comments/181m63x/new_to_local_models_local_model_api_question/
shootthepie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181m63x
false
null
t3_181m63x
/r/LocalLLaMA/comments/181m63x/new_to_local_models_local_model_api_question/
false
false
self
1
null
Is DeepSeek Coder 1.3b meant to be this bad?
2
I am using kobold.cpp and it couldn't code anything outside of hello world. Am I doing something wrong? https://preview.redd.it/xdo6q7a25z1c1.png?width=1454&format=png&auto=webp&s=30d0eaed2c6d4d95070f2312a4bc3add0dcc2840
2023-11-22T22:19:25
https://www.reddit.com/r/LocalLLaMA/comments/181lgso/is_deepseek_coder_13b_meant_to_be_this_bad/
East-Awareness-249
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181lgso
false
null
t3_181lgso
/r/LocalLLaMA/comments/181lgso/is_deepseek_coder_13b_meant_to_be_this_bad/
false
false
https://b.thumbs.redditm…RaS54iglVyVU.jpg
2
null
LLM response based on customer personality
2
I'm designing a simple RAG for a kids learning platform (12-17). I would like for the chatbot to be more useful and friendly. I'd like for the LLM to be a better companion for them, a co-pilot. In order to do that, I've been thinking about storing some information about their prompts to map out their personality and be more helpful with the way the information is presented. Has anyone had any experience in designing something similar?
2023-11-22T22:19:21
https://www.reddit.com/r/LocalLLaMA/comments/181lgqc/llm_response_based_on_customer_personality/
Amazing-Protection87
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181lgqc
false
null
t3_181lgqc
/r/LocalLLaMA/comments/181lgqc/llm_response_based_on_customer_personality/
false
false
self
2
null
Intel neural-chat-7b-v3-1
1
Has anyone explored Intel's new [model](https://huggingface.co/Intel/neural-chat-7b-v3-1) yet? It's a 7B model trained on [Slim Orca](https://huggingface.co/datasets/Open-Orca/SlimOrca), which is currently the top 7B model on the HF open LLM leaderboard. I've found other 7B models to be surprisingly helpful, especially for annotation/data extraction tasks, so I'm curious if it's worth replacing [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) with this model.
2023-11-22T22:02:57
https://www.reddit.com/r/LocalLLaMA/comments/181l30f/intel_neuralchat7bv31/
dvanstrien
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181l30f
false
null
t3_181l30f
/r/LocalLLaMA/comments/181l30f/intel_neuralchat7bv31/
false
false
self
1
{'enabled': False, 'images': [{'id': 'MTHa0RFu26sELjrensnxBNo3EpGgvvXukCfilQ3uEDY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oVzLyFlZ-FVRhjd4IQzuJpDzN-DCVuLdmZNUeFdKrrY.jpg?width=108&crop=smart&auto=webp&s=74983322bb1387297c0925d101f2cdac0e0cbc03', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oVzLyFlZ-FVRhjd4IQzuJpDzN-DCVuLdmZNUeFdKrrY.jpg?width=216&crop=smart&auto=webp&s=8df6e273431b159a106d994c2b0505c95ee9eb54', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oVzLyFlZ-FVRhjd4IQzuJpDzN-DCVuLdmZNUeFdKrrY.jpg?width=320&crop=smart&auto=webp&s=7464473e05d3045147cce6463617e09879d40306', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oVzLyFlZ-FVRhjd4IQzuJpDzN-DCVuLdmZNUeFdKrrY.jpg?width=640&crop=smart&auto=webp&s=f8d5b4af4430b93ceb281a311b6c320329e3904e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oVzLyFlZ-FVRhjd4IQzuJpDzN-DCVuLdmZNUeFdKrrY.jpg?width=960&crop=smart&auto=webp&s=2d15348bf5fc43937b0be5e3dbefba9f46aff520', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oVzLyFlZ-FVRhjd4IQzuJpDzN-DCVuLdmZNUeFdKrrY.jpg?width=1080&crop=smart&auto=webp&s=fbfda6a21d3126ca21cce1bcdb504c1e59d96e99', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oVzLyFlZ-FVRhjd4IQzuJpDzN-DCVuLdmZNUeFdKrrY.jpg?auto=webp&s=540f33032c58fe2dca215a9dff123ce5bcdc3c1a', 'width': 1200}, 'variants': {}}]}
Easy way to find the default context and proper RoPE settings for newer models?
17
Hi, all, This question makes me feel dumb, but I've given it my best shot. From my understanding, Ooba is supposed to load the proper, default settings for a model when you first load it in the UI, and then you can go from there. At least, that's how it used to work. Now, though, it doesn't seem to do that every time, or it does so very inconsistently. So, newer models that default to 16384, 32768, or even the very new 200k, will load with alpha values of 1, rope_freq_base of 0, etc., or with the wrong context, or whatever. And I'm never sure what exactly to adjust if I want to lower or raise the context limit, or how much to adjust it by. It was very simple before, when everything was base 2048, and then base 4096. But I'm not sure how to handle it now, and I'm getting a lot of gibberish output as a result. Anyone able to help me figure this stuff out?
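For the "what do I set alpha/rope_freq_base to" part of the question, the common first guess is NTK-aware RoPE scaling: pick alpha from how far you stretch the context, then raise the rotary frequency base accordingly. This is a rule-of-thumb sketch — the exact best alpha varies per model and finetune, and `head_dim=128` is the Llama-family assumption:

```python
def ntk_rope_freq_base(target_ctx, native_ctx, head_dim=128, base=10000.0):
    """Rule-of-thumb NTK-aware RoPE scaling.

    alpha is the context stretch factor (clamped to >= 1 so running at or
    below native context leaves the base untouched); the exponent
    head_dim/(head_dim-2) is the standard NTK-aware correction.
    """
    alpha = max(1.0, target_ctx / native_ctx)
    return base * alpha ** (head_dim / (head_dim - 2))

# e.g. running a 4096-context model at 8192:
print(round(ntk_rope_freq_base(8192, 4096)))
```

Loaders differ in which knob they expose: some take alpha directly, others take the adjusted rope_freq_base this function computes.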
2023-11-22T21:49:50
https://www.reddit.com/r/LocalLLaMA/comments/181krc0/easy_way_to_find_the_default_context_and_proper/
smile_e_face
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181krc0
false
null
t3_181krc0
/r/LocalLLaMA/comments/181krc0/easy_way_to_find_the_default_context_and_proper/
false
false
self
17
null
Local generation not utilising GPU?
1
Hello, this has probably been asked a bazillion times, but I can't find an example. I have installed stable diffusion and LLaMA on my new PC. However, it does not appear to be utilising my new RTX 4080 for generation. Generation of text or images is very slow, and the GPU utilisation stays at 0% - 4% throughout. Any idea how this could be addressed? I am no expert, so I have not a clue what I could change for this. It is on a laptop by the way, NVIDIA RTX 4080 (Laptop) and 12th Gen Intel CPU. Thanks in advance!
2023-11-22T21:28:02
https://www.reddit.com/r/LocalLLaMA/comments/181k8hu/local_generation_not_utilising_gpu/
AlternativeParfait47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181k8hu
false
null
t3_181k8hu
/r/LocalLLaMA/comments/181k8hu/local_generation_not_utilising_gpu/
false
false
self
1
null
Guys, I just realized something that's actually really obvious. Why is no one talking about this?
1
Today we run a query, the LLM answers. The evolution of this was going multi-modal, and being able to input other forms of data, like video and sound. But still, we query a model, the LLM answers. Same with agents, only then we have multiple queries working in a system, often assisted by function calling and automation in order to put thought into action. We do it this way because it's the most obvious solution. The easiest one to implement, or rather the only one we can implement at this time. **At this time.** In the future we will get more efficient models. Models that can output on the order of 100+ t/s. Models that run so fast that they're essentially able to simulate continuous runtime. And that's not all. With a model this fast you could also implement multi-query systems which run in the backend to handle any agent-related tasks, or even assist in reasoning by having lightning-fast discussions with the frontend before giving any output. **tl;dr** LLMs are just waiting for us to give them more juice so that they're fast enough to simulate continuous runtime
2023-11-22T21:19:17
https://www.reddit.com/r/LocalLLaMA/comments/181k0y1/guys_i_just_realized_something_thats_actually/
Severin_Suveren
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181k0y1
false
null
t3_181k0y1
/r/LocalLLaMA/comments/181k0y1/guys_i_just_realized_something_thats_actually/
false
false
self
1
null
DiLoCo: Distributed Low-Communication Training of Language Models
10
Twitter thread by the first author: https://twitter.com/Ar_Douillard/status/1724732329740976187
2023-11-22T21:15:21
https://www.reddit.com/r/LocalLLaMA/comments/181jxmj/diloco_distributed_lowcommunication_training_of/
IntergalacticCiv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181jxmj
false
null
t3_181jxmj
/r/LocalLLaMA/comments/181jxmj/diloco_distributed_lowcommunication_training_of/
false
false
self
10
{'enabled': False, 'images': [{'id': 'J12zXeFnA8lunmoDkWEN58ImMvpbVpgKWxmx4lCkj4U', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/pKwbCmXQr0e7p_28h6kvxtq_kIUY-ul_ypP6Akl1J2I.jpg?width=108&crop=smart&auto=webp&s=07f7f520a05e17b4529b67d9b8eb62b9c9cf7413', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/pKwbCmXQr0e7p_28h6kvxtq_kIUY-ul_ypP6Akl1J2I.jpg?auto=webp&s=8a50c7bcccccbe075c2f95cf638c66ff155e5598', 'width': 140}, 'variants': {}}]}
LLM training using FP8
3
I've got access to a few H100 gpus, and I've been trying to get my hands on a working example of LLM training (Llama or Mistral) code using FP8. But all I found was some toy models using Transformer Engine library. If I want to use this TE library, do I have to redefine the suitable layers of LLM with FP8 support? Seems too patchy and error prone. It would be really helpful if someone can point me to some resources to start with. Thanks in advance.
2023-11-22T20:31:28
https://www.reddit.com/r/LocalLLaMA/comments/181iwaa/llm_training_using_fp8/
tinyburger1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181iwaa
false
null
t3_181iwaa
/r/LocalLLaMA/comments/181iwaa/llm_training_using_fp8/
false
false
self
3
null
Any success in finetuning StyleTTS2?
3
Hey guys, just wondering if anyone has had success finetuning StyleTTS2 yet? The only one I can find is the LJSpeech model, which sounds really good! But wondering what some other narrators / speakers would sound like, especially voices more outside the training dataset.
2023-11-22T20:21:39
https://www.reddit.com/r/LocalLLaMA/comments/181io84/any_success_in_finetuning_styletts2/
enterguild
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181io84
false
null
t3_181io84
/r/LocalLLaMA/comments/181io84/any_success_in_finetuning_styletts2/
false
false
self
3
null
When inferencing Mistral 7B (or Mistral 7B Orca) for RAG I get "out of memory", but it works fine with a Llama 2 model.
1
2023-11-22T20:00:17
https://i.redd.it/ropph633gy1c1.png
Anu_Rag9704
i.redd.it
1970-01-01T00:00:00
0
{}
181i6fu
false
null
t3_181i6fu
/r/LocalLLaMA/comments/181i6fu/when_inferencing_mistral7b_or_mistral_7b_orca_for/
false
false
https://b.thumbs.redditm…VO3jYY9iBnys.jpg
1
{'enabled': True, 'images': [{'id': '2Hdx59Z_YBwxEPqUi3Skl0mTtakNmvXkrO0iqMBLbJg', 'resolutions': [{'height': 28, 'url': 'https://preview.redd.it/ropph633gy1c1.png?width=108&crop=smart&auto=webp&s=fce1a5b9a60cf88b025b81899b4c5f67b310cc57', 'width': 108}, {'height': 56, 'url': 'https://preview.redd.it/ropph633gy1c1.png?width=216&crop=smart&auto=webp&s=db15ca6c5d9a9946d73480a0b99920a7c40810c1', 'width': 216}, {'height': 84, 'url': 'https://preview.redd.it/ropph633gy1c1.png?width=320&crop=smart&auto=webp&s=3e1c57a83b6b4281aeb7abd582dba9fb0d568fca', 'width': 320}, {'height': 168, 'url': 'https://preview.redd.it/ropph633gy1c1.png?width=640&crop=smart&auto=webp&s=c42427e0302c0b57a04576b72d1e2aebdcfe9e9e', 'width': 640}, {'height': 252, 'url': 'https://preview.redd.it/ropph633gy1c1.png?width=960&crop=smart&auto=webp&s=7f89b52d9845a2a2a71a1f8bc72c5f9d98dca24c', 'width': 960}, {'height': 283, 'url': 'https://preview.redd.it/ropph633gy1c1.png?width=1080&crop=smart&auto=webp&s=cbcc2ca452823022cadda266515fcbad5de2a6ff', 'width': 1080}], 'source': {'height': 606, 'url': 'https://preview.redd.it/ropph633gy1c1.png?auto=webp&s=13600ee62d8e43aa124f7c176f57d16fff7f8abe', 'width': 2306}, 'variants': {}}]}
1.3B with 68.29% Humaneval lol, don't behead me. Part of my project PIC (partner-in-crime)
91
Hello after a long time :) I am TokenBender. Some of you may remember my previous model - [codeCherryPop](https://www.reddit.com/r/LocalLLaMA/comments/156htzy/i_made_llama2_7b_into_a_really_useful_coder/?utm_source=share&utm_medium=web2x&context=3) It was very kindly received, so I am hoping I won't be killed this time as well. Releasing EvolvedSeeker-1.3B v0.0.1, a 1.3B model with 68.29% on HumanEval. The base model is quite cracked; I just did with it what I usually try to do with every coding model. Here is the model - [https://huggingface.co/TokenBender/evolvedSeeker_1_3](https://huggingface.co/TokenBender/evolvedSeeker_1_3) This can be run unquantized in a free Colab notebook at very good speed - https://huggingface.co/TokenBender/evolvedSeeker_1_3/blob/main/TokenBender_gradio_evolvedSeeker_inference.ipynb I will post this in TheBloke's server for GGUF, but I find that DeepSeek Coder's GGUF quants underperform for some reason, so let's see. EvolvedSeeker v0.0.1 (first phase): this model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base), trained on 50k instructions for 3 epochs. I have mostly curated instructions from evolInstruct datasets and some portions of Glaive coder. Around 3k answers were modified via self-instruct. *Recommended format is ChatML; Alpaca will work, but take care of the EOT token.* This is a very early version of a 1.3B-sized model in my major project PIC (Partner-in-Crime). Going to teach this model JSON/MD adherence next. https://preview.redd.it/jhvz3xoj7y1c1.png?width=1500&format=png&auto=webp&s=3c0ec081768293885a9953766950758e9bf6db7d I will just focus on simple things that I can do for now, but anything you guys say will be taken into consideration for fixes.
2023-11-22T19:12:56
https://www.reddit.com/r/LocalLLaMA/comments/181h3lv/13b_with_6829_humaneval_lol_dont_behead_me_part/
ahm_rimer
self.LocalLLaMA
2023-11-23T12:18:33
0
{}
181h3lv
false
null
t3_181h3lv
/r/LocalLLaMA/comments/181h3lv/13b_with_6829_humaneval_lol_dont_behead_me_part/
false
false
https://b.thumbs.redditm…v64r2sT-_R2Y.jpg
91
{'enabled': False, 'images': [{'id': 'wFIEPE5sfLdV1gWoi19KpI0L2iHzNdZHMiO_wVvJtfI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WSZ9kma0ZzNXtBkDlVL7TlggtW0gn3Hoe-dBqrlNkW0.jpg?width=108&crop=smart&auto=webp&s=1a33525424cf8bc2fbace9b69513c37a57832131', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WSZ9kma0ZzNXtBkDlVL7TlggtW0gn3Hoe-dBqrlNkW0.jpg?width=216&crop=smart&auto=webp&s=117fce2b465d2ee8e5072c6c08ae956d7f46ad6a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WSZ9kma0ZzNXtBkDlVL7TlggtW0gn3Hoe-dBqrlNkW0.jpg?width=320&crop=smart&auto=webp&s=592a3a301fed6509d99c34d771c9fce4da09b64a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WSZ9kma0ZzNXtBkDlVL7TlggtW0gn3Hoe-dBqrlNkW0.jpg?width=640&crop=smart&auto=webp&s=eec027cfbcc10ced478567a1646457acdad5c568', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WSZ9kma0ZzNXtBkDlVL7TlggtW0gn3Hoe-dBqrlNkW0.jpg?width=960&crop=smart&auto=webp&s=51077820f337d1f34ccc8b0bf936371bb25805a9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WSZ9kma0ZzNXtBkDlVL7TlggtW0gn3Hoe-dBqrlNkW0.jpg?width=1080&crop=smart&auto=webp&s=0a98a8408bceb5e9ed1178024ab4540f7d595ab5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WSZ9kma0ZzNXtBkDlVL7TlggtW0gn3Hoe-dBqrlNkW0.jpg?auto=webp&s=dbb9e758568026e6a7262ef2ddb292876f0b9408', 'width': 1200}, 'variants': {}}]}
Is there a grammarly equivalent I can run locally?
19
I'm looking for a writing assistant that, instead of writing for me, can instead suggest ways to improve a given chunk of text.
2023-11-22T18:14:07
https://www.reddit.com/r/LocalLLaMA/comments/181fphq/is_there_a_grammarly_equivalent_i_can_run_locally/
PictureAggravating36
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181fphq
false
null
t3_181fphq
/r/LocalLLaMA/comments/181fphq/is_there_a_grammarly_equivalent_i_can_run_locally/
false
false
self
19
null
DPO finetuning use case
1
I want to finetune an LLM to make it biased towards a specific point of view on big subjects (political, economic, etc.). The base model already knows some of these views but doesn't always align with the ones I want it to adopt. I have organized a dataset of QAs about this topic, so the question is: is it better to do a LoRA finetune first and then DPO, or can I go directly to the DPO stage? I'm planning to use Llama 2 7B or Zephyr 7B. #llm #DPO #llama2
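For context on what the DPO stage actually optimizes: the usual recipe is SFT (or a LoRA finetune) first, then DPO with that model frozen as the reference. A minimal sketch of the per-pair DPO loss, using plain floats rather than a training framework:

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-pair DPO loss (Rafailov et al., 2023).

    Pushes the policy to prefer the chosen answer over the rejected one,
    measured *relative to* a frozen reference model (e.g. the SFT/LoRA
    checkpoint), so the policy can't drift arbitrarily far from it.
    """
    chosen_reward = beta * (logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (logp_rejected - ref_logp_rejected)
    margin = chosen_reward - rejected_reward
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

# policy already prefers the chosen answer -> small loss:
print(dpo_loss(-10.0, -40.0, -25.0, -25.0))
```

The dependence on a reference model is why doing the supervised finetune first matters: DPO straight from the base model anchors the loss to a reference that never learned your answer style.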
2023-11-22T18:08:10
https://www.reddit.com/r/LocalLLaMA/comments/181fknb/dpo_fintuning_usecase/
Ok_Method2263
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181fknb
false
null
t3_181fknb
/r/LocalLLaMA/comments/181fknb/dpo_fintuning_usecase/
false
false
self
1
null
Noob question: How would I go about finetuning a LLM on my personal ChatGPT/Claude conversations?
1
[removed]
2023-11-22T17:58:05
https://www.reddit.com/r/LocalLLaMA/comments/181fbnz/noob_question_how_would_i_go_about_finetuning_a/
airtwink
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181fbnz
false
null
t3_181fbnz
/r/LocalLLaMA/comments/181fbnz/noob_question_how_would_i_go_about_finetuning_a/
false
false
self
1
null
AI anywhere? Its more likely than you think.
1
[removed]
2023-11-22T16:58:19
https://www.reddit.com/r/LocalLLaMA/comments/181dwm6/ai_anywhere_its_more_likely_than_you_think/
hibbity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181dwm6
false
null
t3_181dwm6
/r/LocalLLaMA/comments/181dwm6/ai_anywhere_its_more_likely_than_you_think/
false
false
self
1
{'enabled': False, 'images': [{'id': '8UT0IduGQyrEh8J3QvAC7PXW-klXcbQ3PArYngINglQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/zyV4t0uox14ElTlexI0Qs5mAgtCzDE70Zjurb-pnC6k.jpg?width=108&crop=smart&auto=webp&s=7893b7928ad32f4b6722302eb535f3d10e4fbf8f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/zyV4t0uox14ElTlexI0Qs5mAgtCzDE70Zjurb-pnC6k.jpg?width=216&crop=smart&auto=webp&s=2bec04a2ae81135e09ef7b94cb65cab22d08950e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/zyV4t0uox14ElTlexI0Qs5mAgtCzDE70Zjurb-pnC6k.jpg?width=320&crop=smart&auto=webp&s=d2cace3322a0ffacc2daaf1c7e224fda45f3c065', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/zyV4t0uox14ElTlexI0Qs5mAgtCzDE70Zjurb-pnC6k.jpg?auto=webp&s=bb7619f7ecd37961a677ec2f2608db56c8df0395', 'width': 480}, 'variants': {}}]}
I’m extremely confused about system requirements. Some people are worried about ram and others about vram. I have 64gb of ram and 12gb vram. What size of model can I run?
49
From what I’ve read mac somehow uses system ram and windows uses the gpu? It doesn’t make any sense to me. Any help appreciated.
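The short version: Macs have unified memory (GPU and CPU share the same pool), while on Windows/Linux a discrete GPU only sees its own VRAM, though llama.cpp can split layers between VRAM and system RAM at reduced speed. A back-of-the-envelope sizing sketch — the ~4.5 bits/weight figure for Q4_K_M and the fixed overhead are rough assumptions, and KV-cache overhead grows with context:

```python
def model_vram_gb(n_params_b, bits_per_weight, ctx_overhead_gb=1.5):
    """Rough VRAM estimate for inference: quantized weights plus a flat
    allowance for KV cache and buffers (an assumption; grows with context)."""
    weights_gb = n_params_b * bits_per_weight / 8
    return weights_gb + ctx_overhead_gb

for params, name in [(7, "7B Q4_K_M"), (13, "13B Q4_K_M"),
                     (34, "34B Q4_K_M"), (70, "70B Q4_K_M")]:
    est = model_vram_gb(params, 4.5)
    fit = "fits" if est <= 12 else "needs CPU offload"
    print(f"{name}: ~{est:.1f} GB -> {fit} in 12 GB VRAM")
```

By this estimate, 12 GB VRAM comfortably holds 7B/13B at 4-bit; larger models spill into the 64 GB of system RAM via layer offloading.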
2023-11-22T16:29:32
https://www.reddit.com/r/LocalLLaMA/comments/181d8u7/im_extremely_confused_about_system_requirements/
A0sanitycomp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181d8u7
false
null
t3_181d8u7
/r/LocalLLaMA/comments/181d8u7/im_extremely_confused_about_system_requirements/
false
false
self
49
null
Cheapest 20xx series GPU for 3/4bit llama.cpp?
1
[removed]
2023-11-22T15:51:10
https://www.reddit.com/r/LocalLLaMA/comments/181c9aa/cheapest_20xx_series_gpu_for_34bit_llamacpp/
Substantial-Scene-85
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181c9aa
false
null
t3_181c9aa
/r/LocalLLaMA/comments/181c9aa/cheapest_20xx_series_gpu_for_34bit_llamacpp/
false
false
self
1
null
ShareGPT4V - New multi-modal model, improves on LLaVA
75
2023-11-22T15:21:30
https://sharegpt4v.github.io/
Cradawx
sharegpt4v.github.io
1970-01-01T00:00:00
0
{}
181bimx
false
null
t3_181bimx
/r/LocalLLaMA/comments/181bimx/sharegpt4v_new_multimodal_model_improves_on_llava/
false
false
default
75
null
Should I use exllamav2 and tensorRT-LLM to get the best in H100 GPU?
1
I understand that quantization is used for consumer GPUs like GTX/RTX series. Will this even make it faster? I have just 20 GB on H100 (MIG partitioned) to run Llama2 13 B Chat.
2023-11-22T15:11:10
https://www.reddit.com/r/LocalLLaMA/comments/181bam1/should_i_use_exllamav2_and_tensorrtllm_to_get_the/
sanjeevr5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181bam1
false
null
t3_181bam1
/r/LocalLLaMA/comments/181bam1/should_i_use_exllamav2_and_tensorrtllm_to_get_the/
false
false
self
1
null
CUDA out of Memory Error
1
I'm using a Colab notebook with a T4 to have Llama 2 summarize all tables in a PDF. It works for the first 10 or so tables, and then I'm getting the dreaded CUDA out of memory error. It seems like each successive summarization call is accumulating on the GPU, is there some way to clear the allocated memory from the previous call so that the memory allocated doesn't build up?
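In PyTorch the usual fix is to stop holding references to each call's outputs (`del`/`.detach()`), then call `gc.collect()` and `torch.cuda.empty_cache()` between calls. The reference-accumulation pattern itself is framework-independent; here is a stdlib-only sketch with a stand-in "tensor" class (the 500 MB figure is made up for illustration):

```python
import gc

class FakeTensor:
    """Stand-in for a GPU tensor; `live` tracks simulated allocated MB."""
    live = 0
    def __init__(self, mb):
        self.mb = mb
        FakeTensor.live += mb
    def __del__(self):
        FakeTensor.live -= self.mb

def summarize(table, history=None):
    out = FakeTensor(500)        # each call allocates ~500 "MB"
    if history is not None:
        history.append(out)      # BUG: holding every output forever
    return f"summary of {table}"

# leaky loop: references accumulate, so memory only grows
hist = []
for t in range(10):
    summarize(t, hist)
print(FakeTensor.live)           # 5000 "MB" still allocated

# fixed loop: keep only the text, let each tensor be freed immediately
hist.clear()
gc.collect()
for t in range(10):
    s = summarize(t)
gc.collect()
print(FakeTensor.live)
```

With real tensors, keeping summaries (strings) instead of model outputs per table — plus `torch.cuda.empty_cache()` to return freed blocks from PyTorch's caching allocator — typically stops the per-call growth.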
2023-11-22T15:10:40
https://www.reddit.com/r/LocalLLaMA/comments/181ba78/cuda_out_of_memory_error/
Plus-Significance348
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181ba78
false
null
t3_181ba78
/r/LocalLLaMA/comments/181ba78/cuda_out_of_memory_error/
false
false
self
1
null
How to finetune llama 2 on a tpu?
1
I'm trying to train llama 2 on a tpu using qlora and peft. All the scripts I find are tied to CUDA. Are there any available?
2023-11-22T15:04:10
https://www.reddit.com/r/LocalLLaMA/comments/181b53s/how_to_finetune_llama_2_on_a_tpu/
DirectionOdd9824
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181b53s
false
null
t3_181b53s
/r/LocalLLaMA/comments/181b53s/how_to_finetune_llama_2_on_a_tpu/
false
false
self
1
null
Finetuning on already instruct-tuned model?
1
I'm trying to add a new task to a model (openhermes-2.5) using lora, but it seems that it makes the model forget everything except the task that I trained for. Is there something that I need to do? Or am I supposed to train on the whole dataset + mine?
2023-11-22T14:43:48
https://www.reddit.com/r/LocalLLaMA/comments/181aonw/finetuning_on_already_instructtuned_model/
SkyIDreamer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181aonw
false
null
t3_181aonw
/r/LocalLLaMA/comments/181aonw/finetuning_on_already_instructtuned_model/
false
false
self
1
null
Finishing prematurely - training a LoRA on GPTQ model
1
[removed]
2023-11-22T14:35:10
https://www.reddit.com/r/LocalLLaMA/comments/181aiae/finishing_prematurely_training_a_lora_on_gptq/
Vilzuh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181aiae
false
null
t3_181aiae
/r/LocalLLaMA/comments/181aiae/finishing_prematurely_training_a_lora_on_gptq/
false
false
self
1
{'enabled': False, 'images': [{'id': 'no-_VcwSHgX4jsi9L-tyBgEBLBctYb3Whc_WK-BgumA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Z381OAmbeDEeeb5My043bAffbxrgqi5srl_-TtcwCJQ.jpg?width=108&crop=smart&auto=webp&s=68be753c177af2b633634da4dd8a584a811630e7', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Z381OAmbeDEeeb5My043bAffbxrgqi5srl_-TtcwCJQ.jpg?width=216&crop=smart&auto=webp&s=55406068d353832cd6180d9039b2001f7aabcfd6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Z381OAmbeDEeeb5My043bAffbxrgqi5srl_-TtcwCJQ.jpg?width=320&crop=smart&auto=webp&s=7265919640f9c7a229431d9caf2541c8938b0cd6', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Z381OAmbeDEeeb5My043bAffbxrgqi5srl_-TtcwCJQ.jpg?auto=webp&s=f879bc35ad67c659c7e11d8c2480d4f4832ea6d5', 'width': 480}, 'variants': {}}]}
llama2 13B on Gtx 1070
5
Can I run llama2 13B locally on my Gtx 1070? I read somewhere minimum suggested VRAM is 10 GB but since the 1070 has 8GB would it just run a little slower? or could I use some quantization with bitsandbytes for example to make it fit and run more smoothly?
2023-11-22T14:25:27
https://www.reddit.com/r/LocalLLaMA/comments/181ab6j/llama2_13b_on_gtx_1070/
Suleyman_III
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181ab6j
false
null
t3_181ab6j
/r/LocalLLaMA/comments/181ab6j/llama2_13b_on_gtx_1070/
false
false
self
5
null
How to host an LLM which can be reached via an API for NSFW chat
1
[removed]
2023-11-22T14:13:03
https://www.reddit.com/r/LocalLLaMA/comments/181a1uk/how_to_host_an_llm_which_can_be_reached_via_an/
Substantial-Scene-85
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181a1uk
false
null
t3_181a1uk
/r/LocalLLaMA/comments/181a1uk/how_to_host_an_llm_which_can_be_reached_via_an/
false
false
nsfw
1
null
The Economics of LLMs in Production
1
[removed]
2023-11-22T14:10:08
https://www.reddit.com/r/LocalLLaMA/comments/1819zkc/the_economics_of_llms_in_production/
idan_R1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1819zkc
false
null
t3_1819zkc
/r/LocalLLaMA/comments/1819zkc/the_economics_of_llms_in_production/
false
false
self
1
{'enabled': False, 'images': [{'id': 'mYui0GEkidvtx_7sDtPxkyAdrbzPzoYVly6YJidw8o8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Rav8IP6Mc-rEJE9J1ZsgGVMvbQqQfS5wk5L85OZyxdA.jpg?width=108&crop=smart&auto=webp&s=196fed1162fd702a3380aac0bb44147c28cd1334', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Rav8IP6Mc-rEJE9J1ZsgGVMvbQqQfS5wk5L85OZyxdA.jpg?width=216&crop=smart&auto=webp&s=7f2d3c84df84c14ff64467ec0e29c9dcbe19fc2a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Rav8IP6Mc-rEJE9J1ZsgGVMvbQqQfS5wk5L85OZyxdA.jpg?width=320&crop=smart&auto=webp&s=195afe9952b4b2ad898949d8c965308ae212d917', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Rav8IP6Mc-rEJE9J1ZsgGVMvbQqQfS5wk5L85OZyxdA.jpg?width=640&crop=smart&auto=webp&s=4d5581720c8ef4a10d3a9c3acc46cf85c88eaec6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Rav8IP6Mc-rEJE9J1ZsgGVMvbQqQfS5wk5L85OZyxdA.jpg?width=960&crop=smart&auto=webp&s=a4a1eec406f1f514d4e974d9517be49a0c4bd2d2', 'width': 960}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/Rav8IP6Mc-rEJE9J1ZsgGVMvbQqQfS5wk5L85OZyxdA.jpg?auto=webp&s=46c3d29d80597feab9cdf801121a400dbaa889c1', 'width': 1000}, 'variants': {}}]}
Exllama v2 vs. llama.cpp (All layers offloaded to GPU)
3
Will there be a significant difference in speed and quality between LLama v2 GPTQ using Exllama v2 and LLama v2 GGUF using llama.cpp by offloading all the layers to GPU?
2023-11-22T14:08:44
https://www.reddit.com/r/LocalLLaMA/comments/1819yj9/exllama_v2_vs_llamacpp_all_layes_offloaded_to_gpu/
WinterUsed1120
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1819yj9
false
null
t3_1819yj9
/r/LocalLLaMA/comments/1819yj9/exllama_v2_vs_llamacpp_all_layes_offloaded_to_gpu/
false
false
self
3
null
wasmedge ggml error
1
I'm getting: GGML_ASSERT: /Users/hydai/workspace/WasmEdge/plugins/wasi_nn/thirdparty/ggml/ggml-metal.m:1459: false Abort trap: 6 when I try to run: wasmedge --dir .:. --nn-preload default:GGML:AUTO:Orca-2-13b-ggml-model-q4_0.gguf llama-chat.wasm -p chatml -s '.... hydai is not a user on my Mac! I have not heard of hydai - but maybe this is the person who built the plugin? I tried building the plugin myself but got screens and screens of errors, so am now stuck - any thoughts?
2023-11-22T13:27:58
https://www.reddit.com/r/LocalLLaMA/comments/181944r/wasmedge_ggml_error/
sgt102
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
181944r
false
null
t3_181944r
/r/LocalLLaMA/comments/181944r/wasmedge_ggml_error/
false
false
default
1
null
Dataset de-duplication methods
3
Hey everyone, I have a dataset that has around 8 million pairs of prompts and responses collected and curated from a bunch of open-source datasets on HF. I wanted to know the best method to dedup this dataset. I am planning on doing this locally (4090 with 64 GB RAM), and I've looked into a few methods but wasn't able to use those in my case because of my compute constraints. Please let me know if y'all know an efficient method I can use! TIA.
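Exact and near-exact (casing/whitespace) duplicates can be removed with nothing but the stdlib, streaming in O(n) with one 16-byte digest per kept row — well within 64 GB for 8M pairs. Fuzzy near-duplicates would need MinHash-style sketching or embedding clustering on top; this sketch only covers the exact case:

```python
import hashlib

def normalize(text):
    """Cheap canonical form: lowercase, collapse whitespace."""
    return " ".join(text.lower().split())

def dedup(pairs):
    """Exact deduplication of (prompt, response) pairs after normalization.

    Keeps the first occurrence of each normalized pair; the `seen` set
    stores only 16-byte MD5 digests, so memory stays small even at 8M rows.
    """
    seen, kept = set(), []
    for prompt, response in pairs:
        key = hashlib.md5(
            normalize(prompt + "\x00" + response).encode()
        ).digest()
        if key not in seen:
            seen.add(key)
            kept.append((prompt, response))
    return kept

data = [("What is 2+2?", "4"),
        ("what is  2+2?", "4"),   # same pair up to casing/spacing
        ("Capital of France?", "Paris")]
print(len(dedup(data)))
```

The `\x00` separator keeps `("ab", "c")` and `("a", "bc")` from hashing identically.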
2023-11-22T13:24:31
https://www.reddit.com/r/LocalLLaMA/comments/18191pw/dataset_deduplication_methods/
Dry_Long3157
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18191pw
false
null
t3_18191pw
/r/LocalLLaMA/comments/18191pw/dataset_deduplication_methods/
false
false
self
3
null
How do I go about training an open source llm on a postgres database
1
Howdy, I'm a backend developer, and management recently asked me to train an LLM on our company data. I'm in a bit over my head here, and I figured I'd ask for high-level advice rather than continuing to go down Google rabbit holes. What I've tried so far: * I spun up some GPU instances on AWS. Couldn't get Llama to work at all, except for using gpt4all, which wasn't very performant and does make a network call to a GitHub page for a list of models. * I tried following a Google Cloud tutorial [here](https://cloud.google.com/blog/products/databases/using-pgvector-llms-and-langchain-with-google-cloud-databases). This didn't work in their Colab notebook, so I gave up on that; if their own documentation didn't work, it didn't seem promising. Any advice is appreciated!
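"Train an LLM on our data" in practice usually means retrieval-augmented generation rather than actual training: embed the rows, store the vectors (pgvector, in the tutorial's case), and fetch the nearest neighbors for each question to stuff into the prompt. The retrieval core is just nearest-neighbor search over embeddings; a plain-Python sketch with made-up 3-dimensional vectors standing in for real embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, rows, k=2):
    """rows: (id, embedding) pairs, i.e. what a pgvector column holds.
    Roughly the SQL: SELECT id FROM docs ORDER BY embedding <=> %s LIMIT k;
    """
    return sorted(rows, key=lambda r: -cosine(query_vec, r[1]))[:k]

rows = [("invoice-faq", [0.9, 0.1, 0.0]),
        ("hr-handbook", [0.1, 0.9, 0.0]),
        ("api-docs",    [0.8, 0.2, 0.1])]
print([rid for rid, _ in top_k([1.0, 0.0, 0.0], rows)])
```

The retrieved rows then go into the LLM prompt as context — no finetuning required, and the Postgres data never has to leave your infrastructure.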
2023-11-22T13:22:38
https://www.reddit.com/r/LocalLLaMA/comments/18190ds/how_do_i_go_about_training_an_open_source_llm_on/
Particular-Tour-118
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18190ds
false
null
t3_18190ds
/r/LocalLLaMA/comments/18190ds/how_do_i_go_about_training_an_open_source_llm_on/
false
false
self
1
{'enabled': False, 'images': [{'id': 'SG6tYDdRU3Tt2JciPU9aaMVkncukiAUjKyabUiZxC4s', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/-SlFoShTVbu9vr3a6s7VVd5wu9KcPh1goPTR_74rTAw.jpg?width=108&crop=smart&auto=webp&s=8312cd2d4cb342cd87e37fe7e2317f1a32b3f00d', 'width': 108}, {'height': 106, 'url': 'https://external-preview.redd.it/-SlFoShTVbu9vr3a6s7VVd5wu9KcPh1goPTR_74rTAw.jpg?width=216&crop=smart&auto=webp&s=10abd68e04be270624d38c642d49d53eddc4d41d', 'width': 216}, {'height': 157, 'url': 'https://external-preview.redd.it/-SlFoShTVbu9vr3a6s7VVd5wu9KcPh1goPTR_74rTAw.jpg?width=320&crop=smart&auto=webp&s=d407d38feab792c8fe1ab28f729fc8d021808ba6', 'width': 320}, {'height': 315, 'url': 'https://external-preview.redd.it/-SlFoShTVbu9vr3a6s7VVd5wu9KcPh1goPTR_74rTAw.jpg?width=640&crop=smart&auto=webp&s=740102fe689fdd4380a1b87d24e20d1c0dd5edb9', 'width': 640}, {'height': 473, 'url': 'https://external-preview.redd.it/-SlFoShTVbu9vr3a6s7VVd5wu9KcPh1goPTR_74rTAw.jpg?width=960&crop=smart&auto=webp&s=e2a46359a95efe3801c9c217048755e85f76f2ef', 'width': 960}, {'height': 532, 'url': 'https://external-preview.redd.it/-SlFoShTVbu9vr3a6s7VVd5wu9KcPh1goPTR_74rTAw.jpg?width=1080&crop=smart&auto=webp&s=e0992d401d88ff2012351f3731170c28afa38ab7', 'width': 1080}], 'source': {'height': 1232, 'url': 'https://external-preview.redd.it/-SlFoShTVbu9vr3a6s7VVd5wu9KcPh1goPTR_74rTAw.jpg?auto=webp&s=7188f365e0882e64d652fca0e1ffcc532a1c4b62', 'width': 2500}, 'variants': {}}]}
40x or more speedup by selecting important neurons
92
https://arxiv.org/abs/2311.10770

"UltraFastBERT", apparently a variant of BERT, uses only 0.3% of its neurons during inference yet performs on par with similar BERT models.

I hope that's going to be available for all kinds of models in the near future!
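The trick behind the paper is the fast feedforward network: the layer's neurons are arranged as a balanced binary tree, and each input evaluates only one "decision neuron" per level to pick a path, so a depth-12 tree touches 12 of its 4095 internal neurons (~0.3%). A toy sketch under simplifying assumptions — real FFF layers use learned activations and batched matmuls, not this scalar walk:

```python
import random

def fff_forward(x, node_w, leaf_w, depth):
    """Fast-feedforward sketch: walk a binary tree of decision neurons,
    evaluating only `depth` of them instead of the whole layer width.
    node_w/leaf_w are flat lists of per-node weight vectors (heap layout:
    children of node i are 2i+1 and 2i+2)."""
    dot = lambda w, v: sum(a * b for a, b in zip(w, v))
    idx = 0
    for _ in range(depth):                 # one neuron per level
        go_right = dot(node_w[idx], x) > 0
        idx = 2 * idx + (2 if go_right else 1)
    leaf = idx - (2 ** depth - 1)          # index among the 2**depth leaves
    return dot(leaf_w[leaf], x)

random.seed(0)
depth, dim = 12, 16                        # 2**12 - 1 = 4095 internal neurons
node_w = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(2 ** depth - 1)]
leaf_w = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(2 ** depth)]
x = [random.gauss(0, 1) for _ in range(dim)]
y = fff_forward(x, node_w, leaf_w, depth)  # touched 12 of 4095 node neurons
```

The work per token is O(depth) dot products instead of O(width), which is where the claimed speedup comes from — the catch being that current GPU kernels are built for dense matmuls, not per-row tree walks.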
2023-11-22T13:21:42
https://www.reddit.com/r/LocalLLaMA/comments/1818zox/40x_or_more_speedup_by_selecting_important_neurons/
koehr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1818zox
false
null
t3_1818zox
/r/LocalLLaMA/comments/1818zox/40x_or_more_speedup_by_selecting_important_neurons/
false
false
self
92
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}
Help me find the right model for my low end PC and for NSFW chat purpose
1
[removed]
2023-11-22T13:13:27
https://www.reddit.com/r/LocalLLaMA/comments/1818tz2/help_me_find_the_right_model_for_my_low_end_pc/
Substantial-Scene-85
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1818tz2
false
null
t3_1818tz2
/r/LocalLLaMA/comments/1818tz2/help_me_find_the_right_model_for_my_low_end_pc/
false
false
nsfw
1
null
What would you want senators to ask during hearings?
18
Specialists and technical folks hang out here. If you were in Congress, what would you ask Big Tech companies in Congressional hearings? Technical questions are welcome. As we know, senators are notoriously bad at asking questions.
2023-11-22T12:50:52
https://www.reddit.com/r/LocalLLaMA/comments/1818eao/what_would_you_want_senators_to_ask_during/
chatsgpt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1818eao
false
null
t3_1818eao
/r/LocalLLaMA/comments/1818eao/what_would_you_want_senators_to_ask_during/
false
false
self
18
null
OpenAI to bring Altman back as CEO
2
2023-11-22T12:48:40
https://newyorkverified.com/4322664-openai-altman-return-ceo/
anujtomar_17
newyorkverified.com
1970-01-01T00:00:00
0
{}
1818cqv
false
null
t3_1818cqv
/r/LocalLLaMA/comments/1818cqv/openai_to_bring_altman_back_as_ceo/
false
false
https://b.thumbs.redditm…qT7jV0SQ_dbA.jpg
2
{'enabled': False, 'images': [{'id': 'rXCkM4TB5S6ovgzCh7K4AojzqLOAtDuo2WgzM-3Q8eg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/JOLFORA_yywCzimZEuIzMv4Y5UXPtsoJX5zFL8k0VTs.jpg?width=108&crop=smart&auto=webp&s=e0c120ae2424416549428f8f735bb836ee4cfb8d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/JOLFORA_yywCzimZEuIzMv4Y5UXPtsoJX5zFL8k0VTs.jpg?width=216&crop=smart&auto=webp&s=116dc2f7226b953ff204e862d855242cffad09ae', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/JOLFORA_yywCzimZEuIzMv4Y5UXPtsoJX5zFL8k0VTs.jpg?width=320&crop=smart&auto=webp&s=997b164a92b6113e6597321dc01c5be08f5deec6', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/JOLFORA_yywCzimZEuIzMv4Y5UXPtsoJX5zFL8k0VTs.jpg?width=640&crop=smart&auto=webp&s=0270d52e7f4e7cacab3ebb154532ecc3db4c0f3f', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/JOLFORA_yywCzimZEuIzMv4Y5UXPtsoJX5zFL8k0VTs.jpg?width=960&crop=smart&auto=webp&s=67a3f0f8e188e104991ce9efd8bddba506742129', 'width': 960}, {'height': 606, 'url': 'https://external-preview.redd.it/JOLFORA_yywCzimZEuIzMv4Y5UXPtsoJX5zFL8k0VTs.jpg?width=1080&crop=smart&auto=webp&s=e995dab06396891db1c3b197cf210dab8380ec46', 'width': 1080}], 'source': {'height': 719, 'url': 'https://external-preview.redd.it/JOLFORA_yywCzimZEuIzMv4Y5UXPtsoJX5zFL8k0VTs.jpg?auto=webp&s=af7fa2bf85d65eb849e5d2dd4374bc09f1d0023e', 'width': 1280}, 'variants': {}}]}
Multiple 4060ti VS 4090 for training/inference
2
I've been in need of a dedicated training rig lately, so I've been looking at GPUs. For context, I'm already running training on a 16GB 3080 Ti laptop and inference on a 16GB 4060 Ti. Both are really just fine for 13B models. When I look at cards, though, it appears I could buy nearly four more 16GB 4060 Ti cards for the price of a 24GB 4090. I understand that the 4090 is potentially 2-3 times faster based on benchmarks, but does this actually translate to improved Llama speeds? Would it even be viable to go for double 4060 Tis instead? Currently I'm standardized on 16GB/13B/4-bit, but I'd love to push beyond that, have more VRAM for training, etc. What are my options?
2023-11-22T12:38:04
https://www.reddit.com/r/LocalLLaMA/comments/18185xi/multiple_4060ti_vs_4090_for_traininginference/
__SlimeQ__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18185xi
false
null
t3_18185xi
/r/LocalLLaMA/comments/18185xi/multiple_4060ti_vs_4090_for_traininginference/
false
false
self
2
null
Formatting the response of LLM in the way Chatgpt or Claude shows on UI.
4
Hey everyone, I recently deployed llama-2 on my GPU, and I want to format the text output from the LLM the way the ChatGPT UI does, with code blocks and text blocks rendered separately. Any help or suggestions would be appreciated, or a pointer to an open-source LLM API serving project that does this out of the box. Thanks.
2023-11-22T12:11:00
https://www.reddit.com/r/LocalLLaMA/comments/1817pi7/formatting_the_response_of_llm_in_the_way_chatgpt/
One-Difficulty3149
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1817pi7
false
null
t3_1817pi7
/r/LocalLLaMA/comments/1817pi7/formatting_the_response_of_llm_in_the_way_chatgpt/
false
false
self
4
null
What embedding model do you guys use?
8
I'm trying to test more embedding models and I'm wondering what this community uses... I know it "may vary depending on use case", so in that case please share the model and the related use case. (Interested in both bi-encoders and cross-encoders.) Thanks in advance!
2023-11-22T11:01:00
https://www.reddit.com/r/LocalLLaMA/comments/1816l7m/what_embedding_model_do_you_guys_use/
Distinct-Target7503
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1816l7m
false
null
t3_1816l7m
/r/LocalLLaMA/comments/1816l7m/what_embedding_model_do_you_guys_use/
false
false
self
8
null
How much does Quantization actually impact models? - KL Divergence Tests
184
So, it was bothering me a bit that the only metric people really had to understand the 'loss' of quantization *objectively* was perplexity. So, after hacking with koboldcpp's sampler code to force output probabilities for a predetermined sequence so that I can make a fair comparison... [Mistral 7b Avg Quantization Differences](https://preview.redd.it/tls9g63c9v1c1.png?width=2125&format=png&auto=webp&s=a26a32fc5c18de6e2b7637658ab6a9d1826db760) Ta-da! This is Mistral 7b GGUF's various popular quantizations, compared to the fp16 base model, as measured by KL divergence. What I'm specifically doing to measure this is comparing the probability similarities between models. Specifically, I did this for a predetermined sequence of about \~350 tokens worth of Wikipedia text. This means: * fp16 = 0% change from original probabilities (cause it's the original) * Q8\_0 = 0.06% avg. difference from original probabilities * Q6\_K = 0.1% avg. difference from original probabilities * Q5\_K\_M = 0.3% avg. difference from original probabilities * Q4\_K\_M = 1.0% avg. difference from original probabilities * Q3\_K\_M = 3.7% avg. difference from original probabilities * Q2\_K = 8.2% avg. difference from original probabilities "Average difference" obscures the bigger problem with low quantization, though. Technically, if many tokens are easily predictable or predetermined no matter what quant, this will contribute to the average. So what happens if, out of the 300+ tokens of text I tested on, we specifically pick the highest reported difference in KL divergence for each respective quantization and graph that? [Now it becomes clear how big the gap can be for 'difficult' tokens!](https://preview.redd.it/8albfcbiev1c1.png?width=2110&format=png&auto=webp&s=43c7f5d22f7b61b5af100ff48251c49816ceb179) To make the differences less aggressive, let's take the top \~5% of the most affected by quantization tokens for each quant, and graph that out. 
https://preview.redd.it/3baou5l9mv1c1.png?width=1324&format=png&auto=webp&s=afc4ff00c6b4e14cc86f322e9ccae887bd23b91c So, if we solely compare the top 5% of tokens that were 'most affected' by quantization when doing an average (we do that to exclude the 'obvious' tokens), the scale is significantly more dramatic. I'll be updating this post with 13b soon enough. I'd also do it for 70b, but since I'm on 12GB VRAM, measuring and plotting would be extremely slow as it'd go into the pagefile. ~~is this the part where I should shill a kofi or something?~~ I hope this helps the sub understand how much quantization *really* impacts models.
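The per-token comparison described above can be sketched in a few lines. This is a toy illustration, not the post's actual pipeline: `fp16_probs` and `quant_probs` are made-up next-token distributions standing in for the fp16 reference model and a quantized model at one position in the sequence.

```python
import math

def kl_divergence(p, q, eps=1e-10):
    """KL(P || Q) in nats for two discrete next-token probability distributions.

    A small eps guards against log(0) when a model assigns zero probability."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Toy distributions over a 4-token vocabulary at a single sequence position:
# the fp16 "reference" vs. a hypothetical quantized model.
fp16_probs  = [0.70, 0.20, 0.05, 0.05]
quant_probs = [0.65, 0.24, 0.06, 0.05]

per_token_kl = kl_divergence(fp16_probs, quant_probs)
```

In the post's setup, this quantity would be computed at every position of the ~350-token sequence, then averaged (or max-ed, or averaged over the worst 5%) across positions.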
2023-11-22T10:53:39
https://www.reddit.com/r/LocalLLaMA/comments/1816h1x/how_much_does_quantization_actually_impact_models/
kindacognizant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1816h1x
false
null
t3_1816h1x
/r/LocalLLaMA/comments/1816h1x/how_much_does_quantization_actually_impact_models/
false
false
https://a.thumbs.redditm…bvD5RJRcUaM8.jpg
184
null
Local backend optimization settings koboldcpp
1
What model format and kobold.cpp settings will net me the fastest performance if I have an RTX 4080, 32GB RAM, and a 13700K? In terms of whether I should load the model into VRAM, system RAM, or a mix of both, how would I go about it in the settings?
2023-11-22T10:51:44
https://www.reddit.com/r/LocalLLaMA/comments/1816fzh/local_backend_optimization_settings_koboldcpp/
Mobile-Bandicoot-553
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1816fzh
false
null
t3_1816fzh
/r/LocalLLaMA/comments/1816fzh/local_backend_optimization_settings_koboldcpp/
false
false
self
1
null
The sibling problem - solved by inner dialog?
4
Hi, I think you all know the sibling problem that none of the local LMs can solve directly. I just tried an approach where the model has an internal dialogue about the problem, and it managed to solve it. I used the OpenHermes2.5 Mistral model, q5_k_m.
2023-11-22T10:38:23
https://www.reddit.com/r/LocalLLaMA/comments/18168xt/the_sibbling_problem_solved_by_inner_dialog/
Traditional-Plate642
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18168xt
false
null
t3_18168xt
/r/LocalLLaMA/comments/18168xt/the_sibbling_problem_solved_by_inner_dialog/
false
false
self
4
null
The creator of MythoMax presents: MythoMist-7B!
68
Gryphe, creator of MythoMax, basically merged the best Mistral models together. This should be a really fantastic model! [https://huggingface.co/Gryphe/MythoMist-7b](https://huggingface.co/Gryphe/MythoMist-7b) (links to quantized models by TheBloke can be found there)
2023-11-22T10:38:15
https://www.reddit.com/r/LocalLLaMA/comments/18168ug/the_creator_of_mythomax_presents_mythomix7b/
dampflokfreund
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18168ug
false
null
t3_18168ug
/r/LocalLLaMA/comments/18168ug/the_creator_of_mythomax_presents_mythomix7b/
false
false
self
68
{'enabled': False, 'images': [{'id': 'pyiYAScjuwRclcP2CHgf3lOyi04ZZSpia9-kNlBsFzo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LIYZ6VHdS7V_l08A1dqbMPZwK2TDSioH6afeybggnis.jpg?width=108&crop=smart&auto=webp&s=e1aaf34a704348a71e4db7666c562caba9a984c9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LIYZ6VHdS7V_l08A1dqbMPZwK2TDSioH6afeybggnis.jpg?width=216&crop=smart&auto=webp&s=8109589ab8f2a2e9bb70650082e9c8875fdc535b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LIYZ6VHdS7V_l08A1dqbMPZwK2TDSioH6afeybggnis.jpg?width=320&crop=smart&auto=webp&s=92829f6578898fbbf4a40df975a780fc83c92571', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LIYZ6VHdS7V_l08A1dqbMPZwK2TDSioH6afeybggnis.jpg?width=640&crop=smart&auto=webp&s=603b01d17c7b5800e614d7f91d6c7fbf50aa0608', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LIYZ6VHdS7V_l08A1dqbMPZwK2TDSioH6afeybggnis.jpg?width=960&crop=smart&auto=webp&s=7ba075ebf4700b68ea8512934a1e50fe46f28826', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LIYZ6VHdS7V_l08A1dqbMPZwK2TDSioH6afeybggnis.jpg?width=1080&crop=smart&auto=webp&s=7eedc0b3b9ff6e23c53478dfb184c2881fee62ca', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LIYZ6VHdS7V_l08A1dqbMPZwK2TDSioH6afeybggnis.jpg?auto=webp&s=ecbba0b2467b6831690137738712af9e2a13404b', 'width': 1200}, 'variants': {}}]}
The sibling problem - solved by inner dialog?
1
[removed]
2023-11-22T10:08:24
https://www.reddit.com/r/LocalLLaMA/comments/1815u8t/the_sibbling_problem_solved_by_inner_dialog/
One-Magician-6270
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1815u8t
false
null
t3_1815u8t
/r/LocalLLaMA/comments/1815u8t/the_sibbling_problem_solved_by_inner_dialog/
false
false
self
1
null
Best LLM that can be hosted on a CPU?
1
What's the closest we can get to GPT-4 quality without a GPU?
2023-11-22T10:05:30
https://www.reddit.com/r/LocalLLaMA/comments/1815ssj/best_llm_that_can_be_hosted_on_a_cpu/
Alternative_Card_989
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1815ssj
false
null
t3_1815ssj
/r/LocalLLaMA/comments/1815ssj/best_llm_that_can_be_hosted_on_a_cpu/
false
false
self
1
null
Access Ollama server from another machine connected to same network (MacOS)
1
Hello, I am trying to access the Ollama server from another machine connected to the same network. What I have tried is replacing localhost with the IP address obtained by running ifconfig in the terminal in the curl command: curl http://localhost:11434/api/generate -d '{ "model": "llama2", "prompt":"Why is the sky blue?" }' This leads to the error "Couldn't connect to server". Is there a way to achieve this? Thank you in advance for your reply! Cheers, Eric
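For reference, the cross-machine request above can be sketched with Python's standard library. The LAN IP `192.168.1.42` is a placeholder for whatever ifconfig reports; note also that by default Ollama binds to the loopback interface only, so the server side must be configured to listen on the LAN (e.g. via the `OLLAMA_HOST` environment variable) before any remote request can connect.

```python
import json
from urllib import request

def build_generate_request(host, model, prompt, port=11434):
    # Same endpoint as the curl command in the post, aimed at a LAN address
    # instead of localhost.
    url = f"http://{host}:{port}/api/generate"
    body = json.dumps({"model": model, "prompt": prompt}).encode()
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"})

# Placeholder LAN IP -- replace with the address reported by ifconfig.
req = build_generate_request("192.168.1.42", "llama2", "Why is the sky blue?")
# To actually send it: request.urlopen(req)
```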
2023-11-22T10:01:31
https://www.reddit.com/r/LocalLLaMA/comments/1815qq0/access_ollama_server_from_another_machine/
Erimay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1815qq0
false
null
t3_1815qq0
/r/LocalLLaMA/comments/1815qq0/access_ollama_server_from_another_machine/
false
false
self
1
null
Exponentially Faster Language Modelling: 40-78x Faster Feedforward for NLU thanks to FFFs
176
2023-11-22T09:34:12
https://arxiv.org/abs/2311.10770
lexected
arxiv.org
1970-01-01T00:00:00
0
{}
1815czk
false
null
t3_1815czk
/r/LocalLLaMA/comments/1815czk/exponentially_faster_language_modelling_4078x/
false
false
https://b.thumbs.redditm…r7X0OEOXHMEw.jpg
176
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}
I made a video as requested.
1
2023-11-22T09:32:36
https://v.redd.it/haai1fa2cv1c1
hibbity
v.redd.it
1970-01-01T00:00:00
0
{}
1815c81
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/haai1fa2cv1c1/DASHPlaylist.mpd?a=1703237569%2CMzBlMzI5NjU2NGNlYjk4MTgwYThkMGYwNWM2Y2IzMDRmNjc0Yjg5ZmFmZDdiMTlmOThlNWFhMmI3NTEwMjkzNA%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/haai1fa2cv1c1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/haai1fa2cv1c1/HLSPlaylist.m3u8?a=1703237569%2CYWZiYmYxMTczZjM1NmIzOTYxMTU4YWMxYmUxZGUxYWIwZTcwYTRiOGU1ZGQ1ZTUyOTlkOTI5MzM4MDRmMmZhYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/haai1fa2cv1c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1815c81
/r/LocalLLaMA/comments/1815c81/i_made_a_video_as_requested/
false
false
https://external-preview…55bd3a22272f62a2
1
{'enabled': False, 'images': [{'id': 'c2F4d2JjamRjdjFjMf3mJp_4mwjtW7p41eLiA0PAwFTS8eTZKbSD-szYjsXG', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/c2F4d2JjamRjdjFjMf3mJp_4mwjtW7p41eLiA0PAwFTS8eTZKbSD-szYjsXG.png?width=108&crop=smart&format=pjpg&auto=webp&s=d8a9c92703b1f7dd4310ed7c8e4a93f236ea9bc1', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/c2F4d2JjamRjdjFjMf3mJp_4mwjtW7p41eLiA0PAwFTS8eTZKbSD-szYjsXG.png?width=216&crop=smart&format=pjpg&auto=webp&s=92b3ba7374004f07a0dd8d1ac482fd4afd6773fb', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/c2F4d2JjamRjdjFjMf3mJp_4mwjtW7p41eLiA0PAwFTS8eTZKbSD-szYjsXG.png?width=320&crop=smart&format=pjpg&auto=webp&s=ea048f815706a598930736bff3d651ae41025160', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/c2F4d2JjamRjdjFjMf3mJp_4mwjtW7p41eLiA0PAwFTS8eTZKbSD-szYjsXG.png?width=640&crop=smart&format=pjpg&auto=webp&s=0bc5a6843733e7e2981fdd8df3549d91b6619fbf', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/c2F4d2JjamRjdjFjMf3mJp_4mwjtW7p41eLiA0PAwFTS8eTZKbSD-szYjsXG.png?width=960&crop=smart&format=pjpg&auto=webp&s=187b997ff21c3c5273ec60ed6f91f20f5d3895d2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/c2F4d2JjamRjdjFjMf3mJp_4mwjtW7p41eLiA0PAwFTS8eTZKbSD-szYjsXG.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ae82ed07f9801f9bda078ed96f4433c9038b6c00', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/c2F4d2JjamRjdjFjMf3mJp_4mwjtW7p41eLiA0PAwFTS8eTZKbSD-szYjsXG.png?format=pjpg&auto=webp&s=97766d0b0a5456f799200973667d015b9ff83219', 'width': 1280}, 'variants': {}}]}
Anybody working with low-resource languages?
5
Specifically asking for RAG applications. I'd appreciate any tips about current best practices regarding complex document retrieval in a low-resource language. Thanks.
2023-11-22T09:22:12
https://www.reddit.com/r/LocalLLaMA/comments/1815700/anybody_working_with_lowresource_languages/
coumineol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1815700
false
null
t3_1815700
/r/LocalLLaMA/comments/1815700/anybody_working_with_lowresource_languages/
false
false
self
5
null
I need 7B llama-2-Chat but I don't have enough GPU, is there a Colab version?
1
[removed]
2023-11-22T08:51:14
https://www.reddit.com/r/LocalLLaMA/comments/1814rjq/i_need_7b_llama2chat_but_i_dont_have_enough_gpu/
Substantial-Scene-85
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1814rjq
false
null
t3_1814rjq
/r/LocalLLaMA/comments/1814rjq/i_need_7b_llama2chat_but_i_dont_have_enough_gpu/
false
false
self
1
{'enabled': False, 'images': [{'id': '13mYvMTbHrtY4dgYaEcV2C1QHE7RrK6uJ6EZAZ9BcDM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z_6KO9dX9ShyOMeJW6G6oS8AQh1zHmzYEZsJXjius00.jpg?width=108&crop=smart&auto=webp&s=4bc2398f900934f6f98befb13469a5914057a751', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Z_6KO9dX9ShyOMeJW6G6oS8AQh1zHmzYEZsJXjius00.jpg?width=216&crop=smart&auto=webp&s=0bbce453964ca74fb651d49ac664fa870ae02b1e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Z_6KO9dX9ShyOMeJW6G6oS8AQh1zHmzYEZsJXjius00.jpg?width=320&crop=smart&auto=webp&s=a3f75304c0ec88268823bb6455be06e183de4a5e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Z_6KO9dX9ShyOMeJW6G6oS8AQh1zHmzYEZsJXjius00.jpg?width=640&crop=smart&auto=webp&s=8572ed40d1bebc7ec0f1cec7a1f6b129808d7e43', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Z_6KO9dX9ShyOMeJW6G6oS8AQh1zHmzYEZsJXjius00.jpg?width=960&crop=smart&auto=webp&s=8166428ca4967288822e6f46b4eb148bd679377c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Z_6KO9dX9ShyOMeJW6G6oS8AQh1zHmzYEZsJXjius00.jpg?width=1080&crop=smart&auto=webp&s=ca07f957c29d8415d419a6133ca86f1d7712aff7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Z_6KO9dX9ShyOMeJW6G6oS8AQh1zHmzYEZsJXjius00.jpg?auto=webp&s=7c92df14374011b76027d7f02b8d10f8c4dca145', 'width': 1200}, 'variants': {}}]}
Do GGUF models not take all the VRAM needed when loaded?
1
I'm still learning, but I noticed that if I load a normal LLM like [https://huggingface.co/teknium/OpenHermes-2-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2-Mistral-7B), it takes all the available VRAM (I have a 3080 10GB). But when I load a quantized model like [https://huggingface.co/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF](https://huggingface.co/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF), it takes almost none of the VRAM, maybe 1GB? Is this normal behavior?
2023-11-22T08:22:17
https://www.reddit.com/r/LocalLLaMA/comments/1814de8/do_gguf_not_take_all_the_vram_needed_when_loaded/
bullerwins
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1814de8
false
null
t3_1814de8
/r/LocalLLaMA/comments/1814de8/do_gguf_not_take_all_the_vram_needed_when_loaded/
false
false
self
1
{'enabled': False, 'images': [{'id': '0qzUB07On6JaZMkOGDN84IgQJ4CHVDUUVtbr0vdzvn4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Yr6HWGJaUFKzx38f7NY_Td-hKPIofuh6_Idfb0HbweI.jpg?width=108&crop=smart&auto=webp&s=aa71ef9cbe02446c6d7c50353800096b83a2a7f8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Yr6HWGJaUFKzx38f7NY_Td-hKPIofuh6_Idfb0HbweI.jpg?width=216&crop=smart&auto=webp&s=4bba5f636cdd71069b0aaf16275bdb6cf5885503', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Yr6HWGJaUFKzx38f7NY_Td-hKPIofuh6_Idfb0HbweI.jpg?width=320&crop=smart&auto=webp&s=8ff31c09985fa67ae44e08e78afd4e22912a800c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Yr6HWGJaUFKzx38f7NY_Td-hKPIofuh6_Idfb0HbweI.jpg?width=640&crop=smart&auto=webp&s=595bbd4d1328350496ff210fd3c7184e95dd148b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Yr6HWGJaUFKzx38f7NY_Td-hKPIofuh6_Idfb0HbweI.jpg?width=960&crop=smart&auto=webp&s=97b66c31a8ff3cf284cb186307cebf9765775340', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Yr6HWGJaUFKzx38f7NY_Td-hKPIofuh6_Idfb0HbweI.jpg?width=1080&crop=smart&auto=webp&s=00c01fd854844b708c38fd7a04d5ed3c65db0e0b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Yr6HWGJaUFKzx38f7NY_Td-hKPIofuh6_Idfb0HbweI.jpg?auto=webp&s=70eda1ccef1b843da6593acb65534e48e86a18a1', 'width': 1200}, 'variants': {}}]}
Interested to run a local LLaMA with RTX 3060 12G
2
Simple question: is it worth stopping using ChatGPT (not Plus) in order to use a local LLaMA with my current graphics card?
2023-11-22T08:05:40
https://www.reddit.com/r/LocalLLaMA/comments/18144u6/interest_so_run_a_local_llama_with_rtx_3060_12g/
Daitan_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18144u6
false
null
t3_18144u6
/r/LocalLLaMA/comments/18144u6/interest_so_run_a_local_llama_with_rtx_3060_12g/
false
false
self
2
null
How important is pci-e speed/gen/lanes when doing inference?
10
I'm trying to assess whether or not to try running a second GPU in my second full-length slot. My mobo manual reports the fastest the second slot can go is PCIe 2.0 at x4 lanes. A paltry 2GB/s, correct? Can anyone comment from personal experience?
2023-11-22T07:46:53
https://www.reddit.com/r/LocalLLaMA/comments/1813uxf/how_important_is_pcie_speedgenlanes_when_doing/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1813uxf
false
null
t3_1813uxf
/r/LocalLLaMA/comments/1813uxf/how_important_is_pcie_speedgenlanes_when_doing/
false
false
self
10
null
Has Anyone Successfully Utilized the Neural Networks API on Android for LLMS with EdgeTPU?
5
Hey everyone, I've been exploring running large language models (LLMs) on Android devices and came across the [Neural Networks API](https://developer.android.com/ndk/guides/neuralnetworks), which seems promising for leveraging neural-silicon ASICs. I own a Google Pixel 7, which features an EdgeTPU integrated into the chip. I'm curious if anyone here has experience or success stories using this API for AI inference, particularly in comparison to traditional CPU performance. My main interest lies in understanding the practical performance gains when using the EdgeTPU for AI tasks. Does it significantly outperform the CPU in terms of inference speed or efficiency? I'm especially keen to hear from those who have experimented with similar setups or have insights into optimizing LLMs on Android devices using this technology. Thanks in advance for your insights and experiences!
2023-11-22T07:41:27
https://www.reddit.com/r/LocalLLaMA/comments/1813s70/has_anyone_successfully_utilized_the_neural/
dewijones92
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1813s70
false
null
t3_1813s70
/r/LocalLLaMA/comments/1813s70/has_anyone_successfully_utilized_the_neural/
false
false
self
5
{'enabled': False, 'images': [{'id': '42vxviEj_6H1K1N26XHeF6HjIOmKs06u82fF3JCaVc8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cMo7WYlu0TNMT-uQB8a_R2DDut0uDZyo8B39hcOjBFw.jpg?width=108&crop=smart&auto=webp&s=4b1e43d7aa7b63eeb1da1bf3eebbe63343476976', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cMo7WYlu0TNMT-uQB8a_R2DDut0uDZyo8B39hcOjBFw.jpg?width=216&crop=smart&auto=webp&s=4b0e523f61d6dcafc06312d458e702eafcdeaf91', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cMo7WYlu0TNMT-uQB8a_R2DDut0uDZyo8B39hcOjBFw.jpg?width=320&crop=smart&auto=webp&s=54368260e3b617b47bfe4bcbb61fa5d76038857a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cMo7WYlu0TNMT-uQB8a_R2DDut0uDZyo8B39hcOjBFw.jpg?width=640&crop=smart&auto=webp&s=2d180f4aa12151a0f6e629e2437545f0340b128e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cMo7WYlu0TNMT-uQB8a_R2DDut0uDZyo8B39hcOjBFw.jpg?width=960&crop=smart&auto=webp&s=b9fe7eb92519b79cb4e46389801b550a0f07af80', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cMo7WYlu0TNMT-uQB8a_R2DDut0uDZyo8B39hcOjBFw.jpg?width=1080&crop=smart&auto=webp&s=4f492b0490284696d511f610d3558a99846943be', 'width': 1080}], 'source': {'height': 676, 'url': 'https://external-preview.redd.it/cMo7WYlu0TNMT-uQB8a_R2DDut0uDZyo8B39hcOjBFw.jpg?auto=webp&s=2599e955ebe982fa124437a181527d92c03786de', 'width': 1201}, 'variants': {}}]}
I have a 3080, what would be the best upgrade?
1
I have a used 3090 on the way that I got for 650€ shipped. Can I use the 3090 and the 3080 at the same time? I've seen many 2x3090s here but not 3080+3090. I have a Gigabyte X570 Elite motherboard, so two x16 slots, though the second one runs at PCIe 3.0 I believe. I also have an AMD 5950X and 32GB RAM at 3200MHz. Would selling the 3080 and getting another 3090 be best? Or getting more RAM? My plan is to run the best possible local LLM; I'm really impressed with some results matching GPT-4 level.
2023-11-22T07:10:05
https://www.reddit.com/r/LocalLLaMA/comments/1813cdb/i_have_a_3080_what_would_be_the_best_upgrade/
bullerwins
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1813cdb
false
null
t3_1813cdb
/r/LocalLLaMA/comments/1813cdb/i_have_a_3080_what_would_be_the_best_upgrade/
false
false
self
1
null
Altman to Return as OpenAI CEO With Taylor, Summers on New Board
5
2023-11-22T06:55:01
https://www.bloomberg.com/news/articles/2023-11-22/sam-altman-to-return-as-openai-ceo-with-a-new-board?srnd=premium
super-helper
bloomberg.com
1970-01-01T00:00:00
0
{}
18134op
false
null
t3_18134op
/r/LocalLLaMA/comments/18134op/altman_to_return_as_openai_ceo_with_taylor/
false
false
https://a.thumbs.redditm…DqHKsHb-ZR34.jpg
5
{'enabled': False, 'images': [{'id': 'YW-7ojF5RxNDSmOqlL243Wh4lYJv5jiFXn40CZex7k0', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/QNPQsyrzpuRjjPmuRouI2nsZHyV4EWolpsX5z6PaWw8.jpg?width=108&crop=smart&auto=webp&s=a30d69ec2c34013f94b6a8fe858dc1f87a7a6abe', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/QNPQsyrzpuRjjPmuRouI2nsZHyV4EWolpsX5z6PaWw8.jpg?width=216&crop=smart&auto=webp&s=58fc52585ea7e65b0280d3d549a0fbdf5a6fb983', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/QNPQsyrzpuRjjPmuRouI2nsZHyV4EWolpsX5z6PaWw8.jpg?width=320&crop=smart&auto=webp&s=b2cf7fc70ab7da16359411ccd4bc2e3c7de4e727', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/QNPQsyrzpuRjjPmuRouI2nsZHyV4EWolpsX5z6PaWw8.jpg?width=640&crop=smart&auto=webp&s=b1362bfcb30aed5669aa487a0b42bb712fa17e98', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/QNPQsyrzpuRjjPmuRouI2nsZHyV4EWolpsX5z6PaWw8.jpg?width=960&crop=smart&auto=webp&s=34e9a60b97c2ceae7ef28e1cd901a7800e64a630', 'width': 960}, {'height': 719, 'url': 'https://external-preview.redd.it/QNPQsyrzpuRjjPmuRouI2nsZHyV4EWolpsX5z6PaWw8.jpg?width=1080&crop=smart&auto=webp&s=a95a6503bd71cd5b0cf3b24bb60c8d2437c72f09', 'width': 1080}], 'source': {'height': 799, 'url': 'https://external-preview.redd.it/QNPQsyrzpuRjjPmuRouI2nsZHyV4EWolpsX5z6PaWw8.jpg?auto=webp&s=aaee513179fe040f877cb3f23b58b133d98d98eb', 'width': 1200}, 'variants': {}}]}
Ollama docker container on 16 core system.. no gpu.. slow.. help
1
Just new to all this, literally started yesterday reading up on things, LM Studio on my Mac... I figured I have my old 16-core Threadripper system (Gen 1) not in use. Just a Ubuntu 22.04 server OS; the GPU is an old-school cheap one with HDMI out, not a gaming GPU. I figured the 16 cores would provide enough power for single queries to be fairly fast. NVMe drive as well. But, and maybe it's because I am running this in docker containers, it's insanely slow. The llama 7b model takes about 2 minutes to answer a simple coding question (make a hello world app). I figured I have 64GB RAM, let's try the very fast DeepSeek Coder one. I loaded up the 33b because I have 64GB RAM. Well, it took about 3 minutes to start responding, and then it's typing like one letter every 2 seconds. So, is CPU-only really this slow? Does ollama not use multiple cores to speed things up? I am looking to interact with the AI for code questions and ideally want the temperature at 0 or 0.05 as I want answers as accurate as possible. I was hoping to be able to use the larger model since I have the RAM. Assuming the CPU really is this bad and you really need a GPU, I do have a Radeon XFX 6700XT GPU I can use. Or are there settings maybe to give docker containers more RAM and CPUs that I didn't set?
2023-11-22T06:52:21
https://www.reddit.com/r/LocalLLaMA/comments/18133av/ollama_docker_container_on_16_core_system_no_gpu/
Dry-Vermicelli-682
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18133av
false
null
t3_18133av
/r/LocalLLaMA/comments/18133av/ollama_docker_container_on_16_core_system_no_gpu/
false
false
self
1
null
OpenAI brings Sam Altman back as CEO
189
2023-11-22T06:19:11
https://www.cnbc.com/2023/11/22/openai-brings-sam-altman-back-as-ceo-days-after-ouster.html
fallingdowndizzyvr
cnbc.com
1970-01-01T00:00:00
0
{}
1812l45
false
null
t3_1812l45
/r/LocalLLaMA/comments/1812l45/openai_brings_sam_altman_back_as_ceo/
false
false
https://b.thumbs.redditm…-2AIOAZTmgps.jpg
189
Any interest in C#/.NET for serving LLMs?
1
[K024/llm-sharp: Language models in C# (github.com)](https://github.com/K024/llm-sharp) I've recently drafted this, but adding more models, features, tests, and documentation will just cost too much time. Seeking comments and collaborators.
2023-11-22T06:17:53
https://www.reddit.com/r/LocalLLaMA/comments/1812kek/any_interest_in_cnet_for_serving_llms/
Key-Comparison3261
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1812kek
false
null
t3_1812kek
/r/LocalLLaMA/comments/1812kek/any_interest_in_cnet_for_serving_llms/
false
false
self
1
Why didn't gpt4 work at first and how did they "fix it"?
120
According to this [tweet](https://x.com/0interestrates/status/1725745633003475102?s=20), >when gpt4 first finished training it didn’t actually work very well and the whole team thought it’s over, scaling is dead…until greg went into a cave for weeks and somehow magically made it work So GPT-4 was apparently kind of broken at first; then Greg spent a few weeks trying to fix it, and it somehow worked. So why did it not work at first, and how did they fix it? I think this is an important question for the OSS community.
2023-11-22T05:57:49
https://www.reddit.com/r/LocalLLaMA/comments/18128r4/why_didnt_gpt4_work_at_first_and_how_did_they_fix/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18128r4
false
null
t3_18128r4
/r/LocalLLaMA/comments/18128r4/why_didnt_gpt4_work_at_first_and_how_did_they_fix/
false
false
self
120
I found this on github. It works pretty good! Lots of good info in the "settings" too.
1
2023-11-22T05:45:58
https://github.com/aseichter2007/ClipboardConqueror
hibbity
github.com
1970-01-01T00:00:00
0
{}
18121vk
false
null
t3_18121vk
/r/LocalLLaMA/comments/18121vk/i_found_this_on_github_it_works_pretty_good_lots/
false
false
default
1
Multi GPU vs Single GPU with same VRAM
10
It seems a fair bit cheaper to hit a certain VRAM target with a multi-GPU setup than with a single GPU. I'm wondering if there are any *recent* experiments or blog posts I can read to learn more about which GPUs would be best to get in the Black Friday sales coming up.
2023-11-22T05:07:49
https://www.reddit.com/r/LocalLLaMA/comments/1811fk4/multi_gpu_vs_single_gpu_with_same_vram/
-Lousy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1811fk4
false
null
t3_1811fk4
/r/LocalLLaMA/comments/1811fk4/multi_gpu_vs_single_gpu_with_same_vram/
false
false
self
10
null
Is there any resource or website where I can track the ranking of my favorite models vs today's? There are just so many models it's hard to stay up to date. For example, I want to benchmark GPT-3.5-turbo-0301. I feel this was the last of their models to be uncensored, and it will eventually be deprecated.
1
[removed]
2023-11-22T04:55:13
https://www.reddit.com/r/LocalLLaMA/comments/18117tz/is_there_any_resource_or_website_where_i_can/
sweetsunnyside
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18117tz
false
null
t3_18117tz
/r/LocalLLaMA/comments/18117tz/is_there_any_resource_or_website_where_i_can/
false
false
self
1
null
Amica: open source interface for interactive communication with 3D characters with voice synthesis, speech recognition, visual understanding, and an emotion system
1
2023-11-22T03:26:55
https://twitter.com/arbius_ai/status/1727135805041209770
arbiusai
twitter.com
1970-01-01T00:00:00
0
{}
180zoly
false
t3_180zoly
/r/LocalLLaMA/comments/180zoly/amica_open_source_interface_for_interactive/
false
false
default
1
First version of C++ port of GGML for llama.cpp
1
[removed]
2023-11-22T02:27:01
https://www.reddit.com/r/LocalLLaMA/comments/180yirg/first_version_of_c_port_of_ggml_for_llamacpp/
introsp3ctor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
180yirg
false
null
t3_180yirg
/r/LocalLLaMA/comments/180yirg/first_version_of_c_port_of_ggml_for_llamacpp/
false
false
self
1
How would you create a PII scanner over a codebase?
1
Hello friends, I am an experienced data scientist, mainly on tabular data, who just acquired a new task at my job: create a PII scanner over a codebase. Not detecting the PII itself, but detecting things like sending PII via HTTP, direct database operations, and logging user information. Since I have no experience in the LLM field and intermediate experience in NLP, I would like some ideas on how you would approach this. I've read online about RAG over the codebase and am thinking about applying something like that. The other challenge I have is to provide context for every PII use identified, meaning I will need a string stating what the code is doing with the PII. Thanks in advance.
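One cheap baseline before reaching for an LLM is a regex pass that flags PII-risk sinks (logging, HTTP calls, DB operations) on lines that also mention PII-like identifiers; a RAG/LLM step can then be reserved for explaining the flagged hits. A minimal sketch along those lines, where the field names and sink patterns are illustrative assumptions that would need tuning to the real codebase:

```python
import re

# Identifiers that suggest personal data (assumed list; extend per codebase).
PII_FIELDS = re.compile(r"\b(email|ssn|phone|address|dob|full_?name)\b", re.I)

# Risky sinks: where PII leaving the program would matter.
SINKS = {
    "logging": re.compile(r"\b(log(ger)?\.(info|debug|warning|error)|print)\s*\("),
    "http": re.compile(r"\b(requests\.(get|post)|urlopen|fetch)\s*\("),
    "database": re.compile(r"\b(execute|executemany)\s*\("),
}

def scan(source: str):
    """Return (line_no, sink_kind, line) for lines passing PII to a risky sink."""
    findings = []
    for no, line in enumerate(source.splitlines(), 1):
        if not PII_FIELDS.search(line):
            continue
        for kind, pat in SINKS.items():
            if pat.search(line):
                findings.append((no, kind, line.strip()))
    return findings

sample = 'logger.info("user email=%s", user.email)\nrequests.post(url, json={"ssn": ssn})\n'
for finding in scan(sample):
    print(finding)
```

Each finding's line text can then be sent to an LLM with a prompt like "describe what this code does with the PII" to produce the required context string.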
2023-11-22T01:28:51
https://www.reddit.com/r/LocalLLaMA/comments/180xd69/how_would_you_create_a_pii_scanner_over_a_codebase/
dick_veganas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
180xd69
false
null
t3_180xd69
/r/LocalLLaMA/comments/180xd69/how_would_you_create_a_pii_scanner_over_a_codebase/
false
false
self
1
null
Silence Model Loading Output
1
I am using LlamaCpp from LangChain to run the model locally. I'm finding all of the output printed when the model loads to be a major distraction: all the lines that start with llama_model_loader and so forth. Is there a way to silence all of that? Setting verbose to False did not do anything, and I weirdly can't find anything describing how to do this by searching.
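One workaround that reportedly helps here: llama.cpp prints its loader banner from C code straight to file descriptor 2, which Python-level tricks like verbose=False or redirecting sys.stderr can't intercept, so the OS-level descriptor itself has to be swapped around model construction. A sketch (the LlamaCpp call in the comment is the assumed usage, wrap your own constructor there):

```python
import contextlib
import os

@contextlib.contextmanager
def quiet_native_stderr(to_path: str = os.devnull):
    """Temporarily point OS-level fd 2 at `to_path`, restoring it afterwards."""
    saved = os.dup(2)                      # keep a handle to the real stderr
    try:
        with open(to_path, "w") as sink:
            os.dup2(sink.fileno(), 2)      # C-level writes now go to the sink
            yield
    finally:
        os.dup2(saved, 2)                  # restore stderr
        os.close(saved)

# Assumed usage:
# with quiet_native_stderr():
#     llm = LlamaCpp(model_path="model.gguf")  # loader spam goes to /dev/null
```

Note this silences everything written to stderr inside the block, including real errors, so it's best kept tight around the model load.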
2023-11-22T01:20:39
https://www.reddit.com/r/LocalLLaMA/comments/180x7fa/silence_model_loading_output/
anonymous_pro_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
180x7fa
false
null
t3_180x7fa
/r/LocalLLaMA/comments/180x7fa/silence_model_loading_output/
false
false
self
1
null
Running Mistral 7B/ Llama 2 13B on AWS Lambda using llama.cpp
1
[removed]
2023-11-22T01:11:28
https://www.reddit.com/r/LocalLLaMA/comments/180x0yf/running_mistral_7b_llama_2_13b_on_aws_lambda/
Allergic2Humans
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
180x0yf
false
null
t3_180x0yf
/r/LocalLLaMA/comments/180x0yf/running_mistral_7b_llama_2_13b_on_aws_lambda/
false
false
self
1
Fine Tuning from Datasets on Cloud
3
I currently have a script to parse data from the old Reddit API archives and sort it vaguely into q/a format. Here's a snippet from a dataset: >user: Yeah, you laugh now. Wait until you encounter code like that in the wild. > >assistant: It's like hearing about the platypus in school and laughing, then going on a school trip and finding out that it bites and has a venomous stinger. > >user: Here's one: > >You look like one well-written subroutine. Can I call you sometime? > >assistant: I hope you're parameterless though, I'd hate for us to have any arguments. It seems like settings in Oobabooga can be changed to accept this format, but the problem is that my PC is not powerful enough to train on the dataset locally (1060 6GB, 16GB RAM). This is my first time attempting this, so I'm trying to find which cloud service can accept my dataset and how to properly configure the training settings for Mistral 7B on the cloud. Even better if I can do it on top of an already fine-tuned Mistral like Dolphin. Sorry for the newbie questions, but I'd appreciate any help.
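For what it's worth, most cloud fine-tuning stacks take a JSON-lines file of structured conversations rather than raw user:/assistant: text, so a small conversion step like this sketch may help. The "messages" schema here follows the common OpenAI-style convention; it's an assumption, and the actual trainer may expect a different layout:

```python
import json

def pairs_to_jsonl(pairs, system="You are a helpful assistant."):
    """Turn [(user_text, assistant_text), ...] into chat-format JSONL lines."""
    lines = []
    for user_text, assistant_text in pairs:
        record = {"messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user_text},
            {"role": "assistant", "content": assistant_text},
        ]}
        lines.append(json.dumps(record, ensure_ascii=False))
    return "\n".join(lines)

data = [("Yeah, you laugh now. Wait until you encounter code like that in the wild.",
         "It's like hearing about the platypus in school and laughing...")]
print(pairs_to_jsonl(data))
```

Writing the returned string to a `.jsonl` file gives something most hosted trainers can ingest directly.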
2023-11-22T01:11:02
https://www.reddit.com/r/LocalLLaMA/comments/180x0mv/fine_tuning_from_datasets_on_cloud/
51869101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
180x0mv
false
null
t3_180x0mv
/r/LocalLLaMA/comments/180x0mv/fine_tuning_from_datasets_on_cloud/
false
false
self
3
null
Rocket 🦝 - smol model that overcomes models much larger in size
121
We're proud to introduce Rocket-3B 🦝, a state-of-the-art 3 billion parameter model! 🌌 **Size vs. Performance:** Rocket-3B may be smaller with its 3 billion parameters, but it punches way above its weight. In head-to-head benchmarks like MT-Bench and AlpacaEval, it consistently outperforms models up to 20 times larger. https://preview.redd.it/fxmz9sl1ls1c1.png?width=1273&format=png&auto=webp&s=63c3838cf4f01f7efcad9ec92b97c1e493111842 🔍 **Benchmark Breakdown:** In MT-Bench, Rocket-3B achieved an average score of 6.56, excelling in various conversation scenarios. In AlpacaEval, it notched a near 80% win rate, showcasing its ability to produce detailed and relevant responses. https://preview.redd.it/rpgaknn3ls1c1.png?width=1280&format=png&auto=webp&s=6d2d7543f1459ceae7f96ad05ea064e8f8076517 🛠️ **Training:** The model is fine-tuned from Stability AI's StableLM-3B-4e1t, employing Direct Preference Optimization (DPO) for enhanced performance. 📚 **Training Data:** We've amalgamated multiple public datasets to ensure a comprehensive and diverse training base. This approach equips Rocket-3B with a wide-ranging understanding and response capability. 👩‍💻 **Chat format:** Rocket-3B follows the ChatML format. For an in-depth look at Rocket-3B, visit [Rocket-3B](https://huggingface.co/pansophic/rocket-3B)'s Hugging Face page
2023-11-22T00:17:58
https://www.reddit.com/r/LocalLLaMA/comments/180vx9y/rocket_smol_model_that_overcomes_models_much/
starkiller1298
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
180vx9y
false
null
t3_180vx9y
/r/LocalLLaMA/comments/180vx9y/rocket_smol_model_that_overcomes_models_much/
false
false
https://b.thumbs.redditm…-KOfGUXpvLZU.jpg
121
Tesla P40 cards - what cooling solutions work well?
5
I'm thinking of picking up a Tesla P40 as a fun project, but can't use a blower fan because of sound / a lack of appropriate fan headers. I'm wondering if anyone has info on how well these solutions might work or other solutions that work well: 1. 2-3 CPU fans ziptied to the front or back of the card. Possibly with /without the shroud 2. [A 195CFM Duct fan](https://www.amazon.ca/VIVOSUN-Inch-Inline-Ventilation-Blower/dp/B01M7S46YZ/) - with some kind of cardboard shroud. Card would be running on a riser as there's no way it would fit in the case. (have a spare lying around - also have a weaker quieter 95cfm one if that would work ) 3. A Single 120mm CPU fan but without the 3d printed adapter, in the usual location 4. 3 but with a cardboard shroud Also are there any tips on letting the gpu control fan speed itself? I've tried looking online but the few bits of info I've found are for older, weaker cards in the Tesla line up so I can't be sure if any of these would be okay for a P40.
2023-11-22T00:10:01
https://www.reddit.com/r/LocalLLaMA/comments/180vr7k/tesla_p40_cards_what_cooling_solutions_work_well/
Rich_Examination_637
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
180vr7k
false
null
t3_180vr7k
/r/LocalLLaMA/comments/180vr7k/tesla_p40_cards_what_cooling_solutions_work_well/
false
false
self
5
null
Introducing Rocket 🦝 - smol model that overcomes models much larger in size
1
[removed]
2023-11-22T00:08:11
https://www.reddit.com/r/LocalLLaMA/comments/180vptb/introducing_rocket_smol_model_that_overcomes/
pansophicAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
180vptb
false
null
t3_180vptb
/r/LocalLLaMA/comments/180vptb/introducing_rocket_smol_model_that_overcomes/
false
false
https://b.thumbs.redditm…-KOfGUXpvLZU.jpg
1
noobq - cloud-based GPU + chatbot advice
3
I've dipped my toes in the local LLM pool over the past few months and it's been an incredible experience. I'm running a 3060 with ooba, sometimes with Silly Tavern as the front end, mostly 7b's and a few 13b's. However, my setup just isn't strong enough to run a truly consistently smart chatbot and I'm wondering about using a cloud-based GPU. My hope is to run a chatbot connected to a fairly detailed knowledge base and an uncensored LLM on a cloud GPU. Aaand, hopefully, be able to hook it up to Telegram or other social media platform. Does anyone know if I'm way off-base in thinking I can run an uncensored LLM on a service like Run Pod, link it to a knowledge base and a web-based chatbot or ooba, and possibly Telegram? Thank you, any advice is appreciated!
2023-11-21T23:48:54
https://www.reddit.com/r/LocalLLaMA/comments/180vagi/noobq_cloudbased_gpu_chatbot_advice/
motivcreative
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
180vagi
false
null
t3_180vagi
/r/LocalLLaMA/comments/180vagi/noobq_cloudbased_gpu_chatbot_advice/
false
false
self
3
null
Today is the first day I’m getting results comparable to GPT4 on OpenSource LLM workflows.
275
Yes, this is anecdotal, but I've been a heavy user of the OpenAI API and paid for GPT Pro before it was cool. A few weeks ago I tested a workflow that sends the same prompt to two instances of the same LLM with different parameters. Today I set up a basic workflow to provision two different LLMs concurrently and have them validate and improve each other's responses. The results are very impressive: they challenge each other more and seem to output results on par with the quality and depth of GPT-4. On the left is the new XwinCoder and on the right is Tess 200k, both 34B models at Q8 quants, running on an M2 MacBook Pro with 64GB. I have been sending it prompts all day, and the OpenAI moat is over; the only thing limiting us at this point is personal compute capacity. I would like to conduct more objective testing. Is there a source for prompts most LLMs fail? How can I really put this through its paces? Any riddles or problems that are known to give LLMs trouble? I will be scaling this workflow to use QLoRA adapters as well, and I began tinkering with fine-tuning as of last night (successfully). I intend to dynamically swap the models at runtime depending on the workflow. This will all run multithreaded over WebSocket, so I am trying to keep things from waiting on each other as much as possible. So, what is your go-to prompt to prove the service that wraps an LLM is good enough?
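The two-model loop described above can be reduced to a small driver that stays agnostic to the backends: each model answers, then critiques and revises against the other's answer for a fixed number of rounds. A sketch with pluggable generate functions; the prompt templates and function names here are assumptions for illustration, not the OP's actual workflow:

```python
def cross_validate(prompt, gen_a, gen_b, rounds=1):
    """gen_a / gen_b: callables str -> str, each backed by a different LLM."""
    answer_a, answer_b = gen_a(prompt), gen_b(prompt)
    for _ in range(rounds):
        # Each model sees the rival's latest answer and revises its own.
        answer_a, answer_b = (
            gen_a(f"Task: {prompt}\nA rival model answered:\n{answer_b}\n"
                  "Point out any errors, then give your improved answer."),
            gen_b(f"Task: {prompt}\nA rival model answered:\n{answer_a}\n"
                  "Point out any errors, then give your improved answer."),
        )
    return answer_a, answer_b

# Stub backends just to show the control flow; real ones would call two
# local inference servers.
a, b = cross_validate("What is 2+2?",
                      lambda p: f"A saw {len(p)} chars",
                      lambda p: f"B saw {len(p)} chars")
```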
2023-11-21T23:34:24
https://www.reddit.com/gallery/180uz42
LocoMod
reddit.com
1970-01-01T00:00:00
0
{}
180uz42
false
null
t3_180uz42
/r/LocalLLaMA/comments/180uz42/today_is_the_first_day_im_getting_results/
false
false
https://b.thumbs.redditm…tdb0XWSKNd8o.jpg
275
null
Looking for the smallest model for JSON generation or checks
1
I wish there were a small model for JSON generation, or for checking and correcting JSON. Obviously I could use a very big model for that, but it would cost a lot, and since this is kind of for fun and NSFW purposes, that's not a good idea. If you know any model that is very good at this use case, I'll be very happy. At the same time, I want to know what the best smallest uncensored LLM would be for generating suggestions based on a prompt, for example for roleplay. And I need quick reply generation.
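Depending on the stack, a deterministic check-and-retry loop around a small model can cover much of the "check/correct JSON" use case without any big model: validate with json.loads, apply a couple of mechanical repairs, and only re-prompt the model if that still fails. A minimal sketch of the validation half, where the two repair rules are illustrative rather than exhaustive:

```python
import json
import re

def try_repair_json(text: str):
    """Return parsed JSON, fixing a few common LLM mistakes; None if unfixable."""
    candidates = [text]
    # 1) strip markdown code fences the model may have wrapped around the JSON
    fenced = re.sub(r"^```(?:json)?\s*|\s*```$", "", text.strip())
    candidates.append(fenced)
    # 2) drop trailing commas before } or ]
    candidates.append(re.sub(r",\s*([}\]])", r"\1", fenced))
    for cand in candidates:
        try:
            return json.loads(cand)
        except json.JSONDecodeError:
            continue
    return None  # hand back to the model for a retry

print(try_repair_json('```json\n{"name": "a", "tags": ["x", "y",],}\n```'))
# → {'name': 'a', 'tags': ['x', 'y']}
```

Only when this returns None does the (small) model need a second pass, which keeps the cost of the loop low.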
2023-11-21T23:33:35
https://www.reddit.com/r/LocalLLaMA/comments/180uyi9/looking_for_the_smalest_model_for_json_generation/
Regular_Instruction
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
180uyi9
false
null
t3_180uyi9
/r/LocalLLaMA/comments/180uyi9/looking_for_the_smalest_model_for_json_generation/
false
false
nsfw
1
null
GPTs Free Alternative (Kinda)
1
Recently I came across a platform called YouAI. It gives you the ability to create custom AI chatbots for consumers, SMBs, and enterprises. It's basically GPTs but with more models, like Llama 2 70B and Mistral 7B (it also supports GPT-4, but on paid plans). You can make money with it, but you'll have to be on a paid plan to charge for your service. However, there might be some use cases, like testing your workflow before going for a paid service, or a free co-pilot for personal use. [https://youai.ai/](https://youai.ai/)
2023-11-21T23:08:03
https://www.reddit.com/r/LocalLLaMA/comments/180udqo/gpts_free_alternative_kinda/
MidnightAutomatic361
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
180udqo
false
null
t3_180udqo
/r/LocalLLaMA/comments/180udqo/gpts_free_alternative_kinda/
false
false
self
1
What are the best Text-to-3D models out there right now?
1
I'm keen to see if I can integrate one directly into Unreal Engine.
2023-11-21T22:44:56
https://www.reddit.com/r/LocalLLaMA/comments/180tudu/what_are_the_best_textto3d_models_out_there_right/
Chance_Confection_37
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
180tudu
false
null
t3_180tudu
/r/LocalLLaMA/comments/180tudu/what_are_the_best_textto3d_models_out_there_right/
false
false
self
1
null
Lookahead decoding offers massive (~1.5x) speedup for inference
1
2023-11-21T22:38:55
https://lmsys.org/blog/2023-11-21-lookahead-decoding/
lans_throwaway
lmsys.org
1970-01-01T00:00:00
0
{}
180tpja
false
null
t3_180tpja
/r/LocalLLaMA/comments/180tpja/look_ahead_decoding_offers_massive_15x_speedup/
false
false
https://b.thumbs.redditm…sAnRlkWnP4fg.jpg
1
Combining LoRA and GPTQ quantization, plug
1
[removed]
2023-11-21T22:18:06
https://www.reddit.com/r/LocalLLaMA/comments/180t8g7/combining_lora_and_gptq_quantization_plug/
Gazorpazzor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
180t8g7
false
null
t3_180t8g7
/r/LocalLLaMA/comments/180t8g7/combining_lora_and_gptq_quantization_plug/
false
false
self
1
How to clean memory between model queries + best smaller model for code generation
2
Hi all! I'm doing an experiment in which I use mistral-7b-instruct to describe an algorithm for doing certain stuff, then prompt it to write Python code using said algorithm as guide rails of sorts, and then shoot the code into a Python REPL to check the results. It works reasonably well, but sometimes it produces code that raises exceptions. I pipe the exceptions back to the model and ask it to fix them, but after a couple of iterations the model OOMs. I had lucky streaks in the beginning, when the model output good-quality code that just worked, but now I see that the probability of it generating working code is quite low, about 30%. So, I have two questions: - How can I clean the memory between questions so the model does not OOM? I can theoretically get more VRAM (I could try to get an A100 80GB; right now I'm on V100s with 32GB of VRAM), but I suspect there is a better way. - Is there any better small-ish model than mistral-7b-instruct for Python code generation? I am a-okay with using more than one model in the pipeline, including a (possibly quantised) 13B model, but I really don't want to bother with bigger models, since I'm Russian and renting GPUs is crazy expensive over here (over $3 per hour for a V100!). Thanks in advance!
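Two things that often help in this situation: explicitly releasing cached CUDA blocks between queries, and capping how much of the exception/fix history is fed back in, since OOM after a few iterations usually means the growing conversation, not the weights, is eating the memory. A sketch, with the torch calls guarded so the snippet also runs CPU-only; the keep_last policy is an illustrative assumption:

```python
import gc

def free_accelerator_memory():
    """Drop dangling Python references, then release cached CUDA blocks."""
    gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    except ImportError:
        pass  # running without torch, nothing GPU-side to free

def truncate_history(messages, keep_last: int = 4):
    """Keep the original task plus only the most recent fix-it exchanges."""
    if len(messages) <= keep_last + 1:
        return messages
    return [messages[0]] + messages[-keep_last:]

history = ["task", "code v1", "traceback 1", "code v2", "traceback 2", "code v3"]
print(truncate_history(history, keep_last=3))
# → ['task', 'code v2', 'traceback 2', 'code v3']
```

Calling free_accelerator_memory() between questions and truncating before each generation keeps peak usage roughly constant instead of growing with every retry.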
2023-11-21T22:16:37
https://www.reddit.com/r/LocalLLaMA/comments/180t78g/how_to_clean_memory_between_model_queries_best/
netikas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
180t78g
false
null
t3_180t78g
/r/LocalLLaMA/comments/180t78g/how_to_clean_memory_between_model_queries_best/
false
false
self
2
null