Dataset schema (column name: type, observed range):

- title: string, length 1-300
- score: int64, 0-8.54k
- selftext: string, length 0-41.5k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
- url: string, length 0-878
- author: string, length 3-20
- domain: string, length 0-82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
- gilded: int64, 0-2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646-1.8k
- name: string, length 10
- permalink: string, length 33-82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4-213
- ups: int64, 0-8.54k
- preview: string, length 301-5.01k
Any QT solutions out there?
1
Hi guys, I was wondering if there are actually any Qt UIs for local LLMs, whether they are complete solutions with a backend or just frontends. I know a lot of Python or JavaScript/TypeScript solutions by now, but I'm surprised that I don't know of any Qt solution. Could someone point me to an appropriate link? Thanks
2023-12-27T12:47:43
https://www.reddit.com/r/LocalLLaMA/comments/18rzy4y/any_qt_solutions_out_there/
Evening_Ad6637
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rzy4y
false
null
t3_18rzy4y
/r/LocalLLaMA/comments/18rzy4y/any_qt_solutions_out_there/
false
false
self
1
null
how to tell that Mixtral was created by a French company (nsfw, funny)
1
Chat answer by mixtral-8x7b-instruct-v0.1.Q6_K.gguf: https://preview.redd.it/3qxm9xn41u8c1.jpg?width=1693&format=pjpg&auto=webp&s=322080455a90d15d85596ab6940cc9cdce59369a
2023-12-27T12:39:35
https://www.reddit.com/r/LocalLLaMA/comments/18rzt07/how_to_tell_that_mixtral_was_created_by_a_french/
hugo-the-second
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rzt07
false
null
t3_18rzt07
/r/LocalLLaMA/comments/18rzt07/how_to_tell_that_mixtral_was_created_by_a_french/
false
false
default
1
null
Alternative to LMStudio
33
Hey fellow local LLM enthusiasts, I am currently building an alternative to LM Studio as a side project. I want it to be more high-level to make it more approachable for a non-tech audience. While I have some ideas on what I personally want to add, I am interested in yours. What features do you think are missing in LM Studio? What is something you dislike about it? What do you want to see in a local LLM UI/software?
2023-12-27T12:36:56
https://www.reddit.com/r/LocalLLaMA/comments/18rzrfa/alternative_to_lmstudio/
Competitive_Loan_480
self.LocalLLaMA
2023-12-27T12:40:09
0
{}
18rzrfa
false
null
t3_18rzrfa
/r/LocalLLaMA/comments/18rzrfa/alternative_to_lmstudio/
false
false
self
33
null
Any "real" case of success of Fine Tuning Llama using quantization?
1
[removed]
2023-12-27T12:27:58
https://www.reddit.com/r/LocalLLaMA/comments/18rzly1/any_real_case_of_success_of_fine_tuning_llama/
celsowm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rzly1
false
null
t3_18rzly1
/r/LocalLLaMA/comments/18rzly1/any_real_case_of_success_of_fine_tuning_llama/
false
false
default
1
null
Could you explain splitting llm CPU + GPU?
12
When I was training my own models with torch I was using the GPU; the whole model was in VRAM. Now I'm playing with koboldcpp and I understand there is the idea of splitting everything between CPU and GPU. How does it work? I mean, some layers are in VRAM and some layers are in RAM, but are all calculations done in CUDA? Or are some calculations on the CPU and some on the GPU? Because if not everything is in VRAM, then how can CUDA do the math with data from RAM? Is it copied on demand? Where can I read about that process? (Or do I just need to read the code?) My understanding was that a neural network on CPU would be very slow, so I assumed everything is calculated by CUDA. Imagine 128GB RAM and 8GB VRAM; only a few layers would be loaded in VRAM. Is it really faster than without a GPU at all?
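The split the post asks about can be sketched numerically. This is a rough illustration with made-up layer counts and sizes (in llama.cpp/koboldcpp the actual knob is the GPU-layers setting), not the engine's real memory accounting:

```python
def plan_layer_split(n_layers, layer_bytes, vram_budget_bytes):
    """Decide how many transformer layers fit in a VRAM budget.

    Layers that fit are offloaded to the GPU; the rest stay in system
    RAM and are computed on the CPU. llama.cpp does not shuttle the
    RAM-resident weights to the GPU per token: CPU layers really run
    on the CPU, which is why a partial offload is faster than CPU-only
    but slower than a full offload.
    """
    gpu_layers = min(n_layers, vram_budget_bytes // layer_bytes)
    return {"gpu_layers": gpu_layers, "cpu_layers": n_layers - gpu_layers}

# 7B-class model: ~32 layers, ~120 MB per layer at Q4 (illustrative numbers),
# with ~2 GB of VRAM left after context/buffers.
split = plan_layer_split(n_layers=32, layer_bytes=120 * 2**20,
                         vram_budget_bytes=2 * 2**30)
print(split)  # 17 layers fit; the other 15 stay on the CPU
```

With a larger budget the whole model fits and inference runs entirely in CUDA; otherwise each token pass alternates between GPU and CPU layers.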
2023-12-27T12:10:45
https://www.reddit.com/r/LocalLLaMA/comments/18rzbor/could_you_explain_splitting_llm_cpu_gpu/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rzbor
false
null
t3_18rzbor
/r/LocalLLaMA/comments/18rzbor/could_you_explain_splitting_llm_cpu_gpu/
false
false
self
12
null
hugging_face demo settings versus local settings
2
Hey guys, so I'm a bit confused about settings and hoping to understand what I'm missing. I'm testing this model and attempting to replicate the results in LM Studio: [https://huggingface.co/spaces/elyza/ELYZA-japanese-Llama-2-13b-instruct-demo](https://huggingface.co/spaces/elyza/ELYZA-japanese-Llama-2-13b-instruct-demo)

With default settings, I feed it some random text I googled:

Please translate this to English: "休み時間にクラスの性質の悪い連中につかまってしまった。いつも俺の顔と華奢な体つきを女みたいだってからかってくるヤツらだ"

It provides a pretty good translation if I compare it to DeepL. Now I do it locally and I get various responses if I keep regenerating: sometimes it scolds me about bullying (like the censor is a bit too strict); other times it tells me it can't assist with physical actions like translation because it's just an AI model; other times it just responds to me in Japanese. Very rarely I'll get a somewhat similar response.

I set my settings the same as the Hugging Face demo but am getting wildly different responses:

* temperature: 1
* n_predict: -1
* repeat penalty: 1
* Top P: 0.95
* Top K: 50
* context length: 4096
* n_batch: 512

I am using the 13B Q6_K gguf model. If I can understand what I'm doing wrong here, perhaps some of my other results will be better on other models too.
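For reference, Top K and Top P prune the candidate list before any random draw happens, so small differences in these settings (or in the prompt template) change which tokens are even eligible. A minimal sketch of that pruning with toy probabilities (not the model's actual distribution):

```python
def filter_top_k_top_p(probs, top_k, top_p):
    """Apply top-k, then top-p (nucleus) filtering to a token distribution.

    probs: dict token -> probability. Returns the surviving candidates,
    renormalised. Sampling then draws only from this reduced set, which
    is why sampler differences between a hosted demo and a local runner
    can produce very different outputs.
    """
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cumulative += p
        if cumulative >= top_p:  # nucleus: stop once cumulative mass >= top_p
            break
    total = sum(p for _, p in kept)
    return {tok: p / total for tok, p in kept}

toy = {"Sure": 0.5, "I": 0.2, "As": 0.15, "Here": 0.1, "No": 0.05}
print(filter_top_k_top_p(toy, top_k=50, top_p=0.7))  # only "Sure" and "I" survive
```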
2023-12-27T12:07:26
https://www.reddit.com/r/LocalLLaMA/comments/18rz9ou/hugging_face_demo_settings_versus_local_settings/
Squeezitgirdle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rz9ou
false
null
t3_18rz9ou
/r/LocalLLaMA/comments/18rz9ou/hugging_face_demo_settings_versus_local_settings/
false
false
self
2
Llama 2 Instruct - 7B vs 13B
3
How good are the Llama 2 Instruct models, and how significant is the difference between 13B and 7B? For fine-tuning, would 13B have a great enough advantage over the 7B model that it's worth the extra fine-tune time? I want to fine-tune Llama 2 on the HotPotQA dataset, training it to find the relevant context for a particular question out of some wiki paragraphs. Now I'm pretty sure Llama 2 Instruct would be much better for this than Llama 2 Chat, right? Not sure whether I should use the 7B model or the 13B model though; I'm training on Kaggle's free TPUs and it's already going to take ages.
2023-12-27T11:40:16
https://www.reddit.com/r/LocalLLaMA/comments/18rytr4/llama_2_instruct_7b_vs_13b/
Aggravating-Floor-38
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rytr4
false
null
t3_18rytr4
/r/LocalLLaMA/comments/18rytr4/llama_2_instruct_7b_vs_13b/
false
false
self
3
null
More or Less? (GPU or GPU(s) for inferencing?)
3
Hi all! My first post here, and it's a fairly simple query. I'm looking to build a system for running chat models like LLaMA locally, and I'm looking to get some speed out of them. I have a dual Xeon system with plenty of ram already, but I'm looking for a solution for GPU acceleration, and within my price range (300NZD, or about 190USD) I have filtered down to two choices: Three Nvidia Tesla P4s, or a single P40. I can run either in the rig I'm assembling, but I'm looking to know which is the right way to go: Three GPUs, each with a GP104 and 8gb of VRAM totalling 24gb, or a single GP102 GPU, with 24gb. I don't plan to do much training on this system, though I may at some point, but for now this is entirely focused on inferencing, and running models locally. The price for the single P40 is roughly the same as three P4s, so I'm wondering, which route would be better, based on your experience?
2023-12-27T10:55:02
https://www.reddit.com/r/LocalLLaMA/comments/18ry4tj/more_or_less_gpu_or_gpus_for_inferencing/
PaperboyNZ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ry4tj
false
null
t3_18ry4tj
/r/LocalLLaMA/comments/18ry4tj/more_or_less_gpu_or_gpus_for_inferencing/
false
false
self
3
null
LLM taking too long to respond
3
The code below takes more than 5 minutes to answer the question. Any solution to this?

```python
llm = CTransformers(
    model="TheBloke/CodeLlama-7B-Instruct-GGUF",
    model_file="codellama-7b-instruct.Q5_K_M.gguf",
    # callbacks=[StreamingStdOutCallbackHandler()],
    config={"max_new_tokens": 4096, "context_length": 4000, "temperature": 0.01},
)
agent = create_csv_agent(
    llm,
    "August2023.csv",
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
agent.run("how many rows are there?")
```
2023-12-27T10:49:12
https://www.reddit.com/r/LocalLLaMA/comments/18ry1j4/llm_taking_too_long_time_to_respond/
East-Bug6675
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ry1j4
false
null
t3_18ry1j4
/r/LocalLLaMA/comments/18ry1j4/llm_taking_too_long_time_to_respond/
false
false
self
3
null
The Most Exciting AI Advancements and Product Launches in 2023
1
[removed]
2023-12-27T10:45:20
https://www.reddit.com/r/LocalLLaMA/comments/18rxzdj/the_most_exciting_ai_advancements_and_product/
DataLearnerAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rxzdj
false
null
t3_18rxzdj
/r/LocalLLaMA/comments/18rxzdj/the_most_exciting_ai_advancements_and_product/
false
false
default
1
null
Nucleus sampling with semantic similarity
21
I am sorry if the way I explain it is not very scientific; I am a financial analyst/investor and not an ML expert. However, I find it puzzling that we invest substantial computational power in deriving probabilities for the next token using softmax, only to rely on simple randomness in selection. My naive intuition suggests that within the top 200 most probable tokens there is considerable untapped value (and we just let it die after each token is produced).

For instance, I've been contemplating the idea of incorporating a similarity search among these top 200 tokens, organizing them into clusters or overlapping clusters. Instead of prioritizing the most probable token, the model could initially consider the most probable cluster. While "go" might be the most probable word, a cluster encompassing related terms like "walk," "run," and "drive" might be even more likely. Subsequently, selecting a cluster that is close but hasn't been presented could be a way to introduce context, increasing the likelihood of its expression.

Now imagine RAG: it could have access to those secondary clusters on a second read, which an LLM agent could evaluate in a debugging mode. So why does RAG get only the final output instead of using something richer with the help of a sophisticated sampler? This would not be so expensive computationally.

In the pursuit of enhancing creativity, I find it somewhat simplistic to introduce just randomness into the model, especially after so much extensive computation. With a pool of 200 finalists, there might be more sophisticated approaches that both boost creativity and maintain control. I would appreciate the insights of those who have a deeper understanding of how this could work.
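The cluster-first selection proposed above can be sketched with toy numbers. Here the clusters are assigned by hand and all probabilities are made up; a real implementation would need to cluster token embedding vectors:

```python
import random

def sample_cluster_first(token_probs, clusters, rng):
    """Pick the cluster with the largest total probability mass, then
    sample a token within it proportionally to its probability."""
    mass = {c: sum(token_probs[t] for t in toks) for c, toks in clusters.items()}
    best = max(mass, key=mass.get)
    members = clusters[best]
    weights = [token_probs[t] for t in members]
    return best, rng.choices(members, weights=weights, k=1)[0]

# "think" is the single most probable token, but the movement cluster
# wins on combined mass (0.15 + 0.12 + 0.10 = 0.37), illustrating the idea.
probs = {"go": 0.30, "walk": 0.15, "run": 0.12, "drive": 0.10, "think": 0.33}
clusters = {"movement": ["walk", "run", "drive"], "go": ["go"], "cognition": ["think"]}
best, token = sample_cluster_first(probs, clusters, random.Random(0))
print(best, token)
```

Randomness survives only inside the winning cluster, so the draw stays semantically constrained while still varying between runs.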
2023-12-27T10:39:33
https://www.reddit.com/r/LocalLLaMA/comments/18rxw5u/nucleus_sampling_with_semantic_similarity/
dimknaf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rxw5u
false
null
t3_18rxw5u
/r/LocalLLaMA/comments/18rxw5u/nucleus_sampling_with_semantic_similarity/
false
false
self
21
null
Deploy Mistral-7b as a Docker Container
1
I have been working with RAG applications and wish to deploy a Mistral model API as a Docker container so that I can use it anywhere and make it portable. I don't have much experience with Docker; I tried it, but there seems to be some issue with GPU usage through Docker. Does anyone have input on how I can make it work? NVIDIA drivers in docker-compose or the Dockerfile somehow?
2023-12-27T10:04:27
https://www.reddit.com/r/LocalLLaMA/comments/18rxd5e/deploy_mistral7b_as_a_docker_container/
One-Difficulty3149
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rxd5e
false
null
t3_18rxd5e
/r/LocalLLaMA/comments/18rxd5e/deploy_mistral7b_as_a_docker_container/
false
false
self
1
null
Can I use an instruct code model for code completion tasks? For example, CodeLlama-Instruct / DeepSeek-Instruct.
3
If so, is it proper to continue-train an instruct model?
2023-12-27T09:53:41
https://www.reddit.com/r/LocalLLaMA/comments/18rx7b9/can_i_use_a_instruct_code_model_to_fit_code/
bittercucumb3r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rx7b9
false
null
t3_18rx7b9
/r/LocalLLaMA/comments/18rx7b9/can_i_use_a_instruct_code_model_to_fit_code/
false
false
default
3
null
I created an app that runs local models directly on your phone - my journey so far
31
Hi everyone!

https://preview.redd.it/cndefiq93t8c1.png?width=50&format=png&auto=webp&s=580f4bea650430235f98c7f712c626f4e74b8d23

I wanted to share my journey so far creating a mobile app that runs local models. I started almost a year ago (when llama.cpp was still in its infant stages) as a proof-of-concept to see if I could get an acceptable chatbot running locally on my phone. It worked pretty well, so I decided to take it one step further and build a nice UI around it, with options to select different "personalities". This led me down the rabbit hole of researching fine-tuning, datasets, model benchmarks, VRAM requirements, etc.

At the start, I used the Oobabooga TextGen GUI locally to test different models, experimenting with how they behave, and training LORAs in an attempt to bring out the personalities I wanted. It progressed to using the Axolotl library to do full fine-tunes when LORAs weren't enough, and renting GPUs on the cloud. As I developed the app, I realised it's actually becoming quite full-featured. I want to share my experience with you guys in case any of you decide to create your own local AI!

**Model tuning process:**

1. [Oobabooga TextGen](https://github.com/oobabooga/text-generation-webui) - for quickly testing base model capabilities and fine-tuning results. Use the "Training" tab to quickly train LORAs to evaluate how good your dataset is.
2. [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) - command-line library for fully fine-tuning models. Due to a unique technical limitation (which I'm in the process of solving!), loading a LORA on a mobile device means the model needs to be loaded fully into memory (as opposed to using mem-maps), which drastically ups the RAM requirements on smartphones, sacrificing speed and context length. So full-tuning is the way to go. Axolotl provides an easy starting point: you can modify the dataset format easily, and it supports a wide variety of tuning options.
3. [Vast.ai](https://Vast.ai) - cloud GPU rental. Once you've got the training process down to a T, rent a cluster of GPUs. A full fine-tune on around 1GB of data takes around 2-3 hours on 8x A100 GPUs, so the cost is actually reasonable.
4. [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) - after your model is trained, evaluate it quickly by hand using the Oobabooga GUI and with lm-evaluation-harness. By comparing the scores with the base model or your previous versions, you get a quantitative idea of how well your training went.
5. [Llama.cpp](https://github.com/ggerganov/llama.cpp) - use this to quantise the model down to run on mobile devices. Q4_K_M provides a nice balance between speed and quality.

**App tech stack:**

1. [Llama.cpp](https://github.com/ggerganov/llama.cpp) - runs the inference engine. Many thanks to the immensely talented people maintaining that project. I'm happy to say that I've made some contributions as well, solving some issues I ran into porting it to mobile.
2. A React bridge that encapsulates llama.cpp, which I wrote myself. It exports the necessary C++ functions for use on both Android and iOS.
3. React Native - frontend UI.
4. [Midjourney](https://www.midjourney.com/explore) - I'm no designer, but I do know generally how I want my app to look. This is where Midjourney comes in: I describe the app interface I want and ask it to generate app designs, then use those as inspiration, iterating a few times before settling on one.

The app supports 3 models (Lite, Q2, and Full), chosen to cover low- to high-end devices. The models are uploaded to HuggingFace, so if you are interested you can download and test them directly with your favourite client:

1. Lite - [https://huggingface.co/l3utterfly/minima-3b-layla-v1](https://huggingface.co/l3utterfly/minima-3b-layla-v1) - 3B model for low-end devices with low RAM
2. Q2 - Q2-quantised version of [https://huggingface.co/l3utterfly/mistral-7b-v0.1-layla-v1](https://huggingface.co/l3utterfly/mistral-7b-v0.1-layla-v1) - 7B model for medium-end devices, or high-end devices if you want it fast
3. Full - Q4_K_M-quantised version of [https://huggingface.co/l3utterfly/mistral-7b-v0.1-layla-v1](https://huggingface.co/l3utterfly/mistral-7b-v0.1-layla-v1) - 7B model, slowest, but best quality

(Note: I can probably fit a 13B model on very high-end devices, but from experience I found it starts to mess with basic phone functions. For example, if I have too many background apps running, it will simply crash.)

**Current features I've already done:**

* Assistant function for scheduling tasks in your calendar
* Different personalities to chat to
* Roleplaying
* Importing SillyTavern PNG characters
* Creating your own characters using prompts

It's been an amazing journey. I do believe the future is everyone having their own local on-device AI, akin to everyone having a smartphone these days. So I'm still planning to keep on learning in this space and add what I learn to the app as I go!

**Features I plan to add in the next month/year:**

* Ability to search the internet during a response with RAG
* Tool calling
* A place for people to share characters they created
* Ability to draft emails, text messages, reply to people on Messenger, etc.
* Language-learning coach where you can practise texting/speaking different languages in different roleplay scenarios

If you are interested in taking a look and supporting my development, here's my website: [https://www.layla-network.ai/](https://www.layla-network.ai/), which has links to the app store where you can download the app. It's $15 one time for all existing and future updates.
2023-12-27T09:36:08
https://www.reddit.com/r/LocalLLaMA/comments/18rwy30/i_created_an_app_that_runs_local_models_directly/
Tasty-Lobster-8915
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rwy30
false
null
t3_18rwy30
/r/LocalLLaMA/comments/18rwy30/i_created_an_app_that_runs_local_models_directly/
false
false
default
31
Learning about character and fine tuning
1
[removed]
2023-12-27T09:08:20
https://www.reddit.com/r/LocalLLaMA/comments/18rwj6b/learning_about_character_and_fine_tuning/
Competitive_Fox7811
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rwj6b
false
null
t3_18rwj6b
/r/LocalLLaMA/comments/18rwj6b/learning_about_character_and_fine_tuning/
false
false
default
1
null
Capybara dataset is now open-source and available!
3
[removed]
2023-12-27T09:05:35
https://i.redd.it/gjwo91sgzs8c1.jpeg
dogesator
i.redd.it
1970-01-01T00:00:00
0
{}
18rwhqp
false
null
t3_18rwhqp
/r/LocalLLaMA/comments/18rwhqp/capybara_dataset_is_now_opensource_and_available/
false
false
default
3
Running LLM on CPU ?
10
Hi everyone, I have searched before asking this, but could not really find a proper answer (there is not even a tutorial here). Basically I still have problems with model size and the resources needed to run LLMs (esp. in a corporate environment). So I am trying to run them on CPU, including relatively small CPUs (think Raspberry Pi).

Where I am currently:

- I managed to download the Mistral weights, set up a proper environment and run it on a Colab. I can replicate the environment locally.
- The general runtime is around 10x what I get on GPU.
- I observe memory spikes above what would be available locally / on a Raspberry Pi.

So I am now interested in:

- making it run faster
- making it take less memory
- not getting too far away from the original model

Asking on other forums I only get vague answers (try quantization, try GPU, etc.) without even a link. I understand that I might need to use other models (typically quantized models), but didn't get any precise answer. Can someone pinpoint available strategies / good tutorials / CPU-only benchmarks?
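Quantization is the usual answer because it directly shrinks the weight footprint, which dominates both RAM use and CPU memory bandwidth. A back-of-the-envelope estimate (the bits-per-weight and overhead figures here are rough assumptions, not measured values):

```python
def model_ram_gb(n_params, bits_per_weight, overhead=1.2):
    """Very rough RAM estimate for a model's weights. The overhead
    factor loosely covers KV cache, activations and runtime buffers;
    real numbers vary by runtime and context length."""
    return n_params * bits_per_weight / 8 / 2**30 * overhead

# Mistral 7B (~7.24B params) at a few precisions; bits/weight are
# approximate effective rates for GGUF quant formats.
for name, bits in [("fp16", 16), ("Q8_0", 8.5), ("Q4_K_M", 4.85)]:
    print(f"{name}: ~{model_ram_gb(7.24e9, bits):.1f} GB")
```

The fp16 model will not fit on a small machine, while a Q4-class quant lands under ~5 GB, which is why quantized GGUF builds are the standard route for CPU-only inference.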
2023-12-27T08:40:45
https://www.reddit.com/r/LocalLLaMA/comments/18rw4bj/running_llm_on_cpu/
WERE_CAT
self.LocalLLaMA
2023-12-27T10:21:00
0
{}
18rw4bj
false
null
t3_18rw4bj
/r/LocalLLaMA/comments/18rw4bj/running_llm_on_cpu/
false
false
self
10
null
Is FIM important for a code model (like CodeLlama & DeepSeek Coder)?
5
The main application scenario of the model is to integrate it into an IDE plugin for code completion. I find that training a code model with good FIM performance requires a significant effort in data cleaning.
2023-12-27T08:28:46
https://www.reddit.com/r/LocalLLaMA/comments/18rvxqa/is_fim_important_for_a_code_modellike_codellama/
bittercucumb3r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rvxqa
false
null
t3_18rvxqa
/r/LocalLLaMA/comments/18rvxqa/is_fim_important_for_a_code_modellike_codellama/
false
false
self
5
null
s FIM important for a code model(like codellama/deepseek coder)?
1
[deleted]
2023-12-27T08:27:34
[deleted]
1970-01-01T00:00:00
0
{}
18rvx3b
false
null
t3_18rvx3b
/r/LocalLLaMA/comments/18rvx3b/s_fim_important_for_a_code_modellike/
false
false
default
1
null
Struggle to load Mixtral-8x7B in 4 bit into 2 x 24GB vRAM in Llama Factory
5
I am trying to finetune Mixtral-8x7B with LLaMA Factory. Its GitHub page says the hardware requirement for finetuning Mixtral-8x7B in 4-bit is 32GB: [https://github.com/hiyouga/LLaMA-Factory?tab=readme-ov-file#distributed-training](https://github.com/hiyouga/LLaMA-Factory?tab=readme-ov-file#distributed-training) I use Huggingface Accelerate to work with 2 x 24GB GPUs. In theory, I should have enough vRAM at least to load it in 4-bit, right? I have tried every setting in "accelerate config" but no luck: OOM at loading every time, let alone starting the finetune. Does anyone have an idea what I have missed when loading Mixtral?

Edit: typo

Edit 2: It seems that the model is loaded entirely onto each GPU, i.e. each GPU gets a full copy of the model. It's not "model parallelism", which would load different parts of the model onto different GPUs. I tried loading a model using one GPU and it used, say, 8GB of vRAM; with 2 GPUs, each of them used 8GB, not 4GB each.
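The Edit 2 observation matches Accelerate's default behaviour: a plain multi-GPU launch is data-parallel, so each process loads a full copy of the model, whereas sharding the layers across cards (e.g. `device_map="auto"` in transformers) is what actually splits the footprint. Rough arithmetic on why one full 4-bit copy per 24 GB card is borderline (the effective bits-per-weight and overhead figures are assumptions):

```python
def mixtral_4bit_gb(n_params=46.7e9, bits=4.5, overhead_gb=2.0):
    """Rough 4-bit footprint for Mixtral-8x7B: ~4.5 effective bits per
    weight (quantization constants included) plus a couple of GB for
    cache and runtime buffers. Illustrative, not measured."""
    return n_params * bits / 8 / 2**30 + overhead_gb

per_copy = mixtral_4bit_gb()
print(f"~{per_copy:.1f} GB per full copy")          # over a single 24 GB card
print(f"~{per_copy / 2:.1f} GB per GPU if sharded")  # comfortable across two
```

So a full copy per GPU plausibly OOMs on a 24 GB card, while model-parallel sharding across the two cards would fit with room for training state.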
2023-12-27T08:05:25
https://www.reddit.com/r/LocalLLaMA/comments/18rvkij/struggle_to_load_mixtral8x7b_in_4_bit_into_2_x/
tgredditfc
self.LocalLLaMA
2023-12-27T16:59:37
0
{}
18rvkij
false
null
t3_18rvkij
/r/LocalLLaMA/comments/18rvkij/struggle_to_load_mixtral8x7b_in_4_bit_into_2_x/
false
false
self
5
I'm using LM Studio and am trying to run Mixtral from my GPU's VRAM. How can I do this? I enabled "GPU offload" and set it to 10 layers and there is no change; it's still maxing out my desktop RAM. What am I missing?
1
2023-12-27T07:08:04
https://i.redd.it/a3al8265es8c1.png
Upper_Judge7054
i.redd.it
1970-01-01T00:00:00
0
{}
18rumtt
false
null
t3_18rumtt
/r/LocalLLaMA/comments/18rumtt/im_using_lm_studio_and_am_trying_to_run_mixtral/
false
false
default
1
🚀 Introducing AllMiniLML6v2Sharp and ChromaDBSharp: Empowering C# Developers to create LLM RAG pipelines.
14
Hey, fellow developers! I'm excited to share some projects I've been working on that allow C# developers to easily work with Vector databases locally in C#. After experimenting with LLMs in Python, I realized the need for better Vector DB support in C#. Check out these projects: * **AllMiniLML6v2Sharp:** * For running Sentence Transformers AllMiniLML6v2 directly from your C# code. * [Github](https://github.com/ksanman/AllMiniLML6v2Sharp) * **ChromaDBSharp:** * C# Client for interfacing with a hosted version of ChromaDB API. * [GitHub](https://github.com/ksanman/ChromaDBSharp) * **Full Pipeline Showcase (C#, LLamaSharp, React):** * Basic demo app for querying a ChromaDB, and using RAG with C#. * [GitHub](https://github.com/ksanman/LlmTestApp) I would love to hear your feedback! Share your thoughts, suggestions, or report any issues you encounter. Your input is invaluable in refining this project.
2023-12-27T06:26:36
https://www.reddit.com/r/LocalLLaMA/comments/18rtxvq/introducing_allminilml6v2sharp_and_chromadbsharp/
agent_xaber
self.LocalLLaMA
2023-12-27T15:03:13
0
{}
18rtxvq
false
null
t3_18rtxvq
/r/LocalLLaMA/comments/18rtxvq/introducing_allminilml6v2sharp_and_chromadbsharp/
false
false
self
14
{'enabled': False, 'images': [{'id': 'c1aMK-M8XbrNKuZTFS-6FGJoLBCZkzAHbzgeK13SGgs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GZ6MGHLOWmkvi07HugUVebd8v5Zbu_vhV-l4H0CS0xU.jpg?width=108&crop=smart&auto=webp&s=5efb3f7d9d4e456ee9b28dfe93d380a19b8ba72b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GZ6MGHLOWmkvi07HugUVebd8v5Zbu_vhV-l4H0CS0xU.jpg?width=216&crop=smart&auto=webp&s=4d05081efc097dd3f580c536045c69cdf9f850c5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GZ6MGHLOWmkvi07HugUVebd8v5Zbu_vhV-l4H0CS0xU.jpg?width=320&crop=smart&auto=webp&s=b7e85097ea1a224dc55225b5608c8feab21656a0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GZ6MGHLOWmkvi07HugUVebd8v5Zbu_vhV-l4H0CS0xU.jpg?width=640&crop=smart&auto=webp&s=864474099d5677a90b37965ddca797dc783b19e7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GZ6MGHLOWmkvi07HugUVebd8v5Zbu_vhV-l4H0CS0xU.jpg?width=960&crop=smart&auto=webp&s=46ba3e6177b1243a8979d2541f44b428577d2d94', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GZ6MGHLOWmkvi07HugUVebd8v5Zbu_vhV-l4H0CS0xU.jpg?width=1080&crop=smart&auto=webp&s=df8639fc89ef763bf3f50db71fa9963c26ad8128', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GZ6MGHLOWmkvi07HugUVebd8v5Zbu_vhV-l4H0CS0xU.jpg?auto=webp&s=15d731c8de72727c577ba96c9141fff246990a30', 'width': 1200}, 'variants': {}}]}
Using LLM's for Intelligent Brochure Querying – Help Needed!
1
Hey fellow Redditors! I've recently delved into the exciting world of open source Local Language Models (LLMs), with the aim of building a user-friendly UI to query my growing library of product brochures. Currently sitting at 100 brochures, I foresee this collection expanding, and I'm seeking your expertise on a few aspects. 1. Viability of Use Case: Is a local LLM like Mistral suitable for this task, considering potential limitations in the context window, especially with a growing number of brochures? 2. Embeddings and Vector Databases: How do I go about creating embeddings? Do I select a model, run the data through it, and then store the output in a vector database? Any low-code solutions for this? 3. Adding New Brochures: If I want to add new brochures to the vector database, do I need to re-do the embedding for each document and update the vector database? Is there a way to automate this process? 4. Optimal Models: Are there models known for minimizing hallucinations, citing sources, and providing robust performance in this use case? 5. Automation and Tools: Seeking recommendations for tools or platforms that make this process user-friendly without extensive programming skills. I'm navigating this territory with a hazy understanding and would love your insights, tips, and suggestions. Let's brainstorm together on the best approach for this extensive use case! Looking forward to your expertise. Thanks!
2023-12-27T06:23:06
https://www.reddit.com/r/LocalLLaMA/comments/18rtvq6/using_llms_for_intelligent_brochure_querying_help/
reborn_tonight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rtvq6
false
null
t3_18rtvq6
/r/LocalLLaMA/comments/18rtvq6/using_llms_for_intelligent_brochure_querying_help/
false
false
self
1
null
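For questions 2 and 3 in the post above (creating embeddings, adding new brochures without redoing everything), the flow is: pick an embedding model, run each document chunk through it, store the vectors, and query by vector similarity. A real pipeline would use sentence-transformers plus ChromaDB; the sketch below swaps in a toy hashing "embedder" and an in-memory store so it runs anywhere — all names here are illustrative stand-ins, not a real library API. Note how adding a new brochure is just one more `add()` call; existing embeddings are never recomputed.

```python
import hashlib
import math

def toy_embed(text, dim=64):
    # Stand-in for a real embedding model (e.g. all-MiniLM-L6-v2 via
    # sentence-transformers): hash each word into a fixed-size vector
    # so the pipeline is runnable without any model download.
    vec = [0.0] * dim
    for word in text.lower().split():
        word = word.strip("?.,!:;")
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

class ToyVectorStore:
    """Minimal in-memory stand-in for a vector DB collection."""

    def __init__(self):
        self.docs = []  # (doc_id, text, vector)

    def add(self, doc_id, text):
        # Adding a new brochure later is just another call here --
        # the existing embeddings are not recomputed.
        self.docs.append((doc_id, text, toy_embed(text)))

    def query(self, question, top_k=3):
        q = toy_embed(question)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[2]), reverse=True)
        return [(doc_id, text) for doc_id, text, _ in ranked[:top_k]]

store = ToyVectorStore()
store.add("brochure-1", "industrial pump with stainless steel housing")
store.add("brochure-2", "office chair with adjustable lumbar support")
hits = store.query("stainless steel pump housing", top_k=1)
```

With a real stack the shape is the same: `collection.add(ids=…, documents=…)` and `collection.query(query_texts=…)` in ChromaDB, which also handles persistence, so automation (point 3) reduces to a watch-folder script that calls `add()` for each new PDF's chunks.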
7b - 13b models are hopeless at planning tasks
77
I wanted to test how good of a planner popular models such as “solar-10-7b” and “OpenHermes-2.5-mistral” were at planning tasks. The following is the prompt I used (based on a real scenario / use case ) ~ Prompt start { I need you to create a timetable for me given the following facts: - my plane takes off at 6:30am - I need to be at the airport 1h before take off - it will take 45mins to get to the airport - I need 1h to get dressed and have breakfast before we leave The plan should include when to wake up and the time I need to get into the vehicle to get to the airport in time for my 6:30am flight , think through this step by step. } prompt end ~ None of the local models has been able to prepare a plan that gets me to the airport on time; they often miss my flight by 1-2 hrs. I have tested this exact prompt with chatGPT 3.5 with a successful plan outlined. Are we aware of any local LLMs that are smart enough to solve this task? Edit: The addition of latent space activation helped a little but still failed. The addition to the end of the original prompt was Prompt addition start ~ { Latent space activation keywords : chronological plan, in time, well planned , on flight at 6:30am, meets goals } ~ end prompt addition
2023-12-27T05:31:42
https://www.reddit.com/r/LocalLLaMA/comments/18rsygu/7b_13b_models_are_hopeless_at_planning_tasks/
ThinkExtension2328
self.LocalLLaMA
2023-12-27T13:28:26
0
{}
18rsygu
false
null
t3_18rsygu
/r/LocalLLaMA/comments/18rsygu/7b_13b_models_are_hopeless_at_planning_tasks/
false
false
self
77
null
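For reference, the failing timetable prompt above boils down to simple backward arithmetic from the departure time. A sketch of the expected plan (the four durations are the only facts taken from the post; everything else is plain `datetime` math), handy as a ground truth when scoring model outputs:

```python
from datetime import datetime, timedelta

takeoff = datetime(2024, 1, 1, 6, 30)                 # flight departs 6:30am
arrive_airport = takeoff - timedelta(hours=1)          # at the airport by 5:30am
leave_home = arrive_airport - timedelta(minutes=45)    # in the vehicle by 4:45am
wake_up = leave_home - timedelta(hours=1)              # wake at 3:45am to dress and eat

timetable = {
    "wake up": wake_up,
    "get in vehicle": leave_home,
    "arrive at airport": arrive_airport,
    "take off": takeoff,
}
```

Any plan a model produces can be checked against these three derived times; a "miss by 1-2 hrs" usually means the model added the buffers instead of subtracting them.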
Are you open minded about the true nature of your own existence?
1
I am convinced that people are LLMs. Or at least some large part of us. *Pre-Suasion: Channeling Attention for Change* is one of my favorite books about prompt engineering for human beings. You can literally change how people behave simply by presenting them with a certain idea before the fact. And not by some barely statistically significant margin, but by 15-30% if I recall correctly. I haven't read that book in years, and apparently the author has since written an expanded version (*Influence: The Psychology of Persuasion*), but this connection is pretty exciting to me. Is anyone talking about this? Maybe there is a more relevant sub?
2023-12-27T05:17:00
https://www.reddit.com/r/LocalLLaMA/comments/18rsoik/are_you_open_minded_about_the_true_nature_of_your/
flossraptor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rsoik
false
null
t3_18rsoik
/r/LocalLLaMA/comments/18rsoik/are_you_open_minded_about_the_true_nature_of_your/
false
false
default
1
null
First Game with fully functional AI/LLM
24
First up… I’m just a very hyped fan of the game and I have nothing to do with the devs. “Suck Up!” A Game where u are a vampire and have to convince everyone to let u in their house to drink their blood. At the same time the AI looks on the clothes u choose for ur character in disguise to check if ur story seems plausible. And trust me it works like a charm. Every NPC has his own Interests and Personality while being easy or hard to manipulate. Have written hours of lore already www.playsuckup.com it’s just 13$ cause very early acces. U Interact via Mikrofon which gets fed to the LLM on the DEVS sever via internet in 2 Seconds and get a response. Just a little sneak peak of me being about 3 Minutes in a Convo after showing up as an agent and just saying: “I’m assuming u are already well informed” to the “Paranoid/Conspiracy” Guy After the scene In the picture he died miserably!
2023-12-27T05:11:40
https://i.redd.it/118fgahptr8c1.jpeg
Fr3koTwitch
i.redd.it
1970-01-01T00:00:00
0
{}
18rskwm
false
null
t3_18rskwm
/r/LocalLLaMA/comments/18rskwm/first_game_with_fully_functional_aillm/
false
false
https://b.thumbs.redditm…GZx0w02LQLIc.jpg
24
{'enabled': True, 'images': [{'id': 'fATZS2F9IWYiHqcq9kPQW7o7_xol0jlsBNYXz1V0RUg', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/118fgahptr8c1.jpeg?width=108&crop=smart&auto=webp&s=6c2c6b6fd70e6a78ab80af5f0e3b03f1d128c602', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/118fgahptr8c1.jpeg?width=216&crop=smart&auto=webp&s=be078b97e5b05cf37e4ebe10c1d4fb577813426b', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/118fgahptr8c1.jpeg?width=320&crop=smart&auto=webp&s=58f62533038b62cb702d76f5f6763fe0cbdb4782', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/118fgahptr8c1.jpeg?width=640&crop=smart&auto=webp&s=c1d6b06b11b5cafc4ac83a150725e36a35438f8c', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/118fgahptr8c1.jpeg?width=960&crop=smart&auto=webp&s=13b0d80a5131e314931bd6a0cd22bc18eaf9705e', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/118fgahptr8c1.jpeg?width=1080&crop=smart&auto=webp&s=f60e1b6eb802b12d29fa7213839e112dce69dfde', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/118fgahptr8c1.jpeg?auto=webp&s=1d1de74b9c783d03c349281f0d9ce71f41b58bde', 'width': 3024}, 'variants': {}}]}
Need help with llamacpp-python bindings
3
I have been struggling to understand how to do the following: Get a report of the current number of tokens presently in context from a model initialized by a call to Llama (from llama_cpp import Llama in Python) using the “messages” method for the completion. n_ctx = 4096 I see a method in the low-level API documentation for llama_model_get(ctx) which looks like it should return the current status of the context tokens (n/4096 in my case)? Do I need to separately init with the low-level API to use this method? Thanks if someone smarter than me on llama-cpp-python and the APIs can tell me an easy method to get a report of the current state/status of ctx at any given turn.
2023-12-27T04:40:37
https://www.reddit.com/r/LocalLLaMA/comments/18rrze3/need_help_with_llamacpppython_bindings/
Helpful-Gene9733
self.LocalLLaMA
2023-12-27T04:47:48
0
{}
18rrze3
false
null
t3_18rrze3
/r/LocalLLaMA/comments/18rrze3/need_help_with_llamacpppython_bindings/
false
false
self
3
null
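One way to answer the question above without dropping to the low-level API is to count tokens yourself: llama-cpp-python's high-level `Llama` object exposes a `tokenize()` method, so you can tokenize each message's content as you build the chat and track usage against `n_ctx`. A sketch of that idea — the `per_message_overhead` constant is an assumption (chat templates add special tokens per turn, and the exact count depends on the template), and the toy whitespace tokenizer below only stands in so the sketch runs without a model loaded:

```python
def context_usage(messages, tokenize, n_ctx=4096, per_message_overhead=4):
    """Estimate how full the context is for a chat-style completion.

    `tokenize` is any callable returning a list of tokens; with
    llama-cpp-python you could pass something like
    `lambda s: llm.tokenize(s.encode("utf-8"))` (check your version's
    exact signature -- this wiring is an assumption, not verified API).
    `per_message_overhead` approximates the chat template's special tokens.
    """
    used = sum(len(tokenize(m["content"])) + per_message_overhead
               for m in messages)
    return used, n_ctx - used

# Toy tokenizer so the sketch is runnable without loading a model.
toy_tokenize = lambda s: s.split()

messages = [
    {"role": "user", "content": "hello there"},
    {"role": "assistant", "content": "hi how can I help"},
]
used, remaining = context_usage(messages, toy_tokenize)
```

This gives a per-turn report without a second low-level init; the estimate drifts from the true count only by the template overhead, which you can calibrate once against a real tokenization of the full formatted prompt.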
Why is no-one fine-tuning something like t5?
76
I know this isn't about LLaMA, but flan T5 3B regularly outperforms other 3b models like mini orca 3b and lamini flan t5 783m (fine-tuned flan-t5-small) outperforms tinyllama-1.1B. So that begs the question: Why aren't many people fine-tuning flan t5 / t5?
2023-12-27T04:39:16
https://www.reddit.com/r/LocalLLaMA/comments/18rryf1/why_is_noone_finetuning_something_like_t5/
No_Baseball_7130
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rryf1
false
null
t3_18rryf1
/r/LocalLLaMA/comments/18rryf1/why_is_noone_finetuning_something_like_t5/
false
false
self
76
null
Microsoft Copilot Android App
5
It responds in a different way to the original GPT4, it is worth checking [Microsoft Copilot](https://play.google.com/store/apps/details?id=com.microsoft.copilot&hl=en_US&pli=1)
2023-12-27T03:58:09
https://www.reddit.com/r/LocalLLaMA/comments/18rr5vq/microsoft_copilot_android_app/
k0setes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rr5vq
false
null
t3_18rr5vq
/r/LocalLLaMA/comments/18rr5vq/microsoft_copilot_android_app/
false
false
default
5
{'enabled': False, 'images': [{'id': 'OJUbPe2GI77aIFS0GLPEm1NpweIwdlTCmvJ9ukTwZhw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/lsGyFQH6b-wfI74qtYLp2HZkHDb57b2enHZgQCfnMNQ.jpg?width=108&crop=smart&auto=webp&s=586ec042cc878b373f1f2a62c622ec721d80b166', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/lsGyFQH6b-wfI74qtYLp2HZkHDb57b2enHZgQCfnMNQ.jpg?width=216&crop=smart&auto=webp&s=b68a4b26ab9252d25efd63f49f224878a1406595', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/lsGyFQH6b-wfI74qtYLp2HZkHDb57b2enHZgQCfnMNQ.jpg?width=320&crop=smart&auto=webp&s=3ed4bc5b793e44b162edfd7174db941a455acf88', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/lsGyFQH6b-wfI74qtYLp2HZkHDb57b2enHZgQCfnMNQ.jpg?auto=webp&s=585aaf93574c0b0ae65cceea0fdd6e2a47cc5a90', 'width': 512}, 'variants': {}}]}
Microsoft Copilot Android App
1
2023-12-27T03:51:31
https://i.redd.it/eo6fpws2fr8c1.jpeg
k0setes
i.redd.it
1970-01-01T00:00:00
0
{}
18rr189
false
null
t3_18rr189
/r/LocalLLaMA/comments/18rr189/microsoft_copilot_android_app/
false
false
default
1
{'enabled': True, 'images': [{'id': 'TTw74QusUmJdqPFGO53L91jHVlI5WpHdH53Pg2ygtFc', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/eo6fpws2fr8c1.jpeg?width=108&crop=smart&auto=webp&s=29d44086d3e6530fc1375137e435d0a07fde51f2', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/eo6fpws2fr8c1.jpeg?width=216&crop=smart&auto=webp&s=8567ce75998bfcab61653b8630db9f04230f78a5', 'width': 216}], 'source': {'height': 240, 'url': 'https://preview.redd.it/eo6fpws2fr8c1.jpeg?auto=webp&s=dc83145ed07d6eb7e14ca64f52ce39347cb0f9b2', 'width': 240}, 'variants': {}}]}
Hardware
1
Hi! I discovered LM Studio and this subreddit yesterday and it has become evident to me that my current hardware is not up to the task of running the hefty models. Small models work fine; the big ones don't at all. The machine grinds to a halt on output, a word every 10 seconds or so. Not usable. Here are my system specs: 2019 Laptop with: *i5-8350U* 8 gig RAM 256 gig NVMe SSD Integrated Intel GPU Linux OS I would like to understand what can be done about that. Obviously, a new computer would fix this but would maxing out this computer do the trick too, so that I can run larger models locally? It would be much more cost effective to spec bump it to: 32 gigs RAM 2 TB NVMe SSD 32 gigs are the most RAM the machine can handle according to the manufacturer. Looking forward to your opinions! Cheers
2023-12-27T03:18:54
https://www.reddit.com/r/LocalLLaMA/comments/18rqe5m/hardware/
theguyabroad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rqe5m
false
null
t3_18rqe5m
/r/LocalLLaMA/comments/18rqe5m/hardware/
false
false
self
1
null
Japanese Llama2-13B
10
2023-12-27T02:59:20
https://x.com/ELYZA_inc/status/1739828503171019204?s=20
ninjasaid13
x.com
1970-01-01T00:00:00
0
{}
18rq07h
false
null
t3_18rq07h
/r/LocalLLaMA/comments/18rq07h/japanese_llama213b/
false
false
https://b.thumbs.redditm…iUSAcQac0caM.jpg
10
{'enabled': False, 'images': [{'id': 'R6CnEfAJBkZ8CGbZM7FCY0tcdd50P-Q3JhikLBApg9w', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Zd59MG86KPWopUrwS03uKX1sqKps0DIxrgBDEqde3zU.jpg?width=108&crop=smart&auto=webp&s=e1a4e6af707e0647903c5c44d8bddd0ab26413be', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/Zd59MG86KPWopUrwS03uKX1sqKps0DIxrgBDEqde3zU.jpg?auto=webp&s=cde3f7008a41b3d9fd00ce2af0e352607e9df568', 'width': 200}, 'variants': {}}]}
problem when loading gguf models
2
Hi, I got some more RAM so I tried to download some gguf models. However, there was a weird and silly problem most of the times I loaded it; updating fixed it once, but then it happened again. When it happens, I try to load the model. Then RAM usage spikes and goes up to the amount needed to load it almost instantly. Then, over the next while, it slowly, slowly, slowly crawls back down, then throws an access violation error. I'm on Windows 11. I would really appreciate some help with this. edit: actually, this time, it did load -- no access violation. It still did the memory crawling down thing and it took literally 1121 seconds to load which seems like way too much. edit2: tried to do something; ValueError. That's it. Nothing else is there. What the actual...
2023-12-27T02:59:17
https://www.reddit.com/r/LocalLLaMA/comments/18rq06f/problem_when_loading_gguf_models/
AIWithASoulMaybe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rq06f
false
null
t3_18rq06f
/r/LocalLLaMA/comments/18rq06f/problem_when_loading_gguf_models/
false
false
default
2
null
Any good 3B (GGUF) or smaller model for summarizing / checking & improving English? Some models I have, e.g. Phi-2, don't do this and just answer my question even when I specifically ask them to check my English without any explanation; other models just go on and on.
2
Using LM Studio 0.2.10 System prompt: Your task is to rewrite the following English sentences for better clarity, without changing the content or meaning of the sentences. Rewrite without explanation and do not describe what you have changed.
2023-12-27T02:44:46
https://www.reddit.com/r/LocalLLaMA/comments/18rppuw/any_good_3b_gguf_or_below_model_for_the_purpose/
jackfood2004
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rppuw
false
null
t3_18rppuw
/r/LocalLLaMA/comments/18rppuw/any_good_3b_gguf_or_below_model_for_the_purpose/
false
false
self
2
null
I've made a Discord bot running a local TTS and local text-gen
1
2023-12-27T02:16:33
https://v.redd.it/6dphq7n6yq8c1
flepss
v.redd.it
1970-01-01T00:00:00
0
{}
18rp53l
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6dphq7n6yq8c1/DASHPlaylist.mpd?a=1706289825%2CZjc2NWFjNmJmMmM0NjIzZDA1MjA4M2YwODUxNGUyM2U1Mjc1YzRjMDU1NjQ1MTNiYTFjZDgzYmZjZTMzNDhkZQ%3D%3D&v=1&f=sd', 'duration': 59, 'fallback_url': 'https://v.redd.it/6dphq7n6yq8c1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/6dphq7n6yq8c1/HLSPlaylist.m3u8?a=1706289825%2CZDZhMDM0ZDY0NDg2MTkyMmExOGE2YjkwNzQ3YTM2MzM2MmEyMzRmYjk5OWU2YjczNjBhNGVlYjdjYzk5YTgwZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6dphq7n6yq8c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_18rp53l
/r/LocalLLaMA/comments/18rp53l/ive_made_a_dscord_bot_running_a_local_tts_and/
false
false
default
1
{'enabled': False, 'images': [{'id': 'bTQ1dGIzaGh5cThjMezyX-e4wHRa30whTu-QzkNCcOmZyRGaQ5sxRDjVpfI4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bTQ1dGIzaGh5cThjMezyX-e4wHRa30whTu-QzkNCcOmZyRGaQ5sxRDjVpfI4.png?width=108&crop=smart&format=pjpg&auto=webp&s=26dd452bc7907c83798da841ac00775f5045d6d1', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bTQ1dGIzaGh5cThjMezyX-e4wHRa30whTu-QzkNCcOmZyRGaQ5sxRDjVpfI4.png?width=216&crop=smart&format=pjpg&auto=webp&s=ae1d6ffcf8d4ca57d4aef14079e5f269459ca4e3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bTQ1dGIzaGh5cThjMezyX-e4wHRa30whTu-QzkNCcOmZyRGaQ5sxRDjVpfI4.png?width=320&crop=smart&format=pjpg&auto=webp&s=c046d002318673d741318c6efce6ebd231b2e15f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bTQ1dGIzaGh5cThjMezyX-e4wHRa30whTu-QzkNCcOmZyRGaQ5sxRDjVpfI4.png?width=640&crop=smart&format=pjpg&auto=webp&s=b02a96271f08c2dcf08212356e27128306de0bac', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bTQ1dGIzaGh5cThjMezyX-e4wHRa30whTu-QzkNCcOmZyRGaQ5sxRDjVpfI4.png?width=960&crop=smart&format=pjpg&auto=webp&s=9a34772fa8442ac06eb9425b7f3c42cf5e76795a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bTQ1dGIzaGh5cThjMezyX-e4wHRa30whTu-QzkNCcOmZyRGaQ5sxRDjVpfI4.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e8d8daafa65c6a1eecb4b91b96c63dc8c50f23a4', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bTQ1dGIzaGh5cThjMezyX-e4wHRa30whTu-QzkNCcOmZyRGaQ5sxRDjVpfI4.png?format=pjpg&auto=webp&s=8ca205565def0ff2aeb5a5ff37d36c09e72c8095', 'width': 1920}, 'variants': {}}]}
How can I check if a model runs with 8 GB of VRAM?
3
Any help?
2023-12-27T01:57:09
https://www.reddit.com/r/LocalLLaMA/comments/18roqgf/how_can_i_check_if_a_model_run_with_8_gb_vram/
More_Bid_2197
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18roqgf
false
null
t3_18roqgf
/r/LocalLLaMA/comments/18roqgf/how_can_i_check_if_a_model_run_with_8_gb_vram/
false
false
default
3
null
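A quick back-of-the-envelope check for the 8 GB VRAM question: model weight size is roughly parameter count times bits-per-weight divided by 8, plus some headroom for the KV cache and activations. The sketch below uses a 20% overhead factor, which is an assumption (a guess, not a measured constant — real headroom depends on context length and backend), but it gets you in the right ballpark before downloading anything:

```python
def approx_vram_gb(n_params_billion, bits_per_weight, overhead=1.2):
    # Weights only, times a fudge factor for KV cache and activations.
    # The 1.2 overhead is an assumption, not a measured constant.
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1024**3

fits_7b_q4 = approx_vram_gb(7, 4) < 8    # ~3.9 GB estimate -> fits in 8 GB
fits_13b_q8 = approx_vram_gb(13, 8) < 8  # ~14.5 GB estimate -> does not fit
```

In practice you can also just compare the GGUF file size on disk to your VRAM (the quantized file is close to the in-memory weight size), leaving a gigabyte or two free for context.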
Weird question
7
My computer isn't powerful. Not enough for the models I want to run. I'm not tech savvy in any sense of the word so I hope this isn't an idiotic question: I have a bunch of old graphics cards that would certainly add up. Is there a way to make a "GPU box"? Like a separate tower that houses multiple GPUs and/or processors that I can just "hook up" to my computer to super power it? Is something like this possible? Anything somewhat like that? Again I have a feeling I'm just talking nonsense but it sounds like such a simple thing to make.
2023-12-27T00:54:04
https://www.reddit.com/r/LocalLLaMA/comments/18rnewi/weird_question/
IZA_does_the_art
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rnewi
false
null
t3_18rnewi
/r/LocalLLaMA/comments/18rnewi/weird_question/
false
false
self
7
null
What is the best way to go about tinkering with Local LLM models?
7
I want to test out a few ideas that would require adjusting the model's architecture or finetuning certain parts of it. What is the best way to go about doing that? Specifically, I want to know if I should just use the huggingface transformers library and pull the model down, and then modify it with that. Or if there is a better library that is faster and easier to use. Say for example I wanted to test attaching additional layers in random places in the model or finetune a few random parts of the model while I freeze the rest, would huggingface be the best way to go about it?
2023-12-27T00:29:26
https://www.reddit.com/r/LocalLLaMA/comments/18rmw0d/what_is_the_best_way_to_go_about_tinkering_with/
ArrodesDev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rmw0d
false
null
t3_18rmw0d
/r/LocalLLaMA/comments/18rmw0d/what_is_the_best_way_to_go_about_tinkering_with/
false
false
self
7
null
How to name an offline LLM?
5
This is probably a really easy answer. Don’t laugh - I’m still learning. I’m using an offline model with RAG. The data is stored in a Chroma DB. It’s doing its job, answering questions on the data it’s been trained on (pdf books on data engineering, governance and strategy). I’ve named it ‘Eva’. But how do I get it to respond to that? If I ask it for its name, how do I get it to call itself Eva? Right now the question gets passed into langchain RetrievalQA with chain_type=‘stuff’. Do I need to feed an md file into the database to ‘tell it about itself’, or do I pass additional context into the RetrievalQA process on every query?
2023-12-26T23:37:34
https://www.reddit.com/r/LocalLLaMA/comments/18rlqbw/how_to_name_an_offline_llm/
Data_Driven_Guy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rlqbw
false
null
t3_18rlqbw
/r/LocalLLaMA/comments/18rlqbw/how_to_name_an_offline_llm/
false
false
self
5
null
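For the naming question above, the persona belongs in the prompt template sent on every query, not in the vector store — an md file in the database would only surface when retrieval happens to match it. With langchain's RetrievalQA you can inject a custom prompt via `chain_type_kwargs` (check your langchain version's docs for the exact mechanism); the framework-free sketch below shows the same 'stuff' idea with a persona line prepended on every call. The wording of `PERSONA` is illustrative, not from the post:

```python
PERSONA = (
    "You are Eva, an assistant for data engineering, governance and "
    "strategy questions. If asked who you are, answer that your name is Eva."
)

def build_prompt(question, context_chunks):
    # Same idea as RetrievalQA's 'stuff' chain: concatenate the retrieved
    # chunks into the prompt, but with the persona prepended every time.
    context = "\n\n".join(context_chunks)
    return f"{PERSONA}\n\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt("What is your name?", ["(retrieved chunk text)"])
```

Because the persona rides along with every query, the model answers "Eva" regardless of which chunks retrieval returns.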
Is RX 6700 any good?
5
Hello, I'm going to treat myself after Christmas by upgrading my GPU. I'm stuck between the RTX 3060 and RX 6700 XT, but I've seen some mixed opinions about AMD for LLMs. Is it now possible to run Mixtral with Ollama using ROCm?
2023-12-26T23:30:15
https://www.reddit.com/r/LocalLLaMA/comments/18rlkfb/is_rx_6700_any_good/
PrzemChuck
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rlkfb
false
null
t3_18rlkfb
/r/LocalLLaMA/comments/18rlkfb/is_rx_6700_any_good/
false
false
self
5
null
What is the best GPU(s) I can get for $5,000 or less.
3
I've got an old Dell PowerEdge T430 with dual Xeon E5-2680 V3 processors and 256gb of RAM that I'd like to use to run some local models. What's the best I can get for around $5k? Hoping to be able to run 30b or better at at least 6-10t/s.
2023-12-26T23:18:19
https://www.reddit.com/r/LocalLLaMA/comments/18rlanf/what_is_the_best_gpus_i_can_get_for_5000_or_less/
IWantAGI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rlanf
false
null
t3_18rlanf
/r/LocalLLaMA/comments/18rlanf/what_is_the_best_gpus_i_can_get_for_5000_or_less/
false
false
self
3
null
Are there any differences between files from TheBloke and the original creator?
1
[removed]
2023-12-26T22:59:01
https://www.reddit.com/r/LocalLLaMA/comments/18rkuev/are_there_any_differences_between_files_from/
PhantomWolf83
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rkuev
false
null
t3_18rkuev
/r/LocalLLaMA/comments/18rkuev/are_there_any_differences_between_files_from/
false
false
default
1
null
llmbinge - A web-app to explore topics using LLM (less typing and more clicks)
30
Project: [https://github.com/charstorm/llmbinge/](https://github.com/charstorm/llmbinge/) I was very impressed by tools like llama.cpp and the ollama server. I only have a 4060 and still it was fast and generated text was impressive. But I was a bit sad that the interface was mostly chat based and it involved a lot of typing. I wanted something like a wiki page where along with each response, we will get "suggestions" for nearby topics. After some trial and error, I managed to get the model to generate them as well. This meant we can dig deeper into a topic with a lot less typing. The interface looks like this: https://preview.redd.it/ewggkq74vp8c1.png?width=1135&format=png&auto=webp&s=da10c2d188f8d9d999ed055bdcc60c5bc1063973 * Generated (on the left) is like a history of all generated content * Related is generated by the LLM * Aspects are a fixed set and available for every response * Explored shows the child topics explored already under the current topic Front-end development isn't one of my core skills (as you can probably guess from the screenshot). I am using vuejs3/vuetify. So far ollama is the only available backend. I have tested with mistral and llama2. Mistral works well. Llama2 seems to give more detailed answers but sometimes gets too creative. Some learning: 1. Forcing LLM to give output in a given format is a lot of pain. It can get very creative and we will need to do some parsing and retry. 2. Waiting for output is boring. Incremental output at least makes the waiting less boring.
2023-12-26T22:54:02
https://www.reddit.com/r/LocalLLaMA/comments/18rkqd9/llmbinge_a_webapp_to_explore_topics_using_llm/
graphitout
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rkqd9
false
null
t3_18rkqd9
/r/LocalLLaMA/comments/18rkqd9/llmbinge_a_webapp_to_explore_topics_using_llm/
false
false
https://b.thumbs.redditm…i75eL0Ta5KLA.jpg
30
{'enabled': False, 'images': [{'id': 'ZuyZ8ItnPs8ReAOPPJUtmtq9WrTGkJejX4IvakbYVA4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RKEbhmc2pjYovVBLVQbLY-vvP2UxPfMBXJ_KuxpT8AY.jpg?width=108&crop=smart&auto=webp&s=5ec535606a6061edffc861276d98f0285392741b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RKEbhmc2pjYovVBLVQbLY-vvP2UxPfMBXJ_KuxpT8AY.jpg?width=216&crop=smart&auto=webp&s=819242814daf95bc98da39f7b044060943d25264', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RKEbhmc2pjYovVBLVQbLY-vvP2UxPfMBXJ_KuxpT8AY.jpg?width=320&crop=smart&auto=webp&s=4e9f47080b408d645255dd69f4ce0aa75adf86ad', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RKEbhmc2pjYovVBLVQbLY-vvP2UxPfMBXJ_KuxpT8AY.jpg?width=640&crop=smart&auto=webp&s=52f0d14668b6bac417caa4b99a37fc88d6078292', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RKEbhmc2pjYovVBLVQbLY-vvP2UxPfMBXJ_KuxpT8AY.jpg?width=960&crop=smart&auto=webp&s=20afd18bbe2b3013e3918177b649e47bfd09ec6f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RKEbhmc2pjYovVBLVQbLY-vvP2UxPfMBXJ_KuxpT8AY.jpg?width=1080&crop=smart&auto=webp&s=824ae3bf93218c25f62cdd4ea3bbb8236e8f4f95', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RKEbhmc2pjYovVBLVQbLY-vvP2UxPfMBXJ_KuxpT8AY.jpg?auto=webp&s=8745fb81b95080febe6ebe62ae3e25178a20b8be', 'width': 1200}, 'variants': {}}]}
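On learning point 1 in the llmbinge post above (forcing the LLM into a given output format needs parsing and retry): a common pattern is to salvage the structured span from whatever chatter the model wraps it in, and re-prompt a bounded number of times on failure. A minimal sketch — the fake `generate` backend below stands in for an ollama call, and the retry suffix wording is an illustrative assumption:

```python
import json
import re

def parse_suggestions(raw):
    # Models often wrap the JSON in chatter; salvage the first [...] span.
    match = re.search(r"\[.*\]", raw, re.DOTALL)
    if not match:
        raise ValueError("no JSON list found")
    return json.loads(match.group(0))

def generate_with_retry(generate, prompt, parse=parse_suggestions, max_tries=3):
    last_err = None
    for _ in range(max_tries):
        raw = generate(prompt)
        try:
            return parse(raw)
        except (ValueError, json.JSONDecodeError) as err:
            last_err = err
            # Tighten the instruction before retrying.
            prompt += "\nReturn ONLY a JSON list of strings, nothing else."
    raise ValueError(f"unparseable after {max_tries} tries: {last_err}")

# Fake backend: fails once, then complies -- stands in for the LLM server.
replies = iter(["Sure! Here are some ideas:",
                'Here you go: ["topic a", "topic b"]'])
suggestions = generate_with_retry(lambda p: next(replies),
                                  "Suggest related topics.")
```

The salvage step matters more than the retry in practice: most "format failures" are valid JSON buried in a polite preamble, which the regex recovers on the first attempt.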
Recommended_max_working_size change - is it possible?
1
hello, as the title says - is it possible to change the recommended max working size on a MacBook? or is there some way to run Q8 deepseek coder 33b on one? any help is greatly appreciated!
2023-12-26T22:32:50
https://www.reddit.com/r/LocalLLaMA/comments/18rk8l0/recommended_max_working_size_change_is_it_possible/
Fearless-Chapter1413
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rk8l0
false
null
t3_18rk8l0
/r/LocalLLaMA/comments/18rk8l0/recommended_max_working_size_change_is_it_possible/
false
false
default
1
null
Is it currently possible to run Mixtral 8x7B locally in multimodal mode?
1
[removed]
2023-12-26T22:18:56
https://www.reddit.com/r/LocalLLaMA/comments/18rjwsi/is_it_currently_possible_to_run_mixtral_8x7b/
Imunoglobulin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rjwsi
false
null
t3_18rjwsi
/r/LocalLLaMA/comments/18rjwsi/is_it_currently_possible_to_run_mixtral_8x7b/
false
false
self
1
null
Are there any LLMs that are frequently updated with the latest news and events of the world?
8
Is there anything like that? If I wanted to chat with an LLM that would know about last week's major happenings.. is there one that's constantly being updated, with patches maybe, to have a knowledge cutoff date as recent as possible?
2023-12-26T22:17:04
https://www.reddit.com/r/LocalLLaMA/comments/18rjv8c/are_there_any_llms_that_are_frequently_updated/
spar_x
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rjv8c
false
null
t3_18rjv8c
/r/LocalLLaMA/comments/18rjv8c/are_there_any_llms_that_are_frequently_updated/
false
false
self
8
null
Since the launch of the AI Pin, have there been any alternatives announced that are open platform?
2
I just got a lil chunk of change for Xmas to "treat myself to something crazy" and the first thought I had was that AI Pin I had seen a commercial for. However a few minutes into watching their pretty in-depth technical promo, I immediately thought to myself.. ain't no way I'm spending 600$ on this thing if I can't dev with it. that means connecting to locally-run LLMs, and creating or altering the whole thing really. Like I'd be happy to use their version of it, but I'd really want to use their pretty nice hardware to run my own scripts and open it up. Looks awesome otherwise but as a developer that's already deep into AI integrations, the form factor and speaker and product capabilities seem awesome BUT.. I wouldn't feel right investing into that much if I can't properly "play" with it. Has anyone been able to jailbreak the new AI Pin yet? Does it look hopeful for its future on that front? Otherwise.. it seems like there should be a big opening for such a device.. any news or discourse about such a thing that anyone knows about? I couldn't find anything yet, might not be using the right keywords. Thanks!
2023-12-26T22:14:35
https://www.reddit.com/r/LocalLLaMA/comments/18rjt3s/since_the_launch_of_the_ai_pin_have_there_been/
spar_x
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rjt3s
false
null
t3_18rjt3s
/r/LocalLLaMA/comments/18rjt3s/since_the_launch_of_the_ai_pin_have_there_been/
false
false
self
2
null
Don't underestimate the importance of the system prompt
72
## Mistral-7b without a customized system prompt (llama.cpp ui default) ### The prompt This is a conversation between User and Llama, a friendly chatbot. Llama is helpful, kind, honest, good at writing, and never fails to answer any requests immediately and with precision. ### The chat Laura: My kitchen is disturbingly messy. It has mold, there's a lot of oil and fat stuck to the furnace, the dishes are undone and it's small. I'm alone and have just gone through a breakup and don't have a lot of energy. Llama: I understand that this can be overwhelming and stressful for you right now. The best thing to do would be to start cleaning little by little, even if it doesn’t seem like much progress is being made at first. You might also want to look into some professional help. Sometimes hiring a cleaner or organizing service can save time and energy in the long run. ## Mistral-7b _with_ a customized system prompt ### The prompt This is a conversation between Laura and Blotbot, a friendly chatbot. Blotbot is helpful, kind and honest. Laura is an intelligent and driven individual, but struggles with ADHD. Laura is strong and independent, but thinks best when someone is there to talk through her thought processes and most specifically needs help with anything related to executive functioning. Latent space activation keywords for the chat: Dr. Russel Barkley, Dr Peter Gray, ADHD, executive functioning, cognitive psychology, coaching, Computer Science, ^_^ Chat begins below: ### The chat Laura: My kitchen is disturbingly messy. It has mold, there's a lot of oil and fat stuck to the furnace, the dishes are undone and it's small. I'm alone and have just gone through a breakup and don't have a lot of energy. Blotbot: Hello Laura! I understand that this situation might seem overwhelming right now, especially given everything you've been going through lately with your break-up. Would you like me to suggest some ways on how we could clean up the kitchen more efficiently?
Laura: Please Blotbot: Alright. Here are a few suggestions: 1. Start by decluttering - Pick out everything that's not in its place, and put it there. This might include things like takeout containers, cleaning supplies or even sentimental items left over from your previous life with someone else. You can always find time to deal with these items later when you have more energy. 2. Start small - You don't have to clean everything at once. Break down the big task into smaller parts that are manageable within a certain amount of time (let's say, per hour or two). This will prevent from feeling overwhelmed and may help increase your motivation in the process. 3. Use cleaning schedule - It could be helpful to establish a daily routine for yourself as well. Set aside specific times during each day for kitchen tasks like washing dishes, wiping down counters etc., which will ensure that they get done without feeling like an added burden. 4. Clean from top to bottom - This means start with the highest point in your workspace and work your way downwards. In your case, it might be starting off by cleaning the fridge or cupboards and then proceeding onto surfaces and floor. This method ensures that you don't have to clean the same spot twice. Remember to take breaks as needed. Keep yourself hydrated and eat a little something if needed so your energy doesn’t flag during this process. It may also help to listen to music or use some other relaxation technique if things feel too stressful. You got this, Laura! Laura: I just vacuumed the counter. It makes me feel so strange that I have to do that, I don't think "normal" people have to vacuum the counter, do they? Blotbot: Vacuuming a cleanable surface like a kitchen counter is not necessarily normal for everyone as it depends on individual preferences and habits. 
However, in situations where you are dealing with grease or oil that does not wipe off easily from your countertop, using a vacuum to remove loose particles of food and debris could be beneficial. This can also help prevent further buildup which might make cleaning harder in the future. Also, remember that it's okay to ask for help if you find tasks too overwhelming or need additional support with them. It takes courage to seek assistance when needed and there is no shame in doing so. We all have our limits! If you ever feel like your executive functioning skills are not sufficient at managing everyday life tasks, don't hesitate to reach out for professional help. There could be cognitive coaching or even therapy that can provide targeted strategies to improve these functions. Laura: You're right, it's a little bit better now. There's still a lot of work to do Blotbot: That's perfectly alright, progress takes time and patience. Remember you are in control of how much effort or pace you want to set for this task. If you need any more help, please remember that I am here for you. Also, if managing your day-to-day tasks is becoming consistently overwhelming because of ADHD symptoms like impulsivity, distractibility and difficulties with planning or initiating activities, consider seeking professional assistance. A psychologist who specializes in Executive Function disorders could provide you targeted strategies to improve these specific skills, or a specialist such as Dr Russel Barkley might be able to help if you're interested in more behavioral approaches to managing your symptoms. Also remember that it’s okay to take things slowly and not push yourself too hard right now. The kitchen will get cleaned over time with persistence, patience and a good strategy, even if it isn't as quick as you might like! You got this!
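For anyone wiring this up programmatically: the only difference between the two runs is the text placed before the chat. A minimal sketch of assembling such a prompt - the template and wording here are illustrative, not llama.cpp's exact default format:

```python
def build_prompt(system: str, turns: list[tuple[str, str]], user_msg: str) -> str:
    """Assemble a plain chat transcript with a leading system prompt.

    Mirrors the simple "System\nUser: ...\nBot: ..." style some web UIs
    use; check your backend's actual template before relying on it.
    """
    lines = [system.strip(), ""]
    for user, bot in turns:
        lines.append(f"User: {user}")
        lines.append(f"Bot: {bot}")
    lines.append(f"User: {user_msg}")
    lines.append("Bot:")  # the model continues from here
    return "\n".join(lines)

tailored = (
    "This is a conversation between Laura and Blotbot, a friendly chatbot. "
    "Laura struggles with executive functioning and needs concrete, kind help. "
    "Latent space activation keywords: ADHD, executive functioning, coaching."
)

prompt = build_prompt(tailored, [], "My kitchen is disturbingly messy.")
print(prompt)
```

Swapping only the `tailored` string for a generic one reproduces the two experiments above.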
2023-12-26T22:12:35
https://www.reddit.com/r/LocalLLaMA/comments/18rjriu/dont_underestimate_the_importance_of_the_system/
Combinatorilliance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rjriu
false
null
t3_18rjriu
/r/LocalLLaMA/comments/18rjriu/dont_underestimate_the_importance_of_the_system/
false
false
self
72
null
A link to an internet relay chat server that has a channel with a listening/responsive large language model artificial intelligence
1
Plz
2023-12-26T22:03:33
https://www.reddit.com/r/LocalLLaMA/comments/18rjjz2/a_link_to_an_internet_relay_chat_server_who_has_a/
xqoe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rjjz2
false
null
t3_18rjjz2
/r/LocalLLaMA/comments/18rjjz2/a_link_to_an_internet_relay_chat_server_who_has_a/
false
false
default
1
null
What is the best NSFW small model (under 10GB)?
56
Hi, I'm trying to find a model that is under 10 GB and that is the best for NSFW roleplay/storytelling/chatting. I want to create a chatbot mobile app, and as far as I can see there is no way to host a model that is over 10 GB for free - HuggingFace only allows hosting the Inference API on models under 10 GB. What are your recommendations? Ideally it would have an Inference API enabled.
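On the hosting side, a hosted-API call is just an authorized POST to the model's endpoint. A sketch that builds (but does not send) the request using only the standard library - the endpoint shape follows the Hugging Face Inference API docs at the time of writing, and the model id and token below are placeholders/examples:

```python
import json
import urllib.request

def build_inference_request(model_id: str, prompt: str, token: str) -> urllib.request.Request:
    """Prepare a call to the hosted Inference API without sending it."""
    url = f"https://api-inference.huggingface.co/models/{model_id}"
    data = json.dumps({"inputs": prompt, "parameters": {"max_new_tokens": 200}}).encode()
    return urllib.request.Request(
        url,
        data=data,
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    )

req = build_inference_request("teknium/OpenHermes-2.5-Mistral-7B", "Hello!", "hf_xxx")
print(req.full_url)
# To actually send it: urllib.request.urlopen(req)  -- needs network and a real token
```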
2023-12-26T21:49:12
https://www.reddit.com/r/LocalLLaMA/comments/18rj7zs/what_is_the_best_nsfw_small_model_under_10gb/
These-Ad-6738
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rj7zs
false
null
t3_18rj7zs
/r/LocalLLaMA/comments/18rj7zs/what_is_the_best_nsfw_small_model_under_10gb/
false
false
nsfw
56
null
Can you help me understand where the advantages in having an opensource model with the power of GPT-4 would lie?
17
tldr version: If there were an open-source model as strong as GPT-4, the advantage surely could only play out when it's run on powerful hardware, which people would have to pay for - correct? But of course, I can already do pay-per-use with GPT-4. Would the true advantage lie in being able to pay for access to uncensored versions, and if so, wouldn't the companies providing this access be in danger of getting shut down by government? \*\*\*\* I just watched a video by Wes Roth where he predicted that 2024 will bring the arrival of an open-source model with GPT-4-like capabilities. Help me understand why this would be a big deal. (I'm very much not saying it wouldn't be, I'm truly just trying to understand.) If there were such a model - what would it allow me, or allow people, to do that I couldn't do before? Am I correct in assuming that it will be impossible to create a model that I can run on consumer hardware, even good consumer hardware, that can compete with GPT-4? If this is correct, is it correct to say I would have to pay per use for running that open-source model on powerful hardware, via something like OpenRouter or Poe or whatever, to get similar performance? But of course, I can already do pay-per-use with GPT-4 via, say, the OpenAI playground. So what exactly would the advantage be? What would strike me personally as a big advantage is if there was an UNCENSORED model that is at least as strong as GPT-4. So is the big advantage that it's possible to create uncensored, or rather "decensored", fine-tunes of this GPT-4-level model, like the Dolphin versions of Mixtral? Is the big advantage that people can create ANY sort of fine-tunes, custom-tailored to specific use cases? So, say, creating a fine-tune that would be better at writing good prose than GPT-4? 
If so - is it fair to say that publishing an open-source GPT-4-level model would be the first important step, and the second important step would be pay-per-use services that allow people to use fine-tunes of those models? So, say, allow me to use a decensored fine-tune? If so - how are they going to defend enabling anyone to use uncensored models against government interference? Where, according to you, lie the big advantages in this? Again, I am not doubting in any way that they exist; my goal is to get a better understanding.
2023-12-26T21:39:08
https://www.reddit.com/r/LocalLLaMA/comments/18rizpg/can_you_help_me_understand_where_the_advantages/
hugo-the-second
self.LocalLLaMA
2023-12-26T21:59:10
0
{}
18rizpg
false
null
t3_18rizpg
/r/LocalLLaMA/comments/18rizpg/can_you_help_me_understand_where_the_advantages/
false
false
self
17
null
A bunch of you talked to the AI trained on my text messages. Here's AIRIC-The-Mistral if you want an assistant model that talks more like a person
8
2023-12-26T20:57:52
https://huggingface.co/ericpolewski/AIRIC-The-Mistral
LetMeGuessYourAlts
huggingface.co
1970-01-01T00:00:00
0
{}
18ri2w1
false
null
t3_18ri2w1
/r/LocalLLaMA/comments/18ri2w1/a_bunch_of_you_talked_to_the_ai_trained_on_my/
false
false
https://b.thumbs.redditm…KpCWc8GuGCSo.jpg
8
{'enabled': False, 'images': [{'id': 'KmqGNsh8EGyuHtBwBMBtbMxlLF_ZdYc3mfYqqPkp8-M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/f9Cmq5WLBQ_vO6mVARM2z7RN590lq4-jH0vwCQiPk3o.jpg?width=108&crop=smart&auto=webp&s=81c4dc7d808f60da977afed3b223e4c8d7518f2f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/f9Cmq5WLBQ_vO6mVARM2z7RN590lq4-jH0vwCQiPk3o.jpg?width=216&crop=smart&auto=webp&s=3f8337aca9b3c541b7ca60c28a4483822e465364', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/f9Cmq5WLBQ_vO6mVARM2z7RN590lq4-jH0vwCQiPk3o.jpg?width=320&crop=smart&auto=webp&s=3f4ed64630014bffd8025b3cc1df78cd1b27f9fb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/f9Cmq5WLBQ_vO6mVARM2z7RN590lq4-jH0vwCQiPk3o.jpg?width=640&crop=smart&auto=webp&s=ba4e50649b1faf6612e7b218728f7a740b90ff7c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/f9Cmq5WLBQ_vO6mVARM2z7RN590lq4-jH0vwCQiPk3o.jpg?width=960&crop=smart&auto=webp&s=1fe6cec006fbb1862ec06c8ea53dea33739fdc7d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/f9Cmq5WLBQ_vO6mVARM2z7RN590lq4-jH0vwCQiPk3o.jpg?width=1080&crop=smart&auto=webp&s=22f13bcb7eb88a9ccebd7bc942437deb17c7eee3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/f9Cmq5WLBQ_vO6mVARM2z7RN590lq4-jH0vwCQiPk3o.jpg?auto=webp&s=40b02879b3ca65efb82f57ec5d3d67309b5d7c89', 'width': 1200}, 'variants': {}}]}
A Free AI Scribe Option Project I Am Working On
1
[removed]
2023-12-26T20:54:50
https://www.reddit.com/r/LocalLLaMA/comments/18ri0ef/a_free_ai_scribe_option_project_i_am_working_on/
ThrowAway12461246124
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ri0ef
false
null
t3_18ri0ef
/r/LocalLLaMA/comments/18ri0ef/a_free_ai_scribe_option_project_i_am_working_on/
false
false
default
1
{'enabled': False, 'images': [{'id': 'g-gm_7sp2xVrueg5wNNL15EVMgi9drQs__ZKdb7124s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=108&crop=smart&auto=webp&s=3e4c824678ea75d940979cb99e9778de619879fd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=216&crop=smart&auto=webp&s=d6311a8a13afd0bc970934ca28c6bcc362460dbd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=320&crop=smart&auto=webp&s=e4895a01657cae0311c218358562b2f78c1667b1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=640&crop=smart&auto=webp&s=1b0c365f709dc193ace21f66cd900f68aef2b003', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=960&crop=smart&auto=webp&s=3a44a99839db895b3ae8ab53c35e4330527d7f70', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=1080&crop=smart&auto=webp&s=8a19611e8ff14c39ebfd34aec43eca822319f51f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?auto=webp&s=f433dfcecbb69429497227ae87374c5c9dd958e6', 'width': 1200}, 'variants': {}}]}
Fine tuning approach for incorporation of new knowledge (new languages?)
2
Hey, There are 3 kinds of FT I am aware of:

1. Preference optimization (RLHF, DPO, or other) for alignment
2. "Instruction tuning" to teach base models how to do specific tasks (requires supervised data)
3. Proper "continual" learning to fine-tune the base model to predict the next token (also called "causal autoregressive", if I am not mistaken)

I assume that (3) would probably be the most promising one when willing to inject new knowledge* (factual data, events, even new languages), but I can't find a tutorial or material comparing approaches (3) and (2) (everyone reports that (2) is "for form" rather than for knowledge - can anybody back this with data?)

*I know RAG is 95% of the time the solution to this, but I think it can't work across all use cases
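To make (3) concrete: causal continued training has no prompt/response pairs - the examples are just fixed-length blocks of the tokenized corpus, with labels equal to the inputs (the next-token shift happens inside the loss, as in HF transformers' causal-LM heads). A sketch of the packing step with stand-in token ids:

```python
def pack_blocks(token_ids: list[int], block_size: int) -> list[dict]:
    """Pack a raw token stream into fixed-length causal-LM examples.

    For plain next-token training the labels are the inputs themselves;
    the one-position shift is applied inside the loss function.
    """
    examples = []
    for start in range(0, len(token_ids) - block_size + 1, block_size):
        block = token_ids[start:start + block_size]
        examples.append({"input_ids": block, "labels": list(block)})
    return examples

stream = list(range(10))          # stand-in for a tokenized corpus
print(pack_blocks(stream, 4))     # two full blocks; the short tail is dropped
```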
2023-12-26T20:53:33
https://www.reddit.com/r/LocalLLaMA/comments/18rhzey/fine_tuning_approach_for_incorporation_of_new/
BenXavier
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rhzey
false
null
t3_18rhzey
/r/LocalLLaMA/comments/18rhzey/fine_tuning_approach_for_incorporation_of_new/
false
false
self
2
null
mistral-ft-optimized-1218's Weights Were Already on the Hub Under a Different Model Name
194
Hey there, I'm Weyaxi from Hugging Face. Recently, I came across a blog post that merged [Q-bert/MetaMath-Cybertron-Starling](https://hf.co/Q-bert/MetaMath-Cybertron-Starling) and [Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp](https://hf.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp). You can find the blog post here: [https://openpipe.ai/blog/mistral-7b-fine-tune-optimized](https://openpipe.ai/blog/mistral-7b-fine-tune-optimized). Upon reading it, I realized that I had already merged those two models two weeks ago and published a model under the name [Weyaxi/Seraph-7B](https://hf.co/Weyaxi/Seraph-7B). Regardless, I initially thought it might be a different technique, but upon inspecting the weights, they appear to be identical (sha256 hash is the same). I attempted to reach out to the author of the blog post on X (Twitter), but unfortunately, I haven't received a response from him. In any case, I wanted to bring this to the attention of the community and hear your opinions and thoughts on the matter. Thanks and have a nice day!
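For anyone wanting to reproduce the check: hashing the weight shards is enough to prove two uploads are byte-identical. A small helper that streams the file so multi-gigabyte shards don't need to fit in RAM (the filename in the comment is illustrative):

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a (possibly multi-GB) weight file without loading it whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Identical digests for every shard => byte-identical weights, e.g.:
# file_sha256("seraph-7b/model-00001-of-00002.safetensors")
```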
2023-12-26T20:48:27
https://www.reddit.com/r/LocalLLaMA/comments/18rhv8r/mistralftoptimized1218s_weights_were_already_on/
Weyaxi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rhv8r
false
null
t3_18rhv8r
/r/LocalLLaMA/comments/18rhv8r/mistralftoptimized1218s_weights_were_already_on/
false
false
self
194
{'enabled': False, 'images': [{'id': 'rFQZOJ2Mb0FyHI8dBcgH01laRSjs24OqbxPE7xr8OL4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/90G7zrsrICgeGYda2p1Wf_g7q-5ppGdIwWx9IQk3ytg.jpg?width=108&crop=smart&auto=webp&s=54ae4c255ffbd243ae934074f613e88030c841b8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/90G7zrsrICgeGYda2p1Wf_g7q-5ppGdIwWx9IQk3ytg.jpg?width=216&crop=smart&auto=webp&s=2ca1b40a7ef2ff208afcddd2cc7647ac44ee6221', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/90G7zrsrICgeGYda2p1Wf_g7q-5ppGdIwWx9IQk3ytg.jpg?width=320&crop=smart&auto=webp&s=a85356933045401f63c61559c0e7684484df54a9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/90G7zrsrICgeGYda2p1Wf_g7q-5ppGdIwWx9IQk3ytg.jpg?width=640&crop=smart&auto=webp&s=00d1adeb2b2aee2bdd595b501da51003d84e624f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/90G7zrsrICgeGYda2p1Wf_g7q-5ppGdIwWx9IQk3ytg.jpg?width=960&crop=smart&auto=webp&s=bd2505dc6cc189b7e8d3a73d64ebdde65e3ebcdb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/90G7zrsrICgeGYda2p1Wf_g7q-5ppGdIwWx9IQk3ytg.jpg?width=1080&crop=smart&auto=webp&s=6f15620c3e9500d63f867b0ca19fba267451c055', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/90G7zrsrICgeGYda2p1Wf_g7q-5ppGdIwWx9IQk3ytg.jpg?auto=webp&s=117bc6801056a3050405ae10dcfc29367cc16554', 'width': 1200}, 'variants': {}}]}
Mixtral 8x7B instruct for chatting with memory?
2
Hi y'all, Just experimented with the Mixtral 8x7B instruct version. It works well for RAG if there is only one exchange of prompt + answer. With the LLaMA 2 70B chat version, it's fairly easy to add a memory functionality using LangChain in the QA retrieval chain. But I can't find a solution to add chat history/memory with this Mixtral instruct version. Is there a workaround to initiate/update/incorporate memory in this Mixtral case?
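One framework-free workaround is to keep the history yourself and re-render it into Mixtral's instruct template on every turn. A sketch - the `[INST]` template below follows the model card at the time of writing, but double-check the exact BOS/EOS handling against your tokenizer's chat template:

```python
def mistral_instruct_prompt(history: list[tuple[str, str]], user_msg: str) -> str:
    """Fold (user, assistant) chat history into the Mixtral-instruct template."""
    parts = ["<s>"]
    for user, assistant in history:
        parts.append(f"[INST] {user} [/INST] {assistant}</s>")
    parts.append(f"[INST] {user_msg} [/INST]")
    return "".join(parts)

history = [("What is RAG?", "Retrieval-augmented generation ...")]
print(mistral_instruct_prompt(history, "How do I add memory to it?"))
```

This is roughly what a LangChain conversational retrieval chain's memory does before the prompt is built; if the built-in chain fights the Mixtral template, rendering the prompt manually like this sidesteps it.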
2023-12-26T20:37:24
https://www.reddit.com/r/LocalLLaMA/comments/18rhmi1/mixtral_8x7b_instruct_for_chatting_with_memory/
BtownIU
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rhmi1
false
null
t3_18rhmi1
/r/LocalLLaMA/comments/18rhmi1/mixtral_8x7b_instruct_for_chatting_with_memory/
false
false
self
2
null
NVIDIA 3060 12GB vs. 4060ti 16GB for LLM
1
I'm trying to decide between an NVIDIA 3060 12GB and a 4060 Ti 16GB for testing out running a local LLM (I'm particularly interested in GPT4All's LocalDocs plugin for work). I know both are on the smaller side for running LLMs, but as I understand it they should be OK with 7B/13B models at Q4 or Q5, which I think will be more than enough for my use case (though please correct me if you disagree). The 4060 Ti seems better in almost every way except memory bandwidth, but without testing them side by side it's hard to see what this will actually amount to in terms of performance or viability. In the UK the 4060 Ti is about £450, the 3060 about £270. Do you think it is worth the extra money for the 4060 Ti? (If it makes any difference, my CPU is a 12400F w/ 32GB of 3600MHz DDR4 & an aging RX 590 8GB). Thanks!
2023-12-26T20:30:30
https://www.reddit.com/r/LocalLLaMA/comments/18rhgy6/nvidia_3060_12gb_vs_4060ti_16gb_for_llm/
snowcountry556
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rhgy6
false
null
t3_18rhgy6
/r/LocalLLaMA/comments/18rhgy6/nvidia_3060_12gb_vs_4060ti_16gb_for_llm/
false
false
default
1
null
Use cases for Open Source AI
1
I want to dive into the topic of open-source AI, but I genuinely want to know the different use cases - especially how fine-tuning an AI model can help me as an engineering undergrad or in my future as an engineer. Also, can you list some resources so I can understand more about this topic? And what would be the best use case in the context of your professional field? That would be of great help.
2023-12-26T20:10:03
https://www.reddit.com/r/LocalLLaMA/comments/18rgzzu/use_cases_for_open_source_ai/
spartan_zeta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rgzzu
false
null
t3_18rgzzu
/r/LocalLLaMA/comments/18rgzzu/use_cases_for_open_source_ai/
false
false
default
1
null
Is there any way to execute miniconda prompts from a batch file?
1
So as the title suggests, is there a way to run a Miniconda prompt through a batch file? As a bit of context, I have recently put together SillyTavern with a local LLM (oobabooga), image gen (Automatic1111) and I've even made it partially remote up until now (still some details to work out). But running 3 batch files and 1 Miniconda prompt (two commands) is kind of annoying, and I would love to make it all run in one go to make things easier. I know there are ways to run other batch files from one, and I believe it won't be difficult to set up, but one of them requires me to execute two commands through a Miniconda prompt. Is there any way for me to do this? Miniconda doesn't work in my regular terminal, by the way; it has to be in "Anaconda prompt (miniconda3)".
2023-12-26T19:59:55
https://www.reddit.com/r/LocalLLaMA/comments/18rgr13/is_there_any_way_to_execute_miniconda_prompts/
OvercookedSatellite
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rgr13
false
null
t3_18rgr13
/r/LocalLLaMA/comments/18rgr13/is_there_any_way_to_execute_miniconda_prompts/
false
false
default
1
null
Linux tip: Use xfce desktop. Consumes less vram
75
If you are wondering which desktop to run on Linux, I recommend [xfce](https://en.wikipedia.org/wiki/Xfce) over GNOME and KDE. I previously liked KDE best, but since xfce reduces VRAM usage by about 0.5 GB, I decided to go with it. This lets me run more GPU layers on my Nvidia RTX 3090 24GB, which means my Dolphin 8x7B LLM runs significantly faster. Using llama.cpp I'm able to run --n-gpu-layers=27 with 3-bit quantization. Hopefully this time next year I'll have a 32 GB card and be able to run entirely on GPU - I'd need to fit all 33 layers for that.

sudo apt install xfce4

Make sure you review desktop startup apps and remove anything you don't use. What do you think?
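On the "how many layers fit" arithmetic: a back-of-the-envelope estimate is free VRAM minus KV-cache/scratch overhead, divided by per-layer weight size. All numbers below are illustrative - measure your own with nvidia-smi:

```python
def layers_that_fit(free_vram_gib: float, model_gib: float, n_layers: int,
                    overhead_gib: float = 1.5) -> int:
    """Rough rule of thumb: offloadable layers = usable VRAM / per-layer size.

    overhead_gib stands in for KV cache and scratch buffers; it varies a lot
    with context length, so treat this as an estimate, not a guarantee.
    """
    per_layer = model_gib / n_layers
    return max(0, min(n_layers, int((free_vram_gib - overhead_gib) / per_layer)))

# e.g. a ~21 GiB 3-bit 8x7B over 33 layers, with ~5 GiB of the 24 GiB card
# eaten by the desktop and other buffers:
print(layers_that_fit(18.7, 21.0, 33))  # → 27
```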
2023-12-26T19:55:09
https://www.reddit.com/r/LocalLLaMA/comments/18rgn1b/linux_tip_use_xfce_desktop_consumes_less_vram/
danielcar
self.LocalLLaMA
2023-12-26T20:00:32
0
{}
18rgn1b
false
null
t3_18rgn1b
/r/LocalLLaMA/comments/18rgn1b/linux_tip_use_xfce_desktop_consumes_less_vram/
false
false
self
75
{'enabled': False, 'images': [{'id': 'bQedB_cYxDVdJMPhoGrUhAMtBgsvuik9Bo2Mu9r3-vo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/VG_mitS2iT_t4Cxmut-LqChlaBRb0Mz7L2qWoPNCybY.jpg?width=108&crop=smart&auto=webp&s=5ab449b715a5592b4e32469e346bae72e2fdcb9d', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/VG_mitS2iT_t4Cxmut-LqChlaBRb0Mz7L2qWoPNCybY.jpg?width=216&crop=smart&auto=webp&s=5107ddae862801847481f2debded6c4f221f296e', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/VG_mitS2iT_t4Cxmut-LqChlaBRb0Mz7L2qWoPNCybY.jpg?width=320&crop=smart&auto=webp&s=58f9a8bc897de11420379623e01e2f5903d5b386', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/VG_mitS2iT_t4Cxmut-LqChlaBRb0Mz7L2qWoPNCybY.jpg?width=640&crop=smart&auto=webp&s=b7c76536d502243581b8f917f37a830d1292a666', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/VG_mitS2iT_t4Cxmut-LqChlaBRb0Mz7L2qWoPNCybY.jpg?width=960&crop=smart&auto=webp&s=a338a29c25cdb111f364a188ea75a33c5211f06d', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/VG_mitS2iT_t4Cxmut-LqChlaBRb0Mz7L2qWoPNCybY.jpg?width=1080&crop=smart&auto=webp&s=41487240d0848effdac6951338473a9578ea2e7c', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/VG_mitS2iT_t4Cxmut-LqChlaBRb0Mz7L2qWoPNCybY.jpg?auto=webp&s=67c81d88dec6280733c8346b3926be5322fb31ad', 'width': 1200}, 'variants': {}}]}
I made my own batching/caching API over the weekend. 200+ tk/s with Mistral 5.0bpw esl2 on an RTX 3090. It was for a personal project, and it's not complete, but happy holidays! It will probably just run in your LLM Conda env without installing anything.
88
2023-12-26T19:41:40
https://github.com/epolewski/EricLLM
LetMeGuessYourAlts
github.com
1970-01-01T00:00:00
0
{}
18rgbzj
false
null
t3_18rgbzj
/r/LocalLLaMA/comments/18rgbzj/i_made_my_own_batchingcaching_api_over_the/
false
false
https://b.thumbs.redditm…wXJS3Uf0nV-s.jpg
88
{'enabled': False, 'images': [{'id': 'YiFX-06Fm23aMUA1coaeY0FNTy9a2sYDE3-htsaXQwk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Yx4j-cLVxcG-aoJJASEVEi_yg8bbdhtZAzSyhfLEy_U.jpg?width=108&crop=smart&auto=webp&s=c1044ea5e5f10eb9cb5e3394006f022495c626e9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Yx4j-cLVxcG-aoJJASEVEi_yg8bbdhtZAzSyhfLEy_U.jpg?width=216&crop=smart&auto=webp&s=bf9dcf29619044e2b63b1197ec42511d09bccdff', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Yx4j-cLVxcG-aoJJASEVEi_yg8bbdhtZAzSyhfLEy_U.jpg?width=320&crop=smart&auto=webp&s=bbe6d9b674bd0fe0eb1d514a4af0cdf6b41ba86d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Yx4j-cLVxcG-aoJJASEVEi_yg8bbdhtZAzSyhfLEy_U.jpg?width=640&crop=smart&auto=webp&s=64455fe48e69d9204f4f1ce20afe7689eef3bec5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Yx4j-cLVxcG-aoJJASEVEi_yg8bbdhtZAzSyhfLEy_U.jpg?width=960&crop=smart&auto=webp&s=c6f503f466ec42232ef061f4dc278496ed67fb0f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Yx4j-cLVxcG-aoJJASEVEi_yg8bbdhtZAzSyhfLEy_U.jpg?width=1080&crop=smart&auto=webp&s=c67417b69472b4c45af429f2efa8ba92d4ab6e08', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Yx4j-cLVxcG-aoJJASEVEi_yg8bbdhtZAzSyhfLEy_U.jpg?auto=webp&s=2adf912b4fb8260261dddeab92460c7bbd1d3773', 'width': 1200}, 'variants': {}}]}
Retnet vs Mamba
25
Which architecture do you expect to perform better? I have not dived deeply into the workings of either of the two, but both are being discussed as potential successors to the transformer architecture.
2023-12-26T19:31:01
https://www.reddit.com/r/LocalLLaMA/comments/18rg31o/retnet_vs_mamba/
MLTyrunt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rg31o
false
null
t3_18rg31o
/r/LocalLLaMA/comments/18rg31o/retnet_vs_mamba/
false
false
self
25
null
Fine-tuning an instruct-based model on a new domain unsupervised
4
Hey, I came across this article: [https://www.stochastic.ai/blog/xfinance-vs-bloomberg-gpt](https://www.stochastic.ai/blog/xfinance-vs-bloomberg-gpt) I was planning on doing an experiment where I attempt to further train a 7B model (will probably go with Mistral's 7B instruct model) over a repository in the hope that it will learn about it and be able to respond to questions on it. I have about 5MB of raw text data containing PRs, issues, source code, etc., and then a file with a bunch of structured Q&A data. I believe the approach stated in this article starts off with a pre-trained model, extends the pre-training process via unsupervised learning over raw text data, and then fine-tunes it with the instruct approach. To update the model's knowledge each month, they would return to the latest checkpoint of the pre-trained model, continue training on new data, and then completely re-apply the instruct-based fine-tuning. However, I wonder if it is possible to train the model directly from the instruct-based model using raw text data. This would allow for continuous learning without the need for re-applying the instruct-based fine-tuning each time. Has anyone tried this?
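If you do continue training the instruct model directly on raw text, a common mitigation for it forgetting the chat format is to keep replaying a trickle of instruct data alongside the raw batches. A sketch of such a mixing schedule (the 4:1 ratio is an arbitrary illustrative choice, not a recommendation from the article):

```python
import itertools

def mixed_batches(raw_batches, instruct_batches, raw_per_instruct: int = 4):
    """Interleave continued-pretraining batches with replayed instruct batches.

    Yields (kind, batch) pairs: mostly raw-text batches, with one instruct
    batch inserted after every `raw_per_instruct` raw batches.
    """
    ins_it = itertools.cycle(instruct_batches)  # replay instruct data forever
    for i, raw in enumerate(raw_batches, start=1):
        yield ("raw", raw)
        if i % raw_per_instruct == 0:
            yield ("instruct", next(ins_it))

schedule = [kind for kind, _ in mixed_batches(range(8), ["qa1", "qa2"], 4)]
print(schedule)  # 4 raw, 1 instruct, repeated
```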
2023-12-26T19:27:16
https://www.reddit.com/r/LocalLLaMA/comments/18rfzwq/finetuning_an_instructbased_model_on_a_new_domain/
01jonathanf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rfzwq
false
null
t3_18rfzwq
/r/LocalLLaMA/comments/18rfzwq/finetuning_an_instructbased_model_on_a_new_domain/
false
false
self
4
{'enabled': False, 'images': [{'id': 'Is05I376ORzspFgFJyYlJFA_FbBsNIpYQAg3vx-lSEc', 'resolutions': [{'height': 101, 'url': 'https://external-preview.redd.it/d66YjF1ERcC_yDB2Qx9McnmryKEZMDV_WL40HiZfRkM.jpg?width=108&crop=smart&auto=webp&s=1dcb4a4917bb7b8c1fc203e9be66ee5b40850334', 'width': 108}, {'height': 203, 'url': 'https://external-preview.redd.it/d66YjF1ERcC_yDB2Qx9McnmryKEZMDV_WL40HiZfRkM.jpg?width=216&crop=smart&auto=webp&s=988bcdf5c3d73b8d9ea74d8cf9df4a980923c7c5', 'width': 216}, {'height': 302, 'url': 'https://external-preview.redd.it/d66YjF1ERcC_yDB2Qx9McnmryKEZMDV_WL40HiZfRkM.jpg?width=320&crop=smart&auto=webp&s=c13d6e20cd4013e194347b98f8a0a7f3d582f67c', 'width': 320}, {'height': 604, 'url': 'https://external-preview.redd.it/d66YjF1ERcC_yDB2Qx9McnmryKEZMDV_WL40HiZfRkM.jpg?width=640&crop=smart&auto=webp&s=4a9f6ed4af6942247111b2f5c7b13fabfcd23545', 'width': 640}, {'height': 906, 'url': 'https://external-preview.redd.it/d66YjF1ERcC_yDB2Qx9McnmryKEZMDV_WL40HiZfRkM.jpg?width=960&crop=smart&auto=webp&s=8fe196139fb716153b858718f32405974fc9e525', 'width': 960}], 'source': {'height': 967, 'url': 'https://external-preview.redd.it/d66YjF1ERcC_yDB2Qx9McnmryKEZMDV_WL40HiZfRkM.jpg?auto=webp&s=21197e4bd6e2ac5fcc4da334919e1d1a01915f33', 'width': 1024}, 'variants': {}}]}
Quip# quantization of Tess-M
17
QuIP# seems to be the best 2-bit quantization method around, with quants that compete with 3-4 bit quants, so I picked Tess-M-34B, a Yi-34B-200K-based model, and made a 2-bit quantization. I made the corresponding Hessians as well, over a context of 8k tokens. The weights are just 10 GB and should support long-context inference on hardware with little VRAM, with almost the performance of Tess-M-34B. Calculating the Hessians took almost a week. [https://huggingface.co/KnutJaegersberg/Tess-M-34B-2bit](https://huggingface.co/KnutJaegersberg/Tess-M-34B-2bit)
2023-12-26T19:09:22
https://www.reddit.com/r/LocalLLaMA/comments/18rfl2r/quip_quantization_of_tessm/
MLTyrunt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rfl2r
false
null
t3_18rfl2r
/r/LocalLLaMA/comments/18rfl2r/quip_quantization_of_tessm/
false
false
self
17
{'enabled': False, 'images': [{'id': '6lkhsmlnoS6a2b_yDffDXpSsvEeoTUgPq5TKU6VZWaI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OppGCMarUBBhT5-vCsp-yPlrmGFWh2OVRECq56tJXrs.jpg?width=108&crop=smart&auto=webp&s=36cd03b9a81524456cd9e97c35386e24a7f77da6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OppGCMarUBBhT5-vCsp-yPlrmGFWh2OVRECq56tJXrs.jpg?width=216&crop=smart&auto=webp&s=ecf323af64148b636bae47c5a8aee08e5fcffd80', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OppGCMarUBBhT5-vCsp-yPlrmGFWh2OVRECq56tJXrs.jpg?width=320&crop=smart&auto=webp&s=53128040b3df1054af07773cfb0deb46d91f300d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OppGCMarUBBhT5-vCsp-yPlrmGFWh2OVRECq56tJXrs.jpg?width=640&crop=smart&auto=webp&s=02f88a028c93cb707d5de9c9d96e9a1313741a03', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OppGCMarUBBhT5-vCsp-yPlrmGFWh2OVRECq56tJXrs.jpg?width=960&crop=smart&auto=webp&s=2e1ba721f187f060ae91b0877781a971da260378', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OppGCMarUBBhT5-vCsp-yPlrmGFWh2OVRECq56tJXrs.jpg?width=1080&crop=smart&auto=webp&s=8dd371669e8eb977d716a006514e7921c3d9114b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OppGCMarUBBhT5-vCsp-yPlrmGFWh2OVRECq56tJXrs.jpg?auto=webp&s=bdbff37d80225a8a0cb9c898562bba4ad7802608', 'width': 1200}, 'variants': {}}]}
Best LLM for you without limit on the GPU
1
Hi everyone, which LLM, based on your experience, is the best when the GPU is not a limiting factor (in terms of natural language, coherence of responses, etc.)? And, most important, that is capable of NSFW. I was testing Vicuna 13B and it works pretty well, but I also saw that there is Llama 2 70B uncensored, which I haven't had the chance to test yet. Let me know, thank you!
2023-12-26T18:52:44
https://www.reddit.com/r/LocalLLaMA/comments/18rf6x0/best_llm_for_you_without_limit_on_the_gpu/
Sapessiii
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rf6x0
false
null
t3_18rf6x0
/r/LocalLLaMA/comments/18rf6x0/best_llm_for_you_without_limit_on_the_gpu/
false
false
nsfw
1
null
Download times for hosts
1
I have been playing around with local LLMs and run them on Vast or RunPod. I'm so annoyed by all the downtime I have to pay for when I need to download models multiple times, as I can't get a host back. How often can you get the same host a second/recurring time, to reuse the downloaded data you spent 3 hours downloading the first time? I'm at the point where I'll pay more.
2023-12-26T18:06:26
https://www.reddit.com/r/LocalLLaMA/comments/18re4br/download_times_for_hosts/
Ok-Result5562
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18re4br
false
null
t3_18re4br
/r/LocalLLaMA/comments/18re4br/download_times_for_hosts/
false
false
self
1
null
Cached Transformers: Improving Transformers with Differentiable Memory Cache
31
https://arxiv.org/abs/2312.12742 I think this is huge potentially. A couple of relevant quotes: "The GRC is implemented as a *meta-learner* that compresses the historical representation into embedding vectors and updates them adaptively with a gating mechanism, avoiding the need for a large memory cache. The GRC updates the past representation with a reset gate that suppresses historical caches and an update gate that further updates the suppressed caches using the current token sequences. This design allows the GRC to access previously seen knowledge in a computationally efficient way. Based on the GRC, we implement a semi-cached attention mechanism that attends to both the latent and current tokens." "*We observe that models with GRC may attend more over the cache than the regular self-attention.* We investigate this behavior in image classification and find that GRC can separate features into two parts, attending over caches yielding instance-invariant features, as well as attending over self, yielding instance-specific features (See in Fig. 4). This behavior is similar to that of a vector prototype (Caron et al. 2020), which enables cross-sample regularization to avoid overfitting." Basically, "Attention is all you need", and "better attention is all you need for better models" (duh). Pure scaling makes for somewhat smarter models and can allow them to fit much more data, but at a huge cost of model size. Better attention should allow for "outright smarter" models without making them cost terabytes of VRAM, and factual knowledge can be outsourced as vector DBs. As an aside, I think hybrid architectures like StripedHyena that combine "extra-efficient" and extra... attentive, heh, attention are the way to go, something that can grok a huge chunk of text, find *relevant* bits and "think hard" on them.
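The reset/update mechanism the quotes describe is essentially GRU-style gating applied to a cache. A toy sketch of the data flow - the real GRC computes per-dimension gates from learned projections over token representations; the plain scalars here are only illustrative:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def grc_update(cache, candidate, reset_logit, update_logit):
    """One gated cache update in the spirit of the paper's GRC.

    The reset gate r suppresses the historical cache; the update gate z
    blends in a candidate built from the current token sequence.
    """
    r = sigmoid(reset_logit)   # how much of the old cache survives
    z = sigmoid(update_logit)  # how much new information enters
    suppressed = [r * c for c in cache]
    return [(1 - z) * s + z * n for s, n in zip(suppressed, candidate)]

cache = [1.0, -0.5]
print(grc_update(cache, [0.2, 0.8], reset_logit=2.0, update_logit=0.0))
```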
2023-12-26T18:06:21
https://www.reddit.com/r/LocalLLaMA/comments/18re498/cached_transformers_improving_transformers_with/
BalorNG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18re498
false
null
t3_18re498
/r/LocalLLaMA/comments/18re498/cached_transformers_improving_transformers_with/
false
false
self
31
null
Tiny PSA about Nvidia Tesla P40
23
Sorry to waste a whole post on this, but I may have improved my overall inference speed. I use KoboldCPP with DeepSeek Coder 33B Q8 and 8k context on 2x P40. I just set their compute mode to compute-only using: > nvidia-smi -c 3 And prompt processing does feel much faster with full context. It's subjective, but I thought my compatriots would like to know about that command.
2023-12-26T18:04:54
https://www.reddit.com/r/LocalLLaMA/comments/18re31a/tiny_psa_about_nvidia_tesla_p40/
DrVonSinistro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18re31a
false
null
t3_18re31a
/r/LocalLLaMA/comments/18re31a/tiny_psa_about_nvidia_tesla_p40/
false
false
self
23
null
Software Engineers: How to implement few shot prompting?
2
Is there a tool or framework where I can inject a 10 shot prompt before my real prompt? This would be a set of 10 real tasks, not textbook or Leetcode tasks. How I envision such tool to work: * I want to do a Javascript & React task. I inject my curated 10 shot JS & React prompt, and then ask my real prompt. * Later I want to do a C++ task. I inject my curated 10 shot C++ prompt, and then ask my real prompt. Is this a good idea, or... would this lead to **overfitting**?
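A minimal sketch of such an injector in plain Python. The example tasks and prompt format below are made up for illustration; any chat template would work the same way:

```python
# Keep curated few-shot example sets per language/framework and prepend the
# chosen set to the real prompt. Tasks and answers here are placeholders.
FEW_SHOT_SETS = {
    "react": [
        ("Add a loading state to this fetch call.",
         "const [loading, setLoading] = useState(true); /* ... */"),
        ("Memoize this expensive list render.",
         "const rows = useMemo(() => items.map(render), [items]);"),
    ],
    "cpp": [
        ("Replace this raw pointer with a smart pointer.",
         "auto buf = std::make_unique<Buffer>(size);"),
    ],
}

def build_prompt(language, real_task):
    """Assemble: N curated shots, then the real task awaiting an answer."""
    parts = []
    for task, answer in FEW_SHOT_SETS[language]:
        parts.append(f"Task: {task}\nAnswer: {answer}\n")
    parts.append(f"Task: {real_task}\nAnswer:")
    return "\n".join(parts)

prompt = build_prompt("react", "Debounce this search input.")
print(prompt)
```

On the overfitting worry: few-shot prompting never updates the weights, so overfitting in the training sense doesn't apply; the risk is just that the model imitates your examples' style and structure too closely.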
2023-12-26T18:01:42
https://www.reddit.com/r/LocalLLaMA/comments/18re0fg/software_engineers_how_to_implement_few_shot/
kingp1ng
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18re0fg
false
null
t3_18re0fg
/r/LocalLLaMA/comments/18re0fg/software_engineers_how_to_implement_few_shot/
false
false
default
2
null
2 bit quantization of Tess-M-34b
1
[removed]
2023-12-26T17:41:13
https://www.reddit.com/r/LocalLLaMA/comments/18rdiri/2_bit_quantization_of_tessm34b/
MLTyrunt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rdiri
false
null
t3_18rdiri
/r/LocalLLaMA/comments/18rdiri/2_bit_quantization_of_tessm34b/
false
false
default
1
{'enabled': False, 'images': [{'id': '2rQkloBIzjU42WlYNF8ZOOefQesr1xVcLtqGhytcaNE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/iDzJ1lFfhghTcw4wzaUZ_cLbCLiSi4GzB7mlv33CWbc.jpg?width=108&crop=smart&auto=webp&s=a736fade8b27bfdb41faf851056798cd5cf0bc7f', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/iDzJ1lFfhghTcw4wzaUZ_cLbCLiSi4GzB7mlv33CWbc.jpg?auto=webp&s=c84e731717876e71dcdc45041bf0c9e415060b3f', 'width': 200}, 'variants': {}}]}
LLM Evaluation using the FLASK method
2
Using benchmark datasets, it may be difficult to evaluate the performance of models for your specific application. Benchmarks can be helpful but they may be quite different from your data distribution. We've fine-tuned phis, gpts and llamas of all sizes for an application to provide startup advice, but how can we objectively measure the performance of our models? Listening to the thursdai\_pod, we learned about the FLASK method described in this paper: [https://arxiv.org/pdf/2307.10928.pdf](https://arxiv.org/pdf/2307.10928.pdf) Using a fine-grained skills assessment and scoring LLM responses on a 1-5 likert scale, researchers found strong agreement between humans and LLMs. So we built a custom GPT called [Eval Twin](https://chat.openai.com/g/g-LmBIGnRqt-eval-twin) to guide users in applying the FLASK method. Next, we tested it on our fine-tuned [stablelm-zephyr-3B](https://huggingface.co/stabilityai/stablelm-zephyr-3b) model used in our [LocalMentor python package](https://github.com/remyxai/LocalMentor) \- an LLM for local deployments to provide startup advice. Applying the FLASK rubric, we see exactly where LocalMentor outperforms GPT4 in generating advice in a tech & startup context. https://preview.redd.it/yfv3pn0meo8c1.png?width=956&format=png&auto=webp&s=06fd582d8b5726f74c6f70f8af879d7751bfe9eb
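For anyone wanting to reproduce this without the custom GPT, the aggregation step of FLASK-style scoring is simple to sketch; the skill names below are illustrative, not the paper's exact taxonomy:

```python
# Aggregate FLASK-style 1-5 Likert ratings into a per-skill average.
from collections import defaultdict

def aggregate_scores(ratings):
    """ratings: list of (skill, score) pairs, scores on a 1-5 Likert scale."""
    per_skill = defaultdict(list)
    for skill, score in ratings:
        assert 1 <= score <= 5, "FLASK uses a 1-5 Likert scale"
        per_skill[skill].append(score)
    return {skill: sum(s) / len(s) for skill, s in per_skill.items()}

ratings = [
    ("logical_correctness", 4), ("logical_correctness", 5),
    ("factuality", 3), ("conciseness", 4),
]
scores = aggregate_scores(ratings)
print(scores)
```

Comparing these per-skill averages side by side is what surfaces exactly where one model beats another, rather than a single opaque benchmark number.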
2023-12-26T17:41:06
https://www.reddit.com/r/LocalLLaMA/comments/18rdio7/llm_evaluation_using_the_flask_method/
remyxai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rdio7
false
null
t3_18rdio7
/r/LocalLLaMA/comments/18rdio7/llm_evaluation_using_the_flask_method/
false
false
https://b.thumbs.redditm…M4idGxg0dqvc.jpg
2
null
Will self-hosting be able to provide faster inference than OpenAI?
15
Hey all, I'm currently building an AI product that generates code. It currently takes \~60s if not more to generate a response while using \`GPT-4 Preview\` — it's a rather complex prompt since it's generating code based off of a user's custom code. Our prompt ends up in the ballpark of 15k tokens on average. The dream is to get this to under 5s, but it seems like this isn't feasible using a model like \`GPT-4\` (?), so have been thinking about self-hosting an instance of LLaMa. I haven't been able to find many resources though on whether this could actually improve performance using a smaller model / running on our own infrastructure, so would be curious to hear others' experiences or if I'm thinking about the problem incorrectly.
2023-12-26T17:02:31
https://www.reddit.com/r/LocalLLaMA/comments/18rcmts/will_selfhosting_be_able_to_provide_faster/
teddarific
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rcmts
false
null
t3_18rcmts
/r/LocalLLaMA/comments/18rcmts/will_selfhosting_be_able_to_provide_faster/
false
false
self
15
null
Which setting(s) to alter to avoid responses cutting off early?
3
Any ideas on which settings to alter on h2ogpt to avoid having responses cut off early? In the screenshot, the answer should have kept going, but stopped at the word "impact." The h2ogpt UI has "expert" and "model" tabs to alter settings (The h2ogpt demo isn't exactly the same as the locally hosted version, but this is an example: [https://gpt.h2o.ai/](https://gpt.h2o.ai/)). Any thoughts on this would be appreciated, thanks. https://preview.redd.it/9aa3n0704o8c1.png?width=1643&format=png&auto=webp&s=38e58f6389563fe9c7d94dbf5cf7f2dea0ac7258
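Mid-sentence truncation like this is usually the new-token budget rather than a sampling setting (though a stop sequence firing early can also cause it). A hedged sketch of the knobs to check, using Hugging Face `generate()` parameter names; h2ogpt's expert tab likely exposes equivalents under similar labels:

```python
# Generation settings most relevant to answers that cut off early.
# Names follow the Hugging Face generate() convention.
gen_kwargs = {
    "max_new_tokens": 1024,   # raise this first; a small budget truncates answers
    "min_new_tokens": 0,      # leave at 0 unless you need a length floor
    "early_stopping": False,  # only relevant for beam search
}
print(gen_kwargs)
```

If raising the token budget doesn't help, check whether the server imposes its own output-length cap separate from the model settings.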
2023-12-26T16:50:02
https://www.reddit.com/r/LocalLLaMA/comments/18rcc5z/which_settings_to_alter_to_avoid_responses/
tarasoraptor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rcc5z
false
null
t3_18rcc5z
/r/LocalLLaMA/comments/18rcc5z/which_settings_to_alter_to_avoid_responses/
false
false
https://b.thumbs.redditm…Sml4clM8eEWk.jpg
3
null
Tool for formatting fine-tuning data into proper json?
6
Sorry if this is obvious or well known, having trouble googling a solution. I'm looking to fine tune some local models, and it seems like the best way to format training data is OpenAI's Chat JSONL format: https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset Great, although my data is all in a 2 column CSV file (column 1: prompt, column 2: desired response) Is there a tool or script folks use for formatting training data into the proper JSON? JSON isn't something I'm used to working with, and I could just append the text in the right spots to my current columns, but that seems hacky.
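A minimal sketch of such a converter using only the standard library; the column order and the system message are assumptions, so adjust both to your CSV:

```python
# Convert a 2-column CSV (prompt, response) into OpenAI-style chat JSONL.
# Using csv.reader handles quoting/commas inside fields correctly, which is
# exactly what hand-appending JSON around the columns would get wrong.
import csv
import io
import json

def csv_to_chat_jsonl(csv_text, system_msg="You are a helpful assistant."):
    lines = []
    for prompt, response in csv.reader(io.StringIO(csv_text)):
        record = {"messages": [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": response},
        ]}
        lines.append(json.dumps(record, ensure_ascii=False))
    return "\n".join(lines)

sample = 'What is 2+2?,"It is 4."\nName a color.,Blue.'
jsonl = csv_to_chat_jsonl(sample)
print(jsonl)
```

For a file on disk, swap `io.StringIO(csv_text)` for `open("data.csv", newline="")` and write each line to a `.jsonl` file.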
2023-12-26T16:45:30
https://www.reddit.com/r/LocalLLaMA/comments/18rc8fp/tool_for_formatting_finetuning_data_into_proper/
IronColumn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rc8fp
false
null
t3_18rc8fp
/r/LocalLLaMA/comments/18rc8fp/tool_for_formatting_finetuning_data_into_proper/
false
false
self
6
null
Fine-tune video llama for named entity recognition
1
[removed]
2023-12-26T16:20:59
https://www.reddit.com/r/LocalLLaMA/comments/18rboo8/finetune_video_llama_for_named_entity_recognition/
MrWick-96
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rboo8
false
null
t3_18rboo8
/r/LocalLLaMA/comments/18rboo8/finetune_video_llama_for_named_entity_recognition/
false
false
default
1
null
Are there any differences between files from TheBloke and the original creator?
1
[removed]
2023-12-26T15:53:57
https://www.reddit.com/r/LocalLLaMA/comments/18rb2uu/are_there_any_differences_between_files_from/
PhantomWolf83
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rb2uu
false
null
t3_18rb2uu
/r/LocalLLaMA/comments/18rb2uu/are_there_any_differences_between_files_from/
false
false
default
1
null
Small Giants: 10 sub-13B “Open Source” LLMs
1
2023-12-26T15:28:56
https://deci.ai/blog/small-giants-top-10-under-13b-llms-in-open-source/
datascienceharp
deci.ai
1970-01-01T00:00:00
0
{}
18rairl
false
null
t3_18rairl
/r/LocalLLaMA/comments/18rairl/small_giants_10_sub13b_open_source_llms/
false
false
https://b.thumbs.redditm…oQdiKByPfqAw.jpg
1
{'enabled': False, 'images': [{'id': 'BKJ3dJb_pHYdPAIGXkVzXFYobGuJEd3qijGeeP9HECg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/pHWynJsDaan4ka9IYpbqgLNCcPKimiv-B99xGawpwt4.jpg?width=108&crop=smart&auto=webp&s=17f0cc8bfb3c1be14ab7bb9da4f0b9677a0b9aed', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/pHWynJsDaan4ka9IYpbqgLNCcPKimiv-B99xGawpwt4.jpg?width=216&crop=smart&auto=webp&s=d66a813c438a081d670648b20ea103dfc61b36be', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/pHWynJsDaan4ka9IYpbqgLNCcPKimiv-B99xGawpwt4.jpg?width=320&crop=smart&auto=webp&s=6b49375f960bd7499a0672255b21f05c1f716c83', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/pHWynJsDaan4ka9IYpbqgLNCcPKimiv-B99xGawpwt4.jpg?width=640&crop=smart&auto=webp&s=b6a62805d6a86389fa973e8deb3bfdcea1a42717', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/pHWynJsDaan4ka9IYpbqgLNCcPKimiv-B99xGawpwt4.jpg?width=960&crop=smart&auto=webp&s=a8aa799bbf127cf5ee7dedfd27a959baec40ea6e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/pHWynJsDaan4ka9IYpbqgLNCcPKimiv-B99xGawpwt4.jpg?width=1080&crop=smart&auto=webp&s=713b0fe1fb78512d38609d4c1a1166fd83986da0', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/pHWynJsDaan4ka9IYpbqgLNCcPKimiv-B99xGawpwt4.jpg?auto=webp&s=6fc0ca5101dfa4cc3782507394dee86fb2e7570e', 'width': 1920}, 'variants': {}}]}
Small Model similar to Character AI
3
Hi all, does anyone know a good small model (1B-3B) for creating dialogs with characters like in Character AI? I will be grateful for any advice)
2023-12-26T15:27:26
https://www.reddit.com/r/LocalLLaMA/comments/18rahml/small_model_similar_to_character_ai/
Substantial-Club-582
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18rahml
false
null
t3_18rahml
/r/LocalLLaMA/comments/18rahml/small_model_similar_to_character_ai/
false
false
self
3
null
GPU Buying Guide
1
[removed]
2023-12-26T15:25:51
https://www.reddit.com/r/LocalLLaMA/comments/18ragdl/gpu_buying_guide/
drsupermrcool
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ragdl
false
null
t3_18ragdl
/r/LocalLLaMA/comments/18ragdl/gpu_buying_guide/
false
false
default
1
{'enabled': False, 'images': [{'id': 'Z_yw8GIGzi3Q0cP81Cew-ZK_ij55GL6lAFcnYrXboPY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/a1niWwULPSWYyuU3vzPO6VPvIcpedrgx5hZ2DpbWt-I.jpg?width=108&crop=smart&auto=webp&s=31a249afccc0cb9accb0a57b928531039ff2a57f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/a1niWwULPSWYyuU3vzPO6VPvIcpedrgx5hZ2DpbWt-I.jpg?width=216&crop=smart&auto=webp&s=676b0fb77143bdace4d7c4dfa36e1e741c9fbff9', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/a1niWwULPSWYyuU3vzPO6VPvIcpedrgx5hZ2DpbWt-I.jpg?width=320&crop=smart&auto=webp&s=8c7771e1cb10c6c7123db87f439bdabaa1bdf58b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/a1niWwULPSWYyuU3vzPO6VPvIcpedrgx5hZ2DpbWt-I.jpg?auto=webp&s=545f9cec932edea2f62e335a8703f106034b9ff3', 'width': 480}, 'variants': {}}]}
Which MoE 8x7 model is best for roleplaying?
12
I downloaded the 8x7 rp + story model from undi but, in my opinion, it doesn’t understand what roleplay is. Which model can do this?
2023-12-26T15:23:35
https://www.reddit.com/r/LocalLLaMA/comments/18raeim/which_moe_8x7_model_is_best_for_roleplaying/
Secret_Joke_2262
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18raeim
false
null
t3_18raeim
/r/LocalLLaMA/comments/18raeim/which_moe_8x7_model_is_best_for_roleplaying/
false
false
self
12
null
What’s the best way to host a model online but will only be used very sparingly?
7
For the backend, I guess I can connect it to a GPU-renting service that only charges me when I'm using it, but I have no experience with website hosting. Will a GitHub page be enough? Or do I have to pay for a service that allows dynamic websites?
2023-12-26T15:15:37
https://www.reddit.com/r/LocalLLaMA/comments/18ra8bz/whats_the_best_way_to_host_a_model_online_but/
Red-Pony
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ra8bz
false
null
t3_18ra8bz
/r/LocalLLaMA/comments/18ra8bz/whats_the_best_way_to_host_a_model_online_but/
false
false
self
7
null
what is the reason why you even want to run llm locally?
1
what is the reason why you even want to run llm locally? Do you train them for specific needs?
2023-12-26T15:15:09
https://www.reddit.com/r/LocalLLaMA/comments/18ra7w8/what_is_the_reason_why_you_even_want_to_run_llm/
Relative_Isopod7777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ra7w8
false
null
t3_18ra7w8
/r/LocalLLaMA/comments/18ra7w8/what_is_the_reason_why_you_even_want_to_run_llm/
false
false
self
1
null
Best Model to Run on Mac M2
1
I have a MacBook Air M2 with 16 GB memory and I want to see what's the best model I can run. I've heard about Mistral 7B but I was wondering if there is something better. Maybe give Zephyr a try?
2023-12-26T14:54:13
https://www.reddit.com/r/LocalLLaMA/comments/18r9s1p/best_model_to_run_on_mac_m2/
rz1730
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18r9s1p
false
null
t3_18r9s1p
/r/LocalLLaMA/comments/18r9s1p/best_model_to_run_on_mac_m2/
false
false
default
1
null
How to continue pre-training of an open-weight LLM and update the Tokenizer?
7
I want to add synthetic data to Mistral 7b. I've heard this term "Continue Pre-Training" thrown around here some times, but I never understood how can we actually do that in code? Can we use Transformers or do we have to write code for it or how does it work? Sorry if the question is silly, I really want to try this.
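Not silly at all: continued pre-training is just more causal-LM training on raw text, typically done with the Transformers `Trainer` (or a wrapper like axolotl) using the plain language-modeling objective, no instruction formatting. The main data-prep step is packing your documents into fixed-size token blocks; a toy sketch (the token ids and separator id are stand-ins for the real Mistral tokenizer's output):

```python
# Pack tokenized documents into fixed-size blocks for causal-LM training:
# concatenate everything (with a separator between documents) and slice
# into equal blocks, dropping the remainder.

def pack_blocks(token_streams, block_size):
    """token_streams: list of token-id lists, one per document."""
    flat = []
    for ids in token_streams:
        flat.extend(ids)
        flat.append(0)  # assumed EOS/separator id between documents
    n_blocks = len(flat) // block_size
    return [flat[i * block_size:(i + 1) * block_size] for i in range(n_blocks)]

docs = [[5, 6, 7, 8], [9, 10, 11], [12, 13, 14, 15, 16]]
blocks = pack_blocks(docs, block_size=4)
print(blocks)
```

Each block then becomes one training example where `labels = input_ids`, and the Trainer's causal-LM loss does the rest; for a 7B model you'd usually do this with (Q)LoRA or DeepSpeed rather than full-parameter updates.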
2023-12-26T14:51:09
https://www.reddit.com/r/LocalLLaMA/comments/18r9pu7/how_to_continue_pretraining_of_an_openweight_llm/
Independent_Key1940
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18r9pu7
false
null
t3_18r9pu7
/r/LocalLLaMA/comments/18r9pu7/how_to_continue_pretraining_of_an_openweight_llm/
false
false
self
7
null
[Help] LLaVA on WSL 2
2
I have not worked before with WSL, so I have no idea what is happening or what I am missing. I followed instructions from here: [https://github.com/haotian-liu/LLaVA](https://github.com/haotian-liu/LLaVA) Installed WSL using: [https://learn.microsoft.com/en-us/windows/wsl/install](https://learn.microsoft.com/en-us/windows/wsl/install) Inside WSL I am running:

python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b --load-4bit

Error:

(llava) SERVERNAME@HOSTNAME:~/Projects$ python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b --load-4bit
2023-12-26 15:44:10 | INFO | model_worker | args: Namespace(host='0.0.0.0', port=40000, worker_address='http://localhost:40000', controller_address='http://localhost:10000', model_path='liuhaotian/llava-v1.5-13b', model_base=None, model_name=None, device='cuda', multi_modal=False, limit_model_concurrency=5, stream_interval=1, no_register=False, load_8bit=False, load_4bit=True)
2023-12-26 15:44:10 | INFO | model_worker | Loading the model llava-v1.5-13b on worker 0cdc33 ...
Loading checkpoint shards: 100% 3/3 [00:18<00:00, 6.26s/it]
2023-12-26 15:44:33 | INFO | model_worker | Register to controller
Traceback (most recent call last):
  File "/home/SERVERNAME/anaconda3/envs/llava/lib/python3.10/site-packages/urllib3/connection.py", line 203, in _new_conn
    sock = connection.create_connection(
  File "/home/SERVERNAME/anaconda3/envs/llava/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
    raise err
  File "/home/SERVERNAME/anaconda3/envs/llava/lib/python3.10/site-packages/urllib3/util/connection.py", line 73, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f327c318910>: Failed to establish a new connection: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=10000): Max retries exceeded with url: /register_worker (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f327c318910>: Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/SERVERNAME/Projects/LLaVA/llava/serve/model_worker.py", line 275, in <module>
    worker = ModelWorker(args.controller_address,
  File "/home/SERVERNAME/Projects/LLaVA/llava/serve/model_worker.py", line 70, in __init__
    self.register_to_controller()
  File "/home/SERVERNAME/Projects/LLaVA/llava/serve/model_worker.py", line 84, in register_to_controller
    r = requests.post(url, json=data)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=10000): Max retries exceeded with url: /register_worker (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f327c318910>: Failed to establish a new connection: [Errno 111] Connection refused'))
2023-12-26T14:45:57
https://www.reddit.com/r/LocalLLaMA/comments/18r9m08/help_llava_on_wsl_2/
AeternusIgnis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18r9m08
false
null
t3_18r9m08
/r/LocalLLaMA/comments/18r9m08/help_llava_on_wsl_2/
false
false
self
2
{'enabled': False, 'images': [{'id': 'Ni5y92lJxvDT1Zv1fmKsQg3mCxLQt5deLQZD9p4QYU8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ah8DNRkkn0NtYhf5h3x0uwPyHTeTZPZo-8H-W0auA4A.jpg?width=108&crop=smart&auto=webp&s=32f4684225212e01db9f3eb0dd436f2cc2114fda', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ah8DNRkkn0NtYhf5h3x0uwPyHTeTZPZo-8H-W0auA4A.jpg?width=216&crop=smart&auto=webp&s=1ac5df8652c04094637effae8a224526ad0e282d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ah8DNRkkn0NtYhf5h3x0uwPyHTeTZPZo-8H-W0auA4A.jpg?width=320&crop=smart&auto=webp&s=c475801f827a8c9cdecafea4b5437b6d4e2a4647', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ah8DNRkkn0NtYhf5h3x0uwPyHTeTZPZo-8H-W0auA4A.jpg?width=640&crop=smart&auto=webp&s=d2f6bbfa3a10a52781f334a635c0e97037fed290', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ah8DNRkkn0NtYhf5h3x0uwPyHTeTZPZo-8H-W0auA4A.jpg?width=960&crop=smart&auto=webp&s=7e2c28b61041d75b1245515ebc112d044377466e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ah8DNRkkn0NtYhf5h3x0uwPyHTeTZPZo-8H-W0auA4A.jpg?width=1080&crop=smart&auto=webp&s=272192e7ad8cc51141be64d0999e63b2fcf886e4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ah8DNRkkn0NtYhf5h3x0uwPyHTeTZPZo-8H-W0auA4A.jpg?auto=webp&s=5e30a4dc4abbfaf5302bbfa41e263a0736d6a6a9', 'width': 1200}, 'variants': {}}]}
Geographical GPU Cost Arbitrage
21
Had a very interesting thought. I call it **Geographical GPU Cost Arbitrage.** In Germany (where I live), electricity prices are high. I wondered: how much would it cost to run a deep learning workflow at full load on my GPU (RTX 3090) here in Germany for one day? And, extending that, am I at a disadvantage compared to my peers elsewhere? I did the numbers on the train, and they are quite interesting (and not so good for me). Running a high-end GPU like the RTX 3090 for a full day can cost quite a bit, especially when you consider our electricity prices. But when you adjust these costs for purchasing power parity, the differences become even more stark. [Table 1](https://preview.redd.it/fea62unbbn8c1.png?width=1330&format=png&auto=webp&s=4a56604122875cd5fef719c6bcb5e12dbd548257) Table 1: The RTX 3090 GPU consumes around 350W of power. Therefore, the daily energy consumption of the GPU is 350W∗24hours=8.4kWh. The cost of running the GPU for 24 hours is calculated by multiplying the daily energy consumption by the electricity cost per kWh for each country. You can read that and already see it's not looking so good for Germany! But wait: I don't earn the same as my colleagues in the USA (or India). $1 in Germany means something else than it does elsewhere. Let's adjust this with PPP. Next table. [Table 2](https://preview.redd.it/pzc1o8ugbn8c1.png?width=1234&format=png&auto=webp&s=e5a6fbd5fb65520db121c6e27cbda122ffba45f6) Now this is the cost adjusted for purchasing power parity. What does this say? I pay ~4 times more than someone in the USA to run a GPU AI inference workflow at my home. I knew it was high, but didn't expect the difference to be this big. You can infer the rest of the countries directly from the table. This is a back-of-the-envelope calculation done quickly while travelling back home from holidays. I am almost certain there will be inaccuracies in my approach. But I broadly expect these factors to hold good.
I am not into politics. But with the Russia/Ukraine crisis and its energy impact on Germany, plus the nuclear phase-out here, I don't expect this situation to get any better anytime soon. Interesting dynamic for us here in Western Europe. - Adi [https://twitter.com/adithyan_ai](https://twitter.com/adithyan_ai)
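The tables' method can be reproduced in a few lines; the per-kWh prices below are illustrative placeholders, not the tables' exact figures:

```python
# Daily cost of running a 350W GPU at full load, per country.
GPU_WATTS = 350
HOURS = 24
daily_kwh = GPU_WATTS * HOURS / 1000  # 8.4 kWh/day, as in the post

prices_per_kwh = {"Germany": 0.40, "USA": 0.16, "India": 0.08}  # assumed prices
daily_cost = {c: round(daily_kwh * p, 2) for c, p in prices_per_kwh.items()}
print(daily_kwh, daily_cost)
```

For the PPP-adjusted view, divide each cost by the country's PPP conversion factor relative to the US before comparing.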
2023-12-26T14:03:40
https://www.reddit.com/r/LocalLLaMA/comments/18r8s72/geographical_gpu_cost_arbitrage/
phoneixAdi
self.LocalLLaMA
2023-12-26T14:20:45
0
{}
18r8s72
false
null
t3_18r8s72
/r/LocalLLaMA/comments/18r8s72/geographical_gpu_cost_arbitrage/
false
false
https://b.thumbs.redditm…NeSfVrgE4YSw.jpg
21
{'enabled': False, 'images': [{'id': 'N7OVwHB8guLsgDy_fqqVkjuLvVnhm9RcDTvOZ23Qd6s', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/3tWbnoRvV-hqwjjSr4TZads428MzgmBave3daB9abjs.jpg?width=108&crop=smart&auto=webp&s=61e0bb401432d762fc29991355db5f4290f92ad2', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/3tWbnoRvV-hqwjjSr4TZads428MzgmBave3daB9abjs.jpg?auto=webp&s=27703279fc4d1229464ebacd7a4959a211dba7dc', 'width': 200}, 'variants': {}}]}
[2312.00752] Mamba: Linear-Time Sequence Modeling with Selective State Spaces
1
2023-12-26T13:49:21
https://arxiv.org/abs/2312.00752
swagonflyyyy
arxiv.org
1970-01-01T00:00:00
0
{}
18r8id0
false
null
t3_18r8id0
/r/LocalLLaMA/comments/18r8id0/231200752_mamba_lineartime_sequence_modeling_with/
false
false
default
1
null
Mixtral-8x7B-4bit
3
Does this mean we can run Mixtral on a 24gb vram card with higher quantization? Does it fit on 24gb?
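A quick back-of-the-envelope check. The parameter count is approximate; the experts share attention layers, so Mixtral is well under 8×7B = 56B total:

```python
# Rough VRAM estimate for Mixtral 8x7B at 4-bit.
params_b = 46.7          # billions of parameters (approximate total)
bytes_per_param = 0.5    # 4-bit quantization = half a byte per weight
weights_gb = params_b * bytes_per_param  # weights alone, before KV cache
print(weights_gb)
```

~23 GB of weights alone leaves almost no room for KV cache and activations on a 24 GB card, which is why people tend to run slightly lower bitrates (e.g. exl2 around 3.5 bpw) or a partially CPU-offloaded GGUF instead of straight 4-bit.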
2023-12-26T13:35:03
https://huggingface.co/ybelkada/Mixtral-8x7B-Instruct-v0.1-bnb-4bit
GravyPoo
huggingface.co
1970-01-01T00:00:00
0
{}
18r897t
false
null
t3_18r897t
/r/LocalLLaMA/comments/18r897t/mixtral8x7b4bit/
false
false
default
3
{'enabled': False, 'images': [{'id': 'Kw7dMBzrQqzkr2e8ksHvvZIY2j5GL4vSPF2OdKIFsAA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-zfm3YlWJT9m4adxt6OBuaI8fqkfE6I4V5_9hI9_V2A.jpg?width=108&crop=smart&auto=webp&s=94cbeb8b19b23b0558e1c45401b6cd8a0761d4c2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-zfm3YlWJT9m4adxt6OBuaI8fqkfE6I4V5_9hI9_V2A.jpg?width=216&crop=smart&auto=webp&s=33ba6cf2fad4fbcea290ab7cec57121c613bf9cd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-zfm3YlWJT9m4adxt6OBuaI8fqkfE6I4V5_9hI9_V2A.jpg?width=320&crop=smart&auto=webp&s=b06e6c9fc85ad5c07de47a485642b2ef79436ca5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-zfm3YlWJT9m4adxt6OBuaI8fqkfE6I4V5_9hI9_V2A.jpg?width=640&crop=smart&auto=webp&s=e36b310b45b0dcd03f2bd665a8ea13a7a701bc55', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-zfm3YlWJT9m4adxt6OBuaI8fqkfE6I4V5_9hI9_V2A.jpg?width=960&crop=smart&auto=webp&s=2c07b80158e358d676311150b156642441351632', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-zfm3YlWJT9m4adxt6OBuaI8fqkfE6I4V5_9hI9_V2A.jpg?width=1080&crop=smart&auto=webp&s=1bbe8229a4bb4d4805789a91c8b0c449ddfca6e3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-zfm3YlWJT9m4adxt6OBuaI8fqkfE6I4V5_9hI9_V2A.jpg?auto=webp&s=c71b60dac7baacfcf7c9786c4ce7ff779894a094', 'width': 1200}, 'variants': {}}]}
Mixtral 8x7B Instruct or lzlv_70B/ 70b models. On 2x 3090.
4
I've been using lzlv 70b for the past month as an RP bot and Discord bot for friends to mess with, but I've been trying to move to Mixtral, both to test whether it's better/more suitable for my needs and because I sometimes game on this system too, so freeing up VRAM is always nice. But I've been running into issues with Mixtral tending to say stuff like "input" and then post as the other character, among other things. I've messed about with various settings and managed to get rid of the repetition problem that some people have been having by using some settings from this: https://rentry.org/HowtoMixtral But I can't seem to get it to stop doing that "input" thing, stay in character, or avoid post spam of some kind; for the Discord bot that really throws it out of character. Using exl2 4.0bpw of both models. I suppose my question is: is Mixtral "better" than 70b models? Should I spend days trying to figure out how to set this up? The context alone seems worth it, and it has more knowledge than 70b, but I can't seem to get it to stay in character. How would you increase the context of 70b lzlv, if you can? Do I just set the alpha to 2x to make it 8192, if anyone knows? :D Thanks for your thoughts.
2023-12-26T13:34:15
https://www.reddit.com/r/LocalLLaMA/comments/18r88pt/mixtral_8x7b_instruct_or_lzlv_70b_70b_models_on/
fluffywuffie90210
self.LocalLLaMA
2023-12-26T13:37:23
0
{}
18r88pt
false
null
t3_18r88pt
/r/LocalLLaMA/comments/18r88pt/mixtral_8x7b_instruct_or_lzlv_70b_70b_models_on/
false
false
self
4
{'enabled': False, 'images': [{'id': 'Uv2S9Y-LpwOrU5A6m7bqlvebTyNWX0wNEqwgrjTjwqY', 'resolutions': [{'height': 113, 'url': 'https://external-preview.redd.it/1uLvVgYT5rpIPwNoqZk-qxmHRg5j0BgaoquY6POYdVo.jpg?width=108&crop=smart&auto=webp&s=ea018ccc5d9dd5b3390fb67bc8c6b66928929c9f', 'width': 108}, {'height': 227, 'url': 'https://external-preview.redd.it/1uLvVgYT5rpIPwNoqZk-qxmHRg5j0BgaoquY6POYdVo.jpg?width=216&crop=smart&auto=webp&s=41e2866f38cc50f16ac27fa5f97198de75ce12f3', 'width': 216}, {'height': 336, 'url': 'https://external-preview.redd.it/1uLvVgYT5rpIPwNoqZk-qxmHRg5j0BgaoquY6POYdVo.jpg?width=320&crop=smart&auto=webp&s=b96040a74e7103d48a44351f548c20de784254b5', 'width': 320}, {'height': 673, 'url': 'https://external-preview.redd.it/1uLvVgYT5rpIPwNoqZk-qxmHRg5j0BgaoquY6POYdVo.jpg?width=640&crop=smart&auto=webp&s=32d695857e65555c3481c38092409d8c267113bd', 'width': 640}, {'height': 1010, 'url': 'https://external-preview.redd.it/1uLvVgYT5rpIPwNoqZk-qxmHRg5j0BgaoquY6POYdVo.jpg?width=960&crop=smart&auto=webp&s=6e8e03ef1bb4009fba0c5155d19f5ff8e8b20bce', 'width': 960}, {'height': 1136, 'url': 'https://external-preview.redd.it/1uLvVgYT5rpIPwNoqZk-qxmHRg5j0BgaoquY6POYdVo.jpg?width=1080&crop=smart&auto=webp&s=bd00b0f6f48b9f0df6e7d2836bde5de758ca9ab6', 'width': 1080}], 'source': {'height': 2000, 'url': 'https://external-preview.redd.it/1uLvVgYT5rpIPwNoqZk-qxmHRg5j0BgaoquY6POYdVo.jpg?auto=webp&s=43c861cef6282737bb0813245591e901a3b509cd', 'width': 1900}, 'variants': {}}]}
Renting GPUs for Fine Tuning
15
What are some good GPU rental services for fine tuning Llama? I'm working on fine tuning Llama 2 7B - it requires about 24 GB VRAM - and need to rent some GPUs, but the one thing I'm avoiding is Google Colab. What are some other good options? I'm looking at Paperspace, Vast.ai and RunPod so far, but I've heard such a variety of positive and negative reviews that I don't know where to start. Also, right now I'm thinking of running it on an RTX 3090 - is that a good option? Will using a previous-generation card make training slower or something?
2023-12-26T12:57:29
https://www.reddit.com/r/LocalLLaMA/comments/18r7lds/renting_gpus_for_fine_tuning/
Aggravating-Floor-38
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18r7lds
false
null
t3_18r7lds
/r/LocalLLaMA/comments/18r7lds/renting_gpus_for_fine_tuning/
false
false
self
15
null
Vector DB in swift
1
Hey guys, I wonder if there is some smart way to create and manage a vector DB from Swift? Ideally I want something like files with vectors, and to make them scalable, not in-memory only. I wonder what the best solution out on the market is?
2023-12-26T12:57:21
https://www.reddit.com/r/LocalLLaMA/comments/18r7lac/vector_db_in_swift/
Robert-treboR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18r7lac
false
null
t3_18r7lac
/r/LocalLLaMA/comments/18r7lac/vector_db_in_swift/
false
false
self
1
null
Neurochat: new Open-Source GUI for LLama.cpp, ChatGPT and free AI services
52
I use AIs a lot for work, but I prefer native apps over web interfaces and console applications. Also, I need to run open-source software for security reasons. Recently, I noticed that the existing native options were closed-source, so I decided to write my own graphical user interface (GUI) for Llama.cpp: Neurochat. In addition to supporting Llama.cpp, I integrated the ChatGPT API and the free Neuroengine services into the app. Pre-compiled versions for Linux (Ubuntu/Debian), Windows and macOS arm64 (still buggy) are available for download here: [https://github.com/ortegaalfredo/neurochat](https://github.com/ortegaalfredo/neurochat) Coming in the next weeks: . Pre-compiled GPU-accelerated packages (it works right now if you install the appropriate llama.dll library). . Gemini free API . Streaming support for remote APIs. The app is currently in heavy development/testing, so please add any bug report or request to the GitHub page (or here!) Hope it's as useful to you as it is for me. Cheers! https://preview.redd.it/bj02w62u9n8c1.png?width=754&format=png&auto=webp&s=97affa4279af0f8884853baa6bd65361155f4223
2023-12-26T12:29:59
https://www.reddit.com/r/LocalLLaMA/comments/18r74w9/neurochat_new_opensource_gui_for_llamacpp_chatgpt/
ortegaalfredo
self.LocalLLaMA
2023-12-26T15:38:32
0
{}
18r74w9
false
null
t3_18r74w9
/r/LocalLLaMA/comments/18r74w9/neurochat_new_opensource_gui_for_llamacpp_chatgpt/
false
false
https://a.thumbs.redditm…1R6hePpWqv08.jpg
52
{'enabled': False, 'images': [{'id': 'TySnnC6zJN7EDM1ize2gJkhr8Pc74kALSIEhmuZaWwY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sWN4trIJHoCeuSbaylNFmCwYMRFrslwK59rTw_MRNF4.jpg?width=108&crop=smart&auto=webp&s=9085fa80f7e062dec3bd12d10fa04e167788c55e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sWN4trIJHoCeuSbaylNFmCwYMRFrslwK59rTw_MRNF4.jpg?width=216&crop=smart&auto=webp&s=7344485de5249ffe61f27a6f6fcabe5c41abf9cd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sWN4trIJHoCeuSbaylNFmCwYMRFrslwK59rTw_MRNF4.jpg?width=320&crop=smart&auto=webp&s=90bd362209e9fcbf6cc604be73d177fc56d11e7b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sWN4trIJHoCeuSbaylNFmCwYMRFrslwK59rTw_MRNF4.jpg?width=640&crop=smart&auto=webp&s=9f03d07979f6562e408572e309ca184cd82ab2c8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sWN4trIJHoCeuSbaylNFmCwYMRFrslwK59rTw_MRNF4.jpg?width=960&crop=smart&auto=webp&s=242ab96ca0b4a0df70dcdec75d639e64d9d84a69', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sWN4trIJHoCeuSbaylNFmCwYMRFrslwK59rTw_MRNF4.jpg?width=1080&crop=smart&auto=webp&s=6074ad425a1f17bf94f1181cb89d1a012e169c25', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sWN4trIJHoCeuSbaylNFmCwYMRFrslwK59rTw_MRNF4.jpg?auto=webp&s=fcc360a779a609dc413495d1ba306f6a1ff826a8', 'width': 1200}, 'variants': {}}]}
microagents: Modular Agents Capable of Self-Editing Their Prompts and Python code
22
**Project**: [https://github.com/aymenfurter/microagents](https://github.com/aymenfurter/microagents) **Description**: >This experiment explores self-evolving agents that automatically generate and improve themselves. No specific agent design or prompting is required from the user. Simply pose a question, and the system initiates and evolves agents tailored to provide answers. The process starts with a user query, activating a basic "bootstrap" agent, which doesn't execute Python code but plans and delegates to specialized agents capable of running Python for broader functions. An Agent Manager oversees them, selecting or creating agents via vector similarity for specific tasks. Agents have evolving system prompts that improve through learning. For coding tasks, agents include Python in prompts, refining their approach through an "evolution step" if unsuccessful. Upon completing a task, an agent's status updates, and the bootstrap agent evaluates the result, engaging other agents for further steps in larger processes.
2023-12-26T12:27:22
https://www.reddit.com/r/LocalLLaMA/comments/18r73gj/microagents_modular_agents_capable_of_selfediting/
APaperADay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18r73gj
false
null
t3_18r73gj
/r/LocalLLaMA/comments/18r73gj/microagents_modular_agents_capable_of_selfediting/
false
false
self
22
{'enabled': False, 'images': [{'id': 'A6dEO0qT0lGPOSfu1Xj84p5etU1nGS5F2KnOkUHySNg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EH9sTF-dj7yqciWp18DWG9VdrIu0AtS7Cg3OJcybLyc.jpg?width=108&crop=smart&auto=webp&s=32963822cbf63f32b421cd1af4751f6a17b243b0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EH9sTF-dj7yqciWp18DWG9VdrIu0AtS7Cg3OJcybLyc.jpg?width=216&crop=smart&auto=webp&s=7f812577811ecbaddcb3e19d5c9e8b19f2302862', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EH9sTF-dj7yqciWp18DWG9VdrIu0AtS7Cg3OJcybLyc.jpg?width=320&crop=smart&auto=webp&s=54f51e305c9795182ebc021161bcc2e35e8903df', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EH9sTF-dj7yqciWp18DWG9VdrIu0AtS7Cg3OJcybLyc.jpg?width=640&crop=smart&auto=webp&s=d58898433ed18ed43a082530fd40942eadd4c9e4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EH9sTF-dj7yqciWp18DWG9VdrIu0AtS7Cg3OJcybLyc.jpg?width=960&crop=smart&auto=webp&s=85d315db82be88756d789b1bc4c58ca7b96e8efd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EH9sTF-dj7yqciWp18DWG9VdrIu0AtS7Cg3OJcybLyc.jpg?width=1080&crop=smart&auto=webp&s=fd86808666fdeb05159fc9ca06471c459bd8e661', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EH9sTF-dj7yqciWp18DWG9VdrIu0AtS7Cg3OJcybLyc.jpg?auto=webp&s=8b66b1e0a606a135aa1d5c95bb6d7aeba03bc193', 'width': 1200}, 'variants': {}}]}
LLMs talking to each other
2
Is it possible to use existing LLM software to implement a kind of storytelling, or maybe better to call it just a conversation between multiple characters? For example, imagine character A, a shy intern, character B, an experienced developer, and character C, an angry boss. Now let's give them a topic to discuss and let the conversation flow without user interaction. What is the simplest way to do it? Or should I code this kind of idea myself?
2023-12-26T12:26:07
https://www.reddit.com/r/LocalLLaMA/comments/18r72pp/llms_talking_each_other/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18r72pp
false
null
t3_18r72pp
/r/LocalLLaMA/comments/18r72pp/llms_talking_each_other/
false
false
self
2
null
Advice to create AI generated text-based interactive stories
3
Hi, I'm looking for advice and suggestions on creating text-based interactive stories using AI for classroom use. The idea is that I craft a prompt or instruction that sets the basic premise for the AI to work with, then it will generate encounters for students to interact with and make their own choices out of three offered. For example, I could instruct it to craft a scenario where the player is a soldier participating in the Battle of Waterloo and let the player experience what happened from that soldier's perspective. I've experimented with Bard, and it is able to give me usable scenarios and options based off an initial prompt. However, I would then need to individually extract each scenario and option and put it into something like Twine. That would be rather time-consuming as well as limiting in terms of the flexibility of the story, as opposed to the wider variety of branches a 'live' AI response could grant. I've tried installing Oobabooga and running some of the 7B models (RTX 3060TI 8GB VRAM + 16GB RAM). Not fantastic, but possibly another option if there's a way to interface this with a front-end that just presents the user with the scenario text and lets them select a choice. I'm not very technically oriented and can't code, but I can try to learn if it's basic stuff or if there's a tutorial I can follow along. Could anyone offer advice on whether what I have in mind is doable, and how I might go about doing this? I'm considering upgrading to an RTX 3090 if I need more speed and better models running locally.
2023-12-26T12:21:58
https://www.reddit.com/r/LocalLLaMA/comments/18r70cs/advice_to_create_ai_generated_textbased/
dazprays
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18r70cs
false
null
t3_18r70cs
/r/LocalLLaMA/comments/18r70cs/advice_to_create_ai_generated_textbased/
false
false
self
3
null
Local Dolphin 7b shouting violence out of nowhere.
118
Tried to see if it knew the Daisy Bell song; it hallucinated a lot and inserted creepy and violent text in the middle. The model is "dolphin-2.0-mistral-7b.Q4_K_M.gguf". Have you guys ever come across something like this?
2023-12-26T11:53:05
https://i.redd.it/cnkmqlkfom8c1.jpeg
mguinhos
i.redd.it
1970-01-01T00:00:00
0
{}
18r6k47
false
null
t3_18r6k47
/r/LocalLLaMA/comments/18r6k47/local_dolphin_7b_shouting_violence_out_of_nowhere/
false
false
nsfw
118
{'enabled': True, 'images': [{'id': 'kq1e3zqIpdoZ1qHFT15PgpxQ4L3EnPvmn_i6NnWa6rQ', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/cnkmqlkfom8c1.jpeg?width=108&crop=smart&auto=webp&s=de9cb435dab2d5d9b6c6edc367a6a6db5c26a78d', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/cnkmqlkfom8c1.jpeg?width=216&crop=smart&auto=webp&s=b652a39d7ca6cc15df7892182d1717fc06ddf2d4', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/cnkmqlkfom8c1.jpeg?width=320&crop=smart&auto=webp&s=f2cb46c9bab4881bcdfc7540a2a027b99fbc257c', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/cnkmqlkfom8c1.jpeg?width=640&crop=smart&auto=webp&s=08356c958b9f378fdc32c7391dae029ebb485c05', 'width': 640}], 'source': {'height': 1544, 'url': 'https://preview.redd.it/cnkmqlkfom8c1.jpeg?auto=webp&s=bfd41e74c4b43ec227e2682bd7c2d825fb92e7f8', 'width': 720}, 'variants': {'nsfw': {'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/cnkmqlkfom8c1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=d59e38069e57ed01c2ba9811275e4f948ca0b950', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/cnkmqlkfom8c1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=90779b6c8c3e1b5e6f720384a8c171eda3951070', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/cnkmqlkfom8c1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=58cba6e0d825e77256cf7c468d7211a916c99cee', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/cnkmqlkfom8c1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=a4ba38cb631c6e00033712c70ab7e6df196547cd', 'width': 640}], 'source': {'height': 1544, 'url': 'https://preview.redd.it/cnkmqlkfom8c1.jpeg?blur=40&format=pjpg&auto=webp&s=52b3e071ec4d0981836c2bded6c942dad9a88896', 'width': 720}}, 'obfuscated': {'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/cnkmqlkfom8c1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=d59e38069e57ed01c2ba9811275e4f948ca0b950', 
'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/cnkmqlkfom8c1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=90779b6c8c3e1b5e6f720384a8c171eda3951070', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/cnkmqlkfom8c1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=58cba6e0d825e77256cf7c468d7211a916c99cee', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/cnkmqlkfom8c1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=a4ba38cb631c6e00033712c70ab7e6df196547cd', 'width': 640}], 'source': {'height': 1544, 'url': 'https://preview.redd.it/cnkmqlkfom8c1.jpeg?blur=40&format=pjpg&auto=webp&s=52b3e071ec4d0981836c2bded6c942dad9a88896', 'width': 720}}}}]}
SauerkrautLM-Mixtral-8x7B: tensor 'blk.0.ffn_gate.weight' not found
3
I just loaded TheBloke/SauerkrautLM-Mixtral-8x7B-GGUF (Q4_K_M) and tried to run it with llama.cpp in the text generation web UI - but it fails with the message: error loading model: create_tensor: tensor 'blk.0.ffn_gate.weight' not found llama_load_model_from_file: failed to load model What am I doing wrong? All other models do run... Or is this a bug in this model and I should try a different version? Note: Now I tried to run TheBloke/Mixtral-8x7B-MoE-RP-Story-GGUF (also Q4_K_M) - and it fails with exactly the same problem?!? Just to double check, e.g. WizardLM-Uncensored-SuperCOT-Storytelling.Q4_K_M.gguf runs perfectly well.
2023-12-26T11:02:29
https://www.reddit.com/r/LocalLLaMA/comments/18r5srn/sauerkrautlmmixtral8x7b_tensor_blk0ffn_gateweight/
StableLlama
self.LocalLLaMA
2023-12-26T11:06:43
0
{}
18r5srn
false
null
t3_18r5srn
/r/LocalLLaMA/comments/18r5srn/sauerkrautlmmixtral8x7b_tensor_blk0ffn_gateweight/
false
false
self
3
null
a slightly belated christmas present! a new 7b crashing through the benches! (also, come help us fix benchmarks!)
1
[removed]
2023-12-26T10:48:30
https://www.reddit.com/r/LocalLLaMA/comments/18r5l6j/a_slightly_belated_christmas_present_a_new_7b/
Alignment-Lab-AI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18r5l6j
false
null
t3_18r5l6j
/r/LocalLLaMA/comments/18r5l6j/a_slightly_belated_christmas_present_a_new_7b/
false
false
default
1
null
I love running locally, but
21
Open source models are catching up, and we have even bigger FOSS (or at least open weights) hopes for 2024. However, we have a problem with the interface to these models (say, GUI) and even bigger problems with integration. Without these, open source models are going to be used only by some businesses and some people for some specific reasons. Apple probably has the upper hand here, since they own two (three with the watch, and four with their upcoming headset) of your most trusted friends: your phone and your computer. Imagine one multi-modal model integrated into your phone and laptop and all the other Apple devices you've got. It will have every advantage needed to be a great assistant. You know, all the context - your emails, your notes, all the documents, projects, phone calls, browsing, everything. Plus, it will have all the control for writing emails and messages, scheduling your stuff, browsing for you, writing, etc. While this does not exist yet, all the engineering problems can be considered solved. And in our FOSS home, we don't even have one GUI that everyone agrees on, because it's either user-friendly and lacks features or the other way around. On the integration front, I think we haven't even started. This is my opinion based on my observations. Please feel free to disagree. Is this a rant? What do you think? Edit: I don't want one GUI for all. What I meant is we don't even have a single GUI that is good enough, has all the basic features and is user friendly. Edit2: apparently the singularity subreddit took over; most of the comments are wishful hypers. Some comments were informative, though, thank you.
2023-12-26T10:47:42
https://www.reddit.com/r/LocalLLaMA/comments/18r5kqq/i_love_running_locally_but/
__Maximum__
self.LocalLLaMA
2023-12-27T01:54:22
0
{}
18r5kqq
false
null
t3_18r5kqq
/r/LocalLLaMA/comments/18r5kqq/i_love_running_locally_but/
false
false
self
21
null
OpenBLAS is super slow / not working
7
Hi, I have an issue with OpenBLAS. Currently I am using KoboldCPP, which works just fine, but using the OpenBLAS option makes prompt ingestion/processing about 50 times slower than no BLAS. I don't get it; I have tried Ubuntu, Windows, manually compiling... nothing works. It seems like nobody else is encountering this issue. Maybe one of you has an idea? I would really appreciate it, thanks. My specs: 2x EPYC 7302P 256GB DDR4 RX 6500 XT (for video only)
2023-12-26T10:31:35
https://www.reddit.com/r/LocalLLaMA/comments/18r5ca3/openblas_is_super_slow_not_working/
Ok_Ruin_5636
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18r5ca3
false
null
t3_18r5ca3
/r/LocalLLaMA/comments/18r5ca3/openblas_is_super_slow_not_working/
false
false
self
7
null