Dataset schema:
- title — string, 1–300 chars
- score — int64, 0–8.54k
- selftext — string, 0–41.5k chars
- created — timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
- url — string, 0–878 chars
- author — string, 3–20 chars
- domain — string, 0–82 chars
- edited — timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
- gilded — int64, 0–2
- gildings — string, 7 classes
- id — string, 7 chars
- locked — bool, 2 classes
- media — string, 646–1.8k chars
- name — string, 10 chars
- permalink — string, 33–82 chars
- spoiler — bool, 2 classes
- stickied — bool, 2 classes
- thumbnail — string, 4–213 chars
- ups — int64, 0–8.54k
- preview — string, 301–5.01k chars
They're adding DALL-E to MS-Paint!
1
I thought I would share with you all what I imagine MS-Paint level memes to be like in the future once they implement a lightweight, perhaps \~7B param version of Copilot to go along with the lightweight DALL-E for multimodal meme generation ☺️ https://preview.redd.it/gdu06a26gkzb1.png?width=685&format=png&auto=webp&s=ad87d6e2c2d01446ea98eb842c23373a08a081c8
2023-11-10T18:47:53
https://www.reddit.com/r/LocalLLaMA/comments/17sat79/theyre_addding_dalle_to_mspaint/
OldAd9530
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17sat79
false
null
t3_17sat79
/r/LocalLLaMA/comments/17sat79/theyre_addding_dalle_to_mspaint/
false
false
https://b.thumbs.redditm…iN73E5PJciUM.jpg
1
null
Squeezing performance out of models with no video card.
4
Hoping to get some advice on parameter adjustments, or maybe just better insight into hardware limitations. This is my first stab at hosting locally and I'm no hardware guru, so there's bound to be a lot of ignorance in this post.

I have a box with 60 threads and 128 GB of RAM, no video card, running Debian. I've installed llama.cpp successfully, downloaded quantized CodeLlama models from Hugging Face, and have successfully been able to run the models with llama.cpp, no errors at all. I'm finding, however, that it's painfully slow. I understand that not having a video card is impactful, but I had figured the available hardware would run the models efficiently, especially some of the smaller 3-bit ones. Here are the models I tried, the params I ran with, and the outputs:

**codellama-34b-instruct.Q5\_K\_M.gguf**

    ./build/bin/main -m ./models/codellama-34b-instruct.Q5_K_M.gguf -t 60 -ngl 0 -i --color -ins -c 4096 -b 4096 -n 4096

|Timing Type|Time (ms)|Additional Info|
|:-|:-|:-|
|Load Time|51390.32||
|Sample Time|61.44|126 runs (0.49 ms per token, 2050.88 tokens per second)|
|Prompt Eval Time|145584.94|137 tokens (1062.66 ms per token, 0.94 tokens per second)|
|Eval Time|182072.99|126 runs (1445.02 ms per token, 0.69 tokens per second)|
|Total Time|642129.15||

**codellama-34b-instruct.Q3\_K\_L.gguf**

    ./build/bin/main -m ./models/codellama-34b-instruct.Q3_K_L.gguf -t 60 -ngl 0 -i --color -ins -c 4096 -b 4096 -n 4096

|Timing Type|Time (ms)|Additional Info|
|:-|:-|:-|
|Load Time|3932.18||
|Sample Time|28.06|55 runs (0.51 ms per token, 1959.95 tokens per second)|
|Prompt Eval Time|78991.13|76 tokens (1039.36 ms per token, 0.96 tokens per second)|
|Eval Time|68619.09|55 runs (1247.62 ms per token, 0.80 tokens per second)|
|Total Time|1833748.36||

Both models eat absolutely every inch of the processor I've made available (all 60 threads); RAM barely spikes above 20 GB usage.

I'll wholly admit that I'm ignorant from here about what adjustments could be made to squeeze more performance out of the models. I'm going to rent a cloud GPU and test on a system there to see what kind of difference that makes, but I figured I'd check with this community first in case there's anything to edge out more performance from my current hardware.
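A back-of-the-envelope sanity check (every number below is an assumption for illustration, not a measurement from this box): CPU token generation is typically memory-bandwidth bound, because each generated token has to stream essentially the whole set of quantized weights through RAM. So an upper bound on decode speed is roughly bandwidth divided by model file size:

```python
# Rough ceiling on CPU decode speed, assuming generation is memory-bandwidth
# bound: tokens/sec <= RAM bandwidth / bytes streamed per token (~model size).

def est_tokens_per_sec(model_gb: float, bandwidth_gb_s: float) -> float:
    return bandwidth_gb_s / model_gb

q5_size = 22.2   # assumed file size of codellama-34b Q5_K_M, in GB
q3_size = 17.3   # assumed file size of codellama-34b Q3_K_L, in GB
ddr4_bw = 40.0   # assumed effective bandwidth of a multi-channel DDR4 server, GB/s

print(f"Q5_K_M ceiling: {est_tokens_per_sec(q5_size, ddr4_bw):.1f} tok/s")
print(f"Q3_K_L ceiling: {est_tokens_per_sec(q3_size, ddr4_bw):.1f} tok/s")
```

With numbers in that ballpark the ceiling is only a couple of tokens per second, so the observed 0.69-0.80 tok/s is not far off what the memory system allows; adding threads beyond the physical core count mostly adds contention rather than speed.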
2023-11-10T18:29:24
https://www.reddit.com/r/LocalLLaMA/comments/17saewx/squeezing_performance_out_of_models_with_no_video/
NBehrends
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17saewx
false
null
t3_17saewx
/r/LocalLLaMA/comments/17saewx/squeezing_performance_out_of_models_with_no_video/
false
false
self
4
null
Guess: Will Mistral 70B be open source?
9
[View Poll](https://www.reddit.com/poll/17s9hbp)
2023-11-10T17:46:52
https://www.reddit.com/r/LocalLLaMA/comments/17s9hbp/guess_will_mistral_70b_be_open_source/
TheTwelveYearOld
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17s9hbp
false
null
t3_17s9hbp
/r/LocalLLaMA/comments/17s9hbp/guess_will_mistral_70b_be_open_source/
false
false
self
9
null
Create/Query vector database with LLM
3
I am new to using a vector database, so bear with me. I have a codebase that I want to feed into a local vector database; I am using Weaviate. I can create the database with `SentenceTransformer('all-MiniLM-L6-v2')`, but when I try to query it with nearText and nearVector I don't get any hits for any queries I've tried (and yes, I checked to make sure everything is in the database).

I see on the Weaviate website that you can do a generative search. I assume I need to recreate the vector database with an embedding model that an LLM (say Mistral 7B) can read and find what I need from the vector database. What embedding model do I use? I'm calling the text-generation-webui API (v1/embedding), but it just gives me this error:

`extensions.openai.errors.ServiceUnavailableError: Error: Failed to load embedding model: all-mpnet-base-v2`

I have the Mistral model selected from the command line and I set `OPENEDAI_EMBEDDING_MODEL: TheBloke_Mistral-7B-Instruct-v0.1-GGUF` in settings.yaml. Could I get any guidance here? Thanks in advance for your responses.
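One thing worth ruling out (a toy illustration, not Weaviate code, and the vectors below are made up): query vectors only match if the query is embedded with the same model used at index time. Embeddings from different models live in unrelated spaces, so a nearVector search with mismatched embeddings returns nothing useful:

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "document" vector, as if embedded at index time with model A.
doc_vec = [0.9, 0.1, 0.0]

query_same_model = [0.85, 0.15, 0.05]   # same query, embedded with model A
query_other_model = [0.0, 0.2, 0.95]    # same query, embedded with model B

print("same model:     ", round(cosine(query_same_model, doc_vec), 2))
print("different model:", round(cosine(query_other_model, doc_vec), 2))
```

The practical upshot: whatever model built the index (`all-MiniLM-L6-v2` here) must also embed the queries; the LLM used for answer generation can be a completely different model.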
2023-11-10T17:34:26
https://www.reddit.com/r/LocalLLaMA/comments/17s977z/createquery_vector_database_with_llm/
that_one_guy63
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17s977z
false
null
t3_17s977z
/r/LocalLLaMA/comments/17s977z/createquery_vector_database_with_llm/
false
false
self
3
null
they are releasing the code to train your very own Zephyr models!
94
this is amazing 🔥🔥
2023-11-10T17:25:54
https://x.com/_lewtun/status/1722993938402025975?s=34
GasBond
x.com
1970-01-01T00:00:00
0
{}
17s90ey
false
null
t3_17s90ey
/r/LocalLLaMA/comments/17s90ey/they_are_releasing_the_code_to_train_your_very/
false
false
default
94
null
Confining LLaMA 2's context for RAG QA
2
So I want to ask for advice on 2 related topics:

1. If I have a corpus of many documents embedded in a vector store, how can I dynamically select (by metadata, for example) a subset of them and only perform retrieval on that subset for answer generation?
2. I want LLaMA to be able to say I DO NOT KNOW when the context it retrieved cannot answer the question. This behavior is not stable yet from what I have seen.

Thank you so much!
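For question 1, a minimal sketch of the idea (a toy in-memory store, not any particular vector-DB API): filter candidates on metadata first, then rank only that subset by similarity:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy "vector store": each document carries metadata alongside its vector.
docs = [
    {"text": "auth module notes",    "meta": {"project": "alpha"}, "vec": [0.9, 0.1]},
    {"text": "billing module notes", "meta": {"project": "beta"},  "vec": [0.8, 0.2]},
    {"text": "auth deep dive",       "meta": {"project": "alpha"}, "vec": [0.7, 0.3]},
]

def retrieve(query_vec, docs, top_k=2, meta_filter=None):
    # Pre-filter on metadata so similarity ranking only sees the chosen subset.
    pool = [d for d in docs if meta_filter is None
            or all(d["meta"].get(k) == v for k, v in meta_filter.items())]
    pool.sort(key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return pool[:top_k]

hits = retrieve([1.0, 0.0], docs, meta_filter={"project": "alpha"})
print([h["text"] for h in hits])
```

Most real vector stores expose the same pattern as a metadata/`where` filter applied before (or alongside) the vector search, so the subset never needs to be re-embedded.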
2023-11-10T16:57:18
https://www.reddit.com/r/LocalLLaMA/comments/17s8d7h/confining_llama_2s_context_for_rag_qa/
asakura_matsunoki
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17s8d7h
false
null
t3_17s8d7h
/r/LocalLLaMA/comments/17s8d7h/confining_llama_2s_context_for_rag_qa/
false
false
self
2
null
Local LLM tutorials and courses for beginners
8
Are there any recommended high-quality courses or tutorials on using local LLMs? E.g. on YouTube or similar, aimed at beginners with a very basic understanding of machine learning, covering use cases and how to set them up and use them. There is a lot of hard-to-understand jargon in this area, and also hundreds of videos, but it's hard to know which are good. It seems like this is going to be a popular topic to learn about; it has moved very quickly, almost out of nowhere, since the emergence of ChatGPT.
2023-11-10T16:56:34
https://www.reddit.com/r/LocalLLaMA/comments/17s8cl0/local_llm_tutorials_and_courses_for_beginners/
mobileappz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17s8cl0
false
null
t3_17s8cl0
/r/LocalLLaMA/comments/17s8cl0/local_llm_tutorials_and_courses_for_beginners/
false
false
self
8
null
What Makes a Great Language Model?
26
This piece touches on a few related questions, including:

* Why does training an LLM on the internet's data work?
* Why does prompting with "You are an expert" help?
* Why is chat fine-tuning (reinforcement learning/DPO) useful?
* Why is lower perplexity not always better?

In short, language models work because good ideas tend to be replicated (not because replicating ideas makes them more true).

**Isn't it surprising...?**

...that training a language model on the web's data works? Isn't there a lot of rubbish on the web? How does the model know what information to focus on?

"Correct" or "high quality" data has a tendency to repeat itself. There might be some data saying that the sun moves around the earth, but there is much more saying that the earth moves around the sun. This makes it possible to train language models on large datasets, even if they contain some, or even a lot, of weak information or arguments. Weak arguments and information tend to differ from each other, whereas stronger arguments and information tend to be articulated, transmitted and replicated more coherently.

There's a very important directionality to be aware of: i) good explanations/ideas are more likely to be replicated, NOT ii) replicated ideas/explanations are necessarily true:

i) X is a good idea/explanation => X is more likely to be spread in written/oral form.

NOT

ii) A majority of people/experts say X => X is necessarily true/correct/better.

This gets to the heart of why language models currently cannot reason like humans. Language models are frequency based, not explanation based. There was a time when many people thought the Earth was the centre of the universe. Not only was that wrong, but we didn't get around it with new data; we got around it with new explanations. The same is true of general relativity improving on Newton's laws - it wasn't brought about by new data, but by a new explanation. You may feel I'm digressing a bit.
I'm not. Understanding language models as STATISTICAL MODELS helps us understand a lot about what works well for training and why.

Getting back to the narrow point: the internet is a reasonably good dataset to train an LLM on because humans tend to replicate better ideas/information, i.e. the frequency of good ideas and information sits sufficiently far above the bad. Still, the poor/inaccurate information blurs the model's answers. Think of the internet's data as rolling hills, with the model always trying to climb to the highest peak. The more rubbish dumps dotting the landscape, the harder it is for the model to identify the highest peak. The cleaner the hills, the sharper the tallest peak and the easier it is for the model to identify the best answer.

**Why does it help to tell the model it is an expert at coding?**

A language model has no way to identify good or bad data other than to look at probabilities based on the data it has seen. A model trained on the internet will have code from GitHub, text from Wikipedia and much more. Now, you pre-pend your question with "You are an expert at coding...". The meaning of that phrase is going to appear a lot more in the GitHub data than in the rest of the web's data. This drags the distribution of probabilities for the words the model will answer with towards those in GitHub. On average, these answers will be more related to code, and will be better answers! Training techniques all involve some form of statistically dragging the model towards high-quality, relevant information, and away from low-quality information.

**Why data quality matters**

You'll read everywhere that data quality matters. The LIMA paper teaches "less is more" for certain fine-tuning, provided the data is of very high quality, i.e. it's better to have some great data than a lot of bad data.
If you train a model on data about a house, a garden and the wheelie bins outside, but there's nothing good in the bins, then you're better off leaving out the bins altogether so that they don't add noise to the statistical distribution.

There's a model called Bloom with 176 billion parameters. It's being crushed by much smaller models, probably even 7B models in some cases, like Zephyr. And that isn't because of architecture - they're all transformer architectures. It's that everyone is now using much cleaner data (and smaller models are being trained on more data, rather than larger models on less data - see the Chinchilla paper, although training is now done way past Chinchilla-optimal). Mistral isn't much different from Llama 2 in architecture (apart from tweaks to the attention layout), but it's probably trained for longer AND on better data. Zephyr is better than Mistral, yet it was only trained for a few hours! That's data quality, not architecture. Companies are going back to smaller models - Llama 2's biggest model is 70B parameters, and xAI's model is around 30-40B parameters.

**Why do big companies do Reinforcement Learning?**

Reinforcement learning has been seen as an important "alignment" step, done on models after initial training to make them safer and more "human like". I'll give a different articulation here: reinforcement learning shifts the model's statistical distribution away from bad answers and towards good answers. Standard training - where you take part of a sentence, get the model to predict the next word, and penalise/adjust the model based on how far its prediction was from the actual next word - doesn't inherently allow the model to distinguish between good and bad data. With standard training, the model can only distinguish good from bad based on the frequency of what it sees. Reinforcement learning adds a different tool, allowing us to push the statistical distribution away from bad answers and towards good ones.
As "not just a side note", reinforcement learning used to be very complicated, involving the training of an entirely separate helper model. This is no longer the case with Direct Preference Optimisation (DPO), which shifts the model from bad to good in just one training step. So the message that reinforcement learning is only possible for big companies is changing.

In DPO, you take a prompt and create (with a human or a language model) two answers: a good one and a bad one. Then you run those prompt+answer pairs through your model. For the "good" answer, you take whatever probability the model predicted for generating that answer and increase it, say by a factor of beta relative to a reference model (then backpropagate through the model). For the "bad" answer, you take whatever probability the model predicted for generating it and decrease that probability.

The good+bad answer dataset is expensive to generate with humans, but is powerful in how it can shift the model towards using better parts of its statistical distribution. This is why ChatGPT will occasionally show you pairs of answers and ask you to choose between the two (presumably to use for training). Alternatively, you can use a stronger language model to generate data to train a weaker model (e.g. Zephyr was trained on data curated from GPT-4, Llama, Falcon and MPT).

DPO or reinforcement learning statistically drags the model away from the trash can and towards the house. In a sense, it allows model trainers to make up for the fact that their datasets contain bad explanations and information, pulling the model away from bad answers and towards good ones. Of course, it would have been better if the bad-quality answers weren't in there in the first place!
**Lower perplexity is not (necessarily) better!**

Perplexity is a measure of how much a language model deviates from its training data (or some smaller benchmark dataset, often WikiText, a sample from Wikipedia). When an LLM is being trained, the perplexity goes down and down until it eventually plateaus, i.e. the model gets closer and closer to representing its training data.

Interestingly, certain types of training, like reinforcement learning (or now, DPO), can increase the perplexity of a model! Why? And is that good or bad? Quite simply, DPO moves the statistical distribution of the model away from bad data and towards better data. It's like moving the model away from the rubbish dump to focus on the rolling hills. If your perplexity benchmark includes the dataset for the dump+hills, then of course the perplexity measurement can go up, because your model no longer represents the dump! Low perplexity is not a goal in itself; the goal is low perplexity on the highest-quality dataset possible. This is why DPO can (and probably should) increase perplexity.

**What makes a great language model?**

Good ideas and information tend to be more frequently replicated in language datasets. To draw a Dawkins analogy, biology replicates genes and humans replicate ideas, with a tendency towards replicating the better ideas. By "replicate", I mean spread in written and oral form, i.e. language. By building a statistical model of written datasets, we tend to be able to generate the better ideas and information, because those are what the datasets tend to contain more of. Since the model itself, other than analysing statistics, doesn't know which ideas are better, we do the following to improve results:

1. Filter data to remove bad answers and messy data.
2. Shift the statistical distribution away from the remaining bad ideas and towards the good ideas/info.
This is done through reinforcement learning (now DPO), as described above, using pairs of good and bad answers to questions.

3. Even then, with a finished model, we can further shift it towards relevant parts of its distribution by saying things like "You are an expert on coding" or "You are an expert on biology."

To sum up, a great language model is one where we create and drag its statistical distribution towards the highest-quality data and ideas represented in language datasets.
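The DPO step described above can be written down as a small loss function (a minimal sketch; the log-probabilities and `beta` below are illustrative toy numbers, not from any real model):

```python
import math

def dpo_loss(logp_good, logp_bad, ref_good, ref_bad, beta=0.1):
    # How much more the policy prefers the good answer over the bad one,
    # measured relative to the frozen reference model.
    margin = (logp_good - ref_good) - (logp_bad - ref_bad)
    # -log sigmoid(beta * margin): small when the policy already leans the
    # right way, large when it leans towards the bad answer.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Policy shifted towards the good answer -> low loss.
loss_good = dpo_loss(logp_good=-5.0, logp_bad=-9.0, ref_good=-6.0, ref_bad=-8.0)
# Policy shifted towards the bad answer -> high loss.
loss_bad = dpo_loss(logp_good=-9.0, logp_bad=-5.0, ref_good=-8.0, ref_bad=-6.0)
print(loss_good, loss_bad)
```

Backpropagating this loss is exactly the "increase the good answer's probability, decrease the bad answer's, by beta relative to the reference model" step in one shot, with no separate reward model.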
2023-11-10T16:46:58
https://www.reddit.com/r/LocalLLaMA/comments/17s84u3/what_makes_a_great_language_model/
TrelisResearch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17s84u3
false
null
t3_17s84u3
/r/LocalLLaMA/comments/17s84u3/what_makes_a_great_language_model/
false
false
self
26
null
Local LLM interaction with the environment
5
Hello, can someone direct me to resources that cover integration of local LLMs with external systems? I need to use Llama 2 to take instructions from the user, such as "Open a folder named llama2 on the C partition," and use Llama 2's interpretation to execute the instruction. One approach that comes to mind is a prompt that makes the model return its answer formatted in a certain way that middleware can interpret and execute.
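A minimal sketch of that middleware idea (the JSON schema and action names here are hypothetical, not from any framework): prompt the model to reply only with JSON, then validate the reply against an allow-list before executing anything:

```python
import json
import pathlib

# Only actions the middleware explicitly supports may be executed.
ALLOWED_ACTIONS = {"open_folder", "create_folder"}

def execute(instruction_json: str, dry_run: bool = True) -> dict:
    """Parse the model's JSON reply and dispatch it safely."""
    cmd = json.loads(instruction_json)
    if cmd.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"refusing unknown action: {cmd.get('action')!r}")
    if cmd["action"] == "create_folder" and not dry_run:
        pathlib.Path(cmd["path"]).mkdir(parents=True, exist_ok=True)
    return cmd

# The LLM would be prompted to answer ONLY with JSON like this:
reply = '{"action": "open_folder", "path": "C:/llama2"}'
print(execute(reply))
```

The allow-list matters: a local model will occasionally hallucinate actions, so the middleware, not the model, decides what is actually runnable.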
2023-11-10T16:02:00
https://www.reddit.com/r/LocalLLaMA/comments/17s74ub/local_llm_interaction_with_the_environment/
Soft-Conclusion-2004
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17s74ub
false
null
t3_17s74ub
/r/LocalLLaMA/comments/17s74ub/local_llm_interaction_with_the_environment/
false
false
self
5
null
Introducing Jais 30B, the latest open source Arabic-English model developed by Core42 & Cerebras
38
2023-11-10T15:50:51
https://www.g42.ai/resources/publications/Jais-30B
maroule
g42.ai
1970-01-01T00:00:00
0
{}
17s6vu9
false
null
t3_17s6vu9
/r/LocalLLaMA/comments/17s6vu9/introducing_jais_30b_the_latest_open_source/
false
false
https://b.thumbs.redditm…5pucGybOxN1c.jpg
38
{'enabled': False, 'images': [{'id': 'SVQQLdoM6ICA-Q9T-p18XkQO-8E-QVaa2bObGUg0qvg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/66HvticOb3w7ehZVGEEczAV0CrT0krFlhsyuHq7-sDc.jpg?width=108&crop=smart&auto=webp&s=e08d5d20d4a0d90869b6aac50be790f13c44846f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/66HvticOb3w7ehZVGEEczAV0CrT0krFlhsyuHq7-sDc.jpg?width=216&crop=smart&auto=webp&s=7ed4680fd3e6a09075559f16e737e533520679af', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/66HvticOb3w7ehZVGEEczAV0CrT0krFlhsyuHq7-sDc.jpg?width=320&crop=smart&auto=webp&s=b9d841ae47313fe472f81b618d18febee0c2bd35', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/66HvticOb3w7ehZVGEEczAV0CrT0krFlhsyuHq7-sDc.jpg?width=640&crop=smart&auto=webp&s=3b3a3165e43a311354b3d1f62ec9a9a8b3fce14c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/66HvticOb3w7ehZVGEEczAV0CrT0krFlhsyuHq7-sDc.jpg?width=960&crop=smart&auto=webp&s=332d94083ddac2ac4ba7ed2b4587ccd044f43a0c', 'width': 960}], 'source': {'height': 563, 'url': 'https://external-preview.redd.it/66HvticOb3w7ehZVGEEczAV0CrT0krFlhsyuHq7-sDc.jpg?auto=webp&s=c9397f5023ab06b25385f924129b2b5a57c5aa61', 'width': 1000}, 'variants': {}}]}
A selfhostable chatbot like character.ai/crushon.ai?
1
I've been experimenting with character.ai a bit recently. I thought it was a bit too limiting and had no NSFW support, so I switched to the uncensored version called crushon.ai. Is there a free and self-hostable version, or another similar AI model?
2023-11-10T15:06:40
https://www.reddit.com/r/LocalLLaMA/comments/17s5xf9/a_selfhostable_chatbot_like_characteraicrushonai/
Adrian_8115
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17s5xf9
false
null
t3_17s5xf9
/r/LocalLLaMA/comments/17s5xf9/a_selfhostable_chatbot_like_characteraicrushonai/
false
false
self
1
null
What are some of the considerations when deploying apis against self hosted LLMs?
5
While OpenAI and similar AI companies provide API access to their models and charge by the token, and major cloud providers, especially those with AI offerings, provide infrastructure and charge by the minute, I've been thinking about the third option: APIs around self-hosted models. I looked at some GPU cloud providers; prices vary but seem to start around $0.5/hr for a single instance with a GPU, with cost scaling linearly with the number of instances. Does it make sense for a small-to-medium operation to go with a cloud provider, or is it feasible to build their own cluster without breaking the bank?

1. Which/how many GPUs should one have in a starter cluster?
2. What scale/throughput should one expect to get out of such a starter cluster?
3. Are there providers who offer cheaper, more cost-effective pricing for such a use case?

For additional context, use cases include:

1. External-facing chatbots with RAG
2. Document and issue/ticket classification and summarization, run in batches on a schedule or in real time
3. Internal AI assistant bots for text generation

Internal user count is around 50. External user count is around 5-10k weekly, but up to 50k-75k seasonally. If this post is too naive, I apologize. I'd appreciate it if you could point me to relevant resources online. TIA
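A rough way to frame the rent-vs-API decision (every number below is an assumption for illustration, not a quote from any provider): convert the hourly GPU price into a per-1k-token cost at an assumed throughput, and compare against an assumed API price:

```python
# Hypothetical numbers -- adjust to real quotes and measured throughput.
gpu_hourly = 0.50          # $/hr for one cloud GPU instance (assumed)
tokens_per_sec = 30        # assumed sustained throughput of the hosted model
api_price_per_1k = 0.002   # assumed API price per 1k tokens

tokens_per_hour = tokens_per_sec * 3600
gpu_cost_per_1k = gpu_hourly / (tokens_per_hour / 1000)

print(f"self-hosted: ${gpu_cost_per_1k:.4f} per 1k tokens at full utilization")
print(f"API:         ${api_price_per_1k:.4f} per 1k tokens")
```

The key variable is utilization: the self-hosted per-token cost above assumes the GPU is busy 100% of the time, and it rises in inverse proportion to actual load, which is why bursty traffic (the seasonal 50k-75k users) tends to favor per-token APIs or autoscaling.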
2023-11-10T14:19:12
https://www.reddit.com/r/LocalLLaMA/comments/17s4z9k/what_are_some_of_the_considerations_when/
Jimmy-Coder
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17s4z9k
false
null
t3_17s4z9k
/r/LocalLLaMA/comments/17s4z9k/what_are_some_of_the_considerations_when/
false
false
self
5
null
Why not test all models for training on the test data with Min-K% Prob?
8
So there's Detect Pretrain Data, https://swj0419.github.io/detect-pretrain.github.io/ , where one can test whether a model has been pretrained on a given text. Why don't we just test all the models going on the leaderboard and reject those detected as having pretrained on the test data? It would end the "train on test" issue.
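The Min-K% Prob score behind that detector is simple to sketch (the log-probabilities below are toy numbers, not real model output): average the lowest-k% of token log-probabilities; memorized text tends to lack the very-low-probability "surprise" tokens that novel text has:

```python
def min_k_prob(token_logprobs, k=0.2):
    """Average log-prob of the lowest-k fraction of tokens (Min-K% Prob).
    Higher (less negative) scores suggest the model has seen the text."""
    n = max(1, int(len(token_logprobs) * k))
    lowest = sorted(token_logprobs)[:n]
    return sum(lowest) / n

seen   = [-0.1, -0.2, -0.3, -0.1, -0.4]   # toy logprobs for memorized text
unseen = [-0.1, -2.5, -0.3, -3.0, -0.4]   # toy logprobs for novel text
print(min_k_prob(seen), min_k_prob(unseen))
```

Applying this to leaderboard submissions needs token-level log-probs from each model over the benchmark's test set, plus a score threshold; the method gives evidence, not proof, so "reject" would probably have to mean "flag for review".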
2023-11-10T14:12:53
https://www.reddit.com/r/LocalLLaMA/comments/17s4uv2/why_not_test_all_models_for_training_on_the_test/
vatsadev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17s4uv2
false
null
t3_17s4uv2
/r/LocalLLaMA/comments/17s4uv2/why_not_test_all_models_for_training_on_the_test/
false
false
self
8
null
Best model to recreate AI Dungeon.
39
So, a couple of years ago I found something called AI Dungeon; it was my first contact with an AI model. It was very good until it got censored af. The version I used was probably running GPT-2 or GPT-1 (idk). Is there any model that compares, or is better, for text-based RPGs, that isn't very censored and is preferably 7B? (I'm running on oobabooga with my CPU.) Note: sorry if I used the wrong terminology, I'm still a total noob lol. I discovered you can run your own language model just 2 days ago, and don't worry, I'm not trying to fulfill some weird fetish 💀.
2023-11-10T14:07:06
https://www.reddit.com/r/LocalLLaMA/comments/17s4qya/best_model_to_recreate_ai_dungeon/
RexorGamerYt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17s4qya
false
null
t3_17s4qya
/r/LocalLLaMA/comments/17s4qya/best_model_to_recreate_ai_dungeon/
false
false
self
39
null
Finetuning LLMs: Does it add new knowledge to model or not?
22
Can you share your experiences?
2023-11-10T13:02:20
https://www.reddit.com/r/LocalLLaMA/comments/17s3jkd/finetuning_llms_does_it_add_new_knowledge_to/
Euphoric-Nebula-4559
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17s3jkd
false
null
t3_17s3jkd
/r/LocalLLaMA/comments/17s3jkd/finetuning_llms_does_it_add_new_knowledge_to/
false
false
self
22
null
Aether - ChatGPT with retrieval over python/js docs (Transformers, LlamaIndex, etc)
2
2023-11-10T12:42:22
https://www.reddit.com/gallery/17s3713
bleugre3n
reddit.com
1970-01-01T00:00:00
0
{}
17s3713
false
null
t3_17s3713
/r/LocalLLaMA/comments/17s3713/aether_chatgpt_with_retrieval_over_pythonjs_docs/
false
false
https://b.thumbs.redditm…snn_p2rwEzSM.jpg
2
null
Fine Tuning vs. Context?
4
Was diving deeper into local LLM tinkering and I wanted to understand: when would I want to fine tune vs. use context? What use cases would either cater to? Sorry if this is a bit basic, but still learning the ropes in terms of LLM tinkering and got confused between the two. An answer to this will really help me focus on learning the right thing. Thanks. If it helps, I currently want my LLM to “understand” some essays/blog entries then spit back summaries/comparisons between them.
2023-11-10T12:11:41
https://www.reddit.com/r/LocalLLaMA/comments/17s2p7n/fine_tuning_vs_context/
masticore514219
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17s2p7n
false
null
t3_17s2p7n
/r/LocalLLaMA/comments/17s2p7n/fine_tuning_vs_context/
false
false
self
4
null
Does labeling datasets stored in a Vector DB make sense?
4
Hello, I'm still new to this, but I want to focus on using RAG and a vector DB to store all my personal and work-related data, and I'm seeking a better understanding of how things work. I'm interested in covering multiple domains, such as "Sales," "Marketing," and "Security." I plan to use an embedding model to create embeddings and then store them in a vector database. When I interact with my LLM, it should retrieve relevant data based on my prompt and feed it into the LLM query. For instance:

"What's the command for xyz?" or "Create me a good offer for xyz."

As I understand it, there will be a backend semantic search for "Create me a good offer." Based on similarity, and possibly nearest neighbors, it will provide the LLM with context based on my prompt, and the system prompt for the LLM will then be built from this information to deliver the best possible answer.

Now, the big question: when creating my dataset to store in the vector DB, should I label the dataset with tags like \[M\] or \[S\] for sales? That way, when I type my prompt and add the label \[S\], the semantic search can more accurately determine where to look. Does this approach make sense, or could it lead to more problems than it solves? I did ask GPT-4, but that's not the same as someone who may have some extended knowledge about this. Thanks!
2023-11-10T12:10:22
https://www.reddit.com/r/LocalLLaMA/comments/17s2oew/does_labeling_datasets_stored_in_a_vector_db_make/
Moist_Influence1022
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17s2oew
false
null
t3_17s2oew
/r/LocalLLaMA/comments/17s2oew/does_labeling_datasets_stored_in_a_vector_db_make/
false
false
self
4
null
History context snapshot of visual assistant model
1
I'd like to capture and save image-chatting context to disk, so I can return later to the same point in the dialog. Is it possible with some local solution (LLaVA, MiniGPT-4)?
2023-11-10T11:34:46
https://www.reddit.com/r/LocalLLaMA/comments/17s23ko/history_context_snapshot_of_visual_assistant_model/
iVoider
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17s23ko
false
null
t3_17s23ko
/r/LocalLLaMA/comments/17s23ko/history_context_snapshot_of_visual_assistant_model/
false
false
self
1
null
Converting ctransformers script into single .exe file with pyinstaller
3
I have a Python script that takes some input text, processes it with a local 7B model, and spits out the model's completion. When I call the script it runs beautifully (albeit slowly) on CPU only, using the ctransformers library. I'm now trying to convert my script into a single-click .exe file that any user can run without needing to manually install Python/dependencies or have any familiarity with the command line. My first attempts were with PyInstaller, but when I run the .exe file output by PyInstaller I get this error:

`OSError: Precompiled binaries are not available for the current platform. Please reinstall from source using: pip uninstall ctransformers --yes pip install ctransformers --no-binary ctransformers`

I have tried reinstalling ctransformers with --no-binary but I still get the same error. Various internet searches have not been helpful, and I have found very little about how one might go about converting a Python script that uses one of the main CPU-only libraries (llama.cpp, ctransformers, etc.) into a more user-friendly one-click .exe. Any help or pointers would be much appreciated!
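One common cause of that error (an assumption, not a verified fix for this exact setup): the frozen bundle is missing the precompiled .so/.dll files that ctransformers ships as package data, which a default PyInstaller build does not collect. PyInstaller's `--collect-all` flag (available in PyInstaller 4.4+) pulls in a package's data files and dynamic libraries:

```shell
# Bundle ctransformers' package data, including its precompiled shared
# libraries, into the one-file executable. "myscript.py" is a placeholder.
pyinstaller --onefile --collect-all ctransformers myscript.py
```

If that still fails, the finer-grained equivalents in a .spec file are `collect_data_files("ctransformers")` and `collect_dynamic_libs("ctransformers")` from `PyInstaller.utils.hooks`.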
2023-11-10T10:49:35
https://www.reddit.com/r/LocalLLaMA/comments/17s1gaz/converting_ctransformers_script_into_single_exe/
Hoblywobblesworth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17s1gaz
false
null
t3_17s1gaz
/r/LocalLLaMA/comments/17s1gaz/converting_ctransformers_script_into_single_exe/
false
false
self
3
null
Nice, llama.cpp is now also supported by LMQL
31
2023-11-10T10:33:14
https://lmql.ai/docs/models/llama.cpp.html
smart_kanak
lmql.ai
1970-01-01T00:00:00
0
{}
17s18do
false
null
t3_17s18do
/r/LocalLLaMA/comments/17s18do/nice_llamacpp_is_now_also_supported_by_lmql/
false
false
default
31
null
[Mistral AI model] Finetuning with [MASK] special token ?
1
Hi, I cannot find the information: can I use the special token '\[MASK\]' for fine-tuning? Thanks.
2023-11-10T09:54:00
https://www.reddit.com/r/LocalLLaMA/comments/17s0pu8/mistral_ai_model_finetuning_with_mask_special/
Significant_Ad_3682
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17s0pu8
false
null
t3_17s0pu8
/r/LocalLLaMA/comments/17s0pu8/mistral_ai_model_finetuning_with_mask_special/
false
false
self
1
null
Leveraging RAG with Code llama for Conversational Coding
1
[removed]
2023-11-10T09:46:25
https://www.reddit.com/r/LocalLLaMA/comments/17s0mdh/leveraging_rag_with_code_llama_for_conversational/
Darth-Coderr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17s0mdh
false
null
t3_17s0mdh
/r/LocalLLaMA/comments/17s0mdh/leveraging_rag_with_code_llama_for_conversational/
false
false
self
1
null
I wonder if there's a way to run an LLM without loading it into RAM
5
https://preview.redd.it/txoqaubzehzb1.png?width=1062&format=png&auto=webp&s=5ce1e0599c1b0430106cd828cad77dc516a42a4a

https://reddit.com/link/17rzqfm/video/fqtexzq5fhzb1/player

https://preview.redd.it/s60h7gh1fhzb1.png?width=1016&format=png&auto=webp&s=23f963f561d4f57c8562924032301ce0256e4249

I heard Apple's working on an on-device Siri with LLMs, but these models are memory-intensive, especially for the iPhone's limited RAM. This isn't just an Apple issue; big tech companies that want to run ML models on device, like Samsung, Google and Meta, will face the same problem. What if models could run directly from storage instead of RAM? Samsung is onto something with their MRAM tech: it's non-volatile, power-efficient, and can handle some logic/AI processing. Imagine your phone running models from storage!

I'm not an ML expert, but this tech evolution is intriguing. Are there other attempts like this?
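On desktop this already exists in a weaker form: llama.cpp mmaps the GGUF file, so weight pages are paged in from storage on demand and can be evicted under memory pressure, instead of being copied into RAM up front. A minimal sketch of the mechanism, with a small stand-in "weights" file:

```python
import mmap
import os
import tempfile

# Create a ~1 MB stand-in for a model weights file.
path = os.path.join(tempfile.mkdtemp(), "weights.bin")
with open(path, "wb") as f:
    f.write(bytes(range(256)) * 4096)

with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
    # Only the pages we actually touch are read from disk; the OS may drop
    # them again, so resident memory can stay far below the file size.
    layer = mm[512 * 1024 : 512 * 1024 + 16]
    print(len(layer), layer[:4])
```

The catch for phones is speed: flash bandwidth is far below RAM bandwidth, and token generation streams most of the weights per token, so running "from storage" trades memory for a large slowdown unless the storage itself gets much faster (which is where ideas like MRAM come in).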
2023-11-10T08:35:22
https://www.reddit.com/r/LocalLLaMA/comments/17rzqfm/i_wonder_theres_way_to_run_llm_without_loading_on/
wjohhan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rzqfm
false
null
t3_17rzqfm
/r/LocalLLaMA/comments/17rzqfm/i_wonder_theres_way_to_run_llm_without_loading_on/
false
false
https://b.thumbs.redditm…Ac-gmqg90o1A.jpg
5
null
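The idea in the post above already exists in a weaker form: llama.cpp memory-maps GGUF weights by default, so pages are faulted in from storage on demand rather than copied into RAM up front (hot pages still end up in the OS page cache). A minimal Python sketch of the mechanism, using a throwaway file as a stand-in for a weight shard:

```python
import mmap
import os
import tempfile

# Stand-in "weights" file: 4 KiB of bytes on disk.
path = os.path.join(tempfile.gettempdir(), "fake_weights.bin")
with open(path, "wb") as f:
    f.write(bytes(range(256)) * 16)

# Memory-map it read-only: the OS pages data in from storage on access,
# which is essentially what llama.cpp's default mmap loading does.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first_page = mm[:4096]  # touching a slice faults in only those pages
    mm.close()
os.remove(path)
```

The catch, and why on-device inference still wants fast memory, is that generating each token touches every weight, so a purely storage-backed model is bounded by storage bandwidth rather than RAM bandwidth.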
Yi-34B vs Yi-34B-200K on sequences <32K and <4K
34
Hello! By popular demand I am planning a fine-tune of [https://huggingface.co/dreamgen/opus-v0-7b](https://huggingface.co/dreamgen/opus-v0-7b) on top of Yi-34B and wonder whether to use the 200K variant as the base. The regular Yi-34B seems slightly better than Yi-34B-200K on standard benchmarks, but I wonder how it "feels" and whether the loss of performance on short context is worth it, given that the regular version can be used up to 32K tokens. [(Yi-34B vs Yi-34B-200K)](https://preview.redd.it/q0hpwekd9hzb1.png?width=1485&format=png&auto=webp&s=7bb564a0316568bc1a804ecfcac1e5fc6a3b9d6a) Did anyone try an analysis of these two models at various sequence lengths (<4K, <8K, <16K, etc.)?
2023-11-10T08:09:52
https://www.reddit.com/r/LocalLLaMA/comments/17rzed4/yi34b_vs_yi34b200k_on_sequences_32k_and_4k/
DreamGenX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rzed4
false
null
t3_17rzed4
/r/LocalLLaMA/comments/17rzed4/yi34b_vs_yi34b200k_on_sequences_32k_and_4k/
false
false
https://b.thumbs.redditm…w96SnXSwtclI.jpg
34
{'enabled': False, 'images': [{'id': '98mqfuLQqhOo6wYj_0R8RKbblLEpRDNNPr8PL5b-mCw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bxGTMHddHB6xBmkOsMM8niUhwWvQQK9CP74IkM-M0vs.jpg?width=108&crop=smart&auto=webp&s=f9601e62b4ac6b74a74657273ef6858d59a1b5e6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bxGTMHddHB6xBmkOsMM8niUhwWvQQK9CP74IkM-M0vs.jpg?width=216&crop=smart&auto=webp&s=0bffb4ca8e66b542900f36dae9c8df6131974d3e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bxGTMHddHB6xBmkOsMM8niUhwWvQQK9CP74IkM-M0vs.jpg?width=320&crop=smart&auto=webp&s=d6324dc0638602b47e4b6248470691edd7a0eb30', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bxGTMHddHB6xBmkOsMM8niUhwWvQQK9CP74IkM-M0vs.jpg?width=640&crop=smart&auto=webp&s=b8bf3f6769eb79bdac7b4a00e42b143430677256', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bxGTMHddHB6xBmkOsMM8niUhwWvQQK9CP74IkM-M0vs.jpg?width=960&crop=smart&auto=webp&s=e21aa79e6f1fb541f6e53db83a738d53a0264bd2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bxGTMHddHB6xBmkOsMM8niUhwWvQQK9CP74IkM-M0vs.jpg?width=1080&crop=smart&auto=webp&s=c9c250704865c41082e619e65411ab785fac6a84', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bxGTMHddHB6xBmkOsMM8niUhwWvQQK9CP74IkM-M0vs.jpg?auto=webp&s=ec0e2ac4e1eed21bd4d5f89cc581b5d152d3d7d8', 'width': 1200}, 'variants': {}}]}
S-LoRA: Serving Thousands of LLMs on a Single GPU
1
[removed]
2023-11-10T07:56:09
https://www.reddit.com/r/LocalLLaMA/comments/17rz7iu/slora_serving_thousand_llms_on_single_gpu/
Optimal-Resist-5416
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rz7iu
false
null
t3_17rz7iu
/r/LocalLLaMA/comments/17rz7iu/slora_serving_thousand_llms_on_single_gpu/
false
false
self
1
{'enabled': False, 'images': [{'id': 'FKznEFKSnZK8t3-Au0POQhEownkxE6T6_gaY2gNTFuI', 'resolutions': [{'height': 44, 'url': 'https://external-preview.redd.it/2ZMcYZRQ4xwhqyGSIaO9ck5bcikNqWmDSBxzEP-pYiA.jpg?width=108&crop=smart&auto=webp&s=6a16ea3f1cc03b7064fe91351d9db59a99e396af', 'width': 108}, {'height': 89, 'url': 'https://external-preview.redd.it/2ZMcYZRQ4xwhqyGSIaO9ck5bcikNqWmDSBxzEP-pYiA.jpg?width=216&crop=smart&auto=webp&s=2ac4f41c41fad698eb0588a322542b128dfc0b8b', 'width': 216}, {'height': 132, 'url': 'https://external-preview.redd.it/2ZMcYZRQ4xwhqyGSIaO9ck5bcikNqWmDSBxzEP-pYiA.jpg?width=320&crop=smart&auto=webp&s=3d7b77515670d7c4cfbfb9489a020c957855ee3e', 'width': 320}, {'height': 264, 'url': 'https://external-preview.redd.it/2ZMcYZRQ4xwhqyGSIaO9ck5bcikNqWmDSBxzEP-pYiA.jpg?width=640&crop=smart&auto=webp&s=4b5723edc5769d2a0efd461de35857427412bf00', 'width': 640}, {'height': 397, 'url': 'https://external-preview.redd.it/2ZMcYZRQ4xwhqyGSIaO9ck5bcikNqWmDSBxzEP-pYiA.jpg?width=960&crop=smart&auto=webp&s=5652aa1f01dd0e716b4ca63607da4dc9a0b95e09', 'width': 960}], 'source': {'height': 429, 'url': 'https://external-preview.redd.it/2ZMcYZRQ4xwhqyGSIaO9ck5bcikNqWmDSBxzEP-pYiA.jpg?auto=webp&s=1b97d8f9770e0420abf73abbc6d5aa37bc3b6d73', 'width': 1037}, 'variants': {}}]}
LLMs and local data ingestion
1
I'm pretty new to this entire field of LLMs. I've played around with a few models in the oobabooga UI and have been eyeing some of the other GUI options on GitHub as well. Recently, I've stumbled upon a lot of terms like "LangChain" or "RAG" that seem super interesting. As far as I understand, you can ingest data (text, files, etc.) into one of your LLMs to analyze, summarize, etc. However, I'm not quite sure how to do that. Would this be possible inside the oobabooga UI (which I've liked the most up until now)? Are there some resources/projects you could point me towards? And how does all of that play with the limited context window of a local LLM?
2023-11-10T07:51:48
https://www.reddit.com/r/LocalLLaMA/comments/17rz5c0/llms_and_local_data_ingestion/
uDerRedHead
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rz5c0
false
null
t3_17rz5c0
/r/LocalLLaMA/comments/17rz5c0/llms_and_local_data_ingestion/
false
false
self
1
null
US judge trims AI copyright lawsuit against Meta
42
2023-11-10T07:16:38
https://www.reuters.com/legal/litigation/us-judge-trims-ai-copyright-lawsuit-against-meta-2023-11-09/
Prince_Noodletocks
reuters.com
1970-01-01T00:00:00
0
{}
17ryoiz
false
null
t3_17ryoiz
/r/LocalLLaMA/comments/17ryoiz/us_judge_trims_ai_copyright_lawsuit_against_meta/
false
false
https://a.thumbs.redditm…Po6BzRlIw3C4.jpg
42
{'enabled': False, 'images': [{'id': '5Xs6sKHULXydPkCiIqZ_FP-wT8mAmEpYiyh-fo6hIhc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/jnewQk9GoVZ27nAFQTkX2iMSTPLGJvDup8wqY5TEzoI.jpg?width=108&crop=smart&auto=webp&s=737adfd8856ff04f69aa5db1503c0d8229302de0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/jnewQk9GoVZ27nAFQTkX2iMSTPLGJvDup8wqY5TEzoI.jpg?width=216&crop=smart&auto=webp&s=23fa4d77edb302333ebcf16962ed9a77ab88fcd1', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/jnewQk9GoVZ27nAFQTkX2iMSTPLGJvDup8wqY5TEzoI.jpg?width=320&crop=smart&auto=webp&s=9d64a8ed065fbd7d2285b781e5dbc57c892c025b', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/jnewQk9GoVZ27nAFQTkX2iMSTPLGJvDup8wqY5TEzoI.jpg?width=640&crop=smart&auto=webp&s=e85c572127212fb492aa30b163fb2d7a21811e3d', 'width': 640}], 'source': {'height': 381, 'url': 'https://external-preview.redd.it/jnewQk9GoVZ27nAFQTkX2iMSTPLGJvDup8wqY5TEzoI.jpg?auto=webp&s=2fd88ec10469e4e54b43049d2a06558e3dcbe34c', 'width': 728}, 'variants': {}}]}
Any Open Source Local LLM GUI Built with .NET Framework.
2
Right now LM Studio is really the only tool I know for running and using models; I feel like it's really good and useful, but it's closed source. Then I saw this thread ([https://www.reddit.com/r/LocalLLaMA/comments/16eoozu/best\_software\_webgui/](https://www.reddit.com/r/LocalLLaMA/comments/16eoozu/best_software_webgui/)) and came to know about many more, but didn't find any built with .NET and C#. I am thinking of building one but frankly have no clue how to start. Does anyone know of an open-source local LLM GUI built with the .NET Framework, or can you help/guide me in getting started with building one?
2023-11-10T07:08:18
https://www.reddit.com/r/LocalLLaMA/comments/17rykee/any_open_source_local_llm_gui_built_with_net/
TechieRathor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rykee
false
null
t3_17rykee
/r/LocalLLaMA/comments/17rykee/any_open_source_local_llm_gui_built_with_net/
false
false
self
2
null
Text to Image and text to video: model suggestions
1
Hi! Please suggest some of the latest text-to-image and text-to-video models, and share their RAM and VRAM requirements as well. If possible, suggest the best models that can work with 8GB or less of VRAM. Also, are any of these models available in GGUF format or supported by llama.cpp?
2023-11-10T06:24:21
https://www.reddit.com/r/LocalLLaMA/comments/17rxyet/text_to_image_and_text_to_video_model_suggestions/
Atharv_Jaju
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rxyet
false
null
t3_17rxyet
/r/LocalLLaMA/comments/17rxyet/text_to_image_and_text_to_video_model_suggestions/
false
false
self
1
null
How to implement my library in a chatbot?
1
I would like to create a chatbot that runs entirely locally: a good LLM for my RTX 4070, with RAG storing the knowledge from my books' epub (pdf, txt) files. Are there existing projects that have produced good results along these lines? If not, how can I implement it, and what tools should I use?
2023-11-10T05:56:50
https://www.reddit.com/r/LocalLLaMA/comments/17rxjw4/how_to_implement_my_library_in_a_chatbot/
Aristocle-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rxjw4
false
null
t3_17rxjw4
/r/LocalLLaMA/comments/17rxjw4/how_to_implement_my_library_in_a_chatbot/
false
false
self
1
null
How can i speed up inference on M1/16gig. what models/embeddings (?) to use with LM studio?
1
I'm ready to power up to the next phase of my addiction: speed. Basically I want to run batch jobs and have the AI generate all day long, but for that I need speed. Speed is what we need! Any tips on speeding up an M1/16GB so it spits out text faster? I'm looking at general summarization/rewriting tasks with a prompt. Mistral 7B?
2023-11-10T05:21:22
https://www.reddit.com/r/LocalLLaMA/comments/17rx06j/how_can_i_speed_up_inference_on_m116gig_what/
herozorro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rx06j
false
null
t3_17rx06j
/r/LocalLLaMA/comments/17rx06j/how_can_i_speed_up_inference_on_m116gig_what/
false
false
self
1
null
Seeking Feedback: Integrate LLMs with Just 3 Lines of Code – Pay for Cloud Use Only
2
I'm developing a JavaScript library tailored for the smooth integration of open source Large Language Models (LLMs) into web applications. This tool simplifies the activation of sophisticated LLM features in your web projects to just three lines of code. The library smartly determines whether to run the LLMs in the web browser, on the cloud, or a combination of both, depending on the real-time computational power of the user's device. This ensures an optimal user experience. Plus, you'll only incur charges when the library taps into cloud (API) resources, offering a cost-effective solution for developers. As I fine-tune this library, I’d love to get your take on possible use cases: I'm reaching out for your collective brainpower to identify creative and practical applications. What scenarios do you think would benefit from such a library? Are there specific gaps or challenges in the web development space that this tool could address? Your insights are gold as we strive to build something that's truly beneficial for the developer community and beyond. Can't wait to read your ideas and suggestions!
2023-11-10T04:33:04
https://www.reddit.com/r/LocalLLaMA/comments/17rw7y4/seeking_feedback_integrate_llms_with_just_3_lines/
emotion_something
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rw7y4
false
null
t3_17rw7y4
/r/LocalLLaMA/comments/17rw7y4/seeking_feedback_integrate_llms_with_just_3_lines/
false
false
self
2
null
Humane AI Pin dropping next week; how come I haven't heard more news about this? Seems like very capable tech packed into such a small device, if it indeed works as they are marketing it. I'm curious to see what type of latency it has for voice commands.
10
2023-11-10T04:31:32
https://hu.ma.ne/?utm_source=Humane&utm_campaign=9adde64a40-EMAIL_CAMPAIGN_2023_11_10_12_03&utm_medium=email&utm_term=0_968542ce76-9adde64a40-%5BLIST_EMAIL_ID%5D
LyPreto
hu.ma.ne
1970-01-01T00:00:00
0
{}
17rw70s
false
null
t3_17rw70s
/r/LocalLLaMA/comments/17rw70s/humane_ai_pin_dropping_next_week_how_come_i/
false
false
https://b.thumbs.redditm…Kw_TNYFSHzhY.jpg
10
{'enabled': False, 'images': [{'id': '6QVyrwdhr605apqxm1xSqMrLnoIihYgJ6p7lvh-B6ic', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/9hoSK2PVhXxmykw-qdao1YwZuHrh4uVDU6wt0rPDacI.jpg?width=108&crop=smart&auto=webp&s=7d56f5b546368c191c2864cead97e22f55bd6556', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/9hoSK2PVhXxmykw-qdao1YwZuHrh4uVDU6wt0rPDacI.jpg?width=216&crop=smart&auto=webp&s=b7daf472a0dc85604fc49a481f77d1b28e44cac4', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/9hoSK2PVhXxmykw-qdao1YwZuHrh4uVDU6wt0rPDacI.jpg?width=320&crop=smart&auto=webp&s=ae66ff4c085846ffdcdcb95a505ba982d1541db0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/9hoSK2PVhXxmykw-qdao1YwZuHrh4uVDU6wt0rPDacI.jpg?width=640&crop=smart&auto=webp&s=6820f0eee426e12e7cffb11fd0aadb6171016ccb', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/9hoSK2PVhXxmykw-qdao1YwZuHrh4uVDU6wt0rPDacI.jpg?width=960&crop=smart&auto=webp&s=c7e787e49c110d86e7637ab096ac06a6965dc74b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/9hoSK2PVhXxmykw-qdao1YwZuHrh4uVDU6wt0rPDacI.jpg?width=1080&crop=smart&auto=webp&s=3beca9a006837d0d281c35a163afe6a34a8b7354', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/9hoSK2PVhXxmykw-qdao1YwZuHrh4uVDU6wt0rPDacI.jpg?auto=webp&s=9a2626dc66147dce8cb1985d6377f3c98a9c96d4', 'width': 1600}, 'variants': {}}]}
Open-source UI and template for running wasm models in the browser
1
2023-11-10T04:21:22
https://wasmai.vercel.app/
toonistic
wasmai.vercel.app
1970-01-01T00:00:00
0
{}
17rw0q4
false
null
t3_17rw0q4
/r/LocalLLaMA/comments/17rw0q4/opensource_ui_and_template_for_running_wasm/
false
false
default
1
null
How to start on LLM productionalization?
1
I am new to LLMs, though I have an understanding of classical ML (regression, classification, etc.). Is there any course available on the background theory of LLMs (from a high level, not too deep) and how to put them into production? I often see acronyms in this sub which I am not aware of. I have come across the [HF NLP course](https://huggingface.co/learn/nlp-course/chapter1/1); would this help me with what I am looking for (not too heavy in theory/math)?
2023-11-10T03:21:07
https://www.reddit.com/r/LocalLLaMA/comments/17ruxn9/how_to_start_on_llm_productionalization/
AMGraduate564
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17ruxn9
false
null
t3_17ruxn9
/r/LocalLLaMA/comments/17ruxn9/how_to_start_on_llm_productionalization/
false
false
self
1
{'enabled': False, 'images': [{'id': '5DhY4vsMgWX6V2WnE9qoAXKn3eWVeQlb06uFrJcB3oY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VFQBbkgkBI063_oML9nKv0Go-02bFPBZ3yy6xMYgj5Q.jpg?width=108&crop=smart&auto=webp&s=a4c057404bd0d79eb5dbbd348e6fe764ba9b48fb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VFQBbkgkBI063_oML9nKv0Go-02bFPBZ3yy6xMYgj5Q.jpg?width=216&crop=smart&auto=webp&s=1a2aca2b64466721e0ab3260d888f50f5f825878', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VFQBbkgkBI063_oML9nKv0Go-02bFPBZ3yy6xMYgj5Q.jpg?width=320&crop=smart&auto=webp&s=fb447213b50b3a1444f29b6bbac0bbf326e0c897', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VFQBbkgkBI063_oML9nKv0Go-02bFPBZ3yy6xMYgj5Q.jpg?width=640&crop=smart&auto=webp&s=b32a8071a3ef13f06712c28b200186ed24fe8d4d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VFQBbkgkBI063_oML9nKv0Go-02bFPBZ3yy6xMYgj5Q.jpg?width=960&crop=smart&auto=webp&s=edfe4703f8b263e9a543ad4502b3455296eea311', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VFQBbkgkBI063_oML9nKv0Go-02bFPBZ3yy6xMYgj5Q.jpg?width=1080&crop=smart&auto=webp&s=10e689d244e611cd8626b20718e8e0522e45037a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VFQBbkgkBI063_oML9nKv0Go-02bFPBZ3yy6xMYgj5Q.jpg?auto=webp&s=703daba034655ef31c931f4b69c33506f95ff729', 'width': 1200}, 'variants': {}}]}
cant get cuda to work with llama-cpp-python in wsl ubuntu.
2
I tried everything; at this point I think I am doing something wrong, or I have discovered some very strange bug. I was thinking of posting on their GitHub, but I am not sure I'm not simply making a very stupid error. In a fresh conda install set up with Python 3.12 I used `export LLAMA_CUBLAS=1`, then I ran `CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python`. It runs without complaint, creating a working llama-cpp-python install, but without CUDA support. I know CUDA works in WSL because nvidia-smi shows CUDA version 12. I have tried setting up multiple environments, removing and reinstalling, and backends besides CUDA that also don't work, so something seems off with the backend part, but I don't know what. Best guess: I'm doing something very basic wrong, like not setting the environment variable correctly. It also simply does not create the llama_cpp_cuda folder, so [llama-cpp-python not using NVIDIA GPU CUDA - Stack Overflow](https://stackoverflow.com/questions/76963311/llama-cpp-python-not-using-nvidia-gpu-cuda) does not seem to be the problem. Hardware: Ryzen 5800H, RTX 3060, 16GB of DDR4 RAM, WSL2 Ubuntu. To test it I run the following and watch GPU memory usage, which stays at about 0: `from llama_cpp import Llama; llm = Llama(model_path="/mnt/d/Maschine learning/llm models/llama_2_7b/llama27bchat.Q4_K_M.gguf", n_gpu_layers=20, n_threads=6, n_ctx=3584, n_batch=521, verbose=True); output = llm("Q: Name the planets in the solar system? A: ", max_tokens=32, stop=["Q:", "\n"], echo=True)`
2023-11-10T03:09:43
https://www.reddit.com/r/LocalLLaMA/comments/17ruptr/cant_get_cuda_to_work_with_llamacpppython_in_wsl/
Noxusequal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17ruptr
false
null
t3_17ruptr
/r/LocalLLaMA/comments/17ruptr/cant_get_cuda_to_work_with_llamacpppython_in_wsl/
false
false
self
2
{'enabled': False, 'images': [{'id': 'nfayPavSUB5ngYv6-19UHNBThsXfcLIDQl4HkEe3Cv0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/yzSfTlKTSYGpEXeFgyDvHlfoLGOFQJqPuH_Y38RBz2U.jpg?width=108&crop=smart&auto=webp&s=0aad06750c23b98c9b7595343a8b54a42dc18851', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/yzSfTlKTSYGpEXeFgyDvHlfoLGOFQJqPuH_Y38RBz2U.jpg?width=216&crop=smart&auto=webp&s=b66126834977e269be586d07464046049ed09138', 'width': 216}], 'source': {'height': 316, 'url': 'https://external-preview.redd.it/yzSfTlKTSYGpEXeFgyDvHlfoLGOFQJqPuH_Y38RBz2U.jpg?auto=webp&s=a70d21ce9f01f64670d2200ca9fc3f39b94a7e48', 'width': 316}, 'variants': {}}]}
Textbook interrogation
2
Are there any available ways to fine-tune/train a LoRA based on a set of textbooks? I'm not sure how parsing the data would work, since it wouldn't be in question/answer format.
2023-11-10T02:17:10
https://www.reddit.com/r/LocalLLaMA/comments/17rtp98/textbook_interrogation/
yt112358
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rtp98
false
null
t3_17rtp98
/r/LocalLLaMA/comments/17rtp98/textbook_interrogation/
false
false
self
2
null
Goliath-120B - quants and future plans
129
A few people here tried the Goliath-120B model I released a while back, and it looks like TheBloke has released the quantized versions now. So far, the reception has been largely positive. [https://huggingface.co/TheBloke/goliath-120b-GPTQ](https://huggingface.co/TheBloke/goliath-120b-GPTQ) [https://huggingface.co/TheBloke/goliath-120b-GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF) [https://huggingface.co/TheBloke/goliath-120b-AWQ](https://huggingface.co/TheBloke/goliath-120b-AWQ) The fact that the model turned out good is completely unexpected. Every LM researcher I've spoken to about this in the past few days has been completely baffled. The plan moving forward, in my opinion, is to finetune this model (preferably a full finetune) so that the stitched layers get to know each other better. Hopefully I can find the compute to do that soon :D On a related note, I've been working on [LLM-Shearing](https://github.com/AlpinDale/LLM-Shearing) lately, which would essentially enable us to shear a transformer down to much smaller sizes while preserving accuracy. The reason goliath-120b came to be was an experiment in moving in the opposite direction of shearing. I'm now wondering if we can shear a finetuned Goliath-120B back down to ~70B and end up with a much better 70B model than the existing ones. This would of course be prohibitively expensive, as we'd need to do continued pretraining after the shearing/pruning process. A more likely approach, I believe, is shearing Mistral-7B to ~1.3B and performing continued pretraining on about 100B tokens. If anyone has suggestions, please let me know. Cheers!
2023-11-10T01:23:18
https://www.reddit.com/r/LocalLLaMA/comments/17rsmox/goliath120b_quants_and_future_plans/
AlpinDale
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rsmox
false
null
t3_17rsmox
/r/LocalLLaMA/comments/17rsmox/goliath120b_quants_and_future_plans/
false
false
self
129
{'enabled': False, 'images': [{'id': 'iAzYQz0yMNg9UU82vWtPLHax5lxy5Tc0al0EQsLmrUs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VCvvgKqPd3ZBJafM5oqkjBx7QmU3-GwZ8jsNO-sEXeg.jpg?width=108&crop=smart&auto=webp&s=7e9479e48fdb14fbac6a3c82ad56c22dd452b0c5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VCvvgKqPd3ZBJafM5oqkjBx7QmU3-GwZ8jsNO-sEXeg.jpg?width=216&crop=smart&auto=webp&s=8c5210681cd8e4bfd903ac0990fdebee18904bad', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VCvvgKqPd3ZBJafM5oqkjBx7QmU3-GwZ8jsNO-sEXeg.jpg?width=320&crop=smart&auto=webp&s=8701d9256ea5008417eaf1ffcbd45375a913b5b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VCvvgKqPd3ZBJafM5oqkjBx7QmU3-GwZ8jsNO-sEXeg.jpg?width=640&crop=smart&auto=webp&s=07263bb3f650f6b0463e2a2420470a3a86ff7b7f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VCvvgKqPd3ZBJafM5oqkjBx7QmU3-GwZ8jsNO-sEXeg.jpg?width=960&crop=smart&auto=webp&s=d12169e47dbe97ea56b408135fa532595a40dea4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VCvvgKqPd3ZBJafM5oqkjBx7QmU3-GwZ8jsNO-sEXeg.jpg?width=1080&crop=smart&auto=webp&s=d3e6f827ed9e7484a14a897cabe2945918aebf29', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VCvvgKqPd3ZBJafM5oqkjBx7QmU3-GwZ8jsNO-sEXeg.jpg?auto=webp&s=551ea71924685be58cf36c186630824ee0cd66f4', 'width': 1200}, 'variants': {}}]}
RAG in a couple lines of code with txtai-wikipedia embeddings database + Mistral
73
2023-11-10T01:05:28
https://i.redd.it/1qlnga2x6fzb1.jpeg
davidmezzetti
i.redd.it
1970-01-01T00:00:00
0
{}
17rs9ui
false
null
t3_17rs9ui
/r/LocalLLaMA/comments/17rs9ui/rag_in_a_couple_lines_of_code_with_txtaiwikipedia/
false
false
https://b.thumbs.redditm…dW9RHz3aR9YU.jpg
73
{'enabled': True, 'images': [{'id': '6UxJ4ynI2AZy1BbRt4bZ28NcMvr9ov4wbmf7f9p7fZw', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/1qlnga2x6fzb1.jpeg?width=108&crop=smart&auto=webp&s=f86dccd2174589a868000520efb32aa4ae6c1511', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/1qlnga2x6fzb1.jpeg?width=216&crop=smart&auto=webp&s=7c343f49c886fe2a956dcf620b42ed1fac2e7262', 'width': 216}, {'height': 220, 'url': 'https://preview.redd.it/1qlnga2x6fzb1.jpeg?width=320&crop=smart&auto=webp&s=8f68ee1d7779a73d433045f88a42e062611a6a0e', 'width': 320}, {'height': 441, 'url': 'https://preview.redd.it/1qlnga2x6fzb1.jpeg?width=640&crop=smart&auto=webp&s=1989c71454ab145b20af157b93aa6f0fec7ccbb1', 'width': 640}, {'height': 662, 'url': 'https://preview.redd.it/1qlnga2x6fzb1.jpeg?width=960&crop=smart&auto=webp&s=6a00e9627b57b75ae6c44105bc8ac5bd2e392c5b', 'width': 960}, {'height': 745, 'url': 'https://preview.redd.it/1qlnga2x6fzb1.jpeg?width=1080&crop=smart&auto=webp&s=42499965d52eaab9623632cf94b3090d1007bf8f', 'width': 1080}], 'source': {'height': 1116, 'url': 'https://preview.redd.it/1qlnga2x6fzb1.jpeg?auto=webp&s=c98f5c8619d31d4016b18fa1b44848f34423010f', 'width': 1616}, 'variants': {}}]}
Point me towards some basic dataset preparation tips for LLM's?
5
I have some basic confusions over how to prepare a dataset for training. My plan is to use a model like llama2 7b chat, and train it on some proprietary data I have (in its raw format, this data is very similar to a text book). Do I need to find a way to reformat this large amount of text into a bunch of pairs like "query" and "output" ? I have seen some LLM's which say things like "trained on Wikipedia" which seems like they were able to train it on that large chunk of text alone without reformatting it into data pairs - is there a way I can do that, too? Or since I want to target a chat model, I have to find a way to convert the data into pairs which basically serve as examples of proper input and output?
2023-11-09T23:41:06
https://www.reddit.com/r/LocalLLaMA/comments/17rqir0/point_me_towards_some_basic_dataset_preparation/
ArtifartX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rqir0
false
null
t3_17rqir0
/r/LocalLLaMA/comments/17rqir0/point_me_towards_some_basic_dataset_preparation/
false
false
self
5
null
Question about Langchain and Open Source LLMs
1
I have been experimenting with Langchain. I tried getting Llama 2 13b to use Langchain tools. I also tried running Code Llama as an SQL agent. I always get the error "Could not parse LLM output." I am trying to build an application that can generate SQL queries from natural language and has some form of memory of conversational history. When I looked online for the error "Could not parse LLM output" most people were using OpenAI not open source LLMs and people were saying the error seems to be that the LLM is not following the required format "Action Thought Observation" etc. Are there any small open source LLMs (that can run on CPU) that work with Langchain to do things like using tools and having conversational memory? Bonus points if they can generate SQL. Or does this kind of thing require larger more powerful models? I got fairly good results on SQL generation using NSQL-350m (50% on Spider) which can run on CPU, but I don't think this model will work with Langchain.
2023-11-09T23:39:53
https://www.reddit.com/r/LocalLLaMA/comments/17rqhs7/question_about_langchain_and_open_source_llms/
tail-recursion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rqhs7
false
null
t3_17rqhs7
/r/LocalLLaMA/comments/17rqhs7/question_about_langchain_and_open_source_llms/
false
false
self
1
null
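The "Could not parse LLM output" error in the post above comes from the agent's output parser: ReAct-style agents require the reply to contain an `Action:`/`Action Input:` pair or a `Final Answer:` line, and small models often drift from that format. A stripped-down parser in the same spirit (not LangChain's actual code) shows the failure mode:

```python
import re

def parse_react(reply):
    """Minimal ReAct-style parse: tool call, final answer, or error.
    Free-form replies (what small models often produce) hit the error path."""
    m = re.search(r"Action:\s*(.+?)\nAction Input:\s*(.+)", reply, re.DOTALL)
    if m:
        return ("tool", m.group(1).strip(), m.group(2).strip())
    m = re.search(r"Final Answer:\s*(.+)", reply, re.DOTALL)
    if m:
        return ("final", m.group(1).strip())
    raise ValueError("Could not parse LLM output: " + reply[:60])

ok = parse_react("Thought: need SQL\nAction: sql_db_query\nAction Input: SELECT 1;")
```

This is why few-shot examples of the exact Action/Thought/Observation format in the prompt, or a retry-on-parse-failure option if your agent framework offers one, usually help more than swapping models.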
Looking for a budget LLM Hosting server advice
1
Hey, I'm looking for a budget LLM hosting server to self-host. I'm thinking of a motherboard with at least 7 PCIe slots and buying P40s. I want to upgrade it slowly to the max of 7; would there be any downsides to this, or alternatives? (The P40s would cost about $1300 for all 7 combined.)
2023-11-09T23:33:34
https://www.reddit.com/r/LocalLLaMA/comments/17rqcx5/looking_for_a_budget_llm_hosting_server_advice/
Pale_Ad_6029
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rqcx5
false
null
t3_17rqcx5
/r/LocalLLaMA/comments/17rqcx5/looking_for_a_budget_llm_hosting_server_advice/
false
false
self
1
null
Is it possible to use a context size larger than the native one of the model you're using?
3
On SillyTavern for example, can I just set the n_ctx as high as my system can handle and expect it to work? Or will you get non standard output when you go beyond the "native" context size?
2023-11-09T22:49:11
https://www.reddit.com/r/LocalLLaMA/comments/17rpdhq/is_it_possible_to_use_a_context_size_larger_than/
Lucky_Increase_1037
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rpdhq
false
null
t3_17rpdhq
/r/LocalLLaMA/comments/17rpdhq/is_it_possible_to_use_a_context_size_larger_than/
false
false
self
3
null
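Generally no: past the native context, positions fall outside the range the RoPE embeddings were trained on and output degrades into nonsense. Extended-context tricks work by remapping positions back into the trained range; a sketch of the linear interpolation variant (the "SuperHOT" approach, with NTK-aware scaling being the other common option backends expose), where 4096 as the native size is just an illustrative assumption:

```python
def interpolated_positions(n_ctx, native_ctx=4096):
    """Linear RoPE interpolation: compress position indices so an extended
    window still lands inside the range the model was trained on. Without
    this (or NTK/rope scaling in the backend), raising n_ctx past the
    native size yields degraded output."""
    scale = min(1.0, native_ctx / n_ctx)
    return [p * scale for p in range(n_ctx)]

pos = interpolated_positions(8192, native_ctx=4096)
```

So raising n_ctx in SillyTavern/your backend only works well if the backend's rope/compress settings are adjusted to match, or the model was trained for the longer window.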
LocalGPT + Llama 7B faster on Macbook CPU vs Desktop GPU?
1
I'm coding a chatbot for a school project and I've been using Llama 2-7B through localGPT. I installed CUDA on my desktop computer in the hopes that it would perform better than my MacBook Pro, but the Flask application ended up taking 10x as long to generate a response. Strangely enough, llama_print_timings said the total time was only 200,000 ms, which is only 10% slower than my MacBook, not 1000%. Is this normal for my specs? Desktop GPU: GeForce GTX 1060. Observed 100% GPU utilization for the first few minutes, then it was purely CPU for the 20 minutes after. Desktop CPU: i5-8400 @ 2.8 GHz. MacBook CPU: 6-core Core i7 @ 2.6 GHz.
2023-11-09T22:42:29
https://www.reddit.com/r/LocalLLaMA/comments/17rp7zr/localgpt_llama_7b_faster_on_macbook_cpu_vs/
9090112
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rp7zr
false
null
t3_17rp7zr
/r/LocalLLaMA/comments/17rp7zr/localgpt_llama_7b_faster_on_macbook_cpu_vs/
false
false
self
1
null
Is there an open source equivalent for creating Custom GPTs like the one OpenAI introduced
4
If not, are there any similar efforts? Is anyone working on replicating the system? I have done some testing and saw that OpenAI employs GPT-3.5 to do the processing for Custom GPTs, and it outperforms GPT-4 on answering fact-based questions. This is kind of expected, but what shocked me was the sheer number of documents processed to accurately answer some of the questions.
2023-11-09T22:12:51
https://www.reddit.com/r/LocalLLaMA/comments/17rok7i/is_there_an_open_source_equivalent_for_creating/
No_Yak8345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rok7i
false
null
t3_17rok7i
/r/LocalLLaMA/comments/17rok7i/is_there_an_open_source_equivalent_for_creating/
false
false
self
4
null
Down memory lane, 2022: "Google's LaMDA AI is sentient, I swear"
179
2023-11-09T21:47:43
https://i.redd.it/9znigqde7ezb1.png
FPham
i.redd.it
1970-01-01T00:00:00
0
{}
17rnzog
false
null
t3_17rnzog
/r/LocalLLaMA/comments/17rnzog/down_to_memory_lane_2022_googles_lamda_ai_is/
false
false
https://b.thumbs.redditm…waF0wYUM6dEA.jpg
179
{'enabled': True, 'images': [{'id': 'vQi_FdFjOXvyRiujlXakqZjTnTk_dceKiepcclX2gLY', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/9znigqde7ezb1.png?width=108&crop=smart&auto=webp&s=5fdc3363818d1df8c9cc21455189b0b4cd96112b', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/9znigqde7ezb1.png?width=216&crop=smart&auto=webp&s=51233e56802316f85857cab32681dbaad3358424', 'width': 216}, {'height': 218, 'url': 'https://preview.redd.it/9znigqde7ezb1.png?width=320&crop=smart&auto=webp&s=220d88e2bc853d14b6a06219531e6f7466ae0d2d', 'width': 320}, {'height': 436, 'url': 'https://preview.redd.it/9znigqde7ezb1.png?width=640&crop=smart&auto=webp&s=deb988ad222f996ffa4fd8aaf31b6247cff9b60b', 'width': 640}], 'source': {'height': 542, 'url': 'https://preview.redd.it/9znigqde7ezb1.png?auto=webp&s=33d1bd0d2741d303d7d21ab306fa1ceb026da567', 'width': 795}, 'variants': {}}]}
2 x RTX 3090s on an X399 and 1950x Threadripper?
3
I have an old X399 w/1950X Threadripper lying around unused (these days I pretty much use my M1 Max MBP w/64GB via a TB3 docking station exclusively). I was wondering if there are any issues running 2x RTX 3090 w/24GB on that rig; one user claimed there were some potential issues, but I think they were referring to a 4x3090 setup. I would need to upgrade my PSU of course, but are there any other gotchas to be aware of? It should have enough PCIe 3.0 lanes to satisfy the cards. Also wondering if it's even worth doing this, as I could just use the M1 Max I already have. But that's largely to support my homies on Windows still toiling away on making WSL2 awesome. Thanks for your help!
2023-11-09T21:14:50
https://www.reddit.com/r/LocalLLaMA/comments/17rn8ab/2_x_rtx_3090s_on_an_x399_and_1950x_threadripper/
5kisbetterthan4k
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rn8ab
false
null
t3_17rn8ab
/r/LocalLLaMA/comments/17rn8ab/2_x_rtx_3090s_on_an_x399_and_1950x_threadripper/
false
false
self
3
null
Regarding long context and quadratic attention
1
Hello everyone. I have been catching up with the recent literature on long context. I am reading [kaiokendev's post](https://kaiokendev.github.io/til#extending-context-to-8k) and [NTK scaling](https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/). One question that I have, and that seems unanswered by their posts or any others, is: isn't attention quadratic? I still recall the O(L^2) complexity that is referred to countless times in the older literature (e.g. [Longformer](https://arxiv.org/abs/2004.05150) or [BigBird](https://arxiv.org/abs/2007.14062)). (I read somewhere that in practice it is actually the MLP that consumes the most, but still, the longer the sequence length, the more memory it takes.) Based on my understanding of the current work on long context, they are tweaking the frequency or base in the rotary embedding, which makes the model interpolate unseen sequence lengths, but you still need much more memory for an 8k-token input than a 2k one. Is this issue solved? I appreciate any pointers. Thanks!
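As a rough illustration of the quadratic term (the head/layer counts below are placeholders for a generic 7B-class model, not figures from any paper):

```python
# Back-of-envelope estimate of attention-score memory. Naive (non-flash)
# attention materializes an L x L score matrix per head per layer, so this
# term alone grows quadratically with sequence length L.

def attn_score_bytes(seq_len, n_heads=32, n_layers=32, bytes_per_el=2):
    """Bytes for all L x L score matrices at fp16 (2 bytes per element)."""
    return seq_len ** 2 * n_heads * n_layers * bytes_per_el

gb = 1024 ** 3
for L in (2048, 8192):
    print(f"L={L}: {attn_score_bytes(L) / gb:.1f} GiB of score matrices")
# L=2048: 8.0 GiB of score matrices
# L=8192: 128.0 GiB of score matrices
```

So 4x the context costs 16x the score memory under naive attention; tricks like FlashAttention avoid materializing the full matrix, which is why long contexts are feasible at all, but activations still scale with L.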
2023-11-09T20:37:16
https://www.reddit.com/r/LocalLLaMA/comments/17rme8v/regarding_long_context_and_quadratic_attention/
tt19234
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rme8v
false
null
t3_17rme8v
/r/LocalLLaMA/comments/17rme8v/regarding_long_context_and_quadratic_attention/
false
false
self
1
null
Exclusive survey: Experts don't trust tech CEOs on AI
58
2023-11-09T20:15:27
https://www.axios.com/2023/11/08/tech-ceos-distrust-expert-survey
searcher1k
axios.com
1970-01-01T00:00:00
0
{}
17rlxqj
false
null
t3_17rlxqj
/r/LocalLLaMA/comments/17rlxqj/exclusive_survey_experts_dont_trust_tech_ceos_on/
false
false
https://b.thumbs.redditm…ux4NRdwTxJzU.jpg
58
{'enabled': False, 'images': [{'id': '-dDBA1a2TSec1RntqvkxLwfoJ8SMv7NMvVV8vdJJ6gg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/nHVJRKqvYm9ctBsER7o1Gy1tXM7zvbAm_VxJlMoY13Q.jpg?width=108&crop=smart&auto=webp&s=973ef2e719287b41a63a92a69127c44dce683eff', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/nHVJRKqvYm9ctBsER7o1Gy1tXM7zvbAm_VxJlMoY13Q.jpg?width=216&crop=smart&auto=webp&s=2b2e308bbb6f7cf36c23989e02be6c5a4147bbf8', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/nHVJRKqvYm9ctBsER7o1Gy1tXM7zvbAm_VxJlMoY13Q.jpg?width=320&crop=smart&auto=webp&s=9adbc8311ff9417dd4956b17e37f301990b0b9f8', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/nHVJRKqvYm9ctBsER7o1Gy1tXM7zvbAm_VxJlMoY13Q.jpg?width=640&crop=smart&auto=webp&s=cb570ccaedd9cf6663723b364a171fc137848a49', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/nHVJRKqvYm9ctBsER7o1Gy1tXM7zvbAm_VxJlMoY13Q.jpg?width=960&crop=smart&auto=webp&s=818f78d6fede73de572b9dc0ea5631b7c0d4964e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/nHVJRKqvYm9ctBsER7o1Gy1tXM7zvbAm_VxJlMoY13Q.jpg?width=1080&crop=smart&auto=webp&s=1d636c57aa63159a095b250fa79e1efcd367076e', 'width': 1080}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/nHVJRKqvYm9ctBsER7o1Gy1tXM7zvbAm_VxJlMoY13Q.jpg?auto=webp&s=7415475a14e1ebfd60aa7a2bd9b76219bff3646c', 'width': 1366}, 'variants': {}}]}
load llama-2 in 8b quantization?
1
The question is probably too basic, but how do I load the Llama 2 70B model using 8-bit quantization? I see TheBloke's Llama2_70B_chat_GPTQ, but they only show 3-bit/4-bit quantization. I have an 80 GB A100 and want to load the Llama 2 70B model with 8-bit quantization. Thanks a lot!
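One route I'm aware of is `load_in_8bit` via Hugging Face transformers + bitsandbytes (rather than a GPTQ repo, which is pre-quantized to 3/4-bit). A minimal sketch, with the model id and arithmetic as assumptions to double-check, not a definitive recipe:

```python
def bytes_needed_8bit(n_params):
    """Rough VRAM for weights at 8-bit: ~1 byte per parameter (plus overhead)."""
    return n_params * 1  # bytes

# 70B params at 8-bit is ~70 GB of weights, so an 80 GB A100 is tight but plausible.
print(f"{bytes_needed_8bit(70e9) / 1e9:.0f} GB")  # 70 GB

def load_70b_8bit():
    # Heavy imports kept inside the function so the estimate above runs anywhere.
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    return AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-70b-chat-hf",  # the fp16 base weights, not a GPTQ repo
        quantization_config=BitsAndBytesConfig(load_in_8bit=True),
        device_map="auto",
    )
```

The key point is that 8-bit quantization happens at load time from the fp16 checkpoint, so you download the full-precision repo rather than a quantized one.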
2023-11-09T19:46:29
https://www.reddit.com/r/LocalLLaMA/comments/17rlapz/load_llama2_in_8b_quantization/
peterwu00
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rlapz
false
null
t3_17rlapz
/r/LocalLLaMA/comments/17rlapz/load_llama2_in_8b_quantization/
false
false
self
1
null
How is Yi-34B rated so highly?
1
[removed]
2023-11-09T19:28:47
https://www.reddit.com/r/LocalLLaMA/comments/17rkw3l/how_is_yi34b_rated_so_highly/
Lantus9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rkw3l
false
null
t3_17rkw3l
/r/LocalLLaMA/comments/17rkw3l/how_is_yi34b_rated_so_highly/
false
false
self
1
null
GPT-4's 128K context window tested
138
This fella tested the new 128K context window and had some interesting findings:

* GPT-4's recall performance started to degrade above 73K tokens
* Recall was poor when the fact to be recalled was placed at 7%-50% document depth
* If the fact was at the beginning of the document, it was recalled regardless of context length

Any thoughts on what OpenAI is doing to its context window behind the scenes? For example, which process or processes they're using to expand the context window. He also says in the comments that at 64K and lower, retrieval was 100%. That's pretty impressive.
2023-11-09T18:44:13
https://www.reddit.com/r/LocalLLaMA/comments/17rjwh6/gpt4s_128k_context_window_tested/
Ok_Relationship_9879
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rjwh6
false
null
t3_17rjwh6
/r/LocalLLaMA/comments/17rjwh6/gpt4s_128k_context_window_tested/
false
false
self
138
{'enabled': False, 'images': [{'id': 'ymU6YX_LHVJfaun7NAOa91DelnsTFGnDzZWqRSjSd4U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Bf-Qrea4Q3KQh50ceaoIYKf0zsvbtNI78Br4jFxUV-o.jpg?width=108&crop=smart&auto=webp&s=e490a63eb0e04cbb08eb8a6648e427b859322beb', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Bf-Qrea4Q3KQh50ceaoIYKf0zsvbtNI78Br4jFxUV-o.jpg?width=216&crop=smart&auto=webp&s=e5bd9557250e4f50ad4d180ba53aa824a909e526', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/Bf-Qrea4Q3KQh50ceaoIYKf0zsvbtNI78Br4jFxUV-o.jpg?width=320&crop=smart&auto=webp&s=785e55544251ce1ffd576be4e76ed99e99733e11', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/Bf-Qrea4Q3KQh50ceaoIYKf0zsvbtNI78Br4jFxUV-o.jpg?width=640&crop=smart&auto=webp&s=62df5f942033dfacc255903df3ca94888ac846ee', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/Bf-Qrea4Q3KQh50ceaoIYKf0zsvbtNI78Br4jFxUV-o.jpg?width=960&crop=smart&auto=webp&s=9806632b5e6affd1995445324d518139ca06ff03', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/Bf-Qrea4Q3KQh50ceaoIYKf0zsvbtNI78Br4jFxUV-o.jpg?width=1080&crop=smart&auto=webp&s=f3deb990749dc1bef2dd70bafaec061973bc0848', 'width': 1080}], 'source': {'height': 1072, 'url': 'https://external-preview.redd.it/Bf-Qrea4Q3KQh50ceaoIYKf0zsvbtNI78Br4jFxUV-o.jpg?auto=webp&s=7e733f916b0895dca4b635f009b36d3780f7511b', 'width': 2047}, 'variants': {}}]}
Small model fine tuning or QLora on Bigger One?
4
In your experience, if I have a huge dataset and limited GPU resources, is it better to fully fine-tune a smaller model (1B max) or to QLoRA a bigger one?
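For scale, a rough sketch of the usual memory arithmetic (the bytes-per-parameter figures are commonly quoted heuristics, not measurements):

```python
def full_ft_gb(n_params, bytes_per_param=16):
    """Mixed-precision AdamW: ~16 bytes/param (fp16 weights+grads, fp32 master+moments)."""
    return n_params * bytes_per_param / 1e9

def qlora_gb(n_params, adapter_params):
    """4-bit frozen base (~0.5 byte/param) plus full training state for the small adapter."""
    return n_params * 0.5 / 1e9 + adapter_params * 16 / 1e9

print(f"full FT 1B: ~{full_ft_gb(1e9):.0f} GB")        # ~16 GB
print(f"QLoRA 7B:   ~{qlora_gb(7e9, 40e6):.1f} GB")    # ~4.1 GB
```

By this crude estimate, QLoRA on a 7B model can actually need less VRAM than full fine-tuning a 1B model, which is part of why it's the common answer to low-GPU setups.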
2023-11-09T18:38:29
https://www.reddit.com/r/LocalLLaMA/comments/17rjrp4/small_model_fine_tuning_or_qlora_on_bigger_one/
_ragnet_7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rjrp4
false
null
t3_17rjrp4
/r/LocalLLaMA/comments/17rjrp4/small_model_fine_tuning_or_qlora_on_bigger_one/
false
false
self
4
null
Nearing Q4 23, what's the best web UI frontend?
11
That supports:

- saving prompts
- works with OpenAI
- self-hosted for privacy

I'm currently using this one (ChatGPT Next Web), which is quite good, but at the pace we're moving, I'm left wondering if there is something better. It also doesn't share session history across devices.

https://github.com/Yidadaa/ChatGPT-Next-Web
2023-11-09T18:15:04
https://www.reddit.com/r/LocalLLaMA/comments/17rj8fd/nearing_q4_23_whats_the_best_web_ui_frontend/
ctrl-brk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rj8fd
false
null
t3_17rj8fd
/r/LocalLLaMA/comments/17rj8fd/nearing_q4_23_whats_the_best_web_ui_frontend/
false
false
self
11
{'enabled': False, 'images': [{'id': '7kP30Ib5ejZ62gLhaidLvLKv_N89D-cXJHDcqI2ohHQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-wTFzNh2wJOAYJ2Sw8q__dyFVBAPeT_6s59DOb2OA48.jpg?width=108&crop=smart&auto=webp&s=3cb54a47b0641b9ac77315aa484f04f4c29c803a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-wTFzNh2wJOAYJ2Sw8q__dyFVBAPeT_6s59DOb2OA48.jpg?width=216&crop=smart&auto=webp&s=0249fc7549e91622b75a0dba8b842bb66914eaf8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-wTFzNh2wJOAYJ2Sw8q__dyFVBAPeT_6s59DOb2OA48.jpg?width=320&crop=smart&auto=webp&s=7e659b81688df86a95181e75f687922d5300a3f4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-wTFzNh2wJOAYJ2Sw8q__dyFVBAPeT_6s59DOb2OA48.jpg?width=640&crop=smart&auto=webp&s=72a92c524141c2721c615240a2651fec0d682038', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-wTFzNh2wJOAYJ2Sw8q__dyFVBAPeT_6s59DOb2OA48.jpg?width=960&crop=smart&auto=webp&s=8504863d7374f51cd7cbcb785b16602fe91a0ee2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-wTFzNh2wJOAYJ2Sw8q__dyFVBAPeT_6s59DOb2OA48.jpg?width=1080&crop=smart&auto=webp&s=3fbff52b99a1b573e67f84318f51585489b21a31', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-wTFzNh2wJOAYJ2Sw8q__dyFVBAPeT_6s59DOb2OA48.jpg?auto=webp&s=bd17a580a7982f84a5de9f5f15a2002683f39136', 'width': 1200}, 'variants': {}}]}
Look's like Mistral's cooking something tasty... no word on release date yet, though.
176
2023-11-09T17:48:28
https://i.redd.it/jqnl9wwo0dzb1.png
hzj5790
i.redd.it
1970-01-01T00:00:00
0
{}
17rimrh
false
null
t3_17rimrh
/r/LocalLLaMA/comments/17rimrh/looks_like_mistrals_cooking_something_tasty_no/
false
false
https://a.thumbs.redditm…9KKSDxTEW0f8.jpg
176
{'enabled': True, 'images': [{'id': 'dsZUtDcsPw0PQWVEyGg-hM7nVL_xaWWnzfUGwoTjX-Q', 'resolutions': [{'height': 33, 'url': 'https://preview.redd.it/jqnl9wwo0dzb1.png?width=108&crop=smart&auto=webp&s=9c7d465a100e693e2a016b3f9ee6c3c6daf95791', 'width': 108}, {'height': 67, 'url': 'https://preview.redd.it/jqnl9wwo0dzb1.png?width=216&crop=smart&auto=webp&s=9f99ea7b4f8ab9c65b35cef2d18690bb9654037d', 'width': 216}, {'height': 100, 'url': 'https://preview.redd.it/jqnl9wwo0dzb1.png?width=320&crop=smart&auto=webp&s=e32025c5c61a93df32290889a108edf5d83620db', 'width': 320}, {'height': 200, 'url': 'https://preview.redd.it/jqnl9wwo0dzb1.png?width=640&crop=smart&auto=webp&s=eb159266bbc61f1bb387547fd937c1d22e28b8a3', 'width': 640}], 'source': {'height': 280, 'url': 'https://preview.redd.it/jqnl9wwo0dzb1.png?auto=webp&s=cd0bbe48829b5a03a87e4c38baa2520b4f2688c8', 'width': 894}, 'variants': {}}]}
Open LLM Leaderboard has been re-evaluated with 3 new metrics and all models retested
202
2023-11-09T17:29:34
https://twitter.com/ClementDelangue/status/1722620987374735795
jd_3d
twitter.com
1970-01-01T00:00:00
0
{}
17ri7dj
false
{'oembed': {'author_name': 'clem 🤗', 'author_url': 'https://twitter.com/ClementDelangue', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">The open-source models were starting to be too good for the <a href="https://twitter.com/huggingface?ref_src=twsrc%5Etfw">@huggingface</a> open LLM leaderboards so we added 3 new metrics thanks to <a href="https://twitter.com/AiEleuther?ref_src=twsrc%5Etfw">@AiEleuther</a> to make them harder and more relevant for real-life performance. <br><br>Re-running evaluation of 2,000+ models wasn&#39;t easy and took more than… <a href="https://t.co/azAu88s5wu">pic.twitter.com/azAu88s5wu</a></p>&mdash; clem 🤗 (@ClementDelangue) <a href="https://twitter.com/ClementDelangue/status/1722620987374735795?ref_src=twsrc%5Etfw">November 9, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/ClementDelangue/status/1722620987374735795', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_17ri7dj
/r/LocalLLaMA/comments/17ri7dj/open_llm_leaderboard_has_been_reevaluated_with_3/
false
false
https://b.thumbs.redditm…tN3eBYIFTn8o.jpg
202
{'enabled': False, 'images': [{'id': '_T1W20Sd4n93t-Xo8BsXDyMXr1MTMdYzHUCSV9ne0D4', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/ZHgLQkHQRFDCc4jvVydGFm4KT120NxF2t4hnxtpGypg.jpg?width=108&crop=smart&auto=webp&s=957cf6563df1ae7fbe05eda97cb9593c850ff56b', 'width': 108}], 'source': {'height': 80, 'url': 'https://external-preview.redd.it/ZHgLQkHQRFDCc4jvVydGFm4KT120NxF2t4hnxtpGypg.jpg?auto=webp&s=2b4c6e883a303e47e3f8591eb5f87d3d29df374e', 'width': 140}, 'variants': {}}]}
MonadGPT, an early modern chatbot trained on Mistral-Hermes and 17th century books.
70
2023-11-09T15:45:40
https://i.redd.it/vu7vqr6yeczb1.png
Dorialexandre
i.redd.it
1970-01-01T00:00:00
0
{}
17rfugl
false
null
t3_17rfugl
/r/LocalLLaMA/comments/17rfugl/monadgpt_an_early_modern_chatbot_trained_on/
false
false
https://b.thumbs.redditm…RJqvEJx5_0AU.jpg
70
{'enabled': True, 'images': [{'id': 'sqBySPGXLwRq06LDktNABJIZlF-t4Q5AFBHSfdw1Vr4', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/vu7vqr6yeczb1.png?width=108&crop=smart&auto=webp&s=12426293b70370a1cde432da8fd3fb98d822536b', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/vu7vqr6yeczb1.png?width=216&crop=smart&auto=webp&s=db3cbbd5ed2bd1e98f9d16da5d16f1b514d36dd6', 'width': 216}, {'height': 186, 'url': 'https://preview.redd.it/vu7vqr6yeczb1.png?width=320&crop=smart&auto=webp&s=35d191b21fe00c06c43e9be6dd0c5138e0bf8e69', 'width': 320}, {'height': 373, 'url': 'https://preview.redd.it/vu7vqr6yeczb1.png?width=640&crop=smart&auto=webp&s=1bc650571dd1b06fe698b0bfa4c8f180e0ebbf56', 'width': 640}, {'height': 560, 'url': 'https://preview.redd.it/vu7vqr6yeczb1.png?width=960&crop=smart&auto=webp&s=4b576d72e434524642d58b0fcb32a334358f6b6f', 'width': 960}, {'height': 630, 'url': 'https://preview.redd.it/vu7vqr6yeczb1.png?width=1080&crop=smart&auto=webp&s=60df5a451fcaf9b405966fb02ceec99dcdba365a', 'width': 1080}], 'source': {'height': 1436, 'url': 'https://preview.redd.it/vu7vqr6yeczb1.png?auto=webp&s=8a0179af7a278453165652c088e2a3af24f2fac5', 'width': 2460}, 'variants': {}}]}
Thinking about what people ask for in llama 3
18
So I was looking at some of the things people ask for in Llama 3, kinda judging whether they make sense or are feasible.

Mixture of Experts - Why? This is literally useless to us. MoE helps with FLOPs issues; it takes up more VRAM than a dense model. OpenAI makes it work, but it isn't naturally superior or better by default.

Synthetic data - That's useful, though it's gonna be mixed with real data for model robustness. The real issue I see here is collecting that many tokens. If they ripped anything near 10T from OpenAI, they would be found out pretty quick. I could see them splitting the workload over multiple accounts, also using Claude, calling multiple model APIs (GPT-4, gpt-4-turbo), ripping data off third-party services, and all the other data they've managed to collect.

More smaller models - A 1B and a 3B would be nice. TinyLlama 1.1B is really capable for its size, and better models at the 1B and 3B scale would be really useful for web inference and mobile inference.

More multilingual data - This is totally necessary. I've seen RWKV World v5, and it's trained on a lot of multilingual data. Its 7B model is only half trained, and it already passes Mistral 7B on multilingual benchmarks. They're just using regular datasets like SlimPajama; they haven't even prepped the next dataset actually using multilingual data like CulturaX and MADLAD.

Multimodality - This would be really useful, and probably a necessity if they want Llama 3 to "match GPT-4". The LLaVA work has proved that you can make image-to-text work with Llama. The Fuyu architecture has also simplified some things, considering you can just stuff modality embeddings into a regular model and train it the same way. It would be nice if you could use multiple modalities in, as Meta already has experience there with ImageBind and AnyMAL. It would be better than GPT-4 if it was multimodal in -> multimodal out.

GQA, sliding windows - Useful, the +1% architecture changes; Meta might add them if they feel like it.

Massive ctx len - If they use RWKV, they can make any ctx len they can scale to, but they might do it for a regular transformer too. Look at Magic.dev's (not that messed-up paper MAGIC!) LTM-1: https://magic.dev/blog/ltm-1 - the model has a context len of 5,000,000.

Multi-epoch training, De Vries scaling laws - StableLM 3B 4E1T is still the best 3B base out there, and no other 3B bases have caught up to it so far. Most people attribute it to the De Vries scaling law (exponential data and compute); Meta might have really powerful models if they followed the pattern.

Function calling / tool usage - If the models came with the ability to use some tools, and we instruction-tuned them to call any function through in-context learning, that could be really OP.

Different architecture - RWKV is a good one to try, but if Meta has something better, they may shift away from transformers to something else.
2023-11-09T14:46:13
https://www.reddit.com/r/LocalLLaMA/comments/17rejen/thinking_about_what_people_ask_for_in_llama_3/
vatsadev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rejen
false
null
t3_17rejen
/r/LocalLLaMA/comments/17rejen/thinking_about_what_people_ask_for_in_llama_3/
false
false
self
18
null
Has There Been Any Research on Curriculum Learning for Pre-training or Fine-Tuning Large Language Models?
5
Hey, I'm new to this world (since ChatGPT) and I've been thinking about the training process for large language models. I have a question for the community regarding the structure of datasets used in pre-training and fine-tuning phases. My understanding is that these models are exposed to a diverse and randomized array of data. Each datum is adjusted to fit a certain context length, ensuring that the model learns from the content itself rather than the order in which it's presented. However, I'm curious if there has been any research or experimentation with structuring these datasets to follow a curriculum learning approach. In traditional education, students progress from simple to complex concepts, building upon what they've learned as they advance. Could a similar approach benefit AI training? For instance, starting with simpler language constructs and concepts before gradually introducing more complex and abstract ones? The idea would be to categorize training data by complexity, then batch it so that the model first learns from 'easier' data, with the complexity scaling up as training progresses. Randomization could still occur within these batches to prevent memorization of sequence rather than understanding. I'm interested in any insights or references to research that has explored this idea. Does curriculum learning improve the efficacy of language models? Could it lead to more nuanced understanding and better performance in complex tasks?
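A minimal sketch of the batching idea described above, using text length as a stand-in complexity proxy (purely illustrative; a real setup would score complexity differently):

```python
import random

def curriculum_batches(texts, batch_size, seed=0):
    """Order samples by a crude complexity proxy (length), batch easy-to-hard,
    then shuffle within each batch so sequence order isn't memorized."""
    rng = random.Random(seed)
    ordered = sorted(texts, key=len)
    batches = [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]
    for b in batches:
        rng.shuffle(b)  # randomization survives, but only inside each batch
    return batches

batches = curriculum_batches(["a", "abc", "ab", "abcdef", "abcde", "abcd"], 2)
# Earlier batches hold shorter (proxy: simpler) samples than later ones.
```

The interesting design question is exactly the one raised: whether an easy-to-hard ordering like this helps, versus the fully shuffled default.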
2023-11-09T13:55:28
https://www.reddit.com/r/LocalLLaMA/comments/17rdgiu/has_there_been_any_research_on_curriculum/
IndividualAd1648
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rdgiu
false
null
t3_17rdgiu
/r/LocalLLaMA/comments/17rdgiu/has_there_been_any_research_on_curriculum/
false
false
self
5
null
Astute Christmas Llama Santa Hat Ugly Xmas Tree Alpaca Gift pop
1
2023-11-09T13:51:16
https://www.reddit.com/gallery/17rddlx
dawiw41198
reddit.com
1970-01-01T00:00:00
0
{}
17rddlx
false
null
t3_17rddlx
/r/LocalLLaMA/comments/17rddlx/astute_christmas_llama_santa_hat_ugly_xmas_tree/
false
false
https://b.thumbs.redditm…KX_JZIO_7-Kc.jpg
1
null
Alternatives to chat.lmsys.org?
9
[chat.lmsys.org](https://chat.lmsys.org/) is great. It has the best open-source models, and it lets you control temperature and other parameters. However, they have a limit on the **message length that I can send to the LLM**, something like 400 words, although the model supports much longer messages. Do you happen to know alternatives that allow longer messages? Thanks in advance!
2023-11-09T13:13:09
https://www.reddit.com/r/LocalLLaMA/comments/17rcn8d/alternatives_to_chatlmsysorg/
ammar-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rcn8d
false
null
t3_17rcn8d
/r/LocalLLaMA/comments/17rcn8d/alternatives_to_chatlmsysorg/
false
false
self
9
null
Free ChatGPT (not the paid version) locally?
1
Is it possible to have something comparable to free ChatGPT (not the paid version) locally? I think free ChatGPT is version 3 or 3.5, right? What hardware is needed? Would a simple consumer GPU (RTX) suffice?
2023-11-09T12:55:07
https://www.reddit.com/r/LocalLLaMA/comments/17rcb4q/free_chatgpt_not_the_paid_version_locally/
crav88
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rcb4q
false
null
t3_17rcb4q
/r/LocalLLaMA/comments/17rcb4q/free_chatgpt_not_the_paid_version_locally/
false
false
self
1
null
🗺️ Well maintained guide to current state of AI and LLMs, for beginners/ non-tech professionals?
32
Hi team

There are a lot of components out there that come together in different configurations to conjure up AIs. Things like:

- Xb model Y
- Fine tuning
- Hallucinations
- Llama
- Ollama
- LangChain
- LocalGPT
- AutoGPT
- PrivateGPT

All come up frequently. Is there a good guide to build up my intelligence and vocabulary? Ideally some diagrams to help me understand how each component functions/fits together. I'm grateful for your help!!
2023-11-09T12:36:37
https://www.reddit.com/r/LocalLLaMA/comments/17rbzgj/well_maintained_guide_to_current_state_of_ai_and/
laterral
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rbzgj
false
null
t3_17rbzgj
/r/LocalLLaMA/comments/17rbzgj/well_maintained_guide_to_current_state_of_ai_and/
false
false
self
32
null
Revolutionizing Shipments: ChatGPT Takes on Logistics! 🚛🌐
1
Hey Reddit pals! 👋 Just stumbled upon an interesting read about ChatGPT entering the logistics scene. If you're curious about how AI is changing the game in shipments and deliveries, this one's for you. The article delves into how ChatGPT is influencing logistics communication. From optimizing routes to enhancing customer interactions, it's an insightful look at the intersection of language AI and the logistics industry. 📦✨ Check out the [guide](https://www.leewayhertz.com/chatgpt-for-manufacturing/) and let's discuss how ChatGPT is playing a role in the world of shipping. Ready to explore the future of logistics with a touch of AI? Join the convo! \#ChatGPT #Logistics #Innovation #AI
2023-11-09T11:43:58
https://www.reddit.com/r/LocalLLaMA/comments/17rb4zj/revolutionizing_shipments_chatgpt_takes_on/
Technical-Station284
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rb4zj
false
null
t3_17rb4zj
/r/LocalLLaMA/comments/17rb4zj/revolutionizing_shipments_chatgpt_takes_on/
false
false
default
1
{'enabled': False, 'images': [{'id': 'Ud9DVtOJYwqS2bPOuX4nNZUndV3VLknBaxs8KKjS-V4', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/z9JyH66bDigdSy-dclJLIp-T9FNzxy-Gi2NBv0ZfVNQ.jpg?width=108&crop=smart&auto=webp&s=7c21a7556b3a002219e52c4f46917bdd8baa7e6e', 'width': 108}, {'height': 147, 'url': 'https://external-preview.redd.it/z9JyH66bDigdSy-dclJLIp-T9FNzxy-Gi2NBv0ZfVNQ.jpg?width=216&crop=smart&auto=webp&s=6840d5207760a773f6601d1dc2688bfca1305b4e', 'width': 216}, {'height': 219, 'url': 'https://external-preview.redd.it/z9JyH66bDigdSy-dclJLIp-T9FNzxy-Gi2NBv0ZfVNQ.jpg?width=320&crop=smart&auto=webp&s=8bcc40e17f556c99ab3ba7c067c6c9f702d6cc56', 'width': 320}], 'source': {'height': 250, 'url': 'https://external-preview.redd.it/z9JyH66bDigdSy-dclJLIp-T9FNzxy-Gi2NBv0ZfVNQ.jpg?auto=webp&s=5edced5828139c109028410a422bde3139476d79', 'width': 365}, 'variants': {}}]}
Looking for CPU Inference Hardware (8 Channel Ram Server Motherboards)
3
Just wondering if anyone with more knowledge of server hardware could point me in the direction of getting an 8-channel DDR4 server up and running (estimated bandwidth is around 200 GB/s), so I would think it would be plenty for inferencing LLMs. I would prefer used server hardware due to price; compared to getting a bunch of P40s for the same memory amount, the power consumption is drastically lower. I'm just not sure how fast a slightly older server CPU can handle inference. If I was looking to run 80-120 GB models, would 200 GB/s and dual 24-core CPUs get me 3-5 tokens a second?
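As a sanity check on numbers like these, the standard memory-bandwidth argument can be sketched as follows (the 50% efficiency factor is a guess, not a measurement):

```python
def tokens_per_sec(bandwidth_gb_s, model_gb, efficiency=0.5):
    """Memory-bound decoding reads every weight roughly once per token,
    so tok/s ~= effective bandwidth / model size in memory."""
    return bandwidth_gb_s * efficiency / model_gb

for size in (80, 120):
    print(f"{size} GB model: ~{tokens_per_sec(200, size):.2f} tok/s")
```

By this estimate, 200 GB/s over an 80-120 GB model lands around 0.8-1.25 tok/s, so 3-5 tok/s looks optimistic unless real sustained bandwidth is much higher than the 50% assumed here.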
2023-11-09T11:43:32
https://www.reddit.com/r/LocalLLaMA/comments/17rb4rd/looking_for_cpu_inference_hardware_8_channel_ram/
jasonmbrown
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rb4rd
false
null
t3_17rb4rd
/r/LocalLLaMA/comments/17rb4rd/looking_for_cpu_inference_hardware_8_channel_ram/
false
false
self
3
null
Anyone got the DeepSeek Coder GGUF working?
1
[removed]
2023-11-09T11:39:03
https://www.reddit.com/r/LocalLLaMA/comments/17rb2at/anyone_got_the_deepseek_coder_gguf_working/
airtwink
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17rb2at
false
null
t3_17rb2at
/r/LocalLLaMA/comments/17rb2at/anyone_got_the_deepseek_coder_gguf_working/
false
false
self
1
null
Guidance and LM Studio issue
2
Hi all, I'm trying to make Guidance work with LM Studio. This is my code:

from langchain.llms import OpenAI
import guidance

llm = OpenAI(
    openai_api_key="NULL",
    temperature=0,
    openai_api_base="http://localhost:1234/v1",
    max_tokens=-1
)
guidance.llm = llm

program = guidance("""Tweak this proverb to apply to model instructions instead.
{{proverb}}
- {{book}} {{chapter}}:{{verse}}

UPDATED
Where there is no guidance{{gen 'rewrite' stop="\\n-"}}
- GPT {{#select 'chapter'}}9{{or}}10{{or}}11{{/select}}:{{gen 'verse'}}""")

# execute the program on a specific proverb
executed_program = program(
    proverb="Where there is no guidance, a people falls,\nbut in an abundance of counselors there is safety.",
    book="Proverbs",
    chapter=11,
    verse=14
)

I'm getting this error at the line program = guidance(...): *guidance is not callable*. Can you explain what I'm missing?
2023-11-09T10:03:08
https://www.reddit.com/r/LocalLLaMA/comments/17r9pu6/guidance_and_lm_studio_issue/
giammy677
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17r9pu6
false
null
t3_17r9pu6
/r/LocalLLaMA/comments/17r9pu6/guidance_and_lm_studio_issue/
false
false
self
2
null
Autogen integration with llama2
1
[removed]
2023-11-09T09:02:18
https://www.reddit.com/r/LocalLLaMA/comments/17r8x66/autogen_integration_with_llama2/
YouZealousideal3904
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17r8x66
false
null
t3_17r8x66
/r/LocalLLaMA/comments/17r8x66/autogen_integration_with_llama2/
false
false
self
1
null
Building a Quotes Generator
3
While learning scraping, I have scraped quotes (about 100) from [quotestoscrape](https://quotes.toscrape.com/js/). I wish to use these quotes to fine-tune an LLM, creating a quotes generator from this list. I am looking for the smallest but well-performing LLMs (preferably found on Hugging Face) to fine-tune. Additionally, if fine-tuning is overkill in this case and I should use RAG instead, which LLM (found on Hugging Face) would be appropriate? Any easy-to-understand tutorial/article/YouTube video relevant to this problem would be highly appreciated.
2023-11-09T07:45:46
https://www.reddit.com/r/LocalLLaMA/comments/17r7wzn/building_a_quotes_generator/
Responsible-Prize848
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17r7wzn
false
null
t3_17r7wzn
/r/LocalLLaMA/comments/17r7wzn/building_a_quotes_generator/
false
false
self
3
null
Llama 13B as Leo on Brave browser
16
Sidebar: https://preview.redd.it/ccx23dlv0azb1.png?width=437&format=png&auto=webp&s=197b4b5707f1b564b98c3d301a68e1efc46b04ca
2023-11-09T07:43:41
https://www.reddit.com/r/LocalLLaMA/comments/17r7vxs/llama_13b_as_leo_on_brave_browser/
aguei
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17r7vxs
false
null
t3_17r7vxs
/r/LocalLLaMA/comments/17r7vxs/llama_13b_as_leo_on_brave_browser/
false
false
https://b.thumbs.redditm…FDeh25nTipzA.jpg
16
null
Upgraded RAM from 4 GB to 20 GB: how do I make optimal use of my resources?
1
I finally finished my thesis defense and got the chance to upgrade my laptop RAM to 20 GB - so far the best thing I could do. I currently run 7B Mistral with koboldcpp, but the speed is kinda slow: 0.3 tokens per second, sometimes peaking at 0.8. What's wrong here? Or should I try oobabooga instead, or gpt4free?
2023-11-09T07:31:21
https://www.reddit.com/r/LocalLLaMA/comments/17r7pxo/have_some_ram_upgrade_from_4_gb_to_20_gb_how_i/
Merchant_Lawrence
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17r7pxo
false
null
t3_17r7pxo
/r/LocalLLaMA/comments/17r7pxo/have_some_ram_upgrade_from_4_gb_to_20_gb_how_i/
false
false
self
1
null
on-demand inference or batch inference?
7
Hey All, what does making a model prediction look like for your current projects? Are you building a model for a web-app and you're running on-demand inference? Are you working on a research project or doing some analysis that requires making hundreds of thousands to millions of predictions all at once? I'm currently at a crossroads with a developer tool I'm building and trying to figure out which types of inference workflows I should be focused on. A few weeks back I posted a tutorial on running Mistral-7B on hundreds of GPUs in the cloud in parallel. I got a decent amount of people saying that batch inference is relevant to them but over the last couple of days I've been running into more and more developers that are building web-apps that don't need to make many predictions all at once. If you were me where would you direct your focus? Anyways, I'm kinda rambling but I would love to know what you guys are working on and get some advice on the direction I should pursue.
2023-11-09T06:08:28
https://www.reddit.com/r/LocalLLaMA/comments/17r6kax/ondemand_inference_or_batch_inference/
Ok_Post_149
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17r6kax
false
null
t3_17r6kax
/r/LocalLLaMA/comments/17r6kax/ondemand_inference_or_batch_inference/
false
false
self
7
null
Deepseek Code error. Need help!
7
Hey Redditors,

I'm really new to the LLM stuff, but I got most of it set up, and every model I tried until now seemed to work fine. Just yesterday I downloaded the DeepSeek Coder 33B model (Instruct and Base), but every time I try to load it I get this error message:

Traceback (most recent call last):
  File "C:\AI\text-generation-webui-main\modules\ui_model_menu.py", line 209, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name, loader)
  File "C:\AI\text-generation-webui-main\modules\models.py", line 84, in load_model
    output = load_func_map[loader](model_name)
  File "C:\AI\text-generation-webui-main\modules\models.py", line 240, in llamacpp_loader
    model, tokenizer = LlamaCppModel.from_pretrained(model_file)
  File "C:\AI\text-generation-webui-main\modules\llamacpp_model.py", line 91, in from_pretrained
    result.model = Llama(**params)
  File "C:\AI\text-generation-webui-main\installer_files\env\Lib\site-packages\llama_cpp_cuda\llama.py", line 357, in __init__
    self.model = llama_cpp.llama_load_model_from_file(
  File "C:\AI\text-generation-webui-main\installer_files\env\Lib\site-packages\llama_cpp_cuda\llama_cpp.py", line 498, in llama_load_model_from_file
    return _lib.llama_load_model_from_file(path_model, params)
OSError: exception: access violation reading 0x0000000000000000

Since I don't have any clue about coding or anything to do with it, I'm seeking help here.
2023-11-09T05:55:10
https://www.reddit.com/r/LocalLLaMA/comments/17r6cz2/deepseek_code_error_need_help/
Jirker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17r6cz2
false
null
t3_17r6cz2
/r/LocalLLaMA/comments/17r6cz2/deepseek_code_error_need_help/
false
false
self
7
null
Llama 3 will be released in the first quarter of 2024?
162
P.S. This is text from Bing AI.
2023-11-09T03:31:41
https://i.redd.it/nn2wxfx4s8zb1.jpg
Majestical-psyche
i.redd.it
1970-01-01T00:00:00
0
{}
17r3zjh
false
null
t3_17r3zjh
/r/LocalLLaMA/comments/17r3zjh/llama_3_will_be_released_in_the_first_quarter_of/
false
false
https://b.thumbs.redditm…pej20uIjUG6M.jpg
162
{'enabled': True, 'images': [{'id': 't_nhFYeck6M5UOy8WOLG01UbnrJszUpL2kKvbDb7UR8', 'resolutions': [{'height': 136, 'url': 'https://preview.redd.it/nn2wxfx4s8zb1.jpg?width=108&crop=smart&auto=webp&s=7fe244ad9679d8effe6e0b99221326d4dd948cca', 'width': 108}, {'height': 272, 'url': 'https://preview.redd.it/nn2wxfx4s8zb1.jpg?width=216&crop=smart&auto=webp&s=93a23a990075a6d985dbf5db1f22dc15c5434af8', 'width': 216}, {'height': 403, 'url': 'https://preview.redd.it/nn2wxfx4s8zb1.jpg?width=320&crop=smart&auto=webp&s=c26eccfa583066cc665ec1223fb43b353ba44294', 'width': 320}], 'source': {'height': 728, 'url': 'https://preview.redd.it/nn2wxfx4s8zb1.jpg?auto=webp&s=b252ea057d6a8d2ed4a7c5dedb81d6a1d9eed4ef', 'width': 577}, 'variants': {}}]}
Budget machine for tinkering with LLMs
1
tl;dr: I'm considering building a budget machine for tinkering with LLMs, but I'm not sure if this is a good idea or how to go about it. For context: I work in a university department. I currently have access to a 2080 Ti on a shared machine, and we're in the process of acquiring a small server with 2 L40 cards, so for any larger experiments I will be able to use that shared machine. However, I would like my own small machine for tinkering: trying different models and techniques, just playing around, and preparing larger experiments to be run on the server. My focus is on teaching and education, not state-of-the-art research. Aiming for a good amount of VRAM, the 4060 Ti 16GB seems the most obvious choice; I also like its low power requirements (for both energy and cooling). But this card seems to have a poor reputation overall. I'm also not sure where the current sweet spot is w.r.t. CPU and memory – I completely lost track of Intel's and AMD's generations over the last few years. Some additional comments on common suggestions: * I simply like having my own hardware, and cloud services seem more expensive in the long run. * There is no real market for used GPUs where I'm located (Singapore), so the common suggestion to "go with a used 3090" doesn't really work. Any good suggestions, or am I naive with my idea of a budget machine? Thanks a lot!
2023-11-09T03:04:12
https://www.reddit.com/r/LocalLLaMA/comments/17r3g9s/budget_machine_for_tinkering_with_llms/
the-uncle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17r3g9s
false
null
t3_17r3g9s
/r/LocalLLaMA/comments/17r3g9s/budget_machine_for_tinkering_with_llms/
false
false
self
1
null
Help (low end stuff+LLM for CSV)
1
Hi there, newbie here. I have a Dell G3 3590 laptop with 8 GB of RAM. I want to use an LLM to chat with a CSV file; the file contains descriptions of safety incidents, which I want to group into categories and (if possible) generate some pandas and seaborn code for visualization and other work-related stuff. Which LLM can I use? I also want to ask things like: "What safety incident is the most repeated this week?", "Make a graph with matplotlib using the safety incidents data". Thanks in advance for your response
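One thing worth noting: on 8 GB of RAM, the cheapest path is often to let a small model *write* the analysis code rather than read the whole CSV itself. The aggregation part needs no model at all — a minimal stdlib sketch (the column names here are assumptions, not your real schema):

```python
import csv
import io
from collections import Counter

# Toy CSV standing in for the real incident export (hypothetical columns).
raw = """date,description
2023-11-06,Slip on wet floor
2023-11-07,Forklift near-miss
2023-11-07,Slip on wet floor
"""

rows = list(csv.DictReader(io.StringIO(raw)))
counts = Counter(r["description"] for r in rows)
# Most frequent incident and its count — the kind of answer
# "What safety incident is the most repeated this week?" boils down to.
most_common_incident, n = counts.most_common(1)[0]
```

A 7B model quantized to 4-bit can usually produce code like this from a natural-language question; you then run the code locally instead of feeding every row through the model.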
2023-11-09T02:43:23
https://www.reddit.com/r/LocalLLaMA/comments/17r3190/help_low_end_stuffllm_for_csv/
mamarrecomolomuevesa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17r3190
false
null
t3_17r3190
/r/LocalLLaMA/comments/17r3190/help_low_end_stuffllm_for_csv/
false
false
self
1
null
How do I chat with a model that uses Alpaca format?
1
I'm using TheBloke's [model](https://huggingface.co/TheBloke/Xwin-MLewd-7B-V0.2-AWQ?not-for-all-audiences=true), which uses the Alpaca prompt format, but I have no idea how to actually use it: everything it responds with reads as if I were asking it programming questions. I'm using text-generation-webui and can't find examples of how to set it up for Alpaca-style chat.
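For reference, the Alpaca format is just a fixed text template wrapped around each turn; a small sketch of how the prompt is typically assembled (webui ships an "Alpaca" instruction-template preset that does this for you, if I remember the UI correctly):

```python
def alpaca_prompt(instruction: str, user_input: str = "") -> str:
    # Standard Alpaca template; the "### Input:" section is optional.
    header = ("Below is an instruction that describes a task. "
              "Write a response that appropriately completes the request.")
    if user_input:
        return (f"{header}\n\n### Instruction:\n{instruction}\n\n"
                f"### Input:\n{user_input}\n\n### Response:\n")
    return f"{header}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

prompt = alpaca_prompt("Write a short greeting.")
```

The model then generates everything after `### Response:`. If the frontend isn't applying this wrapper, the model sees bare text and tends to ramble off-format.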
2023-11-09T02:33:29
https://www.reddit.com/r/LocalLLaMA/comments/17r2tu9/how_do_i_chat_with_a_model_that_uses_alpaca_format/
rook2pawn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17r2tu9
false
null
t3_17r2tu9
/r/LocalLLaMA/comments/17r2tu9/how_do_i_chat_with_a_model_that_uses_alpaca_format/
false
false
self
1
{'enabled': False, 'images': [{'id': 'g3PeDmS6eWrYtcRA7iShTG19aCJp-iYdEVaF-YsRulQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VadBfAjf6zYmghL7XyuDrWRr8y9XnOAnEFW1PliobLU.jpg?width=108&crop=smart&auto=webp&s=ffaaa57c526f72820a8119b2447de0491223df4e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VadBfAjf6zYmghL7XyuDrWRr8y9XnOAnEFW1PliobLU.jpg?width=216&crop=smart&auto=webp&s=407937330aa6c0fcd477149b08874c8f16e1c823', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VadBfAjf6zYmghL7XyuDrWRr8y9XnOAnEFW1PliobLU.jpg?width=320&crop=smart&auto=webp&s=cc4503c6aaea24436e684f40cef97385676c431c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VadBfAjf6zYmghL7XyuDrWRr8y9XnOAnEFW1PliobLU.jpg?width=640&crop=smart&auto=webp&s=6e9ec2740fffecfd382f7089dc92f21438cd2bc9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VadBfAjf6zYmghL7XyuDrWRr8y9XnOAnEFW1PliobLU.jpg?width=960&crop=smart&auto=webp&s=b400a9736384f08fc7184bc04d0d771a6ff88821', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VadBfAjf6zYmghL7XyuDrWRr8y9XnOAnEFW1PliobLU.jpg?width=1080&crop=smart&auto=webp&s=0cdc8f3192171de2ba94bebe8adc9e2fe5d45b47', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VadBfAjf6zYmghL7XyuDrWRr8y9XnOAnEFW1PliobLU.jpg?auto=webp&s=62e6da2d0c2824d8f0ec29f1db91d56f9719b694', 'width': 1200}, 'variants': {}}]}
how to train local llama to search a postgres to answer client questions?
1
I was wondering how I can get a local LLaMA model to search a Postgres database to answer client questions. I am a Python web developer and not well trained in AI. How do I get started?
2023-11-09T02:14:59
https://www.reddit.com/r/LocalLLaMA/comments/17r2gad/how_to_train_local_llama_to_search_a_postgres_to/
KimStacks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17r2gad
false
null
t3_17r2gad
/r/LocalLLaMA/comments/17r2gad/how_to_train_local_llama_to_search_a_postgres_to/
false
false
default
1
null
Best practical method of knowledge distillation available?
2
TL;DR: From what I've seen online, knowledge distillation generally performs worse than training a model from scratch on the same data. Is there a KD method where this doesn't happen, so I get close to the performance of a model trained from scratch? I'm asking here because I'm trying to run a distilled, smaller LLaMA on my PC's CPU. I've recently been interested in making DL models more useful for everyday tasks, and — considering their size — in running them on consumer devices without much loss in quality. Right now this feels like trying to fit an elephant into his pants: it tears every time I try. I found quantization to be cool, but honestly I need to reduce the size even more, which led me to knowledge distillation. Theoretically it's fantastic, but practically it seems to underperform — probably worse than just training the model from scratch on the dataset. So: is there a used and proven knowledge distillation method that gives accuracy at least very close to a model trained from scratch?
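For anyone unfamiliar with what the distillation objective actually is: the classic recipe (Hinton et al., 2015) matches the student's temperature-softened output distribution to the teacher's, scaled by T². A minimal pure-Python sketch of that loss term (toy logits, no framework):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-softened softmax; higher T flattens the distribution.
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(teacher_logits, student_logits, T=2.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # as in the original distillation paper.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Zero when the student already matches the teacher exactly.
loss_same = kd_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
```

In practice this term is mixed with the ordinary cross-entropy on hard labels, and a lot of the reported quality gap comes down to how that mixing weight and T are tuned.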
2023-11-09T02:06:09
https://www.reddit.com/r/LocalLLaMA/comments/17r2a3p/best_practical_method_of_knowledge_distillation/
Xanta_Kross
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17r2a3p
false
null
t3_17r2a3p
/r/LocalLLaMA/comments/17r2a3p/best_practical_method_of_knowledge_distillation/
false
false
self
2
null
Question on LLM for email generation
4
Hi [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/), I am working on a project that aims to use LLMs for generating emails. The end goal is to simply give the model an input such as "Generate an email from HR to employees at Meta regarding work from home policy" and let the LLM do its thing. I've had rather successful results with several models, particularly Mistral models. Generally, I have found Mistral models and their finetuned variants, as well as some finetuned LLaMa 2 models to work well. My question is: What are ways that I can improve this model? I have already used few-shot prompting to get the output in my desired format. I am planning to try finetuning as well as RAG next. If you guys know other things that I can explore, such as other models or other techniques, feel free to let me know!
2023-11-09T00:55:41
https://www.reddit.com/r/LocalLLaMA/comments/17r0u9i/question_on_llm_for_email_generation/
yakun_goated
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17r0u9i
false
null
t3_17r0u9i
/r/LocalLLaMA/comments/17r0u9i/question_on_llm_for_email_generation/
false
false
self
4
null
Question on LLM for email generation
1
[removed]
2023-11-09T00:52:25
https://www.reddit.com/r/LocalLLaMA/comments/17r0rvc/question_on_llm_for_email_generation/
moneybola
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17r0rvc
false
null
t3_17r0rvc
/r/LocalLLaMA/comments/17r0rvc/question_on_llm_for_email_generation/
false
false
self
1
null
Is it mathematically possible for neural networks to outperform traditional CPUs in simple (or not) calculations?
1
Is it mathematically possible for neural networks to outperform traditional CPUs in simple (or not) calculations? Imagine a future where neural networks have become so advanced that they are much cheaper to operate. Even though neural networks are complex and usually use a lot of energy, could they ever become more cost-effective for doing simple math, like '2+2', or more complex tasks like integrating a function, compared to traditional algorithms running on a CPU? This might be hard to imagine for basic arithmetic, but what about for more complicated calculations? Is there any research that looks into whether neural networks can perform certain calculations more cheaply and quickly than traditional CPU-based algorithms, focusing on tasks that both can accomplish? P.s I understand that one can never be 100% sure in correctness of neural network performed calculations, ignoring that
2023-11-09T00:15:50
https://www.reddit.com/r/LocalLLaMA/comments/17r00hl/is_it_mathematically_possible_for_neural_networks/
maxhsy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17r00hl
false
null
t3_17r00hl
/r/LocalLLaMA/comments/17r00hl/is_it_mathematically_possible_for_neural_networks/
false
false
self
1
null
how can llm model serve multiple requests at the same time ?
1
[removed]
2023-11-08T23:59:07
https://www.reddit.com/r/LocalLLaMA/comments/17qznfx/how_can_llm_model_serve_multiple_requests_at_the/
kaoutar-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17qznfx
false
null
t3_17qznfx
/r/LocalLLaMA/comments/17qznfx/how_can_llm_model_serve_multiple_requests_at_the/
false
false
self
1
null
For roleplay purposes, Goliath-120b is absolutely thrilling me
141
I've used most of the high-end models in an unquantized format at some point or another (Xwin, Euryale, etc.) and found them generally pretty good experiences, but they always seem to lack the ability to "show, not tell" the way a strong writer can, even when prompted to do so. At the same time, I've always been rather dissatisfied with a lot of quantizations, as I've found the degradation in quality to be noticeable. So up until now, I've been running unquantized models on 2x A100s and extending the context as far as I can get away with. Tried Goliath-120b the other day, and this absolutely stood everything on its head. Not only is it capable of stunning levels of writing — implying far more than directly stating, in a way I'm not sure I've seen in a model to date — but the exl2 quants from Panchovix let it run on a single A100 at 9-10k extended context (about where RoPE scaling seems to universally start to break down in my experience). Best part is, if there is a quality drop (I'm using 4.85 bpw), I'm not seeing it — at all. So not only is it giving a better experience than an unquantized model, but it's doing so at about half the cost of my usual way of running these models. Benchmarks be damned: for those willing to rent an A100 for their writing, however this model was managed, I think this might be the actual way to challenge the big closed-source/censored LLMs for roleplay.
2023-11-08T23:56:12
https://www.reddit.com/r/LocalLLaMA/comments/17qzlat/for_roleplay_purposes_goliath120b_is_absolutely/
tenmileswide
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17qzlat
false
null
t3_17qzlat
/r/LocalLLaMA/comments/17qzlat/for_roleplay_purposes_goliath120b_is_absolutely/
false
false
self
141
null
Autogen with local models?
2
Hey guys, is anyone using autogen with a local model to do multi agent stuff? I have used it with openai api but now wondering if their new assistants thing makes it obsolete? If I could run it all locally that would be a new large value add...
2023-11-08T23:37:17
https://www.reddit.com/r/LocalLLaMA/comments/17qz6bd/autogen_with_local_models/
Hefty_Development813
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17qz6bd
false
null
t3_17qz6bd
/r/LocalLLaMA/comments/17qz6bd/autogen_with_local_models/
false
false
self
2
null
8x P40/P100 (PCIE) servers
1
Are these worth bothering with? Or is the money better spent on 2x 3090s or an A6000?
2023-11-08T23:28:13
https://www.reddit.com/r/LocalLLaMA/comments/17qyyxh/8x_p40p100_pcie_servers/
Training_Wait_3904
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17qyyxh
false
null
t3_17qyyxh
/r/LocalLLaMA/comments/17qyyxh/8x_p40p100_pcie_servers/
false
false
self
1
null
How to run BakLLaVA (Mistral + LLaVA) on M1 Apple Silicon < 10 lines of code
1
[removed]
2023-11-08T22:19:23
https://www.reddit.com/r/LocalLLaMA/comments/17qxdpo/how_to_run_bakllava_mistral_llava_on_m1_apple/
Fluid-Age-9266
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17qxdpo
false
null
t3_17qxdpo
/r/LocalLLaMA/comments/17qxdpo/how_to_run_bakllava_mistral_llava_on_m1_apple/
false
false
self
1
{'enabled': False, 'images': [{'id': '8tAyVLH3UAPaPbWwNY5fxn2epEZHgXmHHX9FG0v52wY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/6qPPE_MEteNG7YjMdclTWa_8VyZ9dLGpuAwYdOw8C9Y.jpg?width=108&crop=smart&auto=webp&s=2579ab06751d4be737d11a4c5b21319da8fd6f74', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/6qPPE_MEteNG7YjMdclTWa_8VyZ9dLGpuAwYdOw8C9Y.jpg?width=216&crop=smart&auto=webp&s=1ba62e6a3e4ebd2b0ee5ed67d043bcc986cb7cf2', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/6qPPE_MEteNG7YjMdclTWa_8VyZ9dLGpuAwYdOw8C9Y.jpg?width=320&crop=smart&auto=webp&s=8a6d7c00fe014db448594a6027a8a6a4bb43684c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/6qPPE_MEteNG7YjMdclTWa_8VyZ9dLGpuAwYdOw8C9Y.jpg?width=640&crop=smart&auto=webp&s=8d74cb32a9bd2a21b758bb098b57b194e6fbb167', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/6qPPE_MEteNG7YjMdclTWa_8VyZ9dLGpuAwYdOw8C9Y.jpg?width=960&crop=smart&auto=webp&s=ec6fac4654bfd157d5675b8b8b5244b4ea537eaf', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/6qPPE_MEteNG7YjMdclTWa_8VyZ9dLGpuAwYdOw8C9Y.jpg?width=1080&crop=smart&auto=webp&s=5d8c015aff6688da2e52d4e09a82f91907e2dd91', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/6qPPE_MEteNG7YjMdclTWa_8VyZ9dLGpuAwYdOw8C9Y.jpg?auto=webp&s=18a3a4181cd19dba3888db7ccc9869c2152d6aea', 'width': 1200}, 'variants': {}}]}
Is this normal for Tiefighter?
1
I have started to use Tiefighter 13B on ooba, and it's really great so far. But I did notice that with characters that are meant to display their inner thoughts, it just doesn't show them, even though the character card specifies that they must be added at the end of each message. I also noticed the same thing with characters that are meant to add emojis to emphasize their speech — it just never seems to use them. So I'm wondering if this is something I can fix on the generation tab, or with some type of instruction, or if this is normal for the model? I'm a total noob at this, so any clarification would be appreciated. Apologies for the bad English
2023-11-08T21:32:47
https://www.reddit.com/r/LocalLLaMA/comments/17qwa7i/is_this_normal_for_tiefighter/
warpwaver
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17qwa7i
false
null
t3_17qwa7i
/r/LocalLLaMA/comments/17qwa7i/is_this_normal_for_tiefighter/
false
false
self
1
null
What are you using your local language models for?
1
Just curious to hear what you're doing with it or building with it.
2023-11-08T21:22:28
https://www.reddit.com/r/LocalLLaMA/comments/17qw1k8/what_are_you_using_your_local_language_models_for/
currentscurrents
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17qw1k8
false
null
t3_17qw1k8
/r/LocalLLaMA/comments/17qw1k8/what_are_you_using_your_local_language_models_for/
false
false
self
1
null
How much RAM is needed to run Mistral Model
5
I have a dataset with a context length of 6k. I want to fine-tune a Mistral model with 4-bit quantization, but I ran into an out-of-memory error on a GPU with 24 GB of VRAM. Is there any rule of thumb for how much memory this context length needs? Thanks!
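A rough back-of-envelope, in case it helps explain the OOM — every number below is an assumption for illustration, not a measurement:

```python
# Crude VRAM estimate for QLoRA-style fine-tuning of a ~7B model at 6k context.
params = 7e9
weight_bytes = params * 0.5          # 4-bit base weights ~= 0.5 bytes/param

# LoRA adapters and their optimizer states are comparatively small; at long
# context the activations dominate. Very rough activation estimate
# (hidden=4096, layers=32, fp16, ~12 saved tensors per layer — all guesses):
seq_len, hidden, layers = 6144, 4096, 32
act_bytes = seq_len * hidden * layers * 2 * 12

total_gb = (weight_bytes + act_bytes) / 1e9
```

Under these assumptions the total already lands above 22 GB before CUDA overhead, which is consistent with a 24 GB card tipping over at 6k context. Gradient checkpointing (trading compute for activation memory) is the usual first lever to pull.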
2023-11-08T19:29:47
https://www.reddit.com/r/LocalLLaMA/comments/17qtgqt/how_much_ram_is_needed_to_run_mistral_model/
Choice_Diver_2585
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17qtgqt
false
null
t3_17qtgqt
/r/LocalLLaMA/comments/17qtgqt/how_much_ram_is_needed_to_run_mistral_model/
false
false
self
5
null
Translate to and from 400+ languages locally with MADLAD-400
168
Google [released](https://github.com/google-research/google-research/tree/master/madlad_400) T5X checkpoints for MADLAD-400 a couple of months ago, but nobody could figure out how to run them. It turns out the vocabulary was wrong, and they uploaded the correct one last week. I've converted the models to [the safetensors format](https://huggingface.co/jbochi/madlad400-3b-mt), and I created this [space](https://huggingface.co/spaces/jbochi/madlad400-3b-mt) if you want to try the smaller model. I also published [quantized GGUF weights you can use with candle](https://huggingface.co/jbochi/madlad400-3b-mt#usage). It decodes at \~15 tokens/s on an M2 Mac. It seems that [NLLB](https://huggingface.co/facebook/nllb-200-distilled-600M) is the most popular machine translation model right now, but its license only allows non-commercial usage. [MADLAD-400 is CC BY 4.0](https://github.com/google-research/google-research/tree/master#google-research).
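One detail that trips people up (going by the model card): MADLAD-400 selects the target language with a `<2xx>` token prefixed to the input, e.g. `<2pt>` for Portuguese. A tiny helper to build that input:

```python
def madlad_input(text: str, target_lang: str) -> str:
    # MADLAD-400 expects the target language as a <2xx> prefix token,
    # e.g. "<2pt> I love pizza!" to translate into Portuguese.
    return f"<2{target_lang}> {text}"

prompt = madlad_input("I love pizza!", "pt")
```

You'd then feed `prompt` to the seq2seq model (T5-style encoder-decoder) and decode the output as the translation.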
2023-11-08T19:17:02
https://www.reddit.com/r/LocalLLaMA/comments/17qt6m4/translate_to_and_from_400_languages_locally_with/
jbochi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17qt6m4
false
null
t3_17qt6m4
/r/LocalLLaMA/comments/17qt6m4/translate_to_and_from_400_languages_locally_with/
false
false
self
168
{'enabled': False, 'images': [{'id': '3btX2_HKD7lPCtEfroguUxim9VlInGMg9HBUEO7a1i0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XZF0XzCpWbvRdSwgVFa0ib-XdQ7JyGu55qtfsX6JgL0.jpg?width=108&crop=smart&auto=webp&s=0dc3ead42d1cc48e6f96d3cfd7dc6034f1fa809a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XZF0XzCpWbvRdSwgVFa0ib-XdQ7JyGu55qtfsX6JgL0.jpg?width=216&crop=smart&auto=webp&s=cad6a14ae2afc9ac058a0ebba4df644d01bc9511', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XZF0XzCpWbvRdSwgVFa0ib-XdQ7JyGu55qtfsX6JgL0.jpg?width=320&crop=smart&auto=webp&s=e21edf114ea7c6f593e414034a366ea902cedea2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XZF0XzCpWbvRdSwgVFa0ib-XdQ7JyGu55qtfsX6JgL0.jpg?width=640&crop=smart&auto=webp&s=772c0d390894ddd0a723aefb864e3f60305599e4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XZF0XzCpWbvRdSwgVFa0ib-XdQ7JyGu55qtfsX6JgL0.jpg?width=960&crop=smart&auto=webp&s=5e0bd7539968b86a5ec36545c7bb861a8239ff1d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XZF0XzCpWbvRdSwgVFa0ib-XdQ7JyGu55qtfsX6JgL0.jpg?width=1080&crop=smart&auto=webp&s=e6252dcf43630459e63ea64d356c9138cde5f777', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XZF0XzCpWbvRdSwgVFa0ib-XdQ7JyGu55qtfsX6JgL0.jpg?auto=webp&s=814e3cc7a733b195264654e0702b3e47872fab46', 'width': 1200}, 'variants': {}}]}
Memory Needed while fine tuning Llama2-7b model
1
My longest context length is 6k. I want to fine-tune a Llama-2-7B model on this dataset. How much VRAM will I need if I load the model with 4-bit quantization using bitsandbytes? Thank you!
2023-11-08T19:14:11
https://www.reddit.com/r/LocalLLaMA/comments/17qt48f/memory_needed_while_fine_tuning_llama27b_model/
Choice_Diver_2585
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17qt48f
false
null
t3_17qt48f
/r/LocalLLaMA/comments/17qt48f/memory_needed_while_fine_tuning_llama27b_model/
false
false
self
1
null
training parameters to add knowledge through full-fine tuning
1
[removed]
2023-11-08T19:12:44
https://www.reddit.com/r/LocalLLaMA/comments/17qt31l/training_parameters_to_add_knowledge_through/
Koliham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17qt31l
false
null
t3_17qt31l
/r/LocalLLaMA/comments/17qt31l/training_parameters_to_add_knowledge_through/
false
false
self
1
null
Hand writing llm?
1
So someone mentioned something interesting: they are having their kids write to Santa, and some lady is going to write back pretending to be Santa. The first thought that came to mind is, why not just use an AI? Obviously we can use an LLM to get the text, but is there a way to have it automatically output the result as if it were handwritten? Ideally with small errors and variation to make it less obvious. Is there something out there that does this?
2023-11-08T19:09:05
https://www.reddit.com/r/LocalLLaMA/comments/17qszxf/hand_writing_llm/
crua9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17qszxf
false
null
t3_17qszxf
/r/LocalLLaMA/comments/17qszxf/hand_writing_llm/
false
false
self
1
null
Kubernetes
1
What are the use cases of k8s in the world of ML and LLMs? Could you direct me to some sources?
2023-11-08T18:48:21
https://www.reddit.com/r/LocalLLaMA/comments/17qsiza/kubernetes/
troposfer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17qsiza
false
null
t3_17qsiza
/r/LocalLLaMA/comments/17qsiza/kubernetes/
false
false
self
1
null
Rag vs Vector db
13
I am confused about these 2. Sometimes people use them interchangeably. Is it because RAG is a method, and the place where you store things happens to be a vector DB? I remember word2vec from before all of this LLM stuff. But isn't the hard part creating such a meaningful word2vec — and by the way, is word2vec what we now call "embeddings"?
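For what it's worth: RAG is the overall pattern (embed the query, retrieve the most similar chunks, stuff them into the prompt), while a vector DB is just the storage component that makes the similarity lookup fast. The operation at the heart of both is nearest-neighbour search over embedding vectors — here with toy 3-dimensional "embeddings" (real ones come from a model and have hundreds of dimensions):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product of the vectors over the product
    # of their lengths; 1.0 means pointing the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

doc = [0.9, 0.1, 0.0]
query_close = [0.8, 0.2, 0.0]   # semantically "near" the doc
query_far = [0.0, 0.1, 0.9]     # semantically "far"
```

A vector DB precomputes index structures so this comparison doesn't have to be done against every stored document on each query.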
2023-11-08T18:42:09
https://www.reddit.com/r/LocalLLaMA/comments/17qse19/rag_vs_vector_db/
troposfer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17qse19
false
null
t3_17qse19
/r/LocalLLaMA/comments/17qse19/rag_vs_vector_db/
false
false
self
13
null
txtai 6.2 released: Adds binary quantization, bind parameters for multimedia SQL queries and performance improvements
28
2023-11-08T18:40:12
https://github.com/neuml/txtai
davidmezzetti
github.com
1970-01-01T00:00:00
0
{}
17qscbn
false
null
t3_17qscbn
/r/LocalLLaMA/comments/17qscbn/txtai_62_released_adds_binary_quantization_bind/
false
false
https://b.thumbs.redditm…LcTsf4f2sM3U.jpg
28
{'enabled': False, 'images': [{'id': 'yGKZrozFSrxZ0GNIkYzBAA-oiN8XXsSqY2UCG70-iIQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3YUWfMShRqBFpIZSqhV_lhLNmjlzewmtrNDGDU3Q81w.jpg?width=108&crop=smart&auto=webp&s=a45d34c509784843115a731230b0416de3bae7b8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3YUWfMShRqBFpIZSqhV_lhLNmjlzewmtrNDGDU3Q81w.jpg?width=216&crop=smart&auto=webp&s=86ea48fadbc32fa87707658c65d9055ed4a03ce9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3YUWfMShRqBFpIZSqhV_lhLNmjlzewmtrNDGDU3Q81w.jpg?width=320&crop=smart&auto=webp&s=8e82b856688fe164af8a16bb46d7851e6e453242', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3YUWfMShRqBFpIZSqhV_lhLNmjlzewmtrNDGDU3Q81w.jpg?width=640&crop=smart&auto=webp&s=deff0c1c3ba12ea43ef5bc20566e81c0c99718a9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3YUWfMShRqBFpIZSqhV_lhLNmjlzewmtrNDGDU3Q81w.jpg?width=960&crop=smart&auto=webp&s=a664d06dd4e3d33c3d2914459ad5622714748095', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3YUWfMShRqBFpIZSqhV_lhLNmjlzewmtrNDGDU3Q81w.jpg?width=1080&crop=smart&auto=webp&s=2cd635fde0cb7c8b9057ead08cbe20ad5ae724eb', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/3YUWfMShRqBFpIZSqhV_lhLNmjlzewmtrNDGDU3Q81w.jpg?auto=webp&s=0b1c2f692394cc3dc3d024767e6c4df406d1b552', 'width': 1920}, 'variants': {}}]}
Error when importing OpenAI with Llama Index
2
I get the following error when I try to import any OpenAI dependencies in llama-index:

    from llama_index.llms import OpenAI

fails with:

    ---> 31 @typing_extensions.deprecated(
         32     "The Edits API is deprecated; please use Chat Completions instead.\n\nhttps://openai.com/blog/gpt-4-api-general-availability#deprecation-of-the-edits-api\n"

    AttributeError: module 'typing_extensions' has no attribute 'deprecated'

Anyone else getting this error?
2023-11-08T18:38:44
https://www.reddit.com/r/LocalLLaMA/comments/17qsb7t/error_when_importing_openai_with_llama_index/
LargeBrick7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17qsb7t
false
null
t3_17qsb7t
/r/LocalLLaMA/comments/17qsb7t/error_when_importing_openai_with_llama_index/
false
false
self
2
{'enabled': False, 'images': [{'id': '5-l9CZ_HW-7jVI6OB1-iWt48qTziuLMHEQ8IdI9SNIY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/JE81nz4WNaSjmViMpOonmCMKa3U6FTPmw7Ets8WEYx8.jpg?width=108&crop=smart&auto=webp&s=cd1f87f27e6aea43705c715d827b3b4288821488', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/JE81nz4WNaSjmViMpOonmCMKa3U6FTPmw7Ets8WEYx8.jpg?width=216&crop=smart&auto=webp&s=a8fe0d91df5145cf44e5651f944a65a4bb190753', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/JE81nz4WNaSjmViMpOonmCMKa3U6FTPmw7Ets8WEYx8.jpg?width=320&crop=smart&auto=webp&s=ec93a3909716671c084ff62ab6b8f27ced702fcb', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/JE81nz4WNaSjmViMpOonmCMKa3U6FTPmw7Ets8WEYx8.jpg?width=640&crop=smart&auto=webp&s=2a6dac856802edd492666528f3145c4ca65f8b18', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/JE81nz4WNaSjmViMpOonmCMKa3U6FTPmw7Ets8WEYx8.jpg?width=960&crop=smart&auto=webp&s=f4cc74e8c787942dabacd44c27c16f03a9a09caa', 'width': 960}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/JE81nz4WNaSjmViMpOonmCMKa3U6FTPmw7Ets8WEYx8.jpg?auto=webp&s=2ae1c58b5ccc0d094d245d2f72fb33dc4a2b49f7', 'width': 1000}, 'variants': {}}]}
Methods for Estimating Uncertainty in Finetuned LLMs for Text Classification
4
Hi everyone, I'm exploring methods to quantify uncertainty in the outputs of large language models (LLMs) that have been finetuned for text classification tasks. I'm looking for strategies that are specifically tailored or adaptable to LLMs like Mistral7b when applied to text classification. One paper I came across, [here](https://arxiv.org/abs/2305.18404), proposes a method, but it seems mainly suitable for multiple-choice question applications. Another potential strategy might involve Bayesian dropout. For example, using dropout layers within the adapter modules of the LLM could offer a way to estimate uncertainty. However, I haven't found any concrete implementations or case studies validating this method. Does anyone have experience with or insights into these or other techniques for assessing uncertainty in LLM text classification outputs? Any examples or references to practical implementations would be greatly appreciated! Thank you in advance for your help!
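To make the Bayesian-dropout idea concrete: with dropout left active at inference, you run several stochastic forward passes, average the class probabilities, and treat the entropy of the mean as the uncertainty score. A framework-free sketch of just that aggregation step (the sampled probability vectors here are made up; in practice they'd come from the finetuned model with dropout enabled):

```python
import math

def predictive_entropy(sampled_probs):
    # Average the stochastic passes, then take the entropy of the mean
    # distribution — higher entropy => more uncertainty (MC-dropout style).
    n = len(sampled_probs)
    k = len(sampled_probs[0])
    mean = [sum(s[i] for s in sampled_probs) / n for i in range(k)]
    return -sum(p * math.log(p) for p in mean if p > 0)

# Passes that agree -> low entropy; passes that disagree -> high entropy.
confident = predictive_entropy([[0.97, 0.03], [0.95, 0.05]])
uncertain = predictive_entropy([[0.9, 0.1], [0.1, 0.9]])
```

With adapter-based finetuning, keeping dropout active only inside the adapter modules (as the post suggests) would make each pass cheap, though I haven't seen a published validation of that exact setup either.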
2023-11-08T18:18:52
https://www.reddit.com/r/LocalLLaMA/comments/17qrv1p/methods_for_estimating_uncertainty_in_finetuned/
jmlbeau
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
17qrv1p
false
null
t3_17qrv1p
/r/LocalLLaMA/comments/17qrv1p/methods_for_estimating_uncertainty_in_finetuned/
false
false
self
4
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}