Dataset schema (column: type, observed range):

title: string, length 1–300
score: int64, 0–8.54k
selftext: string, length 0–41.5k
created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
url: string, length 0–878
author: string, length 3–20
domain: string, length 0–82
edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
gilded: int64, 0–2
gildings: string, 7 classes
id: string, length 7
locked: bool, 2 classes
media: string, length 646–1.8k
name: string, length 10
permalink: string, length 33–82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, length 4–213
ups: int64, 0–8.54k
preview: string, length 301–5.01k
Using new workstation-class CPUs, such as Emerald Rapids, for inference?
15
Intel recently released a new server CPU (rumored to eventually come to a cheaper workstation class) that supports 8 channels of DDR5 (around 300 GB/s of bandwidth, and it's relatively cheap to reach 256GB or even 512GB of RAM) plus AMX instructions for AI acceleration: [https://www.servethehome.com/5th-gen-intel-xeon-scalable-emerald-rapids-resets-servers-by-intel/](https://www.servethehome.com/5th-gen-intel-xeon-scalable-emerald-rapids-resets-servers-by-intel/)

Some companies are already starting to use Intel CPUs for this purpose: [https://www.kedglobal.com/tech,\_media\_telecom/newsView/ked202310300017](https://www.kedglobal.com/tech,_media_telecom/newsView/ked202310300017)

Would this eventually be a much cheaper alternative to GPUs, at least for inference on large models over 100B parameters? I guess Apple has the M2 Ultra with 192GB of RAM for around $8,000, but that is less flexible. Hopefully someone will be able to get some benchmarks for these configurations.
2023-12-21T00:29:44
https://www.reddit.com/r/LocalLLaMA/comments/18n9qui/using_new_workstation_class_cpus_such_emerald/
EasternBeyond
self.LocalLLaMA
2023-12-21T00:40:06
0
{}
18n9qui
false
null
t3_18n9qui
/r/LocalLLaMA/comments/18n9qui/using_new_workstation_class_cpus_such_emerald/
false
false
self
15
{'enabled': False, 'images': [{'id': 'qy3eEaUB5-TvTxSb0F_d-T0TKsscMU9sJswOKf9TJKQ', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/VPy4ulrY3fRqUsVUlxZw6BrIcvDlhi0WkcLUX2gbnD8.jpg?width=108&crop=smart&auto=webp&s=2560330829db222ce1de65f46c0e93aba8a5343a', 'width': 108}, {'height': 148, 'url': 'https://external-preview.redd.it/VPy4ulrY3fRqUsVUlxZw6BrIcvDlhi0WkcLUX2gbnD8.jpg?width=216&crop=smart&auto=webp&s=b312a321bb058dbc1a10995eb0de158cd577dfed', 'width': 216}, {'height': 219, 'url': 'https://external-preview.redd.it/VPy4ulrY3fRqUsVUlxZw6BrIcvDlhi0WkcLUX2gbnD8.jpg?width=320&crop=smart&auto=webp&s=f22fef2dee9bd0d6775f84973ad59d25cda010f8', 'width': 320}, {'height': 439, 'url': 'https://external-preview.redd.it/VPy4ulrY3fRqUsVUlxZw6BrIcvDlhi0WkcLUX2gbnD8.jpg?width=640&crop=smart&auto=webp&s=c9bc092a5dc5ae4277e881846c629c4cf45d3f95', 'width': 640}, {'height': 659, 'url': 'https://external-preview.redd.it/VPy4ulrY3fRqUsVUlxZw6BrIcvDlhi0WkcLUX2gbnD8.jpg?width=960&crop=smart&auto=webp&s=752329cfb844f532c38e1f075e241cb8145de929', 'width': 960}, {'height': 741, 'url': 'https://external-preview.redd.it/VPy4ulrY3fRqUsVUlxZw6BrIcvDlhi0WkcLUX2gbnD8.jpg?width=1080&crop=smart&auto=webp&s=07bc79296afb8f6a099d2c3b4a96c55a253bca43', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/VPy4ulrY3fRqUsVUlxZw6BrIcvDlhi0WkcLUX2gbnD8.jpg?auto=webp&s=97e45ed72fea34b6a4e38788297417ff9bc32477', 'width': 1165}, 'variants': {}}]}
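The post above asks whether ~300 GB/s of 8-channel DDR5 bandwidth is useful for 100B-class inference. A hedged back-of-envelope sketch (my own illustration, not from the post): decode is usually memory-bandwidth-bound, since each generated token must stream roughly all model weights once, so tokens/s is at most bandwidth divided by weight footprint.

```python
def est_tokens_per_sec(bandwidth_gbps: float, params_b: float,
                       bytes_per_param: float) -> float:
    """Rough upper bound on decode speed for a dense model:
    tokens/s ~ memory bandwidth / weight footprint in bytes."""
    model_gb = params_b * bytes_per_param  # weights footprint in GB
    return bandwidth_gbps / model_gb

# 8-channel DDR5 (~300 GB/s) running a 100B-parameter model
# quantized to ~4 bits (~0.5 bytes/param):
print(est_tokens_per_sec(300, 100, 0.5))  # 6.0 tokens/s upper bound
```

Real throughput lands below this bound (cache effects, compute limits), but the scaling with bandwidth and quantization level is the useful part.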
How can I contribute to the open LLM movement?
52
I have decent technical skills, and am pretty good at writing/editing. I have an M3 Pro MacBook with 36GB RAM. Is there anything I could do for 5-10 hours per week to contribute to the open LLM movement?
2023-12-20T23:44:26
https://www.reddit.com/r/LocalLLaMA/comments/18n8svi/how_can_i_contribute_to_the_open_llm_movement/
DevelopmentAcademic6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n8svi
false
null
t3_18n8svi
/r/LocalLLaMA/comments/18n8svi/how_can_i_contribute_to_the_open_llm_movement/
false
false
self
52
null
Insight into the Role of Attention in LLMs
10
Hey there, LLM enthusiasts! While we often dive deep into model distribution and hardware specifics in this sub, I recently had an 'aha' moment while experimenting that I'm eager to share.

We often hear about 'attention' in LLMs, but what does it really do? I've come to think of it as the magic that weaves words together into intricate concepts. For instance, 'cat food' isn't just two words; it's a complex idea, thanks to attention. Picture each 'thought' as a long list of questions (Are we doing Python? Are we making a game? Is there a snake?). The input prompt isn't just text; it's a puzzle the LLM deciphers.

In traditional deep learning this would be done by an encoder, with a decoder transforming these complex ideas back into simpler words/answers. Focusing solely on the decoder and using attention to combine words simplifies the process and creates a more streamlined, efficient network. This view of thought as combining different ideas into a sparse, unified underlying structure also mirrors our understanding of human cognition, where the patterns/columns in our neocortex are very intricate yet massively repeated.

Inspired by the "Attention is All You Need" paper, I ventured into building my own CPU-only C++ LLMs (using conversation datasets from HuggingFace), which let me practically test and refine a few ideas.

I'm curious about your perspectives. Why do you think attention works so well? Have you noticed similar interesting ideas in your experiments? Looking forward to learning more!
2023-12-20T23:40:52
https://www.reddit.com/r/LocalLLaMA/comments/18n8q2p/insight_into_the_role_of_attention_in_llms/
Revolutionalredstone
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n8q2p
false
null
t3_18n8q2p
/r/LocalLLaMA/comments/18n8q2p/insight_into_the_role_of_attention_in_llms/
false
false
self
10
null
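The "weaving words into concepts" intuition in the post above corresponds to scaled dot-product attention from "Attention is All You Need". A minimal single-head NumPy sketch (illustrative, not the poster's C++ code):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # query/key similarity
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # rows sum to 1
    return weights @ V                             # weighted mix of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Each output row is a convex combination of value rows, which is exactly the "combining words into a compound idea" behavior described above.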
Fireworks.ai releases function calling model and API
12
2023-12-20T23:39:06
https://blog.fireworks.ai/fireworks-raises-the-quality-bar-with-function-calling-model-and-api-release-e7f49d1e98e9
fireworks_anon
blog.fireworks.ai
1970-01-01T00:00:00
0
{}
18n8omc
false
null
t3_18n8omc
/r/LocalLLaMA/comments/18n8omc/fireworksai_releases_function_calling_model_and/
false
false
https://b.thumbs.redditm…9JeppLbmJtNY.jpg
12
{'enabled': False, 'images': [{'id': 'Kc3narJZtgXX6rBjc1Gc_KGsTgaLewAc-A0pchcGocE', 'resolutions': [{'height': 20, 'url': 'https://external-preview.redd.it/x9fYym5IG59K-1HkiNbsuYvI5RIfwA2eFL-HrFsOgBA.jpg?width=108&crop=smart&auto=webp&s=f3e5d4f34e21c3fbd075b5dd90d68eba4b029f0f', 'width': 108}, {'height': 41, 'url': 'https://external-preview.redd.it/x9fYym5IG59K-1HkiNbsuYvI5RIfwA2eFL-HrFsOgBA.jpg?width=216&crop=smart&auto=webp&s=3da49e546bd41795e5599d8b0c4027249e7578b7', 'width': 216}, {'height': 62, 'url': 'https://external-preview.redd.it/x9fYym5IG59K-1HkiNbsuYvI5RIfwA2eFL-HrFsOgBA.jpg?width=320&crop=smart&auto=webp&s=b68a764c5e16f70eae9739b9ae28d93af739ed60', 'width': 320}, {'height': 124, 'url': 'https://external-preview.redd.it/x9fYym5IG59K-1HkiNbsuYvI5RIfwA2eFL-HrFsOgBA.jpg?width=640&crop=smart&auto=webp&s=1637be006082ac7c949feae531520c1693615f71', 'width': 640}, {'height': 186, 'url': 'https://external-preview.redd.it/x9fYym5IG59K-1HkiNbsuYvI5RIfwA2eFL-HrFsOgBA.jpg?width=960&crop=smart&auto=webp&s=7c77a1561f03a2ea1cae08a5500a2d20937cc782', 'width': 960}, {'height': 209, 'url': 'https://external-preview.redd.it/x9fYym5IG59K-1HkiNbsuYvI5RIfwA2eFL-HrFsOgBA.jpg?width=1080&crop=smart&auto=webp&s=ef3a85a62e36b40522b3f214ec032654ae74a1e1', 'width': 1080}], 'source': {'height': 233, 'url': 'https://external-preview.redd.it/x9fYym5IG59K-1HkiNbsuYvI5RIfwA2eFL-HrFsOgBA.jpg?auto=webp&s=15a386de72520485823bd3d66cf7049241119db7', 'width': 1200}, 'variants': {}}]}
Microsoft's Prompt engineering techniques
44
2023-12-20T23:33:53
https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/advanced-prompt-engineering?pivots=programming-language-chat-completions
StewArtMedia_Nick
learn.microsoft.com
1970-01-01T00:00:00
0
{}
18n8kiz
false
null
t3_18n8kiz
/r/LocalLLaMA/comments/18n8kiz/microsofts_prompt_engineering_techniques/
false
false
https://b.thumbs.redditm…Ue-vr9wez6rc.jpg
44
{'enabled': False, 'images': [{'id': 'RCFh0Kid3SAqWEkALMGNW1e9Vu6ayZpftekoayP00hY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=108&crop=smart&auto=webp&s=b3881e36da92b82c6947f6ca4ff3804ca47f2aea', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=216&crop=smart&auto=webp&s=17b5b01e50a969ac9e2353bebb062cd52a99d108', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=320&crop=smart&auto=webp&s=acadaf004e8aeb6919eabdb0d93065a34f7e89df', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=640&crop=smart&auto=webp&s=883009d39175a2f03b76275ed0f7c6011d94a3a7', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=960&crop=smart&auto=webp&s=7cc62aef83f192d102fa78c83c8f4fcfa85057e3', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?width=1080&crop=smart&auto=webp&s=6ca6913f202be9a9f83b266dd459edc90adbf9dd', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/CcTKId6ti1J-bMqj-jlWVD1tyE1LbM9FagmfDfaIVmQ.jpg?auto=webp&s=41fa146938cd97da5abfeff0d092a2cc151e65fa', 'width': 1200}, 'variants': {}}]}
Career transitioning to local LLMs
3
I’m an Electrical Engineer and programmer/web dev by trade, do a bit of SaaS too, and I'm currently making an AI SaaS tool (it’s an AI wrapper, but in a niche that hasn’t been done in this space yet).

I think software careers will transition, and I want to get ahead of the curve: I want to learn how to manage, train, and test models locally and eventually cloud-based. How best would I explore this area?

I have a decent rig, and the funds to upgrade to 64–128GB of DDR5 RAM plus dual graphics cards if needed for self-hosting, then eventually funds to expand into cloud hosting. So what are my best starting points to get up to speed? Best resources, guides, anything.

I want my own private LLM because I think sharing all our data and conversations with Google/OpenAI etc. will backfire hard in a few years. Thanks :)
2023-12-20T23:14:03
https://www.reddit.com/r/LocalLLaMA/comments/18n84u5/career_transitioning_to_local_llms/
Putrid-Tough4558
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n84u5
false
null
t3_18n84u5
/r/LocalLLaMA/comments/18n84u5/career_transitioning_to_local_llms/
false
false
self
3
null
If all goes well, I will release an open-source offline note app with a llama.cpp backend
1
[removed]
2023-12-20T22:56:54
https://www.reddit.com/r/LocalLLaMA/comments/18n7qu1/if_all_goes_well_i_will_release_open_source/
adel_b
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n7qu1
false
null
t3_18n7qu1
/r/LocalLLaMA/comments/18n7qu1/if_all_goes_well_i_will_release_open_source/
false
false
self
1
{'enabled': False, 'images': [{'id': 'svvM2JRakfyljp_T_QZqWvkg6nabcv58y70nHQR3RVs', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/CjBQIXnC87bRJScAmKUndopTFoiXCE-koFmrOpuXO4I.jpg?width=108&crop=smart&auto=webp&s=b9772de63242872def8b82ff1aa09ba94ef8cba3', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/CjBQIXnC87bRJScAmKUndopTFoiXCE-koFmrOpuXO4I.jpg?width=216&crop=smart&auto=webp&s=85a2ff2cb4061426f55864c25f86432e14404e94', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/CjBQIXnC87bRJScAmKUndopTFoiXCE-koFmrOpuXO4I.jpg?width=320&crop=smart&auto=webp&s=e01919cf13e41b18c30f6a174dd1de9dc3d4569d', 'width': 320}, {'height': 364, 'url': 'https://external-preview.redd.it/CjBQIXnC87bRJScAmKUndopTFoiXCE-koFmrOpuXO4I.jpg?width=640&crop=smart&auto=webp&s=1471aa70a67d9d4cfdd715e3e36714f62276a9e3', 'width': 640}, {'height': 546, 'url': 'https://external-preview.redd.it/CjBQIXnC87bRJScAmKUndopTFoiXCE-koFmrOpuXO4I.jpg?width=960&crop=smart&auto=webp&s=7396f2ff20fe6ba716af6dbf82053aba76bbbe90', 'width': 960}, {'height': 615, 'url': 'https://external-preview.redd.it/CjBQIXnC87bRJScAmKUndopTFoiXCE-koFmrOpuXO4I.jpg?width=1080&crop=smart&auto=webp&s=9f0546d52eb51f83f55aaa3d7bac32957978d5a6', 'width': 1080}], 'source': {'height': 1492, 'url': 'https://external-preview.redd.it/CjBQIXnC87bRJScAmKUndopTFoiXCE-koFmrOpuXO4I.jpg?auto=webp&s=e0502451962059fe045b294137edd3d8dc98727d', 'width': 2620}, 'variants': {}}]}
Do LLMs still hallucinate with RAG?
1
[removed]
2023-12-20T22:48:43
https://www.reddit.com/r/LocalLLaMA/comments/18n7ke2/does_llm_still_hallucinate_with_rag/
Puzzleheaded_Acadia1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n7ke2
false
null
t3_18n7ke2
/r/LocalLLaMA/comments/18n7ke2/does_llm_still_hallucinate_with_rag/
false
false
self
1
null
Local LLM with web access
12
Hi, I was wondering if there is any guide to giving your local LLM access to search the web and use that information, the way ChatGPT uses Bing.
2023-12-20T22:47:46
https://www.reddit.com/r/LocalLLaMA/comments/18n7jmv/local_llm_with_web_access/
jbsan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n7jmv
false
null
t3_18n7jmv
/r/LocalLLaMA/comments/18n7jmv/local_llm_with_web_access/
false
false
self
12
null
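The question above boils down to a simple pipeline: search, collect snippets, stuff them into the prompt, then generate. A hypothetical sketch, where `web_search` is a stand-in for any real search backend (SearxNG, a search API, etc.) and not an actual library call:

```python
def web_search(query: str) -> list[str]:
    """Placeholder: a real implementation would call a search API
    and return text snippets from the top results."""
    return [f"snippet about {query} #1", f"snippet about {query} #2"]

def build_prompt(question: str) -> str:
    """Stitch retrieved snippets into a grounded prompt for a local LLM."""
    context = "\n".join(f"- {s}" for s in web_search(question))
    return (
        "Answer using only the web results below.\n"
        f"Web results:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_prompt("llama.cpp speculative decoding"))
```

The resulting string would then be passed to whatever local inference backend is in use; the retrieval step is independent of the model.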
Best dataset for instruct fine-tuning?
7
Since no one seems to be working on this, I figured I'd try my hand at fine-tuning phi-2, as the chat completion model is pretty useless for most use cases. As I understand it, QLoRA is still the way to go, but what about datasets? Are there any open-source ones?
2023-12-20T22:29:08
https://www.reddit.com/r/LocalLLaMA/comments/18n74kh/best_dataset_for_instruct_finetuning/
LyPreto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n74kh
false
null
t3_18n74kh
/r/LocalLLaMA/comments/18n74kh/best_dataset_for_instruct_finetuning/
false
false
self
7
null
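Independent of dataset choice, a quick sanity check on why QLoRA-style fine-tuning is cheap: for a weight matrix of shape d_out x d_in, a rank-r LoRA adapter trains only r*(d_in + d_out) parameters. A small worked example (the 2560 hidden size matches phi-2 as I understand it; rank 16 is just an illustrative choice):

```python
def lora_trainable_params(d_in: int, d_out: int, r: int) -> int:
    """LoRA replaces the update to a d_out x d_in matrix with B @ A,
    where A is (r x d_in) and B is (d_out x r) -> r*(d_in + d_out) params."""
    return r * (d_in + d_out)

full = 2560 * 2560                              # one full projection matrix
lora = lora_trainable_params(2560, 2560, 16)    # its rank-16 adapter
print(lora, f"{lora / full:.2%}")               # 81920, ~1.25% of the matrix
```

This is per adapted matrix; total trainable size depends on which projections the adapter targets, but it stays a small fraction of the model.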
How can I download weights (llama.cpp)?
1
Hello, total beginner here. I'm sorry if this is a dumb question. I'm trying to download the Mixtral model from this link ([https://huggingface.co/mistralai/Mixtral-8x7B-v0.1/tree/main](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1/tree/main)) to run it with llama.cpp. I'm really confused as to what the actual weights are on the site; what do I download? There are so many huge files and I don't understand what they all mean. For context, I've used Stable Diffusion models, where you only download a single file. Thank you so much.

P.S.: I tried googling (rule #1) but couldn't find anything useful. Sorry if this is so basic.
2023-12-20T22:18:02
https://www.reddit.com/r/LocalLLaMA/comments/18n6vfu/how_can_i_download_weights_llamacpp/
ThousandthStar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n6vfu
false
null
t3_18n6vfu
/r/LocalLLaMA/comments/18n6vfu/how_can_i_download_weights_llamacpp/
false
false
self
1
{'enabled': False, 'images': [{'id': 'CkOAaQsWIHHCboqd_f8Ion5_S4rDX-PNpvgXvzWgOMk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Lygx3uMu10olRbA3qHvPAXr9qK4MCyhuGHBCbozd95Y.jpg?width=108&crop=smart&auto=webp&s=669af07c54cdf5ccfa77ff741d052688dce2639d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Lygx3uMu10olRbA3qHvPAXr9qK4MCyhuGHBCbozd95Y.jpg?width=216&crop=smart&auto=webp&s=a63cb5a36270179a92950b6c30e91fb0a51dbec8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Lygx3uMu10olRbA3qHvPAXr9qK4MCyhuGHBCbozd95Y.jpg?width=320&crop=smart&auto=webp&s=3e053baabacb2e59786528ed1c5abfd031113fb9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Lygx3uMu10olRbA3qHvPAXr9qK4MCyhuGHBCbozd95Y.jpg?width=640&crop=smart&auto=webp&s=f49970248b08f1753c4b7daebd5a2cc87e01b6d3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Lygx3uMu10olRbA3qHvPAXr9qK4MCyhuGHBCbozd95Y.jpg?width=960&crop=smart&auto=webp&s=19e545195d19a6a3b4ef08e5709d205c94e724f0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Lygx3uMu10olRbA3qHvPAXr9qK4MCyhuGHBCbozd95Y.jpg?width=1080&crop=smart&auto=webp&s=467fdf210efd053577ac75a8fc7258cfb7ab5a63', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Lygx3uMu10olRbA3qHvPAXr9qK4MCyhuGHBCbozd95Y.jpg?auto=webp&s=26634c90056d6595416318f325e1f535cd09e68f', 'width': 1200}, 'variants': {}}]}
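To the confusion in the post above: large Hugging Face repos split the weights across several `*.safetensors` shards, with the remaining files being config, tokenizer, and index metadata; for llama.cpp specifically you ultimately want a single converted GGUF/quantized file rather than the raw shards. A toy filter showing which repo files are the actual weights (filenames below are illustrative, not an exact listing of that repo):

```python
# Hypothetical repo file listing, shaped like a sharded HF model repo:
repo_files = [
    "config.json",
    "tokenizer.model",
    "model.safetensors.index.json",
    "model-00001-of-00019.safetensors",
    "model-00002-of-00019.safetensors",
]

# The weight shards are the *.safetensors files; everything else is metadata.
weight_shards = [f for f in repo_files if f.endswith(".safetensors")]
print(weight_shards)
```

In practice one downloads all shards (e.g. via `huggingface_hub`), then runs llama.cpp's conversion/quantization tooling to produce the single file it loads, or simply grabs a pre-converted GGUF upload.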
A Map Of The AI Industry
35
I'm trying to come up with a mental map of what all the category verticals are in AI, along with the biggest companies/products in those verticals. So far, the best one I've found was from Sequoia ([https://www.sequoiacap.com/wp-content/uploads/sites/6/2023/09/generative-ai-market-map-3.png](https://www.sequoiacap.com/wp-content/uploads/sites/6/2023/09/generative-ai-market-map-3.png)).

Here's my list:

* Chatbots (ChatGPT, Bard, Grok, Poe)
* Foundational Models (GPT4, Claude, Mistral, Hugging Face, Gemini, Llama)
* Hardware (Nvidia, Intel, AMD, Google, Amazon)
* Text to Images (Stable Diffusion, Dall-E, MidJourney)
* Text to Music (Splash, Suno)
* Text to Video (Heygen, Descript, Pika)
* Avatar Generation (Remini, Lensa)
* Marketing (Copy.ai, Jasper.ai)
* Companionship Bots (Character.ai, Waifu bots)

Questions:

1. What verticals am I missing?
2. What products/companies in those verticals am I missing?
3. Have you seen a map out there that's better than Sequoia's?
2023-12-20T22:03:45
https://www.reddit.com/r/LocalLLaMA/comments/18n6jro/a_map_of_the_ai_industry/
AttorneyJackKelly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n6jro
false
null
t3_18n6jro
/r/LocalLLaMA/comments/18n6jro/a_map_of_the_ai_industry/
false
false
self
35
{'enabled': False, 'images': [{'id': 'kAINnzZ_FxB-ormwetUnK9QxtpjsbdbGGgwS7CJXi7g', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/CZlw1ksRK9MrwDiJw3vSvfOYMdIIpUYQLfDGI5hLmC8.png?width=108&crop=smart&auto=webp&s=9c354fcb787d3727013ba6975e65961e3b500b6e', 'width': 108}, {'height': 288, 'url': 'https://external-preview.redd.it/CZlw1ksRK9MrwDiJw3vSvfOYMdIIpUYQLfDGI5hLmC8.png?width=216&crop=smart&auto=webp&s=fa18c9815732d37813ee2583b7205f3a6f545f99', 'width': 216}, {'height': 426, 'url': 'https://external-preview.redd.it/CZlw1ksRK9MrwDiJw3vSvfOYMdIIpUYQLfDGI5hLmC8.png?width=320&crop=smart&auto=webp&s=f3a642d12723b12f36dba61d4c8f23e4834c08d7', 'width': 320}, {'height': 853, 'url': 'https://external-preview.redd.it/CZlw1ksRK9MrwDiJw3vSvfOYMdIIpUYQLfDGI5hLmC8.png?width=640&crop=smart&auto=webp&s=b863b6201daea142c1d73790703afff4c6e7c2fa', 'width': 640}, {'height': 1280, 'url': 'https://external-preview.redd.it/CZlw1ksRK9MrwDiJw3vSvfOYMdIIpUYQLfDGI5hLmC8.png?width=960&crop=smart&auto=webp&s=7b39d1243484c01df9c78965652ef7dd32bcdc8f', 'width': 960}, {'height': 1440, 'url': 'https://external-preview.redd.it/CZlw1ksRK9MrwDiJw3vSvfOYMdIIpUYQLfDGI5hLmC8.png?width=1080&crop=smart&auto=webp&s=57a7333d831b0d6a2f2b98d2bce492c9f5898add', 'width': 1080}], 'source': {'height': 2880, 'url': 'https://external-preview.redd.it/CZlw1ksRK9MrwDiJw3vSvfOYMdIIpUYQLfDGI5hLmC8.png?auto=webp&s=144e82875892730f12529a2ebde91ab92ef2e4ec', 'width': 2160}, 'variants': {}}]}
I've searched the sub, no luck. Has anyone solved the "is this really a GGML file?" issue in llama.cpp?
1
[removed]
2023-12-20T21:42:46
https://www.reddit.com/r/LocalLLaMA/comments/18n62dn/ive_searched_the_sub_no_luck_has_anyone_solved/
Ok-Training-7587
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n62dn
false
null
t3_18n62dn
/r/LocalLLaMA/comments/18n62dn/ive_searched_the_sub_no_luck_has_anyone_solved/
false
false
self
1
null
Discussion - Frameworks vs Proxies for AI
1
[removed]
2023-12-20T20:37:32
https://www.reddit.com/r/LocalLLaMA/comments/18n4khb/discussion_frameworks_vs_proxies_for_ai/
InevitableSky2801
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n4khb
false
null
t3_18n4khb
/r/LocalLLaMA/comments/18n4khb/discussion_frameworks_vs_proxies_for_ai/
false
false
self
1
{'enabled': False, 'images': [{'id': '3NcYUfsB2YXL8WNCkErZHN3PdctvlSh7hoiHfpWMWK0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/RGUHZ__vdfaIkWEWC0GuSQoC02Ru6xtVLr9dxJ-FP3Q.jpg?width=108&crop=smart&auto=webp&s=94be6374a34b5fc8f0877e2ba4e9f4720af149b5', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/RGUHZ__vdfaIkWEWC0GuSQoC02Ru6xtVLr9dxJ-FP3Q.jpg?width=216&crop=smart&auto=webp&s=07ef6fc3b19edbaff3c7e3f5bb192b72197eacce', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/RGUHZ__vdfaIkWEWC0GuSQoC02Ru6xtVLr9dxJ-FP3Q.jpg?width=320&crop=smart&auto=webp&s=0b67e223ff65c4992138a50fcaa006a03782bc0e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/RGUHZ__vdfaIkWEWC0GuSQoC02Ru6xtVLr9dxJ-FP3Q.jpg?width=640&crop=smart&auto=webp&s=b7543ef01aefa5962d11ed4e80a569592bbce64e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/RGUHZ__vdfaIkWEWC0GuSQoC02Ru6xtVLr9dxJ-FP3Q.jpg?width=960&crop=smart&auto=webp&s=6c3710a7ce06d4fed075b3becb1f8104eae8b347', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/RGUHZ__vdfaIkWEWC0GuSQoC02Ru6xtVLr9dxJ-FP3Q.jpg?width=1080&crop=smart&auto=webp&s=0553b24e8c10d595d2526fee22787a6c62d66e7d', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/RGUHZ__vdfaIkWEWC0GuSQoC02Ru6xtVLr9dxJ-FP3Q.jpg?auto=webp&s=bf7bfe5c263b06bb9d1af942529824f6747a6eb4', 'width': 1200}, 'variants': {}}]}
Training a model on a set of workplace standards, and checking future documents to make sure those standards are met
4
Hello! I've been searching around in circles for hours now and can't find an answer to this, so hopefully someone can point me in the right direction.

I recently got privateGPT working, feeding it PDFs and being able to pull out summaries and analyses that look great. What I'm trying to do now is go a step further and train a model (or a LoRA, or something?) on a set of standards, such as NFPA, so that I can feed a work report to the AI model and see if there are potential violations to double check. At first I thought I would just upload the standards to the vector database along with the work report, but I wasn't sure if that's the best way to handle a master document and then the documents we check against it.

Not sure if that makes the most sense, but hopefully someone understands what I'm looking for. Thank you!
2023-12-20T20:24:45
https://www.reddit.com/r/LocalLLaMA/comments/18n49sv/training_model_on_a_set_of_workplace_stabdards/
vyralsurfer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n49sv
false
null
t3_18n49sv
/r/LocalLLaMA/comments/18n49sv/training_model_on_a_set_of_workplace_stabdards/
false
false
self
4
null
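For the use case above, retrieval over the standards is usually a better fit than training the standards into the weights: embed each clause, then flag report sentences whose nearest clause is similar enough to warrant a manual check. A toy sketch of that retrieval step using bag-of-words cosine similarity (a real pipeline would use sentence embeddings, and the clause IDs/texts below are made up for illustration, not actual NFPA wording):

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def vec(text: str) -> Counter:
    return Counter(text.lower().split())

clauses = {  # illustrative stand-ins for standards clauses
    "CLAUSE-A": "energized work requires a permit and shock protection",
    "CLAUSE-B": "maintain electrical equipment per manufacturer instructions",
}
report = "crew performed energized work without a permit"

# Nearest clause = the candidate standard to check the report against.
best = max(clauses, key=lambda k: cosine(vec(clauses[k]), vec(report)))
print(best)  # the permit clause matches best
```

The model then only has to answer "does this report sentence violate this retrieved clause?", which is a much narrower (and more checkable) question than free-form compliance review.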
HF AutoTokenizer and EOS/BOS tokens for fine tuning
1
[removed]
2023-12-20T20:19:40
https://www.reddit.com/r/LocalLLaMA/comments/18n45lz/hf_autotokenizer_and_eosbos_tokens_for_fine_tuning/
cdreetz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n45lz
false
null
t3_18n45lz
/r/LocalLLaMA/comments/18n45lz/hf_autotokenizer_and_eosbos_tokens_for_fine_tuning/
false
false
self
1
null
HF AutoTokenizer and EOS/BOS tokens for fine tuning
1
[removed]
2023-12-20T20:18:13
https://www.reddit.com/r/LocalLLaMA/comments/18n44ed/hf_autotokenizer_and_eosbos_tokens_for_fine_tuning/
cdreetz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n44ed
false
null
t3_18n44ed
/r/LocalLLaMA/comments/18n44ed/hf_autotokenizer_and_eosbos_tokens_for_fine_tuning/
false
false
self
1
null
"Nethena-MLewd-Xwin" assistance
2
I've been using Nethena-MLewd-Xwin for a while and it's the best model (in my opinion) that I can run. However, I recently went through the process of updating Oobabooga, which I use to run it. After the update I noticed that I'm unable to successfully deploy it; I get [this error](https://imgur.com/a/lVUSIea). I unfortunately have no idea what it means, but I'm sure the fact that it's a relatively "old" model, and an unorthodox Frankenmerge on top of that, has something to do with it.

I've always run the model the same way: llama.cpp, 5_K_M, 30 GPU layers, 24 threads/threads-batch (though I've never been sure if the thread count is right; is it?), and now suddenly it doesn't take. I'm trying to learn this stuff, so give it to me straight. Any idea what the solution could be? Is it simpler than I'm making it out to be? Is everything I'm doing completely wrong?
2023-12-20T20:17:47
https://www.reddit.com/r/LocalLLaMA/comments/18n441k/nethenamlewdxwin_assistance/
IZA_does_the_art
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n441k
false
null
t3_18n441k
/r/LocalLLaMA/comments/18n441k/nethenamlewdxwin_assistance/
false
false
self
2
{'enabled': False, 'images': [{'id': 'LxOcetGBhgNg_yEk2zaVP78_K3fl7fD3FPUkyRQ0mpE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/zdwJJnhHeaswLOPlOd12Dusc0UEX1GoYhPFtxnuGRSw.jpg?width=108&crop=smart&auto=webp&s=a6a1d3c40b43e14f1b624b0600b71501cdc8f6ce', 'width': 108}, {'height': 163, 'url': 'https://external-preview.redd.it/zdwJJnhHeaswLOPlOd12Dusc0UEX1GoYhPFtxnuGRSw.jpg?width=216&crop=smart&auto=webp&s=fb6b149485c0f13af2043127859c0a006e8dbe03', 'width': 216}, {'height': 241, 'url': 'https://external-preview.redd.it/zdwJJnhHeaswLOPlOd12Dusc0UEX1GoYhPFtxnuGRSw.jpg?width=320&crop=smart&auto=webp&s=63bc2212ef80ad56b8a35cdd60d43a8d5602464c', 'width': 320}, {'height': 483, 'url': 'https://external-preview.redd.it/zdwJJnhHeaswLOPlOd12Dusc0UEX1GoYhPFtxnuGRSw.jpg?width=640&crop=smart&auto=webp&s=12324c712a9d22e7a7ad5b26a248b23918911bd8', 'width': 640}, {'height': 724, 'url': 'https://external-preview.redd.it/zdwJJnhHeaswLOPlOd12Dusc0UEX1GoYhPFtxnuGRSw.jpg?width=960&crop=smart&auto=webp&s=31d9bdf70e93dadd2b380ca5a65bbbba80073820', 'width': 960}, {'height': 815, 'url': 'https://external-preview.redd.it/zdwJJnhHeaswLOPlOd12Dusc0UEX1GoYhPFtxnuGRSw.jpg?width=1080&crop=smart&auto=webp&s=81f38e3c98c6b39aef93a5e83ac50871e5fa1463', 'width': 1080}], 'source': {'height': 945, 'url': 'https://external-preview.redd.it/zdwJJnhHeaswLOPlOd12Dusc0UEX1GoYhPFtxnuGRSw.jpg?auto=webp&s=7b41889e94cdd68866c9bcf0087187923abdd3d2', 'width': 1252}, 'variants': {}}]}
Raw Text Training for Mistral 7B?
1
[removed]
2023-12-20T20:10:18
https://www.reddit.com/r/LocalLLaMA/comments/18n3xrk/raw_text_training_for_mistral_7b/
Monochrome21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n3xrk
false
null
t3_18n3xrk
/r/LocalLLaMA/comments/18n3xrk/raw_text_training_for_mistral_7b/
false
false
self
1
{'enabled': False, 'images': [{'id': '_EzSAy5ohFplZxSUfW8YjIOR3mc4jpEFy2jw9ijDBxw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/EV42sTlsXFDu1i4kzx8iQoMXPApyTNJVyi9M-52fm6s.jpg?width=108&crop=smart&auto=webp&s=55b54523c4eee9f3b3be0c7cac62921c9332c3d8', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/EV42sTlsXFDu1i4kzx8iQoMXPApyTNJVyi9M-52fm6s.jpg?width=216&crop=smart&auto=webp&s=be343b37edb297c318acb56ff82e61cdc4be75a1', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/EV42sTlsXFDu1i4kzx8iQoMXPApyTNJVyi9M-52fm6s.jpg?width=320&crop=smart&auto=webp&s=6ff000d7b6f61b401b90e2fa9cdd2f0c0712d784', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/EV42sTlsXFDu1i4kzx8iQoMXPApyTNJVyi9M-52fm6s.jpg?auto=webp&s=be7a074a08576202eb18b54d22e3ceaad5aeeff8', 'width': 480}, 'variants': {}}]}
Generalization in Deep Reinforcement Learning
3
Adversarial Attacks, Robustness and Generalization in Deep Reinforcement Learning [https://twitter.com/UCLSTEaPP/status/1737491297076675045](https://twitter.com/UCLSTEaPP/status/1737491297076675045)
2023-12-20T19:43:17
https://www.reddit.com/r/LocalLLaMA/comments/18n3b03/generalization_in_deep_reinforcement_learning/
ml_dnn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n3b03
false
null
t3_18n3b03
/r/LocalLLaMA/comments/18n3b03/generalization_in_deep_reinforcement_learning/
false
false
self
3
{'enabled': False, 'images': [{'id': '9OswFqSSj-vIDOj1J7fiwGF348k_AmqLeiuxq2t0DSM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/iuvIektsEPQgnufWubN4xGCwnoJQPwjKoQlXJNG8x5Q.jpg?width=108&crop=smart&auto=webp&s=cbf04c8902f4dddd6d8ef618b7b3bdab0846e254', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/iuvIektsEPQgnufWubN4xGCwnoJQPwjKoQlXJNG8x5Q.jpg?auto=webp&s=aee4e88f68e5364f670c23d2aa64443fd7057ee3', 'width': 140}, 'variants': {}}]}
Karpathy on LLM evals
1,133
What do you think?
2023-12-20T19:42:58
https://i.redd.it/8g0zoors6i7c1.jpeg
deykus
i.redd.it
1970-01-01T00:00:00
0
{}
18n3ar3
false
null
t3_18n3ar3
/r/LocalLLaMA/comments/18n3ar3/karpathy_on_llm_evals/
false
false
https://b.thumbs.redditm…OlGc4oP_h_CM.jpg
1,133
{'enabled': True, 'images': [{'id': 'x5j-IstjxADJPvbprVspPeQ16_BsUx8Wou5sC6cRga0', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/8g0zoors6i7c1.jpeg?width=108&crop=smart&auto=webp&s=3560b860f27047217b68aec52bf97ebf1fcefcaa', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/8g0zoors6i7c1.jpeg?width=216&crop=smart&auto=webp&s=c611b9139231427a30449590bfff028767947445', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/8g0zoors6i7c1.jpeg?width=320&crop=smart&auto=webp&s=5f0def0438b6b928e3fa6d18a218feffcdff5acb', 'width': 320}, {'height': 332, 'url': 'https://preview.redd.it/8g0zoors6i7c1.jpeg?width=640&crop=smart&auto=webp&s=eb75878edc55d401e63db064e768f2a194df6be9', 'width': 640}, {'height': 498, 'url': 'https://preview.redd.it/8g0zoors6i7c1.jpeg?width=960&crop=smart&auto=webp&s=c938650b1bacffdd748c25a52efa727cc37f18f3', 'width': 960}, {'height': 560, 'url': 'https://preview.redd.it/8g0zoors6i7c1.jpeg?width=1080&crop=smart&auto=webp&s=855b91d32305d4bcc64361493b158d9863e3dd65', 'width': 1080}], 'source': {'height': 607, 'url': 'https://preview.redd.it/8g0zoors6i7c1.jpeg?auto=webp&s=52bec95af9c091be5b87d1d0075f1051e5700cb2', 'width': 1170}, 'variants': {}}]}
Rtx 4090 vs Dual Rtx 3090 (nvlink)
6
Hi there. I'm new here, so I apologize if I break any rules. I have a pretty involved question, so I'll try to keep this brief.

I'm taking deep learning courses for a CS master's at my college. Our professor gave us the option of doing the coursework through the cloud/GPU-rental service supplied by the university, or running everything locally at home. I recently sold my PC to build a new one, and I already had some money saved, so I'm going down the road of running everything locally; since I have the money, I figured I'd build a new PC specifically for my courses.

I've come down to deciding between two RTX 3090s with NVLink or a single RTX 4090, and I was wondering if you guys had any advice as to which I should go with. I know the new 4000 series doesn't support NVLink, which is why I'm considering the two 3090s. I could also buy a 4090 and a smaller GPU, say a 16GB 4060 Ti, but I'm not sure how well that would work given the lack of NVLink and slower memory bandwidth.

Any suggestions would be great. I have $2,500 for this build. Thanks.
2023-12-20T19:39:28
https://www.reddit.com/r/LocalLLaMA/comments/18n37sb/rtx_4090_vs_dual_rtx_3090_nvlink/
Any-Cobbler6161
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n37sb
false
null
t3_18n37sb
/r/LocalLLaMA/comments/18n37sb/rtx_4090_vs_dual_rtx_3090_nvlink/
false
false
self
6
null
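For the 4090-vs-dual-3090 question above, the first-order consideration is total VRAM versus model footprint. A crude fit check (my own sketch; the 1.2x overhead factor for KV cache and activations is an assumption, not a measured number):

```python
def model_fits(params_b: float, bytes_per_param: float,
               vram_gb: float, overhead: float = 1.2) -> bool:
    """Crude check: weight footprint times an overhead factor
    (KV cache, activations) must fit in available VRAM."""
    return params_b * bytes_per_param * overhead <= vram_gb

# A 70B model at ~4-bit quantization (~0.5 bytes/param) needs ~42 GB:
print(model_fits(70, 0.5, 24))   # single 24GB 4090: False
print(model_fits(70, 0.5, 48))   # two 24GB 3090s:   True
```

So the dual-3090 box opens up a model class the single 4090 cannot hold at all, even though the 4090 is faster per GB it can fit; NVLink matters less than the raw capacity for inference workloads.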
NVIDIA A100 GPUs: A Cost and Availability Analysis
1
2023-12-20T19:18:21
https://www.shadeform.ai/blog/nvidia-a100?utm_source=reddit&utm_medium=social&utm_campaign=a100_blog
edsgoode
shadeform.ai
1970-01-01T00:00:00
0
{}
18n2qbz
false
null
t3_18n2qbz
/r/LocalLLaMA/comments/18n2qbz/nvidia_a100_gpus_a_cost_and_availability_analysis/
false
false
https://a.thumbs.redditm…bVetum0enKU8.jpg
1
{'enabled': False, 'images': [{'id': 'deoYlXtsCFUI1ewu13PgFnHi8uSLD6Jiru6qjWzfwgM', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/kKQtXlYsKwOhF6JVuyy09cXQCUuqPEZHtxsW7gO2Q_c.jpg?width=108&crop=smart&auto=webp&s=b062f7772e00af822d7dd465bda7688cf8bce31e', 'width': 108}, {'height': 127, 'url': 'https://external-preview.redd.it/kKQtXlYsKwOhF6JVuyy09cXQCUuqPEZHtxsW7gO2Q_c.jpg?width=216&crop=smart&auto=webp&s=69f46ea2793136b1964f4ac7fabea156b25e4bee', 'width': 216}, {'height': 188, 'url': 'https://external-preview.redd.it/kKQtXlYsKwOhF6JVuyy09cXQCUuqPEZHtxsW7gO2Q_c.jpg?width=320&crop=smart&auto=webp&s=008135f436f418846a944733be30febb9ade3881', 'width': 320}, {'height': 377, 'url': 'https://external-preview.redd.it/kKQtXlYsKwOhF6JVuyy09cXQCUuqPEZHtxsW7gO2Q_c.jpg?width=640&crop=smart&auto=webp&s=b9f789c2ac809848db38f89a0bf7fdbf463bfd46', 'width': 640}, {'height': 565, 'url': 'https://external-preview.redd.it/kKQtXlYsKwOhF6JVuyy09cXQCUuqPEZHtxsW7gO2Q_c.jpg?width=960&crop=smart&auto=webp&s=95424cd0d89e0459bbaa8f29bebfa0d12a0968ec', 'width': 960}, {'height': 636, 'url': 'https://external-preview.redd.it/kKQtXlYsKwOhF6JVuyy09cXQCUuqPEZHtxsW7gO2Q_c.jpg?width=1080&crop=smart&auto=webp&s=93968989963619b36a6988c8856532509a12d659', 'width': 1080}], 'source': {'height': 701, 'url': 'https://external-preview.redd.it/kKQtXlYsKwOhF6JVuyy09cXQCUuqPEZHtxsW7gO2Q_c.jpg?auto=webp&s=fbc0da6f6dc60e9eea25895e8dd81449614f568c', 'width': 1190}, 'variants': {}}]}
Seeking Solutions to Run Dolphin-Mixtral 8x7b in a Chat UI – Paid Options Welcome
3
Hello r/LocalLLaMA, I’ve been on a quest to find a way to use the Dolphin-Mixtral 8x7b model in a continuous chat-style interface. I understand that platforms like Replicate allow interaction with the model, and Ollama offers a local download, but neither provides the seamless chatting experience I’m looking for. Running the model locally isn’t viable for me since it requires substantial VRAM (~100GB), and Replicate’s one-call-at-a-time method isn’t what I need. I’m looking for a solution that offers a ChatGPT-like interface to interact with Dolphin-Mixtral 8x7b. I’m open to paid services and willing to consider rates around $3-$5 per hour. It’s crucial for me that the platform supports the uncensored version of the model for unrestricted interactions. Has anyone in the community found a service or developed a method to achieve this? Your insights and recommendations would be incredibly helpful. Thank you!
2023-12-20T19:18:06
https://www.reddit.com/r/LocalLLaMA/comments/18n2q51/seeking_solutions_to_run_dolphinmixtral_8x7b_in_a/
qubitser
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n2q51
false
null
t3_18n2q51
/r/LocalLLaMA/comments/18n2q51/seeking_solutions_to_run_dolphinmixtral_8x7b_in_a/
false
false
self
3
null
NVIDIA A100 GPUs: A Cost and Availability Analysis
1
2023-12-20T19:17:08
https://www.shadeform.ai/blog/nvidia-a100?utm_source=reddit&utm_medium=social&utm_campaign=a100_blog
edsgoode
shadeform.ai
1970-01-01T00:00:00
0
{}
18n2pc4
false
null
t3_18n2pc4
/r/LocalLLaMA/comments/18n2pc4/nvidia_a100_gpus_a_cost_and_availability_analysis/
false
false
https://a.thumbs.redditm…bVetum0enKU8.jpg
1
{'enabled': False, 'images': [{'id': 'deoYlXtsCFUI1ewu13PgFnHi8uSLD6Jiru6qjWzfwgM', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/kKQtXlYsKwOhF6JVuyy09cXQCUuqPEZHtxsW7gO2Q_c.jpg?width=108&crop=smart&auto=webp&s=b062f7772e00af822d7dd465bda7688cf8bce31e', 'width': 108}, {'height': 127, 'url': 'https://external-preview.redd.it/kKQtXlYsKwOhF6JVuyy09cXQCUuqPEZHtxsW7gO2Q_c.jpg?width=216&crop=smart&auto=webp&s=69f46ea2793136b1964f4ac7fabea156b25e4bee', 'width': 216}, {'height': 188, 'url': 'https://external-preview.redd.it/kKQtXlYsKwOhF6JVuyy09cXQCUuqPEZHtxsW7gO2Q_c.jpg?width=320&crop=smart&auto=webp&s=008135f436f418846a944733be30febb9ade3881', 'width': 320}, {'height': 377, 'url': 'https://external-preview.redd.it/kKQtXlYsKwOhF6JVuyy09cXQCUuqPEZHtxsW7gO2Q_c.jpg?width=640&crop=smart&auto=webp&s=b9f789c2ac809848db38f89a0bf7fdbf463bfd46', 'width': 640}, {'height': 565, 'url': 'https://external-preview.redd.it/kKQtXlYsKwOhF6JVuyy09cXQCUuqPEZHtxsW7gO2Q_c.jpg?width=960&crop=smart&auto=webp&s=95424cd0d89e0459bbaa8f29bebfa0d12a0968ec', 'width': 960}, {'height': 636, 'url': 'https://external-preview.redd.it/kKQtXlYsKwOhF6JVuyy09cXQCUuqPEZHtxsW7gO2Q_c.jpg?width=1080&crop=smart&auto=webp&s=93968989963619b36a6988c8856532509a12d659', 'width': 1080}], 'source': {'height': 701, 'url': 'https://external-preview.redd.it/kKQtXlYsKwOhF6JVuyy09cXQCUuqPEZHtxsW7gO2Q_c.jpg?auto=webp&s=fbc0da6f6dc60e9eea25895e8dd81449614f568c', 'width': 1190}, 'variants': {}}]}
llama.cpp updated: up to 5.8x higher GPU t/s performance increase for Mixtral (GGUF)
77
2023-12-20T19:04:10
https://github.com/ggerganov/llama.cpp/pull/4538
Trojaner
github.com
1970-01-01T00:00:00
0
{}
18n2e52
false
null
t3_18n2e52
/r/LocalLLaMA/comments/18n2e52/llamacpp_updated_up_to_58x_higher_gpu_ts/
false
false
https://b.thumbs.redditm…1HhKbqyZAJ_Y.jpg
77
{'enabled': False, 'images': [{'id': 'jwNNpdxj7uCAwQoyprqdnDrQinklD6hpYdPOB5LBxMs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EQweq-MJagQ3TRwcpz5LpxIFHysJkiGLpvP4EZmVVMw.jpg?width=108&crop=smart&auto=webp&s=d96e4f350a4274d4b15580b0123bd5505cbe2adf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EQweq-MJagQ3TRwcpz5LpxIFHysJkiGLpvP4EZmVVMw.jpg?width=216&crop=smart&auto=webp&s=30556cdcf8ad6bafaf5f935ba15cbe329b424106', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EQweq-MJagQ3TRwcpz5LpxIFHysJkiGLpvP4EZmVVMw.jpg?width=320&crop=smart&auto=webp&s=0f0cfda87d7dd140eb8bee9bfa68c13fe047abc4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EQweq-MJagQ3TRwcpz5LpxIFHysJkiGLpvP4EZmVVMw.jpg?width=640&crop=smart&auto=webp&s=c1441cda80828def9fc2159562124fd4c91a0198', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EQweq-MJagQ3TRwcpz5LpxIFHysJkiGLpvP4EZmVVMw.jpg?width=960&crop=smart&auto=webp&s=32675a05cdfd5d3ea24ad035e620300d4c02056c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EQweq-MJagQ3TRwcpz5LpxIFHysJkiGLpvP4EZmVVMw.jpg?width=1080&crop=smart&auto=webp&s=ff4b4ce573c423beeb1976efcec2b9ec87e03253', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EQweq-MJagQ3TRwcpz5LpxIFHysJkiGLpvP4EZmVVMw.jpg?auto=webp&s=aa63cd718d13842402f12cb687023682d97251ca', 'width': 1200}, 'variants': {}}]}
I will do the fine-tuning for you, or here's my DIY guide
273
**Struggling with AI model fine-tuning? I can help.** >**Disclaimer:** *I'm an AI enthusiast and practitioner and very much a beginner still, not a trained expert. My learning comes from experimentation and community learning, especially from this subreddit. You might recognize me from my previous posts here. The post is deliberately opinionated to keep things simple. So take my post with a grain of salt.* Hello Everyone, I'm Adi. About four months ago, I quit my job to focus solely on AI. Starting with zero, I've now ventured into the world of AI freelancing, with a specific interest in building LLMs for niche applications. To really dive into this, I've invested in two GPUs, and I'm eager to put them to productive use. **If you're looking for help with fine-tuning, I'm here to offer my services. I can build fine-tuned models for you. This helps me utilize my GPUs effectively and supports my growth in the AI freelance space.** **However, in the spirit of this subreddit, if you'd prefer to tackle this challenge on your own, here's an opinionated guide based on what I've learned. All of it is based on open source.** # Beginner Level: There are mainly three steps. 1. **Data Collection and Preparation:** \- The first step is preparing the data you want to train your LLM with. \- Use OpenAI's Chat JSONL format: [https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset](https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset). \- Why this specific data format? It simplifies data conversion between different models for training. Most of the OSS models now offer within their tokenizers a method called \`tokenizer.apply\_chat\_template\`: [https://huggingface.co/docs/transformers/main/en/chat\_templating](https://huggingface.co/docs/transformers/main/en/chat_templating). This converts the above chat JSONL format to the one appropriate for their model. 
So once you have this "mezzanine" chat format you can convert to any of the required formats with the inbuilt methods. Saves so much effort! \- Ensure your tokenised data length fits within the model's context length limits (or the context length of your desired use case). **2. Framework Selection for finetuning:** \- For beginners with limited computing resources, I recommend: * [unsloth.ai](https://unsloth.ai/), or * [OpenAccess-AI-Collective/axolotl on GitHub](https://github.com/OpenAccess-AI-Collective/axolotl) \- These are beginner-friendly and don't require extensive hardware or too much knowledge to set up and get running. \- Start with default settings and adjust the hyperparameters as you learn. \- I personally like unsloth because of the low memory requirements. \- axolotl is good if you want a dockerized setup and access to a lot of models (Mixtral and such). **3. Merge and Test the Model:** \- After training, merge the adapter with your main model. Test it using: * [llama.cpp on GitHub](https://github.com/ggerganov/llama.cpp) (for the GPU poor, or if you want cross compatibility across devices) * [vllm on GitHub](https://github.com/vllm-project/vllm) (for more robust GPU setups) # Advanced Level: If you are just doing a one-off, the above is just fine. If you are serious and want to do this multiple times, here are some more recommendations. Mainly, you would want to version and iterate over your trained models. Think of what you do for code with GitHub; you are doing the same with your model. 1. **Enhanced Data Management:** Along with the data basics earlier, upload your dataset to Hugging Face for versioning, sharing, and easier iteration. [https://huggingface.co/docs/datasets/upload\_dataset](https://huggingface.co/docs/datasets/upload_dataset) 2. **Training Monitoring:** Add wandb to your workflow for detailed insights into your training process. It helps in fine-tuning and understanding your model's performance. 
Then you can start tinkering with the hyperparameters and learn at which epoch to stop. [https://wandb.ai/home](https://wandb.ai/home). Easy to attach to your existing runs. 3. **Model Management:** Post-training, upload your models to Hugging Face. This gives you managed inference endpoints, version control, and sharing capabilities. Especially important if you want to iterate and later resume from checkpoints. [https://huggingface.co/docs/transformers/model\_sharing](https://huggingface.co/docs/transformers/model_sharing) This guide is based on my experiences and experiments. I am still a beginner and learning. There's always more to explore and optimize, but this should give you a solid start. If you need assistance with fine-tuning your models or want to put my GPUs and skills to use, feel free to contact me. I'm available for freelance work. Cheers, Adi [https://www.linkedin.com/in/adithyan-ai/](https://www.linkedin.com/in/adithyan-ai/)
2023-12-20T19:01:36
https://www.reddit.com/r/LocalLLaMA/comments/18n2bwu/i_will_do_the_finetuning_for_you_or_heres_my_diy/
phoneixAdi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n2bwu
false
null
t3_18n2bwu
/r/LocalLLaMA/comments/18n2bwu/i_will_do_the_finetuning_for_you_or_heres_my_diy/
false
false
self
273
null
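The chat JSONL format the guide above recommends as a "mezzanine" representation can be sketched in a few lines of Python. This is a minimal illustration, not the guide author's code: the helper names and file contents are made up, but the record shape (one JSON object per line with a `messages` list of role/content turns) follows OpenAI's fine-tuning dataset format.

```python
import json

# One training example in OpenAI's chat JSONL format: a single JSON object
# per line, each holding a "messages" list of role/content turns.
def make_record(system, user, assistant):
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
            {"role": "assistant", "content": assistant},
        ]
    }

def validate_record(record):
    # Minimal sanity checks: roles are known and content is a string.
    allowed = {"system", "user", "assistant"}
    return all(
        msg.get("role") in allowed and isinstance(msg.get("content"), str)
        for msg in record.get("messages", [])
    )

record = make_record(
    "You are a helpful SQL assistant.",
    "Write a query counting rows in the users table.",
    "SELECT COUNT(*) FROM users;",
)
line = json.dumps(record)  # one line of the .jsonl training file
assert validate_record(json.loads(line))
```

Once data sits in this shape, each model's own `tokenizer.apply_chat_template` (from Hugging Face `transformers`) can render it into that model's native prompt format, which is exactly the conversion step the guide describes.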
LLM on M1 MacBook Air (8gb Ram)
1
I am new to running large language models locally, so can someone help me a bit here? What's the best way to run a model on an M1 Mac? I tried running Mistral using Ollama but it would take ages for an answer to be generated. Thanks
2023-12-20T18:29:17
https://www.reddit.com/r/LocalLLaMA/comments/18n1juo/llm_on_m1_macbook_air_8gb_ram/
Ndh4k4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n1juo
false
null
t3_18n1juo
/r/LocalLLaMA/comments/18n1juo/llm_on_m1_macbook_air_8gb_ram/
false
false
self
1
null
10 commandments for AI
1
I was testing out Mixtral Instruct locally and asked it to create 10 commandments for AI. The response was absolutely hilarious and at the same time thought-provoking... What do you think?
2023-12-20T18:27:17
https://i.redd.it/u0xmu6cath7c1.jpeg
AstrionX
i.redd.it
1970-01-01T00:00:00
0
{}
18n1i3u
false
null
t3_18n1i3u
/r/LocalLLaMA/comments/18n1i3u/10_commandments_for_ai/
false
false
https://b.thumbs.redditm…9XNXIKB7IGTI.jpg
1
{'enabled': True, 'images': [{'id': 'hDiX8oTugqdYviGtQz3kFccoG8q1yMjqfeieg2uys9k', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/u0xmu6cath7c1.jpeg?width=108&crop=smart&auto=webp&s=b2248f8bb5015fc6efe85c6a222f4521f2220a47', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/u0xmu6cath7c1.jpeg?width=216&crop=smart&auto=webp&s=de0d4d96a0de8d2773e061b7b9deb70b2e185655', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/u0xmu6cath7c1.jpeg?width=320&crop=smart&auto=webp&s=806909ec8a30f5c88795e58655c294c4c4218046', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/u0xmu6cath7c1.jpeg?width=640&crop=smart&auto=webp&s=fefc0be44051a861b23d292467dd9883b249fbed', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/u0xmu6cath7c1.jpeg?width=960&crop=smart&auto=webp&s=30b76275f9daab8306b5cd1d5f1d1ba6bb94db49', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/u0xmu6cath7c1.jpeg?width=1080&crop=smart&auto=webp&s=94b9c86a58333332835aa9f89333ce1d4999cf74', 'width': 1080}], 'source': {'height': 4096, 'url': 'https://preview.redd.it/u0xmu6cath7c1.jpeg?auto=webp&s=d6dc6687c97eafdd33e7c6335408b1c1015ce489', 'width': 3072}, 'variants': {}}]}
GPU recommendation for automatic speech recognition
1
Hi! I've read that a lot of redditors have shared their local setups to run edge inference, and they're great setups. I admire all of your efforts and interests. I'm new to this and have to work with a fixed budget, so I'm looking for a GPU recommendation: is the AMD Radeon RX 7900 XT a good card, or should I consider two RTX 3060 12GB cards? At the end of development, I need to have a box that does ASR and takes clinical notes. My current setup: 1. AMD Ryzen 9 5900X 2. GIGABYTE X570S AORUS Master 3. 128GB of Corsair VENGEANCE LPX DDR4
2023-12-20T18:24:21
https://www.reddit.com/r/LocalLLaMA/comments/18n1fio/gpu_recommendation_for_automatic_speech/
dummy_who_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n1fio
false
null
t3_18n1fio
/r/LocalLLaMA/comments/18n1fio/gpu_recommendation_for_automatic_speech/
false
false
self
1
null
Seeking Advice: Building a High-Performance PC for AI Inference and Gaming
5
Hello, fellow PC and NLP enthusiasts! I'm a Privacy Research Scientist specializing in AI, working at a deep tech company. Currently, I have access to multiple A100 GPUs at work, but I'm looking to build my own powerhouse at home. My primary goal is to set up a machine capable of efficient local inference for various language models. On the side, I'm also an avid gamer and would love to have a system that can handle top-tier gaming. I've saved enough to consider high-end components like the i9 processor and an RTX 4090 GPU. However, I'm contemplating whether to stick with the RTX 4090 by adding another one or upgrade to a single RTX 6000 Ada for better inference performance. I'm aware that the RTX 6000 Ada might not be ideal for gaming, which this build also needs, and I also lack experience in configuring SLI setups and other related aspects like power consumption. So, I'm reaching out to this knowledgeable community for advice: 1. Between the RTX 4090 and RTX 6000 Ada, which would be the better choice for a balance of quantized model inference and gaming performance? 2. If I opt for dual GPUs, what are the key considerations for setting up an SLI configuration? Are there any specific challenges or recommendations you would share? Any insights, experiences, or suggestions would be greatly appreciated. I'm eager to learn from your expertise and make an informed decision for this exciting build. Thanks in advance!
2023-12-20T18:16:25
https://www.reddit.com/r/LocalLLaMA/comments/18n18ff/seeking_advice_building_a_highperformance_pc_for/
susmitds
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n18ff
false
null
t3_18n18ff
/r/LocalLLaMA/comments/18n18ff/seeking_advice_building_a_highperformance_pc_for/
false
false
self
5
null
Prompts for categorizing blocks of text?
6
Does anyone know any good prompts for sorting blocks of unstructured text into defined categories? For example, I'm using Python to go through all of my saved Reddit posts, detect which ones are related to machine learning/AI, and tag them so that I can store them in a database for RAG purposes. Does anyone have any experience with prompts like: ```python """ Detect if this block of text fits any of the following categories and then tag them appropriately: machine_learning: Considers the topic of machine learning, especially optimization, deployment, and fun facts. minecraft: Is about the popular videogame Minecraft, especially if it mentions anything to do with redstone. python: Posts that talk about learning the Python programming language. other: Considers none of the previous topics, and shelves them to be sorted later. Provide your reasoning at the end. """ ``` I also want to be able to use this type of prompt for other uses as well, like sorting written characters into DnD alignments, detecting which Python scripts in my testbenches codebases would be good for certain projects, etc.
2023-12-20T18:02:23
https://www.reddit.com/r/LocalLLaMA/comments/18n0vr1/prompts_for_categorizing_blocks_of_text/
ishtarcrab
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n0vr1
false
null
t3_18n0vr1
/r/LocalLLaMA/comments/18n0vr1/prompts_for_categorizing_blocks_of_text/
false
false
self
6
null
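A common way to harden the tagging prompt in the post above is to constrain whatever the model replies to a fixed category set before storing it. Here is a minimal model-free sketch: the category names follow the post, but the prompt wording, helper names, and parsing strategy are illustrative assumptions, and the actual LLM call is left out.

```python
# Allowed tags, taken from the post's prompt.
CATEGORIES = {"machine_learning", "minecraft", "python", "other"}

PROMPT_TEMPLATE = (
    "Detect if this block of text fits any of the following categories "
    "and then tag it appropriately: {cats}.\n"
    "Reply with one category name only.\n\nText:\n{text}"
)

def build_prompt(text):
    # Render the classification prompt for one block of unstructured text.
    return PROMPT_TEMPLATE.format(cats=", ".join(sorted(CATEGORIES)), text=text)

def parse_tag(reply):
    # Models often wrap the answer in extra words; keep the first known
    # category mentioned, falling back to "other". (Substring matching is
    # crude but good enough for a sketch.)
    reply = reply.lower()
    for cat in sorted(CATEGORIES):
        if cat in reply:
            return cat
    return "other"

assert parse_tag("This is clearly machine_learning content.") == "machine_learning"
assert parse_tag("no idea") == "other"
```

In use, `build_prompt(post_text)` would be sent to the local model and `parse_tag` applied to its reply, so that only valid tags ever reach the database used for RAG.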
h2ogpt on CPU: any experience on how long embeddings creation will take?
4
Newbie question: I've installed h2ogpt on my mini pc (64G RAM, AMD Ryzen 5500U) and have run the 'generate.py' command in miniconda shell, CPU use only. CPU usage is currently \~30% for running python scripts, and has been at this rate for about 75 minutes so far. The local data is 183 PDFs of academic journal articles and books (mostly articles). Any idea on how long this should take? Mostly trying to troubleshoot if anything freezes up, because I have no idea how long is "too long" for this process.
2023-12-20T17:47:39
https://www.reddit.com/r/LocalLLaMA/comments/18n0igo/h2ogpt_on_cpu_any_experience_on_how_long/
tarasoraptor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n0igo
false
null
t3_18n0igo
/r/LocalLLaMA/comments/18n0igo/h2ogpt_on_cpu_any_experience_on_how_long/
false
false
self
4
null
Building cost-effective Generative AI applications
2
Hi everyone, I would like to share a blog post with you all that discusses the challenges of leveraging AI models for building AI-driven applications, focusing on the rising costs of running these models. In the blog, we delve into how rate limiting and caching can help reduce the operational costs of AI models by 30%, without any compromise on user experience. I'd appreciate your feedback on this. Are you facing similar cost challenges with Llama models? If so, what strategies have you implemented to manage these costs? Thanks a lot for your insights! Link to [Blog](https://blog.fluxninja.com/blog/coderabbit-cost-effective-generative-ai)
2023-12-20T17:44:13
https://www.reddit.com/r/LocalLLaMA/comments/18n0fdw/building_costeffective_generative_ai_applications/
tuscan-ninja
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18n0fdw
false
null
t3_18n0fdw
/r/LocalLLaMA/comments/18n0fdw/building_costeffective_generative_ai_applications/
false
false
self
2
{'enabled': False, 'images': [{'id': 'N11IvJiakJjDjnrE0iuD-xfXzSH35lpT4IUz-IcyGzQ', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/GWrARstOwFvQqDsXsyjRWGGX29fsUM4tpS-w3dDRx1k.jpg?width=108&crop=smart&auto=webp&s=05274294cd3869000ff152320fe9fc65fd37fcb4', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/GWrARstOwFvQqDsXsyjRWGGX29fsUM4tpS-w3dDRx1k.jpg?width=216&crop=smart&auto=webp&s=90ab07abb37e8215757e887d7427da7e0d78e3b1', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/GWrARstOwFvQqDsXsyjRWGGX29fsUM4tpS-w3dDRx1k.jpg?width=320&crop=smart&auto=webp&s=9f33ff6b28e0e6e7c1ecdb41f9a724b2fe2b4c42', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/GWrARstOwFvQqDsXsyjRWGGX29fsUM4tpS-w3dDRx1k.jpg?width=640&crop=smart&auto=webp&s=b27e2bab40761af7d3af09e305153e2df5a2d08b', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/GWrARstOwFvQqDsXsyjRWGGX29fsUM4tpS-w3dDRx1k.jpg?width=960&crop=smart&auto=webp&s=cd91a7743a4aa9aeb3fed8c3fcc01dc5659c0532', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/GWrARstOwFvQqDsXsyjRWGGX29fsUM4tpS-w3dDRx1k.jpg?width=1080&crop=smart&auto=webp&s=dc1e57fb28d0264b7b3a924d0cec5ddb13291a29', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/GWrARstOwFvQqDsXsyjRWGGX29fsUM4tpS-w3dDRx1k.jpg?auto=webp&s=3986c204082ed7f5fa95967499e36a7a17fd8c82', 'width': 1792}, 'variants': {}}]}
IFEval looks like a great benchmark for how AI Engineers would use LLMs. Here's a notebook showing how to use it.
6
2023-12-20T17:40:31
https://colab.research.google.com/drive/1UFBWOUbUUAHTf7ilCGhPtGDMDeYXrJzt
datascienceharp
colab.research.google.com
1970-01-01T00:00:00
0
{}
18n0c3x
false
null
t3_18n0c3x
/r/LocalLLaMA/comments/18n0c3x/ifeval_looks_like_a_great_benchmark_for_how_ai/
false
false
https://a.thumbs.redditm…oh3Yut2yuZ44.jpg
6
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
LM Studio not adhering to n_predict (amount of words to generate)
3
I am trying to generate longer outputs from Mixtral, but it doesn't seem to listen to the settings I give it. I want to get around 4000 words, but it always stops around 2000 tokens, even though I have spare RAM. I set n\_predict to 4000 and also tried asking for 4000 words in system prompt and normal prompt, nothing worked. Does anyone have any advice on how to get really long output using mixtral? Thanks a lot!
2023-12-20T17:24:13
https://www.reddit.com/r/LocalLLaMA/comments/18mzxt0/lm_studio_not_adhering_to_n_predict_amount_of/
Schmackofatzke
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mzxt0
false
null
t3_18mzxt0
/r/LocalLLaMA/comments/18mzxt0/lm_studio_not_adhering_to_n_predict_amount_of/
false
false
self
3
null
MLX Models on Hugging Face
12
2023-12-20T17:13:39
https://x.com/awnihannun/status/1737510739987120248?s=46&t=BVhfPLwVzzqRJOcJ7VU3tw
Hinged31
x.com
1970-01-01T00:00:00
0
{}
18mzogl
false
null
t3_18mzogl
/r/LocalLLaMA/comments/18mzogl/mlx_models_on_hugging_face/
false
false
https://b.thumbs.redditm…b67ZbKzKzlVU.jpg
12
{'enabled': False, 'images': [{'id': 'Pu0dp2N6CvF3d4ucFcm4KWZw-rQy2Flz3R1A-Iv1ugA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/_0kEEddKt5iagUtS8McdOM1NV_hCqdGGf6ZKe1ETFKQ.jpg?width=108&crop=smart&auto=webp&s=b19c082e314356acc1d174e11dcf430cb20798e8', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/_0kEEddKt5iagUtS8McdOM1NV_hCqdGGf6ZKe1ETFKQ.jpg?auto=webp&s=9ed7f0d6ef8ef39a4a96e7b214d2b5a7a6d3faf1', 'width': 200}, 'variants': {}}]}
Mistral Instruct Fine-Tuning Problem
2
Hello, I'm new to instruction and fine-tuning. I'm encountering an issue with my fine-tuned Mistral Instruct model (using mistralai/Mistral-7B-Instruct-v0.1). The output appears in an unexpected way, and I'm unsure if this is due to the dataset I'm using or if it's related to the instructions for the LLM. I'm using the [know\_sql](https://huggingface.co/datasets/knowrohit07/know_sql) dataset from Hugging Face and converting it into an instruction dataset. Any insights or advice would be greatly appreciated. Thanks! `SELECT MIN(tweet_id) FROM Tweets WHERE content = "Invalid tweet. The tweet is invalid if the number of characters used in the content of the tweet is strictly greater than 15."` `\`\`\`<s> [INST] Write SQL query to answer the following question given the database schema. Please wrap your code answer using \`\`\`:` `Schema: CREATE TABLE table_name_94 (round INTEGER, college VARCHAR)` `Question: What is the average round of a player from the college of tennessee?`
2023-12-20T17:02:30
https://www.reddit.com/r/LocalLLaMA/comments/18mzerf/mistral_instruct_finetuning_problem/
laveriaroha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mzerf
false
null
t3_18mzerf
/r/LocalLLaMA/comments/18mzerf/mistral_instruct_finetuning_problem/
false
false
self
2
{'enabled': False, 'images': [{'id': 'mOp8-lANRfB0z5npPUKQKv_cP7iMesp7ePsPn-lVMxQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rUnqh1iTnLoKQGg8PfNgEEzxpReimQeH8iSHopCk25U.jpg?width=108&crop=smart&auto=webp&s=c71c53391f6a08f7b721445fd4c760ebc7da905e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rUnqh1iTnLoKQGg8PfNgEEzxpReimQeH8iSHopCk25U.jpg?width=216&crop=smart&auto=webp&s=ccad564c60cad0ebd7d5c52a311d341427c6567b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rUnqh1iTnLoKQGg8PfNgEEzxpReimQeH8iSHopCk25U.jpg?width=320&crop=smart&auto=webp&s=5e69380fa22bc8030e2d8d57f600dd27acbbc6eb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rUnqh1iTnLoKQGg8PfNgEEzxpReimQeH8iSHopCk25U.jpg?width=640&crop=smart&auto=webp&s=8051ddbb5fb26e943d4fd81a986301e766ac9fa7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rUnqh1iTnLoKQGg8PfNgEEzxpReimQeH8iSHopCk25U.jpg?width=960&crop=smart&auto=webp&s=49857237f7d2e55305ba05a15fb00377887d3bd9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rUnqh1iTnLoKQGg8PfNgEEzxpReimQeH8iSHopCk25U.jpg?width=1080&crop=smart&auto=webp&s=b0b0059f2159ddf3d1f354b8437bef3326589fd1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rUnqh1iTnLoKQGg8PfNgEEzxpReimQeH8iSHopCk25U.jpg?auto=webp&s=ecf713d484f47d5df010758869053e253272ca02', 'width': 1200}, 'variants': {}}]}
Pressure testing: Open LLMs
43
You might recall the [GPT-4](https://twitter.com/GregKamradt/status/1722386725635580292) and [Claude 2](https://twitter.com/GregKamradt/status/1727018183608193393) long context recall tests that were floating around Twitter about a month or so ago. Well, I was very intrigued by the results and am here to share an ongoing [project](https://github.com/LeonEricsson/llmcontext) of mine to pressure test prominent open LLMs. GPT-4 and Claude are frontier models; understanding their capabilities is important, but I want to give back to the open source community. I've already tested Mistral 7B Instruct v0.2 and OpenChat 7B 3.5-1210 (results below), and now I'm looking for new suggestions! *Unfortunately*, I am limited by what I can run on my local setup, but please let me know what you want to try, or even better, run a model yourself through the pressure cooker! Repo: [https://github.com/LeonEricsson/llmcontext](https://github.com/LeonEricsson/llmcontext) **Mistral 7B Instruct v0.2 @ 16k** [Poor performance across the board... ](https://preview.redd.it/8pd4syhdch7c1.png?width=1570&format=png&auto=webp&s=aa4bf2e6478535b1cbbd7faa68b4cf5b991cd715) **Mistral 7B Instruct v0.2 @ 16k [RP]** But check out what happens when we prime the assistant response with `Here is the most relevant sentence in the text:`. Don't forget that this model was only trained with 8k context length. https://preview.redd.it/i1efbah7dh7c1.png?width=1570&format=png&auto=webp&s=1ffa7f72c34e8945447d33c570a8f1fda8817476 **OpenChat 7B 3.5-1210 @ 8k** https://preview.redd.it/31u78iibch7c1.png?width=1570&format=png&auto=webp&s=acce506c4f3dc5a28fa43f443d843ab1cf3f8f95 **OpenChat 7B 3.5-1210 @ 8k [RP]** Retrieval priming does not seem to benefit OpenChat. 
https://preview.redd.it/uso6muxpdh7c1.png?width=1570&format=png&auto=webp&s=61691496a2d519c8e8ffe6497dd57ee6f5558b1a
2023-12-20T17:00:17
https://www.reddit.com/r/LocalLLaMA/comments/18mzchg/pressure_testing_open_llms/
TelloLeEngineer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mzchg
false
null
t3_18mzchg
/r/LocalLLaMA/comments/18mzchg/pressure_testing_open_llms/
false
false
https://b.thumbs.redditm…5fkJiGgoebNI.jpg
43
{'enabled': False, 'images': [{'id': 'IoT0s6tjXE3aOsEkW4indDUmrC74ki_Fz3SI5GJktkU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GqNLMBqKNQkV6iclmQ3oi6ZSbHsEZiESTWQR-rZsOUY.jpg?width=108&crop=smart&auto=webp&s=6f8fd85469dc8b95aa3e5ab0890e8d995dc0626c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GqNLMBqKNQkV6iclmQ3oi6ZSbHsEZiESTWQR-rZsOUY.jpg?width=216&crop=smart&auto=webp&s=13952d16c51c59c2a79593f2984e654c40371f92', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GqNLMBqKNQkV6iclmQ3oi6ZSbHsEZiESTWQR-rZsOUY.jpg?width=320&crop=smart&auto=webp&s=3802e49c9ab2de70abf8d0e0f28a78e5c5e4dddf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GqNLMBqKNQkV6iclmQ3oi6ZSbHsEZiESTWQR-rZsOUY.jpg?width=640&crop=smart&auto=webp&s=45efc1dd65b2ae0b984c88f378f7387f058745b5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GqNLMBqKNQkV6iclmQ3oi6ZSbHsEZiESTWQR-rZsOUY.jpg?width=960&crop=smart&auto=webp&s=8d1d2d90324a1ca712cde3cfc36e7db904c9eec3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GqNLMBqKNQkV6iclmQ3oi6ZSbHsEZiESTWQR-rZsOUY.jpg?width=1080&crop=smart&auto=webp&s=b0cd544e0c6e4c2b4cc8b440e59d19c009d1e0ac', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GqNLMBqKNQkV6iclmQ3oi6ZSbHsEZiESTWQR-rZsOUY.jpg?auto=webp&s=d4609cc846231a8d7293378fa2dd85ba054e2a13', 'width': 1200}, 'variants': {}}]}
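The recall test behind the charts in the post above boils down to: hide a "needle" sentence at a chosen depth inside filler context, then ask the model to retrieve it. A minimal, model-free sketch follows; the filler text, needle sentence, and helper names are illustrative, and only the assistant-priming string is taken from the post itself.

```python
def build_haystack(needle, filler_sentence, depth_pct, total_sentences=100):
    """Place `needle` at roughly depth_pct% into a run of filler sentences."""
    pos = int(total_sentences * depth_pct / 100)
    sentences = [filler_sentence] * total_sentences
    sentences.insert(pos, needle)
    return " ".join(sentences)

def build_prompt(haystack, question):
    # The priming trick from the post: end the prompt with a cue that pushes
    # the model toward quoting the single relevant sentence.
    return (
        f"{haystack}\n\nQuestion: {question}\n"
        "Here is the most relevant sentence in the text:"
    )

needle = "The best thing to do in San Francisco is eat a sandwich."
hay = build_haystack(needle, "Grass is green and the sky is blue.", depth_pct=50)
assert needle in hay
```

A full run would sweep `depth_pct` and the context length, send each prompt to the model under test, and score whether the reply contains the needle, which is what produces the recall heatmaps shown in the post.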
Do not ask orca 2 to write a typical reddit ragebait post
1
Wrote this once already but it did not show up, I guess due to words I used. But anyway, yes, Orca 2 seems to be very explicit if you ask it to write a reddit ragebait post. I thought Orca 2 was a sort of censored model? It was not in the instance I tried with llama.cpp
2023-12-20T16:05:28
https://www.reddit.com/r/LocalLLaMA/comments/18my2z2/do_not_ask_orca_2_to_write_a_typical_reddit/
aluode
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18my2z2
false
null
t3_18my2z2
/r/LocalLLaMA/comments/18my2z2/do_not_ask_orca_2_to_write_a_typical_reddit/
false
false
self
1
null
Training a model with my own data
1
I'm using ollama to run my models. I want to use the mistral model, but create a lora to act as an assistant that primarily references data I've supplied during training. This data will include things like test procedures, diagnostics help, and general process flows for what to do in different scenarios. I downloaded a mistral model from the huggingface repo I found here: [https://ollama.ai/library/mistral](https://ollama.ai/library/mistral) I used llama.cpp to convert it to a gguf, then supplied it with a simple training text file that only contained 1 piece of information the base model couldn't know. After this, I merged my lora with the original model and ran it through ollama, and the output is just nonsense. At first, it just repeated the first word of my training doc over and over. I tweaked the training command a bit but that just led to garbage. So clearly I'm not understanding something. I'm very new to this, and could use a little help on how to achieve my goal. Luckily I made an instructional document on how I did this whole process, so hopefully you guys can easily identify what I was doing wrong. If anyone has any advice, I'd greatly appreciate it. Here's the document I made: \`\`\`Requirements: Cmake Cuda Nvidia GPU Git &#x200B; Create C:/lora directory &#x200B; Clone llama.cpp in C:/lora directory with git bash: git clone [https://github.com/ggerganov/llama.cpp.git](https://github.com/ggerganov/llama.cpp.git) &#x200B; Build llama.cpp by creating a build directory: mkdir .\\llama.cpp\\build &#x200B; CD into the build directory: cd .\\llama.cpp\\build &#x200B; Generate project files: cmake .. -G "Visual Studio 17 2022" -A x64 -DLLAMA\_CUBLAS=ON &#x200B; Build project files: cmake --build . 
--config Release &#x200B; Copy contents of C:\\lora\\llama.cpp\\build\\bin\\Release to C:\\lora\\llama.cpp &#x200B; Download files from repo and place into C:/lora: [https://huggingface.co/mistralai/Mistral-7B-v0.1/tree/main](https://huggingface.co/mistralai/Mistral-7B-v0.1/tree/main) &#x200B; model-00001-of-00002.safetensors model-00002-of-00002.safetensors pytorch\_model.bin.index.json special\_tokens\_map.json tokenizer.json tokenizer.model tokenizer\_config.json config.json generation\_config.json &#x200B; Create python environment in C:/lora/llama.cpp: python -m venv .venv &#x200B; Activate python environment: Bash: .\\.venv\\Scripts\\activate.bat &#x200B; Powershell: .\\.venv\\Scripts\\Activate.ps1 &#x200B; Install dependencies: pip install -r requirements.txt &#x200B; While in python environment, cd to C:/lora &#x200B; Run convert.py: python llama.cpp\\convert.py model-00001-of-00002.safetensors --outtype f32 --outfile converted.gguf --ctx 2048 &#x200B; In normal powershell window in C:/lora directory, run command to begin training: llama.cpp/finetune.exe --model-base converted.gguf --train-data instruction.txt --lora-out lora.gguf --save-every 0 --threads 24 --ctx 16 --rope-freq-base 10000 --rope-freq-scale 1.0 --batch 1 --grad-acc 1 --adam-iter 256 --adam-alpha 0.001 --lora-r 4 --lora-alpha 4 --use-checkpointing --sample-start "\\n" --escape --include-sample-start --seed 1 -ngl 8 &#x200B; Once the training is done, a lora.gguf file is made. Run this command to merge it with the original model: llama.cpp\\export-lora.exe --model-base converted.gguf --model-out helios-model.gguf --lora-scaled lora.gguf 1.0 &#x200B; Create a Modelfile on the server: FROM /home/syllith/helios-model.gguf PARAMETER temperature .5 PARAMETER num\_ctx 1024 &#x200B; SYSTEM """ You are an expert assistant working at world wide technology. 
""" Transfer the helios-mode.gguf to linux and run this command: ollama create helios -f Modelfile &#x200B; To save changes to the model, edit the Modelfile then run: ollama update helios -f Modelfile\`\`\`
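One thing that often produces exactly this repeat-one-word behavior is a training context far smaller than the training samples: the finetune command above uses `--ctx 16`, i.e. 16 tokens. A rough sanity check, sketched in Python with word count as a crude proxy for tokens (this helper is illustrative, not part of llama.cpp):

```python
def samples_fit_ctx(train_text: str, ctx: int, sample_start: str = "\n") -> bool:
    """Rough check that every training sample fits in the finetune
    context window, using word count as a crude proxy for tokens."""
    samples = [s for s in train_text.split(sample_start) if s.strip()]
    return all(len(s.split()) <= ctx for s in samples)

# A 16-token context cannot hold a typical instruction paragraph:
ok = samples_fit_ctx("This is one short sample.\nAnd another one.", ctx=16)
```

If samples don't fit, raising `--ctx` (and lowering `--adam-alpha`) would be the first things to try.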
2023-12-20T15:55:57
https://www.reddit.com/r/LocalLLaMA/comments/18mxuq0/training_a_model_with_my_own_data/
RidesFlysAndVibes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mxuq0
false
null
t3_18mxuq0
/r/LocalLLaMA/comments/18mxuq0/training_a_model_with_my_own_data/
false
false
self
1
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
Need help in adding context awareness to LLM RAG pipeline
7
Hello all, I want to add context awareness to my LLM RAG pipeline. Here are 2 approaches I am thinking of. Please tell me if I am going in the right direction, and also what the ideal approach would be.

Approach 1:
Step 1. Use an LLM to do coreference resolution on the new query, based on the immediately preceding conversation, and get the modified query. Example: last conversation: "What is Google?" New query: "What does it do?" Modified query: "What does Google do?"
Step 2. Based on the modified query, retrieve similar previous conversations by similarity score and add them to the prompt.
Cons: Since coreference resolution uses only the immediately preceding query, it misses cases where the user's query refers to a noun or subject from earlier in the conversation.
Pros: Only the relevant conversation is passed.

Approach 2:
Summarize the conversation history and store it in memory. As the conversation proceeds, keep adding to the summary.
Cons: In case of context switching, the summary would also add non-relevant context to the prompt. How to handle this?
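Step 1 of Approach 1 can be as simple as a prompt template wrapped around the previous turn; a minimal sketch (the function and wording are illustrative, assuming any chat LLM behind it):

```python
def build_rewrite_prompt(last_turn: str, new_query: str) -> str:
    """Prompt an LLM to resolve references in the follow-up question
    using only the immediately preceding turn (Approach 1, step 1)."""
    return (
        "Rewrite the follow-up question as a standalone question, "
        "replacing pronouns with the entities they refer to.\n"
        f"Previous question: {last_turn}\n"
        f"Follow-up question: {new_query}\n"
        "Standalone question:"
    )

prompt = build_rewrite_prompt("What is Google?", "What does it do?")
```

The resolved query then drives the similarity search in step 2; to mitigate the stated con, you could include the last N turns in the template instead of just one.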
2023-12-20T15:40:46
https://www.reddit.com/r/LocalLLaMA/comments/18mxi7d/need_help_in_asding_context_awareness_to_llm_rag/
Impressive_Gate2102
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mxi7d
false
null
t3_18mxi7d
/r/LocalLLaMA/comments/18mxi7d/need_help_in_asding_context_awareness_to_llm_rag/
false
false
self
7
null
Pressure Testing: Open LLMs
1
You might recall the [GPT-4](https://twitter.com/GregKamradt/status/1722386725635580292) and [Claude 2](https://twitter.com/GregKamradt/status/1727018183608193393) long context recall tests that were floating around Twitter about a month or so ago. Well, I was very intrigued by the results and am here to share an ongoing [project](https://github.com/LeonEricsson/llmcontext) of mine to pressure test prominent open LLMs. GPT-4 and Claude are frontier models; understanding their capabilities is important, but I want to give back to the open source community. I've already tested Mistral 7B Instruct v0.2 and OpenChat 7B 3.5-1210 (results below), and now I'm looking for new suggestions! *Unfortunately*, I am limited by what I can run on my local setup, but please let me know what you want to try, or even better, run the model yourself through the pressure cooker!

Repo: [https://github.com/LeonEricsson/llmcontext](https://github.com/LeonEricsson/llmcontext)

**Mistral 7B Instruct v0.2 @ 16k**

[Poor performance across the board, but check out the retrieval-primed version of this test in the repo for a massive performance boost.](https://preview.redd.it/4kfn4ugczg7c1.png?width=1570&format=png&auto=webp&s=e673bd8923a958fd6f72eb187b956724ea840eb0)

**OpenChat 7B 3.5-1210 @ 8k**

https://preview.redd.it/mkv74jkkzg7c1.png?width=1570&format=png&auto=webp&s=663b47bded833aeebfd109323dfc12d7655ebc35
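For anyone wanting to reproduce this locally, the core of a needle-in-a-haystack test is just filler text with a fact inserted at a controlled depth; a minimal sketch (the filler sentence and sizes are arbitrary placeholders):

```python
def build_haystack(needle: str, depth: float, n_sentences: int = 500) -> str:
    """Insert `needle` at fraction `depth` (0.0 = start, 1.0 = end)
    of a haystack of filler sentences."""
    filler = "The quick brown fox jumps over the lazy dog. "
    sentences = [filler] * n_sentences
    sentences.insert(int(depth * n_sentences), needle + " ")
    return "".join(sentences)

haystack = build_haystack("The magic number is 42.", depth=0.5)
```

The model is then prompted with the haystack plus a retrieval question ("What is the magic number?") and scored on recall at each (context length, depth) cell.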
2023-12-20T15:40:45
https://www.reddit.com/r/LocalLLaMA/comments/18mxi71/pressure_testing_open_llms/
TelloLeEngineer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mxi71
false
{'oembed': {'author_name': 'Greg Kamradt', 'author_url': 'https://twitter.com/GregKamradt', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Pressure Testing GPT-4-128K With Long Context Recall<br><br>128K tokens of context is awesome - but what&#39;s performance like?<br><br>I wanted to find out so I did a “needle in a haystack” analysis<br><br>Some expected (and unexpected) results<br><br>Here&#39;s what I found:<br><br>Findings:<br>* GPT-4’s recall… <a href="https://t.co/nHMokmfhW5">pic.twitter.com/nHMokmfhW5</a></p>&mdash; Greg Kamradt (@GregKamradt) <a href="https://twitter.com/GregKamradt/status/1722386725635580292?ref_src=twsrc%5Etfw">November 8, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/GregKamradt/status/1722386725635580292', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_18mxi71
/r/LocalLLaMA/comments/18mxi71/pressure_testing_open_llms/
false
false
https://b.thumbs.redditm…fmNlNzXWSLHQ.jpg
1
{'enabled': False, 'images': [{'id': 'ymU6YX_LHVJfaun7NAOa91DelnsTFGnDzZWqRSjSd4U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/kQmRHC1FSkwCzZAeppncbozkxAY9vBYDmfL2nz9w7g0.jpg?width=108&crop=smart&auto=webp&s=a35f25e7df63b0f56c89ba653f84fe28a8b9871c', 'width': 108}], 'source': {'height': 73, 'url': 'https://external-preview.redd.it/kQmRHC1FSkwCzZAeppncbozkxAY9vBYDmfL2nz9w7g0.jpg?auto=webp&s=76df65fa0162dd822cfcfd8ffc85f6d2bb54f84b', 'width': 140}, 'variants': {}}]}
SJTU-IPADS/PowerInfer: High-speed Large Language Model Serving on PCs with Consumer-grade GPUs
117
Abstract:

We introduce PowerInfer, a high-speed Large Language Model (LLM) inference engine on a personal computer (PC) equipped with a single consumer-grade GPU. The key idea underlying the design of PowerInfer is exploiting the high locality inherent in LLM inference, characterized by a power-law distribution in neuron activation. This distribution indicates that a small subset of neurons, termed hot neurons, are consistently activated across inputs, while the majority, cold neurons, vary based on specific inputs. PowerInfer exploits such an insight to design a GPU-CPU hybrid inference engine: hot-activated neurons are preloaded onto the GPU for fast access, while cold-activated neurons are computed on the CPU, thus significantly reducing GPU memory demands and CPU-GPU data transfers. PowerInfer further integrates adaptive predictors and neuron-aware sparse operators, optimizing the efficiency of neuron activation and computational sparsity. Evaluation shows that PowerInfer attains an average token generation rate of 13.20 tokens/s, with a peak of 29.08 tokens/s, across various LLMs (including OPT-175B) on a single NVIDIA RTX 4090 GPU, only 18% lower than that achieved by a top-tier server-grade A100 GPU. This significantly outperforms llama.cpp by up to 11.69x while retaining model accuracy.
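The hot/cold split the abstract describes can be illustrated with a simple frequency count over observed activations; a toy sketch (the 20% threshold is a stand-in for the paper's learned predictors, not their actual method):

```python
from collections import Counter

def split_hot_cold(activation_log, hot_fraction=0.2):
    """Rank neuron ids by how often they appear in `activation_log`
    and label the top `hot_fraction` as GPU-resident 'hot' neurons."""
    counts = Counter(activation_log)
    ranked = [nid for nid, _ in counts.most_common()]
    k = max(1, int(hot_fraction * len(ranked)))
    return set(ranked[:k]), set(ranked[k:])

# Neuron 7 fires on almost every input; neurons 1..5 fire once each,
# mimicking the power-law distribution the paper exploits.
hot, cold = split_hot_cold([7] * 50 + [1, 2, 3, 4, 5])
```

Under a power-law distribution, a small hot set covers most activations, which is why preloading only those neurons onto the GPU pays off.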
2023-12-20T15:36:12
https://github.com/SJTU-IPADS/PowerInfer
alchemist1e9
github.com
1970-01-01T00:00:00
0
{}
18mxefa
false
null
t3_18mxefa
/r/LocalLLaMA/comments/18mxefa/sjtuipadspowerinfer_highspeed_large_language/
false
false
https://b.thumbs.redditm…GcWVpCkjwIGY.jpg
117
{'enabled': False, 'images': [{'id': 'GRSs8mzfWIMNc4kAZlQg25hwQm3iLPKcUOfqneN2qdQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZR0kQQW8nMpHvK7IZEDXmo0ECPcBmADHHDT7Sl66yfQ.jpg?width=108&crop=smart&auto=webp&s=15f7b4267fbd23b262aa04c970a582203f1f38ed', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZR0kQQW8nMpHvK7IZEDXmo0ECPcBmADHHDT7Sl66yfQ.jpg?width=216&crop=smart&auto=webp&s=23b0eeffe660ac7c1cef2aefc9e173c5c4e03e4f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZR0kQQW8nMpHvK7IZEDXmo0ECPcBmADHHDT7Sl66yfQ.jpg?width=320&crop=smart&auto=webp&s=41c96fb2cce414a4086a7e895dc204c94ad2cf3e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZR0kQQW8nMpHvK7IZEDXmo0ECPcBmADHHDT7Sl66yfQ.jpg?width=640&crop=smart&auto=webp&s=8b02335277219c048deb542df7588a35d6f315aa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZR0kQQW8nMpHvK7IZEDXmo0ECPcBmADHHDT7Sl66yfQ.jpg?width=960&crop=smart&auto=webp&s=4626d69f2be4775e3fa817a9712a0f7255c2789d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZR0kQQW8nMpHvK7IZEDXmo0ECPcBmADHHDT7Sl66yfQ.jpg?width=1080&crop=smart&auto=webp&s=b6d24ce9a54610f7d170745f10f93cd0c8c478ec', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZR0kQQW8nMpHvK7IZEDXmo0ECPcBmADHHDT7Sl66yfQ.jpg?auto=webp&s=ec2b947cf9f1238df332bc2d922e460bc5d3faf1', 'width': 1200}, 'variants': {}}]}
Pressure Testing: Open LLMs
1
2023-12-20T15:30:34
https://github.com/LeonEricsson/llmcontext
TelloLeEngineer
github.com
1970-01-01T00:00:00
0
{}
18mx9pa
false
null
t3_18mx9pa
/r/LocalLLaMA/comments/18mx9pa/pressure_testing_open_llms/
false
false
https://a.thumbs.redditm…0hNgXjQYPh68.jpg
1
{'enabled': False, 'images': [{'id': 'IoT0s6tjXE3aOsEkW4indDUmrC74ki_Fz3SI5GJktkU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GqNLMBqKNQkV6iclmQ3oi6ZSbHsEZiESTWQR-rZsOUY.jpg?width=108&crop=smart&auto=webp&s=6f8fd85469dc8b95aa3e5ab0890e8d995dc0626c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GqNLMBqKNQkV6iclmQ3oi6ZSbHsEZiESTWQR-rZsOUY.jpg?width=216&crop=smart&auto=webp&s=13952d16c51c59c2a79593f2984e654c40371f92', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GqNLMBqKNQkV6iclmQ3oi6ZSbHsEZiESTWQR-rZsOUY.jpg?width=320&crop=smart&auto=webp&s=3802e49c9ab2de70abf8d0e0f28a78e5c5e4dddf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GqNLMBqKNQkV6iclmQ3oi6ZSbHsEZiESTWQR-rZsOUY.jpg?width=640&crop=smart&auto=webp&s=45efc1dd65b2ae0b984c88f378f7387f058745b5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GqNLMBqKNQkV6iclmQ3oi6ZSbHsEZiESTWQR-rZsOUY.jpg?width=960&crop=smart&auto=webp&s=8d1d2d90324a1ca712cde3cfc36e7db904c9eec3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GqNLMBqKNQkV6iclmQ3oi6ZSbHsEZiESTWQR-rZsOUY.jpg?width=1080&crop=smart&auto=webp&s=b0cd544e0c6e4c2b4cc8b440e59d19c009d1e0ac', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GqNLMBqKNQkV6iclmQ3oi6ZSbHsEZiESTWQR-rZsOUY.jpg?auto=webp&s=d4609cc846231a8d7293378fa2dd85ba054e2a13', 'width': 1200}, 'variants': {}}]}
Recommended hardware (Windows or Linux)?
10
Hello, I currently own an i7-6700 with 16GB RAM and an RTX 2070 with 8GB. I was able to run some gguf models at good speed. I should upgrade my PC soon, so I am researching AI hardware requirements.

Looks like Nvidia is carefully trying not to give us memory, so even cards with 24GB are ultra-premium priced and cards with 48GB are priced like cheaper cars.

I was thinking about buying 32GB of RAM, but it looks like people are able to use 64GB of RAM for LLM models. Can you also use 128GB somehow? Examples?

Can you use multiple Nvidia cards with current LLM software? For example, can you use two 16GB cards and get better results than with a single 24GB card? Examples?

I want to create multiple AIs locally so they could chat with each other. Maybe I could use my existing PC with 8GB VRAM as one client and the new, more powerful computer as a second one. Have you tried creating teams of AIs?
2023-12-20T15:11:30
https://www.reddit.com/r/LocalLLaMA/comments/18mwtqf/recommended_hardware_windows_or_linux/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mwtqf
false
null
t3_18mwtqf
/r/LocalLLaMA/comments/18mwtqf/recommended_hardware_windows_or_linux/
false
false
self
10
null
how is the scene currently ?
8
So I'm pretty familiar with the open source image generation scene, but I know nearly nothing about text generation or any of the major recent events. Can someone please summarize what's happening currently, or what major events happened in the last few months? Thanks
2023-12-20T14:51:29
https://www.reddit.com/r/LocalLLaMA/comments/18mwd6j/how_is_the_scene_currently/
FindingSea3777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mwd6j
false
null
t3_18mwd6j
/r/LocalLLaMA/comments/18mwd6j/how_is_the_scene_currently/
false
false
self
8
null
Does increasing context length require more memory, or does it just slow down processing?
20
For context, I'm running a 13B model on an RTX 3080 with 10GB VRAM and 39 GPU layers, and I'm getting 10 T/s at 2048 context length. I'm considering trying out 4096 context length: will this just make the model slower (and hopefully smarter), or will I need to fiddle with GPU layers some more?
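Both, roughly: a longer context slows prompt processing *and* grows the KV cache linearly, and that extra memory comes out of the same VRAM budget as the offloaded layers. Back-of-the-envelope math for a 13B-class model (assumed dims: 40 layers, 5120 hidden size; fp16 cache, i.e. an unquantized cache):

```python
def kv_cache_bytes(n_layers: int, hidden_dim: int, ctx_len: int,
                   bytes_per_elt: int = 2) -> int:
    """Size of the KV cache: K and V each store hidden_dim values
    per token per layer (hence the leading factor of 2)."""
    return 2 * n_layers * hidden_dim * ctx_len * bytes_per_elt

gib_2k = kv_cache_bytes(40, 5120, 2048) / 2**30   # ~1.6 GiB
gib_4k = kv_cache_bytes(40, 5120, 4096) / 2**30   # ~3.1 GiB
```

So going from 2048 to 4096 costs on the order of another 1.5 GiB at fp16; with 10GB of VRAM you may well need to drop a few GPU layers.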
2023-12-20T14:09:00
https://www.reddit.com/r/LocalLLaMA/comments/18mvfsu/does_increasing_context_length_require_more/
RomulanLurkingDevice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mvfsu
false
null
t3_18mvfsu
/r/LocalLLaMA/comments/18mvfsu/does_increasing_context_length_require_more/
false
false
self
20
null
Perplexity isn’t everything! : A real world example of Mixtral with variable experts.
1
[removed]
2023-12-20T13:59:15
https://medium.com/@nivibilla/perplexity-isnt-everything-a-real-world-example-of-mixtral-with-variable-experts-6cfeb8d12183?source=friends_link&sk=2eb493e1de5530e876c582785b5b7340
Eastwindy123
medium.com
1970-01-01T00:00:00
0
{}
18mv86c
false
null
t3_18mv86c
/r/LocalLLaMA/comments/18mv86c/perplexity_isnt_everything_a_real_world_example/
false
false
https://a.thumbs.redditm…nbrafXa24lk0.jpg
1
{'enabled': False, 'images': [{'id': 'dCmQfuQBoMke8LO9La5qVQzWPJXIkIOYHxNTKvzyRZE', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/f9lF6-0pUsg3tXn-y4cVL4I2DF_IavAluTnLtyaq47Q.jpg?width=108&crop=smart&auto=webp&s=e6f38857ce56cc5e142d9ced866b74e9c05ea879', 'width': 108}, {'height': 100, 'url': 'https://external-preview.redd.it/f9lF6-0pUsg3tXn-y4cVL4I2DF_IavAluTnLtyaq47Q.jpg?width=216&crop=smart&auto=webp&s=a50abdd8ceb3198e4887383b1111ea2b128887d8', 'width': 216}, {'height': 148, 'url': 'https://external-preview.redd.it/f9lF6-0pUsg3tXn-y4cVL4I2DF_IavAluTnLtyaq47Q.jpg?width=320&crop=smart&auto=webp&s=19f12f0191ee96787aab2b90e68d85a163868f80', 'width': 320}, {'height': 297, 'url': 'https://external-preview.redd.it/f9lF6-0pUsg3tXn-y4cVL4I2DF_IavAluTnLtyaq47Q.jpg?width=640&crop=smart&auto=webp&s=66e8b0abb950e8a8964b35ad4e2967aa80da698e', 'width': 640}], 'source': {'height': 428, 'url': 'https://external-preview.redd.it/f9lF6-0pUsg3tXn-y4cVL4I2DF_IavAluTnLtyaq47Q.jpg?auto=webp&s=712e27fe812d447e3a08f55e29093e3f62aa5634', 'width': 922}, 'variants': {}}]}
Dumb question? Mixtral in onnx format.
5
Hello all - first post here, so I thought I'd start with a dumb question! I'm looking into using Mixtral for semantic search in Typesense. First off: is that a good idea, or are there better open models out there for that purpose? For that to work, they require the model to be in ONNX format. If I understand correctly, you need to be able to actually run the model once to get a full forward pass. I don't have nearly enough of anything to run Mixtral, so how would one do it? Hugging Face does not seem to have that: is there a source for ONNX-formatted models? Guess that was more than one question... Thanks!
2023-12-20T13:50:28
https://www.reddit.com/r/LocalLLaMA/comments/18mv1l8/dumb_question_mixtral_in_onnx_format/
pierredugland
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mv1l8
false
null
t3_18mv1l8
/r/LocalLLaMA/comments/18mv1l8/dumb_question_mixtral_in_onnx_format/
false
false
self
5
null
Do not ask orca 2 to write a typical reddit ragebait post
1
[removed]
2023-12-20T13:46:49
https://www.reddit.com/r/LocalLLaMA/comments/18muyub/do_not_ask_orca_2_to_write_a_typical_reddit/
aluode
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18muyub
false
null
t3_18muyub
/r/LocalLLaMA/comments/18muyub/do_not_ask_orca_2_to_write_a_typical_reddit/
false
false
self
1
null
Newbie question: loading LLMs with HF Transformers
2
I have a question on the loading settings: what is the difference between using load_in_4bit and torch_dtype=torch.float16? I found both settings in the HF SOLAR description... any tips are welcome!

tokenizer = AutoTokenizer.from_pretrained("Upstage/SOLAR-10.7B-Instruct-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "Upstage/SOLAR-10.7B-Instruct-v1.0",
    device_map="auto",
    load_in_4bit=True,
)
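Short version: `torch_dtype=torch.float16` loads the original weights in half precision (~2 bytes per parameter), while `load_in_4bit=True` quantizes them on load via bitsandbytes (~0.5 bytes per parameter, plus some overhead), trading a little quality for a much smaller footprint. A rough footprint comparison (the byte counts are approximations for weights only, ignoring activations and the KV cache):

```python
BYTES_PER_PARAM = {"float16": 2.0, "4bit": 0.5}  # approximate

def approx_weight_gb(n_params_billion: float, mode: str) -> float:
    """Approximate memory for the weights alone, in GB
    (1e9 parameters at 1 byte/param is ~1 GB)."""
    return n_params_billion * BYTES_PER_PARAM[mode]

fp16_gb = approx_weight_gb(10.7, "float16")  # ~21.4 GB for SOLAR 10.7B
q4_gb = approx_weight_gb(10.7, "4bit")       # ~5.35 GB
```

That is why the 4-bit path is usually the only way to fit a 10.7B model on a single consumer GPU, at the cost of a small quality hit.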
2023-12-20T13:46:32
https://www.reddit.com/r/LocalLLaMA/comments/18muymz/newbi_question_loading_lmms_with_hf_transformers/
Ecstatic_Sale1739
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18muymz
false
null
t3_18muymz
/r/LocalLLaMA/comments/18muymz/newbi_question_loading_lmms_with_hf_transformers/
false
false
self
2
null
Has anyone managed to use knowledge/fact editing techniques such as Memit or use the EasyEdit library on limited (V)RAM?
9
It's possible to edit facts directly into an LLM, without retraining or finetuning, using techniques such as Memit. A popular library that supports multiple fact editing algorithms is EasyEdit. You can, for example, change "The president of the USA is Barack Obama" -> "The president of the USA is Joe Biden" without affecting any other facts in the model or doing any training. And it's meant to be pretty fast, taking around 5 seconds.

[https://github.com/zjunlp/EasyEdit](https://github.com/zjunlp/EasyEdit)

But this takes up a lot of RAM. I have to quantize my models just to get them to run on my PC. Has anyone experimented with knowledge editing and managed to do it locally on consumer hardware? Was enabling quantization during the process possible for you?
2023-12-20T13:39:42
https://www.reddit.com/r/LocalLLaMA/comments/18mutj9/has_anyone_managed_to_use_knowledgefact_editing/
PMMEYOURSMIL3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mutj9
false
null
t3_18mutj9
/r/LocalLLaMA/comments/18mutj9/has_anyone_managed_to_use_knowledgefact_editing/
false
false
self
9
{'enabled': False, 'images': [{'id': 'FP6OZUMhfluzHyRcx34PEpctJWPBV0eNwFzwdkdBZfg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Mj8AHhOmuFMI3ID5K6FXuc5tnKClCK0sXm-XNlZXitU.jpg?width=108&crop=smart&auto=webp&s=72f247327c06d7e6abfa444fbe52a30a886d26ed', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Mj8AHhOmuFMI3ID5K6FXuc5tnKClCK0sXm-XNlZXitU.jpg?width=216&crop=smart&auto=webp&s=fd459c33bd400c9ef4194d00df2daf3469c0df39', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Mj8AHhOmuFMI3ID5K6FXuc5tnKClCK0sXm-XNlZXitU.jpg?width=320&crop=smart&auto=webp&s=bf9174d6f5645895fd84a3735812072cfb778c39', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Mj8AHhOmuFMI3ID5K6FXuc5tnKClCK0sXm-XNlZXitU.jpg?width=640&crop=smart&auto=webp&s=61a196e3329826799dd78e66f6b17044926adc88', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Mj8AHhOmuFMI3ID5K6FXuc5tnKClCK0sXm-XNlZXitU.jpg?width=960&crop=smart&auto=webp&s=4cba4f9ef99350b125d87635c6cceff2b41061f5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Mj8AHhOmuFMI3ID5K6FXuc5tnKClCK0sXm-XNlZXitU.jpg?width=1080&crop=smart&auto=webp&s=f2ba3d9ecd2485690fb2c3ecf7752d6b04849541', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Mj8AHhOmuFMI3ID5K6FXuc5tnKClCK0sXm-XNlZXitU.jpg?auto=webp&s=cc65ce5dae34b54bcd90adbf6445293f774ff927', 'width': 1200}, 'variants': {}}]}
Low vram, 100 pdf - Text classification (by topics)
5
Hi, I'm hoping AI text generation might be able to do it, but I've only got a low-end GPU with 4GB of VRAM. Obviously, I'm not going to be running any crazy 128GB models! [I am a non-coder :(]

It might be crazy to think it's possible, but can I process 100 books of text to categorize them by topic? I'm just hoping it is possible. Any advice?

What I've learned from basic searching on YouTube and forums:

- Create your own dataset or fine-tune an existing model for text classification or sentiment analysis, but the context windows will be small.
- Use text classification (I don't know how).

If anyone has pointers for open source tools I could leverage with limited resources, I'm all ears! I know I'm working with limited money as a student here. I tried to go down the rabbit hole, but it just made things more complicated. Please feel free to suggest any improvements or other approaches! Even if it takes a full week to churn through 100 PDFs, that's OK!

Core idea: 100 uncensored PDFs into text/topic classification with open-source tools under low-VRAM restrictions.
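Before reaching for an LLM at all, a zero-GPU baseline worth trying is scoring each document against per-topic keyword lists; a minimal pure-Python sketch (the topics and seed words below are made-up placeholders):

```python
import re
from collections import Counter

TOPIC_KEYWORDS = {                      # placeholder topics and seed words
    "cooking": ["recipe", "oven", "flour", "bake", "simmer"],
    "finance": ["stock", "market", "invest", "dividend", "bond"],
}

def classify(text: str) -> str:
    """Assign the topic whose seed words occur most often in the text."""
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    scores = {topic: sum(words[w] for w in kws)
              for topic, kws in TOPIC_KEYWORDS.items()}
    return max(scores, key=scores.get)
```

For fuzzier topics, the same loop works with sentence-transformer embeddings and cosine similarity, which still fits comfortably in 4GB; either way the per-document cost is tiny, so 100 books over a week is very feasible.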
2023-12-20T13:25:34
https://www.reddit.com/r/LocalLLaMA/comments/18mujf1/low_vram_100_pdf_text_classification_by_topics/
Holiday-Regret-1896
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mujf1
false
null
t3_18mujf1
/r/LocalLLaMA/comments/18mujf1/low_vram_100_pdf_text_classification_by_topics/
false
false
self
5
null
Fine Tuning for Classification?
4
I recently scoped some work to build a traditional ML model for classifying unique business data (insurance claims, and whether the insurance provider would accept a new claim, based on historical data). As I was writing up my quote, I thought to myself... could this be done faster/cheaper with a fine-tuned LLM? I messed around with Cohere's fine-tuning process as a quick test case against a training/test set, and the results were really positive. I can't tell if I'm heading down a very wrong path or if this would be a legitimate alternative to a traditional ML model. Looking for any guidance/considerations before I spend more time on it.
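If you do go the fine-tuning route, the main chore is converting labeled rows into the prompt/completion records most fine-tuning APIs accept; a sketch (field names and wording vary by provider, so treat these as placeholders):

```python
import json

def to_finetune_rows(examples) -> str:
    """Turn (claim_text, label) pairs into JSONL fine-tuning records."""
    lines = []
    for text, label in examples:
        lines.append(json.dumps({
            "prompt": f"Claim: {text}\nWill the provider accept this claim?",
            "completion": label,
        }))
    return "\n".join(lines)

jsonl = to_finetune_rows([("Water damage, filed within 30 days.", "accept")])
```

Worth keeping the traditional model as a baseline, though: on tabular or structured claims data, gradient-boosted trees are typically hard to beat on cost, latency, and calibration.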
2023-12-20T13:05:24
https://www.reddit.com/r/LocalLLaMA/comments/18mu57c/fine_tuning_for_classification/
Defektivex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mu57c
false
null
t3_18mu57c
/r/LocalLLaMA/comments/18mu57c/fine_tuning_for_classification/
false
false
self
4
null
LLM in a flash: Efficient Large Language Model Inference with Limited Memory. "enable running models up to twice the size of the available DRAM, with a 4-5x and 20-25x increase in inference speed"
236
2023-12-20T13:05:08
https://huggingface.co/papers/2312.11514
rationalkat
huggingface.co
1970-01-01T00:00:00
0
{}
18mu4z4
false
null
t3_18mu4z4
/r/LocalLLaMA/comments/18mu4z4/llm_in_a_flash_efficient_large_language_model/
false
false
https://b.thumbs.redditm…U9GZtQDTOdCs.jpg
236
{'enabled': False, 'images': [{'id': '0WhVUSV5SwStzdYD55KrzVCUDKQ9pGfDlCDaeQq-nFA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KAi3zm5rNue2TOHK6anYe1qMF5KM6T49KZhVM-N7NVk.jpg?width=108&crop=smart&auto=webp&s=d7a4aab053390b7f6a8e173e52ad38e3ce9b7908', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KAi3zm5rNue2TOHK6anYe1qMF5KM6T49KZhVM-N7NVk.jpg?width=216&crop=smart&auto=webp&s=2295ec75f89ae6bbd99707c341fbfc3d08103e4e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KAi3zm5rNue2TOHK6anYe1qMF5KM6T49KZhVM-N7NVk.jpg?width=320&crop=smart&auto=webp&s=f5f28e5c34c93167395db6b24a9def39ee4cbbcc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KAi3zm5rNue2TOHK6anYe1qMF5KM6T49KZhVM-N7NVk.jpg?width=640&crop=smart&auto=webp&s=5b2b1388e692b33429597766ce3c06de54ac65ad', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KAi3zm5rNue2TOHK6anYe1qMF5KM6T49KZhVM-N7NVk.jpg?width=960&crop=smart&auto=webp&s=39c8d948819dd3f1c3e4612a04bc4569067409a6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KAi3zm5rNue2TOHK6anYe1qMF5KM6T49KZhVM-N7NVk.jpg?width=1080&crop=smart&auto=webp&s=783686e92051724faa909a89877d8426db22e49d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KAi3zm5rNue2TOHK6anYe1qMF5KM6T49KZhVM-N7NVk.jpg?auto=webp&s=75e5c157c23469bc11ba87c02f0654b4a87957eb', 'width': 1200}, 'variants': {}}]}
Quantization explained with PyTorch - Post-Training Quantization, Quantization-Aware Training
4
**Video tutorial**: [https://www.youtube.com/watch?v=0VdNflU08yA](https://www.youtube.com/watch?v=0VdNflU08yA) **Code**: [https://github.com/hkproj/quantization-notes](https://github.com/hkproj/quantization-notes) **PDF Slides**: [https://github.com/hkproj/quantization-notes/blob/main/Slides.pdf](https://github.com/hkproj/quantization-notes/blob/main/Slides.pdf)
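The core idea of post-training quantization covered in the video, mapping floats onto a small integer grid and back, fits in a few lines; a stdlib-only sketch of uniform affine quantization to 8-bit codes:

```python
def quantize_u8(xs):
    """Map floats onto 256 evenly spaced codes over [min(xs), max(xs)]."""
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / 255 or 1.0        # avoid zero scale for constant input
    codes = [round((x - lo) / scale) for x in xs]
    return codes, scale, lo

def dequantize(codes, scale, lo):
    """Recover approximate floats from codes; error is at most scale/2."""
    return [lo + c * scale for c in codes]

codes, scale, lo = quantize_u8([-1.5, 0.0, 0.1, 2.7, 3.2])
restored = dequantize(codes, scale, lo)
```

Quantization-aware training differs in that this rounding is simulated during training, so the network learns to compensate for the quantization error.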
2023-12-20T13:01:31
https://www.reddit.com/r/LocalLLaMA/comments/18mu2dx/quantization_explained_with_pytorch_posttraining/
APaperADay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mu2dx
false
null
t3_18mu2dx
/r/LocalLLaMA/comments/18mu2dx/quantization_explained_with_pytorch_posttraining/
false
false
self
4
{'enabled': False, 'images': [{'id': 'OVAjBU6FcN6AVohzmOeS1gHarNIdee8uAUAIONerctk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/km0gf62jmo0Q34vp6OLQxGofBIo2kKlrFvWM47lXX9A.jpg?width=108&crop=smart&auto=webp&s=1106652addb6976efbff7edf9fab70b8458290d6', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/km0gf62jmo0Q34vp6OLQxGofBIo2kKlrFvWM47lXX9A.jpg?width=216&crop=smart&auto=webp&s=6a1797c6b6c7ea3499ebe695919903cb5d02c8e6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/km0gf62jmo0Q34vp6OLQxGofBIo2kKlrFvWM47lXX9A.jpg?width=320&crop=smart&auto=webp&s=92972f0cdf75067b63b592dc6d7a72b4563472f7', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/km0gf62jmo0Q34vp6OLQxGofBIo2kKlrFvWM47lXX9A.jpg?auto=webp&s=3933e6ca9f5e82bd4fc6f5ead3c5bca5cd0d228f', 'width': 480}, 'variants': {}}]}
Best local llm for typescript code generation?
1
I know this has been asked many times, but as there are new LLMs released daily, I'd like to know your opinion. My goal is to experiment with different models that would be capable of writing unit/component tests for my **typescript** react app. I've read about WizardCoder, DeepSeek Coder, CodeLlama, ... As I understand, many of them are pretty good with Python, so I'd like to know how they perform with TypeScript instead.
2023-12-20T12:56:20
https://www.reddit.com/r/LocalLLaMA/comments/18mtyqy/best_local_llm_for_typescript_code_generation/
sugy777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mtyqy
false
null
t3_18mtyqy
/r/LocalLLaMA/comments/18mtyqy/best_local_llm_for_typescript_code_generation/
false
false
self
1
null
"ASVD: Activation-aware Singular Value Decomposition for Compressing Large Language Models" - Experiments demonstrate that ASVD can compress network by 10%-20% without losing reasoning capacities.
21
**Paper**: [https://arxiv.org/abs/2312.05821](https://arxiv.org/abs/2312.05821) **GitHub**: [https://github.com/hahnyuan/ASVD4LLM](https://github.com/hahnyuan/ASVD4LLM) **Hugging Face**: [https://huggingface.co/hahnyuan](https://huggingface.co/hahnyuan) **Abstract**: >This paper explores a new post-hoc training-free compression paradigm for compressing Large Language Models (LLMs) to facilitate their wider adoption in various computing environments. We delve into the challenges of LLM compression, notably their dependency on extensive training data and computational resources. We propose a training-free approach dubbed Activation-aware Singular Value Decomposition (**ASVD**) to address these limitations. ASVD effectively manages activation outliers by adjusting the weight matrix based on the activation distribution, improving decomposition accuracy and efficiency. Our method also addresses the varying sensitivity of different LLM layers to decomposition, with an iterative calibration process for optimal layer-specific decomposition. **Experiments demonstrate that ASVD can compress network by 10%-20% without losing reasoning capacities.** Additionally, it can be seamlessly integrated with other LLM compression paradigms, showcasing its flexible compatibility. Code and compressed models are available at [this https URL](https://github.com/hahnyuan/ASVD4LLM).
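The arithmetic behind those compression numbers: replacing an m x n weight matrix with rank-k factors U (m x k) and V (k x n) pays off whenever k(m + n) < mn. A quick sketch of the bookkeeping (plain low-rank counting; ASVD's activation-aware part is about *choosing* the decomposition per layer, not this arithmetic):

```python
def factored_params(m: int, n: int, k: int) -> int:
    """Parameter count after a rank-k factorization of an m x n matrix."""
    return k * (m + n)

def savings(m: int, n: int, k: int) -> float:
    """Fraction of parameters removed by the factorization."""
    return 1 - factored_params(m, n, k) / (m * n)

# e.g. a 4096 x 4096 projection truncated to rank ~1638 -> ~20% smaller
s = savings(4096, 4096, 1638)
```

For a square matrix this means the rank must drop below m/2 before there are any savings at all, which is why the paper's 10%-20% figures correspond to fairly aggressive truncation.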
2023-12-20T12:24:42
https://www.reddit.com/r/LocalLLaMA/comments/18mteeo/asvd_activationaware_singular_value_decomposition/
APaperADay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mteeo
false
null
t3_18mteeo
/r/LocalLLaMA/comments/18mteeo/asvd_activationaware_singular_value_decomposition/
false
false
self
21
null
Can I do anything using a GeForce RTX 4050 with 6GB GDDR6?
6
So I bought a laptop with a GPU because I thought it would be nice to be able to do some local ML stuff... But seems like I seriously underestimated how much VRAM you need to run models. What's the best model I could realistically get use out of on my laptop running locally? Thanks!
2023-12-20T12:20:27
https://www.reddit.com/r/LocalLLaMA/comments/18mtbv1/can_i_do_anything_using_a_geforce_rtx_4050_with/
xjustwaitx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mtbv1
false
null
t3_18mtbv1
/r/LocalLLaMA/comments/18mtbv1/can_i_do_anything_using_a_geforce_rtx_4050_with/
false
false
self
6
null
One-command line to run Self-hosted open source LLM models on Mac/across devices
1
2023-12-20T12:19:49
https://www.secondstate.io/articles/run-llm-sh/
smileymileycoin
secondstate.io
1970-01-01T00:00:00
0
{}
18mtbe4
false
null
t3_18mtbe4
/r/LocalLLaMA/comments/18mtbe4/onecommand_line_to_run_selfhosted_open_source_llm/
false
false
default
1
null
model layers distribution on a dual gpu setup
2
Hi all, I'm desperately searching for a way (if it exists) to load a model and distribute inference across a dual-GPU setup on the same machine. To be clear, I am not trying to speed up inference by loading all the layers on both GPUs, but to distribute the layers 50-50. I searched intensively online but did not find any (open source) solution to this. Any experience/advice?
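This is supported out of the box in the common stacks: llama.cpp has `--tensor-split` (e.g. `--tensor-split 1,1` for an even split), and Hugging Face Accelerate does it via `device_map="auto"` or `"balanced"`. The underlying idea is just a layer-to-device assignment; a toy sketch (the helper name is mine, not a library API):

```python
def layer_device_map(n_layers: int, devices=("cuda:0", "cuda:1")) -> dict:
    """Assign the first half of the transformer layers to the first GPU
    and the rest to the second, i.e. a 50-50 pipeline split."""
    half = n_layers // 2
    return {i: devices[0] if i < half else devices[1]
            for i in range(n_layers)}

dmap = layer_device_map(40)
```

Note this is a pipeline split: each token still passes through both GPUs in sequence, so it spreads memory, not compute.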
2023-12-20T12:10:45
https://www.reddit.com/r/LocalLLaMA/comments/18mt60j/model_layers_distribution_on_a_dual_gpu_setup/
MethodParking7226
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mt60j
false
null
t3_18mt60j
/r/LocalLLaMA/comments/18mt60j/model_layers_distribution_on_a_dual_gpu_setup/
false
false
self
2
null
I suggest a Lingual Mu-zero experiment
1
I suggest someone with the skill and resources try making a group of LLM scientists. Say they study a game: chess, checkers, reversi, solitaire, Sudoku, Sokoban, or Minesweeper. You provide an interface, such as asking for a lingual expression of a board configuration and the legal moves. (LLMs may or may not have vision.)

A group of LLMs share the goal of identifying and expressing in clear language how to win a chess game. They are instructed to break down the research process into something resembling human science. They sample (experiment), observe, hypothesize, write up, and peer-review. They identify valuable information and apply it in future research. Maybe first make a fine Advisor bot who will direct every student's research.

A volume of papers will be produced, and the accepted journal volumes will be used as training data for the next generation, so that progress is baked into the models rather than carried in a long wrap-up context. They are free to forget all irrelevant knowledge and know only the game they are studying.

How will LLMs explain how to play well? If this is too much, why not open an "LLM Chess Engine Contest"? (Must output a tree of thought at each move; a non-lingual component is not allowed.)
2023-12-20T11:54:59
https://www.reddit.com/r/LocalLLaMA/comments/18mswc8/i_suggest_a_lingual_muzero_experiment/
SpecialNothingness
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mswc8
false
null
t3_18mswc8
/r/LocalLLaMA/comments/18mswc8/i_suggest_a_lingual_muzero_experiment/
false
false
self
1
null
OpenAI’s investor return used to be capped at 100x, but they changed the rule so the cap will roughly double every 4 years starting 2025. I am thankful for Open source community. OpenAI structure is a scam and they claim to do it for the ‘good of humanity’ while building on top of Open source.
1
[removed]
2023-12-20T10:16:42
https://i.redd.it/ysbirflrdf7c1.png
Iboxelephants
i.redd.it
1970-01-01T00:00:00
0
{}
18mrdov
false
null
t3_18mrdov
/r/LocalLLaMA/comments/18mrdov/openais_investor_return_used_to_be_capped_at_100x/
false
false
https://a.thumbs.redditm…2fDGwM1dt1w4.jpg
1
{'enabled': True, 'images': [{'id': 'mvNV5SYlBbpZBS7ELyyWodUrF46eG0NphQrD_ntD5Bo', 'resolutions': [{'height': 26, 'url': 'https://preview.redd.it/ysbirflrdf7c1.png?width=108&crop=smart&auto=webp&s=8658df54f7fe1e6f1f3583a4c13c4a4a061f4031', 'width': 108}, {'height': 52, 'url': 'https://preview.redd.it/ysbirflrdf7c1.png?width=216&crop=smart&auto=webp&s=8976bc4e8c6118d7e1a7d53db2e14e8740f2e2a7', 'width': 216}, {'height': 77, 'url': 'https://preview.redd.it/ysbirflrdf7c1.png?width=320&crop=smart&auto=webp&s=ef723b136f9e05c36100577860e4edfa6b2ed323', 'width': 320}, {'height': 154, 'url': 'https://preview.redd.it/ysbirflrdf7c1.png?width=640&crop=smart&auto=webp&s=7b8c5569fd480ae6f712891f415eaf839bd6577c', 'width': 640}, {'height': 232, 'url': 'https://preview.redd.it/ysbirflrdf7c1.png?width=960&crop=smart&auto=webp&s=3e8e4aaeb1a9236d2a659e27fbd57b7a7f090a2f', 'width': 960}, {'height': 261, 'url': 'https://preview.redd.it/ysbirflrdf7c1.png?width=1080&crop=smart&auto=webp&s=9b4a0583d232e2ec8c178dc5cea15feda33faabe', 'width': 1080}], 'source': {'height': 261, 'url': 'https://preview.redd.it/ysbirflrdf7c1.png?auto=webp&s=3d16e93aa026d57175e207e2bb4c521ccde8d835', 'width': 1080}, 'variants': {}}]}
Best practice for RAG with follow-up chat questions and LLM conversation?
15
I am building something like a personal AI tutor for a hobby. I have done a few POCs using Langchain and Autogen, but I am still not sure what the right "architecture" would look like. Simple example: discuss with the student which subject (e.g. history), and which particular topic (e.g. ancient Rome) should be studied. Then the chain would need to "collect" the relevant knowledge from RAG sources (so it has grounding and does not hallucinate, and also has access to the exact set of knowledge that is determined by a particular school system). Then after it has the basics collected via the RAG pattern, it would discuss the topic with the student, ask follow-up questions, provide tests for the student to assess their knowledge, etc. In this "mode", it wouldn't use any RAG lookup, but use what it has already put in the context earlier. The problem with Langchain ConversationalRetrievalChain with ConversationBufferMemory is that it still assumes that I am mainly using it for a RAG-like use case. Maybe I am mistaken, not sure. I have built another POC with Autogen, and that seems to work better, e.g. it can use RAG more like a tool (function), and it can be better "steered" into the above working mode. But it is still not perfect. Also, it would not be a bad thing to realize this without having to use Langchain. **TL/DR:** I was wondering if there's a best practice for my use case, a personal tutor, which uses a RAG pattern, but where the student would also be able to chat extensively about the already extracted info.
2023-12-20T09:44:42
https://www.reddit.com/r/LocalLLaMA/comments/18mqwg6/best_practice_for_rag_with_followup_chat/
bbence84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mqwg6
false
null
t3_18mqwg6
/r/LocalLLaMA/comments/18mqwg6/best_practice_for_rag_with_followup_chat/
false
false
self
15
null
What is the optimal model to run on 64GB +16 VRAM?
25
I want to run an LLM locally, the smartest possible one, not necessarily getting an immediate answer but achieving a speed of 5-10 tokens per second. As for the model's skills, I don't need it for character-based chatting. I want something that can assist with: \- text writing \- coding in py, js, php My hardware specs are: Intel 8-core/64GB RAM/nVidia-4080/16GB VRAM/Win10. What I tried: **Mixtral 8x7b** - Q8, Q5, and Q3 versions. All of them produce 2-3 tokens per second with a waiting delay of 140 seconds. This speed is too slow for my requirements, and the Q8 version consumes too much RAM, making it difficult to browse the internet. **Orca2 13b Q8** - This model performs well, achieving a speed of over 10 tokens per second. However, it may not be the most intelligent option.
2023-12-20T09:32:18
https://www.reddit.com/r/LocalLLaMA/comments/18mqpuv/what_is_the_optimal_model_to_run_on_64gb_16_vram/
kitten888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mqpuv
false
null
t3_18mqpuv
/r/LocalLLaMA/comments/18mqpuv/what_is_the_optimal_model_to_run_on_64gb_16_vram/
false
false
self
25
null
I predict that in late 2024, we will see alternate architecture LLMs that significantly outperform transformers of the same size / parameters on Ultra long context ( 1 -10 million context tokens). Also, I am not affiliated with this company. I am just excited about Mamba, Stripedhyena and this.
28
2023-12-20T08:58:53
https://www.reddit.com/gallery/18mq81h
Iboxelephants
reddit.com
1970-01-01T00:00:00
0
{}
18mq81h
false
null
t3_18mq81h
/r/LocalLLaMA/comments/18mq81h/i_predict_that_in_late_2024_we_will_see_alternate/
false
false
https://b.thumbs.redditm…78mZuYYgjYSg.jpg
28
null
Does LLAMMA have local memory of previous prompts? How does it work? Does it use special layer or does it just add everything to one big promt?
8
Does LLaMA have local memory of previous prompts? How does it work? Does it use a special layer, or does it just add everything to one big prompt? I have seen some posts (but I am not sure if it was on LocalLLaMA) where a guy made a game similar to Dungeons and Dragons. The neural network answered his questions and played the game, improving the story over several weeks. How is this possible? Where is all the data stored?
2023-12-20T08:13:02
https://www.reddit.com/r/LocalLLaMA/comments/18mpkez/does_llamma_have_local_memory_of_previous_prompts/
glorsh66
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mpkez
false
null
t3_18mpkez
/r/LocalLLaMA/comments/18mpkez/does_llamma_have_local_memory_of_previous_prompts/
false
false
self
8
null
Mixtral not yet at the level of gpt3.5
1
arxiv.org/abs/2312.11444
2023-12-20T07:50:18
https://i.redd.it/67gudt8nne7c1.png
Eastwindy123
i.redd.it
1970-01-01T00:00:00
0
{}
18mp8lr
false
null
t3_18mp8lr
/r/LocalLLaMA/comments/18mp8lr/mixtral_not_yet_at_the_level_of_gpt35/
false
false
https://b.thumbs.redditm…BoHcTaOIz00I.jpg
1
{'enabled': True, 'images': [{'id': 'oCBKvHls1wE_jtHUS55eXChvc05xg1OH3GZ9o1ZgpCI', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/67gudt8nne7c1.png?width=108&crop=smart&auto=webp&s=0af7a6398bc4a62f3dd4b5074305d454443979c7', 'width': 108}, {'height': 97, 'url': 'https://preview.redd.it/67gudt8nne7c1.png?width=216&crop=smart&auto=webp&s=6747b37594a7e477baa0ff5beff36876d8a60b74', 'width': 216}, {'height': 143, 'url': 'https://preview.redd.it/67gudt8nne7c1.png?width=320&crop=smart&auto=webp&s=7903b3e10a10ea963f1a61c323ce812a23124ef0', 'width': 320}, {'height': 287, 'url': 'https://preview.redd.it/67gudt8nne7c1.png?width=640&crop=smart&auto=webp&s=b2582379239d11008a55002303c235b87fbe0565', 'width': 640}, {'height': 431, 'url': 'https://preview.redd.it/67gudt8nne7c1.png?width=960&crop=smart&auto=webp&s=10f598970c0abfc8aa9c12f5b790104b0dbf6aec', 'width': 960}, {'height': 485, 'url': 'https://preview.redd.it/67gudt8nne7c1.png?width=1080&crop=smart&auto=webp&s=b1db1221254ff59560e72ad9eb444c5ade5853cb', 'width': 1080}], 'source': {'height': 1344, 'url': 'https://preview.redd.it/67gudt8nne7c1.png?auto=webp&s=ce70aceb67d89f088e69f796d7b4d119fc9127a2', 'width': 2992}, 'variants': {}}]}
Recommendations on locally runnable LLMs with large input token limits?
3
I'm familiar with LLaMA/2 and its derivatives, but it only supports \~4k tokens out of the box. Are there any other open source LLMs that I can run locally on my machine with larger input limits? My use case is summarizing a large amount of text and figuring out chapters to group it by. Other info: I have a 3090, and intend to interact with the LLM using Python. From my research, most LLMs (haven't vetted LLaMA) focus on the start and end of the input, and OpenAI did a lot of work to ensure GPT-4 didn't face this issue; so something that also grasps all of the input would be awesome.
2023-12-20T07:10:57
https://www.reddit.com/r/LocalLLaMA/comments/18moo27/recommendations_on_locally_runnable_llms_with/
CorerMaximus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18moo27
false
null
t3_18moo27
/r/LocalLLaMA/comments/18moo27/recommendations_on_locally_runnable_llms_with/
false
false
self
3
null
Looking for on premise Huggingface Autotrain that able to deploy and train model locally
1
[removed]
2023-12-20T06:38:08
https://www.reddit.com/r/LocalLLaMA/comments/18mo5mn/looking_for_on_premise_huggingface_autotrain_that/
awakendragon82
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mo5mn
false
null
t3_18mo5mn
/r/LocalLLaMA/comments/18mo5mn/looking_for_on_premise_huggingface_autotrain_that/
false
false
self
1
null
200k usd machine capabilities?
1
[removed]
2023-12-20T06:35:59
https://www.reddit.com/r/LocalLLaMA/comments/18mo4i2/200k_usd_machine_capabilities/
Noname6425
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mo4i2
false
null
t3_18mo4i2
/r/LocalLLaMA/comments/18mo4i2/200k_usd_machine_capabilities/
false
false
default
1
null
How does one merge smaller model with larger model?
3
Is it possible to merge tiny llama with llama 2? I have this idea, to create a tiny 124M param model with domain specific knowledge and then merge that with a larger post training. Any suggestions or ideas 💡:)
2023-12-20T06:33:24
https://www.reddit.com/r/LocalLLaMA/comments/18mo31h/how_does_one_merge_smaller_model_with_larger_model/
Independent_Key1940
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mo31h
false
null
t3_18mo31h
/r/LocalLLaMA/comments/18mo31h/how_does_one_merge_smaller_model_with_larger_model/
false
false
self
3
null
AI development Future
2
I am curious about the future trajectory of AI development - is it more beneficial for a fresher like me to specialize in customizing pre-trained models to meet specific needs, or should I focus on building models from the ground up? Given the prevalent use of services like OpenAI in companies, it seems the role might predominantly involve fine-tuning models and handling deployment. I would greatly appreciate your insights on this matter and any advice you have regarding the skills and areas of expertise I should prioritize as I embark on my career in AI development. Also, I am looking for a good internship in the AI/ML space, just for experience. You can take my stipend if you can get me an internship.
2023-12-20T06:23:54
https://www.reddit.com/r/LocalLLaMA/comments/18mnx9t/ai_development_future/
Amanporwal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mnx9t
false
null
t3_18mnx9t
/r/LocalLLaMA/comments/18mnx9t/ai_development_future/
false
false
default
2
null
Finetune the base/causal model and then merge with an instruct tuned model...
2
I want to finetune the base model (Mistral-7B-v0.1) on some PDFs on a subject (anthropology) and then merge it with a finetuned instruct model (teknium/OpenHermes-2.5-Mistral-7B), resulting in a 7B model with both the instruct layer and the base layers finetuned on my data. I do believe the base/causal model for OpenHermes-2.5-Mistral-7B is Mistral-7B-v0.1, so they shouldn't have compatibility issues. Is this a good idea? Is it doable?
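One common way to realize this kind of merge is "task arithmetic": compute the delta your domain finetune added to the base weights, then add that delta onto the instruct model, which shares the same base. The sketch below is a toy illustration of that idea only; `merge_task_vector` and `alpha` are hypothetical names, plain dicts stand in for real state_dicts, and a real merge would load checkpoints via transformers/safetensors or use a tool like mergekit.

    # Hedged sketch of "task arithmetic" merging: add the delta between the
    # finetuned base and the original base onto the instruct model's weights.
    # alpha scales how strongly the domain finetune is applied.

    def merge_task_vector(base, finetuned_base, instruct, alpha=1.0):
        merged = {}
        for name, w_instruct in instruct.items():
            delta = finetuned_base[name] - base[name]   # what finetuning changed
            merged[name] = w_instruct + alpha * delta
        return merged

    # Tiny numeric example with scalar "weights"
    base           = {"layer.0.w": 1.0}
    finetuned_base = {"layer.0.w": 1.5}   # finetune moved it by +0.5
    instruct       = {"layer.0.w": 2.0}
    print(merge_task_vector(base, finetuned_base, instruct))  # {'layer.0.w': 2.5}

In practice an alpha below 1.0 often helps avoid degrading the instruct behavior, but that is an empirical tuning knob, not a guarantee.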
2023-12-20T06:09:34
https://www.reddit.com/r/LocalLLaMA/comments/18mnoi6/finetune_the_basecausal_model_and_then_merge_with/
Yyc889
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mnoi6
false
null
t3_18mnoi6
/r/LocalLLaMA/comments/18mnoi6/finetune_the_basecausal_model_and_then_merge_with/
false
false
self
2
null
Creating a Blackbox Leaderboard
13
I don't need to tell you how important blackbox questions are for evaluating LLMs -- and we should start crowdsourcing a blackbox leaderboard immediately. The site rewards "Writers" with karma when they create quality blackbox questions. The highest karma questions are aggregated server-side as the "Blackbox Evaluation". The Blackbox Evaluation changes, but is dense enough that the cream rises to the top (similar to how IQ scores are reliable when sufficiently dense). Site Experience: 1. Writers create a set of hidden questions which are revealed after 2 months. 2. If the revealed questions suck, users downvote the Writer. 3. If the revealed questions are good, users upvote the Writer. 4. A leaderboard of u/the-bloke quants are evaluated (server-side) against the current Blackbox Evaluation. 5. Models are reported by percentile on the Blackbox leaderboard, not by absolute metrics. Simply put -- Good Writers get leaderboard glory -- and more importantly -- the models are continuously evaluated on the most trusted blackbox questions. I invite your criticism.
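The server-side aggregation in steps 1-5 could be sketched roughly as follows; the function names, the karma-based selection, and the percentile scheme are illustrative assumptions on my part, not a spec.

    # Toy sketch of the proposal: keep the top-k questions by Writer karma as
    # the current "Blackbox Evaluation", then report each model's standing as
    # a percentile among the evaluated models rather than an absolute score.

    def current_blackbox_set(questions, top_k=3):
        """questions: list of (question_id, karma); return ids of the top-karma ones."""
        ranked = sorted(questions, key=lambda q: q[1], reverse=True)
        return [qid for qid, _ in ranked[:top_k]]

    def percentile_ranks(scores):
        """scores: {model: raw score}; percentile of each model (needs >= 2 models)."""
        vals = sorted(scores.values())
        n = len(vals)
        return {m: 100.0 * sum(v < s for v in vals) / (n - 1) for m, s in scores.items()}

    qs = [("q1", 40), ("q2", 95), ("q3", 10), ("q4", 60)]
    print(current_blackbox_set(qs, top_k=2))            # ['q2', 'q4']
    print(percentile_ranks({"A": 10, "B": 20, "C": 30}))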
2023-12-20T06:05:42
https://www.reddit.com/r/LocalLLaMA/comments/18mnm7n/creating_a_blackbox_leaderboard/
Sweet_Protection_163
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mnm7n
false
null
t3_18mnm7n
/r/LocalLLaMA/comments/18mnm7n/creating_a_blackbox_leaderboard/
false
false
self
13
null
Is this currently possible before I invest time
16
I don’t mind putting in the time and effort needed to learn something. But damn, this AI/ML stuff is a lot to learn. I train soldiers on army doctrine: what an order is, how to write it, etc. In the process I basically write orders for them. This is all based on published rules/doctrine. You can think of it as everything having an SOP. I understand I can summarize documents. I can query/chat with documents. But can I train a model on 10-20 books and thousands of examples, and then ask it to spit out an order? Depending on the unit it would be different: air units need one type of order, ground units another, artillery another, etc. This is just to help me help them. Normally I just do a name replace in a document and tell them this is their example. Was just thinking it might be more beneficial to have more realistic examples/starting orders. Would something like that be an embedding or fine tuning, assuming it’s even plausible?
2023-12-20T05:36:57
https://www.reddit.com/r/LocalLLaMA/comments/18mn3md/is_this_currently_possible_before_i_invest_time/
FatGuyQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mn3md
false
null
t3_18mn3md
/r/LocalLLaMA/comments/18mn3md/is_this_currently_possible_before_i_invest_time/
false
false
self
16
null
What model would you use to train a “recipe” bot locally?
3
Basically I want to train a bot to come up with recipe ideas based on some given data. Eg if I give it a typical shopping list and some info about food usage life, I want it to come up with the optimal recipes to use all the food etc. I have a 3070 and I’m going to be upgrading to 64Gb RAM or maybe even more. I’m new to this but am a web data developer by trade so not completely useless
2023-12-20T05:20:15
https://www.reddit.com/r/LocalLLaMA/comments/18mmt1w/what_model_would_you_use_to_train_a_recipe_bot/
Putrid-Tough4558
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mmt1w
false
null
t3_18mmt1w
/r/LocalLLaMA/comments/18mmt1w/what_model_would_you_use_to_train_a_recipe_bot/
false
false
self
3
null
What are your predictions/wants for 2024
42
I’ve been hearing lots of theories floating around. Just wanted to know what this sub thinks is coming in 2024?? One promising theory I heard that I believe will happen in 2024 is that local LLM’s will make it big! I believe Julien Chaumond was the one who made the prediction and I can see myself agreeing with him.
2023-12-20T05:08:27
https://www.reddit.com/r/LocalLLaMA/comments/18mmlg9/what_are_your_predictionswants_for_2024/
GoodUnderstanding728
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mmlg9
false
null
t3_18mmlg9
/r/LocalLLaMA/comments/18mmlg9/what_are_your_predictionswants_for_2024/
false
false
self
42
null
What is required to run deepseek coder?
1
I am trying to run the 33B and 6.7B Deepseek Coder models on an Nvidia 3090, but both terminate when I try to run them.
2023-12-20T04:33:35
https://www.reddit.com/r/LocalLLaMA/comments/18mlyzh/what_is_required_to_run_deepseek_coder/
SillyLilBear
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mlyzh
false
null
t3_18mlyzh
/r/LocalLLaMA/comments/18mlyzh/what_is_required_to_run_deepseek_coder/
false
false
self
1
null
LLM resources for beginners
1
[removed]
2023-12-20T04:19:46
https://www.reddit.com/r/LocalLLaMA/comments/18mlpo7/llm_resources_for_beginners/
TriDoHuu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mlpo7
false
null
t3_18mlpo7
/r/LocalLLaMA/comments/18mlpo7/llm_resources_for_beginners/
false
false
self
1
null
Better LLM than Pivot_Evil for RP?
8
It’s been my go-to for its sheer versatility and willingness to Go There when I ask it to. I can run 7Bs and some smaller 13Bs. I haven’t tried any quants of larger models. THANKS!
2023-12-20T04:11:30
https://www.reddit.com/r/LocalLLaMA/comments/18mlk44/better_llm_than_pivot_evil_for_rp/
AmericanKamikaze
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mlk44
false
null
t3_18mlk44
/r/LocalLLaMA/comments/18mlk44/better_llm_than_pivot_evil_for_rp/
false
false
self
8
null
Any prompt experts know how to force a model to not ask a question at the end of every response in a chat conversation?
3
I realized one thing that feels fake about multi-turn conversations is the LLM is always asking a question at the end of each response. This is not the normal flow of most conversations I have. I tried adjusting the system prompt and told it to not ask questions at the end of a response but it didn't help. I used MythoMax-L2-13b as well as a few other 7b models, but they all had the same issue. I'm using LM Studio. If anyone has any guidance it would be greatly appreciated
2023-12-20T04:07:02
https://www.reddit.com/r/LocalLLaMA/comments/18mlh2g/any_prompt_experts_know_how_to_force_a_model_to/
DangerousBenefit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mlh2g
false
null
t3_18mlh2g
/r/LocalLLaMA/comments/18mlh2g/any_prompt_experts_know_how_to_force_a_model_to/
false
false
self
3
null
Having Difficulty Getting Mixtral 8x7b to Be Consistent and Stay on Task
4
I'm trying to run Mixtral 8x7b (mistral-8x7b-q4\_k\_m) locally to build a Context/Question/Answer dataset from a set of classical (Hellenic/Roman) history works, but it's really wonky. Sometimes my question generation function produces great results: "Who was Aspasia?" or "N/A" if the question type doesn't match the context (who/what/where/when/why/how). Other times (see below) I get a weird "Who" question (1st row) with a decent answer...until the fourth sentence, when it starts rambling. Or a weird "where" question (3rd row) that also includes the "N/A" that should have been there, plus a good answer to the question implied in the 2nd row: |Context|Question|Answer| |:-|:-|:-| |**has by the time of Pericles been reduced in effect to a legislative committee of the ekklesia. Its members are chosen by lot and rote from the register of the citizens, fifty for each of the ten tribes; they serve for a year only...( blah blah blah for 1 page)**|Who, by the time of Pericles, had seen the Council of Athens reduced in effect to a legislative committee of the ekklesia?|By the time of Pericles, the Council of Athens had been reduced in effect to a legislative committee of the ekklesia. This happened as the original functions and powers of the Council were increasingly restricted and limited by changes in laws and regulations over time. By the mid-fifth century BCE, during the time of Pericles, the Council's main function had become to serve as this legislative committee for the ekklesia. In essence, then, by this point in time, the Council of Athens itself had been reduced in effect to somewhat of a lesser body and an offshoot, if you will, of the ekklesia—the main Assembly or political body of the democracy of Athens—whose own legislative powers, functions, duties, and responsibilities were far greater and much more extensive than those of the Council at this time. In fact, by this point in time, during the era of Pericles, the ekklesia, as the main Assembly or political body of the democracy of Athens, was itself fully vested with, endowed with, and imbued with, to its fullest extent possible, all of the combined inherent, latent, potential, actual, intrinsic, essential, organic, natural, native, domestic, familiar, customary, conventional, accustomed, habitual, wonted, rooted, grounded, fixed, set, established, arranged, ordered, regulated, prescribed, mandated, dictated, required, desired, sought after, wished for, hoped for, imagined, conceived, contemplated, deliberated, pondered, meditated, reflected, brooded, and|
|**has by the time of Pericles been reduced in effect to a legislative committee of the ekklesia. Its members are chosen by lot and rote from the register of the citizens, fifty for each of the ten tribes; they serve for a year only, and receive, in the fourth century, five obols per day. Since each councilor is disqualified for re-election until all other eligible citizens have had a chance to serve,( blah blah blah for 1 page)**|N/A|The concept of law in ancient Greece was deeply intertwined with their religious beliefs and practices. The earliest Greeks viewed law as sacred custom, divinely sanctioned and revealed through customs known as themis. Law was a part of theology, and the oldest Greek laws of property were mingled with liturgical regulations in the ancient temple codes. Perhaps as time went on, this notion of law began to shift from being viewed primarily as sacred custom to one that emphasized more secular aspects such as "from status to contract" or as an embodiment of moral order and harmony as seen in concepts such as Rita (India), Tao/Tien (China) etc.|
|**has by the time of Pericles been reduced in effect to a legislative committee of the ekklesia. Its members are chosen by lot and rote from the register of the citizens, fifty for each of the ten tribes; they serve for a year only, and receive, in the fourth century, five obols per day. Since each councilor is disqualified for re-election until all other eligible citizens have had a chance to serve,( blah blah blah for 1 page)**|\> Where did the legal code of the earliest Greeks mainly originate from? N/A|The legal code of the earliest Greeks mainly originated from sacred custom and temple codes. These early laws were closely connected with religious beliefs and practices.|

I feel like maybe I'm fighting the training? Like the model just wants to answer no matter what, so my prompting has unreliable effects?

My code, with the prompts in the "generate\_questions\_and\_answers" function:

    import pandas as pd
    import openai
    import os

    def generate_questions_and_answers(text_chunk, client, model_name="mistral-8x7b-q4_k_m"):
        questions = []
        answers = []
        # Directives for the model to generate questions based on context relevance
        directives = {
            "Who": "If the context is about a person, characters, entity, or people, generate a single 'Who' question without answering, e.g. 'Who was Aristophanes?' or 'Who founded the achaemenid dynasty?'. If answering the question requires making inferences or knowledge outside of the context, write only 'N/A' instead of a question.",
            "What": "If the context is about objects, concepts, or phenomena, generate a single 'What' question without answering, e.g. 'What are latifundia?' or 'What is ostracism?' If answering the question requires making inferences or knowledge outside of the context, write only 'N/A' instead of a question.",
            "Where": "If the context mainly describes a place or location, generate a single 'Where' question without answering, e.g., 'Where did the Eastern empire extend to?' or 'Where did xerxes cross the hellespont?' If answering the question requires making inferences or knowledge outside of the context, write only 'N/A' instead of a question.",
            # "When": "If the context has dates or events that occurred at a particular time, generate a 'When' question (e.g., 'When did the battle of Platea occur?').",
            # "Why": "If the context explains reasons or causes, generate a 'Why' question (e.g., 'Why did Christianity appeal to slaves?').",
            # "How": "If the context discusses methods or processes, generate a 'How' question (e.g., 'How did Athens stop class warfare during the Periclean age?').",
        }

        for q_type, directive in directives.items():
            # Generate a question with the directive
            question_prompt = f"[INST]{directive} Context: '{text_chunk}' [/INST]"
            question_response = client.completions.create(model=model_name, prompt=question_prompt, max_tokens=100)
            question = question_response.choices[0].text.strip()

            # Only proceed if a question is generated
            if question and not question.startswith("Context:"):
                questions.append(question)
                # Generate an answer
                answer_prompt = f"[INST] Given the context: '{text_chunk}', give a detailed, complete answer to the question: '{question}'. Use only the context to answer, do not give references. Simply answer the question without editorial comments. [/INST]"
                answer_response = client.completions.create(model=model_name, prompt=answer_prompt, max_tokens=350)
                answer = answer_response.choices[0].text.strip()
                answers.append(answer)

        return questions, answers

    # Point to the local server
    client = openai.OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

    # Specify the path of a single file for testing
    file_path = "/Users/williammarcellino/Documents/Will Durant/Durant Chunked & Cleaned/Durant LifeofGreece-82.txt_part_2.txt"  # Replace with the path of your test file

    # List to store Q&A pairs
    qa_data = []

    # Process the specified file
    with open(file_path, 'r') as file:
        text_chunk = file.read()
        questions, answers = generate_questions_and_answers(text_chunk, client)
        for q, a in zip(questions, answers):
            qa_data.append({"Context": text_chunk, "Question": q, "Answer": a})

    # Create DataFrame from the collected data
    qa_df = pd.DataFrame(qa_data)

    # Export to CSV
    qa_df.to_csv("/Users/williammarcellino/Documents/Will Durant/durant_Q&A_test.csv", index=False)

Any suggestions for getting this to NOT generate a question if there isn't a fit? Or is this too much to ask of the model?
2023-12-20T03:55:22
https://www.reddit.com/r/LocalLLaMA/comments/18ml8p9/having_difficulty_getting_mixtral_8x7b_to_be/
Mbando
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ml8p9
false
null
t3_18ml8p9
/r/LocalLLaMA/comments/18ml8p9/having_difficulty_getting_mixtral_8x7b_to_be/
false
false
self
4
null
Mistral 7b is amazing but error rate is 60%, fine-tuning made it worse.. any fine-tuning tips?
7
I ran Mistral to create a dataset of 3k good examples, 40% of my original set. I curated a perfect set of examples and ran a fine-tuning training for 8 epochs; the loss went from 4 to 0.003xxxx, yet the error rate shot up to 100%. I figured it would get more accurate. Anyone have any advice? No idea why the model lost the ability to produce the data instead of getting more accurate.
2023-12-20T03:33:43
https://www.reddit.com/r/LocalLLaMA/comments/18mktu0/mistral_7b_is_amazing_but_error_rate_is_60/
Tiny_Arugula_5648
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mktu0
false
null
t3_18mktu0
/r/LocalLLaMA/comments/18mktu0/mistral_7b_is_amazing_but_error_rate_is_60/
false
false
self
7
null
Maybe we will be able to run far larger models on Apple hardware than previously thought
128
[https://huggingface.co/papers/2312.11514](https://huggingface.co/papers/2312.11514) Large language models (LLMs) are central to modern natural language processing, delivering exceptional performance in various tasks. However, their intensive computational and memory requirements present challenges, especially for devices with limited DRAM capacity. This paper tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters on flash memory but bringing them on demand to DRAM. Our method involves constructing an inference cost model that harmonizes with the flash memory behavior, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks. Within this flash memory-informed framework, we introduce two principal techniques. First, "windowing'" strategically reduces data transfer by reusing previously activated neurons, and second, "row-column bundling", tailored to the sequential data access strengths of flash memory, increases the size of data chunks read from flash memory. These methods collectively enable running models up to twice the size of the available DRAM, with a 4-5x and 20-25x increase in inference speed compared to naive loading approaches in CPU and GPU, respectively. Our integration of sparsity awareness, context-adaptive loading, and a hardware-oriented design paves the way for effective inference of LLMs on devices with limited memory.
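To make the "windowing" idea concrete, here is a toy sketch (my own illustration, not the paper's implementation): parameters for neurons active in the last k tokens stay resident in DRAM, and only the neurons newly activated by each incoming token need to be fetched from flash.

    # Toy sketch of "windowing": track which neuron ids are resident in DRAM
    # based on the last window_size tokens' activations, and report which ids
    # would have to be loaded from flash at each step. Illustrative only.

    from collections import deque

    class WindowedNeuronCache:
        def __init__(self, window_size):
            self.window = deque(maxlen=window_size)  # active-neuron sets per recent token
            self.resident = set()                    # neuron ids currently in DRAM

        def step(self, active_neurons):
            """Process one token's active-neuron set; return ids to load from flash."""
            to_load = set(active_neurons) - self.resident
            self.window.append(set(active_neurons))
            # Residency is the union of active sets across the window
            self.resident = set().union(*self.window)
            return to_load

    cache = WindowedNeuronCache(window_size=2)
    print(cache.step({1, 2, 3}))  # all three must come from flash
    print(cache.step({2, 3, 4}))  # only neuron 4 is new
    print(cache.step({4, 5}))     # 1 has fallen out of the window; only 5 is new

The actual paper pairs this reuse with row-column bundling and sparsity prediction; this sketch only shows why reusing recently activated neurons cuts flash traffic.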
2023-12-20T03:12:38
https://www.reddit.com/r/LocalLLaMA/comments/18mkeu1/maybe_we_will_be_able_to_run_far_larger_models_on/
Ward_0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mkeu1
false
null
t3_18mkeu1
/r/LocalLLaMA/comments/18mkeu1/maybe_we_will_be_able_to_run_far_larger_models_on/
false
false
self
128
{'enabled': False, 'images': [{'id': '0WhVUSV5SwStzdYD55KrzVCUDKQ9pGfDlCDaeQq-nFA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KAi3zm5rNue2TOHK6anYe1qMF5KM6T49KZhVM-N7NVk.jpg?width=108&crop=smart&auto=webp&s=d7a4aab053390b7f6a8e173e52ad38e3ce9b7908', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KAi3zm5rNue2TOHK6anYe1qMF5KM6T49KZhVM-N7NVk.jpg?width=216&crop=smart&auto=webp&s=2295ec75f89ae6bbd99707c341fbfc3d08103e4e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KAi3zm5rNue2TOHK6anYe1qMF5KM6T49KZhVM-N7NVk.jpg?width=320&crop=smart&auto=webp&s=f5f28e5c34c93167395db6b24a9def39ee4cbbcc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KAi3zm5rNue2TOHK6anYe1qMF5KM6T49KZhVM-N7NVk.jpg?width=640&crop=smart&auto=webp&s=5b2b1388e692b33429597766ce3c06de54ac65ad', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KAi3zm5rNue2TOHK6anYe1qMF5KM6T49KZhVM-N7NVk.jpg?width=960&crop=smart&auto=webp&s=39c8d948819dd3f1c3e4612a04bc4569067409a6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KAi3zm5rNue2TOHK6anYe1qMF5KM6T49KZhVM-N7NVk.jpg?width=1080&crop=smart&auto=webp&s=783686e92051724faa909a89877d8426db22e49d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KAi3zm5rNue2TOHK6anYe1qMF5KM6T49KZhVM-N7NVk.jpg?auto=webp&s=75e5c157c23469bc11ba87c02f0654b4a87957eb', 'width': 1200}, 'variants': {}}]}
OpenChat 3.5 thinks that it is OpenAssistant
1
2023-12-20T02:48:13
https://i.redd.it/ollnsfvn5d7c1.png
PolyPenguinDev
i.redd.it
1970-01-01T00:00:00
0
{}
18mjx3u
false
null
t3_18mjx3u
/r/LocalLLaMA/comments/18mjx3u/openchat_35_thinks_that_it_is_openassistant/
false
false
https://b.thumbs.redditm…JnLtGdQk9crk.jpg
1
{'enabled': True, 'images': [{'id': 'IwZCSGrx5Pke_MPSKmnjPMXtyFfrrMY5stvhFghkppI', 'resolutions': [{'height': 16, 'url': 'https://preview.redd.it/ollnsfvn5d7c1.png?width=108&crop=smart&auto=webp&s=653e5f4fd5c291e08a87c081526fc938b630df5a', 'width': 108}, {'height': 33, 'url': 'https://preview.redd.it/ollnsfvn5d7c1.png?width=216&crop=smart&auto=webp&s=1e9b891e872dc03797f210eb695ef593a8543d41', 'width': 216}, {'height': 50, 'url': 'https://preview.redd.it/ollnsfvn5d7c1.png?width=320&crop=smart&auto=webp&s=d3dbd151d7e6262719e9c66cf040c1cb16d143c2', 'width': 320}, {'height': 100, 'url': 'https://preview.redd.it/ollnsfvn5d7c1.png?width=640&crop=smart&auto=webp&s=102cc7c410d45eccadac017f562040964107416b', 'width': 640}, {'height': 150, 'url': 'https://preview.redd.it/ollnsfvn5d7c1.png?width=960&crop=smart&auto=webp&s=1e50b6f2877a0d2eb5736aeb528ec3929528acff', 'width': 960}, {'height': 168, 'url': 'https://preview.redd.it/ollnsfvn5d7c1.png?width=1080&crop=smart&auto=webp&s=941f1ce054e282d45e3158656a6656d4a3fcb398', 'width': 1080}], 'source': {'height': 298, 'url': 'https://preview.redd.it/ollnsfvn5d7c1.png?auto=webp&s=799c1f39917f5aecc98ba238986dadad9d70b744', 'width': 1907}, 'variants': {}}]}
4x64GB ram for local LLMs
2
Hi, I want to run 128GB of RAM so I can run local LLMs at the highest speed possible. The kit with the best overall speed that I found available to buy is 4000 C18: https://www.gskill.com/product/165/166/1601284727/F4-4000C18Q-128GTZR CPU: 5950X Motherboard: MSI X570 Tomahawk WiFi Current RAM: 4x8GB G.Skill 3600 C16 I wanted to check whether the linked kit is recommended, compatible, etc. before buying it. Let me know if I'm approaching this the wrong way. I appreciate your recommendations and help!
2023-12-20T02:46:53
https://www.reddit.com/r/LocalLLaMA/comments/18mjw6u/4x64gb_ram_for_local_llms/
PoorFrenchman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mjw6u
false
null
t3_18mjw6u
/r/LocalLLaMA/comments/18mjw6u/4x64gb_ram_for_local_llms/
false
false
self
2
{'enabled': False, 'images': [{'id': 'JPNFbK59zTRK2cdeVKoDsZ26nnSmYtVDbyt43vavpO8', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/-GC1O2G2fxn3Vmw7Q5M6hwxGclP8AeNYbyeoSXYXVtE.jpg?width=108&crop=smart&auto=webp&s=e5eeb49b080edbecb82caac6f1171b493d3eb3fd', 'width': 108}], 'source': {'height': 75, 'url': 'https://external-preview.redd.it/-GC1O2G2fxn3Vmw7Q5M6hwxGclP8AeNYbyeoSXYXVtE.jpg?auto=webp&s=368ea86224cdf93ccc52a438a03401f649ebe015', 'width': 145}, 'variants': {}}]}
New benchmark by Stanford: HELM lite v1.0.0 including Narrative, Math, Legal, Medicine, Translation tasks
120
**Leaderboard**: [**https://crfm.stanford.edu/helm/lite/v1.0.0/#/leaderboard**](https://crfm.stanford.edu/helm/lite/v1.0.0/#/leaderboard) *Announcement*: [https://crfm.stanford.edu/2023/12/19/helm-lite.html](https://crfm.stanford.edu/2023/12/19/helm-lite.html) [https://twitter.com/percyliang/status/1737246714992701716](https://twitter.com/percyliang/status/1737246714992701716) **Tests** * [**NarrativeQA**](https://arxiv.org/pdf/1712.07040.pdf)**:** answer questions about stories from books and movie scripts, where the questions are human-written from the summaries (response: short answer). * [**NaturalQuestions**](https://aclanthology.org/Q19-1026.pdf)**:** answer questions from Google search queries on Wikipedia documents (response: short answer). We evaluate two versions, open book (where the relevant passage is given) and closed book (where only the question is given). * [**OpenbookQA**](https://arxiv.org/pdf/1809.02789.pdf)**:** answer questions on elementary science facts (response: multiple choice). * [**MMLU**](https://arxiv.org/pdf/2009.03300.pdf)**:** answer standardized exam questions from various technical topics (response: multiple choice). As with HELM Classic, we select 5 of the 57 subjects (abstract algebra, chemistry, computer security, econometrics, US foreign policy) for efficiency. * [**MATH**](https://arxiv.org/pdf/2103.03874.pdf)**:** solve competition math problems (response: short answer with chain of thought). * [**GSM8K**](https://arxiv.org/pdf/2110.14168.pdf)**:** solve grade school math problems (response: short answer with chain of thought). * [**LegalBench**](https://arxiv.org/pdf/2308.11462.pdf)**:** perform various tasks that require legal interpretation (response: multiple choice). We selected 5 of the 162 tasks for efficiency. * [**MedQA**](https://arxiv.org/pdf/2009.13081.pdf)**:** answer questions from the US medical licensing exams (response: multiple choice). 
* [**WMT14**](https://machinetranslate.org/wmt14)**:** translate sentences from one language into English (response: sentence). We selected 5 source languages (Czech, German, French, Hindi, Russian) for efficiency. **Models Tested** * OpenAI: GPT-3.5 (text-davinci-002, text-davinci-003), ChatGPT (gpt-3.5-turbo-0613), GPT-4 (0613), GPT-4 Turbo (1106 preview) * Anthropic: Claude Instant V1, Claude v1.3, Claude 2.0, Claude 2.1 * Google: PaLM 2 (bison, unicorn) * Cohere: Command (default, light) * Aleph Alpha: Luminous (base, extended, supreme) * AI21: J2 (large, grande, jumbo) * Writer: Palymra-X (v2, v3) * Meta: LLaMA (65B), Llama 2 (7B, 13B, 70B) * Mistral AI: Mistral (7B) Mixtral (8x7B) * TII/UAE: Falcon (7B, 40B) * 01.AI: Yi (6B, 34B) **Made by** Stanford Center for Research on Foundation Models (CRFM) Authors: [Percy Liang](https://cs.stanford.edu/~pliang/) and [Yifan Mai](https://yifanmai.com/) and [Josselin Somerville](https://josselinsomervilleroberts.github.io/) and [Farzaan Kaiyom](https://farzaank.com/) and [Tony Lee](https://www.linkedin.com/in/tonyhlee/) and [Rishi Bommasani](https://rishibommasani.github.io/)
2023-12-20T02:37:10
https://www.reddit.com/r/LocalLLaMA/comments/18mjpa2/new_benchmark_by_stanford_helm_lite_v100/
galambalazs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mjpa2
false
null
t3_18mjpa2
/r/LocalLLaMA/comments/18mjpa2/new_benchmark_by_stanford_helm_lite_v100/
false
false
self
120
null
Dolphin Mixtral Repeats itself - Best settings for ultimate NSFW without needless repetition?
3
2023-12-20T02:18:42
https://i.redd.it/3eq6d8ta0d7c1.png
CloudStrx
i.redd.it
1970-01-01T00:00:00
0
{}
18mjbvn
false
null
t3_18mjbvn
/r/LocalLLaMA/comments/18mjbvn/dolphin_mixtral_repeats_itself_best_settings_for/
false
false
nsfw
3
{'enabled': True, 'images': [{'id': 'Ebk_9OADEVOz-jDL8VAW3BaGs-EF0mlh_3HKIEq4TFc', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?width=108&crop=smart&auto=webp&s=3d81e22f2ea571b489ba2f973a9d94d70ec6b1f5', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?width=216&crop=smart&auto=webp&s=0cea9893b99c6302acb41a2d492852348894e9d6', 'width': 216}, {'height': 187, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?width=320&crop=smart&auto=webp&s=f51415e8b8373cb864aba96dcb99bf117a35bb48', 'width': 320}, {'height': 374, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?width=640&crop=smart&auto=webp&s=dfe564be829cf96048db26f5d27f81948770f9d6', 'width': 640}, {'height': 561, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?width=960&crop=smart&auto=webp&s=a7c9ec04d4d99052091a8bd65df32be846dbc6d1', 'width': 960}, {'height': 632, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?width=1080&crop=smart&auto=webp&s=e7e288ca4146baee14231d25734aea42d1653332', 'width': 1080}], 'source': {'height': 1229, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?auto=webp&s=04b0ca1af5f6874cc811eb55a290a31bbc2cd217', 'width': 2100}, 'variants': {'nsfw': {'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=d513424a9580edd485a350ae87314d61f48e8bc5', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=836b1fc52b1704cab3d61a17ef1c5e968724fab7', 'width': 216}, {'height': 187, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=571dd8c708c929870d847d205e5a23a80020c474', 'width': 320}, {'height': 374, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=1840808d98df996133b6021cda3429ad306d12ff', 'width': 640}, {'height': 561, 'url': 
'https://preview.redd.it/3eq6d8ta0d7c1.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=edf5f6874d4d8494beaffb847fd055227248ddbd', 'width': 960}, {'height': 632, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=3eb5b00c262f8a3465a53825e286bc3ea0c51620', 'width': 1080}], 'source': {'height': 1229, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?blur=40&format=pjpg&auto=webp&s=50b51abfc154b5d486d2782c84ea71413d849f69', 'width': 2100}}, 'obfuscated': {'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=d513424a9580edd485a350ae87314d61f48e8bc5', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=836b1fc52b1704cab3d61a17ef1c5e968724fab7', 'width': 216}, {'height': 187, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=571dd8c708c929870d847d205e5a23a80020c474', 'width': 320}, {'height': 374, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=1840808d98df996133b6021cda3429ad306d12ff', 'width': 640}, {'height': 561, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=edf5f6874d4d8494beaffb847fd055227248ddbd', 'width': 960}, {'height': 632, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=3eb5b00c262f8a3465a53825e286bc3ea0c51620', 'width': 1080}], 'source': {'height': 1229, 'url': 'https://preview.redd.it/3eq6d8ta0d7c1.png?blur=40&format=pjpg&auto=webp&s=50b51abfc154b5d486d2782c84ea71413d849f69', 'width': 2100}}}}]}
Help Merging Models
2
I was looking at those Nx7b models and was wondering how people make them. I am looking to merge models together to create a larger model. How can I go about doing this?
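For anyone landing on this question: the simplest merge strategies boil down to an elementwise weighted average of matching tensors across checkpoints ("model souping"); tools like mergekit build on this idea for real transformer checkpoints. A minimal, hypothetical Python sketch (plain floats stand in for tensors, and `merge_state_dicts` is a made-up name, not any tool's API):

```python
def merge_state_dicts(dicts, weights=None):
    """Average several state dicts with optional per-model weights.

    Each dict maps a parameter name to a value; real merges do the
    same thing per-tensor instead of per-scalar.
    """
    if weights is None:
        # Default to a uniform average across all models.
        weights = [1.0 / len(dicts)] * len(dicts)
    merged = {}
    for key in dicts[0]:
        merged[key] = sum(w * d[key] for w, d in zip(weights, dicts))
    return merged

model_a = {"layer.0.weight": 1.0, "layer.1.weight": 3.0}
model_b = {"layer.0.weight": 3.0, "layer.1.weight": 1.0}
print(merge_state_dicts([model_a, model_b]))
# {'layer.0.weight': 2.0, 'layer.1.weight': 2.0}
```

Note this is only the linear-merge family; MoE-style "Nx7b" models additionally add a router and keep the expert weights separate rather than averaging them.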
2023-12-20T02:08:18
https://www.reddit.com/r/LocalLLaMA/comments/18mj4d5/help_merging_models/
Acceptable_Soup1543
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mj4d5
false
null
t3_18mj4d5
/r/LocalLLaMA/comments/18mj4d5/help_merging_models/
false
false
self
2
null
C'mon, test the models on real-world usage
1
[removed]
2023-12-20T02:05:32
https://www.reddit.com/r/LocalLLaMA/comments/18mj2cs/cmon_test_the_models_on_realworld_usage/
deal_with_mofo_2331
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mj2cs
false
null
t3_18mj2cs
/r/LocalLLaMA/comments/18mj2cs/cmon_test_the_models_on_realworld_usage/
false
false
self
1
null
Question regarding Front-end webui that can be connected external API
2
Hello, I am having a lot of trouble finding a front-end webui/user interface that can be connected to an external API/service that hosts an LLM on a pay-per-token basis. Do you have any recommendations? Am I missing something, given how hard this has been to find? For those curious, I am using Anyscale Endpoints with Mixtral-8x7B-Instruct-v0.1. Thanks in advance for any replies and advice.
2023-12-20T01:58:01
https://www.reddit.com/r/LocalLLaMA/comments/18miwnc/question_regarding_frontend_webui_that_can_be/
aabbeyy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18miwnc
false
null
t3_18miwnc
/r/LocalLLaMA/comments/18miwnc/question_regarding_frontend_webui_that_can_be/
false
false
self
2
null
What's your strategy to compare prompt quality, existing tools ?
2
As you know, the prompt you use when asking a question has a huge impact on the quality of the answer you get from the LLM. I am creating prompts and testing them for some personal use cases using llama.cpp and the Mistral OpenHermes model. I currently use an Excel sheet to track prompt quality, and I struggle to keep track of the prompts used and their overall quality. **So I am sure there might be a better solution out there, and I wanted to know what you use for that** 👀 It would be great to have an interface like the lmsys chatbot arena, but for comparing two prompts that are expected to do the same job. What do you think?
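The arena-style comparison described above can be sketched as a tiny pairwise harness: run both prompt templates over the same question set and tally wins. This is a hypothetical sketch, not an existing tool; `run_model` and `judge` are stand-ins for a real llama.cpp call and a human (or LLM) judge:

```python
def compare_prompts(prompt_a, prompt_b, questions, run_model, judge):
    """Return win counts for two prompt templates over a question set."""
    wins = {"a": 0, "b": 0, "tie": 0}
    for q in questions:
        out_a = run_model(prompt_a.format(question=q))
        out_b = run_model(prompt_b.format(question=q))
        verdict = judge(q, out_a, out_b)  # must return "a", "b", or "tie"
        wins[verdict] += 1
    return wins

# Toy stand-ins so the harness runs end to end; swap in real calls.
run_model = lambda p: p.upper()
judge = lambda q, a, b: "a" if len(a) >= len(b) else "b"
print(compare_prompts("Short: {question}",
                      "Much longer prefix: {question}",
                      ["why is the sky blue?"], run_model, judge))
```

Logging each (prompt, question, output, verdict) row to a CSV instead of a spreadsheet by hand would give the same tracking with much less friction.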
2023-12-20T01:34:18
https://www.reddit.com/r/LocalLLaMA/comments/18mifab/whats_your_strategy_to_compare_prompt_quality/
steph_pop
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mifab
false
null
t3_18mifab
/r/LocalLLaMA/comments/18mifab/whats_your_strategy_to_compare_prompt_quality/
false
false
self
2
null
How many people find an LLM and stick with it?
50
I love ChatGPT and the idea of LLMs. I know nothing about coding or Python, and honestly, while I would like to learn enough to get by, a lot of it seems over my head for the time being. LM Studio is my friend; if I could upload documents to LM Studio, that's all I would ever need. That's all beside the point, though. One thing I have been noticing (and I am sure most of it is clickbait) is that it seems like every other day there is a game-changing new LLM. Is everyone jumping around and using the newest and shiniest LLM? Is there a benefit to using an older LLM? At the user level, can you train an LLM? Or is it more about the prompt you give it that really makes the LLM "yours"?
2023-12-20T01:20:04
https://www.reddit.com/r/LocalLLaMA/comments/18mi50p/how_many_people_find_an_llm_and_stick_with_it/
Foot-Note
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mi50p
false
null
t3_18mi50p
/r/LocalLLaMA/comments/18mi50p/how_many_people_find_an_llm_and_stick_with_it/
false
false
self
50
null
Hardware Recommendations
7
I recently got into local LLMs due to how powerful they are. I honestly thought it would have taken a few more years for us to have this kind of utility without being reliant on some massive cloud, and I want to jump right into it. My setup right now is a 5950X, 128 GB of RAM, and 1x RTX 3090. I've been experimenting with the Mixtral 8x7B model and am extremely impressed by the outputs. However, the speed of the output leaves a lot to be desired. I'm still kind of a noob with this, but I switched over to the `Q4_K_M` version, and a lot more of it is able to fit in my GPU's memory. I'd rather not get 2 GPUs if I don't have to (one 3090 is already taking up enough space, and I'd need a beefier power supply), so I've been looking into some of the 40GB and 80GB Nvidia cards. I'm curious if I should save up for an 80GB A100, or if you'd recommend better options for what I'm looking to do.
2023-12-20T01:13:55
https://www.reddit.com/r/LocalLLaMA/comments/18mi0jt/hardware_recommendations/
AcceptableMacaron497
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mi0jt
false
null
t3_18mi0jt
/r/LocalLLaMA/comments/18mi0jt/hardware_recommendations/
false
false
self
7
null
Looking to make a 3x7b model merge
1
I am new to this field, but I would like to make a 3x7b model merge (similar to Mixtral 8x7B but with only 3 models). How can I go about doing this?
2023-12-20T01:06:49
https://www.reddit.com/r/LocalLLaMA/comments/18mhvep/looking_to_make_a_3x7b_model_merge/
Aromatic-Ad9081
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mhvep
false
null
t3_18mhvep
/r/LocalLLaMA/comments/18mhvep/looking_to_make_a_3x7b_model_merge/
false
false
self
1
null
How to use more experts? (Mixtral)
2
I would especially love it if anyone knew a way to do this with Kobold or llama.cpp, but maybe that's just not there yet. After seeing some tests, I would really like to try this for myself, but I don't know if there's an approachable method! I'm very curious whether using 3-8 experts could subjectively improve responses (I already know it doesn't improve perplexity, except for 3 experts on certain quants).
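For context on what the "number of experts" knob actually does: in MoE top-k gating, each token is routed to the k highest-scoring experts, and their outputs are averaged with softmax weights, so raising k from 2 to 3 just averages over one more expert per token. An illustrative sketch (scalar functions stand in for real FFN experts; not Kobold's or llama.cpp's actual implementation):

```python
import math

def moe_forward(x, router_scores, experts, k):
    """Route x through the top-k experts, weighted by a softmax over
    the selected experts' router scores."""
    # Pick the k experts with the highest router scores.
    top = sorted(range(len(experts)),
                 key=lambda i: router_scores[i], reverse=True)[:k]
    # Softmax over only the selected scores, then mix expert outputs.
    exps = [math.exp(router_scores[i]) for i in top]
    total = sum(exps)
    return sum((e / total) * experts[i](x) for e, i in zip(exps, top))

experts = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
print(moe_forward(10.0, [0.5, 2.0, 0.1], experts, k=2))
```

With k=2 above, the two strongest experts dominate; with k=3 the weakest expert contributes a small extra term, which is why more experts can change outputs without necessarily helping perplexity.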
2023-12-20T00:06:36
https://www.reddit.com/r/LocalLLaMA/comments/18mgmb9/how_to_use_more_experts_mixtral/
CorruptEmanation
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mgmb9
false
null
t3_18mgmb9
/r/LocalLLaMA/comments/18mgmb9/how_to_use_more_experts_mixtral/
false
false
self
2
null
What is the difference between Oogabooga and kobold?
1
[removed]
2023-12-19T23:36:34
https://www.reddit.com/r/LocalLLaMA/comments/18mfz85/what_is_the_difference_between_oogabooga_and/
Garoknight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18mfz85
false
null
t3_18mfz85
/r/LocalLLaMA/comments/18mfz85/what_is_the_difference_between_oogabooga_and/
false
false
self
1
null
Code interpreter with Mistral LLM
44
2023-12-19T23:29:58
https://i.redd.it/sletgyfc6c7c1.png
louis3195
i.redd.it
1970-01-01T00:00:00
0
{}
18mftoq
false
null
t3_18mftoq
/r/LocalLLaMA/comments/18mftoq/code_interpreter_with_mistral_llm/
false
false
https://a.thumbs.redditm…POtQznqMWNQ4.jpg
44
{'enabled': True, 'images': [{'id': 'yOwn97t0uG7UgCdTQsUft6BaUWjUfZ844hCHdd-I8l0', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/sletgyfc6c7c1.png?width=108&crop=smart&auto=webp&s=e22caacbd2ab76251de6b5d041bb7a9aa24e1c5d', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/sletgyfc6c7c1.png?width=216&crop=smart&auto=webp&s=2246726a3752a3c010cca6675a35300c2bdd649a', 'width': 216}, {'height': 259, 'url': 'https://preview.redd.it/sletgyfc6c7c1.png?width=320&crop=smart&auto=webp&s=e84042df6623eff81492e33c1d0dfda595262e81', 'width': 320}, {'height': 518, 'url': 'https://preview.redd.it/sletgyfc6c7c1.png?width=640&crop=smart&auto=webp&s=114780972d2f0428bc2259e821e7097b0280969f', 'width': 640}, {'height': 777, 'url': 'https://preview.redd.it/sletgyfc6c7c1.png?width=960&crop=smart&auto=webp&s=5b17fb2fc1613ba43310e1da4da19509137bcff5', 'width': 960}, {'height': 874, 'url': 'https://preview.redd.it/sletgyfc6c7c1.png?width=1080&crop=smart&auto=webp&s=a272b54bb387dd2ef5a2cbc7d9d24f3348014ec7', 'width': 1080}], 'source': {'height': 1106, 'url': 'https://preview.redd.it/sletgyfc6c7c1.png?auto=webp&s=9a2c7a9d4ab0459ae54129238d9142a3de744c10', 'width': 1366}, 'variants': {}}]}