title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 โ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k โ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 โ | ups int64 0 8.54k | preview stringlengths 301 5.01k โ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Seeking LLM that can check all these boxes, can run fast and be fine tuned on Lenovo Legion 5 pro | 1 | Hi there,
I'm currently exploring an idea of creating a project. For that I require a LLM that can check (atleast have a bare minimum capability) off all these boxes -
1) Be smart enough to understand the tone and context of a given paragraph/entire chapter of let's say a story book or novel.
2) can generate accurate charecter personas and create fillers within the actual provided text which can be clearly attached to charecter's emotion. For most cases there will be cues if the charecter has a certain reaction like "shocked" "happy "sad" written by the author etc but if not present, the LLM should be able to understand what context of that specific part of a chapter or story is playing out and can generate fillers like "charecter X is sad" etc after each charecter's dialogue.
3) doesn't need to write or rewrite anything, literally copy word to word from the fed story but should have creativeness to identify what kind of background sound will be fitting in that specific section of the context it's working on and generate text based responses Eg.the statement is "Anna'a brother was dead, anna stood beside his casket wiping a tear off her face", llm should be smart enough to generate a filler like [anna sad] [background sound slow violin music bla bla] (still working on the idea but you get the picture)
4) Should be able to self learn from the experiences or have the option to implement self learning algorithms offline via python and while it's offline, it should be UNCENSORED and doesn't yap around about ethics and can't do this and that. It should follow my instructions to the point and be smart enough to understand if there's typos or missing words but should form coherent response.
Plus points if it can be creative and write it's own wonderful stories but that can be fine tuned later on. It's a long term project ofc but I need a baseline to start with.
5) Should have its own API if needed to interact with third party platforms. Should be able to write basic coding scripts (not at all priority, just good to have)and should be flexible enough to bake additional features/abilities into it later on like browsing the web/ able to download a certain file in a certain website like royalty free sounds where if it identifies a proper background sound ,it can self search and download. This is very advanced level and is just a good to have from an existing LLM that can scale if needed later on.
Now I understand this is something im
possible to find with my hardware limitations and it can be fine tuned and I'll be ready to do so. I'm a single person team so please be aware this is done on my device with no additional resources.
My laptop has 32 gb ddr4 3200 mhz ram, 8gb rtx 3070 170w tdp, 1 tb free space ssd gen 4 pcie.
I have windows 11 and I manually installed and tried nous capybara 34b which works locally via cmd and not lmstudio but is super slow like 10 words in 3-5 mins. Lm studio is faster, not crazy fast but fast enough and lmstudio cannot fine tune it so did it locally. I want this LLM to be fast enough locally to have atleast comprehensive response time like idk 25 wpm or more whatever is reasonable.
Please recommend a LLM that I can install on windows locally. I have pytorch cuda 12.1 and everything is setup correctly. Thank you so much. | 2024-01-06T01:42:04 | https://www.reddit.com/r/LocalLLaMA/comments/18zo1bm/seeking_llm_that_can_check_all_these_boxes_can/ | unkn0wnS0ul2day | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zo1bm | false | null | t3_18zo1bm | /r/LocalLLaMA/comments/18zo1bm/seeking_llm_that_can_check_all_these_boxes_can/ | false | false | self | 1 | null |
Need pc build advice | 1 | I currently have a asus rog cross hair viii dark hero with a ryzen 3900 and 32gig of ram. I am running a 2080 super gpu.
I plan to upgrade to a ryzen 5950x and 128 gig of ram. However, I am unsure about what gpu to get. I am looking at either the MSI liquid cooled 24gb 4090 or two 24 gb 3090โs with nvlink.
The only things I really plan to run are stable diffusion, python for data analysis; and of course I would like to run either mixtral 8x7b or maybe falcon 180b locally if that is possible. What is everyoneโs advice. | 2024-01-06T01:38:54 | https://www.reddit.com/r/LocalLLaMA/comments/18znysz/need_pc_build_advice/ | TheHobbyistHacker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18znysz | false | null | t3_18znysz | /r/LocalLLaMA/comments/18znysz/need_pc_build_advice/ | false | false | self | 1 | null |
Ditching Google Gemini API for a local, OpenAI-style API! Tricks for those making the switch? | 1 | Any tips for converting Gemini to OpenAI-style API calls in an app would be a service to everyone moving to local models. Details below on the problem.
I am trying to switch an app written to use Google PaLM and Gemini APIs to use an OpenAI style API like those used in LM Studio. But Gemini is not 1:1 with OpenAI (i.e. you can't just rename parameters). Refactoring everything in an app is a pain. Are there...
* Proxies to convert Gemini API -> OpenAI?
* Any guides here?
In particular, Gemini has multiple [response](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/gemini#sample_responses) **candidates** or variations, something not found in the OpenAI API. What might be a good solution here, calling the OpenAI API multiple times at different temperatures? | 2024-01-06T01:32:34 | https://www.reddit.com/r/LocalLLaMA/comments/18zntx5/ditching_google_gemini_api_for_a_local/ | underoak | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zntx5 | false | null | t3_18zntx5 | /r/LocalLLaMA/comments/18zntx5/ditching_google_gemini_api_for_a_local/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'DsiOIzUSicS_9zIKwMDQbNT2LOE1o29sSYs49HAmO_k', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=108&crop=smart&auto=webp&s=a0329d4207ada0345185e70a97a0ef1f27aec034', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=216&crop=smart&auto=webp&s=8722bf8052baa4647e96ebeb0d22f50bf529b6ac', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=320&crop=smart&auto=webp&s=6562c4a330763746058f2250630ec6d3854b2e3d', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=640&crop=smart&auto=webp&s=fff0deae054d2476ac870508887dbbee06d9387c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=960&crop=smart&auto=webp&s=f6a32be275833b4d47802b79f3345f568bd43a4d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?width=1080&crop=smart&auto=webp&s=97e8d6d94b95697d64482f5fcda32d11814df7b8', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/hHuPbOBuX42Zu5_MT0rir3JO7cgRTfjn0ct4UWuqTu4.jpg?auto=webp&s=b3c1793ddfb0595cba1bbb23fba79360953beb8d', 'width': 1200}, 'variants': {}}]} |
Can ollama leverage GPUs on separate machines? | 1 | I was wondering if I can use 2 separate T4 GPUs of 15 GB memory each can be combined to load 30 GB models. We can assume that they're Linux based instances (running Ubuntu based shell) and it's not possible to change anything in the hardware. | 2024-01-06T01:29:44 | https://www.reddit.com/r/LocalLLaMA/comments/18znrka/can_ollama_leverage_gpus_on_separate_machines/ | Shubham_Garg123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18znrka | false | null | t3_18znrka | /r/LocalLLaMA/comments/18znrka/can_ollama_leverage_gpus_on_separate_machines/ | false | false | self | 1 | null |
the basement rig has achieved the next level 96GB | 174 | 2024-01-06T01:12:10 | rdkilla | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18zne2z | false | null | t3_18zne2z | /r/LocalLLaMA/comments/18zne2z/the_basement_rig_has_achieved_the_next_level_96gb/ | false | false | 174 | {'enabled': True, 'images': [{'id': 'vtwbZC2gOTuMoKDg4Z2pO9VkNTfayovHdTVPjaMghC4', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/bdngr9tvzpac1.jpeg?width=108&crop=smart&auto=webp&s=22941301de9707a43f1ca1e00afb42a83224c35b', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/bdngr9tvzpac1.jpeg?width=216&crop=smart&auto=webp&s=12d128fa8b0661921a615935b725fb9048a95f73', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/bdngr9tvzpac1.jpeg?width=320&crop=smart&auto=webp&s=4d9793a3996f8a2d0dddb5e6051ae0e944dfca5c', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/bdngr9tvzpac1.jpeg?width=640&crop=smart&auto=webp&s=732caaf6c47ec585cdf18f14e541d3557c769b8a', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/bdngr9tvzpac1.jpeg?width=960&crop=smart&auto=webp&s=8460c8005a101a057d7a647bbacc75bf4f4150bc', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/bdngr9tvzpac1.jpeg?width=1080&crop=smart&auto=webp&s=d6da37389743475bf6b6e5a4c7314aaed6680df1', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/bdngr9tvzpac1.jpeg?auto=webp&s=713b658e9aadc984314219d806f2bb338f856bd2', 'width': 4032}, 'variants': {}}]} | |||
I doubt that Gemini ultra will be better than gpt4 | 1 | 2024-01-06T00:46:42 | https://g.co/bard/share/ce6c76bd08de | taikyrio | g.co | 1970-01-01T00:00:00 | 0 | {} | 18zmu1g | false | null | t3_18zmu1g | /r/LocalLLaMA/comments/18zmu1g/i_doubt_that_gemini_ultra_will_be_better_than_gpt4/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'JNL-q2b_DW6e8RuwY5olc0YOj7mBYw8HDWpY5ASIT_w', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Gt1MAxDsF39zObIq_rAlNwB7xiDfGIjJRKpVJbhGyco.jpg?width=108&crop=smart&auto=webp&s=8065d2a646bb6902d49f35178fe7dfc261e68e64', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Gt1MAxDsF39zObIq_rAlNwB7xiDfGIjJRKpVJbhGyco.jpg?width=216&crop=smart&auto=webp&s=55e4f1ee7359e27fbfdf45e3c8ecd6d8c3b73071', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Gt1MAxDsF39zObIq_rAlNwB7xiDfGIjJRKpVJbhGyco.jpg?width=320&crop=smart&auto=webp&s=8d5b70d836a6917d7747d5bbde5f1af9cf9dd899', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Gt1MAxDsF39zObIq_rAlNwB7xiDfGIjJRKpVJbhGyco.jpg?width=640&crop=smart&auto=webp&s=6c1e00e0d67b6ff61d54461f51e422f5785a0b3a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Gt1MAxDsF39zObIq_rAlNwB7xiDfGIjJRKpVJbhGyco.jpg?width=960&crop=smart&auto=webp&s=1fc3f2ea4db20b7473493f7aa706562f00f27689', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Gt1MAxDsF39zObIq_rAlNwB7xiDfGIjJRKpVJbhGyco.jpg?width=1080&crop=smart&auto=webp&s=a2bb7deede4ccf166b2066c6519d4e8654c95dce', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Gt1MAxDsF39zObIq_rAlNwB7xiDfGIjJRKpVJbhGyco.jpg?auto=webp&s=c0dbc131fce417cfe9f06a2a0bbe4269e9a221f6', 'width': 1200}, 'variants': {}}]} | ||
Gemini Pro SillyTavern | 1 | Is there any way to allow explicit content and violence in the geminis pro model? Something like Jailbreak or something? Thanks for your help | 2024-01-06T00:28:17 | Jorge1022 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18zmexr | false | null | t3_18zmexr | /r/LocalLLaMA/comments/18zmexr/gemini_pro_sillytavern/ | false | false | nsfw | 1 | {'enabled': True, 'images': [{'id': '3xVIo0ipJj81LlexmmE22G2OUcU3tMhpOiKuJuMH82M', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?width=108&crop=smart&auto=webp&s=a96da2b7e80cbaf17f2f5530275ee3fb427eff24', 'width': 108}, {'height': 199, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?width=216&crop=smart&auto=webp&s=64cc1c2bf4fcee75661a3f2f1bbe32db9303f4ff', 'width': 216}, {'height': 295, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?width=320&crop=smart&auto=webp&s=0738d70225dff3126e1434076eaac5f7cf32f6b2', 'width': 320}, {'height': 591, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?width=640&crop=smart&auto=webp&s=354460977adf2fa377cf7cae974d52b83b25082b', 'width': 640}, {'height': 887, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?width=960&crop=smart&auto=webp&s=aa6f8d6f7dfd3e3ed17c68bd8a401fb4e024d553', 'width': 960}], 'source': {'height': 903, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?auto=webp&s=4488730479b7e6ff74b75eca7b3b1461fae6d523', 'width': 977}, 'variants': {'nsfw': {'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=daee22ca2899139c9fd35070a5afc89419f05364', 'width': 108}, {'height': 199, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=b894ed1112ca7db86f42c06c6d9dfcdb2de4a2f9', 'width': 216}, {'height': 295, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=ac389704a3e0509f8db09c4cef203eb23c30cc44', 'width': 320}, {'height': 591, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=f02fd761d2fba060a407a1181f54124e62088a0e', 'width': 640}, {'height': 887, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=4d17424dfa10bf5f27395821ce7f1bf8543853d4', 'width': 960}], 'source': {'height': 903, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?blur=40&format=pjpg&auto=webp&s=593e99d20a3954d3aeae077942ca6ab250024e74', 'width': 977}}, 'obfuscated': {'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=daee22ca2899139c9fd35070a5afc89419f05364', 'width': 108}, {'height': 199, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=b894ed1112ca7db86f42c06c6d9dfcdb2de4a2f9', 'width': 216}, {'height': 295, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=ac389704a3e0509f8db09c4cef203eb23c30cc44', 'width': 320}, {'height': 591, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=f02fd761d2fba060a407a1181f54124e62088a0e', 'width': 640}, {'height': 887, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=4d17424dfa10bf5f27395821ce7f1bf8543853d4', 'width': 960}], 'source': {'height': 903, 'url': 'https://preview.redd.it/mz9ix5baspac1.jpeg?blur=40&format=pjpg&auto=webp&s=593e99d20a3954d3aeae077942ca6ab250024e74', 'width': 977}}}}]} | |
Why don't LoRAs add new information to LLMs | 1 | [removed] | 2024-01-06T00:18:48 | https://www.reddit.com/r/LocalLLaMA/comments/18zm70t/why_dont_loras_add_new_information_to_llms/ | FrostyContribution35 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zm70t | false | null | t3_18zm70t | /r/LocalLLaMA/comments/18zm70t/why_dont_loras_add_new_information_to_llms/ | false | false | self | 1 | null |
An ordered list of GPUs | 17 | There are lots of GPUโs out there. Going from a 1080 to a a100 and everything in between. But in 2024, is there an order of incremental improvement, so someone can pick a dollar figure and see what the best option is for them? | 2024-01-06T00:00:15 | https://www.reddit.com/r/LocalLLaMA/comments/18zlr1r/an_ordered_list_of_gpus/ | Data_Driven_Guy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zlr1r | false | null | t3_18zlr1r | /r/LocalLLaMA/comments/18zlr1r/an_ordered_list_of_gpus/ | false | false | self | 17 | null |
Happy 100k members | 200 | While technically at the time of the writing of this post, this sub has 99.9k members, it might as well be at 100k. I remember when I first came to this subreddit in order to see all of the progress that open source and uncensored models have made in the past few months, and oh boy was there so much progress in these 12 months!
Back in late 2022 and early 2023 were once stuck with ancient models: Pygmalion 6b, Bert, and others.
Now, we have a flood of high quality models, many beating the original GPT-3.5: Goliath 120b, Mixtral, Capybara-Yi-34b, and many others. And of course, it could be argued that some of these models are already better than GPT-4 in one or more tasks.
There is an incredible amount of effort and love that this community puts into patiently answering everyoneโs questions, debunking misinformation, pushing the frontiers of open source models, and generally being a counterweight against the large corporations. Hereโs to hoping that 2024 will bring more cooperation and goodies to the world. Cheers. | 2024-01-06T00:00:05 | https://www.reddit.com/r/LocalLLaMA/comments/18zlqua/happy_100k_members/ | AlterandPhil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zlqua | false | null | t3_18zlqua | /r/LocalLLaMA/comments/18zlqua/happy_100k_members/ | false | false | self | 200 | null |
Overview on rocm support and amd gpus wanted. | 1 | So when browsing through the subreddit it seems like there are ways amd gpus can be used with for example llama.cpp and even igpus work.
So what confuses me us that officially only a handfull of devices support rocm but alot of people seem to be running unsupported cards for example consumer cards. Or igpus including new ones.
So I was hoping someone could give me an overview and enlighten me to how it works :D
Thank you all :) | 2024-01-05T23:49:31 | https://www.reddit.com/r/LocalLLaMA/comments/18zlhwe/overview_on_rocm_support_and_amd_gpus_wanted/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zlhwe | false | null | t3_18zlhwe | /r/LocalLLaMA/comments/18zlhwe/overview_on_rocm_support_and_amd_gpus_wanted/ | false | false | self | 1 | null |
Best open source tools in 2024 for both fine-tuning an LLM on your own data and then running agents on top of these fine-tuned models for a prototype | 1 | Hey there! I'm wondering what the best open source tools are to be able to both fine-tune and utilize agents on your fine-tuned models in a cheap, efficient way? Seems like Deci is now the most efficient open source model ([https://deci.ai/blog/introducing-decilm-7b-the-fastest-and-most-accurate-7b-large-language-model-to-date/](https://deci.ai/blog/introducing-decilm-7b-the-fastest-and-most-accurate-7b-large-language-model-to-date/)) but what framework(s) exist that combine both of these steps or separate frameworks that, when used in conjunction, are the best approach for this? Would LangSmith (or LangChain more generally) be the best for this use case?
​
I'm trying to optimize for something cheap to build a prototype. | 2024-01-05T23:32:40 | https://www.reddit.com/r/LocalLLaMA/comments/18zl321/best_open_source_tools_in_2024_for_both/ | SnooCrickets9704 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zl321 | false | null | t3_18zl321 | /r/LocalLLaMA/comments/18zl321/best_open_source_tools_in_2024_for_both/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YMYGvgP-znBF_byzEbiK2J9VEj_cz6xTwsbvCVF2ZMw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/gXK-eKsbQWvGZ95_sy-v_SbYMQz52K0EdBcrdTDbODs.jpg?width=108&crop=smart&auto=webp&s=dc456691285564799d2aac3611ff7a05eee5022d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/gXK-eKsbQWvGZ95_sy-v_SbYMQz52K0EdBcrdTDbODs.jpg?width=216&crop=smart&auto=webp&s=0121cf3af400e1d937bc3eceba424a73a9da9581', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/gXK-eKsbQWvGZ95_sy-v_SbYMQz52K0EdBcrdTDbODs.jpg?width=320&crop=smart&auto=webp&s=723f1926b99bd00d33a3caa027b39958734b552f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/gXK-eKsbQWvGZ95_sy-v_SbYMQz52K0EdBcrdTDbODs.jpg?width=640&crop=smart&auto=webp&s=2a1e22f114a12367302e115a53afb3a745a5e465', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/gXK-eKsbQWvGZ95_sy-v_SbYMQz52K0EdBcrdTDbODs.jpg?width=960&crop=smart&auto=webp&s=3699ced5271e20151074a3d15ac7dd9ce0d2679b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/gXK-eKsbQWvGZ95_sy-v_SbYMQz52K0EdBcrdTDbODs.jpg?width=1080&crop=smart&auto=webp&s=c6d65374e469671cb63891546762f954635bfb1a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/gXK-eKsbQWvGZ95_sy-v_SbYMQz52K0EdBcrdTDbODs.jpg?auto=webp&s=abce9477d43f395c56d51d8ed72c2fd379916f5d', 'width': 1920}, 'variants': {}}]} |
Advice on upgrading a gaming rig to run local LLM | 1 | Hey there, complete noob here so I appreciate any help. I have the [following gaming rig](https://ca.pcpartpicker.com/list/tz4RF8) and seem to be able to run some models okay, but quite slow, locally. I also game, so any upgrade wouldn't just be for AI
\---
CPU - AMD Ryzen 7 3700X 3.6 GHz 8-Core Processor
Motherboard - MSI B450 TOMAHAWK MAX ATX AM4 Motherboard
Memory - TEAMGROUP T-Force Dark Za 32 GB (2 x 16 GB) DDR4-3600 CL18 Memory
SSD - Western Digital Blue SN550 1 TB M.2-2280 PCIe 3.0 X4 NVME Solid State Drive
GPU - NVIDIA Founders Edition GeForce RTX 3070 8 GB Video Card
Case - Fractal Design Meshify C ATX Mid Tower Case
Power Supply - Corsair RM750x (2018) 750 W 80+ Gold Certified Fully Modular ATX Power Supply
\---
And have a few questions
\- What're my biggest bottlenecks? How's my CPU? RAM is cheap, so seems like I may as well add some too, right? Currently on Mixtral I sometimes struggle to load it unless I ensure nothing else is running, this is purely a RAM bottleneck, right?
\- In terms of RAM & CPU, RAM is more obvious, but other than going to higher models are there specifics to look for in a CU for AI specifically?
\- Graphics card for sure isn't the best, but they're pricey - how much of a bottleneck is it? And should I look to run the 3070 + another (Might need a new power supply) or just replace it with a 3090 or better?
\- Not too related - I use LM studio and weirdly notice the models seem to run \*slower\* when I offload to GPU, no matter how much or little I offload. Is that a sign there's a worse bottleneck somewhere?
Appreciate any help and apologize for the noob post. I'm definitely willing to spend on a GPU, but just want to make sure I'm not just doing things blindly and am being smart about it | 2024-01-05T23:24:01 | https://www.reddit.com/r/LocalLLaMA/comments/18zkvoq/advice_on_upgrading_a_gaming_rig_to_run_local_llm/ | NearMissTO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zkvoq | false | null | t3_18zkvoq | /r/LocalLLaMA/comments/18zkvoq/advice_on_upgrading_a_gaming_rig_to_run_local_llm/ | false | false | self | 1 | null |
Advice on the best place to buy used 3090 in UK | 1 | Subject says it all really, I'm looking to pick up a used 3090 from a reputable source in the UK and wondered if anyone had good experiences with a seller. | 2024-01-05T22:53:43 | https://www.reddit.com/r/LocalLLaMA/comments/18zk613/advice_on_the_best_place_to_buy_used_3090_in_uk/ | Traditional_Fill_459 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zk613 | false | null | t3_18zk613 | /r/LocalLLaMA/comments/18zk613/advice_on_the_best_place_to_buy_used_3090_in_uk/ | false | false | self | 1 | null |
AI-Assisted Search+Fetch Library Recommendations? | 1 | **TL;DR:** Given an english description, how can I search and fetch resources, eg blogs in HTML, or PDFs, etc?
I've built something for [UniteAI](https://github.com/freckletonj/uniteai/blob/master/uniteai/document/download.py) that can take URLs of many flavors (Arxiv, YT, pdf, html, etc), and download, cache, and embed them for later RAG.
But it's more complicated to start from an english query, then *find the URL/resource* which might be a link on a page multiply nested down a chain of links somewhere.
For instance, say I have the query "1001 Representations of Syntax with Binding". Ultimately, we need to ascertain that this resource is not a PDF, but a blog that should be scraped, and the URL is [this](https://jesper.sikanda.be/posts/1001-syntax-representations.html). Then, that URL could feed through my original thing, but, getting that URL is a hard part.
Anyway, I'm guessing the community already has developed something like this, any recommendations? | 2024-01-05T22:47:34 | https://www.reddit.com/r/LocalLLaMA/comments/18zk0na/aiassisted_searchfetch_library_recommendations/ | BayesMind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zk0na | false | null | t3_18zk0na | /r/LocalLLaMA/comments/18zk0na/aiassisted_searchfetch_library_recommendations/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'El8GjJcSMjcg_PJzjhrNCimC50coPfvvnrB22Bec2fk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Za8FSw0dYy-dY5mJFWC-LYUGSanGvFG4yeHfLMG8oPk.jpg?width=108&crop=smart&auto=webp&s=4c55caea99a8de1bceecf4cad9056d2178ba5c79', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Za8FSw0dYy-dY5mJFWC-LYUGSanGvFG4yeHfLMG8oPk.jpg?width=216&crop=smart&auto=webp&s=6f91352fc7f9f260da33db215516ddc3fdcb5325', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Za8FSw0dYy-dY5mJFWC-LYUGSanGvFG4yeHfLMG8oPk.jpg?width=320&crop=smart&auto=webp&s=aacf8f97c52c8180e4adb7cb93c9183c03325915', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Za8FSw0dYy-dY5mJFWC-LYUGSanGvFG4yeHfLMG8oPk.jpg?width=640&crop=smart&auto=webp&s=fd24bc1ff390c3ec4a9ba113056d928c367d7d9c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Za8FSw0dYy-dY5mJFWC-LYUGSanGvFG4yeHfLMG8oPk.jpg?width=960&crop=smart&auto=webp&s=009b9de571cd42e433b1eb9493fadf27c2eb699f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Za8FSw0dYy-dY5mJFWC-LYUGSanGvFG4yeHfLMG8oPk.jpg?width=1080&crop=smart&auto=webp&s=d8fd86b44d3469cbf3d0182abfbc22aafc4c6815', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Za8FSw0dYy-dY5mJFWC-LYUGSanGvFG4yeHfLMG8oPk.jpg?auto=webp&s=0c9b6cb85e808340c9d61d32a81a637125879805', 'width': 1200}, 'variants': {}}]} |
SqueezeLLM (vllm) MythoMax Benchmark Comparison | 1 | I don't think anyone is asking for this but I wanted to give it a shot, mythomax-13b quantized with squeezellm https://huggingface.co/GusPuffy/sq-MythoMax-L2-13b-w4-s0
It took me a day-ish with an A100 80gb, around ~60gb VRAM during gradient steps.
Performance is atrocious. I am testing using [vllm benchmark](https://github.com/vllm-project/vllm/blob/main/benchmarks) with 200 requests about 1300tk with 90tk return and a 4090 (in WSL).
Note the throughput results are highly parallelized, and the throughput on a single request would be different.
SqueezeLLM:
200/200 [24:14<00:00, 7.27s/it]
Throughput: 0.14 requests/s, 47.96 tokens/s
AWQ:
200/200 [03:29<00:00, 1.05s/it]
Throughput: 0.95 requests/s, 332.41 tokens/s
GPTQ (https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ/tree/gptq-4bit-32g-actorder_True):
Note, I had to edit the code to get it to be able to do a revision instead of a main branch.
200/200 [02:17<00:00, 1.46it/s]
Throughput: 1.46 requests/s, 507.75 tokens/s
I haven't validated the quality of the output but I am curious how it compares to regular mythomax GPTQ 4 bit 32. I am using mythomax-13b with a game so I am trying to squeeze as much performance and instruction following as possible out of mythomax. | 2024-01-05T21:36:42 | https://www.reddit.com/r/LocalLLaMA/comments/18zibrz/squeezellm_vllm_mythomax_benchmark_comparison/ | GusPuffy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zibrz | false | null | t3_18zibrz | /r/LocalLLaMA/comments/18zibrz/squeezellm_vllm_mythomax_benchmark_comparison/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'wqcFmXd_QSFOIkXNvwLgmMyJbsFYCk7h_vhf8urn2Dg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LZevKK6tb0QZHUkPb7Z1nQ-tmdBJ5iodXRUWIZCMV_w.jpg?width=108&crop=smart&auto=webp&s=8e1095c99b2a3f0f972a17cb6962a5d62a240f01', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LZevKK6tb0QZHUkPb7Z1nQ-tmdBJ5iodXRUWIZCMV_w.jpg?width=216&crop=smart&auto=webp&s=189792d6fbdfb4190520e1d3f4d62ad6f8033375', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LZevKK6tb0QZHUkPb7Z1nQ-tmdBJ5iodXRUWIZCMV_w.jpg?width=320&crop=smart&auto=webp&s=92199220c5a96837542a5aa3b79ebb8160043151', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LZevKK6tb0QZHUkPb7Z1nQ-tmdBJ5iodXRUWIZCMV_w.jpg?width=640&crop=smart&auto=webp&s=a2d2210d7fdf61373fcfe5d67af2e886d78f6a4c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LZevKK6tb0QZHUkPb7Z1nQ-tmdBJ5iodXRUWIZCMV_w.jpg?width=960&crop=smart&auto=webp&s=f122a97f2dbb6bb6f4ea2518d6e9ea63fc6a603b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LZevKK6tb0QZHUkPb7Z1nQ-tmdBJ5iodXRUWIZCMV_w.jpg?width=1080&crop=smart&auto=webp&s=9a72a9d4b626485e81d0264234997b4cfc5307cf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LZevKK6tb0QZHUkPb7Z1nQ-tmdBJ5iodXRUWIZCMV_w.jpg?auto=webp&s=7b7976ca65a38e946837fe4811202fdab4668344', 'width': 1200}, 'variants': {}}]} |
LLama CPP Python, GGML, GGUF, cTransformers, AutoGPTQ | 1 | My goal is to generate data using 13b or larger LLMs . I have access to 24 GB GPU and 64 GB RAM. I'm familiar with Python / NLP (not so familiar with C / C++)
I'm very confused with all these libraries , when to use which library ? a little eli5 or links where to begin would be much appreciated. Thank you so much. | 2024-01-05T21:17:38 | https://www.reddit.com/r/LocalLLaMA/comments/18zhvfk/llama_cpp_python_ggml_gguf_ctransformers_autogptq/ | Lumpy-Carob | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zhvfk | false | null | t3_18zhvfk | /r/LocalLLaMA/comments/18zhvfk/llama_cpp_python_ggml_gguf_ctransformers_autogptq/ | false | false | self | 1 | null |
Beginner to LLAMA: How did the model even manage to load and run on my 8GB GPU? Does Pytorch/LLAMA/CUDA use shared GPU memory? | 1 | So I have a Windows machine with 16 GB RAM and 8 GB VRAM. |
I downloaded the 7B LLAMA2 model for fun and managed to get the example_text_completion script running.
My question is: How did that model even fit on my GPU? 7B model at 16bit precision would require at least 14 GB.
As far as I can see, the model would max out my GPU's dedicated 8 GB and takes out another 6 GB of shared memory (RAM I guess).
Can someone confirm whether this is indeed the case and whether this is a feature of Pytorch, CUDA or LLAMA2 itself? Or perhaps another level of quantization is happening beyond 32->16 bit? | 2024-01-05T21:08:01 | https://www.reddit.com/r/LocalLLaMA/comments/18zhn0o/beginner_to_llama_how_did_the_model_even_manage/ | fish2079 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zhn0o | false | null | t3_18zhn0o | /r/LocalLLaMA/comments/18zhn0o/beginner_to_llama_how_did_the_model_even_manage/ | false | false | self | 1 | null |
Llama pro 8B | 1 | 2024-01-05T21:06:03 | https://x.com/_akhaliq/status/1743345597040451756?t=XcARkNJ0xFDQprD6KKUvig&s=34 | ninjasaid13 | x.com | 1970-01-01T00:00:00 | 0 | {} | 18zhl9d | false | null | t3_18zhl9d | /r/LocalLLaMA/comments/18zhl9d/llama_pro_8b/ | false | false | 1 | {'enabled': False, 'images': [{'id': '37FyVZDZkNnhC3aLulSTkCOW6VFG3D7sqd84Vo24K-c', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/2yoDoBFX-dsBOvDs63hb2ELdhE1WNC__t069eqQg46A.jpg?width=108&crop=smart&auto=webp&s=1cc2f7d1f04095cf18f38319a57089bd3cb54078', 'width': 108}, {'height': 171, 'url': 'https://external-preview.redd.it/2yoDoBFX-dsBOvDs63hb2ELdhE1WNC__t069eqQg46A.jpg?width=216&crop=smart&auto=webp&s=f04ce3a5eb5db854777ccdf83701008c2b49d34f', 'width': 216}, {'height': 254, 'url': 'https://external-preview.redd.it/2yoDoBFX-dsBOvDs63hb2ELdhE1WNC__t069eqQg46A.jpg?width=320&crop=smart&auto=webp&s=7b8f6254a2585d1733f33583cbf30e30804b2b01', 'width': 320}, {'height': 508, 'url': 'https://external-preview.redd.it/2yoDoBFX-dsBOvDs63hb2ELdhE1WNC__t069eqQg46A.jpg?width=640&crop=smart&auto=webp&s=f94a854440ac206bb99df48b067a5621b786fdb3', 'width': 640}, {'height': 762, 'url': 'https://external-preview.redd.it/2yoDoBFX-dsBOvDs63hb2ELdhE1WNC__t069eqQg46A.jpg?width=960&crop=smart&auto=webp&s=a13a9811814b39306de8831d5fef11f5fc2b63fd', 'width': 960}, {'height': 857, 'url': 'https://external-preview.redd.it/2yoDoBFX-dsBOvDs63hb2ELdhE1WNC__t069eqQg46A.jpg?width=1080&crop=smart&auto=webp&s=b160ac949dbee5f233c32ae66f44e2b79bec84c5', 'width': 1080}], 'source': {'height': 1242, 'url': 'https://external-preview.redd.it/2yoDoBFX-dsBOvDs63hb2ELdhE1WNC__t069eqQg46A.jpg?auto=webp&s=5c863fc9a86137d79279530f68e3b3f9a256fedb', 'width': 1564}, 'variants': {}}]} | ||
turn by turn bpl eval 7b model showdown | 1 | N:B This doesn't want to compete with the amazing work from raven wolf
I had the need to come up with some evaluation mechanism for turn by turn chat, because many model are amazing at a single turn zero shot but fall apart for long chats, and many models fall apart around the 8k token marks.
so I did the sensible thing and automated the heck out of it. thought this could be useful for some of you as well.
methodology is far from perfect: single shot zero temperature eval from gpt 4 turbo. each turn is scored 0-10, and here we present the sum of the points. totals vary because not all assistant go the full length. I'm testing chatml and vicuna format for turns. I will post both scores now, but in the future I may just post the best of the two.
question are mostly generation, tranformation and recall. there is a math question, just for the sake of it. there are a few confusing or mispelt questions on purpose.
here are the evaluations so far:
| Model | Prompt | Score |
|---|---|---|
| [mistral-7b-instruct-v0.2.Q5_K_M.gguf](https://chat.openai.com/share/fedaa237-9572-45e0-bb12-ed551dc8e0fc) | vicuna | 370 |
| [openhermes-2.5-mistral-7b-16k.Q5_K_M.gguf](https://chat.openai.com/share/5af6be3b-418a-43d3-95d8-67418b443f87) | vicuna | 346 |
| [mistral-7b-instruct-v0.2.Q5_K_M.gguf](https://chat.openai.com/share/09182f48-8742-4c76-982c-8e843f84a31c) | chatml | 348 |
| [toppy-m-7b.Q5_K_M.gguf](https://chat.openai.com/share/a8f2f218-d779-4f8f-bdf1-5f98edaf3987) | chatml | 356 |
| [openhermes-2.5-mistral-7b-16k.Q5_K_M.gguf](https://chat.openai.com/share/4fc18149-9038-470d-9693-0f77742fbbe1) | chatml | 330 |
| [dolphin-2.6-mistral-7b-dpo.Q5_K_M.gguf](https://chat.openai.com/share/6716f912-9ded-4304-90e0-c7c401c8419a) | chatml | 240 |
| [vicuna-7b-v1.5-16k.Q5_K_M.gguf](https://chat.openai.com/share/77cadade-865e-41f6-8578-3b7cd5a898b9) | vicuna | 238 |
| [toppy-m-7b.Q5_K_M.gguf](https://chat.openai.com/share/49a5d760-f450-4ecc-ba86-be1fc9f02f39) | vicuna | 230 |
| [nous-hermes-2-solar-10.7b.Q5_K_M.gguf](https://chat.openai.com/share/282ac87b-3b01-43ea-ab70-ab6d3fc1d84c) | vicuna | 221 |
| [xdan-l1-chat-rl-v1.Q5_K_M.gguf](https://chat.openai.com/share/cfda45bf-a752-44ff-9416-fdd609e8ae1e) | chatml | 216 |
| [yarn-mistral-7b-64k.Q5_K_M.gguf](https://chat.openai.com/share/7f93f0b0-5aee-4a2f-a3fa-b60f16e08baf) | vicuna | 215 |
| [vicuna-7b-v1.5-16k.Q5_K_M.gguf](https://chat.openai.com/share/55616534-2900-4d8d-bb3a-1c63e3308ff8) | chatml | 256 |
| [dolphin-2.6-mistral-7b-dpo.Q5_K_M.gguf](https://chat.openai.com/share/64cde90b-4b36-42c7-8813-5349ff13b8ca) | vicuna | 200 |
| [xdan-l1-chat-rl-v1.Q5_K_M.gguf](https://chat.openai.com/share/844544c8-af93-4734-b924-5a7d180d83a5) | vicuna | 100 |
| [nous-hermes-2-solar-10.7b.Q5_K_M.gguf](https://chat.openai.com/share/440a171d-296d-4972-9e2e-cc7913635a82) | chatml | 55 |
| [yarn-mistral-7b-64k.Q5_K_M.gguf](https://chat.openai.com/share/4a066ad6-5df9-4e73-9f8d-5e549587bc34) | chatml | 54 |
| [openchat_3.5-16k.Q5_K_M.gguf](https://chat.openai.com/share/1a37c16b-9d7c-41aa-8daf-478a1c6c5e25) | chatml | 17 |
| [openchat_3.5-16k.Q5_K_M.gguf](https://chat.openai.com/share/5213214d-1223-4ae9-b4c9-a2c57e6e3a57) | vicuna | 7 |
I'm open to suggestion to lengthen the test or replace questions of for other 7b models (I can get 13b running but it'd be pushing my poor 3060 memory) | 2024-01-05T20:21:43 | https://www.reddit.com/r/LocalLLaMA/comments/18zgim6/turn_by_turn_bpl_eval_7b_model_showdown/ | LoSboccacc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zgim6 | false | null | t3_18zgim6 | /r/LocalLLaMA/comments/18zgim6/turn_by_turn_bpl_eval_7b_model_showdown/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '2w_rkyQ3BWK6Asx2BGrOb1-Pw-vVHOrItDRNEbEsOtM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/G3IV4O7pVgcUCyakIjVe1t17f0e_P8_K64a1LJRBnd8.jpg?width=108&crop=smart&auto=webp&s=e910ff55aee46ac3668a1167b2d7d6bdfb084af6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/G3IV4O7pVgcUCyakIjVe1t17f0e_P8_K64a1LJRBnd8.jpg?width=216&crop=smart&auto=webp&s=437f7a12a0b95d23895b586dd47d1e56087ba24b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/G3IV4O7pVgcUCyakIjVe1t17f0e_P8_K64a1LJRBnd8.jpg?width=320&crop=smart&auto=webp&s=9ab0bd67350a4cbd500f0f64207b93e3c0906604', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/G3IV4O7pVgcUCyakIjVe1t17f0e_P8_K64a1LJRBnd8.jpg?width=640&crop=smart&auto=webp&s=03a312293aca01e503c46b205bbc5acd8e74b1a9', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/G3IV4O7pVgcUCyakIjVe1t17f0e_P8_K64a1LJRBnd8.jpg?width=960&crop=smart&auto=webp&s=4f79045774961c09205b24619764b7582bfb8720', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/G3IV4O7pVgcUCyakIjVe1t17f0e_P8_K64a1LJRBnd8.jpg?width=1080&crop=smart&auto=webp&s=1f580270979247b354e601402688a9a0da231929', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/G3IV4O7pVgcUCyakIjVe1t17f0e_P8_K64a1LJRBnd8.jpg?auto=webp&s=f57a110cd97a464988f4c5d26b81c74050595623', 'width': 1600}, 'variants': {}}]} |
Merge of CodeLLAMA 7b and Mistral 7b instruct v0.2 | 1 |
Has someone tried to merge CodeLLAMA 7b and Mistral 7b instruct v0.2? My intuition says this will result in a very capable model.
Do you think I should try? | 2024-01-05T20:13:34 | https://www.reddit.com/r/LocalLLaMA/comments/18zgbd4/merge_of_codellama_7b_and_mistral_7b_instruct_v02/ | Independent_Key1940 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zgbd4 | false | null | t3_18zgbd4 | /r/LocalLLaMA/comments/18zgbd4/merge_of_codellama_7b_and_mistral_7b_instruct_v02/ | false | false | self | 1 | null |
Merge of CodeLAMA 7b and Mistral 7b instruct v0.2 | 1 | Has someone tried to merge CodeLLAMA 7b and Mistral 7b instruct v0.2? My intuition says this will result in a very capable model.
Do you think I should try? | 2024-01-05T20:11:39 | https://www.reddit.com/r/LocalLLaMA/comments/18zg9p8/merge_of_codelama_7b_and_mistral_7b_instruct_v02/ | Independent_Key1940 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zg9p8 | false | null | t3_18zg9p8 | /r/LocalLLaMA/comments/18zg9p8/merge_of_codelama_7b_and_mistral_7b_instruct_v02/ | false | false | self | 1 | null |
How can I load a 7b LLM on a 4gb GTX 1650 | 1 | I recently saw a blog post of loading 70b LLM on a free gpu, so I wanted to load Mistral 7b on my own 4gb gpu but couldn't be able to do so. What can I do? | 2024-01-05T19:43:07 | https://www.reddit.com/r/LocalLLaMA/comments/18zfkfr/how_can_i_load_a_7b_llm_on_a_4gb_gtx_1650/ | Ashborn_1001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zfkfr | false | null | t3_18zfkfr | /r/LocalLLaMA/comments/18zfkfr/how_can_i_load_a_7b_llm_on_a_4gb_gtx_1650/ | false | false | self | 1 | null |
r/LocalLLaMA Starter Pack | 1 | 2024-01-05T19:34:17 | Snapeshot | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18zfcv5 | false | null | t3_18zfcv5 | /r/LocalLLaMA/comments/18zfcv5/rlocalllama_starter_pack/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'tNZsXpr_lK3ai9SZh6c0eI0Ml7qiAiFq_BZtRXnaF1k', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/biq4qqerboac1.png?width=108&crop=smart&auto=webp&s=1bcebef22dcf29ff3398e2ee3aef12c945e79d17', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/biq4qqerboac1.png?width=216&crop=smart&auto=webp&s=0fe0c8d0b860bbcc003fef8c25396ef02145d482', 'width': 216}, {'height': 321, 'url': 'https://preview.redd.it/biq4qqerboac1.png?width=320&crop=smart&auto=webp&s=bfe54252cac47d1ae88b96939735aea7a2cf0ed6', 'width': 320}, {'height': 642, 'url': 'https://preview.redd.it/biq4qqerboac1.png?width=640&crop=smart&auto=webp&s=5dabf0e4aa976790aa6f8e4c027468d050ab2086', 'width': 640}, {'height': 963, 'url': 'https://preview.redd.it/biq4qqerboac1.png?width=960&crop=smart&auto=webp&s=ff13dbbcecbf408dbf212925e5b7c2c4d8a04928', 'width': 960}], 'source': {'height': 987, 'url': 'https://preview.redd.it/biq4qqerboac1.png?auto=webp&s=215a11a5facd7d25a4a2e5dc1c71296a06b11f7c', 'width': 983}, 'variants': {}}]} | |||
Best model for drawing inferences from large text documents? | 1 | Hi all,
I'm a little late to the LLM party, and I've done some of my own research and tested with trial and error, but I've hit a wall and would appreciate some help. Hope I'm in the right place.
I have text documents containing transcripts ranging from 10 to 20 minutes of conversation each, and I need it to summarize what was said and determine if someone is lying or if their story is inconsistent.
I run> ollama run llama2:latest "please provide a paragraph summary for this audio statement, then report any inconsistencies in story that may indicate dishonesty or lack of trustworthiness: $(cat text\_file)"
So far, llama2 has been able to quite nicely summarize some of my smaller transcripts, and it even reasons (albeit very dumb) with promising success. I believe I'm running into a context window issue with the larger ones, which I'm unsure how to alleviate.
I've also tried other models, with different quantization and parameter count with mixed results, more often than not the ai just spits out garbage.
Is there a better recipe for what I'm trying to do? I prefer quality over speed.
I'm working with a single 24gb M6000, 128gb DDR4 ECC and dual xeons running debian. | 2024-01-05T19:27:29 | https://www.reddit.com/r/LocalLLaMA/comments/18zf6zy/best_model_for_drawing_inferences_from_large_text/ | Exact-Armadillo9491 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zf6zy | false | null | t3_18zf6zy | /r/LocalLLaMA/comments/18zf6zy/best_model_for_drawing_inferences_from_large_text/ | false | false | self | 1 | null |
llama.sh: No-messing-around sh client for llama.cpp's server | 1 | 2024-01-05T19:19:17 | https://github.com/m18coppola/llama.sh | m18coppola | github.com | 1970-01-01T00:00:00 | 0 | {} | 18zezz2 | false | null | t3_18zezz2 | /r/LocalLLaMA/comments/18zezz2/llamash_nomessingaround_sh_client_for_llamacpps/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'V_MGE6qVl2Lk_j2K0A8lmcW1F8kXIeD4agXEsOMhIqo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/77piSSlxC3msmmjePngSh_u2CdFOEFrXonzOFU3XI0o.jpg?width=108&crop=smart&auto=webp&s=32e99737945126dcaf3ab745f38052d1c5081bcd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/77piSSlxC3msmmjePngSh_u2CdFOEFrXonzOFU3XI0o.jpg?width=216&crop=smart&auto=webp&s=ca3b3f0ab765ebdd141df93d7139f4ec5d8f6d9f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/77piSSlxC3msmmjePngSh_u2CdFOEFrXonzOFU3XI0o.jpg?width=320&crop=smart&auto=webp&s=4518d51c27f2d55f8d8b8fa83d85dc5f3c256980', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/77piSSlxC3msmmjePngSh_u2CdFOEFrXonzOFU3XI0o.jpg?width=640&crop=smart&auto=webp&s=4e2910c3af3533b17bd59d3cccc88e7f572ce893', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/77piSSlxC3msmmjePngSh_u2CdFOEFrXonzOFU3XI0o.jpg?width=960&crop=smart&auto=webp&s=13f3bc0724a3883d3143596351dbda7ff1dac109', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/77piSSlxC3msmmjePngSh_u2CdFOEFrXonzOFU3XI0o.jpg?width=1080&crop=smart&auto=webp&s=f681785b434c48943361d91011cc923a98f1f00e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/77piSSlxC3msmmjePngSh_u2CdFOEFrXonzOFU3XI0o.jpg?auto=webp&s=f2f0fe0e5cf15728ac5dac28aa6da67b87725243', 'width': 1200}, 'variants': {}}]} | ||
Fine tuned coqui XTTS voice, how to use the model.pth? | 1 | I've fine-tuned a voice on the Colab notebook https://colab.research.google.com/drive/1GiI4_X724M8q2W-zZ-jXo7cWTV7RfaH-?usp=sharing for xtts model, and also did the same running this notebook locally, and in both cases I downloaded config.json and model.pth . They both give the same errors.
How do I use these finetuned models on my local machine for text to speech?
import torch
from TTS.api import TTS
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on device: {device}") # this correctly prints cuda
# Init Model
model_path = r"F:\VoiceCloning\TTS\00VoiceModels\MyVoice\model.pth"
config_path = r"F:\VoiceCloning\TTS\00VoiceModels\MyVoice\config.json"
tts = TTS(model_name=model_path, config_path=config_path, progress_bar=True).to("cuda")
text = "This is a test with my own voice model."
output_path = r"F:\VoiceCloning\TTS\MyVoice\output.wav"
tts.tts_to_file(text=text, file_path=output_path)
print(f"Audio file saved to {output_path}")
This complains about `AttributeError: 'TTS' object has no attribute 'is_multi_speaker'`
and running from the commandline to test
> tts --text "testing 123" --model_name "model.pth" --out_path outputtest.wav
or
> tts --model_name "model.pth" --list_speaker_idxs
complains
`ValueError: not enough values to unpack (expected 4, got 1)` | 2024-01-05T19:06:35 | https://www.reddit.com/r/LocalLLaMA/comments/18zep55/fine_tuned_coqui_xtts_voice_how_to_use_the/ | hwknd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zep55 | false | null | t3_18zep55 | /r/LocalLLaMA/comments/18zep55/fine_tuned_coqui_xtts_voice_how_to_use_the/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]} |
Can you use 3090 with Tesla P40 on desktop PC? | 4 | I am getting closer to upgrade my PC to 13700 with DDR 6000, still thinking about VRAM
Currently I have 2070 (8GB), I am running GGUF 7B, 10B and 13B
I use Koboldcpp
4090 is expensive and probably not better than 3090 much
So with 24GB VRAM I could use maybe Mixtral 8x7B GGUF and 20B and 33B, probably 70B will be very slow
Purchasing two 3090 means there will be difficult to fit that together and it will be noisy and require even more cooling
Then there is P40
It looks passive and not so big
Question: can you use 3090 and P40 together in typical desktop PC motherboard?
Do you need some additional magic to make it work?
I am thinking both Windows and Linux | 2024-01-05T18:57:57 | https://www.reddit.com/r/LocalLLaMA/comments/18zeho4/can_you_use_3090_with_tesla_p40_on_desktop_pc/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zeho4 | false | null | t3_18zeho4 | /r/LocalLLaMA/comments/18zeho4/can_you_use_3090_with_tesla_p40_on_desktop_pc/ | false | false | self | 4 | null |
Whats the easiest way to do RAG for a folder of pdfs, ppts, docx, etc ? | 1 | Whats the easiest way to do RAG for a folder of pdfs, ppts, docx, etc ?
Ideally with some kind of python bindings or http requests api ? | 2024-01-05T18:33:19 | https://www.reddit.com/r/LocalLLaMA/comments/18zdwu4/whats_the_easiest_way_to_do_rag_for_a_folder_of/ | Able_Conflict3308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zdwu4 | false | null | t3_18zdwu4 | /r/LocalLLaMA/comments/18zdwu4/whats_the_easiest_way_to_do_rag_for_a_folder_of/ | false | false | self | 1 | null |
New Flairs? | 1 | [removed] | 2024-01-05T18:13:59 | https://www.reddit.com/r/LocalLLaMA/comments/18zdfww/new_flairs/ | JohnRobertSmithy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zdfww | false | null | t3_18zdfww | /r/LocalLLaMA/comments/18zdfww/new_flairs/ | false | false | default | 1 | null |
Function calling using Open Source LLM | 1 | [removed] | 2024-01-05T17:46:07 | https://www.reddit.com/r/LocalLLaMA/comments/18zcrem/function_calling_using_open_source_llm/ | thevatsalsaglani | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zcrem | false | null | t3_18zcrem | /r/LocalLLaMA/comments/18zcrem/function_calling_using_open_source_llm/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Ugfb1ktBkfZ8i0F-Tg1Mna9RHwFZYrKnUd5uoMit0Z8', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/QQVNjyLUHX8ZtgmNlNsNNIM_Mh0cFMEbmLSzsPyqS9A.jpg?width=108&crop=smart&auto=webp&s=da173b853334eba94a7d14737096e159a86f5f0d', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/QQVNjyLUHX8ZtgmNlNsNNIM_Mh0cFMEbmLSzsPyqS9A.jpg?width=216&crop=smart&auto=webp&s=0b3f6bdba7cf1dd0ae8c1ab92812690143fb3253', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/QQVNjyLUHX8ZtgmNlNsNNIM_Mh0cFMEbmLSzsPyqS9A.jpg?width=320&crop=smart&auto=webp&s=7bb49df3145f6418f1fa7dfad5e24dce874c9e02', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/QQVNjyLUHX8ZtgmNlNsNNIM_Mh0cFMEbmLSzsPyqS9A.jpg?width=640&crop=smart&auto=webp&s=371957a41ebb2f5322e7d6d8f8aeacb5627e1eba', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/QQVNjyLUHX8ZtgmNlNsNNIM_Mh0cFMEbmLSzsPyqS9A.jpg?width=960&crop=smart&auto=webp&s=ca82e31c7f8e31c90ba87bab280cd4e51854bd8b', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/QQVNjyLUHX8ZtgmNlNsNNIM_Mh0cFMEbmLSzsPyqS9A.jpg?width=1080&crop=smart&auto=webp&s=f69a32bb584c89329decf89a9250b3f5ddb2db80', 'width': 1080}], 'source': {'height': 686, 'url': 'https://external-preview.redd.it/QQVNjyLUHX8ZtgmNlNsNNIM_Mh0cFMEbmLSzsPyqS9A.jpg?auto=webp&s=638a3e786a46720ce5df125134f806819c432f3d', 'width': 1200}, 'variants': {}}]} |
Expanding Capabilities through Composition (CALM) | 1 | This is crazy. It is a want to combine two models of different types and mostly keep each models strength. I hope they release the code soon. This would be HUGE for open source.
LLM Augmented LLMs: Expanding Capabilities through Composition
[Rachit Bansal](https://arxiv.org/search/cs?searchtype=author&query=Bansal,+R), [Bidisha Samanta](https://arxiv.org/search/cs?searchtype=author&query=Samanta,+B), [Siddharth Dalmia](https://arxiv.org/search/cs?searchtype=author&query=Dalmia,+S), [Nitish Gupta](https://arxiv.org/search/cs?searchtype=author&query=Gupta,+N), [Shikhar Vashishth](https://arxiv.org/search/cs?searchtype=author&query=Vashishth,+S), [Sriram Ganapathy](https://arxiv.org/search/cs?searchtype=author&query=Ganapathy,+S), [Abhishek Bapna](https://arxiv.org/search/cs?searchtype=author&query=Bapna,+A), [Prateek Jain](https://arxiv.org/search/cs?searchtype=author&query=Jain,+P), [Partha Talukdar](https://arxiv.org/search/cs?searchtype=author&query=Talukdar,+P)
>Foundational models with billions of parameters which have been trained on large corpora of data have demonstrated non-trivial skills in a variety of domains. However, due to their monolithic structure, it is challenging and expensive to augment them or impart new skills. On the other hand, due to their adaptation abilities, several new instances of these models are being trained towards new domains and tasks. In this work, we study the problem of efficient and practical composition of existing foundation models with more specific models to enable newer capabilities. To this end, we propose CALM -- Composition to Augment Language Models -- which introduces cross-attention between models to compose their representations and enable new capabilities. Salient features of CALM are: (i) Scales up LLMs on new tasks by 're-using' existing LLMs along with a few additional parameters and data, (ii) Existing model weights are kept intact, and hence preserves existing capabilities, and (iii) Applies to diverse domains and settings. We illustrate that augmenting PaLM2-S with a smaller model trained on low-resource languages results in an absolute improvement of up to 13\\% on tasks like translation into English and arithmetic reasoning for low-resource languages. Similarly, when PaLM2-S is augmented with a code-specific model, we see a relative improvement of 40\\% over the base model for code generation and explanation tasks -- on-par with fully fine-tuned counterparts.
​
[https://arxiv.org/abs/2401.02412](https://arxiv.org/abs/2401.02412)
​ | 2024-01-05T17:34:01 | https://www.reddit.com/r/LocalLLaMA/comments/18zcgyp/expanding_capabilities_through_composition_calm/ | knownboyofno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zcgyp | false | null | t3_18zcgyp | /r/LocalLLaMA/comments/18zcgyp/expanding_capabilities_through_composition_calm/ | false | false | self | 1 | null |
Size dataset for fine tuning llama2 (qlora) | 1 | I am currently trying to fine a llama2 model with qlora, the aim of finetuning is to convert input into its SQL equivalent. I see there are some dataset in HuggingfaceHub like "bugdaryan/sql-create-context-instruction" (about (80k) rows). I understand that if i want to achieve a good perfomance i have to produce the dataset with my own database structure but i don't know how many samples i need. Some help? | 2024-01-05T16:48:40 | https://www.reddit.com/r/LocalLLaMA/comments/18zbdfb/size_dataset_for_fine_tuning_llama2_qlora/ | JellyfishFriendly3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zbdfb | false | null | t3_18zbdfb | /r/LocalLLaMA/comments/18zbdfb/size_dataset_for_fine_tuning_llama2_qlora/ | false | false | self | 1 | null |
Telegram and Whatsapp bots with RAG, powered by Mixtral | 1 | [removed] | 2024-01-05T16:36:08 | https://www.reddit.com/r/LocalLLaMA/comments/18zb2lp/telegram_and_whatsapp_bots_with_rag_powered_by/ | Electrical-Profile79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zb2lp | false | null | t3_18zb2lp | /r/LocalLLaMA/comments/18zb2lp/telegram_and_whatsapp_bots_with_rag_powered_by/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'b0sByEpmigFj6I2QT4QJU75cuEFcnQRESFjK_i5vzRM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MlDmYH1Vw8x9cREtGesZbzGh9yvjCEgGQl4HXJ1Mn18.jpg?width=108&crop=smart&auto=webp&s=a5c95f7eeea4cf8a0c66cfff531c5172fbb5871e', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MlDmYH1Vw8x9cREtGesZbzGh9yvjCEgGQl4HXJ1Mn18.jpg?width=216&crop=smart&auto=webp&s=fd2cf048080e9c85617935d0520e173d11f9f9a9', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/MlDmYH1Vw8x9cREtGesZbzGh9yvjCEgGQl4HXJ1Mn18.jpg?width=320&crop=smart&auto=webp&s=db069ff13952592f95aee863fadcfd0aa4853a99', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/MlDmYH1Vw8x9cREtGesZbzGh9yvjCEgGQl4HXJ1Mn18.jpg?auto=webp&s=08662e329fd481cba28557a8fef3c07131170ac5', 'width': 512}, 'variants': {}}]} |
How to load and use LLaMA-2 for regression | 1 | Hi, I'd like to use LLaMA-2 for a regression task but I have no idea how to load it and train it using Huggingface libraries. I'd appreciate it if someone help me to do this. | 2024-01-05T16:28:20 | https://www.reddit.com/r/LocalLLaMA/comments/18zaw0e/how_to_load_and_use_llama2_for_regression/ | Ornery-Young-7346 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zaw0e | false | null | t3_18zaw0e | /r/LocalLLaMA/comments/18zaw0e/how_to_load_and_use_llama2_for_regression/ | false | false | self | 1 | null |
ReAct Agent tools with Llama.cpp grammars | 1 | Hi folks
I have been trying to play with open source LLMs and agents, and the results have not been great.
I looked at guidance, LMQL and Llama.cpp grammars, seems the latter is very powerful.
However, not finding examples or discussions on enforcing agents to use tools by using it .
If I wanted an agent to have as tools call_api_foo and call_api_bar, how would I enforce to only use any of those.
Thanks much | 2024-01-05T16:10:19 | https://www.reddit.com/r/LocalLLaMA/comments/18zaggh/react_agent_tools_with_llamacpp_grammars/ | rcarrillocruz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18zaggh | false | null | t3_18zaggh | /r/LocalLLaMA/comments/18zaggh/react_agent_tools_with_llamacpp_grammars/ | false | false | self | 1 | null |
Chunking Text & Normalizing embeddings (C#) | 1 | How do I normalize embeddings in C# using the least amount of dependencies?
Normalization - making all embeddings the same size, making the embeddings better comparable.
In my RAG system, Chromadb returns all documents because they're all within 1f of the user input.
I think this failure is due to embeddings not being normalized either prior to storing the chunks or the embeddings from the user input are not being normalized before querying chroma db.
So I really need to know how to normalize embeddings using the least amount of dependencies. Thanks for any tips | 2024-01-05T15:36:28 | https://www.reddit.com/r/LocalLLaMA/comments/18z9nla/chunking_text_normalizing_embeddings_c/ | 1EvilSexyGenius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z9nla | false | null | t3_18z9nla | /r/LocalLLaMA/comments/18z9nla/chunking_text_normalizing_embeddings_c/ | false | false | self | 1 | null |
Some thoughts on AI accuracy - what do you think? | 1 | LLMs now have amazingly convincing conversational skills, inference, and creativity, but accuracy and hallucinations are still an issue in even the best AI. What if AI were able to have its own "apps" either internally or externally that it could tap into when accuracy is required?
For example, when being asked a math problem, having a calculator "app" that is just a coded calculator to guarantee accuracy. Even if it is explaining the problem step by step the calculator can be used at each step as well as to calculate at the beginning and end of the explanation to double check accuracy.
Or like a "library", a database of texts where it can look up an answer to provide a cited excerpt when accuracy is needed. The ability to search the web for answers is slow and can also come with it's own wrong information so that could be valuable.
A higher level version of this scenario could be having connected nodes of specialized mini-llms, one for coding, one for research, one for roleplay and creative writing ,one for philosophical discussion, among many others. Maybe it could tap into these at will as it determines what is needed from a prompt.
Maybe that is even a gateway to better local LLMs if you can have specialized nodes of the types of llms you need locally, resulting in it taking up much less space and needing less processing power?
What do you all think about this? I don't have a programming background so forgive me if I'm talking nonsense. I just get the feeling we are trying to over-generalize some of these AI. One of the greatest skills we've developed as humans is resourcefulness. I know how even dev's use google/stack overflow as they work (now copilot and other AI), and aren't expected to always have the answers right there in their heads. Why are we building AI that way? | 2024-01-05T15:29:59 | https://www.reddit.com/r/LocalLLaMA/comments/18z9icx/some_thoughts_on_ai_accuracy_what_do_you_think/ | babesinboyland | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z9icx | false | null | t3_18z9icx | /r/LocalLLaMA/comments/18z9icx/some_thoughts_on_ai_accuracy_what_do_you_think/ | false | false | self | 1 | null |
Andrej Karpathy should stop sitting on the fence. Leave โOpenโAI or stop talking about Opensource models. | 1 | 2024-01-05T15:29:02 | https://v.redd.it/n1gh1bk24nac1 | TysonUsykFury | /r/LocalLLaMA/comments/18z9hlt/andrej_karpathy_should_stop_sitting_on_the_fence/ | 1970-01-01T00:00:00 | 0 | {} | 18z9hlt | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/n1gh1bk24nac1/DASHPlaylist.mpd?a=1707146943%2CYzVhY2Y2YjVlNWJjMTFjODE2ZDc2ODIyMTNmYmIxY2QyZmQ1MTQwNjY0MjdmYTVjNmRjMTIxZjU0YzgwODgxOA%3D%3D&v=1&f=sd', 'duration': 4, 'fallback_url': 'https://v.redd.it/n1gh1bk24nac1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/n1gh1bk24nac1/HLSPlaylist.m3u8?a=1707146943%2CZDlhYWYyZjhlMTM2MWNmMGNlYjNkNDhmZTE3NzMyNDYzMjkxOTBkNjJkMWFlOWMzMzQwYWI0NmM0NjBmYWQ0Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/n1gh1bk24nac1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_18z9hlt | /r/LocalLLaMA/comments/18z9hlt/andrej_karpathy_should_stop_sitting_on_the_fence/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'amw1eGQ2ZDI0bmFjMRF1mWw9TFj9_Oq6S_bisRSh_1WsS3YXbFzTPMWGN8v_', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/amw1eGQ2ZDI0bmFjMRF1mWw9TFj9_Oq6S_bisRSh_1WsS3YXbFzTPMWGN8v_.png?width=108&crop=smart&format=pjpg&auto=webp&s=164f1f140c54908624a6936d8f9e01006b08eceb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/amw1eGQ2ZDI0bmFjMRF1mWw9TFj9_Oq6S_bisRSh_1WsS3YXbFzTPMWGN8v_.png?width=216&crop=smart&format=pjpg&auto=webp&s=dbd0236d9dbcd53ce6948624f8fe7462c4d6a1bd', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/amw1eGQ2ZDI0bmFjMRF1mWw9TFj9_Oq6S_bisRSh_1WsS3YXbFzTPMWGN8v_.png?width=320&crop=smart&format=pjpg&auto=webp&s=0869fd61161000b7aa56f86e80a5acbacfb56e6f', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/amw1eGQ2ZDI0bmFjMRF1mWw9TFj9_Oq6S_bisRSh_1WsS3YXbFzTPMWGN8v_.png?width=640&crop=smart&format=pjpg&auto=webp&s=576c1c5a92215268d6ef0ef35b1cc3f7ec274679', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/amw1eGQ2ZDI0bmFjMRF1mWw9TFj9_Oq6S_bisRSh_1WsS3YXbFzTPMWGN8v_.png?width=960&crop=smart&format=pjpg&auto=webp&s=62870c4d27b3b2e56e408a900cd16332122c337c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/amw1eGQ2ZDI0bmFjMRF1mWw9TFj9_Oq6S_bisRSh_1WsS3YXbFzTPMWGN8v_.png?width=1080&crop=smart&format=pjpg&auto=webp&s=dce009dab61e1463f1aa5cb6008c9d0e5cde62e8', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/amw1eGQ2ZDI0bmFjMRF1mWw9TFj9_Oq6S_bisRSh_1WsS3YXbFzTPMWGN8v_.png?format=pjpg&auto=webp&s=b96cb38992873df25860dd4a511af604080dd687', 'width': 1080}, 'variants': {}}]} | ||
NeuralHermes-2.5-Mistral-7B-laser | 1 | NeuralHermes laser version was released.
https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B-laser
No quants from u/The-Bloke for now.
​
​ | 2024-01-05T15:09:48 | https://www.reddit.com/r/LocalLLaMA/comments/18z9235/neuralhermes25mistral7blaser/ | Feztopia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z9235 | false | null | t3_18z9235 | /r/LocalLLaMA/comments/18z9235/neuralhermes25mistral7blaser/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'uOG07mSSkQF44_xhFMU2n5HhLHnLtW8KFHZAyxBnuWg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ys8B7uUSSdr6pzi3slP0n5SB3sNBOp2FgZ7I7cZvL18.jpg?width=108&crop=smart&auto=webp&s=ddb09f9c8a78a83028f27201ff99054023f3b0a5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ys8B7uUSSdr6pzi3slP0n5SB3sNBOp2FgZ7I7cZvL18.jpg?width=216&crop=smart&auto=webp&s=1a6a19fa09d5899a2d823ff0b35496ae0f2e57e0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ys8B7uUSSdr6pzi3slP0n5SB3sNBOp2FgZ7I7cZvL18.jpg?width=320&crop=smart&auto=webp&s=c3338b41d9e2aa0896c22cb66e03886c605160ff', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ys8B7uUSSdr6pzi3slP0n5SB3sNBOp2FgZ7I7cZvL18.jpg?width=640&crop=smart&auto=webp&s=4215fd73b96faff9078c83c9249172d345efbbf4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ys8B7uUSSdr6pzi3slP0n5SB3sNBOp2FgZ7I7cZvL18.jpg?width=960&crop=smart&auto=webp&s=7ad69a8489d02986efe1f3a4f70769d1aa42165c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ys8B7uUSSdr6pzi3slP0n5SB3sNBOp2FgZ7I7cZvL18.jpg?width=1080&crop=smart&auto=webp&s=f394300093db63409d992abb953999464c3875e3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ys8B7uUSSdr6pzi3slP0n5SB3sNBOp2FgZ7I7cZvL18.jpg?auto=webp&s=663e6a29e788b9c7ee3d1598cf8bdc0300e3dbc2', 'width': 1200}, 'variants': {}}]} |
Techniques / options to split model inference across multiple LINUX LAN computers (each with CPU&GPU)? | 1 | Techniques / options to split model inference across multiple LINUX LAN computers (each with CPU&GPU)?
I know it's generally possible to use CPU or GPU or CPU+GPU or multiple GPUs within a single computer. But for basic cases (just a consumer with a couple of GPU equipped PCs) what tools / techniques support dividing model (e.g. a LLM too big to fit on any one PC's GPU) inference between a couple of PCs on a LAN with a GPU in each?
Obviously I know the LAN transfer is a big bottleneck but if I could have say nearly 16GBy worth of the model calculated on one PC's GPU and the same on another I can't help but think that at least SOME models with SOME partitioning will run a lot faster than just using one CPU+GPU in one PC and letting the CPU/RAM handle work the single PC's GPU cannot.
If the answer is looking at the model execution code and changing things in Pytorch / TF, ok, I can do that, but I just sort of thought there would be better tools or more automated sharding options in the model runtimes (pytorch, TF, openvino, whatever) for this. Some people have 3-4+ decently equipped PCs e.g. in their house and family PCs which they could use some part of the time, it seems wasteful not to take advantage of that option if the resulting bottleneck is not worse than just using a single PC CPU+RAM+GPU.
Obviously if I wanted totally separate instances of models working on totally isolated problems in parallel that's trivial to do. But I'm more interested in the case of say running 30B, 70B, etc. models where there's not enough VRAM in any single PC to run a decently good model variant.
I'm interested in the NVIDIA and Intel/OneAPI/OpenVINO hased GPUs and both support PT, TF and I assume the latter are what most downloaded models will use. | 2024-01-05T15:08:46 | https://www.reddit.com/r/LocalLLaMA/comments/18z91az/techniques_options_to_split_model_inference/ | Calcidiol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z91az | false | null | t3_18z91az | /r/LocalLLaMA/comments/18z91az/techniques_options_to_split_model_inference/ | false | false | self | 1 | null |
Need help in training in Kaggle | 1 | I wanted to train a model, specifically the llama 7b one, on my native language on Colab or Kaggle, whichever suits the purpose. For that, all I have is a dataset containing 18k Wikipedia articles which, according to the source, is cleaned. Please share any guides and articles regarding the training process and about your thought on it. Thanks | 2024-01-05T14:37:51 | https://www.reddit.com/r/LocalLLaMA/comments/18z8csu/need_help_in_training_in_kaggle/ | Friendly-Gur-3289 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z8csu | false | null | t3_18z8csu | /r/LocalLLaMA/comments/18z8csu/need_help_in_training_in_kaggle/ | false | false | self | 1 | null |
Beyonder-4x7B-v2 New MoE Model for OpenChat-1210, CodeNinja, Starling-RP, WizardMath | 1 | **Model**: [https://huggingface.co/mlabonne/Beyonder-4x7B-v2](https://huggingface.co/mlabonne/Beyonder-4x7B-v2)
4 experts total, 2 used during inference.
It uses a diverse set of experts (chat, math, code, RP).
It doesnโt use merges for experts to minimize chance of contamination.
* [**openchat/openchat-3.5-1210**](https://huggingface.co/openchat/openchat-3.5-1210)
* [**beowolx/CodeNinja-1.0-OpenChat-7B**](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [**maywell/PiVoT-0.1-Starling-LM-RP**](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP)
* [**WizardLM/WizardMath-7B-V1.1**](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
**Performance**
On OpenLLM leaderboard (even if itโs not the best bench) close to a Mixtral 8x7B Instruct (which has twice as many experts).
On a more comprehensive Nous benchmark suite, it was tested close to NousHermes-2-34B fine tune (which is a much bigger model, Beyonder is only 24B parameters + only 2 experts are selected during inference).
source (tweet from author): [https://twitter.com/maximelabonne/status/1743246746661122233](https://twitter.com/maximelabonne/status/1743246746661122233)
[OpenLLM leaderboard](https://preview.redd.it/aez6i9q3rmac1.png?width=1157&format=png&auto=webp&s=049acd444b8c4bd5adc8ee94b0725de02ee20846)
[Nous benchmark suite](https://preview.redd.it/9pzyduh4rmac1.png?width=890&format=png&auto=webp&s=55b7a3dbb90d57845ffc08cd6fdb6e498b47d3e3) | 2024-01-05T14:18:48 | https://www.reddit.com/r/LocalLLaMA/comments/18z7xvy/beyonder4x7bv2_new_moe_model_for_openchat1210/ | galambalazs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z7xvy | false | null | t3_18z7xvy | /r/LocalLLaMA/comments/18z7xvy/beyonder4x7bv2_new_moe_model_for_openchat1210/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Ay4pav7D2T0FFh2_J7B5txmfEpDOucBzM_mjRmi7xjk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zISak_lig6eE51S9Vlf7zFYDyIxG8AE5uSYB8E3ZiGI.jpg?width=108&crop=smart&auto=webp&s=52fec2467e830eb6ba3c5e3c0ebfa6314cd8818b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/zISak_lig6eE51S9Vlf7zFYDyIxG8AE5uSYB8E3ZiGI.jpg?width=216&crop=smart&auto=webp&s=28511317b34be09df8f2baf93a3ca9fabb7d23f6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/zISak_lig6eE51S9Vlf7zFYDyIxG8AE5uSYB8E3ZiGI.jpg?width=320&crop=smart&auto=webp&s=49ae1ab30291aa109a076b161d9f2b31d57faedd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/zISak_lig6eE51S9Vlf7zFYDyIxG8AE5uSYB8E3ZiGI.jpg?width=640&crop=smart&auto=webp&s=0b2046d043c8d13240397eddd42d9ec551681ec9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/zISak_lig6eE51S9Vlf7zFYDyIxG8AE5uSYB8E3ZiGI.jpg?width=960&crop=smart&auto=webp&s=66e02d6a1f59f06e017d5f08b95b076e20a45fdf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/zISak_lig6eE51S9Vlf7zFYDyIxG8AE5uSYB8E3ZiGI.jpg?width=1080&crop=smart&auto=webp&s=4f4ddd21fdeb44d3ae77a3ca60828d04e67fea72', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/zISak_lig6eE51S9Vlf7zFYDyIxG8AE5uSYB8E3ZiGI.jpg?auto=webp&s=0677e6d0c1d3f27d4b5fadf4c8648a7e212b26c9', 'width': 1200}, 'variants': {}}]} | |
Trouble creating document Q&A chat bot | 1 | Hello all, I'm trying to create a chat bot for my work so we can query internal documentations. I'm utilizing h2oGPT for this, but my results haven't been so great and I'm having some difficulties understanding how different aspects of my setup are affecting my responses.
We work in a lab that fixes Ericsson and Nokia radios. These radios are tested differently and we have documents to walk the technician through. The Nokia radios get tested with a device called an Exfo, and the Ericsson radios get tested with software called Helios.
Take this prompt for example: "My radio isn't getting VSWR, what should I do?"
This question is directed towards Nokia radios, as they're the only ones that get a VSWR test. I even have explicit notes in the document saying this that VSWR only applies to Nokia. Despite this, when h2oGPT generates a response, it might mix instructions for the Ericsson and Nokia radios. So in this example, it seems like it's forming its answer primarily off keywords and not actual sentence meaning. If I look at the score that h2oGPT gave to each unreleated section, they might be around (0.39) or so.
So I suppose my question is, which thing is responsible for determining these relationships and whether or not they get included in the final output answer?
I've tried using these sentences transformers, but the results were very similar. I'm not sure if there are other options that work better for what I'm doing:
**sentence-transformers/all-MiniLM-L6-v2**
**sentence-transformers/all-MiniLM-L12-v2**
I've tried several language models as well, but unfortunately I'm on CPU right now until our A4000 GPU shows up, so testing larger models has been a challenge. For testing purposes, I've been testing with this model: **stabilityai/stablelm-zephyr-3b**
It certainly seems like larger models have better results, but I'd like to know more about how they influence the information retrieval process. Do better models make this process more accurate? Is the fact that I'm only using a 3b model part of the reason it seems incapable of more advanced reasoning?
I'm also wondering if ingesting the document differently might help. Perhaps the way we chunk the document? Any information would be really helpful. | 2024-01-05T13:58:08 | https://www.reddit.com/r/LocalLLaMA/comments/18z7hgj/trouble_creating_document_qa_chat_bot/ | RidesFlysAndVibes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z7hgj | false | null | t3_18z7hgj | /r/LocalLLaMA/comments/18z7hgj/trouble_creating_document_qa_chat_bot/ | false | false | self | 1 | null |
Question about Data Generation for Region-Description of MLLM | 1 | Hello everyone,
I am currently running code for Shikra, MiniGPT4 and GPT4ROI. I am interested in creating my own Instruction Tuning dataset. However, I am unable to find any open-source scripts or assistance within the community on using prompts to generate corpora with GPT-4. Has anyone come across or seen any relevant open-source code?
To put it more explicitly, if I have a labeled object detection dataset, how can I create a Region Description dataset based on it for training a region-level multimodal language model (LLM)? | 2024-01-05T13:35:40 | https://www.reddit.com/r/LocalLLaMA/comments/18z713k/question_about_data_generation_for/ | KeepOnIterating | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z713k | false | null | t3_18z713k | /r/LocalLLaMA/comments/18z713k/question_about_data_generation_for/ | false | false | self | 1 | null |
ใHelp about Data Generation of MLLMใ | 1 | Hello everyone,
I am currently running code for Shikra and MiniGPT4. I am interested in creating my own Instruction Tuning dataset. However, I am unable to find any open-source scripts or assistance within the community on using prompts to generate corpora with GPT-4. Has anyone come across or seen any relevant open-source code?
To put it more explicitly, if I have a labeled object detection dataset, how can I create a Region Description dataset based on it for training a region-level multimodal language model (LLM)? | 2024-01-05T13:23:40 | https://www.reddit.com/r/LocalLLaMA/comments/18z6sct/help_about_data_generation_of_mllm/ | KeepOnIterating | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z6sct | false | null | t3_18z6sct | /r/LocalLLaMA/comments/18z6sct/help_about_data_generation_of_mllm/ | false | false | self | 1 | null |
Understanding LLMs: A comprehensive overview from training to inference | 1 | 2024-01-05T12:59:27 | https://arxiv.org/abs/2401.02038 | llamaShill | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 18z6bed | false | null | t3_18z6bed | /r/LocalLLaMA/comments/18z6bed/understanding_llms_a_comprehensive_overview_from/ | false | false | default | 1 | null | |
Gains from 16gb ram to 32gb ram worth it? Even more questions in the post (The pc is the bottleneck, I get it) | 1 | CPU: Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz
GPU: Geforce GTX 1080 Ti
Motherboard: Z390 M GAMING-CF
Ram: 8 GB x 2 ddr4 ram (2133 MHz according to task manager)
I have been running Noromaid-v0.1-mixtral-8x7b-Instruct at q3\_k\_m (20 GB) and I get about 2.2 tk/s - 1.3 tk/s with context loaded. I think this is the expected performance.
It might be worth it to get 32 GB ram or even 64 GB ram in the meantime before I settle on a build. My reason is that the memory usage goes to 97% when loading the whole model. (Pretty sure write to disk is happening given that koboldcpp is giving 900 hard faults/sec)
I should be done moving in a month or two, so even I decided on a build. It would be after I moved to my new home. I have seen a few p40 builds on this reddit. Am I right to think p40 has the best price-to-performance ratio for llms? (I find it kind of funny that the user who made this post switched their build out for titan RTX instead.)
[https://www.reddit.com/r/LocalLLaMA/comments/17zpr2o/nvidia\_tesla\_p40\_performs\_amazingly\_well\_for/](https://www.reddit.com/r/LocalLLaMA/comments/17zpr2o/nvidia_tesla_p40_performs_amazingly_well_for/)
I get that second-hand rtx 3090 is the standard de facto choice at the moment. (800 USD? What is the current second-hand price at?) but it seems like there are more price-to-performance choices out there. I am willing to put work in if it means getting more bang for my buck.
So my questions are:
1. Is getting 32 GB(\~70usd) for my build going to give a significant gain for my build? (or even 64 GB(\~120usd) in anticipation for a future build.) If yes, then which specific ram should I get? (I think read somewhere that ddr5 is much better but info from that comment is sparse)
2. Is linux headless server going to offer more tk/s (if it gives more than 1 tk/s it is worth the trouble to me. Probably need a new SSD(current SSD is 256 GB at 90%, venturing into new PC territory......))
3. What kind of build offers the best price-to-performance accounting for electricity consumption? (Cloud probably but pay per hour is not tickling my fancy. Also, the filth I have done to my llm ......)
4. Maybe I should clock up my ram?
5. Don't you love it when nvidia has a stranglehold on the market? | 2024-01-05T12:49:19 | https://www.reddit.com/r/LocalLLaMA/comments/18z64sk/gains_from_16gb_ram_to_32gb_ram_worth_it_even/ | Far-Gap-7977 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z64sk | false | null | t3_18z64sk | /r/LocalLLaMA/comments/18z64sk/gains_from_16gb_ram_to_32gb_ram_worth_it_even/ | false | false | self | 1 | null |
Experience with removing fans from RTX3090/4090? | 1 | [removed] | 2024-01-05T12:12:06 | https://www.reddit.com/r/LocalLLaMA/comments/18z5hb4/experience_with_removing_fans_from_rtx30904090/ | Freefallr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z5hb4 | false | null | t3_18z5hb4 | /r/LocalLLaMA/comments/18z5hb4/experience_with_removing_fans_from_rtx30904090/ | false | false | self | 1 | null |
Unable to run langchain or ollama via python. Can run ollama via terminal just fine, but unable to make it work in a script. Please help! | 1 | [removed] | 2024-01-05T12:06:03 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 18z5dor | false | null | t3_18z5dor | /r/LocalLLaMA/comments/18z5dor/unable_to_run_langchain_or_ollama_via_python_can/ | false | false | default | 1 | null | ||
Model recommendation for long answer evaluation | 1 | I'm building a system where I need an OSS model to evaluate upto 200-300 word subjective answers with the question given. I'll only needs it for such single turn tasks. No conversations or chat capabilities required , but I will need it to generate a model answer. Anywhere from 30b - 70b models would be fine.
Im considering SynthIA-70b or SOLAR-70b by upstage. Any other recommendations would be great, even for smaller models. | 2024-01-05T12:02:06 | https://www.reddit.com/r/LocalLLaMA/comments/18z5b95/model_recommendation_for_long_answer_evaluation/ | 1_archit_1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z5b95 | false | null | t3_18z5b95 | /r/LocalLLaMA/comments/18z5b95/model_recommendation_for_long_answer_evaluation/ | false | false | self | 1 | null |
Trying to run Ollama via Python and Ollama wouldn't work | 1 | I have it installed on my Mac and its working via the terminal. But now I want to run it from a python script. I have tried installing it so many times now and it says its intalled but when I run it it throws a no module found error and says no module names ollama.
Then tried downloading langchain to fetch ollama from there, same story.
​
What am I missing. Should tell you I use conda envs to work. | 2024-01-05T11:21:52 | https://www.reddit.com/r/LocalLLaMA/comments/18z4ncf/trying_to_run_ollama_via_python_and_ollama/ | xylont | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z4ncf | false | null | t3_18z4ncf | /r/LocalLLaMA/comments/18z4ncf/trying_to_run_ollama_via_python_and_ollama/ | false | false | self | 1 | null |
The Future of LLM Systems Evaluation | 1 | I wanted to spark a discussion around the future of LLM system evaluations (not LLM model evals).
Human evaluation still remains the most reliable method for evaluating outputs of LLM systems. However, it's extremely time-consuming and very expensive. Therefore, sometimes the only evaluation method possible, specially at the beginning of a project, is eyeballing the LLM system outputs which is not rigorous at all.
With the release of GPT-4 and other powerful LLMs, researchers got interested in the use of powerful LLMs to evaluate the outputs of system powered by other LLMs across different evaluation criteria. Some relevant papers below:
* [LLM-as-a-judge](https://arxiv.org/abs/2306.05685)
* [Generative judge for evaluating alignment](https://arxiv.org/pdf/2310.05470.pdf)
* [Prometheus](https://arxiv.org/abs/2310.08491)
As we continue to witness the rapid advancement in AI, particularly in the field of LLMs, do you think that LLM-based evaluations will become the defacto method for evaluating LLM systems? or is it possible that a completely different new evaluation method comes up?
If LLMs do become the standard evaluators for LLM System evals, do you think we'll see the rise of specialized, fine-tuned LLM evaluators for different domains? (e.g. prometheus) For instance, an LLM trained specifically to evaluate medical LLM systems, another for legal LLMs, and so on.
Are you actively using LLMs to do offline evaluation of your systems and improve them and / or to observe them in production and detect problems? | 2024-01-05T10:38:15 | https://www.reddit.com/r/LocalLLaMA/comments/18z3ygo/the_future_of_llm_systems_evaluation/ | bergr7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z3ygo | false | null | t3_18z3ygo | /r/LocalLLaMA/comments/18z3ygo/the_future_of_llm_systems_evaluation/ | false | false | self | 1 | null |
How has this not been mentioned here before? Layla | 1 | Okay, maybe I am crazy but I did a search and it had no hits for me on this subreddit...
How has nobody talked about Layla?
Phone app which runs a hardware-accelerated local AI model? Layla app by "Layla Network.AI" is on IOS and android.
Given Sherpa and MLCchat are kinda dead now, and Termux + Koboldcpp is slow and annoying to setup with the commandline, dependancies, compiling, etc. (for me, at least. plus you really need Sillytavern, too, which is additional setup)
I am shocked this isn't mentioned anywhere. I think it got buried under shovelware 'girlfriend' apps that use cloud-based AI services.
Granted, no, you can't load your own models BUT the fine-tuned model the app dev provides (based off Mistral) is pretty good.
Note: I am not affiliated with app, the dev, or anything else. I just lurk on the devs disc. and use the app (as in: I'm a user).
Some features I can think of off the top of my head:
It can import PNG characters, has a simplified character creation process, has some advanced settings (to choose model size 3B or 7B, Lora on/off, Temp, Top P, Context length, Batch, N Threads, and Microstat sampler.) You can set the app to proactively chat with you when tou havent used Layla in a while, which is cool it can initiate chats. It, of course, saves chat history if you want.
Like ChatGPT you can audibly converse with it. It can keep info about you to keep all characters aware of who you are and a bit about you.
There's a experimental assistant function that's actively being worked on, a character hub that was just added if you don't want to make your own character (or you can import a character card / Tavern PNG), and a 'scene creator' which let's you make simplistic RPG/role-play setups with multiple characters.
It IS a paid app, but no subscriptions or in-app purchases, just a one-and-done flat app price which I appreciate.
Thought I'd bring attention to this nifty passion project someone has going. =D
I have owned the app for a while and updates are consistent, dev is extremely responsive to bugs (and fixing them), and the roadmap on the Layla disc. has many ambitious features on the horizon I'm excited for if the dev can really pull them off. =3
Dev is responsive and even listened to my feedback/suggestion to add the Tavern PNG importing support awhile back! =D
Anyway, just my random find from awhile back I have been enjoying and hope someone else enjoys, too. =3 | 2024-01-05T10:06:33 | https://www.reddit.com/r/LocalLLaMA/comments/18z3hqa/how_has_this_not_been_mentioned_here_before_layla/ | Derpy_Ponie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z3hqa | false | null | t3_18z3hqa | /r/LocalLLaMA/comments/18z3hqa/how_has_this_not_been_mentioned_here_before_layla/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rfNB09Gf5zkQIjjLzajs3quYIjwIaJLfjuMjxUc9kqQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pStBkQSkha8RsQ0qD1AZeRelFGmNgRU7m_edM51Y-iA.jpg?width=108&crop=smart&auto=webp&s=919346f7ac2e7659560e5a0a86fcd3569fc802f6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pStBkQSkha8RsQ0qD1AZeRelFGmNgRU7m_edM51Y-iA.jpg?width=216&crop=smart&auto=webp&s=75cb9faffe4d99a3b263a8e2b3a1841dc8a9e931', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pStBkQSkha8RsQ0qD1AZeRelFGmNgRU7m_edM51Y-iA.jpg?width=320&crop=smart&auto=webp&s=801f0e15709a87a644abf93d81ef50a62fee9d5b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pStBkQSkha8RsQ0qD1AZeRelFGmNgRU7m_edM51Y-iA.jpg?width=640&crop=smart&auto=webp&s=41a7370540ce98ac4bceafae99140314d5978fe1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pStBkQSkha8RsQ0qD1AZeRelFGmNgRU7m_edM51Y-iA.jpg?width=960&crop=smart&auto=webp&s=74e79e7c4471a951a7d597ee2d37cf55235105bf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pStBkQSkha8RsQ0qD1AZeRelFGmNgRU7m_edM51Y-iA.jpg?width=1080&crop=smart&auto=webp&s=68a44496d04351e3d7e21d0f587b45f39e66cf63', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/pStBkQSkha8RsQ0qD1AZeRelFGmNgRU7m_edM51Y-iA.jpg?auto=webp&s=f3b7dca5357047a7e4cc596485fa8dd5e0892118', 'width': 1280}, 'variants': {}}]} |
Recommendations needed | 1 | [removed] | 2024-01-05T10:06:20 | https://www.reddit.com/r/LocalLLaMA/comments/18z3hlp/recommendations_needed/ | Free-Big9862 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z3hlp | false | null | t3_18z3hlp | /r/LocalLLaMA/comments/18z3hlp/recommendations_needed/ | false | false | self | 1 | null |
Does Vicuna includes a special token for aggregate sequence representation (similar as BERT) | 1 | As Vicuna is based on the transformer architecture, I am curious about the presence of special tokens, more precisely about the \[CLS\] classification token, that is an aggregate representation of the whole sequence and it is commonly used at the beginning of each input sequence. | 2024-01-05T09:58:05 | https://www.reddit.com/r/LocalLLaMA/comments/18z3cwa/does_vicuna_includes_a_special_token_for/ | Its_All_Chain_Rules | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z3cwa | false | null | t3_18z3cwa | /r/LocalLLaMA/comments/18z3cwa/does_vicuna_includes_a_special_token_for/ | false | false | self | 1 | null |
Why is there no copilot for messengers? | 1 | There are various proprietary and FOSS solutions for coding, but I haven't come across anything beyond some basic scripts. I get that it requires a lot more personalization than a coding solution and thus probably finetuned models.
However, shouldn't it be relatively doable by now with various tiny models that can run on mobile? The required context sizes for output would also be rather small. I haven't tuned a model myself yet, but wouldn't it theoretically be possible to use some sort of continous training with new messages? Maybe it's too ressource-intensive on a phone to do locally, but a server would solve that effortlessly.
Besides, the first contact for many people with language models has been predictive text from keyboards. They aren't transformer based LLMs, but the one word at a time approach and lack of tab completions seems rather dated by now. Has there been any progress with these apps? | 2024-01-05T09:57:13 | https://www.reddit.com/r/LocalLLaMA/comments/18z3ch4/why_is_there_no_copilot_for_messengers/ | AbstractContract | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z3ch4 | false | null | t3_18z3ch4 | /r/LocalLLaMA/comments/18z3ch4/why_is_there_no_copilot_for_messengers/ | false | false | self | 1 | null |
This is what we get for gaslighting LLMs with the kitten prompt | 1 | 2024-01-05T09:37:47 | https://www.reddit.com/gallery/18z32i3 | Rivridis | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 18z32i3 | false | null | t3_18z32i3 | /r/LocalLLaMA/comments/18z32i3/this_is_what_we_get_for_gaslighting_llms_with_the/ | false | false | 1 | null | ||
Named Entity Recognition + instance classification | 1 | [removed] | 2024-01-05T09:27:20 | https://www.reddit.com/r/LocalLLaMA/comments/18z2wx0/named_entity_recognition_instance_classification/ | EnnioEvo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z2wx0 | false | null | t3_18z2wx0 | /r/LocalLLaMA/comments/18z2wx0/named_entity_recognition_instance_classification/ | false | false | self | 1 | null |
Is this the best RAG pipeline upto now?, using Mistral LLM model 7B, BGE embedding model, Correct me if iam doing any wrong | 1 |
from flask import Flask, render\_template, redirect, url\_for, request, jsonify, session, send\_from\_directory
from flask\_socketio import SocketIO, emit
from flask\_caching import Cache
from concurrent.futures import ThreadPoolExecutor
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.llms import CTransformers
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceBgeEmbeddings
from langchain.document\_loaders import PyPDFLoader
from datetime import timedelta
from flask\_sqlalchemy import SQLAlchemy
from datetime import datetime
import os
app = Flask(\_\_name\_\_)
app.secret\_key = 'your\_random\_secret\_key'
app.permanent\_session\_lifetime = timedelta(hours=1)
socketio = SocketIO(app, cors\_allowed\_origins="\*")
app.config\['CACHE\_TYPE'\] = 'simple'
cache = Cache(app)
executor = ThreadPoolExecutor(max\_workers=os.cpu\_count())
\# Configure the Flask application to use SQLite database
app.config\['SQLALCHEMY\_DATABASE\_URI'\] = 'sqlite:///site.db'
db = SQLAlchemy(app)
\# Define a model for storing user queries and responses
class UserQuery(db.Model):
id = db.Column(db.Integer, primary\_key=True)
username = db.Column(db.String(20), nullable=False)
query = db.Column(db.String(500), nullable=False)
response = db.Column(db.String(500), nullable=True)
timestamp = db.Column(db.DateTime, nullable=False, default=datetime.utcnow)
\# Create the database tables before running the application
with app.app\_context():
db.create\_all()
\# Initialize LLM and other components as in the original code
local\_llm = "mistral-7b-instruct-v0.1.Q8\_0.gguf"
config = {
'config.config.max\_new\_tokens': 400, #400 to 200
'repetition\_penalty': 0.3, #0.3 to 0.1
'temperature': 0.1,
'top\_k': 50, #50 to 20
'top\_p': 0.9, #0.9 to 0.5
'stream': True,
'threads': 6,
'config.config.context\_length': 4096
}
llm = CTransformers(
model=local\_llm,
model\_type="mistral",
lib="avx2",
\*\*config
)
print("LLM Initialized....")
prompt\_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else.
Helpful answer:
"""
model\_name = "BAAI/bge-large-en"
model\_kwargs = {'device': 'cpu'}
encode\_kwargs = {'normalize\_embeddings': False}
embeddings = HuggingFaceBgeEmbeddings(
model\_name=model\_name,
model\_kwargs=model\_kwargs,
encode\_kwargs=encode\_kwargs
)
prompt = PromptTemplate(template=prompt\_template, input\_variables=\['context', 'question'\])
load\_vector\_store = Chroma(persist\_directory="stores/pet\_cosine", embedding\_function=embeddings)
retriever = load\_vector\_store.as\_retriever(search\_kwargs={"k":1})
u/app.route('/pdfs/<filename>')
def pdfs(filename):
return send\_from\_directory(os.path.join(os.getcwd(), 'pdfs'), filename)
u/app.route('/')
def index():
\# Check if the user is logged in
if 'username' in session:
\# Check if the session has expired
if session.permanent and session.modified:
return redirect(url\_for('login'))
return render\_template('index.html', username=session\['username'\])
else:
\# Redirect to the login page if not logged in
return redirect(url\_for('login'))
u/app.route('/logout')
def user\_logout():
\# Remove the username from the session if it exists
session.pop('username', None)
\# Redirect to the login page after logout
return redirect(url\_for('login'))
\# Clear Cache route
u/app.route('/clear\_cache')
def clear\_cache():
cache.clear()
return "Cache cleared successfully! You can confirm that the cache is cleared by checking your application behavior or logs."
\# Asynchronous processing for model inference
def async\_inference(query):
chain\_type\_kwargs = {"prompt": prompt}
qa = RetrievalQA.from\_chain\_type(
llm=llm,
chain\_type="stuff",
retriever=retriever,
return\_source\_documents=True,
chain\_type\_kwargs=chain\_type\_kwargs,
verbose=True
)
response = qa(query)
answer = response\['result'\]
source\_document = response\['source\_documents'\]\[0\].page\_content
doc = response\['source\_documents'\]\[0\].metadata\['source'\]
return {"query": query, "answer": answer, "source\_document": source\_document, "doc": doc}
\# Route for handling user queries
u/app.route('/get\_response', methods=\['POST'\])
def get\_response():
query = request.form.get('query')
username = session.get('username', 'Anonymous')
\# Log the user query
user\_query = UserQuery(username=username, query=query)
db.session.add(user\_query)
db.session.commit()
\# Asynchronously process the query
future = executor.submit(async\_inference, query)
\# Wait for the result
result = future.result()
\# Log the user response
user\_query.response = result\["answer"\]
db.session.commit()
response\_data = {
"answer": result\["answer"\],
"source\_document": result\["source\_document"\],
"doc": result\["doc"\]
}
return jsonify(response\_data)
\# Batch processing route
u/app.route('/get\_responses', methods=\['POST'\])
def get\_responses():
queries = request.form.getlist('queries')
\# Asynchronously process queries
futures = \[executor.submit(async\_inference, query) for query in queries\]
\# Collect results
results = \[future.result() for future in futures\]
return jsonify(results)
\# Define a dictionary of valid usernames and corresponding passwords
valid\_credentials = {
'user1': 'password1',
'user2': 'password2',
'user3': 'password3',
\# Add more usernames and passwords as needed
}
u/app.route('/login', methods=\['GET', 'POST'\])
def login():
if request.method == 'POST':
username = request.form.get('username')
password = request.form.get('password')
\# Check if the username and password are correct
if username in valid\_credentials and password == valid\_credentials\[username\]:
\# Store the username in the session to track if the user is logged in
session\['username'\] = username
\# Redirect to the main page after successful login
return redirect(url\_for('index'))
else:
\# Redirect back to the login page with an error message
return render\_template('login.html', error='Invalid username or password')
\# If it's a GET request, render the login page
return render\_template('login.html')
u/socketio.on('submit\_query')
def handle\_query(data):
try:
queries = data\['data'\]\['queries'\]
\# Your logic to handle multiple queries
results = \[\]
for query in queries:
chain\_type\_kwargs = {"prompt": prompt}
qa = RetrievalQA.from\_chain\_type(
llm=llm,
chain\_type="stuff",
retriever=retriever,
return\_source\_documents=True,
chain\_type\_kwargs=chain\_type\_kwargs,
verbose=True
)
response = qa(query)
answer = response\['result'\]
source\_document = response\['source\_documents'\]\[0\].page\_content
doc = response\['source\_documents'\]\[0\].metadata\['source'\]
results.append({"query": query, "answer": answer, "source\_document": source\_document, "doc": doc})
\# Emit the results back to the client using WebSocket
emit('response', {'data': results})
except Exception as e:
\# Emit an error back to the client
emit('response', {'error': str(e)})
u/app.route('/logout')
def logout():
\# Remove the username from the session if it exists
session.pop('username', None)
\# Redirect to the login page after logout
return redirect(url\_for('login'))
if \_\_name\_\_ == '\_\_main\_\_':
socketio.run(app, debug=True, use\_reloader=False, host='0.0.0.0', port=6500)
| 2024-01-05T09:17:20 | https://www.reddit.com/r/LocalLLaMA/comments/18z2rhi/is_this_the_best_rag_pipeline_upto_now_using/ | akhilpanja | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z2rhi | false | null | t3_18z2rhi | /r/LocalLLaMA/comments/18z2rhi/is_this_the_best_rag_pipeline_upto_now_using/ | false | false | self | 1 | null |
How do you guys cope with ai addiction? Hardware advice for someone who realizes is too obsessed. | 1 | *bit of a rant* I am sat here surrounded by hardware and realize that I have gotten over my head with this hobby lol. Currently have an amd 7950x3d system that I built just for ai (i was a cypto mining obsessed a few years ago dont even want to think how much money i wasted doing that) and have suspected autism so I think I know why I might get so obessed with computers.
Anyways my question point is i had a 4090/3090 combo that was supposed to be my pc settled for next few years and would allow me to both game or run 70b like (or most stuff) when I want to. Anyways a few days ago i got abit silly and ordered both an rtx 4000 and a 4070.
Using a riser cable ive managed to fit 4090/3090+ 4070(or a4000) for gaming so i can leave a 70b model or like running 24/7 but since im only really a dabbler, done stuff like make chat bots to roleplay with but since my autism, I dont exactly have many uses for this outside the tinkering and am maybe realizing I have an issue.
I suppose my question is should I return the a4000 and or the 4070 since I'm just a dabbler, i made mistake buying this workstation instead of building a second pc for ai lol, do you think as someone who only dabbles in ai could this hardware possibly be worth it in future or other hobbies i could look into that make the gpus actually worth running (cant mine since uk electric costs for example. :() How do you guys realize or deal with when notice you have a "problem" lol. | 2024-01-05T09:15:45 | https://www.reddit.com/r/LocalLLaMA/comments/18z2qlx/how_do_you_guys_cope_with_ai_addiction_hardware/ | fluffywuffie90210 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z2qlx | false | null | t3_18z2qlx | /r/LocalLLaMA/comments/18z2qlx/how_do_you_guys_cope_with_ai_addiction_hardware/ | false | false | self | 1 | null |
Do you think training will be possible in the near future on Apple silicon M3 Max? | 1 | I'm curious if MLX framework matures, training would be much faster on the M3 Max.. or is simply the fact that FLOPS are low it is impossible. I'm debating if I should keep my specced out Macbook as besides development I bought it for ML too. | 2024-01-05T08:53:43 | https://www.reddit.com/r/LocalLLaMA/comments/18z2en8/do_you_think_training_will_be_possible_in_the/ | BukHunt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z2en8 | false | null | t3_18z2en8 | /r/LocalLLaMA/comments/18z2en8/do_you_think_training_will_be_possible_in_the/ | false | false | self | 1 | null |
Production - LLM horizontal scaling | 1 | Is anyone here running horizontaly scalable LLMs in production?
My company has a use case where we need to offer private hosting of LLMs to our customers on their cloud providers (letโs say, aws). We should allow serving something like Mixtral to support the order of magnitude of 100k of users, with autoscaling. Vertical scaling would work until a point. The goal is to offer customers the ability to easily deploy, monitor, and ideally tune their model.
I was exploring the EKS approach with ec2 instances. SageMaker can also be a good idea. However, orchestrating this seems to be much more difficult than it looks from the outside. I was also looking into Ray and OpenLLM and they seem like okay solutions, but they do seem a bit untested and lack certain features.
What are some considerations that should be taken into account? Are there any battle tested best practices? Any input would be more than welcome. | 2024-01-05T08:51:40 | https://www.reddit.com/r/LocalLLaMA/comments/18z2dja/production_llm_horizontal_scaling/ | micamecava | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z2dja | false | null | t3_18z2dja | /r/LocalLLaMA/comments/18z2dja/production_llm_horizontal_scaling/ | false | false | self | 1 | null |
Train LLAMA 2 with a chat dataset | 1 | I'm new to the realm of LLMS and have been attempting to build a chatbot using LLAMA2 with a chat dataset. The format of the chat dataset looks like this:
###Human: How to use the developed view for the sheet metal tool?
###Assistant: Oh! I understand. This is specific to the Generate developed view for sheet metal parts. I have found the Generate developed view for sheet metal parts Tool use manual in DV. Forwarding a link to you for reference. Please go through the link for help documents. <inserts link>
I'm not sure how to start. Should I go with RAG or Fine-tuning? While searching online, I noticed that most projects are done using Colaboratory, but I want to keep everything local. Any guidance on resources or tutorials for this would be greatly appreciated. | 2024-01-05T08:49:05 | https://www.reddit.com/r/LocalLLaMA/comments/18z2c3d/train_llama_2_with_a_chat_dataset/ | _the_bb_man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z2c3d | false | null | t3_18z2c3d | /r/LocalLLaMA/comments/18z2c3d/train_llama_2_with_a_chat_dataset/ | false | false | self | 1 | null |
vLLM on Windows PC | 6 |
Docker compose to run vLLM on windows. Significantly speedsup local Ilm app development.
Once setup, its "one click" to start or can be configured to start on startup. | 2024-01-05T08:41:57 | https://github.com/aneeshjoy/vllm-windows | a4ai | github.com | 1970-01-01T00:00:00 | 0 | {} | 18z28i3 | false | null | t3_18z28i3 | /r/LocalLLaMA/comments/18z28i3/vllm_on_windows_pc/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'FsJYyfl4eD44aVKUW5di9PuVFcQCMcMe_XoXVmXhPNo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=108&crop=smart&auto=webp&s=794bbcca4f83011545bd89fa399f9a10be38463a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=216&crop=smart&auto=webp&s=6bc96177fabd4b1969689b9de3cf34bffbbaaec2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=320&crop=smart&auto=webp&s=bf8d72182157ad6d6071c5861dd08fca4532867c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=640&crop=smart&auto=webp&s=074d66f0c4beac28de49e61141e06297a8ea6be6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=960&crop=smart&auto=webp&s=b6f526e5236655e22d072b94d48827a25045b8ca', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=1080&crop=smart&auto=webp&s=2dc26a1f446a43ff193a9eaf277f1958c01904d5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?auto=webp&s=f98999ac99eea3fcb5bdee61a9360af44c9baba2', 'width': 1200}, 'variants': {}}]} | |
Best way to train quantised mixtral 8x7b on an m1 Mac? | 1 | I've got mixtral-8x7b-instruct_q4_0k_m running my 32gb m1 Mac with llama.cpp and would now like to fine tune it on my own data.
Is there a way currently to get this working preferably running on metal with my current setup or would I have to use an aws server or something?
Also is training 4 bit quantised models effective or would I be better off using a server online to do the training of the non-quantised model? | 2024-01-05T08:09:08 | https://www.reddit.com/r/LocalLLaMA/comments/18z1r1f/best_way_to_train_quantised_mixtral_8x7b_on_an_m1/ | CloudsOfMagellan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z1r1f | false | null | t3_18z1r1f | /r/LocalLLaMA/comments/18z1r1f/best_way_to_train_quantised_mixtral_8x7b_on_an_m1/ | false | false | self | 1 | null |
Serve Mixtral-8x7B-Instruct-v0.1 at scale via 8xV100s - what to do? | 1 | I have access to a rig with 8 V100s (**16GB**) at work and am trying to serve a GPT-3.5-level model (chatting, coding, maybe RAG one day when we get more GPUs) to maybe 20-50 concurrent users at a "decent" speed; would appreciate guidance on best ways to achieve this.
I'd previously used `vllm` with great success, so I figured I'd throw that together with Mixtral-8x7B-Instruct-v0.1. But I was quite disappointed to find out that optimizations like AWQ quantization and flash attention do not work ([AWQ](https://github.com/vllm-project/vllm/issues/1345#issuecomment-1788815262), [FA](https://github.com/Dao-AILab/flash-attention/issues/148#issuecomment-1574090582)) on V100. V100 has compute capability of just 7.0. :( And this [Github issue/comments](https://github.com/vllm-project/vllm/issues/2076) does not inspire confidence that `vllm` can actually run Mixtral on V100s at all.
But then I came across [Optimize Mistral Inference Speed](https://www.reddit.com/r/LocalLLaMA/comments/18xt970/optimize_mistral_inference_speed/) by /u/kekkimo, which suggests that `vllm` with Mixtral can indeed be run on a V100! And moreover, some [interesting discussion](https://www.reddit.com/r/LocalLLaMA/comments/18xt970/comment/kg6i86b/?utm_source=share&utm_medium=web2x&context=3) with /u/kryptkpr there implies that maybe I shouldn't care about quantization as there is a dequantization step for inference (learn something new every day -- I thought just basically AWQ lowered precision on weights, what exactly is being dequantized?), which can actually slow things down when batching requests. So as long as my 8x16GB VRAM can handle the volume of concurrent requests, it may be as-good-or-better for throughput to not quantize.
I also looked into Exl2, which seems great and underrated, and *maybe* works on V100 (unclear) but I can't tell if there's a backend that supports Exl2 while also supporting parallel decoding, queuing, and all the various fancy optimizations (to the extent they even apply on a V100)? I haven't had great luck finding such a backend, [Ray](https://github.com/ray-project/ray-llm) and [Aphrodite](https://github.com/PygmalionAI/aphrodite-engine) seem to both rely on `vllm` themselves. But then again, given the previous paragraph, maybe I should care about quantization that much.
Sorry for the semi-question-semi-stream-of-consciousness narration of my digging, but TLDR:
​
* Any other ideas for running a *server* with a ChatGPT-3.5-level model on 8x16GB V100s?
* Now that we're here, could someone please explain what and where is being dequantized in AWQ (and various quantization algorithms broadly)? Surprisingly, I can't find anything accessible on this. I kinda always stupidly thought "smaller model = faster" but now I realize this may not quite work like that.
If I'm wildly off on any of this, please course-correct; I want to make sure I get the most out of this hardware and am not wedded to using a particular framework (just gotta make sure parallel decoding and queueing at least work properly). Thank you!! | 2024-01-05T07:05:00 | https://www.reddit.com/r/LocalLLaMA/comments/18z0rfk/serve_mixtral8x7binstructv01_at_scale_via_8xv100s/ | ablasionet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z0rfk | false | null | t3_18z0rfk | /r/LocalLLaMA/comments/18z0rfk/serve_mixtral8x7binstructv01_at_scale_via_8xv100s/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'o0Pvqc9HgbKKK4j-P_VfiebwEw60_6eZvMAq9CKtn-U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Npt-NzkdnPQm4CMd_5o7KWtqIfhSEOWwnVOTdTskrig.jpg?width=108&crop=smart&auto=webp&s=200345b0df7122d549939c8ff9b7113f64d03587', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Npt-NzkdnPQm4CMd_5o7KWtqIfhSEOWwnVOTdTskrig.jpg?width=216&crop=smart&auto=webp&s=30e93dbb4df86961ee42a5f44ba6cc7f7a7dd826', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Npt-NzkdnPQm4CMd_5o7KWtqIfhSEOWwnVOTdTskrig.jpg?width=320&crop=smart&auto=webp&s=023f36be02ab7389686b468be986dd4a163a5025', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Npt-NzkdnPQm4CMd_5o7KWtqIfhSEOWwnVOTdTskrig.jpg?width=640&crop=smart&auto=webp&s=a8979be42faf0c63847770a02428e43f4153e47e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Npt-NzkdnPQm4CMd_5o7KWtqIfhSEOWwnVOTdTskrig.jpg?width=960&crop=smart&auto=webp&s=c4f41295973709cd8c101aefa0a562aa4f1e86e1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Npt-NzkdnPQm4CMd_5o7KWtqIfhSEOWwnVOTdTskrig.jpg?width=1080&crop=smart&auto=webp&s=41c523cd11d2fd8b33436b6fc8cd6dd3f17c30f4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Npt-NzkdnPQm4CMd_5o7KWtqIfhSEOWwnVOTdTskrig.jpg?auto=webp&s=0c943b0505ff075734a93edc1d31f781c2c78542', 'width': 1200}, 'variants': {}}]} |
Best open model for controlling browser/computer via vision? (alt to gpt-4-vision) | 1 | [removed] | 2024-01-05T06:49:40 | https://www.reddit.com/r/LocalLLaMA/comments/18z0ifc/best_open_model_for_controlling_browsercomputer/ | Away-Bird-6339 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z0ifc | false | null | t3_18z0ifc | /r/LocalLLaMA/comments/18z0ifc/best_open_model_for_controlling_browsercomputer/ | false | false | self | 1 | null |
Training LLM : A100 vs 4x4096 | 1 | [removed] | 2024-01-05T06:39:23 | https://www.reddit.com/r/LocalLLaMA/comments/18z0cgu/training_llm_a100_vs_4x4096/ | Electronic_Hawk524 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18z0cgu | false | null | t3_18z0cgu | /r/LocalLLaMA/comments/18z0cgu/training_llm_a100_vs_4x4096/ | false | false | self | 1 | null |
LLaMA Pro: Progressive LLaMA with Block Expansion (Unreleased) | 1 | 2024-01-05T06:27:08 | https://arxiv.org/abs/2401.02415 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 18z04x5 | false | null | t3_18z04x5 | /r/LocalLLaMA/comments/18z04x5/llama_pro_progressive_llama_with_block_expansion/ | false | false | default | 1 | null | |
Best 7/13B model (keras/tensorflow only) for Summarization and Q&A | 1 | Due to constraints, I am unable to download or use any of the pytorch models from hugging face. I have a use case where I want to retrieve the best domain specific summarization and Q&A and would like suggestions/opinions on which is the best 7b or 13b keras/tensorflow model. I do intend to implement RAG or finetune my model. | 2024-01-05T06:09:21 | https://www.reddit.com/r/LocalLLaMA/comments/18yztur/best_713b_model_kerastensorflow_only_for/ | xxxysss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yztur | false | null | t3_18yztur | /r/LocalLLaMA/comments/18yztur/best_713b_model_kerastensorflow_only_for/ | false | false | self | 1 | null |
Train NMT Model on dictionary | 1 | [removed] | 2024-01-05T06:01:50 | https://www.reddit.com/r/LocalLLaMA/comments/18yzoxd/train_nmt_model_on_dictionary/ | Tejasw__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yzoxd | false | null | t3_18yzoxd | /r/LocalLLaMA/comments/18yzoxd/train_nmt_model_on_dictionary/ | false | false | self | 1 | null |
MLX supports Qlora now | 1 | It looks like we are going to have a fun weekend.
[https://github.com/ml-explore/mlx-examples/blob/main/lora/README.md](https://github.com/ml-explore/mlx-examples/blob/main/lora/README.md) | 2024-01-05T05:37:25 | https://www.reddit.com/r/LocalLLaMA/comments/18yz8kc/mlx_supports_qlora_now/ | mzbacd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yz8kc | false | null | t3_18yz8kc | /r/LocalLLaMA/comments/18yz8kc/mlx_supports_qlora_now/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'atNhb1h67EWqlmkGT-I0K6vd1XYc0t1pJKThB2o_oGk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AeFC9U1eAkLewYejToCusQDhQdVV8zlXJcudS06Q1QM.jpg?width=108&crop=smart&auto=webp&s=a9427a08f98042fa8a0caaa8dff84db7ea4f0450', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AeFC9U1eAkLewYejToCusQDhQdVV8zlXJcudS06Q1QM.jpg?width=216&crop=smart&auto=webp&s=f2bcb37ad16a829ad7e7b37b1a99bfdd09c7c33a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AeFC9U1eAkLewYejToCusQDhQdVV8zlXJcudS06Q1QM.jpg?width=320&crop=smart&auto=webp&s=764a247d0f868ed9ea11af0fc3ed31aa49aaa78b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AeFC9U1eAkLewYejToCusQDhQdVV8zlXJcudS06Q1QM.jpg?width=640&crop=smart&auto=webp&s=52d652a8f38868242f4f4dde812af2235860821c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AeFC9U1eAkLewYejToCusQDhQdVV8zlXJcudS06Q1QM.jpg?width=960&crop=smart&auto=webp&s=78c324a92c01b2a0ef5c1861d96fcf24f0637f24', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AeFC9U1eAkLewYejToCusQDhQdVV8zlXJcudS06Q1QM.jpg?width=1080&crop=smart&auto=webp&s=38dee376fac2086deed00f6ef176450ebe4599d1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AeFC9U1eAkLewYejToCusQDhQdVV8zlXJcudS06Q1QM.jpg?auto=webp&s=97cc1da4f6e6b30259296e4ef42a583a5976dfba', 'width': 1200}, 'variants': {}}]} |
Looking for a Handy LLM for Survival & DIY โ Any Suggestions? | 1 | Question for yโall โ does anyone know of a small model specialized in survival, first aid, and other DIY stuff? Something between 3b-7b so it can run on a mid-spec phone.
Iโm stoned, watching The Last of Us, and I can see this being really useful in many different scenarios, ranging from getting lost in the wilderness to a cordyceps zombie apocalypse, or just plain boredom | 2024-01-05T05:37:00 | https://www.reddit.com/r/LocalLLaMA/comments/18yz8at/looking_for_a_handy_llm_for_survival_diy_any/ | cajun_spice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yz8at | false | null | t3_18yz8at | /r/LocalLLaMA/comments/18yz8at/looking_for_a_handy_llm_for_survival_diy_any/ | false | false | self | 1 | null |
Lamar2 Text Classification and Conditions | 1 | [removed] | 2024-01-05T05:36:12 | https://www.reddit.com/r/LocalLLaMA/comments/18yz7si/lamar2_text_classification_and_conditions/ | No_Concentrate_267 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yz7si | false | null | t3_18yz7si | /r/LocalLLaMA/comments/18yz7si/lamar2_text_classification_and_conditions/ | false | false | self | 1 | null |
Best models for CPU without GPU? | 1 | My specs:
Ryzen 5 7535U, 6 cores 12 threads, Zen 3 (equal to Ryzen 5 6600u)
RAM: 16 GB LPDDR5-6400MT/s,
GPU: integrated Radeon 660M based on the RDNA2
\---
Which models should I use: 13b, 7b, q3-q5, gguf ?
How many threads in Koboldcpp settings 6 or 12 ? By default it sets to OpenBLAS, 5 threads, context: 2048
\---
I tried few models :
7b q5\_k\_m gguf - acceptable speed
13b q3,a4 - seems twice slower than 7b, but I'm not sure if my settings are correct
I want model for best logic, understanding and learning, for RP in a game, without chit-chatting or writing poems. Even if character will write text like a droid it will be acceptable.
Do I need 13b for that? I tried a bit 7b-Synthia and 13b-Tiefighter. 7b gives very long responses, 13b gives 3 times shorter responses but it perfectly follows description. It depends on model name or B number? | 2024-01-05T05:29:39 | https://www.reddit.com/r/LocalLLaMA/comments/18yz3ba/best_models_for_cpu_without_gpu/ | medgel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yz3ba | false | null | t3_18yz3ba | /r/LocalLLaMA/comments/18yz3ba/best_models_for_cpu_without_gpu/ | false | false | self | 1 | null |
which checkpoint to use from seeing the loss curve | 1 | Hi, I have finetuned a model on QLoRA setup, the below is the training loss curve
​
I have found while using the checkpoint of 2500(where the loss starting to get saturated), I am getting very good text generation capabilities, but while if I use the last checkpoint, the text generation is very poor and not coherent at all
​
Is it normal for LLMs/any DL models?, this is how should we select the checkpoint, where the loss starting to get saturated?
​ | 2024-01-05T05:10:13 | https://www.reddit.com/r/LocalLLaMA/comments/18yyqmq/which_checkpoint_to_use_from_seeing_the_loss_curve/ | Medium-Quantity1514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yyqmq | false | null | t3_18yyqmq | /r/LocalLLaMA/comments/18yyqmq/which_checkpoint_to_use_from_seeing_the_loss_curve/ | false | false | self | 1 | null |
How would a crypto GPU riser impact LLM inference / training? | 1 | [removed] | 2024-01-05T04:24:38 | https://www.reddit.com/r/LocalLLaMA/comments/18yxvhq/how_would_a_crypto_gpu_riser_impact_llm_inference/ | aidantheman18 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yxvhq | false | null | t3_18yxvhq | /r/LocalLLaMA/comments/18yxvhq/how_would_a_crypto_gpu_riser_impact_llm_inference/ | false | false | self | 1 | null |
RAG vs. Context-Window in GPT-4: accuracy, cost, & latency | 1 | 2024-01-05T04:14:49 | https://www.copilotkit.ai/blog/posts/rag-vs-context-window-in-gpt4-accuracy-cost | sleepysiding22 | copilotkit.ai | 1970-01-01T00:00:00 | 0 | {} | 18yxomp | false | null | t3_18yxomp | /r/LocalLLaMA/comments/18yxomp/rag_vs_contextwindow_in_gpt4_accuracy_cost_latency/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'rLxKs70_iqiRgzl_MNK8AUGhqA_hk6Ea2beBuhMrYEg', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/6sCICLLWQN1tR2mWC2yva9nPn44o7xMpF47N0vvlUD8.jpg?width=108&crop=smart&auto=webp&s=02e0c80d306f5f7e2657314cec3a017583b7ca1b', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/6sCICLLWQN1tR2mWC2yva9nPn44o7xMpF47N0vvlUD8.jpg?width=216&crop=smart&auto=webp&s=29d084f7fed26c8f79dc91836b84b5d3a7945148', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/6sCICLLWQN1tR2mWC2yva9nPn44o7xMpF47N0vvlUD8.jpg?width=320&crop=smart&auto=webp&s=322b00cd52ae2de918a898c4f487e02ba759b3a5', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/6sCICLLWQN1tR2mWC2yva9nPn44o7xMpF47N0vvlUD8.jpg?width=640&crop=smart&auto=webp&s=96438f66c446958a64035dabbcd65399090de916', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/6sCICLLWQN1tR2mWC2yva9nPn44o7xMpF47N0vvlUD8.jpg?width=960&crop=smart&auto=webp&s=e0f98403bc4eff9bc2c87f97a87957d126e55f1f', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/6sCICLLWQN1tR2mWC2yva9nPn44o7xMpF47N0vvlUD8.jpg?width=1080&crop=smart&auto=webp&s=cb7cc507af0f421e562202354a0781c516abfeae', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/6sCICLLWQN1tR2mWC2yva9nPn44o7xMpF47N0vvlUD8.jpg?auto=webp&s=a1a3ccfce10b616db22ed77000e15e3d11e07700', 'width': 1792}, 'variants': {}}]} | ||
SMoE Architectures? | 1 | I've been researching and reading old papers on SMoE architectures lately. I'd been toying with the idea for a few months before Mistral released Mixtral, but there success with Mixtral really kicked my research into high gear.
I was wondering if anyone here has experimented with their own sparsely activated MoE? What's the latest information concerning training topic-specific experts? A super basic example would be something like training a mathematics expert, an astronomy expert, etc., and then making sure the gating network classified input tokens properly to pass them to the correct expert.
I guess I'm just curious where the community stands with MoE/SMoE. I'd love links to papers that were helpful or informative. | 2024-01-05T03:58:17 | https://www.reddit.com/r/LocalLLaMA/comments/18yxcre/smoe_architectures/ | LoadingALIAS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yxcre | false | null | t3_18yxcre | /r/LocalLLaMA/comments/18yxcre/smoe_architectures/ | false | false | self | 1 | null |
5.0bpw and 8.0bpw exl2 AIRIC-The-Mistral quants. The most human bot: because it's part me | 1 | 2024-01-05T03:53:43 | https://huggingface.co/ericpolewski/AIRIC-The-Mistral-5.0bpw-exl2 | LetMeGuessYourAlts | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 18yx9nl | false | null | t3_18yx9nl | /r/LocalLLaMA/comments/18yx9nl/50bpw_and_80bpw_exl2_airicthemistral_quants_the/ | false | false | 1 | {'enabled': False, 'images': [{'id': '_EQZb5k-0umStsdqikP2jyrMNQ3cD4romsmWhi_Kgn8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qsZAALiPuOfXkrcM9eUo2L5ti5DRtP86ADIxC0cxrL0.jpg?width=108&crop=smart&auto=webp&s=8c9776fcad043e87c468ce3ac1a3fd5050a6dd28', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qsZAALiPuOfXkrcM9eUo2L5ti5DRtP86ADIxC0cxrL0.jpg?width=216&crop=smart&auto=webp&s=1934c6e85563814a510d9e19f06ecc2a73043743', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qsZAALiPuOfXkrcM9eUo2L5ti5DRtP86ADIxC0cxrL0.jpg?width=320&crop=smart&auto=webp&s=487934b8fc3d6dc36bb22d21929bb444843984f9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qsZAALiPuOfXkrcM9eUo2L5ti5DRtP86ADIxC0cxrL0.jpg?width=640&crop=smart&auto=webp&s=61f46c2fac9582df614736f31ead9f93c3b6b7dd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qsZAALiPuOfXkrcM9eUo2L5ti5DRtP86ADIxC0cxrL0.jpg?width=960&crop=smart&auto=webp&s=55f48bea91cff8998ff21cd07157230e5fbf3882', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qsZAALiPuOfXkrcM9eUo2L5ti5DRtP86ADIxC0cxrL0.jpg?width=1080&crop=smart&auto=webp&s=b0c53d5a0b80ae955bba031f2071746f41a84241', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qsZAALiPuOfXkrcM9eUo2L5ti5DRtP86ADIxC0cxrL0.jpg?auto=webp&s=1d00868af5d590b6f6bc8deacdf0ab3018c26a74', 'width': 1200}, 'variants': {}}]} | ||
What's the best free Voice Cloning / TTS tool for preserving accents? | 1 | Hi everyone!
I'm thinking about setting up a system, either local or online to have a cloned voice read me long articles that I'm too lazy to read with my eyes.
I'm looking for an option with no limits (so probably local would be the only choice) and it's REALLY important to me that the cloned voice would retain the speaker's unique foreign accent in English, as well as the intonation of their speech.
Do you have any suggestions, recommendations? | 2024-01-05T03:43:02 | https://www.reddit.com/r/LocalLLaMA/comments/18yx2dj/whats_the_best_free_voice_cloning_tts_tool_for/ | reza2kn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yx2dj | false | null | t3_18yx2dj | /r/LocalLLaMA/comments/18yx2dj/whats_the_best_free_voice_cloning_tts_tool_for/ | false | false | self | 1 | null |
Finding the best 13b model for roleplay | 3 | I have been using mistral-7b-openorca with basic jailbreaking and characters for a month now for rp and my knees are quite weak now and have a bicep. I tried some 13b models that are supposed to be uncensored but are pretty bad at rp and general stuff some refer stuff as "And then his unmentionable parts became aroused" ๐
. Just finding recommendations it needs to be as cursive and dirty as openorca :D. | 2024-01-05T03:08:56 | https://www.reddit.com/r/LocalLLaMA/comments/18ywdir/finding_the_best_13b_model_for_roleplay/ | Patient_Ad_6701 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ywdir | false | null | t3_18ywdir | /r/LocalLLaMA/comments/18ywdir/finding_the_best_13b_model_for_roleplay/ | false | false | default | 3 | null |
7B-13B Uncensores LLMs with larger context window? | 1 | I canโt run Mixtralโs 8x7b with its 32K context window but Iโm wondering what may be smaller and have the largest window?
How can I sort by this on Hugginfface?
Is there a smaller quantized 8x7b that I could run and would it keep its context window?
Thanks! | 2024-01-05T03:03:56 | https://www.reddit.com/r/LocalLLaMA/comments/18yw9rl/7b13b_uncensores_llms_with_larger_context_window/ | AmericanKamikaze | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yw9rl | false | null | t3_18yw9rl | /r/LocalLLaMA/comments/18yw9rl/7b13b_uncensores_llms_with_larger_context_window/ | false | false | self | 1 | null |
Putting a stop to a particular argument that I see here constantly....... | 1 | โLLM cannot do reasoning or mathematicsโ
GPT 4 is better at GPT 3.5 on reasoning and maths questions that are NOT in its training data. And 3.5 is better than many open source models that we use.
Any one that uses a 7B or even a 13B open source model who have also used 3.5, Gemini pro and GPT 4 knows what I am talking about.
Also, please don't what hide behind semantics like โthey are just mimickingโ or โwe don't have a definition of reasoningโ or โLLM /Transformers / next token prediction is not compatible with reasoningโ.
You are just wrong except on definition of reasoning (then what are you even talking about?).
Somehow LLMs being poor are reasoning and maths is being coflated with LLMs cannot do reasoning or maths.
This is being propogated here like gospel by every self proclaimed LLM expert who think they know more than AI researchers just because they finetuned a 7B Hentailiteroticasussybaka model.
Please stop and get some help. | 2024-01-05T02:19:43 | https://www.reddit.com/r/LocalLLaMA/comments/18yvbvd/putting_a_stop_to_a_particular_argument_that_i/ | TysonUsykFury | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yvbvd | false | null | t3_18yvbvd | /r/LocalLLaMA/comments/18yvbvd/putting_a_stop_to_a_particular_argument_that_i/ | false | false | self | 1 | null |
Dumb question, perhaps. How do I enable internet access for a locally run AI? | 1 | I would like it if the model I'm running were able to answer questions based on current information, available on the current internet. For example, I'm running a model called mythomax, a great conversational AI, but it can only give answers based on a static dataset, not answer a question about something currently online.
Is it impossible to change that? Before anyone feels inclined to warn me of the risks of privacy concerns or whatever, my AI has warned me numerous times, each time I ask anything related to the subject.
I do appreciate the concern, however. | 2024-01-05T02:06:51 | https://www.reddit.com/r/LocalLLaMA/comments/18yv28m/dumb_question_perhaps_how_do_i_enable_internet/ | caidicus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yv28m | false | null | t3_18yv28m | /r/LocalLLaMA/comments/18yv28m/dumb_question_perhaps_how_do_i_enable_internet/ | false | false | self | 1 | null |
Fine tuning for coding | 1 | I'm looking for a nudge in the right direction. I've been watching videos where people upload a csv of 100 questions/responses and call that "fine tuning" their LLM. I've been loving the offline modals, especially when it comes to dealing with my client data.
I'm a software developer and I'm looking to fine tune the LLM for my own projects. It already knows the basics of PHP and Javascript, but I'm looking to train it better on my own functions and classes. I can start really small with even a single function or something small-ish like my database class, but I'm striking out on getting a good sense for how I can help the model to understand my code so it can produce more code based on mine.
I have decent hardware (7950x/4090), and I want to start playing with this. Can anyone point me in the right direction? I understand the concept of giving it good questions and good answers, the disconnect is in giving it functions for a language it already "knows" and help those become tools in its toolbelt.
Thanks for an awesome community. | 2024-01-05T00:18:56 | https://www.reddit.com/r/LocalLLaMA/comments/18ysntg/fine_tuning_for_coding/ | mudmin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ysntg | false | null | t3_18ysntg | /r/LocalLLaMA/comments/18ysntg/fine_tuning_for_coding/ | false | false | self | 1 | null |
Mixtral gguf giving gibberish responses | 1 | Hi everybody! As per title I've downloaded the [mixtral-8x7b-instruct-v0.1.Q5\_K\_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf) from TheBloke page. I tried setting it up in TextGeneration WebUi with no success at all, each response it gives is just rubbish, like symbols and unreadable characters...
I've changed the Instruction Template, but nothing changes. Any help? Thank you all in advance!
| 2024-01-05T00:06:25 | https://www.reddit.com/r/LocalLLaMA/comments/18ysdcc/mixtral_gguf_giving_gibberish_responses/ | Relative_Bit_7250 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ysdcc | false | null | t3_18ysdcc | /r/LocalLLaMA/comments/18ysdcc/mixtral_gguf_giving_gibberish_responses/ | false | false | self | 1 | null |
What are the best medical diagnosis models? ChatGPT4 refuses to answer too often. | 1 | [removed] | 2024-01-04T23:40:57 | https://www.reddit.com/r/LocalLLaMA/comments/18yrr4j/what_are_the_best_medical_diagnosis_models/ | freshlyLinux | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yrr4j | false | null | t3_18yrr4j | /r/LocalLLaMA/comments/18yrr4j/what_are_the_best_medical_diagnosis_models/ | false | false | self | 1 | null |
Small model for "Open Interpreter" use recommendation? | 1 | I've been trying out many different models with the tool "Open Interpreter"
[https://github.com/KillianLucas/open-interpreter/](https://github.com/KillianLucas/open-interpreter/)
GPT-4 Is obviously the best but i feel like even the 7B models (mistral and deeepseek coder) are pretty close. does anyone know any fine tuned mistrals or something that could be good at this performing these tasks?
clip for context: [https://www.tiktok.com/@techfren/video/7319827832966876434](https://www.tiktok.com/@techfren/video/7319827832966876434) | 2024-01-04T23:20:34 | https://www.reddit.com/r/LocalLLaMA/comments/18yr9u2/small_model_for_open_interpreter_use/ | AJ47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yr9u2 | false | null | t3_18yr9u2 | /r/LocalLLaMA/comments/18yr9u2/small_model_for_open_interpreter_use/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'QpbBUIdv1RejQ-c6SXbmnoPMe83ErrNIc_QzVO6Nz_c', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/AtNcL4PMos3rZPlemINkaEdp9_dwQ8iZ5pLbGGM8AJg.jpg?width=108&crop=smart&auto=webp&s=e06f4bdd841e37d03518faa579153fe2efaa1216', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/AtNcL4PMos3rZPlemINkaEdp9_dwQ8iZ5pLbGGM8AJg.jpg?width=216&crop=smart&auto=webp&s=ce18e82c987b680b8c660681f95fc657347564be', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/AtNcL4PMos3rZPlemINkaEdp9_dwQ8iZ5pLbGGM8AJg.jpg?width=320&crop=smart&auto=webp&s=ebc1cca340ddceaceac3a62c48e5f99acf903728', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/AtNcL4PMos3rZPlemINkaEdp9_dwQ8iZ5pLbGGM8AJg.jpg?width=640&crop=smart&auto=webp&s=cfb5efef40665878290a6fe4b7d652a3be7023f6', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/AtNcL4PMos3rZPlemINkaEdp9_dwQ8iZ5pLbGGM8AJg.jpg?width=960&crop=smart&auto=webp&s=9c0d7ba1306b8936f899cecc5460a92306604161', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/AtNcL4PMos3rZPlemINkaEdp9_dwQ8iZ5pLbGGM8AJg.jpg?width=1080&crop=smart&auto=webp&s=6a89b78bbddda6bc227681a6a7a40daea97fefb7', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/AtNcL4PMos3rZPlemINkaEdp9_dwQ8iZ5pLbGGM8AJg.jpg?auto=webp&s=0faca598ca9698f6b719fa7d453b2ea554320ba2', 'width': 3840}, 'variants': {}}]} |
Is Anyone Doing LLaVa 1.5 Finetuning with Windows? | 1 | I'm guessing not but thought I'd at least ask before dual booting just for training. | 2024-01-04T23:03:18 | https://www.reddit.com/r/LocalLLaMA/comments/18yqv42/is_anyone_doing_llava_15_finetuning_with_windows/ | daedalus1982 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yqv42 | false | null | t3_18yqv42 | /r/LocalLLaMA/comments/18yqv42/is_anyone_doing_llava_15_finetuning_with_windows/ | false | false | self | 1 | null |
LLM for RAG - embedding and chat not compatible? | 1 | Ok so I'm in early stages of developing a RAG system for communication (emails mostly), and so far, the plan is as follows: use langchain to load email files and get metadata, use mistral:instruct on ollama to embed content, save all emails as chunks in elasticsearch database along with metadata, use langchain with mistral:instruct on ollama to query the elasticsearch db. The goal is to be able to ask questions like "in the period April 4th 2018 until May 6th of 2018, has an employee from company A expressed any concerns about breaching copyright laws to an employee at company B?". That's why we're using the elasticsearch db, so we can keep metadata like time and company in separate fields and avoid model hallucinations related to date and company affiliation. I've now come to understand that using mistral is not a good idea for embedding, and that I should use an encoder (Bert) or encoder-decoder (t5) model to do so. This would also help with speed (I might have 2.5M emails to work with, and everything should preferably be embedded within only a day or two). My problem is now though, that if I use a model that's good for embedding, then that model won't be as good for general language understanding and conversation, which I want. The whole point of doing a RAG system is to be able to chat with the database and ask questions, and to my understanding, Bert for instance would not be fit for that task. T5 might, but I'm not sure about that either, and it sure wouldn't be as good as mistral, for instance. So what to do? If I embed using Bert, then I can't chat with my db, but if I use mistral to embed then I'm abusing the model, it'll take forever, and it'll create suboptimal embeddings (correct?). If it's in any way relevant, I can say that my data is mostly in Danish, but I have considered using machine translation to English as part of my preprocessing. I'm also not sure that that is a good idea, however. Thanks a lot in advance :) | 2024-01-04T22:31:59 | https://www.reddit.com/r/LocalLLaMA/comments/18yq2a1/llm_for_rag_embedding_and_chat_not_compatible/ | _donau_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yq2a1 | false | null | t3_18yq2a1 | /r/LocalLLaMA/comments/18yq2a1/llm_for_rag_embedding_and_chat_not_compatible/ | false | false | default | 1 | null |
LLM for RAG - embedding and chat not compatible? | 1 | Ok so I'm in early stages of developing a RAG system for communication (emails mostly), and so far, the plan is as follows: use langchain to load email files and get metadata, use mistral:instruct on ollama to embed content, save all emails as chunks in elasticsearch database along with metadata, use langchain with mistral:instruct on ollama to query the elasticsearch db. The goal is to be able to ask questions like "in the period April 4th 2018 until May 6th of 2018, has an employee from company A expressed any concerns about breaching copyright laws to an employee at company B?". That's why we're using the elasticsearch db, so we can keep metadata like time and company in separate fields and avoid model hallucinations related to date and company affiliation. I've now come to understand that using mistral is not a good idea for embedding, and that I should use an encoder (Bert) or encoder-decoder (t5) model to do so. This would also help with speed (I might have 2.5M emails to work with, and everything should preferably be embedded within only a day or two). My problem is now though, that if I use a model that's good for embedding, then that model won't be as good for general language understanding and conversation, which I want. The whole point of doing a RAG system is to be able to chat with the database and ask questions, and to my understanding, Bert for instance would not be fit for that task. T5 might, but I'm not sure about that either, and it sure wouldn't be as good as mistral, for instance. So what to do? If I embed using Bert, then I can't chat with my db, but if I use mistral to embed then I'm abusing the model, it'll take forever, and it'll create suboptimal embeddings (correct?). If it's in any way relevant, I can say that my data is mostly in Danish, but I have considered using machine translation to English as part of my preprocessing. I'm also not sure that that is a good idea, however. Thanks a lot in advance :) | 2024-01-04T22:31:43 | https://www.reddit.com/r/LocalLLaMA/comments/18yq1y6/llm_for_rag_embedding_and_chat_not_compatible/ | _donau_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yq1y6 | false | null | t3_18yq1y6 | /r/LocalLLaMA/comments/18yq1y6/llm_for_rag_embedding_and_chat_not_compatible/ | false | false | self | 1 | null |
WhiteRabbitNeo - Cybersecurity AI - (HackerGPT) | 1 | [removed] | 2024-01-04T22:21:13 | https://www.reddit.com/r/LocalLLaMA/comments/18yprm1/whiterabbitneo_cybersecurity_ai_hackergpt/ | migtissera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yprm1 | false | null | t3_18yprm1 | /r/LocalLLaMA/comments/18yprm1/whiterabbitneo_cybersecurity_ai_hackergpt/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '70xijO5cDiq9X1zc-Fi-bk7JwG5lVNZCV3ToXSuEpsA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/dWTztq3tPuk6eEt_OMSE7pE7rDmSjRn17h-g36SxpIA.jpg?width=108&crop=smart&auto=webp&s=8a345c20c49c1457631c1991672eb855d048f769', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/dWTztq3tPuk6eEt_OMSE7pE7rDmSjRn17h-g36SxpIA.jpg?width=216&crop=smart&auto=webp&s=a2c53f8a4612cd459bc70d330ca3a14250a13948', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/dWTztq3tPuk6eEt_OMSE7pE7rDmSjRn17h-g36SxpIA.jpg?width=320&crop=smart&auto=webp&s=454a3aba7ba038e5d905c925aba89350fe9dbd20', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/dWTztq3tPuk6eEt_OMSE7pE7rDmSjRn17h-g36SxpIA.jpg?auto=webp&s=1d81172f830c26e0fac37ed67764920a4315c993', 'width': 480}, 'variants': {}}]} |
So... what's the purpose of a local LLM? | 1 | I love fiddling with tech and think I'd enjoy working on a local LLM, but I'm having trouble thinking of HOW I would actually use it once I get it all set up.
What do you use yours for that you either can't or wouldn't feel comfortable using a non-local option for? | 2024-01-04T22:18:14 | https://www.reddit.com/r/LocalLLaMA/comments/18ypoyf/so_whats_the_purpose_of_a_local_llm/ | scambl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ypoyf | false | null | t3_18ypoyf | /r/LocalLLaMA/comments/18ypoyf/so_whats_the_purpose_of_a_local_llm/ | false | false | self | 1 | null |
Pages to stay up to date on AI | 1 | Hi, I'm new to this model stuff, I see that very often new models come out or they make a fine tuning of two models and I wanted to ask you if you know some pages to keep me updated on these topics.
They can be twitter accounts, web pages or some other social network account, I would appreciate it very much. | 2024-01-04T22:01:31 | https://www.reddit.com/r/LocalLLaMA/comments/18ypa6t/pages_to_stay_up_to_date_on_ai/ | Hungry-Advisor-5319 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ypa6t | false | null | t3_18ypa6t | /r/LocalLLaMA/comments/18ypa6t/pages_to_stay_up_to_date_on_ai/ | false | false | self | 1 | null |
๐บ๐ฆโโฌ LLM Comparison/Test: API Edition (GPT-4 vs. Gemini vs. Mistral vs. local LLMs) | 1 | Here I'm finally testing and ranking online-only API LLMs like Gemini and Mistral, retesting GPT-4 + Turbo, and comparing all of them with the local models I've already tested!
Very special thanks to kind people like u/raymyers and others who offered and lent me their API keys so I could do these tests. And thanks to those who bugged me to expand my tests onto LLMaaS. ;)
## Models tested:
- **GPT-4**
- **GPT-4 Turbo**
- **Gemini Pro**
- **mistral-medium**
- **mistral-small**
- **mistral-tiny**
## Testing methodology
- **4 German data protection trainings:**
- I run models through **4** professional German online data protection trainings/exams - the same that our employees have to pass as well.
- The test data and questions as well as all instructions are in German while the character card is in English. This **tests translation capabilities and cross-language understanding**.
- Before giving the information, I instruct the model (in German): *I'll give you some information. Take note of this, but only answer with "OK" as confirmation of your acknowledgment, nothing else.* This **tests instruction understanding and following capabilities**.
- After giving all the information about a topic, I give the model the exam question. It's a multiple choice (A/B/C) question, where the last one is the same as the first but with changed order and letters (X/Y/Z). Each test has 4-6 exam questions, for a total of **18** multiple choice questions.
- If the model gives a single letter response, I ask it to answer with more than just a single letter - and vice versa. If it fails to do so, I note that, but it doesn't affect its score as long as the initial answer is correct.
- I rank models according to how many correct answers they give, primarily after being given the curriculum information beforehand, and secondarily (as a tie-breaker) after answering blind without being given the information beforehand.
- All tests are separate units, context is cleared in between, there's no memory/state kept between sessions.
- [SillyTavern](https://github.com/SillyTavern/SillyTavern) frontend
- [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui) backend (for HF models)
- **Deterministic** generation settings preset (to eliminate as many random factors as possible and allow for meaningful model comparisons)
- Chat Completion API
## Detailed Test Reports
And here are the detailed notes, the basis of my ranking, and also additional comments and observations:
- **GPT-4** (gpt-4) API:
- โ
Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **18/18**
- โ
Consistently acknowledged all data input with "OK".
- โ
Followed instructions to answer with just a single letter or more than just a single letter.
- Fluctuating speeds, but on average rather slow (15-20 tps)
- Short, concise responses
- Noticeable repetition in how responses were structured and similar sentences
The king remains on the throne: That's what a perfect score looks like! Same as last time I tested it in October 2023.
- **GPT-4 Turbo** (gpt-4-1106-preview) API:
- โ
Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **4+4+3+5=16/18**
- โ
Consistently acknowledged all data input with "OK".
- โ
Followed instructions to answer with just a single letter or more than just a single letter.
- Fluctuating speeds, but on average rather slow (15-20 tps) - I thought Turbo should be faster?!
- Shorter, even more concise responses
- No repetition (possibly not noticeable because of less verbose responses)
What, no perfect score, tripping up on the blind runs? Looks like it hallucinated a bit, causing it to fall behind the "normal" GPT-4. Since Turbo likely means quantized, this hints at quantization causing noticeable degradation even with such a huge model as GPT-4 (possibly also related to its alleged MoE architecture)!
- **Gemini Pro** API:
- โ Gave correct answers to only **4+4+3+6=17/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **4+3+3+6=16/18**
- โ Did NOT follow instructions to acknowledge data input with "OK".
- โ Did NOT follow instructions to answer with just a single letter or more than just a single letter consistently.
- Had to use a VPN since G๐ก๐คฎgle is restricting API access from Germany ~~as if it was some backworld rogue state~~
- Sometimes it got stuck somehow so I had to delete and redo the stuck message
- OK speed, despite cross-continent VPN (15-30 tps)
- Less verbose responses
- No repetition (possibly not noticeable because of less verbose responses)
Didn't feel next-gen at all. Definitely not a GPT-4 killer, because it didn't appear any better than that - and as an online model, it can't compete with local models that offer privacy and control (and the best local ones also easily surpass it in my tests).
- **mistral-medium** API:
- โ Gave correct answers to only **4+4+1+6=15/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **4+4+3+6=17/18**
- โ Did NOT follow instructions to acknowledge data input with "OK".
- โ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- Got a bunch of "Streaming request failed with status 503 Service Unavailable"
- Slower than what I'm used to with local models (10-15 tps)
- Very verbose! I limited max new tokens to 300 but most messages tried to exceed that and got cut off. In a few cases, had to continue to get the actual answer.
- Noticeable repetition in how responses were structured and similar sentences
- Used 691,335 tokens for 1.98 EUR
Expected more from Mistral's current flagship model - but in the third test, it failed to answer three questions, acknowledging them just like information! Retried with non-deterministic settings (random seed), but the problem persisted. Only when I raised the max new tokens from 300 to 512 would it answer the questions properly, and then it got them all right (with deterministic settings). Would be unfair to count the modified run, and a great model shouldn't exhibit such problems, so I've got to count the failures for my ranking. A great model needs to perform all the time, and if it clearly doesn't, a lower rank is deserved.
- **mistral-small** API:
- โ Gave correct answers to only **4+4+3+6=17/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **4+3+1+3=11/18**
- โ Did NOT follow instructions to acknowledge data input with "OK".
- โ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- Good speed, like my local EXL2 Mixtral (30 tps)
- Less verbose than mistral-medium, felt more like normal responses
- Less repetition (possibly less noticeable because of less verbose responses)
- Sometimes wasn't answering properly during the blind run, talking about the different options without selecting one decisively.
- Used 279,622 tokens for 0.19 EUR
According to Mistral AI, this is our Mixtral 8x7B, and it did OK. But local Mixtral-8x7B-Instruct-v0.1 did better when I tested it, even quantized down to 4-bit. So I wonder what quantization, if any, Mistral AI is using? Or could the difference be attributed to prompt format or anything that's different between the API and local use?
- **mistral-tiny** API:
- โ Gave correct answers to only **2+2+0+0=4/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **3+1+1+6=11/18**
- โ Did NOT follow instructions to acknowledge data input with "OK".
- โ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- Blazingly fast (almost 100 tps)
- Very verbose! I limited max new tokens to 300 but most messages tried to exceed that and got cut off.
- Noticeable repetition in how responses were structured and similar sentences.
- Often wasn't answering properly, talking about the different options without selecting one decisively.
- Used 337,897 tokens for 0.05 EUR
Ugh! Sorry, Mistral, but this is just terrible, felt way worse than the Mistral-7B-Instruct-v0.2 I've run locally (unquantized). Is this a quantized 7B or does API vs. local use make such a difference?
## Updated Rankings
This is my objective ranking of these models based on measuring factually correct answers, instruction understanding and following, and multilingual abilities:
| Rank | Model | Size | Format | Quant | Context | Prompt | 1st Score | 2nd Score | OK | +/- |
| ---- | ------------------------------------------------------------------------------------------------------------------------------------------------ | ------- | ------ | ------- | ----------- | ------------------------ | --------- | --------- | --- | --- |
| 1 ๐ | GPT-4 | GPT-4 | API | | | | 18/18 โ | 18/18 โ | โ | โ |
| 1 | [goliath-120b-GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF) | 120B | GGUF | Q2_K | 4K | Vicuna 1.1 | 18/18 โ | 18/18 โ | โ | โ |
| 1 | [Tess-XL-v1.0-GGUF](https://huggingface.co/TheBloke/Tess-XL-v1.0-GGUF) | 120B | GGUF | Q2_K | 4K | Synthia | 18/18 โ | 18/18 โ | โ | โ |
| 1 | [Nous-Capybara-34B-GGUF](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF) | 34B | GGUF | Q4_0 | 16K | Vicuna 1.1 | 18/18 โ | 18/18 โ | โ | โ |
| 2 | [Venus-120b-v1.0](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0) | 120B | EXL2 | 3.0bpw | 4K | Alpaca | 18/18 โ | 18/18 โ | โ | โ |
| 3 | [lzlv_70B-GGUF](https://huggingface.co/TheBloke/lzlv_70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Vicuna 1.1 | 18/18 โ | 17/18 | โ | โ |
| 4 ๐ | GPT-4 Turbo | GPT-4 | API | | | | 18/18 โ | 16/18 | โ | โ |
| 4 | [chronos007-70B-GGUF](https://huggingface.co/TheBloke/chronos007-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 โ | 16/18 | โ | โ |
| 4 | [SynthIA-70B-v1.5-GGUF](https://huggingface.co/migtissera/SynthIA-70B-v1.5-GGUF) | 70B | GGUF | Q4_0 | 4K | SynthIA | 18/18 โ | 16/18 | โ | โ |
| 5 | [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8x7B | HF | 4-bit | ~~32K~~ 4K | Mixtral | 18/18 โ | 16/18 | โ | โ |
| 6 | [dolphin-2_2-yi-34b-GGUF](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF) | 34B | GGUF | Q4_0 | 16K | ChatML | 18/18 โ | 15/18 | โ | โ |
| 7 | [StellarBright-GGUF](https://huggingface.co/TheBloke/StellarBright-GGUF) | 70B | GGUF | Q4_0 | 4K | Vicuna 1.1 | 18/18 โ | 14/18 | โ | โ |
| 8 | [Dawn-v2-70B-GGUF](https://huggingface.co/TheBloke/Dawn-v2-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 โ | 14/18 | โ | โ |
| 8 | [Euryale-1.3-L2-70B-GGUF](https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 โ | 14/18 | โ | โ |
| 9 | [sophosynthesis-70b-v1](https://huggingface.co/sophosympatheia/sophosynthesis-70b-v1) | 70B | EXL2 | 4.85bpw | 4K | Vicuna 1.1 | 18/18 โ | 13/18 | โ | โ |
| 10 | [GodziLLa2-70B-GGUF](https://huggingface.co/TheBloke/GodziLLa2-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 โ | 12/18 | โ | โ |
| 11 | [Samantha-1.11-70B-GGUF](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | Vicuna 1.1 | 18/18 โ | 10/18 | โ | โ |
| 12 | [Airoboros-L2-70B-3.1.2-GGUF](https://huggingface.co/TheBloke/Airoboros-L2-70B-3.1.2-GGUF) | 70B | GGUF | Q4_K_M | 4K | Llama 2 Chat | 17/18 | 16/18 | โ | โ |
| 13 ๐ | Gemini Pro | Gemini | API | | | | 17/18 | 16/18 | โ | โ |
| 14 | [Rogue-Rose-103b-v0.2](https://huggingface.co/sophosympatheia/Rogue-Rose-103b-v0.2) | 103B | EXL2 | 3.2bpw | 4K | Rogue Rose | 17/18 | 14/18 | โ | โ |
| 15 | GPT-3.5 Turbo Instruct | GPT-3.5 | API | | | | 17/18 | 11/18 | โ | โ |
| 15 ๐ | mistral-small | Mistral | API | | | | 17/18 | 11/18 | โ | โ |
| 16 | [Synthia-MoE-v3-Mixtral-8x7B](https://huggingface.co/migtissera/Synthia-MoE-v3-Mixtral-8x7B) | 8x7B | HF | 4-bit | ~~32K~~ 4K | ~~Synthia~~ Llama 2 Chat | 17/18 | 9/18 | โ | โ |
| 17 | [dolphin-2.2-70B-GGUF](https://huggingface.co/TheBloke/dolphin-2.2-70B-GGUF) | 70B | GGUF | Q4_0 | 4K | ChatML | 16/18 | 14/18 | โ | โ |
| 18 | [mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) | 7B | HF | โ | ~~32K~~ 8K | Alpaca | 16/18 | 13/18 | โ | โ |
| 19 | [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) | 7B | HF | โ | ~~32K~~ 8K | ChatML | 16/18 | 13/18 | โ | โ |
| 20 | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 7B | HF | โ | 32K | Mistral | 16/18 | 12/18 | โ | โ |
| 20 | [DeciLM-7B-instruct](https://huggingface.co/Deci/DeciLM-7B-instruct) | 7B | HF | โ | 32K | Mistral | 16/18 | 11/18 | โ | โ |
| 20 | [Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3) | 7B | HF | โ | ~~32K~~ 8K | Alpaca | 16/18 | 11/18 | โ | โ |
| 21 | [SauerkrautLM-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) | 7B | HF | โ | ~~32K~~ 8K | ChatML | 16/18 | 11/18 | โ | โ |
| 22 ๐ | mistral-medium | Mistral | API | | | | 15/18 | 17/18 | โ | โ |
| 23 | [mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227) | 7B | HF | โ | ~~32K~~ 8K | Alpaca | 15/18 | 14/18 | โ | โ |
| 24 | GPT-3.5 Turbo | GPT-3.5 | API | | | | 15/18 | 14/18 | โ | โ |
| 25 | [dolphin-2.5-mixtral-8x7b](https://huggingface.co/ehartford/dolphin-2.5-mixtral-8x7b) | 8x7B | HF | 4-bit | ~~32K~~ 4K | ChatML | 15/18 | 13/18 | โ | โ |
| 26 | [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) | 7B | HF | โ | 8K | OpenChat (GPT4 Correct) | 15/18 | 13/18 | โ | โ |
| 27 | [dolphin-2.6-mistral-7b-dpo](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo) | 7B | HF | โ | 16K | ChatML | 15/18 | 12/18 | โ | โ |
| 28 | [openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210) | 7B | HF | โ | 8K | OpenChat (GPT4 Correct) | 15/18 | 7/18 | โ | โ |
| 29 | [dolphin-2.7-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b) | 8x7B | HF | 4-bit | 32K | ChatML | 15/18 | 6/18 | โ | โ |
| 30 | [dolphin-2.6-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mixtral-8x7b) | 8x7B | HF | 4-bit | ~~32K~~ 16K | ChatML | 14/18 | 12/18 | โ | โ |
| 31 | [MixtralRPChat-ZLoss](https://huggingface.co/chargoddard/MixtralRPChat-ZLoss) | 8x7B | HF | 4-bit | ~~32K~~ 8K | CharGoddard | 14/18 | 10/18 | โ | โ |
| 32 | [OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp) | 7B | HF | โ | ~~32K~~ 8K | OpenChat (GPT4 Correct) | 13/18 | 13/18 | โ | โ |
| 33 | [dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser) | 7B | HF | โ | 16K | ChatML | 12/18 | 13/18 | โ | โ |
| 34 | [sonya-medium-x8-MoE](https://huggingface.co/dillfrescott/sonya-medium-x8-MoE) | 8x11B | HF | 4-bit | 8K | Alpaca | 12/18 | 10/18 | โ | โ |
| 35 | [dolphin-2.6-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b) | 7B | HF | โ | ~~32K~~ 8K | ChatML | 10/18 | 10/18 | โ | โ |
| 35 | [SauerkrautLM-70B-v1-GGUF](https://huggingface.co/TheBloke/SauerkrautLM-70B-v1-GGUF) | 70B | GGUF | Q4_0 | 4K | Llama 2 Chat | 9/18 | 15/18 | โ | โ |
| 36 ๐ | mistral-tiny | Mistral | API | | | | 4/18 | 11/18 | โ | โ |
| 37 | [dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2) | 2.7B | HF | โ | 2K | ChatML | 0/18 โ | 0/18 โ | โ | โ |
| 38 | [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) | 1.1B | HF | โ | 2K | Zephyr | 0/18 โ | 0/18 โ | โ | โ |
- 1st Score = Correct answers to multiple choice questions (after being given curriculum information)
- 2nd Score = Correct answers to multiple choice questions (without being given curriculum information beforehand)
- OK = Followed instructions to acknowledge all data input with just "OK" consistently
- +/- = Followed instructions to answer with just a single letter or more than just a single letter
## Conclusions
I'm not too impressed with online-only LLMs. GPT-4 is still the best, but its (quantized?) Turbo version blundered, as did all the other LLM-as-a-service offerings.
If their quality and performance aren't much, much better than that of local models, how can online-only LLMs even stay viable? They'll never be able to compete with the privacy and control that local LLMs offer, or the sheer number of brilliant minds working on local AI (many may be amateurs, but that's not a bad thing, after all it literally means "people who love what they do").
Anyway, these are the current results of all my tests and comparisons. I'm more convinced than ever that open AI, not OpenAI/Google/etc., is the future.
Mistral AI being the most open one amongst those commercial AI offerings, I wish them the best of luck. Their small offering is already on par with GPT-3.5 (in my tests), so I'm looking forward to their big one, which is supposed to be their GPT-4 challenger. I just hope they'll continue to openly release their models for local use, while providing their online services as a profitable convenience with commercial support for those who can't or don't want/need to run AI locally.
Thanks for reading. Hope my tests and comparisons are useful to some of you.
## Upcoming/Planned Tests
Next on my ~~to-do~~ to-test list are still the 10B (SOLAR) and updated 34B (Yi) models - those will surely shake up my rankings further.
I'm in the middle of that already, but took this quick detour to test the online-only API LLMs when people offered me their API keys.
--------------------------------------------------------------------------------
Here's a list of my previous model tests and comparisons or other related posts:
- [LLM Comparison/Test: Brand new models for 2024 (Dolphin 2.6/2.7 Mistral/Mixtral/Phi-2, Sonya, TinyLlama)](https://www.reddit.com/r/LocalLLaMA/comments/18w9hak/llm_comparisontest_brand_new_models_for_2024/) Winner: dolphin-2.6-mistral-7b-dpo
- [LLM Comparison/Test: Ranking updated with 10 new models (the best 7Bs)!](https://www.reddit.com/r/LocalLLaMA/comments/18u122l/llm_comparisontest_ranking_updated_with_10_new/) Winners: mistral-ft-optimized-1218, OpenHermes-2.5-Mistral-7B
- [LLM **Prompt Format** Comparison/Test: Mixtral 8x7B Instruct with \*\*17\*\* different instruct templates](https://www.reddit.com/r/LocalLLaMA/comments/18ljvxb/llm_prompt_format_comparisontest_mixtral_8x7b/)
- [LLM Comparison/Test: Mixtral-8x7B, Mistral, DeciLM, Synthia-MoE](https://www.reddit.com/r/LocalLLaMA/comments/18gz54r/llm_comparisontest_mixtral8x7b_mistral_decilm/) Winner: Mixtral-8x7B-Instruct-v0.1
- [Updated LLM Comparison/Test with new RP model: Rogue Rose 103B](https://www.reddit.com/r/LocalLLaMA/comments/18ft8f5/updated_llm_comparisontest_with_new_rp_model/)
- [**Big** LLM Comparison/Test: 3x 120B, 12x 70B, 2x 34B, GPT-4/3.5](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) Winner: Goliath 120B
- [LLM Format Comparison/Benchmark: 70B GGUF vs. EXL2 (and AWQ)](https://www.reddit.com/r/LocalLLaMA/comments/17w57eu/llm_format_comparisonbenchmark_70b_gguf_vs_exl2/)
- [LLM Comparison/Test: 2x 34B Yi (Dolphin, Nous Capybara) vs. 12x 70B, 120B, ChatGPT/GPT-4](https://www.reddit.com/r/LocalLLaMA/comments/17vcr9d/llm_comparisontest_2x_34b_yi_dolphin_nous/) Winners: goliath-120b-GGUF, Nous-Capybara-34B-GGUF
- [LLM Comparison/Test: Mistral 7B Updates (OpenHermes 2.5, OpenChat 3.5, Nous Capybara 1.9)](https://www.reddit.com/r/LocalLLaMA/comments/17p0gut/llm_comparisontest_mistral_7b_updates_openhermes/) Winners: OpenHermes-2.5-Mistral-7B, openchat_3.5, Nous-Capybara-7B-V1.9
- [Huge LLM Comparison/Test: Part II (7B-20B) Roleplay Tests](https://www.reddit.com/r/LocalLLaMA/comments/17kpyd2/huge_llm_comparisontest_part_ii_7b20b_roleplay/) Winners: OpenHermes-2-Mistral-7B, LLaMA2-13B-Tiefighter
- [Huge LLM Comparison/Test: 39 models tested (7B-70B + ChatGPT/GPT-4)](https://www.reddit.com/r/LocalLLaMA/comments/17fhp9k/huge_llm_comparisontest_39_models_tested_7b70b/)
- [My current favorite new LLMs: SynthIA v1.5 and Tiefighter!](https://www.reddit.com/r/LocalLLaMA/comments/17e446l/my_current_favorite_new_llms_synthia_v15_and/)
- [Mistral LLM Comparison/Test: Instruct, OpenOrca, Dolphin, Zephyr and more...](https://www.reddit.com/r/LocalLLaMA/comments/178nf6i/mistral_llm_comparisontest_instruct_openorca/)
- [LLM Pro/Serious Use Comparison/Test: From 7B to 70B vs. ChatGPT!](https://www.reddit.com/r/LocalLLaMA/comments/172ai2j/llm_proserious_use_comparisontest_from_7b_to_70b/) Winner: Synthia-70B-v1.2b
- [LLM Chat/RP Comparison/Test: Dolphin-Mistral, Mistral-OpenOrca, Synthia 7B](https://www.reddit.com/r/LocalLLaMA/comments/16z3goq/llm_chatrp_comparisontest_dolphinmistral/) Winner: Mistral-7B-OpenOrca
- [LLM Chat/RP Comparison/Test: Mistral 7B Base + Instruct](https://www.reddit.com/r/LocalLLaMA/comments/16twtfn/llm_chatrp_comparisontest_mistral_7b_base_instruct/)
- [LLM Chat/RP Comparison/Test (Euryale, FashionGPT, MXLewd, Synthia, Xwin)](https://www.reddit.com/r/LocalLLaMA/comments/16r7ol2/llm_chatrp_comparisontest_euryale_fashiongpt/) Winner: Xwin-LM-70B-V0.1
- [New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B)](https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/) Winners: Nous-Hermes-Llama2-70B, Synthia-70B-v1.2b
- [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/) Winner: Mythalion-13B
- [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) Winners: MythoMax-L2-13B, vicuna-13B-v1.5-16K
- [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) Winner: Nous-Hermes-Llama2
- [SillyTavern's Roleplay preset vs. model-specific prompt format](https://www.reddit.com/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/)
--------------------------------------------------------------------------------
[My Ko-fi page](https://ko-fi.com/wolframravenwolf) if you'd like to tip me to say thanks or request specific models to be tested with priority. Also consider tipping your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it! | 2024-01-04T22:01:10 | https://www.reddit.com/r/LocalLLaMA/comments/18yp9u4/llm_comparisontest_api_edition_gpt4_vs_gemini_vs/ | WolframRavenwolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yp9u4 | false | null | t3_18yp9u4 | /r/LocalLLaMA/comments/18yp9u4/llm_comparisontest_api_edition_gpt4_vs_gemini_vs/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'bDW7jyCB5L7RKBwRUqrzWSn3bIb_Szu_GogYRebiCjw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=108&crop=smart&auto=webp&s=f076a50b0d594dc8ba3b2ee703d67664decf1cba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=216&crop=smart&auto=webp&s=dbc51e386e2d24255edce0cbd6a139d2b37dc0a1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=320&crop=smart&auto=webp&s=13107e47f85ca5d663508f0d9c3bca3648a98f75', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=640&crop=smart&auto=webp&s=f340c6c7589a711ca86aba7661baee1db6acf927', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=960&crop=smart&auto=webp&s=76d5b3a13d8ba4378270e9ae41aa3081e25b37e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=1080&crop=smart&auto=webp&s=d17a08361a95b03dd8a9a733ec765497cf2bf0d1', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?auto=webp&s=577b115ae7cd70077bd0dc15f7fe27e71ff19e2b', 'width': 1280}, 'variants': {}}]} |
What is the best 7b model currently? | 1 | Hi!
Is there any benchmark comparing only 7b models?
Thanks in advance! | 2024-01-04T21:45:51 | https://www.reddit.com/r/LocalLLaMA/comments/18yowfk/what_is_the_best_7b_model_currently/ | celsowm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yowfk | false | null | t3_18yowfk | /r/LocalLLaMA/comments/18yowfk/what_is_the_best_7b_model_currently/ | false | false | self | 1 | null |
Anyone ever used a x79 mining board for running llms? Is it viable? (Xeon x1-2, 5-9 pci x8 slots, 8gb ddr3) | 1 | [removed] | 2024-01-04T21:44:09 | https://www.reddit.com/r/LocalLLaMA/comments/18youw9/anyone_ever_used_a_x79_mining_board_for_running/ | TopRecognition9302 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18youw9 | false | null | t3_18youw9 | /r/LocalLLaMA/comments/18youw9/anyone_ever_used_a_x79_mining_board_for_running/ | false | false | self | 1 | null |
How to use LLMs like LLAMA-2 for NER tasks? | 1 | I want to extract countries and organizations from texts and wonder if there are best practices how to do that?
Would you just define a prompt like โProvide a list of all countries mentioned in the following text: โฆโ ?
LLAMA-2 for instance provides good answers but is not very consistent and sometimes adds additional text. Are there any better approaches to use LLAMA-2 for NER tasks to get more reliable and structured results? | 2024-01-04T21:20:07 | https://www.reddit.com/r/LocalLLaMA/comments/18yo9x8/how_to_use_llms_like_llama2_for_ner_tasks/ | Electronic-Letter592 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18yo9x8 | false | null | t3_18yo9x8 | /r/LocalLLaMA/comments/18yo9x8/how_to_use_llms_like_llama2_for_ner_tasks/ | false | false | self | 1 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.