| title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Felladrin/TinyMistral-248M-Alpaca | 26 | 2023-11-18T05:26:04 | https://huggingface.co/Felladrin/TinyMistral-248M-Alpaca | Thistleknot | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17xzcqk | false | null | t3_17xzcqk | /r/LocalLLaMA/comments/17xzcqk/felladrintinymistral248malpaca/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'YWn4Awzf8hyykPJ_FdOnIUviyXZszwtU0KDOIsAtsmw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3xc5Y--Et_fbotseSC9HAgmoKOTgqWQ8G-v7Fl48xKs.jpg?width=108&crop=smart&auto=webp&s=0cf04586771574c318505fc8be2ebf7c10d00be4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3xc5Y--Et_fbotseSC9HAgmoKOTgqWQ8G-v7Fl48xKs.jpg?width=216&crop=smart&auto=webp&s=f0fb2a234d4a0f0c3d6f3c48cbc6779efe946638', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3xc5Y--Et_fbotseSC9HAgmoKOTgqWQ8G-v7Fl48xKs.jpg?width=320&crop=smart&auto=webp&s=aa7ebe3d6c4615f8497a66ee37abb16cd85e8207', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3xc5Y--Et_fbotseSC9HAgmoKOTgqWQ8G-v7Fl48xKs.jpg?width=640&crop=smart&auto=webp&s=20b2681bbdda48f452bd6b2c00fc961966f66a89', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3xc5Y--Et_fbotseSC9HAgmoKOTgqWQ8G-v7Fl48xKs.jpg?width=960&crop=smart&auto=webp&s=05811c8f4a3721418e404b409c79427634bdf50a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3xc5Y--Et_fbotseSC9HAgmoKOTgqWQ8G-v7Fl48xKs.jpg?width=1080&crop=smart&auto=webp&s=bf30a6029acaa8e10def05f6505813063857fc3c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3xc5Y--Et_fbotseSC9HAgmoKOTgqWQ8G-v7Fl48xKs.jpg?auto=webp&s=1c65d63ee06e2c1c74bbce5c1931036c69f99b7b', 'width': 1200}, 'variants': {}}]} | ||
Sam and Greg's statement | 11 | 2023-11-18T04:51:22 | Goatman117 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17xyryy | false | null | t3_17xyryy | /r/LocalLLaMA/comments/17xyryy/sam_and_gregs_statement/ | false | false | 11 | {'enabled': True, 'images': [{'id': 'pkkDoU_m8RwCBDoeqTlIMNsTU4mZEqFpxOS6R2Eoc8E', 'resolutions': [{'height': 138, 'url': 'https://preview.redd.it/ctucwdcie11c1.png?width=108&crop=smart&auto=webp&s=f82339ff5a985c20dd8777db84f3821670e29d87', 'width': 108}, {'height': 276, 'url': 'https://preview.redd.it/ctucwdcie11c1.png?width=216&crop=smart&auto=webp&s=0d18c7612fde083064095e1e7d086e880527bdd7', 'width': 216}, {'height': 408, 'url': 'https://preview.redd.it/ctucwdcie11c1.png?width=320&crop=smart&auto=webp&s=bf92c496ec1268a00e753e8fb7cfc8bbd8a6977d', 'width': 320}], 'source': {'height': 759, 'url': 'https://preview.redd.it/ctucwdcie11c1.png?auto=webp&s=f4172e304965433f04207e7f9e153d743fcff8ed', 'width': 594}, 'variants': {}}]} | |||
Sam and Greg's statement | 1 | ​
https://preview.redd.it/nos1jr42e11c1.png?width=594&format=png&auto=webp&s=fbee19a4fbd8196d180700c35a5eb192fe60f44c | 2023-11-18T04:48:46 | https://www.reddit.com/r/LocalLLaMA/comments/17xyqd6/sam_and_gregs_statement/ | Goatman117 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xyqd6 | false | null | t3_17xyqd6 | /r/LocalLLaMA/comments/17xyqd6/sam_and_gregs_statement/ | false | false | 1 | null | |
A statement from Greg Brockman | 2 | https://twitter.com/gdb/status/1725736242137182594 | 2023-11-18T04:47:41 | https://www.reddit.com/r/LocalLLaMA/comments/17xyppi/a_statement_from_greg_brockmam/ | Chance_Confection_37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xyppi | false | null | t3_17xyppi | /r/LocalLLaMA/comments/17xyppi/a_statement_from_greg_brockmam/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'X9ms_wi8YRgDCTV3fRGeYiZLOD5nSf8qxsJH9IgkS0Y', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/FmtH_Y-aw_g4rAf48W7TpSuf4FatjAmSfe3Qbkh3VFM.jpg?width=108&crop=smart&auto=webp&s=c9662a58e11bfb65eb15b3d99582dd6d9ee5aea4', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/FmtH_Y-aw_g4rAf48W7TpSuf4FatjAmSfe3Qbkh3VFM.jpg?auto=webp&s=922b8b833a827882339e8e22787293df53ceb0ba', 'width': 140}, 'variants': {}}]}
Is GPT-4 hallucinating? | 1 | 2023-11-18T03:59:28 | dahara111 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17xxwb3 | false | null | t3_17xxwb3 | /r/LocalLLaMA/comments/17xxwb3/is_gpt4_hallucinating/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'MVIc5w6dh0bH-uagxw1J2isXnNCq5r5LRbOgzHXCvwY', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/wwi74uo2511c1.png?width=108&crop=smart&auto=webp&s=948f92b76432a3cfa431e490265f7273313e1a53', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/wwi74uo2511c1.png?width=216&crop=smart&auto=webp&s=3058d9cb6a0892de2ffa26ce5b88ba53ba815e3e', 'width': 216}, {'height': 137, 'url': 'https://preview.redd.it/wwi74uo2511c1.png?width=320&crop=smart&auto=webp&s=88eab6e4df3df5a5a8f321f65a43049671f9f2f8', 'width': 320}, {'height': 275, 'url': 'https://preview.redd.it/wwi74uo2511c1.png?width=640&crop=smart&auto=webp&s=a8eedac23ee2682f994b1f69fae3174495513ab1', 'width': 640}], 'source': {'height': 282, 'url': 'https://preview.redd.it/wwi74uo2511c1.png?auto=webp&s=ad1e45e867dec0e36a1fc8eae1f4538bd5eb51cb', 'width': 656}, 'variants': {}}]} | |||
I got into local models to develop a product idea I have, but I've actually really started to enjoy creative writing/RPG with it in my free time. Is there any community for sharing fun prompts or something like that? | 1 | [removed] | 2023-11-18T03:53:31 | https://www.reddit.com/r/LocalLLaMA/comments/17xxson/i_got_into_local_models_to_develop_a_product_idea/ | Mescallan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xxson | false | null | t3_17xxson | /r/LocalLLaMA/comments/17xxson/i_got_into_local_models_to_develop_a_product_idea/ | false | false | self | 1 | null |
Another test of logical ability for LLMs? | 8 | Found this in a children's book of riddles:
```
Six brothers were spending their time together.
The first brother was reading a book.
The second brother was playing chess.
The third brother was solving a crossword.
The fourth brother was watering the lawn.
The fifth brother was drawing a picture.
Question: what was the sixth brother doing?
```
I can't get ChatGPT to answer correctly with the usual tricks, even after hinting to consider one- and two-person activities and emphasizing the word "together".
After a bunch of CoT turns we arrive at the conclusion that this is an open-ended question and not a riddle :)
After trying 3 times with fresh prompts, I got a correct response once, but when prompted to provide supporting reasoning the model backtracked and started apologizing.
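If you want to run the same riddle against a local model, here's a minimal sketch, assuming a local OpenAI-compatible server (e.g. llama.cpp's server or text-generation-webui) on port 8080; the base URL, model name, and sampling settings are placeholders, not part of the original test:
```python
# Minimal sketch: send the riddle to a local OpenAI-compatible endpoint.
# base_url, model name, and temperature are assumptions; adjust to your setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

riddle = (
    "Six brothers were spending their time together. "
    "The first brother was reading a book. "
    "The second brother was playing chess. "
    "The third brother was solving a crossword. "
    "The fourth brother was watering the lawn. "
    "The fifth brother was drawing a picture. "
    "Question: what was the sixth brother doing?"
)

response = client.chat.completions.create(
    model="local-model",  # many local servers ignore this field
    messages=[
        {"role": "system", "content": "Think step by step, then answer."},
        {"role": "user", "content": riddle},
    ],
    temperature=0.0,  # greedy-ish sampling so runs are repeatable
)
print(response.choices[0].message.content)
```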
Can't test GPT-4 right now... | 2023-11-18T03:12:10 | https://www.reddit.com/r/LocalLLaMA/comments/17xx115/another_test_of_logical_ability_for_llms/ | laca_komputilulo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xx115 | false | null | t3_17xx115 | /r/LocalLLaMA/comments/17xx115/another_test_of_logical_ability_for_llms/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'VGR5H4jqXQd8E29JKZ6K9R94EXHzhWQKpL_yRuvY1bE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XfrStPJMYM7dMxr7MxZYd0jNzJ9acDMGrSvws-FvLZE.jpg?width=108&crop=smart&auto=webp&s=e7661939780923c6dccce91d77c6d8b9d4f6194f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XfrStPJMYM7dMxr7MxZYd0jNzJ9acDMGrSvws-FvLZE.jpg?width=216&crop=smart&auto=webp&s=2657e149f3d4af842a2ed2131069a013fd9165f8', 'width': 216}], 'source': {'height': 250, 'url': 'https://external-preview.redd.it/XfrStPJMYM7dMxr7MxZYd0jNzJ9acDMGrSvws-FvLZE.jpg?auto=webp&s=afcb8698c026f3c616a11c4009e33a06154310cb', 'width': 250}, 'variants': {}}]}
What do you think about GPT-isms polluting datasets? Do you consider them a problem? If so, how big of a problem do you think it is? | 41 | It's no secret that many language models and fine-tunes are trained using datasets, many of them are made using GPT models. The problem arises when many "GPT-isms" end up in the dataset. And I am not only referring to the typical expressions like "however, it's important to...", "I understand your desire to...", but I am also referring to the structure of the outputs in the model's responses. ChatGPT (GPT models in general) tend to have a very predictable structure when in its "soulless assistant" mode, which makes it very easy to say "this is very GPT-like".
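To illustrate how easy the shallow cases are to flag, here is a rough sketch of a phrase-based filter; the phrase list and the `text` field name are illustrative assumptions, not taken from any particular dataset:
```python
# Rough sketch: count dataset rows containing stock GPT phrases.
# The phrase list and the "text" field are illustrative assumptions.
import json

GPT_ISMS = [
    "however, it's important to",
    "i understand your desire to",
    "as an ai language model",
]

def is_gpt_like(text: str) -> bool:
    lowered = text.lower()
    return any(phrase in lowered for phrase in GPT_ISMS)

flagged = 0
with open("dataset.jsonl", encoding="utf-8") as f:
    for line in f:
        if line.strip() and is_gpt_like(json.loads(line)["text"]):
            flagged += 1

print(f"{flagged} rows contain GPT-isms")
```
Of course, a phrase filter only catches surface wording; the structural sameness described above is much harder to detect automatically.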
What do you think about this? Oh, and by the way, forgive my English. | 2023-11-18T03:02:58 | https://www.reddit.com/r/LocalLLaMA/comments/17xwuno/what_do_you_think_about_gptisms_polluting/ | OC2608 | self.LocalLLaMA | 2023-11-18T03:23:25 | 0 | {} | 17xwuno | false | null | t3_17xwuno | /r/LocalLLaMA/comments/17xwuno/what_do_you_think_about_gptisms_polluting/ | false | false | self | 41 | null |
Top 3B model is a distilled llama 7B | 34 | 2023-11-18T02:59:53 | https://huggingface.co/GeneZC/MiniChat-3B | Aaaaaaaaaeeeee | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 17xws80 | false | null | t3_17xws80 | /r/LocalLLaMA/comments/17xws80/top_3b_model_is_a_distilled_llama_7b/ | false | false | 34 | {'enabled': False, 'images': [{'id': 'D4iwtFMy5ma1S9AfsX9xHMIaePm0dZA3eg27mo-GcNk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/AyrBA3SE0QsEcC-vVEfg5nrwR3k-aO8Bw2NX7d8i0_s.jpg?width=108&crop=smart&auto=webp&s=64772cc9a55dcc9e3ba60cdb8e1efc8b2795b620', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/AyrBA3SE0QsEcC-vVEfg5nrwR3k-aO8Bw2NX7d8i0_s.jpg?width=216&crop=smart&auto=webp&s=d9357cb0de11c74cc27d119534f7ab67c4198e03', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/AyrBA3SE0QsEcC-vVEfg5nrwR3k-aO8Bw2NX7d8i0_s.jpg?width=320&crop=smart&auto=webp&s=4bfc525b2e75b34a778e13f97f6352eb9827c238', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/AyrBA3SE0QsEcC-vVEfg5nrwR3k-aO8Bw2NX7d8i0_s.jpg?width=640&crop=smart&auto=webp&s=cbb1cb5a86d21090fcc9887bbb556f24e44bfe51', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/AyrBA3SE0QsEcC-vVEfg5nrwR3k-aO8Bw2NX7d8i0_s.jpg?width=960&crop=smart&auto=webp&s=f5a9135024f2a726b90737bd772ea8505d7c2116', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/AyrBA3SE0QsEcC-vVEfg5nrwR3k-aO8Bw2NX7d8i0_s.jpg?width=1080&crop=smart&auto=webp&s=0445eac7e04a992a02ef41d1cea00b2ee698b5c8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/AyrBA3SE0QsEcC-vVEfg5nrwR3k-aO8Bw2NX7d8i0_s.jpg?auto=webp&s=8ce0d07beb3f2d74c6fca47d283c4e55b75805f2', 'width': 1200}, 'variants': {}}]} | ||
What is your opinion of Lorax? Have any of you tried it out yet? | 1 | 2023-11-18T02:42:51 | https://predibase.com/blog/lora-exchange-lorax-serve-100s-of-fine-tuned-llms-for-the-cost-of-one | swordsman1 | predibase.com | 1970-01-01T00:00:00 | 0 | {} | 17xwg63 | false | null | t3_17xwg63 | /r/LocalLLaMA/comments/17xwg63/what_is_your_opinion_of_lorax_have_any_of_you/ | false | false | 1 | {'enabled': False, 'images': [{'id': '0zX9rqWlxtDTXM2B0vbSPg5rAOljp7Fb5kLndJw_shY', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/RRt1Or-9CEV7x2rQ39RW7v4zRk5KJpdo9gCqr1q5qGg.jpg?width=108&crop=smart&auto=webp&s=017ff1d06e2da11c13453c611957eb072bd46990', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/RRt1Or-9CEV7x2rQ39RW7v4zRk5KJpdo9gCqr1q5qGg.jpg?width=216&crop=smart&auto=webp&s=e067b863ab2c86a2ac6afc420949c23c094092c9', 'width': 216}, {'height': 169, 'url': 'https://external-preview.redd.it/RRt1Or-9CEV7x2rQ39RW7v4zRk5KJpdo9gCqr1q5qGg.jpg?width=320&crop=smart&auto=webp&s=a633e736d61b766676da7ebbc04f7d2da2cdad83', 'width': 320}, {'height': 339, 'url': 'https://external-preview.redd.it/RRt1Or-9CEV7x2rQ39RW7v4zRk5KJpdo9gCqr1q5qGg.jpg?width=640&crop=smart&auto=webp&s=637f4390424cffbe8bbc2dcec09ee65763c2169c', 'width': 640}, {'height': 508, 'url': 'https://external-preview.redd.it/RRt1Or-9CEV7x2rQ39RW7v4zRk5KJpdo9gCqr1q5qGg.jpg?width=960&crop=smart&auto=webp&s=d08df46264d69a30c0035cd7fa28680480ab0e15', 'width': 960}, {'height': 572, 'url': 'https://external-preview.redd.it/RRt1Or-9CEV7x2rQ39RW7v4zRk5KJpdo9gCqr1q5qGg.jpg?width=1080&crop=smart&auto=webp&s=aa8d6b96b74e336acd351494e71f210183e23eee', 'width': 1080}], 'source': {'height': 585, 'url': 'https://external-preview.redd.it/RRt1Or-9CEV7x2rQ39RW7v4zRk5KJpdo9gCqr1q5qGg.jpg?auto=webp&s=33ab04c3c4b4f4e59e36ef53b177429653d5b283', 'width': 1104}, 'variants': {}}]} | ||
Recall of GPT-4-Turbo with 128K context size | 2 | Article (in German) at https://the-decoder.de/gpt-4-turbos-bestes-neues-feature-funktioniert-nicht-besonders-toll/ | 2023-11-18T01:57:46 | https://www.reddit.com/r/LocalLLaMA/comments/17xvj4t/recall_of_gpt4turbo_with_128k_context_size/ | Fit_Check_919 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xvj4t | false | null | t3_17xvj4t | /r/LocalLLaMA/comments/17xvj4t/recall_of_gpt4turbo_with_128k_context_size/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'hxOR2_MKwzIoNfNpkdsfEJhxIsJ4ufe2YQnQfCr9cN4', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/IxyDijOozGpSjU6ZjqJnzAxcgExBbtfjgJSuwesT7k8.jpg?width=108&crop=smart&auto=webp&s=39b88018390f1f687ce14803579da5d4bc5c7f32', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/IxyDijOozGpSjU6ZjqJnzAxcgExBbtfjgJSuwesT7k8.jpg?width=216&crop=smart&auto=webp&s=cd95676e5a4d33c9482cee74a74a7d6169fa9977', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/IxyDijOozGpSjU6ZjqJnzAxcgExBbtfjgJSuwesT7k8.jpg?width=320&crop=smart&auto=webp&s=79fc5a0a336eff0a8724e40ccda9db0c3b57851d', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/IxyDijOozGpSjU6ZjqJnzAxcgExBbtfjgJSuwesT7k8.jpg?width=640&crop=smart&auto=webp&s=af02421feb76716085be4e129556e277bca56e6c', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/IxyDijOozGpSjU6ZjqJnzAxcgExBbtfjgJSuwesT7k8.jpg?width=960&crop=smart&auto=webp&s=9275111e78d3e337431d21c5d52253ed89afdbfe', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/IxyDijOozGpSjU6ZjqJnzAxcgExBbtfjgJSuwesT7k8.jpg?width=1080&crop=smart&auto=webp&s=8c4db1973a10a897a2ac00e6f111f2500d5fb9da', 'width': 1080}], 'source': {'height': 686, 'url': 'https://external-preview.redd.it/IxyDijOozGpSjU6ZjqJnzAxcgExBbtfjgJSuwesT7k8.jpg?auto=webp&s=adeda61170009ddda94455d08af0e28b920d4f2e', 'width': 1200}, 'variants': {}}]} |
So Sam Altman's sister says ... | 1 | 2023-11-18T01:49:47 | https://twitter.com/MistressSnowPhD/status/1710049544124166245/photo/1 | herozorro | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 17xvdaq | false | null | t3_17xvdaq | /r/LocalLLaMA/comments/17xvdaq/so_sam_altmans_sister_says/ | false | false | default | 1 | null | |
Is anyone else experiencing an unexpected feeling of anxiety about the Altman news? Idk why, but I’m freaked out | 1 | Just curious if anyone else is dealing with this? | 2023-11-18T01:30:59 | https://www.reddit.com/r/LocalLLaMA/comments/17xuz9y/is_anyone_else_experiencing_an_unexpected_anxiety/ | PoppinJayLee | self.LocalLLaMA | 2023-11-18T03:07:38 | 0 | {} | 17xuz9y | false | null | t3_17xuz9y | /r/LocalLLaMA/comments/17xuz9y/is_anyone_else_experiencing_an_unexpected_anxiety/ | false | false | default | 1 | null |
What is considered the best uncensored LLM right now? | 61 | Looking for any model that can run with 20 GB VRAM. Thanks! | 2023-11-18T00:40:43 | https://www.reddit.com/r/LocalLLaMA/comments/17xtxsr/what_is_considered_the_best_uncensored_llm_right/ | Hyddro26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xtxsr | false | null | t3_17xtxsr | /r/LocalLLaMA/comments/17xtxsr/what_is_considered_the_best_uncensored_llm_right/ | false | false | self | 61 | null |
Greg Brockman quits OpenAI as a response to Sam's ousting | 109 | 2023-11-18T00:34:47 | https://twitter.com/gdb/status/1725667410387378559?s=20 | earslap | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 17xttaq | false | {'oembed': {'author_name': 'Greg Brockman', 'author_url': 'https://twitter.com/gdb', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">After learning today’s news, this is the message I sent to the OpenAI team: <a href="https://t.co/NMnG16yFmm">https://t.co/NMnG16yFmm</a> <a href="https://t.co/8x39P0ejOM">pic.twitter.com/8x39P0ejOM</a></p>— Greg Brockman (@gdb) <a href="https://twitter.com/gdb/status/1725667410387378559?ref_src=twsrc%5Etfw">November 18, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/gdb/status/1725667410387378559', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_17xttaq | /r/LocalLLaMA/comments/17xttaq/greg_brockman_quits_openai_as_a_response_to_sams/ | false | false | 109 | {'enabled': False, 'images': [{'id': '0EmeVa10ifnaJNYnEBPQ3i1TBsXM3oAInHJMAnOB980', 'resolutions': [{'height': 109, 'url': 'https://external-preview.redd.it/e9-mdijt8nIypbz0teX16N3G8-Fivssvx6V7mGGm7gU.jpg?width=108&crop=smart&auto=webp&s=bc880ac19d35887a20ba97746544a6d427378669', 'width': 108}, {'height': 219, 'url': 'https://external-preview.redd.it/e9-mdijt8nIypbz0teX16N3G8-Fivssvx6V7mGGm7gU.jpg?width=216&crop=smart&auto=webp&s=a8a1287431d6637be6b89cc74dcc9d0b041d2b3f', 'width': 216}, {'height': 324, 'url': 'https://external-preview.redd.it/e9-mdijt8nIypbz0teX16N3G8-Fivssvx6V7mGGm7gU.jpg?width=320&crop=smart&auto=webp&s=8824373ddd51af9d152a3251644c21b63f34e9f3', 'width': 320}, {'height': 649, 'url': 'https://external-preview.redd.it/e9-mdijt8nIypbz0teX16N3G8-Fivssvx6V7mGGm7gU.jpg?width=640&crop=smart&auto=webp&s=b379f92505af345584a170a9e8bbe0042926128a', 'width': 640}, {'height': 973, 'url': 'https://external-preview.redd.it/e9-mdijt8nIypbz0teX16N3G8-Fivssvx6V7mGGm7gU.jpg?width=960&crop=smart&auto=webp&s=946130ebdd9e66995d97bd00b11cbbfbbc035513', 'width': 960}, {'height': 1095, 'url': 'https://external-preview.redd.it/e9-mdijt8nIypbz0teX16N3G8-Fivssvx6V7mGGm7gU.jpg?width=1080&crop=smart&auto=webp&s=d8e9d5f2851627b8ba6979729b4dd693501ef287', 'width': 1080}], 'source': {'height': 1196, 'url': 'https://external-preview.redd.it/e9-mdijt8nIypbz0teX16N3G8-Fivssvx6V7mGGm7gU.jpg?auto=webp&s=11d7e79b07bb77fcaba0b4077eb30970fdca68b1', 'width': 1179}, 'variants': {}}]} | ||
I'm building a crypto project that aims to reward nodes for hosting LLM models and serving up inference free to the community. I desperately need developers, anyone interested? | 3 | [removed] | 2023-11-18T00:15:33 | https://www.reddit.com/r/LocalLLaMA/comments/17xte7f/im_building_a_crypto_project_that_aims_to_reward/ | AbortedFajitas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xte7f | false | null | t3_17xte7f | /r/LocalLLaMA/comments/17xte7f/im_building_a_crypto_project_that_aims_to_reward/ | false | false | self | 3 | null |
Is there a way to emphasize one part of the prompt, like in Stable Diffusion? | 3 | In a Stable Diffusion prompt, we put part of the prompt in brackets to give it more emphasis. Is there any equivalent for this in LLM prompting?
Some say emphasis goes from bottom to top. Others say it's the other way around. What do you think is the right way, and why?
How to make CodeLLama work decently? | 1 | [removed] | 2023-11-17T23:51:29 | https://www.reddit.com/r/LocalLLaMA/comments/17xsv6p/how_to_make_codellama_work_decently/ | lontivero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xsv6p | false | null | t3_17xsv6p | /r/LocalLLaMA/comments/17xsv6p/how_to_make_codellama_work_decently/ | false | false | self | 1 | null |
Format dataset for LLaMA 2 model | 1 | [removed] | 2023-11-17T23:39:44 | https://www.reddit.com/r/LocalLLaMA/comments/17xslyy/format_dataset_for_llma2_model/ | Heba30018 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xslyy | false | null | t3_17xslyy | /r/LocalLLaMA/comments/17xslyy/format_dataset_for_llma2_model/ | false | false | self | 1 | null |
Marx 3B V3 and Akins 3B GGUF quantizations. | 7 | If you don't know what those are, refer to these two reddit posts about [Marx 3B V3](https://www.reddit.com/r/LocalLLaMA/comments/17f1gcu/i_released_marx_3b_v3/) and [Akins 3B](https://www.reddit.com/r/LocalLLaMA/comments/17jidpg/today_i_released_akins_3b/), the unquantized model weights are available at HuggingFace. Link to [Marx 3B V3](https://huggingface.co/acrastt/Marx-3B-V3) and [Akins 3B](https://huggingface.co/acrastt/Akins-3B).
As [StableLM support for llama.cpp](https://github.com/ggerganov/llama.cpp/pull/3586) has just recently been merged, u/The-Bloke (thank you so much!) quantized my StableLM models to GGUF, since a lot of people want to try the models in GGUF. You can find the GGUF conversions for [Marx 3B V3](https://huggingface.co/TheBloke/Marx-3B-v3-GGUF) and [Akins 3B](https://huggingface.co/TheBloke/Akins-3B-GGUF). Again, credit to u/The-Bloke for quantizing the models, thank you!
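If you want to try one of the GGUF quants, a minimal sketch with llama-cpp-python is below; the file name, prompt template, and sampling settings are assumptions (check the model card for the actual prompt format), and your build needs the recent StableLM support mentioned above:
```python
# Minimal sketch: run a downloaded GGUF quant with llama-cpp-python.
# The filename and prompt template are assumptions; this also requires a
# llama.cpp build recent enough to include the StableLM support above.
from llama_cpp import Llama

llm = Llama(model_path="marx-3b-v3.Q4_K_M.gguf", n_ctx=2048)

prompt = "### Instruction:\nName three uses for a small 3B model.\n\n### Response:\n"
out = llm(prompt, max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```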
By the way, I don't know what dataset to finetune on right now. If you know a good dataset, let me know and I will look into it. Though I could probably only finetune on datasets below 5k conversations, maybe 10k. | 2023-11-17T23:39:10 | https://www.reddit.com/r/LocalLLaMA/comments/17xsliz/marx_3b_v3_and_akins_3b_gguf_quantizations/ | bot-333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xsliz | false | null | t3_17xsliz | /r/LocalLLaMA/comments/17xsliz/marx_3b_v3_and_akins_3b_gguf_quantizations/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'ojWDPvljEfitKccZa6oiEZUoNb3X0K_vzy_5sXXUOoI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=108&crop=smart&auto=webp&s=8b19ae3b1dcc44e726c67fd09a6461988c3258fb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=216&crop=smart&auto=webp&s=2dcdb6ff59e819682724dab9622a8c0348cf4268', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=320&crop=smart&auto=webp&s=ea213059ff0b8bdc739242f7573a706638e03114', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=640&crop=smart&auto=webp&s=82c47f461aa0ce4bd800d93f179342d138aba796', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=960&crop=smart&auto=webp&s=803de3d7965c3742cb052f8ba6347476c3b6eab2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?width=1080&crop=smart&auto=webp&s=ec91d3a40e040a2c88b2303bbbc96199c85ae040', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kkgpSXybDD5-7EgJLsMC92Vm4emxC0jScDyTWc7JLE8.jpg?auto=webp&s=954177c28c58aabb092da70db1c3f975900160e2', 'width': 1200}, 'variants': {}}]} |
IDE extensions for code completion, chat | 1 | Are there any open source extensions for VS Code and JetBrains IDEs that can use a locally or remotely deployed model to get the same functionality as paid products like Github Copilot or Codeium - mainly code completion and in-context chat? | 2023-11-17T21:54:00 | https://www.reddit.com/r/LocalLLaMA/comments/17xq6k9/ide_extensions_for_code_completion_chat/ | amang0112358 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xq6k9 | false | null | t3_17xq6k9 | /r/LocalLLaMA/comments/17xq6k9/ide_extensions_for_code_completion_chat/ | false | false | self | 1 | null |
Best way to share LLMs as a team? | 7 | I've noticed that nearly every web-based LLM app (chatgpt-web/privategpt/textgen-ui) stores the conversations/chat history in the browser.
What's the best self-hosted setup that allows a group of people to see the same conversation and build on it together?
Sam right now: | 26 | 2023-11-17T21:22:47 | Slimxshadyx | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17xpgoa | false | null | t3_17xpgoa | /r/LocalLLaMA/comments/17xpgoa/sam_right_now/ | false | false | 26 | {'enabled': True, 'images': [{'id': 'Idu-7eOuEQpNP5mgfzuFUBPoLjOHCa7LDsCJo7VILoA', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/3zw1luyi6z0c1.jpeg?width=108&crop=smart&auto=webp&s=b4b57e2fe3aa8a86f3eb3c10711e147f4db04e6c', 'width': 108}, {'height': 207, 'url': 'https://preview.redd.it/3zw1luyi6z0c1.jpeg?width=216&crop=smart&auto=webp&s=5ed058a03bac70e4a003d22049a417c361f0701f', 'width': 216}, {'height': 307, 'url': 'https://preview.redd.it/3zw1luyi6z0c1.jpeg?width=320&crop=smart&auto=webp&s=b17cc7530b90ded6f6d22d32f66b4c93babcdf60', 'width': 320}], 'source': {'height': 599, 'url': 'https://preview.redd.it/3zw1luyi6z0c1.jpeg?auto=webp&s=4ff3dbf7488fa776509c7448d681d8d308ebd1cf', 'width': 623}, 'variants': {}}]} | |||
Where do I find the best current LLM? | 1 | Is there a website I can go to that says, "download this, it's the best-scoring one"?
I don't understand how to read the current leaderboards | 2023-11-17T21:21:54 | https://www.reddit.com/r/LocalLLaMA/comments/17xpfw7/where_do_i_find_the_best_current_llm/ | Bonfire-GTK | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xpfw7 | false | null | t3_17xpfw7 | /r/LocalLLaMA/comments/17xpfw7/where_do_i_find_the_best_current_llm/ | false | false | self | 1 | null |
Sam Altman Has Been Kicked From OpenAI | 1 | 2023-11-17T21:01:58 | https://openai.com/blog/openai-announces-leadership-transition | No_Following_3073 | openai.com | 1970-01-01T00:00:00 | 0 | {} | 17xoz44 | false | null | t3_17xoz44 | /r/LocalLLaMA/comments/17xoz44/sam_altman_has_been_kicked_from_openai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'vCantoygRVkDfRX_0Lr6d6ORGwm6VUN_9a5woU2jcZI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/7ZKH1KPD558VPi2biPQ9CY3JM6cxoXH43jT_WiAocGM.jpg?width=108&crop=smart&auto=webp&s=609b912f04241821267ed23495c2b2e05b3a7d7e', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/7ZKH1KPD558VPi2biPQ9CY3JM6cxoXH43jT_WiAocGM.jpg?width=216&crop=smart&auto=webp&s=3fc33431e36d192e1eb215ab522eaa8cb5e1dce9', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/7ZKH1KPD558VPi2biPQ9CY3JM6cxoXH43jT_WiAocGM.jpg?width=320&crop=smart&auto=webp&s=9d0178e6d85e5b47137f6eb56cb6af19eec5a54c', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/7ZKH1KPD558VPi2biPQ9CY3JM6cxoXH43jT_WiAocGM.jpg?width=640&crop=smart&auto=webp&s=eedd084f1e5ed8ea9e0b3308825ef3a90f950c85', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/7ZKH1KPD558VPi2biPQ9CY3JM6cxoXH43jT_WiAocGM.jpg?width=960&crop=smart&auto=webp&s=101b811db7756c3cd708aac82966fe5f6b0aa1c0', 'width': 960}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/7ZKH1KPD558VPi2biPQ9CY3JM6cxoXH43jT_WiAocGM.jpg?auto=webp&s=2cd3bc88b56597d28b76eebe1da9d053417e00ae', 'width': 1000}, 'variants': {}}]} | ||
OpenAI announces leadership transition: Sama no longer CEO | 64 | [https://openai.com/blog/openai-announces-leadership-transition](https://openai.com/blog/openai-announces-leadership-transition)
​
tl;dr: Sama stepped down; Chief Technology Officer Mira Murati was appointed interim CEO to lead OpenAI
What do we make of it? | 2023-11-17T20:41:58 | https://www.reddit.com/r/LocalLLaMA/comments/17xojaj/openai_announces_leadership_transition_sama_no/ | reallmconnoisseur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xojaj | false | null | t3_17xojaj | /r/LocalLLaMA/comments/17xojaj/openai_announces_leadership_transition_sama_no/ | false | false | self | 64 | {'enabled': False, 'images': [{'id': 'vCantoygRVkDfRX_0Lr6d6ORGwm6VUN_9a5woU2jcZI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/7ZKH1KPD558VPi2biPQ9CY3JM6cxoXH43jT_WiAocGM.jpg?width=108&crop=smart&auto=webp&s=609b912f04241821267ed23495c2b2e05b3a7d7e', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/7ZKH1KPD558VPi2biPQ9CY3JM6cxoXH43jT_WiAocGM.jpg?width=216&crop=smart&auto=webp&s=3fc33431e36d192e1eb215ab522eaa8cb5e1dce9', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/7ZKH1KPD558VPi2biPQ9CY3JM6cxoXH43jT_WiAocGM.jpg?width=320&crop=smart&auto=webp&s=9d0178e6d85e5b47137f6eb56cb6af19eec5a54c', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/7ZKH1KPD558VPi2biPQ9CY3JM6cxoXH43jT_WiAocGM.jpg?width=640&crop=smart&auto=webp&s=eedd084f1e5ed8ea9e0b3308825ef3a90f950c85', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/7ZKH1KPD558VPi2biPQ9CY3JM6cxoXH43jT_WiAocGM.jpg?width=960&crop=smart&auto=webp&s=101b811db7756c3cd708aac82966fe5f6b0aa1c0', 'width': 960}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/7ZKH1KPD558VPi2biPQ9CY3JM6cxoXH43jT_WiAocGM.jpg?auto=webp&s=2cd3bc88b56597d28b76eebe1da9d053417e00ae', 'width': 1000}, 'variants': {}}]} |
Sam Altman out as CEO of OpenAI. Mira Murati is the new CEO. | 416 | 2023-11-17T20:41:20 | https://www.cnbc.com/2023/11/17/sam-altman-leaves-openai-mira-murati-appointed-interim-boss.html | fallingdowndizzyvr | cnbc.com | 1970-01-01T00:00:00 | 0 | {} | 17xoite | false | null | t3_17xoite | /r/LocalLLaMA/comments/17xoite/sam_altman_out_as_ceo_of_openai_mira_murati_is/ | false | false | 416 | {'enabled': False, 'images': [{'id': 'oqYEULHNpcFaNgfn1z9Qf_3OF37eZVkWURxB3FO14Q8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/WtmtnG0TnHpdxqsqninwaBhDsO_jc80KhGQaZMEU7Ws.jpg?width=108&crop=smart&auto=webp&s=075f37c92c3bc544604d7bee951f5d73a9a2ea9a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/WtmtnG0TnHpdxqsqninwaBhDsO_jc80KhGQaZMEU7Ws.jpg?width=216&crop=smart&auto=webp&s=f99471e663d7471239f7a4ea53febdfb3573fa99', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/WtmtnG0TnHpdxqsqninwaBhDsO_jc80KhGQaZMEU7Ws.jpg?width=320&crop=smart&auto=webp&s=54c719695daaca301a18b13052fbea6a5aca531d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/WtmtnG0TnHpdxqsqninwaBhDsO_jc80KhGQaZMEU7Ws.jpg?width=640&crop=smart&auto=webp&s=20e7c56acee4e277715f168ed9c924c1b592ac61', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/WtmtnG0TnHpdxqsqninwaBhDsO_jc80KhGQaZMEU7Ws.jpg?width=960&crop=smart&auto=webp&s=9fc7dbdc9312bbbc67900efca461be72de13547a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/WtmtnG0TnHpdxqsqninwaBhDsO_jc80KhGQaZMEU7Ws.jpg?width=1080&crop=smart&auto=webp&s=f49e2007416349a04d4fa13bbb8d35fa50f10241', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/WtmtnG0TnHpdxqsqninwaBhDsO_jc80KhGQaZMEU7Ws.jpg?auto=webp&s=bcce18dbfd88cee4bab550d27e9525fceb6ad513', 'width': 1920}, 'variants': {}}]} | ||
LoRAX: Open Source Serving for 100s of Fine-Tuned LLMs in Production | 37 | Last month, we announced LoRAX (LoRA eXchange), a framework that makes it possible to serve hundreds of fine-tuned LLMs on one GPU with minimal degradation in throughput and latency. Today, we’re excited to release LoRAX to the open-source community under the permissive and commercial-friendly Apache 2.0 license. (original LoRAX [blog](https://predibase.com/blog/lora-exchange-lorax-serve-100s-of-fine-tuned-llms-for-the-cost-of-one)).
**What is LoRAX?**
LoRAX works by loading in the fine-tuned “adapter” weights dynamically at runtime. Combining this with an optimized caching and scheduling policy that allows us to fuse multiple adapters into a single batch, LoRAX gives you the best of both worlds: low-cost serving with high performance. 💸 🏎️
**Why open source?**
At Predibase, we believe the future is smaller, faster, cheaper fine-tuned models. To get there, we as a community must work together to make serving fine-tuned models cost-competitive with the big commercial APIs.
As the core maintainers of Ludwig ([https://ludwig.ai/](https://ludwig.ai/)) and Horovod ([https://github.com/horovod/horovod](https://github.com/horovod/horovod)), we’re no strangers to building communities around open-source AI. This isn’t a side project for us, it’s the foundation of our mission. 💪
**Why join the LoRAX community?**
🚢 Built for scale. LoRAX isn’t an academic project, it’s production infrastructure. Batteries included with pre-built Docker images, Helm charts for Kubernetes, metrics, and telemetry.
🤝 Research meets production. Bring together the best ideas from research into a single production framework (example: recently integrated SGMV kernel from Punica for significant performance improvements: [https://arxiv.org/abs/2310.18547](https://arxiv.org/abs/2310.18547)).
🕊️ Commercially viable, always. Whether you’re an individual developer or an AI platform like Predibase, you can build on LoRAX thanks to the permissive Apache 2.0 license.
Try LoRAX yourself today, and join the community to contribute and receive updates as we continue to invest in growing LoRAX in the weeks and months ahead.
**Blog**: [https://predibase.com/blog/lorax-the-open-source-framework-for-serving-100s-of-fine-tuned-llms-in](https://predibase.com/blog/lorax-the-open-source-framework-for-serving-100s-of-fine-tuned-llms-in)
**GitHub**: [https://github.com/predibase/lorax](https://github.com/predibase/lorax)
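To give a feel for the per-request adapter loading, here is a rough sketch of a request against a running LoRAX server; the endpoint shape and parameter names follow the project README at the time of writing, and the adapter id is a hypothetical placeholder, so check the current docs before relying on this:
```python
# Rough sketch: one LoRAX deployment serving different fine-tunes by
# passing an adapter id per request. Endpoint shape and parameter names
# follow the project README; the adapter id below is hypothetical.
import requests

resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is the capital of Australia?",
        "parameters": {
            "adapter_id": "some-org/my-lora-finetune",  # hypothetical adapter
            "max_new_tokens": 64,
        },
    },
    timeout=60,
)
print(resp.json()["generated_text"])
```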
https://preview.redd.it/tscb64btqy0c1.png?width=1024&format=png&auto=webp&s=47e0e484bca5f3c957c639596216fd921b4ac266 | 2023-11-17T19:55:32 | https://www.reddit.com/r/LocalLLaMA/comments/17xniii/lorax_open_source_serving_for_100s_of_finetuned/ | Inevitable-Army-4274 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xniii | false | null | t3_17xniii | /r/LocalLLaMA/comments/17xniii/lorax_open_source_serving_for_100s_of_finetuned/ | false | false | 37 | {'enabled': False, 'images': [{'id': '0zX9rqWlxtDTXM2B0vbSPg5rAOljp7Fb5kLndJw_shY', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/RRt1Or-9CEV7x2rQ39RW7v4zRk5KJpdo9gCqr1q5qGg.jpg?width=108&crop=smart&auto=webp&s=017ff1d06e2da11c13453c611957eb072bd46990', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/RRt1Or-9CEV7x2rQ39RW7v4zRk5KJpdo9gCqr1q5qGg.jpg?width=216&crop=smart&auto=webp&s=e067b863ab2c86a2ac6afc420949c23c094092c9', 'width': 216}, {'height': 169, 'url': 'https://external-preview.redd.it/RRt1Or-9CEV7x2rQ39RW7v4zRk5KJpdo9gCqr1q5qGg.jpg?width=320&crop=smart&auto=webp&s=a633e736d61b766676da7ebbc04f7d2da2cdad83', 'width': 320}, {'height': 339, 'url': 'https://external-preview.redd.it/RRt1Or-9CEV7x2rQ39RW7v4zRk5KJpdo9gCqr1q5qGg.jpg?width=640&crop=smart&auto=webp&s=637f4390424cffbe8bbc2dcec09ee65763c2169c', 'width': 640}, {'height': 508, 'url': 'https://external-preview.redd.it/RRt1Or-9CEV7x2rQ39RW7v4zRk5KJpdo9gCqr1q5qGg.jpg?width=960&crop=smart&auto=webp&s=d08df46264d69a30c0035cd7fa28680480ab0e15', 'width': 960}, {'height': 572, 'url': 'https://external-preview.redd.it/RRt1Or-9CEV7x2rQ39RW7v4zRk5KJpdo9gCqr1q5qGg.jpg?width=1080&crop=smart&auto=webp&s=aa8d6b96b74e336acd351494e71f210183e23eee', 'width': 1080}], 'source': {'height': 585, 'url': 'https://external-preview.redd.it/RRt1Or-9CEV7x2rQ39RW7v4zRk5KJpdo9gCqr1q5qGg.jpg?auto=webp&s=33ab04c3c4b4f4e59e36ef53b177429653d5b283', 'width': 1104}, 'variants': {}}]} | |
Is there a local LLM capable of playing tic-tac-toe? | 1 |
PROMPT FOR GPT 3.5:
Let's play tic-tac-toe, I'm X and you're O | 2023-11-17T19:47:54 | https://www.reddit.com/r/LocalLLaMA/comments/17xncoy/is_there_a_local_llm_capable_to_play_tik_tak_toe/ | grigio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xncoy | false | null | t3_17xncoy | /r/LocalLLaMA/comments/17xncoy/is_there_a_local_llm_capable_to_play_tik_tak_toe/ | false | false | self | 1 | null |
How do I train open source LLM with my pdf? | 1 | [removed] | 2023-11-17T19:07:15 | https://www.reddit.com/r/LocalLLaMA/comments/17xmflc/how_do_i_train_open_source_llm_with_my_pdf/ | consig1iere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xmflc | false | null | t3_17xmflc | /r/LocalLLaMA/comments/17xmflc/how_do_i_train_open_source_llm_with_my_pdf/ | false | false | self | 1 | null |
Unable to get a response when fine-tuning a LoRA using llama.cpp locally | 3 | I have set up the Llama 2 7B model locally and was able to get it to work. After the initial setup, I tried to upload a text file and train it using LoRA as described in this post:
https://www.reddit.com/r/LocalLLaMA/s/t8BfMr0qF2
The above post links a rentry guide which offers very good information, and I have followed it. After fine-tuning completed, I tried to run the trained model to get some specific answers based on the trained data (I modified the William Shakespeare name to something else to verify my model is trained properly), but it's not providing any coherent response at all; it's either some garbage text or it's not able to understand my question.
I'm not sure where I'm going wrong. I'm using a CPU with 12 cores and 32 GB RAM on a Windows machine.
Any help in getting the correct response after the training would be appreciated | 2023-11-17T18:54:55 | https://www.reddit.com/r/LocalLLaMA/comments/17xm5jm/unable_to_get_response_fine_tuning_lora_using/ | leaderof13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xm5jm | false | null | t3_17xm5jm | /r/LocalLLaMA/comments/17xm5jm/unable_to_get_response_fine_tuning_lora_using/ | false | false | self | 3 | null |
Picking out the next GPUs to buy, after using P40s for running LLaMA | 5 | I started with running quantized 70B on 6x P40 GPUs, but it's noticeable how slow the performance is.
Sure, maybe I'm not going to buy a few A100s to replace them. But what about an RTX 8000 or two?
These will be going into a 2U Gigabyte GPU server, so consumer cards don't fit
Inference Speed When Running Local Models | 1 | I am running a LLaMa 13B instance (via GPT4all) and am finding inference times to be quite slow, especially for summarization. Does anyone have recommendations for models that can do summarization of 4k+ tokens extremely quickly? | 2023-11-17T18:14:22 | https://www.reddit.com/r/LocalLLaMA/comments/17xl8yv/inference_speed_when_running_local_models/ | Frequent-Let231 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xl8yv | false | null | t3_17xl8yv | /r/LocalLLaMA/comments/17xl8yv/inference_speed_when_running_local_models/ | false | false | self | 1 | null |
He must be very enlightened in using LLM | 1 | 2023-11-17T17:50:05 | IxinDow | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17xkp96 | false | null | t3_17xkp96 | /r/LocalLLaMA/comments/17xkp96/he_must_be_very_enlightened_in_using_llm/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'DnX_5L4pL4pXk5P2DLc27_-zg10abMhHgNq7GSoSHok', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/he45f86a4y0c1.png?width=108&crop=smart&auto=webp&s=3102b0f388af25f73f10fcc33f24d621de69540c', 'width': 108}, {'height': 163, 'url': 'https://preview.redd.it/he45f86a4y0c1.png?width=216&crop=smart&auto=webp&s=d7e00d30c2f24a2167148e40310443fbb2097981', 'width': 216}, {'height': 242, 'url': 'https://preview.redd.it/he45f86a4y0c1.png?width=320&crop=smart&auto=webp&s=1c54319df95b0f6da564bc03884ee5493aadedfc', 'width': 320}, {'height': 485, 'url': 'https://preview.redd.it/he45f86a4y0c1.png?width=640&crop=smart&auto=webp&s=4700860072d7efa88f049b2753c62b01ba2e161a', 'width': 640}, {'height': 727, 'url': 'https://preview.redd.it/he45f86a4y0c1.png?width=960&crop=smart&auto=webp&s=fbdb83935f5e2fc20b068a18c26c40665f33fcb5', 'width': 960}, {'height': 818, 'url': 'https://preview.redd.it/he45f86a4y0c1.png?width=1080&crop=smart&auto=webp&s=96f85af4bc0d7d3f17ef1c4a8d08615ded3984eb', 'width': 1080}], 'source': {'height': 822, 'url': 'https://preview.redd.it/he45f86a4y0c1.png?auto=webp&s=816e328796118541ad78feb9a8f1b030a7ea6dad', 'width': 1084}, 'variants': {}}]} | |||
What is the best config to run Mistral 7b on an Android phone? | 3 | It's pretty slow on a Pixel 6 (8GB RAM):
```
./main -m models/mistral-7b-v0.1.Q2_K.gguf --prompt "what's the capital of australia? reply in a short manner as a helpful assistant that knows a lot about the world." -ins
```
Running latest llama.cpp on master as of Nov 15th | 2023-11-17T17:37:34 | https://www.reddit.com/r/LocalLLaMA/comments/17xkfif/what_is_the_best_config_to_run_mistral_7b_on_an/ | gianpaj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xkfif | false | null | t3_17xkfif | /r/LocalLLaMA/comments/17xkfif/what_is_the_best_config_to_run_mistral_7b_on_an/ | false | false | self | 3 | null |
Just spreading awareness towards this very useful Model. | 30 | Hi. Before anyone asks: I am not behind the model in any capacity, nor did the people behind it ask me to post this.
I am just a normal LLM enjoyer who wants better 13B models in the near future, because at the moment they're being thoroughly outclassed by many Mistral 7B finetunes, and we don't have any Mistral 13B base model...
The Model in question is this one right here, which seems to be flying under the radar for some reason:
[https://huggingface.co/sequelbox/DaringFortitude](https://huggingface.co/sequelbox/DaringFortitude)
TheBloke already did his magic on it, just search his profile on Hugging Face with Ctrl+F.
The reason I am doing this is that I honestly think this is a really, really good and useful base model for further finetuning/merging etc. (I did a little testing, but my machine is too weak to test any further)... | 2023-11-17T17:35:07 | https://www.reddit.com/r/LocalLLaMA/comments/17xkdib/just_spreading_awareness_towards_this_very_useful/ | Haiart | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xkdib | false | null | t3_17xkdib | /r/LocalLLaMA/comments/17xkdib/just_spreading_awareness_towards_this_very_useful/ | false | false | self | 30 | {'enabled': False, 'images': [{'id': 'Uqz2ht8RpjG0R-50XXlP8-I8j1WNCa-y72UF_qm2Q-c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HMHx7UFVgwA0HfENeXjHDw_baIw-LlsD1wygBfJBMRg.jpg?width=108&crop=smart&auto=webp&s=bae291fcee003b381a8475f432ece4157bbe946e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HMHx7UFVgwA0HfENeXjHDw_baIw-LlsD1wygBfJBMRg.jpg?width=216&crop=smart&auto=webp&s=7ac619c879dfbc22e89f433bc610764e76d5505b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HMHx7UFVgwA0HfENeXjHDw_baIw-LlsD1wygBfJBMRg.jpg?width=320&crop=smart&auto=webp&s=9ae101764d1f5d80377487601ed09e8c36fa9046', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HMHx7UFVgwA0HfENeXjHDw_baIw-LlsD1wygBfJBMRg.jpg?width=640&crop=smart&auto=webp&s=7df7d50895fa902bf3efb295fa4ea6db79391123', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HMHx7UFVgwA0HfENeXjHDw_baIw-LlsD1wygBfJBMRg.jpg?width=960&crop=smart&auto=webp&s=613eed0c655a5ae563c4ddd60fd02d094ef126ca', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HMHx7UFVgwA0HfENeXjHDw_baIw-LlsD1wygBfJBMRg.jpg?width=1080&crop=smart&auto=webp&s=8dd2b4159de0747eee43cdebd4f3bf052d15d4f8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HMHx7UFVgwA0HfENeXjHDw_baIw-LlsD1wygBfJBMRg.jpg?auto=webp&s=56f54051eb3f45210a939a6e46c23473f9da6a71', 'width': 1200}, 'variants': {}}]}
What should I do to run a model mainly on RAM? | 1 | [removed] | 2023-11-17T16:56:14 | https://www.reddit.com/r/LocalLLaMA/comments/17xjhju/what_should_i_do_to_run_a_model_mainly_on_ram/ | HistoricalArm5864 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xjhju | false | null | t3_17xjhju | /r/LocalLLaMA/comments/17xjhju/what_should_i_do_to_run_a_model_mainly_on_ram/ | false | false | self | 1 | null |
Dual 3090 24GB & 1070, worth it? | 8 | I upgraded my system a year ago. Among the upgrades was a GPU change from a 1070 to a 3090. My old card is now rusting in the cellar. I've heard people around here mention that they run their desktop on the old card to completely free up VRAM on the workhorse card.
Is this worth doing in the context of loading LLMs and playing around with them (no training yet)? The downside is a worse thermal situation if I cram them together.
Training LLMs to follow procedure for Math gives an accuracy of 98.5% | 94 | GitHub link: https://github.com/desik1998/MathWithLLMs
Although LLMs are able to do a lot of tasks such as coding, science, etc., they often fail at math tasks without a calculator (including the state-of-the-art models).
Our intuition behind why models cannot do math is that the instructions on the internet are something like a x b = c and do not follow the procedure which we humans follow when doing math. For example, when any human is asked how to do 123 x 45, we follow the digit-wise multiplication technique using carry, get results for each digit multiplication and then add the corresponding resulting numbers. But on the internet, we don't show the procedure to do math and instead just write the correct value. So when LLMs are given a x b = c, they have to reverse-engineer the algorithm for multiplication.
Most of the existing literature gives instructions to the LLM instead of showing the procedure, and we think this might not be the best approach to teach an LLM.
### What does this project do?
This project aims to prove that LLMs can learn Math when trained on a step-by-step procedural way similar to how humans do it. It also breaks the notion that LLMs cannot do Math without using calculators. For now to illustrate this, this project showcases how LLMs can learn multiplication. The rationale behind taking multiplication is that GPT-4 cannot do multiplication for >3 digit numbers. We prove that LLMs can do Math when taught using a step-by-step procedure. For example, instead of teaching LLMs multiplication like 23 * 34 = 782, we teach it multiplication similar to how we do digit-wise multiplication, get values for each digit multiplication and further add the resulting numbers to get the final result.
**Instruction Tuning:**
We further fine-tuned OpenAI's GPT-3.5 to teach it math.
Close to 1300 multiplication instructions were created for training and 200 for validation. The test cases were generated keeping in mind OpenAI GPT-3.5's 4096-token limit. A 5 x 5 digit multiplication can in general fit within the 4096-token limit, but 6 x 6 cannot; if one number is 6 digits the other can be <= 4 digits, and similarly if one number is 7 digits the other can be <= 3 digits.
Also, instead of using * for multiplication and + for addition, the different operators <<*>> and <<<+>>> are used. The rationale behind this is that reusing the existing * and + might tap into the existing weights of the neural network, which don't follow the step-by-step procedure and instead give the result of the multiplication in one single step.
[Sample Instruction](https://pastebin.com/VZNUHQVQ)
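To make the procedural idea concrete, here is a hypothetical sketch of generating one digit-wise trace; the real training format is in the linked sample instruction, so treat this exact layout as an illustration only:
```python
# Hypothetical sketch: emit a step-by-step multiplication trace of the
# kind described above. The actual instruction format is in the linked
# sample; this exact layout is an illustrative assumption.
def multiplication_trace(a: int, b: int) -> str:
    steps, partials = [], []
    for place, digit_char in enumerate(reversed(str(b))):
        digit = int(digit_char)
        partial = a * digit * (10 ** place)  # digit-wise partial product
        steps.append(f"{a} <<*>> {digit} x 10^{place} = {partial}")
        partials.append(partial)
    steps.append(" <<<+>>> ".join(map(str, partials)) + f" = {sum(partials)}")
    return "\n".join(steps)

print(multiplication_trace(123, 45))
# 123 <<*>> 5 x 10^0 = 615
# 123 <<*>> 4 x 10^1 = 4920
# 615 <<<+>>> 4920 = 5535
```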

### Results
The benchmarking was done on 200 test cases, each with two randomly generated numbers. Of the 200 samples tested, the multiplication was correct in all but 3 cases, which means the overall accuracy is **98.5%**. (We're also looking for feedback from the community about how to test this better.)
### Future Improvements
* Reach out to the AI and open-source community to make this proposal better or identify any flaws.
* Do the same process of finetuning using open-source LLMs.
* Figure out the smallest LLM that can do math accurately when trained in a procedural manner (a 10-year-old kid can do math). Check this for both normal models and distilled models.
Requesting for Feedback from AI Community! | 2023-11-17T16:45:17 | https://www.reddit.com/r/LocalLLaMA/comments/17xj8wl/training_llms_to_follow_procedure_for_math_gives/ | Desik_1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xj8wl | false | null | t3_17xj8wl | /r/LocalLLaMA/comments/17xj8wl/training_llms_to_follow_procedure_for_math_gives/ | false | false | self | 94 | {'enabled': False, 'images': [{'id': 'Ub064J5FFo82nsMku-imUAOCL7O2CY9UKoBzpKTmZjA', 'resolutions': [{'height': 87, 'url': 'https://external-preview.redd.it/OXavcGeAk3G3SAXzAGCNnp_nKj8-ZVU_o6inl1iHIok.png?width=108&crop=smart&auto=webp&s=3131bdaef0725c3608d2352206670f04c40b41a5', 'width': 108}, {'height': 175, 'url': 'https://external-preview.redd.it/OXavcGeAk3G3SAXzAGCNnp_nKj8-ZVU_o6inl1iHIok.png?width=216&crop=smart&auto=webp&s=1c27f997220ed087cfed820d91474b63c9ccc2b9', 'width': 216}, {'height': 260, 'url': 'https://external-preview.redd.it/OXavcGeAk3G3SAXzAGCNnp_nKj8-ZVU_o6inl1iHIok.png?width=320&crop=smart&auto=webp&s=1063549208e95ab392d91e3c087114c62f4ded4d', 'width': 320}, {'height': 520, 'url': 'https://external-preview.redd.it/OXavcGeAk3G3SAXzAGCNnp_nKj8-ZVU_o6inl1iHIok.png?width=640&crop=smart&auto=webp&s=a9a75dc3a6accac5930c500cec32699333a50f37', 'width': 640}, {'height': 781, 'url': 'https://external-preview.redd.it/OXavcGeAk3G3SAXzAGCNnp_nKj8-ZVU_o6inl1iHIok.png?width=960&crop=smart&auto=webp&s=34b167aed61f0ab7f5bbd852f401e7dddb9da3f2', 'width': 960}, {'height': 878, 'url': 'https://external-preview.redd.it/OXavcGeAk3G3SAXzAGCNnp_nKj8-ZVU_o6inl1iHIok.png?width=1080&crop=smart&auto=webp&s=18bb70f6cd17b44097d09e479b45faaa57bd9f54', 'width': 1080}], 'source': {'height': 1590, 'url': 'https://external-preview.redd.it/OXavcGeAk3G3SAXzAGCNnp_nKj8-ZVU_o6inl1iHIok.png?auto=webp&s=ffa02ee25ca1da520847df845743bc0bc19bc911', 'width': 1954}, 'variants': {}}]} |
How are people here observing their experiments and production models? | 12 | I'm currently working on some RAG-based tooling for some non-profits and am having difficulty doing the following. Wondering what people are using?
1. Tracking model performance across experiments and productized pipelines
    1. Changes in test or finetuning data sets
    2. Changes in chunking strategy
    3. Changes in RAG tooling (e.g. RAG Fusion or RAG-DIT)
    4. Changes in underlying models and/or finetuning strategies
2. Tracking pipeline performance (e.g. speed, throughput, latency, etc.) as we change the items laid out above
What products do you use and how do you choose them? | 2023-11-17T16:25:34 | https://www.reddit.com/r/LocalLLaMA/comments/17xitbq/how_are_people_here_observing_their_experiments/ | thedabking123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xitbq | false | null | t3_17xitbq | /r/LocalLLaMA/comments/17xitbq/how_are_people_here_observing_their_experiments/ | false | false | self | 12 | null |
Some of the possibilities with using LLM that I think most don’t think about or overlook | 1 | [removed] | 2023-11-17T16:11:24 | https://www.reddit.com/r/LocalLLaMA/comments/17xii58/some_of_the_possibilities_with_using_llm_that_i/ | crua9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xii58 | false | null | t3_17xii58 | /r/LocalLLaMA/comments/17xii58/some_of_the_possibilities_with_using_llm_that_i/ | false | false | self | 1 | null |
Some of the possibilities with using LLM that I think most don’t think about or overlook | 1 | [removed] | 2023-11-17T16:11:24 | https://www.reddit.com/r/LocalLLaMA/comments/17xii54/some_of_the_possibilities_with_using_llm_that_i/ | crua9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xii54 | false | null | t3_17xii54 | /r/LocalLLaMA/comments/17xii54/some_of_the_possibilities_with_using_llm_that_i/ | false | false | self | 1 | null |
Introducing Tess: Tess-M with 200K Context Length | 52 | I'm blown away. See for yourself.
[https://migel.substack.com/p/a-conversation-with-tess](https://migel.substack.com/p/a-conversation-with-tess)
Tess, welcome to the world!
Model is Open Source with 200K context length.
Available at: [https://huggingface.co/migtissera/Tess-M-v1.0](https://huggingface.co/migtissera/Tess-M-v1.0)
​
https://preview.redd.it/qehogf32mx0c1.jpg?width=800&format=pjpg&auto=webp&s=333356e7c95e1a1d8fc32dca1d7829720ad2cc84 | 2023-11-17T16:07:31 | https://www.reddit.com/r/LocalLLaMA/comments/17xif4e/introducing_tess_tessm_with_200k_context_length/ | migtissera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xif4e | false | null | t3_17xif4e | /r/LocalLLaMA/comments/17xif4e/introducing_tess_tessm_with_200k_context_length/ | false | false | 52 | {'enabled': False, 'images': [{'id': 'AwaPxyZ9t0WiaGd2WA3Pl7lMaamnG0mIqljXqg0Sjx4', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/H049yica5qeu70fs1NllaiYbmFLruBDWvq6aJUeGyc4.jpg?width=108&crop=smart&auto=webp&s=0f7392f5b20ab1a1955b5ebb59ee1b4524cf028e', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/H049yica5qeu70fs1NllaiYbmFLruBDWvq6aJUeGyc4.jpg?width=216&crop=smart&auto=webp&s=8a591fcacf1e584c7cf66cd70f4960d1c4645e74', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/H049yica5qeu70fs1NllaiYbmFLruBDWvq6aJUeGyc4.jpg?width=320&crop=smart&auto=webp&s=a284c05a9e69f0e8d9bac9b88a3d212cf3b216be', 'width': 320}, {'height': 375, 'url': 'https://external-preview.redd.it/H049yica5qeu70fs1NllaiYbmFLruBDWvq6aJUeGyc4.jpg?width=640&crop=smart&auto=webp&s=6c5850d110d94e8e1b5c65f2ab0727f7221ac632', 'width': 640}, {'height': 562, 'url': 'https://external-preview.redd.it/H049yica5qeu70fs1NllaiYbmFLruBDWvq6aJUeGyc4.jpg?width=960&crop=smart&auto=webp&s=32302d5bf0af7d969b0b47dfc3affebce5025221', 'width': 960}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/H049yica5qeu70fs1NllaiYbmFLruBDWvq6aJUeGyc4.jpg?auto=webp&s=a3c32081fb205116acbe387db35a6b5c09aa1870', 'width': 1024}, 'variants': {}}]} | |
Kernel dies when loading TheBloke/vicuna-33B-GPTQ locally on NVIDIA GeForce RTX 4090 24GB, RAM 64GB | 1 | [removed] | 2023-11-17T15:57:28 | https://www.reddit.com/r/LocalLLaMA/comments/17xi6rl/kernal_dies_when_loading_thebloke_vicuna33bgptq/ | Jg0at | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xi6rl | false | null | t3_17xi6rl | /r/LocalLLaMA/comments/17xi6rl/kernal_dies_when_loading_thebloke_vicuna33bgptq/ | false | false | self | 1 | null |
AI Portal Gun: A Comprehensive Open Source Guide to Free AI Resources for Mastering Artificial Intelligence – Books, Courses, Articles, Research Papers, Codes, Projects and More. | 1 | 2023-11-17T15:45:38 | https://www.portalgunai.org/ | Jiraiya27s | portalgunai.org | 1970-01-01T00:00:00 | 0 | {} | 17xhxol | false | null | t3_17xhxol | /r/LocalLLaMA/comments/17xhxol/ai_portal_gun_a_comprehensive_open_source_guide/ | false | false | default | 1 | null | |
Large-scale LLM deployment with GBNF support | 1 | Hey guys, as the title suggests I'd like some advice on the best way to serve LLMs with support for GBNF or similar, to ensure that I receive deterministic output. I have been using text-generation-webui locally, and from there I can add my grammar; however, I would like to be able to do this across a cluster that can infer with high throughput. Any suggestions on how best to accomplish this?
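For a single node at least, llama-cpp-python exposes grammar-constrained decoding directly; a minimal sketch is below (the model path and the toy yes/no grammar are placeholders, and this shows the API shape rather than a high-throughput cluster setup):
```python
# Minimal single-node sketch: constrain output with a GBNF grammar via
# llama-cpp-python. Model path and the toy grammar are placeholders.
from llama_cpp import Llama, LlamaGrammar

grammar = LlamaGrammar.from_string(r'''
root ::= "yes" | "no"
''')

llm = Llama(model_path="model.gguf", n_ctx=2048)
out = llm("Is the sky blue? Answer yes or no: ", grammar=grammar, max_tokens=4)
print(out["choices"][0]["text"])
```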
​
| 2023-11-17T15:24:37 | https://www.reddit.com/r/LocalLLaMA/comments/17xhh9b/largescale_llm_deployment_with_gbnf_support/ | MonkeyMaster64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xhh9b | false | null | t3_17xhh9b | /r/LocalLLaMA/comments/17xhh9b/largescale_llm_deployment_with_gbnf_support/ | false | false | self | 1 | null |
overclocked or not overclocked? | 3 | Hi guys, I'm building a PC/server mostly for running LLMs. As the GPU, I've decided on:
NVIDIA RTX 4060 Ti 16GB
Now I wonder if I should buy the OC (overclocked) or non-OC version.
What I've heard: overclocked cards run faster but consume more power and degrade faster.
I would therefore think the non-OC version is preferable for longevity and electricity cost.
Or am I overlooking something? Could an OC version run certain models that a non-OC version couldn't?
Thanks for your help :) | 2023-11-17T15:02:19 | https://www.reddit.com/r/LocalLLaMA/comments/17xgzw7/overclocked_or_not_overclocked/ | thefunnyape | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xgzw7 | false | null | t3_17xgzw7 | /r/LocalLLaMA/comments/17xgzw7/overclocked_or_not_overclocked/ | false | false | self | 3 | null
Help with spec for a PC to train LLMs | 1 | I have a budget of about £3000, though it extends to £4500 for the possibility of having 2 GPUs. Currently looking at using the Mistral 7B model. I might get away with simply deploying the model and giving it context in the prompt for my project. However, it may require training to learn domain-specific context, which I'd do through something like QLoRA.
I'll probably be going with 1x 3090 to begin with, potentially get another if training is required. In particular I'm struggling with finding a case that can sufficiently house and cool 2x 3 slot graphics cards. A lot of the options with vertical mounts press the GPU cooler right up against the side panel which worries me.
Here's what I'm thinking so far:
* Mobo: **ROG STRIX X670E-E GAMING WIFI**
* Supports PCI-E Gen 5.0 x16 on 2 slots both on CPU (I'll only need PCI-E Gen 4.0 8x for each). Struggled finding mobos that could do this.
* CPU: **AMD 7900X**
* Good price to performance. Decent number of cores.
* GPU: **MSI GeForce RTX 3090 VENTUS 3X 24G OC GeForce RTX 3090 24 GB**
* Satisfies VRAM requirements
* RAM: **Corsair Vengeance 2x32GB 5200Mhz**
* Sounds like RAM is never a bottleneck so saving money with slower memory. Can upgrade to 128GB later. Unsure why a machine would need a lot of RAM when everything needs to fit on 24GB VRAM anyway, any advice welcome on this matter.
* SSD: **Western Digital Blue SN580**
* PSU: **Corsair HX1500i (2023)**
* 1500W plenty for 2 cards. 2023 model allows for better future upgradeability
* Cases: **ECLIPSE P600S** or **Fractal Meshify 2XL**
* Both have the option for a vertical mount with 3 rear PCI-E slots, so they look like the best bet for fitting 2 cards in and getting good airflow. Still worried about the spacing though.
* The case also needs to be relatively easily movable... Just moving around the office. So open air test benches feel like they'd be too fragile.
Also have to factor in £100-200+ for case fans + peripherals, and PCI-E riser cables if needed.
Additionally, not sure if the NVlink bridge for £100 would help. I was wondering if it would alleviate the need for 2x pci-e slots with Gen 4.0 16x or 8x, as most motherboards seemed to only have 4x on the second slot, or even 4x on the chipset for the second slot. | 2023-11-17T14:48:30 | https://www.reddit.com/r/LocalLLaMA/comments/17xgp1v/help_with_spec_for_a_pc_to_train_llms/ | gazm2k5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xgp1v | false | null | t3_17xgp1v | /r/LocalLLaMA/comments/17xgp1v/help_with_spec_for_a_pc_to_train_llms/ | false | false | self | 1 | null |
How have local LLMs helped your business? | 5 | I'm curious how using local LLMs has helped your industry and why you'd opt for a local LLM over something like an API from ChatGPT. I've been working quite a bit with Stable Diffusion and am interested in branching out into other AI like LLMs. I do use ChatGPT for various small tasks but I'm kind of trying to wrap my mind around the scope of how LLMs can help business, particularly locally run LLMs. Also, is the draw of local LLMs mainly privacy, training, etc? | 2023-11-17T14:09:14 | https://www.reddit.com/r/LocalLLaMA/comments/17xfv0v/how_have_local_llms_helped_your_business/ | Brad12d3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xfv0v | false | null | t3_17xfv0v | /r/LocalLLaMA/comments/17xfv0v/how_have_local_llms_helped_your_business/ | false | false | self | 5 | null |
Can I run a model mainly on RAM? | 1 | [removed] | 2023-11-17T13:56:36 | https://www.reddit.com/r/LocalLLaMA/comments/17xflrv/can_i_run_a_model_mainly_on_ram/ | HistoricalArm5864 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xflrv | false | null | t3_17xflrv | /r/LocalLLaMA/comments/17xflrv/can_i_run_a_model_mainly_on_ram/ | false | false | self | 1 | null |
transformers library PR: GrammarConstrainedLogitsProcessor, compatible with llama.cpp GBNF | 9 | 2023-11-17T13:36:50 | https://github.com/huggingface/transformers/pull/27557 | oobabooga4 | github.com | 1970-01-01T00:00:00 | 0 | {} | 17xf8a4 | false | null | t3_17xf8a4 | /r/LocalLLaMA/comments/17xf8a4/transformers_library_pr/ | false | false | default | 9 | null | |
Lower quality responses with GPTQ model vs GGUF? | 8 | I recently found out about Chronos-Hermes 13B and have been trying to play around with it.
I've tried three formats of the model: GPTQ, GGML, and GGUF. It's my understanding that GGML is older and more CPU-based, so I don't use it much. Whenever I use the GGUF (Q5 version) with KoboldCpp as a backend, I get incredible responses, but the speed is extremely slow. I even offload 32 layers to my GPU, and confirmed that it's not overusing VRAM, and it's still slow. The GPTQ model on the other hand is way faster, but the quality of responses is worse.
My question is, are there any tricks to loading GPTQ models I might not be aware of? | 2023-11-17T13:16:03 | https://www.reddit.com/r/LocalLLaMA/comments/17xetyp/lower_quality_responses_with_gptq_model_vs_gguf/ | Civil_Ranger4687 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xetyp | false | null | t3_17xetyp | /r/LocalLLaMA/comments/17xetyp/lower_quality_responses_with_gptq_model_vs_gguf/ | false | false | self | 8 | null |
Interview with CEO of Mistral (Arthur Mensch) | 63 | Link: [https://www.youtube.com/watch?v=EMOFRDOMIiU](https://www.youtube.com/watch?v=EMOFRDOMIiU)
Really solid interview, made me feel hopeful about the future of open source models at Mistral. He also elaborates on how they think about model safety. In summary: the base model should not have training targeted at safety, as it degrades performance, but tools will be provided to make safety filtering on top easier, so users can opt into it. He also hinted at some sort of platform they are building (to actually make money, I guess).
I hope they can keep their current approach. It might be hard as they are VC funded, and as the company grows. | 2023-11-17T13:02:33 | https://www.reddit.com/r/LocalLLaMA/comments/17xel37/interview_with_ceo_of_mistral_arthur_mensch/ | DreamGenX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xel37 | false | null | t3_17xel37 | /r/LocalLLaMA/comments/17xel37/interview_with_ceo_of_mistral_arthur_mensch/ | false | false | self | 63 | {'enabled': False, 'images': [{'id': 'sfAlS4EwpTqrYVmmtQjWxvtL-2MAugR_i-rN0NNM2-U', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/W3JnU2JLM-yc7z9dh8hObN1aF8lo4H-7hMDeLsBGfyU.jpg?width=108&crop=smart&auto=webp&s=973f76ce9ee0ac4a2537eda56a0bd6452b544bee', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/W3JnU2JLM-yc7z9dh8hObN1aF8lo4H-7hMDeLsBGfyU.jpg?width=216&crop=smart&auto=webp&s=50eb4d063a54abc8a1908cc2c9483c4bf31db557', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/W3JnU2JLM-yc7z9dh8hObN1aF8lo4H-7hMDeLsBGfyU.jpg?width=320&crop=smart&auto=webp&s=3fcecae0689399fe583f20cc90f9377a14a39404', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/W3JnU2JLM-yc7z9dh8hObN1aF8lo4H-7hMDeLsBGfyU.jpg?auto=webp&s=4b299af1f6bc90fc4b722a8acb9a3a0a278f03ad', 'width': 480}, 'variants': {}}]} |
What is the best current uncensored Storytelling LLM that can run with 32gb system ram and 8 gb Vram PC? | 14 | Ok, I know this may be asked here a lot, but the last time I checked this sub was around the time that llama.cpp first came out, and I assume a lot has changed/improved since; I hear models like Mistral have even changed the landscape. What is currently the best roleplay and storytelling LLM that can run on my PC with 32 GB RAM and an 8 GB VRAM card (or on both, since I also heard about the layered hybrid CPU/GPU approach)? Generally, what would you recommend for these specs?
Thanks in advance to this amazing community for improving the open-source LLM ecosystem. | 2023-11-17T12:15:18 | https://www.reddit.com/r/LocalLLaMA/comments/17xdrfd/what_is_the_best_current_uncensored_storytelling/ | Acrobatic_Internal_2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xdrfd | false | null | t3_17xdrfd | /r/LocalLLaMA/comments/17xdrfd/what_is_the_best_current_uncensored_storytelling/ | false | false | self | 14 | null
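On the layered hybrid approach the post mentions: a minimal sketch with llama-cpp-python, where n_gpu_layers controls how much of the model sits in VRAM versus system RAM (the file name and layer count are assumptions to tune for an 8 GB card):

    from llama_cpp import Llama

    llm = Llama(
        model_path="./mythomax-l2-13b.Q4_K_M.gguf",  # any 13B Q4 GGUF file
        n_gpu_layers=35,  # these layers live in VRAM; the rest stay in system RAM
        n_ctx=4096,
    )
    print(llm("Once upon a time,", max_tokens=64)["choices"][0]["text"])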
Kyutai: Paris' Open Source AI Non-Profit lands many FAIR&DM alums, a supercomputer, and 300M Euros. | 1 | I just watched the Scaleway aiPulse keynotes ([https://event.ai-pulse.eu/en/](https://event.ai-pulse.eu/en/))
Announced at the event:
Europe got a €300M non-profit open source AI lab staffed with Meta (FAIR) and DeepMind alums. Every single speaker from the lab harped on the importance of open source for open reproducible science, economic growth, European independence, innovation, safety, etc.
It's called Kyutai, reportedly a Japanese word meaning sphere (and conveniently ends in 'ai' when latinized).
It seems there will be a significant focus on multimodal models.
One guy specified, "reproducible research, open models, open weights, under a permissive open source license."
If they have a website up yet, it hasn't been indexed by Bing.
Apologies for not taking better notes. | 2023-11-17T11:19:29 | https://www.reddit.com/r/LocalLLaMA/comments/17xcvr2/kyutai_paris_open_source_ai_nonprofit_lands_many/ | georgejrjrjr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xcvr2 | false | null | t3_17xcvr2 | /r/LocalLLaMA/comments/17xcvr2/kyutai_paris_open_source_ai_nonprofit_lands_many/ | false | false | self | 1 | null |
3090 bonanza | 1 | In case anyone is looking to pick up a 3090, there are several "renewed" ones on sale for around $1000 on Amazon:
[https://www.amazon.com/ASUS-GeForce-Graphics-DisplayPort-Bearings/dp/B092T1CLK2/](https://www.amazon.com/ASUS-GeForce-Graphics-DisplayPort-Bearings/dp/B092T1CLK2/ref=sr_1_3?keywords=nvidia+3090&qid=1700218428&sr=8-3)
[https://www.amazon.com/PNY-GeForce-Gaming-Graphics-Renewed/dp/B092XB1JGD/](https://www.amazon.com/PNY-GeForce-Gaming-Graphics-Renewed/dp/B092XB1JGD/ref=sr_1_4?keywords=nvidia+3090&qid=1700218428&sr=8-4)
and so on.
Renewed != new, but still preferable over used cards by some random seller on ebay. | 2023-11-17T11:11:28 | https://www.reddit.com/r/LocalLLaMA/comments/17xcqv0/3090_bonanza/ | neph1010 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xcqv0 | false | null | t3_17xcqv0 | /r/LocalLLaMA/comments/17xcqv0/3090_bonanza/ | false | false | self | 1 | null |
PSA: If you have a Tesla P40, which has abysmal FP16 performance, DO NOT update oobabooga past commit 564d0cde8289a9c9602b4d6a2e970659492ad135 | 35 | I updated to the latest commit because ooba said it uses the latest llama.cpp, which improved performance. What I suspect happened is that it uses more FP16 now, because the tokens/s on my Tesla P40 got halved along with the power consumption and memory controller load.
You can fix this by doing:
git reset --hard 564d0cde8289a9c9602b4d6a2e970659492ad135
to go back to the last verified commit that didn't kill performance on the Tesla P40. Not sure how to fix this for future updates so maybe [u/Oobabooga](https://www.reddit.com/u/Oobabooga/) can chime in. | 2023-11-17T11:05:31 | https://www.reddit.com/r/LocalLLaMA/comments/17xcn5d/psa_if_you_have_telsa_p40_which_has_abysmal_fp16/ | nero10578 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xcn5d | false | null | t3_17xcn5d | /r/LocalLLaMA/comments/17xcn5d/psa_if_you_have_telsa_p40_which_has_abysmal_fp16/ | false | false | self | 35 | null |
Is there a 7B model capable of extracting keywords from a text and returning them as an array? | 1 | Prompt like:
Extract the company names from the texts below and return as an array
<text>
--
["Google", "Meta", "Microsoft"] | 2023-11-17T10:45:53 | https://www.reddit.com/r/LocalLLaMA/comments/17xcdbw/is_there_a_7b_model_capable_to_extract_keywords/ | grigio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xcdbw | false | null | t3_17xcdbw | /r/LocalLLaMA/comments/17xcdbw/is_there_a_7b_model_capable_to_extract_keywords/ | false | false | self | 1 | null |
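A minimal sketch of one way to do this with a local 7B model via llama-cpp-python; the model file is a placeholder, and json.loads() doubles as the output validator:

    import json
    from llama_cpp import Llama

    llm = Llama(model_path="./openhermes-2.5-mistral-7b.Q4_K_M.gguf", n_ctx=2048)
    prompt = (
        "Extract the company names from the text below and return ONLY a JSON array.\n"
        'Text: "Google and Meta both responded to Microsoft\'s announcement."\n'
        "Answer: "
    )
    raw = llm(prompt, max_tokens=64, temperature=0.0, stop=["\n"])["choices"][0]["text"]
    companies = json.loads(raw)  # raises if the model drifted from the JSON format
    print(companies)             # e.g. ["Google", "Meta", "Microsoft"]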
Multiple qlora adapters? | 2 | I want to train Llama using QLoRA on multiple tasks in a sequential manner, i.e. task A -> B -> C. Would it be possible to combine, say, adapter weights trained on A with B, and subsequently A with C? How can I go about doing this?
Has anyone tried doing so and achieved reasonable results? I am aiming for task A to be continual pretraining for domain adaptation, while B and C are the downstream tasks. | 2023-11-17T10:38:02 | https://www.reddit.com/r/LocalLLaMA/comments/17xc9ks/multiple_qlora_adapters/ | nohodlnodough | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xc9ks | false | null | t3_17xc9ks | /r/LocalLLaMA/comments/17xc9ks/multiple_qlora_adapters/ | false | false | self | 2 | null
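One possible route is PEFT's adapter composition; a minimal sketch, assuming two already-trained QLoRA adapters (paths and adapter names are placeholders). For the sequential A -> B continual-training case, you could instead load adapter A and keep training it on task B's data:

    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", device_map="auto")
    model = PeftModel.from_pretrained(base, "adapters/task_A", adapter_name="domain")
    model.load_adapter("adapters/task_B", adapter_name="task_b")

    # Combine the two LoRA deltas into one new adapter, then activate it.
    model.add_weighted_adapter(
        adapters=["domain", "task_b"],
        weights=[1.0, 1.0],
        adapter_name="domain_plus_b",
        combination_type="linear",
    )
    model.set_adapter("domain_plus_b")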
best roleplay model / settings that goes far away from the wholesome nonsense | 28 | Hi,
A lot of roleplay models I tried like to continue the story with some sappy s\*\*\* and I hate it. I tried to tell them not to, but they aren't listening to me.
For example:
X does y. What will happen next? Only time will tell....
Together, x and y are unstoppable. It is a testament to the spirit and unyielding hope they have.
Except multiply the amount of garbage by three.
I tried many models and they all seem to do this. I am getting really tired of it, because once it starts it's almost impossible to get it to stop, and it just ruins a perfectly good roleplay with this crap.
Sorry for the rant, I'm just a bit frustrated haha. | 2023-11-17T10:27:44 | https://www.reddit.com/r/LocalLLaMA/comments/17xc4kp/best_roleplay_model_settings_that_goes_far_away/ | Several_Extreme3886 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xc4kp | false | null | t3_17xc4kp | /r/LocalLLaMA/comments/17xc4kp/best_roleplay_model_settings_that_goes_far_away/ | false | false | self | 28 | null |
An non profit to develop OS llm was announced, live from ai-pulse | 129 | 2023-11-17T10:21:37 | petitponeyrose | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17xc1ku | false | null | t3_17xc1ku | /r/LocalLLaMA/comments/17xc1ku/an_non_profit_to_develop_os_llm_was_announced/ | false | false | 129 | {'enabled': True, 'images': [{'id': '1vTKUHbaVmXwSs9VqCZPATttCK3iO80fsp72iN3Tbj0', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/euxj2u5kwv0c1.jpg?width=108&crop=smart&auto=webp&s=9d95ca99e078c865248227834927ed4254025fd2', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/euxj2u5kwv0c1.jpg?width=216&crop=smart&auto=webp&s=18acbe1868d3e240bd4b1437ca8a9311c12c0d2d', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/euxj2u5kwv0c1.jpg?width=320&crop=smart&auto=webp&s=aec9728eddbece4e32fddcdddccc5778e6c8c487', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/euxj2u5kwv0c1.jpg?width=640&crop=smart&auto=webp&s=6897e4de1226c0733cfe3048562a1e5a8348acf0', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/euxj2u5kwv0c1.jpg?width=960&crop=smart&auto=webp&s=b6a370d2be13cf513f6d8c4b23974b7c1f03ed70', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/euxj2u5kwv0c1.jpg?width=1080&crop=smart&auto=webp&s=13a81b12f94ecf17998df9969c1a5da1ca3e4072', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/euxj2u5kwv0c1.jpg?auto=webp&s=dd45bfb974a3ba9a277ae59dc0e8d26448c6edc0', 'width': 3024}, 'variants': {}}]} | |||
Best model to fine tune for code generation & instructions are in a different language | 2 | I am trying to curate a dataset to fine tune a open source model to generate code where the instructions would be in South Indian languages. What would be the best model to start with ? Also if there are attempts already please point me towards such other language models exclusively for code generation | 2023-11-17T09:58:14 | https://www.reddit.com/r/LocalLLaMA/comments/17xbpws/best_model_to_fine_tune_for_code_generation/ | maayon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xbpws | false | null | t3_17xbpws | /r/LocalLLaMA/comments/17xbpws/best_model_to_fine_tune_for_code_generation/ | false | false | self | 2 | null |
Rx 6700 xt vs rtx 3060 12gb | 1 | Hi, does anyone use an RX 6700 for Llama 2 inference? If so, what kind of speeds are you getting? I am currently thinking of replacing my GTX 1070 Ti, but I am stuck between the 3060 and the 6700. | 2023-11-17T09:22:30 | https://www.reddit.com/r/LocalLLaMA/comments/17xb9ks/rx_6700_xt_vs_rtx_3060_12gb/ | ramzeez88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xb9ks | false | null | t3_17xb9ks | /r/LocalLLaMA/comments/17xb9ks/rx_6700_xt_vs_rtx_3060_12gb/ | false | false | self | 1 | null
Training/Finetuning LLaMa.c (Karpathy's BabyLLaMa) | 1 | Hello guys, for some projects I may need to make a language model and I don't really need a big model (such as Mistral or LLaMa2) so BabyLLaMa seems like a good choice.
BTW, the training scripts in the repo seem good, but I cannot get them to work with datasets other than the TinyStories one. So if there is any way of getting it fine-tuned or trained, I'd appreciate it.
Thanks. | 2023-11-17T08:08:46 | https://www.reddit.com/r/LocalLLaMA/comments/17xaa77/trainingfinetuning_llamac_karpathys_babyllama/ | Haghiri75 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17xaa77 | false | null | t3_17xaa77 | /r/LocalLLaMA/comments/17xaa77/trainingfinetuning_llamac_karpathys_babyllama/ | false | false | self | 1 | null |
Chupacabra-7B Mistral Model Released | 5 | [https://huggingface.co/perlthoughts/Chupacabra-7B](https://huggingface.co/perlthoughts/Chupacabra-7B)
Special thanks to [Mistral.ai](https://Mistral.ai), and [Nurture.ai](https://Nurture.ai) | 2023-11-17T05:38:45 | https://www.reddit.com/r/LocalLLaMA/comments/17x842s/chupacabra7b_mistral_model_released/ | perlthoughts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17x842s | false | null | t3_17x842s | /r/LocalLLaMA/comments/17x842s/chupacabra7b_mistral_model_released/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'aD1B3L2RcWUHo17rcbkZgc-NJ9Hfau835NxTO4h61do', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QXUGx0d9g4mjr7B6OUSZBPmZkVhUGiL1wwcsh1kTzzs.jpg?width=108&crop=smart&auto=webp&s=7036fa2a41529d3212d040828cb1e16f2442937a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/QXUGx0d9g4mjr7B6OUSZBPmZkVhUGiL1wwcsh1kTzzs.jpg?width=216&crop=smart&auto=webp&s=192d9af12dc0456d090891aab0b7aa05b492c8db', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/QXUGx0d9g4mjr7B6OUSZBPmZkVhUGiL1wwcsh1kTzzs.jpg?width=320&crop=smart&auto=webp&s=ac3ded8eb41d38080dfc81d2991964f83d8fb514', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/QXUGx0d9g4mjr7B6OUSZBPmZkVhUGiL1wwcsh1kTzzs.jpg?width=640&crop=smart&auto=webp&s=330f70a19f86761b6ef38416cc8f378ea1468bbf', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/QXUGx0d9g4mjr7B6OUSZBPmZkVhUGiL1wwcsh1kTzzs.jpg?width=960&crop=smart&auto=webp&s=b8148379fc19441c5dc07359619430efbf59f709', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/QXUGx0d9g4mjr7B6OUSZBPmZkVhUGiL1wwcsh1kTzzs.jpg?width=1080&crop=smart&auto=webp&s=45bcfadeb16baf98a6b2a59d5db03de7f3c8dcfe', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/QXUGx0d9g4mjr7B6OUSZBPmZkVhUGiL1wwcsh1kTzzs.jpg?auto=webp&s=00fd3e165a031005a9a6ad83571d60dade9a984b', 'width': 1200}, 'variants': {}}]} |
How exactly does Openai fine-tune our personal data on GPT4? | 4 | How exactly does Openai fine-tune our personal data on GPT4?
It doesn't backpropagate through the entire GPT-4 for each of us, right? Do they just tune the last layer, or do they use LoRA or similar techniques to fine-tune the personalized GPT for us? | 2023-11-17T05:08:37 | https://www.reddit.com/r/LocalLLaMA/comments/17x7m4p/how_exactly_does_openai_finetune_our_personal/ | No_Oilve_6577 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17x7m4p | false | null | t3_17x7m4p | /r/LocalLLaMA/comments/17x7m4p/how_exactly_does_openai_finetune_our_personal/ | false | false | self | 4 | null
ML-Bench: Large Language Models Leverage Open-source Libraries for Machine Learning Tasks | 5 | >Large language models have shown promising performance in code generation benchmarks. However, a considerable divide exists between these benchmark achievements and their practical applicability, primarily attributed to real-world programming's reliance on pre-existing libraries. Instead of evaluating LLMs to code from scratch, this work aims to propose a new evaluation setup where LLMs use open-source libraries to finish machine learning tasks. Therefore, we propose ML-Bench, an expansive benchmark developed to assess the effectiveness of LLMs in leveraging existing functions in open-source libraries. Consisting of 10,040 samples spanning 130 tasks over 14 notable machine learning GitHub repositories. In this setting, given a specific machine learning task instruction and the accompanying README in a codebase, an LLM is tasked to generate code to accomplish the task. This necessitates the comprehension of long and language-code interleaved documents, as well as the understanding of complex cross-file code structures, introducing new challenges. Notably, while GPT-4 exhibits remarkable improvement over other LLMs, it manages to accomplish only 39.73\% of the tasks, leaving a huge space for improvement. We address these challenges by proposing ML-Agent, designed to effectively navigate the codebase, locate documentation, retrieve code, and generate executable code. Empirical results demonstrate that ML-Agent, built upon GPT-4, results in further improvements. | 2023-11-17T04:42:48 | https://ml-bench.github.io/ | ninjasaid13 | ml-bench.github.io | 1970-01-01T00:00:00 | 0 | {} | 17x76u6 | false | null | t3_17x76u6 | /r/LocalLLaMA/comments/17x76u6/mlbench_large_language_models_leverage_opensource/ | false | false | default | 5 | null |
Open-Sourcing Highly Capable Foundation Models: An evaluation of risks, benefits, and alternative methods for pursuing open-source objectives | 5 | >Recent decisions by leading AI labs to either open-source their models or to restrict access to their models has sparked debate about whether, and how, increasingly capable AI models should be shared. Open-sourcing in AI typically refers to making model architecture and weights freely and publicly accessible for anyone to modify, study, build on, and use. This offers advantages such as enabling external oversight, accelerating progress, and decentralizing control over AI development and use. However, it also presents a growing potential for misuse and unintended consequences. This paper offers an examination of the risks and benefits of open-sourcing highly capable foundation models. While open-sourcing has historically provided substantial net benefits for most software and AI development processes, we argue that for some highly capable foundation models likely to be developed in the near future, open-sourcing may pose sufficiently extreme risks to outweigh the benefits. In such a case, highly capable foundation models should not be open-sourced, at least not initially. Alternative strategies, including non-open-source model sharing options, are explored. The paper concludes with recommendations for developers, standard-setting bodies, and governments for establishing safe and responsible model sharing practices and preserving open-source benefits where safe. | 2023-11-17T04:24:57 | https://arxiv.org/abs/2311.09227 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 17x6vxv | false | null | t3_17x6vxv | /r/LocalLLaMA/comments/17x6vxv/opensourcing_highly_capable_foundation_models_an/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} | |
Qwen 14b on Mac OS, Langchain tool usage, and auto-gptq | 1 | I have been trying to get open source models to work with Langchain tools. So far the only model that has worked has been Llama 2 70b Q4 following James Briggs tutorial. Both Llama 2 13b and Mistral 7b Instruct use the tool correctly, observe the answer, but then return an empty string at the end as the output, whereas Llama 2 70b returns "It looks like the answer is X".
I want to experiment with Qwen 14b as it is a relatively small model that may be more efficient to run than Llama 2 70b to see if it works with Langchain tools etc. I read on the GitHub page for Qwen 14b that it was trained specifically for tool usage so I feel like it is one of the most promising models. That and there was quite a lot of positive sentiment about it on this sub.
When I try to load Qwen 14b on my Mac M1, I am getting an error related to auto-gptq; when I tried to install auto-gptq with pip, it errored and mentioned something about CUDA. Does auto-gptq work on Mac OS or does it require CUDA? Is there any way to get some version of Qwen 14b to run on Mac OS?
Has anyone experimented with Qwen 14b and Langchain tool usage? | 2023-11-17T04:17:14 | https://www.reddit.com/r/LocalLLaMA/comments/17x6r44/qwen_14b_on_mac_os_langchain_tool_usage_and/ | tail-recursion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17x6r44 | false | null | t3_17x6r44 | /r/LocalLLaMA/comments/17x6r44/qwen_14b_on_mac_os_langchain_tool_usage_and/ | false | false | self | 1 | null |
Tied-LoRA: Enhancing parameter efficiency of LoRA with weight tying | 14 | >We propose Tied-LoRA, a simple paradigm that utilizes weight tying and selective training to further increase the parameter efficiency of the Low-rank adaptation (LoRA) method. Our investigations include all feasible combinations of parameter training/freezing in conjunction with weight tying to identify the optimal balance between performance and the number of trainable parameters. Through experiments covering a variety of tasks and two base language models, we provide analysis revealing trade-offs between efficiency and performance. Our experiments uncovered a particular Tied-LoRA configuration that stands out by demonstrating comparable performance across several tasks while employing only 13% of the parameters utilized by the standard LoRA method. | 2023-11-17T03:41:31 | https://arxiv.org/abs/2311.09578 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 17x64zm | false | null | t3_17x64zm | /r/LocalLLaMA/comments/17x64zm/tiedlora_enhacing_parameter_efficiency_of_lora/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]}
Question about LLM naming | 3 | So I have recently gone down the rabbit hole of cancelling my ChatGPT subscription and now just use OpenHermes2.5-Mistral-7B. I've learned about the different benchmarks and how they compare, and I understand how to read the HuggingFace LLM leaderboard and download any other model I might like to try.
What I struggle to understand is the meaning of the naming conventions. Mistral seems to clearly be better than LLAMA2 from what I have read and I understand the differences of 7B, 13B, etc etc.
Can someone explain the additional prefixes of Hermes, OpenHermes, NeuralChat, etc.
Tldr; What is the difference between Dolphin-Mistral and OpenHermes-Mistral. I'm guessing one is the dataset and the other is how it was trained? | 2023-11-17T03:16:31 | https://www.reddit.com/r/LocalLLaMA/comments/17x5nqo/question_about_llm_naming/ | ijustdontcare2try | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17x5nqo | false | null | t3_17x5nqo | /r/LocalLLaMA/comments/17x5nqo/question_about_llm_naming/ | false | false | self | 3 | null |
The newly released Psyfighter2 13B, A better version of Tiefighter? | 28 | [https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2-GGUF](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2-GGUF) | 2023-11-17T01:25:47 | https://www.reddit.com/r/LocalLLaMA/comments/17x3du9/the_newly_released_psyfighter2_13b_a_better/ | Majestical-psyche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17x3du9 | false | null | t3_17x3du9 | /r/LocalLLaMA/comments/17x3du9/the_newly_released_psyfighter2_13b_a_better/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': 'YmwGy7j10gy1CdOwLlEh-TKCZ7pCjLHLgGL3cx2gGVg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MsMAbUxY_LOxUGnd6FElJNXb_QXXFvDbvRl_QCL07Ws.jpg?width=108&crop=smart&auto=webp&s=f0cea3dac19609e3d813bd2389730f277440bb06', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MsMAbUxY_LOxUGnd6FElJNXb_QXXFvDbvRl_QCL07Ws.jpg?width=216&crop=smart&auto=webp&s=549f4f9dd0f064b4ef915bd2f502421fa48e7f37', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MsMAbUxY_LOxUGnd6FElJNXb_QXXFvDbvRl_QCL07Ws.jpg?width=320&crop=smart&auto=webp&s=691201de94d2272b91ec8b0843497df35bef5691', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MsMAbUxY_LOxUGnd6FElJNXb_QXXFvDbvRl_QCL07Ws.jpg?width=640&crop=smart&auto=webp&s=47ced00e7d3d06f62853b4d9a3adf2cd6e76d7ab', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MsMAbUxY_LOxUGnd6FElJNXb_QXXFvDbvRl_QCL07Ws.jpg?width=960&crop=smart&auto=webp&s=b5459c58c50b2e44e63d659dcfb6cacf4e297fb1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MsMAbUxY_LOxUGnd6FElJNXb_QXXFvDbvRl_QCL07Ws.jpg?width=1080&crop=smart&auto=webp&s=85b934734b08adf8c085755649c7236db6d3ba5e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MsMAbUxY_LOxUGnd6FElJNXb_QXXFvDbvRl_QCL07Ws.jpg?auto=webp&s=00894521b14fd1fee7b90b54297b291105faf5cb', 'width': 1200}, 'variants': {}}]} |
Fine Tuning an LLM to support function calling | 8 | Hi Everyone,
After experimenting with OpenAI's function calling feature, I was wondering how I can fine-tune an open-source model to support such a feature.
Thank you and best regards. | 2023-11-17T01:07:21 | https://www.reddit.com/r/LocalLLaMA/comments/17x2zla/fine_tuning_an_llm_to_support_function_calling/ | hatahetahmad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17x2zla | false | null | t3_17x2zla | /r/LocalLLaMA/comments/17x2zla/fine_tuning_an_llm_to_support_function_calling/ | false | false | self | 8 | null
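A common starting point is supervised fine-tuning on tool-call transcripts; a minimal sketch of one training sample (the schema is an assumption, loosely OpenAI-style; adapt it to whatever chat template your trainer expects):

    # Hypothetical SFT sample for function calling; get_weather is an invented tool.
    sample = {
        "messages": [
            {"role": "system",
             "content": "You can call get_weather(city: str). Reply with a JSON call when needed."},
            {"role": "user", "content": "What's the weather in Paris?"},
            {"role": "assistant",
             "content": '{"function": "get_weather", "arguments": {"city": "Paris"}}'},
        ]
    }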
How can I improve inference performance to a normal range? | 4 | At work, we are using four A100 cards (0,1 nvlinked and 2,3 nvlinked) and I am curious about how to connect all four cards. Additionally, when using four A100 cards, the performance seems slower and the token throughput is much lower compared to using a 4060 Ti at home. Why might this be? When I check with nvidia-smi, it shows that the VRAM is being fully utilized, but the volatile GPU utilization is not 100% for all four, usually something like 100, 70, 16, 16. (Using KVM passthrough on a RHEL 8 server.) | 2023-11-17T00:38:23 | https://www.reddit.com/r/LocalLLaMA/comments/17x2dju/how_can_i_improve_inference_performance_to_a/ | Radiant-Practice-270 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17x2dju | false | null | t3_17x2dju | /r/LocalLLaMA/comments/17x2dju/how_can_i_improve_inference_performance_to_a/ | false | false | self | 4 | null
Text Generator webui - how do I select which GPU to use? | 3 | Hi everyone, sort of new to this so I'm sorry if this is a stupid question or if I'm unsure about a few things. But basically I'm on my laptop. I'm not using this for anything professional by the way. But if you need to know my specs I'll leave them here as well:
CPU: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz 2.30 GHz
RAM: 32GB
GPU0: Intel(R) UHD Graphics
GPU1: Nvidia RTX 3060 Laptop GPU
So I installed text gen webui and it opens just fine, I've tried a few models, some of them will give me an error for files missing but I did get one or two to work, one of them is [TheBloke/OpenHermes-2.5-Mistral-7B-16k-AWQ](https://huggingface.co/TheBloke/OpenHermes-2.5-Mistral-7B-16k-AWQ) which seems to install properly and auto detects that it should be using AutoAWQ model loader. I've assigned gpu memory and even some cpu memory but regardless it still gives me the following error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU 0 has a total capacty of 6.00 GiB of which 0 bytes is free. Of the allocated memory 5.24 GiB is allocated by PyTorch, and 81.71 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Now as it says there it is trying to use GPU 0. And I would assume my GPU 1 would be better for this. Also changing the gpu-memory on the web interface doesn't seem to change anything in the message so I'm guessing that has nothing to do with it. When installing the application I did put that my GPU was an Nvidia card. Also I do know that my Intel GPU is being used because when I press "Load" on my interface using that model I can see my GPU 0 (Intel GPU) being used.
I've seen one or two websites that tell people to add `CUDA_VISIBLE_DEVICES=0`(or 1) before running the python command but I'm assuming that's for people who're doing this manually? I'm running it all through TextGen webui so I don't run any python code, just a batch file that comes with it. Any advice would be appreciated | 2023-11-17T00:03:50 | https://www.reddit.com/r/LocalLLaMA/comments/17x1ly2/text_generator_webui_how_do_i_select_which_gpu_to/ | OvercookedSatellite | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17x1ly2 | false | null | t3_17x1ly2 | /r/LocalLLaMA/comments/17x1ly2/text_generator_webui_how_do_i_select_which_gpu_to/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'Rad13nIxbFMWwWsCtzfBsHxhePaNbqtMM0pN_0-dkqw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gB5toa2ZPdfHbGQzJx1LmKJkvxrU3REaKZpvNP_ART4.jpg?width=108&crop=smart&auto=webp&s=b6de1215db4f1bf8019c472fbb862376cfc7bdf0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gB5toa2ZPdfHbGQzJx1LmKJkvxrU3REaKZpvNP_ART4.jpg?width=216&crop=smart&auto=webp&s=00821c74396bfdfb386e9b4ca1158b48b63d692f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gB5toa2ZPdfHbGQzJx1LmKJkvxrU3REaKZpvNP_ART4.jpg?width=320&crop=smart&auto=webp&s=d6c90811f15e36884ef36c4f53099556c6a08986', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gB5toa2ZPdfHbGQzJx1LmKJkvxrU3REaKZpvNP_ART4.jpg?width=640&crop=smart&auto=webp&s=6bb8c532f5888d755b94e4b2e728e39682012dbb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gB5toa2ZPdfHbGQzJx1LmKJkvxrU3REaKZpvNP_ART4.jpg?width=960&crop=smart&auto=webp&s=35824cee4639f56d6c2209e907f60f098372a618', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gB5toa2ZPdfHbGQzJx1LmKJkvxrU3REaKZpvNP_ART4.jpg?width=1080&crop=smart&auto=webp&s=1ab7bc1833e2062608d813dcdd8ff5c965616a23', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gB5toa2ZPdfHbGQzJx1LmKJkvxrU3REaKZpvNP_ART4.jpg?auto=webp&s=5178c2d454381e4c9c2ac813ee217f99f8faabda', 'width': 1200}, 'variants': {}}]} |
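For what it's worth, CUDA only enumerates NVIDIA devices, so the "GPU 0" in that PyTorch error is most likely already the RTX 3060 (its 6 GB capacity matches the message); the Intel iGPU is never a CUDA device. CUDA_VISIBLE_DEVICES mainly matters on machines with several NVIDIA cards, and it must be set before anything initializes CUDA; a minimal sketch (the device index "1" is an assumption for multi-NVIDIA setups):

    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # must run before torch touches CUDA

    import torch
    print(torch.cuda.get_device_name(0))  # index 0 now maps to the selected card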
Optimizing Model Settings for Sillytavern: Seeking Guidance on Goliath 120b and Xwin 70b | 12 | I've been encountering a repetition issue with models like Goliath 120b and Xwin 70b on Sillytavern + OpenRouter.
While I understand that changing models can have a significant impact, I'm puzzled by the repetition problem.
Despite my efforts to find online resources for correct settings, my searches for Airoboros 70b, Xwin 70b, lzlv 70b, and others have been in vain.
I came across posts on this subreddit addressing similar concerns, but unfortunately, they lacked solutions.
One suggestion was to "use the shortwave preset," but it seems to be nonexistent.
Unsure of what I might be overlooking, I'm reaching out here for help.
The 120b model should theoretically outperform the 7b/13b models, but I suspect there's a configuration issue.
If anyone could provide insights or share the correct settings for these models, it would greatly help not only me but also future users facing the same issue.
Let's compile a comprehensive guide here so that anyone searching the internet for a solution can find this post and get the answers they need.
Thank you in advance for your assistance!
PS: mythomax 13B seems to be the best model because it's the only one that actually works... | 2023-11-16T23:51:17 | https://www.reddit.com/r/LocalLLaMA/comments/17x1c1b/optimizing_model_settings_for_sillytavern_seeking/ | Regular_Instruction | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17x1c1b | false | null | t3_17x1c1b | /r/LocalLLaMA/comments/17x1c1b/optimizing_model_settings_for_sillytavern_seeking/ | false | false | self | 12 | null |
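In case it helps future searchers: a minimal sketch of sampler settings that commonly curb repetition. The values are starting points (assumptions), not a known-good preset, and which knobs exist depends on the backend; OpenRouter forwards OpenAI-style penalties:

    gen_params = {
        "temperature": 0.9,
        "top_p": 0.9,
        "repetition_penalty": 1.1,  # >1.0 penalizes recently generated tokens
        "frequency_penalty": 0.2,   # scales with how often a token has appeared
        "presence_penalty": 0.2,    # flat penalty once a token has appeared
    }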
What UI do you use and why? | 87 | From the wiki:
[**Text generation web UI**](https://github.com/oobabooga/text-generation-webui)
[**llama.cpp**](https://github.com/ggerganov/llama.cpp)
[**KoboldCpp**](https://github.com/LostRuins/koboldcpp)
[**vLLM**](https://github.com/vllm-project/vllm)
[**MLC LLM**](https://github.com/mlc-ai/mlc-llm)
[**Text Generation Inference**](https://github.com/huggingface/text-generation-inference) | 2023-11-16T22:59:22 | https://www.reddit.com/r/LocalLLaMA/comments/17x052b/what_ui_do_you_use_and_why/ | Deadlibor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17x052b | false | null | t3_17x052b | /r/LocalLLaMA/comments/17x052b/what_ui_do_you_use_and_why/ | false | false | self | 87 | {'enabled': False, 'images': [{'id': 'ck1LOG3sEzaBui5jWVc5cnLlUo-Vxi4ak7rqzh4uU-M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gl6mGvnXSs2e1Hii7TbyCrQgmIgOaEKqUN7HsTvVsso.jpg?width=108&crop=smart&auto=webp&s=61c8dfc7a3d22760955aa51d78a93b2e1ba50e08', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gl6mGvnXSs2e1Hii7TbyCrQgmIgOaEKqUN7HsTvVsso.jpg?width=216&crop=smart&auto=webp&s=16595837096ea636911973db7c571aec7d04c754', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gl6mGvnXSs2e1Hii7TbyCrQgmIgOaEKqUN7HsTvVsso.jpg?width=320&crop=smart&auto=webp&s=75440c1f4aeb4d44ae354b3e715da1271a19256c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gl6mGvnXSs2e1Hii7TbyCrQgmIgOaEKqUN7HsTvVsso.jpg?width=640&crop=smart&auto=webp&s=43e53b7c0364c89e718598629fc8b883ed10fb7c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gl6mGvnXSs2e1Hii7TbyCrQgmIgOaEKqUN7HsTvVsso.jpg?width=960&crop=smart&auto=webp&s=d1d9b393efe677d5766a462068dcea69ea98befc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gl6mGvnXSs2e1Hii7TbyCrQgmIgOaEKqUN7HsTvVsso.jpg?width=1080&crop=smart&auto=webp&s=35e0018bdf37803f7748418a99f95d41ed23fd1b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gl6mGvnXSs2e1Hii7TbyCrQgmIgOaEKqUN7HsTvVsso.jpg?auto=webp&s=0915f471767af0da89b1a644c269d027c2fc2469', 'width': 1200}, 'variants': {}}]} |
Optimizing inference time for LLM within a python script | 1 | I am currently using Mistral-7B Q4 within Python, using ctransformers to load and configure it. The basic premise is to ingest text, perform some specific NLP task, and output JSON.
I am currently running tests between CPU and GPU; with an A10 24GB GPU, the time taken for one read-text->output iteration is around 7 seconds for an approx. 150-word prompt plus 50-100 words of input. The generated output is no more than 50 tokens.
I was looking into exllama, but can't see a particularly good example of integrating it with something like ctransformers/llama-cpp-python. | 2023-11-16T22:49:21 | https://www.reddit.com/r/LocalLLaMA/comments/17wzwqf/optimizing_inference_time_for_llm_within_a_python/ | Mattyjoels | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17wzwqf | false | null | t3_17wzwqf | /r/LocalLLaMA/comments/17wzwqf/optimizing_inference_time_for_llm_within_a_python/ | false | false | self | 1 | null
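One lever worth checking in this setup is whether all layers are actually offloaded; a minimal sketch, assuming ctransformers was installed with CUDA support (pip install ctransformers[cuda]) and placeholder repo/file names:

    from ctransformers import AutoModelForCausalLM

    llm = AutoModelForCausalLM.from_pretrained(
        "TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
        model_file="mistral-7b-instruct-v0.1.Q4_K_M.gguf",
        model_type="mistral",
        gpu_layers=50,        # enough to push every layer onto the A10
        context_length=2048,
    )
    print(llm("Summarize: ...", max_new_tokens=50))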
TabbyAPI released! A pure LLM API for exllama v2. | 30 | 2023-11-16T22:40:28 | https://github.com/theroyallab/tabbyAPI | panchovix | github.com | 1970-01-01T00:00:00 | 0 | {} | 17wzp7y | false | null | t3_17wzp7y | /r/LocalLLaMA/comments/17wzp7y/tabbyapi_released_a_pure_llm_api_for_exllama_v2/ | false | false | 30 | {'enabled': False, 'images': [{'id': 'aAqmp6D8lMvY3AYqplPSC8dRMaINciS0Bdaj2IlUHbk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4iFrNPHOoXpSA8ovBO5FAEm8cERJswh5rN7EpHLGX5o.jpg?width=108&crop=smart&auto=webp&s=afe1cc77fd0a81fe324380ff4e6c265004fe8f51', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4iFrNPHOoXpSA8ovBO5FAEm8cERJswh5rN7EpHLGX5o.jpg?width=216&crop=smart&auto=webp&s=2a1feaca8e71930bf8b2e8a33b1492fa0ce4d625', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4iFrNPHOoXpSA8ovBO5FAEm8cERJswh5rN7EpHLGX5o.jpg?width=320&crop=smart&auto=webp&s=189b3c1a186b211f379afc8e152051a07f77f36c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4iFrNPHOoXpSA8ovBO5FAEm8cERJswh5rN7EpHLGX5o.jpg?width=640&crop=smart&auto=webp&s=044e871674985fe009a407071c5cd0b0ca981a5a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4iFrNPHOoXpSA8ovBO5FAEm8cERJswh5rN7EpHLGX5o.jpg?width=960&crop=smart&auto=webp&s=b8193c5a571ac632763574bbfb24962c242d71b3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4iFrNPHOoXpSA8ovBO5FAEm8cERJswh5rN7EpHLGX5o.jpg?width=1080&crop=smart&auto=webp&s=cd0a726046e764b49e597ede31e5ac935c23c43c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4iFrNPHOoXpSA8ovBO5FAEm8cERJswh5rN7EpHLGX5o.jpg?auto=webp&s=ab794be9b506702fe6a5d80f4806df84a43c4ad9', 'width': 1200}, 'variants': {}}]} | ||
If you like engineering/finetuning LLMs + Gaming - come work with us! | 1 | 2023-11-16T22:29:47 | https://havenstudios.com/en/jobs/ml-researcher-llms | InfamousPancakes | havenstudios.com | 1970-01-01T00:00:00 | 0 | {} | 17wzfxz | false | null | t3_17wzfxz | /r/LocalLLaMA/comments/17wzfxz/if_you_like_engineeringfinetuning_llms_gaming/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'PPGXK5IcelPqHHBzNCmOA32Jj1hTfmb6QQaIfobai7c', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/9HI-y1Pbmiu98Kjzq6Sdr-6P9QAbheJ8KxhaqV-Dpvw.jpg?width=108&crop=smart&auto=webp&s=d1ff993069fcd746066b962ac854024fd67285f7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/9HI-y1Pbmiu98Kjzq6Sdr-6P9QAbheJ8KxhaqV-Dpvw.jpg?width=216&crop=smart&auto=webp&s=8a685f6e13528d6d28cbdd5e98515e87b08d41e9', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/9HI-y1Pbmiu98Kjzq6Sdr-6P9QAbheJ8KxhaqV-Dpvw.jpg?width=320&crop=smart&auto=webp&s=a92fbb640161e6acb40fdbfc7e25ab66eefc0bc2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/9HI-y1Pbmiu98Kjzq6Sdr-6P9QAbheJ8KxhaqV-Dpvw.jpg?width=640&crop=smart&auto=webp&s=32b81cf7725763fbea82e8e83b96222ba7c0579b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/9HI-y1Pbmiu98Kjzq6Sdr-6P9QAbheJ8KxhaqV-Dpvw.jpg?width=960&crop=smart&auto=webp&s=8d202fc6b0912eb2e34f6db859df4283f6e3fcbc', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/9HI-y1Pbmiu98Kjzq6Sdr-6P9QAbheJ8KxhaqV-Dpvw.jpg?width=1080&crop=smart&auto=webp&s=bfd235c76fc1bb94fad1a3c1608a9867e54955c3', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/9HI-y1Pbmiu98Kjzq6Sdr-6P9QAbheJ8KxhaqV-Dpvw.jpg?auto=webp&s=bdb6d2ca87c95c47bba8182efa2788bce78d349d', 'width': 3840}, 'variants': {}}]} | ||
{D} Combine knowledge Graph with LLM? | 4 | Currently, I am working on a knowledge graph (KG) to improve LLM performance at a practical level.
In the first stage, I am wondering what the better way is to process the dataset as a knowledge graph:
\- Should I use prompt engineering while representing the KG as a query schema?
\- Or could I use RAG directly and set the KG as the knowledge database?
My main question here is: what are the most common techniques used to tokenize a KG?
Or is there a possibility of using a GNN-based top layer to handle context extraction from the KG?
I am sorry if my questions are not clear; I am just starting my journey on this problem.
| 2023-11-16T22:03:41 | https://www.reddit.com/r/LocalLLaMA/comments/17wytey/d_combine_knowledge_graph_with_llm/ | Youness_Elbrag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17wytey | false | null | t3_17wytey | /r/LocalLLaMA/comments/17wytey/d_combine_knowledge_graph_with_llm/ | false | false | self | 4 | null |
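On the RAG option: a minimal sketch of linearizing KG triples into plain sentences so a standard embed-and-retrieve pipeline can index them (the triples are toy examples):

    triples = [
        ("Mistral-7B", "developed_by", "Mistral AI"),
        ("Mistral AI", "based_in", "Paris"),
    ]
    docs = [f"{s} {p.replace('_', ' ')} {o}." for s, p, o in triples]
    # -> ["Mistral-7B developed by Mistral AI.", "Mistral AI based in Paris."]
    # These strings then go through the usual embed -> index -> retrieve -> prompt flow.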
Using open source reranker in Langchain | 2 | I am looking for ways to use an open-source reranker like bge-reranker inside my RetrievalQA chain, but have not found examples of doing this. Is it possible at the moment? | 2023-11-16T21:57:22 | https://www.reddit.com/r/LocalLLaMA/comments/17wynsy/using_open_source_reranker_in_langchain/ | asakura_matsunoki | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17wynsy | false | null | t3_17wynsy | /r/LocalLLaMA/comments/17wynsy/using_open_source_reranker_in_langchain/ | false | false | self | 2 | null
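A minimal sketch using the cross-encoder directly via sentence-transformers; the reranked documents can then be fed to the chain, or wrapped in a custom LangChain document compressor (the query and passages below are placeholders):

    from sentence_transformers import CrossEncoder

    reranker = CrossEncoder("BAAI/bge-reranker-base")
    query = "How do I enable GPU offloading?"
    docs = ["passage about gpu layers", "passage about tokenizers", "passage about RAM"]
    scores = reranker.predict([(query, d) for d in docs])
    top_docs = [d for _, d in sorted(zip(scores, docs), key=lambda t: t[0], reverse=True)]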
Can LM Studio have a SillyTavern-like interface or is that only possible with oobabooga? | 2 | I just downloaded LM Studio for Mac. I was wondering if there was a solution for this. | 2023-11-16T21:52:24 | https://www.reddit.com/r/LocalLLaMA/comments/17wyjo8/can_lm_studio_have_a_sillytavernlike_interface_or/ | ChromeGhost | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17wyjo8 | false | null | t3_17wyjo8 | /r/LocalLLaMA/comments/17wyjo8/can_lm_studio_have_a_sillytavernlike_interface_or/ | false | false | self | 2 | null
Learn to use an LLM on your own computer! | 1 | [removed] | 2023-11-16T21:32:33 | https://www.reddit.com/r/LocalLLaMA/comments/17wy2s5/learn_to_use_an_llm_on_your_own_computer/ | Artistic_Highlight_1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17wy2s5 | false | null | t3_17wy2s5 | /r/LocalLLaMA/comments/17wy2s5/learn_to_use_an_llm_on_your_own_computer/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '-LQEf8gT9DaigEzgvvRFOXmtJ5Rg2xYag0hdJHzM8hw', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/EHbc6wd3MwjqraXiFpkM4RcUoy6mD_tWJXotb-6yHqE.jpg?width=108&crop=smart&auto=webp&s=07338c74f933eb1a4fed2de701b97b2153ae6489', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/EHbc6wd3MwjqraXiFpkM4RcUoy6mD_tWJXotb-6yHqE.jpg?width=216&crop=smart&auto=webp&s=b008074765d69561f6b4ce03798f233fdc6c083f', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/EHbc6wd3MwjqraXiFpkM4RcUoy6mD_tWJXotb-6yHqE.jpg?width=320&crop=smart&auto=webp&s=03ffd7ce9ab37d435d50a267498e957714ac89ed', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/EHbc6wd3MwjqraXiFpkM4RcUoy6mD_tWJXotb-6yHqE.jpg?width=640&crop=smart&auto=webp&s=782951c767302b514b19ff4c105dc511a4c938f3', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/EHbc6wd3MwjqraXiFpkM4RcUoy6mD_tWJXotb-6yHqE.jpg?width=960&crop=smart&auto=webp&s=971a8ad3d40c3007bb82790f13ca1fcb9708855d', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/EHbc6wd3MwjqraXiFpkM4RcUoy6mD_tWJXotb-6yHqE.jpg?width=1080&crop=smart&auto=webp&s=9b0654c9f93113f9e428a8a61313f98520d5a3e5', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/EHbc6wd3MwjqraXiFpkM4RcUoy6mD_tWJXotb-6yHqE.jpg?auto=webp&s=f3d74eb7f31a88b2ac618fda9ba36a7fa8374508', 'width': 1200}, 'variants': {}}]} |
Getting OpenAI API key or loading LLM from bearer access token | 1 | [removed] | 2023-11-16T20:54:21 | https://www.reddit.com/r/LocalLLaMA/comments/17wx661/getting_openai_api_key_or_loading_llm_from_bearer/ | ManyAffectionate770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17wx661 | false | null | t3_17wx661 | /r/LocalLLaMA/comments/17wx661/getting_openai_api_key_or_loading_llm_from_bearer/ | false | false | self | 1 | null |
Seeking Advice on Implementing AI Model Inference Server with 7B Model (4-bit) on RTX 4090 for side hustle | 2 | I'm currently working on a side project for passion and learning that involves implementing an AI model inference server using a quantized 7B model with 4-bit precision.
My hardware setup includes an RTX 4090. I'm reaching out to gather advice and insights from this knowledgeable community.
- Calculating Max Concurrent Users:
I'd appreciate guidance on how to calculate the maximum number of concurrent users my setup can handle. What factors should I consider in this calculation, and are there any tools or methodologies you recommend?
- What to be used for the API:
I'm also exploring open source projects to use as an API for my model. Any recommendations or experiences with projects that seamlessly integrate with quantized 7B models and can run efficiently on RTX 4090?
Your expertise will be invaluable, and any advice or pointers to relevant resources are highly appreciated. Thanks in advance! | 2023-11-16T20:52:45 | https://www.reddit.com/r/LocalLLaMA/comments/17wx4ud/seeking_advice_on_implementing_ai_model_inference/ | Ok-Bird8904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17wx4ud | false | null | t3_17wx4ud | /r/LocalLLaMA/comments/17wx4ud/seeking_advice_on_implementing_ai_model_inference/ | false | false | self | 2 | null |
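On the first question, a back-of-envelope sketch: concurrency is usually bounded by KV-cache memory. The dimensions below are Llama-2-7B-style assumptions, a 4-bit 7B model itself takes roughly 4 GB, and real throughput also depends on the batching engine (vLLM and Hugging Face's text-generation-inference are the usual open-source API picks):

    layers, kv_heads, head_dim, ctx = 32, 32, 128, 2048      # assumed model dims
    bytes_per_token = 2 * layers * kv_heads * head_dim * 2   # K and V caches, fp16
    kv_per_user_gb = bytes_per_token * ctx / 1024**3         # ~1 GB at full context
    free_gb = 24 - 4                                         # 24 GB card minus weights
    print(f"{kv_per_user_gb:.2f} GB/user -> ~{int(free_gb // kv_per_user_gb)} users")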
How Can I Optimize Execution Times for a Secure, Self-Hosted Llama2-Chat 7-B Model Deployed with Truss and Kubernetes on a Tesla T4 GPU | 1 | Hello! I want to implement a chatbot for my company. We aim to deploy our own model because we have significant data security concerns and cannot share data with OpenAI. I deployed the 7-B version of llama2-chat using Truss and Kubernetes with a Tesla T4 GPU. The model works well, but the execution times are very long (it takes about 15 seconds per prompt). My question is whether it's normal for it to take this long, if there's any way to improve it, or if you know of any other alternatives for my case. | 2023-11-16T20:10:08 | https://www.reddit.com/r/LocalLLaMA/comments/17ww5tn/how_can_i_optimize_execution_times_for_a_secure/ | Spiritual_Narwhal649 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17ww5tn | false | null | t3_17ww5tn | /r/LocalLLaMA/comments/17ww5tn/how_can_i_optimize_execution_times_for_a_secure/ | false | false | self | 1 | null
PostgresML — serve open-source models inside PostgreSQL | 1 | 2023-11-16T19:47:34 | https://postgresml.org/ | something_cleverer | postgresml.org | 1970-01-01T00:00:00 | 0 | {} | 17wvn66 | false | null | t3_17wvn66 | /r/LocalLLaMA/comments/17wvn66/postgresml_serve_opensource_models_inside/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Gy3S0WOyX2rCy7_cN1nsTwWj1UHErFo8n1DRP2b2CU4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VjCPWlFDVtaTT_oULaNBM78i0HdDgMuqaE188c08lBE.jpg?width=108&crop=smart&auto=webp&s=976c5721eca753bce007cc34425edafb04d1191b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/VjCPWlFDVtaTT_oULaNBM78i0HdDgMuqaE188c08lBE.jpg?width=216&crop=smart&auto=webp&s=13348d5875fd161aa1c61d1a9ea985e5c79ef991', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/VjCPWlFDVtaTT_oULaNBM78i0HdDgMuqaE188c08lBE.jpg?width=320&crop=smart&auto=webp&s=a24aa3ccd0e3129e476567cb6f2c0dce250015de', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/VjCPWlFDVtaTT_oULaNBM78i0HdDgMuqaE188c08lBE.jpg?width=640&crop=smart&auto=webp&s=30bed3fb41918167cbd77ba3152e0342ec279843', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/VjCPWlFDVtaTT_oULaNBM78i0HdDgMuqaE188c08lBE.jpg?width=960&crop=smart&auto=webp&s=269329a30f4bfc7fbc2303d01003cc4c80397b0b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/VjCPWlFDVtaTT_oULaNBM78i0HdDgMuqaE188c08lBE.jpg?width=1080&crop=smart&auto=webp&s=f43b0fac23705449d34060d5c7041f2ac41e34ff', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/VjCPWlFDVtaTT_oULaNBM78i0HdDgMuqaE188c08lBE.jpg?auto=webp&s=29fbd23d271617383d00d30830bc55d7eaf33e54', 'width': 1200}, 'variants': {}}]} | ||
FYI. Event Tomorrow. Mistral AI's Open Source Initiative: Ambitions, approaches, and roadmap ahead | 136 | 2023-11-16T17:08:28 | phoneixAdi | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 17ws122 | false | null | t3_17ws122 | /r/LocalLLaMA/comments/17ws122/fyi_event_tomorrow_mistral_ais_open_source/ | false | false | 136 | {'enabled': True, 'images': [{'id': 'PRMi6DqTKPwZ9CQH5t7AOXAfNXyc7ZMQOsZUUuavhT4', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/33j9yh47sq0c1.png?width=108&crop=smart&auto=webp&s=6e4661acae035c4cb50547238d7a8072bdc3dac4', 'width': 108}, {'height': 153, 'url': 'https://preview.redd.it/33j9yh47sq0c1.png?width=216&crop=smart&auto=webp&s=178bef36849c4153a98e33713596561cf2f2bfd6', 'width': 216}, {'height': 227, 'url': 'https://preview.redd.it/33j9yh47sq0c1.png?width=320&crop=smart&auto=webp&s=ada2e14bb093f66d3fd0c5300bdbfb6b724183f8', 'width': 320}], 'source': {'height': 447, 'url': 'https://preview.redd.it/33j9yh47sq0c1.png?auto=webp&s=6a0840b18c5463960d35a4f745ea3408d5cabcf8', 'width': 629}, 'variants': {}}]} | |||
What is the best 7B model for reading comprehension and instruction following at the moment? | 13 | Hi, can you guys suggest some 7B models that are good at reading comprehension and instruction following? | 2023-11-16T16:46:54 | https://www.reddit.com/r/LocalLLaMA/comments/17wril8/what_is_the_best_7b_model_for_reading/ | Outrageous-Drama924 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17wril8 | false | null | t3_17wril8 | /r/LocalLLaMA/comments/17wril8/what_is_the_best_7b_model_for_reading/ | false | false | self | 13 | null
Test 3 - My AI made an event where they are sleeping with one another | 25 | So I wanted to get my AIs to stop cussing at and threatening each other all the time. I also wanted to test throwing given personalities into the mix to see what would happen. You can see the short here:
[https://youtube.com/shorts/DtVjQC-Iol4?feature=share](https://youtube.com/shorts/DtVjQC-Iol4?feature=share)
And in the full version I try to answer the questions in a way anyone can understand:
[https://youtu.be/8LWF42seZHI](https://youtu.be/8LWF42seZHI) | 2023-11-16T15:59:57 | https://www.reddit.com/r/LocalLLaMA/comments/17wqcm5/test_3_my_ai_made_a_event_where_they_are_sleeping/ | crua9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17wqcm5 | false | null | t3_17wqcm5 | /r/LocalLLaMA/comments/17wqcm5/test_3_my_ai_made_a_event_where_they_are_sleeping/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'u8l7uQ1nvL4V8cNtGBvzLi9Kt6vZyaFB2WoAxoAesf8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/QdtSy1rfnXHiVquQKcaL3G3x0fEXxxkYI7pq67i7XRA.jpg?width=108&crop=smart&auto=webp&s=54ac6dc41fb523b651f4ef20eb67ace1544142c0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/QdtSy1rfnXHiVquQKcaL3G3x0fEXxxkYI7pq67i7XRA.jpg?width=216&crop=smart&auto=webp&s=b3cb9d0c8aa61d728adccd0545d340d404253fe4', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/QdtSy1rfnXHiVquQKcaL3G3x0fEXxxkYI7pq67i7XRA.jpg?width=320&crop=smart&auto=webp&s=92bdb33fe5b300ffdc5e5d67d72805424da7e201', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/QdtSy1rfnXHiVquQKcaL3G3x0fEXxxkYI7pq67i7XRA.jpg?auto=webp&s=6db529831437f29b1c2097d4269058f01a41830e', 'width': 480}, 'variants': {}}]}
So what's the story on distributing or redistributing characters? | 1 | Say I want a celebrity "voice" available in a front end I intend to distribute, or I find a chub card so useful I feel everyone should have it.
It may be rude, and I intend to cite sources where I can, but I want a quick sense of the legality, and of how much of a name change makes something a safe parody.
I also want to get a feel for the community attitude toward front ends with baked-in default personalities. | 2023-11-16T15:59:49 | https://www.reddit.com/r/LocalLLaMA/comments/17wqci5/so_whats_the_story_on_distributing_or/ | hibbity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17wqci5 | false | null | t3_17wqci5 | /r/LocalLLaMA/comments/17wqci5/so_whats_the_story_on_distributing_or/ | false | false | self | 1 | null
Best stack for local vs hosted usecases? | 1 | [removed] | 2023-11-16T15:43:38 | https://www.reddit.com/r/LocalLLaMA/comments/17wpycw/best_stack_for_local_vs_hosted_usecases/ | minimal_surface | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17wpycw | false | null | t3_17wpycw | /r/LocalLLaMA/comments/17wpycw/best_stack_for_local_vs_hosted_usecases/ | false | false | self | 1 | null |
Selected papers from ICLR 2024 | 24 | ICLR 2024 papers are available, along with the reviewers' comments and scores, [here](https://openreview.net/group?id=ICLR.cc/2024/Conference). Here I'm sharing a list of those I think this community may find useful. For this I filtered by keywords, then set a score threshold using a modified weighted Bayesian average. I skimmed the remaining 100 papers and read the more interesting ones. Note that this is not necessarily a list of "best papers", as it leans toward those that are simple rather than complex, practical rather than theoretical, and incremental rather than groundbreaking. Anyway, here they are:
1) [Hybrid LLM: Cost-Efficient and Quality-Aware Query Routing](https://openreview.net/forum?id=02f3mUtqnM): Nice and simple method to cost-optimize your pipeline by using an expensive LLM only when necessary and going with your small one when not. It's pretty modular: you can enable or disable it whenever you want, and tune how much traffic gets routed to each model.
2) [ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs](https://openreview.net/forum?id=02f3mUtqnM): Presents a dataset and a benchmark for increasing and measuring the capabilities of LLMs as agents, especially smaller ones, with cool results.
3) [Re-Reading Improves Reasoning in Language Models](https://openreview.net/forum?id=3jXCF5dNpC): A very simple approach that just repeats the question in the prompt, with surprisingly good results; see the sketch after this list.
4) [CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing](https://openreview.net/forum?id=Sx038qxjek): Presents a framework where a more accurate output can be obtained via a verification-correction cycle, using external sources such as websites and knowledge bases.
5) [SuRe: Improving Open-domain Question Answering of LLMs via Summarized Retrieval](https://openreview.net/forum?id=w4DW6qkRmt): Aims to make models give more useful and accurate answers to questions by creating multiple candidate answers and comparing them pairwise. Potentially a bit slow, but provides meaningful improvements.
6) [Large Language Models Are Not Robust Multiple Choice Selectors](https://openreview.net/forum?id=shr9PXz7T0): Improving multiple-choice selection is one of the areas I find most interesting. This paper contributes a simple way to mitigate the well-known "token bias" in those prompts.
7) [ToolChain*: Efficient Action Space Navigation in Large Language Models with A* Search](https://openreview.net/forum?id=B6pQxqUcT8): Adapts the famous A* search algorithm to help LLMs plan more robust solutions to complicated real-life problems. Results look like a significant improvement over similar methods such as ReAct.
8) [LoftQ: LoRA-Fine-Tuning-aware Quantization for Large Language Models](https://openreview.net/forum?id=LzPWWPAdY4): Now I'm not extremely knowledgeable about the technical aspects of fine tuning TBH, but this paper has stellar reviews and claims a significant improvement in terms of decreasing model size with minimal degradation in performance, so I'll just include it.
9) [OctoPack: Instruction Tuning Code Large Language Models](https://openreview.net/forum?id=mw1PWNSWZP): Presents a dataset for fine-tuning models for coding assistance, also fine-tunes a 16B model and shares the results, which seem better than anything other than GPT-4 (admittedly still a far cry from GPT-4, but baby steps I guess).
10) [BooookScore: A systematic exploration of book-length summarization in the era of LLMs](https://openreview.net/forum?id=7Ttk3RzDeu): Has very high scores, and IMHO justifiably so. Provides a carefully crafted evaluation metric for summarization that seems to greatly reduce the need for manual evaluation by humans.
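Since (3) is the easiest of these to try at home, here's a minimal sketch of the re-reading trick; the template wording is my paraphrase of the idea, not the paper's exact prompt:

```python
# Minimal sketch of the "re-reading" prompting trick from paper (3):
# state the question, then state it again before asking for reasoning.
# The wording below is a paraphrase of the idea, not the paper's exact template.

def re_read_prompt(question: str) -> str:
    return (
        f"{question}\n"
        f"Read the question again: {question}\n"
        "Let's think step by step."
    )

if __name__ == "__main__":
    q = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
         "than the ball. How much does the ball cost?")
    print(re_read_prompt(q))
    # Feed the resulting string to whatever model or eval harness you use.
```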
Apart from those there are several great-looking papers that I excluded due to the practicality bias, but they may be the topic of another post on another sub. Let me know if you find any of the above especially useful! | 2023-11-16T15:03:27 | https://www.reddit.com/r/LocalLLaMA/comments/17wp04r/selected_papers_from_iclr_2024/ | coumineol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17wp04r | false | null | t3_17wp04r | /r/LocalLLaMA/comments/17wp04r/selected_papers_from_iclr_2024/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'A2cFENtZsGUk4TdgVLLL25zXBQBwmcPSG87hZLopV-w', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/uqSXAZWnIeNKGNM9S7DGpGLOnzm_mxUMvr6Y0yks4jY.jpg?width=108&crop=smart&auto=webp&s=9c811689cb2c2b238253833845bad24e74bdb5d8', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/uqSXAZWnIeNKGNM9S7DGpGLOnzm_mxUMvr6Y0yks4jY.jpg?width=216&crop=smart&auto=webp&s=79517bf9d18cf488552e43744ad2c342af22479f', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/uqSXAZWnIeNKGNM9S7DGpGLOnzm_mxUMvr6Y0yks4jY.jpg?width=320&crop=smart&auto=webp&s=d4b56b82708f12907eed5cb9688415ff2947f8a5', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/uqSXAZWnIeNKGNM9S7DGpGLOnzm_mxUMvr6Y0yks4jY.jpg?auto=webp&s=71ad6a8a2e6e5fac511957278effb619d3b30998', 'width': 512}, 'variants': {}}]} |
Why is Mistral-7b so capable? Any ideas re: dataset? | 153 | So Mistral-7B is a pretty impressive 7B-param model ... but why is it so capable? Do we have any insights into its dataset? Was it trained far beyond the compute-optimal (Chinchilla) token count? Any attempts at open reproductions, or merges to scale up the number of params? | 2023-11-16T14:56:41 | https://www.reddit.com/r/LocalLLaMA/comments/17wou8y/why_is_mistral7b_so_capable_any_ideas_re_dataset/ | Fun_Tangerine_1086 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17wou8y | false | null | t3_17wou8y | /r/LocalLLaMA/comments/17wou8y/why_is_mistral7b_so_capable_any_ideas_re_dataset/ | false | false | self | 153 | null
Strange noise/sound produced by GPU while running go-llama.cpp bindings | 1 | [removed] | 2023-11-16T13:50:31 | https://www.reddit.com/r/LocalLLaMA/comments/17wncbp/strange_noisesound_produced_by_gpu_while_running/ | KaleidoscopeGlad2649 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17wncbp | false | null | t3_17wncbp | /r/LocalLLaMA/comments/17wncbp/strange_noisesound_produced_by_gpu_while_running/ | false | false | self | 1 | null |
Embeddings for C functions | 1 | [removed] | 2023-11-16T13:23:56 | https://www.reddit.com/r/LocalLLaMA/comments/17wmru9/embeddings_for_c_functions/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17wmru9 | false | null | t3_17wmru9 | /r/LocalLLaMA/comments/17wmru9/embeddings_for_c_functions/ | false | false | default | 1 | null |
Best model for situational awareness and a 4090? | 6 | I'm trying to use an LLM to help me flesh out some filler for my stories. I find that a lot of people who do this put a lot of emphasis on the quality of the writing it produces, whereas I'm looking more for something capable of advanced reasoning and understanding. I plan on going through and rewriting everything to fit my personal prose, but I like to use ChatGPT to get the ball rolling. The problem is that its censorship is a bit much. I don't usually write NSFW stuff, but even things like violence and bloodshed get censored pretty heavily.
Is there a model that excels at understanding more than others and can be used on a 4090? I don't care about speed, just decent results. | 2023-11-16T12:37:40 | https://www.reddit.com/r/LocalLLaMA/comments/17wlvbb/best_model_for_situational_awareness_and_a_4090/ | Ok-Scar011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 17wlvbb | false | null | t3_17wlvbb | /r/LocalLLaMA/comments/17wlvbb/best_model_for_situational_awareness_and_a_4090/ | false | false | self | 6 | null