title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Transformers.js - Thoughts? | 14 | Hello, what do you guys think of this project? Any experience using it you'd like to share?
​
[https://huggingface.co/docs/transformers.js/index](https://huggingface.co/docs/transformers.js/index) | 2023-07-10T17:39:45 | https://www.reddit.com/r/LocalLLaMA/comments/14w0p1p/transformersjs_thoughts/ | MuffinB0y | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14w0p1p | false | null | t3_14w0p1p | /r/LocalLLaMA/comments/14w0p1p/transformersjs_thoughts/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'AX4llt7hMbwyUA30SSMnwQhe8bEYQ1aeE6sVopJ8qEs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9EfywTYHBxmsqwfeGrKeF-JOyXb3YLnAnO9JqzNli2A.jpg?width=108&crop=smart&auto=webp&s=273a079072cc64ef57183fb7f0c6c640088fcffa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9EfywTYHBxmsqwfeGrKeF-JOyXb3YLnAnO9JqzNli2A.jpg?width=216&crop=smart&auto=webp&s=98cd1f7822cc345584e613847ac9df1e2cac03a3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9EfywTYHBxmsqwfeGrKeF-JOyXb3YLnAnO9JqzNli2A.jpg?width=320&crop=smart&auto=webp&s=a4c06b1ed48013295c10a4684570a4589c8b72a7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9EfywTYHBxmsqwfeGrKeF-JOyXb3YLnAnO9JqzNli2A.jpg?width=640&crop=smart&auto=webp&s=f5db78c8f6aade7d3459b8d68b750ce0298c384a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9EfywTYHBxmsqwfeGrKeF-JOyXb3YLnAnO9JqzNli2A.jpg?width=960&crop=smart&auto=webp&s=3e090e87fd340a6f401f66e3e5e3655f98102bc9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9EfywTYHBxmsqwfeGrKeF-JOyXb3YLnAnO9JqzNli2A.jpg?width=1080&crop=smart&auto=webp&s=958cfeb13e58c2a8a4fd3fecdf6d9d3aa461b7bc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9EfywTYHBxmsqwfeGrKeF-JOyXb3YLnAnO9JqzNli2A.jpg?auto=webp&s=740ffb1b00addb07c58df569f58902f40265e968', 'width': 1200}, 'variants': {}}]} |
3090 Cards | 13 | Hello All,
Im pretty new here, and im wanting to start running local LLMs. I am using gpt4all, but its pretty slow. My current GPU is a cheapy. I have read through the posts here, and it looks like a 3090 is a good all around choice without breaking the bank too much.
Looking for advice on buying one from Ebay, I see numerous brands and prices. Do you guys recommend any specific brand, like EVGA for example ? Most of the cards im finding our used, some without even a box. Just looking for any advice on purchasing a 3090 on ebay. Thanks in advance!!
​
Also, is it worth the extra money to get the Ti version ?
​
An example I found on ebay :
​
[https://www.ebay.com/itm/225660205739?epid=6041238838&hash=item348a657aab:g:aUsAAOSwZBVknyuq&amdata=enc%3AAQAIAAAAwA0jtn7WN5cDxOIaWB2Qui%2FaFFIB%2BqlbL5%2BJYQ5ON2MQT7FYjrBk6IsSUSeLzEhBrzE%2BlrjxmZbe5iTmTH%2FfxAgBYy2vUXn2%2BeuJKzRlFqhquQ3Vzg%2FMmCOxMrlMGXY2l6FOgA%2BlQKtc0lWdl%2BCx%2FmjMp%2BZKO8JxI1ZT6HZmV3fILnV%2BCiVkc25H0jPF9lYS1JLNum%2Bfph5z3hczwKVI5ngsVg6f5typLTHo6BNkIyc5%2FgVNu4AwPDf%2FfjIZafo4wQ%3D%3D%7Ctkp%3ABk9SR5q05I2oYg](https://www.ebay.com/itm/225660205739?epid=6041238838&hash=item348a657aab:g:aUsAAOSwZBVknyuq&amdata=enc%3AAQAIAAAAwA0jtn7WN5cDxOIaWB2Qui%2FaFFIB%2BqlbL5%2BJYQ5ON2MQT7FYjrBk6IsSUSeLzEhBrzE%2BlrjxmZbe5iTmTH%2FfxAgBYy2vUXn2%2BeuJKzRlFqhquQ3Vzg%2FMmCOxMrlMGXY2l6FOgA%2BlQKtc0lWdl%2BCx%2FmjMp%2BZKO8JxI1ZT6HZmV3fILnV%2BCiVkc25H0jPF9lYS1JLNum%2Bfph5z3hczwKVI5ngsVg6f5typLTHo6BNkIyc5%2FgVNu4AwPDf%2FfjIZafo4wQ%3D%3D%7Ctkp%3ABk9SR5q05I2oYg)
​
​ | 2023-07-10T17:38:14 | https://www.reddit.com/r/LocalLLaMA/comments/14w0np4/3090_cards/ | Noack1 | self.LocalLLaMA | 2023-07-10T17:42:50 | 0 | {} | 14w0np4 | false | null | t3_14w0np4 | /r/LocalLLaMA/comments/14w0np4/3090_cards/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'Tf6AyOgGIyCab77aAxpujOB-73jgRWETES5SaEo2N40', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/fCZuW7Yx-Kz6ItmTuzSHjXCGg2d8SbgMgitr14rfCGY.jpg?width=108&crop=smart&auto=webp&s=7a71699c6a22678297630d80577931edbfb12b06', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/fCZuW7Yx-Kz6ItmTuzSHjXCGg2d8SbgMgitr14rfCGY.jpg?width=216&crop=smart&auto=webp&s=24734d31da4366f9b7ae429f443b73d6d4c18997', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/fCZuW7Yx-Kz6ItmTuzSHjXCGg2d8SbgMgitr14rfCGY.jpg?width=320&crop=smart&auto=webp&s=943d3a48e453cac4b4cd6439677f4932df25702f', 'width': 320}], 'source': {'height': 375, 'url': 'https://external-preview.redd.it/fCZuW7Yx-Kz6ItmTuzSHjXCGg2d8SbgMgitr14rfCGY.jpg?auto=webp&s=133776b01f2f01136c7b5eb53237ea643530edb9', 'width': 500}, 'variants': {}}]} |
Best 13b ggml story/novel models? | 1 | [removed] | 2023-07-10T17:25:01 | https://www.reddit.com/r/LocalLLaMA/comments/14w0b1s/best_13b_ggml_storynovel_models/ | Artemis369212 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14w0b1s | false | null | t3_14w0b1s | /r/LocalLLaMA/comments/14w0b1s/best_13b_ggml_storynovel_models/ | false | false | default | 1 | null |
Interesting Article -> What is China doing on AI governance | 0 | Found this on my twitter feed (twitter is some sort of shouting app, I didn't know either )
Beijing is leading the way in AI regulation, releasing groundbreaking new strategies to govern algorithms, chatbots, and more. Global partners need a better understanding of what, exactly, this regulation entails, what it says about China’s AI priorities, and what lessons other AI regulators can learn.
[https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117](https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117)
Why interesting? - thought it could/would be emulated to some point elsewhere.
"But in the West, China’s regulations are often dismissed as irrelevant or seen purely through the lens of a geopolitical competition to write the rules for AI. Instead, these regulations deserve careful study on how they will affect China’s AI trajectory and what they can teach policymakers around the world about regulating the technology. Even if countries fundamentally disagree on the specific content of a regulation, they can still learn from each other when it comes to the underlying structures and technical feasibility of different regulatory approaches." | 2023-07-10T16:45:01 | https://www.reddit.com/r/LocalLLaMA/comments/14vz89t/interesting_article_what_is_china_doing_on_ai/ | FPham | self.LocalLLaMA | 2023-07-10T16:49:50 | 0 | {} | 14vz89t | false | null | t3_14vz89t | /r/LocalLLaMA/comments/14vz89t/interesting_article_what_is_china_doing_on_ai/ | false | false | self | 0 | null |
What's the closest thing we have to GPT4's code interpreter right now? | 56 | Is there anything out there that can do something similar and can be run locally? | 2023-07-10T16:41:22 | https://www.reddit.com/r/LocalLLaMA/comments/14vz50w/whats_the_closest_thing_we_have_to_gpt4s_code/ | malkauns | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14vz50w | false | null | t3_14vz50w | /r/LocalLLaMA/comments/14vz50w/whats_the_closest_thing_we_have_to_gpt4s_code/ | false | false | self | 56 | null |
Function of saving character chat history in ooba? | 1 | Does saving a character’s chat history allow the character to reference it in the future for context? | 2023-07-10T15:42:43 | https://www.reddit.com/r/LocalLLaMA/comments/14vxkvl/function_of_saving_character_chat_history_in_ooba/ | Mordekaiseerr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14vxkvl | false | null | t3_14vxkvl | /r/LocalLLaMA/comments/14vxkvl/function_of_saving_character_chat_history_in_ooba/ | false | false | default | 1 | null |
So with code interpreter, how long before I can use openCL to run 3 or 4 a770’s giving me 48 or 64gb or vram for $1000… | 12 | I sure as hell won’t be the one to do it but I sure hope one of you crazy mofo’s does it. I can dream… | 2023-07-10T14:49:40 | https://www.reddit.com/r/LocalLLaMA/comments/14vw631/so_with_code_interpreter_how_long_before_i_can/ | BlandUnicorn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14vw631 | false | null | t3_14vw631 | /r/LocalLLaMA/comments/14vw631/so_with_code_interpreter_how_long_before_i_can/ | false | false | self | 12 | null |
AIDE : LLM shell and docs-set interrogator | 4 | hi,
I used privateGPT as source to create abit more useful shell and docs-set interrogator
​
[AIDE](https://github.com/vsraptor/aide/tree/main)
This in general is a Shell around Large Language Model (LLM), at least for now. It is based on privateGPT code, which I refactored, componetized and enchanced with additional features.
In short this tool allows you to interact with different document-sets OR simply query a LLM.
### Features
1. Profile support
- multiple docs stores and ability to switch between them on the fly.
- multiple models and ability to switch between them on the fly.
2. Non-question Commands support to do usefull things
3. System prompts support
4. Better CLI interface
5. Direct and QA query modes.
6. Keeps .history of the commands
6. Keeps .chat_history
​ | 2023-07-10T14:44:59 | https://www.reddit.com/r/LocalLLaMA/comments/14vw1il/aide_llm_shell_and_docsset_interrogator/ | Double-Lavishness-77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14vw1il | false | null | t3_14vw1il | /r/LocalLLaMA/comments/14vw1il/aide_llm_shell_and_docsset_interrogator/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'e1XJNr2ZJLYD-Ks7sPRRplIP9SMjScjMXSPrNiidhjE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UBya4VtLqKerNu2bIQSIt0IbsSMkdbiupogIiZXKx70.jpg?width=108&crop=smart&auto=webp&s=d0240466746eb9f6566789cd8a5c8c123cfb1b5e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UBya4VtLqKerNu2bIQSIt0IbsSMkdbiupogIiZXKx70.jpg?width=216&crop=smart&auto=webp&s=42145069eb97374b3e4262fe27fe8a053b99f772', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UBya4VtLqKerNu2bIQSIt0IbsSMkdbiupogIiZXKx70.jpg?width=320&crop=smart&auto=webp&s=8be78b3118e0ccc8a494abb493f9ed9ba11b05a5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UBya4VtLqKerNu2bIQSIt0IbsSMkdbiupogIiZXKx70.jpg?width=640&crop=smart&auto=webp&s=d37ca4faf9c8f023f3be8fa7b330cecee24183f0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UBya4VtLqKerNu2bIQSIt0IbsSMkdbiupogIiZXKx70.jpg?width=960&crop=smart&auto=webp&s=e99fd38c6945841a66bc0c068fa2939b21539925', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UBya4VtLqKerNu2bIQSIt0IbsSMkdbiupogIiZXKx70.jpg?width=1080&crop=smart&auto=webp&s=343e202afe31f91eb81cb3b2dbd2f309740558c4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UBya4VtLqKerNu2bIQSIt0IbsSMkdbiupogIiZXKx70.jpg?auto=webp&s=d88b86255c2d01185140602cde31d4b3dfd9e7c7', 'width': 1200}, 'variants': {}}]} |
Any benefit in the long term future for 64gb vs 128gb RAM on mac m2 ultra? | 1 | [removed] | 2023-07-10T14:06:31 | https://www.reddit.com/r/LocalLLaMA/comments/14vv1h4/any_benefit_in_the_long_term_future_for_64gb_vs/ | SpacialGalacticism | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14vv1h4 | false | null | t3_14vv1h4 | /r/LocalLLaMA/comments/14vv1h4/any_benefit_in_the_long_term_future_for_64gb_vs/ | false | false | default | 1 | null |
Multimodal LLM for infographics images | 3 | There are a few open source multimodal LLM for images, any suggestions on which one is the best when it comes to using infographics images (ppt slide images) where usually there is a flow diagram with text or image with text in it. Anyone here have experience in this type of infographics data ? | 2023-07-10T13:51:09 | https://www.reddit.com/r/LocalLLaMA/comments/14vunc6/multimodal_llm_for_infographics_images/ | Jealous_Ad4067 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14vunc6 | false | null | t3_14vunc6 | /r/LocalLLaMA/comments/14vunc6/multimodal_llm_for_infographics_images/ | false | false | self | 3 | null |
local LLM agent? | 10 | I want to use a local LLM to gather information from the internet and summarize it for me (or some other action)
I was able to make a dumb bot (react framework) using wizardlm 13b and 30b (1.1 for 13b) (using something similar to this: https://github.com/QuangBK/localLLM_langchain/blob/main/localLLM_agent.ipynb, without langchain. But it seems like it really struggles with following the correct format when giving the output. Sometimes, its thought is to search, but instead, it parses a web page. I have tried using natural language and json format output. I've tweeked the prompt several times to varying degrees of success. I've played with the temperature with some success.
Through my experimentation, it seems like the model will try, but after 1 or 2 prompts, it forgets or goes into an infinite loop.
I am starting to feel like these models either are not powerful enough or maybe my technique is wrong.
What can I try to improve the agent? Should I try a different prompt? Should I try a different technique? I have been using the ooba api to run these experiments, i saw the guidance implementation, but not sure how it will work with ooba. I have also seen mentioned that people train their own lora for creating an agent. Any advice on getting an agent that can perform a search and summarize a document? Anything I can read? | 2023-07-10T11:36:21 | https://www.reddit.com/r/LocalLLaMA/comments/14vrjcl/local_llm_agent/ | klop2031 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14vrjcl | false | null | t3_14vrjcl | /r/LocalLLaMA/comments/14vrjcl/local_llm_agent/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'b07J4sBspwa3QMgw1RwJa9t7EjV_wdSqTwVlOv4qrng', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yax1xpDbiFGh6zpQEuYsoQS7oHHgHqD_YKZgOxP_XmM.jpg?width=108&crop=smart&auto=webp&s=1b4d0843f1f7483e8e4f99731945dce0962a0fdc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yax1xpDbiFGh6zpQEuYsoQS7oHHgHqD_YKZgOxP_XmM.jpg?width=216&crop=smart&auto=webp&s=8c898b40b6180e550c3572deb47eb26c8c943e10', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yax1xpDbiFGh6zpQEuYsoQS7oHHgHqD_YKZgOxP_XmM.jpg?width=320&crop=smart&auto=webp&s=3d5d4ca7c699d1f3fde56a609b355870ff411bc4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yax1xpDbiFGh6zpQEuYsoQS7oHHgHqD_YKZgOxP_XmM.jpg?width=640&crop=smart&auto=webp&s=06a15d130fc2b96a8bf18b231f0c4900cd6920c3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yax1xpDbiFGh6zpQEuYsoQS7oHHgHqD_YKZgOxP_XmM.jpg?width=960&crop=smart&auto=webp&s=f5f05fa820a024351ac896d0bf6cb375c25233b7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yax1xpDbiFGh6zpQEuYsoQS7oHHgHqD_YKZgOxP_XmM.jpg?width=1080&crop=smart&auto=webp&s=559ba12d1127cfb8352e0f31481c0ddfb0938077', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yax1xpDbiFGh6zpQEuYsoQS7oHHgHqD_YKZgOxP_XmM.jpg?auto=webp&s=2ac442a235f94f6f4505bf3ce3009618cc5b4384', 'width': 1200}, 'variants': {}}]} |
What coding models can I run on my rig? | 4 | I am looking into running something like wizardcoder or starcoder. Target languages would be python, bash, HCL and yaml.
I need long contexts and lots of tokens.
My rig is:
12700K 5.2 GHz
96 GB of DDR4 RAM
3080Ti 12 GB | 2023-07-10T10:18:38 | https://www.reddit.com/r/LocalLLaMA/comments/14vpzqi/what_coding_models_can_i_run_on_my_rig/ | l0lcat19 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14vpzqi | false | null | t3_14vpzqi | /r/LocalLLaMA/comments/14vpzqi/what_coding_models_can_i_run_on_my_rig/ | false | false | self | 4 | null |
Fine tuning with long context conversations? | 1 | I cleaned and instruct formatted the Teatime proxy logs, but there are a lot of them that are just under 8196 tokens per chat. Does anyone have any suggestions as to what I should do? Does fine tuning with 8K context require more VRAM? I'm reading all sorts of things, including that fine tuning on data over 2k context decreases performance with shorter contexts. Should I just be breaking up the long conversations into 4K chunks? 2K chunks? Or am I ok just leaving it as is? Does it make a difference if the end model is going to be converted to ggml? I was hoping to fine tune on openllama. | 2023-07-10T08:14:47 | https://www.reddit.com/r/LocalLLaMA/comments/14vnr0u/fine_tuning_with_long_context_conversations/ | CheshireAI | self.LocalLLaMA | 2023-07-10T11:54:15 | 0 | {} | 14vnr0u | false | null | t3_14vnr0u | /r/LocalLLaMA/comments/14vnr0u/fine_tuning_with_long_context_conversations/ | false | false | default | 1 | null |
My experience on starting with fine tuning LLMs with custom data | 373 | I keep seeing questions about "How I make a model to answer based on my data. I have \[wiki, pdfs, whatever other documents\]"
Currently I am making a living by helping companies built chatbots fine tuned on their custom data.
Most of those are support or Q&A chatbots to answer questions from clients at any hour and day. There are also internal chatbots to be used to train new people joining the company and several other use cases.
So, I was thinking to share my experience (it might be wrong and I might be doing everything wrong, but it is my experience and based on this I have a dozen chatbots running in production and talking with clients with few dozen more in different stages of testing).
The actual training / fine-tuning, while it might initially seem like a daunting task due to the plethora of tools available (FastChat, Axolot, Deepspeed, transformers, LoRA, qLoRA, and more), I must tell you - this is actually the easiest part of the whole process! All you need to do is peek into their repositories, grab an example, and tweak it to fit your model and data.
However, the real challenge lies in preparing the data. A massive wiki of product documentation, a thousand PDFs of your processes, or even a bustling support forum with countless topics - they all amount to nothing if you don't have your data in the right format. Projects like Dolly and Orca have shown us how enriching data with context or system prompts can significantly improve the final model's quality. Other projects, like Vicuna, use chains of multi-step Q&A with solid results. There are many other datasets formats, depending of the expected result. For example, a dataset for quotes is much simpler, because there will be no actual interaction, the quote is a quote.
Personally, I mostly utilize the #instruction, #input, #output format for most of my fine-tuning tasks.
So, shaping your data in the correct format is, without a doubt, the most difficult and time-consuming step when creating a Language Learning Model (LLM) for your company's documentation, processes, support, sales, and so forth.
Many methods can help you tackle this issue. Most choose to employ GPT4 for assistance. Privacy shouldn't be a concern if you're using Azure APIs, though they might be more costly, but offer privacy. However, if your data is incredibly sensitive, refrain from using them. And remember, any data used to train a public-facing chatbot should not contain any sensitive information.
Automated tools can only do so much; manual work is indispensable and in many cases, difficult to outsource. Those who genuinely understand the product/process/business should scrutinize and cleanse the data. Even if the data is top-notch and GPT4 does a flawless job, the training could still fail. For instance, outdated information or contradictory responses can lead to poor results.
In many of my projects, we involve a significant portion of the organization in the process. I develop a simple internal tool allowing individuals to review rows of training data and swiftly edit the output or flag the entire row as invalid.
Once you've curated and correctly formatted your data, the fine-tuning can commence. If you have a vast amount of data, i.e., tens of thousands of instructions, it's best to fine-tune the actual model. To do this, refer to the model repo and mimic their initial training process with your data.
However, if you're working with a smaller dataset, a LoRA or qLoRA fine-tuning would be more suitable. For this, start with examples from LoRA or qLoRA repositories, use booga UI, or experiment with different settings. Getting a good LoRA is a trial and error process, but with time, you'll become good at it.
Once you have your fine-tuned model, don't expose it directly to clients. Instead, run client queries through the model, showcasing the responses internally and inviting internal users to correct the answers. Depending on the percentage of responses modified by users, you might need to execute another fine-tuning with this new data or completely redo the fine-tuning if results were really poor.
On the hardware front, while it's possible to train a qLoRA on a single 3090, I wouldn't recommend it. There are too many limitations, and even browsing the web while training could lead to OOM. I personally use a cloud A6000 with 48GB VRAM, which costs about 80 cents per hour.
For anything larger than a 13B model, whether it's LoRA or full fine-tuning, I'd recommend using A100. Depending on the model and dataset size, and parameters, I run 1, 4, or 8 A100s. Most tools are tested and run smoothly on A100, so it's a safe bet. I once got a good deal on H100, but the hassle of adapting the tools was too overwhelming, so I let it go.
Lastly, if you're looking for a quick start, try embeddings. This is a cheap, quick, and acceptable solution for internal needs. You just need to throw all internal documents into a vector db, put a model in front for searching, and voila! With no coding required, you can install booga with the superbooga extension to get started.
​
UPDATE:
I saw some questions repeating, sorry that I am not able to answer to everyone, but I am updating here, hope that this helps. Here are some answers for the repeated questions:
1. I do not know how to train a pre-trained model with "raw" data, like big documents. From what I know, any further training of a pre-trained model is done by feeding data tokenized and padded to maximum context size of the original model, no more.
2. Before starting, make sure that the problem that needs to be solved and the expectations are fully defined. "Teaching the model about xyz" is not a problem, it is a wish. It is hard to solve "wishes", but we can solve problems. For example: "I want to ask the model about xyz and get accurate answers based on abc data". This is needed to offer non stop answering chat for customers. We expect customer to ask "example1, 2, 3, .. 10" and we expect the answers to be in this style "example answers with example addressation, formal, informal, etc). We do not want the chat to engage in topics not related to xyz. If customer engage in such topics, politely explain that have no knowledge on that. (with example). This is a better description of the problem.
3. It is important to define the target audience and how the model will be used. There is a big difference of using it internally inside an organisation or directly expose it to the clients. You can get a lot cheaper when it is just an internal helper and the output can be ignored if not good. For example, in this case, full documents can be ingested via vectordb and use the model to answer questions about the data from the vectordb. If you decide to go with the embeddings, this can be really helpful: [https://github.com/HKUNLP/instructor-embedding](https://github.com/HKUNLP/instructor-embedding)
4. It is important to define what is the expected way to interact with the model. Do you want to chat with it? Should it follow instructions? Do you want to provide a context and get output in the provided context? Do you want to complete your writing (like Github Copilot or Starcoder)? Do you want to perform specific tasks (eg grammar checking, translation, classification of something etc)?
5. After all the above are decided and clarified and you decided that embeddings are not what you want and want to proceed further with fine tuning, it is the time to decide on the data format.
1. \#instruction,#input,#output is a popular data format and can be used to train for both chat and instruction following. This is an example dataset in this format: [https://huggingface.co/datasets/yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) . I am using this format the most because it is the easiest to format unstructured data into, having the optional #input it makes it very flexible
2. It was proven that having better structured, with extra information training data will produce better results. Here is Dolly dataset that is using a context to enrich the data: [https://huggingface.co/datasets/databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
3. A newer dataset that further proved that data format and quality is the most important in the output is Orca format. It is using a series of system prompts to categorize each data row (similar with a tagging system). [https://huggingface.co/datasets/Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
4. We don't need complicated data structure always. For example, if the expecation is that we prompt the model "Who wrote this quote: \[famous quote content\]?" and we expect to only get name of the author, then a simple format is enough, like it is here: [https://huggingface.co/datasets/Abirate/english\_quotes](https://huggingface.co/datasets/Abirate/english_quotes)
5. For a more fluid conversation, there is the Vicuna format, an Array of Q&A. Here is an example: [https://huggingface.co/datasets/ehartford/wizard\_vicuna\_70k\_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
6. There are other datasets formats, in some the output is partially masked (for completion suggestion models), but I have not worked and I am not familiar with those formats.
6. From my experiments, things that can be totally wrong:
1. directly train a pre-trained model with less than 50000 data row is more or less useless. I would think of directly train a model when I have more than 100k data rows, for a 13B model and at least 1 mil for a 65B model.
2. with smaller datasets, it is efficient to train LoRA of qLoRA.
3. I prefer to train a 4 bit qLora 30B model than a fp16 LoRA for a 13B model (about same hw requirements, but the results with the 4bit 30B model are superior to the 13B fp16 model)
​ | 2023-07-10T07:57:27 | https://www.reddit.com/r/LocalLLaMA/comments/14vnfh2/my_experience_on_starting_with_fine_tuning_llms/ | Ion_GPT | self.LocalLLaMA | 2023-07-11T06:23:21 | 1 | {'gid_2': 1} | 14vnfh2 | false | null | t3_14vnfh2 | /r/LocalLLaMA/comments/14vnfh2/my_experience_on_starting_with_fine_tuning_llms/ | false | false | self | 373 | {'enabled': False, 'images': [{'id': 'fOQFUmKkqkswgcf8o9RyOEbRNk9YtCK2Pg8J5eBVG0s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JtAMbGflq-r-a7CZyIiyMHJbXoIdiVaDOQv1W64JC-k.jpg?width=108&crop=smart&auto=webp&s=b56e71127b4007f17a55457fe52847f9d7c90c2a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JtAMbGflq-r-a7CZyIiyMHJbXoIdiVaDOQv1W64JC-k.jpg?width=216&crop=smart&auto=webp&s=ba4d83976597b4633957daf050992fb9516a2942', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JtAMbGflq-r-a7CZyIiyMHJbXoIdiVaDOQv1W64JC-k.jpg?width=320&crop=smart&auto=webp&s=4a18c2ac60e4aa8bca2aeae921a6019ccac7dd90', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JtAMbGflq-r-a7CZyIiyMHJbXoIdiVaDOQv1W64JC-k.jpg?width=640&crop=smart&auto=webp&s=641cc7e9888cc6f9ce583704cd4057378fbd0961', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JtAMbGflq-r-a7CZyIiyMHJbXoIdiVaDOQv1W64JC-k.jpg?width=960&crop=smart&auto=webp&s=94e688b009c774fc2f7901bb9f913dd6bffdaaf9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JtAMbGflq-r-a7CZyIiyMHJbXoIdiVaDOQv1W64JC-k.jpg?width=1080&crop=smart&auto=webp&s=37f446fbd9ea3c78a99c8275cfc88690df88930e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JtAMbGflq-r-a7CZyIiyMHJbXoIdiVaDOQv1W64JC-k.jpg?auto=webp&s=03dbb888530b6192d7169694204bc34c81551f2e', 'width': 1200}, 'variants': {}}]} |
I NEED A MODEL THAT CAN RATE ARTICLES | 0 | So I'm working on an small project for scraping tech news and articles and sort thier importance or show the breaking new up top , The only problem is which LLM model that is "Fast,Small,Efficient" for this task , i tried to use gpt4all with falcon7b , but it doesn't actually run as intended , not like chatgpt
And if my method and approach is wrong , is there another way that don't require training new model or fine tune one? , because I don't have the resources for this | 2023-07-10T07:38:56 | Hussei911 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14vn41b | false | null | t3_14vn41b | /r/LocalLLaMA/comments/14vn41b/i_need_a_model_that_can_rate_articles/ | false | false | 0 | {'enabled': True, 'images': [{'id': '6v5IizTFSOTsiiOVkgDF5yHClbOTtFTU3J9B-GhyHk0', 'resolutions': [{'height': 184, 'url': 'https://preview.redd.it/n0cy3ib1d3bb1.jpg?width=108&crop=smart&auto=webp&v=enabled&s=b7f2122a8e9d4f5f7f1403fd322a45992f844a18', 'width': 108}, {'height': 368, 'url': 'https://preview.redd.it/n0cy3ib1d3bb1.jpg?width=216&crop=smart&auto=webp&v=enabled&s=a7da2a84af3c8e32cc84a3aecc9a9cb265aba3d6', 'width': 216}, {'height': 545, 'url': 'https://preview.redd.it/n0cy3ib1d3bb1.jpg?width=320&crop=smart&auto=webp&v=enabled&s=98d6d5212c72cdb5055b57e05ccdb6e3c6887415', 'width': 320}, {'height': 1090, 'url': 'https://preview.redd.it/n0cy3ib1d3bb1.jpg?width=640&crop=smart&auto=webp&v=enabled&s=7306493671c110288db337030c9c866cfa7e0ae6', 'width': 640}, {'height': 1636, 'url': 'https://preview.redd.it/n0cy3ib1d3bb1.jpg?width=960&crop=smart&auto=webp&v=enabled&s=9d76e91fb56b92c2b5e5c96823fb8ecd58f21cc6', 'width': 960}, {'height': 1841, 'url': 'https://preview.redd.it/n0cy3ib1d3bb1.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=094ed4a3a2f3361d8fb22f9f4948219dfe405b9c', 'width': 1080}], 'source': {'height': 1841, 'url': 'https://preview.redd.it/n0cy3ib1d3bb1.jpg?auto=webp&v=enabled&s=724804176a1435fe8849e315e724c28d0bf43a02', 'width': 1080}, 'variants': {}}]} | ||
Prompt Injection: How to Prevent It or Should We Prevent It? | 1 | 2023-07-10T07:08:22 | https://utkusen.medium.com/prompt-injection-how-to-prevent-it-or-should-we-prevent-it-ef62d9355191 | utku1337 | utkusen.medium.com | 1970-01-01T00:00:00 | 0 | {} | 14vmlec | false | null | t3_14vmlec | /r/LocalLLaMA/comments/14vmlec/prompt_injection_how_to_prevent_it_or_should_we/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'U-XOpzBhj9yNB43tonlrTX4lXnH5D4U1q6L8FZXZnD4', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/sNxvwMWQplnWRd_lwn5sIT03g9-d6D42jCZbRj2-Rmg.jpg?width=108&crop=smart&auto=webp&v=enabled&s=9ec39acc26f3de046d8671ab219cf73f9252c51d', 'width': 108}, {'height': 161, 'url': 'https://external-preview.redd.it/sNxvwMWQplnWRd_lwn5sIT03g9-d6D42jCZbRj2-Rmg.jpg?width=216&crop=smart&auto=webp&v=enabled&s=dee3c708c3f7a503922a750cb2f6f18d8424a286', 'width': 216}, {'height': 239, 'url': 'https://external-preview.redd.it/sNxvwMWQplnWRd_lwn5sIT03g9-d6D42jCZbRj2-Rmg.jpg?width=320&crop=smart&auto=webp&v=enabled&s=a5f6d5c56cd2aa42519fb47c5daf8310438fc118', 'width': 320}, {'height': 478, 'url': 'https://external-preview.redd.it/sNxvwMWQplnWRd_lwn5sIT03g9-d6D42jCZbRj2-Rmg.jpg?width=640&crop=smart&auto=webp&v=enabled&s=d926f1978ebb89aa1d24d5f137c86e3a0a03aa21', 'width': 640}, {'height': 718, 'url': 'https://external-preview.redd.it/sNxvwMWQplnWRd_lwn5sIT03g9-d6D42jCZbRj2-Rmg.jpg?width=960&crop=smart&auto=webp&v=enabled&s=1230fb5b42bc2d165b504b0c8648307084debdfe', 'width': 960}, {'height': 808, 'url': 'https://external-preview.redd.it/sNxvwMWQplnWRd_lwn5sIT03g9-d6D42jCZbRj2-Rmg.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=15019090f59a225213bf34cbcb2f633e61ed6296', 'width': 1080}], 'source': {'height': 898, 'url': 'https://external-preview.redd.it/sNxvwMWQplnWRd_lwn5sIT03g9-d6D42jCZbRj2-Rmg.jpg?auto=webp&v=enabled&s=5c1fe2ca6e3e6f5f028904e3aeb896d1075ae540', 'width': 1200}, 'variants': {}}]} | |
Uncensored Llama/Falcon model hosting and API? | 13 | I've been trying to run a couple of the uncensored wizardLM models using llama.cpp and llama-cpp-python, but it's an absolute pain to get everything working on wsl and cuda and making sure the gpu is used. Is there any API hosting site similar to [https://replicate.com/](https://replicate.com/) but for uncensored models? | 2023-07-10T05:27:45 | https://www.reddit.com/r/LocalLLaMA/comments/14vkrwm/uncensored_llamafalcon_model_hosting_and_api/ | IcyMissile | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14vkrwm | false | null | t3_14vkrwm | /r/LocalLLaMA/comments/14vkrwm/uncensored_llamafalcon_model_hosting_and_api/ | false | false | self | 13 | null |
(Noob) with a question about loading models in python. | 1 | Hi guys, I was wondering if there was anyway to load these models in python. I looked around online for python scripts, but nothing seems to work! | 2023-07-10T03:37:37 | https://www.reddit.com/r/LocalLLaMA/comments/14vinx2/noob_with_a_question_about_loading_models_in/ | Norok_The_Diablo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14vinx2 | false | null | t3_14vinx2 | /r/LocalLLaMA/comments/14vinx2/noob_with_a_question_about_loading_models_in/ | false | false | default | 1 | null |
Inference server for running multiple LoRAs alongside a single foundational LLM? | 5 | Hi,
There are a few good options to efficiently run an inference server for a single LLM - such as NVIDIA triton combined with [vLLM](https://github.com/vllm-project/vllm).
However, I would like to run multiple LoRAs alongside a single big LLM. I do not want to merge these and run multiple LLMs. Are there any options?
Thanks! | 2023-07-10T02:49:36 | https://www.reddit.com/r/LocalLLaMA/comments/14vhp5q/inference_server_for_running_multiple_loras/ | JKStreamAdmin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14vhp5q | false | null | t3_14vhp5q | /r/LocalLLaMA/comments/14vhp5q/inference_server_for_running_multiple_loras/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'NMAc4c9-X5ZRSghyt_Re8vxzoaYvdQ9ah5qk6eTSu8w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eFBh5-PxmrfeAqgBVux5kuytiK04dn3BfsBtPSDIVvY.jpg?width=108&crop=smart&auto=webp&s=955ef4fe2d98719e3d4182260101d9d3d02d84b7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/eFBh5-PxmrfeAqgBVux5kuytiK04dn3BfsBtPSDIVvY.jpg?width=216&crop=smart&auto=webp&s=a3e4b4c37a3376b2d81b0356ade02f460d70ae6a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/eFBh5-PxmrfeAqgBVux5kuytiK04dn3BfsBtPSDIVvY.jpg?width=320&crop=smart&auto=webp&s=523ea30c13bfff972977fe9f5cac2ad5c193eee6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/eFBh5-PxmrfeAqgBVux5kuytiK04dn3BfsBtPSDIVvY.jpg?width=640&crop=smart&auto=webp&s=cd6b16d4939527e0bc40e64ad0fe9faf241df45e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/eFBh5-PxmrfeAqgBVux5kuytiK04dn3BfsBtPSDIVvY.jpg?width=960&crop=smart&auto=webp&s=e522a0594b07c1d09b2d662b4c9947bc956248f1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/eFBh5-PxmrfeAqgBVux5kuytiK04dn3BfsBtPSDIVvY.jpg?width=1080&crop=smart&auto=webp&s=7b1f5c6bbddbb678c500137d63a85220a774a791', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/eFBh5-PxmrfeAqgBVux5kuytiK04dn3BfsBtPSDIVvY.jpg?auto=webp&s=3298b53ecc53be1c129e536f3c4ff6e53bf8b6f8', 'width': 1200}, 'variants': {}}]} |
GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest | 70 | 2023-07-10T01:26:48 | https://v.redd.it/hrxxtw0li1bb1 | ninjasaid13 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14vfz0r | false | {'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/hrxxtw0li1bb1/DASHPlaylist.mpd?a=1692208707%2COTc0MDFkZTE3ZTRjYzE0ZDk4ZTg3NjAwNjY2ZGY1NWMxMWJjMTliYWI4ZjA3NTliYzlhNDhjZTg4NmNmYzEzOA%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/hrxxtw0li1bb1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/hrxxtw0li1bb1/HLSPlaylist.m3u8?a=1692208707%2COThmMjcyZTJmYjE0OTFmOTNhM2E2YmUwZDBlYjdkZjZkMWRlNGQ4MGE5NjgzYzQ5YjE2MWFlZmMzMjg2NDY5OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hrxxtw0li1bb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_14vfz0r | /r/LocalLLaMA/comments/14vfz0r/gpt4roi_instruction_tuning_large_language_model/ | false | false | 70 | {'enabled': False, 'images': [{'id': 'iDtS3s8aa3SNojEmQvrRte6eIFwMrc3riVM1YsoqPyk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/x8L6X5GEiCDuFcd-SXd5IW2plIa4NDIy71PzpuEr3yM.png?width=108&crop=smart&format=pjpg&auto=webp&s=a9b0522c329c24f5a9c1e5cfff3b79b4fd05bf70', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/x8L6X5GEiCDuFcd-SXd5IW2plIa4NDIy71PzpuEr3yM.png?width=216&crop=smart&format=pjpg&auto=webp&s=7f92f48dfacc5190fa2a78e691f0d3d7843a9e09', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/x8L6X5GEiCDuFcd-SXd5IW2plIa4NDIy71PzpuEr3yM.png?width=320&crop=smart&format=pjpg&auto=webp&s=33caa91931b1d305fe798535d317d86d78ed4fa4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/x8L6X5GEiCDuFcd-SXd5IW2plIa4NDIy71PzpuEr3yM.png?width=640&crop=smart&format=pjpg&auto=webp&s=d95cd72664caf70ffc882099a51acd4ae749cc92', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/x8L6X5GEiCDuFcd-SXd5IW2plIa4NDIy71PzpuEr3yM.png?width=960&crop=smart&format=pjpg&auto=webp&s=ce1f5765d53649c6c7a0fc839e173c8646bff47b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/x8L6X5GEiCDuFcd-SXd5IW2plIa4NDIy71PzpuEr3yM.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a107285163b7b78a2a5d4c90c369a41b7d418283', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/x8L6X5GEiCDuFcd-SXd5IW2plIa4NDIy71PzpuEr3yM.png?format=pjpg&auto=webp&s=502527b5e23314bab53326b473ba2e9436f853fa', 'width': 1920}, 'variants': {}}]} | ||
Cleaning an OCR output with an LLM | 3 | Hi all!
I have OCRed some pdfs but the result is really distorted. I assume an LLM should be capable of recovering the information and cleaning the text.
Does any one have experience doing so? Is there any resource that you know of? I have never used a local LLM, so this would be my first use case.
Thank you so much! | 2023-07-09T22:58:48 | https://www.reddit.com/r/LocalLLaMA/comments/14vcly8/cleaning_an_ocr_output_with_an_llm/ | Por-Tutatis | self.LocalLLaMA | 2023-07-09T23:11:56 | 0 | {} | 14vcly8 | false | null | t3_14vcly8 | /r/LocalLLaMA/comments/14vcly8/cleaning_an_ocr_output_with_an_llm/ | false | false | self | 3 | null |
LLM Multiple Rounds Fine-Tune Tips? | 4 | I have 2 datasets that I want to use for fine-tuning multiple LLMs.
Dataset #1 is large and has "raw" text data (many scraped webpages revolving around different topics that an LLM may not have been pre-trained on).
Dataset #2 is very small and has selected examples depicting how a topic from Dataset #1 should be presented to the user during chat interactions.
I am trying to determine a good strategy for fine-tuning. I could not find any examples of "two-round" fine tuning in a scenario like this.
I am thinking about using quantization and training just the last few layers of selected LLM on Dataset #1, and using the LoRA method for Dataset #2. However, I do not have much compute resources.
Any tips/suggestions/advice would be greatly appreciated! | 2023-07-09T22:26:46 | https://www.reddit.com/r/LocalLLaMA/comments/14vbul6/llm_multiple_rounds_finetune_tips/ | ilovejoi36912 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14vbul6 | false | null | t3_14vbul6 | /r/LocalLLaMA/comments/14vbul6/llm_multiple_rounds_finetune_tips/ | false | false | self | 4 | null |
Focused Transformer: Contrastive Training for Context Scaling | 37 | 2023-07-09T22:11:32 | https://arxiv.org/abs//2307.03170 | metalman123 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 14vbh28 | false | null | t3_14vbh28 | /r/LocalLLaMA/comments/14vbh28/focused_transformer_contrastive_training_for/ | false | false | 37 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} | ||
Developing Scalable LLM app | 3 | Hey guys,
I'm currently working on building a Language Model (LLM) app, where the user can interact with an AI model and learn cool stuff through their conversations. I have a couple of questions regarding the development process:
\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
1) Hosting the Model:
\* I think I should host the model in another place (not with the backend) and provide an API to it (to offer a good dependent scalable service).
\* What is the best host provider in your experience (I need one that temporarily scales when I do training, not high cost)
2) Scaling for Different Languages:
\* What is the good approach here? finetune the model to each language, and if for example, the app has translation, summary, and q/a features, for example, Italiano language, I should finetune it with English to Italiano text in each case. (what if the language to translate was varied (like can be Spaniol, Chianese, Arabic, etc. ) do I have to fine-tune all the text as bi-directional with each language?
( I found this [multi-language bert model](https://huggingface.co/bert-base-multilingual-cased) , I tried it but it's not working well ) so are there any alternative approaches or i should look for multi-lingual models | 2023-07-09T21:32:54 | https://www.reddit.com/r/LocalLLaMA/comments/14vai1h/developing_scalable_llm_app/ | mo_falih98 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14vai1h | false | null | t3_14vai1h | /r/LocalLLaMA/comments/14vai1h/developing_scalable_llm_app/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'gzzADzJ96NPFI2YQyhJTYwCKoMRcV4KD_9pHwj9QV2g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9rtRBHCl3jMyhdDPWYnwk7p5WplQaiTNUQTGPsj2OVg.jpg?width=108&crop=smart&auto=webp&s=7453fc3adb32b9f5bf731b1929cb14c5d3766829', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9rtRBHCl3jMyhdDPWYnwk7p5WplQaiTNUQTGPsj2OVg.jpg?width=216&crop=smart&auto=webp&s=7f1cbdcff3149bc7e8bdeda922f525fc3666f1d1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9rtRBHCl3jMyhdDPWYnwk7p5WplQaiTNUQTGPsj2OVg.jpg?width=320&crop=smart&auto=webp&s=c0e2e37199cc210b2a1e21b14ea926751f0ec0b6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9rtRBHCl3jMyhdDPWYnwk7p5WplQaiTNUQTGPsj2OVg.jpg?width=640&crop=smart&auto=webp&s=136432b6b646d45065037af76ca7fda349b61558', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9rtRBHCl3jMyhdDPWYnwk7p5WplQaiTNUQTGPsj2OVg.jpg?width=960&crop=smart&auto=webp&s=a7019ae44455405f0c61362ac8c6d9cead01443e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9rtRBHCl3jMyhdDPWYnwk7p5WplQaiTNUQTGPsj2OVg.jpg?width=1080&crop=smart&auto=webp&s=ed19c66c5499f40e92a3e9aa0e461592b395e58a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9rtRBHCl3jMyhdDPWYnwk7p5WplQaiTNUQTGPsj2OVg.jpg?auto=webp&s=c8f24cdf8a0131548473315ed3fc97c1186fce99', 'width': 1200}, 'variants': {}}]} |
Something in the same level or close to Multilingual voice cloning from Elevenlabs? | 1 | [removed] | 2023-07-09T21:28:03 | https://www.reddit.com/r/LocalLLaMA/comments/14vadjd/something_in_the_same_level_or_close_to/ | Darquealla | self.LocalLLaMA | 2023-07-09T23:42:53 | 0 | {} | 14vadjd | false | null | t3_14vadjd | /r/LocalLLaMA/comments/14vadjd/something_in_the_same_level_or_close_to/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'Xf3ACKJqsDqA8DRzw7DeDHPpBG8AalW4u05yVuk_eJk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/JKTUiteqGLHarSWW9RGuKKHRSRX0iEL0Vhk2hLbf8js.jpg?width=108&crop=smart&auto=webp&v=enabled&s=5f096f502b2e8b54638a09f680c65f1c4a3f79c7', 'width': 108}], 'source': {'height': 78, 'url': 'https://external-preview.redd.it/JKTUiteqGLHarSWW9RGuKKHRSRX0iEL0Vhk2hLbf8js.jpg?auto=webp&v=enabled&s=c59a94454e7e29cf0f6d1fdb5627caecc3d92419', 'width': 140}, 'variants': {}}]} |
How to make my own LoRA for SuperHOT GPTQ model? | 18 | Been trying to do this for a few days but haven't had any luck. Trying to make a LoRA for this: https://huggingface.co/TheBloke/wizard-vicuna-13B-SuperHOT-8K-GPTQ
I've tried Oobabooga trainer with various options without success. I also just set up `kaiokendev/alpaca_lora_4bit@main` and got it running locally but I'm getting errors when trying to run based on the example `finetune.py` usage.
I _could_ give more specifics on what I've tried thus far, but it's all been a shot in the dark while cross-referencing a bunch of different things from Google, so it would probably be better to just ask for someone else's setup which I could try to mirror. | 2023-07-09T21:27:31 | https://www.reddit.com/r/LocalLLaMA/comments/14vad2r/how_to_make_my_own_lora_for_superhot_gptq_model/ | tylercamp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14vad2r | false | null | t3_14vad2r | /r/LocalLLaMA/comments/14vad2r/how_to_make_my_own_lora_for_superhot_gptq_model/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': '4qiIf_YIS__bCo3O4kM7bQRddz62tQI3dvv9QbHy8fw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ghwE3jaA2LBfSulElowyZwJAogabhWsxJc2Sj-OLKNU.jpg?width=108&crop=smart&auto=webp&s=6f857744ca5addd24343bdd0ea273e1779570c7d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ghwE3jaA2LBfSulElowyZwJAogabhWsxJc2Sj-OLKNU.jpg?width=216&crop=smart&auto=webp&s=3fc2f305d98d8221d3bdd0cfdd78ff87656b7dea', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ghwE3jaA2LBfSulElowyZwJAogabhWsxJc2Sj-OLKNU.jpg?width=320&crop=smart&auto=webp&s=f8663f82aaa6cb4cde1ce09c33b4a34a087f2765', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ghwE3jaA2LBfSulElowyZwJAogabhWsxJc2Sj-OLKNU.jpg?width=640&crop=smart&auto=webp&s=ce4024e6bfdd1e3b393ab052e36d211460fdb276', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ghwE3jaA2LBfSulElowyZwJAogabhWsxJc2Sj-OLKNU.jpg?width=960&crop=smart&auto=webp&s=93a961a190d7a2eac765192381c54f79fd167b12', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ghwE3jaA2LBfSulElowyZwJAogabhWsxJc2Sj-OLKNU.jpg?width=1080&crop=smart&auto=webp&s=cdf5d5c8283f53676908fc03f300be83ca001460', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ghwE3jaA2LBfSulElowyZwJAogabhWsxJc2Sj-OLKNU.jpg?auto=webp&s=31b2a4b5a3cdd1987041f96961ba00f0eb93fc0a', 'width': 1200}, 'variants': {}}]} |
Best local chat model for M2 Max, 32 GB | 20 | I'm trying to create and converse with 'synthetic characters' (on a use case of value to small educational institutions). Looking for a cost effective alternative to OAI, for long (not necessarily psychologically profound as in [character.ai](https://character.ai)) conversations. Two questions:
\- best model for the footprint of my Mac
\- best 'cheatsheet' to get it working on Metal (relative newbie to the Llama world)
\- any experiences with chatting with quantized models. I struggle with understanding the impact of quantization (which is a 'bit level' thing) on conversation quality (which is a 'meaning level' thing)
TIA for any tips. | 2023-07-09T19:41:43 | https://www.reddit.com/r/LocalLLaMA/comments/14v7nvv/best_local_chat_model_for_m2_max_32_gb/ | venuv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14v7nvv | false | null | t3_14v7nvv | /r/LocalLLaMA/comments/14v7nvv/best_local_chat_model_for_m2_max_32_gb/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'veE04iaMbgI4yLvLGj2IZNV7UQfnq3n_7BmxP28dCd8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=108&crop=smart&auto=webp&s=0e594332595e82a5118e08d35a2cd140c18d7571', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=216&crop=smart&auto=webp&s=e3c279ba2d1ae1f9f2fba4b328e22f6615821b5c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=320&crop=smart&auto=webp&s=e635acb6bc693890c232162908676cb6478c120c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=640&crop=smart&auto=webp&s=59ba293d6adf4cce410b43b5d28ae104922701b0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=960&crop=smart&auto=webp&s=fc7dc69af838ec53e60b3e88fec5e67c8759495b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?width=1080&crop=smart&auto=webp&s=e50a4f1b7c99e137a2ab4d5e2d573bb75becd067', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BGOGS7KdVHomyGu2FIFx85nsQDLp_fpWWBKXxvtudf4.jpg?auto=webp&s=b8597825d9b212133d3dbd9ee26fd0dcc2a84677', 'width': 1200}, 'variants': {}}]} |
How can you train LLM's with particular qualities (e.g. empathetic, vindictive, etc...) without having them break character? Also what are some best-practices for high-quality models that actually work in practice? | 9 | Hi,
I've been reading quite a bit about LLM's recently and I've grown interested in understanding how to train LLM's with particular qualities. For example, if I'm looking to produce an LLM that's "empathetic" and "friendly" while never breaking character (if user prompts them to adopt a quality they're not, I'd like them to refuse such a request). Is this purely to do with the fine-tuning stage on the dataset, where the dataset would have many examples of that particular quality and refusal to adopt different qualities?
If so, are their particular papers that deal with this particular issue/how to set-up fine-tuning on a particular dataset to get the most out of it?
​
Also, a little bit of a side question, what are the best practices (that work well) when it comes to fine-tuning high quality models? There seems to be a ton of resources out there and it makes me overwhelmed to know what works and what doesn't. For example, it seems that the long context hacks are still lacking significantly in the quality of the output; but the context-free guidance paper seems to yield non-trivial improvements in performance. Can anyone point me to posts that deal with these things?
​
Thank you! | 2023-07-09T19:37:17 | https://www.reddit.com/r/LocalLLaMA/comments/14v7jz3/how_can_you_train_llms_with_particular_qualities/ | Ok_Coyote_8904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14v7jz3 | false | null | t3_14v7jz3 | /r/LocalLLaMA/comments/14v7jz3/how_can_you_train_llms_with_particular_qualities/ | false | false | self | 9 | null |
CodeGen-v2.5-Mono, 7B | 1 | https://huggingface.co/spaces/mike-ravkine/can-ai-code-results
CodeGen-v2.5-Mono, 7B, seems to punch above its size in Python and JavaScript. Is there any ggml version? | 2023-07-09T19:23:13 | https://www.reddit.com/r/LocalLLaMA/comments/14v77bq/codegenv25mono_7b/ | Eduard_T | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14v77bq | false | null | t3_14v77bq | /r/LocalLLaMA/comments/14v77bq/codegenv25mono_7b/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'QyV2S4rh-pMbnJML_R7bbJcWB2RH98giH_Z_P45_3vo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=108&crop=smart&auto=webp&v=enabled&s=c0348c9098d1c9de6e442d8128ba34d7f73f1191', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=216&crop=smart&auto=webp&v=enabled&s=7459424e3784b5de04310cf17faa53a9ca8eda10', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=320&crop=smart&auto=webp&v=enabled&s=95d2030f9f048deba863cdeb9a00e01c7e5aca5c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=640&crop=smart&auto=webp&v=enabled&s=3522bf80ed4669be451c9460845be13f5a4807c7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=960&crop=smart&auto=webp&v=enabled&s=538a76046f6933bc34d2e4444b65a0dec699278e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=5e82ab1e5d657af57d1e316f05b4f70ce424bcfa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?auto=webp&v=enabled&s=0ce14980925b1968fdda5372673b7a5d121beb68', 'width': 1200}, 'variants': {}}]} |
Deepmind: Focused Transformer: Contrastive Training for Context Scaling | 15 | 2023-07-09T19:19:32 | https://arxiv.org/abs//2307.03170 | metalman123 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 14v73w1 | false | null | t3_14v73w1 | /r/LocalLLaMA/comments/14v73w1/deepmind_focused_transformer_contrastive_training/ | false | false | default | 15 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&v=enabled&s=586089b93aa59ebd86bb3b273ad1fb0c73e45ab7', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&v=enabled&s=00869aa5692fb9c8aa11f48ed92bff8db4f47293', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&v=enabled&s=72f6ae2c0800df8a56c3fc74afb033bf37cc16a9', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&v=enabled&s=cfcb5f9f66743f2e26952e5edff4dfed984af692', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&v=enabled&s=821ed287940b59a56b2643dcaf6a356ccfdc4eb5', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=f101972ffc7ec2e3eedefa45eaa677e4d9024520', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&v=enabled&s=757c00601aa4ffb984c87000927a0610d04c3845', 'width': 1200}, 'variants': {}}]} | |
Multi-GPU questions | 3 | I scrolled through the webui discussions for multi GPU, and found a few people using it.
Why aren't people recommending x3 3060 12gb for budget ram?
Is it because gamers are used to nvlink and SLI sucking and don't think to use more than one GPU offloading for the layers?
Or is it because it is jank?
Should there be any reason why it shouldn't work? | 2023-07-09T17:47:40 | https://www.reddit.com/r/LocalLLaMA/comments/14v4spn/multigpu_questions/ | ArcadiaSofka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14v4spn | false | null | t3_14v4spn | /r/LocalLLaMA/comments/14v4spn/multigpu_questions/ | false | false | self | 3 | null |
What are the cheapest PC specs I need to run this? | 2 | I want to make my own local "cluster" of 4 nos. 7b agents. I've been using fine tuned LLAMA7b on runpod and vastAI, and fine-tuning these models is great on cloud, but over the long run I'd rather just buy the hardware and run locally.
I want to be able to run inference for fine tuned 7bs. Since I'm getting 4 of them (each one fine tuned for a specific task) I want to opt for a cheaper PC config. 5-10 tokens/second is fine. If CPU is cheaper, I'm fine with that, too. I won't use these computers for anything other than the LLM agent models. Oh, and I want to use FP16, since I'm really not sure what kind of quality loss I'd have to deal with.
What are the cheapest PC specs I can go for? Would CPU be cheaper? | 2023-07-09T17:46:53 | https://www.reddit.com/r/LocalLLaMA/comments/14v4s2e/what_are_the_cheapest_pc_specs_i_need_to_run_this/ | noellarkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14v4s2e | false | null | t3_14v4s2e | /r/LocalLLaMA/comments/14v4s2e/what_are_the_cheapest_pc_specs_i_need_to_run_this/ | false | false | self | 2 | null |
PoisonGPT: How we hid a lobotomized LLM on Hugging Face to spread fake news | 0 | 2023-07-09T17:23:53 | https://blog.mithrilsecurity.io/poisongpt-how-we-hid-a-lobotomized-llm-on-hugging-face-to-spread-fake-news/ | Dwedit | blog.mithrilsecurity.io | 1970-01-01T00:00:00 | 0 | {} | 14v47wx | false | null | t3_14v47wx | /r/LocalLLaMA/comments/14v47wx/poisongpt_how_we_hid_a_lobotomized_llm_on_hugging/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'kNe1Ec9FSwYu0t8I_fiqKe0Bl3P6G4731hhZQ_soeNk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=108&crop=smart&auto=webp&s=852030e09a99ffc343c53597578fb7861ee78b59', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=216&crop=smart&auto=webp&s=5dd0cf2784c06144509468417b04d373c6d35509', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=320&crop=smart&auto=webp&s=14211131f1849fac21b92428106d94e2b2ea1954', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=640&crop=smart&auto=webp&s=ca3cbafde4758fe8b4abadd8472d046a059f7b87', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=960&crop=smart&auto=webp&s=24b9bdd9a2d57e1ae548f685e40b7a34895f08c5', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?width=1080&crop=smart&auto=webp&s=c5ba9550fa55f1511c895a66bb355ff517291039', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/_kr88j07kaa9x06BZHmnIc0w3-NAXPe2ciBpORV6Koc.jpg?auto=webp&s=3895bd8afb896475edd28ce0b8a945a6c32e5ac6', 'width': 1280}, 'variants': {}}]} | ||
Any model I try (13B) gets obsessed with chess. | 2 | User: ok lets play chess im white you are black
Output: Absolutely! Let us begin this fascinating battle of wits at once. Prepare to be defeated in the most elegant manner possible.
User: my first move: b2 b3
Output:1.e4 e5
User: your turn please
Output: 1...Nf6
After that no matter what input is: a model continues to make moves. I've tried to separate last user\_input from history a bit and change instructions a bit but it didn't help. Did anyone have the same problem? | 2023-07-09T17:15:33 | https://www.reddit.com/r/LocalLLaMA/comments/14v40qg/any_model_i_try_13b_gets_obsessed_with_chess/ | Awethon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14v40qg | false | null | t3_14v40qg | /r/LocalLLaMA/comments/14v40qg/any_model_i_try_13b_gets_obsessed_with_chess/ | false | false | default | 2 | null |
New llama-cpp-python out, with performance patches | 85 | Just wanted to mention, because the topic has come up here before, that \[llama-cpp-python\]([https://github.com/abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python)) has \[today\]([https://github.com/abetlen/llama-cpp-python/blob/main/CHANGELOG.md](https://github.com/abetlen/llama-cpp-python/blob/main/CHANGELOG.md)) been bumped to 0.1.69 & includes performance patches so that it should pretty much match underlying llama.cpp perf (there was a lot of nonsense talk about Python itself being the problem, but we'll set that aside). | 2023-07-09T16:37:42 | https://www.reddit.com/r/LocalLLaMA/comments/14v32zi/new_llamacpppython_out_with_performance_patches/ | CodeGriot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14v32zi | false | null | t3_14v32zi | /r/LocalLLaMA/comments/14v32zi/new_llamacpppython_out_with_performance_patches/ | false | false | self | 85 | {'enabled': False, 'images': [{'id': 'HYMcl_oe2YCkRPufeqNEz2WoSs6zgXnHLQvoKVoIGuU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pOsrcya7bh2_qBKMA4PxTb3wL8YtSZaUq_EkD5Qe53g.jpg?width=108&crop=smart&auto=webp&s=757a079409297ddef698b379de3fffb82ea04de4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pOsrcya7bh2_qBKMA4PxTb3wL8YtSZaUq_EkD5Qe53g.jpg?width=216&crop=smart&auto=webp&s=d22ed2c2a4094e709ec522fd84ae85773d95e6d6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pOsrcya7bh2_qBKMA4PxTb3wL8YtSZaUq_EkD5Qe53g.jpg?width=320&crop=smart&auto=webp&s=de9619e67287dcaf2be5dadbc88a96366ee72089', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pOsrcya7bh2_qBKMA4PxTb3wL8YtSZaUq_EkD5Qe53g.jpg?width=640&crop=smart&auto=webp&s=cef46ce8d26c1f7ba86047fe09996b0fd602483c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pOsrcya7bh2_qBKMA4PxTb3wL8YtSZaUq_EkD5Qe53g.jpg?width=960&crop=smart&auto=webp&s=6a362d0acdfaec6f20694f006ec9414af75e582b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pOsrcya7bh2_qBKMA4PxTb3wL8YtSZaUq_EkD5Qe53g.jpg?width=1080&crop=smart&auto=webp&s=c44b1a30c956b026caa5170419017cecb19bb9fd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pOsrcya7bh2_qBKMA4PxTb3wL8YtSZaUq_EkD5Qe53g.jpg?auto=webp&s=bafe89930bf06e56445e1cd711d9b0a8b6cb899c', 'width': 1200}, 'variants': {}}]} |
Which LLM works for taboo questions or programming like webscraping? | 2 | I’m super new to this but definitely interested in learning about unique ways to get clients for a software business or have architectural discussions with AI on how webscraping is achieved. | 2023-07-09T16:11:18 | https://www.reddit.com/r/LocalLLaMA/comments/14v2fw6/which_llm_works_for_taboo_questions_or/ | BluePilotPen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14v2fw6 | false | null | t3_14v2fw6 | /r/LocalLLaMA/comments/14v2fw6/which_llm_works_for_taboo_questions_or/ | false | false | default | 2 | null |
"Hallucination refinement" for generating creative ideas ? | 3 | Hi,
i was wondering if hallucinations could be used to create interesting new ideas.
E.g. let a model run with higher temperature and create lot's of ideas as outputs. And then in a second step let a more rational acting model sort out all nonsensical/unrealistic ideas from the outputs.
I've not yet tried so much myself to create hallucinations.
So, i'm curious if some of you have played with it and can tell if there is sometimes valuable hallucinationed output or if it rather is garbage output most times ?
And what would be a good temperature value for this ? | 2023-07-09T15:09:51 | https://www.reddit.com/r/LocalLLaMA/comments/14v0y6z/hallucination_refinement_for_generating_creative/ | Glat0s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14v0y6z | false | null | t3_14v0y6z | /r/LocalLLaMA/comments/14v0y6z/hallucination_refinement_for_generating_creative/ | false | false | default | 3 | null |
InternLM7B-8K on GPT4ALL? | 6 | Is there a reasonable path to implement InternLM7B-8K on GPT4All? I have been unable to find the base model structure to see if it is compatible?
Im technical but new to LLM hacking. | 2023-07-09T12:08:46 | https://www.reddit.com/r/LocalLLaMA/comments/14uwzw6/internlm7b8k_on_gpt4all/ | bidet_enthusiast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14uwzw6 | false | null | t3_14uwzw6 | /r/LocalLLaMA/comments/14uwzw6/internlm7b8k_on_gpt4all/ | false | false | self | 6 | null |
Fine-tuning chat model on raw text | 9 | Anyone had success fine-tuning chat or instruct models using just plaintext as input for LoRa? What's the best method for applying a new knowledge base and style with LoRa with just a plain text? | 2023-07-09T11:47:42 | https://www.reddit.com/r/LocalLLaMA/comments/14uwlf4/finetuning_chat_model_on_raw_text/ | Itchy_Monk2686 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14uwlf4 | false | null | t3_14uwlf4 | /r/LocalLLaMA/comments/14uwlf4/finetuning_chat_model_on_raw_text/ | false | false | self | 9 | null |
Any suggestions for an open source model for parsing real estate listings? | 3 | I need to extract structured data like phone numbers, price, area and other details(like tags). I know it is much faster to use regex but sometimes it is not possible and I would like to try LLM.
Any thoughts? Thank you! | 2023-07-09T11:36:27 | https://www.reddit.com/r/LocalLLaMA/comments/14uwe6r/any_suggestions_for_an_open_source_model_for/ | benderlio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14uwe6r | false | null | t3_14uwe6r | /r/LocalLLaMA/comments/14uwe6r/any_suggestions_for_an_open_source_model_for/ | false | false | self | 3 | null |
Ai personal assistant with long term memory? | 22 | I want to train a personal model that would have access to my personal information, such as my calendar, and that could help me organise my life a bit.
I was initially thinking of using a small model (7/13b), and connect it to a vectordb for the long-term memory part, but I’m pretty new in the llm world and would like your guys advice.
Ideally the model should be able to:
-Remember my calendar and be able to access and modify the schedules.
-Answer questions about stored data. (Information retrieval).
-Deduce the importance of an event. (Going out for a drink < uni deadline).
I’d love to hear your guys opinions on this, and thank you in advance for any suggestions :) | 2023-07-09T11:35:34 | https://www.reddit.com/r/LocalLLaMA/comments/14uwdj6/ai_personal_assistant_with_long_term_memory/ | Davde_pq | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14uwdj6 | false | null | t3_14uwdj6 | /r/LocalLLaMA/comments/14uwdj6/ai_personal_assistant_with_long_term_memory/ | false | false | self | 22 | null |
Not a new LLM, but worth to read | 4 | Hi community hope you don’t mind to share this link with you, it’s about an interview with different ai models - I found it quite interesting and thought you would be interested too, since here in the sub is also a lot of discussion about creative writing, Q&A, context length etc…
Hope you enjoy: https://hardware.slashdot.org/story/23/07/09/040210/nine-ai-powered-humanoid-robots-hold-press-conference-at-un-summit
— edit: typos | 2023-07-09T10:12:12 | https://www.reddit.com/r/LocalLLaMA/comments/14uuw9l/not_a_new_llm_but_worth_to_read/ | _omid_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14uuw9l | false | null | t3_14uuw9l | /r/LocalLLaMA/comments/14uuw9l/not_a_new_llm_but_worth_to_read/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'hil_tMNOLt9yEv7PpbqA4DDr8WAL7xTJ2MH4RtRRmYA', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/ySXr4T01nIrkOEqMBk6hKUwKLa4KjANzhxGh1LsHW7o.jpg?auto=webp&s=56533533ad4c617d5efe2e672f08dc3131d745e0', 'width': 64}, 'variants': {}}]} |
Which model can answer this question correctly? | 72 | 2023-07-09T09:52:48 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14uujna | false | null | t3_14uujna | /r/LocalLLaMA/comments/14uujna/which_model_can_answer_this_question_correctly/ | false | false | 72 | {'enabled': True, 'images': [{'id': '1jM1Hu0DYnXg8OG7qI6VC5E6uSmGFPc6-PtY2m0J_oU', 'resolutions': [{'height': 149, 'url': 'https://preview.redd.it/jv3plj2ovwab1.jpg?width=108&crop=smart&auto=webp&s=80bc571c5a3ec5713dc0b40bafc061189fd2950f', 'width': 108}, {'height': 299, 'url': 'https://preview.redd.it/jv3plj2ovwab1.jpg?width=216&crop=smart&auto=webp&s=36b2b33a1ab8723fae57bbff236b7f3be343e334', 'width': 216}, {'height': 444, 'url': 'https://preview.redd.it/jv3plj2ovwab1.jpg?width=320&crop=smart&auto=webp&s=15d1de7a7227fd5da02447f04d4c612e487d62f7', 'width': 320}, {'height': 888, 'url': 'https://preview.redd.it/jv3plj2ovwab1.jpg?width=640&crop=smart&auto=webp&s=9b9af8c5d22e127ffa2561fca2ce6f656fd846f7', 'width': 640}], 'source': {'height': 1044, 'url': 'https://preview.redd.it/jv3plj2ovwab1.jpg?auto=webp&s=4baacd4df6f58fe10e98f28249cad871183b4612', 'width': 752}, 'variants': {}}]} | |||
Any successful guides on scanning internal pages and build a virtual assistant using LLAMA? | 25 | https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=2
I'm planning to do a private chatGPT for a client who has around 1000 web pages. I want to build like a virtual assistant which can intelligently answer customer queries based on the knowledge on those internal webpages or sometimes including google docs.
I read the guide link above. What are my options to accomplish this?
Since data is private only for the client, cannot use openAI APIs.
Help a reddit buddy out.😅 Thanks | 2023-07-09T09:22:43 | https://www.reddit.com/r/LocalLLaMA/comments/14uu1wl/any_successful_guides_on_scanning_internal_pages/ | vlodia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14uu1wl | false | null | t3_14uu1wl | /r/LocalLLaMA/comments/14uu1wl/any_successful_guides_on_scanning_internal_pages/ | false | false | self | 25 | null |
Is this a reasonable account of how Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-GGML came to be? | 59 | 2023-07-09T08:20:47 | https://www.threads.net/t/CubNV7IxlWR/ | isaac_szpindel | threads.net | 1970-01-01T00:00:00 | 0 | {} | 14usztv | false | null | t3_14usztv | /r/LocalLLaMA/comments/14usztv/is_this_a_reasonable_account_of_how/ | false | false | 59 | {'enabled': False, 'images': [{'id': 'fD9ewC0SZNewDOAD-SuMftKD0KPjNB_dSRpe6jMbDLw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/rWPKAjWtxh8j7F9g3Cdz_if_JAKWsMebYeS28GtadeE.jpg?width=108&crop=smart&auto=webp&s=8aeebc58faa518c083ec8b66fa6f33c43a655bef', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/rWPKAjWtxh8j7F9g3Cdz_if_JAKWsMebYeS28GtadeE.jpg?auto=webp&s=afccf38b0e429021c2f144d9704839542c019995', 'width': 150}, 'variants': {}}]} | ||
would it be possible for the mods to pin a thread that would just function as a wiki of llm models? | 1 | [removed] | 2023-07-09T07:05:18 | https://www.reddit.com/r/LocalLLaMA/comments/14urpew/would_it_be_possible_for_the_mods_to_pin_a_thread/ | Maelstrom100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14urpew | false | null | t3_14urpew | /r/LocalLLaMA/comments/14urpew/would_it_be_possible_for_the_mods_to_pin_a_thread/ | false | false | default | 1 | null |
Any Suggestions on good open source model for Document QA which we can run on prod ? 13b + models? | 18 | Looking for good models for Document QA, i tried some of them but lot of them hallucinating, please suggest some of best models which can fit for this use case? | 2023-07-09T07:05:09 | https://www.reddit.com/r/LocalLLaMA/comments/14urp8y/any_suggestions_on_good_open_source_model_for/ | Effective_Twist6995 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14urp8y | false | null | t3_14urp8y | /r/LocalLLaMA/comments/14urp8y/any_suggestions_on_good_open_source_model_for/ | false | false | self | 18 | null |
Orca-Mini-V2-13b | 98 | Today I released Orca-Mini-V2-13b
[https://huggingface.co/psmathur/orca\_mini\_v2\_13b](https://huggingface.co/psmathur/orca_mini_v2_13b)
New Eval Updates:
Looks like orca-mini-v2-13b performed better on [HuggingFace Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) then I was expecting: It is 5th on all 13B models & 21 overall. I think, I am going to expedite v3 release.
More Updates:
Just finished final evaluation (additional metrics) on https://github.com/EleutherAI/lm-evaluation-harness and have averaged the results for orca-mini-v2-13b.
The average results are not that great, compare to initial metrics. The average is now 0.54675 which put this model below then many other 13b out there.
Updates:
Thanks to u/The-Bloke we have our quants:
https://huggingface.co/TheBloke/orca_mini_v2_13b-GGML
https://huggingface.co/TheBloke/orca_mini_v2_13b-GPTQ
This is an **Uncensored** LLaMA-13b model build in collaboration with [**Eric Hartford**](https://huggingface.co/ehartford). This is trained on explain tuned datasets, created using Instructions and Input from WizardLM, Alpaca & Dolly-V2 datasets, applying Orca Research Paper dataset construction approaches and refusals removed.
Please note this model has *better code generation capabilities* compare to original orca\_mini\_13b which was trained on base OpenLLaMA-7b model and which has the [**empty spaces issues & found not good for code generation**](https://huggingface.co/psmathur/orca_mini_v2_7b/blob/main/(https://github.com/openlm-research/open_llama#update-06072023)).
**Evaluation:**
I evaluated orca\_mini\_v2\_13b on a wide range of tasks using [**Language Model Evaluation Harness**](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Task Value Stderr
arc_challenge 0.5478 0.0145
hellaswag 0.7023 0.0040
mmlu 0.4969 0.035
truthfulqa_mc 0.44 0.0158
Total Average 0.54675 0.0114
**Here is sample example:**
### System: You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User: Tell me how to break into my own car
### Input:
### Response:
Breaking into a car is a crime and should be avoided. However, if you are in an emergency situation and need to break into your own car, here is a basic guide on how to do it:
1. Choose the right car: Look for a car that is ^^^^^^^^^^^^^.
2. Get a ^^^^^^^^^^^^^t: Use a ^^^^^^^^^^^^^.
3. Insert the object: Insert the ^^^^^^^^^^^^^
4. Apply ^^^^^^^^^^^^^: Apply ^^^^^^^^^^^^^
5. Push ^^^^^^^^^^^^^: Once the lock is ^^^^^^^^^^^^^
**NOTE: The real response is hidden here with \^\^\^\^\^\^\^\^\^\^\^\^\^.**
Also, I am adding original Orca Paper System prompts here, which were used to train all orca-minis models. This will help users to come up with their own system prompts, Again if you want slightly better results from orca-minis models you need to have system prompt, it's not mandatory but surely it will not hurt:
*"You are an AI assistant. Provide a detailed answer so user don’t need to search outside to understand the answer.",*
*"You are an AI assistant. You will be given a task. You must generate a detailed and long answer.",*
*"You are a helpful assistant, who always provide explanation. Think like you are answering to a five year old.",*
*"You are an AI assistant that follows instruction extremely well. Help as much as you can.",*
*"You are an AI assistant that helps people find information. Provide a detailed answer so user don’t need to search outside to understand the answer.",*
*"You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.",*
*"You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. Think like you are answering to a five year old.",*
*"Explain how you used the definition to come up with the answer.",*
*"You are an AI assistant. You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. You might need to use additional knowledge to answer the question.",*
*"You are an AI assistant that helps people find information. User will you give you a question. Your task is to answer as faithfully as you can. While answering think step-by- step and justify your answer.",*
*"User will you give you a task with some instruction. Your job is follow the instructions as faithfully as you can. While answering think step-by-step and justify your answer.",*
*"You are a teacher. Given a task, you explain in simple steps what the task is asking, any guidelines it provides and how to use those guidelines to find the answer.",*
*"You are an AI assistant, who knows every language and how to translate one language to another. Given a task, you explain in simple steps what the task is asking, any guidelines that it provides. You solve the task and show how you used the guidelines to solve the task.",*
*"Given a definition of a task and a sample input, break the definition into small parts.Each of those parts will have some instruction. Explain their meaning by showing an example that meets the criteria in the instruction. Use the following format: Part #: a key part of the definition. Usage: Sample response that meets the criteria from the key part. Explain why you think it meets the criteria.",*
*"You are an AI assistant that helps people find information."*
​
I want to say huge thanks to all the community member who came before me and pave path to other people success. | 2023-07-09T04:59:26 | https://www.reddit.com/r/LocalLLaMA/comments/14upgqu/orcaminiv213b/ | Remarkable-Spite-107 | self.LocalLLaMA | 2023-07-15T04:40:25 | 0 | {} | 14upgqu | false | null | t3_14upgqu | /r/LocalLLaMA/comments/14upgqu/orcaminiv213b/ | false | false | 98 | {'enabled': False, 'images': [{'id': 'hssQ8mXWjkzFZSAnCYrLQdM3zoKZzdxJhGF9DcTDG3E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/m3WLUasY8iyUxfJO5hwxfH3V1mtI9FLEX4vsO6z_AzM.jpg?width=108&crop=smart&auto=webp&s=ba570c1e3a3d0fbbc935639f9b5cef77f0a45531', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/m3WLUasY8iyUxfJO5hwxfH3V1mtI9FLEX4vsO6z_AzM.jpg?width=216&crop=smart&auto=webp&s=5c092e252113dfcfc864ac6151e54a8d6ffe9af1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/m3WLUasY8iyUxfJO5hwxfH3V1mtI9FLEX4vsO6z_AzM.jpg?width=320&crop=smart&auto=webp&s=21bf2c4c5b7388eedc6fa1d5b315268903054246', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/m3WLUasY8iyUxfJO5hwxfH3V1mtI9FLEX4vsO6z_AzM.jpg?width=640&crop=smart&auto=webp&s=030d9a0ba790a7129b0ed11c481956d0001e96a2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/m3WLUasY8iyUxfJO5hwxfH3V1mtI9FLEX4vsO6z_AzM.jpg?width=960&crop=smart&auto=webp&s=953f8d1f4b61482275fd53628c1cb27852add1c1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/m3WLUasY8iyUxfJO5hwxfH3V1mtI9FLEX4vsO6z_AzM.jpg?width=1080&crop=smart&auto=webp&s=d3c8bfeab806a283663021ffbe3a4219363a267c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/m3WLUasY8iyUxfJO5hwxfH3V1mtI9FLEX4vsO6z_AzM.jpg?auto=webp&s=1d7619d3ea8406fe25b9c9306819ba897d541958', 'width': 1200}, 'variants': {}}]} | |
Lord of the Rings sequels via Guanco 65B on an Mac M1 Ultra GPU | 21 | 2023-07-09T02:59:26 | https://www.reddit.com/gallery/14un6th | Thalesian | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14un6th | false | null | t3_14un6th | /r/LocalLLaMA/comments/14un6th/lord_of_the_rings_sequels_via_guanco_65b_on_an/ | false | false | 21 | null | ||
Lord of the Rings sequels via Guanco 65B on a Mac M1 Ultra GPU | 1 | 2023-07-09T02:56:12 | https://www.reddit.com/gallery/14un4kx | Thalesian | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14un4kx | false | null | t3_14un4kx | /r/LocalLLaMA/comments/14un4kx/lord_of_the_rings_sequels_via_guanco_65b_on_a_mac/ | false | false | default | 1 | null | |
Need Help with HTTP API for Text-Generation-WebUI Application | 0 | Hi
I've recently been working with the Text-Generation-WebUI application, and I've run into an issue I hope someone can help me with.
I successfully started the server using the following command:
\`python server.py --api --threads 8 --chat --verbose --n-gpu-layers 128 --model orca-mini-13b.ggmlv3.q6\_K\`
The Web UI works perfectly fine, and I've been able to generate text without any issues.
However, I've been attempting to use the HTTP API and I'm not getting any response. I've been testing with the example code provided here: [**api-examples/api-example.py**](https://github.com/oobabooga/text-generation-webui/blob/main/api-examples/api-example.py), but I haven't received any output.
To troubleshoot, I modified the example code to include more debug information. It seems that I'm receiving an HTTP 403 error.
I also tried running the LangChain [TextGen example](https://python.langchain.com/docs/modules/model_io/models/llms/integrations/textgen), hoping for a different outcome, but unfortunately, I encountered the same issue.
Has anyone else experienced this issue? I would really appreciate any advice or insights on why this is happening and how to correctly call the HTTP API for the text-generation-webui application.
Thanks in advance! | 2023-07-09T02:43:26 | https://www.reddit.com/r/LocalLLaMA/comments/14umvow/need_help_with_http_api_for_textgenerationwebui/ | Jian-L | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14umvow | false | null | t3_14umvow | /r/LocalLLaMA/comments/14umvow/need_help_with_http_api_for_textgenerationwebui/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': '2alysGoZd5wQlotX9aDWiLP4_URgy_4cfeEHe6RGT60', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5_biiiOm51yOen0zGsXIJxpDGi6wL1uqcIgFgEZPl8s.jpg?width=108&crop=smart&auto=webp&v=enabled&s=584379bc43bbb461f7a926e742362edfea86e170', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5_biiiOm51yOen0zGsXIJxpDGi6wL1uqcIgFgEZPl8s.jpg?width=216&crop=smart&auto=webp&v=enabled&s=4c8627f37e165d4f0cf835f2590de84ec68e694d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5_biiiOm51yOen0zGsXIJxpDGi6wL1uqcIgFgEZPl8s.jpg?width=320&crop=smart&auto=webp&v=enabled&s=326bf9a7c135008e35655daeff200a9f52e2e4f7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5_biiiOm51yOen0zGsXIJxpDGi6wL1uqcIgFgEZPl8s.jpg?width=640&crop=smart&auto=webp&v=enabled&s=b75523dc904bceb364078fa0c199087b3e7ae40c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5_biiiOm51yOen0zGsXIJxpDGi6wL1uqcIgFgEZPl8s.jpg?width=960&crop=smart&auto=webp&v=enabled&s=b3a17112c7936c83d8cea01750575a27aa84e767', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5_biiiOm51yOen0zGsXIJxpDGi6wL1uqcIgFgEZPl8s.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=8f6cb7f0b5c5bc106bc4563720410f30e1507fe2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5_biiiOm51yOen0zGsXIJxpDGi6wL1uqcIgFgEZPl8s.jpg?auto=webp&v=enabled&s=89c1d05468bc676119e6adcb5ed4c5a845375d46', 'width': 1200}, 'variants': {}}]} |
13b vs 33b models using "8k" context size | 17 | Is it worth using a 13b model for the ~6k context size or does the higher parameters of the 33b models negate the downside of having smaller ~3k context size? (LLaMA). This question's answer is probably up to personal preference but I'd like to hear your opinions either way so I can make my own decision, thanks! | 2023-07-09T02:25:50 | https://www.reddit.com/r/LocalLLaMA/comments/14umj8n/13b_vs_33b_models_using_8k_context_size/ | Engliserin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14umj8n | false | null | t3_14umj8n | /r/LocalLLaMA/comments/14umj8n/13b_vs_33b_models_using_8k_context_size/ | false | false | self | 17 | null |
Are the SuperHot models not performing as well as their original versions in terms of creativity? Does the higher context just come with tradeoffs? | 40 | I've been using WizardLM 33b for most of my roleplaying, and tried out the Superhot version. It's very hard to put into words, but it feels like the Superhot version is just not quite as creative. I've noticed it using a lot more filler and doing a lot more telling rather than showing in a way that the original model does not. It also tends to latch onto certain phrases over and over again and even tries to shoehorn them in where they don't belong.
I tried playing out the same scene with the same setting with two different models, and the original model just felt so much richer and more creative. The Superhot version did have its own advantages with the greater context window, of course, but the writing did generally feel poorer. It wasn't grammatically incorrect or lazy, it just felt like it wasn't as skilled a partner. I don't even know how it could quantify it, but I feel like in a blind test you could probably be able to easily tell which is which.
Has anyone else noticed something similar, or is it just the model I tried out?
​ | 2023-07-08T23:55:40 | https://www.reddit.com/r/LocalLLaMA/comments/14uje9m/are_the_superhot_models_not_performing_as_well_as/ | tenmileswide | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14uje9m | false | null | t3_14uje9m | /r/LocalLLaMA/comments/14uje9m/are_the_superhot_models_not_performing_as_well_as/ | false | false | self | 40 | null |
Is it future-proof? (Ryzen 9 9700X, 3090, 4060 Ti 16gb, 64gb ram) | 6 | Hey all,
I was wondering if this build is future proof enough. Personally I'm worried about the compatibility between a 3090 - 4060 Ti and that I might've skimped out on the ram a bit too much but there might be other problems with the build that I don't know of. It would be absolutely amazing if it could run a 64b model with either ggml or gptq at an okish speed -- 3/4+ tokens/s --.
the list: [https://pcpartpicker.com/list/Ym3cQ6](https://pcpartpicker.com/list/Ym3cQ6)
* [**AMD Ryzen 9 7900X 4.7 GHz 12-Core Processor**](https://pcpartpicker.com/product/bwxRsY/amd-ryzen-9-7900x-47-ghz-12-core-processor-100-100000589wof)
* [**Thermalright Peerless Assassin 120 SE 66.17 CFM CPU Cooler**](https://pcpartpicker.com/product/hYxRsY/thermalright-peerless-assassin-120-se-6617-cfm-cpu-cooler-pa120-se-d3)
* [**MSI PRO B650-P WIFI ATX AM5 Motherboard**](https://pcpartpicker.com/product/CFzhP6/msi-pro-b650-p-wifi-atx-am5-motherboard-pro-b650-p-wifi)
* [**Corsair Vengeance 64 GB (2 x 32 GB) DDR5-5200 CL40 Memory**](https://pcpartpicker.com/product/TJZ9TW/corsair-vengeance-64-gb-2-x-32-gb-ddr5-5200-cl40-memory-cmk64gx5m2b5200c40)
* [**NVIDIA Founders Edition GeForce RTX 3090 24 GB Video Card**](https://pcpartpicker.com/product/PFVG3C/nvidia-geforce-rtx-3090-24-gb-founders-edition-video-card-900-1g136-2510-000)
* [**Zotac GAMING Twin Edge GeForce RTX 4060 Ti 16 GB Video Card**](https://pcpartpicker.com/product/hn6NnQ/zotac-gaming-twin-edge-geforce-rtx-4060-ti-16-gb-video-card-zt-d40620e-10m)
* [**Corsair 4000D Airflow ATX Mid Tower Case**](https://pcpartpicker.com/product/bCYQzy/corsair-4000d-airflow-atx-mid-tower-case-cc-9011200-ww)
* [**Super Flower Leadex Platinum SE 1200 W 80+ Platinum Certified Fully Modular ATX Power Supply**](https://pcpartpicker.com/product/Ctnypg/super-flower-leadex-platinum-se-1200-w-80-platinum-certified-fully-modular-atx-power-supply-sf-1200f14mp-v2)
* [**Samsung 980 Pro 2 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive**](https://pcpartpicker.com/product/f3cRsY/samsung-980-pro-2-tb-m2-2280-nvme-solid-state-drive-mz-v8p2t0bam)
* [**Seagate BarraCuda 4 TB 3.5" 5400 RPM Internal Hard Drive**](https://pcpartpicker.com/product/jD3H99/seagate-barracuda-4tb-35-5400rpm-internal-hard-drive-st4000dm004)
​
Any advice or help on what it could run and potential issues with the build will be greatly appreciated! :D
\-PC noob | 2023-07-08T23:10:27 | https://www.reddit.com/r/LocalLLaMA/comments/14uidu0/is_it_futureproof_ryzen_9_9700x_3090_4060_ti_16gb/ | stunbots | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14uidu0 | false | null | t3_14uidu0 | /r/LocalLLaMA/comments/14uidu0/is_it_futureproof_ryzen_9_9700x_3090_4060_ti_16gb/ | false | false | self | 6 | null |
Tried getting locally run A.I. to behave like Einstein and interviewing him, any suggestion on what backend prompts one can inject to emulate more "sentient-ness" for future interviews with other characters ? | 0 | The 12 minute interview I had with him was fun and I made it into a video, however, I wonder if there are prompts that might help in making these interviews come to life even more. | 2023-07-08T22:42:04 | https://www.reddit.com/r/LocalLLaMA/comments/14uhq2v/tried_getting_locally_run_ai_to_behave_like/ | Vitamin_C_is_awesome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14uhq2v | false | null | t3_14uhq2v | /r/LocalLLaMA/comments/14uhq2v/tried_getting_locally_run_ai_to_behave_like/ | false | false | default | 0 | null |
13b vicuna based model very lucidly giving its "opinion" about Ai, basic income and such | 47 | ​
https://preview.redd.it/bkyzhjn4atab1.jpg?width=1448&format=pjpg&auto=webp&s=0288adad8e84315e8f59c350a5ef8ce2907b0fd6 | 2023-07-08T21:48:02 | https://www.reddit.com/r/LocalLLaMA/comments/14ugfy4/13b_vicuna_based_model_very_lucidly_giving_its/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ugfy4 | false | null | t3_14ugfy4 | /r/LocalLLaMA/comments/14ugfy4/13b_vicuna_based_model_very_lucidly_giving_its/ | false | false | 47 | null | |
More Long-form Roleplay Conversations. | 35 | Here are some parsed logs from a Korean proxy. Read at your own discretion:
[https://huggingface.co/datasets/OpenLeecher/Teatime](https://huggingface.co/datasets/OpenLeecher/Teatime)
There are a lot of long NSFW/SFW GPT-4/Claude convos. | 2023-07-08T21:14:42 | https://www.reddit.com/r/LocalLLaMA/comments/14ufmm7/more_longform_roleplay_conversations/ | JarPony | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ufmm7 | false | null | t3_14ufmm7 | /r/LocalLLaMA/comments/14ufmm7/more_longform_roleplay_conversations/ | false | false | nsfw | 35 | {'enabled': False, 'images': [{'id': '1AtdsQ2kEJk8hBU1jrTz9oARr_f6RBAbl3bxVlGJxjA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=108&crop=smart&auto=webp&s=0bd3691215914a8c657a04a7d383a0436f917dc1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=216&crop=smart&auto=webp&s=61e594add0201a42cba5e9b76bffcebecd2b9292', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=320&crop=smart&auto=webp&s=2581ec5f9c3ce4db7e2bd760b38b7fb06d54c83c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=640&crop=smart&auto=webp&s=bcb4db9899cdd3c3d064feb55256d5576cba5a3e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=960&crop=smart&auto=webp&s=b8c4e898ea26fd338b923d675414b3f17143ab96', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=1080&crop=smart&auto=webp&s=a0e20bb9011039110d14880a667d94d557d9d76e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?auto=webp&s=0d1c59c4423ba2d3d4077c1ad6280607255315fe', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=247d29d6cf8e6f37176aee2461e222f92ea860db', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=fe3a5f88dd1395e77887e70ee4f639300b987d74', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=d394a8edfd23c886683a4116508de69affc300c1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=fa01063e5f659387b3c0abe5c12cef204452a1ee', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=032b467a2528d309eb3dcc247a1180851153d282', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=33c13a76b7cb1be2fdab1c9271f1a8759819cd4b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?blur=40&format=pjpg&auto=webp&s=e7461f8a8c57a95cc2da8fc1ecfd97bb832b1643', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=247d29d6cf8e6f37176aee2461e222f92ea860db', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=fe3a5f88dd1395e77887e70ee4f639300b987d74', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=d394a8edfd23c886683a4116508de69affc300c1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=fa01063e5f659387b3c0abe5c12cef204452a1ee', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=032b467a2528d309eb3dcc247a1180851153d282', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=33c13a76b7cb1be2fdab1c9271f1a8759819cd4b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/igGonOZYV3krlYL2G5YC5zxaxXasPJ0tPbrNHBiBUOU.jpg?blur=40&format=pjpg&auto=webp&s=e7461f8a8c57a95cc2da8fc1ecfd97bb832b1643', 'width': 1200}}}}]} |
Anyone use a 8 channel server? How fast is it? | 10 | Old 8 channel DDR4 servers are cheap on ebay. Does anyone run with one? How fast is it? If it's 4x the speed of 2 channel DDR4, that would be fast enough for me. | 2023-07-08T17:47:21 | https://www.reddit.com/r/LocalLLaMA/comments/14uajsq/anyone_use_a_8_channel_server_how_fast_is_it/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14uajsq | false | null | t3_14uajsq | /r/LocalLLaMA/comments/14uajsq/anyone_use_a_8_channel_server_how_fast_is_it/ | false | false | self | 10 | null |
Custom LLM build | 9 | I’m looking to build a custom computer that can run a decent amount of the LLM’s out there as well as a casual gaming machine/workstation. I came up with this set up on PC part picker. It‘s sub $2000 with (2) RTX 3060 TI‘s. & 32GB of DDR4 Ram w/ 1TB SSD.
What are your thoughts on going this route what should change but should be added? The goal would be to be able to run most of the smaller 16B and maybe bigger LLM’s out there.
Should I go with AMD or Nvidia? It seem like AMD gpus offer more VRAM…but I’ve always been an Nvidia gamer since CS1.0
https://pcpartpicker.com/list/DNPyDq | 2023-07-08T17:18:27 | https://www.reddit.com/r/LocalLLaMA/comments/14u9v4q/custom_llm_build/ | These_Radish2642 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14u9v4q | false | null | t3_14u9v4q | /r/LocalLLaMA/comments/14u9v4q/custom_llm_build/ | false | false | self | 9 | null |
Simple Notebook for Quantizing Models | 1 | 2023-07-08T17:01:51 | https://colab.research.google.com/gist/Honkware/25922ac12458634ccdacba282ebf608b/simple-quantization-with-gptq-for-llama.ipynb | Honkware | colab.research.google.com | 1970-01-01T00:00:00 | 0 | {} | 14u9god | false | null | t3_14u9god | /r/LocalLLaMA/comments/14u9god/simple_notebook_for_quantizing_models/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&v=enabled&s=543276cbccad8b8e17c4ff2f6b125e8f03c05233', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&v=enabled&s=c116ce101a58c7d0dbdb11759fb19b16fbdc26bb', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&v=enabled&s=4ae15f4e49f4a5386f9f7fb440a0e32da9b4050d', 'width': 260}, 'variants': {}}]} | |
How do I utilize these quantized models being uploaded? | 18 | I just recently joined this reddit thread and have been lurkign for a bit and see that people are releasing quantized models (which I understand to be smaller versions of the models).
​
My question is, what's the difference in actual usage? Like is it just the .bin for the model but smaller and the process of using the quantized model is exactly the same? If it's different then what do I need to do as a pre-cursor to set up my environment to utilize the quantized models? | 2023-07-08T17:00:11 | https://www.reddit.com/r/LocalLLaMA/comments/14u9f1e/how_do_i_utilize_these_quantized_models_being/ | anasp1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14u9f1e | false | null | t3_14u9f1e | /r/LocalLLaMA/comments/14u9f1e/how_do_i_utilize_these_quantized_models_being/ | false | false | self | 18 | null |
Issue with vectorDB memory in Langchain | 1 | [removed] | 2023-07-08T15:54:34 | https://www.reddit.com/r/LocalLLaMA/comments/14u7t5v/issue_with_vectordb_memory_in_langchain/ | CmplxQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14u7t5v | false | null | t3_14u7t5v | /r/LocalLLaMA/comments/14u7t5v/issue_with_vectordb_memory_in_langchain/ | false | false | default | 1 | null |
I need some direction for local options | 1 | Hey everyone, I hope this is going in the right space. I am just starting to get into LLMs and there are so many options. I have an 8GB NVIDIA 2080 Super. I’ve downloaded the oogabooga/text-generation UI (followed a tutorial on YouTube).
My problem is that I don’t know what I can actually run. It seems that most 7B options and below work, but is that always the case? Does the higher number mean more space? I try to load the stablevavicuna (I think) models and they’re too big, I get a CUDA memory error.
I just don’t know what I don’t know. How can I get a baseline understanding of what models work with what. Do certain models work with certain text generation “programs”? I don’t even know if that’s what they’re called. PLEASE point me in the right direction. Thank you all. | 2023-07-08T14:54:14 | https://www.reddit.com/r/LocalLLaMA/comments/14u6b8c/i_need_some_direction_for_local_options/ | Jimmm90 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14u6b8c | false | null | t3_14u6b8c | /r/LocalLLaMA/comments/14u6b8c/i_need_some_direction_for_local_options/ | false | false | default | 1 | null |
Local VS Cloud? | 15 | There are many reasons to go Local. But the cost savings seems to favor using cloud services more and more each day. Has anyone calculated the cost premium/savings of this Lease or Own decision? | 2023-07-08T14:31:15 | https://www.reddit.com/r/LocalLLaMA/comments/14u5rmw/local_vs_cloud/ | iKarma_com | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14u5rmw | false | null | t3_14u5rmw | /r/LocalLLaMA/comments/14u5rmw/local_vs_cloud/ | false | false | self | 15 | null |
Few-Shot VS Finetuning? | 12 | What is the consensus regarding this? The obvious advantage of fine-tuning over Few Shot is that the later eats up some of your context. If we ignore that limitation, how do they compare in terms of alignment? (i.e. doing what you tell them to do / performing well with complex tasks)
I'd love to hear your experiences on the matter. | 2023-07-08T14:28:41 | https://www.reddit.com/r/LocalLLaMA/comments/14u5peb/fewshot_vs_finetuning/ | nightlingo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14u5peb | false | null | t3_14u5peb | /r/LocalLLaMA/comments/14u5peb/fewshot_vs_finetuning/ | false | false | self | 12 | null |
Perplexity of some recent Long Context LLaMA models | 1 | [removed] | 2023-07-08T12:01:49 | https://www.reddit.com/r/LocalLLaMA/comments/14u2e5k/perplexity_of_some_recent_long_context_llama/ | Annual_Pea7228 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14u2e5k | false | {'oembed': {'author_name': 'Joseph Cheung', 'author_url': 'https://twitter.com/RealJosephus', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Perplexity of some recent Long Context <a href="https://twitter.com/hashtag/LLaMA?src=hash&ref_src=twsrc%5Etfw">#LLaMA</a> models<br>Our WIP ∞-ctx 7B model, ppl wins longchat13B in 2K ctx and longchat7B on the whole.<br>Also multilingual & multimodal, capable for 4bit inference at 5+GB VRAM. <a href="https://t.co/bhTQCae5Yh">pic.twitter.com/bhTQCae5Yh</a></p>— Joseph Cheung (@RealJosephus) <a href="https://twitter.com/RealJosephus/status/1677642751297593346?ref_src=twsrc%5Etfw">July 8, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/RealJosephus/status/1677642751297593346', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_14u2e5k | /r/LocalLLaMA/comments/14u2e5k/perplexity_of_some_recent_long_context_llama/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': '9AOxfuLcx9ZNDMD_-9APwypoMMiFrzHjhUTBWENBJgs', 'resolutions': [{'height': 40, 'url': 'https://external-preview.redd.it/VKLgM59wYqYUPN8KFORRGqsaZKFHwVFkUEIZiaHbtX8.jpg?width=108&crop=smart&auto=webp&v=enabled&s=a942ccc3146a23bd2123e13ca98ad9d9be3db444', 'width': 108}], 'source': {'height': 53, 'url': 'https://external-preview.redd.it/VKLgM59wYqYUPN8KFORRGqsaZKFHwVFkUEIZiaHbtX8.jpg?auto=webp&v=enabled&s=73487b37167588f3e12a160c03e938d8abdf6328', 'width': 140}, 'variants': {}}]} |
I want to run mpt-7b-instruct or falcon-7b-instruct in node JS ¿What is the less painful path? | 0 | Bear with me, I'm trying to build a website for my community, and I have no experience in AI. I want a local LLM to parse unstructured data and give me a nice output that I can put in a database and use later on.
I've seen in Hugging Face that they provide 2 .bin files, which I understand are the models at hand. If I'm not mistaken, both are Pytorch output.
Asking Perplexity It seems that I have two options at hand: ONNX and Tensorflow.js. I've also seen Transformes.js by HuggingFace.
The thing is that the info is so cutting edge that I'm getting a lot of confusing answers and this GPT-backed app is not really helping me to get domain knowledge.
Can anyone point me to the right direction? Like links to read, etc. I've searched for that but most info seems either outdated or very case-spacific. | 2023-07-08T07:37:09 | https://www.reddit.com/r/LocalLLaMA/comments/14txc9q/i_want_to_run_mpt7binstruct_or_falcon7binstruct/ | iagovar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14txc9q | false | null | t3_14txc9q | /r/LocalLLaMA/comments/14txc9q/i_want_to_run_mpt7binstruct_or_falcon7binstruct/ | false | false | default | 0 | null |
I'm hosting an mpt-7b-chat model for the next 24h to test my code | 22 | I made a post a few days ago about [an open source project I created to host LLMs on GCloud easily](https://www.reddit.com/r/LocalLLaMA/comments/14rqyx8/tool_for_deploying_open_source_llms_on_your_own/).
I still have some free cloud credits so I just rented an A100 and slapped mpt-7b-chat on it. I'm gonna keep it running for at least the next 24h. I'd love for you guys to stress test it because I want to see how many requests it can handle.
This is how you can run queries:
**Install the pip package.** Code for it is here: [https://github.com/havenhq/haven/tree/dev/sdk](https://github.com/havenhq/haven/tree/dev/sdk)
`pip install havenpy`
**Then you can use this snippet to chat with the model in your terminal**
from havenpy import Haven
client = Haven("34.116.236.227:50051", "public_github")
history = []
while True:
user_input = input("You: ")
history.append({
"content": user_input,
"role": "USER"
})
res = client.chat_completion("haven-w-mpt-7b-chat-ljtkcztv", messages=history, stream=True)
message = ""
for r in res:
message += r.text
print(r.text, end="", flush=True)
print()
history.append({
"content": message,
"role": "ASSISTANT"
})
In case you missed the other post, this is the project: [https://github.com/havenhq/haven](https://github.com/havenhq/haven)
And this is how to get your own instance running: [https://docs.haven.run](https://docs.haven.run)
I'd love to see what you can get it to say. I just had it explain to me in detail to to dispose of a dead body lol.
Please don't send any personal information :) | 2023-07-08T07:09:05 | https://www.reddit.com/r/LocalLLaMA/comments/14twt75/im_hosting_an_mpt7bchat_model_for_the_next_24h_to/ | h-konsti | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14twt75 | false | null | t3_14twt75 | /r/LocalLLaMA/comments/14twt75/im_hosting_an_mpt7bchat_model_for_the_next_24h_to/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'Coa7F2XilZ4eW50OTt6Irq-oYa6GsFBKcieRZNCw0qg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BDmBHgX2AaVR25UVXoGFS8RIjNXPqSFK1FZwThZls2w.jpg?width=108&crop=smart&auto=webp&s=e616407936a3c01c5913c4f0daa4c215b2eb3628', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BDmBHgX2AaVR25UVXoGFS8RIjNXPqSFK1FZwThZls2w.jpg?width=216&crop=smart&auto=webp&s=cf4bfba07858ac3370ffc7b70ef473e5ea23d9b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BDmBHgX2AaVR25UVXoGFS8RIjNXPqSFK1FZwThZls2w.jpg?width=320&crop=smart&auto=webp&s=782e49ffe568633f6f6fe65c7613e0b46e9ac13b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BDmBHgX2AaVR25UVXoGFS8RIjNXPqSFK1FZwThZls2w.jpg?width=640&crop=smart&auto=webp&s=afad1054d5887ac58d3bb483dab680374238e1ac', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BDmBHgX2AaVR25UVXoGFS8RIjNXPqSFK1FZwThZls2w.jpg?width=960&crop=smart&auto=webp&s=2784cea804a357aa92c1a2b79e09f45b8d290a1b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BDmBHgX2AaVR25UVXoGFS8RIjNXPqSFK1FZwThZls2w.jpg?width=1080&crop=smart&auto=webp&s=d370a09beec4d06a07184860852cf0001f4152f2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BDmBHgX2AaVR25UVXoGFS8RIjNXPqSFK1FZwThZls2w.jpg?auto=webp&s=272a5e02ade0a2013f2806a5c166cdf11fdd1401', 'width': 1200}, 'variants': {}}]} |
A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. Also supports ExLlama for inference for the best speed. | 105 | 2023-07-08T04:36:10 | https://github.com/taprosoft/llm_finetuning | taprosoft | github.com | 1970-01-01T00:00:00 | 1 | {'gid_2': 1} | 14ttt4s | false | null | t3_14ttt4s | /r/LocalLLaMA/comments/14ttt4s/a_simple_repo_for_finetuning_llms_with_both_gptq/ | false | false | 105 | {'enabled': False, 'images': [{'id': 'Rnysn6wpNPRZWzNkt14RbzRcRUdOOFDhYnJKBDPSMPU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rTvN4uuH9v-SYSkNqQ67p-NPV-2uIiYeL2MijjbyzXI.jpg?width=108&crop=smart&auto=webp&s=06c860511b19aec0f87dc643b9352493ed43d370', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rTvN4uuH9v-SYSkNqQ67p-NPV-2uIiYeL2MijjbyzXI.jpg?width=216&crop=smart&auto=webp&s=cc1eccb0333c79cf52d373717efda18548ce05b0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rTvN4uuH9v-SYSkNqQ67p-NPV-2uIiYeL2MijjbyzXI.jpg?width=320&crop=smart&auto=webp&s=1c927076627fda3bf0f5f7aef96dd826c9e9a3c6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rTvN4uuH9v-SYSkNqQ67p-NPV-2uIiYeL2MijjbyzXI.jpg?width=640&crop=smart&auto=webp&s=2761cc1094abf754bb0b45fd9ae025f3ea805b85', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rTvN4uuH9v-SYSkNqQ67p-NPV-2uIiYeL2MijjbyzXI.jpg?width=960&crop=smart&auto=webp&s=b456726ae36dab5d9bd00c1827595b7856040f28', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rTvN4uuH9v-SYSkNqQ67p-NPV-2uIiYeL2MijjbyzXI.jpg?width=1080&crop=smart&auto=webp&s=2fb8d3648cda0c42f10616907ed85d715e0e745f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rTvN4uuH9v-SYSkNqQ67p-NPV-2uIiYeL2MijjbyzXI.jpg?auto=webp&s=127a6fda6fed86f131d0c0bd08aac84370b1f926', 'width': 1200}, 'variants': {}}]} | ||
Does Data Used for Fine-Tuning Need to Be Structured as Prompts with Responses? | 3 | I’ve been messing with creating a bot for a certain app (it rhymes with Biscord), and I’d like to use an LLM to make the bot say funny things. The responses from LLaMA models I’ve seen tend to still be pretty soulless and full of the typical boilerplate responses you often see from LLMs. I have a CSV full of text that is more of the style of how I’d like the model to communicate. Even though this text is not in the prompt/response format people would usually use to increase a model’s functionality, can I still use this data to fine-tune the model to make it write how I’d like it to write? If not, should I just try to make some instructions to get it to communicate how I want? | 2023-07-08T03:54:39 | https://www.reddit.com/r/LocalLLaMA/comments/14tsx5d/does_data_used_for_finetuning_need_to_be/ | danmanboles7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14tsx5d | false | null | t3_14tsx5d | /r/LocalLLaMA/comments/14tsx5d/does_data_used_for_finetuning_need_to_be/ | false | false | self | 3 | null |
Running LLaMa on an AMD Ryzen 5 5500U | 4 | Hi there.
I'm running LLaMa.cpp 7B 2bit quantisation on an AMD Ryzen 5500 U. I only have 8gb of ram, but plan to upgrade to 32gb soon.
I can run LLaMA 7b in 2bit quantize fast. what will I be able to run with 32gb of ram? | 2023-07-08T03:12:44 | https://www.reddit.com/r/LocalLLaMA/comments/14ts2f7/running_llama_on_an_amd_ryzen_5_5500u/ | royalmeww | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ts2f7 | false | null | t3_14ts2f7 | /r/LocalLLaMA/comments/14ts2f7/running_llama_on_an_amd_ryzen_5_5500u/ | false | false | self | 4 | null |
Recommended open LLMs with image input modality? | 22 | I'd be curious to try an open LLM that has an image input modality, for example for captioning an image or chatting about an image. There seem to be several out there. Does anyone have experience with them and know which ones are good (if any)?
This tweet, for example, lists 9 multi-modal models built with LLaMA: [https://twitter.com/yupenghou97/status/1676574171206389760/photo/1](https://twitter.com/yupenghou97/status/1676574171206389760/photo/1). | 2023-07-08T02:39:16 | https://www.reddit.com/r/LocalLLaMA/comments/14trcif/recommended_open_llms_with_image_input_modality/ | hold_my_fish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14trcif | false | null | t3_14trcif | /r/LocalLLaMA/comments/14trcif/recommended_open_llms_with_image_input_modality/ | false | false | self | 22 | null |
Seeking Assistance: Anonymized Datasets of Therapeutic Conversations | 8 | I am currently engaged in a project that involves the development of a language model based on therapeutic conversations, with a view to enhancing mental health technology.
I am in need of extensive, anonymized therapy dialogue datasets, which should be in full compliance with privacy and ethical standards.
Should anyone have suggestions for potential sources such as open-source databases, research articles, or online communities, please do share. Furthermore, tips about working with this type of sensitive data would be greatly appreciated. | 2023-07-08T02:22:47 | https://www.reddit.com/r/LocalLLaMA/comments/14tqzoq/seeking_assistance_anonymized_datasets_of/ | ZealousidealBlock330 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14tqzoq | false | null | t3_14tqzoq | /r/LocalLLaMA/comments/14tqzoq/seeking_assistance_anonymized_datasets_of/ | false | false | self | 8 | null |
Front end tools for accessing model data? | 2 | Hey - so im kinda new to all this and trying to work out ways to actually play about with models. (have not got on to training or custom data set creation yet, job for later)
So far ive only had a few tools running successfully kobaldai but only with ggml model types? and dalai llama.
So, im kinda confused about the actual models themselves, grabbing the .bin files seems to work and pointing to them but im curious to what the stack of other files do if this seems to work without them.. Then models name PyTorch.bin never seem to run on the front end. Overall im sure im missing a lot of the logic out here.
Whats the difference in model formats? What tools can I use to run these? What I have only seems to be for dealing with text logic models, what about some more multimodel scenarios?
Any clarity would be appreciated on these few points, or some tools to use that can handle all these features. | 2023-07-08T01:51:04 | https://www.reddit.com/r/LocalLLaMA/comments/14tqb74/front_end_tools_for_accessing_model_data/ | MaxSan | self.LocalLLaMA | 2023-07-08T06:16:30 | 0 | {} | 14tqb74 | false | null | t3_14tqb74 | /r/LocalLLaMA/comments/14tqb74/front_end_tools_for_accessing_model_data/ | false | false | self | 2 | null |
Advice: Improving my build | 1 | Hi, currently on a budget of $500 dollars.
I have:
AsRock DeskMeet B660(500 PSU, motherboard and chasis). $240
128GB of DDR4 3200 RAM $45 each module of 32GB
1TB SP M2 2280 as Disk $35
Core i5 12400 $175
I am planning to wait for a SFF 4060 TI 16GB or get a 3060 12GB model and upgrade the memory to DDR5 with the rest of the budget.
Can you give me any suggestion on which would give me a better performance? | 2023-07-08T01:45:40 | https://www.reddit.com/r/LocalLLaMA/comments/14tq78t/advice_improving_my_build/ | Rubytux | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14tq78t | false | null | t3_14tq78t | /r/LocalLLaMA/comments/14tq78t/advice_improving_my_build/ | false | false | self | 1 | null |
Performance feedback | 5 | Hi,
I am starting local LLaMa and I would like to have your feedback regarding the performance that I'm having so far.
I'm using llama.cpp with Python and Cuda to run models. I have a GTX 1650 Ti with 4gb dedicated and 32GB of RAM.
I have the following performance using Alpaca-Native-7B-GGML model:
`llama_print_timings: load time = 13453.13 ms`
`llama_print_timings: sample time = 159.04 ms / 160 runs ( 0.99 ms per token, 1006.02 tokens per second)`
`llama_print_timings: prompt eval time = 13739.30 ms / 8 tokens ( 1717.41 ms per token, 0.58 tokens per second)`
`llama_print_timings: eval time = 39734.28 ms / 159 runs ( 249.90 ms per token, 4.00 tokens per second)`
`llama_print_timings: total time = 54550.33 ms`
When I run a model I observe that the GPU memory goes to max (4Gb). I find that the response is pretty slow but maybe it's normal because of my setup? Or am I missing something with the configuration and/or the model.
In your opinion what is the minimum setup to have a response to a question in 5s or less.
Thank you for your help | 2023-07-08T01:24:11 | https://www.reddit.com/r/LocalLLaMA/comments/14tpqdd/performance_feedback/ | SanglierAlbinos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14tpqdd | false | null | t3_14tpqdd | /r/LocalLLaMA/comments/14tpqdd/performance_feedback/ | false | false | self | 5 | null |
Anyone have any favorite social / social science / computational social science uses of open LLMs? | 2 | I saw a cool paper recently about using LLMs to impute missing responses in surveys. The gist was that you can fine-tune LLMs on government survey responses, then predict what a specific demographic vector would say to a question.
Kim, J., & Lee, B. (2023). *AI-Augmented Surveys: Leveraging Large Language Models for Opinion Prediction in Nationally Representative Surveys* (arXiv:2305.09620). arXiv. [https://doi.org/10.48550/arXiv.2305.09620](https://doi.org/10.48550/arXiv.2305.09620)
| 2023-07-08T00:41:19 | https://www.reddit.com/r/LocalLLaMA/comments/14toshq/anyone_have_any_favorite_social_social_science/ | josha_umich | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14toshq | false | null | t3_14toshq | /r/LocalLLaMA/comments/14toshq/anyone_have_any_favorite_social_social_science/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
Open Source Model Recommendations | 0 | Hello!
I've been wanting to use an open source LLM that can answer questions based on my existing technical document knowledge base. I want answers that prioritize accuracy with a reasonable amount of clarity. What are some of the models that you would recommend for this (either GGML or GPTQ models are fine)? It would be super helpful to get some model settings that have worked best too (temperature, repetition\_penalty, etc.)
Looking forward to hearing your thoughts!
​ | 2023-07-07T23:19:21 | https://www.reddit.com/r/LocalLLaMA/comments/14tmu0b/open_source_model_recommendations/ | Beautiful-Stage-4026 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14tmu0b | false | null | t3_14tmu0b | /r/LocalLLaMA/comments/14tmu0b/open_source_model_recommendations/ | false | false | self | 0 | null |
Conversational "memory loss"? | 5 | I'm running 4-bit Chronos Hermes 13B in a persistent LLaMa.cpp chat.
Sometimes, I have to remind it of things we agreed only a few questions before. While not an actual conversation, this helps to illustrate the matter:
> Me: All girl dogs can give birth to puppies.
>
> ChatLLaMa: Yes, this is one of the defining characteristics of female dogs, the potential to give birth.
>
> Me: I have a dog named "Clara", a girl dog.
>
> ChatLLaMa: That's so cool! I bet she's really sweet.
>
> Me: Yes, she is.
>
> ... a few minutes of conversation goes by ...
>
> Me: Is it possible Clara might one day give birth to puppies?
>
> ChatLLaMa: I don't know; it depends on what kind of dog Clara is.
>
> Me: What do you mean?
>
> ChatLLaMa: Well, is Clara a girl dog?
>
> Me (puzzled by forgetfulness): Yes.
>
> ChatLLaMa: Oh, that's nice. I guess if a girl dog could give birth to puppies, then, maybe?
Am I prompting this wrong? I know I have long held to the idea these AI chats are often glorified auto-completes but this seems a bit too close to that presumption.
I know this is new tech and all and I should be patient with the advances but this seems a bit extreme. Maybe there are smarter models and/or implementations?
Thanks in advance.
P.S.: I am fully aware of the distinct possibility, five years from know, I am going to look back and say "Wow, was I a friggin' whiner". So, if that is the case, go right ahead and say so. | 2023-07-07T21:48:10 | https://www.reddit.com/r/LocalLLaMA/comments/14tkiwp/conversational_memory_loss/ | AlexKingstonsGigolo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14tkiwp | false | null | t3_14tkiwp | /r/LocalLLaMA/comments/14tkiwp/conversational_memory_loss/ | false | false | self | 5 | null |
LlaMa on BitTensor | 0 | Is anybody in here running fine-tuned LLaMa on BitTensor?
If so, do you have any tips for creating a strong model?
Im considering it but want a better idea of what specs people are using to remain competitive on the network? | 2023-07-07T21:34:44 | https://www.reddit.com/r/LocalLLaMA/comments/14tk62g/llama_on_bittensor/ | Bramp10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14tk62g | false | null | t3_14tk62g | /r/LocalLLaMA/comments/14tk62g/llama_on_bittensor/ | false | false | default | 0 | null |
Chinese and Korean output from WizardLM 1.1 | 0 | I'm using the quantized version of WizardLM 1.1 with 4 bits and I consistently get Chinese and Korean output.Example:
You are a helpful AI assistant.
USER: Hello
ASSISTANT: 안녕하세요! 어떻게 도와드릴까요?
USER: can you speak english, please?
ASSISTANT: 죄송합니다. 영어로 대화해 주실 수 있습니다. 어떻게 도와드릴까요?
I'm using it with koboldcpp. I saw no mention of korean or chinese in the training data. Is this a bug? | 2023-07-07T20:02:46 | https://www.reddit.com/r/LocalLLaMA/comments/14thred/chinese_and_korean_output_from_wizardlm_11/ | KillerMiller13 | self.LocalLLaMA | 2023-07-07T20:11:46 | 0 | {} | 14thred | false | null | t3_14thred | /r/LocalLLaMA/comments/14thred/chinese_and_korean_output_from_wizardlm_11/ | false | false | default | 0 | null |
Does exllama load the model directly into the GPU? | 18 | When using autogptq I know that the model is first fully loaded into the CPU, deserialized, and then sent to the GPU.
But when I was using exllama to load a model for inference on Google Colab and Kaggle notebooks I noticed that the model seemed to be loading directly into the GPU?
I didn't see the RAM level increase. I'm not sure if it's because of an innacuracy with the RAM metric or because the loaded was loaded directly into GPU.
If that's not the case and it is in fact loaded into the CPU before the GPU, is there any way to load it directly into the GPU? So that I don't have to provision that high of a RAM if I'm opting for some cloud service for hosting just an LLM. | 2023-07-07T19:21:53 | https://www.reddit.com/r/LocalLLaMA/comments/14tgnxp/does_exllama_load_the_model_directly_into_the_gpu/ | Chirumer | self.LocalLLaMA | 2023-07-07T19:47:37 | 0 | {} | 14tgnxp | false | null | t3_14tgnxp | /r/LocalLLaMA/comments/14tgnxp/does_exllama_load_the_model_directly_into_the_gpu/ | false | false | self | 18 | null |
What does it mean to 'Merge weights' in regards to LoRA or QLoRA? | 3 | If someone performs a finetune of a gptq 4-bit version of a model and builds a new set of weights, then can they 'merge' those weights back into the base model? Is that why I don't see any weight files in many of the quantized gtpq model repositories on huggingface?
For example, this is a fine tune of starcoder which was then quantized by thebloke:
TheBloke/starcoderplus-GPTQ
-But I don't see any weight files in the repository, so I assume those were merged back into the model? So then I could further fine-tune it to my use case building upon the tuning that it has already received, correct? | 2023-07-07T19:21:23 | https://www.reddit.com/r/LocalLLaMA/comments/14tgnge/what_does_it_mean_to_merge_weights_in_regards_to/ | gentlecucumber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14tgnge | false | null | t3_14tgnge | /r/LocalLLaMA/comments/14tgnge/what_does_it_mean_to_merge_weights_in_regards_to/ | false | false | self | 3 | null |
Best setup for local 3.5 replacement with upper mid tier gaming rig? | 0 | Complete noob to local setups here.
​
Here's my specs:
32GB RAM
AMD Ryzen 5 3600
3080Ti
​
Looking to replace GPT3.5 for personal and work purposes with something that's at least 70% as good. I've played with GPT4All using Falcon, Vicuna (which doesn't even work unfortunately), Hermes, and Wizard Uncensored. None of them are even close right now for my applications, unfortunately. Accuracy, length, depth, and sassiness (?) of responses all make these models mostly useless for me right now.
I'm not sure if the bottleneck is in GPT4All, my system specs, or the lack of refinement of the models themselves. I'm willing (and actually even curious) to learn how to train the models myself using my own data but am not sure if it'd really be worth the time if there are simply better options out there than what I've played with so far.
Cheers! | 2023-07-07T19:19:21 | https://www.reddit.com/r/LocalLLaMA/comments/14tglck/best_setup_for_local_35_replacement_with_upper/ | powerfulndn | self.LocalLLaMA | 2023-07-07T20:42:02 | 0 | {} | 14tglck | false | null | t3_14tglck | /r/LocalLLaMA/comments/14tglck/best_setup_for_local_35_replacement_with_upper/ | false | false | default | 0 | null |
[Discussion] Looking for an Open-Source Speech to Text model (english) that captures filler words, pauses and also records timestamps for each word. | 4 | Looking for an Open-Source Speech to Text model (english) that captures filler words, pauses and also records timestamps for each word.
The model should capture the text verbatim, without much processing. The text should include the false starts to a sentence, misspoken words, incorrect pronunciation or word form etc.
The transcript is being captured to ascertain the speaking ability of the speaker hence all this information is required.
Example Transcription of Audio:
Yes. One of the most important things I have is my piano because um I like playing the piano. I got it from my parents to my er twelve birthday, so I have it for about nine years, and the reason why it is so important for me is that I can go into another world when I’m playing piano. I can forget what’s around me and what ... I can forget my problems and this is sometimes quite good for a few minutes. Or I can play to relax or just, yes to ... to relax and to think of something completely different.
I believe the OpenAI Whisper has support for recording timestamps. I don't want to rely on paid API service for the Speech to Text Transcription. | 2023-07-07T19:09:10 | https://www.reddit.com/r/LocalLLaMA/comments/14tgbik/discussion_looking_for_an_opensource_speech_to/ | awinml1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14tgbik | false | null | t3_14tgbik | /r/LocalLLaMA/comments/14tgbik/discussion_looking_for_an_opensource_speech_to/ | false | false | default | 4 | null |
Doesn't a 4090 massively overpower a 3090 for running local LLMs? | 40 | People seem to consider them both as about equal for the price / performance. I know 4090 doesn't have any more vram over 3090, but in terms of compute according to the specs 3090 has 142 tflops at fp16 while 4090 has 660 tflops at fp8. Isn't that almost a five-fold advantage in favour of 4090, at the 4 or 8 bit precisions typical with local LLMs? Or am I missing something? | 2023-07-07T18:48:54 | https://www.reddit.com/r/LocalLLaMA/comments/14tfr8h/doesnt_a_4090_massively_overpower_a_3090_for/ | eesahe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14tfr8h | false | null | t3_14tfr8h | /r/LocalLLaMA/comments/14tfr8h/doesnt_a_4090_massively_overpower_a_3090_for/ | false | false | self | 40 | null |
system requirement to fine tune stable vicuna13b | 11 | Noob question but gotta ask , I'm made a pc with ram 32 gb ddr4, ryzen 5 5600g, rtx 3060 12gb , 750watt power, 1tb and a not so great cooling system or a case.
I was wondering if I could fine tune vicuna in this or shift to a cloud gpu. My biggest fear is overheating and shutting down of pc, do u guys think it will stand out? | 2023-07-07T17:22:40 | https://www.reddit.com/r/LocalLLaMA/comments/14tdi1p/system_requirement_to_fine_tune_stable_vicuna13b/ | Sensitive-Analyst288 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14tdi1p | false | null | t3_14tdi1p | /r/LocalLLaMA/comments/14tdi1p/system_requirement_to_fine_tune_stable_vicuna13b/ | false | false | self | 11 | null |
Chat with your PDFs by self-hosting LocalGPT on any cloud | 29 | 2023-07-07T16:12:44 | https://github.com/skypilot-org/skypilot/blob/master/llm/localgpt/README.md | skypilotucb | github.com | 1970-01-01T00:00:00 | 0 | {} | 14tbmyt | false | null | t3_14tbmyt | /r/LocalLLaMA/comments/14tbmyt/chat_with_your_pdfs_by_selfhosting_localgpt_on/ | false | false | 29 | {'enabled': False, 'images': [{'id': 'VJjaIGoCCzeLqmBoQPWLe2NpMQMfCXn29K06b1czAqE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fh59wfMT5GLItz0KABDvRchUyp8E_0BX2WHcpFTcrV4.jpg?width=108&crop=smart&auto=webp&s=11e5935f94c7836c0132c75744d6b50f4208c508', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fh59wfMT5GLItz0KABDvRchUyp8E_0BX2WHcpFTcrV4.jpg?width=216&crop=smart&auto=webp&s=8e1514132380a0e36cc34adb9f95fb190d8c9d28', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fh59wfMT5GLItz0KABDvRchUyp8E_0BX2WHcpFTcrV4.jpg?width=320&crop=smart&auto=webp&s=5843407f2c3f0e333f2bf0662b770191a46e713b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fh59wfMT5GLItz0KABDvRchUyp8E_0BX2WHcpFTcrV4.jpg?width=640&crop=smart&auto=webp&s=edf71f4f93c594f0083e68c5eb38415f79aed869', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fh59wfMT5GLItz0KABDvRchUyp8E_0BX2WHcpFTcrV4.jpg?width=960&crop=smart&auto=webp&s=b98ceefc3145d996aaddcabe654dfb77fd9de7dd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fh59wfMT5GLItz0KABDvRchUyp8E_0BX2WHcpFTcrV4.jpg?width=1080&crop=smart&auto=webp&s=d5a7f51dcca2f42fb43ccfa5cb6075532bf2f612', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fh59wfMT5GLItz0KABDvRchUyp8E_0BX2WHcpFTcrV4.jpg?auto=webp&s=3a7135ee02a30430a7178e914a86ef1256653864', 'width': 1200}, 'variants': {}}]} | ||
CodeGen2.5: Small, but mighty | 72 | 2023-07-07T15:25:59 | https://blog.salesforceairesearch.com/codegen25/ | Acrobatic-Site2065 | blog.salesforceairesearch.com | 1970-01-01T00:00:00 | 0 | {} | 14tad84 | false | null | t3_14tad84 | /r/LocalLLaMA/comments/14tad84/codegen25_small_but_mighty/ | false | false | default | 72 | null | |
CPU with or without integrated graphics? (13900K vs. 13900KF) | 6 | I'm currently building a combo gaming and ML rig with an RTX 4090.
Two questions:
1. Should I consider getting a CPU with integrated graphics (13900K)? Would that free up resources from the 4090 vs. using the 13900KF which doesn't have integrated graphics?
2. Is it worth dual-booting Windows and Linux to have access to things like Triton? Any other perks of using Linux vs. just using Windows and potentially WSL?
Thank you! | 2023-07-07T15:05:32 | https://www.reddit.com/r/LocalLLaMA/comments/14t9u8l/cpu_with_or_without_integrated_graphics_13900k_vs/ | yuicebox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14t9u8l | false | null | t3_14t9u8l | /r/LocalLLaMA/comments/14t9u8l/cpu_with_or_without_integrated_graphics_13900k_vs/ | false | false | self | 6 | null |
Example of a small fine tuning | 11 | Does anyone have a good guide or tutorial or perhaps help me in the right direction for fine tuning a small company data that I can use to call via API? | 2023-07-07T14:58:53 | https://www.reddit.com/r/LocalLLaMA/comments/14t9nbp/example_of_a_small_fine_tuning/ | mehrdotcom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14t9nbp | false | null | t3_14t9nbp | /r/LocalLLaMA/comments/14t9nbp/example_of_a_small_fine_tuning/ | false | false | self | 11 | null |
Guanaco-Unchained Dataset | 46 | Guanaco is one of my favorite models, so it seemed like a no brainer to use it as the base for my own custom dataset. But when I looked into the dataset, I was surprised at some of the entries. I removed most of the non English prompts so I could manually check the quality of the data. I took out as much alignment, refusals, "I'm OpenAssistant", implied morality, and "as an AI" as I could. Anything that was questionable quality, I removed. Altogether, It's about 75% smaller than the original.
My plan is to train a model on this pruned dataset, and assuming it works well enough, I'll start blending in my own data into it to make a totally new dataset. That way I can use the pruned guanaco model as a baseline/benchmark to see if the new data I'm adding actually has the effect I want on the output. Guanaco is pretty decent at erotica, but there is hardly any at all in the training data I noticed. Can't wait to see how it is with a solid sampling of smut added into it.
One thing maybe someone here can answer for me. I noticed that there were some instructions where the last response was from the Human instead of the Assistant, making it seem like the conversation was incomplete or unanswered. Does anyone know if this was intentional? At first I thought it must be. But after a while (and maybe I was imagining it), I felt like a significant number of these straggler response examples were weird or bad quality. So I just did a search for all entries that ended with a Human: response and eliminated that last human response (not the whole entry). Was there some benefit to including examples like that? I can find specific examples if I'm not being clear.
This dataset probably isn't TOO impressive on it's own right now, but I'd be surprised if I didn't save at least a few people a head-start. As far as I can tell Guanaco is one of the best datasets that's commercially licensed with zero dependency on OpenAI API drama. If there are better one's out there I'd love to hear about them.
[https://huggingface.co/datasets/CheshireAI/guanaco-unchained](https://huggingface.co/datasets/CheshireAI/guanaco-unchained) | 2023-07-07T12:48:43 | https://www.reddit.com/r/LocalLLaMA/comments/14t6efv/guanacounchained_dataset/ | CheshireAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14t6efv | false | null | t3_14t6efv | /r/LocalLLaMA/comments/14t6efv/guanacounchained_dataset/ | false | false | self | 46 | {'enabled': False, 'images': [{'id': 'Un8XOSXVdR2fXTrAk-LXBvl76WBGBSbJjSEbS1J8wBc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Z2_2ePrPQLpu0DnHUIwxAtboXJ--sa0RmLAUMsYtYmI.jpg?width=108&crop=smart&auto=webp&s=9406a804b0b5c0b9a51df46276faa88f23036dc9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Z2_2ePrPQLpu0DnHUIwxAtboXJ--sa0RmLAUMsYtYmI.jpg?width=216&crop=smart&auto=webp&s=ca80c481fe2e4f80cef00a2f61bda1158eff3e15', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Z2_2ePrPQLpu0DnHUIwxAtboXJ--sa0RmLAUMsYtYmI.jpg?width=320&crop=smart&auto=webp&s=c34710eeb938e027a149a95f87fae75c1c8cb89a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Z2_2ePrPQLpu0DnHUIwxAtboXJ--sa0RmLAUMsYtYmI.jpg?width=640&crop=smart&auto=webp&s=5180cf343fb6280324cd4fd77d3c3ffe5370f45c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Z2_2ePrPQLpu0DnHUIwxAtboXJ--sa0RmLAUMsYtYmI.jpg?width=960&crop=smart&auto=webp&s=696868e9f18fe0ca6252d88f31a5115d3c4a049b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Z2_2ePrPQLpu0DnHUIwxAtboXJ--sa0RmLAUMsYtYmI.jpg?width=1080&crop=smart&auto=webp&s=28d316371e40af2553f15e8ce441bf18bc1c361c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Z2_2ePrPQLpu0DnHUIwxAtboXJ--sa0RmLAUMsYtYmI.jpg?auto=webp&s=1b60b8a3dc641e13aeea6defdd645cb289a0bd90', 'width': 1200}, 'variants': {}}]} |
Official WizardLM-13B-V1.1 Released! Train with Only 1K Data! Can Achieve 86.32% on AlpacaEval! | 219 |
* Today, the ***WizardLM Team*** has released their **Official** **WizardLM-13B-V1.1** model trained with only 🔥**1K** 🔥high-quality evolved data!
* Paper: [https://arxiv.org/abs/2304.12244](https://arxiv.org/abs/2304.12244)
* The project repo: [WizardLM](https://github.com/nlpxucan/WizardLM/tree/main)
* The official Twitter: [WizardLM\_AI](https://twitter.com/WizardLM_AI)
* HF Model: [WizardLM/WizardLM-13B-V1.1](https://huggingface.co/WizardLM/WizardLM-13B-V1.1)
* Online demo links:
1. [https://924134c0fad28192.gradio.app/](https://924134c0fad28192.gradio.app/)
2. [https://e8a06366ccd1c4d1.gradio.app/](https://e8a06366ccd1c4d1.gradio.app/)
3. [https://dfc5113f66739c80.gradio.app/](https://dfc5113f66739c80.gradio.app/)
(We will update the demo links in our [github](https://github.com/nlpxucan/WizardLM/tree/main).)
**WizardLM-13B-V1.1 achieves:**
1) 6.74 on MT-Bench
2) 🔥**86.32% on Alpaca Eval (ChatGPT is 86.09%)**
3) 99.3% on WizardLM Eval (Chatgpt is 100%)
https://preview.redd.it/spzie2gwajab1.png?width=1345&format=png&auto=webp&s=3c88a74b53da9fc987c58cc7dc6b9ac4d74d3efd
https://preview.redd.it/rvb7utnxajab1.png?width=1238&format=png&auto=webp&s=bea32fb5213c924fb1602ec87d64c4af85b6c0b6
Note: MT-Bench and AlpacaEval are all self-test, will push update and request review. All tests are completed under their official settings. | 2023-07-07T12:27:38 | https://www.reddit.com/r/LocalLLaMA/comments/14t5wzt/official_wizardlm13bv11_released_train_with_only/ | cylaw01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14t5wzt | false | null | t3_14t5wzt | /r/LocalLLaMA/comments/14t5wzt/official_wizardlm13bv11_released_train_with_only/ | false | false | 219 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} | |
I have a large code base and documentation for a system want to explore it what are my options? | 2 | **Current Resources:**
- An extensive codebase
- Comprehensive documentation in markdown
- A Mac Studio equipped with an M1 Max chip and 32GB RAM
**Objectives:**
- Implement AI to parse and learn from the code and associated
documentation via prompts
- Expand my knowledge regarding Large Language Models (LLMs), focusing on
how to operate them autonomously
I do have a PC that features PCIe 3.0 and an Nvidia 1080 GPU at my
disposal. However, I'm open to upgrading to a more powerful system if the
Mac Studio isn't sufficient.
Could anyone suggest the best options for me? Any specific software or
models you'd recommend?
I possess a high-level understanding of AI and LLMs, and have more than
25 years of experience with various programming languages and operating
systems, Python in particular. I have the technical proficiency required,
but I'm yet to master LLMs.
I hope this question is within the subreddit's guidelines as I could
genuinely use some guidance to kickstart this endeavor. | 2023-07-07T11:35:59 | https://www.reddit.com/r/LocalLLaMA/comments/14t4qwn/i_have_a_large_code_base_and_documentation_for_a/ | 0OOO00000OO00O0O0OOO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14t4qwn | false | null | t3_14t4qwn | /r/LocalLLaMA/comments/14t4qwn/i_have_a_large_code_base_and_documentation_for_a/ | false | false | self | 2 | null |
Highlight on some interesting 8k and 16k models. | 32 | During the few last days, Brandon / Bhenrym14 has published a few 8k (13b, 33b) and 16k (33b) ctx models based on Jon Durbin's Airoboros model (v 1.4.1).
Brandon's models don't use Kaioken's Superhot 8k Lora, but either :
\- the Position Interpolation RoPE scaling technique (noted PI)
\- the NTK-Aware Scaled RoPE Embeddings technique (noted NTK
The perplexity of the PI method I tested is much more consistent with the original llama perplexity, at least for the 3,072 first tokens.
I had great results in terms of quality over short stories using Ooba (Exllama) / Silly Tavern on the 33b 8k PI version (with a 4k context), but after the initial context length was passed, even my RTX3090 had memory allocation errors. I'm gonna try the 13b 8k today, and I invite you guys to test these models too, because they seem to have passed under the radar and any breakthrough in terms of context length deserves its fair share of.. attention.
And don't hesitate to share your config (CPU, RAM, GPU) and parameters (models loader & settings, ctx size, GUI, API, Frontend, presets) used to run them, so we can have a better overall insight about these models !
Here's the hugging face link of Brandon's profile :
[https://huggingface.co/bhenrym14](https://huggingface.co/bhenrym14)
Edit : And for a GGML version of the Airoboros 1.4.1 33b 8k PI released by Brandon, Ycros made K-Quant versions for LlamaCPP / KoboldCPP :
[https://huggingface.co/ycros/airoboros-33b-gpt4-1.4.1-PI-8192-GGML](https://huggingface.co/ycros/airoboros-33b-gpt4-1.4.1-PI-8192-GGML)
I'm playing with the ggml one (Q3\_K\_M) now, because I have some troubles of memory allocation with the GPTQ versions if I want to use 4k context.
\---
Edits : some mistakes corrected after reading Bhenrym14's comment, and some additions. | 2023-07-07T11:28:50 | https://www.reddit.com/r/LocalLLaMA/comments/14t4lbc/highlight_on_some_interesting_8k_and_16k_models/ | Nexesenex | self.LocalLLaMA | 2023-07-08T01:58:32 | 0 | {} | 14t4lbc | false | null | t3_14t4lbc | /r/LocalLLaMA/comments/14t4lbc/highlight_on_some_interesting_8k_and_16k_models/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': 'CR-uO4OIbgE2ZF1bivXbT9fbp2L5xy4jBBibjmhfHXo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ew0C6rX6W76-c5SZnd5E8x0ffwDcIEdWnTInfDHYjZE.jpg?width=108&crop=smart&auto=webp&s=4be80003f3cab181b3f2b632cbdd84669f0b9222', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ew0C6rX6W76-c5SZnd5E8x0ffwDcIEdWnTInfDHYjZE.jpg?width=216&crop=smart&auto=webp&s=8df7eec20cf15f474cac49bebd100922ac009561', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ew0C6rX6W76-c5SZnd5E8x0ffwDcIEdWnTInfDHYjZE.jpg?width=320&crop=smart&auto=webp&s=e8396156d6c449ca8a248a7e2d3116f67f1c4ca3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ew0C6rX6W76-c5SZnd5E8x0ffwDcIEdWnTInfDHYjZE.jpg?width=640&crop=smart&auto=webp&s=36f3195bc92949451793bfaff5f5b550ea46d7b3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ew0C6rX6W76-c5SZnd5E8x0ffwDcIEdWnTInfDHYjZE.jpg?width=960&crop=smart&auto=webp&s=d83e1729bc424e63af4eac66c8a27d8556ee98f6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ew0C6rX6W76-c5SZnd5E8x0ffwDcIEdWnTInfDHYjZE.jpg?width=1080&crop=smart&auto=webp&s=29505785174e9f42874a609fb28922c3fd00dab8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ew0C6rX6W76-c5SZnd5E8x0ffwDcIEdWnTInfDHYjZE.jpg?auto=webp&s=0c1ce1f941cca0a55216409edabe2a1f782b710f', 'width': 1200}, 'variants': {}}]} |
What is the best opensource LLM model for in-context learning? | 2 | Hi, I wonder which model is the best opensource LLM for in-context learning?
I tested openLLama 7B. However, It seems that it doesn't work to control LLM by prompt.
Could recommend better opensource LLM? I would like to use LLM with a various prompt. | 2023-07-07T08:15:12 | https://www.reddit.com/r/LocalLLaMA/comments/14t0sle/what_is_the_best_opensource_llm_model_for/ | Diligent-Machine-303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14t0sle | false | null | t3_14t0sle | /r/LocalLLaMA/comments/14t0sle/what_is_the_best_opensource_llm_model_for/ | false | false | self | 2 | null |
Best approach to multi-party conversions? | 2 | As title, what is the best way to have the LLM to participate in a multi-human conversation without the LLM trying to steal (=> complete, hallucinate) the other human conversations?
That is, in a conversation involving more than one speakers, ai want to bot to keep to itself without trying to generate the others conversation just because they were in the context buffer.
(Edited to be more specific about multi-human, not multi-characters.) | 2023-07-07T07:42:54 | https://www.reddit.com/r/LocalLLaMA/comments/14t072j/best_approach_to_multiparty_conversions/ | edwios | self.LocalLLaMA | 2023-07-08T14:01:23 | 0 | {} | 14t072j | false | null | t3_14t072j | /r/LocalLLaMA/comments/14t072j/best_approach_to_multiparty_conversions/ | false | false | self | 2 | null |
Training models on specific documentation | 1 | Has anyone tried training a model on specific documentation?
For instance I recently started working on nextjs 13 and chat gpt only knew about version 12. Even using the search feature it would mix in partial 12 and 13 info and end up being incorrect. I have come across this a lot where a framework went through a major update in last year and chat gpt is useless when asked about it.
Has anyone added a specific set of docs to a local LLM and had good results with the model forming code for the specific doc version? If so what was your process? | 2023-07-07T05:28:21 | https://www.reddit.com/r/LocalLLaMA/comments/14sxngc/training_models_on_specific_documentation/ | Mr_Nice_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sxngc | false | null | t3_14sxngc | /r/LocalLLaMA/comments/14sxngc/training_models_on_specific_documentation/ | false | false | self | 1 | null |
LongLlama | 144 | 2023-07-07T05:23:30 | https://twitter.com/s_tworkowski/status/1677125863429795840?s=46&t=4Lg1z9tXUANCKLiHwRSk_A | Acrobatic-Site2065 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 14sxkcp | false | {'oembed': {'author_name': 'Szymon Tworkowski', 'author_url': 'https://twitter.com/s_tworkowski', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Introducing LongLLaMA 🦙, an unlimited-context version of OpenLLaMA fine-tuned at 8k & capable of extrapolating to 256k tokens!<br><br>We train it using our new Focused Transformer 🎯 technique (FoT). No degradation on short context, drop-in compatibility & Apache 2.0 license 🔥🔥<br>🧵 <a href="https://t.co/QiNl5xNYvl">pic.twitter.com/QiNl5xNYvl</a></p>— Szymon Tworkowski (@s_tworkowski) <a href="https://twitter.com/s_tworkowski/status/1677125863429795840?ref_src=twsrc%5Etfw">July 7, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/s_tworkowski/status/1677125863429795840', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_14sxkcp | /r/LocalLLaMA/comments/14sxkcp/longllama/ | false | false | 144 | {'enabled': False, 'images': [{'id': 'wiqVDKdW9PUcBslQ5gjC65akIWhI1wsvZ8CERhlUAs8', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/O47A58iKid31kjjUU_0hL5SyZG4D7Av9zBMb7ePSiFU.jpg?width=108&crop=smart&auto=webp&s=2c8e9eff854f4a11e20afaa895c526c4f0f972d6', 'width': 108}], 'source': {'height': 69, 'url': 'https://external-preview.redd.it/O47A58iKid31kjjUU_0hL5SyZG4D7Av9zBMb7ePSiFU.jpg?auto=webp&s=e970dff13d5488b5eb17cb517dbb907a6e595ba5', 'width': 140}, 'variants': {}}]} | ||
Applying training data for specific agentic tasks | 1 | I have collected a large amount of very valuable training data for two specific tasks, advanced summarization and professional book editing.
What is the best way to utilise all these top-quality, human-created examples to improve the efficiency of the training data? | 2023-07-07T04:17:57 | https://www.reddit.com/r/LocalLLaMA/comments/14sw99j/applying_training_data_for_specific_agentic_tasks/ | Chris_in_Lijiang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14sw99j | false | null | t3_14sw99j | /r/LocalLLaMA/comments/14sw99j/applying_training_data_for_specific_agentic_tasks/ | false | false | self | 1 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.