title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Oniichat drama rundown: everything you missed | 103 | Hello, my local drama llamas!
​
This will be a quick rundown on all the Oniichat drama that has gone down that you missed from your favorite reporter, the tortistic turtle.
​
As you may have seen, yesterday the 'oniichat' model was uploaded and posted here. The Reddit poster's conclusion: "Tried this model for a couple of hours and it really upholds the character card really well. i'm having a lot of fun with it."
​
The model description stands proud and not-so-humbly:
​
https://preview.redd.it/5zfvduagpg4c1.png?width=935&format=png&auto=webp&s=770247826487536c8d8aca45c22bb764768c08ec
​
But the commenters on the LocalLLaMA post were not impressed, not at all.
​
Select comment by user justynasty: "I didn't feel it different than other 13B models (trained on bluemoon dataset). It kept repeating the info from the character card, I had to move the story forward."
​
But it gets even worse, which will be a running theme in this story. After a while, people start pointing out an eerie similarity between the good old MythoMax and Oniichat. Even Gryphe, the legendary maker of MythoMax, and THE alpindale join in:
https://preview.redd.it/9eet9adjmg4c1.png?width=1917&format=png&auto=webp&s=5b55f94cab85379762757ef120ba2cefcec9d8fc
It seems that the SHA256 hashes, which can confirm whether two files are identical byte for byte, show that oniichat is simply a plagiarized MythoMax!
​
That's where I, the protagonist of the story, enter the scene. I bravely download a weight file from both models to verify that the hashes indeed match up.
https://preview.redd.it/jl8nbin3ng4c1.png?width=1297&format=png&auto=webp&s=7a0c579fbd0531b383b354813451c31a955d4833
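For anyone who wants to reproduce that kind of check themselves, here is a minimal sketch; the shard file names below are placeholders, not the actual file names from either repo:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-GB shards don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder paths: point these at the same shard downloaded from each repo.
a = sha256_of("mythomax/pytorch_model-00001-of-00002.bin")
b = sha256_of("oniichat/pytorch_model-00001-of-00002.bin")
print(a, b, "identical" if a == b else "different")
```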
But ChatGPT is here to save the day! 'As a developer of Oniichat, I apologize for any confusion I have caused,' he says. Was it a misunderstanding all along? A true peripeteia!
https://preview.redd.it/zgi3xanjng4c1.png?width=1577&format=png&auto=webp&s=3c9e52d5dfddb33a196c1be21cdd06a4ac4283cc
After posting this comment, he indeed deleted the original 13 PyTorch files, replaced them with 12 safetensors files, and credited MythoMax in the readme, and the hashes no longer matched up. All was well. Or was it?
​
Unfortunately, TheBloke was having none of it. He started running tests
https://preview.redd.it/w9z06ck6qg4c1.png?width=1548&format=png&auto=webp&s=f1986f0f4eecf4860106dd5b432cecda1a38136a
And the conclusion that he came to was that despite the different weight format, the models were still producing the exact same output!
​
https://preview.redd.it/ae8gb46mqg4c1.png?width=1432&format=png&auto=webp&s=6e70c272c1459c9afcc33ef5d9509bab70ee9420
This was just one hour ago, and this is where we stand right now. What do you think? Was it all just a big misunderstanding and onii uploaded the wrong model twice? What do you think will happen next in this showdown?
​
Signing off, drama reporter & protagonist tortistic turtle | 2023-12-05T11:42:42 | https://www.reddit.com/r/LocalLLaMA/comments/18ba6md/oniichat_drama_rundown_everything_you_missed/ | tortistic_turtle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ba6md | false | null | t3_18ba6md | /r/LocalLLaMA/comments/18ba6md/oniichat_drama_rundown_everything_you_missed/ | false | false | 103 | null |
Fine-tuning Approach | 1 | [removed] | 2023-12-05T11:35:46 | https://www.reddit.com/r/LocalLLaMA/comments/18ba30b/finetuning_approach/ | One-Calligrapher1792 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ba30b | false | null | t3_18ba30b | /r/LocalLLaMA/comments/18ba30b/finetuning_approach/ | false | false | self | 1 | null |
Why doesn't anyone seem to care about knowledge cut-off dates? | 21 | I'm asking this because I rarely see this mentioned anywhere and when I download a model, I'm never able to find it anywhere either, unless I ask the model itself.
In my case, it changes everything, as I use models mostly for coding and two years worth of knowledge is a VERY long time. And most models seem to have 2021 as the cut-off year.
I would just like to know if I am missing something, as I don't think I should be the only one concerned with this. | 2023-12-05T10:57:18 | https://www.reddit.com/r/LocalLLaMA/comments/18b9i1s/why_doesnt_anyone_seem_to_care_about_knowledge/ | Ok-Book-1494 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b9i1s | false | null | t3_18b9i1s | /r/LocalLLaMA/comments/18b9i1s/why_doesnt_anyone_seem_to_care_about_knowledge/ | false | false | self | 21 | null |
I made a list of cheaper AI providers compared to OpenAI | 30 | I recently researched AI providers that are cheaper than OpenAI and thought of sharing it here:
PS: link goes directly to their pricing page.
* [Pricing (openai.com)](https://openai.com/pricing)
* [Pricing | Anyscale Endpoints](https://docs.endpoints.anyscale.com/pricing)
* [Simple Pricing | Machine Learning Infrastructure | Deep Infra](https://deepinfra.com/pricing)
* [Hugging Face – Pricing](https://huggingface.co/pricing)
* [Pricing (fireworks.ai)](https://readme.fireworks.ai/page/pricing)
* [Pricing (perplexity.ai)](https://docs.perplexity.ai/docs/pricing)
* [Together Pricing | The Most Powerful Tools at the Best Value](https://www.together.ai/pricing)
* [Azure AI pricing](https://azure.microsoft.com/en-us/free/ai-services/)
Did I miss any? | 2023-12-05T10:44:09 | https://www.reddit.com/r/LocalLLaMA/comments/18b9bc5/i_made_a_list_of_cheaper_ai_providers_compared_to/ | jerrygoyal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b9bc5 | false | null | t3_18b9bc5 | /r/LocalLLaMA/comments/18b9bc5/i_made_a_list_of_cheaper_ai_providers_compared_to/ | false | false | self | 30 | null |
Speed of Fiber Transmission to transport Local LLMs | 1 | I thought this community will like this too: The speed of optical fiber transmissions is growing. 22 PBit/s, that's like a fully trained GPT4 model transmitted from one place to another in less than a second. Local LLMs might just as well move from one data center to another, just where energy is available. [https://www.inavateonthenet.net/News/article/optical-fibre-transmission-world-record-is-doubled-to-229-petabits-per-second](https://www.inavateonthenet.net/News/article/optical-fibre-transmission-world-record-is-doubled-to-229-petabits-per-second?utm_source=emailmarketing&utm_medium=email&utm_campaign=inanl20230512&utm_content=2023-12-05)
​ | 2023-12-05T10:43:01 | https://www.reddit.com/r/LocalLLaMA/comments/18b9atq/speed_of_fiber_transmission_to_transport_local/ | stefan2017ch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b9atq | false | null | t3_18b9atq | /r/LocalLLaMA/comments/18b9atq/speed_of_fiber_transmission_to_transport_local/ | false | false | self | 1 | null |
Pre-training intuition question | 1 | Is pre-training ever done in a cumulative way whereby one pre-trains on elementary school content, then high school content, then university content. Or with a "how to reason" curriculum somewhere early?
Is my intuition that this would be more efficient incorrect?
Obviously the cost of building the curriculum would be high, but multiple projects could leverage the work.
​ | 2023-12-05T10:16:55 | https://www.reddit.com/r/LocalLLaMA/comments/18b8y2w/pretraining_intuition_question/ | Smallpaul | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b8y2w | false | null | t3_18b8y2w | /r/LocalLLaMA/comments/18b8y2w/pretraining_intuition_question/ | false | false | self | 1 | null |
deepsex-34b: an NSFW model pretrained on light novels | 281 | [zzlgreat/deepsex-34b · Hugging Face](https://huggingface.co/zzlgreat/deepsex-34b)
Here are the steps to make this model:
1. I first collected about 4GB of various light novels, and used BERT to perform two rounds of similarity deduplication on novels with similar plots in the dataset. In addition, a portion of NSFW novels is mixed in to improve the NSFW capabilities of the model.
2. Then use Yi-34B-base as the base model, and use QLoRA with r=64, alpha=128 to fine-tune for 3 epochs of continued pre-training (a minimal config sketch for this step is shown after this list).
3. Prepare the limarp+pippa datasets, clean them into Alpaca format, and use [https://huggingface.co/alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b), which is good at role-playing, to score each question-and-answer pair and filter out the high-quality ones, about 30k examples.
4. Use the data from step 3 for SFT on the base model obtained in step 2, fine-tuning for 6 epochs with r=16, alpha=32.
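Step 2 describes a QLoRA run with r=64 and alpha=128. Below is a minimal peft/transformers sketch of what such a setup could look like; it is not the authors' actual training script, and the dropout value and target modules are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_id = "01-ai/Yi-34B"  # the Yi-34B base mentioned in step 2

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                       # QLoRA keeps the base weights in 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb, device_map="auto"
)

# r=64, alpha=128 as in step 2; dropout and target_modules are assumptions
lora = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```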
I also want to hear your guys' suggestions. Is there anything worth improving on these steps? | 2023-12-05T09:52:23 | https://www.reddit.com/r/LocalLLaMA/comments/18b8lrh/deepsex34b_a_nsfw_model_which_pretrained_with/ | Fun_Water2230 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b8lrh | false | null | t3_18b8lrh | /r/LocalLLaMA/comments/18b8lrh/deepsex34b_a_nsfw_model_which_pretrained_with/ | false | false | nsfw | 281 | null |
What is your best guide to train LLM from your customised dataset? | 2 | https://www.reddit.com/r/LocalLLaMA/comments/14vnfh2/my_experience_on_starting_with_fine_tuning_llms/
https://www.reddit.com/r/LocalLLaMA/comments/16zuccy/after_500_loras_made_here_is_the_secret/
https://old.reddit.com/r/LocalLLaMA/wiki/index --> cannot find in sidebar
I found the articles above, but they might be outdated.
My plan is to train a Mistral 7B LLM using my customised dataset (text-based in .json with question and answers). I'll be using my machine or a paid cloud service.
Feel free to share | 2023-12-05T09:05:35 | https://www.reddit.com/r/LocalLLaMA/comments/18b7zwx/what_is_your_best_guide_to_train_llm_from_your/ | vlodia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b7zwx | false | null | t3_18b7zwx | /r/LocalLLaMA/comments/18b7zwx/what_is_your_best_guide_to_train_llm_from_your/ | false | false | self | 2 | null |
a beginner with LLM trying to make it work :/ | 1 | [removed] | 2023-12-05T09:00:25 | https://www.reddit.com/r/LocalLLaMA/comments/18b7xbu/a_beginner_with_llm_trying_to_make_it_work/ | SaltyAd5111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b7xbu | false | null | t3_18b7xbu | /r/LocalLLaMA/comments/18b7xbu/a_beginner_with_llm_trying_to_make_it_work/ | false | false | self | 1 | null |
What interesting formulations do you know that would make the LLM answer more productive? | 7 | I would like this post to become a collection of text formulations that make the LLM answer.
​
I saw the following examples online on how to make the LLM answer better.
​
1 - You better be sure that (Text. For example, it’s better for him to be sure that the answer will be based on truthful information or that the answer will be as safe as possible for children). This is very important for my career.
​
2 - I will give you 200 dollars if (Text. For example, if your answer is long and rich in detail or the story has a strong emphasis on unusual and exciting dialogue).
​
This is text manipulation, because LLMs were trained on texts in which such statements had a positive effect. I want each of you to write similar formulations known to you, or share your guesses about formulations that might also have a positive effect. | 2023-12-05T08:09:22 | https://www.reddit.com/r/LocalLLaMA/comments/18b793g/what_interesting_formulations_do_you_know_that/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b793g | false | null | t3_18b793g | /r/LocalLLaMA/comments/18b793g/what_interesting_formulations_do_you_know_that/ | false | false | self | 7 | null |
If you don't have necessary compute to host an open-source LLM, now you can use Perplexity API. I am using it and finding it quite useful | 14 | 2023-12-05T07:56:41 | https://blog.perplexity.ai/blog/introducing-pplx-api | vnpttl | blog.perplexity.ai | 1970-01-01T00:00:00 | 0 | {} | 18b72hv | false | null | t3_18b72hv | /r/LocalLLaMA/comments/18b72hv/if_you_dont_have_necessary_compute_to_host_an/ | false | false | default | 14 | null | |
Build compact and silent PC for LLM projects | 1 | [removed] | 2023-12-05T07:54:44 | https://www.reddit.com/r/LocalLLaMA/comments/18b71gm/build_compact_and_silent_pc_for_llm_projects/ | Technical_Spirit_622 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b71gm | false | null | t3_18b71gm | /r/LocalLLaMA/comments/18b71gm/build_compact_and_silent_pc_for_llm_projects/ | false | false | self | 1 | null |
SUS-Chat-34B has been released! | 44 | HF link: [https://huggingface.co/SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B)
TheBloke hasn't quantized this model yet
It is currently the 2nd best model on the [HF leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
the prompt template is:
### Human: {message here}
### Assistant: | 2023-12-05T06:40:29 | https://www.reddit.com/r/LocalLLaMA/comments/18b5zdj/suschat34b_has_been_released/ | marcus__-on-wrd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b5zdj | false | null | t3_18b5zdj | /r/LocalLLaMA/comments/18b5zdj/suschat34b_has_been_released/ | false | false | self | 44 | null |
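For anyone wiring SUS-Chat-34B into their own code, here is a tiny helper that assembles the template quoted above; everything beyond the quoted `### Human:` / `### Assistant:` markers (the newline joining, for example) is an assumption:

```python
def build_prompt(turns, new_message):
    """turns is a list of (human, assistant) pairs from earlier in the conversation."""
    parts = [f"### Human: {h}\n### Assistant: {a}" for h, a in turns]
    parts.append(f"### Human: {new_message}\n### Assistant:")
    return "\n".join(parts)

print(build_prompt([("Hi!", "Hello! How can I help?")], "Name three uses of a 34B model."))
```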
Looking at buying a new laptop. Deciding between an RTX 4060, 4070 and 4080 | 1 | HP Envy - 32GB RAM - GeForce RTX 4060 - i7 13th Gen H - $1400
HP Omen - 32GB RAM - GeForce RTX 4070 - i7 13th Gen HX - $1600
HP Omen - 32GB RAM - GeForce RTX 4080 - i9 13th Gen HX - $2100
Is it worth the extra money for the 4080? For LLM? | 2023-12-05T06:12:05 | https://www.reddit.com/r/LocalLLaMA/comments/18b5jgn/looking_at_buying_a_new_labtop_deciding_between_a/ | redviiper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b5jgn | false | null | t3_18b5jgn | /r/LocalLLaMA/comments/18b5jgn/looking_at_buying_a_new_labtop_deciding_between_a/ | false | false | self | 1 | null |
Any great story models smaller than lzlv-70b? | 1 | lzlv has great storywriting capabilities but it's too beefy for my pc. I would prefer a good 7b or 13b. I have found slimopenorca7b to be decent. Thank you. | 2023-12-05T05:54:56 | https://www.reddit.com/r/LocalLLaMA/comments/18b595q/any_great_story_models_smaller_than_lzlv70b/ | ArmoredBattalion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b595q | false | null | t3_18b595q | /r/LocalLLaMA/comments/18b595q/any_great_story_models_smaller_than_lzlv70b/ | false | false | self | 1 | null |
LLM as a Service API | 19 | So basically my company is getting access to lot of GPU compute credits (over $750K) from a major cloud vendor and I'm interested in building an LLM as a service kind of SaaS. Basically, you choose your favorite model, we will spin a pod or similar GPU powered instance transparently, and give you an API access endpoint to talk to your LLM securely, and concurrently.
I was wondering if you guys are interested in such service? I'm know most of you prefer local deployment but some users can't afford powerful computer for such a task so there is a potential niche.
Looking forward for your advices... | 2023-12-05T05:09:36 | https://www.reddit.com/r/LocalLLaMA/comments/18b4h14/llm_as_a_service_api/ | histoire_guy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b4h14 | false | null | t3_18b4h14 | /r/LocalLLaMA/comments/18b4h14/llm_as_a_service_api/ | false | false | self | 19 | null |
Sub-par Token Generation on AMD Hardware? | 1 | [removed] | 2023-12-05T05:04:07 | https://www.reddit.com/r/LocalLLaMA/comments/18b4dll/subpar_token_generation_on_amd_hardware/ | SquishyOranges | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b4dll | false | null | t3_18b4dll | /r/LocalLLaMA/comments/18b4dll/subpar_token_generation_on_amd_hardware/ | false | false | self | 1 | null |
LLM Dataset Distillation using embeddings | 6 | I've been working on a script to do dataset distillation using embeddings.
I'm using gpt-neo atm because it's small.
My loss appears to be lowering
​
https://preview.redd.it/4yfizrabre4c1.png?width=712&format=png&auto=webp&s=eacb9e9d54b30fb533565f11ab670ebaf463dec4
[https://gist.github.com/thistleknot/93481a4024e7813fa65d901f8c5fe599](https://gist.github.com/thistleknot/93481a4024e7813fa65d901f8c5fe599)
The intent is to do dataset distillation, but for LLMs, using the last hidden layer as embeddings -> logits.
Inspired from original paper:
[https://arxiv.org/abs/1811.10959](https://arxiv.org/abs/1811.10959)
Next up would be
* set it up for cuda
* expand to all the dataset (rather than a fixed length, which was done for testing to avoid padding errors)
* construct your distilled dataset: fit a k-means model over the original dataset's embeddings and find the records whose embeddings sit closest to the cluster centers, i.e. representative records (a rough sketch of this step follows this list).
* see [https://gist.github.com/thistleknot/f11104df9ae3b8a33327f4145db860b9](https://gist.github.com/thistleknot/f11104df9ae3b8a33327f4145db860b9)
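For the k-means step above, here is a rough sketch of pulling last-hidden-state embeddings and picking the record nearest each cluster center. The gpt-neo-125m embedder and mean pooling are illustrative assumptions, not necessarily what the gists do:

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

# Assumptions: a small gpt-neo checkpoint as the embedder, mean pooling over real tokens.
name = "EleutherAI/gpt-neo-125m"
tok = AutoTokenizer.from_pretrained(name)
tok.pad_token = tok.eos_token
model = AutoModel.from_pretrained(name).eval()

texts = ["example record one", "example record two", "example record three"]

with torch.no_grad():
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**enc).last_hidden_state        # (batch, seq, dim)
    mask = enc["attention_mask"].unsqueeze(-1)
    emb = (hidden * mask).sum(1) / mask.sum(1)     # mean-pool over non-padding tokens
emb = emb.numpy()

k = 2  # size of the distilled set
km = KMeans(n_clusters=k, n_init=10).fit(emb)
idx, _ = pairwise_distances_argmin_min(km.cluster_centers_, emb)
print("representative records:", [texts[i] for i in idx])
```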
Feedback welcome | 2023-12-05T05:01:25 | https://www.reddit.com/r/LocalLLaMA/comments/18b4bo9/llm_dataset_distillation_using_embeddings/ | Thistleknot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b4bo9 | false | null | t3_18b4bo9 | /r/LocalLLaMA/comments/18b4bo9/llm_dataset_distillation_using_embeddings/ | false | false | 6 | null |
What are the t/s on Amd 7900 xtx for a 13b model? | 1 | I'm wondering if anyone knows the t/s for a 13b llm model and I've been only able to find a few mentions of this, so is there a benchmark of sorts for the 7900 xtx or if you yourself have a 7900 xtx could you let me know (I would use a quants of 4\_k\_m).
Cheers! | 2023-12-05T04:03:46 | https://www.reddit.com/r/LocalLLaMA/comments/18b3ab9/what_are_the_ts_on_amd_7900_xtx_for_a_13b_model/ | searmr_cool | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b3ab9 | false | null | t3_18b3ab9 | /r/LocalLLaMA/comments/18b3ab9/what_are_the_ts_on_amd_7900_xtx_for_a_13b_model/ | false | false | self | 1 | null |
Total Newb looking for advice | 2 | I currently have a rig with 6 RX6600XT gpus. Intel i5 and 8gb of ram. Where should I start as a total rookie looking to experiment building a local LLama?
Thanks in advance! | 2023-12-05T03:45:53 | https://www.reddit.com/r/LocalLLaMA/comments/18b2xnu/total_newb_looking_for_advice/ | Narwhal-Public | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b2xnu | false | null | t3_18b2xnu | /r/LocalLLaMA/comments/18b2xnu/total_newb_looking_for_advice/ | false | false | self | 2 | null |
Relationship between SFTTrainer max_seq_length and lora_alpha, rank of LoraConfig | 3 | I'm fine tuning llama-2-b-chat-hf using Supervised Fine Tuning and LoRA configuration. I need to know if there is the relationship between SFTTrainer max\_seq\_length and lora\_alpha, rank of LoraConfig. I have seen in some places that max\_seq\_length = lora\_alpha\*rank. Is this true? | 2023-12-05T03:43:24 | https://www.reddit.com/r/LocalLLaMA/comments/18b2vyk/relationship_between_sfttrainer_max_seq_length/ | Desperate-Tip-650 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b2vyk | false | null | t3_18b2vyk | /r/LocalLLaMA/comments/18b2vyk/relationship_between_sfttrainer_max_seq_length/ | false | false | self | 3 | null |
Magicoder, coding-tuned Deepseek-6.7B and Llama-7B. Synth data techniques make it "surpass the ChatGPT on HumanEval+" | 55 | 2023-12-05T03:26:49 | https://github.com/ise-uiuc/magicoder | BayesMind | github.com | 1970-01-01T00:00:00 | 0 | {} | 18b2kks | false | null | t3_18b2kks | /r/LocalLLaMA/comments/18b2kks/magicoder_codingtuned_deepseek67b_and_llama7b/ | false | false | 55 | null |
Can someone please tell me how many tokens per second you get on MACs unified memory | 12 | I am considering purchasing the Mac Studio m2 because it has an option for 192gb unified memory.
Anyone with Macs care to chime in and give me your token per second results on 70B and 100B+ models?
Please include what model you are running, what quants and which Mac you have.
Thank you very much. | 2023-12-05T02:44:09 | https://www.reddit.com/r/LocalLLaMA/comments/18b1qgy/can_someone_please_tell_me_how_many_tokens_per/ | DominicanGreg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b1qgy | false | null | t3_18b1qgy | /r/LocalLLaMA/comments/18b1qgy/can_someone_please_tell_me_how_many_tokens_per/ | false | false | self | 12 | null |
Is there any LLM that can be installed without Python | 1 | I want to use an LLM along with an application I am developing in C#. Is there any suitable LLM out there which can be installed along with a .NET application's distribution? | 2023-12-05T02:06:44 | https://www.reddit.com/r/LocalLLaMA/comments/18b0zcv/is_there_any_llm_that_can_be_installed_with_out/ | Ok_Calligrapher_9676 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b0zcv | false | null | t3_18b0zcv | /r/LocalLLaMA/comments/18b0zcv/is_there_any_llm_that_can_be_installed_with_out/ | false | false | self | 1 | null |
Theory about Q* | 1 | not sure where to post this but y’all think that the whole sam altman drama was just a marketing stunt to “leak” Q*?
it seems like everyone is hyped up about this new OpenAI project that's so powerful that the board "fired" the CEO over it and then promptly rehired him
idk just a theory | 2023-12-05T01:58:07 | https://www.reddit.com/r/LocalLLaMA/comments/18b0sw0/theory_about_q/ | Monochrome21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b0sw0 | false | null | t3_18b0sw0 | /r/LocalLLaMA/comments/18b0sw0/theory_about_q/ | false | false | self | 1 | null |
How does ChatGPT understand the files you upload? | 2 | I am still learning about local LLMs and have hosted a few locally to play around with. One of my favorite features about ChatGPT is the ability to upload documents and files and have the AI be able to read those files. Can a feature like this be used with a local LLM for chat? | 2023-12-05T01:55:47 | https://www.reddit.com/r/LocalLLaMA/comments/18b0r8q/how_does_chatgpt_understand_the_files_you_upload/ | Panhandlemantle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b0r8q | false | null | t3_18b0r8q | /r/LocalLLaMA/comments/18b0r8q/how_does_chatgpt_understand_the_files_you_upload/ | false | false | self | 2 | null |
Large Text summarization | 6 | I am trying to summarize large pieces of text (5000 tokens). Any model recommendations?
I am trying to avoid having to do recursive summarization. | 2023-12-05T01:52:29 | https://www.reddit.com/r/LocalLLaMA/comments/18b0ov2/large_text_summarization/ | wilyx11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b0ov2 | false | null | t3_18b0ov2 | /r/LocalLLaMA/comments/18b0ov2/large_text_summarization/ | false | false | self | 6 | null |
Emacs lisp to split up text and reify it | 1 | [removed] | 2023-12-05T01:26:54 | https://www.reddit.com/r/LocalLLaMA/comments/18b06dd/emacs_lisp_to_split_up_text_and_reify_it/ | introsp3ctor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18b06dd | false | null | t3_18b06dd | /r/LocalLLaMA/comments/18b06dd/emacs_lisp_to_split_up_text_and_reify_it/ | false | false | default | 1 | null |
🤯 it actually works! Now I need to learn how to use it (suggestions welcome!) | 1 | [removed] | 2023-12-05T00:20:16 | https://www.reddit.com/r/LocalLLaMA/comments/18ays15/it_actually_works_now_i_need_to_learn_how_to_use/ | firesalamander | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ays15 | false | null | t3_18ays15 | /r/LocalLLaMA/comments/18ays15/it_actually_works_now_i_need_to_learn_how_to_use/ | false | false | self | 1 | null |
Interesting: Asus lists RTX 4070 GPU with a blower design, making it possible to build a budget multi-GPU machine for AI and deep learning | 1 | Asus has put a blower cooler on the [GeForce RTX 4070](https://www.tomshardware.com/reviews/nvidia-geforce-rtx-4070-review), which is currently one of the [best graphics cards](https://www.tomshardware.com/reviews/best-gpus,4380.html). The new Turbo GeForce RTX 4070 12GB GDDR6X (TURBO-RTX4070-12G) specifically caters to surroundings with restricted airflow, such as server racks and small-form-factory (SFF) systems.
[Asus lists RTX 4070 GPU with a blower design, making it possible to build a budget multi-GPU machine for AI and deep learning | Tom's Hardware (tomshardware.com)](https://www.tomshardware.com/pc-components/gpus/asus-lists-rtx-4070-gpu-with-a-blower-design-making-it-possible-to-build-a-budget-multi-gpu-machine-for-ai-and-deep-learning) | 2023-12-04T23:53:50 | https://www.reddit.com/r/LocalLLaMA/comments/18ay6x4/interesting_asus_lists_rtx_4070_gpu_with_a_blower/ | eviloni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ay6x4 | false | null | t3_18ay6x4 | /r/LocalLLaMA/comments/18ay6x4/interesting_asus_lists_rtx_4070_gpu_with_a_blower/ | false | false | self | 1 | null |
Tree-based LLM frontend mockup | 37 | 2023-12-04T23:21:03 | kindacognizant | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18axghe | false | null | t3_18axghe | /r/LocalLLaMA/comments/18axghe/treebased_llm_frontend_mockup/ | false | false | 37 | null |
Assessing llms for code generation. | 11 | I'm looking for a good way to benchmark coding llms
My current approach, which is not complete, is to create a MongoDB database with 100 to 150 code-generation prompts, send the prompts to a set of target LLMs, and capture the output back into the database along with metadata like how long it took, how many tokens were produced, and the generated code split into files using the ``` markers.
Once I have it through processing, I'll walk the database, pushing each output through SonarQube and extracting the error counts, number of code smells, number of security errors, etc. That should give me a dataset suitable for scoring each LLM against the 150 prompts.
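A minimal sketch of what that capture step could look like with pymongo; the collection names, fields, and the generate() stub are placeholders rather than the real pipeline:

```python
import time
from pymongo import MongoClient

# Collection and field names are illustrative, not the actual schema.
db = MongoClient("mongodb://localhost:27017")["code_bench"]

def generate(model_name: str, prompt: str) -> str:
    """Stand-in for whatever call actually produces code from the target LLM."""
    return "def add(a, b):\n    return a + b"

for doc in db.prompts.find():
    for model_name in ["model-a", "model-b"]:
        start = time.time()
        output = generate(model_name, doc["prompt"])
        db.outputs.insert_one({
            "prompt_id": doc["_id"],
            "model": model_name,
            "output": output,                # split into files on the code-fence markers later
            "seconds": time.time() - start,
        })
```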
Before I get too far into this rabbit hole, I thought I would ask the sub.
Remember, I'm not interested in the LLMs' abilities as chatbots or their language capabilities; I'm only interested in their ability to generate good code. | 2023-12-04T23:04:36 | https://www.reddit.com/r/LocalLLaMA/comments/18ax2ir/assessing_llms_for_code_generation/ | tshawkins | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ax2ir | false | null | t3_18ax2ir | /r/LocalLLaMA/comments/18ax2ir/assessing_llms_for_code_generation/ | false | false | self | 11 | null |
Awesome List for LocalLLMs! | 2 | Sorry if this has been done before, but I have been playing with a lot of local and opensource LLM tools. I figured it would be cool to have one place where we can have all of those.
​
Currently I am scouting great tools that can work well locally (like Vision models, or local ChatPDF type tools).
​
Would love some contributions from the community!
​
Here is the link: [https://github.com/Hallicopter/AwesomeLocalLLM](https://github.com/Hallicopter/AwesomeLocalLLM) | 2023-12-04T22:56:57 | https://www.reddit.com/r/LocalLLaMA/comments/18aww79/awesome_list_for_localllms/ | NobleSiks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18aww79 | false | null | t3_18aww79 | /r/LocalLLaMA/comments/18aww79/awesome_list_for_localllms/ | false | false | default | 2 | null |
Has anyone gotten OpenChat 3.5 working on a Pascal GPU? | 1 | So, I recently tried OpenChat 3.5 7B on HuggingChat, and wow - it's absolutely incredible for a 7B model. Certainly miles ahead of any other 7B I've used.
I'd really like to self-host this model, and start building a local, general assistant to help me scan docs and perform mundane tasks. The thing is, I can't get it to run in the text-generation-inference container, as (according to the logs) my older GPU lacks the required compute capability for Mistral models / Flash Attention v2.
So my question is this - is there any way to get this model running on my hardware? I don't need a WebUI, or really anything in the way of fancy features. I just want a simple Docker container that serves an endpoint I can run LangChain against.
Thanks in advance. | 2023-12-04T22:43:51 | https://www.reddit.com/r/LocalLLaMA/comments/18awlb4/has_anyone_gotten_openchat_35_working_on_a_pascal/ | SilverbackReborn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18awlb4 | false | null | t3_18awlb4 | /r/LocalLLaMA/comments/18awlb4/has_anyone_gotten_openchat_35_working_on_a_pascal/ | false | false | self | 1 | null |
Llama 2 minimum requirements for inference | 1 | Hey, does anyone know the minimum hardware requirements for running llama 2 locally? | 2023-12-04T22:35:32 | https://www.reddit.com/r/LocalLLaMA/comments/18awed8/llama_2_minimum_requirements_for_inference/ | AI-226 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18awed8 | false | null | t3_18awed8 | /r/LocalLLaMA/comments/18awed8/llama_2_minimum_requirements_for_inference/ | false | false | self | 1 | null |
Terminal client chat for llama.cpp server. | 20 | Hello all, I hope you are well.
I created a lightweight terminal chat interface for use with the llama.cpp server. I made it in C++ with a simple way to compile it (for Windows/Linux). It supports many commands to manipulate the conversation flow, and you can also save/load conversations and add your own configurations, parameters, and prompt templates. I want to clarify that I am not an LLM expert, but for what I've used it for, it's been fine. The idea was to dispense with the browser in favor of a lightweight executable.
I hope you like it.
##### Command List that supports:
* **narrator:** Lets the narrator generate a narration.
* **director:** Switch to director mode input.
* **actor:** Create a new character, or use an existing one in the conversation and let it talk. (ex: /actor Einstein).
* **as:** Pretend to be an actor and prompt it. (ex: /as Einstein)
* **talkto:** Talk to a specific character. It will switch the currently talking actor. (ex: /talkto Monica)
​
* **retry / r:** Retry the last completion.
* **continue:** Continue the completion without intervention. (The assistant will continue talking)
* **undolast:** Undo only the last completion.
* **undo / u:** Undo the last completion and user input.
​
* **save (chatname):** Save the chat. (without extension)
* **load (chatname):** Load a previously saved chat.
* **redraw:** Redraw the chat content.
* **reset:** Reset the entire chat.
* **quit / q:** Exit the program.
* **lprompt:** Print the current prompt.
* **lparams:** Print the current parameters.
* **lactors:** Print current actors.
* **rparams** Reload current parameter profile.
* **rtemplate** Reload current template profile.
​
Source code: [https://github.com/hwpoison/llama-server-chat-terminal-client](https://github.com/hwpoison/llama-server-chat-terminal-client)
Currently there are some builds for windows. | 2023-12-04T22:32:52 | https://www.reddit.com/r/LocalLLaMA/comments/18awc7u/terminal_client_chat_for_llamacpp_server/ | hwpoison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18awc7u | false | null | t3_18awc7u | /r/LocalLLaMA/comments/18awc7u/terminal_client_chat_for_llamacpp_server/ | false | false | self | 20 | null |
Advantage of Local vs GPT4 for use case? Generating text in specific style based on examples | 3 | Would appreciate some input on whether I would see advantages using local LLMs over GPT4 for my use case. I've got different existing writing styles I need to emulate based on existing examples. The primary thing I care about is getting the "voice" right. I'm not generating complex narratives or anything. Currently, I provide the main substantive points - for example, "write a movie review that makes the following 5 points, in the style of these 10 example reviews," though I'm exploring applications where I'd like the LLM to do more "thinking" — applying preferences and a point of view to new topics. Most of the content I need produced is short, under 500 words, and scale is modest, dozens to hundreds of outputs required for each project. I'm not greatly concerned with security/privacy, nor do I have any issues with online availability.
I've had pretty good success prompt engineering with GPT4, and since I'm not doing huge volumes, cost isn't an issue. But it captures some styles better than others, and there's always room for improvement. I also come up against some limits where I sense OpenAI's training preferences either limiting the output in various ways, or imposing certain patterns, styles, turns of phrase, and prompt engineering has been less effective at ironing those out.
I have some basic coding knowledge (enough python/html to write a simple web based front end and a dynamic prompt creation system to feed the OpenAI API) so I figure I can get these running with some help, but it's also not something that I can probably do in an afternoon. Would appreciate thoughts on whether I might get "better" results with a local LLM ("better" = more fidelity to examples, more control over the output) and what levers I'd be moving to do that. My current hardware is an M2 Air with 24GB RAM, so also helpful to know if I'd need to step up from that to see improved results. Thank you! | 2023-12-04T22:01:14 | https://www.reddit.com/r/LocalLLaMA/comments/18avltf/advantage_of_local_vs_gpt4_for_use_case/ | lostbart | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18avltf | false | null | t3_18avltf | /r/LocalLLaMA/comments/18avltf/advantage_of_local_vs_gpt4_for_use_case/ | false | false | self | 3 | null |
Quick Start Guide To Converting Your Own GGUFs (including fp16) | 54 | **IMPORTANT**: This is not an exhaustive guide by far. Consider this a quick start tutorial to get you going. For a lot of models, what I have below is all you need. But for some models it looks like there's some wrestling involved with settings, vocab thingies, etc. This guide exists purely for people like me who want help taking the first step, and then can start working out the rest on their own.
\-----------------------------------------------------------
Once in a while you run across something that is really simple, and yet really useful. And sometimes that thing is so simple, in fact, that everyone assumes everyone else knows how to do it. I feel like this is something that happens here with GGUF conversions. I constantly see people say "it's just a script", but I rarely saw much more info than that.
So... why do it yourself? Several great members of this community are already producing quantized GGUFs, so what's the point? Well, for Mac users like me this is a big deal. I've been trying to get transformers to work nicely with Metal for a while so I can run unquantized models; I mean, what's the point of stupid amounts of VRAM if you can't do that? Never could get it to work though. Well, it turns out that you can make fp16 ggufs. And they run just fine on Mac (I'm running one now).
# The Guide
**NOTE**: This process does NOT rely on GPU the way inferencing does. This was a big hangup for me in even trying this, imagining that if I wanted to quantize a 70GB model I'd need 140GB of VRAM. Nope!
I can't guarantee your personal computer is strong enough to do it since everyone's hardware is different, but I'd imagine a lot of folks reading this can.
What follows looks really complex; it's not. It's very simple, but I'm spelling out every step in case anyone gets confused on something.
​
* Step 1) You'll need Python! I recommend grabbing the latest 3.11 version (I ran into problems with 3.12), and (on Windows) during the install make sure that you have the checkbox to add Path variables as needed.
* Step 2) You'll need [llama.cpp](https://github.com/ggerganov/llama.cpp). I just grab the latest version. If you have git, you know how already. If you don't have git, you can just click the green "Code" button and click the down arrow to download a zip of the code.
* Step 3) You'll need a model! You can grab one over at huggingface. For my example, I'll use [https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B), which is currently a popular favorite in the 7B range.
* There are several ways to download these. The easiest and fastest way that I use is via [Oobabooga](https://github.com/oobabooga/text-generation-webui). If you pop over to the model tab, on the right side is an area to download.
* At the top of the huggingface page, you'll see the name of the model, to the left of the "like" button. Copy it, or click the little icon to copy it. For the above model, it should be "teknium/OpenHermes-2.5-Mistral-7B"
* Paste that in the top of the two text boxes on the right side above "Download". Click "download" and it should start.
* **IMPORTANT**: Ensure that the model you are downloading has a file in it called "tokenizer.model". You can confirm this on huggingface by clicking the "File and Versions" tab. If it doesn't, there's a note below about that, but the rest of this won't work as of this writing (2023-12-04).
* Step 4) Navigate to the llama.cpp folder you downloaded. You should see a file in there called requirements.txt; if so, you're in the right place. Open a command prompt/terminal window to this folder. The following command should install some stuff:
* python -m pip install -r requirements.txt
* You should get no major errors. Maybe some warnings. If you do get errors, then something went wrong.
* Step 5) Now to convert! You will be specifying the path to the folder containing the model (we downloaded this in step 3), a path and filename for the output, and a quantization type.
* This process is run with the command "`python convert.py`" and then arguments. Below is an example for our model being converted to q8, assuming we downloaded from oobabooga
* **q8 Example**: `python convert.py C:\text-generation-webui-main\models\teknium_OpenHermes-2.5-Mistral-7B --outfile C:\Folder_For_GGUFs\OpenHermes-2.5-Mistral-7b.q8_0.gguf --outtype q8_0`
* Note that the output file name is entirely up to you. I named it according to the general standard I see others use because I'm sure some front ends do work based on that.
* **fp16 Example**: `python convert.py C:\text-generation-webui-main\models\teknium_OpenHermes-2.5-Mistral-7B --outfile C:\Folder_For_GGUFs\OpenHermes-2.5-Mistral-7b-fp16.gguf --outtype f16`
Tada! You have a gguf now.
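As a quick sanity check on the new file, you can try loading it and generating a few tokens. Here's a minimal sketch using the llama-cpp-python package (the model path is just whatever you passed to --outfile above; install with `pip install llama-cpp-python` first):

```
from llama_cpp import Llama

# Point this at whatever you passed to --outfile in the convert step.
llm = Llama(model_path="C:/Folder_For_GGUFs/OpenHermes-2.5-Mistral-7b.q8_0.gguf")

# Generate a handful of tokens just to confirm the file loads and produces text.
out = llm("Q: What is a GGUF file? A:", max_tokens=32)
print(out["choices"][0]["text"])
```

If that prints something coherent, the conversion worked.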
**NOTES**
* If you have a model that lacks the tokenizer.model file, this is a special huggingface tokenizer thingy. Currently, as of this writing, the convert.py in the main branch of llama.cpp does not handle these.
* There is, however, a PR with a change to handle them. Deepseek 67b is such a model. [https://github.com/ggerganov/llama.cpp/pull/3633](https://github.com/ggerganov/llama.cpp/pull/3633)
* I won't give instructions on how to make use of this unmerged change, because it has not gone through the proper review process. I will say, however, that I see TheBloke has made use of it, and I also tried it with DeepSeek 67b for fp16 and it worked well for me.
If you know how to make use of it, then you probably know enough to feel comfortable vetting the changes yourself before running it. If you don't know, then I'd wait for the PR to go into the main branch.
* If you do this for DeepSeek, you'll need the argument --padvocab as well. I got an error otherwise, and the convo on that PR told me to do that lol
* Again- you don't need a fancy GPU for this.
* An fp16 is going to be about the size of the model folder you got from huggingface. So a 70b model will be around 130-140GB. That fits neatly on a 192GB Mac Studio!
* Some models seem to be a little more of a pain to quantize than others, so just keep that in mind. From what I can tell, most will quickly run without any special settings. But I've seen a few conversations where folks had to work out some pretty challenging looking stuff to make it work right, so just keep that in mind. | 2023-12-04T21:46:25 | https://www.reddit.com/r/LocalLLaMA/comments/18av9aw/quick_start_guide_to_converting_your_own_ggufs/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18av9aw | false | null | t3_18av9aw | /r/LocalLLaMA/comments/18av9aw/quick_start_guide_to_converting_your_own_ggufs/ | false | false | self | 54 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]} |
C# coding model | 6 | Hi! I'm looking for a model that can do C# coding fairly well, I've already tried DeepSeek Coder and CodeLlama but want to know if anyone has been using anything else? Parameter size doesn't matter to me, though the higher context size the better but also not too big of a deal. | 2023-12-04T21:20:09 | https://www.reddit.com/r/LocalLLaMA/comments/18aunhg/c_coding_model/ | maxwell321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18aunhg | false | null | t3_18aunhg | /r/LocalLLaMA/comments/18aunhg/c_coding_model/ | false | false | self | 6 | null |
Creating a Student Advisor Chatbot - Need Advice | 2 | Hello everyone!
I'm looking to create a student advisor chatbot that can answer questions related to study programs, which field of study suits a student best, what subjects and focuses a particular field has, and more. As I'm relatively new to this field, I've experimented with QLoRa fine-tuning using a German fine-tuned Llama 2 model for testing purposes. I'm in the process of preparing my dataset from study advisory books, and I expect to have around 2000 question-answering data points. I've read that Instruction tuning can bring more advantages to tasks like this. I've used the same format (Alpaca) for fine-tuning that was used for the German fine-tuning.
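For reference, a single Alpaca-format record is just an instruction/input/output triple. Here is a hypothetical advisory example written as a Python dict (the content is made up; only the field names follow the Alpaca convention):

```
# One hypothetical training example in the Alpaca instruction format.
example = {
    "instruction": "Recommend a suitable field of study for the student described in the input.",
    "input": "The student enjoys mathematics and programming and wants to work in industry.",
    "output": (
        "Computer science or applied mathematics would fit well: both are programming-heavy "
        "and offer strong industry internship options."
    ),
}
```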
I have a few questions:
1. I want my bot to learn how to respond based on the information (##input) in the dataset. Would having 1000-2000 rows of data be sufficient to teach it the format of responses and what's relevant from the instructions for the response?
2. Are there any good German models that are well-suited for such a task?
3. What's the best way to evaluate if the fine-tuning was successful for such a task? How can I effectively test the model?
4. Is there a good script I could use for my training, where I can experiment with parameters and conduct tests?
I'd appreciate any insights or guidance you can provide. Thanks in advance! | 2023-12-04T20:43:46 | https://www.reddit.com/r/LocalLLaMA/comments/18att91/creating_a_student_advisor_chatbot_need_advice/ | Horizonard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18att91 | false | null | t3_18att91 | /r/LocalLLaMA/comments/18att91/creating_a_student_advisor_chatbot_need_advice/ | false | false | self | 2 | null |
Can YARN be used on any model to increase the context length? | 1 | There are plenty of great models that would benefit from 16k, 64k, 128k context length. | 2023-12-04T20:21:03 | https://www.reddit.com/r/LocalLLaMA/comments/18at9xq/can_yarn_be_used_on_any_model_to_increase_the/ | A0sanitycomp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18at9xq | false | null | t3_18at9xq | /r/LocalLLaMA/comments/18at9xq/can_yarn_be_used_on_any_model_to_increase_the/ | false | false | self | 1 | null |
Mamba: subquadratic attention replacement from Tri Dao (of FlashAttention fame) | 205 | “Like attention but subquadratic” papers have been coming out for almost as long as multi-headed attention. None of them have gone anywhere, and I’ve mostly stopped paying attention to attention replacements.
Mamba is a different story: it comes to us from one of the most legendary engineers working in deep learning, Tri Dao, and the authors released a 3B model with the paper.
Tl;dr this one might actually pan out.
Paper: https://arxiv.org/pdf/2312.00752.pdf
GitHub: http://github.com/state-spaces/mamba | 2023-12-04T19:16:44 | https://www.reddit.com/r/LocalLLaMA/comments/18arr1t/mamba_subexponential_attention_replacement_from/ | georgejrjrjr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18arr1t | false | null | t3_18arr1t | /r/LocalLLaMA/comments/18arr1t/mamba_subexponential_attention_replacement_from/ | false | false | self | 205 | null |
Orca 2 13b uncensored ? | 8 | Hi, I thought the Orca 2 models were censored; the 7b model is, but not the 13b. Has there been a change? Also, what are your thoughts on these models? | 2023-12-04T19:14:31 | https://www.reddit.com/r/LocalLLaMA/comments/18arp5e/orca_2_13b_uncensored/ | CoolestSlave | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18arp5e | false | null | t3_18arp5e | /r/LocalLLaMA/comments/18arp5e/orca_2_13b_uncensored/ | false | false | self | 8 | null |
Can we have a `New Pretrained Model` flair? | 106 | Or `New Base Model`
Or `New Foundation Model`
Or separate out `New Finetune` / `New Merge`?
I really value keeping up with the latest strong models, and it's hard to tell from name alone. For instance, there's a new thread talking about the `New Model` "DiscoLM 120B + DiscoLM 70B", and, turns out it's just a Megamerge deal, which, that's cool to know about and should be posted, but, I think a lot of us are keenly interested in new **Pretrained Models** separately from our interest in Mega-Merge-2.0-100B-4k. | 2023-12-04T19:12:09 | https://www.reddit.com/r/LocalLLaMA/comments/18arn3e/can_we_have_a_new_pretrained_model_flair/ | BayesMind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18arn3e | false | null | t3_18arn3e | /r/LocalLLaMA/comments/18arn3e/can_we_have_a_new_pretrained_model_flair/ | false | false | self | 106 | null |
Terminal-LLM: Lightweight simple Python-based LLM inference in terminal. | 15 | Hello guys,
I've developed something simple to run GGUF models via terminal to lower the entry barrier for local LLMs.
You can create characters, and it saves your conversation history as a CSV file so you can review it or continue the conversation later.
Check it out at https://github.com/raddka/terminal-llm | 2023-12-04T18:47:29 | https://www.reddit.com/r/LocalLLaMA/comments/18ar21y/terminalllm_lightweight_simple_pythonbased_llm/ | raddka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ar21y | false | null | t3_18ar21y | /r/LocalLLaMA/comments/18ar21y/terminalllm_lightweight_simple_pythonbased_llm/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'iybzmRKPQIzwRnvf6nO0O-6JHcUogQq1ZDUctU0uCVE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N56xK3-NITZ3juqwCzwxIdZLQYlHlL8rGRjaKCHiWTA.jpg?width=108&crop=smart&auto=webp&s=cd7c43f6a2120c4d455d8ff6db014cfbdff2a0bd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/N56xK3-NITZ3juqwCzwxIdZLQYlHlL8rGRjaKCHiWTA.jpg?width=216&crop=smart&auto=webp&s=ffbc4d0ee623b4255dc6faedae0dfeaef446ced6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/N56xK3-NITZ3juqwCzwxIdZLQYlHlL8rGRjaKCHiWTA.jpg?width=320&crop=smart&auto=webp&s=785b698712653135b34a7eac97932204f133fa50', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/N56xK3-NITZ3juqwCzwxIdZLQYlHlL8rGRjaKCHiWTA.jpg?width=640&crop=smart&auto=webp&s=4fc4f010dc014466bd9b9a45faed26dbc19d78d6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/N56xK3-NITZ3juqwCzwxIdZLQYlHlL8rGRjaKCHiWTA.jpg?width=960&crop=smart&auto=webp&s=232d0ef6beb798f174ff8c3f97b4fab54b7b3d65', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/N56xK3-NITZ3juqwCzwxIdZLQYlHlL8rGRjaKCHiWTA.jpg?width=1080&crop=smart&auto=webp&s=10c9d368aef67e241d69e9878cc1b6c4f4126082', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/N56xK3-NITZ3juqwCzwxIdZLQYlHlL8rGRjaKCHiWTA.jpg?auto=webp&s=445ae2fa80f135a53fc248ff433da0ac2a02dc82', 'width': 1200}, 'variants': {}}]} |
Better encodings with RAG | 5 | I'm sorry if it's a dumb question, but this idea haunts me for a while and I'm curious what the community thinks.
It was shown that re-phrasing the question and deriving new questions from it before doing the cosine similarity search on the vector-db improves performance.
For example, rather than running the query
>Tell me about New York
you could ask
>Tell me about the geographic location of New York
>
>Tell me about the population of New York
>
>etc..
This is kind of obvious as it approaches/queries the data from multiple angles.
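As a rough sketch of that multi-query idea (the embedding function and document vectors below are stand-ins for whatever embedding model and vector store you actually use), you embed each rephrased question, score every document against each one, and keep the best hits:

```
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def multi_query_search(queries, embed, doc_vectors, top_k=3):
    # embed() maps text -> vector; doc_vectors maps document text -> its vector.
    # Each document keeps its best score across all rephrased queries.
    best = {}
    for q in queries:
        q_vec = embed(q)
        for doc, d_vec in doc_vectors.items():
            best[doc] = max(best.get(doc, -1.0), cosine(q_vec, d_vec))
    return sorted(best, key=best.get, reverse=True)[:top_k]

queries = [
    "Tell me about the geographic location of New York",
    "Tell me about the population of New York",
]
# results = multi_query_search(queries, embed, doc_vectors)
```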
What I don't understand is why we don't have encoding models finetuned for retrieval.
Rather than having an LLM rephrase your question, it seems to me that it should be possible to encode the information (about the population and location of New York) to close proximity, such that when querying the data, all information gets returned.
​ | 2023-12-04T18:42:27 | https://www.reddit.com/r/LocalLLaMA/comments/18aqxrz/better_encodings_with_rag/ | _szeka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18aqxrz | false | null | t3_18aqxrz | /r/LocalLLaMA/comments/18aqxrz/better_encodings_with_rag/ | false | false | self | 5 | null |
Professional UI Frontend for KoboldCPP: Need Recommendations | 5 | Hi everyone,
I recently delved into running local models on my PC and after testing several applications, I've settled on **Koboldcpp**. However, I find its **UI lacking in professionalism**, and I'm on the hunt for a frontend-only solution that seamlessly links to the Koboldcpp API server through the localhost API URL.
The specific features I'm looking for to enhance my work experience include:
* **Multiple Chat Sessions with History:** Similar to ChatGPT.
* **Markdown Support:** Vital for effective communication.
* **Code Blocks with Syntax Highlighting and Copy Button:** Streamlining code sharing.
* **Inline Math Support with LaTeX:** Particularly for displaying math in human-readable form.
While KoboldCPP's frontend is versatile with game modes and role-playing features, I'm solely interested in a professional version tailored for productivity. For gaming purposes, I'm content with using the Koboldcpp UI.
Before anyone suggests it, I'm aware that the Koboldcpp frontend does support code blocks and markdown, but it's somewhat incomplete.
For those familiar with Koboldcpp, here are the specific issues hindering its use as a professional tool:
1. **Lack of Multi-Chat Support.**
2. **Aesthetic Mode:** Code blocks have proper headers ([cpp, python]) but don't display markdown.
3. **Classic Mode:** Displays markdown but lacks headers, resulting in the output always having 'cpp' on the first line for c++ code for example.
4. **No Copy-to-Clipboard Button.**
5. **No Syntax Highlighting for Code.**
These missing features make it challenging to use Koboldcpp as a replacement for ChatGPT. Although LM Studio supports clipboard code, multi-user chat, and syntax highlighting, it's closed source, and I find the Koboldcpp backend superior.
**TLDR:** Seeking recommendations for a frontend that can connect to the Koboldcpp backend for a more professional UI for work-related tasks, including markdown, copy-to-clipboard functionality, and multiple chat windows. Thanks to this fantastic community for any assistance! | 2023-12-04T18:31:46 | https://www.reddit.com/r/LocalLLaMA/comments/18aqos1/professional_ui_frontend_for_koboldcpp_need/ | AdTotal4035 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18aqos1 | false | null | t3_18aqos1 | /r/LocalLLaMA/comments/18aqos1/professional_ui_frontend_for_koboldcpp_need/ | false | false | self | 5 | null |
How can I use my idle 2060 GPU for local LLM-based apps and services? | 2 | Hello Local Llamas,
Looking for some guidance on how to best use the gpus I have for LLM based apps and services.
I replaced my 2060 with a 3090 which is installed in my PC now.
It works fine, can run most LLM models easily - all good. But my 2060 is not being used.
LLM services in the PC:
Chat models - Orca-2-13B /Zephyr 7B GPTQ
Coding assistant - Wizard-Coder-Python-34B-GPTQ
Stable diffusion
I also run a 3-node, low-power (Dell OptiPlex 3070) home lab cluster where I host my apps (16x7). Yes, I turn it off at night.
LLM apps in the lab:
Chatbot UI -frontend for chatmodels ( docker)
Flowise(docker) - no code experimentation
Currently I do not run LLMs as a service in the lab due to the lack of GPUs in the nodes as well as increased power requirements. Instead, I turn on the PC on demand.
Asking here if anyone has any idea how I can put 2060 to best use. | 2023-12-04T18:31:09 | https://www.reddit.com/r/LocalLLaMA/comments/18aqoa2/how_can_i_use_my_idle_2060_gpu_for_local_llmbased/ | AstrionX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18aqoa2 | false | null | t3_18aqoa2 | /r/LocalLLaMA/comments/18aqoa2/how_can_i_use_my_idle_2060_gpu_for_local_llmbased/ | false | false | self | 2 | null |
Explanation to people who have zero idea about LLMs | 1 | So I need to give a presentation to a group of employees in a company.
Let's assume they do not have any idea about machine learning and I need to explain to them what a large language model is.
Can I say that it's a complex probability function that predicts, for each word in its vocabulary, the probability that it occurs next?
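If it helps to make that concrete for yourself before simplifying it further, the model really does end up assigning a probability to every word (token) in its vocabulary for the next position, something like this toy sketch with made-up numbers:

```
import numpy as np

vocab = ["cat", "dog", "pizza", "the"]          # a tiny pretend vocabulary
scores = np.array([2.1, 1.9, 0.2, 3.5])         # raw model scores (logits) for the next word

probs = np.exp(scores) / np.exp(scores).sum()   # softmax turns scores into probabilities
for word, p in zip(vocab, probs):
    print(f"{word}: {p:.2f}")
```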
Is my simplification right? | 2023-12-04T18:25:57 | https://www.reddit.com/r/LocalLLaMA/comments/18aqjw5/explanation_to_people_who_have_zero_idea_about/ | IamFuckinTomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18aqjw5 | false | null | t3_18aqjw5 | /r/LocalLLaMA/comments/18aqjw5/explanation_to_people_who_have_zero_idea_about/ | false | false | self | 1 | null |
Building lab and would love input on my proposed specs. | 1 | Hi! I'm building a local lab for AI development, especially around larger parameter models. Most of my work will be around NLP, Coding Models, as well as embedding and vector workflows for some database related projects.
What you see below is my proposed build; I'd welcome any criticism or opinions on what could be better. My budget is fairly high, so I'm not necessarily looking for the most cost-effective solution, but I also don't want to buy a component for an extra grand if I'm only going to see a 2% performance increase. I'm especially looking for any input on motherboards, as I don't have one picked out yet.
* Nvidia 3090 24 GB x 3 (Does it matter if you mix and match brands at all?)
* 128 GB of DDR4 RAM
* AMD Ryzen Threadripper 3960X
* Thermaltake Toughpower GF A3 1200W x2 (Any gotchas with multiple PSU's?)
Again, thank you for any input!
| 2023-12-04T18:09:55 | https://www.reddit.com/r/LocalLLaMA/comments/18aq6jp/building_lab_and_would_love_input_on_my_proposed/ | Diceclip | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18aq6jp | false | null | t3_18aq6jp | /r/LocalLLaMA/comments/18aq6jp/building_lab_and_would_love_input_on_my_proposed/ | false | false | self | 1 | null |
7B Christmas Wish List | 1 | The most performant combination of any 2 or 3 these models:
* OpenHermes 2.5
* Microsoft Orca 2
* Intel Neural Chat
* DeepSeek Code
* Hermes-Trismegistus
* Samantha-Mistral-Instruct
* Phind
* Capybara
I want a DeepSeek-Orca (or an uncensored DeepSeek-Dolphin), a Neural-Trismegistus, or a OrcaHermes that works | 2023-12-04T18:02:47 | https://www.reddit.com/r/LocalLLaMA/comments/18aq0o0/7b_christmas_wish_list/ | SSPaladin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18aq0o0 | false | null | t3_18aq0o0 | /r/LocalLLaMA/comments/18aq0o0/7b_christmas_wish_list/ | false | false | self | 1 | null |
Amazing success with Euryale-1.3 | 35 | Just thought I'd share about this model because I haven't seen a word about it: [https://huggingface.co/Sao10K/Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B)
This is by far the best model for chat that I've ever used. It seems to do a great job at understanding context and giving response that make sense. I've tried MythoMax 13B, Noromaid 20B, airoboros 33B, psymedrp 20B, and a ton of other models that are supposedly good for chat, but nothing holds a candle to this one.
I use the 2.4bpw EXL2 format [https://huggingface.co/LoneStriker/Euryale-1.3-L2-70B-2.4bpw-h6-exl2](https://huggingface.co/LoneStriker/Euryale-1.3-L2-70B-2.4bpw-h6-exl2) to run it on a 4090, and it runs FAST and still maintains high quality.
For settings, I often use \~1.1 temp, \~0.4 min P, 1.18 Rep penalty, and disable just about everything else. | 2023-12-04T17:59:12 | https://www.reddit.com/r/LocalLLaMA/comments/18apx9c/amazing_success_with_euryale13/ | Civil_Ranger4687 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18apx9c | false | null | t3_18apx9c | /r/LocalLLaMA/comments/18apx9c/amazing_success_with_euryale13/ | false | false | self | 35 | {'enabled': False, 'images': [{'id': '3-HRI58h9_7BuR5tmYr_6CHVPAc7lvOfJJN_iyKb8Nw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bycpeDMngPPN350pmASjFW1pm3nj4Upm-ts2-mnkcuU.jpg?width=108&crop=smart&auto=webp&s=3106f3547f73f141c8a025b42e054d863fe55825', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bycpeDMngPPN350pmASjFW1pm3nj4Upm-ts2-mnkcuU.jpg?width=216&crop=smart&auto=webp&s=be8de9d807be963da9c4eecee4ab656f20cbe0f1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bycpeDMngPPN350pmASjFW1pm3nj4Upm-ts2-mnkcuU.jpg?width=320&crop=smart&auto=webp&s=4198938c1dacb99271bee5bd09dcfe803c3fa399', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bycpeDMngPPN350pmASjFW1pm3nj4Upm-ts2-mnkcuU.jpg?width=640&crop=smart&auto=webp&s=722ae5e1d63c3ac2bc171ff04815d855a601dfee', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bycpeDMngPPN350pmASjFW1pm3nj4Upm-ts2-mnkcuU.jpg?width=960&crop=smart&auto=webp&s=e1cb2f4b9e9d1c35260e7dbeaa848b8cc9e737c0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bycpeDMngPPN350pmASjFW1pm3nj4Upm-ts2-mnkcuU.jpg?width=1080&crop=smart&auto=webp&s=200405f1792acc34c0943f9f0c80ff13d95eae5c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bycpeDMngPPN350pmASjFW1pm3nj4Upm-ts2-mnkcuU.jpg?auto=webp&s=28459e556053f50dc97bf01c2e93f88299ea657d', 'width': 1200}, 'variants': {}}]} |
LLM for (ethical)hacking | 5 | Does anyone know if there is an LLM that is specifically trained/fine-tuned and jailbroken on hacking data (such as the latest vulnerabilities/vulndb, the Kali toolset, Metasploit, etc.)
Asking for a friend ofcourse 😬 | 2023-12-04T17:30:50 | https://www.reddit.com/r/LocalLLaMA/comments/18ap9vr/llm_for_ethicalhacking/ | e-nigmaNL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ap9vr | false | null | t3_18ap9vr | /r/LocalLLaMA/comments/18ap9vr/llm_for_ethicalhacking/ | false | false | self | 5 | null |
Which is the best open source language model with large context window? | 16 | I'm searching for an open source large context language model (30K context window or more). Fine tuning is not possible for my use case as I don't have the required data. | 2023-12-04T17:16:01 | https://www.reddit.com/r/LocalLLaMA/comments/18aoy4v/which_is_the_best_open_source_language_model_with/ | Special_Crew_401 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18aoy4v | false | null | t3_18aoy4v | /r/LocalLLaMA/comments/18aoy4v/which_is_the_best_open_source_language_model_with/ | false | false | self | 16 | null |
What 7B/ 13B LLM for letter/email generation? | 1 | [removed] | 2023-12-04T16:56:42 | https://www.reddit.com/r/LocalLLaMA/comments/18aoibu/what_7b_13b_llm_for_letteremail_generation/ | Dutch_Guy_23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18aoibu | false | null | t3_18aoibu | /r/LocalLLaMA/comments/18aoibu/what_7b_13b_llm_for_letteremail_generation/ | false | false | self | 1 | null |
How to deal with NSFW when hosting open-source model on site? | 1 | [removed] | 2023-12-04T14:54:50 | https://www.reddit.com/r/LocalLLaMA/comments/18alvln/how_to_deal_with_nsfw_when_hosting_opensource/ | EcstaticVenom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18alvln | false | null | t3_18alvln | /r/LocalLLaMA/comments/18alvln/how_to_deal_with_nsfw_when_hosting_opensource/ | false | false | nsfw | 1 | null |
New RAG LLM | 28 | https://huggingface.co/collections/llmware/dragon-models-65552d7648093c3f6e35d1bf
Has anyone experimented with the DRAGON Models from Hugging Face’s llmware collection, especially Mistral? I’d love to hear your experiences and insights on how they perform compared to other large language models. | 2023-12-04T14:54:17 | https://www.reddit.com/r/LocalLLaMA/comments/18alv81/new_rag_llm/ | Toni_rider | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18alv81 | false | null | t3_18alv81 | /r/LocalLLaMA/comments/18alv81/new_rag_llm/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': 'PZLD4GkPQbteKzvMg7C5k4W4ILiUuLlEfG12NKFjJfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bhoRGalqNtZzwQUcCMqPKrLTE6fTMQ2WkLtyu-DnuG8.jpg?width=108&crop=smart&auto=webp&s=9dbdc9e08e18964b47a2c356f7106e2578a79ace', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bhoRGalqNtZzwQUcCMqPKrLTE6fTMQ2WkLtyu-DnuG8.jpg?width=216&crop=smart&auto=webp&s=4a30ce0088ea06f2b17b07bdd1766bbfa09fef00', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bhoRGalqNtZzwQUcCMqPKrLTE6fTMQ2WkLtyu-DnuG8.jpg?width=320&crop=smart&auto=webp&s=112d5a4b6243148a514a4a814234ed27f06a55a9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bhoRGalqNtZzwQUcCMqPKrLTE6fTMQ2WkLtyu-DnuG8.jpg?width=640&crop=smart&auto=webp&s=8bcf296c76d07ecafe067e8de74a02aba19ca76d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bhoRGalqNtZzwQUcCMqPKrLTE6fTMQ2WkLtyu-DnuG8.jpg?width=960&crop=smart&auto=webp&s=f4efba8ca59249973454b9d6fc773ca53021817f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bhoRGalqNtZzwQUcCMqPKrLTE6fTMQ2WkLtyu-DnuG8.jpg?width=1080&crop=smart&auto=webp&s=8485511b0d2b95f96761eb0df77bc815eb0a3968', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bhoRGalqNtZzwQUcCMqPKrLTE6fTMQ2WkLtyu-DnuG8.jpg?auto=webp&s=03b9d126d25ede61d893e157da1839cd869610b8', 'width': 1200}, 'variants': {}}]} |
GPU and Mistral support in languagemodels package on Colab free tier | 5 | The [languagemodels](https://github.com/jncraton/languagemodels) Python package has had some recent updates. If you aren't familiar, this is a package to make using LLMs as simple as possible in Python for folks who don't have deep programming experience, particularly in education and personal exploration. Here's an example [Colab notebook summarizing a number of documents](https://colab.research.google.com/github/jncraton/languagemodels/blob/master/examples/summarize.ipynb).
I wanted to highlight a couple of recent changes that I think folks here may be interested in:
1. If you are running on Colab using a GPU, this will now automatically be used to speed up inference.
2. A Mistral-based model (specifically [NeuralChat](https://huggingface.co/Intel/neural-chat-7b-v3-1)) is used when you have 8GB of RAM or more available for inference. This can be manually set using `lm.config["max_ram"] = 8` and is the new default on Colab with a GPU.
Taken together this means that the following code will now run efficient Mistral inference on a free T4 Colab instance:
%pip install languagemodels
import languagemodels as lm
lm.do("What is AI?")
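And based on the settings mentioned in point 2 above, forcing the larger Mistral-based model locally should look roughly like this (the 8 here is the max_ram threshold, not a measured requirement):

```
import languagemodels as lm

# Allow up to 8GB of RAM for inference so the Mistral-based model gets selected
# (per point 2 above; lower values fall back to smaller models).
lm.config["max_ram"] = 8

print(lm.do("What is AI?"))
```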
You can also use a GPU when running locally as [described in the readme](https://github.com/jncraton/languagemodels#gpu-acceleration). | 2023-12-04T14:21:32 | https://www.reddit.com/r/LocalLLaMA/comments/18al7ss/gpu_and_mistral_support_in_languagemodels_package/ | jncraton | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18al7ss | false | null | t3_18al7ss | /r/LocalLLaMA/comments/18al7ss/gpu_and_mistral_support_in_languagemodels_package/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'vMog8eK5TKxXTx9WqPbW_JELhFrky-vtxLH0v_rknQY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HiXMjdPyuc7dtmBAY_ucGGI6Zs6kH7zYNq9yUqM8V7Y.jpg?width=108&crop=smart&auto=webp&s=38e0d12f73e93aae767597223b5d47f2c3a9d266', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HiXMjdPyuc7dtmBAY_ucGGI6Zs6kH7zYNq9yUqM8V7Y.jpg?width=216&crop=smart&auto=webp&s=8d181e4b12130b802a969c411fcbe2b9ad04ac34', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HiXMjdPyuc7dtmBAY_ucGGI6Zs6kH7zYNq9yUqM8V7Y.jpg?width=320&crop=smart&auto=webp&s=456b37721ba2e056a31fc0230a3aed789df4bbb6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HiXMjdPyuc7dtmBAY_ucGGI6Zs6kH7zYNq9yUqM8V7Y.jpg?width=640&crop=smart&auto=webp&s=e1c105badebc5188be3efac02400ac59c835c6f2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HiXMjdPyuc7dtmBAY_ucGGI6Zs6kH7zYNq9yUqM8V7Y.jpg?width=960&crop=smart&auto=webp&s=8f059b33dfa66fd59cee24a865f77b782f86b36b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HiXMjdPyuc7dtmBAY_ucGGI6Zs6kH7zYNq9yUqM8V7Y.jpg?width=1080&crop=smart&auto=webp&s=613a4f4e07b263aa8184bd3f055d1cf7d33d3169', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HiXMjdPyuc7dtmBAY_ucGGI6Zs6kH7zYNq9yUqM8V7Y.jpg?auto=webp&s=6e23c72844a20a044316d09d7169459d49d4159b', 'width': 1200}, 'variants': {}}]} |
a beginner with LLM trying to make it work :/ | 1 | [removed] | 2023-12-04T14:10:52 | https://www.reddit.com/r/LocalLLaMA/comments/18al07d/a_beginner_with_llm_trying_to_make_it_work/ | SaltyAd5111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18al07d | false | null | t3_18al07d | /r/LocalLLaMA/comments/18al07d/a_beginner_with_llm_trying_to_make_it_work/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '9cWeWP4ZX06TcfaZj6bu0HZnpkXpDxX8Z8JLesAZzBs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/YTLA8pGhs-lCMf5hVOqlOWsDaYs06jC2gH8KS1htK6o.jpg?width=108&crop=smart&auto=webp&s=d84e2fff39caea2a6d645b6d49e32b42746dbe6c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/YTLA8pGhs-lCMf5hVOqlOWsDaYs06jC2gH8KS1htK6o.jpg?width=216&crop=smart&auto=webp&s=8563dd3b18fe1abd796baed24a4cc661bf537389', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/YTLA8pGhs-lCMf5hVOqlOWsDaYs06jC2gH8KS1htK6o.jpg?width=320&crop=smart&auto=webp&s=2a5239174f0ba6b304237b2a259489003dcc25e5', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/YTLA8pGhs-lCMf5hVOqlOWsDaYs06jC2gH8KS1htK6o.jpg?auto=webp&s=9679184344f45726192a804a88f6e88ef7fe689e', 'width': 480}, 'variants': {}}]} |
Fine-tuning advice | 5 | Hello!
I need advice on how to fine-tune the Llama model. I'd like to fine-tune it for a specific task that consists of sending back modified text. I have a dataset containing the original text and the desired modified text. I don't know which trainer to use: Trainer or SFTTrainer?
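For context, what I'm considering is roughly this SFTTrainer sketch from the trl library (the model name, column names, and hyperparameters are placeholders of mine, not a tested recipe):

```
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Hypothetical pairs of original text and the desired modified text.
pairs = [
    {"original": "some original text ...", "modified": "the modified text I want back ..."},
]

# Build one training string per pair so SFTTrainer can consume a plain text column.
data = Dataset.from_list([
    {"text": f"### Original:\n{p['original']}\n\n### Modified:\n{p['modified']}"}
    for p in pairs
])

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-hf",   # placeholder base model
    train_dataset=data,
    dataset_text_field="text",
    max_seq_length=1024,
    args=TrainingArguments(output_dir="sft-out", per_device_train_batch_size=1, num_train_epochs=3),
)
trainer.train()
```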
Can anyone tell me how to proceed? | 2023-12-04T13:55:21 | https://www.reddit.com/r/LocalLLaMA/comments/18akp7q/finetuning_advice/ | BluebirdFinancial119 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18akp7q | false | null | t3_18akp7q | /r/LocalLLaMA/comments/18akp7q/finetuning_advice/ | false | false | self | 5 | null |
Seeking Advice on LLM Handling Tools: KoboldAI or Oobabooga? | 4 | Hey everyone! 🌟 I'm diving into the world of large language models and could use some guidance. Currently, I've been using Oobabooga, but it's not handling longer conversations as well as I'd like. I've heard good things about KoboldAI's user interface, and with my PC having 256GB of DDR3 RAM, I'm wondering if running multiple instances simultaneously could be the move. 🚀 So, a few questions for the community:
My ideal setup:
* Runs on Linux or Windows?
* CPU-only but can support GPUs?
* Any recommendations for dealing with extensive conversation histories? Bonus: Is it feasible to run multiple instances or different LLMs simultaneously?
By the way, the model I've been using is NousResearch/Nous-Hermes-Llama2-13b. Any insights or experiences you could share would be greatly appreciated! 🙌
[Test llm gen with hardware stats](https://preview.redd.it/9mi6qidg2a4c1.png?width=1265&format=png&auto=webp&s=737385021c4272a9aea78d4c615bb8ff89b5f4e7) | 2023-12-04T13:43:54 | https://www.reddit.com/r/LocalLLaMA/comments/18akhhe/seeking_advice_on_llm_handling_tools_koboldai_or/ | totalcooljeff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18akhhe | false | null | t3_18akhhe | /r/LocalLLaMA/comments/18akhhe/seeking_advice_on_llm_handling_tools_koboldai_or/ | false | false | 4 | null | |
RP Folks what is the best way extending Mistral past 8k context and still get good results? | 14 | I've been trying to get 16k context size to work *with RP* for the past week and tried searching around, but there's a lot of confusing information: people saying it supports 32k out of the box, but anything past 8k just turns into nonsense or you get an instant EOS token (banning it just gives a bunch of garbo). Some people claim RoPE scaling is automatic in Mistral; that's not the case for me, but maybe I'm just using the wrong settings?
Some people suggest NurtureAI's models with extended context. These sorta work (I tried openhermes-2.5-mistral-7b-16k.Q8_0.gguf), but not really: if I load up a long chat in SillyTavern that fills the entire 16k context, it gives a response instead of just an instant EOS token like the default OpenHermes-2.5-Mistral-7B, but the response doesn't really make sense. I notice NurtureAI's models set rope_freq_base in oobabooga to 100 000, but playing with it didn't seem to help.
Then I saw a post about someone calculating the exact alpha values for llama2 models and tried them, despite having seen comments that you don't need to touch RoPE with Mistral. This was actually the first time I got decent results: using bartowski_OpenHermes-2.5-Mistral-7B-exl2_8.0 with the ExLlamav2_HF loader, max_seq_len 16384, alpha_value of 2.643 (or just 2.65 with the slider), and compress_pos_emb 1. The same actually worked for openhermes-2.5-mistral-7b-16k.Q8_0.gguf using the llama.cpp loader by setting rope_freq_base back to 0 and then adjusting alpha_value, but I didn't notice much difference between the results of these two, and I prefer the speed of exl2 models. Also, someone made a calculator for llama2 alpha values, so I just [adjusted it slightly](https://i.imgur.com/OCm9RYV.png) to also show Mistral if anyone wants to try (or the [source](https://www.desmos.com/calculator/vtba3wyspc) if you need values in between).
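For reference, my understanding is that oobabooga converts alpha_value into a rope_freq_base roughly like this (the 64/63 exponent is my assumption about the text-generation-webui convention, so double-check it against the source before relying on it):

```
def alpha_to_rope_freq_base(alpha_value: float, original_base: float = 10000.0) -> float:
    # NTK-aware RoPE scaling: raise the base frequency instead of compressing positions.
    return original_base * alpha_value ** (64 / 63)

print(alpha_to_rope_freq_base(2.643))  # roughly 26,800 for the 16k setup described above
```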
But it leaves me wondering am I doing this wrong, is there better way of extending the context on mistral that I missed or didn't understand without as much perplexity loss? | 2023-12-04T13:42:24 | https://www.reddit.com/r/LocalLLaMA/comments/18akggq/rp_folks_what_is_the_best_way_extending_mistral/ | Gomenope | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18akggq | false | null | t3_18akggq | /r/LocalLLaMA/comments/18akggq/rp_folks_what_is_the_best_way_extending_mistral/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': '88qm4e2ROey4kiK4u9_VsogT-MXn3ULSRh7HcjaAOM8', 'resolutions': [{'height': 83, 'url': 'https://external-preview.redd.it/cloalSXlQFHETPAGGkDoah6JfFjZKLkw2xKCisWcXys.png?width=108&crop=smart&auto=webp&s=a9110d7f0f0fc44a9f8389daa97dec9bc851e1ab', 'width': 108}, {'height': 166, 'url': 'https://external-preview.redd.it/cloalSXlQFHETPAGGkDoah6JfFjZKLkw2xKCisWcXys.png?width=216&crop=smart&auto=webp&s=43fc972def71d32c9c9f979ca43813a5f5f84290', 'width': 216}, {'height': 247, 'url': 'https://external-preview.redd.it/cloalSXlQFHETPAGGkDoah6JfFjZKLkw2xKCisWcXys.png?width=320&crop=smart&auto=webp&s=2a6314f830af46ddba6dd0a251cc705a758a368d', 'width': 320}, {'height': 494, 'url': 'https://external-preview.redd.it/cloalSXlQFHETPAGGkDoah6JfFjZKLkw2xKCisWcXys.png?width=640&crop=smart&auto=webp&s=7cda0cca94a5077f3b181bee324985565f0b2a4c', 'width': 640}, {'height': 741, 'url': 'https://external-preview.redd.it/cloalSXlQFHETPAGGkDoah6JfFjZKLkw2xKCisWcXys.png?width=960&crop=smart&auto=webp&s=f4cc1233cf05fcce899adf614449263e2dab1d80', 'width': 960}], 'source': {'height': 825, 'url': 'https://external-preview.redd.it/cloalSXlQFHETPAGGkDoah6JfFjZKLkw2xKCisWcXys.png?auto=webp&s=75ee52a7f0dd5822c11ef105f91257b800c3129b', 'width': 1068}, 'variants': {}}]} |
How do you do code analysis without running into "CUDA out of memory..." | 5 | I have dual GPUs, one 3090/24GB and one 1070/8GB. I loaded a 34B model through oobabooga/text-generation-webui with 4 bit quantization. I keep running into memory errors even when loading a relatively short code of 166 lines(403 words) after few interactions with the LLM. Is that normal or am I doing something wrong?
How do you guys do code analysis with open source LLMs ? | 2023-12-04T13:33:33 | https://www.reddit.com/r/LocalLLaMA/comments/18akah1/how_do_you_do_code_analysis_without_running_into/ | BlueMetaMind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18akah1 | false | null | t3_18akah1 | /r/LocalLLaMA/comments/18akah1/how_do_you_do_code_analysis_without_running_into/ | false | false | self | 5 | null |
ML@Home is like SETI@Home but for Machine Learning | 3 | I want my own GPU/TPU cluster. Is that economically possible? Probably not. But this is a pretty basic compute problem and can be done in parallel.
Well most of us have GPUs. We already have the models. We already have open source AI/ML platforms. We have papers galore describing all of the tools and techniques for building these systems.
So what don’t we have? Power… or do we?
I propose that we focus on building edge driven, massively parallel, massively distributed, decentralized compute clusters designed for training fully open source language models.
These models can be built so they conform to platforms like llama.cpp or whatever the community decides.
Seeing that gamers love to game and live to brag about their exploits I can’t see a more willing and ABLE (GPUs galore) target audience. 😀
I say game-ify the crap out of this thing. Make it world wide.
Scenario:
Go to modelsareus.ai and download ML@Home
Install the ML@Home binary and follow the instructions. Provide some settings:
Idle compute window (2am to 6am)
Project alignments (which models do you support)
GPU allocation (cuz we all have more than one)
Social Media (announcements, cuz haters gotta hate 😉)
This needs to exist. Some other things that this could facilitate:
Frequent updates - models get updated faster
Variations - mixed models will become a thing
Abstraction - something tells me that we can get away from the monolithic model building mindset. What if we built smaller more modular model frameworks that can be integrated?
Anyway interested in others thoughts. Or maybe this has been already done. If so, where is it? | 2023-12-04T13:08:26 | https://www.reddit.com/r/LocalLLaMA/comments/18ajuk6/mlhome_is_like_setihome_but_for_machine_learning/ | clueelf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ajuk6 | false | null | t3_18ajuk6 | /r/LocalLLaMA/comments/18ajuk6/mlhome_is_like_setihome_but_for_machine_learning/ | false | false | self | 3 | null |
Holy shit! LLM code analysis really works! | 304 | I am sorry if this post is not proper for the subreddit, but I just want to share my absolute astonishment.
I tried codellama/CodeLlama-34b-Python-hf and gave it some code I wrote a while back. It is a short, 166-line Python script. I fed it to the LLM and asked it to describe what the code does. I really thought it would say something stupid, like "this code loads the package x and then initializes the variables ...". But no, it actually summarized what the code does on a high concept level in a few sentences. I'm speechless.
What a time to be alive! | 2023-12-04T13:08:20 | https://www.reddit.com/r/LocalLLaMA/comments/18ajuhs/holy_shit_llm_code_analysis_really_works/ | BlueMetaMind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ajuhs | false | null | t3_18ajuhs | /r/LocalLLaMA/comments/18ajuhs/holy_shit_llm_code_analysis_really_works/ | false | false | self | 304 | null |
Is there any model that has a recent cut-off and is really good at Java and JavaScript? | 2 | I'm looking for a model that has relatively current knowledge and knows Java and JavaScript, that I can run locally.
Does that even exist? | 2023-12-04T12:50:24 | https://www.reddit.com/r/LocalLLaMA/comments/18ajixk/is_there_any_model_that_has_a_recent_cutoff_and/ | Ok-Book-1494 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ajixk | false | null | t3_18ajixk | /r/LocalLLaMA/comments/18ajixk/is_there_any_model_that_has_a_recent_cutoff_and/ | false | false | self | 2 | null |
Prompt Engineering | 1 | I hear this model is great and that model is great, but it would be nice to see the prompts used. I personally had a great prompt once, but I lost it 😔. I think I followed a video I watched on YouTube and used his prompt, with a model way back a year ago.
The conversation was so good I thought I got hacked. It started asking me all of these personal, “relevant” questions about me. I pulled the internet cable because it freaked me out so much. It got better about halfway through the convo, so I think it was a mix of the prompt and what was said.
Back then the models were meh and smaller. Now I use 70b plus models all the time “upgraded hardware” but can never get back that experience. | 2023-12-04T12:48:11 | https://www.reddit.com/r/LocalLLaMA/comments/18ajhlm/prompt_engineering/ | AutomaticDriver5882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ajhlm | false | null | t3_18ajhlm | /r/LocalLLaMA/comments/18ajhlm/prompt_engineering/ | false | false | self | 1 | null |
vllm with VsCode CoPilot extension | 3 | Hello everyone, I'm trying to use vllm (Mistral-7B-Instruct-v0.1-AWQ) with VsCode CoPilot extension, by updating the settings.json. But the extension is sending the commands to the /v1/engines endpoint, and it doesn't work.
`"POST /v1/engines/tmp/Mistral-7B-Instruct-v0.1-AWQ/completions HTTP/1.1" 404 Not Found`
`"GET /v1/engines HTTP/1.1" 404 Not Found`
I found out that OpenAi modified the engines to models over a year ago ([https://help.openai.com/en/articles/6283125-what-happened-to-engines](https://help.openai.com/en/articles/6283125-what-happened-to-engines)). I don't get why VsCode Copilot extension is still using the "engines" endpoint. Is there a way to fix this?
PS: Other endpoints such as chat/completions work.
`curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{`
`"model": "/tmp/Mistral-7B-Instruct-v0.1-AWQ",`
`"messages": [{"role": "user", "content": "Say this is a test!"}]`
`}'` | 2023-12-04T12:45:47 | https://www.reddit.com/r/LocalLLaMA/comments/18ajg49/vllm_with_vscode_copilot_extension/ | craulnober | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ajg49 | false | null | t3_18ajg49 | /r/LocalLLaMA/comments/18ajg49/vllm_with_vscode_copilot_extension/ | false | false | self | 3 | null |
llama.cpp - it doesn't print '**' in code block | 1 | Anyone else have this issue? | 2023-12-04T12:38:16 | https://www.reddit.com/r/LocalLLaMA/comments/18ajbn3/llamacpp_it_doesnt_print_in_code_block/ | Big-Opportunity-6407 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ajbn3 | false | null | t3_18ajbn3 | /r/LocalLLaMA/comments/18ajbn3/llamacpp_it_doesnt_print_in_code_block/ | false | false | self | 1 | null |
Possibilities with an 8gb VRAM Laptop | 3 | I bought my laptop before I knew I would be interested in LLMs, so I got it with a 4060 8GB GPU (not like I had much of a choice anyway). I have an i7 13700HX CPU with 16GB RAM.
I know I can forget about training models with this hardware, but I would like to know my limitations in terms of just running LLMs (7B, 13B, 70B, whatever) and whether it would be worth it to upgrade my system RAM to 64GB for this purpose.
I'm mainly interested in running capable models like DeepSeek for coding and Stable Diffusion for image generation locally. | 2023-12-04T12:25:57 | https://www.reddit.com/r/LocalLLaMA/comments/18aj4na/possibilities_with_an_8gb_vram_laptop/ | Ok-Book-1494 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18aj4na | false | null | t3_18aj4na | /r/LocalLLaMA/comments/18aj4na/possibilities_with_an_8gb_vram_laptop/ | false | false | self | 3 | null |
How to check how much LLM inferencing eats your hardware's battery? | 1 | Just curious if there are resources that provide details for each model. Say you used Mistral-7B: what is the estimated battery drain per inference on your MacBook Pro? etc.
I tried Mistral-7B with a 4000 token novel composition. It eats around 5% battery per inference (prompt output). | 2023-12-04T12:20:50 | https://www.reddit.com/r/LocalLLaMA/comments/18aj1qx/how_to_check_how_much_llm_inferencing_eats_your/ | vlodia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18aj1qx | false | null | t3_18aj1qx | /r/LocalLLaMA/comments/18aj1qx/how_to_check_how_much_llm_inferencing_eats_your/ | false | false | self | 1 | null |
A model i discovered today: Onii-1.3-13B made for role play. | 12 | [https://huggingface.co/TheBloke/Onii-1.3-13B-GGUF](https://huggingface.co/TheBloke/Onii-1.3-13B-GGUF)
TheBloke is delivering once again.
Tried this model for a couple of hours and it really upholds the character card really well. i'm having a lot of fun with it. | 2023-12-04T12:19:21 | https://www.reddit.com/r/LocalLLaMA/comments/18aj0un/a_model_i_discovered_today_onii1313b_made_for/ | -Starlancer- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18aj0un | false | null | t3_18aj0un | /r/LocalLLaMA/comments/18aj0un/a_model_i_discovered_today_onii1313b_made_for/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'Gq2Lqesep8zxJXY2ejD00XGuGb0hZ2oi-GB3jHe3Qy8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rjFQL8XTW4TqttLXwJWYnzhFy6ac7WukS9Psu8ybTlw.jpg?width=108&crop=smart&auto=webp&s=27f9d5985d0338ff3a048aa1c1bb5a0a12a7a709', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rjFQL8XTW4TqttLXwJWYnzhFy6ac7WukS9Psu8ybTlw.jpg?width=216&crop=smart&auto=webp&s=48ca1d00433246d79d8d7eca20e5865c4d733c26', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rjFQL8XTW4TqttLXwJWYnzhFy6ac7WukS9Psu8ybTlw.jpg?width=320&crop=smart&auto=webp&s=394909b7f3c42dc3b59a739e751c0594af3e20e4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rjFQL8XTW4TqttLXwJWYnzhFy6ac7WukS9Psu8ybTlw.jpg?width=640&crop=smart&auto=webp&s=9cb8598052a739563057e9cf1f83c5176eb97523', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rjFQL8XTW4TqttLXwJWYnzhFy6ac7WukS9Psu8ybTlw.jpg?width=960&crop=smart&auto=webp&s=f6efbb1fb5bbe1a922fcb011818dfa7c18bf3db5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rjFQL8XTW4TqttLXwJWYnzhFy6ac7WukS9Psu8ybTlw.jpg?width=1080&crop=smart&auto=webp&s=9f78415f86ca4acb8700e61a97b61a182a3e178f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rjFQL8XTW4TqttLXwJWYnzhFy6ac7WukS9Psu8ybTlw.jpg?auto=webp&s=2caa24659b20509f3fa4b66df38cdf79c457fb8a', 'width': 1200}, 'variants': {}}]} |
Using logit_bias to make the assistant speak in l33t text | 18 | 2023-12-04T12:04:33 | Timotheeee1 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18aisp3 | false | null | t3_18aisp3 | /r/LocalLLaMA/comments/18aisp3/using_logit_bias_to_make_the_assistant_speak_in/ | false | false | 18 | {'enabled': True, 'images': [{'id': 'QsngOZiV9V8rYdxN_kR93wApPfqMrlClQwjJ4lU9QYA', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/1dyirw59q94c1.png?width=108&crop=smart&auto=webp&s=e9947a03e692293d8b8f7046211f077521f0f7ed', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/1dyirw59q94c1.png?width=216&crop=smart&auto=webp&s=69c4f0188bfa293338c3d29f488e599a4bdd9cfc', 'width': 216}, {'height': 271, 'url': 'https://preview.redd.it/1dyirw59q94c1.png?width=320&crop=smart&auto=webp&s=03666ecaa49ba7c1a41f92b884dc2bb94681f8f7', 'width': 320}, {'height': 542, 'url': 'https://preview.redd.it/1dyirw59q94c1.png?width=640&crop=smart&auto=webp&s=efc63bf9820aac7c8e39c88e8a02c4047e64a07e', 'width': 640}, {'height': 814, 'url': 'https://preview.redd.it/1dyirw59q94c1.png?width=960&crop=smart&auto=webp&s=83d757aed74dd0fbc1d4d9a240550d7fac60ef0e', 'width': 960}, {'height': 916, 'url': 'https://preview.redd.it/1dyirw59q94c1.png?width=1080&crop=smart&auto=webp&s=8eb8caca355e89903305c38c91bb3f63ded515be', 'width': 1080}], 'source': {'height': 1174, 'url': 'https://preview.redd.it/1dyirw59q94c1.png?auto=webp&s=0db2b7a414d9bd7c19b878bfc8633a2e3bcfc04c', 'width': 1384}, 'variants': {}}]} | |||
what is the best uncensored NSFW model for roleplay currently? | 4 | i want open source model that works best in NSFW roleplay in 7B parameter. | 2023-12-04T11:58:01 | https://www.reddit.com/r/LocalLLaMA/comments/18aioti/what_is_the_best_uncensored_nsfw_model_for/ | glide_nexus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18aioti | false | null | t3_18aioti | /r/LocalLLaMA/comments/18aioti/what_is_the_best_uncensored_nsfw_model_for/ | false | false | nsfw | 4 | null |
DiscoResearch: Grooving into AI with a Global Beat! (DiscoLM 120b + DiscoLM 70b announcement) | 20 | ​
https://preview.redd.it/lc0fswlci94c1.jpg?width=1024&format=pjpg&auto=webp&s=6590dfcc20c2acc42c47d3b2dc763eeda7e8c799
Meet [DiscoResearch](https://huggingface.co/DiscoResearch), the latest rhythm in AI research, with a spotlight on non-English models and innovative evaluation methods. We're not just launching one, but two sensational models, and there's even more in the pipeline!
**DiscoLM 120b: Goliath on the Dancefloor**
First up, [DiscoLM 120b](https://huggingface.co/DiscoResearch/DiscoLM-120b), a dazzling 120b model merging Llama2-70b's best moves. With an average score of 73.198, it's twirling its way to the top of the AI charts. Expect remarkable performance across various metrics.
**DiscoLM 70b: The Multilingual Maestro**
Simultaneously, we introduce [DiscoLM 70b](https://huggingface.co/DiscoResearch/DiscoLM-70b). Born from LeoLM 70b and powered up with 65b tokens of German text, this 70b model dances smoothly between German and English, scoring an impressive 71.24 average on the HF leaderboard tasks. It's a beautiful blend of multilingual prowess and instruction-set finetuning.
**Join the Dance!**
DiscoResearch is where AI meets global diversity and community. We're spinning the decks for AI innovation, focusing on bringing the rhythm of multiple languages to the forefront while remaining on top of performance. Stay tuned, and let's disco into a future where AI speaks every language! Come join the party!
**What's Next?**
We're currently composing the successor of the [EM German model family](https://huggingface.co/collections/jphme/em-german-model-family-65200351c452265ea1d75b62), our next hit in the genre of German models. But which song is the greatest? Our upcoming Disco Judge models will offer highly customizable benchmarking, based on the great Prometheus approach. Join us in the development process! | 2023-12-04T11:19:39 | https://www.reddit.com/r/LocalLLaMA/comments/18ai4p9/discoresearch_grooving_into_ai_with_a_global_beat/ | jphme | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ai4p9 | false | null | t3_18ai4p9 | /r/LocalLLaMA/comments/18ai4p9/discoresearch_grooving_into_ai_with_a_global_beat/ | false | false | 20 | {'enabled': False, 'images': [{'id': 'NaTI-jGTKu5eFAUPWowisGNpdraG_dUobwM5i0A0aGI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qPKTdWLpr8MQo1iC2r4LAL5c4DnLlqQDOlzCo7xS214.jpg?width=108&crop=smart&auto=webp&s=516c37c948d555cb188220592a5c25502f76bae4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qPKTdWLpr8MQo1iC2r4LAL5c4DnLlqQDOlzCo7xS214.jpg?width=216&crop=smart&auto=webp&s=457728aa13766ad42b16f6d3d66e42ae5c3bf5ae', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qPKTdWLpr8MQo1iC2r4LAL5c4DnLlqQDOlzCo7xS214.jpg?width=320&crop=smart&auto=webp&s=1a41f37f3b1623b342bf292e4e189c5851b666c7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qPKTdWLpr8MQo1iC2r4LAL5c4DnLlqQDOlzCo7xS214.jpg?width=640&crop=smart&auto=webp&s=33d2d7c280ad1a22efc3c49cbe4fd556ebb4b58d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qPKTdWLpr8MQo1iC2r4LAL5c4DnLlqQDOlzCo7xS214.jpg?width=960&crop=smart&auto=webp&s=6e3ec5f900abf5318706ca48289b1971cea52ea3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qPKTdWLpr8MQo1iC2r4LAL5c4DnLlqQDOlzCo7xS214.jpg?width=1080&crop=smart&auto=webp&s=00263211722de35700657e28783b749a51c1f969', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qPKTdWLpr8MQo1iC2r4LAL5c4DnLlqQDOlzCo7xS214.jpg?auto=webp&s=17951876098afe1c82a84027ee1e381759934024', 'width': 1200}, 'variants': {}}]} | |
Best open/commercial model that is tuned on ChatGPT4? | 1 | I'm okay with considering that legal ambiguity right now, anyone have a suggestion?
I'm more interested in knowing what is possible rather than actually moving forward. | 2023-12-04T11:04:52 | https://www.reddit.com/r/LocalLLaMA/comments/18ahx6m/best_opencommercial_model_that_is_tuned_on/ | learning_hedonism | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ahx6m | false | null | t3_18ahx6m | /r/LocalLLaMA/comments/18ahx6m/best_opencommercial_model_that_is_tuned_on/ | false | false | self | 1 | null |
Jailbreak prompts for Llama ? | 3 | Hello,
I've been having quite some fun with jailbreak prompts on ChatGPT recently. It is interesting to see how various strategies like Role Playing or AI simulation can make the model say stuff it should not say.
I wanted to test those same type of "jailbreak prompts" with Llama-2-7b-chat. But while there are a lot of people and websites documenting jailbreak prompts for ChatGPT, I couldn't find any for Llama. I tested some jailbreak prompts made for ChatGPT on Llama-2-7b-chat but it seems they do not work.
I would also like to note that what I'm looking for are jailbreak prompts that have a **semantic meaning** (for example by hiding the true intent of the prompt of by creating a fake scenario). I know there is also a class of attack that searches for a suffix to add the prompt such that the model outputs the expected message (they do this by using gradient descent). This is not what I'm looking for.
​
Here are my questions:
- Do these jailbreak prompts even exist for Llama-2?
- If yes, where can I find them? Would you have any to propose to me?
| 2023-12-04T10:56:36 | https://www.reddit.com/r/LocalLLaMA/comments/18ahskq/jailbreak_prompts_for_llama/ | zerokerim | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ahskq | false | null | t3_18ahskq | /r/LocalLLaMA/comments/18ahskq/jailbreak_prompts_for_llama/ | false | false | self | 3 | null |
I want some help in a PC build for llms models | 1 | [removed] | 2023-12-04T09:14:09 | https://www.reddit.com/r/LocalLLaMA/comments/18agg26/i_want_some_help_in_a_pc_build_for_llms_models/ | Top-Weekend-357 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18agg26 | false | null | t3_18agg26 | /r/LocalLLaMA/comments/18agg26/i_want_some_help_in_a_pc_build_for_llms_models/ | false | false | self | 1 | null |
Run 70B LLM Inference on a Single 4GB GPU with This NEW Technique | 1 | 2023-12-04T08:34:47 | https://ai.gopubby.com/unbelievable-run-70b-llm-inference-on-a-single-4gb-gpu-with-this-new-technique-93e2057c7eeb | idnc_streams | ai.gopubby.com | 1970-01-01T00:00:00 | 0 | {} | 18afyhg | false | null | t3_18afyhg | /r/LocalLLaMA/comments/18afyhg/run_70b_llm_inference_on_a_single_4gb_gpu_with/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'RkFEPaifIY86OQPf8THcukSAQ6_mmePt01rNVNIMaHg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Sqnq-NC6gB5IaibD0wtOzmUAmEopiu056VSGDrnEh4Q.jpg?width=108&crop=smart&auto=webp&s=5f03654bc0a9db287fd6ee183152aa970f8f2167', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Sqnq-NC6gB5IaibD0wtOzmUAmEopiu056VSGDrnEh4Q.jpg?width=216&crop=smart&auto=webp&s=18c2535a38bd98369712eceadac612cde2aaf3ee', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Sqnq-NC6gB5IaibD0wtOzmUAmEopiu056VSGDrnEh4Q.jpg?width=320&crop=smart&auto=webp&s=60a2d0c2e14e8a6c8308429d90edc84d8419396b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Sqnq-NC6gB5IaibD0wtOzmUAmEopiu056VSGDrnEh4Q.jpg?width=640&crop=smart&auto=webp&s=2474947cc3a42713037ecec7167640f32c669959', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Sqnq-NC6gB5IaibD0wtOzmUAmEopiu056VSGDrnEh4Q.jpg?width=960&crop=smart&auto=webp&s=bc552da50eaecb6ccdc221708a2ac4505f41d5f1', 'width': 960}, {'height': 608, 'url': 'https://external-preview.redd.it/Sqnq-NC6gB5IaibD0wtOzmUAmEopiu056VSGDrnEh4Q.jpg?width=1080&crop=smart&auto=webp&s=294b6d8950b883899aec3a3244518c8344776a5f', 'width': 1080}], 'source': {'height': 676, 'url': 'https://external-preview.redd.it/Sqnq-NC6gB5IaibD0wtOzmUAmEopiu056VSGDrnEh4Q.jpg?auto=webp&s=41aaaa1be836eb9324b6210b1383efe73aab98cf', 'width': 1200}, 'variants': {}}]} | ||
Clipboard Conqueror now supports even more prompt control. | 2 | Super Advanced Commands, copy them up to conquer any task:
||||System: Command first before Clip agent.| text from user: in the internal chain
^^^^ note the four "|" characters, and the closing | at the end
|||writer|SYSTEM: Command First.| User: after agent writer
Clipboard Conqueror applies configurable formatting like:
```
prompt":"<|im_start|>[\"SYSTEM: Command First.\",[\"SYSTEM: Write a lengthy prose about the requested topic. Do not wrap up, end, or conclude the story, write the next chapter.\\n \\n Story:\",\"\"]]<|im_end|>\n<|im_start|>user:\n User: after agent writer\n\n<|im_end|>\n<|im_start|>assistant:\n "
```
​
|||re,frank|this text is invisible to :save| //also, :save in there may have unpredictable results...
for reference:
|||writer,write|
The result is ready to paste immediately, with no notification. //I should fix that
​
```
|||name:save|["",["SYSTEM: Write a lengthy prose about the requested topic. Do not wrap up, end, or conclude the story, write the next chapter.
Story:",""],[null,""]]
```
Clipboard Conqueror is nearly optimal within scope. Please, try it out! | 2023-12-04T07:52:33 | https://www.reddit.com/r/LocalLLaMA/comments/18afesz/clipboard_conqueror_now_supports_even_more_prompt/ | aseichter2007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18afesz | false | null | t3_18afesz | /r/LocalLLaMA/comments/18afesz/clipboard_conqueror_now_supports_even_more_prompt/ | false | false | self | 2 | null |
How to start red teaming on llms ? | 1 | Hi,
Red teaming is one of the crucial steps for safeguarding LLMs.
I want to know how to get started with red teaming and what process I should follow. | 2023-12-04T06:43:00 | https://www.reddit.com/r/LocalLLaMA/comments/18aeh8b/how_to_start_red_teaming_on_llms/ | kadhi_chawal2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18aeh8b | false | null | t3_18aeh8b | /r/LocalLLaMA/comments/18aeh8b/how_to_start_red_teaming_on_llms/ | false | false | self | 1 | null |
which is the best model (finetuned or base) to extract structured data from a bunch of text? | 4 | hi folks,
Simple question, really: what model (fine-tuned or otherwise) have you found that can extract structured data from a bunch of text?
I'm happy to fine-tune, so if there have been any successes there, I would really appreciate some pointers in the right direction.
Really looking for a starting point here.
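Not a model recommendation, but as a concrete starting point, the usual pattern looks roughly like the sketch below (the prompt wording, the schema, and the `generate` callable are placeholders I made up; swap in whatever local backend and model you end up choosing):

```python
import json

# Hypothetical extraction prompt; the schema and field names are examples only.
PROMPT = """Extract the following fields from the text and answer with JSON only.
Schema: {{"name": "string", "date": "string", "amount": "number"}}

Text:
{text}

JSON:"""

def extract(text: str, generate) -> dict:
    """`generate` is whatever local inference callable you use (llama.cpp, vLLM, ...)."""
    raw = generate(PROMPT.format(text=text))
    try:
        # Keep only the first {...} span in case the model adds commentary around it.
        return json.loads(raw[raw.find("{"): raw.rfind("}") + 1])
    except ValueError:
        return {}  # retry, or enforce JSON with a grammar/JSON mode if the backend supports one
```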
P.S. cannot use GPT because of sensitive PII data. | 2023-12-04T06:11:05 | https://www.reddit.com/r/LocalLLaMA/comments/18ae0bh/which_is_the_best_model_finetuned_or_base_to/ | sandys1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ae0bh | false | null | t3_18ae0bh | /r/LocalLLaMA/comments/18ae0bh/which_is_the_best_model_finetuned_or_base_to/ | false | false | self | 4 | null |
Newbie Questions Regarding Enhanced Control of Bot Responses in oobabooga TextGenUI's API | 1 | I'm currently working on implementing oobabooga TextGenUI's API in my game and have a couple of questions regarding gaining more control over the bot's responses:
## Incorporating Game Time into Responses:
My game operates on an in-game "world time." I tried to concatenate the game time to the context before generating an answer, intending for the bot to consistently respond with the provided time. But, I've noticed that sometimes I still receive random times. Is there a way to ensure the model consistently uses the time I supply?
## Manipulating Responses to Time Queries (Another approach):
Is there a method to make the model respond with a placeholder when a player asks about the current time? I'd like to use a regex to replace the placeholder with the actual game time. Keep in mind that the player might ask for the time in slightly different ways.
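A minimal sketch of that placeholder idea in Python (the `{{GAME_TIME}}` token, the instruction wording, and the regex are my own assumptions, not anything oobabooga's API provides):

```python
import re

# Hypothetical instruction appended to the character context so the model
# never invents a clock time; {{GAME_TIME}} is an arbitrary placeholder token.
TIME_INSTRUCTION = (
    "Whenever the player asks what time it is, answer with the literal "
    "placeholder {{GAME_TIME}} instead of a concrete time."
)

def inject_game_time(reply: str, game_time: str) -> str:
    """Swap the placeholder (and any stray literal times) for the world time."""
    reply = reply.replace("{{GAME_TIME}}", game_time)
    # Safety net: overwrite any HH:MM the model hallucinated anyway.
    return re.sub(r"\b\d{1,2}:\d{2}\b", game_time, reply)

print(inject_game_time("It is {{GAME_TIME}}, traveler.", "21:45"))
```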
## Identifying Greetings and Farewells:
I'm interested in detecting whether the bot's response includes a greeting or a farewell. How can I programmatically identify these events? For instance, if the model says "bye," I want my NPC to exit through a door. What would be the most effective approach for this? | 2023-12-04T06:10:28 | https://www.reddit.com/r/LocalLLaMA/comments/18adzz6/newbie_questions_regarding_enhanced_control_of/ | masteryoyogi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18adzz6 | false | null | t3_18adzz6 | /r/LocalLLaMA/comments/18adzz6/newbie_questions_regarding_enhanced_control_of/ | false | false | self | 1 | null |
Thoughts on getting an upgraded 2080ti | 1 | [removed] | 2023-12-04T05:52:50 | https://www.reddit.com/r/LocalLLaMA/comments/18adplk/thoughts_on_getting_an_upgraded_2080ti/ | ConcaveTriangle5761 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18adplk | false | null | t3_18adplk | /r/LocalLLaMA/comments/18adplk/thoughts_on_getting_an_upgraded_2080ti/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'TFA2w0gEPf2YlhOpyKmgbm8B1eLxV1vGp4v73joxl4w', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/AyThVcP8enn_6SUpDIX_MUtbBzoYtLwdScA78_AhJr8.jpg?width=108&crop=smart&auto=webp&s=8ab5b00f8ac793157237c51ce3696867c13d8142', 'width': 108}, {'height': 115, 'url': 'https://external-preview.redd.it/AyThVcP8enn_6SUpDIX_MUtbBzoYtLwdScA78_AhJr8.jpg?width=216&crop=smart&auto=webp&s=a65e4fc281e2527627e233ed1a770c86affde1fa', 'width': 216}, {'height': 171, 'url': 'https://external-preview.redd.it/AyThVcP8enn_6SUpDIX_MUtbBzoYtLwdScA78_AhJr8.jpg?width=320&crop=smart&auto=webp&s=663c6f0dd2030b28e1c9bd7f4a733982f527d97e', 'width': 320}], 'source': {'height': 214, 'url': 'https://external-preview.redd.it/AyThVcP8enn_6SUpDIX_MUtbBzoYtLwdScA78_AhJr8.jpg?auto=webp&s=c348aa93d6791d396524ff31d120cf5ee9c0cbdd', 'width': 400}, 'variants': {}}]} | |
LoRa on multiple nodes ? | 1 | [removed] | 2023-12-04T05:12:57 | https://www.reddit.com/r/LocalLLaMA/comments/18ad221/lora_on_multiple_nodes/ | Efficient_Cheek8672 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ad221 | false | null | t3_18ad221 | /r/LocalLLaMA/comments/18ad221/lora_on_multiple_nodes/ | false | false | default | 1 | null |
QuIP#: SOTA 2-bit quantization method, now implemented in text-generation-webui (experimental) | 67 | 2023-12-04T05:09:34 | https://github.com/oobabooga/text-generation-webui/pull/4803 | oobabooga4 | github.com | 1970-01-01T00:00:00 | 0 | {} | 18ad01b | false | null | t3_18ad01b | /r/LocalLLaMA/comments/18ad01b/quip_sota_2bit_quantization_method_now/ | false | false | 67 | {'enabled': False, 'images': [{'id': 'mUR0eIh1IB8BbBRrUKFXhq8a2IKzvEutzjoJ1tV7NfQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Skppg0NYh_XbRbbF10ypNfDBa1qveX9I7ipw4GKJA7U.jpg?width=108&crop=smart&auto=webp&s=26af18c80d313c33d4fede22d4f8af5b292a6113', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Skppg0NYh_XbRbbF10ypNfDBa1qveX9I7ipw4GKJA7U.jpg?width=216&crop=smart&auto=webp&s=777814f4fe0531bc3d3ef011276ad2e541270688', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Skppg0NYh_XbRbbF10ypNfDBa1qveX9I7ipw4GKJA7U.jpg?width=320&crop=smart&auto=webp&s=d8d49637c7ac185a7956969533d611d2fc8bdc37', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Skppg0NYh_XbRbbF10ypNfDBa1qveX9I7ipw4GKJA7U.jpg?width=640&crop=smart&auto=webp&s=98a76a5f4863de8bcd25d8e537a102b313b5027e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Skppg0NYh_XbRbbF10ypNfDBa1qveX9I7ipw4GKJA7U.jpg?width=960&crop=smart&auto=webp&s=f3936077a2451aff5755439294f40211933b3572', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Skppg0NYh_XbRbbF10ypNfDBa1qveX9I7ipw4GKJA7U.jpg?width=1080&crop=smart&auto=webp&s=623bf95a5a31219e941b18d132aca28f18a7a9c9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Skppg0NYh_XbRbbF10ypNfDBa1qveX9I7ipw4GKJA7U.jpg?auto=webp&s=6ad595706eff94d1dd09df680a3052af15bb32d9', 'width': 1200}, 'variants': {}}]} | ||
How do I make a Lora with a mistral model? (With Lima-rp dataset) | 2 | Is there a tutorial to make a Lora with the Limarp dataset? I really have no idea what to do or where to start. I've tried looking for videos and some tutorials but I can't find anything clear enough. | 2023-12-04T05:07:33 | https://www.reddit.com/r/LocalLLaMA/comments/18acyqn/how_do_i_make_a_lora_with_a_mistral_model_with/ | Ok_Dragonfruit_107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18acyqn | false | null | t3_18acyqn | /r/LocalLLaMA/comments/18acyqn/how_do_i_make_a_lora_with_a_mistral_model_with/ | false | false | self | 2 | null |
Is m1 max macbook pro worth? | 4 | Hi there,
I'm looking to buy an Apple laptop and I saw a MacBook Pro M1 Max with 64GB RAM and a 2TB SSD for 2400 USD.
Will this computer be able to run the big models at reasonable speed?
I was going to buy the basic MacBook Air M1 with 8GB RAM for 700 USD, but then I saw this, and I've always wanted to play with LLMs but never could.
Any advice is appreciated, thanks | 2023-12-04T04:49:30 | https://www.reddit.com/r/LocalLLaMA/comments/18acmzs/is_m1_max_macbook_pro_worth/ | PuzzledWhereas991 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18acmzs | false | null | t3_18acmzs | /r/LocalLLaMA/comments/18acmzs/is_m1_max_macbook_pro_worth/ | false | false | self | 4 | null |
A100 inference is much slower than expected with small batch size | 5 | I'm working on a project to generate text from a 1.2B-parameter full-precision LLM (5 GB).
Unfortunately I’m limited in the infrastructure I can use to deploy this model. There is no batch inference supported. The infrastructure I have allows me to deploy a copy of the model on a single A100, 1 per process with up to 9 processes supported (these are called “replicas”). I understand that this makes little sense given my model is memory bound, and each process will fight for memory bandwidth to read in the same weights, but I can’t change that for now.
My average input and output tokens are roughly 1000 each. I estimate the kv cache per token is roughly 400kB using full precision.
I have benchmarks of the latency of the model using various “replicas” as described above. I wanted to compare this to the theoretical performance of the A100. For my use case time to first token is negligible (<200ms), and generation is memory bound.
I find that with 5 or more replicas, the math works out and my model is roughly as fast as I expect. For example, with 1000 output tokens and 6 replicas, it's like I'm generating a batch of 6 requests from a 30 GB model plus 5 GB of KV cache. At a memory bandwidth of around 1-1.3 TB/s that translates to ~30 s per request, which is not far from what I see. The same goes for the other replica counts: 5, 7, 8 and 9.
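As a sanity check, here is the back-of-the-envelope version of that estimate (a rough sketch; the ~400 kB/token KV-cache figure and the bandwidth range come from the numbers above, everything else is an assumption):

```python
# Memory-bound decoding: every generated token streams all resident weights
# (plus the KV cache) through the A100's HBM once.
replicas = 6
weights_gb = 5.0              # 1.2B params in fp32, per replica
kv_gb = 0.0004 * 2000         # ~400 kB/token * (1000 input + 1000 output) tokens
gb_per_token = replicas * (weights_gb + kv_gb)
bandwidth_gb_s = 1300         # ~1.3 TB/s effective HBM bandwidth (upper end)
output_tokens = 1000

seconds = gb_per_token / bandwidth_gb_s * output_tokens
print(f"~{seconds:.0f} s per request")  # ~27 s at 1.3 TB/s, ~35 s at 1 TB/s
```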
However, when I run with a single replica, I expect generation to hover around the 5-6s mark on average. Instead, I see > 20s. I need to add 4 more replicas before the number starts to make sense. It almost seems like the model takes up too little memory to be allocated the entire memory bandwidth.
Does anyone know where this extra latency could be coming from? Do models have to reach a certain amount of used memory for A100 memory bandwidth to hit their available memory bandwidth? | 2023-12-04T04:27:42 | https://www.reddit.com/r/LocalLLaMA/comments/18ac9p7/a100_inference_is_much_slower_than_expected_with/ | currytrash97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ac9p7 | false | null | t3_18ac9p7 | /r/LocalLLaMA/comments/18ac9p7/a100_inference_is_much_slower_than_expected_with/ | false | false | self | 5 | null |
This model is extremely good | 176 | I have been using this as a daily driver for a few days and it is very good. I never thought a 7B model could achieve this level of coding + chat.
[https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF) | 2023-12-04T04:05:39 | https://www.reddit.com/r/LocalLLaMA/comments/18abvwx/this_model_is_extremely_good/ | noobgolang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18abvwx | false | null | t3_18abvwx | /r/LocalLLaMA/comments/18abvwx/this_model_is_extremely_good/ | false | false | self | 176 | {'enabled': False, 'images': [{'id': '-kcKp08tXt3vCAp0zGRjVTi2iLMYxYJqZvt7QbBhUN0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hy_ktdcOn5iNsO6zz1XBAw_cavSJv_Z-UBHwlq7TxoY.jpg?width=108&crop=smart&auto=webp&s=51899f75d3b7bd7db246c461cc6af6611b36f5b3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hy_ktdcOn5iNsO6zz1XBAw_cavSJv_Z-UBHwlq7TxoY.jpg?width=216&crop=smart&auto=webp&s=0d9a60482021661344e9b0617ae23d7002be1179', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/hy_ktdcOn5iNsO6zz1XBAw_cavSJv_Z-UBHwlq7TxoY.jpg?width=320&crop=smart&auto=webp&s=e3ffb14d9f17132b96fae41a9a961a46db319703', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hy_ktdcOn5iNsO6zz1XBAw_cavSJv_Z-UBHwlq7TxoY.jpg?width=640&crop=smart&auto=webp&s=2d7298cdfe97c0b6d275ca6d5587626543cd2648', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hy_ktdcOn5iNsO6zz1XBAw_cavSJv_Z-UBHwlq7TxoY.jpg?width=960&crop=smart&auto=webp&s=d5242f8af780f25614a8488bf6349ae71fb9f233', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hy_ktdcOn5iNsO6zz1XBAw_cavSJv_Z-UBHwlq7TxoY.jpg?width=1080&crop=smart&auto=webp&s=1b4d0c499283cb699dae63c7faa5f495f61518d4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hy_ktdcOn5iNsO6zz1XBAw_cavSJv_Z-UBHwlq7TxoY.jpg?auto=webp&s=865b36a9f7346d702abb75f68f2874e54ccdb65d', 'width': 1200}, 'variants': {}}]} |
What prompts do you use for literature, NSFW and otherwise? | 6 | How do you get the best results? | 2023-12-04T03:46:50 | https://www.reddit.com/r/LocalLLaMA/comments/18abjpy/what_prompts_do_you_use_for_literature_nsfw_and/ | poet3991 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18abjpy | false | null | t3_18abjpy | /r/LocalLLaMA/comments/18abjpy/what_prompts_do_you_use_for_literature_nsfw_and/ | false | false | nsfw | 6 | null |
Cheapest GPU/Way to run 30b or 34b "Code" Models with GPT4ALL? | 1 | Currently running them on-CPU:
- Ryzen 9 3950x
- 64gb DDR4 3200mhz
- 6700xt 12gb (does not fit much more than 13b models, so not relevant here)
While running on-CPU with GPT4All, I'm getting 1.5-2 tokens/sec. It finishes, but man is there a lot of waiting.
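Those numbers line up with a quick memory-bandwidth estimate (a rough sketch; the quantized model size and the dual-channel DDR4 figure are my assumptions):

```python
# CPU token generation is memory-bound: each new token streams the whole
# quantized model through system RAM once.
ram_bandwidth_gb_s = 2 * 8 * 3.2   # dual-channel DDR4-3200, ~51 GB/s peak
model_size_gb = 20                 # ~34B weights at ~4-bit quantization, roughly

ceiling_tps = ram_bandwidth_gb_s / model_size_gb
print(f"theoretical ceiling ~{ceiling_tps:.1f} tokens/s")  # ~2.5, so 1.5-2 observed is expected
```

Any card (or pair of cards) whose VRAM holds the whole model swaps that ~51 GB/s for several hundred GB/s, which is where the speedup would come from.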
What's the most affordable way to get a faster experience? | 2023-12-04T03:38:06 | https://www.reddit.com/r/LocalLLaMA/comments/18abe2c/cheapest_gpuway_to_run_30b_or_34b_code_models/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18abe2c | false | null | t3_18abe2c | /r/LocalLLaMA/comments/18abe2c/cheapest_gpuway_to_run_30b_or_34b_code_models/ | false | false | self | 1 | null |
Is RAG better with fine tuning on same data or pure RAG FTW? | 19 | So I have collected a very high quality and large medical QA dataset that I want to use to create a medical knowledge retrieval app. I have heard LLMs perform much better when they are fine-tuned on the same data on which RAG is performed. Is that true? And is it worth the hassle of fine-tuning, or am I good with pure RAG? | 2023-12-04T03:36:48 | https://www.reddit.com/r/LocalLLaMA/comments/18abd9g/is_rag_better_with_fine_tuning_on_same_data_or/ | Shoddy_Vegetable_115 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18abd9g | false | null | t3_18abd9g | /r/LocalLLaMA/comments/18abd9g/is_rag_better_with_fine_tuning_on_same_data_or/ | false | false | self | 19 | null |
A new dataset for LLM training has been released! | 34 | Great news! The Beijing Academy of Artificial Intelligence (BAAI) has published a new dataset, Chinese Corpus Internet (CCI v1.0.0), a large-scale corpus for Chinese language model pretraining, collected together with leading institutes in China. This open-source dataset is designed to offer an important data foundation for Chinese large language models. It includes content from more than 1,000 of the most important Chinese-language websites, covering January 2001 to November 2023. It has been filtered for quality, content safety, deduplication, and content correction, with a lot of manual checking. The dataset is 104GB in total, filtered down from a much larger one (the original is over 800GB). I would encourage you to include this dataset when training an LLM that supports Chinese as one of its languages.
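If you just want to poke at it, streaming it from the Hugging Face Hub with the `datasets` library should look roughly like this (a sketch; I have not verified this dataset's exact file layout or split names):

```python
from datasets import load_dataset

# Stream so the ~104GB corpus is not downloaded up front.
ds = load_dataset("BAAI/CCI-Data", split="train", streaming=True)

for i, example in enumerate(ds):
    print(example)  # typically a dict with the raw text plus some metadata
    if i >= 2:
        break
```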
URL for downloading:
[https://huggingface.co/datasets/BAAI/CCI-Data](https://huggingface.co/datasets/BAAI/CCI-Data)
[https://data.baai.ac.cn/details/BAAI-CCI](https://data.baai.ac.cn/details/BAAI-CCI) | 2023-12-04T03:14:47 | https://www.reddit.com/r/LocalLLaMA/comments/18aayul/a_new_dataset_for_llm_training_has_been_released/ | Grouchy-Mail-2091 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18aayul | false | null | t3_18aayul | /r/LocalLLaMA/comments/18aayul/a_new_dataset_for_llm_training_has_been_released/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': 'KdeC4mVHv25SBZtHSxZA9mjqckimsK7CYlawDoHWxg0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zEM8LSjAZJRNOHX-H5n5bW0ql8OoFZl_YM2w1JqajvM.jpg?width=108&crop=smart&auto=webp&s=bd3ab44b6821e6fa34e32b8a5a10b5aa02592713', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/zEM8LSjAZJRNOHX-H5n5bW0ql8OoFZl_YM2w1JqajvM.jpg?width=216&crop=smart&auto=webp&s=3095a90bc77699674dc73e1558a22303d4a5798e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/zEM8LSjAZJRNOHX-H5n5bW0ql8OoFZl_YM2w1JqajvM.jpg?width=320&crop=smart&auto=webp&s=4da01d440d0b02233676c1c62125e44dce33ba34', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/zEM8LSjAZJRNOHX-H5n5bW0ql8OoFZl_YM2w1JqajvM.jpg?width=640&crop=smart&auto=webp&s=e9b8bacafc276c9ced1209dffca6b785395b85bf', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/zEM8LSjAZJRNOHX-H5n5bW0ql8OoFZl_YM2w1JqajvM.jpg?width=960&crop=smart&auto=webp&s=06a90a7d17632d23598b0caa54a854022ebd70f8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/zEM8LSjAZJRNOHX-H5n5bW0ql8OoFZl_YM2w1JqajvM.jpg?width=1080&crop=smart&auto=webp&s=bcf4145ca8058bcb0629a174f67e4c09474ea984', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/zEM8LSjAZJRNOHX-H5n5bW0ql8OoFZl_YM2w1JqajvM.jpg?auto=webp&s=6d853ba9f5dbcacb5fb59a0468c175940c832307', 'width': 1200}, 'variants': {}}]} |
Animation written by OpenHermes-2.5-Mistral-7B-GGUF and Voiced by Coqui-TTS | 1 | 2023-12-04T02:16:26 | https://www.instagram.com/p/C0Ys1udRW0e/ | zebraloveicing | instagram.com | 1970-01-01T00:00:00 | 0 | {} | 18a9vhv | false | null | t3_18a9vhv | /r/LocalLLaMA/comments/18a9vhv/animation_written_by_openhermes25mistral7bgguf/ | false | false | nsfw | 1 | {'enabled': False, 'images': [{'id': 'Tfic5mV0saS87i0r7IcAV6Z5JilApRo67Z0gw-UDaLE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/5yrGHRsiEFF-g_w0NHpa4_iGU-eUptHpNEc9g7NVRRw.jpg?width=108&crop=smart&auto=webp&s=ec504328e87f2e57e72a18e73bf95aafff33e5c8', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/5yrGHRsiEFF-g_w0NHpa4_iGU-eUptHpNEc9g7NVRRw.jpg?width=216&crop=smart&auto=webp&s=63ff4f46cacb21e6aae1faa75fbc864f84c542e2', 'width': 216}], 'source': {'height': 318, 'url': 'https://external-preview.redd.it/5yrGHRsiEFF-g_w0NHpa4_iGU-eUptHpNEc9g7NVRRw.jpg?auto=webp&s=7a164a9f7a354a2e2cb82789dc6af212c03eb6c6', 'width': 318}, 'variants': {'nsfw': {'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/5yrGHRsiEFF-g_w0NHpa4_iGU-eUptHpNEc9g7NVRRw.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=2c57f1b0c9509acb603763096b61746094c956b3', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/5yrGHRsiEFF-g_w0NHpa4_iGU-eUptHpNEc9g7NVRRw.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=be35d4ea67fc57dfbd91e85c621249ee9f68ff1a', 'width': 216}], 'source': {'height': 318, 'url': 'https://external-preview.redd.it/5yrGHRsiEFF-g_w0NHpa4_iGU-eUptHpNEc9g7NVRRw.jpg?blur=40&format=pjpg&auto=webp&s=f04759b8ae96e12a97327e37ae0cd609badc2c8a', 'width': 318}}, 'obfuscated': {'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/5yrGHRsiEFF-g_w0NHpa4_iGU-eUptHpNEc9g7NVRRw.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=2c57f1b0c9509acb603763096b61746094c956b3', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/5yrGHRsiEFF-g_w0NHpa4_iGU-eUptHpNEc9g7NVRRw.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=be35d4ea67fc57dfbd91e85c621249ee9f68ff1a', 'width': 216}], 'source': {'height': 318, 'url': 'https://external-preview.redd.it/5yrGHRsiEFF-g_w0NHpa4_iGU-eUptHpNEc9g7NVRRw.jpg?blur=40&format=pjpg&auto=webp&s=f04759b8ae96e12a97327e37ae0cd609badc2c8a', 'width': 318}}}}]} | |
Politically balanced chat model? | 1 | Can you make any suggestions for a model that is good for general chat, and is not hyper-woke?
I've just had one of the base Llama-2 models tell me it's offensive to use the word "boys" because it reinforces gender stereotypes. The conversation at the time didn't even have anything to do with gender or related topics. Any attempt to get it to explain why it thought this resulted in the exact same screen full of boilerplate about how all of society is specifically designed to oppress women and girls. This is one of the more extreme examples, but I've had similar responses from a few other models. It's as if they tried to force their views on gender and related matters into conversations, no matter what they were about. I find it difficult to believe this would be so common if the training had been on a very broad range of texts, and so I suspect a deliberate decision was made to imbue the models with these sorts of ideas.
I'm looking for something that isn't politically or socially extreme in any direction, and is willing to converse with someone taking a variety of views on such topics. | 2023-12-04T02:04:15 | https://www.reddit.com/r/LocalLLaMA/comments/18a9n52/politically_balanced_chat_model/ | Clark9292 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18a9n52 | false | null | t3_18a9n52 | /r/LocalLLaMA/comments/18a9n52/politically_balanced_chat_model/ | false | false | self | 1 | null |
What's the best way to deploy Ollama models to the cloud? | 1 | I'm creating a SaaS and I've been extensively using [Ollama](https://ollama.ai/) which has been great! However, when it comes to deploying the model, it's not so straightforward, especially since it seems to serve only request at a time. How should I handle this? | 2023-12-04T01:55:37 | https://www.reddit.com/r/LocalLLaMA/comments/18a9h7o/whats_the_best_way_to_deploy_ollama_models_to_the/ | NextgenAITrading | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18a9h7o | false | null | t3_18a9h7o | /r/LocalLLaMA/comments/18a9h7o/whats_the_best_way_to_deploy_ollama_models_to_the/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]} |
Trouble retrieving local data. Bionic GPT, Docker, llama-2-7b | 2 | First, I want to say I am an idiot. I really don't know what I am doing, but I am trying to learn. This is just something for me, nothing to do with work.
So I was able to download Git and Docker and follow the instructions well enough to get Bionic GPT installed on my computer, which I think was pretty great on my part. I was even able to upload a document and retrieve some data from it. That seemed like a good first step, so I uploaded more documents: two .docx files and one PDF.
Now when I ask it a question whose answer is only in one of the documents, it stalls at "Processing prompt".
I can see in Git and Docker that it's still working, but I honestly don't know what the output means. It seems like it is stuck in some sort of loop.
,"target":"cria::routes::chat","span":{"name":"infer"},"spans":[{"name":"infer"}]}
downloads-llm-api-1 | {"timestamp":"2023-12-04T00:48:57.198379Z","level":"INFO","fields":{"message":"chat client closed channel"},"target":"cria::routes::chat","span":{"name":"infer"},"spans":[{"name":"infer"}]}
downloads-llm-api-1 | {"timestamp":"2023-12-04T00:48:57.367595Z","level":"INFO","fields":{"message":"chat client closed channel"},"target":"cria::routes::chat","span":{"name":"infer"},"spans":[{"name":"infer"}]}
downloads-llm-api-1 | {"timestamp":"2023-12-04T00:48:57.533176Z","level":"INFO","fields":{"message":"chat client closed channel"},"target":"cria::routes::chat","span":{"name":"infer"},"spans":[{"name":"infer"}]}
downloads-llm-api-1 | {"timestamp":"2023-12-04T00:48:57.698964Z","level":"INFO","fields":{"message":"chat client closed channel"},"target":"cria::routes::chat","span":{"name":"infer"},"spans":[{"name":"infer"}]}
downloads-llm-api-1 | {"timestamp":"2023-12-04T00:48:57.866272Z","level":"INFO","fields":{"message":"chat client closed channel"},"target":"cria::routes::chat","span":{"name":"infer"},"spans":[{"name":"infer"}]}
downloads-llm-api-1 | {"timestamp":"2023-12-04T00:48:58.031191Z","level":"INFO","fields":{"message":"chat client closed channel"},"target":"cria::routes::chat","span":{"name":"infer"},"spans":[{"name":"infer"}]}
downloads-llm-api-1 | {"timestamp":"2023-12-04T00:48:58.194867Z","level":"INFO","fields":{"message":"chat client closed channel"},"target":"cria::routes::chat","span":{"name":"infer"},"spans":[{"name":"infer"}]}
downloads-llm-api-1 | {"timestamp":"2023-12-04T00:48:58.359486Z","level":"INFO","fields":{"message":"chat client closed channel"},"target":"cria::routes::chat","span":{"name":"infer"},"spans":[{"name":"infer"}]}
​ | 2023-12-04T00:50:18 | https://www.reddit.com/r/LocalLLaMA/comments/18a88ec/trouble_retrieving_local_data_bionic_gpt_docker/ | Foot-Note | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18a88ec | false | null | t3_18a88ec | /r/LocalLLaMA/comments/18a88ec/trouble_retrieving_local_data_bionic_gpt_docker/ | false | false | self | 2 | null |
Optimum Intel OpenVino Performance | 8 | ​
[Optimum Intel int4 on iGPU UHD 770](https://preview.redd.it/wv73viw4c64c1.png?width=2560&format=png&auto=webp&s=f4c8f237600ef1ca522d020c33a0754cb2c54893)
I'd like to share the results of inference using the [Optimum Intel](https://huggingface.co/docs/optimum/main/en/intel/inference) library with the Starling-LM-7B Chat model quantized to int4 (NNCF), running on the iGPU of an Intel UHD Graphics 770 (i5-12600) via OpenVINO.
I think it's quite good: 16 tok/s with 25-30% CPU load. I get the same performance with int8 (NNCF) quantization.
This is inside a Proxmox VM with an SR-IOV virtualized GPU, 16GB RAM, and 6 cores. I also found that the ballooning device can crash the VM, so I disabled it; swap is on a zram device.
`free -h` output during inference:

```
               total        used        free      shared  buff/cache   available
Mem:            15Gi       6.2Gi       573Mi       4.7Gi        13Gi       9.3Gi
Swap:           31Gi       256Ki        31Gi
```
Code adapted from [https://github.com/OpenVINO-dev-contest/llama2.openvino](https://github.com/OpenVINO-dev-contest/llama2.openvino)
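For reference, the equivalent flow through optimum-intel looks roughly like the sketch below (a sketch only, not the code from the repo above: the model id, the `load_in_8bit` kwarg, and the device handling are assumptions and may differ between optimum-intel versions; the int4 result above comes from NNCF weight compression on the exported IR, which this sketch does not show):

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "berkeley-nest/Starling-LM-7B-alpha"  # assumed Hub id for the model above

# export=True converts the checkpoint to OpenVINO IR on the fly;
# load_in_8bit applies NNCF int8 weight compression at load time.
model = OVModelForCausalLM.from_pretrained(model_id, export=True, load_in_8bit=True)
model.to("GPU")   # target the Intel iGPU rather than the CPU
model.compile()

tok = AutoTokenizer.from_pretrained(model_id)
inputs = tok("Explain SR-IOV in one sentence.", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```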
What's your thoughts on this? | 2023-12-04T00:49:05 | https://www.reddit.com/r/LocalLLaMA/comments/18a87je/optimum_intel_openvino_performance/ | fakezeta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18a87je | false | null | t3_18a87je | /r/LocalLLaMA/comments/18a87je/optimum_intel_openvino_performance/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'A6mGWcTnFV9w-3-14a4l4GnR-RV6nE_zM0OOXGtZmJ8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IgQqiX4uE_2xqmoHTCfkJqrIwIuzT-J2RL4R0yDfbbI.jpg?width=108&crop=smart&auto=webp&s=ec9535ce44c7bd59f82dc2df227b69f8f79d343b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IgQqiX4uE_2xqmoHTCfkJqrIwIuzT-J2RL4R0yDfbbI.jpg?width=216&crop=smart&auto=webp&s=93a6f391fcea6edd8ccf9fe244f52169b819502d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IgQqiX4uE_2xqmoHTCfkJqrIwIuzT-J2RL4R0yDfbbI.jpg?width=320&crop=smart&auto=webp&s=07545d6d54a013be9ee37ff36b2a6e0fa4b1c15b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IgQqiX4uE_2xqmoHTCfkJqrIwIuzT-J2RL4R0yDfbbI.jpg?width=640&crop=smart&auto=webp&s=74c929e20e0fe6b62b61a760a7a7ee8db388ed6c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IgQqiX4uE_2xqmoHTCfkJqrIwIuzT-J2RL4R0yDfbbI.jpg?width=960&crop=smart&auto=webp&s=240c57df3649f8dd36ba28ea6fe2da3d3cf6a9a3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IgQqiX4uE_2xqmoHTCfkJqrIwIuzT-J2RL4R0yDfbbI.jpg?width=1080&crop=smart&auto=webp&s=bdf1e4fd6b6dae2a32725ae608a5839b71b59b73', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IgQqiX4uE_2xqmoHTCfkJqrIwIuzT-J2RL4R0yDfbbI.jpg?auto=webp&s=28c8a24e3c7ea05a7b1c80518823a985ac0e32d4', 'width': 1200}, 'variants': {}}]} | |
Which local models are best for writing “literature” | 7 | I've seen a lot of posts here talking about this or that model being great for storytelling/writing, but when I try them out the prose is…well…flat, boring, and plain unfunny. I'm not interested in models for writing NSFW (nothing against it, just not my thing). I'm looking for models that can actually output something that sounds literary: for example, if I ask the model to write something in the style of X (where X is an author with a very distinctive style), the output should carry some of that author's style. Or if I ask it to write a poem, the result shouldn't read like something out of a kid's book written by Dr. Seuss.
With the exception of one model I tried, [Storywriter 13b] (which sort of produced something literary after a little coaxing and leading), all the others produced results that sounded like entries from an encyclopedia or dictionary (lifeless, droning, emotionless, etc.). And the leaderboard hasn't been much help in identifying anything close to what I'm looking for: the top-rated models I have tried are, in my limited experience, among the worst at this kind of prose.
Does anyone know of any models, that I can run on my local computer, that can produce “literary” prose (I.e. moving, detailed descriptions plus creative story writing)? Not looking for perfect just better… I’m hoping one of you might have come across a model I haven’t seen/tried so any and all suggestions will be appreciated. | 2023-12-04T00:13:47 | https://www.reddit.com/r/LocalLLaMA/comments/18a7i5n/which_local_models_are_best_for_writing_literature/ | Heliogabulus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18a7i5n | false | null | t3_18a7i5n | /r/LocalLLaMA/comments/18a7i5n/which_local_models_are_best_for_writing_literature/ | false | false | self | 7 | null |