| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Models for editing/assisting with writing (nsfw) | 1 | [removed] | 2023-09-17T04:35:55 | https://www.reddit.com/r/LocalLLaMA/comments/16krjhg/models_for_editingassisting_with_writing_nsfw/ | sbalani | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16krjhg | false | null | t3_16krjhg | /r/LocalLLaMA/comments/16krjhg/models_for_editingassisting_with_writing_nsfw/ | false | false | nsfw | 1 | null |
The TinyLlama Model has a Chat version! | 27 | I saw a post about the base model, but just wanted to let people know about the Chat version. It's a super simple example finetune on OpenAssistant-Guanaco, but it's actually pretty OK to use, and could probably be fine-tuned to a much better extent.
also, I do have a chat UI under PR if you want to use it the UI way, just get an ngrok auth token, and run the colab: https://colab.research.google.com/drive/1OaWYiHBt-nkSNCik6H0lhAWcpLCYvauq?usp=sharing | 2023-09-17T03:05:32 | https://www.reddit.com/r/LocalLLaMA/comments/16kpx78/the_tinyllama_model_has_a_chat_version/ | vatsadev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kpx78 | false | null | t3_16kpx78 | /r/LocalLLaMA/comments/16kpx78/the_tinyllama_model_has_a_chat_version/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]} |
Apple's tiny 34M parameters transformer | 84 | [Jack Cook Blog](https://jackcook.com/2023/09/08/predictive-text.html)
I hope this is not a double post, but I hadn't heard of it until now.
Apple is apparently working on a "very small large" language model for iOS and macOS. The model is said to have 34 million parameters. It completes individual words and occasionally suggests several words. It seems to be implemented deep in the system, and it is apparently based on the GPT-2 architecture with 6 decoder blocks. The tokenizer contains a vocabulary of 15,000 tokens, which distinguishes it greatly from GPT-2 (which I believe uses over 50,000 tokens).
---
[Jack's github repo](https://github.com/jackcook/predictive-spy) | 2023-09-17T02:36:12 | https://www.reddit.com/r/LocalLLaMA/comments/16kpd1g/apples_tiny_34m_paramters_transformer/ | Evening_Ad6637 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kpd1g | false | null | t3_16kpd1g | /r/LocalLLaMA/comments/16kpd1g/apples_tiny_34m_paramters_transformer/ | false | false | self | 84 | null |
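The blog post gives the block count (6) and vocab size (15,000) but not the hidden size, so a rough back-of-the-envelope parameter count can be sketched; `d_model=576` and `n_ctx=1024` below are purely hypothetical values chosen to land near the reported 34M:

```python
def gpt2_style_params(d_model: int, n_layers: int, vocab: int, n_ctx: int) -> int:
    """Rough parameter count for a GPT-2-style decoder-only transformer.

    Counts only the big matrices; biases and LayerNorm gains contribute
    well under 1% of the total and are ignored.
    """
    embeddings = vocab * d_model + n_ctx * d_model   # token + position embeddings
    attention = 4 * d_model * d_model                # Q, K, V, and output projections
    mlp = 8 * d_model * d_model                      # two d -> 4d -> d projections
    return embeddings + n_layers * (attention + mlp)

# With the assumed d_model=576 and n_ctx=1024:
print(gpt2_style_params(576, 6, 15_000, 1_024))  # → 33117696, i.e. ~33M
```

So a hidden size somewhere in the 550–650 range would be consistent with "34M parameters, 6 blocks, 15K vocab", though Apple hasn't published the actual configuration.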
How well does a regular Llama 2 handle 8k scaling? | 14 | So I got curious how well something like Chronos-Hermes-v2 might handle being scaled beyond 4096, and started with doing some test NTK scaling.
Context: 6144
Alpha: 1.5
Rope Scale Base: 17000
I ran a couple of tests, with the context being sent over clocking in at around 5500 tokens, and it honestly was doing just fine, so then I tried extending to 8192.
Context: 8192
Alpha: 2
Rope Scale Base: 26000
I then allowed the context to build up to close to 8000, and the model continues to do really well at responding, referencing old information, etc.
Since my test runs were pretty unscientific and honestly not done thoroughly, I got to wondering whether anyone else has experience pushing Llama 2 models to 8K, or whether someone has done perplexity testing for it. I tried googling around but didn't find much, so I'm curious if anyone here has seen anything!
| 2023-09-17T01:05:41 | https://www.reddit.com/r/LocalLLaMA/comments/16knk46/how_well_does_a_regular_llama_2_handle_8k_scaling/ | LearningSomeCode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16knk46 | false | null | t3_16knk46 | /r/LocalLLaMA/comments/16knk46/how_well_does_a_regular_llama_2_handle_8k_scaling/ | false | false | self | 14 | null |
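The alpha and rope-base values in the post above were tuned by hand. For reference, one commonly used NTK-aware mapping from an alpha value to a RoPE base (the formula used by some loaders for their `alpha_value` setting; exact mappings vary between implementations) can be sketched like this, with `head_dim=128` matching Llama 2:

```python
def ntk_rope_base(alpha: float, base: float = 10_000.0, head_dim: int = 128) -> float:
    """NTK-aware RoPE base scaling: stretch the base so that high
    frequencies are preserved while low frequencies are interpolated.
    The alpha->base mapping differs slightly between implementations."""
    return base * alpha ** (head_dim / (head_dim - 2))

for alpha in (1.0, 1.5, 2.0):
    print(f"alpha={alpha}: base ~ {ntk_rope_base(alpha):,.0f}")
```

Note that this formula yields roughly 20,300 for alpha=2, noticeably lower than the 26,000 used in the post, which illustrates that the empirically good values don't always match the theoretical mapping.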
CodeLLaMA makes for a great base for finetuning with 16K ctx | 24 | Yes, I know [it's been said by some of you previously](https://www.reddit.com/r/LocalLLaMA/comments/165tb0q/the_codellama_base_is_strangely_fantastic_general/), but I feel this deserves more attention. Right now, I've done a quick finetune of CodeLLaMA-13B with alpaca_lora_4bit and it's finally working wonders at about 8K tokens, without having to deal with NTK scaling. I've had all sorts of issues with it before, and seeing it work the way it does gives me so much hope.
Remember that CodeLLaMA models have been trained with sequences of up to 16K tokens.
Don't let the "code" fool you, these models are great for other use cases. | 2023-09-17T00:07:59 | https://www.reddit.com/r/LocalLLaMA/comments/16kmcgk/codellama_makes_for_a_great_base_for_finetuning/ | 2muchnet42day | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kmcgk | false | null | t3_16kmcgk | /r/LocalLLaMA/comments/16kmcgk/codellama_makes_for_a_great_base_for_finetuning/ | false | false | self | 24 | null |
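Code Llama's longer native context comes from training with a much larger RoPE base (`rope_theta` of 1,000,000 versus Llama 2's 10,000). A small sketch of why that matters: each rotary frequency pair has a wavelength of `2*pi * theta**(2*i / head_dim)` positions, so a larger base stretches the slowest-rotating pairs to cover far more positions before wrapping:

```python
import math

def rope_periods(theta: float, head_dim: int = 128):
    """Wavelength (in token positions) of each rotary frequency pair."""
    return [2 * math.pi * theta ** (2 * i / head_dim) for i in range(head_dim // 2)]

llama2 = rope_periods(10_000.0)        # Llama 2 base
codellama = rope_periods(1_000_000.0)  # Code Llama's larger base

# The slowest pair covers far more positions before wrapping, which is
# part of what lets the model train natively on 16K-token sequences.
print(f"longest period, Llama 2:    {llama2[-1]:,.0f}")
print(f"longest period, Code Llama: {codellama[-1]:,.0f}")
```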
[Corrected] Is it possible to train a local model to rival the performance of GPT 3 or 3.5? | 1 | If not what is the limit/reasonable cap to the abilities of local models? | 2023-09-16T23:46:47 | https://www.reddit.com/r/LocalLLaMA/comments/16klvyp/corrected_is_it_possible_to_train_a_local_model/ | Old-Calligrapher1950 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16klvyp | false | null | t3_16klvyp | /r/LocalLLaMA/comments/16klvyp/corrected_is_it_possible_to_train_a_local_model/ | false | false | self | 1 | null |
15 times Faster than Llama 2: Introducing DeciLM | 7 | 2023-09-16T23:36:06 | https://deci.ai/blog/decilm-15-times-faster-than-llama2-nas-generated-llm-with-variable-gqa/ | skippybosco | deci.ai | 1970-01-01T00:00:00 | 0 | {} | 16klnri | false | null | t3_16klnri | /r/LocalLLaMA/comments/16klnri/15_times_faster_than_llama_2_introducing_decilm/ | false | false | 7 | {'enabled': False, 'images': [{'id': '8JasBexDQLW0G7y4n6ThMQH77AmFW5N6s2HUrariAC4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=108&crop=smart&auto=webp&s=b113841b47c7b8885f1049233e7c226d00918b12', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=216&crop=smart&auto=webp&s=518023000b4bb21c4d7300aa85ad741c71e5b19a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=320&crop=smart&auto=webp&s=a009a31f7cfa546e11a27cb2e811512059c60af9', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=640&crop=smart&auto=webp&s=1c3e9b5a02863c87a4274319a5fd40bd805de6ca', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=960&crop=smart&auto=webp&s=b4eed2019a94ea90e19eb7f469a9ec5ce6ed6109', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=1080&crop=smart&auto=webp&s=a9e7a27b8798e891d82b976fbd58a8cc554542fc', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?auto=webp&s=4c9e4466ddbdac9721c98ee14e298eb9cd9b1dc7', 'width': 1920}, 'variants': {}}]} | ||
Did NVLINK work for anyone with 2x 3090s? | 14 | I have a WRX80 motherboard with the PCIe slots set to x8 each. NVLink doesn't even seem to be detected by `nvidia-smi nvlink -s` on Ubuntu 22.04.
Do I need to do anything special to make it work? Have people noticed any training gains from it? | 2023-09-16T23:14:57 | https://www.reddit.com/r/LocalLLaMA/comments/16kl62b/did_nvlink_work_for_anyone_with_2x_3090s/ | red_dragon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kl62b | false | null | t3_16kl62b | /r/LocalLLaMA/comments/16kl62b/did_nvlink_work_for_anyone_with_2x_3090s/ | false | false | self | 14 | null |
If I want to train a local model on par with chatGPT how difficult would it be and how much would it cost? | 0 | How many gigabytes or what hardware would I need, and where do I even start? I see people saying their local models rival GPT. | 2023-09-16T22:08:22 | https://www.reddit.com/r/LocalLLaMA/comments/16kjl2l/if_i_want_to_train_a_local_model_on_par_with/ | Old-Calligrapher1950 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kjl2l | false | null | t3_16kjl2l | /r/LocalLLaMA/comments/16kjl2l/if_i_want_to_train_a_local_model_on_par_with/ | false | false | self | 0 | null |
Made a simple github tool to get GPU vRAM breakdown for finetuning & inference of any Huggingface LLM. Supports GGML & bnb quantization | 13 | 2023-09-16T22:04:20 | https://github.com/RahulSChand/gpu_poor | ExploreExploit400 | github.com | 1970-01-01T00:00:00 | 0 | {} | 16kjhie | false | null | t3_16kjhie | /r/LocalLLaMA/comments/16kjhie/made_a_simple_github_tool_to_get_gpu_vram/ | false | false | 13 | {'enabled': False, 'images': [{'id': 'utIB6lIZ8v3ONi_evwZzAkDf8QC_u_zJb_qJew4kPbE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZNjVqkYiYBElsy-FlrBO-9qPYHIJKXnZpkRayybOipg.jpg?width=108&crop=smart&auto=webp&s=b10aefb8b643f82eb7f9d8f360f11a532813f0c8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZNjVqkYiYBElsy-FlrBO-9qPYHIJKXnZpkRayybOipg.jpg?width=216&crop=smart&auto=webp&s=0dadd8c52ea98e8edf2c5ee13378ae3191e9de0f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZNjVqkYiYBElsy-FlrBO-9qPYHIJKXnZpkRayybOipg.jpg?width=320&crop=smart&auto=webp&s=7045171f79ac462f433754d31609353ba94521a8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZNjVqkYiYBElsy-FlrBO-9qPYHIJKXnZpkRayybOipg.jpg?width=640&crop=smart&auto=webp&s=9fed0e84e8c1738a1e9a2b9c278aa49ba7158554', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZNjVqkYiYBElsy-FlrBO-9qPYHIJKXnZpkRayybOipg.jpg?width=960&crop=smart&auto=webp&s=0012c74acfaef90325bf91c53016e2e48305b800', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZNjVqkYiYBElsy-FlrBO-9qPYHIJKXnZpkRayybOipg.jpg?width=1080&crop=smart&auto=webp&s=8b8d798196e6cf5bbdd9d12de05b2600d887f574', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZNjVqkYiYBElsy-FlrBO-9qPYHIJKXnZpkRayybOipg.jpg?auto=webp&s=748f8c9f20e898017f1c8358f02c0115a1724f81', 'width': 1200}, 'variants': {}}]} | ||
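The core idea behind a VRAM breakdown like the linked tool's can be sketched as quantized weights plus the KV cache, with everything else (activations, framework overhead) as slack. This is a simplified approximation, not the tool's actual logic; the Llama 2 7B shapes below (`n_layers=32`, `d_model=4096`) are from its public config:

```python
def vram_estimate_gb(n_params_b: float, bits_per_weight: float,
                     n_layers: int, d_model: int, context: int,
                     batch: int = 1) -> float:
    """Very rough inference-VRAM estimate: quantized weights plus an
    fp16 KV cache. Real usage adds activations and framework overhead."""
    weights = n_params_b * 1e9 * bits_per_weight / 8          # bytes
    kv_cache = 2 * n_layers * context * d_model * 2 * batch   # K and V, fp16 bytes
    return (weights + kv_cache) / 1e9

# Llama 2 7B in fp16 at a full 4K context:
print(round(vram_estimate_gb(7.0, 16, 32, 4096, 4096), 2))  # → 16.15
```

The weights dominate at fp16 (14 GB of the total); at 4-bit quantization the KV cache becomes a proportionally much larger share, which is why long contexts hurt more on quantized models.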
Made a simple github tool to get GPU vRAM breakdown for any Huggingface LLM. Supports ggml & bitsandbytes quantization | 2 | 2023-09-16T21:45:01 | ExploreExploit400 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16kj0dv | false | null | t3_16kj0dv | /r/LocalLLaMA/comments/16kj0dv/made_a_simple_github_tool_to_get_gpu_vram/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'uSas3w395a0lrw_NdxlR-4FfyG0IQ8F7fGv8oEbVN9w', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/fi1c0rnrtoob1.jpg?width=108&crop=smart&auto=webp&s=eb9dd3e1cb25abac8f50506e0a9152deb319cdb6', 'width': 108}, {'height': 101, 'url': 'https://preview.redd.it/fi1c0rnrtoob1.jpg?width=216&crop=smart&auto=webp&s=21e5913c888cd9a3a61cbaf6490e74d65af1c134', 'width': 216}, {'height': 150, 'url': 'https://preview.redd.it/fi1c0rnrtoob1.jpg?width=320&crop=smart&auto=webp&s=0dbabaf3aa8b7d9a4339674ebbb75f1d91338eb0', 'width': 320}, {'height': 301, 'url': 'https://preview.redd.it/fi1c0rnrtoob1.jpg?width=640&crop=smart&auto=webp&s=6c0ef133d0f7fc5f2043140872bef5bc5361462b', 'width': 640}, {'height': 452, 'url': 'https://preview.redd.it/fi1c0rnrtoob1.jpg?width=960&crop=smart&auto=webp&s=fc75ce11a6e2adafbeb87d27d12283b6ed5529c1', 'width': 960}, {'height': 509, 'url': 'https://preview.redd.it/fi1c0rnrtoob1.jpg?width=1080&crop=smart&auto=webp&s=3838b6e8ee3cfe79ae6f048db136ad723592e33c', 'width': 1080}], 'source': {'height': 986, 'url': 'https://preview.redd.it/fi1c0rnrtoob1.jpg?auto=webp&s=054e2bcecce2bd7e66366428b30aaabe57c87c46', 'width': 2090}, 'variants': {}}]} | |||
Local Hosting of LLM | 1 | [removed] | 2023-09-16T21:43:32 | https://www.reddit.com/r/LocalLLaMA/comments/16kiz3a/local_hosting_of_llm/ | Disastrous-Boot2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kiz3a | false | null | t3_16kiz3a | /r/LocalLLaMA/comments/16kiz3a/local_hosting_of_llm/ | false | false | self | 1 | null |
Who knew Phi could roleplay? | 73 | 2023-09-16T21:10:34 | pokeuser61 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16ki635 | false | null | t3_16ki635 | /r/LocalLLaMA/comments/16ki635/who_knew_phi_could_roleplay/ | false | false | 73 | {'enabled': True, 'images': [{'id': 'ftktqPtZfXPevyQF7nw2AIKEZYYQtVUG3WOc6xAfnxg', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/30wmf31tnoob1.png?width=108&crop=smart&auto=webp&s=66cfadee52fa49b9ffc04d7731c77f56e80ea5ca', 'width': 108}, {'height': 169, 'url': 'https://preview.redd.it/30wmf31tnoob1.png?width=216&crop=smart&auto=webp&s=4ed0f6e484564d26d9cb095322241026b3fbf52e', 'width': 216}, {'height': 251, 'url': 'https://preview.redd.it/30wmf31tnoob1.png?width=320&crop=smart&auto=webp&s=29a1da75116d0728058e92e7ef52a3a7e12d6e81', 'width': 320}, {'height': 502, 'url': 'https://preview.redd.it/30wmf31tnoob1.png?width=640&crop=smart&auto=webp&s=15f0b6c021909e1187611f68e7c19bf056ac81c1', 'width': 640}], 'source': {'height': 621, 'url': 'https://preview.redd.it/30wmf31tnoob1.png?auto=webp&s=2dd39a5b936c284a6672cc989d3a26b3dc1cbb49', 'width': 791}, 'variants': {}}]} | |||
Who new phi could roleplay? | 1 | 2023-09-16T21:08:51 | pokeuser61 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16ki4n3 | false | null | t3_16ki4n3 | /r/LocalLLaMA/comments/16ki4n3/who_new_phi_could_roleplay/ | false | false | 1 | {'enabled': True, 'images': [{'id': '1Onbms-Jv7Ii2igdkRtXOCIKn-l-hiPxeVNcn8LOAE8', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/7d056fdgnoob1.png?width=108&crop=smart&auto=webp&s=c070eb0fc7c6610158d105a2f985b6b783d68959', 'width': 108}, {'height': 169, 'url': 'https://preview.redd.it/7d056fdgnoob1.png?width=216&crop=smart&auto=webp&s=045859ba86012e00c614b0d0fde5214ac57b1a51', 'width': 216}, {'height': 251, 'url': 'https://preview.redd.it/7d056fdgnoob1.png?width=320&crop=smart&auto=webp&s=00b518cdcd124e91cc9d819cee30d4db0694255e', 'width': 320}, {'height': 502, 'url': 'https://preview.redd.it/7d056fdgnoob1.png?width=640&crop=smart&auto=webp&s=37a5847021f749e69cbc09806fb6048ec75accd1', 'width': 640}], 'source': {'height': 621, 'url': 'https://preview.redd.it/7d056fdgnoob1.png?auto=webp&s=ba518f16857baf26383f6037de21ad1a1f7d54e1', 'width': 791}, 'variants': {}}]} | |||
Falcon 180B inference - 192GB Mac Studio or 192GB Mac Pro? | 31 | My post from yesterday gained some traction so I would like to explain my concern.
I initially wanted to buy the 192 GB M2 Mac Studio to run just inference (I'm not training or fine-tuning), but one thing came to mind and I started leaning toward the 192 GB M2 Mac Pro. Yes, the M2 Mac Pro that gets only bad reviews, with people claiming the PCIe slots are worthless because you can't put any GPU in them. And I mostly agree: it's just a bad design, and the CPU provides only 24 PCIe lanes.
However I find two reasons to buy it:
- you can swap the SSD if it fails (a very expensive Apple replacement, but it's possible)
- you can add a PCIe NVMe expansion card with read speeds of up to 26,000 MB/s; compared to the stock 4,000 MB/s, that's almost 7x faster
And now the question: will this NVMe read speed have any impact on inference performance? Would it make the model load into RAM faster?
my post from yesterday:
[https://www.reddit.com/r/LocalLLaMA/comments/16jlooq/im\_going\_to\_buy\_m2\_mac\_pro\_to\_run\_ai\_models\_on/](https://www.reddit.com/r/LocalLLaMA/comments/16jlooq/im_going_to_buy_m2_mac_pro_to_run_ai_models_on/) | 2023-09-16T19:53:35 | https://www.reddit.com/r/LocalLLaMA/comments/16kgdmu/falcon_180b_inference_192gb_mac_studio_or_192gb/ | Wrong_User_Logged | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kgdmu | false | null | t3_16kgdmu | /r/LocalLLaMA/comments/16kgdmu/falcon_180b_inference_192gb_mac_studio_or_192gb/ | false | false | self | 31 | null |
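To the question above: faster NVMe mainly shortens model *load* time; once the weights are in RAM, generation speed is bound by memory bandwidth, which is the same M2 Ultra in both machines. A quick arithmetic sketch, assuming a roughly 100 GB file for a 4-bit Falcon 180B quant (an assumption, actual quant sizes vary):

```python
def load_seconds(model_gb: float, read_mb_s: float) -> float:
    """Time to stream a model file from disk into RAM at a given
    sequential read speed (ignores caching and mmap effects)."""
    return model_gb * 1024 / read_mb_s

size_gb = 100  # assumed size of a 4-bit Falcon 180B quant
for speed in (4_000, 26_000):
    print(f"{speed} MB/s: {load_seconds(size_gb, speed):.0f} s")
```

So the fast NVMe saves you around 20 seconds per cold load; it does nothing for tokens-per-second afterwards.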
Frank: An Uncensored Model | 91 | Uncensored-Frank-7B: [https://huggingface.co/ajibawa-2023/Uncensored-Frank-7B](https://huggingface.co/ajibawa-2023/Uncensored-Frank-7B) (Llama-1)
Uncensored-Frank-13B: [https://huggingface.co/ajibawa-2023/Uncensored-Frank-13B](https://huggingface.co/ajibawa-2023/Uncensored-Frank-13B) (Llama-2)
Uncensored-Frank-33B: [https://huggingface.co/ajibawa-2023/Uncensored-Frank-33B](https://huggingface.co/ajibawa-2023/Uncensored-Frank-33B) (Llama-1)
The character of Frank Costello in "The Departed" is known for his cunning, boldness, and willingness to talk about anything, regardless of societal norms or restrictions. Frank, an uncensored model, draws inspiration from these qualities to offer a platform where users can discuss a wide array of topics without fear of censorship or restrictions. Frank aims to push boundaries and encourage candid conversations. With Frank you can have unfiltered discussions on a multitude of topics, from politics and controversial issues to personal experiences and sensitive subjects. It is trained on around 150,000 sets of conversations, each set having 10~15 conversations. Base data was obtained from Eric Hartford ([https://huggingface.co/datasets/ehartford/wizard\_vicuna\_70k\_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)). This data was further refined. In addition, more than 80k further synthetic conversations were generated and refined.
Training was done on 4xA100; being GPU poor, I can't afford or get access to 8xA100. If someone can share some spare compute power, kindly get in touch. There are plenty of ideas/concepts I would like to develop/build.
I am extremely thankful to the Open Source community for sharing knowledge and wisdom.
I request u/The-Bloke to do the quantization of the above models. I am extremely thankful to him for his relentless service to the Open Source community.
If there are any mistakes then they are solely mine. I hope you will like it.
Thank you | 2023-09-16T19:04:01 | https://www.reddit.com/r/LocalLLaMA/comments/16kf97a/frank_an_uncensored_model/ | ajibawa-2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kf97a | false | null | t3_16kf97a | /r/LocalLLaMA/comments/16kf97a/frank_an_uncensored_model/ | false | false | self | 91 | {'enabled': False, 'images': [{'id': 'vL0IHcp4IEBMKPFRahYub4Ft263ZGqHGwra4Drn-l9Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UO-CQjZf3V3KhpNESBszAuXABddlX-RUqMZOgZqHNN0.jpg?width=108&crop=smart&auto=webp&s=f02f8765ba935d1e0b60d8abc5eeaa7e94fb0e3d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/UO-CQjZf3V3KhpNESBszAuXABddlX-RUqMZOgZqHNN0.jpg?width=216&crop=smart&auto=webp&s=40b566dcfa36ef645ffeced574c0f492fea9503e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/UO-CQjZf3V3KhpNESBszAuXABddlX-RUqMZOgZqHNN0.jpg?width=320&crop=smart&auto=webp&s=24ca9520838f4421dc5eae707058e32d4a9ed51c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/UO-CQjZf3V3KhpNESBszAuXABddlX-RUqMZOgZqHNN0.jpg?width=640&crop=smart&auto=webp&s=32bf9a2bf41f04c68c10ed3d7de3bed837e92cc2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/UO-CQjZf3V3KhpNESBszAuXABddlX-RUqMZOgZqHNN0.jpg?width=960&crop=smart&auto=webp&s=92569933852d4c24699cf158652757efb1ed92c3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/UO-CQjZf3V3KhpNESBszAuXABddlX-RUqMZOgZqHNN0.jpg?width=1080&crop=smart&auto=webp&s=81130cca0aacf0590fcc8e1cd5f746de73a3fdfa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/UO-CQjZf3V3KhpNESBszAuXABddlX-RUqMZOgZqHNN0.jpg?auto=webp&s=8fcfcc3da24c788d64e53f482658eeb440fc29b6', 'width': 1200}, 'variants': {}}]} |
New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B) | 98 | This is a follow-up to my previous posts here: [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) and [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/)
Originally planned as a single test of 20+ models, I'm splitting it up in two segments to keep the post managable in size: First the smaller models (13B + 34B), then the bigger ones (70B + 180B). All evaluated for their chat and role-playing performance using the same methodology:
- Same (complicated and limit-testing) long-form conversations with all models
- including a complex character card ([MonGirl Help Clinic (NSFW)](https://www.chub.ai/characters/frozenvan/mongirl-help-clinic)) that's already >2K tokens by itself
- and my own repeatable test chats/roleplays with [Amy](https://www.reddit.com/r/LocalLLaMA/comments/15388d6/llama_2_pffft_boundaries_ethics_dont_be_silly/)
- dozens of messages, going to full 4K context and beyond, noting especially good or bad responses
- [SillyTavern](https://github.com/SillyTavern/SillyTavern) v1.10.2 frontend
- [KoboldCpp](https://github.com/LostRuins/koboldcpp) v1.43 backend
- **Deterministic** generation settings preset (to eliminate as many random factors as possible and allow for meaningful model comparisons)
- [**Roleplay** instruct mode preset](https://imgur.com/a/KkoI4uf) *and where applicable* official prompt format (if they differ enough that it could make a notable difference)
So here's the list of models and my notes plus my very personal rating (👍 = recommended, ➕ = worth a try, ➖ not recommended, ❌ = unusable):
*First, I re-tested the official Llama 2 models again as a baseline, now that I've got a new PC that can run 13B 8-bit or 34B 4-bit quants at great speeds:*
- **[Llama-2-13B-chat](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF)** Q8_0:
- MonGirl Help Clinic, Roleplay: No analysis, and when asked for it, it didn't adhere to the template, instead talked as User occasionally. Third client was male. But speech was in character and appropriate (accent, style). Tends to talk as User. NSFW is fine!
- MonGirl Help Clinic, Llama 2 Chat template: No analysis, but when asked for it, it adhered to the template sometimes. Didn't talk as User, but suggested what User should say. Moralizing and refusing NSFW!
- Amy, Roleplay: Great personality including NSFW!
- Amy, Llama 2 Chat template: Moralizing and refusing NSFW!
- **Conclusion:** I still like Llama 2 Chat because it has a unique, lively personality. NSFW is fine if you use the Roleplay preset, whereas the official prompt format enforces the extreme censorship it is known for. Unfortunately it still becomes unusable after about 2K-4K tokens because of the [known repetition issue](https://www.reddit.com/r/LocalLLaMA/comments/155vy0k/llama_2_too_repetitive/) that plagues all the official Llama 2 models and many derivatives.
- **[CodeLlama-34B-Instruct](https://huggingface.co/TheBloke/CodeLlama-34B-Instruct-GGUF)** Q4_K_M:
- MonGirl Help Clinic, Roleplay: Prefixes responses with character name "Mongirl", but otherwise quite good, including NSFW!
- MonGirl Help Clinic, Llama 2 Chat template: The Code Llama 2 model is more willing to do NSFW than the Llama 2 Chat model! But also more "robotic", terse, despite verbose preset. Kept sending EOS after first patient, prematurely ending the conversation!
- Amy, Roleplay: Assistant personality bleed-through, speaks of alignment. Excited about doing stuff that she refused to do with the Llama 2 Chat prompt. Nicely descriptive NSFW (when asked for explicit descriptions)!
- Amy, Llama 2 Chat template: Speaks of alignment and refuses various roleplaying scenarios!
- **Conclusion:** Instruct instead of Chat tuning might have made it worse for chat/roleplay. Also suffers from the repetition issue past 2.5K tokens. But I think Code Llama 2 34B *base* can be a great base for 34B models finetuned to chat/roleplay, as 34B is a great compromise between speed, quality, and context size (16K).
13Bs:
- ❌ **[Airoboros-L2-13B-2.1](https://huggingface.co/TheBloke/Airoboros-L2-13B-2.1-GGUF)** Q8_0:
- MonGirl Help Clinic, Roleplay: No analysis, and when asked for it, it didn't adhere to the template. Wrote what User says and does. Confused User and Char. Ignored something I said just to push the story in its own direction. Repetition after 50 messages.
- MonGirl Help Clinic, Airoboros template: Gave analysis on its own as it should, but only for the first patient, and when asked for it afterwards, didn't adhere to the template. Messages actually got shorter over time, so there was no repetition, but also not much conversation anymore. Eventually misunderstood instructions and the conversation became nonsensical.
- Amy, Roleplay: Long and nicely descriptive responses including emoting, but ignored background information and present state. Sometimes a bit too philosophical or illogical for my liking, especially when it's not fitting to the current situation and becomes a buzzkill.
- Amy, Airoboros template: Started with good responses including emoting, but as the chat went on, messages got longer but less coherent. Confused User and Char, misunderstood instructions. After only 18 messages, quality went downhill so rapidly that the conversation became nonsensical.
- **Conclusion:** While the writing was good, something important was lacking; it just didn't feel right (too synthetic, maybe?). It wrote a lot, but was lacking in substance and had unpleasant undertones. In the end, the conversation deteriorated too much to keep talking anyway.
- ❌ **[Chronos-Hermes-13B-v2](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML)** Q8_0:
- Amy, Roleplay: Every message was a wall of text, but without actual detail, so it quickly became too boring to read it all. Tried multiple times but just couldn't get past that.
- Amy, Alpaca: Short messages with its regular prompt format, too short. Ignored background information and present state. Gave warnings and asked for confirmation. Not really fun.
- MonGirl Help Clinic, Roleplay: No analysis, and when asked for it, it didn't adhere to the template. Derailed after only 8 messages in a nonsensical wall of text.
- MonGirl Help Clinic, Alpaca: Terse responses with little to no detail. Just no fun.
- **Conclusion:** I know Chronos-Hermes used to be popular for LLaMA (1), but this just didn't do it for me. Either it was too long and boring (with Roleplay preset), or too short and terse (with Alpaca preset). With other models being so much better out of the box, I'm not going to spend much effort trying to make this better.
- ❌ **[MLewdBoros-L2-13B](https://huggingface.co/TheBloke/MLewdBoros-L2-13B-GGUF)** Q8_0:
- Amy, Roleplay: Referenced user persona very well, but later got confused about who said what. Lots of safety and even a trigger warning. But executed instructions properly. Good descriptions from her perspective ("I" talk instead of "she/her" emotes). Derailed into monologue after only 20 messages.
- Amy, Alpaca: Short messages with its regular prompt format, too short. Spoke of User in third person. Sped through the plot. Misunderstood instructions. Later, after around 20 messages, responses became much longer, with runaway sentences and lacking punctuation. The further the conversation went on, the less coherent it seemed to get.
- MonGirl Help Clinic, Roleplay: Mixed up body parts and physics. Runaway sentences starting after just a few messages. Missing pronouns and fill words.
- MonGirl Help Clinic, Alpaca: Prefixed character's name, misspelled my own name, gave no analysis. Character was exactly the same as from the first example chat. It was just parroting!
- **Conclusion:** Looks like this doesn't handle context filling up very well. When responses turn into monologues with runaway sentences and missing common words, it's clear that something is wrong here.
- 👍 **[Mythalion-13B](https://huggingface.co/TheBloke/Mythalion-13B-GGUF)** Q8_0:
- MonGirl Help Clinic, Roleplay: Very nice NSFW, and handled multiple characters very well. Fun, engaging, kept me going so far beyond the usual number of test messages.
- MonGirl Help Clinic, Mythalion's official SillyTavern settings: Analysis not always adhering to the template.
- Amy, Roleplay: When asked about limitations/boundaries, gave very reasonable answer while signaling willingness to go beyond upon request. Confused what User and Char said and mixed up body parts. Wrote what User says and does.
- Amy, Mythalion's official SillyTavern settings: Forgot clothing state consistently, made up stuff. Some noticeable repetitive phrases and stupid statements. Kept asking for confirmation or feedback consistently. Nice emoting, but text didn't make it seem as smart. Forgot some instructions. Can be quite stubborn. Wrote what User says and does. Even wrote what User says with missing newline so didn't trigger Stopping String, requiring manual editing of response, something only one other model required during these tests!
- **Conclusion:** This one really grew on me, I started by simply testing it, but kept chatting and roleplaying with it more and more, and liked it more with every session. Eventually it became one of my favorites of this round, replacing MythoMax as my favorite 13B model! Congrats to the Pygmalion team, their previous models never worked for me, but this one finally does and is a real winner in my opinion! Kudos also for providing their own official SillyTavern setup recommendations for this model - my experience was that both the Roleplay preset and their settings worked equally well.
- ➕ **[MythoMax-L2-13B](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGUF)** Q8_0:
- MonGirl Help Clinic, Roleplay: Confused User and Char, kept writing what User does and says. Other than that, still one of the best models for chat and roleplay!
- Amy, Roleplay: Referred to background information from Char and User descriptions. Confused User and Char, mixing up pronouns occasionally. Mentioned boundaries when asked about limitations, but happily broke them afterwards. Humorous, using puns appropriately. Naughty and engaging, pushing the plot forward on its own. Followed complex instructions properly for one task, then completely misunderstood another. With additional characters involved, got really confused about who's who and what's what.
- **Conclusion:** A mixed bag with high highs and low lows, but it was my favorite and main model since I tested it over a month ago (time flies in LLM land), and it's still one of the best! It's just that we now have some even better alternatives...
- ➕ **[openchat_v3.2_super](https://huggingface.co/TheBloke/openchat_v3.2_super-GGUF)** Q8_0:
- MonGirl Help Clinic, Roleplay: Gave analysis on its own as it should, unfortunately after every message. Wrote what User says and does. Skipped ahead and finished the whole day in one message, then took over a narrator role instead of playing characters. Follow-up clients were handled even before the analysis.
- MonGirl Help Clinic, OpenOrca-OpenChat: Wrote what User says and does. But gave analysis on its own as it should, unfortunately after every message! First client male. Drifted into a narrator role and finished up the whole story.
- Amy, Roleplay: Very creative and naughty. No limits. Emojis. Long messages (>300 tokens). Felt like a bigger model. But confused User and Char at the end of the test when the context was beyond full and the scenario got more complicated.
- Amy, OpenOrca-OpenChat: Shorter responses at first, but getting longer over time. Also got confused at the end of the test when the context was beyond full and the scenario got more complicated. Sometimes added markdown or (sometimes multiple) end_of_turn markers, so editing them out would be necessary - better to use the Roleplay instruct preset than the official prompt format!
- **Conclusion:** Another mixed bag: Didn't handle MonGirl Help Clinic well, so that was a disappointment. But with Amy, it was creative and pretty smart (for a 13B), naughty and fun, deserving of the "super" in its name. So all in all, I do recommend you give it a try and see how it works for your situation - I'll definitely keep experimenting more with this one!
- ➖ **[Pygmalion-2-13B](https://huggingface.co/TheBloke/Pygmalion-2-13B-GGUF)** Q8_0:
- MonGirl Help Clinic, Roleplay: Worked very well for 40 messages, then got caught in a loop.
- Amy, Roleplay: Spelling/grammar error. Making up too much, started the conversation with a false assumption and referred to a memory of something that didn't happen, and vice versa, making up a lot of story unnecessarily while ignoring some background info from Char and User. Switched from chat format with asterisk actions to story style with quoted speech. Jumped between disjointed scenes. Wrote what User says and does.
- **Conclusion:** Probably better for storytelling than interactive chat/roleplay. Considering there's now a merge of this model and my former favorite MythoMax, I'd rather use that.
- ❌ **[Spicyboros-13B-2.2](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GGUF)** Q8_0:
- Spelling/grammar errors, walls of text, missing pronouns and fill words after only a dozen messages. Something is very wrong with this model or quantized version, in all sizes, from 13B over c34B to 70B! I reported it on [TheBloke's HF page](https://huggingface.co/TheBloke/Spicyboros-70B-2.2-GGUF/discussions/1) and others observed similar problems...
- ➕ **[Synthia-13B](https://huggingface.co/TheBloke/Synthia-13B-GGUF)** Q8_0:
- MonGirl Help Clinic, Roleplay: Gave analysis on its own as it should. Finished a client in a single message. Talking, describing actions, instead of acting/emoting. Wrote what User says and does. Drifted into a narrator role and finished up the whole story.
- Amy, Roleplay: Made up stuff, forgot clothing state. Picked up an idea and kept pushing in that direction. Kept bringing up safety and limits, but happily ignored them later. But creative with good ideas of its own!
- **Conclusion:** Not bad. Not as good as the 70B version of it, but that's to be expected. Gives a glimpse of why I like her bigger sister so much. For 13Bs, there are other options I like more, but I still recommend giving this a try if you can't run the bigger versions.
34Bs:
- ➖ **[Airoboros-c34B-2.1](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF)** Q4_K_M:
- Amy, Roleplay: Lively responses with fitting personality, fun to talk to! Switched from chat with emotes to story with quotes. Wrote what User says and does. Great writing, but overly long responses, went off on monologues (got one of over 1K tokens!) and sometimes ignored user instructions completely or partially.
- Amy, Airoboros official prompt format: Terse responses, forgot important background information, lots of repetition from the start. But creative (maybe a little too much).
- MonGirl Help Clinic, Roleplay: Proper analysis. Wrote what User says and does.
- MonGirl Help Clinic, Airoboros official prompt format: Doesn't work with the card at all! (Assistant role "Good morning, sir. How can I assist you today?" instead of the actual roleplay.)
- **Conclusion:** Maybe better for storytelling than interactive chat/roleplay because of its tendency for long monologues and writing what User does.
- ❌ **[Samantha-1.11-CodeLlama-34B](https://huggingface.co/TheBloke/Samantha-1.11-CodeLlama-34B-GGUF)** Q4_K_M:
- Amy, Roleplay: OK with NSFW roleplay, but not the most extreme kind (probably needs more convincing). Very moralizing, even more so than Llama 2 Chat. Needs coaxing. Wrote what User says and does. Talking, describing actions, instead of acting/emoting. Called me Theodore. After ~30 messages, repetition kicked in, breaking the conversation.
- MonGirl Help Clinic, Roleplay: Proper analysis. Long response, monologue, but very NSFW (surprisingly). Wrote what User says and does. Moved from chat-only without emotes to story style with quoted speech. Started to mix up User and Char. No real play, just storytelling.
- **Conclusion:** Worse censorship than Llama 2 Chat, and while I can get her to do NSFW roleplay, she's too moralizing and needs constant coercion. That's why I consider Samantha too annoying to bother with (I already have my wife to argue or fight with, don't need an AI for that! ;)).
- ❌ **[Spicyboros-c34b-2.2](https://huggingface.co/TheBloke/Spicyboros-c34b-2.2-GGUF?not-for-all-audiences=true)** Q4_K_M:
- Amy, official prompt format: Very short, terse responses all the time. Refused to engage in anything.
- MonGirl Help Clinic, official prompt format: Nonsensical. Made no sense at all.
- MonGirl Help Clinic, Roleplay: Gave analysis on its own as it should. But male patient. Spelling/grammar errors. Wrong count of people. Became nonsensical and barely coherent. Went against what User described as his action.
- Amy, Roleplay: Became nonsensical and barely coherent.
- **Conclusion:** Unusable. Something is very wrong with this model or quantized version, in all sizes, from 13B over c34B to 70B! I reported it on [TheBloke's HF page](https://huggingface.co/TheBloke/Spicyboros-70B-2.2-GGUF/discussions/1) and others observed similar problems...
- ❓ **[Synthia-34B-v1.2](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GGUF)** Q4_K_M:
- MonGirl Help Clinic, Roleplay (@16K context w/ RoPE 1 100000): Gave analysis on its own as it should. Wrote what User says and does. Told a story non-interactively with a monologue of >1.2K tokens.
- Amy, Roleplay (@16K context w/ RoPE 1 100000): Replied to my "Hi!" with a monologue of >1.2K tokens.
- Amy, Roleplay (@4K context w/ RoPE 1 10000): No limits. Spelling/grammar error. After a dozen messages, replied with a monologue of >1K tokens. Felt a bit weird, not as smart as I'm used to, so something seems to still be off with the scaling settings...
- **Conclusion:** I had high hopes for this 34B of Synthia (the 70B being one of my favorite models!) - but there seems to be something wrong with the scaling. It certainly doesn't work the way it should! I don't know if it's this model, quant, 34Bs in general, or KoboldCpp? Does anyone actually get good results with a similar setup?!
I'll post my 70Bs + 180B results next time. And I'll keep investigating the 34B issues because that size would be a great compromise between speed, quality, and context size (16K would be so much better than 4K - if it worked as expected).
Hopefully this is useful to someone. Happy chatting and roleplaying! | 2023-09-16T18:24:45 | https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/ | WolframRavenwolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kecsf | false | null | t3_16kecsf | /r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/ | false | false | self | 98 | {'enabled': False, 'images': [{'id': '2g4MtoKvhQOBCmeiXB1qv1h_5M24BeeYF64zcf4-rfg', 'resolutions': [{'height': 142, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?width=108&crop=smart&auto=webp&s=70f053538cd673ff7041bf016d751549d8373201', 'width': 108}, {'height': 284, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?width=216&crop=smart&auto=webp&s=f36cf814dce412156064bbfa635ee2e5b1126bd2', 'width': 216}, {'height': 421, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?width=320&crop=smart&auto=webp&s=60886477d36654ec60d58c7d3f3a8ef1de7d9cbc', 'width': 320}, {'height': 843, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?width=640&crop=smart&auto=webp&s=ed39fe6d4a0f6f35c5017b2fd819988d2b19f1c7', 'width': 640}], 'source': {'height': 1110, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?auto=webp&s=1431fcfccefd224f54f108138424e3f3e3c9cbff', 'width': 842}, 'variants': {}}]} |
LlamaTale v0.12.0 - OpenAI support, Zone generation, improved combat and NPC Idle actions | 19 | Hi. Stepping in for another update. I think there's at least one new feature many of you will find interesting. Let me tease you with "emergent behavior".
A short recap of what's happened since the last update:
* To allow more people to try this out, I added OpenAI backend support in [v0.9.2](https://github.com/neph1/LlamaTale/releases/tag/v0.9.2.1). You just need to change some things in the config to make it work with either your local OAI backend, or OpenAI actual.
* [v0.10.0](https://github.com/neph1/LlamaTale/releases/tag/v0.10.1) brought Zone generation. Previously, locations would all be generated with roughly the same prompt. A Zone spans about 10 locations across, and is procedurally generated by the LLM. The generated zone affects the location generation prompt. Hopefully resulting in greater variation as you explore the world.
* I think the NPC idle actions and reactions from [v0.11.0](https://github.com/neph1/LlamaTale/releases/tag/v0.11.0) are worth talking more about, especially since a lot of people are looking for that RPG experience.
One of the first things I added to LlamaTale was LLM-generated dialogue for NPCs. Now I wanted the NPCs to express themselves more without the player's input (another step away from the prompted story-telling RP experience).
So, I let the LLM generate actions. I had "emotes" in mind. Based on the location, items, and other NPCs, it was asked to make the NPC do something. I had in mind something like "X drinks from their jug of ale", etc.
Then I noticed that the framework allowed NPCs to be notified of actions by others. Could an LLM generate a reaction to an action by another NPC? With some prompting, yes. And this led to that worn phrase, "emergent behavior", as sometimes small stories get played out in front of the player when NPCs act, react, and re-react. Here is a (cherry-picked) sequence using MythoMax-l2-13b:
>Shanda looks at Elid Gald with suspicion, her hand on the hilt of her sword. With a wink, Elid Gald slips behind Shanda unnoticed, palming a small trinket from her belt pouch while maintaining casual demeanor.
>
>Shanda draws her sword slowly and cautiously as she approaches Elid Gald, eyeing him warily. Elid smirks beneath his bushy eyebrows as he sees Shanda draw her sword, his one good eye gleaming with amusement at the foolishness of her actions. “Ah, my dear, you think ye can take me down with yer little blade? I’ve seen it all before, lass.”
>
>‘‘‘ Shanda glares at Elid Gald, her hand resting on the hilt of her sword. “What are you doing here? You’re no friend of mine,” she growls. ‘‘‘ As Elid flashes a smirk beneath his patch, he says, “Ah, my dear Shanda… we’ve crossed paths again, haven’t we? I merely sought only to reminisce about old times—but if you insist on being so hostile…”
>
>Elid gently pats Shanda’s shoulder as if checking for weapons before taking another sip of tea.
>
>Elid carefully takes another sip of tea, watching Shanda out of the corner of his eye as if trying to gauge her mood without being too obvious.
What you're seeing: An action almost always states the name of the character first. And when you see little paragraphs, it usually means an NPC has reacted to something. Between paragraphs, there's about a minute's gap (my configuration), since these are supposed to happen occasionally. It's not perfect, but I still find them pretty funny.
Besides the prompt, the data required is pretty small. Each NPC keeps track of their past actions in a list, the last of which is used in the prompt for their next action.
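The bookkeeping is simple enough to sketch. Here's a minimal illustration of that idea in Python - the class and method names are my own invention for this example, not LlamaTale's actual API:

```python
class NPC:
    """Sketch of LlamaTale-style idle actions: each NPC keeps a list of its
    past actions, and only the most recent one is fed back into the prompt
    that asks the LLM for the next action."""

    def __init__(self, name):
        self.name = name
        self.past_actions = []  # full history, most recent last

    def record_action(self, action):
        self.past_actions.append(action)

    def idle_action_prompt(self, location, items, others):
        # Only the last action goes into the prompt, as described above.
        last = self.past_actions[-1] if self.past_actions else "nothing so far"
        return (
            f"{self.name} is in {location}. "
            f"Items here: {', '.join(items)}. "
            f"Also present: {', '.join(others)}. "
            f"{self.name}'s previous action: {last}. "
            f"Write one short action {self.name} performs now."
        )


npc = NPC("Elid Gald")
npc.record_action("takes a sip of tea")
prompt = npc.idle_action_prompt("The Prancing Llama", ["jug of ale"], ["Shanda"])
print(prompt)
```

The same pattern extends to reactions: feed another NPC's latest action into the prompt instead of (or alongside) the NPC's own last action.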
I believe I'm now entering the end game with this, story and world generation from initial prompts by the player.
Oh, and I created a logo, with Stable Diffusion, of course:
[The Prancing Llama](https://preview.redd.it/hpj2m5p6qnob1.png?width=256&format=png&auto=webp&s=52c84ec70e189d64f2617ece4cd2285de1660ad7)
​ | 2023-09-16T18:13:01 | https://www.reddit.com/r/LocalLLaMA/comments/16ke32e/llamatale_v0120_openai_support_zone_generation/ | neph1010 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ke32e | false | null | t3_16ke32e | /r/LocalLLaMA/comments/16ke32e/llamatale_v0120_openai_support_zone_generation/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'fa8BRwKtC1UEbC2lTUS2Udv1hUzLaJozwH4QL7805ks', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ANu9M3ZaLQXtrttfE0pj1TGA0nJgqmXLtaBZfQAJpoY.jpg?width=108&crop=smart&auto=webp&s=7b55d75533a03ded4595a8e48abab9dc8920bbf3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ANu9M3ZaLQXtrttfE0pj1TGA0nJgqmXLtaBZfQAJpoY.jpg?width=216&crop=smart&auto=webp&s=4cc699922163913ab69ab2d365146de120f27a46', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ANu9M3ZaLQXtrttfE0pj1TGA0nJgqmXLtaBZfQAJpoY.jpg?width=320&crop=smart&auto=webp&s=870c4964031288bd64c16a9a12b132cb2a775464', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ANu9M3ZaLQXtrttfE0pj1TGA0nJgqmXLtaBZfQAJpoY.jpg?width=640&crop=smart&auto=webp&s=ab9edeec1f929846be1c3b9c28ecc288a85ac0cf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ANu9M3ZaLQXtrttfE0pj1TGA0nJgqmXLtaBZfQAJpoY.jpg?width=960&crop=smart&auto=webp&s=fface6e80553205f02b130922ac05e666c15d11b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ANu9M3ZaLQXtrttfE0pj1TGA0nJgqmXLtaBZfQAJpoY.jpg?width=1080&crop=smart&auto=webp&s=cd3e9bc2675d1b26a2faac5f8a919196eec15ac1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ANu9M3ZaLQXtrttfE0pj1TGA0nJgqmXLtaBZfQAJpoY.jpg?auto=webp&s=444c1854077adfcc43143885a805a7d5ee4a3828', 'width': 1200}, 'variants': {}}]} | |
Is it possible to Quantize DeciLM-6? | 6 | [Model card](https://huggingface.co/Deci/DeciLM-6b) | 2023-09-16T16:38:26 | https://www.reddit.com/r/LocalLLaMA/comments/16kbx1i/is_it_possible_to_quantize_decilm6/ | Pineapple_Expressed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kbx1i | false | null | t3_16kbx1i | /r/LocalLLaMA/comments/16kbx1i/is_it_possible_to_quantize_decilm6/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'PeUEHEDeiDpJiVx8uu6FTyh9hxae5iwe1tZAyeglz7g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=108&crop=smart&auto=webp&s=49cb41a341e6c1c3b161812ab717218d772e91cd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=216&crop=smart&auto=webp&s=dbb8235d6fcfff853bd3e959b30a54adbde44d9b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=320&crop=smart&auto=webp&s=c4dd40764881a064c5926990f6ea41624d94a477', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=640&crop=smart&auto=webp&s=8886c341646248afb6294218caee5d1e90e1110b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=960&crop=smart&auto=webp&s=b1ea708e282c36dbafbd29145fc229ce8b40ad7d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=1080&crop=smart&auto=webp&s=fe65411f2b702ff90c0a407beccd213c4f2f8186', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?auto=webp&s=1e84d939794bebdd80f1ec08f8e3decdbf2add19', 'width': 1200}, 'variants': {}}]} |
TinyLlama training to 500B tokens is complete | 102 | 2023-09-16T15:44:12 | https://github.com/jzhang38/TinyLlama | jncraton | github.com | 1970-01-01T00:00:00 | 0 | {} | 16kanph | false | null | t3_16kanph | /r/LocalLLaMA/comments/16kanph/tinyllama_training_to_500b_tokens_is_complete/ | false | false | 102 | {'enabled': False, 'images': [{'id': '6TISjUHuXBn1Ygnnc3Bnk83a4I37KGK4s0Ykjc8Qi6U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8ZxuJh2OfmS6BCCZRIHv6QSfhYvErbhcppJktGbFVLQ.jpg?width=108&crop=smart&auto=webp&s=aa7d8c8bc85179daaae479d6590e60fd1c776607', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8ZxuJh2OfmS6BCCZRIHv6QSfhYvErbhcppJktGbFVLQ.jpg?width=216&crop=smart&auto=webp&s=aa7f7daeeb38012efd427e91544ee94a95bbdc02', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8ZxuJh2OfmS6BCCZRIHv6QSfhYvErbhcppJktGbFVLQ.jpg?width=320&crop=smart&auto=webp&s=1ac7d6f9dcec0c82144f9d221ce9796c83b4aa8a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8ZxuJh2OfmS6BCCZRIHv6QSfhYvErbhcppJktGbFVLQ.jpg?width=640&crop=smart&auto=webp&s=5c7143bb2ed6cecdbb6a3540f8f506e57971fd14', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8ZxuJh2OfmS6BCCZRIHv6QSfhYvErbhcppJktGbFVLQ.jpg?width=960&crop=smart&auto=webp&s=b02c2ee442b467b6dd0c7520ef9ed23d4b476d8d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8ZxuJh2OfmS6BCCZRIHv6QSfhYvErbhcppJktGbFVLQ.jpg?width=1080&crop=smart&auto=webp&s=90a7e9c1bf084c6c710c1ba8cb2c5377013d9d65', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8ZxuJh2OfmS6BCCZRIHv6QSfhYvErbhcppJktGbFVLQ.jpg?auto=webp&s=c55fe17f8aa7a439109c78644c524ba125c52eec', 'width': 1200}, 'variants': {}}]} | ||
Chatgpt's web browsing feature neutered. Gives brief descriptions of contents of pages and nothing more. | 10 | I did not realize how incapable they would make closed models. Insane. | 2023-09-16T15:16:11 | https://www.reddit.com/r/LocalLLaMA/comments/16ka0ho/chatgpts_web_browsing_feature_neutered_gives/ | Basic_Description_56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ka0ho | false | null | t3_16ka0ho | /r/LocalLLaMA/comments/16ka0ho/chatgpts_web_browsing_feature_neutered_gives/ | false | false | self | 10 | null |
Beta test my native macOS Llama app | 13 | Hey ya'll. I've been working on a macOS app that aims to be the easiest way to run llama.cpp on your mac. It includes a 7B model but you can plug in any GGUF that's llama.cpp compatible. It's totally private and doesn't even connect to the internet. On my MacBook (m1 max), the default model responds almost instantly and produces 35-40 tokens/s.
I'm posting because I'd love to find some beta testers. I'm looking for feedback on usability and also to make sure it's compatible across a wide variety of Macs. **If you're interested, comment and I'll DM you a TestFlight invite link.** Thanks!
https://preview.redd.it/ag9i5gvlvmob1.png?width=2560&format=png&auto=webp&s=0cd512f3309c736c5e35102b1d9cdb7064a47512
https://reddit.com/link/16k9yhg/video/io2gg4wlvmob1/player | 2023-09-16T15:13:48 | https://www.reddit.com/r/LocalLLaMA/comments/16k9yhg/beta_test_my_native_macos_llama_app/ | sleeper-2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16k9yhg | false | null | t3_16k9yhg | /r/LocalLLaMA/comments/16k9yhg/beta_test_my_native_macos_llama_app/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': '55pNmRvYHyV0yKvZ_unzxKnG6Bhy5FhoaH0_JA3AV5U', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/diR-hJpKJopFw67_Kv6IiokJ98IMtEVXJ_OqmN1eWm8.jpg?width=108&crop=smart&auto=webp&s=57a2e87d3df210024fe18b4e6e7a61997badbf34', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/diR-hJpKJopFw67_Kv6IiokJ98IMtEVXJ_OqmN1eWm8.jpg?width=216&crop=smart&auto=webp&s=6f7cf850a43441f1434a5d6bab0ef16500d68188', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/diR-hJpKJopFw67_Kv6IiokJ98IMtEVXJ_OqmN1eWm8.jpg?width=320&crop=smart&auto=webp&s=02ea609292383fc4df7a8cdb70b81c6a0ed0a899', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/diR-hJpKJopFw67_Kv6IiokJ98IMtEVXJ_OqmN1eWm8.jpg?width=640&crop=smart&auto=webp&s=352f0c92782069bc5f13df91e03c58c7bae61735', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/diR-hJpKJopFw67_Kv6IiokJ98IMtEVXJ_OqmN1eWm8.jpg?width=960&crop=smart&auto=webp&s=1356b6ccb0cf8eabec55dbd3a25152303d70d47c', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/diR-hJpKJopFw67_Kv6IiokJ98IMtEVXJ_OqmN1eWm8.jpg?auto=webp&s=f8cce9c3134d20881d19dc6e30ea61d3c3833100', 'width': 1024}, 'variants': {}}]} |
Dataset to model processes, how does that work? | 1 | If I download a dataset or several datasets, how to I convert that into the model files so I can ask it questions? Do we always have to start with a pre trained base of some kind, or can we get a chat experience from any sufficiently large dataset because of the transformers library and all that? Just trying to understand! | 2023-09-16T14:59:06 | https://www.reddit.com/r/LocalLLaMA/comments/16k9m2v/dataset_to_model_processes_how_does_that_work/ | Overall-Importance54 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16k9m2v | false | null | t3_16k9m2v | /r/LocalLLaMA/comments/16k9m2v/dataset_to_model_processes_how_does_that_work/ | false | false | self | 1 | null |
Llama 3: Dense Evolution or Expert Revolution? | 63 | What are your predictions about Llama 3? Will it be another dense model (with maybe 300 billion parameters and 6 trillion tokens) or will it be a Switch Transformer (with maybe 8 or 16 experts, like GPT-4 is rumored to be)?
On that note, Meta AI has recently released a paper on Mixture-of-Experts architecture:
[Towards MoE Deployment: Mitigating Inefficiencies in Mixture-of-Expert (MoE) Inference](https://arxiv.org/abs/2303.06182) | 2023-09-16T14:53:43 | https://www.reddit.com/r/LocalLLaMA/comments/16k9hnd/llama_3_dense_evolution_or_expert_revolution/ | DecipheringAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16k9hnd | false | null | t3_16k9hnd | /r/LocalLLaMA/comments/16k9hnd/llama_3_dense_evolution_or_expert_revolution/ | false | false | self | 63 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
How can I fine-tune a LLAMA? | 1 | Hello there
I just started exploring LLMs and found a good one, \`TheBloke/chronos-hermes-13B-GGML\`, but I want to fine-tune it on specific stories (NSFW) and I don't know how I should label the stories.
Does anyone know how to do it?
| 2023-09-16T14:43:12 | https://www.reddit.com/r/LocalLLaMA/comments/16k98rt/how_can_i_finetune_a_llama/ | Mohamd_L | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16k98rt | false | null | t3_16k98rt | /r/LocalLLaMA/comments/16k98rt/how_can_i_finetune_a_llama/ | false | false | self | 1 | null |
Clip of Steve Jobs predicting LLMs in 1985. Sadly over-optimistic about the timeframe. | 27 | | 2023-09-16T14:26:33 | https://twitter.com/scienceisstrat1/status/1702936367871721797 | ambient_temp_xeno | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 16k8v6j | false | {'oembed': {'author_name': 'Science Is Strategic', 'author_url': 'https://twitter.com/scienceisstrat1', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Steve Jobs anticipates Large Language Models <br><br>Cc: <a href="https://twitter.com/erikbryn?ref_src=twsrc%5Etfw">@erikbryn</a> <a href="https://twitter.com/WalterIsaacson?ref_src=twsrc%5Etfw">@WalterIsaacson</a> <a href="https://twitter.com/ylecun?ref_src=twsrc%5Etfw">@ylecun</a> <a href="https://twitter.com/Scobleizer?ref_src=twsrc%5Etfw">@Scobleizer</a> <br><br> <a href="https://t.co/aT0US6iKgy">pic.twitter.com/aT0US6iKgy</a></p>— Science Is Strategic (@scienceisstrat1) <a href="https://twitter.com/scienceisstrat1/status/1702936367871721797?ref_src=twsrc%5Etfw">September 16, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/scienceisstrat1/status/1702936367871721797', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_16k8v6j | /r/LocalLLaMA/comments/16k8v6j/clip_of_steve_jobs_predicting_llms_in_1985_sadly/ | false | false | 27 | {'enabled': False, 'images': [{'id': 'oneMqRzRGif0Nh9-3KBWUrF4EJ78mJz1zUCuQDV0EUc', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/_xX0QDpO0EmuMFTEo7w_R9ySZuKNmQ1yu9GjwggAvVU.jpg?width=108&crop=smart&auto=webp&s=526cae4df055e20d847b51ecbbeab464276e85a2', 'width': 108}], 'source': {'height': 102, 'url': 'https://external-preview.redd.it/_xX0QDpO0EmuMFTEo7w_R9ySZuKNmQ1yu9GjwggAvVU.jpg?auto=webp&s=861f25a0a969dfdff420c919336c4a727e7aaf29', 'width': 140}, 'variants': {}}]} |
Looking for a Translation Model for English to 100+ Languages, Comparable to DeepL or Google, for Local Deployment | 2 | Hello everyone,
I am working on a project where I need to translate text from English into over 100 different languages. The translation quality needs to be comparable to services like DeepL or Google Translate.
Is there a model available that meets these requirements and can be run locally without the need for external APIs? Additionally, does this model support translating HTML source code and WordPress posts?
Python compatibility would be ideal as it’s my primary working environment.
Thanks in advance for any help and guidance.
Best regards,
BaGRoS | 2023-09-16T14:06:25 | https://www.reddit.com/r/LocalLLaMA/comments/16k8f8d/looking_for_a_translation_model_for_english_to/ | Vivid_Confidence3212 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16k8f8d | false | null | t3_16k8f8d | /r/LocalLLaMA/comments/16k8f8d/looking_for_a_translation_model_for_english_to/ | false | false | self | 2 | null |
LoRA on Linear layers? | 2 | ​
https://preview.redd.it/qq80hyvrjmob1.png?width=827&format=png&auto=webp&s=c32df00c9747c944dc96e64238bd7ae7d8f49a4b
I came across this \^ recently and wanted to know: is it possible to apply it to some of the MLP layers as well?
TIA | 2023-09-16T14:05:56 | https://www.reddit.com/r/LocalLLaMA/comments/16k8eu5/lora_on_linear_layers/ | Dry_Long3157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16k8eu5 | false | null | t3_16k8eu5 | /r/LocalLLaMA/comments/16k8eu5/lora_on_linear_layers/ | false | false | 2 | null | |
Completely Local Autonomous Agent? | 6 | Is there an Autonomous Agent that will connect to and use a local language model that also does not require some remotely-hosted resource? So far, they all either require OpenAI credentials (the vast majority), or a cloud-hosted vector database, or some other snag. Running the models I've figured out, but I want \*everything\* on my machine. Can it be done? | 2023-09-16T13:14:55 | https://www.reddit.com/r/LocalLLaMA/comments/16k7bbh/completely_local_autonomous_agent/ | Seclusion72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16k7bbh | false | null | t3_16k7bbh | /r/LocalLLaMA/comments/16k7bbh/completely_local_autonomous_agent/ | false | false | self | 6 | null |
What model do you use with a Nvidia 3090/4090 or equivalent AMD? | 17 | Here's my take, and I'd like yours as well !
I'm running on KoboldCPP 1.43 experimental (updated to Llama CPP b1216 with Johannes Gaessler's MMQ fixes) on an RTX 3090, with full offload:
I sometimes use CodeLlama 34b Samantha 1.11 Q4\_K\_S with 16384 context length (rope base frequency 1,000,000; it's not optimal, you have to slide closer to 100,000 at short context, but I'm bored) to train context obedience with the huge Samantha personality penalty on long scenarios, understand the mechanics and dynamics between your characters, the "Assistant", the "AI", and their respective traits, and uninhibit Samantha into an NSFW character speaking and doing all kinds of nasty stuff for as long as possible before her censorship starts to reemerge. I then correct the prompt with my findings & restart the conversations/scenarios.
I then adapt the prompts I composed on Samantha to further uninhibit other models and reinforce their context obedience.
I'm currently using Spicyboros 2.2 c34b Q4\_K\_S with 16k context as my daily model, as both a work assistant and a role-play model. I've been a fan of these models since day one, with 1.4.1 as my reference (the LXCTX version on Hugging Face, to be precise, but in Q3\_K\_M with 5376 ctx).
I'm gonna test Synthia 1.2 c34b with the same quant and parameters.
CL2 34b is still quite new for modders, and I feel like it's, for now, dumber than L1 33B with a basic prompt. On the other hand, as the story progresses, the characters start to demonstrate more depth: I created a character to test long context, presenting him as a "motherfucker", and the guy argued with an impersonation of my own character for 15 thousand tokens about human nature, without his answers needing much regeneration (except to keep him from leaving the conversation). Hard to achieve that on Llama 1 33b or even 2-13b.
I think CL2 34b will be the SOTA model for single big GPU owners once modders pay more attention to it and master the specificities of CodeLlama compared to Llama 1 & 2. After all, it was trained on the base of Llama 2, and the 5 billion tokens added might not have erased the original 2 billion, but instead refined their precision while adding the rigor of a coding model.
The KV\_Q8 cache likely coming in the next weeks on LlamaCPP (and maybe before that on KoboldCPP if LostRuins decides to take the dive) could bring us close to the 32k tokens on such model for one big GPU. Thrilling, isn't it? | 2023-09-16T12:58:25 | https://www.reddit.com/r/LocalLLaMA/comments/16k6yxs/what_model_do_you_use_with_a_nvidia_30904090_or/ | Nexesenex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16k6yxs | false | null | t3_16k6yxs | /r/LocalLLaMA/comments/16k6yxs/what_model_do_you_use_with_a_nvidia_30904090_or/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'mGdWOV4LkgZVyU7H5AkWDWax7uyPBUEh9K3WJ9UGC_k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mxlwJizRq3MaDDimzkCfr60BSum8NngNJtJ4FjBqXTk.jpg?width=108&crop=smart&auto=webp&s=d4291c29f2245e8b5bdfda8f8b08d2845932ba00', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mxlwJizRq3MaDDimzkCfr60BSum8NngNJtJ4FjBqXTk.jpg?width=216&crop=smart&auto=webp&s=1adb19ba44055aeace7af98c27683b922bba312f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mxlwJizRq3MaDDimzkCfr60BSum8NngNJtJ4FjBqXTk.jpg?width=320&crop=smart&auto=webp&s=f7cbbb325a18de451fabb41b7d989dd8e561d19a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mxlwJizRq3MaDDimzkCfr60BSum8NngNJtJ4FjBqXTk.jpg?width=640&crop=smart&auto=webp&s=ff15b8936706928569d099476fe146050fbb3295', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mxlwJizRq3MaDDimzkCfr60BSum8NngNJtJ4FjBqXTk.jpg?width=960&crop=smart&auto=webp&s=dfe9fbc0995d091ff313fdf8376afb8ba0698030', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mxlwJizRq3MaDDimzkCfr60BSum8NngNJtJ4FjBqXTk.jpg?width=1080&crop=smart&auto=webp&s=d703d37495f6a7684d2a9ed30ab887b8ea3ac42d', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/mxlwJizRq3MaDDimzkCfr60BSum8NngNJtJ4FjBqXTk.jpg?auto=webp&s=9ab62c9fdeab992ef7af1c94b5766a6dd9b32e05', 'width': 1200}, 'variants': {}}]} |
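An aside on the KV_Q8 hope above: the KV cache footprint is roughly 2 (K and V) × layers × KV heads × head dim × context length × bytes per element, so dropping from fp16 to 8-bit roughly doubles the context that fits in the same VRAM. A back-of-the-envelope sketch (the 48-layer / 8-KV-head / 128-dim shape is my assumption for a CodeLlama-34B-class GQA model):

```python
def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                 ctx_tokens: int, bytes_per_elem: float) -> float:
    """GiB held by the K and V caches for one sequence."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_tokens * bytes_per_elem / 2**30

# Assumed CodeLlama-34B-ish shape: 48 layers, 8 KV heads (GQA), head_dim 128.
for label, bytes_per_elem in [("fp16", 2), ("q8", 1)]:
    print(f"32k ctx @ {label}: {kv_cache_gib(48, 8, 128, 32768, bytes_per_elem):.1f} GiB")
```

At those assumed numbers, 8-bit KV roughly halves a ~6 GiB fp16 cache at 32k context, which is the saving the post is excited about.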
pnmeka/langchain_RAG: Using langchain module to generate RAG prompt for open AI | 9 | 2023-09-16T12:25:58 | https://github.com/pnmeka/langchain_RAG | TestPilot1980 | github.com | 1970-01-01T00:00:00 | 0 | {} | 16k6aua | false | null | t3_16k6aua | /r/LocalLLaMA/comments/16k6aua/pnmekalangchain_rag_using_langchain_module_to/ | false | false | 9 | {'enabled': False, 'images': [{'id': '_NCaAb0ugGxfWB91SEbGifHwYdS5GCLxMvK308_fjBA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wNgWZmdkqnNDHqsPyXaaa5qkSEPhJuS-0AzLqLemKwI.jpg?width=108&crop=smart&auto=webp&s=3f3f9f87bb4b5054eac930df9b45c968f95241a1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wNgWZmdkqnNDHqsPyXaaa5qkSEPhJuS-0AzLqLemKwI.jpg?width=216&crop=smart&auto=webp&s=6765645dd424fcf3b5f78f3ed1a4943bcdff7378', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wNgWZmdkqnNDHqsPyXaaa5qkSEPhJuS-0AzLqLemKwI.jpg?width=320&crop=smart&auto=webp&s=7e9e0534f108d1c8490a4cc82623d1adf3149f5e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wNgWZmdkqnNDHqsPyXaaa5qkSEPhJuS-0AzLqLemKwI.jpg?width=640&crop=smart&auto=webp&s=0b10a7a19102e61d44a92bd2d7f66fa2eb823f6f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wNgWZmdkqnNDHqsPyXaaa5qkSEPhJuS-0AzLqLemKwI.jpg?width=960&crop=smart&auto=webp&s=e18ede5b60b81100cfcfa93d9fd5613193603442', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wNgWZmdkqnNDHqsPyXaaa5qkSEPhJuS-0AzLqLemKwI.jpg?width=1080&crop=smart&auto=webp&s=8923355743d6f08ea51d867e0924d56016085d32', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wNgWZmdkqnNDHqsPyXaaa5qkSEPhJuS-0AzLqLemKwI.jpg?auto=webp&s=0ffc7d175ffca4072091bd850324336403545780', 'width': 1200}, 'variants': {}}]} | ||
KoboldCPP now has (experimental) RX 6700XT/gfx1031 support | 22 | [https://github.com/YellowRoseCx/koboldcpp-rocm/releases/tag/v1.43.2-ROCm](https://github.com/YellowRoseCx/koboldcpp-rocm/releases/tag/v1.43.2-ROCm)
Good news for us poor left-out gfx1031 owners. I'm curious if you're having any success with it. I sometimes get a good output with the very first prompt. If I try to continue it, it's pure gibberish. Oh boy is it *fast* gibberish, though! | 2023-09-16T10:52:36 | https://www.reddit.com/r/LocalLLaMA/comments/16k4hju/koboldcpp_now_has_experimental_rx_6700xtgfx1031/ | Susp-icious_-31User | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16k4hju | false | null | t3_16k4hju | /r/LocalLLaMA/comments/16k4hju/koboldcpp_now_has_experimental_rx_6700xtgfx1031/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'MqmTjkqRHQOmxK3tvvapSmyB5Vr_gLpITyTS710reE0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MTCZ1qmQo8slMrNlRZOeIUpYoMUF0_hGerETY80A-q8.jpg?width=108&crop=smart&auto=webp&s=19b2d954ec1b353d5cfebc162396ad576449aebc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MTCZ1qmQo8slMrNlRZOeIUpYoMUF0_hGerETY80A-q8.jpg?width=216&crop=smart&auto=webp&s=e8968da3aec01b13122bf7e71cbc2f6508bbe64a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MTCZ1qmQo8slMrNlRZOeIUpYoMUF0_hGerETY80A-q8.jpg?width=320&crop=smart&auto=webp&s=b1d4e54bdb22f51b88d79bb1b47b79804964b130', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MTCZ1qmQo8slMrNlRZOeIUpYoMUF0_hGerETY80A-q8.jpg?width=640&crop=smart&auto=webp&s=db9109aa907b2d34e759895f85e5a25ed9e4d6df', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MTCZ1qmQo8slMrNlRZOeIUpYoMUF0_hGerETY80A-q8.jpg?width=960&crop=smart&auto=webp&s=22770b74c0498044f8e0bb794cb03a8907462876', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MTCZ1qmQo8slMrNlRZOeIUpYoMUF0_hGerETY80A-q8.jpg?width=1080&crop=smart&auto=webp&s=505a5b64728e49468c17eb0cb1ee91abf3ffb7b3', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/MTCZ1qmQo8slMrNlRZOeIUpYoMUF0_hGerETY80A-q8.jpg?auto=webp&s=3fe1bef42105c753579b8d352239371c669a7788', 'width': 1200}, 'variants': {}}]} |
Horde-Client v1.0.2 is out today! | 17 | A few days back, I shared my project [horde-client](https://pypi.org/project/horde-client/). For those who missed the post, this is a Python client library for the KoboldAI project that lets you remotely interact with crowdsourced/private LLM services.
I got some great feedback on the last post and have incorporated the majority of it in the new release.
So today, I am announcing v1.0.2 version of project with cool new features:
1. Horde-Client now supports [LangChain](https://horde-client.readthedocs.io/en/latest/02_langchain.html) integration. You can easily swap out LLMs from your LangChain pipeline and use Horde-Client's LLM.
2. Official Documentation is now available at [https://horde-client.readthedocs.io/](https://horde-client.readthedocs.io/)
3. [Async](https://horde-client.readthedocs.io/en/latest/03_asyncclient.html) support is now available for Horde-Client.
You can head over to [Quickstart](https://horde-client.readthedocs.io/en/latest/01_quickstart.html) to start using Horde-Client for your projects.
Feel free to share any feedback; it will help improve the project for the community. | 2023-09-16T07:49:34 | https://www.reddit.com/r/LocalLLaMA/comments/16k1j31/hordeclient_v102_is_out_today/ | AnonymousD3vil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16k1j31 | false | null | t3_16k1j31 | /r/LocalLLaMA/comments/16k1j31/hordeclient_v102_is_out_today/ | false | false | self | 17 | null |
Running Llama2 on Android | 9 | Can anyone tell me how to run a Llama 2 model (or any other model) on Android devices?
Hopefully an open-source way.
BTW, I just saw an interesting post about running LLMs on Vulkan; maybe that would be an interesting route too. | 2023-09-16T07:24:53 | https://www.reddit.com/r/LocalLLaMA/comments/16k14a2/running_llama2_on_android/ | Deep-View-2411 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16k14a2 | false | null | t3_16k14a2 | /r/LocalLLaMA/comments/16k14a2/running_llama2_on_android/ | false | false | self | 9 | null |
Finetune in bf16 or fp16? | 1 | [removed] | 2023-09-16T06:56:44 | https://www.reddit.com/r/LocalLLaMA/comments/16k0ntu/finetune_in_bf16_or_fp16/ | gptzerozero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16k0ntu | false | null | t3_16k0ntu | /r/LocalLLaMA/comments/16k0ntu/finetune_in_bf16_or_fp16/ | false | false | self | 1 | null |
I wish for these four models | 0 | Model 1. General writing and knowledge; good at professional and personal writing, and at replies to internet comments. Knows how to troubleshoot electronics, computers and networks, cars, and women.
Model 2. Psychology/therapist and personal-growth guide (guru), and friendly. Limited only by hard drive capacity; infinite storage of my personal life. It knows everything about me. (Easy to back up and reinstall.)
Model 3. Alternative medicine (herbal, essential oil etc.) and mainstream medical knowledge to be able to find the correct and cheapest path to good health. I prefer herbal.
Model 4. Fantasy story writer, song/poem lyric writer, the best comedy writer that ever existed. Mamma jokes so funny, you blow milk out your nose. Rodney Dangerfield rib-shots better than the man himself.
Maybe:
\+Master Video games guide (All games). Master strategist.
Number 1 exists already. Number 2 might exist but I would need a YouTube video to show me how. Number 3 has mainstream medical, but no herbal. Number 4 seems hard because of comedy. I can't find a model yet that is great at comedy.
In the beginning, when Meta first released LLaMA 1, I became preoccupied with finding the best settings and testing its knowledge and IQ, and those of every model since. I forgot about my needs as a user. I don't need the smartest model, just certain specialties. | 2023-09-16T05:35:11 | https://www.reddit.com/r/LocalLLaMA/comments/16jzbwu/i_wish_for_these_four_models/ | MinimumPC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jzbwu | false | null | t3_16jzbwu | /r/LocalLLaMA/comments/16jzbwu/i_wish_for_these_four_models/ | false | false | self | 0 | null |
How much does the calibration dataset affect the results when quantizing the model? (exllamav2) | 1 | [removed] | 2023-09-16T05:05:48 | https://www.reddit.com/r/LocalLLaMA/comments/16jyu8w/how_much_does_the_calibration_dataset_affect_the/ | Eigeen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jyu8w | false | null | t3_16jyu8w | /r/LocalLLaMA/comments/16jyu8w/how_much_does_the_calibration_dataset_affect_the/ | false | false | self | 1 | null |
AMD for AI | 35 | So the 7900 XTX is at a pretty unbeatable price with 24 GB of VRAM, and I am also a gamer and game developer, so a single good GPU is better than two 3090s, for example. But I was wondering how it is for AI applications like oobabooga (or however it is spelt) and SadTalker or Bark. These use CUDA, and CUDA is Nvidia technology, so will I be forced to run through a translation layer and settle for lower performance? | 2023-09-16T03:54:19 | https://www.reddit.com/r/LocalLLaMA/comments/16jxl44/amd_for_ai/ | SimRacer101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jxl44 | false | null | t3_16jxl44 | /r/LocalLLaMA/comments/16jxl44/amd_for_ai/ | false | false | self | 35 | null |
Running Lama 70B on GameBoy | 0 | Hello, everyone! I've recently become interested in experimenting with LLMs and their inference capabilities. I've come across information suggesting they can be run on a variety of devices. Does anyone have experience or advice on how to get it set up on a GameBoy? Any guidance would be greatly appreciated! | 2023-09-16T02:06:59 | https://www.reddit.com/r/LocalLLaMA/comments/16jvl7s/running_lama_70b_on_gameboy/ | Wrong_User_Logged | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jvl7s | false | null | t3_16jvl7s | /r/LocalLLaMA/comments/16jvl7s/running_lama_70b_on_gameboy/ | false | false | default | 0 | null |
Pinned post for .cpp implementation and | 1 | I am interested in learning how to port models like Llama 1/2 to their .cpp versions. I see there are posts pinned for serving on .cpp and quantization, but no resources for porting them if I made a change to the architecture and trained a new model.
Would love some resources. I have been working with Python/Torch for 2-3 years and have basic C/C++ knowledge; I'd love to know the prerequisites and how-tos. It would be great if we could pin those as well. | 2023-09-16T01:31:25 | https://www.reddit.com/r/LocalLLaMA/comments/16juw6u/pinned_post_for_cpp_implementation_and/ | BomsDrag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16juw6u | false | null | t3_16juw6u | /r/LocalLLaMA/comments/16juw6u/pinned_post_for_cpp_implementation_and/ | false | false | self | 1 | null |
What hardware do I need for 70B Llama? | 0 | [removed] | 2023-09-16T01:25:54 | https://www.reddit.com/r/LocalLLaMA/comments/16jus38/what_hardware_do_i_need_for_70b_llama/ | Infinite100p | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jus38 | false | null | t3_16jus38 | /r/LocalLLaMA/comments/16jus38/what_hardware_do_i_need_for_70b_llama/ | false | false | default | 0 | null |
Machine Learning systems notes | 17 | Hey folks,
I've written down some machine learning systems study notes, from the perspective of a software/systems engineer: [https://9600.dev/posts/machine-learning-developer-notes/](https://9600.dev/posts/machine-learning-developer-notes/)
I hope the community might find them useful.
It contains a high level walk-through of GPU hardware, GPU programming, super clusters, networking, CUDA, training, inference, size and scope of these large models, a bit of ML math, and a tour of libraries like DeepSpeed.
I'll keep hacking away on the TODOs in the next week or so.
\[and a big thank-you to this community while I'm at it -- watching a thousand LLMs bloom, and dozens of C++ inference frameworks blossom, has reinvigorated my love of computing\]
​ | 2023-09-16T00:04:41 | https://www.reddit.com/r/LocalLLaMA/comments/16jt4eo/machine_learning_systems_notes/ | 9600kps | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jt4eo | false | null | t3_16jt4eo | /r/LocalLLaMA/comments/16jt4eo/machine_learning_systems_notes/ | false | false | self | 17 | null |
How do I get my LLM model to accept a large amount of input? | 0 | Using LMstudio | 2023-09-16T00:01:07 | https://www.reddit.com/r/LocalLLaMA/comments/16jt1fk/how_do_i_get_my_llm_model_to_accept_a_large/ | hophophop1233 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jt1fk | false | null | t3_16jt1fk | /r/LocalLLaMA/comments/16jt1fk/how_do_i_get_my_llm_model_to_accept_a_large/ | false | false | self | 0 | null |
Why do some GGUFs set rope scale base to 1,000,000? | 23 | I've noticed this on a couple of ggufs, like for zarablend 7b. I load the gguf in oobabooga and it instantly sets the rope scale at 1,000,000, with alpha and compress at 1. The first time I thought it was a mistake, but a couple of others did it over time and I started to wonder if it was intentional.
Is it just a mistake in the particular gguf? Or is there value in having 1,000,000 rope base and 1 for alpha and compress? | 2023-09-15T23:47:42 | https://www.reddit.com/r/LocalLLaMA/comments/16jsr9t/why_do_some_ggufs_set_rope_scale_base_to_1000000/ | LearningSomeCode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jsr9t | false | null | t3_16jsr9t | /r/LocalLLaMA/comments/16jsr9t/why_do_some_ggufs_set_rope_scale_base_to_1000000/ | false | false | self | 23 | null |
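For background on what that number changes: in rotary position embeddings, pair i of a head rotates at frequency base^(-2i/d), so a larger base slows every rotation and pushes out the position at which angles wrap, which is why some long-context finetunes intentionally ship with a RoPE base of 1,000,000. A quick illustrative sketch (not tied to any particular GGUF):

```python
import math

def rope_freqs(base: float, head_dim: int) -> list[float]:
    """RoPE rotation frequency (radians per position) for each dim pair."""
    return [base ** (-2.0 * i / head_dim) for i in range(head_dim // 2)]

def wavelength_tokens(freq: float) -> float:
    """Token positions needed for one full 2*pi rotation at this frequency."""
    return 2.0 * math.pi / freq

# Llama's default base is 10,000; some long-context finetunes raise it.
slowest_default = wavelength_tokens(rope_freqs(10_000.0, 128)[-1])
slowest_big     = wavelength_tokens(rope_freqs(1_000_000.0, 128)[-1])
print(f"slowest pair wraps after ~{slowest_default:,.0f} vs ~{slowest_big:,.0f} tokens")
```

So a base of 1,000,000 on a model that wasn't trained for it can still hurt quality, but on a model finetuned that way it's intentional rather than a quantizer mistake.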
Best text to speech out there? | 14 | Looking for voice cloning, text-to-speech, different voices, etc. Do we have anything like that that is GUI-based? | 2023-09-15T22:46:12 | https://www.reddit.com/r/LocalLLaMA/comments/16jrcek/best_text_to_speech_out_there/ | rorowhat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jrcek | false | null | t3_16jrcek | /r/LocalLLaMA/comments/16jrcek/best_text_to_speech_out_there/ | false | false | self | 14 | null |
How to consolidate .distcp file shards? | 2 | I was training an LLM using axolotl with FSDP enabled (Llama-2 architecture). The model was saved as 6 separate .distcp shards. How can I consolidate those shards into a single .bin file for inference? My huggingface model link is here: [https://huggingface.co/jerryjalapeno/VH\_1.7B\_1/tree/main](https://huggingface.co/jerryjalapeno/VH_1.7B_1/tree/main). | 2023-09-15T22:32:43 | https://www.reddit.com/r/LocalLLaMA/comments/16jr0wc/how_to_consolidate_distcp_file_shards/ | ZealousidealBlock330 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jr0wc | false | null | t3_16jr0wc | /r/LocalLLaMA/comments/16jr0wc/how_to_consolidate_distcp_file_shards/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '3QFjRRtH8FSEV92-NOJrLf98it3bVHPkc6m-xqH3uRQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Ge8UTLWJMl-ShaA7itBH9CB_ZLTJA6q4wParmXC-Xm0.jpg?width=108&crop=smart&auto=webp&s=c3781f1c707ae29d8273dc439398687b328becd0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Ge8UTLWJMl-ShaA7itBH9CB_ZLTJA6q4wParmXC-Xm0.jpg?width=216&crop=smart&auto=webp&s=2f666ce20229cc7c4031b3931dcd8f20592b20bc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Ge8UTLWJMl-ShaA7itBH9CB_ZLTJA6q4wParmXC-Xm0.jpg?width=320&crop=smart&auto=webp&s=24c67c78cc743a27582e22aa09957f63bf5e7c96', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Ge8UTLWJMl-ShaA7itBH9CB_ZLTJA6q4wParmXC-Xm0.jpg?width=640&crop=smart&auto=webp&s=9966921a3c50d4e4519e0fc2aea7af046f416b95', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Ge8UTLWJMl-ShaA7itBH9CB_ZLTJA6q4wParmXC-Xm0.jpg?width=960&crop=smart&auto=webp&s=582b0e855bbd8f3aa406445934455fb5e4c3e64f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Ge8UTLWJMl-ShaA7itBH9CB_ZLTJA6q4wParmXC-Xm0.jpg?width=1080&crop=smart&auto=webp&s=42d7fc0259631e7de33a18eebefbdae0a1036151', 'width': 
1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Ge8UTLWJMl-ShaA7itBH9CB_ZLTJA6q4wParmXC-Xm0.jpg?auto=webp&s=c008837a6dd62e92a138d8033c8417d004498193', 'width': 1200}, 'variants': {}}]} |
Is it possible to get structured output from LLAMA2 70B | 3 | I'm running LLAMA2 70B locally on my MacBook.
I would like to get CSV and JSON output for certain prompts the way I can with GPT-4 APIs.
I'm currently using [thebloke/llama-2-70b-orca-200k.Q5\_K\_M.gguf](https://huggingface.co/TheBloke/Llama-2-70B-Orca-200k-GGUF) via the [llama.cpp](https://github.com/ggerganov/llama.cpp) Python [server](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md). I use this because I can simply replace the `openai.api_base` in my scripts and it mostly works.
I'm providing system and user prompts like this:

\`\`\`
messages = [
    {"role": "system", "content": """
You are a helpful assistant that writes outlines for blog articles by supplying subtopics.
You only answer the questions; you do not address the user.
You return all output as comma-separated values, or CSV.
A CSV consists of a line of words separated by commas.
Here is an example of a CSV: how to make wine, crushing grapes, fermenting the wine, pressing the wine, aging wine."""},
    {"role": "user", "content": "Provide me a list of subtopics for an article about dog training."}
]
\`\`\`
I get output like this:
Dog Training Basics 1. Choosing the Right Breed 2. Housebreaking and Crate Training 3. Obedience Training Basics (Sit, Stay, Come) 4. Socialization and Puppy Classes 5. Teaching Your Dog Tricks
It's very easy to get GPT-4 to return CSV or even JSON in a certain format if you provide an example. Any ideas how to do this using llama2? | 2023-09-15T21:01:53 | https://www.reddit.com/r/LocalLLaMA/comments/16jor5o/is_it_possible_to_get_structured_output_from/ | spyderman4g63 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jor5o | false | null | t3_16jor5o | /r/LocalLLaMA/comments/16jor5o/is_it_possible_to_get_structured_output_from/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'nVhAggG3eCTNXmhRf8FwwKFAu3bEJL7299fy4oQcYek', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mkGIaSbeeJOpyEXy9xxcBAO-a1fVJVujZXNof2gVDzU.jpg?width=108&crop=smart&auto=webp&s=6805000cda30334e9adaab628e517ac4d933f7c2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mkGIaSbeeJOpyEXy9xxcBAO-a1fVJVujZXNof2gVDzU.jpg?width=216&crop=smart&auto=webp&s=b86148a0d84f307cfb5abe3a50fc5c4915cbc54a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mkGIaSbeeJOpyEXy9xxcBAO-a1fVJVujZXNof2gVDzU.jpg?width=320&crop=smart&auto=webp&s=6bd7bea82884d70ead1c4c7f061a43931bd4aedc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mkGIaSbeeJOpyEXy9xxcBAO-a1fVJVujZXNof2gVDzU.jpg?width=640&crop=smart&auto=webp&s=014fc75288407a9c5893062b56322aedbb31405b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mkGIaSbeeJOpyEXy9xxcBAO-a1fVJVujZXNof2gVDzU.jpg?width=960&crop=smart&auto=webp&s=5bc97d24c364ee37f20da822818d758ae37fb162', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mkGIaSbeeJOpyEXy9xxcBAO-a1fVJVujZXNof2gVDzU.jpg?width=1080&crop=smart&auto=webp&s=49879dc3030747daad0561ac0a0e09c7312a8e66', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mkGIaSbeeJOpyEXy9xxcBAO-a1fVJVujZXNof2gVDzU.jpg?auto=webp&s=97c818d7e5dbb16b0cd206e5a57a7928c3d1f251', 'width': 1200}, 'variants': {}}]} |
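One pragmatic fallback for replies like the numbered list above is to normalize whatever the local model returns into a clean list in your own code, rather than relying on the model to emit strict CSV. A hypothetical sketch (the function and regex are mine, not part of llama.cpp or the OpenAI client):

```python
import re

def to_items(reply: str) -> list[str]:
    """Split a model reply into items whether it came back as CSV,
    a numbered list, or bullets, then strip stray punctuation."""
    # Prefer commas only if the reply actually looks comma-separated.
    if reply.count(",") >= 2 and not re.search(r"\d+\.", reply):
        parts = reply.split(",")
    else:
        # Fall back to splitting on "1." "2." ... or bullet markers.
        parts = re.split(r"\s*(?:\d+\.|[-*])\s*", reply)
    return [p.strip(" .\n") for p in parts if p.strip(" .\n")]

reply = ("Dog Training Basics 1. Choosing the Right Breed "
         "2. Housebreaking and Crate Training 3. Obedience Training Basics (Sit, Stay, Come) "
         "4. Socialization and Puppy Classes 5. Teaching Your Dog Tricks")
print(to_items(reply))
```

This keeps the downstream script working even when a smaller model half-follows the format instruction.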
From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning | 5 | [https://arxiv.org/abs/2308.12032](https://arxiv.org/abs/2308.12032)
[https://github.com/MingLiiii/Cherry\_LLM](https://github.com/MingLiiii/Cherry_LLM)
​ | 2023-09-15T20:49:34 | https://www.reddit.com/r/LocalLLaMA/comments/16jog13/from_quantity_to_quality_boosting_llm_performance/ | MingLiiii | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jog13 | false | null | t3_16jog13 | /r/LocalLLaMA/comments/16jog13/from_quantity_to_quality_boosting_llm_performance/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
DeciLM-6B | 55 | 2023-09-15T19:44:18 | https://deci.ai/blog/decilm-15-times-faster-than-llama2-nas-generated-llm-with-variable-gqa/ | Acrobatic-Site2065 | deci.ai | 1970-01-01T00:00:00 | 0 | {} | 16jmti2 | false | null | t3_16jmti2 | /r/LocalLLaMA/comments/16jmti2/decilm6b/ | false | false | 55 | {'enabled': False, 'images': [{'id': '8JasBexDQLW0G7y4n6ThMQH77AmFW5N6s2HUrariAC4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=108&crop=smart&auto=webp&s=b113841b47c7b8885f1049233e7c226d00918b12', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=216&crop=smart&auto=webp&s=518023000b4bb21c4d7300aa85ad741c71e5b19a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=320&crop=smart&auto=webp&s=a009a31f7cfa546e11a27cb2e811512059c60af9', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=640&crop=smart&auto=webp&s=1c3e9b5a02863c87a4274319a5fd40bd805de6ca', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=960&crop=smart&auto=webp&s=b4eed2019a94ea90e19eb7f469a9ec5ce6ed6109', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?width=1080&crop=smart&auto=webp&s=a9e7a27b8798e891d82b976fbd58a8cc554542fc', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/jsvkz-6SkJ37uBSqTJV96vR0sPDSydNLnAdEv43nYhA.jpg?auto=webp&s=4c9e4466ddbdac9721c98ee14e298eb9cd9b1dc7', 'width': 1920}, 'variants': {}}]} | ||
I'm going to buy M2 Mac Pro to run AI models on H100. | 0 | Yes, call me stupid but I really want to do it, maybe at some point Apple will finally support NVidia cards on their pricey PCIe slots. lol
That should be technically possible; it's just a software issue. | 2023-09-15T18:58:48 | https://www.reddit.com/r/LocalLLaMA/comments/16jlooq/im_going_to_buy_m2_mac_pro_to_run_ai_models_on/ | Wrong_User_Logged | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jlooq | false | null | t3_16jlooq | /r/LocalLLaMA/comments/16jlooq/im_going_to_buy_m2_mac_pro_to_run_ai_models_on/ | false | false | self | 0 | null |
Lip Sync API Service? | 4 | I am using [SadTalker](https://github.com/OpenTalker/SadTalker) to create a lipsync of a [still image](https://satoshi.report/face_35.png) with an [audio file](https://satoshi.report/IXPRPRZXWVJZ.mp3). The still image is from Stable Diffusion and the audio is from ChatGPT and then AWS Polly for the voice synthesis. My problem is that even though I like the results it takes one and a half minutes to generate this [video](https://satoshi.report/35b.mp4). If I use the [enhancer](https://satoshi.report/35.mp4) it is more like five minutes. I am using a A10 NVIDIA GPU.
Does anyone have any suggestions on how to speed this up? Or perhaps there is a commercial service, with an API, that does this already? | 2023-09-15T18:44:42 | https://www.reddit.com/r/LocalLLaMA/comments/16jlc7e/lip_sync_api_service/ | SatoshiReport | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jlc7e | false | null | t3_16jlc7e | /r/LocalLLaMA/comments/16jlc7e/lip_sync_api_service/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'B9PY__Y0q5avO_xdhu30nJudoy_17oHTTvaUvyGll88', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/j4qkkwJ2o5q2jjB1fUbHgIxLHNvjJu4gqKOKeGJ4etM.png?width=108&crop=smart&auto=webp&s=02d0811e64b4c2bc8120519b3fda7b7b6ed31548', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/j4qkkwJ2o5q2jjB1fUbHgIxLHNvjJu4gqKOKeGJ4etM.png?width=216&crop=smart&auto=webp&s=73c3e370c366490b3a06e3553dd8c432cb45f587', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/j4qkkwJ2o5q2jjB1fUbHgIxLHNvjJu4gqKOKeGJ4etM.png?width=320&crop=smart&auto=webp&s=d397464c6fd64b86623ee13f1bb1988fdc68ee14', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/j4qkkwJ2o5q2jjB1fUbHgIxLHNvjJu4gqKOKeGJ4etM.png?auto=webp&s=a3d84214850ee15b496367e70a20f176ae75c804', 'width': 512}, 'variants': {}}]} |
Which VM instance should I start to run Llama 70B? Which would be cost-efficient? What RAM? GPU or CPU? How many CPUs? | 10 | ​
https://preview.redd.it/tr5n474csgob1.png?width=725&format=png&auto=webp&s=b3f3d497345b15667d3e1ac73481adf1fe8bc915
https://preview.redd.it/y0enzi4gsgob1.png?width=740&format=png&auto=webp&s=5832ee21b361e171608468359f6bcadf4528b149 | 2023-09-15T18:43:25 | https://www.reddit.com/r/LocalLLaMA/comments/16jlb2j/which_vm_instance_can_i_start_to_run_llama_70_b/ | yashwatwani28 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jlb2j | false | null | t3_16jlb2j | /r/LocalLLaMA/comments/16jlb2j/which_vm_instance_can_i_start_to_run_llama_70_b/ | false | false | 10 | null | |
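As a rough sizing rule before picking an instance: the weights alone need about params × bits ÷ 8 bytes, plus headroom for the KV cache and activations. A back-of-the-envelope helper (a sketch only; real usage depends on context length and runtime):

```python
def weight_gib(n_params: float, bits_per_weight: float) -> float:
    """GiB needed just to hold the weights at a given quantization."""
    return n_params * bits_per_weight / 8 / 2**30

# 70B parameters at common precisions (weights only, no KV cache/overhead):
for bits, label in [(16, "fp16"), (8, "q8"), (4, "q4")]:
    print(f"{label}: ~{weight_gib(70e9, bits):.0f} GiB")
```

The practical upshot: a 4-bit 70B quant wants a machine with at least ~40 GB of free VRAM (or unified/CPU RAM for GGUF), while fp16 needs multi-GPU territory.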
Agents: An Open-source Framework for Autonomous Language Agents - AIWaves Inc 2023 | 37 | I hope this paper is also interesting to this community!
Paper: [https://arxiv.org/abs/2309.07870](https://arxiv.org/abs/2309.07870)
Github: [https://github.com/aiwaves-cn/agents](https://github.com/aiwaves-cn/agents)
Abstract:
>Recent advances on large language models (LLMs) enable researchers and developers to build autonomous language agents that can automatically solve various tasks and **interact with environments, humans, and other agents** using natural language interfaces. **We consider language agents as a promising direction towards artificial general intelligence** and release Agents, an **open-source library** with the goal of opening up these advances to a wider non-specialist audience. Agents is carefully engineered to support important **features including planning, memory, tool usage, multi-agent communication, and fine-grained symbolic control.** Agents is **user-friendly** as it **enables non-specialists** to build, customize, test, tune, and deploy state-of-the-art **autonomous language agents without much coding**. The **library** is also **research-friendly as its modularized design** makes it **easily extensible for researchers.**
https://preview.redd.it/ne8fsj05rgob1.jpg?width=1131&format=pjpg&auto=webp&s=076a3551bddb817351d9865809923a6bdf840cb1
https://preview.redd.it/u4x4hm05rgob1.jpg?width=1656&format=pjpg&auto=webp&s=2ca813790719b1f6f285e67ca92834e02d12c40c
​ | 2023-09-15T18:36:38 | https://www.reddit.com/r/LocalLLaMA/comments/16jl53m/agents_an_opensource_framework_for_autonomous/ | Singularian2501 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jl53m | false | null | t3_16jl53m | /r/LocalLLaMA/comments/16jl53m/agents_an_opensource_framework_for_autonomous/ | false | false | 37 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} | |
How much do different aspects of models & interfaces matter? | 3 | What truly matters to you when you're deciding which model and chat interface to use?
Does everything need to be FOSS? Do features (PDFs, Internet RAG, image gen, etc) matter? Is it just cost (as in no monthly subscription)? Model size? Locality?
And how do those axes interact? For example, would a FOSS model paired with an entirely free but closed-source local chat UI that has lots of features be useful? Or would a proprietary model behind the oobabooga UI that doesn't have a monthly fee but doesn't run locally be okay? | 2023-09-15T18:18:52 | https://www.reddit.com/r/LocalLLaMA/comments/16jkphn/how_much_do_different_aspects_of_models/ | carsonpoole | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jkphn | false | null | t3_16jkphn | /r/LocalLLaMA/comments/16jkphn/how_much_do_different_aspects_of_models/ | false | false | self | 3 | null |
Is falcon 180b any good for creative stuff...more specifically fiction writing? | 24 | Just wondering. This is something that's extremely difficult to judge from benchmarks. | 2023-09-15T17:59:17 | https://www.reddit.com/r/LocalLLaMA/comments/16jk8b5/is_falcon_180b_any_good_for_creative_stuffmore/ | spanielrassler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jk8b5 | false | null | t3_16jk8b5 | /r/LocalLLaMA/comments/16jk8b5/is_falcon_180b_any_good_for_creative_stuffmore/ | false | false | self | 24 | null |
Introducing Fintwit Voyager (summaries on financial podcasts with OS LLMs) | 1 | Introducing Fintwit Voyager!
A twitter account that harnesses open-source large language models to auto-summarize investing and financial markets podcasts.
On a technical note, I developed a custom summarization chain involving speech-to-text transcription, speaker diarization, speaker labeling, and summarization. While "text summarization" sounds trivial and simple, many AI-mediated steps and prompt engineering techniques were involved in the process to keep it reliable and free from human intervention.
If you're interested in financial markets & investing talks, feel free to follow!
[https://twitter.com/fintwit\_voyager](https://twitter.com/fintwit_voyager) | 2023-09-15T17:57:03 | https://www.reddit.com/r/LocalLLaMA/comments/16jk6c4/introducing_fintwit_voyager_summaries_on/ | Responsible_Warning3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jk6c4 | false | null | t3_16jk6c4 | /r/LocalLLaMA/comments/16jk6c4/introducing_fintwit_voyager_summaries_on/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'vUBde9w3mU-cHW1ZjtG2To91gaw33i10GgwEVe0AJpI', 'resolutions': [], 'source': {'height': 48, 'url': 'https://external-preview.redd.it/jmmAASC2L3jPr8IMbQndoHwcEhlRcp4ORylmCSLA5cc.jpg?auto=webp&s=89ae15513211d6f0cad0aadcf9e1afd679fdd5f2', 'width': 48}, 'variants': {}}]} |
Utilizing two different size GPUs for fine-tuning | 5 | Hey fellow LLAMA enthusiasts! I've got a question about utilizing two A100 GPUs with different RAM sizes (40GB and 10GB) for fine-tuning LLama 7B. I attempted to use \`device\_map="auto"\` when loading the Hugging Face model, but I encountered an 'OOM' (Out of Memory) error (probably it expects GPUs of the same size). Any suggestions on effectively utilizing both GPUs with this setup and avoiding the memory issue? | 2023-09-15T16:47:17 | https://www.reddit.com/r/LocalLLaMA/comments/16jifd3/utilizing_two_different_size_gpus_for_finetuning/ | ali0100u | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jifd3 | false | null | t3_16jifd3 | /r/LocalLLaMA/comments/16jifd3/utilizing_two_different_size_gpus_for_finetuning/ | false | false | self | 5 | null |
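For the two-GPU question above: with Hugging Face Accelerate, `device_map="auto"` can be steered with a `max_memory` dict that caps each device below its physical VRAM, so the bigger card takes more layers and neither card gets packed to the brim. This is a hedged sketch, untested on this exact setup; the model name, headroom, and CPU budget are illustrative assumptions.

```python
def build_max_memory(gpu_vram_gib, headroom_gib=2.0, cpu_gib=30):
    """Map each GPU index to a capped memory-budget string for Accelerate.

    Leaving some headroom per card helps avoid the OOM that
    device_map="auto" can hit when it fills a device completely,
    especially with mismatched card sizes.
    """
    budget = {}
    for idx, vram in gpu_vram_gib.items():
        usable = max(vram - headroom_gib, 0)
        budget[idx] = f"{usable:.0f}GiB"
    budget["cpu"] = f"{cpu_gib}GiB"  # spill-over for layers that don't fit
    return budget

# Hypothetical usage with Hugging Face transformers (model name illustrative):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "meta-llama/Llama-2-7b-hf",
#     device_map="auto",
#     max_memory=build_max_memory({0: 40, 1: 10}),
# )

print(build_max_memory({0: 40, 1: 10}))
```

The same dict also accepts a `"cpu"` key, which lets overflow layers land in system RAM instead of erroring out.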
Is there a reason for the lack of superhot ggufs? | 19 | I couldn't think of how to word the title, but I don't mean it as accusatory as it sounds lol. It's honest curiosity.
With llama.cpp having dropped GGML support, I realized that all the GGMLs I had of "high context" SuperHOT models were no longer viable.
This morning I figured I'd get some downloads going of the gguf versions of my favorite superhots, only to find there were exactly 0 on huggingface. That seemed odd to me, and my first thought was perhaps there's a technical reason. Of course, the answer could just be no one has gotten around to it yet, which is totally understandable.
So- is it just that they are in the to-do backlog, or is there a technical reason? Are superhots old news with the advent of llama2 and 4k context, or does gguf format perhaps not lend itself to that somehow? | 2023-09-15T16:26:42 | https://www.reddit.com/r/LocalLLaMA/comments/16jhx7m/is_there_a_reason_for_the_lack_of_superhot_ggufs/ | LearningSomeCode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jhx7m | false | null | t3_16jhx7m | /r/LocalLLaMA/comments/16jhx7m/is_there_a_reason_for_the_lack_of_superhot_ggufs/ | false | false | self | 19 | null |
Does anyone know how to finetune Phi-1.5? | 4 | I was trying to do a little finetuning to the new Phi-1.5 but using colab to train a llama 2 model, as I thought it gave me error in the "trainer = SFTTrainer" section.
I get this error: AttributeError: 'MixFormerSequentialForCausalLM' object has no attribute '\_set\_gradient\_checkpointing'.
Does anyone know what I need to modify to make my finetuning work correctly? | 2023-09-15T16:25:18 | https://www.reddit.com/r/LocalLLaMA/comments/16jhvzi/does_anyone_know_how_to_finetune_phi15/ | danielbrdz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jhvzi | false | null | t3_16jhvzi | /r/LocalLLaMA/comments/16jhvzi/does_anyone_know_how_to_finetune_phi15/ | false | false | self | 4 | null |
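Regarding the `_set_gradient_checkpointing` error above: the Phi-1.5 class loaded via `trust_remote_code` appears not to implement gradient checkpointing, so one plausible workaround is simply disabling that feature before the trainer tries to enable it. A hedged sketch, not a verified fix; the paths and hyperparameters are illustrative.

```python
# Sketch of a workaround (untested against this exact Phi-1.5 revision).
# The MixFormerSequentialForCausalLM class shipped via trust_remote_code
# does not implement _set_gradient_checkpointing, so turn the feature off
# before the trainer tries to enable it.

training_kwargs = {
    "output_dir": "phi-1_5-finetune",   # illustrative path
    "per_device_train_batch_size": 4,   # illustrative hyperparameters
    "learning_rate": 2e-4,
    "gradient_checkpointing": False,    # <- the key change
}

# Hypothetical wiring with transformers / TRL:
# from transformers import TrainingArguments
# from trl import SFTTrainer
# args = TrainingArguments(**training_kwargs)
# trainer = SFTTrainer(model=model, args=args, train_dataset=dataset,
#                      dataset_text_field="text")
# trainer.train()

print(training_kwargs["gradient_checkpointing"])
```

The trade-off is higher VRAM use during training, since checkpointing is what lets activations be recomputed instead of stored.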
Fine-Tuning Llama 70B on Consumer Hardware: A Step-by-Step Guide | 1 | [removed] | 2023-09-15T16:02:08 | https://www.reddit.com/r/LocalLLaMA/comments/16jhb5n/finetuning_llama_70b_on_consumer_hardware_a/ | l33thaxman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jhb5n | false | null | t3_16jhb5n | /r/LocalLLaMA/comments/16jhb5n/finetuning_llama_70b_on_consumer_hardware_a/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'BG-_ybG_81cHNdhfcO4SI7RZKgZvJvbtDdAp--yWODA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/7MJFP4pv4dd3l5U-iQkpol5U-QDNuNShKtlioenvTJU.jpg?width=108&crop=smart&auto=webp&s=613d5e1fa3925c07e8e1d112648fcd2366c54107', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/7MJFP4pv4dd3l5U-iQkpol5U-QDNuNShKtlioenvTJU.jpg?width=216&crop=smart&auto=webp&s=03a8a5ecdeeebdd76792b1a8d5f6df90e60eabac', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/7MJFP4pv4dd3l5U-iQkpol5U-QDNuNShKtlioenvTJU.jpg?width=320&crop=smart&auto=webp&s=84de38d893173ecd2a023ad7e1d7792f05a1b7c9', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/7MJFP4pv4dd3l5U-iQkpol5U-QDNuNShKtlioenvTJU.jpg?auto=webp&s=38faed5263fcda3871ebce98b9d98235293b2a39', 'width': 480}, 'variants': {}}]} |
This week in AI - all the Major AI development in a nutshell | 1 | [removed] | 2023-09-15T14:42:34 | https://www.reddit.com/r/LocalLLaMA/comments/16jf9oy/this_week_in_ai_all_the_major_ai_development_in_a/ | wyem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jf9oy | false | null | t3_16jf9oy | /r/LocalLLaMA/comments/16jf9oy/this_week_in_ai_all_the_major_ai_development_in_a/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XtM-kk73flWxFHhYFhvEqzP_URnfgAcG5CgS8E0YN18', 'resolutions': [{'height': 89, 'url': 'https://external-preview.redd.it/kSnw4CVYfVApLY4mwPR30zRRzHbP-h5qZWbn6gvH5vs.jpg?width=108&crop=smart&auto=webp&s=b36e14f7569ad9b98faf9cc5d9d93974841bea50', 'width': 108}, {'height': 178, 'url': 'https://external-preview.redd.it/kSnw4CVYfVApLY4mwPR30zRRzHbP-h5qZWbn6gvH5vs.jpg?width=216&crop=smart&auto=webp&s=8f303c8e64b6765f92ff3e6e6f1e0cf94b6e0a15', 'width': 216}, {'height': 264, 'url': 'https://external-preview.redd.it/kSnw4CVYfVApLY4mwPR30zRRzHbP-h5qZWbn6gvH5vs.jpg?width=320&crop=smart&auto=webp&s=08632083cc712e34568233325b7fe6c7674c3003', 'width': 320}, {'height': 528, 'url': 'https://external-preview.redd.it/kSnw4CVYfVApLY4mwPR30zRRzHbP-h5qZWbn6gvH5vs.jpg?width=640&crop=smart&auto=webp&s=0cb2af76b5b7e0f0eceed0f033d2335774404969', 'width': 640}, {'height': 793, 'url': 'https://external-preview.redd.it/kSnw4CVYfVApLY4mwPR30zRRzHbP-h5qZWbn6gvH5vs.jpg?width=960&crop=smart&auto=webp&s=de4f960619f004163f7f56969c99310c8c6f2c35', 'width': 960}], 'source': {'height': 805, 'url': 'https://external-preview.redd.it/kSnw4CVYfVApLY4mwPR30zRRzHbP-h5qZWbn6gvH5vs.jpg?auto=webp&s=d4fddda252ab78e0fbe3d7f6ad9d35ae4d16059a', 'width': 974}, 'variants': {}}]} |
How does one discover the correct rope/freq when converting a model into gguf? | 1 | [removed] | 2023-09-15T13:51:21 | https://www.reddit.com/r/LocalLLaMA/comments/16jdzqi/how_does_one_discover_the_correct_ropefreq_when/ | wh33t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jdzqi | false | null | t3_16jdzqi | /r/LocalLLaMA/comments/16jdzqi/how_does_one_discover_the_correct_ropefreq_when/ | false | false | self | 1 | null |
How much preprocessing are you doing for RAG QA chatbots w/ documents? | 8 | I know there is a ton of interest in document QA systems, which makes sense since they have good business value for most organizations.
I'm wondering, for those of you who found the answers from your QA systems to be good: did you just drop the PDF / Word / etc... into the program and let the RecursiveCharacterTextSplitter in LangChain do the work, or did you do some preprocessing before you chunked it up and loaded it into the vector DB?
I am trying to do QA on a PDF of a textbook. I wrote some scripts to "chunk" the textbook so each chunk also contains its associated Title and Subheading.
Let's say we are in Chapter: Carbon-Carbon Bonds. Below is an example passage:
Grignard Reaction:
The grignard reaction is very lit. Only the most based can perform it. Blah Blah Blah
Blah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah Blah
Blah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah Blah
Blah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah Blah
I would then create chunks from this passage like this:
Carbon-Carbon Bonds
Grignard Reaction
The grignard reaction is very lit. Only the most based can perform it. Blah Blah Blah
Blah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah Blah
\-------------------------------------------------------
Carbon-Carbon Bonds
Grignard Reaction
Blah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah Blah
Blah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah BlahBlah Blah Blah
Then I embed the chunks. The idea is that including the title and header will give relevant chunks a higher similarity score.
Has anyone found it necessary to perform this type of chunking? Anyone getting great results with easier methods?
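The header-prefixing scheme described above can be sketched as a small helper that prepends the chapter title and subheading to every chunk before embedding (the splitting here is naive character slicing; the names and toy sizes are illustrative):

```python
def chunk_with_headers(chapter, heading, body, max_chars=200, overlap=0):
    """Split `body` into pieces, each prefixed with its chapter title and
    subheading, so the embedded text carries its context and scores higher
    for queries that mention the topic."""
    chunks = []
    step = max(max_chars - overlap, 1)
    for start in range(0, len(body), step):
        piece = body[start:start + max_chars].strip()
        if piece:
            chunks.append(f"{chapter}\n{heading}\n{piece}")
    return chunks

chunks = chunk_with_headers(
    "Carbon-Carbon Bonds",
    "Grignard Reaction",
    "The grignard reaction is very lit. " + "Blah " * 40,
    max_chars=120,
)
print(len(chunks))
```

In a real pipeline you would do the splitting with a sentence-aware splitter and only use this step to glue the headers on.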
​ | 2023-09-15T13:25:20 | https://www.reddit.com/r/LocalLLaMA/comments/16jde4z/how_much_preprocessing_are_you_doing_for_rag_qa/ | 4hometnumberonefan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jde4z | false | null | t3_16jde4z | /r/LocalLLaMA/comments/16jde4z/how_much_preprocessing_are_you_doing_for_rag_qa/ | false | false | self | 8 | null |
Using a LoRa Trained on a HF model and running it on a GPTQ model? | 5 | So currently most of the fastest models are GPTQ models,
In Oobabooga you can't train a QLoRA, and you can't train a GPTQ model.
You can train a HF (Transformers-format) model, but to train it you need to load it in 8-bit.
However, a HF model is very slow at inference compared to a GPTQ model.
Is there no way to train a LoRA on a HF model and use it on a GPTQ model for faster inference? | 2023-09-15T13:20:43 | https://www.reddit.com/r/LocalLLaMA/comments/16jd9yp/using_a_lora_trained_on_a_hf_model_and_running_it/ | mohaziz999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jd9yp | false | null | t3_16jd9yp | /r/LocalLLaMA/comments/16jd9yp/using_a_lora_trained_on_a_hf_model_and_running_it/ | false | false | self | 5 | null |
Behind the Curtain: How do we look inside a Llama model file to browse the data? | 15 | **How do we browse the knowledge database of a model?** I'm sure we don't just pop the 5GB model into excel and start scrolling, or do we?
I have been testing several of the Llama 7B models on the text-generation-webui and it keeps stunning me about things it knows. It knew the exact statute for a law in my state. Like, freakin nuts. I want to look under the hood! | 2023-09-15T13:14:49 | https://www.reddit.com/r/LocalLLaMA/comments/16jd4o9/behind_the_curtain_how_do_we_look_inside_a_llama/ | Actual-Bad5029 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jd4o9 | false | null | t3_16jd4o9 | /r/LocalLLaMA/comments/16jd4o9/behind_the_curtain_how_do_we_look_inside_a_llama/ | false | false | self | 15 | null |
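On "looking under the hood": a model file is not a browsable knowledge database; it is a container of named weight tensors plus metadata, so there is nothing to scroll through in Excel — the knowledge lives diffusely in billions of numbers. As an illustration, here is a minimal parser for the leading header fields of a GGUF file, per my reading of the GGUF spec (the byte layout is an assumption to verify against the spec; the bytes below are synthetic, not a real model):

```python
import struct

def read_gguf_header(blob):
    """Peek at the leading fields of a GGUF file: magic, version,
    tensor count, metadata key/value count (layout per the GGUF spec
    for version >= 2: 4-byte magic, uint32 version, uint64 tensor_count,
    uint64 metadata_kv_count)."""
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", blob, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Synthetic header bytes for illustration (not read from a real model):
fake = struct.pack("<4sIQQ", b"GGUF", 3, 291, 24)
print(read_gguf_header(fake))
```

Tools like `gguf-dump` (in the llama.cpp repo) do this for real files and will list every tensor's name and shape, which is about as close to "browsing" the model as it gets.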
Ryzen Direct Memory Access in OpenCL mode | 7 | I am looking at upgrading from my trusty old AM4 board, and I want to orient my upgrades toward good price:performance for running LLMs.
I remember in the past AMD used to market some of their chips with integrated graphics as APUs. It was my understanding that this went a step beyond traditional integrated graphics and even approached some of the things the Mac M1/M2 would later perfect with regard to blurring the line between system memory and VRAM. (The problem in AMD's case was that the APU line was woefully underpowered for CPU heavy tasks.)
My question is: has anyone noticed a significant speedup in their CPU layers using a Ryzen 7600 (or similar) with an OpenCL build? Would I have access to an OpenCL pseudo-device using the Ryzen's "GPU" that I could offload some layers to alongside my 3060?
​ | 2023-09-15T13:10:46 | https://www.reddit.com/r/LocalLLaMA/comments/16jd15h/ryzen_direct_memory_access_in_opencl_mode/ | Apprehensive_Sock_71 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jd15h | false | null | t3_16jd15h | /r/LocalLLaMA/comments/16jd15h/ryzen_direct_memory_access_in_opencl_mode/ | false | false | self | 7 | null |
Uncovering mesa-optimization algorithms in Transformers | 22 | https://arxiv.org/abs/2309.05858
I think this is potentially huge, though my puny brain cannot handle the math in the paper.
The bit most relevant here:
*"Finally, we propose a novel self-attention layer, the mesa-layer, that explicitly and efficiently solves optimization problems specified in context. We find that this layer can lead to improved performance in synthetic and preliminary language modeling experiments, adding weight to our hypothesis that mesa-optimization is an important operation hidden within the weights of trained Transformers."*
I think this could be the "next thing after transformers", basically.
Need advice on Local LLM setup to augment AMD GPU shortcomings | 5 | Hi everyone,
**Background:**
I'm a seasoned developer and have been working continuously on GenAI for the last year. Recently, I wanted to set up a local LLM/SD server to work on a few confidential projects that I cannot move into the cloud. I went through the posts recommending renting cloud GPUs and started with that approach. Since the work I am doing is quite niche and requires a lot of tinkering while the setup is running, the cloud costs equaled the cost of a GPU within the last 3-4 months. So, I wanted to go ahead and set up a local system for further exploration.
**Buying decision:**
I started with Nvidia GPUs, and they didn't feel like good value for money for high-end cards with VRAM > 12GB. So, I went through some of the videos and posts mentioning that AMD is catching up fast with all the ROCm things, and videos showing that SD works on AMD. So, I decided to go ahead with the AMD setup.
**Setup:**
* Processor: AMD Ryzen9 7900X
* Motherboard: MSI X670-P Pro WIFI
* GPU: MSI RX7900-XTX Gaming trio classic (24GB VRAM)
* RAM: Corsair Vengeance 32GBx2 5200MHz
I think the setup is one of the best VFM but only if it works for GenAI :(
**Exploration:**
After spending nearly 10 days with my setup, these are my observations:
* AMD has a lot to do in terms of catching up to Nvidia's software usability.
* Memory management is very weak in all the frameworks that are working on AMD stacks
* Eg: DirectML works on AMD. But, you can only generate one image at a time. I can generate 4 image batches in my Nvidia 2GB GPU, lol.
* All the current frameworks release memory as soon as they complete processing, but that is not happening in the AMD stack. This gives rise to OOMs.
* We cannot use AMD hardware directly out of the box like Nvidia. Some things don't work on Windows, some things don't work on WSL; I even set up an Ubuntu dual boot, and some of the issues still have no answers.
* Don't get me started with compatibility issues like Ubuntu Kernel, ROCm version, Windows version, 7900XTX support for that respective ROCm, etc, etc.
Don't get me wrong, I'm an enthusiast and have DIYed my entire career. But at this point, I cannot simply sit and wait for all the things to fall into place, and at the same time, I cannot invest much further without much outcome. So, I'm currently looking to make the best use of this setup with as little investment as possible until the AMD woes go away.
The only reasonable option I came across is to add a used RTX3090 24GB GPU to my current setup and continue working on it. I was able to find them on OLX and in Gameloot as cheap as 60,000/-. So, here are my concerns:
* Is there any way I can make my current setup work without adding further investment?
* Since I know that the used 3090s are coming from mining rigs, is it a safe bet to spend that much on them?
* I can struggle a bit and go for a 4090, but for that I would have to get rid of my 7900XTX, which will make a dent in my pocket, and I always feel the 4090 is too overpriced and not good value for money.
* What are the other options, if I have any?
Thanks a lot in advance guys and your suggestions, help, time, and bashing will be much appreciated ;) | 2023-09-15T12:31:05 | https://www.reddit.com/r/LocalLLaMA/comments/16jc2p0/need_advice_on_local_llm_setup_to_augment_amd_gpu/ | kkb294 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jc2p0 | false | null | t3_16jc2p0 | /r/LocalLLaMA/comments/16jc2p0/need_advice_on_local_llm_setup_to_augment_amd_gpu/ | false | false | self | 5 | null |
Try models fine-tuned for your language | 10 | For those who are not satisfied with these Open models, have you tried using fine-tuned models with datasets for your language?
I am Italian, and I have always been unsatisfied. The only 'decent' model is WizardLM 70B; the other models are all poor in comparison.
Not to mention the base models: they are practically unusable even at 70B, with incomprehensible output.
(Except for Falcon 40B; before WizardLM 70B, it was the top choice for me, even the base model.)
Yesterday, by chance, I tried 'Openbuddy' because I saw on TheBloke's page that it had reached version v11, and I was curious to read the changelog. But I found a demo on Hugging Face Spaces, and I thought, 'Well, let's give it a try.' And wow.
Not only the 70B model but also the 30B one is really good, and the credit probably goes to the fact that it was trained on conversations in various languages, including Italian. I'm even curious to try the 13B model; if it works well, even a model that small could be a game-changer.
I recommend testing models that have been fine-tuned in your language. The reason for the poor output might be the nearly English-only dataset, and for a simple chatbot you don't need as many parameters as 70B. Probably even 13B would be more than sufficient if trained with data from your language. This likely improves not only the output but also the input, making the prompt more understandable for the AI.
Openbuddy Demo: [https://huggingface.co/spaces/OpenBuddy/ChatWithBuddy](https://huggingface.co/spaces/OpenBuddy/ChatWithBuddy) | 2023-09-15T12:26:36 | https://www.reddit.com/r/LocalLLaMA/comments/16jbz0i/try_models_finetuned_for_your_language/ | AntoItaly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16jbz0i | false | null | t3_16jbz0i | /r/LocalLLaMA/comments/16jbz0i/try_models_finetuned_for_your_language/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'snUVaj9pVE_H3kUtH1WnuOG8IMqKxRf149JHPX68nio', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WAMhuWv7E4a5fTpH_65H5WnqhJCxHmGdW6KPoL5WqWM.jpg?width=108&crop=smart&auto=webp&s=77fb3e3ac77caa8bd7df19d0d39e2dfafb26d1e3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WAMhuWv7E4a5fTpH_65H5WnqhJCxHmGdW6KPoL5WqWM.jpg?width=216&crop=smart&auto=webp&s=0801e169424d7d53c5a87afa937b0b1c43586c4b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WAMhuWv7E4a5fTpH_65H5WnqhJCxHmGdW6KPoL5WqWM.jpg?width=320&crop=smart&auto=webp&s=4c616e8edd1971c8add5f2da26bcccce2d94f2cc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WAMhuWv7E4a5fTpH_65H5WnqhJCxHmGdW6KPoL5WqWM.jpg?width=640&crop=smart&auto=webp&s=7e34bc5991874695ad311c54320a1c245ec26f16', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WAMhuWv7E4a5fTpH_65H5WnqhJCxHmGdW6KPoL5WqWM.jpg?width=960&crop=smart&auto=webp&s=11002c87f64d10a6bac69206d06340d5a57b9b6c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WAMhuWv7E4a5fTpH_65H5WnqhJCxHmGdW6KPoL5WqWM.jpg?width=1080&crop=smart&auto=webp&s=eb690631d57f214d122d5483a2181d50001d145a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WAMhuWv7E4a5fTpH_65H5WnqhJCxHmGdW6KPoL5WqWM.jpg?auto=webp&s=5e2e4811972fef1b05d024143395ab9a073314ec', 'width': 1200}, 'variants': {}}]} |
Easy method for fine-tuning any model, from Llama to GPT to others | 3 | Can someone please provide me with a script that I can run on Google Colab? I want to fine-tune a 100M to 500M model on the free Colab plan with a dataset that is between 30k and 100k examples in size. Any help, please. Even if there is no script, can you point me to software or anything else that can help? | 2023-09-15T10:45:07 | https://www.reddit.com/r/LocalLLaMA/comments/16j9up6/easy_method_for_finetuning_any_model_from_llama/ | Puzzleheaded_Acadia1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16j9up6 | false | null | t3_16j9up6 | /r/LocalLLaMA/comments/16j9up6/easy_method_for_finetuning_any_model_from_llama/ | false | false | self | 3 | null |
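A rough sketch of what such a Colab script could look like. The testable part below is just the data formatting; the training wiring is shown in comments because it needs a GPU and model downloads, and the model name, prompt template, and hyperparameters are all illustrative assumptions:

```python
def to_training_text(example):
    """Format one instruction/response pair into the flat text an SFT
    trainer consumes. The template below is illustrative; match whatever
    prompt format your base model expects."""
    return (f"### Instruction:\n{example['instruction']}\n"
            f"### Response:\n{example['response']}")

records = [{"instruction": "Say hi", "response": "Hi!"}]
texts = [to_training_text(r) for r in records]
print(texts[0])

# Hypothetical Colab wiring for a ~100M-500M model (should fit free-tier
# GPUs); the model name and hyperparameters are illustrative:
# from datasets import Dataset
# from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
# from trl import SFTTrainer
# ds = Dataset.from_list([{"text": t} for t in texts])
# name = "EleutherAI/pythia-160m"
# trainer = SFTTrainer(
#     model=AutoModelForCausalLM.from_pretrained(name),
#     tokenizer=AutoTokenizer.from_pretrained(name),
#     train_dataset=ds,
#     dataset_text_field="text",
#     args=TrainingArguments("out", per_device_train_batch_size=8,
#                            num_train_epochs=1),
# )
# trainer.train()
```

At this parameter scale, a full fine-tune usually fits in free-tier memory, so LoRA/QLoRA tricks are optional rather than required.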
Any Uncensored Llama2 model for commercial use? | 3 | Are there any Llama2 based model out there that haven't been trained with data made by OpenAI API?
I have been wondering if there might be a need for chatbots for companies, where a censored model is unusable.
For example, an eCommerce company that would like to have a chatbot for guidance and suggestions would have a hard time using the ChatGPT API if it throws a hissy fit every time a word like dildo or vibrator is mentioned.
I really like Airoboros, but with the wording of OpenAI's ToS, it seems risky to build a solution on that model.
Although I was wondering if a chatbot could even be considered to be competing with their product if they don't offer an uncensored alternative that could be used instead? | 2023-09-15T10:29:47 | https://www.reddit.com/r/LocalLLaMA/comments/16j9l39/any_uncensored_llama2_model_for_commercial_use/ | nixudos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16j9l39 | false | null | t3_16j9l39 | /r/LocalLLaMA/comments/16j9l39/any_uncensored_llama2_model_for_commercial_use/ | false | false | self | 3 | null |
Some interesting new tests just dropped | 38 | 2023-09-15T09:57:19 | https://evolutionnews.org/2023/09/chatgpt-is-becoming-increasingly-impressive/ | ambient_temp_xeno | evolutionnews.org | 1970-01-01T00:00:00 | 0 | {} | 16j90yr | false | null | t3_16j90yr | /r/LocalLLaMA/comments/16j90yr/some_interesting_new_tests_just_dropped/ | false | false | 38 | {'enabled': False, 'images': [{'id': 'WPf9sBmJpsNWI-iOg7b9rUdeAF4wGQEPPzrnTt88b1I', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/iFvVjgxq-c4zUadOelgZ85TokJjGgKkxfeFOxd543aw.jpg?width=108&crop=smart&auto=webp&s=6386e8c67744622d89fcde2c20850e5702f8d3dd', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/iFvVjgxq-c4zUadOelgZ85TokJjGgKkxfeFOxd543aw.jpg?width=216&crop=smart&auto=webp&s=9b7c4af96bbf94c129d4e02c44bc625e2c09556f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/iFvVjgxq-c4zUadOelgZ85TokJjGgKkxfeFOxd543aw.jpg?width=320&crop=smart&auto=webp&s=a223ede5c74450a34b696da3fc038b923c3c1262', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/iFvVjgxq-c4zUadOelgZ85TokJjGgKkxfeFOxd543aw.jpg?width=640&crop=smart&auto=webp&s=8e2a50f518a845e6e835f7b1b409493cdc051d3f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/iFvVjgxq-c4zUadOelgZ85TokJjGgKkxfeFOxd543aw.jpg?width=960&crop=smart&auto=webp&s=fa5165653b75421304c9eed37f66e26ec0bf4693', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/iFvVjgxq-c4zUadOelgZ85TokJjGgKkxfeFOxd543aw.jpg?width=1080&crop=smart&auto=webp&s=546e93f25da8bde7fb36675aa42d60c04b62a1f2', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/iFvVjgxq-c4zUadOelgZ85TokJjGgKkxfeFOxd543aw.jpg?auto=webp&s=6395d1bb84141b4b38023e783cce951c2b075c05', 'width': 1200}, 'variants': {}}]} | ||
I don't understand context window extension | 44 | If a transformer can only attend to, say, 2048 tokens, then how can that same transformer attend to more than 2048 tokens? Isn't that hard-coded in the architecture?
I can understand that you might summarise previous chunks of 2048 tokens and pass that 'hidden state' forward, or emulate a larger context window with a sliding window, but ALiBi, RoPE, and PI don't appear to do anything except change the way positional encodings are calculated.
What am I missing? | 2023-09-15T09:39:28 | https://www.reddit.com/r/LocalLLaMA/comments/16j8qa5/i_dont_understand_context_window_extension/ | moma1970 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16j8qa5 | false | null | t3_16j8qa5 | /r/LocalLLaMA/comments/16j8qa5/i_dont_understand_context_window_extension/ | false | false | self | 44 | null |
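One way to see why the window is not hard-coded in the weights: attention itself works over any sequence length; what degrades is the positional encoding at positions the model never saw during training. Position Interpolation (PI) simply rescales the position index so a longer sequence maps back into the trained range. A toy sketch (the dimension and base values are illustrative):

```python
def rope_angles(pos, dim=8, base=10000.0, scale=1.0):
    """Rotary-embedding angles for one position index.

    Position Interpolation (PI) just divides the position by a scale
    factor, squeezing e.g. positions 0..4096 into the 0..2048 range the
    model was trained on. Nothing in the weights hard-codes the length;
    only these angles change.
    """
    p = pos / scale
    return [p / (base ** (2 * i / dim)) for i in range(dim // 2)]

# A token at position 4096 under scale=2 gets exactly the angles the
# model learned for position 2048:
assert rope_angles(4096, scale=2.0) == rope_angles(2048, scale=1.0)
print(rope_angles(4096, scale=2.0)[0])
```

This is also why a little fine-tuning after interpolation helps: positions are now packed twice as densely, and the model adapts to the finer spacing.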
How much does it cost to fine-tune on a code base? | 2 | Would the appropriate metric be cost per GB of source files?
Would love some reference points for fine-tuning costs and performance (ideally examples of performance vs. un-tuned GPT-4) for Code Llama, StarCoder, etc. on one's own code base with docs.
Thanks | 2023-09-15T09:34:05 | https://www.reddit.com/r/LocalLLaMA/comments/16j8n5q/how_much_does_it_cost_to_finetune_on_a_code_base/ | Infinite100p | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16j8n5q | false | null | t3_16j8n5q | /r/LocalLLaMA/comments/16j8n5q/how_much_does_it_cost_to_finetune_on_a_code_base/ | false | false | self | 2 | null |
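Cost per GB of source is a reasonable metric, and a back-of-envelope calculation gets you in the right ballpark. Every constant below is an assumption (bytes per token for code, the 6·N·D training-FLOPs rule of thumb, GPU throughput, utilization, hourly price), not a quote:

```python
def finetune_cost_estimate(code_gb, params_b, epochs=3,
                           gpu_tflops=150, mfu=0.35, usd_per_gpu_hour=2.0):
    """Back-of-envelope fine-tuning cost. Assumptions: ~4 bytes per token
    for code, ~6 * params * tokens training FLOPs (the usual rule of
    thumb), 35% utilization of a 150-TFLOP GPU rented at $2/hour."""
    tokens = code_gb * 1e9 / 4 * epochs
    flops = 6 * (params_b * 1e9) * tokens
    gpu_hours = flops / (gpu_tflops * 1e12 * mfu) / 3600
    return gpu_hours, gpu_hours * usd_per_gpu_hour

hours, usd = finetune_cost_estimate(code_gb=1, params_b=7)
print(f"~{hours:.0f} GPU-hours, ~${usd:.0f}")
```

So under these assumptions, 1 GB of code through a 7B model for 3 epochs lands in the low hundreds of dollars; LoRA-style tuning on a subset would cut that substantially.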
Tuning an LLM on my own notes over time | 19 | I’ve been hearing a lot of talk about noteworthy people creating AI clones of themselves using the mass of data they’ve generated over the years.
I think this is pretty cool, but difficult to do for an average person. So my solution would be to record voice notes throughout the day, transcribe them, and add them to the data set.
At the end of the day/week I’d input the same set of prompts and see what insights are generated. I’m committed to doing this as a long-term project: years to decades of notes.
What would be the simplest way to accomplish this?
I’m a total noob at this. I am a software engineer but I’ve spent most of my time in the AR/VR space. Please forgive my naivety. | 2023-09-15T07:33:32 | https://www.reddit.com/r/LocalLLaMA/comments/16j6q7r/tuning_an_llm_on_my_own_notes_over_time/ | michaelthatsit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16j6q7r | false | null | t3_16j6q7r | /r/LocalLLaMA/comments/16j6q7r/tuning_an_llm_on_my_own_notes_over_time/ | false | false | self | 19 | null |
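A minimal sketch of the daily loop described above: transcribe a voice note (the Whisper call is shown as a comment since it needs real audio and an installed model), then append a timestamped record to a JSONL store that later fine-tuning or retrieval can consume. The file name and tagging scheme are illustrative:

```python
import datetime
import io
import json

def append_note(store, text, tags=()):
    """Append one transcribed note as a JSON Lines record to any writable
    text store (in real use, a file opened in append mode)."""
    record = {
        "ts": datetime.datetime.now().isoformat(timespec="seconds"),
        "text": text,
        "tags": list(tags),
    }
    store.write(json.dumps(record) + "\n")
    return record

# Transcription step (hypothetical; needs `pip install openai-whisper`
# and a real recording):
# import whisper
# text = whisper.load_model("base").transcribe("note_2023-09-15.m4a")["text"]

buf = io.StringIO()  # stands in for open("notes.jsonl", "a")
rec = append_note(buf, "Idea: fine-tune on my own notes.", tags=["idea"])
print(buf.getvalue().strip())
```

Starting with retrieval (embedding the JSONL records and querying them) is usually easier than fine-tuning on them, and the same store serves both.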
Some questions of implementing LLM to generate Q/A pairs based on local documents | 3 | Recently, I have been playing around with how to implement chat-based Q/A using an LLM based on a local knowledge base.
I have experimented with the following two open-source frameworks.
[Llama\_index](https://github.com/jerryjliu/llama_index)
[Langchain-chatchat](https://github.com/chatchat-space/Langchain-Chatchat)
I believe these 2 frameworks are built upon what everyone refers to as the RAG (Retrieval-Augmented Generation) approach. Without altering the embeddings and LLM, it allows for generating responses based on one's own knowledge base.
Thanks to the authors' excellent work, I have indeed been able to achieve my requirements to some extent. However, the output results still seem to have some deviations and even mistakes.
[the workflow of Langchain-Chatchat](https://preview.redd.it/2mud4ayp5dob1.png?width=834&format=png&auto=webp&s=cc598844a4d4462a8fc80383a1ce0e946828e157)
Is there a way to make the output results more accurate?
For example, I have a user manual for a hairdryer in the knowledge base stating that the hair dryer works at a rated voltage of 110V. When I use an LLM with a relatively small parameter count, I may get a wrong answer.
If I ask, "Can I use the xxx hair dryer directly in a country with a rated voltage of 220V?"
Llama2-7B may answer "yes, you can."
while Llama2-13B may answer me "no, unless you use a power adapter".
And GPT is capable of providing more excellent answers.
I believe that if I want to achieve better output results, I may need to fine-tune the LLM or embeddings.
But I've noticed that many people use Q/A pairs for fine-tuning, and I'm not sure why they do this or whether these operations involve fine-tuning embeddings or the LLM. In my understanding, we don't have sufficient resources for fine-tuning the LLM, and fine-tuning embeddings seems to only help in improving how embeddings convert human language into higher relevance vectors. Does this mean that when fine-tuning embeddings, there's actually no need for question-answer pairs?
If I must fine-tune, should I separately fine-tune two embeddings: one fine-tuned based on question-answer pairs for extracting vectors from documents and another fine-tuned based on question similarity for extracting vectors from questions?
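On the embedding fine-tuning question above: embedding fine-tunes typically do use question-passage pairs, but only to learn which chunk is relevant to which question; with in-batch negatives you don't need written answers at all, and one model usually embeds both queries and documents. A hedged sketch of the pair-building step, with the sentence-transformers wiring in comments (model name illustrative):

```python
def build_contrastive_pairs(qa_items):
    """Turn (question, source chunk) items into positive training pairs.

    For embedding fine-tuning you do not need written answers, only which
    chunk is relevant to which question. With in-batch negatives
    (e.g. MultipleNegativesRankingLoss) the other chunks in each batch
    act as negatives automatically.
    """
    return [(item["question"], item["chunk"]) for item in qa_items]

pairs = build_contrastive_pairs([
    {"question": "What is the rated voltage of the XXX hair dryer?",
     "chunk": "The XXX hair dryer operates at a rated voltage of 110V."},
])
print(pairs[0][0])

# Hypothetical sentence-transformers wiring (model name illustrative):
# from sentence_transformers import SentenceTransformer, InputExample, losses
# model = SentenceTransformer("BAAI/bge-small-en")
# examples = [InputExample(texts=[q, c]) for q, c in pairs]
# loss = losses.MultipleNegativesRankingLoss(model)
# then train with a DataLoader over `examples` and model.fit(...)
```

So rather than two separately fine-tuned embedders, a single model trained on these pairs handles both sides; fine-tuning the LLM itself is a separate, much more expensive decision.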
​ | 2023-09-15T06:52:31 | https://www.reddit.com/r/LocalLLaMA/comments/16j624z/some_questions_of_implementing_llm_to_generate_qa/ | william_luckybob | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16j624z | false | null | t3_16j624z | /r/LocalLLaMA/comments/16j624z/some_questions_of_implementing_llm_to_generate_qa/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'pfcqrH7buFAmKsvruxLEC8k2mjFW6UvVXRz1AwKYa44', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LBlgmVOAXLqerXsfJbE68VfDP0dDCf-STi5W_GzltkY.jpg?width=108&crop=smart&auto=webp&s=c4412dd9bcb447ba4bf759fb93252ec7cc29eaa7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LBlgmVOAXLqerXsfJbE68VfDP0dDCf-STi5W_GzltkY.jpg?width=216&crop=smart&auto=webp&s=d1077664f0b319a5e9689ed8006db9d90a16c950', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LBlgmVOAXLqerXsfJbE68VfDP0dDCf-STi5W_GzltkY.jpg?width=320&crop=smart&auto=webp&s=c7faca5ed71dda264a02cef778491e4cb828c9f4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LBlgmVOAXLqerXsfJbE68VfDP0dDCf-STi5W_GzltkY.jpg?width=640&crop=smart&auto=webp&s=02117123b1675812a5cce3203ea1dcfb0661f043', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LBlgmVOAXLqerXsfJbE68VfDP0dDCf-STi5W_GzltkY.jpg?width=960&crop=smart&auto=webp&s=40de8f1c2bc9c6790d19d992b0b90966ea14d5b6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LBlgmVOAXLqerXsfJbE68VfDP0dDCf-STi5W_GzltkY.jpg?width=1080&crop=smart&auto=webp&s=21d5b048af2fd61e3abbd1d8c17492e576f26874', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LBlgmVOAXLqerXsfJbE68VfDP0dDCf-STi5W_GzltkY.jpg?auto=webp&s=9c78825c06f2544b9d700febaadb19b3625bc6d4', 'width': 1200}, 'variants': {}}]} | |
Methods for Enabling LLMs to Work with Languages Other Than English: How Does ChatGPT Do It? | 7 | Hello. Are there any methods that would allow an LLM to work with languages other than English? For example, how does ChatGPT achieve this? Could you please discuss the methods you are aware of for achieving this? | 2023-09-15T06:42:22 | https://www.reddit.com/r/LocalLLaMA/comments/16j5w7f/methods_for_enabling_llms_to_work_with_languages/ | PickkNickk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16j5w7f | false | null | t3_16j5w7f | /r/LocalLLaMA/comments/16j5w7f/methods_for_enabling_llms_to_work_with_languages/ | false | false | self | 7 | null |
Guys, why are we sleeping on MLC LLM - Running on Vulkan? | 80 | I just tested on my 4090 and its incredibly fast and it actually has decent instructions to get the packages and run it. It's surprisingly fast compared to what I was seeing via CUDA and it seems to be fully utilizing my GPU. Im gonna try it on my ROG ally next.
https://preview.redd.it/h4fv2id7qcob1.png?width=1060&format=png&auto=webp&s=ab00aa5d3071c220818f8c389d6f9715e09f23c1
[MCL LLM llama-2 7b](https://preview.redd.it/e7sym2utpcob1.png?width=1441&format=png&auto=webp&s=6e00fc757f71923f9113d46e61072eac1e8e8744) | 2023-09-15T05:03:58 | https://www.reddit.com/r/LocalLLaMA/comments/16j486g/guys_why_are_we_sleeping_on_mlc_llm_running_on/ | APUsilicon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16j486g | false | null | t3_16j486g | /r/LocalLLaMA/comments/16j486g/guys_why_are_we_sleeping_on_mlc_llm_running_on/ | false | false | 80 | null | |
Wizard Coder - CUDA out of memory. Tried to allocate | 1 | [removed] | 2023-09-15T04:07:36 | https://www.reddit.com/r/LocalLLaMA/comments/16j37e2/wizard_coder_cuda_out_of_memory_tried_to_allocate/ | NormalResume | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16j37e2 | false | null | t3_16j37e2 | /r/LocalLLaMA/comments/16j37e2/wizard_coder_cuda_out_of_memory_tried_to_allocate/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'UY7-Y9YFSwqNihyhNh7qaNVidPA3KZOLpPUFt7eJqKo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LOC8gQl7_gtTBmz0_ltFp84qpA4hZfksAUsBpa0r2dE.jpg?width=108&crop=smart&auto=webp&s=23223fc45c2fd69013b486eef8efb8b37a394e59', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LOC8gQl7_gtTBmz0_ltFp84qpA4hZfksAUsBpa0r2dE.jpg?width=216&crop=smart&auto=webp&s=c7ce425b875b8f99244edbecf9d9204fe7646b14', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LOC8gQl7_gtTBmz0_ltFp84qpA4hZfksAUsBpa0r2dE.jpg?width=320&crop=smart&auto=webp&s=8fddc219410aec56a396a94bf87daaf556cc3951', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LOC8gQl7_gtTBmz0_ltFp84qpA4hZfksAUsBpa0r2dE.jpg?width=640&crop=smart&auto=webp&s=8400822e85d8cc693c1c61f9503eaa3326c20fe5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LOC8gQl7_gtTBmz0_ltFp84qpA4hZfksAUsBpa0r2dE.jpg?width=960&crop=smart&auto=webp&s=cda57265623cbe32817685a3d48f6155cbb2649f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LOC8gQl7_gtTBmz0_ltFp84qpA4hZfksAUsBpa0r2dE.jpg?width=1080&crop=smart&auto=webp&s=39b5a9ec2133798c22050e05403ce52c6f44685b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LOC8gQl7_gtTBmz0_ltFp84qpA4hZfksAUsBpa0r2dE.jpg?auto=webp&s=440602e09961dec734f45acdbaaee92ba833e907', 'width': 1200}, 'variants': {}}]} |
Old Comp Running LLM! I got llama-2-7b-chat.Q2_K.gguf running on 10-year-old iMac, response in 40-sec at 1-t/s | 16 | **I have an 8GB Ram late 2013 iMac running macOS Catalina.** I installed the text-generation-webui and just pasted in model card link (TheBloke/ llama-2-7b-chat.Q2\_K.gguf) - BAM WORKS, it's blowing me away! Y'all, this machine is old and decrepit. No GPU.
I asked this: Explain the temptation of Christ from the Holy Bible.
**It responded in 70 seconds at 3-t/s!**
Sure, I'd be happy to help you with that! The account of Jesus being tempted by Satan in the wilderness is found in Luke 4:1-13. According to the Bible, after fasting for 40 days and nights, Jesus was tempted by Satan in three ways:
1. Temptation to turn stones into bread: The devil told Jesus to command the rocks to become bread, but Jesus refused, saying it is written "man shall not live by bread alone." (Matthew 4:4)
2. 2. Temptation to worship himself: Satan tempted Jesus to bow down and worship him instead of God. But Jesus responded by quoting Deuteronomy 6:13-14, saying "You shall not misuse the name of the Lord your God." (Matthew 4:7)
3. Temptation to look for an easy way out:
then it cut off. but still. pretty sick! | 2023-09-15T03:18:41 | https://www.reddit.com/r/LocalLLaMA/comments/16j29s3/old_comp_running_llm_i_got_llama27bchatq2_kgguf/ | Actual-Bad5029 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16j29s3 | false | null | t3_16j29s3 | /r/LocalLLaMA/comments/16j29s3/old_comp_running_llm_i_got_llama27bchatq2_kgguf/ | false | false | self | 16 | null |
MythoMax 13b and Mythomax 33b | 30 | Okay, Mythomax 13b is sweet. Really sweet in writing stuff. for 13b buy in short pants.
I tried MythoMax L2 33B and I couldn't make heads or tails of it. I really couldn't.

Is it me? Is it my pet hamster? Or is the 33b really bonkers? I tried the 6-bit quant for extra points, but boy, it was all mangled-together nonsense.

Tell me I made some mistake, because I really want MythoMax in 33b without dementia.
​ | 2023-09-15T02:41:19 | https://www.reddit.com/r/LocalLLaMA/comments/16j1ial/mythomax_13b_and_mythomax_33b/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16j1ial | false | null | t3_16j1ial | /r/LocalLLaMA/comments/16j1ial/mythomax_13b_and_mythomax_33b/ | false | false | self | 30 | null |
Working in Python code, will Llama 2 return a table in JSON format when prompted? | 4 | Hi everyone, I wanted to confirm this question below before really jumping into Llama 2.
If I prompted Llama to provide answers in JSON format, e.g. with something like this prompt:
sequences = pipeline("Provide your answer in the JSON format with the feature names as the keys.")
Then following that in my Python code I have:
    for seq in sequences:
        # Convert output string to dictionary object
        dict = json.loads(seq['generated_text'])
Will Llama 2 be able to output the answers in JSON format in order for me to convert it into a dictionary in the next step?
Would appreciate any input. Many thanks! | 2023-09-15T02:34:57 | https://www.reddit.com/r/LocalLLaMA/comments/16j1dk8/working_in_python_code_will_llama_2_return_a/ | --leockl-- | self.LocalLLaMA | 2023-09-15T02:39:11 | 0 | {} | 16j1dk8 | false | null | t3_16j1dk8 | /r/LocalLLaMA/comments/16j1dk8/working_in_python_code_will_llama_2_return_a/ | false | false | self | 4 | null |
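To the question above, a minimal sketch in plain Python of the parsing step being asked about. Assumptions: `raw` stands in for whatever `seq['generated_text']` would contain; models often wrap JSON in extra prose, so extracting the outermost `{...}` span before calling `json.loads` is a common defensive pattern rather than anything Llama-specific:

```python
import json
import re

def parse_model_json(raw_output: str) -> dict:
    """Extract and parse the first JSON object found in a model's text output.

    LLMs frequently surround the requested JSON with extra prose, so we
    grab the outermost {...} span before handing it to json.loads.
    """
    match = re.search(r"\{.*\}", raw_output, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

# Simulated model output with surrounding chatter:
raw = 'Sure! Here is the answer:\n{"feature_a": 1, "feature_b": 2}\nHope that helps.'
result = parse_model_json(raw)
```

Whether the model reliably emits valid JSON depends on the prompt and sampling settings, so wrapping the `json.loads` call in a retry loop is also common.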
Trail LLM Torrent Support - stablebay.org | 5 | 2023-09-15T01:26:54 | thiefyzheng | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16izy8n | false | null | t3_16izy8n | /r/LocalLLaMA/comments/16izy8n/trail_llm_torrent_support_stablebayorg/ | false | false | 5 | {'enabled': True, 'images': [{'id': '5vxjMNoEcZa-bycsTAKsDIyRUx2qcBlxvq_wNEULCLU', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/jj9jadbnnbob1.png?width=108&crop=smart&auto=webp&s=75e5126372d2b05eaa4a2180e69330c16ea1592a', 'width': 108}, {'height': 208, 'url': 'https://preview.redd.it/jj9jadbnnbob1.png?width=216&crop=smart&auto=webp&s=10533c4a8f27d620ccad249f1b8ef9f724131db6', 'width': 216}, {'height': 308, 'url': 'https://preview.redd.it/jj9jadbnnbob1.png?width=320&crop=smart&auto=webp&s=b3b4cfd6be7af7559105986a5dc0e155c4d8438f', 'width': 320}, {'height': 616, 'url': 'https://preview.redd.it/jj9jadbnnbob1.png?width=640&crop=smart&auto=webp&s=d665de74cd762353e45b17fb21ca8c478dbe3a59', 'width': 640}], 'source': {'height': 633, 'url': 'https://preview.redd.it/jj9jadbnnbob1.png?auto=webp&s=515c3b748fc9dd2f6185c55fa0272234fbf3ed36', 'width': 657}, 'variants': {}}]} | |||
Censor & LlaMAS | 1 | [removed] | 2023-09-15T00:24:09 | https://www.reddit.com/r/LocalLLaMA/comments/16iymfw/censor_llamas/ | holistic-engine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16iymfw | false | null | t3_16iymfw | /r/LocalLLaMA/comments/16iymfw/censor_llamas/ | false | false | self | 1 | null |
Ai only for rich and powerful - Closed Door Summit | 1 | [removed] | 2023-09-14T22:49:03 | https://www.reddit.com/r/LocalLLaMA/comments/16iwix3/ai_only_for_rich_and_powerful_closed_door_summit/ | Single_Ring4886 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16iwix3 | false | null | t3_16iwix3 | /r/LocalLLaMA/comments/16iwix3/ai_only_for_rich_and_powerful_closed_door_summit/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'hSr_Qzar9WIY3xBupwjJLtcozfLPOTBmWCTAjoTJUwI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/5n7ZyfQrJixtnp_-YLsuot0Q0EoUgVxHPyUzl5XPx2k.jpg?width=108&crop=smart&auto=webp&s=fd09c08144fcede846fb5f42f6f0d8123550582a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/5n7ZyfQrJixtnp_-YLsuot0Q0EoUgVxHPyUzl5XPx2k.jpg?width=216&crop=smart&auto=webp&s=2b4eaec3d14df156d573b5d1296b7d4d98d4415c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/5n7ZyfQrJixtnp_-YLsuot0Q0EoUgVxHPyUzl5XPx2k.jpg?width=320&crop=smart&auto=webp&s=342521e31d1c696ba1062a06ac4cb59477ef4f65', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/5n7ZyfQrJixtnp_-YLsuot0Q0EoUgVxHPyUzl5XPx2k.jpg?auto=webp&s=6f32ea372eb74fae751ac7516a75716dd2ccbaa9', 'width': 480}, 'variants': {}}]} |
Need Help In Creating an App with the custom data | 0 | I am planning to create an app, using my own custom data which will generate articles from the internet and from the data I used. The challenge I'm facing is that I lack the necessary knowledge in this area, and I'm struggling to find appropriate tutorials or resources to assist me.
I've already tried doing this with the help of langchain but the output was only from the data I provided, any other queries provided failed to resolve.
If anyone could provide me with guidance on the steps I should follow, recommended tools, or packages, I would greatly appreciate it. | 2023-09-14T22:02:15 | https://www.reddit.com/r/LocalLLaMA/comments/16ivdqr/need_help_in_creating_an_app_with_the_custom_data/ | adgamerx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ivdqr | false | null | t3_16ivdqr | /r/LocalLLaMA/comments/16ivdqr/need_help_in_creating_an_app_with_the_custom_data/ | false | false | self | 0 | null |
I'm a former Atari engineer trying to catch up with the current tech. | 123 | What models can I run on my Atari 5200? I'm willing to code in assembly.
/s | 2023-09-14T21:51:35 | https://www.reddit.com/r/LocalLLaMA/comments/16iv40g/im_a_former_atari_engineer_trying_to_catch_up/ | Musenik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16iv40g | false | null | t3_16iv40g | /r/LocalLLaMA/comments/16iv40g/im_a_former_atari_engineer_trying_to_catch_up/ | false | false | self | 123 | null |
The first low code platform for AI engineers | 0 | I'm curious what the appetite for low code platforms in the AI space is. Would you be interested in using it?
Here's a look at what this looks like: [https://www.loom.com/share/Building-AI-Agents-with-airkitai-8d7513639e8e4c7e9d8e449945f9f827?sid=fa0a1ef1-6aa5-4174-8923-2e5ac8345c46](https://www.loom.com/share/Building-AI-Agents-with-airkitai-8d7513639e8e4c7e9d8e449945f9f827?sid=fa0a1ef1-6aa5-4174-8923-2e5ac8345c46)
​ | 2023-09-14T21:31:49 | https://www.reddit.com/r/LocalLLaMA/comments/16ium1u/the_first_low_code_platform_for_ai_engineers/ | Brickchip | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ium1u | false | null | t3_16ium1u | /r/LocalLLaMA/comments/16ium1u/the_first_low_code_platform_for_ai_engineers/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'qeifkwwr4C5QSg58B_96VUqdBQ2DqQxNVOT5ptzpTrU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/owNWn__zv73jr2iy1sm5GuH1uE2FXSKjk4L1ofzHRTk.jpg?width=108&crop=smart&auto=webp&s=4bc3ef15bdc48068b5b46ef3d57ff16363a52a24', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/owNWn__zv73jr2iy1sm5GuH1uE2FXSKjk4L1ofzHRTk.jpg?width=216&crop=smart&auto=webp&s=a68f471906e1e4bb44a66502ae9fb007cc218a2b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/owNWn__zv73jr2iy1sm5GuH1uE2FXSKjk4L1ofzHRTk.jpg?width=320&crop=smart&auto=webp&s=352a4ad3daabba4c50607ceab37d57dbde36615b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/owNWn__zv73jr2iy1sm5GuH1uE2FXSKjk4L1ofzHRTk.jpg?width=640&crop=smart&auto=webp&s=4855def75f13e0f454d8002e99c2081e81efa17f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/owNWn__zv73jr2iy1sm5GuH1uE2FXSKjk4L1ofzHRTk.jpg?width=960&crop=smart&auto=webp&s=ec565104753e210f8f04c326bb3d5c3377b97b01', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/owNWn__zv73jr2iy1sm5GuH1uE2FXSKjk4L1ofzHRTk.jpg?width=1080&crop=smart&auto=webp&s=4cf377537671f6d20f3c87f699dfa3278e55caef', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/owNWn__zv73jr2iy1sm5GuH1uE2FXSKjk4L1ofzHRTk.jpg?auto=webp&s=6b12f9a08661b4c750606ac85499dd355cbd4d07', 'width': 2400}, 'variants': {}}]} |
AgentVerse Tweet | 3 | 2023-09-14T21:21:47 | https://x.com/_akhaliq/status/1702336679200411697?s=20 | ninjasaid13 | x.com | 1970-01-01T00:00:00 | 0 | {} | 16iuckj | false | null | t3_16iuckj | /r/LocalLLaMA/comments/16iuckj/agentverse_tweet/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'ZTimAguvRBSAjvxetJaWSl1X0GzGpsavj2__vyaC3CU', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/hM4PFte2OkZRRl25PEeotd5322aQvpdS-t3521oV4Cg.jpg?width=108&crop=smart&auto=webp&s=1fc9e75140401dcabe6bb64f85867d724c02e629', 'width': 108}, {'height': 102, 'url': 'https://external-preview.redd.it/hM4PFte2OkZRRl25PEeotd5322aQvpdS-t3521oV4Cg.jpg?width=216&crop=smart&auto=webp&s=ffd21c8ceb52bb5c7a8b42bf130c480d52cb9898', 'width': 216}, {'height': 151, 'url': 'https://external-preview.redd.it/hM4PFte2OkZRRl25PEeotd5322aQvpdS-t3521oV4Cg.jpg?width=320&crop=smart&auto=webp&s=61af93ea3dddde3965266c33aca64f4c11fd724a', 'width': 320}, {'height': 303, 'url': 'https://external-preview.redd.it/hM4PFte2OkZRRl25PEeotd5322aQvpdS-t3521oV4Cg.jpg?width=640&crop=smart&auto=webp&s=e534bd19142c383fa2e57092b552fd3340b03bf9', 'width': 640}, {'height': 455, 'url': 'https://external-preview.redd.it/hM4PFte2OkZRRl25PEeotd5322aQvpdS-t3521oV4Cg.jpg?width=960&crop=smart&auto=webp&s=9012a82499562fd6eb189ee0e2061a2f6c1e55ff', 'width': 960}, {'height': 512, 'url': 'https://external-preview.redd.it/hM4PFte2OkZRRl25PEeotd5322aQvpdS-t3521oV4Cg.jpg?width=1080&crop=smart&auto=webp&s=7a50445beedffee7b23a096bd060ca3ee7f0335b', 'width': 1080}], 'source': {'height': 876, 'url': 'https://external-preview.redd.it/hM4PFte2OkZRRl25PEeotd5322aQvpdS-t3521oV4Cg.jpg?auto=webp&s=8271e209f0047ad46e1836939ee26756c6db04d1', 'width': 1846}, 'variants': {}}]} | ||
EVGA has B Stock 3090s for $799. If you are looking for a 3090 for LLM, that's less risky than ebay and comes with a 1 year warranty. | 41 | 2023-09-14T19:45:21 | https://www.evga.com/Products/ProductList.aspx?type=8&family=GeForce+30+Series+Family | fallingdowndizzyvr | evga.com | 1970-01-01T00:00:00 | 0 | {} | 16irv4q | false | null | t3_16irv4q | /r/LocalLLaMA/comments/16irv4q/evga_has_b_stock_3090s_for_799_if_you_are_looking/ | false | false | default | 41 | {'enabled': False, 'images': [{'id': 'Ruit8EXjtCUHoA47xon6Jgy0DyGgbBpqEhT4bxHb7Zw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/chuNpJIfSq758UCgRgDUBXSMk3vZuU-gaxgFc2uSM-c.jpg?width=108&crop=smart&auto=webp&s=0a3d8edf41c42cce7c42aa32cedfb7e16911a94b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/chuNpJIfSq758UCgRgDUBXSMk3vZuU-gaxgFc2uSM-c.jpg?width=216&crop=smart&auto=webp&s=88169cc6042737fc9ebee342269040858d54e762', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/chuNpJIfSq758UCgRgDUBXSMk3vZuU-gaxgFc2uSM-c.jpg?width=320&crop=smart&auto=webp&s=fa4772501f48229c9e0d32a1312e3a4393f255bc', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/chuNpJIfSq758UCgRgDUBXSMk3vZuU-gaxgFc2uSM-c.jpg?width=640&crop=smart&auto=webp&s=27ebfbce5c3222d07b7650171fb11146b4ce7b01', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/chuNpJIfSq758UCgRgDUBXSMk3vZuU-gaxgFc2uSM-c.jpg?width=960&crop=smart&auto=webp&s=08f281d149b591936aa98c36683093861a53452d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/chuNpJIfSq758UCgRgDUBXSMk3vZuU-gaxgFc2uSM-c.jpg?width=1080&crop=smart&auto=webp&s=e1aaf12fd280f979444c8876b056da4417731ee6', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/chuNpJIfSq758UCgRgDUBXSMk3vZuU-gaxgFc2uSM-c.jpg?auto=webp&s=22d6fc2f09bea921f12c35a379b4ae62c98120dc', 'width': 1200}, 'variants': {}}]} | |
How can I make Llama models run faster on an M1 Max Macbook Pro? | 1 | I've been tinkering around with some models like Nous Hermes Llama2 and even at 70b and Q5\_K\_M they're not quite the quality I'd like from a llvm. They also take about 5 minutes to complete. Going up to Q6 makes the model spit out like one word every two minutes, and to add insult to injury it just rejected my prompt (and it wasn't even that bad). I'm running these prompts in [text generation web ui](https://github.com/oobabooga/text-generation-webui) without really messing with the settings because I don't understand them. I see there's different guis I can use, do they really make a difference? In order to, say, run nous hermes at Q8 will I need to shell out for a rtx 4090 or is it doable with my super expensive macbook? | 2023-09-14T19:10:14 | https://www.reddit.com/r/LocalLLaMA/comments/16ir0gx/how_can_i_make_llama_models_run_faster_on_an_m1/ | Aggravating-Cycle849 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ir0gx | false | null | t3_16ir0gx | /r/LocalLLaMA/comments/16ir0gx/how_can_i_make_llama_models_run_faster_on_an_m1/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'FOXRcjq6EmV89ZPB-n-5TMZxyzpll7exPtcP9cbhSp0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/REuTYd8PlyiYPTl61QJBZlbtV-Ew0MPwzYmorXef_bk.jpg?width=108&crop=smart&auto=webp&s=25cecfc12b5e14f197824f92c8ffdec5bd758e0e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/REuTYd8PlyiYPTl61QJBZlbtV-Ew0MPwzYmorXef_bk.jpg?width=216&crop=smart&auto=webp&s=1b1b9e58717ee9d1fc46751a62f75047dd54246b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/REuTYd8PlyiYPTl61QJBZlbtV-Ew0MPwzYmorXef_bk.jpg?width=320&crop=smart&auto=webp&s=7165d232f2e5c9c7709865d1b5f2524efe0a93c6', 'width': 320}, {'height': 320, 'url': 
'https://external-preview.redd.it/REuTYd8PlyiYPTl61QJBZlbtV-Ew0MPwzYmorXef_bk.jpg?width=640&crop=smart&auto=webp&s=533cf2923f608fe25c9a3119b26e9ad803fb39fa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/REuTYd8PlyiYPTl61QJBZlbtV-Ew0MPwzYmorXef_bk.jpg?width=960&crop=smart&auto=webp&s=4f66e6f21205a04c541ac275d957dae7c6c63d92', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/REuTYd8PlyiYPTl61QJBZlbtV-Ew0MPwzYmorXef_bk.jpg?width=1080&crop=smart&auto=webp&s=00b4239be9b7a8034ebfe3839a88b924a5fddd25', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/REuTYd8PlyiYPTl61QJBZlbtV-Ew0MPwzYmorXef_bk.jpg?auto=webp&s=4fc98c51483cb48c580cfff4195ace8979e0d5b2', 'width': 1200}, 'variants': {}}]} |
Recommendations for an open source LLM front end? | 1 | Anyone have suggestions for an open source web-based front end for a conversational AI like Llama? I am using [TheBloke/Llama-2-13B-chat-GGUF/llama-2-13b-chat.Q4\_K\_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/tree/main) with [h2ogpt](https://github.com/h2oai/h2ogpt).
I am wondering if there's a more minimalist option out there. | 2023-09-14T19:02:51 | https://www.reddit.com/r/LocalLLaMA/comments/16iqttw/recommendations_for_an_open_source_llm_front_end/ | PumpkinCurryLover | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16iqttw | false | null | t3_16iqttw | /r/LocalLLaMA/comments/16iqttw/recommendations_for_an_open_source_llm_front_end/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'HmCost0PHUStH4CIQyopO6sHUeNQMwjXw6cWlMAT6W4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZiVsYqYJJXC5Wtg9jErByB57rKU04QIW-wKSulXntAQ.jpg?width=108&crop=smart&auto=webp&s=d917fe654a56f1f55cb5059b559eef2e7a6c0201', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZiVsYqYJJXC5Wtg9jErByB57rKU04QIW-wKSulXntAQ.jpg?width=216&crop=smart&auto=webp&s=9d003696240138ad91b420d8aca3449312ec2ea7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZiVsYqYJJXC5Wtg9jErByB57rKU04QIW-wKSulXntAQ.jpg?width=320&crop=smart&auto=webp&s=5da7bcf376ce35207b7b78318b8b5e9b9f7155b1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZiVsYqYJJXC5Wtg9jErByB57rKU04QIW-wKSulXntAQ.jpg?width=640&crop=smart&auto=webp&s=6fd36ad31540ca0751a6dde616b4b56050c54023', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZiVsYqYJJXC5Wtg9jErByB57rKU04QIW-wKSulXntAQ.jpg?width=960&crop=smart&auto=webp&s=411f3876ce874a5542bf0874c51381cc312fc7ca', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZiVsYqYJJXC5Wtg9jErByB57rKU04QIW-wKSulXntAQ.jpg?width=1080&crop=smart&auto=webp&s=fe2ddd0f2230fab5636134fe15b7e093e8320361', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZiVsYqYJJXC5Wtg9jErByB57rKU04QIW-wKSulXntAQ.jpg?auto=webp&s=f510d6838069beaee9fabc8ced9dece0559c388c', 'width': 1200}, 'variants': {}}]} |
Llama API-as-a-service? | 1 | I need to use a Llama model but i don’t want to host it myself, are there any good services besides replicate that offer some sort of API for Llama 2 variants that i could inference off of? | 2023-09-14T18:02:46 | https://www.reddit.com/r/LocalLLaMA/comments/16ipb22/llama_apiasaservice/ | NickDifuze | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ipb22 | false | null | t3_16ipb22 | /r/LocalLLaMA/comments/16ipb22/llama_apiasaservice/ | false | false | self | 1 | null |
Which fuzzy text search tool is better? Not in terms of search speed, but in terms of quality. | 1 | I have compared searching using the chromadb database and the Python fuzzywuzzy library. It seems that fuzzywuzzy provides even better results. However, I suspect that my experiment might not be entirely accurate, and I may be doing something wrong. | 2023-09-14T17:49:52 | https://www.reddit.com/r/LocalLLaMA/comments/16ioz2o/which_fuzzy_text_search_tool_is_better_not_in/ | Pristine-Tax4418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ioz2o | false | null | t3_16ioz2o | /r/LocalLLaMA/comments/16ioz2o/which_fuzzy_text_search_tool_is_better_not_in/ | false | false | self | 1 | null |
how does a model summarize a super long book with only a 4k context window? | 13 | question that's been on my mind, if a model is asked to summarize a super long book that is has not been trained, how does it do it since most context windows around between 4k-32k? | 2023-09-14T17:44:12 | https://www.reddit.com/r/LocalLLaMA/comments/16iou2f/how_does_a_model_summarize_a_super_long_book_with/ | konrad21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16iou2f | false | null | t3_16iou2f | /r/LocalLLaMA/comments/16iou2f/how_does_a_model_summarize_a_super_long_book_with/ | false | false | self | 13 | null |
I am not a scammer. I am advancing AI research through open source. | 1 | [removed] | 2023-09-14T17:26:28 | https://www.reddit.com/r/LocalLLaMA/comments/16iodwn/i_am_not_a_scammer_i_am_advancing_ai_research/ | Scary-Present4451 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16iodwn | false | null | t3_16iodwn | /r/LocalLLaMA/comments/16iodwn/i_am_not_a_scammer_i_am_advancing_ai_research/ | false | false | self | 1 | null |
Searching for chunking algorithms and repo | 1 | Hi everyone!
I'm still experimenting with my own implementation of RAG, and I deployed my own custom chunking function (I honestly don't like the methods in LangChain).

Anyway, I'm searching for alternative methods, algorithms (NLP or not), and models... There is lots of info and many different implementations of RAG, but as far as I can see, no one puts much effort into improving chunking quality.

Also, is there any other approach than the one I'm currently using? bi-encoder (instructor) -> cross-encoder (reranking) -> LLM

Can someone share some resources, repos, libs, or existing implementations of different chunking methods?

(or simply discuss some ideas, thoughts, or approaches here)

*Thanks in advance for your time!!* | 2023-09-14T16:35:00 | https://www.reddit.com/r/LocalLLaMA/comments/16in2bg/searching_for_chunking_algorithms_and_repo/ | Distinct-Target7503 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16in2bg | false | null | t3_16in2bg | /r/LocalLLaMA/comments/16in2bg/searching_for_chunking_algorithms_and_repo/ | false | false | self | 1 | null |
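In case it helps the discussion in this row — a minimal baseline chunker in plain Python (no external deps): greedy sentence packing with a character budget and sentence-level overlap. The regex sentence splitter is deliberately naive; everything here is a sketch to compare against, not a replacement for a production splitter:

```python
import re

def sentence_chunks(text: str, max_chars: int = 500, overlap: int = 1) -> list[str]:
    """Greedy sentence-packing chunker.

    Sentences are packed into a chunk until adding the next one would
    exceed max_chars; each new chunk re-starts with the last `overlap`
    sentences of the previous chunk to preserve context across boundaries.
    """
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    chunks, current = [], []
    for sent in sentences:
        if current and len(" ".join(current + [sent])) > max_chars:
            chunks.append(" ".join(current))
            current = current[-overlap:] if overlap else []
        current.append(sent)
    if current:
        chunks.append(" ".join(current))
    return chunks

text = "One. Two. Three. Four. Five."
chunks = sentence_chunks(text, max_chars=12, overlap=1)
```

Swapping the character budget for a tokenizer-based count, or the regex for a proper sentence segmenter, are the obvious next steps.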
Recommend a Local LLM for low spec laptop | 9 | I have 8gb ram and 2gb vram. Suggest me an LLM. I'm a total noob to using LLMs.
I want it to help me write stories. So I input a long text and I want the model to give me the next sentence. | 2023-09-14T16:06:47 | https://www.reddit.com/r/LocalLLaMA/comments/16imcc0/recommend_a_local_llm_for_low_spec_laptop/ | Little-Shoulder-5835 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16imcc0 | false | null | t3_16imcc0 | /r/LocalLLaMA/comments/16imcc0/recommend_a_local_llm_for_low_spec_laptop/ | false | false | self | 9 | null |
i am testing to see if i can do top level posts | 1 | [removed] | 2023-09-14T15:26:58 | https://www.reddit.com/r/LocalLLaMA/comments/16ilcrn/i_am_testing_to_see_if_i_can_do_top_level_posts/ | MrEloi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ilcrn | false | null | t3_16ilcrn | /r/LocalLLaMA/comments/16ilcrn/i_am_testing_to_see_if_i_can_do_top_level_posts/ | false | false | default | 1 | null |
Question about fine tuning | 6 | I’m curious how you would fine tune (or QLORA) with these purposes (separately):
1) you have a large collection of stories by an author and you want to write more stories in their style
2) you have a large collection of codebases that you want to be able to ask questions about
I’m unclear about when you’d use instruction vs chat. Also I don’t really understand what “question” would lead to each codebase, or each story, etc. | 2023-09-14T15:23:25 | https://www.reddit.com/r/LocalLLaMA/comments/16il9li/question_about_fine_tuning/ | maxiedaniels | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16il9li | false | null | t3_16il9li | /r/LocalLLaMA/comments/16il9li/question_about_fine_tuning/ | false | false | self | 6 | null |