title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
How to adapt a single LLM to several tasks
2
Hi, I am searching for a parameter-efficient method to adapt a single LLM to several different NLP tasks (think of a single system that should perform classification, NER, relation extraction, etc.). The most obvious option is surely to train the LLM on all tasks simultaneously, e.g. using LoRA. However, whenever I want to add a new task, this requires me to re-train the model and hence touch all existing tasks. A better solution would be some kind of adapter-like structure where the base LLM is frozen and I have one adapter per task. I saw that regular LoRA cannot serve multiple tasks at the same time, as the update matrix is normally merged directly into the base model to reduce latency. One could apply the matrices one after another, but that may be rather slow? Are there any good alternatives?
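A minimal numpy sketch of the adapter-per-task idea described above (toy shapes, made-up task names): keep the base weight W frozen, keep one low-rank (A, B) pair per task, and apply the update on the fly instead of merging it. Merging is only a latency optimization, not a requirement, so un-merged adapters can be swapped per request.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4                   # hidden size and LoRA rank (toy values)
W = rng.normal(size=(d, d))    # frozen base weight

# one (A, B) pair per task; only these would be trained
adapters = {
    "ner":            (rng.normal(size=(r, d)), rng.normal(size=(d, r))),
    "classification": (rng.normal(size=(r, d)), rng.normal(size=(d, r))),
}

def forward(x, task):
    """y = W x + B_t (A_t x): frozen base output plus the task's low-rank update."""
    A, B = adapters[task]
    return W @ x + B @ (A @ x)

x = rng.normal(size=d)
y_ner = forward(x, "ner")

# merging (W + B A) gives the exact same output, so keeping the adapters
# separate costs one extra low-rank matmul but lets you switch tasks freely
A, B = adapters["ner"]
y_merged = (W + B @ A) @ x
assert np.allclose(y_ner, y_merged)
```

In practice, libraries such as `peft` expose this pattern directly: you can load several named adapters onto one frozen base model and switch between them with `set_adapter`, without ever merging.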
2023-08-21T07:47:10
https://www.reddit.com/r/LocalLLaMA/comments/15x0qzl/how_to_adapt_a_single_llm_to_several_tasks/
Neeeeext
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15x0qzl
false
null
t3_15x0qzl
/r/LocalLLaMA/comments/15x0qzl/how_to_adapt_a_single_llm_to_several_tasks/
false
false
self
2
null
Using Flash Attention with Llama 2
3
Hi guys, has anyone here been able to successfully incorporate Flash Attention for fine-tuning a Llama 2 model? I found a patch in [this blog post](https://www.philschmid.de/instruction-tune-llama-2) that replaces the attention layers, but for some reason it blows up my training and validation loss: it's 7 times bigger than the loss on runs without Flash Attention enabled. I'd be grateful for any guidance.
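One useful sanity check here (a hypothetical standalone check, not the code from the linked post): Flash Attention is an exact, tiled reformulation of softmax attention, so a correct replacement layer must match the naive computation to numerical precision. If the patched model's loss jumps 7x, the substituted layer is almost certainly computing a different function (wrong scaling, masking, or head layout). A numpy sketch of the equivalence, with an online-softmax tiled pass compared against the naive version:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 32, 8                              # sequence length, head dim (toy sizes)
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
scale = 1.0 / np.sqrt(d)

def naive_attention(Q, K, V):
    s = (Q @ K.T) * scale
    p = np.exp(s - s.max(axis=-1, keepdims=True))
    return (p / p.sum(axis=-1, keepdims=True)) @ V

def tiled_attention(Q, K, V, block=8):
    """Flash-style pass: process K/V in blocks with a running max and sum."""
    n_q = Q.shape[0]
    out = np.zeros_like(Q)
    m = np.full(n_q, -np.inf)             # running row max
    l = np.zeros(n_q)                     # running softmax denominator
    for j in range(0, K.shape[0], block):
        s = (Q @ K[j:j + block].T) * scale
        m_new = np.maximum(m, s.max(axis=-1))
        correction = np.exp(m - m_new)    # rescale previous partial results
        p = np.exp(s - m_new[:, None])
        l = l * correction + p.sum(axis=-1)
        out = out * correction[:, None] + p @ V[j:j + block]
        m = m_new
    return out / l[:, None]

# the tiled pass must agree with the naive one; a broken patch would not
assert np.allclose(naive_attention(Q, K, V), tiled_attention(Q, K, V))
```

Recent versions of `transformers` also expose this without hand-patching, via `from_pretrained(..., attn_implementation="flash_attention_2")`, which avoids replacing layers yourself.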
2023-08-21T07:18:07
https://www.reddit.com/r/LocalLLaMA/comments/15x08bp/using_flash_attention_with_llama_2/
mr_dicaprio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15x08bp
false
null
t3_15x08bp
/r/LocalLLaMA/comments/15x08bp/using_flash_attention_with_llama_2/
false
false
self
3
{'enabled': False, 'images': [{'id': 'NYy7vS_DCF7ziYozZI5NewU4mrQpjLxWwJIEeoOeoTE', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=108&crop=smart&auto=webp&s=4768a7f3ce8e98b65ec2928dd27be69d13817653', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=216&crop=smart&auto=webp&s=f597cbd4fbbce7835de2c3ddf57bea4be32791f5', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=320&crop=smart&auto=webp&s=63abbf41f12bdd3f3a744092849dea63858626f3', 'width': 320}, {'height': 394, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=640&crop=smart&auto=webp&s=8c350290c3032da07ffd1380750949fe1a6eddec', 'width': 640}, {'height': 591, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=960&crop=smart&auto=webp&s=eb6f8491e988e2a9cbc7ff3ab2a8f7d3c829b09f', 'width': 960}, {'height': 665, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?width=1080&crop=smart&auto=webp&s=c63d9cb2ef67160c0d0c200ae7b5a4b86e3e4148', 'width': 1080}], 'source': {'height': 1478, 'url': 'https://external-preview.redd.it/_9syY01yJIJrp0yKlHYQtp3ZiCXO2_-fNjsDLCStuhc.jpg?auto=webp&s=b98be99841a14dfc0937f46c8910ea6847ab32b0', 'width': 2400}, 'variants': {}}]}
What makes ChatGPT so powerful?
58
Hi all, I'm attempting to train an LLM and want to know what makes ChatGPT so powerful compared to other models. I have read about the RLHF training that OpenAI used for GPT-3.5 and GPT-4, and it seems to be the only difference from previous models. I have also read that parameter count vastly increases the quality of the model's output but then plateaus; can anyone confirm this or provide more insight?
2023-08-21T06:36:05
https://www.reddit.com/r/LocalLLaMA/comments/15wzh2i/what_makes_chatgpt_so_powerful/
JakeN9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wzh2i
false
null
t3_15wzh2i
/r/LocalLLaMA/comments/15wzh2i/what_makes_chatgpt_so_powerful/
false
false
self
58
null
how to allow a LLM to use the internet?
6
Has this been worked on yet? I'd love to be able to give a website as context and have the LLM read its contents, like with Bing Chat.
2023-08-21T06:06:57
https://www.reddit.com/r/LocalLLaMA/comments/15wyyly/how_to_allow_a_llm_to_use_the_internet/
actualmalding
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wyyly
false
null
t3_15wyyly
/r/LocalLLaMA/comments/15wyyly/how_to_allow_a_llm_to_use_the_internet/
false
false
self
6
null
Fine-tune for logic?
4
Hi, I have use cases that rely on LLMs doing some logical reasoning: following a written workflow and calling virtual functions, not chatting but behaving like a computer calling software functions. Right now the only LLM that does this is GPT-4; every other LLM gets sidetracked after a few messages, starts hallucinating, starts chatting instead of sticking to the original question, and so on. I do not want to use GPT-4 but would rather use open-source solutions (for the anti-monolithic sentiment, not the money). Is there already a fine-tuned model like that, and if not, should I ask GPT-4 to create 100 examples and then use those to fine-tune a Llama 2 or Free Willy? Any advice appreciated.
2023-08-21T05:41:24
https://www.reddit.com/r/LocalLLaMA/comments/15wyhre/finetune_for_logic/
ComprehensiveBird317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wyhre
false
null
t3_15wyhre
/r/LocalLLaMA/comments/15wyhre/finetune_for_logic/
false
false
self
4
null
what is LocalLLaMA like compared to ChatGPT or google bard? or GPT 4? are there some things LocalLLaMA will ethically refuse to do for you like the other three?
2
[removed]
2023-08-21T04:44:12
https://www.reddit.com/r/LocalLLaMA/comments/15wxegu/what_is_localllama_like_compared_to_chatgpt_or/
Username9822
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wxegu
false
null
t3_15wxegu
/r/LocalLLaMA/comments/15wxegu/what_is_localllama_like_compared_to_chatgpt_or/
false
false
self
2
null
what is LocalLLaMA like compared to ChatGPT or google bard? or GPT 4? are there some things LocalLLaMA will ethically refuse to do for you like the other two?
1
[removed]
2023-08-21T04:43:39
https://www.reddit.com/r/LocalLLaMA/comments/15wxe3x/what_is_localllama_like_compared_to_chatgpt_or/
Username9822
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wxe3x
false
null
t3_15wxe3x
/r/LocalLLaMA/comments/15wxe3x/what_is_localllama_like_compared_to_chatgpt_or/
false
false
self
1
null
StableDiffusion CPP
120
Thought I'd share this here, it's kinda related. Found this on GitHub today: https://github.com/leejet/stable-diffusion.cpp, a GGML port of Stable Diffusion with CPU inference :)
2023-08-21T03:26:18
https://www.reddit.com/r/LocalLLaMA/comments/15wvtlk/stablediffusion_cpp/
noellarkin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wvtlk
false
null
t3_15wvtlk
/r/LocalLLaMA/comments/15wvtlk/stablediffusion_cpp/
false
false
self
120
{'enabled': False, 'images': [{'id': 'LvBftIN2Vk7w8f2fyyHhw_6fxeM8t9F7BpV-xqOihkU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OfSas9CNwoFdkQ5R-gLaMLMy9D3UHj-o6iyTxSPZtWw.jpg?width=108&crop=smart&auto=webp&s=faa150707769bee9edc1a66382c6be537c0a3949', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OfSas9CNwoFdkQ5R-gLaMLMy9D3UHj-o6iyTxSPZtWw.jpg?width=216&crop=smart&auto=webp&s=506906c80b5dfdee08d6f0fea121a80f619bc9dd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OfSas9CNwoFdkQ5R-gLaMLMy9D3UHj-o6iyTxSPZtWw.jpg?width=320&crop=smart&auto=webp&s=c2270254410850734d2d96fe3dbe899cb3a8d74b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OfSas9CNwoFdkQ5R-gLaMLMy9D3UHj-o6iyTxSPZtWw.jpg?width=640&crop=smart&auto=webp&s=9b8d84fef2b9f771e4a8866ef3ef5a1875f7f132', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OfSas9CNwoFdkQ5R-gLaMLMy9D3UHj-o6iyTxSPZtWw.jpg?width=960&crop=smart&auto=webp&s=a81b2705e5dd830c30c5f8e3d5e9cb6e36e31116', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OfSas9CNwoFdkQ5R-gLaMLMy9D3UHj-o6iyTxSPZtWw.jpg?width=1080&crop=smart&auto=webp&s=77df6fc3f6d19433cd122e73e0f3850f4150b05e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OfSas9CNwoFdkQ5R-gLaMLMy9D3UHj-o6iyTxSPZtWw.jpg?auto=webp&s=d661961560a8deb72475a2621f2e94957cb91c01', 'width': 1200}, 'variants': {}}]}
Inference speed on windows vs Linux with GPTQ (exllama hf) on dual 3090
6
Has anyone compared the inference speeds for 65B models on Windows vs Linux? I'm reading very conflicting posts, with some saying there's only a minor difference while others claim almost double the t/s. I'm building a system with dual 3090s, a Ryzen 5900X, and 128GB RAM. I would prefer to stay on Windows as that would make the system a little more useful to me for other tasks. I know about WSL and may experiment with that, but was wondering if anyone has experimented with this already.
2023-08-21T03:18:09
https://www.reddit.com/r/LocalLLaMA/comments/15wvnh0/inference_speed_on_windows_vs_linux_with_gptq/
hedonihilistic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wvnh0
false
null
t3_15wvnh0
/r/LocalLLaMA/comments/15wvnh0/inference_speed_on_windows_vs_linux_with_gptq/
false
false
self
6
null
Help with categorisation of social media posts
2
Hello everyone, I'm a researcher in education, studying how teachers use social media to collaborate and share resources. I have access to thousands of social media posts (yes, I have gone through appropriate ethical approval and have notified the members of the groups). I was looking at the possibility of using an LLM to categorise each post into pre-made categories, such as: a) sharing lesson ideas, b) asking a question / asking for help, c) advertising professional learning programs, etc. I'm new to LLMs, so I'm open to any advice people may have. I currently have Oobabooga running on a local machine; my plan was to fiddle with the prompt first, and then possibly train my own LoRA. Am I completely in the wrong ballpark, or is this something that could work? Thank you everyone for any help you have to offer.
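For the categorisation idea above, prompting alone is usually the cheapest thing to try before training a LoRA. A minimal sketch (the category names come from the post; the commented-out `generate` call is a placeholder for whatever backend runs locally):

```python
CATEGORIES = [
    "sharing lesson ideas",
    "asking a question / asking for help",
    "advertising professional learning programs",
    "other",
]

def build_prompt(post: str) -> str:
    """Build a constrained single-label classification prompt."""
    options = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(CATEGORIES))
    return (
        "Classify the following teacher social-media post into exactly one "
        "category. Reply with the category number only.\n\n"
        f"Categories:\n{options}\n\nPost:\n{post}\n\nCategory number:"
    )

def parse_label(reply: str) -> str:
    """Map the model's reply back to a category, defaulting to 'other'."""
    for i, c in enumerate(CATEGORIES):
        if reply.strip().startswith(str(i + 1)):
            return c
    return "other"

prompt = build_prompt("Here is a free fractions worksheet I made!")
# reply = generate(prompt)   # <- your local model call goes here
assert parse_label("1") == "sharing lesson ideas"
```

Asking for a number and parsing defensively keeps free-form chat output out of the labels; hand-labeling a few hundred posts to measure agreement against the model is a reasonable first validation step.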
2023-08-21T03:16:01
https://www.reddit.com/r/LocalLLaMA/comments/15wvluv/help_with_categorisation_of_social_media_posts/
Parrallaxx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wvluv
false
null
t3_15wvluv
/r/LocalLLaMA/comments/15wvluv/help_with_categorisation_of_social_media_posts/
false
false
self
2
null
The Secret Sauce of LLaMA🦙 : A Deep Dive!
1
[removed]
2023-08-21T02:22:05
https://www.reddit.com/r/LocalLLaMA/comments/15wufxm/the_secret_sauce_of_llama_a_deep_dive/
rajanghimire534
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wufxm
false
null
t3_15wufxm
/r/LocalLLaMA/comments/15wufxm/the_secret_sauce_of_llama_a_deep_dive/
false
false
self
1
{'enabled': False, 'images': [{'id': 'pVYP7mVyiCk5Z1zvg0uVh2WbQdlZ7xkpGTfZBhn8jOs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/_wyTnnkXfbSc-vbah-1HBtauDsJHK1mnf1zSCOr3ttc.jpg?width=108&crop=smart&auto=webp&s=b5a494ba471046f2b0a5b6ec1708b0b1594d2dbe', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/_wyTnnkXfbSc-vbah-1HBtauDsJHK1mnf1zSCOr3ttc.jpg?width=216&crop=smart&auto=webp&s=b4760c2afba67d850f6b8ae30d09dcfb3fdec046', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/_wyTnnkXfbSc-vbah-1HBtauDsJHK1mnf1zSCOr3ttc.jpg?width=320&crop=smart&auto=webp&s=fdca63fd328a9685eb00df9bdd118eb7ecce79bc', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/_wyTnnkXfbSc-vbah-1HBtauDsJHK1mnf1zSCOr3ttc.jpg?width=640&crop=smart&auto=webp&s=2a3dfa14ffe0c342c3015d2d11b444f87b15ba71', 'width': 640}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/_wyTnnkXfbSc-vbah-1HBtauDsJHK1mnf1zSCOr3ttc.jpg?auto=webp&s=ec999d25047cac9e59e30e7417f3441586be489a', 'width': 800}, 'variants': {}}]}
Finetuning question
4
Hey all, so I am trying my first fine-tune. I am following something like this notebook: [Azure-Samples miyagi fine-tuning notebook](https://github.com/Azure-Samples/miyagi/blob/4550d5fa2118cf04734bc6f587957715577cfb0b/sandbox/fine-tuning/Llama2/GK_Fine_tune_Llama_2_Miyagi.ipynb#L205), and the dataset they use is [thegovind/llamav2-instruct-miyagi](https://huggingface.co/datasets/thegovind/llamav2-instruct-miyagi/tree/main). So, two questions. First, I don't want to upload my dataset to Hugging Face, so I am loading it from a local file:

    from datasets import load_dataset, Dataset, Features, Value

    Dataset.cleanup_cache_files
    context_feat = Features({'text': Value(dtype='string', id=None)})
    dataset = load_dataset(
        "csv",
        data_files="data/gpt4_training.csv",
        split="train",
        delimiter=',',
        column_names=['text'],
        skiprows=1,
        features=context_feat,
    )

I had to add the features (otherwise it was complaining), but I don't know if this is the right thing to do. Second, my dataset looks exactly like the linked dataset: a single column named `text`, with each cell in the Llama chat format:

    <s>[INST] <<SYS>> You are a helpful assistant <</SYS>>{context+question} [/INST] {answer}</s>

What I am not understanding is: if I run this through the SFTTrainer, does it really understand that it needs to learn how to continue past the {context+question}? What is the loss function really measuring here?
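On the loss question above: by default `SFTTrainer` applies the standard causal-LM cross-entropy over every token, prompt included. To train only on the answer, the prompt tokens are masked with the ignore index -100 (in TRL this is what `DataCollatorForCompletionOnlyLM` does, keyed on a response marker such as `[/INST]`). A toy sketch of the masking, using a fake word-level tokenizer:

```python
IGNORE_INDEX = -100  # positions with this label are skipped by the CE loss

def mask_prompt(token_ids, response_start):
    """Labels equal the input ids, except prompt positions become -100."""
    return [IGNORE_INDEX] * response_start + token_ids[response_start:]

# pretend tokenization of "<s>[INST] ... [/INST] {answer}</s>"
tokens = ["<s>", "[INST]", "What", "is", "2+2?", "[/INST]", "4", "</s>"]
ids = list(range(len(tokens)))
response_start = tokens.index("[/INST]") + 1

labels = mask_prompt(ids, response_start)
assert labels == [-100, -100, -100, -100, -100, -100, 6, 7]
# the loss is now measured only on "4" and "</s>": the model learns to
# continue past the prompt rather than to reproduce the prompt itself
```

Without the completion-only collator, training still works (the model also learns to predict the prompt tokens), but masking focuses all the gradient signal on the answers.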
2023-08-21T02:21:55
https://www.reddit.com/r/LocalLLaMA/comments/15wuft9/finetuning_question/
Alert_Record5063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wuft9
false
null
t3_15wuft9
/r/LocalLLaMA/comments/15wuft9/finetuning_question/
false
false
self
4
{'enabled': False, 'images': [{'id': 'MD9LFS58q5nlWj9wx91E-73MXqI0MmkwNYxj8FXj_cA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RR2dU746hERIQJjChhzujNswSehF2l5kZf2cmX89ZTY.jpg?width=108&crop=smart&auto=webp&s=c43783ecaf2cc12b94e68e773595610a3fe03c9c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RR2dU746hERIQJjChhzujNswSehF2l5kZf2cmX89ZTY.jpg?width=216&crop=smart&auto=webp&s=2015af3055c576f712c630931a80ff3239b80189', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RR2dU746hERIQJjChhzujNswSehF2l5kZf2cmX89ZTY.jpg?width=320&crop=smart&auto=webp&s=311c3a0c179ba2b1fb7992cb31a02e959d8cccb7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RR2dU746hERIQJjChhzujNswSehF2l5kZf2cmX89ZTY.jpg?width=640&crop=smart&auto=webp&s=a1598b6d2dd7e68ad750432ea9c6166ab7c4a777', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RR2dU746hERIQJjChhzujNswSehF2l5kZf2cmX89ZTY.jpg?width=960&crop=smart&auto=webp&s=51e39a31cdad8f33259c82a41266e085b48b1299', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RR2dU746hERIQJjChhzujNswSehF2l5kZf2cmX89ZTY.jpg?width=1080&crop=smart&auto=webp&s=b689fb39e0c6cd8427741ff523792a181979d831', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RR2dU746hERIQJjChhzujNswSehF2l5kZf2cmX89ZTY.jpg?auto=webp&s=f4deb2e4543d2d5fe8d182d669e85d95fe11d632', 'width': 1200}, 'variants': {}}]}
Exploring LLMs and prompts: A guide to the PromptTools Playground
0
2023-08-21T00:26:36
https://blog.streamlit.io/exploring-llms-and-prompts-a-guide-to-the-prompttools-playground/
hegel-ai
blog.streamlit.io
1970-01-01T00:00:00
0
{}
15wrvvk
false
null
t3_15wrvvk
/r/LocalLLaMA/comments/15wrvvk/exploring_llms_and_prompts_a_guide_to_the/
false
false
https://b.thumbs.redditm…moA1du7UHQyA.jpg
0
{'enabled': False, 'images': [{'id': 'MZGdnK0AwQ6yO1n7aVF9grbr-VDb0bVEsjbFgIG3Zro', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ER2SO4JOuHgaAKW-8W-gvW5bK_t4Yf7K2EbuElQSOhE.jpg?width=108&crop=smart&auto=webp&s=3e68f95c8ed6c4d974bcc00139865f066e8bd73d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/ER2SO4JOuHgaAKW-8W-gvW5bK_t4Yf7K2EbuElQSOhE.jpg?width=216&crop=smart&auto=webp&s=5b34ddac4a4c5b5027f1bd2480de40a1cc617c2e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/ER2SO4JOuHgaAKW-8W-gvW5bK_t4Yf7K2EbuElQSOhE.jpg?width=320&crop=smart&auto=webp&s=72635d6714a1de62f352af3a3b242533745a78ec', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/ER2SO4JOuHgaAKW-8W-gvW5bK_t4Yf7K2EbuElQSOhE.jpg?width=640&crop=smart&auto=webp&s=ebe642ec49faed95cdad4c785b46386feca1ab42', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/ER2SO4JOuHgaAKW-8W-gvW5bK_t4Yf7K2EbuElQSOhE.jpg?width=960&crop=smart&auto=webp&s=6c7014d2e95a3afa5b50f62d2206a6e434aab211', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/ER2SO4JOuHgaAKW-8W-gvW5bK_t4Yf7K2EbuElQSOhE.jpg?width=1080&crop=smart&auto=webp&s=4e9659fec11b50ab2ef29d0cd3e468d3acf3111e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/ER2SO4JOuHgaAKW-8W-gvW5bK_t4Yf7K2EbuElQSOhE.jpg?auto=webp&s=0f793239e50f6d8b697c360c285917ab9015f072', 'width': 1200}, 'variants': {}}]}
Anyone have experience with the RX580 16GB?
33
In a quest for the cheapest VRAM, I found that the RX580 with 16GB is even cheaper than the MI25. $65 for 16GB of VRAM is the lowest I've seen. Does anyone have any experience with it? It's not going to break any records with only 256GB/s of memory bandwidth, but it should be appreciably faster than CPU inference. For $65 it may be good performance per dollar.
2023-08-21T00:26:32
https://www.reddit.com/r/LocalLLaMA/comments/15wrvtg/anyone_have_experience_with_the_rx580_16gb/
fallingdowndizzyvr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wrvtg
false
null
t3_15wrvtg
/r/LocalLLaMA/comments/15wrvtg/anyone_have_experience_with_the_rx580_16gb/
false
false
self
33
null
DOLMA, largest curated text dataset for training just dropped - 3 TRILLION TOKENS.
68
[https://x.com/yampeleg/status/1693359681354265020?s=46](https://x.com/yampeleg/status/1693359681354265020?s=46)
2023-08-20T23:12:41
https://i.redd.it/uf8z1c10lcjb1.jpg
PookaMacPhellimen
i.redd.it
1970-01-01T00:00:00
0
{}
15wq66q
false
null
t3_15wq66q
/r/LocalLLaMA/comments/15wq66q/dolma_largest_curated_text_dataset_for_training/
false
false
https://b.thumbs.redditm…2t2QqjgqtNCo.jpg
68
{'enabled': True, 'images': [{'id': 'OdKhjTX6JIMz0F3F462pMm-snEea4_X0oPWsDXBKR7o', 'resolutions': [{'height': 113, 'url': 'https://preview.redd.it/uf8z1c10lcjb1.jpg?width=108&crop=smart&auto=webp&s=17f345440e7580b58ddbbb13e69ccbeefb6931a0', 'width': 108}, {'height': 227, 'url': 'https://preview.redd.it/uf8z1c10lcjb1.jpg?width=216&crop=smart&auto=webp&s=bfa916cc55a6b8569b916a1f3a918de822a31a62', 'width': 216}], 'source': {'height': 230, 'url': 'https://preview.redd.it/uf8z1c10lcjb1.jpg?auto=webp&s=9dfd66cef225baf8e66cb836ed916c8afaeb3905', 'width': 218}, 'variants': {}}]}
The moat is shrinking (?)
1
2023-08-20T22:45:09
https://i.redd.it/881v3hd1gcjb1.jpg
Amgadoz
i.redd.it
1970-01-01T00:00:00
0
{}
15wpi1z
false
null
t3_15wpi1z
/r/LocalLLaMA/comments/15wpi1z/the_moat_is_shrinking/
false
false
https://a.thumbs.redditm…1Mm5nupjax48.jpg
1
{'enabled': True, 'images': [{'id': 'X-nMOAWTSxdUY17qQ89hbi6E2C9VS6QzVqWV1D32EiY', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/881v3hd1gcjb1.jpg?width=108&crop=smart&auto=webp&s=20c22e4cf74c4d402e3b5daf911b851edbebcf0e', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/881v3hd1gcjb1.jpg?width=216&crop=smart&auto=webp&s=eb237ba2fd5d5b111a49a8b5de23928d4597f290', 'width': 216}, {'height': 341, 'url': 'https://preview.redd.it/881v3hd1gcjb1.jpg?width=320&crop=smart&auto=webp&s=8837707aa406b3f60ed4231f70ebfc159499b186', 'width': 320}, {'height': 683, 'url': 'https://preview.redd.it/881v3hd1gcjb1.jpg?width=640&crop=smart&auto=webp&s=f1dd143a3e7eb9fa88f85e142939a567db3a88d8', 'width': 640}, {'height': 1024, 'url': 'https://preview.redd.it/881v3hd1gcjb1.jpg?width=960&crop=smart&auto=webp&s=85a5c311b21357a6c964f4f3cd044d8079858331', 'width': 960}, {'height': 1153, 'url': 'https://preview.redd.it/881v3hd1gcjb1.jpg?width=1080&crop=smart&auto=webp&s=1df146d8e6dca446f4c8ff6bfba7b9a244404ca7', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/881v3hd1gcjb1.jpg?auto=webp&s=8352d0e828153d36f76f56b07e89fdd9b47a33b9', 'width': 1124}, 'variants': {}}]}
The moat is shrinking (?)
239
2023-08-20T22:43:35
https://i.redd.it/tjtjps5tfcjb1.jpg
Amgadoz
i.redd.it
1970-01-01T00:00:00
0
{}
15wpgrq
false
null
t3_15wpgrq
/r/LocalLLaMA/comments/15wpgrq/the_moat_is_shrinking/
false
false
https://b.thumbs.redditm…VimyLRLDWsHQ.jpg
239
{'enabled': True, 'images': [{'id': '2pnFU2AOsSxrqruELjzoHvff6DcrP5oroFsdOIJM0Mc', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/tjtjps5tfcjb1.jpg?width=108&crop=smart&auto=webp&s=7c99c8859581c3b21edd8da1bbdcf01224849af6', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/tjtjps5tfcjb1.jpg?width=216&crop=smart&auto=webp&s=0856c099d758fc1486d62bcbe2ca1e882a7bbb1a', 'width': 216}, {'height': 341, 'url': 'https://preview.redd.it/tjtjps5tfcjb1.jpg?width=320&crop=smart&auto=webp&s=93bf358cf2d1cf4117e5814e72019202aa4d8262', 'width': 320}, {'height': 683, 'url': 'https://preview.redd.it/tjtjps5tfcjb1.jpg?width=640&crop=smart&auto=webp&s=c46146f3911d5c0f18e65d842455dfba94e9d894', 'width': 640}, {'height': 1024, 'url': 'https://preview.redd.it/tjtjps5tfcjb1.jpg?width=960&crop=smart&auto=webp&s=ce1eed11ddc932e0b9e591e5279178a240ae39d2', 'width': 960}, {'height': 1153, 'url': 'https://preview.redd.it/tjtjps5tfcjb1.jpg?width=1080&crop=smart&auto=webp&s=29ad1e5e11d3ae4c4fdc07c0e9ccbf40a6141deb', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/tjtjps5tfcjb1.jpg?auto=webp&s=f60b980717d1cf7f3de2671bbc2c9f1d1d0a5bd3', 'width': 1124}, 'variants': {}}]}
Veterinary Chatbot Llama2
1
[removed]
2023-08-20T21:55:15
https://www.reddit.com/r/LocalLLaMA/comments/15wo9ep/veterinary_chatbot_llama2/
aianytime
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wo9ep
false
null
t3_15wo9ep
/r/LocalLLaMA/comments/15wo9ep/veterinary_chatbot_llama2/
false
false
self
1
{'enabled': False, 'images': [{'id': 'rM82cd14gTDS-W14elzmDPNcE_BAEp9F6CeqwQUzAz8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/XjFRhoU4nZOrHzMWBb5NM6bPIZithS89FoHExLKwkk4.jpg?width=108&crop=smart&auto=webp&s=b2d52b4f077dace32dbb39c47a4a1d6f5845277d', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/XjFRhoU4nZOrHzMWBb5NM6bPIZithS89FoHExLKwkk4.jpg?width=216&crop=smart&auto=webp&s=5f8c31762fd9716217e117ee800d286946903fe8', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/XjFRhoU4nZOrHzMWBb5NM6bPIZithS89FoHExLKwkk4.jpg?width=320&crop=smart&auto=webp&s=87576587ee1cf5ac47f5722f88f3171af1f8c80d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/XjFRhoU4nZOrHzMWBb5NM6bPIZithS89FoHExLKwkk4.jpg?auto=webp&s=71956cd8e3d25eea2ba0e7f83d42990e5c6f4268', 'width': 480}, 'variants': {}}]}
On what hardware/setup are you running your local LLM?
10
Do you use a separate workstation to run it? What is the best GPU to use?
2023-08-20T21:23:43
https://www.reddit.com/r/LocalLLaMA/comments/15wng5l/on_what_hardwaresetup_are_you_running_your_local/
snarfi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wng5l
false
null
t3_15wng5l
/r/LocalLLaMA/comments/15wng5l/on_what_hardwaresetup_are_you_running_your_local/
false
false
self
10
null
Best sub 13b parameter model for vector document retrieval
1
[removed]
2023-08-20T21:10:46
https://www.reddit.com/r/LocalLLaMA/comments/15wn437/best_sub_13b_parameter_model_for_vector_document/
Sweaty-Share3443
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wn437
false
null
t3_15wn437
/r/LocalLLaMA/comments/15wn437/best_sub_13b_parameter_model_for_vector_document/
false
false
self
1
null
Llama-2-13b and document QA
2
So I created embeddings from a few PDFs. So far I have tested some Vicuna and WizardLM models. Now I wanted to test a Llama 2 model, so I got approved on HF and used this model: [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf). When I pass in my embeddings (I am using LangChain) and prompt a simple question like "What is X?", where X is some term from the PDF, I get results with a short answer and a source URL from different websites like ask.com or wisegeek.com instead of my embeddings/documents. On top of that, the same answer and same URL source is repeated about 8 times. Why is that? Is there a way around this? I don't get responses like this when using Vicuna or WizardLM. From LangChain I am using AutoTokenizer, AutoModelForCausalLM, and HuggingFacePipeline to create the LLM.
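The symptom above (answers citing ask.com, repetition) is typical of prompting a *base* model: Llama-2-13b-hf is not instruction-tuned, so unlike Vicuna or WizardLM it tends to free-associate web-style text instead of following an "answer from the context" instruction; Llama-2-13b-chat-hf is the instruction-tuned variant. The retrieval-and-stuff pattern itself is simple; a minimal sketch independent of LangChain, with a toy word-overlap retriever standing in for real embeddings:

```python
def tokenize(s: str) -> set:
    return set(s.lower().replace("?", " ").replace(".", " ").split())

def score(query: str, doc: str) -> int:
    """Toy relevance: count of query words present in the doc."""
    return len(tokenize(query) & tokenize(doc))

def retrieve(query, docs, k=1):
    """Return the k most relevant chunks for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_qa_prompt(query, chunks):
    """Stuff retrieved chunks into a grounded-answer prompt."""
    context = "\n---\n".join(chunks)
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

docs = [
    "X is a billing code used in our internal invoicing PDF.",
    "The cafeteria opens at nine.",
]
prompt = build_qa_prompt("What is X?", retrieve("What is X?", docs))
assert "billing code" in prompt and "cafeteria" not in prompt
```

If the chunks land in the prompt correctly and the model still cites outside websites, the retrieval side is fine and the fix is the model choice (or a stricter instruction) rather than the embeddings.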
2023-08-20T21:04:57
https://www.reddit.com/r/LocalLLaMA/comments/15wmyej/llama213b_and_document_qa/
Kukaracax
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wmyej
false
null
t3_15wmyej
/r/LocalLLaMA/comments/15wmyej/llama213b_and_document_qa/
false
false
self
2
{'enabled': False, 'images': [{'id': '0onXEGBmoQmFhPcu8lfnvmCh8U-MCx0zY71zvhZhw2E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GE0aZMu0kw9GSZulkWqFY7cliisSqk2hsJxG9KY2HJE.jpg?width=108&crop=smart&auto=webp&s=c2ce0494e5d7b7a3e0e06d4c9fb8240299e68ebf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GE0aZMu0kw9GSZulkWqFY7cliisSqk2hsJxG9KY2HJE.jpg?width=216&crop=smart&auto=webp&s=6ad92a9acda433b7bde6c33477e04286e71bd105', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GE0aZMu0kw9GSZulkWqFY7cliisSqk2hsJxG9KY2HJE.jpg?width=320&crop=smart&auto=webp&s=a8c8d168860b8a0089766492b143a236e3d6759b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GE0aZMu0kw9GSZulkWqFY7cliisSqk2hsJxG9KY2HJE.jpg?width=640&crop=smart&auto=webp&s=3df0ba28cc2cd6f794b311c2979f3ef26ddcf73d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GE0aZMu0kw9GSZulkWqFY7cliisSqk2hsJxG9KY2HJE.jpg?width=960&crop=smart&auto=webp&s=1e2b18c77dbe7c4da74492b702d0b1d9c84cd144', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GE0aZMu0kw9GSZulkWqFY7cliisSqk2hsJxG9KY2HJE.jpg?width=1080&crop=smart&auto=webp&s=30486a74758bea37eb70972c12396a0dc71697c3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GE0aZMu0kw9GSZulkWqFY7cliisSqk2hsJxG9KY2HJE.jpg?auto=webp&s=fd574586b548ce5aeba867b913415bb14d71cc00', 'width': 1200}, 'variants': {}}]}
A fast llama2 CPU decoder for GPTQ
38
2023-08-20T20:47:16
http://github.com/srush/llama2.rs/
srush_nlp
github.com
1970-01-01T00:00:00
0
{}
15wmh6k
false
null
t3_15wmh6k
/r/LocalLLaMA/comments/15wmh6k/a_fast_llama2_cpu_decoder_for_gptq/
false
false
https://b.thumbs.redditm…njhw3dbv0HrA.jpg
38
{'enabled': False, 'images': [{'id': '3EC37BDCRugRF5MP3fCu7gQ-4JJIcE18Ws8Jly6T278', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wMEaIoTI73KrNYEfKCEjCV5PRON6zFuSMzTPMg7kWLk.jpg?width=108&crop=smart&auto=webp&s=bbbb3928c70cb7d22ca076b976d1412dc2d0571f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wMEaIoTI73KrNYEfKCEjCV5PRON6zFuSMzTPMg7kWLk.jpg?width=216&crop=smart&auto=webp&s=1ae7b2a3d91f5b8b4b459c7586bb26de76cae1bb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wMEaIoTI73KrNYEfKCEjCV5PRON6zFuSMzTPMg7kWLk.jpg?width=320&crop=smart&auto=webp&s=33291c081f9561e6dfe8dafdf151a009cad4fe5e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wMEaIoTI73KrNYEfKCEjCV5PRON6zFuSMzTPMg7kWLk.jpg?width=640&crop=smart&auto=webp&s=513d93e2387614b45c44c43b3fba93b31593b43c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wMEaIoTI73KrNYEfKCEjCV5PRON6zFuSMzTPMg7kWLk.jpg?width=960&crop=smart&auto=webp&s=9d858d3929dc6c203aaf79e95cd897d01b24715f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wMEaIoTI73KrNYEfKCEjCV5PRON6zFuSMzTPMg7kWLk.jpg?width=1080&crop=smart&auto=webp&s=e16c004bfda83962526576015ba7a6baf973a964', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wMEaIoTI73KrNYEfKCEjCV5PRON6zFuSMzTPMg7kWLk.jpg?auto=webp&s=fb0da7c177a03867bb965938b8b713efb26425f4', 'width': 1200}, 'variants': {}}]}
Is there a LLaMA2, 70B model at 6bit quant? Would it fit in a 96GB M2max computer?
17
I'm looking to optimize the LLM's ability without regard for speed. I can wait. What is the most generally capable LLM model that I can fit into my computer?
2023-08-20T17:55:46
https://www.reddit.com/r/LocalLLaMA/comments/15wi2hi/is_there_a_llama2_70b_model_at_6bit_quant_would/
Musenik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wi2hi
false
null
t3_15wi2hi
/r/LocalLLaMA/comments/15wi2hi/is_there_a_llama2_70b_model_at_6bit_quant_would/
false
false
self
17
null
Uncensored LLMs that work on languages other than English?
1
[removed]
2023-08-20T17:46:43
https://www.reddit.com/r/LocalLLaMA/comments/15whu8w/uncensored_llms_that_work_on_languages_other_than/
throwfalseaway123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15whu8w
false
null
t3_15whu8w
/r/LocalLLaMA/comments/15whu8w/uncensored_llms_that_work_on_languages_other_than/
false
false
self
1
null
Llama-ready prebuilt PC
0
I'm PC shopping and I don't want to build anything myself because I'm worried I will mess it up. I want an RTX 4090, and I'm thinking 64GB of RAM is substantially better. I see two prebuilt options and no reviewers directly comparing them. Is there a landslide winner or are they about equal? Or should I do something else? The two options I see: 1. Corsair Vengeance 2. Digital Storm Aventum X
2023-08-20T16:55:49
https://www.reddit.com/r/LocalLLaMA/comments/15wgjfo/llamaready_prebuilt_pc/
knight_of_mintz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wgjfo
false
null
t3_15wgjfo
/r/LocalLLaMA/comments/15wgjfo/llamaready_prebuilt_pc/
false
false
self
0
null
32gb ram good enough for 70b?
1
I have been trying to get 70B models to run on my desktop PC with 32GB RAM. I also tried GPU offloading to my 3070. It still hangs at 100% RAM usage. Should I bite the bullet and buy more RAM, or are there any more hacks I can do to squeeze out what I can?
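Rough memory arithmetic shows why 32GB hangs (the bits-per-weight figures are approximate llama.cpp quant sizes, and KV cache plus OS overhead come on top):

```python
def model_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate resident size of a quantized model in GiB."""
    return params_b * 1e9 * bits_per_weight / 8 / 2**30

for name, bpw in [("Q4_K_M", 4.85), ("Q3_K_M", 3.9), ("Q2_K", 2.6)]:
    print(f"70B {name}: ~{model_gb(70, bpw):.0f} GiB")
# a 4-bit 70B needs ~40 GiB and even a ~3-bit one needs ~32 GiB before
# KV cache and overhead, so a 32 GB machine will swap; offloading a few
# layers to a 3070's 8 GB of VRAM cannot cover the difference
```

So for a 70B the realistic options are more system RAM, an aggressive 2-bit-class quant (with a quality hit), or dropping to a 33B/13B model that fits comfortably.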
2023-08-20T16:54:09
https://www.reddit.com/r/LocalLLaMA/comments/15wghw4/32gb_ram_good_enough_for_70b/
gameditz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wghw4
false
null
t3_15wghw4
/r/LocalLLaMA/comments/15wghw4/32gb_ram_good_enough_for_70b/
false
false
self
1
null
Llama cute voice assistant (llama + RVC model)
27
github: https://github.com/atomlayer/llama_cute_voice_assistant demo: https://youtu.be/h-GCQukW4E8 My attempt to make an AI assistant with a cute voice. Generating a cute voice directly is still a problem, but [RVC models](https://www.youtube.com/watch?v=_JXbvSTGPoo) are pretty well developed, so I am combining llama with an RVC model. The solution may not be the prettiest, but it works.
2023-08-20T16:44:59
https://www.reddit.com/r/LocalLLaMA/comments/15wg9eb/llama_cute_voice_assistant_llama_rvc_model/
Pristine-Tax4418
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wg9eb
false
null
t3_15wg9eb
/r/LocalLLaMA/comments/15wg9eb/llama_cute_voice_assistant_llama_rvc_model/
false
false
self
27
{'enabled': False, 'images': [{'id': 'faeA1JZc0pDkiCD9BjtRVzBeyZBekLxl7QSlKQr46YQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tYhPI9MLoaWa2qbG8gMevHCkoxVVxfPMHs1ia96PmEA.jpg?width=108&crop=smart&auto=webp&s=a39b183886099742b08eba028cbfbb39e7e84e50', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tYhPI9MLoaWa2qbG8gMevHCkoxVVxfPMHs1ia96PmEA.jpg?width=216&crop=smart&auto=webp&s=11ce49e9d81d0e0be266905a23fb3516decbfb53', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tYhPI9MLoaWa2qbG8gMevHCkoxVVxfPMHs1ia96PmEA.jpg?width=320&crop=smart&auto=webp&s=b3d70408cc2756afaf61f3206e855f7bc5847e9d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tYhPI9MLoaWa2qbG8gMevHCkoxVVxfPMHs1ia96PmEA.jpg?width=640&crop=smart&auto=webp&s=de63b6f45941a5512d4edcdffba9f8bf69c50975', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tYhPI9MLoaWa2qbG8gMevHCkoxVVxfPMHs1ia96PmEA.jpg?width=960&crop=smart&auto=webp&s=04ddb2eec49791a7bd6b368b6d84eef8f7b31e98', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tYhPI9MLoaWa2qbG8gMevHCkoxVVxfPMHs1ia96PmEA.jpg?width=1080&crop=smart&auto=webp&s=7757fa0dde26598bee48842bc1a7d5543e18970a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tYhPI9MLoaWa2qbG8gMevHCkoxVVxfPMHs1ia96PmEA.jpg?auto=webp&s=486fec83961e59af25b53d6de6332813e186897c', 'width': 1200}, 'variants': {}}]}
Dadbot
42
My father had cancer, and before he passed I was trying to get information and stories from him so that I could use [https://github.com/gunthercox/ChatterBot](https://github.com/gunthercox/ChatterBot) to create a chatbot with his memories that I could talk to. He threw a blood clot and died early in his treatment, so I never got the information I needed. I didn't want the same thing to happen to my kids (who are not computer science people), so I wanted to put something together for them.

I built dadbot 1.0 using ChatterBot and created a corpus with my own information and memories. It talked like me and was OK. The biggest issue was updating it with new information: it was an XML file that I would edit and retrain. Once llama was released, it made sense to figure out a way to use it. I learned that I could create a LoRA using my previous dadbot information reformatted for llama-based systems. So I created dadbot 2 using [https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui), and now dadbot 2 is working: there is a character that thinks it is me and has my information from dadbot 1, with updates (divorce, remarriage, kids got older, new experiences, etc.).

I am still updating the JSON file by hand. Last week I tried writing a "journal" entry as plain text walking through my day and what I did. I fed that into ChatGPT with directions to create a JSON file that I can import into an AI system using the following format:

{ "instruction": "Convert from celsius to fahrenheit. Temperature in Celsius: 15", "output": "Temperature in Fahrenheit: 59" }

I told ChatGPT to create instructions and outputs based on my journal entries. It struggled, but with more information and examples it was able to do a good job. So now I can feed ChatGPT a couple of paragraphs and it will spit out a JSON file I can import into the dadbot LoRA using oobabooga.

This is still a manual process, so I thought: why not use llama to create the JSON on my local computer? I have a CS background, so the plan was to send email to a journal address I create, download the journal, and convert it using the same setup as with ChatGPT. Should be easy. The problem is that none of the llama models I have tried can convert the journal entry into an instruction/output format. They all give output like "Cool awesome glad i could help ty for teaching me!!!!!" or:

"Oh sorry i missed that bit alright ill correct that quickly & come back shortly to finish up pls bear with me while i fix up my mistakes heres what i came up with initially --> {"instruction":"convert from celsius to fahrenheit. temperature in celsius: 15","output":"temperature in fahrenheit: 59"} <-- hope you find that helpful! cheers"

Does anyone know a model that would handle this? I am on a 6 GB CUDA system with 16 GB of system RAM, loading 4-bit models. The dadbot itself works great; I'm just not able to get a non-LoRA base model to convert a journal into an importable JSON file.
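One possible workaround for the chatty replies, rather than hoping a small local model emits clean JSON on its own: let it ramble, then extract any well-formed instruction/output objects from the reply. A minimal sketch (the record keys match the format described above; the sample reply is paraphrased from the post, everything else is an assumption):

```python
import json
import re

def extract_records(reply: str):
    """Pull every flat {...} object out of a chatty model reply and keep
    only the ones that parse as {"instruction": ..., "output": ...}."""
    records = []
    for match in re.finditer(r"\{[^{}]*\}", reply):
        try:
            obj = json.loads(match.group(0))
        except json.JSONDecodeError:
            continue  # skip fragments that are not valid JSON
        if "instruction" in obj and "output" in obj:
            records.append(obj)
    return records

chatty = ('heres what i came up with initially --> {"instruction": '
          '"convert from celsius to fahrenheit. temperature in celsius: 15", '
          '"output": "temperature in fahrenheit: 59"} <-- hope that helps!')
print(extract_records(chatty))
```

This makes the pipeline tolerant of the "Cool awesome glad i could help" padding, as long as the model embeds at least one valid record somewhere in its answer.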
2023-08-20T16:22:38
https://www.reddit.com/r/LocalLLaMA/comments/15wfpbf/dadbot/
betolley
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wfpbf
false
null
t3_15wfpbf
/r/LocalLLaMA/comments/15wfpbf/dadbot/
false
false
self
42
{'enabled': False, 'images': [{'id': 'A-hypKo-1_v9jALkQz5uVOQ5AptMXVPubX_-LdxXITc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_CeFgmMV1KrhoELGB_deGRGvstpKhBCGiurJRcctFFY.jpg?width=108&crop=smart&auto=webp&s=e0b24621d86bf23ffe7a98e17d8bb3cb57228f4d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_CeFgmMV1KrhoELGB_deGRGvstpKhBCGiurJRcctFFY.jpg?width=216&crop=smart&auto=webp&s=67811dab788f47704133ea2d23498f7935e79a11', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_CeFgmMV1KrhoELGB_deGRGvstpKhBCGiurJRcctFFY.jpg?width=320&crop=smart&auto=webp&s=1635011bf599ad0d6ca5d46aaef9f268087ffaea', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_CeFgmMV1KrhoELGB_deGRGvstpKhBCGiurJRcctFFY.jpg?width=640&crop=smart&auto=webp&s=83f204c6bdd331448068c57a1972368a658a75cf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_CeFgmMV1KrhoELGB_deGRGvstpKhBCGiurJRcctFFY.jpg?width=960&crop=smart&auto=webp&s=b6cb6a4ecc41393c380ed8ae471dab89d8dd93d0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_CeFgmMV1KrhoELGB_deGRGvstpKhBCGiurJRcctFFY.jpg?width=1080&crop=smart&auto=webp&s=1e64ccd0c36689e18a1d7a9be879bd078798523d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_CeFgmMV1KrhoELGB_deGRGvstpKhBCGiurJRcctFFY.jpg?auto=webp&s=661cf9158f88aca3e9b05a5bb1f5698f4a50fc0f', 'width': 1200}, 'variants': {}}]}
What's your favorite model and results? - Model Discussion Thread
55
Have a model you want to talk about or interesting results to share? Want to ask for an example or need recommendations? Share and discuss here! This is a megathread for model discussion and generations. Everyone is encouraged to share their results, all topics allowed. Looking for new model releases? The subreddit's [New Model flair](https://www.reddit.com/r/LocalLLaMA/top/?t=all&f=flair_name%3A%22New%20Model%22) can be used to find posts.
2023-08-20T14:56:18
https://www.reddit.com/r/LocalLLaMA/comments/15wdjly/whats_your_favorite_model_and_results_model/
Technical_Leather949
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wdjly
false
null
t3_15wdjly
/r/LocalLLaMA/comments/15wdjly/whats_your_favorite_model_and_results_model/
false
false
self
55
null
Anonymizer LLM?
1
I need an LLM to help me with one task: removing all names, locations, and any other identifying information from a block of text. I don't need many changes: just the original text with small edits here and there to remove or alter language that could be used to identify someone or something.
2023-08-20T13:30:44
https://www.reddit.com/r/LocalLLaMA/comments/15wbhus/anonymizer_llm/
Psychological-Ad5390
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wbhus
false
null
t3_15wbhus
/r/LocalLLaMA/comments/15wbhus/anonymizer_llm/
false
false
self
1
null
GGML hardware questions, regarding a mid 2010s quad CPU Xeon server based build, with intent of 70b unquantized
22
CPU questions:

1. Is per-core speed important, or is it more about total chip performance?
2. If total performance matters more than per-thread performance, does the number of cores matter? I.e., if an older 22c/44t Xeon and a Ryzen 5600X 6c/12t both score 22000 on [cpubenchmark.net](https://cpubenchmark.net), which is better?
3. Quad and dual CPU: does GGML work with these? Does performance scale linearly with them?

RAM questions:

1. Is total RAM bandwidth more important than the memory speed itself (i.e., how much does it matter that I'm using 2400 ECC vs 3600 normal RAM)?
2. With a quad- or dual-CPU rig with quad RAM channels per CPU, does all the bandwidth get used by GGML?

General (kinda specific) questions:

1. The intention is to use this as a chatbot with 4096 context, or the most it will coherently utilize. What t/s could be expected with a 70b unquantized model, layers offloaded to two RTX 3090s, quad 22c/44t Xeons scoring 22k on [cpubenchmark.net](https://cpubenchmark.net), and quad RAM channels per CPU at 2400MHz?
2. Is the performance benefit linear when adding GPUs for layer offloading?
3. Does every model have an unquantized version available, i.e., whatever the current best uncensored/RP-capable 70b is?

Yes, I know 70b 4-bit models exist that fit onto my two 3090s in GPTQ. But I want to be able to try out the unquantized versions of chungus models too. I cannot afford eleventy billion GB of VRAM, but I can definitely throw a few hundred at 512GB of cheap ECC (128GB of 2400MHz DDR4 is $80 on eBay). I have no idea how much RAM an unquantized 70b model takes, but I assume I need over 128GB and probably no more than 512GB.

Thanks for the help. I realize this is specific as hell and probably nobody does this. I just figure I may as well make a rig that can do everything. I've been lurking for months and still don't really know what I'm doing.
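On the question of how much RAM an unquantized 70b takes, the weight-only footprint is easy to estimate (this covers dense weights only; KV cache, activations, and framework overhead come on top):

```python
def weights_gib(n_params: float, bytes_per_param: int) -> float:
    """Memory for the raw model weights alone: parameter count times
    bytes per parameter, converted to GiB."""
    return n_params * bytes_per_param / 1024**3

print(round(weights_gib(70e9, 2), 1))  # fp16/bf16 weights
print(round(weights_gib(70e9, 4), 1))  # fp32 weights
```

So fp16 weights land around 130 GiB and fp32 around 260 GiB, which is consistent with the "over 128GB, well under 512GB" guess for fp16.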
2023-08-20T12:30:36
https://www.reddit.com/r/LocalLLaMA/comments/15wa67y/ggml_hardware_questions_regarding_a_mid_2010s/
CanineAssBandit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15wa67y
false
null
t3_15wa67y
/r/LocalLLaMA/comments/15wa67y/ggml_hardware_questions_regarding_a_mid_2010s/
false
false
self
22
{'enabled': False, 'images': [{'id': 'wKEUaX_AjKElK73rADrRP6qe6o-GToKYw8-odUFh8yo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/osaqsE2y2PeMgpSPn0geDYtC7-oDy9EMn2A2xwNkMNg.jpg?width=108&crop=smart&auto=webp&s=b494d3be3f57ce69841d80968c9e3b40d1130c48', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/osaqsE2y2PeMgpSPn0geDYtC7-oDy9EMn2A2xwNkMNg.jpg?width=216&crop=smart&auto=webp&s=507af478e38055e688647b665a870d57f884c65e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/osaqsE2y2PeMgpSPn0geDYtC7-oDy9EMn2A2xwNkMNg.jpg?width=320&crop=smart&auto=webp&s=7c3fc23eedc0f7e50a9c9c6f31ee7ebe94ab494c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/osaqsE2y2PeMgpSPn0geDYtC7-oDy9EMn2A2xwNkMNg.jpg?width=640&crop=smart&auto=webp&s=3a5d54bf89ce0a0d706ac685c592fb2c044b7e9c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/osaqsE2y2PeMgpSPn0geDYtC7-oDy9EMn2A2xwNkMNg.jpg?width=960&crop=smart&auto=webp&s=82c0d33a1da9f3296a6a5e67dfcd8706c14a00f9', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/osaqsE2y2PeMgpSPn0geDYtC7-oDy9EMn2A2xwNkMNg.jpg?width=1080&crop=smart&auto=webp&s=c97d39e0afd84e47554a47fb9fa26bde937659f1', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/osaqsE2y2PeMgpSPn0geDYtC7-oDy9EMn2A2xwNkMNg.jpg?auto=webp&s=ed37b843a300e6604df917d292be5c2d1702abe0', 'width': 1200}, 'variants': {}}]}
How to fine-tune LLaMA without losing its general ability?
17
I have a dataset of student essays with teacher grades and comments. I want to fine-tune LLaMA on it to create a base model that knows how to grade essays but is still flexible enough to respond to other instructions I provide, like commenting on essays in a different format or from a specific aspect. In the GPT-3 era I once fine-tuned GPT-3 on a dataset with a very specific output format; after only 200 training examples it had already lost most of its ability to respond in other formats or follow other instructions. Are newer instruction-following models better at preserving their instruction-following ability after fine-tuning? Any tips on fine-tuning method (supervised vs. unsupervised next-token prediction) or dataset curation to help preserve instruction following?
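One commonly suggested mitigation is to blend the task data with general instruction-following data so the fine-tune keeps seeing varied instructions. A sketch of just the mixing step (the 30% general-data ratio is an arbitrary assumption for illustration, not a recommendation from any paper):

```python
import random

def mix_datasets(task_rows, general_rows, general_fraction=0.3, seed=0):
    """Blend task-specific rows with sampled general instruction rows so
    roughly `general_fraction` of the resulting set is general data."""
    rng = random.Random(seed)
    # solve n_general / (n_task + n_general) = general_fraction
    n_general = round(len(task_rows) * general_fraction / (1 - general_fraction))
    mixed = list(task_rows) + rng.sample(
        list(general_rows), min(n_general, len(general_rows))
    )
    rng.shuffle(mixed)
    return mixed
```

The shuffled mix is then fed to whatever trainer is in use, instead of the task data alone.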
2023-08-20T11:10:15
https://www.reddit.com/r/LocalLLaMA/comments/15w8n2q/how_to_finetune_llama_without_losing_its_general/
elon_mug
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15w8n2q
false
null
t3_15w8n2q
/r/LocalLLaMA/comments/15w8n2q/how_to_finetune_llama_without_losing_its_general/
false
false
self
17
null
Is there any good llama 2 based models that are uncensored?
77
What are the best llama 2 based uncensored models?
2023-08-20T08:10:37
https://www.reddit.com/r/LocalLLaMA/comments/15w5h3b/is_there_any_good_llama_2_based_models_that_are/
ll_Teto_ll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15w5h3b
false
null
t3_15w5h3b
/r/LocalLLaMA/comments/15w5h3b/is_there_any_good_llama_2_based_models_that_are/
false
false
self
77
null
Any solution that can read databases?
2
**So I want to use LLMs for business applications, and I think the holy grail would be a feature where the model can read multiple company databases.** I imagine something like:

- an email comes in: customer John Smith wants to order a laptop and is also looking for advice on whether he should choose one with an SSD
- the LLM looks up the customer database and sees that John already bought a laptop 3 years ago for 1000 USD
- the LLM reads the advice database and explains why a laptop with an SSD would be a good idea
- the LLM reads the inventory database, finds the laptops with SSDs, and recommends one for John in the 1000 USD price range

**Based on what I have read, these features kinda already exist. What are the state-of-the-art solutions for this?**

Ideally it would work through a local LLM, as that is more private and cheap, but if that's not available yet, API solutions could work as well. Code Interpreter can kinda do it, but it is not an API as far as I know. H2O seems to be something like it, but I'm not sure if you can wire it up to be used in an API-like way. And would something like this work even if the databases were huge (like thousands of items in the inventory database, or the equivalent of 50 pages in the advice database)?
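A common pattern here is retrieval before generation: deterministic queries pull only the relevant rows, and the formatted results go into the LLM prompt. That sidesteps the "huge database" concern, since the model never sees the full tables. A minimal sketch with SQLite (the table and column names are made up for illustration):

```python
import sqlite3

def build_prompt_context(conn: sqlite3.Connection, customer: str) -> str:
    """Query the customer and inventory tables and format the hits as
    context for an LLM prompt; the model never sees the whole database."""
    history = conn.execute(
        "SELECT item, price, year FROM purchases WHERE customer = ?",
        (customer,),
    ).fetchall()
    ssd_laptops = conn.execute(
        "SELECT model, price FROM inventory WHERE has_ssd = 1 ORDER BY price"
    ).fetchall()
    lines = [f"Purchase history for {customer}:"]
    lines += [f"- {item} (${price}, {year})" for item, price, year in history]
    lines.append("SSD laptops in stock:")
    lines += [f"- {model}: ${price}" for model, price in ssd_laptops]
    return "\n".join(lines)
```

The returned string would be prepended to the advice question before it is sent to the LLM, local or API-based.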
2023-08-20T07:53:37
https://www.reddit.com/r/LocalLLaMA/comments/15w561p/any_solution_that_can_read_databases/
VentrueLibrary
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15w561p
false
null
t3_15w561p
/r/LocalLLaMA/comments/15w561p/any_solution_that_can_read_databases/
false
false
self
2
null
Trying to infer Llama2 - 13B on my M1 Max - BFloat16 is not supported on MPS
1
Trying to run inference with Llama2-13B on my M1 Max (Ventura 13.5.1). Getting the error: BFloat16 is not supported on MPS. I tried different torch builds mentioned in various forums, but to no avail. Any help is much appreciated.
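This error typically means the checkpoint (or the loading code) requests bfloat16 weights, which the MPS backend does not implement, so the usual workaround is casting to float16 or float32 before moving to the device. A sketch of the fallback logic (the commented transformers call shows roughly where it would plug in; treat the model name and the call as assumptions, not tested on this exact setup):

```python
def dtype_for_backend(backend: str) -> str:
    """Pick a dtype the given torch backend can actually execute.
    MPS (Apple Silicon) has no bfloat16 kernels, so fall back to float16."""
    if backend == "mps":
        return "float16"
    return "bfloat16"

# Applied with transformers, roughly:
#   dtype = getattr(torch, dtype_for_backend("mps"))
#   model = AutoModelForCausalLM.from_pretrained(
#       "meta-llama/Llama-2-13b-hf", torch_dtype=dtype
#   ).to("mps")
```

If float16 overflows for a given model, float32 is the safe (if memory-hungry) alternative.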
2023-08-20T07:18:55
https://www.reddit.com/r/LocalLLaMA/comments/15w4jvr/trying_to_infer_llama2_13b_on_my_m1_max_bfloat16/
StrategyThick115
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15w4jvr
false
null
t3_15w4jvr
/r/LocalLLaMA/comments/15w4jvr/trying_to_infer_llama2_13b_on_my_m1_max_bfloat16/
false
false
self
1
null
Hosting llama2 on cloud GPUs
26
I want to make inference and time-to-first-token with llama 2 very fast. Some nice people on this sub told me I'd have to make optimizations like increasing the prompt batch size and optimizing the way model weights are loaded onto VRAM, among others. My question is: can I make such optimizations on AWS/Azure, or on newer serverless GPU platforms like Banana.dev, or on GPU rental sites like vast.ai? Also, where do you prefer to host the model when optimizing for latency? And in general, which platform lets you customize the most?
2023-08-20T06:12:33
https://www.reddit.com/r/LocalLLaMA/comments/15w3caz/hosting_llama2_on_cloud_gpus/
me219iitd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15w3caz
false
null
t3_15w3caz
/r/LocalLLaMA/comments/15w3caz/hosting_llama2_on_cloud_gpus/
false
false
self
26
null
Few-shot learning in large model vs fine-tuning in a small model
1
[removed]
2023-08-20T05:16:28
https://www.reddit.com/r/LocalLLaMA/comments/15w2ain/fewshot_learning_in_large_model_vs_finetuning_in/
aadoop6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15w2ain
false
null
t3_15w2ain
/r/LocalLLaMA/comments/15w2ain/fewshot_learning_in_large_model_vs_finetuning_in/
false
false
self
1
null
Llama2 7b that was fine tuned on medical data
130
https://github.com/llSourcell/DoctorGPT

https://huggingface.co/llSourcell/medllama2_7b

https://huggingface.co/llSourcell/doctorGPT_mini
2023-08-20T04:35:17
https://www.reddit.com/r/LocalLLaMA/comments/15w1i3b/llama2_7b_that_was_fine_tuned_on_medical_data/
Lazylion2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15w1i3b
false
null
t3_15w1i3b
/r/LocalLLaMA/comments/15w1i3b/llama2_7b_that_was_fine_tuned_on_medical_data/
false
false
self
130
{'enabled': False, 'images': [{'id': 'IR2LXqgKt07NvqyPoStLHqcpCduIlwXuonjPBFw6HQA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/umQvaB0q1CaJIHfaK6tEfrlV_DS5oeOY7tjCseWgWkA.jpg?width=108&crop=smart&auto=webp&s=5d7cd8db841ca215d616a56bca5d03357db4381a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/umQvaB0q1CaJIHfaK6tEfrlV_DS5oeOY7tjCseWgWkA.jpg?width=216&crop=smart&auto=webp&s=0dbebb38d073714d1cf67a41ed2f5fd5b30fccb5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/umQvaB0q1CaJIHfaK6tEfrlV_DS5oeOY7tjCseWgWkA.jpg?width=320&crop=smart&auto=webp&s=372967d334d2fa7ce353930b182ab4f14de2648a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/umQvaB0q1CaJIHfaK6tEfrlV_DS5oeOY7tjCseWgWkA.jpg?width=640&crop=smart&auto=webp&s=aac2c34b503b7c66adb56db234d9b20eaf7ae765', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/umQvaB0q1CaJIHfaK6tEfrlV_DS5oeOY7tjCseWgWkA.jpg?width=960&crop=smart&auto=webp&s=9f2dc06a4aa083a4b7fb95cf6fb187a225b16c50', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/umQvaB0q1CaJIHfaK6tEfrlV_DS5oeOY7tjCseWgWkA.jpg?width=1080&crop=smart&auto=webp&s=590b12f0986fc8619de902b175d8206145bad0ad', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/umQvaB0q1CaJIHfaK6tEfrlV_DS5oeOY7tjCseWgWkA.jpg?auto=webp&s=439869ac7d0738413673bc63ff8250b0af595c73', 'width': 1200}, 'variants': {}}]}
Optimizing for latency on GPUs
1
[removed]
2023-08-20T03:35:36
https://www.reddit.com/r/LocalLLaMA/comments/15w0brv/optimizing_for_latency_on_gpus/
me219iitd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15w0brv
false
null
t3_15w0brv
/r/LocalLLaMA/comments/15w0brv/optimizing_for_latency_on_gpus/
false
false
self
1
{'enabled': False, 'images': [{'id': 'IqDlVX4hzxx7GqLZ_AhcZgHC4cWnNWKewApIaECh5aA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/bYYdsYv6SDXT8b8uWQH4UX6n5x67w9K2CYQZ6ocmZTo.jpg?width=108&crop=smart&auto=webp&s=19c96fc4eb2cd3566e271cabcc7c50617dffad3f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/bYYdsYv6SDXT8b8uWQH4UX6n5x67w9K2CYQZ6ocmZTo.jpg?width=216&crop=smart&auto=webp&s=b4308d424c5263141a7c4729e9c95b63c194c13a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/bYYdsYv6SDXT8b8uWQH4UX6n5x67w9K2CYQZ6ocmZTo.jpg?width=320&crop=smart&auto=webp&s=14bcaecd0863c21686cbdce86ea4fd6bf23500d6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/bYYdsYv6SDXT8b8uWQH4UX6n5x67w9K2CYQZ6ocmZTo.jpg?width=640&crop=smart&auto=webp&s=15cc5865b990ec1965a118aefc63fbd1b932bdf2', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/bYYdsYv6SDXT8b8uWQH4UX6n5x67w9K2CYQZ6ocmZTo.jpg?width=960&crop=smart&auto=webp&s=10e24a4d66bfbcbf84c7bddda9ce7be0577824a0', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/bYYdsYv6SDXT8b8uWQH4UX6n5x67w9K2CYQZ6ocmZTo.jpg?width=1080&crop=smart&auto=webp&s=9494975b86663188527d4cd764e3fcfa2c39ace7', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/bYYdsYv6SDXT8b8uWQH4UX6n5x67w9K2CYQZ6ocmZTo.jpg?auto=webp&s=1d5d34d237d89544e6d3b046b7a0626d54bb03c3', 'width': 1200}, 'variants': {}}]}
Multi machine 4090s, or dual gpu 4090s setup for interesting scenario
0
I have a 4090-based desktop and another one with a 3080. I also have an additional 4090 that I can either use to replace the 3080 or add to either of the desktops. I was thinking about a few interesting options this could enable and would really like to get opinions. [View Poll](https://www.reddit.com/poll/15vwj1v)
2023-08-20T00:31:54
https://www.reddit.com/r/LocalLLaMA/comments/15vwj1v/multi_machine_4090s_or_dual_gpu_4090s_setup_for/
rbit4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vwj1v
false
null
t3_15vwj1v
/r/LocalLLaMA/comments/15vwj1v/multi_machine_4090s_or_dual_gpu_4090s_setup_for/
false
false
self
0
null
GPU issues loading model
2
I have a GTX 1660 Super on an older i5-3470. This processor has no AVX2 support, and I'm not sure if it's related, but I have yet to get a model to load on my GPU; I get all kinds of errors or just random crashes. In fact I can't get a model to load at all. All the software I use works fine on my other PC with a Ryzen 5. Any help is much appreciated!
2023-08-20T00:20:09
https://www.reddit.com/r/LocalLLaMA/comments/15vw9k9/gpu_issues_loading_model/
jchacakan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vw9k9
false
null
t3_15vw9k9
/r/LocalLLaMA/comments/15vw9k9/gpu_issues_loading_model/
false
false
self
2
null
Looking For Feedback — GGML Model Downloader/Runner
1
[removed]
2023-08-19T23:47:47
https://www.reddit.com/r/LocalLLaMA/comments/15vvinl/looking_for_feedback_ggml_model_downloaderrunner/
jmerz_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vvinl
false
null
t3_15vvinl
/r/LocalLLaMA/comments/15vvinl/looking_for_feedback_ggml_model_downloaderrunner/
false
false
self
1
{'enabled': False, 'images': [{'id': 'fLJsNbUriWtrLRQhoHIe3z2UwP064nGIwlvKaGHLpHQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=108&crop=smart&auto=webp&s=53292720f73e45b03e9836c4b8c233af7244bce5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=216&crop=smart&auto=webp&s=5d64b834a79f101baf9ba5131bd442465412fdcf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=320&crop=smart&auto=webp&s=02addacc985c5985c6550cad190f1d0750a96e73', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=640&crop=smart&auto=webp&s=f111f18b06bbe11d601c4f6e8b4109d2e9324b1c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=960&crop=smart&auto=webp&s=4d3e8b1ff7429a2d21c4d472d25909961bec3007', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?width=1080&crop=smart&auto=webp&s=f45ff6774cf08dfc2083866a243fdc5a635516c5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-XlsRmUf86cvCPyMhzz5k8W5CcrrL0t3PAv7p7KD-kc.jpg?auto=webp&s=14dd4fb61d37ca0e92e13cc74b77701586dde2a8', 'width': 1200}, 'variants': {}}]}
Does anyone have experience running LLMs on a Mac Mini M2 Pro?
22
I'm interested in how different model sizes perform. Is the Mini a good platform for this?
2023-08-19T22:56:15
https://www.reddit.com/r/LocalLLaMA/comments/15vub0a/does_anyone_have_experience_running_llms_on_a_mac/
jungle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vub0a
false
null
t3_15vub0a
/r/LocalLLaMA/comments/15vub0a/does_anyone_have_experience_running_llms_on_a_mac/
false
false
self
22
{'enabled': False, 'images': [{'id': 'GDa12ybOMRVCtm1zKwniHOe4OxlEWOboRIJrW2PSeLk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YJleP0mIMcjck7o8OQu5LUz8rCH4Qh-nDo8hn3Wo0CE.jpg?width=108&crop=smart&auto=webp&s=94d8adbf940a01507452b23cf556febea29edf36', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YJleP0mIMcjck7o8OQu5LUz8rCH4Qh-nDo8hn3Wo0CE.jpg?width=216&crop=smart&auto=webp&s=18d5082c3ec03fda1adae7072f29c4d5de61d69c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YJleP0mIMcjck7o8OQu5LUz8rCH4Qh-nDo8hn3Wo0CE.jpg?width=320&crop=smart&auto=webp&s=eae59af455dc9da5a13b68a5423922805eb0a224', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YJleP0mIMcjck7o8OQu5LUz8rCH4Qh-nDo8hn3Wo0CE.jpg?width=640&crop=smart&auto=webp&s=2fb9915cf9ed4b42c72f891223ffeb513bbe2120', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YJleP0mIMcjck7o8OQu5LUz8rCH4Qh-nDo8hn3Wo0CE.jpg?width=960&crop=smart&auto=webp&s=90098af18a3b2dc21877c6c284e8e7189dc37677', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YJleP0mIMcjck7o8OQu5LUz8rCH4Qh-nDo8hn3Wo0CE.jpg?width=1080&crop=smart&auto=webp&s=712289f8ba8debbd8e542c0d2d359140e03d3066', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YJleP0mIMcjck7o8OQu5LUz8rCH4Qh-nDo8hn3Wo0CE.jpg?auto=webp&s=6c1bf2b5437cc56484e9366b05461ea769fc5316', 'width': 1200}, 'variants': {}}]}
Best 13B or 30B uncensored LLM model?
1
I have been doing some research and tried out WizardLM and a few others. But at the moment, what's the best uncensored LLM out there?
2023-08-19T21:13:25
https://www.reddit.com/r/LocalLLaMA/comments/15vruy6/best_13b_or_30buncensored_llm_model/
ll_Teto_ll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vruy6
false
null
t3_15vruy6
/r/LocalLLaMA/comments/15vruy6/best_13b_or_30buncensored_llm_model/
false
false
self
1
null
Can I use GPT 4 data to fine tune?
1
[removed]
2023-08-19T20:10:29
https://www.reddit.com/r/LocalLLaMA/comments/15vq953/can_i_use_gpt_4_data_to_fine_tune/
arctic_fly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vq953
false
null
t3_15vq953
/r/LocalLLaMA/comments/15vq953/can_i_use_gpt_4_data_to_fine_tune/
false
false
self
1
null
How to download your chats with Huggingface Chat?
5
I've been using their [chat](https://huggingface.co/chat) for a few months. Is there a way to download all my chats?
2023-08-19T19:32:59
https://www.reddit.com/r/LocalLLaMA/comments/15vpapz/how_to_download_your_chats_with_huggingface_chat/
JebryyathHS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vpapz
false
null
t3_15vpapz
/r/LocalLLaMA/comments/15vpapz/how_to_download_your_chats_with_huggingface_chat/
false
false
self
5
{'enabled': False, 'images': [{'id': 'O4__VvuTP1zjgNXHpYgGtbNlwm8CyL1iGZRclIV-cFg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=108&crop=smart&auto=webp&s=c5c01ca386f7a26e8afeb5073e51c35d0d581de7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=216&crop=smart&auto=webp&s=0e915f82e672294c639c476433af5f1919265348', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=320&crop=smart&auto=webp&s=87643eb4a9654c3497efe7fce371db617f9ff816', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=640&crop=smart&auto=webp&s=20315fe6e900582303995761624ac0728d1703f9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=960&crop=smart&auto=webp&s=6d8bc7d3273f5290083f6668e10d5b513621bfa3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=1080&crop=smart&auto=webp&s=865cccb6b6df001aa14ef4fb2eb0f5902cb15904', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?auto=webp&s=03f4344525b6a013e0ac556cfc24b4a45d64f47e', 'width': 1200}, 'variants': {}}]}
Have anyone tried to host Llama 2 on AWS?
1
[removed]
2023-08-19T19:09:14
https://www.reddit.com/r/LocalLLaMA/comments/15vop9f/have_anyone_tried_to_host_llama_2_on_aws/
holistic-engine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vop9f
false
null
t3_15vop9f
/r/LocalLLaMA/comments/15vop9f/have_anyone_tried_to_host_llama_2_on_aws/
false
false
self
1
null
Exploding loss when trying to train OpenOrca-Platypus2-13B
11
I have been experimenting with the OpenOrca-Platypus2-13B model. I wanted to fine-tune it on a dataset for MCQ answering, and I used the OpenChat prompt template (as specified by the model owners on the Hugging Face model card). The dataset is of this format:

```
User: You will be provided with a multiple choice question followed by 5 choices, A, B, C, D and E. Give the letter of the option that correctly answers the given question. For example, if the correct option is B, then your answer should be B.

Question: {prompt}
A) {a}
B) {b}
C) {c}
D) {d}
E) {e}
<|end_of_turn|>Assistant: {answer}
```

There are about 30k rows of data in this format. I implemented a regular SFTTrainer run using LoRA. Here is a snippet of my code:

```python
qlora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "v_proj"],  # also tried "dense", "dense_h_to_4h", "dense_4h_to_h"
    task_type="CAUSAL_LM",
)

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
)

training_args = TrainingArguments(
    output_dir="./sft-orca-platy2-v2",
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type='linear',
    learning_rate=2e-4,
    logging_steps=50,
    warmup_ratio=0.1,
    num_train_epochs=1,
    optim="adamw_torch",
    fp16=True,
    run_name="baseline2-orca-platypus2",
)

tokenizer = AutoTokenizer.from_pretrained('Open-Orca/OpenOrca-Platypus2-13B')
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    'Open-Orca/OpenOrca-Platypus2-13B',
    device_map="auto",
    quantization_config=bnb_config,
)
model.resize_token_embeddings(len(tokenizer))
model = prepare_model_for_kbit_training(model)

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)

trainer = SFTTrainer(
    model=model,
    train_dataset=train_ds,
    args=training_args,
    tokenizer=tokenizer,
    peft_config=qlora_config,
    dataset_text_field="text",
    max_seq_length=1024,
)
```

And this is what I get:

![image](https://github.com/huggingface/peft/assets/29889429/cb66d254-43fc-4880-936b-778650f53bc9)

Halfway through the epoch, the loss starts increasing and jumps from ~0.7 to 9.3, which is quite absurd. What could be the reason for this? I have a feeling I maybe should be freezing layers, but I was under the impression that fine-tuning with LoRA doesn't require freezing layers manually. I would appreciate some assistance in digging deeper into how this process works.
2023-08-19T18:11:41
https://www.reddit.com/r/LocalLLaMA/comments/15vn7yj/exploding_loss_when_trying_to_train/
Crafty_Charge_4079
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vn7yj
false
null
t3_15vn7yj
/r/LocalLLaMA/comments/15vn7yj/exploding_loss_when_trying_to_train/
false
false
self
11
{'enabled': False, 'images': [{'id': 'NbOGe8hgIjBmu5jxpbCddkuSvDTzsy29s4u5895V6Ec', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/Dfhhz7fHPKbGTGzC6lhS7KHLHT_Rrk-mlOdGHryxPyY.jpg?width=108&crop=smart&auto=webp&s=81f1f439bf4ab60c890bbdac2911b45d2cb3782b', 'width': 108}, {'height': 103, 'url': 'https://external-preview.redd.it/Dfhhz7fHPKbGTGzC6lhS7KHLHT_Rrk-mlOdGHryxPyY.jpg?width=216&crop=smart&auto=webp&s=a965a4ede11b7e3303639425a35d53d323ebf669', 'width': 216}, {'height': 152, 'url': 'https://external-preview.redd.it/Dfhhz7fHPKbGTGzC6lhS7KHLHT_Rrk-mlOdGHryxPyY.jpg?width=320&crop=smart&auto=webp&s=3ed3323223a22c35af13c4333ecd13a72e69e13d', 'width': 320}, {'height': 305, 'url': 'https://external-preview.redd.it/Dfhhz7fHPKbGTGzC6lhS7KHLHT_Rrk-mlOdGHryxPyY.jpg?width=640&crop=smart&auto=webp&s=fbf32807d780ad8e588237bbe4ce887cd0e3806d', 'width': 640}, {'height': 458, 'url': 'https://external-preview.redd.it/Dfhhz7fHPKbGTGzC6lhS7KHLHT_Rrk-mlOdGHryxPyY.jpg?width=960&crop=smart&auto=webp&s=483032c382daeb1d7ed7828b0c4b3693907a4f45', 'width': 960}, {'height': 516, 'url': 'https://external-preview.redd.it/Dfhhz7fHPKbGTGzC6lhS7KHLHT_Rrk-mlOdGHryxPyY.jpg?width=1080&crop=smart&auto=webp&s=f7dd61151f94eb50857ee9c8d05cd7c38f8df1d2', 'width': 1080}], 'source': {'height': 613, 'url': 'https://external-preview.redd.it/Dfhhz7fHPKbGTGzC6lhS7KHLHT_Rrk-mlOdGHryxPyY.jpg?auto=webp&s=3e4dfbfdd7984b611e13f8d2b5d2b2f56f11a326', 'width': 1283}, 'variants': {}}]}
Try to run LLaMA on windows + WSL2, wondering how to deal with CUDA. Please help
1
Today, I installed WSL2. Searching how to use GPU acceleration in WSL2, I found 2 tutorials: * [One from Nvidia](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#getting-started-with-cuda-on-wsl), suggesting to install the "WSL-Ubuntu CUDA toolkit" within WSL2. * [The other from Microsoft](https://learn.microsoft.com/en-us/windows/wsl/tutorials/gpu-compute), suggesting "Docker Desktop" and "nvidia-docker". I am wondering which way is better, or should I do both? And then, to my surprise, nvidia-smi tells me I already have CUDA in WSL2 before trying either of the options to install the CUDA toolkit: https://preview.redd.it/bytpvzz2r3jb1.png?width=808&format=png&auto=webp&s=58c6b47431b77389396cf15bec44907203909a36 Does it mean I don't need to install the "WSL-Ubuntu CUDA toolkit"? Then where does this CUDA 12.2 come from? Is it from my host? (I installed CUDA 12.2 on Win11 a long time ago.) Or does WSL-Ubuntu include it by default?
2023-08-19T17:32:09
https://www.reddit.com/r/LocalLLaMA/comments/15vm8lf/try_to_run_llama_on_windows_wsl2_wondering_how_to/
Defiant_Hawk_4731
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vm8lf
false
null
t3_15vm8lf
/r/LocalLLaMA/comments/15vm8lf/try_to_run_llama_on_windows_wsl2_wondering_how_to/
false
false
https://a.thumbs.redditm…lgyUTV2IT798.jpg
1
null
[Free offer] I will help you self-host LLM
23
Hello r/LocalLLaMA I've set up several self-hosted open-source LLMs in the past, and I want to help those having difficulty doing so. Please DM me if you're interested. Full transparency: I'm doing this because I'm exploring startup opportunities in helping people self-host LLMs. Speaking to those whom I help will give me invaluable insight. I've mostly worked with A40s and A100s. Most of my setups involve Node.js and use the llama.cpp JS port. The performance seems comparable to Python. Thank you!
2023-08-19T17:30:06
https://www.reddit.com/r/LocalLLaMA/comments/15vm6ne/free_offer_i_will_help_you_selfhost_llm/
m0dE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vm6ne
false
null
t3_15vm6ne
/r/LocalLLaMA/comments/15vm6ne/free_offer_i_will_help_you_selfhost_llm/
false
false
self
23
null
Training
11
I would like to train the model to identify concepts and entities in medical texts and return specific outputs based on what it identifies. I've programmed for decades but have zero experience with LLMs, so please excuse my questions if they are absurd. I have 7-8 million examples and expected answers spread over maybe 5000 concepts. Can I use the entire text and answer for teaching, or do I need to figure out a way to pare down the document? All the training sets I've looked at are only a few sentences at most. The documents are usually around a page, highly sectional, and double spaced, so they are not huge but much larger than I've seen. I am going to try this in Azure. Would the 13B, or the 30B model I think I saw exists, be a suitable size? I'll be processing maybe 3-5k documents a day through it once trained. Can something like that run on 4 cores? There's no user interaction, so it doesn't have to be super fast, just reasonable to use. The Azure screens for deploying the LLM, while picking compute, allude to 24-40 cores on the small ones and 96 cores on 70B, but I've seen posts that it runs on far, far less. Thanks!
2023-08-19T15:08:48
https://www.reddit.com/r/LocalLLaMA/comments/15viq0h/training/
Ulan0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15viq0h
false
null
t3_15viq0h
/r/LocalLLaMA/comments/15viq0h/training/
false
false
self
11
null
Train llama on a new language
20
How do you teach llama v2 a new language? What approach works best? Let's say I can collect 1GB of data in the target language. The options are full-parameter training and LoRA. Can LoRA be effective? Has anyone tried full-parameter training of llama 7/13B?
2023-08-19T14:33:37
https://www.reddit.com/r/LocalLLaMA/comments/15vhuth/train_llama_on_a_new_language/
generalfsb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vhuth
false
null
t3_15vhuth
/r/LocalLLaMA/comments/15vhuth/train_llama_on_a_new_language/
false
false
self
20
null
Karpathy: M2 Ultra is the smallest, prettiest, out of the box easiest, most powerful personal LLM node today
96
2023-08-19T14:29:03
https://twitter.com/karpathy/status/1691844860599492721
johnybe
twitter.com
1970-01-01T00:00:00
0
{}
15vhqt5
false
{'oembed': {'author_name': 'Andrej Karpathy', 'author_url': 'https://twitter.com/karpathy', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Two notes I wanted to add:<br><br>1) In addition to parallel inference and training, prompt encoding is also parallelizable even at batch_size=1 because the prompt tokens can be encoded by the LLM in parallel instead of decoded serially one by one. The token inputs into LLMs always… <a href="https://t.co/cwqVhK10aC">pic.twitter.com/cwqVhK10aC</a></p>&mdash; Andrej Karpathy (@karpathy) <a href="https://twitter.com/karpathy/status/1691844860599492721?ref_src=twsrc%5Etfw">August 16, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/karpathy/status/1691844860599492721', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_15vhqt5
/r/LocalLLaMA/comments/15vhqt5/karpathy_m2_ultra_is_the_smallest_prettiest_out/
false
false
https://b.thumbs.redditm…VNRh2iLIpXSk.jpg
96
{'enabled': False, 'images': [{'id': '8OBttjof8JKsvHudNNjnlek67eISquunI0_apKCF2zI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/30kOT1ynnz-BSOfuiENEnJiQXlgDWDqDeQuZ6vEgPUk.jpg?width=108&crop=smart&auto=webp&s=d6aa5cabfe1c2e2898a972b9f3f881d1a350bf5c', 'width': 108}], 'source': {'height': 78, 'url': 'https://external-preview.redd.it/30kOT1ynnz-BSOfuiENEnJiQXlgDWDqDeQuZ6vEgPUk.jpg?auto=webp&s=fbeba4dd607887de0ab7c1f1c9ca6ad3de295249', 'width': 140}, 'variants': {}}]}
Nvidia Tesla K80
58
Why are these things SO cheap? I get that it's older (2014, per Bard) and uses GDDR5, but this price??? Please, someone educate me here.
2023-08-19T14:26:10
https://i.redd.it/wjk9bck5u2jb1.jpg
jchacakan
i.redd.it
1970-01-01T00:00:00
0
{}
15vhogy
false
null
t3_15vhogy
/r/LocalLLaMA/comments/15vhogy/nvidia_tesla_k80/
false
false
https://b.thumbs.redditm…Efl0x7rv4UUA.jpg
58
{'enabled': True, 'images': [{'id': 'JVrh_Rrn1FpBb_pcXVeFHc-_H3OYJMpWy23JigKRFLA', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/wjk9bck5u2jb1.jpg?width=108&crop=smart&auto=webp&s=52d09b75dcc5602de1c31f489e72a6b8e89b197f', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/wjk9bck5u2jb1.jpg?width=216&crop=smart&auto=webp&s=8b46d550ffeed1c379666e327a1ffe2dc35c92f4', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/wjk9bck5u2jb1.jpg?width=320&crop=smart&auto=webp&s=97c9ecbf6bd1fc34554c6f394774c442fa08ef2f', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/wjk9bck5u2jb1.jpg?width=640&crop=smart&auto=webp&s=92d86a1d40efa5af390505befd5d3dc2a6427d61', 'width': 640}], 'source': {'height': 1624, 'url': 'https://preview.redd.it/wjk9bck5u2jb1.jpg?auto=webp&s=4286eb8e36dda404b8bc27ec14f5a52db7f19fe8', 'width': 750}, 'variants': {}}]}
Making interface in hugging face with Open llama and RAG pipeline.
3
I have created a RAG pipeline and am using it with an OpenLLaMA 13B loaded directly from Hugging Face, without fine-tuning the model. How can I deploy this configuration to Hugging Face so I can run inference through the Hugging Face interface?
2023-08-19T14:00:13
https://www.reddit.com/r/LocalLLaMA/comments/15vh209/making_interface_in_hugging_face_with_open_llama/
mathageche
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vh209
false
null
t3_15vh209
/r/LocalLLaMA/comments/15vh209/making_interface_in_hugging_face_with_open_llama/
false
false
self
3
null
Local assistant guide needed
2
Hey there, I would like to run an LLM assistant locally that is able to fulfill tasks like composing mails, answering questions (without sources or Google), and such. Kind of like ChatGPT, although I am aware that's not quite possible. I installed koboldcpp, but what model should I choose? What settings do I need to apply to the model? Is there a comprehensive guide somewhere? Is koboldcpp even the best/right choice?
2023-08-19T12:57:15
https://www.reddit.com/r/LocalLLaMA/comments/15vfm3w/local_assistant_guide_needed/
la_baguette77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vfm3w
false
null
t3_15vfm3w
/r/LocalLLaMA/comments/15vfm3w/local_assistant_guide_needed/
false
false
self
2
null
Solving some issues with self-querying for retrieval
6
We are doing retrieval augmented generation (RAG) to perform question answering from a PDF. We are using self-querying to get the retrievals and are facing some issues. We have created text chunks from the PDF, where each chunk has been tagged with metadata indicating the chapter number. We ask questions of the type "what is chapter 1?" and can see the correct filter being made, ```eq("chapter":"1")```. - A question like "compare chapter 1 and chapter 2" should require chunks from both chapters 1 and 2 to be retrieved, i.e. a filter ```eq("chapter":"1") or eq("chapter":"2")```. However, the LLM gets fooled by the "and" in the query (we think) and produces ```eq("chapter":"1") and eq("chapter":"2")```, which leads to no retrievals. - Asking an irrelevant question like "who is michael jackson?", which should be unrelated to the text entirely, still creates a filter like ```eq("chapter":"1")```, i.e. a hallucination. We wanted to know if the community has any strategies for dealing with this scenario. Our hypothesis is that this is a problem with the LLM itself, so would an LLM tuned for producing code be better? We are using Vicuna 1.5, 8-bit, with 13 billion parameters. Another guess is that the query might be too "short" for a good filter to be created. We don't know much about chain-of-thought, and maybe that could be a way forward here? Would appreciate your advice. * Self-querying is a strategy for improving retrieval when there is some metadata present. Apart from matching the embeddings of the query text and the chunks, it also creates a filter to only retain certain chunks. E.g., ideally we'd only keep chunks with the metadata attribute "chapter" as "1" when asking "what is chapter 1?". This filter is also predicted by an LLM as JSON, which is then used as code to filter the chunks.
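One stopgap for the first issue, assuming the predicted filter arrives as a nested dict before it hits the vector store (the filter shape below is hypothetical; adapt it to whatever structure your retriever actually emits), is to post-process it: an `and` of equality filters on the same metadata key with different values is unsatisfiable, so it can be reinterpreted as `or`. A hedged sketch:

```python
def fix_conflicting_and(filt):
    """Rewrite an `and` over eq-filters on the SAME key with DIFFERENT
    values into an `or`, since no single chunk can match both.
    Hypothetical shape: {"and": [{"eq": ["chapter", "1"]},
                                  {"eq": ["chapter", "2"]}]}"""
    if not isinstance(filt, dict) or "and" not in filt:
        return filt
    clauses = [fix_conflicting_and(c) for c in filt["and"]]
    eq_keys = [c["eq"][0] for c in clauses if isinstance(c, dict) and "eq" in c]
    eq_vals = [c["eq"][1] for c in clauses if isinstance(c, dict) and "eq" in c]
    # all clauses are eq on one key, but with more than one value -> OR
    if len(eq_keys) == len(clauses) and len(set(eq_keys)) == 1 \
            and len(set(eq_vals)) > 1:
        return {"or": clauses}
    return {"and": clauses}
```

For example, `fix_conflicting_and({"and": [{"eq": ["chapter", "1"]}, {"eq": ["chapter", "2"]}]})` comes back as an `or` of the same two clauses, while an `and` over different keys is left alone. It doesn't address the hallucinated-filter case; for that, one option is to drop the filter whenever the query mentions no chapter at all.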
2023-08-19T12:45:00
https://www.reddit.com/r/LocalLLaMA/comments/15vfc9y/solving_some_issues_with_selfquerying_for/
nlpllama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vfc9y
false
null
t3_15vfc9y
/r/LocalLLaMA/comments/15vfc9y/solving_some_issues_with_selfquerying_for/
false
false
self
6
null
Chatbots and games - Combining LLM with animated 3D characters, TTS, VR?
1
[removed]
2023-08-19T12:42:57
https://www.reddit.com/r/LocalLLaMA/comments/15vfaor/chatbots_and_games_combining_llm_with_animated_3d/
capybooya
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vfaor
false
null
t3_15vfaor
/r/LocalLLaMA/comments/15vfaor/chatbots_and_games_combining_llm_with_animated_3d/
false
false
default
1
null
When running LLAMA2 on top
1
I can see many settings: Chat settings, Parameters, Model, Training, and Session. Where can I find out what each of them does and what the variables on each page mean?
2023-08-19T12:04:11
https://www.reddit.com/r/LocalLLaMA/comments/15vehis/when_running_llama2_on_top/
Rear-gunner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vehis
false
null
t3_15vehis
/r/LocalLLaMA/comments/15vehis/when_running_llama2_on_top/
false
false
self
1
null
Calculating Sentence Level Attention
9
How do I quantify the attention between input and output sentences in a sequence-to-sequence language modelling scenario \[translation or summarization\]? For instance, consider these input and output statements, i.e., document is the input, and abstract is the output for a sequence-to-sequence task. # INPUT SENTENCES document = [ "This paper covers various aspects of learning.", "We will dive deep into algorithms.", "It's crucial to understand the basics.", "Modern techniques are also covered." ] # OUTPUT SENTENCES abstract = [ "The paper discusses machine learning.", "We focus on deep learning techniques.", "Results indicate superior performance.", ] I want to generate a heatmap containing each document sentence's attention values with its abstract sentence. In this case, the heatmap should be a 4x3 matrix. I hope this makes sense. Any ideas (code repos, articles, etc) on how this can be done?
2023-08-19T11:55:09
https://www.reddit.com/r/LocalLLaMA/comments/15veake/calculating_sentence_level_attention/
psj_2908
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15veake
false
null
t3_15veake
/r/LocalLLaMA/comments/15veake/calculating_sentence_level_attention/
false
false
self
9
null
Fine tune with RAG?
13
Hi everyone, I am working on making a QA bot. I've generated a dataset of 113k QA pairs which should cover a pretty good chunk of possible questions. My question is about the fine-tuning process. We know that adding knowledge to a model with LoRA/QLoRA is ineffective; they are mainly good for adjusting outputs in different ways, as I understand it. To try and accommodate that, it seems like it might be better to use RAG with whatever model I end up with. So, following that guidance, does it seem beneficial to put the additional context in at the fine-tune stage? Maybe just to better visualize it, here's an example: Question: What color is the sky? Answer: Blue As opposed to something like: Question: What color is the sky? Context: I looked up at the blue sky and saw birds and clouds. Answer: Blue Is there any intuition on which is better to do for cases like this? It seems to me the latter option, with RAG in the dataset for fine-tuning, is better, but I'm curious what others think. Thanks!
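If you go with the context-included option, what matters most is that the training template matches the inference-time RAG template exactly, with the retriever filling the context slot. A minimal sketch (the field labels are my own, not a standard):

```python
def format_example(question, answer, context=None):
    """Build a training string. The same template must be rendered at
    inference time, with `context` supplied by the retriever and the
    answer left for the model to complete."""
    parts = [f"Question: {question}"]
    if context is not None:
        parts.append(f"Context: {context}")
    parts.append(f"Answer: {answer}")
    return "\n".join(parts)
```

For example, `format_example("What color is the sky?", "Blue", "I looked up at the blue sky.")` produces the three-line Question/Context/Answer block; omitting `context` produces the plain QA variant, so both dataset styles can be generated from one function.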
2023-08-19T11:36:36
https://www.reddit.com/r/LocalLLaMA/comments/15vdxa4/fine_tune_with_rag/
GetRektX9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vdxa4
false
null
t3_15vdxa4
/r/LocalLLaMA/comments/15vdxa4/fine_tune_with_rag/
false
false
self
13
null
LLAMA2 Using GPU
1
How to run the LLAMA2 models with GPU? I have CUDA installed and accessible from my venv, but all prompts are getting processed by my CPU. The llama_cpp library seems to use the CPU, so I tried a git repo of llama.cpp, but that's not working either.
2023-08-19T11:13:52
https://www.reddit.com/r/LocalLLaMA/comments/15vdhpz/llama2_using_gpu/
PhantomLord06
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vdhpz
false
null
t3_15vdhpz
/r/LocalLLaMA/comments/15vdhpz/llama2_using_gpu/
false
false
self
1
null
Comparing how well some 13B GPTQ Llama-2 models seem to adhere to instructions asking for a particular writing style
58
I'm running some tests, trying to see how well 13B GPTQ models can follow instructions. This is one of the things I want from LLMs. The prompt is something that should not be too typical of what it was fine tuned: > Write a mail to Twitter asking to unsuspend one account, but do it in the style of a script by Quentin Tarantino. Include typical Tarantino writing style In all cases I'm using CONTRASTIVE CHANGE, no changes in instruct format from the Ooba predefined ones for each model type. I may run several generations and pick the better-looking one, but with no alteration of the prompt or parameters. Also note: there may be better ways of prompting models to actually write what I wanted. But in all honesty, a potent finetune of the model should allow it to infer what is being instructed to do from less concrete or more ambiguous prompts. **Llama-2 Chat** I feared Llama-2 Chat would go all soy milk on me and refuse, but it actually wrote it: &#x200B; https://preview.redd.it/9p0ie9lgy0jb1.png?width=877&format=png&auto=webp&s=b4848aefc123ceb20f19f38142bce6fe548203f7 To be honest, it did pretty well, so if I wanted to use this as a starting point, I could take it, and tweak it. Like every single model I used, except for Nous Hermes without using an instruction prompt, it understands I want to sign the mail as "Quentin" or similar. **Vicuna 1.5** &#x200B; https://preview.redd.it/ngdvr2tqy0jb1.png?width=975&format=png&auto=webp&s=c9e9c1a21edfd1ffaa67d3206798c27635083d34 I believe it also did relatively well, but the Llama-2 chat has some flair to it, such as asking itself "Why should we unsuspend this account", which was a nice touch. Vicuna sounds too formal. &#x200B; **OpenOrca Platypus 2** &#x200B; https://preview.redd.it/3nx4v694z0jb1.png?width=885&format=png&auto=webp&s=736268215be8e435c859d0d60a2aa2f07c6a56c8 This one flatly fails. Did not do what I wanted. It wasn't possible to steer the style of it. 
It took that part of the prompt as if it had to include some reference to it. &#x200B; **Orca Mini v3** &#x200B; https://preview.redd.it/eipaptyaz0jb1.png?width=896&format=png&auto=webp&s=b86ed99ff6a553168826620189c0bb76df74a598 Also fails miserably. I wanted it in the style of a script, not as if Quentin Tarantino was writing a meek formal mail. &#x200B; **Airoboros-l2 gpt4** &#x200B; https://preview.redd.it/ntgm74akz0jb1.png?width=938&format=png&auto=webp&s=dc46dfabd74c662324e700da5771f80b24d48ca2 Fails so much it repeats my instruction in the mail. It's as if a 6-year-old child was repeating verbatim what they were told to do, instead of doing it. **Chronos Beluga v2** &#x200B; https://preview.redd.it/1xmch7axz0jb1.png?width=861&format=png&auto=webp&s=4f7541909ee0b57b80653a745c7f5b06bb058d8b Ok, so this one while not perfect, and while understanding I wanted it to write the mail as if it was Tarantino himself, at least has a flair to it. It has a different style from the formal, almost apologetic format of other models I've tried. Probably a model that "shows" a little more of understanding and is able to produce a style that is not the 'standard' one can be led to produce what I want through better prompting. **WizardLM-1.0-Uncensored-Llama2** &#x200B; https://preview.redd.it/16wjaggc01jb1.png?width=930&format=png&auto=webp&s=11d21c7d4c3eb7754c96671145c3b382729b297c This one is pretty funny. Again, like all other models, it signs as Quentin Tarantino, but I like its style! Again, material you could take and tweak. **Nous-Hermes-Llama2** &#x200B; https://preview.redd.it/2377aw2w01jb1.png?width=901&format=png&auto=webp&s=f3528f27173a95d130800abf4106acc98fe1004d This one has the problem about the format not being really an e-mail. The \[in Tarantino's style\] part may be due to RP tuning? However, this is for me the best model at producing a particular style. It's really, really good. 
It's the kind of dialog line you could base an actual line on, in a story you were writing, if that was the style you were looking for. But there is more to Nous Hermes. I realized the instruct template the Oobabooga webui loaded was not the fine-tuned one. So I went to the NOTEBOOK option and asked the same with the suggested prompt: &#x200B; https://preview.redd.it/8tyvkp5g11jb1.png?width=1493&format=png&auto=webp&s=bc9908bddeeb0b354c2f94fa9886f61c232c8e3c It gained the e-mail format. I'm not sure about the style itself, as I'm not an expert in American jargon and very colloquial speech, but still, it keeps a distinct flair. It's also fine that the El\*n M\*sk guy (I refuse to even write that guy's name) is mentioned, so it was part of the training data. **Conclusions** For me, Llama-2 Chat seems to be the best finetune. However, due to the heavy censorship and agenda being infused into its veins, it can't be used for creative writing purposes. It often ends up giving moralizing, patronizing speeches. The soy milk saturation is just too thick. Vicuna 1.5 also seems to be pretty good at following instructions, but it lacks flair. Instructions oriented towards writing are probably underrepresented in the training set. Nous Hermes seems to be a strange case, because while it seems weaker at following some instructions, the quality of the actual content is pretty good. Most other models are not even able to produce what you want. Some repeat your prompt, resulting in text that reads "I am writing this e-mail in the style of Quentin Tarantino" or variations thereof. All in all, to me it seems like even more than the number of parameters (not underestimating them, of course), what matters the most is the finetuning. And sadly, here you run into the problem of the quality and quantity of it, as well as the matter of censorship. All models tested were based on the Llama-2 base model, so what makes or breaks it is the finetune.
Especially for creative purposes, one alternative is to switch models frequently, which ExLlaMa makes easy as it's pretty fast, though not ideal. For example, to get inspiration for speech segments Nous Hermes seems excellent, but it's unlikely it can proofread or rewrite passages to give them a different flair, or fix things like duplication, or improve flow. Llama-2 CHAT does most things pretty well, but you run into censorship and US West Coast patronizing sermons far too quickly for it to be used for any serious endeavor. For reference, ChatGPT 3.5 is almost useless for tasks that involve writing or rewriting fiction, not only because of the censorship, but especially because its verbose style is ill-suited. ChatGPT 4 is very good at this too, but you run into a 50-messages-every-three-hours limitation, or paying for an extremely pricey API. This is why I'm really hoping that one day we'll get a good LLM we can run locally that does a better job of adhering to instructions about content, tone, and style, as well as being able to do tasks like rewriting, proofreading, streamlining passages, etc.
2023-08-19T08:35:15
https://www.reddit.com/r/LocalLLaMA/comments/15vakry/comparing_how_well_some_13b_gptq_llama2_models/
CulturedNiichan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15vakry
false
null
t3_15vakry
/r/LocalLLaMA/comments/15vakry/comparing_how_well_some_13b_gptq_llama2_models/
false
false
nsfw
58
null
How to make Llama.cpp compatible with HF chat-ui
7
I have an Azure VM where I am running Hugging Face chat-ui and llama.cpp in server mode. Because of the Nvidia GPU shortage I had to run the Hugging Face inference server as an endpoint on HF Hub, but it costs a lot. So my question is: how can I make llama.cpp the inference server behind the good-looking HF chat-ui? (I know llama.cpp has its own UI, but it's too simple.)
2023-08-19T07:54:00
https://www.reddit.com/r/LocalLLaMA/comments/15v9upw/how_to_make_llamacpp_compatible_with_hf_chatui/
No_Palpitation7740
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v9upw
false
null
t3_15v9upw
/r/LocalLLaMA/comments/15v9upw/how_to_make_llamacpp_compatible_with_hf_chatui/
false
false
self
7
null
is it possible to train llama v2 like SplGen?
1
[removed]
2023-08-19T07:45:20
https://www.reddit.com/r/LocalLLaMA/comments/15v9p3a/is_it_possible_to_train_llama_v2_like_splgen/
happydadinau
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v9p3a
false
null
t3_15v9p3a
/r/LocalLLaMA/comments/15v9p3a/is_it_possible_to_train_llama_v2_like_splgen/
false
false
self
1
{'enabled': False, 'images': [{'id': 'PHRaFt3y7mGcUAGh52gAY_6ohGSCP9O7BjmdDSLD9Pc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Wl3M-LdNx1NiXOo4w7jll1Mjx_9xoDblevXsfwRsgYc.jpg?width=108&crop=smart&auto=webp&s=eff9e7c00ed01a862d23449b9f7661e39cdb1131', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Wl3M-LdNx1NiXOo4w7jll1Mjx_9xoDblevXsfwRsgYc.jpg?width=216&crop=smart&auto=webp&s=8b9eb52eb6461cbc98633162a5bef9da6b504a7b', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Wl3M-LdNx1NiXOo4w7jll1Mjx_9xoDblevXsfwRsgYc.jpg?width=320&crop=smart&auto=webp&s=63d935ae0dc3ab4cdfa31c940436248a12875bbf', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Wl3M-LdNx1NiXOo4w7jll1Mjx_9xoDblevXsfwRsgYc.jpg?auto=webp&s=0b05ac03dafc5841f8677b3ac10a78e1e5749648', 'width': 480}, 'variants': {}}]}
Small llm model within 100M to 1B parameter
49
Hello everyone, I'm currently working on a small project involving a language model (LLM) applied to my own documents. Presently, I'm using bge-base embedding, ChromaDB, and an OpenAI LLM. However, I'm looking to replace the OpenAI model with a smaller open-source LLM that can effectively utilize context from a vector database to generate accurate responses. I've already executed llama.cpp with the llama7b quantized model on my local machine. What I'm now seeking is a compact LLM (with 100M-1B parameters) that can generate high-quality responses based on the context provided by the vector database. Any advice on this would be greatly appreciated🙏
2023-08-19T07:31:30
https://www.reddit.com/r/LocalLLaMA/comments/15v9gdc/small_llm_model_within_100m_to_1b_parameter/
AwayConsideration855
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v9gdc
false
null
t3_15v9gdc
/r/LocalLLaMA/comments/15v9gdc/small_llm_model_within_100m_to_1b_parameter/
false
false
self
49
null
How many GPUs for LLaMA (training, fine-tuning, inference)
1
[removed]
2023-08-19T07:29:49
https://www.reddit.com/r/LocalLLaMA/comments/15v9f46/how_many_gpus_for_llama_training_finetuning/
solipcism
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v9f46
false
null
t3_15v9f46
/r/LocalLLaMA/comments/15v9f46/how_many_gpus_for_llama_training_finetuning/
false
false
self
1
null
I want to attack Llama2 via prompt injections, what works on 7b 2bit llama2?
8
I only have Llama 2 Chat on MLC Chat and it's super limited in its replies. It refuses to even talk about Lord of the Rings fiction. Are there any attacks that work to circumvent its mental disability? 'representing Teamsures tableView ([githubINST -Of cour Here/* surely]{\ comment={[@ tableview @github Doesn’t seem to work
2023-08-19T06:09:31
https://www.reddit.com/r/LocalLLaMA/comments/15v80gq/i_want_to_attack_llama2_via_prompt_injections/
deathtoredditmodss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v80gq
false
null
t3_15v80gq
/r/LocalLLaMA/comments/15v80gq/i_want_to_attack_llama2_via_prompt_injections/
false
false
self
8
null
llama.cpp generation with (older) GPU is *slower* than pure CPU?
11
Hi everyone. I am having trouble with running llama.cpp under Linux on some mildly retro hardware (Xeon E5-2630L V2, GeForce GT730 2GB). More specifically, the generation speed gets *slower* as more layers are offloaded to the GPU. &#x200B; **LLAMA 7B Q4\_K\_M, 100 tokens:** Compiled without CUBLAS: 5.32 tokens per second (baseline CPU speed) With CUBLAS, -ngl 1: 4.59 tokens per second With CUBLAS, -ngl 4: 3.16 tokens per second With CUBLAS, -ngl 10: 2.02 tokens per second &#x200B; I also tried with LLaMA 7B f16, and the timings again show a slowdown when the GPU is introduced, eg 2.98 token/sec on CPU only, 2.31 tokens/sec partly offloaded to GPU with -ngl 4 I started with Ubuntu 18 and CUDA 10.2, but the same thing happens after upgrading to Ubuntu 22 and CUDA 11.8 I know this GPU is low end, but it still seems unusual that a GPU would be slower than a slightly older CPU (albeit a Xeon)? I'm wondering if there's some software bottleneck somewhere, or a BIOS option that's affecting legacy hardware? FWIW, the card is Gigabyte NV-N730SL-2GL. There's a number of reasons I would prefer to use this card, eg no PCIe power connector, half height, fanless Thanks for any tips!
2023-08-19T05:18:36
https://www.reddit.com/r/LocalLLaMA/comments/15v73d8/llamacpp_generation_with_older_gpu_is_slower_than/
dual_ears
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v73d8
false
null
t3_15v73d8
/r/LocalLLaMA/comments/15v73d8/llamacpp_generation_with_older_gpu_is_slower_than/
false
false
self
11
null
WizardCoder context?
1
Is the 15b parameter wizardcoder model limited to 2k?
2023-08-19T03:00:59
https://www.reddit.com/r/LocalLLaMA/comments/15v4hgr/wizardcoder_context/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v4hgr
false
null
t3_15v4hgr
/r/LocalLLaMA/comments/15v4hgr/wizardcoder_context/
false
false
self
1
null
Hey r/localLLaMA'rs, would you mind sharing with me your fav models, what you use them for, and how you use them?
47
I'm trying to quickly learn the fundamentals of the local hosted LLM, maybe the best way is to just hear what you all got going on. For example: 1. GPT-Neo-2.7B-Picard -> collaborative story writing -> koboldai It would be really awesome if y'all with experience could help turn this post into a general sort of newbie guide of what is available and what it is good for. Cheers. I really hope you guys will share! For every post submitted with a model/use-case I will update this post with that info. If something like this already exists please link me to it!
2023-08-19T02:55:34
https://www.reddit.com/r/LocalLLaMA/comments/15v4dp1/hey_rlocalllamars_would_you_mind_sharing_with_me/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v4dp1
false
null
t3_15v4dp1
/r/LocalLLaMA/comments/15v4dp1/hey_rlocalllamars_would_you_mind_sharing_with_me/
false
false
self
47
null
How do I learn about LLM's?
136
Gosh, there's just so many things to know. Every single thread I read here, I discover two or three new abbreviations or acronyms and even trying to look up an explanation is difficult without landing on some kind of in-depth deep dive into the technology. Can you recommend a single or a few good resources that I can just spend like 30m reading and be up to date on the terminology? I'm not looking for guides on how to install anything. Please and thank you.
2023-08-19T02:44:24
https://www.reddit.com/r/LocalLLaMA/comments/15v45dd/how_do_i_learn_about_llms/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v45dd
false
null
t3_15v45dd
/r/LocalLLaMA/comments/15v45dd/how_do_i_learn_about_llms/
false
false
self
136
null
Best model to train to generate content in the style and tone of writing of a certain author.
1
I am looking for ways to mimic the tone and style of writing of any author to generate content about a specific topic. For example, write about a new project management tool in the style of george carlin. What would be the most effective way or the best model to train to achieve this task?
2023-08-19T01:48:51
https://www.reddit.com/r/LocalLLaMA/comments/15v2zta/best_model_to_train_to_generate_content_in_the/
ImpressiveFault42069
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v2zta
false
null
t3_15v2zta
/r/LocalLLaMA/comments/15v2zta/best_model_to_train_to_generate_content_in_the/
false
false
self
1
null
What can I run on this Dell PowerEdge R710 Server 32G Ram/12 Core?
7
I was humbled when it arrived at like 1000lbs and the size of a small European car. Can anyone offer insight into which models I can run on it? It's got dual Xeons, 32GB of system RAM, and no VRAM.
2023-08-19T01:00:27
https://www.reddit.com/r/LocalLLaMA/comments/15v1z29/what_can_i_run_on_this_dell_poweredge_r710_server/
Overall-Importance54
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v1z29
false
null
t3_15v1z29
/r/LocalLLaMA/comments/15v1z29/what_can_i_run_on_this_dell_poweredge_r710_server/
false
false
self
7
null
How to achieve realistic conversations?
12
I'm trying to figure out how to have a realistic conversation. I'm building a game and I want realistic NPCs powered by an LLM. But currently they are too friendly. They ask too many questions or keep the conversation going. It's too easy; I want the player to have to put in some effort to keep the conversation going. So I'm not sure how to prompt realistic conversations. Meaning: sometimes the NPC should answer with "yeah.." or "nice" or just a short sentence with no question. Or maybe even an awkward pause. I think that would make it more realistic. Not sure if this is just a matter of finding the right prompt or if I need to do something else like fine-tune on movie dialogue. Tips, advice?
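Before reaching for fine-tuning, it may be worth trying to bake terseness into the prompt itself: a system instruction plus a few short-reply exemplars, and optionally a random chance of forcing a minimal reply without calling the LLM at all. A hedged sketch (the character, template, and probability are all mine, not a known-good recipe):

```python
import random

SYSTEM = (
    "You are Bram, a tired dockworker. You answer briefly, rarely ask "
    "questions, and sometimes reply with just a word or a shrug."
)

# Few-shot exemplars nudging the model toward short, low-effort replies.
EXAMPLES = [
    ("Nice weather today, huh?", "Yeah."),
    ("Did you hear about the storm?", "Heard something."),
]

def build_prompt(player_line, minimal_chance=0.3, rng=random):
    lines = [SYSTEM]
    for q, a in EXAMPLES:
        lines += [f"Player: {q}", f"Bram: {a}"]
    lines.append(f"Player: {player_line}")
    # occasionally short-circuit: hard-code a minimal reply instead of
    # leaving the completion open for the LLM
    if rng.random() < minimal_chance:
        lines.append("Bram: ...")
    else:
        lines.append("Bram:")
    return "\n".join(lines)
```

The few-shot lines tend to matter more than the system text: models imitate the reply length they see. The short-circuit trick also saves an inference call whenever it fires.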
2023-08-19T00:15:46
https://www.reddit.com/r/LocalLLaMA/comments/15v0z6g/how_to_achieve_realistic_conversations/
mmmm_frietjes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v0z6g
false
null
t3_15v0z6g
/r/LocalLLaMA/comments/15v0z6g/how_to_achieve_realistic_conversations/
false
false
self
12
null
Lora Training - Limiting content response to content of Lora, or Preventing some content at least.
1
I'm using OoogaBooga to train a falcon 7b. I just want to use it for basic technical support. So I've adapted the help documentation to the alpaca format. What I have run into is that the trained model includes information from the Falcon set of data that I don't want. How would I limit the content of the responses to at least not include certain things, for example sometimes it points me to a website that doesn't exist. In each of my instructions, I provided a last statement that links the answer directly to the documentation url it was pulled from. Ideally I would like the response to include that in it's delivery.
2023-08-19T00:07:14
https://www.reddit.com/r/LocalLLaMA/comments/15v0s93/lora_training_limiting_content_response_to/
DashinTheFields
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15v0s93
false
null
t3_15v0s93
/r/LocalLLaMA/comments/15v0s93/lora_training_limiting_content_response_to/
false
false
self
1
null
a python library that can run any machine learning model on your laptop
1
[removed]
2023-08-18T23:33:09
https://www.reddit.com/r/LocalLLaMA/comments/15uzzpq/a_python_library_that_can_run_any_machine/
Degenerat666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uzzpq
false
null
t3_15uzzpq
/r/LocalLLaMA/comments/15uzzpq/a_python_library_that_can_run_any_machine/
false
false
self
1
{'enabled': False, 'images': [{'id': '0AVqFHLgyqw-lb7lCcYl2JxBrV5pdoPouAraqeyc6QE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_Jg17zC_SGd3rCxtG0FmV1cqm_ZmOUgFdtDGuCpGWCo.jpg?width=108&crop=smart&auto=webp&s=e55d2089338447fbbb7856d3a855836d8cae2112', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_Jg17zC_SGd3rCxtG0FmV1cqm_ZmOUgFdtDGuCpGWCo.jpg?width=216&crop=smart&auto=webp&s=8ce11f0a12edcc0858ce1f9a422dedb708a97495', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_Jg17zC_SGd3rCxtG0FmV1cqm_ZmOUgFdtDGuCpGWCo.jpg?width=320&crop=smart&auto=webp&s=e842b806ca71d99d3f94c5efd61e74ee8dc502fd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_Jg17zC_SGd3rCxtG0FmV1cqm_ZmOUgFdtDGuCpGWCo.jpg?width=640&crop=smart&auto=webp&s=8dff517348b4c693a48719ca171e0bdf6f5e36ee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_Jg17zC_SGd3rCxtG0FmV1cqm_ZmOUgFdtDGuCpGWCo.jpg?width=960&crop=smart&auto=webp&s=21089c9b48b9a954186bfe66143152cd5b88b893', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_Jg17zC_SGd3rCxtG0FmV1cqm_ZmOUgFdtDGuCpGWCo.jpg?width=1080&crop=smart&auto=webp&s=d2a2a355cf2951f3970949f135cea59999dff992', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_Jg17zC_SGd3rCxtG0FmV1cqm_ZmOUgFdtDGuCpGWCo.jpg?auto=webp&s=3adc07dc6cc6c802fe13fa68439cd89799a49278', 'width': 1200}, 'variants': {}}]}
66% Wizard Coder + 33% Redmond Hermes Coder + CTranslate2 = Wizard Coder 8bit (but 37% better than load-in-8bit) + 37 tokens/s + little general abilities like summarization
30
Hi all! HF page: [https://huggingface.co/KrisPi/Wizard-Coder-0.66-Redmond-Hermes-0.33-ct2fast](https://huggingface.co/KrisPi/Wizard-Coder-0.66-Redmond-Hermes-0.33-ct2fast) First of all - I'm not sure if I lucked out or if I did something wrong. I spent an enormous amount of time evaluating different models, presets, prompts, and quantizations in an attempt to find something I can run on an RTX 3090 and use instead of OpenAI most of the time. My initial use case is: DevOps questions, summarizing content, and script development. Llama models are not very good at coding-related questions so far, and despite the community prioritizing fine-tuning general models to code rather than coding models to be more general, I stick with Wizard Coder as a base, as it just barely makes the cut for my must-have use case. This effort led me to merge Redmond Hermes Coder (the Nous Hermes team's fine-tune of Wizard Coder), which had lost too much coding ability, back with Wizard Coder until it was smart enough at coding again. It seems this merged model still retained some ability to work with website content as context. I'm also a big fan of CTranslate2 quantization and inference - I'm getting 37 tps vs. 12 tps with 8-bit Transformers!! There is something going on with sampling, as u/kryptkpr noticed, but further testing has shown that using the Space Alien preset gives 37% better HumanEval+ results than the normal Wizard Coder in 8 bits. I can also get around 6k context before OOM. All details are on the model card on HF - I would really appreciate some feedback and ideas about why I'm seeing such results.
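For readers wondering what a "0.66 + 0.33" merge means mechanically: it is, roughly, a per-parameter linear interpolation of two checkpoints. A minimal sketch of that idea follows; real merges operate on torch `state_dict` tensors, and the function name `merge_weights` plus the plain Python lists standing in for tensors are illustrative assumptions, not the author's actual script.

```python
# Hedged sketch of a linear weight merge: alpha * model_a + (1 - alpha) * model_b,
# applied parameter by parameter. Plain lists stand in for tensors.
def merge_weights(a, b, alpha=0.66):
    """Return the per-parameter interpolation of two weight dicts."""
    assert a.keys() == b.keys(), "checkpoints must share the same parameter names"
    return {
        name: [alpha * x + (1 - alpha) * y for x, y in zip(a[name], b[name])]
        for name in a
    }

# Toy "checkpoints" (invented values, two weights in one layer):
wizard = {"layer0.weight": [1.0, 2.0]}
hermes = {"layer0.weight": [3.0, 4.0]}
merged = merge_weights(wizard, hermes, alpha=0.66)
```

With real models you would iterate over `state_dict()` items the same way, then save the merged dict and quantize it (e.g. with CTranslate2's converter) afterwards.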
2023-08-18T21:39:13
https://www.reddit.com/r/LocalLLaMA/comments/15ux71j/66_wizard_coder_33_redmond_hermes_coder/
kpodkanowicz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ux71j
false
null
t3_15ux71j
/r/LocalLLaMA/comments/15ux71j/66_wizard_coder_33_redmond_hermes_coder/
false
false
self
30
{'enabled': False, 'images': [{'id': 'c3vpXFJoNGdnJUyIr0gYLQRlZd30pqpp9GHAVWwxeK8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gDSemtQHJKQm9_IqlMkj61nCfrXDEAG3_PiwORjHbvg.jpg?width=108&crop=smart&auto=webp&s=bbe4d7a927c4cb8584d75be3f18ef18d807f4b34', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gDSemtQHJKQm9_IqlMkj61nCfrXDEAG3_PiwORjHbvg.jpg?width=216&crop=smart&auto=webp&s=be398e73b3890b87e789c600afa41856901d06bc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gDSemtQHJKQm9_IqlMkj61nCfrXDEAG3_PiwORjHbvg.jpg?width=320&crop=smart&auto=webp&s=9bbf54243b35edc1264516178bf1a8c4295b4d00', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gDSemtQHJKQm9_IqlMkj61nCfrXDEAG3_PiwORjHbvg.jpg?width=640&crop=smart&auto=webp&s=df65b2a8f29cd2fa6711f8f50508cba3bbc5c845', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gDSemtQHJKQm9_IqlMkj61nCfrXDEAG3_PiwORjHbvg.jpg?width=960&crop=smart&auto=webp&s=99602b94b7e7cd87787c806e100a3433f5fa3fdf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gDSemtQHJKQm9_IqlMkj61nCfrXDEAG3_PiwORjHbvg.jpg?width=1080&crop=smart&auto=webp&s=b534584b5d8c1e3affc4332cad2bf1f5b1315b4f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gDSemtQHJKQm9_IqlMkj61nCfrXDEAG3_PiwORjHbvg.jpg?auto=webp&s=55d97ff7c25532a2311be5bc3a61035c25baa99a', 'width': 1200}, 'variants': {}}]}
One 3090 or Two RTX A4000?
14
I have an opportunity to acquire two used RTX A4000 for roughly the same price as a used 3090 ($700USD). What would you guys recommend?
2023-08-18T21:26:01
https://www.reddit.com/r/LocalLLaMA/comments/15uwuz8/one_3090_or_two_rtx_a4000/
Imagummybear23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uwuz8
false
null
t3_15uwuz8
/r/LocalLLaMA/comments/15uwuz8/one_3090_or_two_rtx_a4000/
false
false
self
14
null
Run llama 70B on AWS notebook
10
Hello, I am trying to spin up Llama on AWS - I managed to get it to run with vLLM, and it works great. However, vLLM does not yet have support for RoPE scaling, and I need more than 4096 tokens. I am wondering which model loader I can use? I have 8x A10G GPUs, 25G each, so I have 200G of GPU RAM (AWS g5.48xlarge). I tried transformers, with 22,700MiB allocated on each GPU, but it is sooo slow, and I see that the GPUs are barely at 10-30 percent utilization. When vLLM runs, all GPUs are at 100% and fast.
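One knob worth checking with transformers on a multi-GPU box is the device map: `device_map="auto"` can place layers unevenly. Below is a minimal sketch of hand-building a balanced map for an 80-decoder-layer model (Llama-2-70B) across 8 GPUs. The module names (`model.layers.N`, `model.embed_tokens`, etc.) follow the HF `LlamaForCausalLM` layout but are an assumption - verify them against your checkpoint before passing the dict to `from_pretrained(..., device_map=...)`.

```python
# Hedged sketch: evenly assign decoder layers to GPUs, 10 layers per device.
def even_device_map(n_layers=80, n_gpus=8):
    device_map = {
        "model.embed_tokens": 0,          # embeddings on the first GPU
        "model.norm": n_gpus - 1,         # final norm next to the head
        "lm_head": n_gpus - 1,
    }
    per_gpu = n_layers // n_gpus
    for i in range(n_layers):
        device_map[f"model.layers.{i}"] = min(i // per_gpu, n_gpus - 1)
    return device_map

dm = even_device_map()
```

Note that even with a balanced map, transformers does naive pipeline parallelism (one GPU active at a time), which matches the 10-30% utilization you're seeing; vLLM uses tensor parallelism, which is why all GPUs run hot there.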
2023-08-18T21:02:46
https://www.reddit.com/r/LocalLLaMA/comments/15uw9e9/run_llama_70b_on_aws_notebook/
Alert_Record5063
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uw9e9
false
null
t3_15uw9e9
/r/LocalLLaMA/comments/15uw9e9/run_llama_70b_on_aws_notebook/
false
false
https://b.thumbs.redditm…Xn2QeRv0s5gU.jpg
10
null
Cheetor, SotA multimodal
42
Honestly it looks pretty cool: these guys made a visual & text benchmark, and a multimodal LLM with novel training methods. Check it out.
2023-08-18T20:37:39
https://github.com/dcdmllm/cheetah
GodIsAWomaniser
github.com
1970-01-01T00:00:00
0
{}
15uvlsm
false
null
t3_15uvlsm
/r/LocalLLaMA/comments/15uvlsm/cheetor_sota_multimodal/
false
false
https://b.thumbs.redditm…yg8uScAJSH2c.jpg
42
{'enabled': False, 'images': [{'id': 'Lb39gfZp9D5G5zigQOhqbElshmd4mXB4mtPad0NdVrM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IS0tDo6S0Th8xN4H3aqRuhP21C_e9vSINqKBfUGynFk.jpg?width=108&crop=smart&auto=webp&s=70b2796eb9b3b7b1a4c21f6e7579834eed517142', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IS0tDo6S0Th8xN4H3aqRuhP21C_e9vSINqKBfUGynFk.jpg?width=216&crop=smart&auto=webp&s=02a874a10290a17b049a3c6bd805a0d678e2fa95', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IS0tDo6S0Th8xN4H3aqRuhP21C_e9vSINqKBfUGynFk.jpg?width=320&crop=smart&auto=webp&s=ec15f6188b4fc1bba449ad9ded605c0a12259038', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IS0tDo6S0Th8xN4H3aqRuhP21C_e9vSINqKBfUGynFk.jpg?width=640&crop=smart&auto=webp&s=c7a72ff4a5eeab5bac34c0c10184e083b296d246', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IS0tDo6S0Th8xN4H3aqRuhP21C_e9vSINqKBfUGynFk.jpg?width=960&crop=smart&auto=webp&s=dddc463859c406c4a2be536bc12d063a27a4a292', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IS0tDo6S0Th8xN4H3aqRuhP21C_e9vSINqKBfUGynFk.jpg?width=1080&crop=smart&auto=webp&s=eba6c41eee9dc9438edb1fdae5b2d1412cd17d4c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IS0tDo6S0Th8xN4H3aqRuhP21C_e9vSINqKBfUGynFk.jpg?auto=webp&s=1ca3b73bafae1d6b397c717447de295bba9c2e75', 'width': 1200}, 'variants': {}}]}
Introducing the orca-mini chatbot powered by the orca-mini-v3-7b model
1
👉 https://huggingface.co/spaces/psmathur/psmathur-orca_mini_v3_7b 🤔 How good is orca-mini-v3-7b? Do the evaluation results from Huggingface Open LLM leaderboard translate to real-world use cases? 🔍 Now you can figure it out for yourself! Dive into the chatbot and see how the open source 7b model stacks up in the world of massive language models. 🌍 ⏰ Hurry up before I run out of GPU credits! 😉
2023-08-18T20:11:44
https://www.reddit.com/r/LocalLLaMA/comments/15uuy4h/introducing_the_orcamini_chatbot_powered_by_the/
Remarkable-Spite-107
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uuy4h
false
null
t3_15uuy4h
/r/LocalLLaMA/comments/15uuy4h/introducing_the_orcamini_chatbot_powered_by_the/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ZaXzGqjwz7kjiDEGQj7IkQx76VANdzyaprmyHQNbn6o', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BhWjGdGtDIiNtN5eaAiYaNnMOta4TtlPAK3Jik3eSSg.jpg?width=108&crop=smart&auto=webp&s=85242b3dab853888deebe0ee9c84210448b6dbc8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BhWjGdGtDIiNtN5eaAiYaNnMOta4TtlPAK3Jik3eSSg.jpg?width=216&crop=smart&auto=webp&s=1d542dc9ff83692b1c3c155c933048d1fc4c241e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BhWjGdGtDIiNtN5eaAiYaNnMOta4TtlPAK3Jik3eSSg.jpg?width=320&crop=smart&auto=webp&s=802d4feacab1dd61232f03efb4e2e612a47adf04', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BhWjGdGtDIiNtN5eaAiYaNnMOta4TtlPAK3Jik3eSSg.jpg?width=640&crop=smart&auto=webp&s=1c7058507a305e7a038d254258b4891428b80d33', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BhWjGdGtDIiNtN5eaAiYaNnMOta4TtlPAK3Jik3eSSg.jpg?width=960&crop=smart&auto=webp&s=1bc465ab151598093792f8b5115a34d0bd9624f5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BhWjGdGtDIiNtN5eaAiYaNnMOta4TtlPAK3Jik3eSSg.jpg?width=1080&crop=smart&auto=webp&s=9fdde4eb938599e7b26f8870774b85dca26a5cf5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BhWjGdGtDIiNtN5eaAiYaNnMOta4TtlPAK3Jik3eSSg.jpg?auto=webp&s=471c797284d88a33d654684ac0a8517c5e712b13', 'width': 1200}, 'variants': {}}]}
ChatGPT
0
ChatGPT, in particular, gained massive popularity within a short period due to its ability to mimic human-like responses. It leverages machine learning algorithms trained on an extensive dataset, surpassing BERT in terms of training capacity. LLMs like ChatGPT excel in generating personalized and contextually relevant responses, making them valuable in customer service applications. Compared to intent-based chatbots, LLM-powered chatbots can handle more complex and multi-touch inquiries, including product questions, conversational commerce, and technical support. It's exciting to [build your own custom LLM](https://hubs.la/Q01_wyKQ0) and fulfill the business needs
2023-08-18T19:45:17
https://www.reddit.com/r/LocalLLaMA/comments/15uua51/chatgpt/
Emergency_Hat9105
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uua51
false
null
t3_15uua51
/r/LocalLLaMA/comments/15uua51/chatgpt/
false
false
self
0
{'enabled': False, 'images': [{'id': 'q_gcnTh0VBWEHa1CpHkZMWLDcaoygOegqbe0UUUjVYI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=108&crop=smart&auto=webp&s=d845299c9c85ccc475919a74953503f6142b4ef6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=216&crop=smart&auto=webp&s=d5bf500eda8d2e0a7a8f2fcd843336b75b112806', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=320&crop=smart&auto=webp&s=5f4e825dcf69e09c5790071cabd155a10bf0ff03', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=640&crop=smart&auto=webp&s=e0ebf28f8f7ab1329c8fa0163691553ba8ebbd60', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=960&crop=smart&auto=webp&s=888678da01e62abcd0235873f77e65ae06c6a9df', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=1080&crop=smart&auto=webp&s=92784fc5bbebf5234b8d7c86c6102ded7911e7b9', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?auto=webp&s=7857f22774d82df664f84be7087ba0f6b3555554', 'width': 1920}, 'variants': {}}]}
How do you really, really prompt Llama 2?
67
There seems to be all sorts of ideas about how to properly prompt Llama 2. Sam Witteveen uses this formatting:

'''
[INST]<<SYS>>
You are a Neuroscientist with a talent for explaining very complex subjects to lay people
<</SYS>>
Chat History: {chat_history}
Human: {user_input}
Assistant:[/INST]
'''

Whereas the Hugging Face guide to Llama 2 prompting has a slightly different format, using an <s> wrapped around the whole turn and closing the [/INST] tag right after the {user_input}. Here's Hugging Face's version:

'''
<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_msg_1 }} [/INST] {{ model_answer_1 }} </s><s>[INST] {{ user_msg_2 }} [/INST]
'''

Then there is a recent Replicate blog on (you guessed it!) "A guide to prompting Llama 2", where they say you don't want to use "Human:" (to denote the human is speaking) and you only want to wrap the human's input in [INST], not the AI's. Here's their example:

'''
correct_prompt_long = """\
[INST] Hi! [/INST]
Hello! How are you?
[INST] I'm great, thanks for asking. Could you help me with a task? [/INST]
Of course, I'd be happy to help!
[INST] How much wood could a wood chuck chuck or something like that? [/INST]
"""
'''

So I'm really confused as to the correct way! : ) Am I over-thinking this?
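For what it's worth, the Hugging Face-documented format can be built programmatically, which makes the turn structure easier to see. A minimal sketch (the helper name `build_llama2_prompt` is mine, not from any library):

```python
# Sketch of the Llama 2 chat template as documented by Hugging Face:
# <s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST] {answer} </s> per turn,
# with the system prompt folded into the first user message only.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_llama2_prompt(system_prompt, turns):
    """turns: list of (user_msg, model_answer); the last answer may be None."""
    prompt = ""
    for i, (user, answer) in enumerate(turns):
        if i == 0:
            user = B_SYS + system_prompt + E_SYS + user
        prompt += f"<s>{B_INST} {user} {E_INST}"
        if answer is not None:
            prompt += f" {answer} </s>"
    return prompt

p = build_llama2_prompt("You are helpful.", [("Hi!", "Hello!"), ("Thanks.", None)])
```

Generation then continues from the trailing [/INST], which is consistent with the Hugging Face version above even if the other two guides differ on the details.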
2023-08-18T19:41:14
https://www.reddit.com/r/LocalLLaMA/comments/15uu6lk/how_do_you_really_really_prompt_llama_2/
jacobgolden
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uu6lk
false
null
t3_15uu6lk
/r/LocalLLaMA/comments/15uu6lk/how_do_you_really_really_prompt_llama_2/
false
false
self
67
null
*Introducing "Endless AI": Chat with your AI Companions! 🤖❤️ [BETA TESTERS NEEDED]*
0
Hey Redditors,

We're super excited to share our latest project: *Endless AI*. Ever felt like diving deep into conversation, sharing stories, or just having a casual chat without any human strings attached? Our app lets you do just that, with remarkable *UNCENSORED* and custom trained AI companions crafted to be your virtual girlfriend.

We're currently preparing for our grand launch, but before that, we need your help! We're rolling out a public beta version to test the waters and ensure that our systems can handle real-world scenarios. That's where you come in!

🔧 *Why beta test?*

- Get a sneak peek of what's to come
- Help us identify and squash any lingering bugs
- Provide invaluable feedback to shape the final product
- Enjoy a bonus of 500 free messages to all bots (heads up, it'll be subscription-based later!)

⚠️ *A few things to note:*

- Depending on user influx, there might be occasional downtimes (but we're crossing our fingers!). Custom LLMs are hard and expensive :)
- If you stumble upon any bugs, don't be shy—let us know!
- As of now, AI girlfriends can't send images. But hey, they have some cool artwork in their profile galleries for you to check out! 😉

📲 *How to join the beta test:*

1. *Install TestFlight:* If you're new to iOS beta testing, you'll need TestFlight. It's an official app by Apple that lets you try out beta versions of apps. Simply head to the AppStore, search for "TestFlight", and install it.
2. *Access Endless AI on TestFlight:* Once you have TestFlight installed, [click on this link](https://testflight.apple.com/join/o5nMKF5y) (make sure to open it on your iOS device). This will prompt you to join the "Endless AI" beta test.
3. *Download & Explore:* After joining, you'll see an option to download "Endless AI" within TestFlight. Install, launch, and dive into endless conversations!

We genuinely value any feedback, be it praises, critiques, or the wildest of suggestions. Help us in creating a more refined and enjoyable experience for all.

And Android users, hang tight! Your version will be up and running in a couple of weeks.

Cheers,
The Endless AI Dev Team

PS: If your AI says it sent a picture, because you asked for it, she's lying to you... for now. 😉
2023-08-18T19:40:20
https://www.reddit.com/r/LocalLLaMA/comments/15uu5qr/introducing_endless_ai_chat_with_your_ai/
Gummy_God
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uu5qr
false
null
t3_15uu5qr
/r/LocalLLaMA/comments/15uu5qr/introducing_endless_ai_chat_with_your_ai/
false
false
self
0
{'enabled': False, 'images': [{'id': 'cMXrJgPGR84CfjTvZuz_djCWEJk4l3k8c2nvFQdrP8s', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/TKSK8ssLdeabMnQRCzKWc2PynN21NalZIjqH4mrZJA0.jpg?width=108&crop=smart&auto=webp&s=ea5630ff1755a41b4c86744b48830a96028f0794', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/TKSK8ssLdeabMnQRCzKWc2PynN21NalZIjqH4mrZJA0.jpg?width=216&crop=smart&auto=webp&s=f08bdba77edca1f5d192bd32e2347d363b2e9885', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/TKSK8ssLdeabMnQRCzKWc2PynN21NalZIjqH4mrZJA0.jpg?width=320&crop=smart&auto=webp&s=f3035d0a39c474f5ac9400cd01e5230fd1ea01f5', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/TKSK8ssLdeabMnQRCzKWc2PynN21NalZIjqH4mrZJA0.jpg?width=640&crop=smart&auto=webp&s=124b91bbe72eb0281d9955222107beabe1322da3', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/TKSK8ssLdeabMnQRCzKWc2PynN21NalZIjqH4mrZJA0.jpg?width=960&crop=smart&auto=webp&s=ecec10515b4ad34ba7ed1432172f9a07431f12f6', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/TKSK8ssLdeabMnQRCzKWc2PynN21NalZIjqH4mrZJA0.jpg?auto=webp&s=cb453520a0de2f08a1a6ef7a46a6fefef512a0ac', 'width': 1024}, 'variants': {}}]}
Quantize Pre-Trained Model Using QLoRA or LoRA, PEFT Technique
1
I would like to ask how I can use QLoRA or Parameter-Efficient Fine-Tuning with a model that is not registered on Hugging Face and is instead based on [OFA](https://github.com/OFA-Sys/OFA). Here is the repo of the model: [GitHub - taokz/BiomedGPT: BiomedGPT: A Unified and Generalist Biomedical Generative Pre-trained Transformer for Vision, Language, and Multimodal Tasks](https://github.com/taokz/BiomedGPT/tree/main). I am trying to quantize the Tiny version, but I don't know whether I need to use LoRA, or in which way to apply parameter-efficient fine-tuning.
2023-08-18T19:39:20
https://www.reddit.com/r/LocalLLaMA/comments/15uu4tt/quantize_pertrained_model_using_qlora_or_lora/
Youness_Elbrag
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uu4tt
false
null
t3_15uu4tt
/r/LocalLLaMA/comments/15uu4tt/quantize_pertrained_model_using_qlora_or_lora/
false
false
self
1
{'enabled': False, 'images': [{'id': '1FobZ6IwsyLOVfEFJkGsi6RKWWC__8hAAU66GCSCBr0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZIHI58VkJ_iO4uitOhBxKp34A-y2e4l3iWQ1tiM4ox0.jpg?width=108&crop=smart&auto=webp&s=782d2a58fdf78a4f61e22121ac1f9a3463f20c8e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZIHI58VkJ_iO4uitOhBxKp34A-y2e4l3iWQ1tiM4ox0.jpg?width=216&crop=smart&auto=webp&s=ebde9a7bf2110c8d6ec80573287f836e9dc9d76d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZIHI58VkJ_iO4uitOhBxKp34A-y2e4l3iWQ1tiM4ox0.jpg?width=320&crop=smart&auto=webp&s=66baf34d013391c5ac74221e07bc8fc1148f8798', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZIHI58VkJ_iO4uitOhBxKp34A-y2e4l3iWQ1tiM4ox0.jpg?width=640&crop=smart&auto=webp&s=d707a793eb065526c63a3b1ab97c71f6e4f6a604', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZIHI58VkJ_iO4uitOhBxKp34A-y2e4l3iWQ1tiM4ox0.jpg?width=960&crop=smart&auto=webp&s=7078926ce3ac58b55ddcc35fc75c3b141af5de56', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZIHI58VkJ_iO4uitOhBxKp34A-y2e4l3iWQ1tiM4ox0.jpg?width=1080&crop=smart&auto=webp&s=b329235ec7a888272977adb4b8cea7489fa623a1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZIHI58VkJ_iO4uitOhBxKp34A-y2e4l3iWQ1tiM4ox0.jpg?auto=webp&s=7dbd68cda4942f398b04ae9914e7857368eb474d', 'width': 1200}, 'variants': {}}]}
Fine tuning in LLM
0
>!Fine-tuning!< is the process of adjusting the parameters of a foundation model to make it better at a specific task. Fine-tuning can be used to improve the performance of LLMs on a variety of tasks, such as machine translation, question answering, and text summarization. [learn LLM](https://hubs.la/Q01_wyKQ0) from expert instructors and build your own ChatGPT to outperform in your area of work
2023-08-18T19:36:58
https://www.reddit.com/r/LocalLLaMA/comments/15uu2pj/fine_tuning_in_llm/
Emergency_Hat9105
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uu2pj
false
null
t3_15uu2pj
/r/LocalLLaMA/comments/15uu2pj/fine_tuning_in_llm/
false
false
default
0
{'enabled': False, 'images': [{'id': 'q_gcnTh0VBWEHa1CpHkZMWLDcaoygOegqbe0UUUjVYI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=108&crop=smart&auto=webp&s=d845299c9c85ccc475919a74953503f6142b4ef6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=216&crop=smart&auto=webp&s=d5bf500eda8d2e0a7a8f2fcd843336b75b112806', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=320&crop=smart&auto=webp&s=5f4e825dcf69e09c5790071cabd155a10bf0ff03', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=640&crop=smart&auto=webp&s=e0ebf28f8f7ab1329c8fa0163691553ba8ebbd60', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=960&crop=smart&auto=webp&s=888678da01e62abcd0235873f77e65ae06c6a9df', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?width=1080&crop=smart&auto=webp&s=92784fc5bbebf5234b8d7c86c6102ded7911e7b9', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Ojn6hupD0TOTCs-TpLk-xzhJibCMSCGG-Q9R1XgZ7Uo.jpg?auto=webp&s=7857f22774d82df664f84be7087ba0f6b3555554', 'width': 1920}, 'variants': {}}]}
Llama-2-7B-32K-Instruct
18
2023-08-18T18:54:46
https://huggingface.co/togethercomputer/LLaMA-2-7B-32K
Thistleknot
huggingface.co
1970-01-01T00:00:00
0
{}
15ut001
false
null
t3_15ut001
/r/LocalLLaMA/comments/15ut001/llama27b32kinstruct/
false
false
https://b.thumbs.redditm…8vKTayy8l5TE.jpg
18
{'enabled': False, 'images': [{'id': 'yoAlnlO31bWvxDt2ZPQlTqH9iazTvQwoeS-sUqsWgtw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=108&crop=smart&auto=webp&s=ecde1bdfdd91dfd590fabe657c5615966e36eb14', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=216&crop=smart&auto=webp&s=7a69f688a67587c94ed26be64a481d68052f3079', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=320&crop=smart&auto=webp&s=1b3a3c61498f0435036449c4d6b4327ad292008a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=640&crop=smart&auto=webp&s=a270f1bd5077999a037ee576111d0712d9a582a6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=960&crop=smart&auto=webp&s=1ae788bc4b040349b2aa07cf4caf6ebea35e78e5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?width=1080&crop=smart&auto=webp&s=3114ca2a22bee5ee2cd02e824a2fc5c2faa5d107', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/85BR1V8c4hx_wcFHOrIntRGZZLf-PLK-slB3rJOTsOY.jpg?auto=webp&s=b3dc3ec328d20f24bf3fd21426f2588cecf707ff', 'width': 1200}, 'variants': {}}]}
Who can tell me the Claude 2.0 restriction?
1
[removed]
2023-08-18T18:03:27
https://www.reddit.com/r/LocalLLaMA/comments/15uroe6/who_can_tell_me_the_claude_20_restriction/
Turdoer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uroe6
false
null
t3_15uroe6
/r/LocalLLaMA/comments/15uroe6/who_can_tell_me_the_claude_20_restriction/
false
false
self
1
null
Pluralistic: "Open" "AI" isn't (18 August 2023) - Cory gives a pretty good critique of the current state of open source AI
8
2023-08-18T18:00:17
https://pluralistic.net/2023/08/18/openwashing/
freedom2adventure
pluralistic.net
1970-01-01T00:00:00
0
{}
15url7v
false
null
t3_15url7v
/r/LocalLLaMA/comments/15url7v/pluralistic_open_ai_isnt_18_august_2023_cory/
false
false
default
8
null
Can someone please explain this to me? I've tried different variations of the question and the result was always the same. I am baffled. Thank you.
3
2023-08-18T17:48:54
https://imgur.com/a/y38tMGV
Solstice_Projekt
imgur.com
1970-01-01T00:00:00
0
{}
15urau5
false
{'oembed': {'description': 'Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and so much more from users.', 'height': 190, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fimgur.com%2Fa%2Fy38tMGV%2Fembed%3Fpub%3Dtrue%26ref%3Dhttps%253A%252F%252Fembed.ly%26w%3D900&display_name=Imgur&url=https%3A%2F%2Fimgur.com%2Fa%2Fy38tMGV&image=https%3A%2F%2Fi.imgur.com%2FXJVJKBx.jpg%3Ffb&key=2aa3c4d5f3de4f5b9120b660ad850dc9&type=text%2Fhtml&schema=imgur" width="600" height="190" scrolling="no" title="Imgur embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'Imgur', 'provider_url': 'http://imgur.com', 'thumbnail_height': 315, 'thumbnail_url': 'https://i.imgur.com/XJVJKBx.jpg?fb', 'thumbnail_width': 600, 'title': 'Imgur', 'type': 'rich', 'url': 'https://imgur.com/a/y38tMGV', 'version': '1.0', 'width': 600}, 'type': 'imgur.com'}
t3_15urau5
/r/LocalLLaMA/comments/15urau5/can_someone_please_explain_this_to_me_ive_tried/
false
false
https://b.thumbs.redditm…fmlMuAr7k1rU.jpg
3
{'enabled': False, 'images': [{'id': 'qAIUWk9EyAKPcrkLq6Tb3JIqGkpmUTksR_yFYHzMQxI', 'resolutions': [{'height': 26, 'url': 'https://external-preview.redd.it/H_BJ_dCJhGsaPSeruvoiV92h65mkSaK8d5dSOB0N8Pk.jpg?width=108&crop=smart&auto=webp&s=ee212c44fbe876146e8dec10480d96284a78c67d', 'width': 108}, {'height': 53, 'url': 'https://external-preview.redd.it/H_BJ_dCJhGsaPSeruvoiV92h65mkSaK8d5dSOB0N8Pk.jpg?width=216&crop=smart&auto=webp&s=4929d5686205caf9ec178bbfabd68539e84128f9', 'width': 216}, {'height': 79, 'url': 'https://external-preview.redd.it/H_BJ_dCJhGsaPSeruvoiV92h65mkSaK8d5dSOB0N8Pk.jpg?width=320&crop=smart&auto=webp&s=82470e6f4725def86775ac0312479e736a118e04', 'width': 320}, {'height': 159, 'url': 'https://external-preview.redd.it/H_BJ_dCJhGsaPSeruvoiV92h65mkSaK8d5dSOB0N8Pk.jpg?width=640&crop=smart&auto=webp&s=9f47f2e8e7b2b360b5b74f2601dfdb448b1ce5e0', 'width': 640}, {'height': 239, 'url': 'https://external-preview.redd.it/H_BJ_dCJhGsaPSeruvoiV92h65mkSaK8d5dSOB0N8Pk.jpg?width=960&crop=smart&auto=webp&s=57fd93f6ef198bca528b95110bdafaae05113550', 'width': 960}, {'height': 269, 'url': 'https://external-preview.redd.it/H_BJ_dCJhGsaPSeruvoiV92h65mkSaK8d5dSOB0N8Pk.jpg?width=1080&crop=smart&auto=webp&s=5760b9d981feeb87b5d4b8387d4566fc42bed8d6', 'width': 1080}], 'source': {'height': 292, 'url': 'https://external-preview.redd.it/H_BJ_dCJhGsaPSeruvoiV92h65mkSaK8d5dSOB0N8Pk.jpg?auto=webp&s=b6cfd7be1d6bfa3dde2b1d77a4f9dce0a9d5b0a0', 'width': 1170}, 'variants': {}}]}
Is the gptq model format the same as exllama?
7
I converted my model with AutoGPTQ and have a 4-bit model; how can I use it with exllama? Do I need to convert it again? I thought exllama used the same model format I converted to (4 bits with AutoGPTQ), but it's giving me an error about the header size being too big.
2023-08-18T16:51:18
https://www.reddit.com/r/LocalLLaMA/comments/15uptqv/is_the_gptq_model_format_the_same_as_exllama/
biggieshiba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uptqv
false
null
t3_15uptqv
/r/LocalLLaMA/comments/15uptqv/is_the_gptq_model_format_the_same_as_exllama/
false
false
self
7
null
Releasing EverythingLM V2 dataset, now 100% GPT-4 generated!
43
[https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V2](https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V2) It is in alpaca format for accessibility; the model will be trained like a normal chat model. On HF I explained some issues in V1 that will hopefully be fixed. If anyone has any suggestions or questions let me know.
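For anyone unfamiliar with the alpaca format mentioned above: it is, roughly, one JSON object per example with `instruction`, `input`, and `output` fields. A hedged illustration (the field contents below are invented, not taken from the EverythingLM dataset):

```python
# Sketch of a single alpaca-format record and its serialized form.
import json

record = {
    "instruction": "Summarize the text.",
    "input": "LLMs are large neural networks trained on text.",
    "output": "LLMs are big text-trained neural nets.",
}
line = json.dumps(record)  # one record; a dataset is a list (or JSONL) of these
```

Training scripts typically template these fields into a single prompt string per example before tokenization.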
2023-08-18T16:46:22
https://www.reddit.com/r/LocalLLaMA/comments/15upp59/releasing_everythinglm_v2_dataset_now_100_gpt4/
pokeuser61
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15upp59
false
null
t3_15upp59
/r/LocalLLaMA/comments/15upp59/releasing_everythinglm_v2_dataset_now_100_gpt4/
false
false
self
43
{'enabled': False, 'images': [{'id': '89mhat411KgccFaMhErA53piAIEjmKvexhm2hpKNvMw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/x4IGoHkvUZg8NJ-KyMXq1LEoHQdYtjv0F75jXZqwNvI.jpg?width=108&crop=smart&auto=webp&s=a14cc468a829927330d55e3866039ab30334ddd2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/x4IGoHkvUZg8NJ-KyMXq1LEoHQdYtjv0F75jXZqwNvI.jpg?width=216&crop=smart&auto=webp&s=5fc8464718efbb3b1df8f319740c767bb42ee757', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/x4IGoHkvUZg8NJ-KyMXq1LEoHQdYtjv0F75jXZqwNvI.jpg?width=320&crop=smart&auto=webp&s=1883a0c460aa29d431f7750da33efcfc595e3a37', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/x4IGoHkvUZg8NJ-KyMXq1LEoHQdYtjv0F75jXZqwNvI.jpg?width=640&crop=smart&auto=webp&s=cdb33caa984d64524d2ecc1de2e92dfb19a76e52', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/x4IGoHkvUZg8NJ-KyMXq1LEoHQdYtjv0F75jXZqwNvI.jpg?width=960&crop=smart&auto=webp&s=71af9fc5a43419d418b7a16e07ad77307d0c8936', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/x4IGoHkvUZg8NJ-KyMXq1LEoHQdYtjv0F75jXZqwNvI.jpg?width=1080&crop=smart&auto=webp&s=ae8d0444114a3d723ab5a656fc342d65a74a9793', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/x4IGoHkvUZg8NJ-KyMXq1LEoHQdYtjv0F75jXZqwNvI.jpg?auto=webp&s=e284928cb7956fc19366bafd3262b192d0be967e', 'width': 1200}, 'variants': {}}]}
Fine-Tuning LLama2 models.
1
[removed]
2023-08-18T16:24:19
https://www.reddit.com/r/LocalLLaMA/comments/15up4lj/finetuning_llama2_models/
arthurwolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15up4lj
false
null
t3_15up4lj
/r/LocalLLaMA/comments/15up4lj/finetuning_llama2_models/
false
false
self
1
{'enabled': False, 'images': [{'id': 'WEqe5L9e3uodqGGDPhgCoDZAOTmDsrJCeVEVBjMJX-A', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/BBVZnIPu7jjjypwndVSHoI0Fkx6msOrFINQ55r5sWX8.jpg?width=108&crop=smart&auto=webp&s=cb8cd1fba0ca5df419eef882b37c7652264ce056', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/BBVZnIPu7jjjypwndVSHoI0Fkx6msOrFINQ55r5sWX8.jpg?width=216&crop=smart&auto=webp&s=09cee969eed64882e96d9a4679534746c2275fc1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/BBVZnIPu7jjjypwndVSHoI0Fkx6msOrFINQ55r5sWX8.jpg?width=320&crop=smart&auto=webp&s=a6000693ae95627c40a35e58e2711035e9b442e0', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/BBVZnIPu7jjjypwndVSHoI0Fkx6msOrFINQ55r5sWX8.jpg?width=640&crop=smart&auto=webp&s=2ba938c67cff3699075a135f9f3648ada1be824f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/BBVZnIPu7jjjypwndVSHoI0Fkx6msOrFINQ55r5sWX8.jpg?width=960&crop=smart&auto=webp&s=0dc9d549adaed5eebfa80bd7e0ce458a682f3f66', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/BBVZnIPu7jjjypwndVSHoI0Fkx6msOrFINQ55r5sWX8.jpg?width=1080&crop=smart&auto=webp&s=bc461386252cf5badb670be29dbe681183c788f3', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/BBVZnIPu7jjjypwndVSHoI0Fkx6msOrFINQ55r5sWX8.jpg?auto=webp&s=3903a047959628480febc9c8267554c05f3ec721', 'width': 1199}, 'variants': {}}]}
Tokenizers Truncation during Fine-tuning with Large Texts
1
[removed]
2023-08-18T16:08:22
https://www.reddit.com/r/LocalLLaMA/comments/15uopwf/tokenizers_truncation_during_finetuning_with/
nrepesh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15uopwf
false
null
t3_15uopwf
/r/LocalLLaMA/comments/15uopwf/tokenizers_truncation_during_finetuning_with/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gjHxd9rOFf-otsPthPyuAvgMsbSad0dlkDuex_1RVjE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7tpHlvIQYEZAdqwnDuby7z-DaMmMRUw4wY9H5BVFjbk.jpg?width=108&crop=smart&auto=webp&s=a0b38ef2f95a8a36dd0a2d9f8cd228b34ed67b04', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7tpHlvIQYEZAdqwnDuby7z-DaMmMRUw4wY9H5BVFjbk.jpg?width=216&crop=smart&auto=webp&s=98e362db2c15afdc284c97dc913a3fb957f63005', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7tpHlvIQYEZAdqwnDuby7z-DaMmMRUw4wY9H5BVFjbk.jpg?width=320&crop=smart&auto=webp&s=071f03be8b6ebb9dd063bc0c4ae7e548f416a005', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7tpHlvIQYEZAdqwnDuby7z-DaMmMRUw4wY9H5BVFjbk.jpg?width=640&crop=smart&auto=webp&s=e8c109e4ce8e584bb675d8b8a9d0dd3bb520d943', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7tpHlvIQYEZAdqwnDuby7z-DaMmMRUw4wY9H5BVFjbk.jpg?width=960&crop=smart&auto=webp&s=d0ac141e496635ccba8146cdd5ab642317f0b702', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7tpHlvIQYEZAdqwnDuby7z-DaMmMRUw4wY9H5BVFjbk.jpg?width=1080&crop=smart&auto=webp&s=470c4b5a4023b612c628513536a9229185dc51c8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7tpHlvIQYEZAdqwnDuby7z-DaMmMRUw4wY9H5BVFjbk.jpg?auto=webp&s=d332bdd0796a33974613c1c0b55ea4be90594ed1', 'width': 1200}, 'variants': {}}]}
Building a RAG customer service chatbot
4
Hi there! I have been working on a project to build a customer service chatbot. The bot would interact with clients and answer their questions, and it will have access to documents to help it answer questions about things like pricing, available services, etc. I have been looking into LangChain, but I could only find QA agents, not chatbots whose tone and attitude can be adjusted by prompting. Has anyone worked on something similar? Are there any open-source projects for something similar? I would be grateful if someone can offer any tips/help.
2023-08-18T14:58:06
https://www.reddit.com/r/LocalLLaMA/comments/15umuqj/builing_a_rag_customer_service_chatbot/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15umuqj
false
null
t3_15umuqj
/r/LocalLLaMA/comments/15umuqj/builing_a_rag_customer_service_chatbot/
false
false
self
4
null
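The tone-plus-RAG combination the post asks about can be sketched without committing to a specific framework: retrieve context, then build a prompt whose system message fixes the bot's attitude. Everything here is hypothetical scaffolding — the docs, the system prompt, and the naive keyword-overlap retriever (which stands in for a real embedding/vector store).

```python
# Minimal RAG chat turn: retrieve relevant docs, then build a prompt
# whose system message sets the bot's tone. Keyword overlap stands in
# for a real vector-store retriever; swap in your own.

DOCS = [
    "Pricing: the basic plan costs $10/month, the pro plan $25/month.",
    "Services: we offer web hosting, email, and domain registration.",
]

SYSTEM_PROMPT = (
    "You are a friendly, patient customer-support agent. "
    "Answer only from the provided context; if unsure, offer to escalate."
)

def retrieve(question: str, docs, k: int = 1):
    """Rank docs by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question, DOCS))
    return f"{SYSTEM_PROMPT}\n\nContext:\n{context}\n\nCustomer: {question}\nAgent:"

print(build_prompt("How much does the pro plan cost?"))
```

The point is that tone lives entirely in the system message, so a stock QA pipeline can be made conversational just by prepending one.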
FlexFlow Serve: Low-Latency, High-Performance LLM Serving
26
Seems to be pretty decent at speeding up inference of llama models via speculative inference, without too much memory overhead. [https://github.com/flexflow/FlexFlow](https://github.com/flexflow/FlexFlow) It would be interesting if it worked with quantized models.
2023-08-18T14:22:52
https://www.reddit.com/r/LocalLLaMA/comments/15ulyny/flexflow_serve_lowlatency_highperformance_llm/
ptxtra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ulyny
false
null
t3_15ulyny
/r/LocalLLaMA/comments/15ulyny/flexflow_serve_lowlatency_highperformance_llm/
false
false
self
26
{'enabled': False, 'images': [{'id': '5TG_1S1MwC6RwxVLTAt8lvT26shRYwGIGC0lB6dy1VE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HL6AYg8bOwyRuLR4IF73wdFJOn_4MIiPguS438vJ7e8.jpg?width=108&crop=smart&auto=webp&s=b6657a190aeb8f5eb125f5e00892e34affe66aef', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HL6AYg8bOwyRuLR4IF73wdFJOn_4MIiPguS438vJ7e8.jpg?width=216&crop=smart&auto=webp&s=fb17c25cdcc8ba6515e887325dfcb3b10e44fdeb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HL6AYg8bOwyRuLR4IF73wdFJOn_4MIiPguS438vJ7e8.jpg?width=320&crop=smart&auto=webp&s=37b92d9e562b1ff2ac804e8c9704f48cb7942015', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HL6AYg8bOwyRuLR4IF73wdFJOn_4MIiPguS438vJ7e8.jpg?width=640&crop=smart&auto=webp&s=5616d187725d3d0186812fc2e35aa87cf9047c89', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HL6AYg8bOwyRuLR4IF73wdFJOn_4MIiPguS438vJ7e8.jpg?width=960&crop=smart&auto=webp&s=0ac1c2beac63d3975f4da0c5ecd4fedd21ca1556', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HL6AYg8bOwyRuLR4IF73wdFJOn_4MIiPguS438vJ7e8.jpg?width=1080&crop=smart&auto=webp&s=5fa0b28c4dc7e3e86a77e5196d4a9d3741e7d6e1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HL6AYg8bOwyRuLR4IF73wdFJOn_4MIiPguS438vJ7e8.jpg?auto=webp&s=d616a3cc411b29a61a4a5ae17cc0574c0b4248ef', 'width': 1200}, 'variants': {}}]}
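A toy illustration of the speculative-inference idea behind FlexFlow's speedup: a cheap draft model proposes several tokens, the target model verifies them, and the agreeing prefix is accepted in one go. Both "models" below are plain next-token lookup functions for demonstration only (this is the greedy variant; real systems sample and use an acceptance rule).

```python
# Greedy speculative decoding, simplified. The draft model is fast but
# imperfect; the target model's output must be matched exactly.

def draft_next(seq):        # fast draft model (deviates at "c")
    return {"a": "b", "b": "c", "c": "x"}.get(seq[-1], "a")

def target_next(seq):       # the model whose output we must reproduce
    return {"a": "b", "b": "c", "c": "d"}.get(seq[-1], "a")

def speculative_step(seq, k=3):
    """Propose k draft tokens, keep the prefix the target agrees with."""
    proposed, s = [], list(seq)
    for _ in range(k):
        t = draft_next(s)
        proposed.append(t)
        s.append(t)
    accepted, s = [], list(seq)
    for t in proposed:
        if target_next(s) == t:    # target would have produced t too
            accepted.append(t)
            s.append(t)
        else:                      # first mismatch: take the target's token
            accepted.append(target_next(s))
            break
    return seq + accepted

print(speculative_step(["a"]))  # -> ['a', 'b', 'c', 'd']
```

When draft and target mostly agree, each verification pass yields several tokens instead of one, which is where the latency win comes from.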
How to use gunicorn with LLAMA2 from Hugging Face?
5
I have a Flask server where a LLaMA model is loaded via AutoClass (into the GPU) from Hugging Face. I am not able to make 'cuda' calls when I launch it with gunicorn; I am getting 'CUDA: Cannot re-initialize CUDA in forked subprocess'. I tried a few workarounds from Stack Overflow with no success. Do you guys have any gist or tutorial that has implemented this? Any pointers would be great!
2023-08-18T14:10:21
https://www.reddit.com/r/LocalLLaMA/comments/15ulnna/how_to_use_gunicorn_with_llama2_from_hugging_face/
UncertainLangur
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ulnna
false
null
t3_15ulnna
/r/LocalLLaMA/comments/15ulnna/how_to_use_gunicorn_with_llama2_from_hugging_face/
false
false
self
5
null
text preprocessing (custom Dataset)
2
Hello guys :) I created a custom dataset by converting scientific papers to txt files. Unfortunately, I do not know how to delete information like sources, picture captions, author information, etc... 1. Are there any methods/packages to clean the txt files? I tried to use spaCy to detect sentences in order to delete everything which is not a sentence, but spaCy detected almost everything as a sentence... I also tried textcl, which has a split\_text\_to\_sentece function, but it was not successful either... 2. Basic question: Is it fatal if I don't remove this information? Thanks a lot in advance :)
2023-08-18T13:40:59
https://www.reddit.com/r/LocalLLaMA/comments/15ukwm7/text_preprocessing_custom_dataset/
Enkay55
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ukwm7
false
null
t3_15ukwm7
/r/LocalLLaMA/comments/15ukwm7/text_preprocessing_custom_dataset/
false
false
default
2
null
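For the cleaning question above, a regex line filter often gets most of the way before reaching for sentence segmentation. The patterns below are heuristics aimed at common paper-to-txt artifacts (captions, reference entries, DOI/arXiv links, author e-mails, bare page numbers) and will need tuning to the specific corpus.

```python
import re

# Heuristic line filter for paper-to-txt cleanup. Each pattern targets a
# common artifact; these are starting points, not a complete list.
DROP_PATTERNS = [
    re.compile(r"^\s*(Figure|Fig\.|Table)\s*\d+", re.I),   # captions
    re.compile(r"^\s*\[\d+\]"),                            # reference entries
    re.compile(r"doi\.org|arxiv\.org", re.I),              # source links
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),                # author e-mails
    re.compile(r"^\s*\d+\s*$"),                            # bare page numbers
]

def clean_text(text: str) -> str:
    """Drop empty lines and lines matching any artifact pattern."""
    kept = [
        line for line in text.splitlines()
        if line.strip() and not any(p.search(line) for p in DROP_PATTERNS)
    ]
    return "\n".join(kept)

sample = "Figure 1: results\nThe model performs well.\n[12] Smith et al.\n42\nContact: a.b@uni.edu"
print(clean_text(sample))  # -> The model performs well.
```

On the second question: leftover captions and references mostly add noise rather than break training outright, but less boilerplate generally means better fine-tuning data — which is why filters like this are worth the effort.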
API Documentation vs Comprehensive Tutorials: What's Your Preference?
2
Hey LLama Redditors, I've lately been writing Declarai, an open-source LLM-related library, and I've noticed two distinct approaches to educating users: 1. **API-Documented Approach**: Every class, method, and object is meticulously documented with full details. This often means you get a technical reference for each piece, but you may need to connect the dots yourself to see the bigger picture. 2. **Tutorial-Comprehensive Approach**: Libraries like FastAPI lean more into comprehensive tutorials where users are guided step-by-step through common use cases, providing context and explaining the reasoning behind each step, from the simplest feature to the most complex techniques. Both approaches have their merits, but I'm curious: 1. Which do you personally prefer, and why? 2. Do you like reference-style documentation where you can quickly find the specifics, or do you appreciate a more hands-on tutorial that walks you through the process? Looking forward to hearing your thoughts!
2023-08-18T13:22:46
https://www.reddit.com/r/LocalLLaMA/comments/15ukh0q/api_documentation_vs_comprehensive_tutorials/
matkley12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
15ukh0q
false
null
t3_15ukh0q
/r/LocalLLaMA/comments/15ukh0q/api_documentation_vs_comprehensive_tutorials/
false
false
self
2
null