| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How to deal with loss for SFT for CausalLM | 1 | I'm currently trying to finetune the Llama 2 chat model.
The task is causal language modeling, and I'm using a custom dataset consisting of domain-specific prompts and corresponding answers.
I'm currently using the SFT trainer ([https://huggingface.co/docs/trl/sft\_trainer](https://huggingface.co/docs/trl/sft_trainer)) and a LoRA adapter ([https://huggingface.co/docs/peft/package\_reference/lora](https://huggingface.co/docs/peft/package_reference/lora)).
My question is about computation of the loss. In the Llama 2 paper, it is said that they zero out the prompt tokens when computing the loss.
https://preview.redd.it/y4er67uj5i5c1.png?width=3596&format=png&auto=webp&s=bc08b3c3e3e1445e973ba41d9ce6c67078140cb9
But in the SFT trainer there seem to be no options to do this (ignoring prompts when computing the loss). At least I couldn't find one, either in the docs or by digging through the code.
Has anyone dealt with the same issues before? | 2023-12-10T17:28:30 | https://www.reddit.com/r/LocalLLaMA/comments/18f8ju4/how_to_deal_with_loss_for_sft_for_causallm/ | __given__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18f8ju4 | false | null | t3_18f8ju4 | /r/LocalLLaMA/comments/18f8ju4/how_to_deal_with_loss_for_sft_for_causallm/ | false | false | 1 | null | |
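The prompt-masking the paper describes isn't exposed as a single SFT trainer flag, but the underlying trick is just setting the prompt positions of the label sequence to -100, the ignore index used by PyTorch/Hugging Face cross-entropy. A minimal sketch of the idea (the function name and token ids are illustrative):

```python
IGNORE_INDEX = -100  # positions with this label are skipped by the cross-entropy loss

def mask_prompt_labels(input_ids, prompt_len):
    """Copy input_ids into labels, but ignore the prompt portion.

    input_ids:  token ids for prompt + answer concatenated
    prompt_len: number of tokens belonging to the prompt
    """
    labels = list(input_ids)
    for i in range(min(prompt_len, len(labels))):
        labels[i] = IGNORE_INDEX  # zero out the prompt's contribution to the loss
    return labels

# Example: a 3-token prompt followed by a 2-token answer
labels = mask_prompt_labels([101, 2023, 2003, 1996, 3437], prompt_len=3)
```

In TRL itself, `DataCollatorForCompletionOnlyLM` implements this pattern by locating a response template string in each example, so it may be worth checking whether your TRL version ships it before rolling your own collator.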
Forcing AI to Follow a Specific Answer Pattern Using GBNF Grammar | 18 | Hi everyone!
I'm currently working on a project where I'm trying to force an AI to reply with a particular structure. I've written a GBNF grammar file for this purpose, but I'm facing some challenges in getting the AI to strictly adhere to the pattern I need.
Here's the pattern I want the AI to follow:
The logic of the script is structured to handle both JSON and non-JSON formatted messages using a GBNF grammar. If the message is in JSON format, it processes the message by analyzing its content and then responds accordingly. For non-JSON messages, it checks whether data extraction is required and responds either by extracting relevant data or by directly replying to the user.
And here is my GBNF Grammar script:
root ::= CheckJSONFormat
CheckJSONFormat ::= "{" ws "\"is_json_format\":" ws boolean ws "}" ProcessMessageBasedOnFormat
ProcessMessageBasedOnFormat ::= ProcessJSON | ProcessNonJSON
ProcessJSON ::= "{" ws "\"function\":" ws "\"reply_to_user\"," ws "\"message\":" ws AnalyseJSONMessage ws "}"
ProcessNonJSON ::= CheckForDataExtraction
AnalyseJSONMessage ::= string
CheckForDataExtraction ::= "{" ws "\"check_for_data_extraction\":" ws boolean ws "}" (DataExtractionTrue | DataExtractionFalse)
DataExtractionTrue ::= "{" ws "\"function\":" ws "\"rag_extraction\"," ws "\"message\":" ws "[" ws string ("," ws string)* "]" ws "}"
DataExtractionFalse ::= "{" ws "\"function\":" ws "\"reply_to_user\"," ws "\"message\":" ws string ws "}"
string ::= "\"" ([^"\\] | "\\" (["\\/bfnrt] | "u" [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F]) )* "\"" ws
boolean ::= "true" | "false"
ws ::= ""
My goal is to ensure that the AI's responses are structured in a JSON format where it can process both JSON and non-JSON messages and respond accordingly, with specific functions like 'reply\_to\_user' and 'rag\_extraction'. Using a boolean answer for CheckForDataExtraction should "force" the AI to use either DataExtractionTrue (a reply as a list of specific keywords) or DataExtractionFalse (a string reply). But the output looks random.
>Indeed, for example: if I write "Hello AI, how are you today?", its answer is {is\_json\_format: False}{check\_for\_data\_extraction:True}{reply\_to\_user:"hello I'm fine, and you?"}
>
>Where I expect the AI to return: {is\_json\_format: False}{check\_for\_data\_extraction**:****False**}{**reply\_to\_user**:"hello I'm fine, and you?"}
>
>And if the AI considers it needs extra data: {is\_json\_format: False}{check\_for\_data\_extraction:**True**}{rag\_extraction:\[list,of,keywords,the,ai,is,looking,for,rag\]}
I'm looking for guidance on how to effectively implement this grammar structure in the AI's response system. If anyone has experience with GBNF grammar files or has tackled similar challenges, I would greatly appreciate your insights or suggestions on how to approach this.
Thanks in advance for your help! | 2023-12-10T17:10:52 | https://www.reddit.com/r/LocalLLaMA/comments/18f861a/forcing_ai_to_follow_a_specific_answer_pattern/ | Toni_rider | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18f861a | false | null | t3_18f861a | /r/LocalLLaMA/comments/18f861a/forcing_ai_to_follow_a_specific_answer_pattern/ | false | false | self | 18 | null |
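Grammar aside, one quick way to quantify how often the model violates the intended logic is a downstream consistency check over the raw output. A hedged sketch (the string patterns assume the output formats shown in the examples above):

```python
import re

def consistent(response: str) -> bool:
    """Check that the extraction flag and the function actually used agree.

    Assumes the model emits a `check_for_data_extraction` flag followed by
    either a `rag_extraction` or a `reply_to_user` object, as in the post.
    """
    flag = re.search(r'check_for_data_extraction"?\s*:\s*(\w+)', response)
    if not flag:
        return False  # flag missing entirely -> structure violated
    wants_extraction = flag.group(1).lower() == "true"
    uses_rag = "rag_extraction" in response
    return wants_extraction == uses_rag

# The "random" output from the post: flag says True but it replies directly
bad = '{is_json_format: False}{check_for_data_extraction:True}{reply_to_user:"hello"}'
ok = '{is_json_format: False}{check_for_data_extraction:False}{reply_to_user:"hello"}'
```

Running a batch of prompts through a check like this makes it easier to tell whether a grammar tweak actually reduced the inconsistent-flag cases.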
Has anyone tried StripedHyena 7B chat from Together AI? | 1 | [removed] | 2023-12-10T16:25:50 | https://www.reddit.com/r/LocalLLaMA/comments/18f76h2/has_anyone_tried_striped_hyena_7b_chat_from_from/ | Conscious-Mixture-69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18f76h2 | false | null | t3_18f76h2 | /r/LocalLLaMA/comments/18f76h2/has_anyone_tried_striped_hyena_7b_chat_from_from/ | false | false | self | 1 | null |
Gpt4all can't discover starcoder-q4_0.gguf model | 2 | Hello,
I'm a beginner at this and don't know what the issue is; I hope you can help me with it.
I installed GPT4All on Windows and then downloaded this model, **starcoder-q4\_0.gguf**, to the location set in settings, but the app doesn't recognize the model.
What could be the issue, and how do I fix it?
Thanks
​ | 2023-12-10T16:24:32 | https://www.reddit.com/r/LocalLLaMA/comments/18f75gq/gpt4all_cant_discover_starcoderq4_0gguf_model/ | yassersharaf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18f75gq | false | null | t3_18f75gq | /r/LocalLLaMA/comments/18f75gq/gpt4all_cant_discover_starcoderq4_0gguf_model/ | false | false | self | 2 | null |
Got myself a 4way rtx 4090 rig for local LLM | 670 | 2023-12-10T16:07:13 | VectorD | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18f6sae | false | null | t3_18f6sae | /r/LocalLLaMA/comments/18f6sae/got_myself_a_4way_rtx_4090_rig_for_local_llm/ | false | false | 670 | {'enabled': True, 'images': [{'id': '2gxzG1dtyI2H2UhZMEvZ4bKoy2f3c-a5jEXt_XhpatM', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/abxkvuy1rh5c1.jpeg?width=108&crop=smart&auto=webp&s=b8e19e8c31690669dc5f20e3fe52e83d061b8484', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/abxkvuy1rh5c1.jpeg?width=216&crop=smart&auto=webp&s=f450cb1478ec84a33fe5c617eb810c2665435fb7', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/abxkvuy1rh5c1.jpeg?width=320&crop=smart&auto=webp&s=79271c55202d2aa869f94f95c07bf1394a958e86', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/abxkvuy1rh5c1.jpeg?width=640&crop=smart&auto=webp&s=f03f12e0ea319dadb379518a46d86389f989c0a3', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/abxkvuy1rh5c1.jpeg?width=960&crop=smart&auto=webp&s=bf329b5bf098b67226445817daa0d36be909978a', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/abxkvuy1rh5c1.jpeg?width=1080&crop=smart&auto=webp&s=2921238b7862f4ef22ff0a7128d788f14b30b174', 'width': 1080}], 'source': {'height': 4000, 'url': 'https://preview.redd.it/abxkvuy1rh5c1.jpeg?auto=webp&s=98d52fd584410c3533d77cdb8034e38383195468', 'width': 1868}, 'variants': {}}]} | |||
Failing to reproduce the same evaluation metric scores during inference. | 3 | Hi,
I have fine-tuned Llama-2-7B using QLoRA for multi-label classification on 6k context using RoPE scaling. After training the model for 2 epochs I get F1 macro 0.51 and micro 0.65 on the evaluation dataset. However, when I tried to reproduce these scores in the inference pipeline on the same evaluation dataset, I failed, getting F1 scores of 0.13 and 0.18 with essentially random predictions.
I have tried training with and without flash\_attention, and RoPE scaling but couldn't reproduce the
evaluation metrics. I also confirm that the 'score' head is saved by setting 'module\_to\_save=\[score\]'
I am aware that using [mixed precision reduces the stability](https://github.com/huggingface/transformers/issues/27502) of weight and there will be little consistency but don't expect it to be this much. I have attached the graph of evaluation metrics. If someone can give me some insight into this issue, that would be great. | 2023-12-10T15:28:03 | https://www.reddit.com/r/LocalLLaMA/comments/18f5z8z/fail_to_reproduce_the_same_evaluation_metrics/ | ali0100u | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18f5z8z | false | null | t3_18f5z8z | /r/LocalLLaMA/comments/18f5z8z/fail_to_reproduce_the_same_evaluation_metrics/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'Xzza6MaeNH9Rnq_891iBBVcMQM_HZSO6VjT5Q7fv8Bw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QlMGkXTXgf4R7c_uQEwyYWqH0BQqZvMc8CBDYt-HvYc.jpg?width=108&crop=smart&auto=webp&s=5ea645d5ddc9c2ebb7f53494285860739fcacaf2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QlMGkXTXgf4R7c_uQEwyYWqH0BQqZvMc8CBDYt-HvYc.jpg?width=216&crop=smart&auto=webp&s=b195830dfe2c813619b7da682c5c27ffed237a62', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QlMGkXTXgf4R7c_uQEwyYWqH0BQqZvMc8CBDYt-HvYc.jpg?width=320&crop=smart&auto=webp&s=2e065b7f6088c1b2311582036451c5b2a9bb5d7a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QlMGkXTXgf4R7c_uQEwyYWqH0BQqZvMc8CBDYt-HvYc.jpg?width=640&crop=smart&auto=webp&s=d3d575ce12c5f2f00e7220be03032a3bed9c839f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QlMGkXTXgf4R7c_uQEwyYWqH0BQqZvMc8CBDYt-HvYc.jpg?width=960&crop=smart&auto=webp&s=07c86c61b81166b4005fb44533c16292431c65bc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QlMGkXTXgf4R7c_uQEwyYWqH0BQqZvMc8CBDYt-HvYc.jpg?width=1080&crop=smart&auto=webp&s=67450b564275c7cc623350ecbd2223f33a159f8e', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/QlMGkXTXgf4R7c_uQEwyYWqH0BQqZvMc8CBDYt-HvYc.jpg?auto=webp&s=d63f76723767c28298d64ed7c98b07c2d362cb03', 'width': 1200}, 'variants': {}}]} |
Mixtral 7B MoE beats LLaMA2 70B in MMLU | 100 | 2023-12-10T15:15:37 | https://twitter.com/Francis_YAO_/status/1733686003687112983 | iDoAiStuffFr | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 18f5q44 | false | {'oembed': {'author_name': 'Yao Fu', 'author_url': 'https://twitter.com/Francis_YAO_', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Feel the power of MoE <a href="https://t.co/JgwKJNOXCC">pic.twitter.com/JgwKJNOXCC</a></p>— Yao Fu (@Francis_YAO_) <a href="https://twitter.com/Francis_YAO_/status/1733686003687112983?ref_src=twsrc%5Etfw">December 10, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Francis_YAO_/status/1733686003687112983', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_18f5q44 | /r/LocalLLaMA/comments/18f5q44/mixtral_7b_moe_beats_llama2_70b_in_mmlu/ | false | false | 100 | {'enabled': False, 'images': [{'id': 'I9sTJGsHBuWQv7xtm4bF3_5q9TfGhO4geGjazaC6FeE', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/47qG6rrCTpmkKn9claOEr7lg2Ashj4u6Z8XdpyK7ygI.jpg?width=108&crop=smart&auto=webp&s=759d95c744e29caa668db6a99fa4050fc0c7b930', 'width': 108}], 'source': {'height': 59, 'url': 'https://external-preview.redd.it/47qG6rrCTpmkKn9claOEr7lg2Ashj4u6Z8XdpyK7ygI.jpg?auto=webp&s=310e0d0b4c549baf26a0b406a239d3bf4ff0fb09', 'width': 140}, 'variants': {}}]} | ||
Mixtral 7B MoE beats LLaMA2 in MMLU | 1 | [deleted] | 2023-12-10T15:14:47 | https://twitter.com/Francis_YAO_/status/1733686003687112983 | iDoAiStuffFr | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 18f5pi3 | false | {'oembed': {'author_name': 'Yao Fu', 'author_url': 'https://twitter.com/Francis_YAO_', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Feel the power of MoE <a href="https://t.co/JgwKJNOXCC">pic.twitter.com/JgwKJNOXCC</a></p>— Yao Fu (@Francis_YAO_) <a href="https://twitter.com/Francis_YAO_/status/1733686003687112983?ref_src=twsrc%5Etfw">December 10, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Francis_YAO_/status/1733686003687112983', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_18f5pi3 | /r/LocalLLaMA/comments/18f5pi3/mixtral_7b_moe_beats_llama2_in_mmlu/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'I9sTJGsHBuWQv7xtm4bF3_5q9TfGhO4geGjazaC6FeE', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/47qG6rrCTpmkKn9claOEr7lg2Ashj4u6Z8XdpyK7ygI.jpg?width=108&crop=smart&auto=webp&s=759d95c744e29caa668db6a99fa4050fc0c7b930', 'width': 108}], 'source': {'height': 59, 'url': 'https://external-preview.redd.it/47qG6rrCTpmkKn9claOEr7lg2Ashj4u6Z8XdpyK7ygI.jpg?auto=webp&s=310e0d0b4c549baf26a0b406a239d3bf4ff0fb09', 'width': 140}, 'variants': {}}]} | |
Why are there no exllamav2 or GPTQ quants for llamafied Qwen 72B chat? | 1 | The details of the model seem pretty good, and with a context window of 32k as well as multi-language capabilities, this model could really be a go-to choice as a multi-tool that performs well on a broad range of tasks.
Usually it does not take much time for people like TheBloke or LoneStriker to drop quants, but in this case I've been looking for at least a week.
I mean, maybe Mixtral-8x7B will do it too... same context length and even smaller... but I really wanted to test Qwen via exllamav2.
Does anyone know more? Thx guys! | 2023-12-10T14:47:56 | https://www.reddit.com/r/LocalLLaMA/comments/18f55er/why_are_there_no_exllamav2_or_gptq_quatschen_for/ | Rutabaga-Agitated | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18f55er | false | null | t3_18f55er | /r/LocalLLaMA/comments/18f55er/why_are_there_no_exllamav2_or_gptq_quatschen_for/ | false | false | self | 1 | null |
Using LLaVA for content moderation | 1 | [removed] | 2023-12-10T14:05:54 | https://www.reddit.com/r/LocalLLaMA/comments/18f4br8/using_llava_for_content_moderation/ | adpq | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18f4br8 | false | null | t3_18f4br8 | /r/LocalLLaMA/comments/18f4br8/using_llava_for_content_moderation/ | false | false | self | 1 | null |
Are there any good papers or learning resources about system prompts? | 7 | To me it seems like people are just toying around with these randomly even though they are one of the biggest factors in output quality.
For example, this is the recommended system prompt for the excellent OpenHermes-2.5-Mistral-7B:
`"""<|im_start|>system`
`You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>`
It seems pretty silly to me. Has someone actually analyzed a bunch of these and tested output quality using standard benchmarks?
I should clarify, I'm looking for the best possible system prompt to emulate something like ChatGPT. I want it to be able to answer questions and give it tasks, expecting factually and syntactically best possible results.
​ | 2023-12-10T14:02:23 | https://www.reddit.com/r/LocalLLaMA/comments/18f49e2/are_there_any_good_papers_or_learning_resources/ | Dogeboja | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18f49e2 | false | null | t3_18f49e2 | /r/LocalLLaMA/comments/18f49e2/are_there_any_good_papers_or_learning_resources/ | false | false | self | 7 | null |
Mixtral finetune by fireworks.ai on Poe | 24 | 2023-12-10T13:56:32 | https://poe.com/Mixtral-8x7B-Chat | WaterdanceAC | poe.com | 1970-01-01T00:00:00 | 0 | {} | 18f4583 | false | null | t3_18f4583 | /r/LocalLLaMA/comments/18f4583/mixtral_finetune_by_fireworksai_on_poe/ | false | false | default | 24 | null | |
What are you guys using local LLMs for? | 88 | Just wondering what you guys are doing with these things; is it purely academic? I am a noob in this space trying to learn more.
My main interest in this space atm is with WebLLM, running LLMs on the browser so as to improve UI and maybe even eventually make LLM use free and accessible for everyone. What are you guys working on? | 2023-12-10T13:55:06 | https://www.reddit.com/r/LocalLLaMA/comments/18f448c/what_are_you_guys_using_local_llms_for/ | C12H16N | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18f448c | false | null | t3_18f448c | /r/LocalLLaMA/comments/18f448c/what_are_you_guys_using_local_llms_for/ | false | false | self | 88 | null |
help me build my local machine with a 4060ti | 1 | hi guys im building a pc mostly for running llms locally.
first i decided for a gpu since its the most important thing.
gpu: 4060ti 16 gb
whats missing or as to where i dont know what a good addition is is the following :
-motherboard: i prefer one where i could at at least add another card or 4. and where i could add some
ram
-cpu: which is a good addition that is not too pricey?
-harddrive: how much space would i need? is that even important?
sorry for all the questions but i would really appreciate your help :) | 2023-12-10T13:51:31 | https://www.reddit.com/r/LocalLLaMA/comments/18f41v7/help_me_build_my_local_machine_with_a_4060ti/ | thefunnyape | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18f41v7 | false | null | t3_18f41v7 | /r/LocalLLaMA/comments/18f41v7/help_me_build_my_local_machine_with_a_4060ti/ | false | false | self | 1 | null |
At this point we don’t necessarily need higher context windows. We need better truncation. | 58 | In a longer discussion, between a LLM and an average user, a 4k context window that manages to retain important details while truncating will be more effective than an 8k context window truncating the entire past discussion.
We need a system that determines what information is important enough to keep within context, and I suspect it should take priority over trying to just expand the context size. | 2023-12-10T13:30:52 | https://www.reddit.com/r/LocalLLaMA/comments/18f3o08/at_this_point_we_dont_necessarily_need_higher/ | ZABKA_TM | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18f3o08 | false | null | t3_18f3o08 | /r/LocalLLaMA/comments/18f3o08/at_this_point_we_dont_necessarily_need_higher/ | false | false | self | 58 | null |
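A minimal sketch of what such importance-aware truncation could look like, assuming you can score each message somehow (here a toy keyword scorer; a real system might use embeddings or an auxiliary LLM, and real token counts instead of word counts):

```python
def truncate_history(messages, budget, score):
    """Keep the most 'important' messages that fit in a token budget.

    messages: list of (role, text) tuples, oldest first
    budget:   rough token budget (approximated here by word count)
    score:    callable giving an importance score per (role, text) message
    Recent messages get a recency bonus so the tail of the chat survives.
    """
    ranked = sorted(
        enumerate(messages),
        key=lambda p: score(p[1]) + p[0] / len(messages),  # importance + recency
        reverse=True,
    )
    kept, used = [], 0
    for idx, msg in ranked:
        cost = len(msg[1].split())
        if used + cost <= budget:
            kept.append((idx, msg))
            used += cost
    kept.sort()  # restore chronological order
    return [m for _, m in kept]

history = [
    ("user", "my account id is 4471 please remember it"),
    ("assistant", "noted"),
    ("user", "tell me a long joke about llamas " * 5),
    ("user", "what was my account id"),
]
# Key detail (the account id) survives; the long low-value message is dropped
short = truncate_history(history, budget=20, score=lambda m: 2.0 if "account" in m[1] else 0.0)
```

With this policy, an old-but-important detail outlives a recent-but-verbose tangent, which is exactly the behavior plain sliding-window truncation can't give you.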
GitHub - Krisseck/php-rag: An AI assistant built with PHP, Solr and LLM backend of choice | 1 | 2023-12-10T13:30:12 | https://github.com/Krisseck/php-rag | Risse | github.com | 1970-01-01T00:00:00 | 0 | {} | 18f3nk1 | false | null | t3_18f3nk1 | /r/LocalLLaMA/comments/18f3nk1/github_krisseckphprag_an_ai_assistant_built_with/ | false | false | 1 | {'enabled': False, 'images': [{'id': '_YWuBjUV8avKlIIPZcGoerc0hK_PvddCRNYRslw3GMA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aVDcfzJ7_Oc-OEU7YPFGL9o31Ko69CIXa1pAC5r3uBQ.jpg?width=108&crop=smart&auto=webp&s=4b18a6c7f338e3f67ca2c5ff1fb1eb78ee986700', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aVDcfzJ7_Oc-OEU7YPFGL9o31Ko69CIXa1pAC5r3uBQ.jpg?width=216&crop=smart&auto=webp&s=6c99c18bdb468425209d2a47bbd3ec754eb0709e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aVDcfzJ7_Oc-OEU7YPFGL9o31Ko69CIXa1pAC5r3uBQ.jpg?width=320&crop=smart&auto=webp&s=03217a9a38b89293d0b9c86ea209dbbff15083d5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aVDcfzJ7_Oc-OEU7YPFGL9o31Ko69CIXa1pAC5r3uBQ.jpg?width=640&crop=smart&auto=webp&s=f173f283fd08677e61df7448e6bc23855ce59e4a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aVDcfzJ7_Oc-OEU7YPFGL9o31Ko69CIXa1pAC5r3uBQ.jpg?width=960&crop=smart&auto=webp&s=edbf3331a95119e6cb360a45a17121602cd37769', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aVDcfzJ7_Oc-OEU7YPFGL9o31Ko69CIXa1pAC5r3uBQ.jpg?width=1080&crop=smart&auto=webp&s=e45ec87d2d813ed90fa9d8cff7a89f102d2e769c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aVDcfzJ7_Oc-OEU7YPFGL9o31Ko69CIXa1pAC5r3uBQ.jpg?auto=webp&s=4696a90429b42d9fbd2324090ac549d5a1ead93a', 'width': 1200}, 'variants': {}}]} | ||
So now 7Bs are better than 70Bs? | 10 | Now that Mistral 7B and its finetunes are the go-to for most of us, I'm wondering if there is any 34B@4bit or 70B@2.4bpw that's actually better than those Mistral finetunes.
Are the available Mistral 7Bs really better than anything else now for 24GB VRAM?
I have a 4090 and I feel like I'm not pushing it to its best capabilities with a 7B model; however, I haven't found anything better for now. Do you have any suggestions?
i tried the lzlv 70b 2.4bpw
airoboros 70b 2.4bpw
nous capybara 34b
they are not even close to the OpenHermes-2.5-Mistral-7B.
(mostly used for roleplay and text summary)
The only issue I found with the above 7B models is that after 10K context they start to talk gibberish for no obvious reason; they just start generating random words.
Very open to any suggestion. | 2023-12-10T11:26:29 | https://www.reddit.com/r/LocalLLaMA/comments/18f1l7g/so_now_7b_are_better_then_70b/ | Lorian0x7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18f1l7g | false | null | t3_18f1l7g | /r/LocalLLaMA/comments/18f1l7g/so_now_7b_are_better_then_70b/ | false | false | self | 10 | null |
StripedHyena | 27 | https://www.together.ai/blog/stripedhyena-7b
Thoughts ?! | 2023-12-10T10:48:36 | https://www.reddit.com/r/LocalLLaMA/comments/18f10i6/strippedhyena/ | True_Giraffe_7712 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18f10i6 | false | null | t3_18f10i6 | /r/LocalLLaMA/comments/18f10i6/strippedhyena/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'BZO-ySPJOe3ai2VIr8zBFqqikgdLZOfIKThn_0kEVcg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/L3yyMI1izgGt1JJcrpcieg6JFTv2ruZYrlytjw1rUrM.jpg?width=108&crop=smart&auto=webp&s=300e405f9097d5f047e47ae58121c9408e1299f3', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/L3yyMI1izgGt1JJcrpcieg6JFTv2ruZYrlytjw1rUrM.jpg?width=216&crop=smart&auto=webp&s=bd97a11d5d8d8040ecd6b71dbf269ef8cb6fad3f', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/L3yyMI1izgGt1JJcrpcieg6JFTv2ruZYrlytjw1rUrM.jpg?width=320&crop=smart&auto=webp&s=4f93927d0847d2fef95e06da766525fadaa50da3', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/L3yyMI1izgGt1JJcrpcieg6JFTv2ruZYrlytjw1rUrM.jpg?width=640&crop=smart&auto=webp&s=6dbb5b34ba1f60881bf2252310b857339beeedb8', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/L3yyMI1izgGt1JJcrpcieg6JFTv2ruZYrlytjw1rUrM.jpg?width=960&crop=smart&auto=webp&s=1fc9161d72a08266160ea7f5f3dfac742da0330c', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/L3yyMI1izgGt1JJcrpcieg6JFTv2ruZYrlytjw1rUrM.jpg?width=1080&crop=smart&auto=webp&s=6eaf842e2c6babb7a6c9db9ddb2b5960c9d180c6', 'width': 1080}], 'source': {'height': 2048, 'url': 'https://external-preview.redd.it/L3yyMI1izgGt1JJcrpcieg6JFTv2ruZYrlytjw1rUrM.jpg?auto=webp&s=4fc45acbcb851ee3973f171ce4a2f679ae17fd85', 'width': 2048}, 'variants': {}}]} |
PSA: new ExLlamaV2 quant method makes 70Bs perform much better at low bpw quants | 162 | If you have a single 3090 or 4090, chances are you have tried to run a 2.4b-2.65bpw quant of 70B models only to be disappointed by how unstable they tend to be due to their high perplexity.
Good news: Turbo, the author of ExLlamaV2, has made a new quant method that decreases the perplexity of low bpw quants, improving performance and making them much more stable. In terms of perplexity, there is a significant [improvement](https://huggingface.co/DiscoResearch/DiscoLM-120b/discussions/2#6574d8389fe27c093d75b8ec) over the previous method. I was skeptical at first, but based on my limited testing so far I could hardly tell the difference between a Q5\_K\_S gguf of Aetheria L2 70B and a 2.4bpw exl2. The latter is much faster since it fits completely in my 24GB VRAM while taking up about half the storage space.
LoneStriker has started uploading a few 70B exl2 quants using this new quant method to Hugging Face if you want to try it out for yourself. I recommend Aetheria which is my current favorite model for roleplaying (not named Goliath).
\- [LoneStriker/Aetheria-L2-70B-2.4bpw-h6-exl2-2](https://huggingface.co/LoneStriker/Aetheria-L2-70B-2.4bpw-h6-exl2-2)
\- [LoneStriker/airoboros-l2-70b-gpt4-1.4.1-2.4bpw-h6-exl2-2](https://huggingface.co/LoneStriker/airoboros-l2-70b-gpt4-1.4.1-2.4bpw-h6-exl2-2) | 2023-12-10T07:42:21 | https://www.reddit.com/r/LocalLLaMA/comments/18eyf39/psa_new_exllamav2_quant_method_makes_70bs_perform/ | brobruh211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18eyf39 | false | null | t3_18eyf39 | /r/LocalLLaMA/comments/18eyf39/psa_new_exllamav2_quant_method_makes_70bs_perform/ | false | false | self | 162 | {'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=108&crop=smart&auto=webp&s=4bc231a80d79babe4e6cddf7b4c71dcb0aa8f8ff', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=216&crop=smart&auto=webp&s=d7108244b7182d85047aa59446f1dfb68542b610', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=320&crop=smart&auto=webp&s=d34fa1a756c458772d3c8680309a93cf8d758b40', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=640&crop=smart&auto=webp&s=5b03e18da2698977cf1222f0c9e54ccb6177ffc4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=960&crop=smart&auto=webp&s=3d875ff29aae8239d010f3b964e5a2f3ebe32e3d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=1080&crop=smart&auto=webp&s=b51090c30528b6b8c637acb54d7fc0f6a5249cf5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?auto=webp&s=ee70e402c0f8274b46f38378bada81dbeb5b1dac', 'width': 1200}, 'variants': {}}]} |
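A back-of-the-envelope check on why 2.4bpw fits in 24GB: weight memory is roughly parameters × bits-per-weight / 8, ignoring KV cache, activations, and quantization overhead, so treat the result as a lower bound:

```python
def weight_gb(params_billion: float, bpw: float) -> float:
    """Approximate weight memory in GB for a quantized model.

    Ignores KV cache, activations, and quant overhead: a lower bound only.
    """
    return params_billion * 1e9 * bpw / 8 / 1e9

seventy_b_at_2_4 = weight_gb(70, 2.4)  # ~21 GB: squeezes into a 24 GB card
seventy_b_at_5_5 = weight_gb(70, 5.5)  # ~48 GB: needs multiple GPUs
```

The remaining ~3 GB on a 24 GB card then has to cover context, which is why these low-bpw quants usually run with modest context lengths.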
PSA: new ExLlamaV2 quant method makes 70Bs perform much better on 24GB VRAM | 1 | [removed] | 2023-12-10T07:17:39 | https://www.reddit.com/r/LocalLLaMA/comments/18ey2ox/psa_new_exllamav2_quant_method_makes_70bs_perform/ | MyOrangeApples | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ey2ox | false | null | t3_18ey2ox | /r/LocalLLaMA/comments/18ey2ox/psa_new_exllamav2_quant_method_makes_70bs_perform/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=108&crop=smart&auto=webp&s=4bc231a80d79babe4e6cddf7b4c71dcb0aa8f8ff', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=216&crop=smart&auto=webp&s=d7108244b7182d85047aa59446f1dfb68542b610', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=320&crop=smart&auto=webp&s=d34fa1a756c458772d3c8680309a93cf8d758b40', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=640&crop=smart&auto=webp&s=5b03e18da2698977cf1222f0c9e54ccb6177ffc4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=960&crop=smart&auto=webp&s=3d875ff29aae8239d010f3b964e5a2f3ebe32e3d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?width=1080&crop=smart&auto=webp&s=b51090c30528b6b8c637acb54d7fc0f6a5249cf5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OFWu5qxCY4R2hQfI_vzWYgK2ON5meupO-ZR0eRPmsP8.jpg?auto=webp&s=ee70e402c0f8274b46f38378bada81dbeb5b1dac', 'width': 1200}, 'variants': {}}]} |
Is there a glossary of LLM terms? | 13 | Below are some of the terms I have come across while looking at LLM projects and blogs. I am sure there are more. Is there a glossary of these terms or maybe you can list more that I have missed:
Rank
Alpha
Beam search
top-p
top-k
ggml
gguf
lora
gptq | 2023-12-10T06:54:43 | https://www.reddit.com/r/LocalLLaMA/comments/18exqgv/is_there_a_glossary_of_llm_terms/ | soham1996 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18exqgv | false | null | t3_18exqgv | /r/LocalLLaMA/comments/18exqgv/is_there_a_glossary_of_llm_terms/ | false | false | self | 13 | null |
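No single glossary covers all of these, but some terms are easy to pin down in code. For instance, top-k and top-p are both filters on the next-token distribution; a simplified plain-Python sketch (real implementations operate on logits with temperature, not a probability dict):

```python
def top_k_top_p(probs, k=None, p=None):
    """Filter a token probability distribution.

    probs: dict token -> probability (assumed to sum to 1)
    k:     keep only the k most probable tokens (top-k)
    p:     keep the smallest set whose cumulative probability >= p (top-p / nucleus)
    Returns the surviving tokens, renormalized to sum to 1.
    """
    items = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if k is not None:
        items = items[:k]
    if p is not None:
        kept, total = [], 0.0
        for tok, pr in items:
            kept.append((tok, pr))
            total += pr
            if total >= p:
                break
        items = kept
    norm = sum(pr for _, pr in items)
    return {tok: pr / norm for tok, pr in items}

dist = {"the": 0.5, "a": 0.3, "llama": 0.15, "zzz": 0.05}
nucleus = top_k_top_p(dist, p=0.9)  # keeps "the", "a", "llama"
topk = top_k_top_p(dist, k=2)       # keeps "the", "a"
```

Beam search, by contrast, is not a filter but a decoding strategy that keeps several candidate continuations alive at once and picks the highest-scoring full sequence.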
LLM4Rec through RAG? | 5 | I recently came across [LLM4Rec-Awesome](https://github.com/WLiK/LLM4Rec-Awesome-Papers) and felt inspired to build something small based on some of those papers-- essentially, it will be a RAG-powered recommendation engine for games (and eventually much more).
The approach focuses heavily on input standardization-- allowing the system to adapt to various types of users and ensuring *mostly* consistent & relevant results.
Curious to hear some thoughts/criticism on this! Here is a rough breakdown:
## Data Curation
My efforts will center on curating a dataset rich in metadata, encompassing themes, settings, gameplay dynamics, genre, and visual styles.
## Iterative Processing
I'll be utilizing a local LLM (like a neural-7b model) for dataset processing. Each game item will be annotated focusing on themes, setting, mood, genre, character dynamics, and stylistic features. For example:
```log
System-prompt: "Annotate the following game by providing brief, descriptive statements for each of the following categories: themes, setting, mood, genre, character dynamics, and stylistic features. Your annotations should be concise, informative, and reflective of the game's core aspects.
Example:
Game to Annotate: 'Cyber Rebellion 2120'
Game Metadata: []
- Theme: Rebellion against a technologically advanced totalitarian regime, exploring cybernetics and AI ethics.
- Setting: Dystopian cityscape in 2120, neon-lit urban environment with advanced digital infrastructure.
- Mood: Tense and dark, highlighting the struggles within a high-tech oppressive society.
- Genre: Cyberpunk action-adventure with RPG elements.
- Character Dynamics: The protagonist's evolution from a hacker to a key resistance figure, interactions with allies and enemies.
- Stylistic Features: Cyberpunk aesthetics combining futuristic technology with gritty urban settings, interactive narrative shaping the story's progression."
```
## User Prompt Pre-Processing
A key component of my system is the pre-processing of user prompts, which aims to accurately match gamers' queries with our annotated content. This involves parsing user inputs to identify key features that resonate with our game annotation categories. For example:
```log
"Analyze the user's input and extract key features that align with our game annotation categories: themes, setting, mood, genre, character dynamics, and stylistic features. Summarize these features to reflect the user's interests, assisting in accurately matching them with suitable games from our database.
Example:
User Input: 'I'm looking for an open-world RPG game focusing on space exploration and intergalactic conflicts.'
- Theme: Space exploration, intergalactic warfare, and adventure.
- Setting: Vast open-world space environments and multiple galaxies.
- Mood: Exciting and adventurous, with elements of conflict and discovery.
- Genre: Open-world RPG set in space.
- Character Dynamics: Interaction with various alien species, building alliances or rivalries.
- Stylistic Features: Rich space visuals, focus on exploration and space combat mechanics."
```
This process is crucial for translating broad, convoluted, or vague interests into quantifiable features that can be matched against our game database, leading to more relevant and resonant game recommendations.
## Instruct Fine-Tune?
Exploring whether fine-tuning the LLM on these specific instructions would yield more consistent results during the prompt (pre-)processing phases might be interesting.
## Embedding Generation
Transforming both game annotations and user-extracted features into sentence embeddings for more effective similarity retrieval.
## Retrieval-Augmented Generation
Performing a cosine similarity search between these detailed embeddings and our game database to pinpoint the best-matched titles – those with the most feature matches.
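A minimal NumPy sketch of that search, assuming the game and query embeddings have already been computed by whichever sentence-embedding model is chosen. The function name and the 0.85 default (echoing the ~85% cutoff floated in this post) are my own placeholder choices:

```python
import numpy as np

def top_matches(query_vec, game_vecs, game_titles, threshold=0.85, k=5):
    """Rank games by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    g = game_vecs / np.linalg.norm(game_vecs, axis=1, keepdims=True)
    sims = g @ q                        # cosine similarity per game
    order = np.argsort(sims)[::-1][:k]  # best-first, top-k
    return [(game_titles[i], float(sims[i]))
            for i in order if sims[i] >= threshold]
```

In practice `game_vecs` would likely live in a vector store rather than a plain array, but the ranking and thresholding logic stays the same.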
## Generating Recommendations
Compiling a list of game recommendations with titles, brief descriptions, and similarity scores reflecting the feature matches. I'll need to establish a threshold to determine what's a match and what isn't (something like 85% similarity? idk). | 2023-12-10T05:36:22 | https://www.reddit.com/r/LocalLLaMA/comments/18ewk5b/llms4rec_through_rag/ | LyPreto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ewk5b | false | null | t3_18ewk5b | /r/LocalLLaMA/comments/18ewk5b/llms4rec_through_rag/ | false | false | self | 5 |
Need urgent help: Llama-2-7b-chat llama.cpp finetuning | 1 | I used this command:
llama.cpp/finetune \
  --model-base $model_path \
  --train-data $datadir \
  --lora-out $new_model_path \
  --save-every 0 \
  --threads 40 \
  --ctx 4096 \
  --rope-freq-base 10000 \
  --rope-freq-scale 1.0 \
  --batch 1 \
  --grad-acc 1 \
  --adam-iter 3099 \
  --adam-alpha 0.0003 \
  --lora-r 4 \
  --lora-alpha 4 \
  --use-checkpointing \
  --use-flash \
  --sample-start "\n" \
  --escape \
  --include-sample-start \
  --seed 1
But in my logs it shows:
llm_load_tensors: VRAM used: 0.00 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size = 256.00 MiB
llama_build_graph: non-view tensors processed: 676/676
llama_new_context_with_model: compute buffer total size = 73.57 MiB
llama_new_context_with_model: VRAM scratch buffer: 70.50 MiB
llama_new_context_with_model: total VRAM used: 70.50 MiB (model: 0.00 MiB, context: 70.50 MiB)
I wanted to know: why is n_ctx 512 and not 4096 like I passed? | 2023-12-10T04:53:00 | https://www.reddit.com/r/LocalLLaMA/comments/18evutu/need_urgent_help_lllama27bchat_llamacpp_finetuning/ | Key_Arugula_721 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18evutu | false | null | t3_18evutu | /r/LocalLLaMA/comments/18evutu/need_urgent_help_lllama27bchat_llamacpp_finetuning/ | false | false | self | 1 | null |
shining the spotlight on CogVLM | 31 | I was casually sifting through the [llama.cpp discussions](https://github.com/ggerganov/llama.cpp/discussions/categories/general) when I found a particularly interesting [conversation](https://github.com/ggerganov/llama.cpp/discussions/4350).
--------
**TL;DR (if you don't want to sift through the conversation):**
A core llama.cpp contributor named [cmp-nct](https://github.com/cmp-nct) stumbled upon what might be the next leap forward for vision/language models. [CogVLM](https://github.com/THUDM/CogVLM) (which uses a Vicuna 7B language model combined with a 9B vision tower) excels particularly in OCR (Optical Character Recognition), detail detection, and minimal hallucinations. It effectively understands both handwritten and typed text, context, fine details, and background graphics. It even provides pixel coordinates for small visual targets. CogVLM surpasses other models like llava-1.5 and Qwen-VL in performance.
--------
**More Details:**
The model is not simple to implement, needing K-type quantization support and an additional expert model. Despite being similar to llava, it's more complex and seems to be on par with OpenAI's GPT4-Vision, offering precise OCR and image detection abilities. The user shared examples of the model's output, showing its capability to accurately extract and represent information from an image, like a driver's license, in JSON format.
CogVLM's recent updates include 4-bit quantization support, allowing it to run efficiently on 11GB of VRAM. This demonstrates that the model handles quantization well, a significant step for AI image-processing models on consumer-grade hardware.
[CogVLM GitHub](https://github.com/THUDM/CogVLM) | [Web Demo](http://36.103.203.44:7861/) | [Paper](https://github.com/THUDM/CogVLM/blob/main/assets/cogvlm-paper.pdf)
--------
Personally, I can see this research eventually maturing to the point where we have "detachable" models/experts that can be switched out kind of like LoRAs--so instead of using Vicuna 7B we could opt for, say, Qwen-72B. | 2023-12-10T04:50:36 | https://www.reddit.com/r/LocalLLaMA/comments/18evtgp/shinning_the_spotlight_on_cogvlm/ | LyPreto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18evtgp | false | null | t3_18evtgp | /r/LocalLLaMA/comments/18evtgp/shinning_the_spotlight_on_cogvlm/ | false | false | self | 31 |
Anyone playing around with the Starling 7B fine-tune model or dataset? | 9 | It seems (based on Chatbot Arena Elo and MT-Bench scores) like an intriguing (if stodgy) fine-tune and dataset. Having seven AI-scored answers to over 100K prompts in the dataset seems like it would give the model a pretty clear idea of what GPT-4 thinks is the best answer to a wide variety of prompts. Have any experimenters played around with the model, or have ideas on how the dataset could be expanded/improved?
[https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
[https://huggingface.co/TheBloke/Starling-LM-7B-alpha-GGUF](https://huggingface.co/TheBloke/Starling-LM-7B-alpha-GGUF) | 2023-12-10T04:29:39 | https://www.reddit.com/r/LocalLLaMA/comments/18evhb2/anyone_playing_around_with_the_starling_7b_fine/ | WaterdanceAC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18evhb2 | false | null | t3_18evhb2 | /r/LocalLLaMA/comments/18evhb2/anyone_playing_around_with_the_starling_7b_fine/ | false | false | self | 9 |
Do you run your LLM on your hardware or in a VM? | 4 | Is it always faster on bare hardware rather than in a VM? And doesn't running it in a VM keep it from clogging up the OS? Any suggestions? | 2023-12-10T04:05:42 | https://www.reddit.com/r/LocalLLaMA/comments/18ev34i/do_you_run_your_llm_on_your_hardware_or_in_a_vm/ | DuckFormer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ev34i | false | null | t3_18ev34i | /r/LocalLLaMA/comments/18ev34i/do_you_run_your_llm_on_your_hardware_or_in_a_vm/ | false | false | self | 4 | null |
SuperDuperDB - how to use it to talk to your documents locally using llama 7B or Mistral 7B? | 1 | https://github.com/SuperDuperDB/superduperdb
This was just released a few days ago and is trending on GitHub in the LLM/AI category.
I'd like to ask if anyone has tried using it to perform inference on their documents, say 10 pages of text. How did it go? I'm planning to use Llama 2 7B or Mistral 7B for it. | 2023-12-10T03:47:58 | https://www.reddit.com/r/LocalLLaMA/comments/18eusc8/superduperdb_how_to_use_it_to_talk_to_your/ | vlodia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18eusc8 | false | null | t3_18eusc8 | /r/LocalLLaMA/comments/18eusc8/superduperdb_how_to_use_it_to_talk_to_your/ | false | false | self | 1 |
A fun experiment running concurrent inference in multiple MacBook Pro's to evaluate model and hardware performance. M2 Max 64GB (34B model) vs M3 Max 128GB (67B model) | 12 | I setup a feature to run concurrent inference workflows against multiple hosts. The UI only supports two for now but I will be scaling this as the project matures. This is just a fun experiment to evaluate the responses from different models in real time. I am actually surprised at how the M3 running a model twice the size keeps up with the M2. Let's ask the big questions.
​
​
https://reddit.com/link/18eu5f1/video/sm4clj08wd5c1/player | 2023-12-10T03:09:48 | https://www.reddit.com/r/LocalLLaMA/comments/18eu5f1/a_fun_experiment_running_concurrent_inference_in/ | LocoMod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18eu5f1 | false | null | t3_18eu5f1 | /r/LocalLLaMA/comments/18eu5f1/a_fun_experiment_running_concurrent_inference_in/ | false | false | self | 12 | null |
Inference and Evaluation of Mistral AI's MoE model (Mixtral-8x7b-32kseqlen) | 31 | - Inference and evaluation code for Mistral's MoE model (Mixtral-8x7b-32kseqlen):
[https://github.com/open-compass/MixtralKit](https://github.com/open-compass/MixtralKit)

- Evaluation performance will be updated soon. | 2023-12-10T03:03:28 | https://www.reddit.com/r/LocalLLaMA/comments/18eu1kb/inference_and_evaluation_of_mistral_ais_moe/ | No_Competition_7372 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18eu1kb | false | null | t3_18eu1kb | /r/LocalLLaMA/comments/18eu1kb/inference_and_evaluation_of_mistral_ais_moe/ | false | false | self | 31 |
Why are the official chat models for Llama, Yi, and Qwen worse than their bases in benchmarks, while some fine-tunes are better? Is Meta dumb? | 3 | Is the alignment tax a huge lie? | 2023-12-10T02:59:49 | https://www.reddit.com/r/LocalLLaMA/comments/18etz15/why_llama_yi_qwen_their_official_chat_worse_than/ | No-Link-2778 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18etz15 | false | null | t3_18etz15 | /r/LocalLLaMA/comments/18etz15/why_llama_yi_qwen_their_official_chat_worse_than/ | false | false | self | 3 | null |
Question about models from newbie | 3 | I was using dolphin-2.2.1-mistral-7b.Q5_K_S.gguf and it was not really bad, but I wanted better. I have tried a couple of 13B models like Wizard-Vicuna-13B-Uncensored.Q5_K_S.gguf and amethyst-13b-mistral.Q3_K_L.gguf, and the results are honestly just worse: worse quality, longer generation. I was thinking about upgrading my GPU (from a 3050 8GB to a 4060 Ti 16GB), but seeing that larger models can't outperform my 7B one, I just don't know how to justify the upgrade. How does this work, and is it worth upgrading? Any advice on what I can run, and with which settings, at normal speed (2.5+ tokens/s at high context) with the following specs:
RTX 3050 8GB
R 5 5600
32GB RAM | 2023-12-10T02:22:07 | https://www.reddit.com/r/LocalLLaMA/comments/18etbqi/question_about_models_from_newbie/ | Deep-Yoghurt878 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18etbqi | false | null | t3_18etbqi | /r/LocalLLaMA/comments/18etbqi/question_about_models_from_newbie/ | false | false | self | 3 | null |
Good models to run on a GTX 1660 with Ryzen 2950 Threadripper? | 1 | By “good” I mean practical, like you can ask it to do stuff and it gives useful answers instead of hot 🗑 | 2023-12-10T02:19:54 | https://www.reddit.com/r/LocalLLaMA/comments/18eta8x/good_models_to_run_on_a_gtx_1660_with_ryzen_2950/ | LeastWest9991 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18eta8x | false | null | t3_18eta8x | /r/LocalLLaMA/comments/18eta8x/good_models_to_run_on_a_gtx_1660_with_ryzen_2950/ | false | false | self | 1 | null |
Lately, every day there has been a new 7B finetune of Mistral that has beaten the last by a noticeable amount on the Open LLM Leaderboard. The top finetune went from 'Starling' to 'NeuralHermes-2.5' to 'OpenHermes-2.5-neural-chat' to 'DPOpenHermes' to 'MetaMath-Cybertron-Starling', and now 'go-bruins', which has a near-equal average score to the best-performing open-source base model, Qwen-72b. go-bruins basically ties or beats Qwen in every benchmark except for MMLU, despite being a 7B model. I know that many organizations are focusing on smaller models right now for cost and time efficiency, but they can't be making this much progress, can they? Are all of the models listed above just cheating at the benchmarks? If so, why are the models not able to get good MMLU scores? You can also see on the leaderboard that the obviously contaminated tigerbot model is getting benchmarks of 98, but has a low MMLU.

So yeah, are any of the top 7b models right now up there because they're actually good models? And why is the effectiveness of finetuning so different depending on the benchmark?
Best finetune of each benchmark compared to Mistral base:
ARC: +10, Hellaswag: +3, MMLU: +1, TruthfulQA: +23, Winogrande: +3, GSM8K: +34. | 2023-12-10T02:15:39 | https://www.reddit.com/r/LocalLLaMA/comments/18et7h9/what_is_going_on_with_mistral_7b_finetunes_equal/ | DontPlanToEnd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18et7h9 | false | null | t3_18et7h9 | /r/LocalLLaMA/comments/18et7h9/what_is_going_on_with_mistral_7b_finetunes_equal/ | false | false | self | 30 | null |
RAG example with llama.cpp, LiteLLM and Mamba Chat | 26 | 2023-12-10T02:07:39 | https://neuml.hashnode.dev/integrate-llm-frameworks | davidmezzetti | neuml.hashnode.dev | 1970-01-01T00:00:00 | 0 | {} | 18et2be | false | null | t3_18et2be | /r/LocalLLaMA/comments/18et2be/rag_example_with_llamacpp_litellm_and_mamba_chat/ | false | false | 26 | {'enabled': False, 'images': [{'id': '7M0dwTYeAupbhUnTCO3MtN3YeIjFrAZaTyazieQoUJg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/KKgwERriw_FiuVuLRPiiIiItfIHuz7dUrQy9AV0JhZY.jpg?width=108&crop=smart&auto=webp&s=8332196eac0d18da2c083b3a8ef1de2a1a62bfbe', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/KKgwERriw_FiuVuLRPiiIiItfIHuz7dUrQy9AV0JhZY.jpg?width=216&crop=smart&auto=webp&s=0d9c7ca4229f46ff3567686a02d47ba311b267a8', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/KKgwERriw_FiuVuLRPiiIiItfIHuz7dUrQy9AV0JhZY.jpg?width=320&crop=smart&auto=webp&s=430e574b14bcb9717180765565150b69c326f570', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/KKgwERriw_FiuVuLRPiiIiItfIHuz7dUrQy9AV0JhZY.jpg?width=640&crop=smart&auto=webp&s=48fdc0b25083215dcfd7e45615af38c438a5dfc9', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/KKgwERriw_FiuVuLRPiiIiItfIHuz7dUrQy9AV0JhZY.jpg?width=960&crop=smart&auto=webp&s=bed37be730168c13b6d837001a6f184e14c6c820', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/KKgwERriw_FiuVuLRPiiIiItfIHuz7dUrQy9AV0JhZY.jpg?width=1080&crop=smart&auto=webp&s=bed4fb29438c7cc65071e1d4ba7333393e59266e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/KKgwERriw_FiuVuLRPiiIiItfIHuz7dUrQy9AV0JhZY.jpg?auto=webp&s=e5f60e766df4934e9c0c0caca37a110769a46146', 'width': 1200}, 'variants': {}}]} | ||
What is your motive for running open-source models, instead of just using a ready-made solution like GPT-4? | 98 | Is it curiosity? A desire for privacy? Greater customizability and control?
I am new to this space and curious about the reasons to use open-source LLMs. | 2023-12-10T01:05:42 | https://www.reddit.com/r/LocalLLaMA/comments/18erxfd/what_is_your_motive_for_running_opensource_models/ | LeastWest9991 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18erxfd | false | null | t3_18erxfd | /r/LocalLLaMA/comments/18erxfd/what_is_your_motive_for_running_opensource_models/ | false | false | self | 98 | null |
I have a 3080 and just bought a 3090 - is it worth me keeping both? | 2 | So I've been running LLMs on my 10GB 3080 for a while, but I keep hitting memory limits, so I thought it was finally time to bite the bullet and get a 3090; I got a good deal on a used one on eBay.
I was planning on selling the 3080 to cover some of that cost, but now I'm wondering is there anything I can do with 2 GPUs? Can I use both to run even larger models split across the two? Anything else? | 2023-12-10T00:58:19 | https://www.reddit.com/r/LocalLLaMA/comments/18ersaj/i_have_a_3080_and_just_bought_a_3090_is_it_worth/ | CrasHthe2nd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ersaj | false | null | t3_18ersaj | /r/LocalLLaMA/comments/18ersaj/i_have_a_3080_and_just_bought_a_3090_is_it_worth/ | false | false | self | 2 | null |
Finetunes and datasets to add onto mixture-of-expert models? | 2 | Mixture or collection of experts, whatever you want to call it.
This might be a challenge to make universal datasets to train and finetune with since the number of experts might vary, and their specialties might vary too.
Anyone who wants to have a universal dataset to throw on every multi-expert model might have to come up with some system to compartmentalize their datasets into 30+ areas so people can assign data to specific experts.
I don't know anything about training, finetuning, or sorting data for these models. I'm just trying to get a topic started so everyone here can come up with the best method for adding to these.
| 2023-12-10T00:41:56 | https://www.reddit.com/r/LocalLLaMA/comments/18erh83/finetunes_and_datasets_to_add_onto/ | -Automaticity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18erh83 | false | null | t3_18erh83 | /r/LocalLLaMA/comments/18erh83/finetunes_and_datasets_to_add_onto/ | false | false | self | 2 | null |
Aurelian: 70B 32K story-writing (and more) [Alpha] | 42 | A while ago, I posted about [ways of training 70B 32K LORAs on Llama-2](https://www.reddit.com/r/LocalLLaMA/comments/16euhw5/training_long_context_32k_70b_llama/), and some people seemed interested in an actual 32K 70B fine-tune.
**This is an alpha release for testing & feedback, there are known issues** (see known issues below). I am already training the next version, but due to the long training times, I'd appreciate any feedback in the interim period.
## Available Quantizations:
* [Aurelian alpha v0.1 70B 32K - bfloat16](https://huggingface.co/grimulkan/aurelian-alpha0.1-70b-rope8-32K-fp16) (upload in progress)
* [Aurelian alpha v0.1 70B 32K - EXL2 4bit](https://huggingface.co/grimulkan/aurelian-alpha0.1-70b-rope8-32K-4.65bpw_h6_exl2) can run with 16K context in 2x24GB using Exllamav2
* [Aurelian alpha v0.1 70B 32K - EXL2 6bit](https://huggingface.co/grimulkan/aurelian-alpha0.1-70b-rope8-32K-6bpw_h8_exl2) can run with the full 32K context in 3x24GB using Exllamav2
The EXL2s were done with ROPE scaling 8 on a subset of the training dataset. Hopefully someone can make some GGUFs or other formats.
## Notes:
* Llama-2 chat format, system prompt: `An interaction between a user providing instructions, and an imaginative assistant providing responses.`
* 32K context length, use **Linear Rope Scaling = 8** (use a factor of 8 even if you are not using the full 32K context length)
* **This model is not censored, and is capable of producing offensive and NSFW content. Please use this model with caution, and do not use if you are offended by such content.**
## Main functions:
* **Story Co-writing**: Co-write a story with guided prompts over a 32K context, staying consistent with prior story details, capable of writing both long and short scenes. Start by explaining that you are writing a story scene-by-scene, provide some background, themes/tags, and describe what you want in the first scene.
* **Brainstorming/Speculation/Analysis**: Pause writing in the midst of co-writing a story, and analyze the story so far, bounce ideas about future directions, etc.
* **Oneshot Story-writing**: Write a complete story in one go, based on an outline, themes/tags, etc. Make sure you explain that this is not a scene-by-scene writing, and is meant to be written in a single go. You can specify a word-count to shoot for (though the model may not respect it).
* **Document Search/Analysis**: Reading comprehension & finding information from a long document, or sets of documents (up to 32K tokens)
## Secondary Functions (not yet fully trained, but can be tested)
Limited training so far, model may get confused between task types + see known issues.
* **Roleplaying (RP)**: Explain what RP is, setup a scenario and characters, and start the RP. You can specify any rules like OOC or use of emojis, etc.
* **Interactive Fiction (IF) Emulation**: Adventure game/interactive fiction emulation like Zork, Anchorhead, etc. Explain what it is, and how the AI should respond, specify the kind of game, tags, and so on. You can interact with usual commands like 'north', 'examine book', etc.
* **Choose Your Own Adventure (CYOA) Emulation**: Explain what you're looking for and how you want the AI to respond (e.g., with a numbered list of prompts at the end of each turn), and you can pick which direction you want the story/game to go. Most such human-written games tend to have 1-2 prompts, so I had a hard time getting the AI to give more options. Finetuning is helping, but the model is now only half-baked.
* **Document Summary/Editing**: Brief or comprehensive summaries of a long document, or sets of documents, in various formats (prose, bulleted list, table). Can also do some limited re-writing, conversions between formats and grammar checking.
* **General Chatting**: Explain that it is a general chat, or provide some preamble to your interaction before starting. Otherwise the model might not know if you want to RP, story-write or something else.
* **General Logic/Reasoning**: Same guidelines as above.
## Prompting Guidelines
* Treat the first prompt like you normally would the system prompt
* Describe what you want the AI to do in detail in the first prompt, even if you feel it is obvious. This is how the AI can tell what sort of task it is supposed to perform (story-writing, RP, adventure game emulation, summarization, and so on).
* After that, specify anything else you want in the first prompt (your instructions for the next response, for instance).
* Bias the length of the output with your prompt. This is no guarantee, so you may need to regenerate if you don't get your preferred length. The model will easily produce 2000+ tokens (egs., for a story scene), so make sure your response limit can handle that.
* E.g., statements like `Make this a long response` would bias the response longer
* Statements like `Respond briefly` would bias it shorter
* Explain clearly if you want the content to be SFW or NSFW in the first prompt. However, **there are no guarantees that the model won't generate NSFW content** if you force it to, in a later prompt, even if you specify the content should be SFW at the start. It's just a statistical bias (that should get better with more training).
* Give the model details to go on. The more you flesh out what you want, the better and more consistently it will write. Tiny prompt = Tiny story, and more ChatGPTisms.
## Known Issues
* **Blank Outputs**: When you have many short prompts, sometimes the model just produces the EOS token. Especially with RP and adventure game emulation. I believe that this is due to [this issue](https://huggingface.co/jondurbin/airoboros-l2-70b-3.1.2/discussions/2). Fixing in next iteration, but meanwhile, workarounds:
* Use `Start reply with` in Oobabooga to force the first token to be something other than `</s>`
* Ban the EOS token (though you need to stop the generation manually in that case)
* Strip the space after the final `[/INST]`, though I don't know of an easy way to do that without writing code in Oobabooga
* Ban the EOS token only for the first generated token, though not sure how you'd do that without some code (this feature seems like a good idea to always have enabled actually)
* Wait for the next iteration where I think I have it fixed! Airoboros went through the same issue when they switched to Llama-chat.
* **Lack of Diversity for NSFW Content**: Some common phrases & descriptions are over-used. I believe I know why this is, and believe it can be fixed with more training/diverse content (which is currently being done).
* **ChatGPTisms**: Not refusals, but canned responses, happy endings, that sort of thing. This doesn't happen often, but it shouldn't happen AT ALL, as this was NOT in the training data. But it shows up anyhow, possibly because base Llama-2 has it baked in. I will eventually fight this with DPO (using prompt-biased GPT-4 generated responses as the *rejected* option), but for now, regenerate, or prompt engineer. The model usually *can* re-regenerate diverse responses based on `temperature`, `top_p` and `top_K` (many models don't have a diverse distribution outside the top greedy tokens, but you *don't* want that in a creative model like this one).
* **Repetition**: The usual thing. Not sure why it happens. I avoid it by setting `repetition penalty = 1.16` (or higher) and `repetition range = 4096`. I'm sure you have your own sampling recipe to combat that by now.
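The "ban the EOS token only for the first generated token" workaround above could be sketched as a tiny logits filter. This is a hypothetical, framework-agnostic illustration (real backends would hook this into their sampling loop):

```python
def mask_eos_on_first_step(logits, step, eos_id):
    """Return logits with the EOS token suppressed, but only on the
    first generated token (step 0). Later steps pass through unchanged."""
    if step == 0:
        logits = list(logits)           # copy so the caller's list is untouched
        logits[eos_id] = float("-inf")  # EOS can never win the argmax
    return logits

# Toy example: EOS (id 2) has the highest raw logit on step 0.
logits = [0.1, 0.5, 3.0, 1.2]
masked = mask_eos_on_first_step(logits, step=0, eos_id=2)
best = max(range(len(masked)), key=masked.__getitem__)
print(best)  # 3 -- the model is forced past the immediate EOS
```

On step 1 and later the filter is a no-op, so the model can still end its reply normally.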
Training dataset info available in the [HuggingFace Page](https://huggingface.co/grimulkan/aurelian-alpha0.1-70b-rope8-32K-4.65bpw_h6_exl2)
Next version is already being trained, ETA in a couple of weeks, but I can still influence the training mix and direction based on your feedback.
Let me know what you think & what you'd like to see! I intend to work on this model whenever I can spare the compute. | 2023-12-09T23:48:34 | https://www.reddit.com/r/LocalLLaMA/comments/18eqfqd/aurelian_70b_32k_storywriting_and_more_alpha/ | Grimulkan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18eqfqd | false | null | t3_18eqfqd | /r/LocalLLaMA/comments/18eqfqd/aurelian_70b_32k_storywriting_and_more_alpha/ | false | false | self | 42 | {'enabled': False, 'images': [{'id': 'MCalqV1HoBH4FiM21SoIO1zmo1rxabGTtYiIG6z6kb8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fjdJI20wZWeLaGoZI92qXakf5wHbS1sxCDfcogGIEVY.jpg?width=108&crop=smart&auto=webp&s=4a1b4c5ab08a12413dc526402ff94dcd7b014072', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fjdJI20wZWeLaGoZI92qXakf5wHbS1sxCDfcogGIEVY.jpg?width=216&crop=smart&auto=webp&s=c030eb7afa474c1335a3bb2443fce86371db557e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fjdJI20wZWeLaGoZI92qXakf5wHbS1sxCDfcogGIEVY.jpg?width=320&crop=smart&auto=webp&s=6bd812a3e10943adaa92cd7505e49b0300fb252b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fjdJI20wZWeLaGoZI92qXakf5wHbS1sxCDfcogGIEVY.jpg?width=640&crop=smart&auto=webp&s=cab9749e48311a56072b764992f8f45065932072', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fjdJI20wZWeLaGoZI92qXakf5wHbS1sxCDfcogGIEVY.jpg?width=960&crop=smart&auto=webp&s=6286b521d3573cc8e08fb30972e7d8873cf0d932', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fjdJI20wZWeLaGoZI92qXakf5wHbS1sxCDfcogGIEVY.jpg?width=1080&crop=smart&auto=webp&s=1e39363613dfe496718ab6e2611780ea176e1cac', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fjdJI20wZWeLaGoZI92qXakf5wHbS1sxCDfcogGIEVY.jpg?auto=webp&s=33bd006635ede805f31485be802f8f1d6583e19b', 'width': 1200}, 'variants': {}}]} |
Overview of OS-LLM architecture and how to take things to the next level | 19 | Disclaimer: I've been studying this stuff for a while but I'm not an expert so please forgive any mistakes.
---
There are an awful lot of models out there, and an awful lot of innovations to implement and combine. I want to discuss the state of the art, what is possible with minimal work, what is possible with some significant coding, and what is possible with a freshly trained base model.
Now there will never be one 'best' model for everything, as models are built and tuned for different purposes. Hardware limitations and tradeoffs also apply. That said, there are things which should improve all models. Let's look at the whole 'stack'.
---
- **Above the Model**
1) **User Interface** - This handles inputs and outputs. The UI should be largely independent of the model itself, and doesn't actually change what you can do, but can make a big difference through practicality and convenience. Things like selecting files and formatting their data to add to the prompt can fall into this category. The UI can have automation and limited agent capabilities, such as processing batches of prompts, or scanning outputs for certain strings that indicate that part of the output should be sent to Stable Diffusion to make an image with a specified filename etc.
2) **AI Agents** - This is software which can create instances of various LLM models, processing their outputs to feed back into new instances. A major avenue for improvement is training of the master/director LLM to assess what needs to be done, choose specialised models and make good prompts for them. There isn't a good set of examples to train on, but it should be possible to make synthetic data by experimenting with different prompts for 'subroutines' and seeing which ones give good results. Creation/selection of specialised LLMs is also important. The non-LLM software layer that communicates prompts can also be improved to check for the agent getting stuck in loops. This layer can't 'understand' a planned list of tasks to reach an outcome, but it can check to see if progress is being reported in a defined format, and alter prompts using preset methods until progress resumes, including asking another special LLM to alter the prompt in a way likely to be helpful.
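The loop-detection idea in 2) could be as simple as comparing recent progress reports. A minimal sketch (the report format and window size are assumptions for illustration):

```python
def detect_stall(progress_reports, window=3):
    """Flag a stalled agent: if the last `window` progress reports are
    identical, assume it is looping and the prompt should be altered."""
    if len(progress_reports) < window:
        return False
    tail = progress_reports[-window:]
    return len(set(tail)) == 1  # all reports in the window are the same

history = ["step 1 done", "retrying tool call",
           "retrying tool call", "retrying tool call"]
print(detect_stall(history))  # True -- same report three times in a row
```

A real implementation would then escalate: rephrase the prompt, swap the specialised model, or ask a helper LLM to rewrite the instruction.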
---
- **Model Code**
3) **Tokenization** - The input data (i.e. the prompt) is converted to a series of tokens. The size of the token 'vocabulary' can be changed, and the limited slots can be differently assigned to words and word fragments along with individual characters, which let any text be read at lower efficiency. I'm not sure if there's much room to further improve tokenization without some novel approach I'm not aware of. Maybe there are things that can be done with different languages, using a different set of tokens and embeddings for each language while the model itself is multi-language/language neutral.
4) **Embeddings** - The tokens are converted into vectors through an embedding table. The embedding vector table values are the result of training the model. More complex arrangements are possible, where the initial vectors depend on multiple tokens. I think this achieves a similar thing to the self-attention mechanism.
5) **Layer Architecture** - Most commonly, the set of vectors is passed through a self-attention mechanism then a feed-forward network, making one 'layer'. The outputs are then recycled as the inputs for the same steps again as the next layer. More complex architectures are possible, such as Mixture of Experts, where an initial layer judges which sub-models are likely to be most suitable. Then, layers are loaded and used for the chosen sub-models. Having different sub-model layer options requires more storage, but I don't think it requires more RAM, and it only takes a short while to load the right sub-model into RAM, and it can be kept there for each generated token. There are probably other good ideas with layer swapping that we haven't seen yet.
6) **Attention Mechanism** - The first step of each layer is the attention mechanism, which considers each token vector in relation to the others, and modifies them accordingly. This is where context and meaning are 'understood' ~~this is basically black magic~~. There are promising alternatives to the common Transformer approach, like Mamba.
7) **Feed Forward Network** - The second step of each layer is a Feed Forward Network, where each token vector undergoes some nonlinear transformation based on 'weights and biases' parameters established through training. Each token vector can be processed in parallel, unlike the attention step where they all interact. Maybe there could eventually be a different nonlinear transformation which scales better, but increasing the number of parameters seems to reliably improve model accuracy while making it bigger and slower.
8) **Output Layer** - After the final pass through the autoregressive layers, the final token vector is put through an output layer, which gives a probability distribution for the next token, which can be used to generate the next token. I think that because only the token vector for the final token is used for prediction, there is no point in putting the other token vectors through the final FFN, which is an optimisation that is probably already used.
9) **Token Selection** - This is picking a token to 'generate' or output, from the probability distribution produced in the output layer. There is huge scope for tinkering here, as any selection method can be patched into basically any model, which all use the same probability distribution list system. 'Temperature' is a way to get more surprising outputs, but there are countless other potential approaches which should be quite easy to implement.
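As a concrete illustration of 9), here is a minimal temperature + top-k sampler over a raw logit list. This is a sketch of the general idea, not any particular backend's implementation:

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=0):
    """Pick a token id from raw logits: optionally keep only the top_k
    candidates, then apply a temperature-scaled softmax and draw one."""
    ids = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    if top_k > 0:
        ids = ids[:top_k]                       # discard unlikely tokens
    scaled = [logits[i] / temperature for i in ids]
    m = max(scaled)                             # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(ids, weights=probs)[0]

random.seed(0)
logits = [2.0, 1.0, 0.1, -1.0]
print(sample_token(logits, temperature=0.7, top_k=2))  # always token 0 or 1
```

Lower temperature sharpens the distribution toward the greedy pick; higher temperature flattens it for more surprising outputs, which is exactly the knob being described above.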
---
- **Model Training**
10) **Model Setup** - There are lots of values that need to be set when creating a model, with big effects on performance. Token vocabulary size, token vector height and precision, attention head count, hidden dimension size, max context length, feedforward layer count, and the number of layer cycles. Several of these contribute to the 'parameter count', which doesn't capture the nuances. They all have quality benefits and performance costs, and I'm sure experience and experimenting will help prioritise the variables that give the best bang per buck.
11) **Data** - The model can't be trained without data. Both the difficulty in acquiring huge amounts of high quality data, and the cost of training are significant barriers to open source LLMs. There are all sorts of tricks and techniques to improve data quantity and quality, but they can't overcome the gap. Hopefully big players keep releasing 'open parameter' models, which are the results of training without the actual training data, driven by wanting to implement advancements in OS LLM code. Approaches and tricks for training are a big topic, but mostly this can save training time and cost somewhat, but doesn't improve the final models.
12) **Fine Tuning** - This allows a base model to be tailored to a specific use using smaller sets of examples, and is more accessible than training a new model. As I understand, it can't really 'teach' the model anything new, but can make it more focused and effectively change the personality or motivations.
13) **Quantization** - This is just throwing away some of the data in the default 16 bit parameters to reduce model size. The key is smartly throwing away the bits which matter the least. Better quantization algorithms mean you can keep more of the quality of a big model in a small version of the model.
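A toy illustration of 13): symmetric 8-bit block quantization keeps one float scale per block and rounds each weight to an int8. Real schemes (Q6_K, GPTQ, etc.) are far more elaborate, but the core idea looks like this:

```python
def quantize_q8(weights):
    """Symmetric 8-bit quantization of a block of weights:
    store small integers plus one float scale for the whole block."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]  # each value fits in int8
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized block."""
    return [x * scale for x in q]

w = [0.02, -0.5, 0.31, 1.27]
q, s = quantize_q8(w)
approx = dequantize(q, s)
print(max(abs(a - b) for a, b in zip(w, approx)))  # tiny reconstruction error
```

The "smart" part of production quantizers is choosing which blocks and layers can tolerate fewer bits, so the error lands where it matters least.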
---
14) **Hardware optimisation** - Reimplementing algorithms to make the most of common CPU and GPU architectures can make a huge difference. Some of this needs to be done before a model is trained, but some can be done after the fact by changing the code to run the model while keeping the model 'data'.
---
In conclusion, I expect new base models to be released at their own tempo. For any researchers or CS geniuses, if you can come up with any improvements to model code, or new approaches to one of the LLM stages, it has a good chance to be adopted in a big model within a few months after a small model proof of concept. For other software developers, I think that the UI and Agent software has huge potential to make local LLMs dramatically more useful, in a way that won't be made redundant when new models come out.
I'd like to hear all your thoughts about this. Thanks for reading! | 2023-12-09T22:19:46 | https://www.reddit.com/r/LocalLLaMA/comments/18eome3/overview_of_osllm_architecture_and_how_to_take/ | EvokerTCG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18eome3 | false | null | t3_18eome3 | /r/LocalLLaMA/comments/18eome3/overview_of_osllm_architecture_and_how_to_take/ | false | false | self | 19 | null |
Is there any open source app to load a model and expose API like OpenAI? | 23 | 99% of my usage of local LLMs is via API.
I have created my own, but with all the advancements it is hard to keep up to date. I also use r/Oobabooga for quick demos.
I am wondering if there isn't some OS project that is entirely focused on exposing API to interact with an LLM. | 2023-12-09T22:09:33 | https://www.reddit.com/r/LocalLLaMA/comments/18eoeti/is_there_any_open_source_app_to_load_a_model_and/ | Ion_GPT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18eoeti | false | null | t3_18eoeti | /r/LocalLLaMA/comments/18eoeti/is_there_any_open_source_app_to_load_a_model_and/ | false | false | self | 23 | null |
RAG for image question answering | 5 | Hi,
Has anyone seen anything related to using a RAG based solution with a vision model to answer specific questions about an image? I’m looking to answer questions about the price of merchandise in an image of a store by using a price menu as the reference document.
Thanks! | 2023-12-09T21:12:03 | https://www.reddit.com/r/LocalLLaMA/comments/18en7pi/rag_for_image_question_answering/ | loslow34 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18en7pi | false | null | t3_18en7pi | /r/LocalLLaMA/comments/18en7pi/rag_for_image_question_answering/ | false | false | self | 5 | null |
Yi Context Length | 3 | I’m looking at this list: https://ollama.ai/library/yi/tags
This may be a dumb question, but are the only versions capable of 200k context size the ones labeled as such? In which case, only the 6B versions have that?
I have the memory to run any of these. Trying to get summaries of docs 10,000-15,000 words long. | 2023-12-09T20:54:35 | https://www.reddit.com/r/LocalLLaMA/comments/18emuj7/yi_context_length/ | Hinged31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18emuj7 | false | null | t3_18emuj7 | /r/LocalLLaMA/comments/18emuj7/yi_context_length/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]} |
TinyLlama - Any news? | 26 | Do any of you know what's up with the current TinyLlama checkpoint ( -intermediate-2T tokens)? It is now four days delayed.
Would be glad if someone could provide more info. I've also looked at the progress on W&B, but unfortunately it's not getting any clearer to me, and I can't quite understand this interim correction there either:
[https://wandb.ai/lance777/lightning\_logs/reports/metric-train\_loss-23-09-04-23-38-15---Vmlldzo1MzA4MzIw?accessToken=5eu2sndit2mo6eqls8h38sklcgfwt660ek1f2czlgtqjv2c6tida47qm1oty8ik9](https://wandb.ai/lance777/lightning_logs/reports/metric-train_loss-23-09-04-23-38-15---Vmlldzo1MzA4MzIw?accessToken=5eu2sndit2mo6eqls8h38sklcgfwt660ek1f2czlgtqjv2c6tida47qm1oty8ik9)
​ | 2023-12-09T20:23:20 | https://www.reddit.com/r/LocalLLaMA/comments/18em6r9/tinyllama_any_news/ | Evening_Ad6637 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18em6r9 | false | null | t3_18em6r9 | /r/LocalLLaMA/comments/18em6r9/tinyllama_any_news/ | false | false | self | 26 | null |
LLMs in the middle: Content-aware browser filters for social media | 22 | 2023-12-09T20:05:03 | https://karimjedda.com/llms-in-the-middle-content-aware-client-side-filtering/ | kaiserk13 | karimjedda.com | 1970-01-01T00:00:00 | 0 | {} | 18elso2 | false | null | t3_18elso2 | /r/LocalLLaMA/comments/18elso2/llms_in_the_middle_contentaware_browser_filters/ | false | false | 22 | {'enabled': False, 'images': [{'id': 'es03BSmKZSqRZxBY60FeyYuBDIhNmpdTYsk8GdRCduw', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/3XEnZR-45OLspj2MVH3b_oOGjkDMDfxx75rH3f2gDBg.jpg?width=108&crop=smart&auto=webp&s=4a5ffa408801bd323ab41379397c59ed82da07e6', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/3XEnZR-45OLspj2MVH3b_oOGjkDMDfxx75rH3f2gDBg.jpg?width=216&crop=smart&auto=webp&s=b94ee3c714773e431aac5e75ad8e1a28faec7bd1', 'width': 216}, {'height': 170, 'url': 'https://external-preview.redd.it/3XEnZR-45OLspj2MVH3b_oOGjkDMDfxx75rH3f2gDBg.jpg?width=320&crop=smart&auto=webp&s=8445553d13feafdc128aa2bb0d4a692219bee36c', 'width': 320}, {'height': 340, 'url': 'https://external-preview.redd.it/3XEnZR-45OLspj2MVH3b_oOGjkDMDfxx75rH3f2gDBg.jpg?width=640&crop=smart&auto=webp&s=1ea1f932bbbabfb81af3ca354ef34161aa21fd3e', 'width': 640}, {'height': 510, 'url': 'https://external-preview.redd.it/3XEnZR-45OLspj2MVH3b_oOGjkDMDfxx75rH3f2gDBg.jpg?width=960&crop=smart&auto=webp&s=f5e0480e2cc16446b36e8dfd30f4c6c837256ce4', 'width': 960}], 'source': {'height': 569, 'url': 'https://external-preview.redd.it/3XEnZR-45OLspj2MVH3b_oOGjkDMDfxx75rH3f2gDBg.jpg?auto=webp&s=9efdfb366b207b2396fcc244b2c1b82b55b050d1', 'width': 1070}, 'variants': {}}]} | ||
DIY converting safetensors format to GGUF on a Mac | 15 | I have a Mac Studio M2 128GB that I've been using with great results lately. It's been much faster than running models on my machine with a single 4090.
But lately, I've been wanting to try several new models that TheBloke hasn't gotten around to converting to GGUF yet.
As it turns out, I didn't have to wait.
### Requirements
[llama.cpp](https://github.com/ggerganov/llama.cpp) - make sure and follow the installation instructions, install the requirements with pip, etc.
A Mac with Apple Silicon
Plenty of disk space
### Conversion process
The convert script will take, as input, the safetensors files and output either an f32, f16, or Q8_0 GGUF for you. I recommend using f16 unless all you need as a final result is a Q8_0, in which case you can just go directly to that quantization. f32 will technically be higher quality but substantially larger (and the f16 for my 103B input model was already 200GB)
Here's the command I used for creating the f16 gguf:
`python convert.py --outtype f16 models/Rogue-Rose-103b-v0.2`
It took about 10-15 minutes and outputted ggml-model-f16.gguf into the original folder for us.
It's safe to delete the .safetensors files once you have your f16 gguf.
### Quantization
Now that we have our f16, we can quantize the result into any format we'd like:
`./quantize models/ggml-model-f16.gguf models/Rogue-Rose-103B.Q6_K.gguf Q6_K`
Again, it took about 10-15 minutes for this 103B model on an M2 Mac Studio. Smaller models would likely go faster. I don't know how much memory is required to quantize the models, but I was able to quantize models that I don't have enough RAM to run.
I hope this is helpful! | 2023-12-09T19:56:47 | https://www.reddit.com/r/LocalLLaMA/comments/18elm98/diy_converting_safetensors_format_to_gguf_on_a_mac/ | Sakedo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18elm98 | false | null | t3_18elm98 | /r/LocalLLaMA/comments/18elm98/diy_converting_safetensors_format_to_gguf_on_a_mac/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]} |
Open Inference Engine Comparison | Features and Functionality of TGI, vLLM, llama.cpp, and TensorRT-LLM | 19 | 2023-12-09T19:33:26 | https://github.com/lapp0/lm-inference-engines/ | andrewlapp | github.com | 1970-01-01T00:00:00 | 0 | {} | 18el4gn | false | null | t3_18el4gn | /r/LocalLLaMA/comments/18el4gn/open_inference_engine_comparison_features_and/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'QcoAuD6nN6iY47Y1pZuMjDzBToFqRjSHiJ6b37CV1ic', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uLnQ96wmcwfC9lp72bR-0F2y6HPbhYWniv4U1LTpM5c.jpg?width=108&crop=smart&auto=webp&s=c4c8a7f0d8eda75a3f075cf940cbccc4a00af35b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uLnQ96wmcwfC9lp72bR-0F2y6HPbhYWniv4U1LTpM5c.jpg?width=216&crop=smart&auto=webp&s=8baf4cef85ee17e5173579c8cb61959828e2fff1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uLnQ96wmcwfC9lp72bR-0F2y6HPbhYWniv4U1LTpM5c.jpg?width=320&crop=smart&auto=webp&s=1b64e350df2e13d9d89b98ba00c60d43d58674c9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uLnQ96wmcwfC9lp72bR-0F2y6HPbhYWniv4U1LTpM5c.jpg?width=640&crop=smart&auto=webp&s=fafd6945430a2f1ea212719fd06696a6818fd42e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uLnQ96wmcwfC9lp72bR-0F2y6HPbhYWniv4U1LTpM5c.jpg?width=960&crop=smart&auto=webp&s=388f40f883f640c163f16851d61ebd28fc813614', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uLnQ96wmcwfC9lp72bR-0F2y6HPbhYWniv4U1LTpM5c.jpg?width=1080&crop=smart&auto=webp&s=338dbc8f082925ef51a325cad96d48f93f6e7fc6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uLnQ96wmcwfC9lp72bR-0F2y6HPbhYWniv4U1LTpM5c.jpg?auto=webp&s=ae6c06ac481ffc19bd2c3756f6c85e132e805cb2', 'width': 1200}, 'variants': {}}]} | ||
Estimate price-per-token / inference cost for locally deployed models accessible via API | 1 | [removed] | 2023-12-09T19:32:48 | https://www.reddit.com/r/LocalLLaMA/comments/18el3zd/estimate_pricepertoken_inference_cost_for_locally/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18el3zd | false | null | t3_18el3zd | /r/LocalLLaMA/comments/18el3zd/estimate_pricepertoken_inference_cost_for_locally/ | false | false | self | 1 | null |
Fine-tuned DiscoLM Mixtral model after only 24 hours! | 167 | 2023-12-09T19:18:52 | https://huggingface.co/DiscoResearch/DiscoLM-mixtral-8x7b-v2 | jd_3d | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 18ektd7 | false | null | t3_18ektd7 | /r/LocalLLaMA/comments/18ektd7/finetuned_discolm_mixtral_model_after_only_24/ | false | false | 167 | {'enabled': False, 'images': [{'id': 'oQNhmIAEIIxFvzbRgVH-KHgMxTIHGWfRZoPfHNwUcLs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wcPMlqSjrue99vHhUC9DnoBHBnBuhMagJ57CXns24PI.jpg?width=108&crop=smart&auto=webp&s=c66c2ff5a8160be1e9c2b19dc70a468ce0aba03c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wcPMlqSjrue99vHhUC9DnoBHBnBuhMagJ57CXns24PI.jpg?width=216&crop=smart&auto=webp&s=940b7bacc458c5e02db24c2d696ee5906530a671', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wcPMlqSjrue99vHhUC9DnoBHBnBuhMagJ57CXns24PI.jpg?width=320&crop=smart&auto=webp&s=196162c2fffc7bbf52570fb2000e520bfe485928', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wcPMlqSjrue99vHhUC9DnoBHBnBuhMagJ57CXns24PI.jpg?width=640&crop=smart&auto=webp&s=f91658709e308e15b29082b13441bf6d032505af', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wcPMlqSjrue99vHhUC9DnoBHBnBuhMagJ57CXns24PI.jpg?width=960&crop=smart&auto=webp&s=43852cef27af97d770f7255310ee3d9e176e72f9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wcPMlqSjrue99vHhUC9DnoBHBnBuhMagJ57CXns24PI.jpg?width=1080&crop=smart&auto=webp&s=b0ddd5f16f58094ee4eafb52ed3d119166a548fa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wcPMlqSjrue99vHhUC9DnoBHBnBuhMagJ57CXns24PI.jpg?auto=webp&s=3d2e8c53e1d573d0160be5070b72fecc8d1cbe2b', 'width': 1200}, 'variants': {}}]} | ||
What is fblgit/una Unified Neural Alignment? Looks like cheating on testset and overfitting. | 13 | Those UNA-\*models have high TruthfulQA and ARC, but hallucinating much worse than those normal models.
And fblgit, this guy is hiding something - "What is UNA? A formula & A technique to TAME models"
We have no idea what UNA is, and he preferred not to say.
>Funded by Cybertron's H100's with few hours training.
Who is this Cybertron? Never heard of it. I think it is another pseudonym of his. [juanako.ai](https://juanako.ai) \- Xavier M. - fblgit, all these nicknames are the same person.
> The model is very good, works well on almost any prompt but ChatML format and Alpaca System gets the best
How can this ever happen? But in my test, it is not working well in formats other than chatml, and still hallucinates a lot - more than any other normal models. And considering its #1 high score, it seems just overfitting on test datasets to cheat benchmark. | 2023-12-09T18:36:34 | https://www.reddit.com/r/LocalLLaMA/comments/18ejwm1/what_is_fblgituna_unified_neural_alignment_looks/ | PuzzledTeam5961 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ejwm1 | false | null | t3_18ejwm1 | /r/LocalLLaMA/comments/18ejwm1/what_is_fblgituna_unified_neural_alignment_looks/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'E52zxkgIQ4erLeto4AhQQwIDH-4S-D7pyYegSKhUC3Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pX1sM-1B5Q7cYzBztyqS3kCWYJKD4n6j9K-WJrDmC10.jpg?width=108&crop=smart&auto=webp&s=11874a81c84f6ac8d6af4cd02b91b763e5aa9e63', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pX1sM-1B5Q7cYzBztyqS3kCWYJKD4n6j9K-WJrDmC10.jpg?width=216&crop=smart&auto=webp&s=1ee0c4b18aaf0e93ccd3a971fe32081f3406766e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pX1sM-1B5Q7cYzBztyqS3kCWYJKD4n6j9K-WJrDmC10.jpg?width=320&crop=smart&auto=webp&s=09e4a196c01af4a45549ae99925050677754aa38', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pX1sM-1B5Q7cYzBztyqS3kCWYJKD4n6j9K-WJrDmC10.jpg?width=640&crop=smart&auto=webp&s=91671ecc5e75078769d60d263cec150a53d2e1f8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pX1sM-1B5Q7cYzBztyqS3kCWYJKD4n6j9K-WJrDmC10.jpg?width=960&crop=smart&auto=webp&s=110dfb0cf008fe17de74f6a0ee93dc4ce43c816f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pX1sM-1B5Q7cYzBztyqS3kCWYJKD4n6j9K-WJrDmC10.jpg?width=1080&crop=smart&auto=webp&s=d07b37d24ed9a61e19b740b70435460429e029f6', 'width': 1080}], 'source': {'height': 648, 'url': 
'https://external-preview.redd.it/pX1sM-1B5Q7cYzBztyqS3kCWYJKD4n6j9K-WJrDmC10.jpg?auto=webp&s=6c87ba8708477672a80d7d7c8238b6dbff1f8265', 'width': 1200}, 'variants': {}}]} |
Memory Requirements for Training Over Time | 1 | I’m generally aware of decreasing memory requirements overtime with the introduction of various quantization levels, or approaches such as LoRA. Does anyone know of a longitudinal measure of requirements? E.g. when llama first came out what was the card size required, versus now? My Google Fu is failing me on this. | 2023-12-09T18:26:27 | https://www.reddit.com/r/LocalLLaMA/comments/18ejoq4/memory_requirements_for_training_over_time/ | Mbando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ejoq4 | false | null | t3_18ejoq4 | /r/LocalLLaMA/comments/18ejoq4/memory_requirements_for_training_over_time/ | false | false | self | 1 | null |
LLava 1.5 13b on LM studio | 1 | [removed] | 2023-12-09T17:06:36 | https://www.reddit.com/r/LocalLLaMA/comments/18ehzjk/llava_15_13b_on_lm_studio/ | danilo139hg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ehzjk | false | null | t3_18ehzjk | /r/LocalLLaMA/comments/18ehzjk/llava_15_13b_on_lm_studio/ | false | false | 1 | null | |
People who've used RWKV, whats your wishlist for it? | 29 | Seeing the recent outburst of new ssms, thought I'd ask people what their wishlist for rwkv was.
Most of the releases bench against rwkv v4, therefore making themselves look better, but with rwkv5, its 3b matches mamba 3b, and its 7b matches striped hyena 7b base, but is worse than the special tek finetune. all the rwkv v5 models are base models.
For those people who haven't heard of it, it's striped hyena with less GPU usage, mamba with more scaling, and a rwkv 7b bf16 on a 3090 is faster than a llama2 7b fp8 on an h100.
Things being worked on rn:
- HF transformers integration for v5, (v4 is there)
- better tooling (We have rwkv.cpp for quants, moving to llama.cpp, axolotl support, etc)
- An online platform (maybe a better version of the openAI api?)
- Bigger, Better dataset (this is one im also trying to work on!)
- Even more powerful arch's, scaling to 34b (They are making a v6, trying to get more compute)
- long context models (someones trying to build a 300k ctx len version, and thanks to the rwkv arch, 300k only uses a couple mb!)
- hosted setup (Theres api endpoints, google "rwkv recursal.ai")
My wishlist
- Multimodal (gathering data for this, would love if they made a fuyu/gemini version!)
Wbu? | 2023-12-09T16:37:51 | https://www.reddit.com/r/LocalLLaMA/comments/18ehd9z/people_whove_used_rwkv_whats_your_wishlist_for_it/ | vatsadev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ehd9z | false | null | t3_18ehd9z | /r/LocalLLaMA/comments/18ehd9z/people_whove_used_rwkv_whats_your_wishlist_for_it/ | false | false | self | 29 | null |
Metacall.io running inside of llama.cpp | 2 | [removed] | 2023-12-09T16:35:17 | introsp3ctor | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18ehbc7 | false | null | t3_18ehbc7 | /r/LocalLLaMA/comments/18ehbc7/metacallio_running_inside_of_llamacpp/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'vtqsnhZYj90YNMewztStAY-5MZMkAlGzRefb5V9afQU', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/l3wk0jaara5c1.jpg?width=108&crop=smart&auto=webp&s=a3870b51f252416e6a63e6eee1e5d187b86c7c56', 'width': 108}, {'height': 104, 'url': 'https://preview.redd.it/l3wk0jaara5c1.jpg?width=216&crop=smart&auto=webp&s=e03f2b56ed0a3a62d79bbc488006acdd24f7aae0', 'width': 216}, {'height': 154, 'url': 'https://preview.redd.it/l3wk0jaara5c1.jpg?width=320&crop=smart&auto=webp&s=75aaeeeb841335a373c2a9dde11865261da2e544', 'width': 320}, {'height': 308, 'url': 'https://preview.redd.it/l3wk0jaara5c1.jpg?width=640&crop=smart&auto=webp&s=d5167ebf513dee2a5f46d4f040a390fb03ae4a7f', 'width': 640}, {'height': 462, 'url': 'https://preview.redd.it/l3wk0jaara5c1.jpg?width=960&crop=smart&auto=webp&s=9c22ef0bdea72102fd3e27b4a075dc81a7491288', 'width': 960}, {'height': 520, 'url': 'https://preview.redd.it/l3wk0jaara5c1.jpg?width=1080&crop=smart&auto=webp&s=edb183d0309fceb5858e3bbf80e585a061464ee7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/l3wk0jaara5c1.jpg?auto=webp&s=967fd5e80bacf2d92cce301b0651f1e76c609ae4', 'width': 2241}, 'variants': {}}]} | ||
Compare a 5 page document against a 20 page code of conduct? | 3 | I pay my lovely £20 ChatGPT fee for one singular purpose.
I have a GPT which contains maybe 10 PDFs at most. One is a code of conduct, one is what a “work document” should contain, a sort of checklist. And the rest are documents on different aspects of a topic. For example maybe one is a document that contains 20 pages of how and when you can use a certain legal exemption. The next document refers to a different exemption.
I write my usual work document; my brain holds the knowledge about the code of conduct etc. However, I upload this work document and ask the GPT to check if I have fulfilled my obligations in line with that code. I'm not asking it for legal advice, but it does need to reason and explain.
So - this is just a small workflow but has been really handy, it’s reduced the first stage checking of a doc from half an hour to 5 minutes max. It then goes to another human for final review.
Problem: GPT-4 has a 40-message limit per 3 hours. I'm hitting this frequently.
I’m considering my alternatives. I don’t feel like my use case is significantly resource intensive but maybe I’m wrong? I’m desperate to run a local llm (ADHD hyper focus) but I’m fairly ignorant.
Could I achieve what I want by doing something local, or should I just stick to ChatGPT and maybe look at doing it via API and building a low-code app to hook up to it instead?
Mixtral: A Promising Model with Unforeseen Challenges | 166 | I was able to spend significant time yesterday testing Mistral in a plethora of ways. If you go into it with high hopes off the bat, you are going to be disappointed. Just putting that out there upfront. There are bugs with this model. Why are those bugs there though, and are they fixable? Possibly, possibly not. This is the first time a model of this size with this architecture has been released for pure testing purposes. I do not know.
One of the most concerning issues is the model's apparent "AI Multiple Personality Disorder." This phenomenon manifests as inconsistencies in the responses generated by different component models, leading to incoherent and unreliable outputs. This inconsistency is particularly pronounced when the model is presented with challenging questions, where it often performs worse than other LLM models.
While the exact cause of this inconsistency remains unclear, it's likely to be linked to several factors. The training data used for Mixtral may not have adequately addressed such scenarios, leading the models to be unprepared for them. Additionally, the controller, responsible for selecting the appropriate model for each query, may not be sophisticated enough to consistently make optimal decisions.
Furthermore, I do not know how the developers set up the model's encoder/decoder configuration, which hinders a deeper understanding of its behavior. As an experienced MoE developer, I can attest that even minor changes in this configuration can have drastic effects on the model's performance. Without this information, it's impossible to definitively assess Mixtral's potential and possible pathways for improvement.
Despite these challenges, Mixtral's release marks a significant milestone in the development of MoE-based LLMs. It provides invaluable data and insights that can guide future research and development efforts. It's crucial to remember that Mixtral is still in its early stages of development and should be treated as a test model. More research is needed to fully understand its capabilities and limitations, optimize its performance, and ensure its reliability and consistency.
Bottom line, I am writing this because I am afraid people are going to use the model, think it is utter trash, and move on while trashing it. As someone who has actually played around with MoEs pretty extensively, I have solved problems that Mixtral is currently displaying by adding literally one line of code to the encoder/decoder.
Is that all Mixtral needs? I do not know; I do not possess a personal server to test these things on at the level where I can actually play around with the backend of this model and stand it up. It is going to take time to do research with this model. MoEs are weird. You have to fine-tune every single aspect of them. These things will take time to research out. Please, just don't trash the overall architecture as a knee-jerk reaction. More testing is needed.
Using Llama Guard to moderate user/agent interaction from ChatGPT | 1 | [removed] | 2023-12-09T16:03:44 | https://www.reddit.com/r/LocalLLaMA/comments/18egn91/using_llama_guard_to_moderate_useragent/ | InevitableSky2801 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18egn91 | false | null | t3_18egn91 | /r/LocalLLaMA/comments/18egn91/using_llama_guard_to_moderate_useragent/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]} |
aphrodite (help) | 2 | I installed Aphrodite Engine and everything it needs. What do I do now? Do I write a command? I'm stuck in the script.
Looking for Paper about LLM Fine Tuning for specific topic / Alignment Paper | 2 | Hi, I am new to LLMs and I am trying to fine-tune a 7B model to understand one specific topic only, as a chat assistant (pretty much like everyone is trying to do). Every guide I can find just says to make the dataset diverse. But how diverse, and what kind of diversity?
Due to my limited knowledge, I could not find any paper/resource that shows experiments on dataset augmentation, with results that clearly show what kind of augmentation an LLM needs to fully understand a topic.
I can only find this [reversal curse](https://arxiv.org/abs/2309.12288) paper, which shows we also need to reverse the augmentation to make the LLM know that "Z is the mother of celebrity A" and not just that "celebrity A's mother is Z". I know that there is Evol-Instruct, but it is too wild, and its target is not to understand a topic but to get instructions as complex as possible.
Is there any other resource to learn what kind of dataset augmentation we need to make LLM fully understand about a specific topic? | 2023-12-09T14:50:52 | https://www.reddit.com/r/LocalLLaMA/comments/18ef5j9/looking_for_paper_about_llm_fine_tuning_for/ | nggakmakasih | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ef5j9 | false | null | t3_18ef5j9 | /r/LocalLLaMA/comments/18ef5j9/looking_for_paper_about_llm_fine_tuning_for/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=108&crop=smart&auto=webp&s=bc9575b410002edc2df3c5b5b0355fefedc7baa8', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=216&crop=smart&auto=webp&s=dbce7f303173724d23fb33cd3fc636c04c72b290', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=320&crop=smart&auto=webp&s=c1043d604105157f56a615cc59bb14d7ae64653f', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=640&crop=smart&auto=webp&s=ce8b9192ed7ca476d2844aaa405c5014a7a1ab45', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=960&crop=smart&auto=webp&s=76aed6fd51086798b2d415a7d57562c967db4111', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?width=1080&crop=smart&auto=webp&s=46129c06d8fad9a58fff9740c079e13d4e829213', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/izh8gZHY4FqZ1nwtU1N_TjtohUCNuvTyMn90toXda80.jpg?auto=webp&s=8efe489c05609f1626bbb44354c77840623707de', 'width': 1200}, 'variants': {}}]} |
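As a minimal sketch of the kind of reversal augmentation that paper suggests — the relation template and names below are made up for illustration, not taken from any real dataset:

```python
def reversal_pairs(subject, relation, obj):
    """Return both directions of a (subject, relation, object) fact."""
    forward = f"{subject}'s {relation} is {obj}."
    backward = f"{obj} is the {relation} of {subject}."
    return [forward, backward]

# Illustrative facts only -- a real dataset would template many relations.
facts = [("celebrity A", "mother", "Z")]
augmented = [line for fact in facts for line in reversal_pairs(*fact)]
print(augmented)
# ["celebrity A's mother is Z.", "Z is the mother of celebrity A."]
```

Doubling each fact this way is cheap; whether it is sufficient for a given topic is exactly the kind of controlled experiment that seems hard to find in the literature.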
Any recommendations for cloud services to host llms? | 2 | Could you recommend a service, where you can use all these models (preferably also image generation)? | 2023-12-09T14:01:44 | https://www.reddit.com/r/LocalLLaMA/comments/18ee823/any_recommendations_for_cloud_services_to_host/ | Wonderful-Eye-71 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ee823 | false | null | t3_18ee823 | /r/LocalLLaMA/comments/18ee823/any_recommendations_for_cloud_services_to_host/ | false | false | self | 2 | null |
Patiently Lurking from the Shadows | 12 | https://huggingface.co/TheBloke/mixtral-7B-8expert-GPTQ/tree/main
The title is an original masterpiece of my own wetware llm. | 2023-12-09T14:01:22 | https://www.reddit.com/r/LocalLLaMA/comments/18ee7ry/patiently_lurking_from_the_shadows/ | shaman-warrior | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ee7ry | false | null | t3_18ee7ry | /r/LocalLLaMA/comments/18ee7ry/patiently_lurking_from_the_shadows/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': '1tbQJmXKdfQqef0fnT21qOI8R8AMpcfEwW28TVqiBhA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OdYeu0QH8G23X2-o5Ro2fzTuiLHNRm8N57peTW4WrH8.jpg?width=108&crop=smart&auto=webp&s=a3fe267b9625e4f6261e67ef041e96dcbc2ed9e2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OdYeu0QH8G23X2-o5Ro2fzTuiLHNRm8N57peTW4WrH8.jpg?width=216&crop=smart&auto=webp&s=6d86d076b3b2482a4232b462f04b2fb1a2c34869', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OdYeu0QH8G23X2-o5Ro2fzTuiLHNRm8N57peTW4WrH8.jpg?width=320&crop=smart&auto=webp&s=0fe3c1824cd0c8d219586a3458a1aa4a05446a6c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OdYeu0QH8G23X2-o5Ro2fzTuiLHNRm8N57peTW4WrH8.jpg?width=640&crop=smart&auto=webp&s=1ba970345c9e6fb218df8dc6473bceab86092ba4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OdYeu0QH8G23X2-o5Ro2fzTuiLHNRm8N57peTW4WrH8.jpg?width=960&crop=smart&auto=webp&s=5a1dfa1e3e0aadfb614028c7c3f9c0a97b4de6e5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OdYeu0QH8G23X2-o5Ro2fzTuiLHNRm8N57peTW4WrH8.jpg?width=1080&crop=smart&auto=webp&s=f5209bbe07bb0143162fab4b05c4c064437afe0f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OdYeu0QH8G23X2-o5Ro2fzTuiLHNRm8N57peTW4WrH8.jpg?auto=webp&s=03f5942f1dbde5846e231a8bb46c16aa5a35f75f', 'width': 1200}, 'variants': {}}]} |
Loquace-7B-Mistral - An Italian speaking LLM good at following instructions. | 31 | Hi Everyone,
I just released the new version of the Loquace Family model, a project that started in the beginning of 2023 with the aim of democratizing LLMs in Italy.
It's a 7B fine-tuned instruct version of Mistral with 2k of context. It's not intended to be used as a chatbot but performs pretty well in NLP tasks. I've been using it in my pipeline for information extraction and dataset augmentation with fairly good results.
Its key features:
\- It's pretty good at following instructions in Italian.
\- Responds well to prompt-engineering.
\- Works well in a RAG setup.
\- It has been trained on a relatively raw dataset ([Loquace-102K](https://huggingface.co/datasets/cosimoiaia/Loquace-102k)) using QLoRa.
\- Training took only 4 hours on a 3090, costing a little more than **1 euro**!
\- It is completely open source: Model, Dataset and Code to replicate the results are fully available.
\- It's created in a garage in the south of Italy. :-)
I would love to hear your thoughts and feedback! | 2023-12-09T13:30:05 | https://www.reddit.com/r/LocalLLaMA/comments/18edn1w/loquace7bmistral_an_italian_speaking_llm_good_at/ | cosimoiaia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18edn1w | false | null | t3_18edn1w | /r/LocalLLaMA/comments/18edn1w/loquace7bmistral_an_italian_speaking_llm_good_at/ | false | false | self | 31 | {'enabled': False, 'images': [{'id': 'Cn8DKZUElxbYbUJ2gm-qzOxWVae7Apcn-QB2mtmsLu4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bRZUhx1xYNQx6KAXrSfPPPgn-n2N4qVXvCkz2_4uZy4.jpg?width=108&crop=smart&auto=webp&s=09d44e2844fa091d14b06cda9224e39dc60e31aa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bRZUhx1xYNQx6KAXrSfPPPgn-n2N4qVXvCkz2_4uZy4.jpg?width=216&crop=smart&auto=webp&s=51fe9a9b4279a73a455a0e1c8b6ae644b4d60359', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bRZUhx1xYNQx6KAXrSfPPPgn-n2N4qVXvCkz2_4uZy4.jpg?width=320&crop=smart&auto=webp&s=6b66aa49d890602b960a1fdb6689dbf7d3ee10f0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bRZUhx1xYNQx6KAXrSfPPPgn-n2N4qVXvCkz2_4uZy4.jpg?width=640&crop=smart&auto=webp&s=0f4690d2a466b9cc101a8a0b2428f7f89ea6c30b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bRZUhx1xYNQx6KAXrSfPPPgn-n2N4qVXvCkz2_4uZy4.jpg?width=960&crop=smart&auto=webp&s=ac760141bb537687558e08b6bcca51acc5acc32e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bRZUhx1xYNQx6KAXrSfPPPgn-n2N4qVXvCkz2_4uZy4.jpg?width=1080&crop=smart&auto=webp&s=1cdc4f2993b8600455e0a1fefcf932589b03752b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bRZUhx1xYNQx6KAXrSfPPPgn-n2N4qVXvCkz2_4uZy4.jpg?auto=webp&s=9c8bfec68c7bc4684181fd92413b8303fb92df8a', 'width': 1200}, 'variants': {}}]} |
Unit tests feedback loop | 2 | Have you set up a feedback loop system with your local llama?
Like: give a prompt for creating code for a task, then automatically unit-test the result from the model. If it fails, give the error back to the model and ask it to correct it.
And repeat the process until the unit tests are successful.
With langchain etc..? | 2023-12-09T13:16:01 | https://www.reddit.com/r/LocalLLaMA/comments/18ede3y/unit_tests_feedback_loop/ | troposfer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ede3y | false | null | t3_18ede3y | /r/LocalLLaMA/comments/18ede3y/unit_tests_feedback_loop/ | false | false | self | 2 | null |
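The loop described above can be sketched without any framework at all. In this hypothetical sketch, `ask_llm` is a stub standing in for whatever backend you use (LangChain, llama.cpp, an OpenAI-compatible server); the generate/test/repair shape is the point, not the model wiring:

```python
import traceback

def ask_llm(prompt):
    # Stub standing in for a real model call (hypothetical -- swap in
    # your actual backend here). This fake model "fixes" its bug once
    # the error text is fed back into the prompt.
    if "NameError" in prompt:
        return "def add(a, b):\n    return a + b"
    return "def add(a, b):\n    return a + c"  # deliberately buggy first draft

def passes_tests(code):
    """Run the generated code against a unit test; return (ok, error_text)."""
    env = {}
    try:
        exec(code, env)               # define the function
        assert env["add"](2, 3) == 5  # the unit test
        return True, ""
    except Exception:
        return False, traceback.format_exc()

prompt = "Write a Python function add(a, b) that returns their sum."
for attempt in range(3):
    code = ask_llm(prompt)
    ok, error = passes_tests(code)
    if ok:
        break
    prompt += f"\nYour last answer failed with:\n{error}\nPlease correct the code."

print(f"passed: {ok} on attempt {attempt + 1}")
```

In a real pipeline you would run the generated code in a sandbox rather than `exec` it in-process, and cap the retry count so a stuck model doesn't loop forever.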
I see the term "State of the Art" thrown around a lot. In the context of LLMs (or related algorithms like quantization), does that mean anything specific, or is it more of a buzz word? | 15 | What are the criteria for something to be SOTA or not? Who decides this? Can it be tested? Do I have enough body text yet? | 2023-12-09T13:12:43 | https://www.reddit.com/r/LocalLLaMA/comments/18edc07/i_see_the_term_state_of_the_art_thrown_around_a/ | Misha_Vozduh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18edc07 | false | null | t3_18edc07 | /r/LocalLLaMA/comments/18edc07/i_see_the_term_state_of_the_art_thrown_around_a/ | false | false | self | 15 | null |
Freeing VRAM with ollama | 6 | Hi chaps,
I'm loving ollama, but am curious if there's any way to free/unload a model after it has been loaded -- otherwise I'm stuck in a state with 90% of my VRAM utilized. Do I need to shut down the systemd service?
Would be nice if there was a way to do it from the CLI... | 2023-12-09T13:09:10 | https://www.reddit.com/r/LocalLLaMA/comments/18ed9tr/freeing_vram_with_ollama/ | tlonny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ed9tr | false | null | t3_18ed9tr | /r/LocalLLaMA/comments/18ed9tr/freeing_vram_with_ollama/ | false | false | self | 6 | null |
webui similar to chatgpt | 1 | Does anyone know of a web UI that is similar to and as easy to use as the ChatGPT interface?
Which model gives the fastest results (like turbo)? | 9 | Hi,
I'm having a hard time finding a comparison of model performance based on completion speed. Do you know of any benchmarks?
Many thanks to the community. | 2023-12-09T12:20:44 | https://www.reddit.com/r/LocalLLaMA/comments/18ech8a/which_model_will_gives_the_fastest_results_like/ | rmsisme | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ech8a | false | null | t3_18ech8a | /r/LocalLLaMA/comments/18ech8a/which_model_will_gives_the_fastest_results_like/ | false | false | self | 9 | null |
Absolutely exceeded our expectations and we got so many compliments. | 1 | 2023-12-09T12:00:34 | Cute_Hovercraft_1436 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18ec5qp | false | null | t3_18ec5qp | /r/LocalLLaMA/comments/18ec5qp/absolutely_exceeded_our_expectations_and_we_got/ | false | false | 1 | {'enabled': True, 'images': [{'id': '6e6ALjheGVcciVcm8ik4xG_vS_MXDTSpUK8M-uv0xy8', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/mid1zu59e95c1.png?width=108&crop=smart&auto=webp&s=8a45134565181ddc19c5abbe1b67100a863715e3', 'width': 108}, {'height': 193, 'url': 'https://preview.redd.it/mid1zu59e95c1.png?width=216&crop=smart&auto=webp&s=05910d3e15114c8408ac6257c97fc9588f932e93', 'width': 216}, {'height': 287, 'url': 'https://preview.redd.it/mid1zu59e95c1.png?width=320&crop=smart&auto=webp&s=2517469941d8e8202f4d63244a5655dcc777def5', 'width': 320}], 'source': {'height': 418, 'url': 'https://preview.redd.it/mid1zu59e95c1.png?auto=webp&s=112c840d74af8654ad5eed2cc44e50ff14991d1f', 'width': 466}, 'variants': {}}]} | |||
Tigerbot 70B v4 beats gpt4 | 1 | So the newest version of tigerbot 70B supposedly beats a 1.8T model.
My question is: how reliable are these benchmarks? How do we know that these new LLMs aren't being trained directly on the benchmark questions?
I'm new to LLMs and I'd like to understand how much I should trust these numbers.
Cheers.
[https://huggingface.co/spaces/gsaivinay/open\_llm\_leaderboard](https://huggingface.co/spaces/gsaivinay/open_llm_leaderboard) | 2023-12-09T11:58:39 | https://www.reddit.com/r/LocalLLaMA/comments/18ec4lt/tigerbot_70b_v4_beats_gpt4/ | Mr_Hills | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ec4lt | false | null | t3_18ec4lt | /r/LocalLLaMA/comments/18ec4lt/tigerbot_70b_v4_beats_gpt4/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=108&crop=smart&auto=webp&s=90f4efd1c1314faf5b0cd1c5eeb8d2835fe4a3ba', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=216&crop=smart&auto=webp&s=062336de177b9f9f124a98f4e03b59faa819be1d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=320&crop=smart&auto=webp&s=01e9aeccc0d76fee4ecb359bfb6238dc2afd87f0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=640&crop=smart&auto=webp&s=723b41bc410ff59454cf7a9a3db4eced43d4868f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=960&crop=smart&auto=webp&s=2c43ffe72f7f32d522c3e85c1aa8e25d6f213b38', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=1080&crop=smart&auto=webp&s=6a7d63eae44237642a3f95e586436bf6efe5dd70', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?auto=webp&s=51349e0b781d1c9e91535974e09833705c76a3cc', 'width': 1200}, 'variants': {}}]} |
Managing multiple models, their settings and use cases | 3 | I've been exploring local LLMs for a week now and it's been really fun so far. However, I'm struggling with managing all the details and settings of the different models, the modes (chat, instruct, etc) and so forth.
How are you managing all of that knowledge?
Ideally, there'd be a website with all the models, their intended use cases ("for story writing", "for math") and which settings to use (prompt format, sampler settings, etc). I know most of that is on HF, but perhaps there is a better option?
drop some knowledge on me: ollama vs koboldcpp or other tools for power users on mac | 3 | I've been playing with small models using my M1 mini 16GB, but I have an M3 macbook coming with lots of memory, so I need to take things more seriously.
I'm currently using ollama + litellm to easily use local models with an OpenAI-like API, but I'm feeling like it's **too** simple.
I don't necessarily need a UI for chatting, but I feel like the chain of tools (litellm -> ollama -> llama.cpp?) obfuscates a lot to simplify it for the end user and I'm missing out on knowledge.
So drop some knowledge on me if you can:
1. I've been using GGUF formats only, I know there are others, should I care about them?
2. Beyond configuring advanced parameters and chatting - are there UIs that you can recommend for developers/power users (scriptable, configurable, support functions, etc)
3. Any other suggestions for learning? If I have a mac with lots of memory, can I explore fine-tuning? tool recommendations?
4. Should I learn to run llama.cpp directly? (considering the rest of my questions)
Thanks! | 2023-12-09T11:46:13 | https://www.reddit.com/r/LocalLLaMA/comments/18ebxtd/drop_some_knowledge_on_me_ollama_vs_koboldcpp_or/ | Madd0g | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ebxtd | false | null | t3_18ebxtd | /r/LocalLLaMA/comments/18ebxtd/drop_some_knowledge_on_me_ollama_vs_koboldcpp_or/ | false | false | self | 3 | null |
Google just shipped libggml from llama-cpp into its Android AICore | 189 | 2023-12-09T11:00:51 | https://twitter.com/tarantulae/status/1733263857617895558 | perone | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 18eb8u0 | false | {'oembed': {'author_name': 'Christian S. Perone', 'author_url': 'https://twitter.com/tarantulae', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">These are the libs being shipped no in AICore:<a href="https://t.co/WECzbjrMDg">https://t.co/WECzbjrMDg</a><br>libfile_defrag_jni.so<br>libtartarus_core.so<br>libtflite_llm_jni.so<br>libulm1b_ggml_jni.so<br><br>These are some interesting source filenames in the libtflite_llm_jni library: <a href="https://t.co/4gcILOPP4h">pic.twitter.com/4gcILOPP4h</a></p>— Christian S. Perone (@tarantulae) <a href="https://twitter.com/tarantulae/status/1733263857617895558?ref_src=twsrc%5Etfw">December 8, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/tarantulae/status/1733263857617895558', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_18eb8u0 | /r/LocalLLaMA/comments/18eb8u0/google_just_shipped_libggml_from_llamacpp_into/ | false | false | 189 | {'enabled': False, 'images': [{'id': 'hUctKOEb-yrHHGYq4bfrgAvFeR_29YPQhi4z8aEeH1M', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/cUN9xPcGNTYijMcsLHhsotuypTv4t9IT5GOJ14cG7Yg.jpg?width=108&crop=smart&auto=webp&s=2755b23cc52c959ba1380cb52a99dd5bd8b1fee3', 'width': 108}], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/cUN9xPcGNTYijMcsLHhsotuypTv4t9IT5GOJ14cG7Yg.jpg?auto=webp&s=d101aa09beacda2bbe289b391a0fbbd31fe07712', 'width': 140}, 'variants': {}}]} | ||
Best local model for story-writing? | 10 | Hey everyone!
I’m new to LLM’s and feel overwhelmed with all the options out there. I’m looking for a tool that I can use for writing stories preferably uncensored. I have a 3090 and plenty of space on my SSD. What do you think is the best model for me?
Thanks in advance! | 2023-12-09T10:16:04 | https://www.reddit.com/r/LocalLLaMA/comments/18eakw0/best_local_model_for_storywriting/ | FullOfVitaminC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18eakw0 | false | null | t3_18eakw0 | /r/LocalLLaMA/comments/18eakw0/best_local_model_for_storywriting/ | false | false | self | 10 | null |
What cpu, ram, motherboard, power supply are people using to support their 2x 4090s or 2x A6000s? | 22 | I can see a high quality motherboard being important, but I have a hard time seeing any reason to get above economy levels CPU and RAM when this machine will essentially be a GPU server.
Pretty much the same for the power supply: hit the spec you need and move on.
Any words of warning? I can spend the money on quality, but I don't like overspending. | 2023-12-09T10:00:06 | https://www.reddit.com/r/LocalLLaMA/comments/18eac3x/what_cpu_ram_motherboard_power_supply_are_people/ | learning_hedonism | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18eac3x | false | null | t3_18eac3x | /r/LocalLLaMA/comments/18eac3x/what_cpu_ram_motherboard_power_supply_are_people/ | false | false | self | 22 | null |
QuIP at 4 or 8 bits? | 3 | From my understanding, the newly implemented QuIP quantization greatly reduces info loss at 2 bits, making it a lot more viable.
The same method should also improve 4 and 8 bit quantizations as far as I can see, although the improvement may not be as dramatic. Has anyone tried this? | 2023-12-09T09:39:56 | https://www.reddit.com/r/LocalLLaMA/comments/18ea27c/quip_at_4_or_8_bits/ | EvokerTCG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ea27c | false | null | t3_18ea27c | /r/LocalLLaMA/comments/18ea27c/quip_at_4_or_8_bits/ | false | false | self | 3 | null |
EU regulation implications? | 33 | So which way does the recently announced EU regulation go? There was a rumour that they'd have exemptions for open source (pressured by the Mistral team or something), but what actually got decided in the end?
The Biden executive order almost seemed to have the opposite intent, i.e. it was more hostile toward the idea of open weights (where guardrails could be bypassed).
text generation web ui with a rented GPU on runpod ? | 1 | [removed] | 2023-12-09T09:04:16 | https://www.reddit.com/r/LocalLLaMA/comments/18e9kmz/text_generation_web_ui_with_a_rented_gpu_on_runpod/ | Independent-Bill-770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18e9kmz | false | null | t3_18e9kmz | /r/LocalLLaMA/comments/18e9kmz/text_generation_web_ui_with_a_rented_gpu_on_runpod/ | false | false | self | 1 | null |
Can we do similarity search with different embedding models? | 3 | I mean, let's say I create a chroma_db vector store using OpenAI embeddings, but I want to use an offline embedding model like MiniLM4all or such for similarity search. Will it cause any problems, or even work to begin with?
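One way to see the core problem behind this question: embeddings from different models live in different vector spaces, and often have different dimensions outright (OpenAI's `text-embedding-ada-002` returns 1536-dimensional vectors, while all-MiniLM models return 384-dimensional ones). A toy sketch:

```python
import math

def cosine(a, b):
    """Cosine similarity; refuses vectors of different dimensionality."""
    if len(a) != len(b):
        raise ValueError("embeddings come from different spaces")
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

doc_vec = [0.1] * 1536    # ada-002-sized document embedding (illustrative values)
query_vec = [0.2] * 384   # MiniLM-sized query embedding (illustrative values)

try:
    cosine(doc_vec, query_vec)
    mismatch_error = None
except ValueError as e:
    mismatch_error = str(e)

print(mismatch_error)  # the similarity can't even be computed
```

Even if two models happened to share a dimension count, their coordinates would not be comparable, so the index and the queries should come from the same embedding model.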
Prompt Engineering for 7b LLMs | 171 | After testing Mistral-Instruct and Zephyr, I decided to start figuring out more ways to integrate them in my workflow. Running some unit tests now, and noting down my observations over multiple iterations. Sharing my current list:
- give clean and specific instructions (in a direct, authoritative tone -- like "do this" or "do that")
- Don't use ChatGPT to tune your prompts for small models. In my experience, ChatGPT is pretty bad at prompt tuning, if you're going to use those prompts with small models. It usually makes prompts WORSE by adding in lots of purple prose.
- For example, you may ask ChatGPT to improve and fine-tune a prompt "Write an essay on the merits and demerits of the Marshall Plan. Use clear, concise bullet-points, relevant section headings and write with an analytical style". ChatGPT will "fine-tune" that prompt into "Delve into the intricacies of the Marshall Plan, contemplating its many merits and demerits. The use of a succinct explanatory style is preferable, and it is important to utilize bullet-points. Explore the usage of section headings spotlighting relevant information, and an engaging, analytical literary style".
- First of all, this "tuned" prompt is absolutely ridiculous. It adds no real semantic value to the prompt, just makes it sound like a 19th century dandy. When it comes to small models, this prompt will almost always make things worse. For one thing, smaller models are more "literal" than larger ones, and can't generalize as well. Smaller models will take this prompt and run with it -- the completion will contain words from the prompt, like "delve", "intricacies" and other such Chat-GPTisms, and you're likely to get a very sterile, ChatGPT-esque response, minus ChatGPT's reasoning ability -- the worst of all possible worlds.
- Be careful with adjectives -- you can ask for a concise explanation, and the model may throw the word "concise" into its explanation. Smaller models tend to do this a lot (although GPT3.5 is also guilty of it) -- words from your instruction bleed into the completion, whether they're relevant or not.
- use delimiters to indicate distinct parts of the text -- for example, use backticks or brackets etc. Backticks are great for marking out code, because that's what most websites etc do.
- using markdown to indicate different parts of the prompt - I've found this to be the most reliable way to segregate different sections of the prompt.
- markdown tends to be the preferred format for training these things, so it makes sense that it's effective in inference as well.
- use structured input and output formats: JSON, markdown, HTML etc
- constrain output using JSON schema
- Use few-shot examples in different niches/use cases. Try to avoid few-shot examples that are in the same niche/use case as the question you're trying to answer; this leads to answers that "overfit".
- Make the model "explain" its reasoning process through output tokens (chain-of-thought). This is especially useful in prompts where you're asking the language model to do some reasoning. Chain-of-thought is basically procedural reasoning. To teach chain-of-thought to the model you need to either give it few-shot prompts, or fine-tune it. Few-shot is obviously cheaper in the short run, but fine-tune for production. Few-shot is also a way to rein in base models and reduce their randomness. (Note: ChatGPT seems to do chain-of-thought all on its own, and has evidently been extensively fine-tuned for it.)
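A sketch of assembling such a few-shot chain-of-thought prompt (the shots are invented, and, per the overfitting point above, deliberately from a different niche than the final question):

```python
# Invented few-shot examples: each one spells out the reasoning steps
# before the answer, so the model imitates the procedure, not just the format.
SHOTS = [
    {
        "question": "A shop sells pens at $2 each. How much do 4 pens cost?",
        "reasoning": "Each pen costs $2, so 4 pens cost 4 * 2 = $8.",
        "answer": "$8",
    },
    {
        "question": "Anna is 3 years older than Ben, who is 10. How old is Anna?",
        "reasoning": "Ben is 10, and Anna is 3 years older, so Anna is 10 + 3 = 13.",
        "answer": "13",
    },
]

def build_cot_prompt(question: str, shots=SHOTS) -> str:
    """Render the shots, then leave the final 'Reasoning:' open so the
    model continues with its own chain of thought."""
    blocks = [
        f"Q: {s['question']}\nReasoning: {s['reasoning']}\nA: {s['answer']}"
        for s in shots
    ]
    blocks.append(f"Q: {question}\nReasoning:")
    return "\n\n".join(blocks)

prompt = build_cot_prompt("A recipe needs 3 eggs per cake. How many eggs for 5 cakes?")
```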
- break down your prompt into steps, and "teach" the model each step through few-shot examples. Assume that, given enough repetitions, it'll always make a mistake eventually -- this will help you set up the necessary guardrails.
- use "description before completion" methods: get the LLM to describe the entities in the text before it gives an answer. ChatGPT is also able to do this natively, and must have been fine-tuned for it. For smaller models, this means your prompt must include a chain-of-thought (or you can use a chain of prompts) to first extract the entities of the question, then describe the entities, then answer the question. Be careful about this, sometimes the model will put chunks of the description into its response, so run multiple unit tests.
- Small models are extremely good at interpolation, and extremely bad at extrapolation (when they haven't been given a context). Direct the model towards the extrapolation you want, give it enough context.
- at the same time, you can't always be sure which parts of the context the LLM will use, so only give it essential context -- dumping multiple unstructured paragraphs of context into the prompt may not give you what you want.
- This is the main issue I've had with RAG + small models -- it doesn't always know which parts of the context are most relevant. I'm experimenting with using "chain-of-density" to compress the RAG context before putting it into the LLM prompt... let's see how that works out.
- Test each prompt multiple times. Sometimes the model won't falter for 20 generations, and then when you run an integration test it'll spit out something you never expected.
- E.g.: you prompt the model to generate a description based on a given JSON string. Let's say the JSON string has the keys "name", "gender", "location", "occupation" and "hobbies".
- Sometimes, the LLM will respond with a perfectly valid description "John is a designer based in New York City, and he enjoys sports and video games".
- Other times, you'll get "The object may be described as having the name "John", the gender "Male", the location "New York City", the occupation "designer", and the hobbies "sports" and "video games"."
- At one level, this is perfectly "logical" -- the model is technically following instructions, but it's also not an output you want to pass on to the next prompt in your chain. You may want to run verifications for all prompt outputs, but this also adds to the cost/time.
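That failure mode is cheap to catch with a repetition harness. A sketch (the `flaky_generate` stub here is entirely made up and stands in for a real model call -- swap in your actual client):

```python
import json
import random

def flaky_generate(prompt: str, seed: int) -> str:
    """Stand-in for a real model call (hypothetical): most generations are
    fine, but occasionally the model 'narrates' the JSON instead."""
    rng = random.Random(seed)
    if rng.random() < 0.2:
        return 'The object may be described as having the name "John".'
    return '{"name": "John", "occupation": "designer"}'

def run_prompt_suite(prompt: str, generate, n: int = 20) -> list:
    """Run the same prompt n times and collect the outputs that fail to
    parse -- the regressions you only see with enough repetitions."""
    failures = []
    for seed in range(n):
        out = generate(prompt, seed)
        try:
            json.loads(out)
        except json.JSONDecodeError:
            failures.append(out)
    return failures

bad = run_prompt_suite("Describe the object as JSON.", flaky_generate, n=50)
```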
- Completion ranking and reasoning: I haven't yet come across an open source model that can do this well, and am still using OpenAI API for this.
- Things like ranking 3 completions based on their "relevance", "clarity" or "coherence" -- these are complex tasks, and, for the time being, they seem out of reach for even the largest models I've tried (Llama 2, Falcon 180B).
- The only way to do this may be to get a ranking dataset out of GPT-4 and then fine-tune an open-source model on it. I haven't worked this out yet, just going to use GPT-4 for now.
- Use stories. This is a great way to control the output of a base model. I was trying to get a base model to give me JSON output, and I wrote a short story of a guy named Bob who makes an API endpoint for XYZ use case, tests it, and the HTTP response body contains the JSON string .... (and let the model complete it, putting a "}" as the stop sequence).
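A sketch of the mechanics of that trick, with the model call stubbed out (the story and the completion text are hypothetical). Most inference APIs accept a `stop` parameter that truncates server-side; this shows the client-side equivalent, plus re-attaching the opening `{` from the story:

```python
import json

STORY = (
    "Bob built an API endpoint for user profiles and tested it with curl. "
    "The HTTP response body contained the JSON string: {"
)

def fake_base_model(prompt: str) -> str:
    # Stand-in for a base-model completion; a real base model keeps
    # narrating the story after the JSON, which is why the stop matters.
    return '"name": "John", "location": "NYC"} Bob smiled and shipped it.'

def complete_with_stop(prompt: str, generate, stop: str = "}") -> str:
    """Truncate the completion at the stop sequence (kept inclusive here,
    since we want the closing brace in the output)."""
    text = generate(prompt)
    idx = text.find(stop)
    return text if idx == -1 else text[: idx + len(stop)]

# Re-attach the "{" that ended the story to get a parseable object.
json_text = "{" + complete_with_stop(STORY, fake_base_model)
```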
- GBNF grammars to constrain output. Just found out about this, testing it out now.
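For anyone else trying this out, here's roughly what a tiny GBNF grammar looks like, forcing a one-field JSON object. This is a sketch written from memory -- compare it against the grammar examples shipped with llama.cpp before relying on it, and treat the llama-cpp-python names in the comment as assumptions:

```python
# Minimal GBNF grammar (llama.cpp's constrained-sampling format).
# Sketch only -- check against llama.cpp's bundled json.gbnf.
NAME_OBJECT_GRAMMAR = r'''
root   ::= "{" ws "\"name\"" ws ":" ws string ws "}"
string ::= "\"" [a-zA-Z ]* "\""
ws     ::= [ \t\n]*
'''

# With llama-cpp-python it would be wired up roughly like this:
#   from llama_cpp import Llama, LlamaGrammar
#   llm = Llama(model_path="model.gguf")
#   out = llm("Describe Bob as JSON: ",
#             grammar=LlamaGrammar.from_string(NAME_OBJECT_GRAMMAR))
```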
Some of these may sound pretty obvious, but I like having a list that I can run through whenever I'm troubleshooting a prompt. | 2023-12-09T08:27:17 | https://www.reddit.com/r/LocalLLaMA/comments/18e929k/prompt_engineering_for_7b_llms/ | noellarkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18e929k | false | null | t3_18e929k | /r/LocalLLaMA/comments/18e929k/prompt_engineering_for_7b_llms/ | false | false | self | 171 | null |
My open-source & cross-platform on-device LLMs app is now available on TestFlight & GitHub. Feedback & testers welcome. | 2 | 2023-12-09T08:18:43 | https://v.redd.it/gddtagfr985c1 | BrutalCoding | /r/LocalLLaMA/comments/18e8xvp/my_opensource_crossplatform_ondevice_llms_app_is/ | 1970-01-01T00:00:00 | 0 | {} | 18e8xvp | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/gddtagfr985c1/DASHPlaylist.mpd?a=1704788327%2COGEwMjYxNjhmMmY3NTU2MGFjNTBkZjRjNThiZDgwNTAwMzE0Yzk1ZDgxM2JlYjZiOWE4NThhZDY2ZjM3NTYzOA%3D%3D&v=1&f=sd', 'duration': 60, 'fallback_url': 'https://v.redd.it/gddtagfr985c1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/gddtagfr985c1/HLSPlaylist.m3u8?a=1704788327%2CNTFlM2VmODg1YzRlNDJmNjAzYzQyMzZmNmI1MDg3YTY4NWYwODljNWM4ZTA5MTJiODJmNmFmYmI3NzMyOWYwYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gddtagfr985c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_18e8xvp | /r/LocalLLaMA/comments/18e8xvp/my_opensource_crossplatform_ondevice_llms_app_is/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'YTBuM2Mxd29hODVjMfOEWshS1WnjFZjn1v9QghJf7iEf5PNr_9qbhZO3ZL3q', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/YTBuM2Mxd29hODVjMfOEWshS1WnjFZjn1v9QghJf7iEf5PNr_9qbhZO3ZL3q.png?width=108&crop=smart&format=pjpg&auto=webp&s=ee468a8d607c6546e1bab84078792e004c57ffc5', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/YTBuM2Mxd29hODVjMfOEWshS1WnjFZjn1v9QghJf7iEf5PNr_9qbhZO3ZL3q.png?width=216&crop=smart&format=pjpg&auto=webp&s=94351f5a13b80e98ffdc539806e5e410b49cca72', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/YTBuM2Mxd29hODVjMfOEWshS1WnjFZjn1v9QghJf7iEf5PNr_9qbhZO3ZL3q.png?width=320&crop=smart&format=pjpg&auto=webp&s=9d0361b753ac43ef8bd8ceec4ecac0ebf73a025b', 'width': 320}, {'height': 640, 'url': 
'https://external-preview.redd.it/YTBuM2Mxd29hODVjMfOEWshS1WnjFZjn1v9QghJf7iEf5PNr_9qbhZO3ZL3q.png?width=640&crop=smart&format=pjpg&auto=webp&s=fc3ab634665189da24255388aec844ac33bdd33b', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/YTBuM2Mxd29hODVjMfOEWshS1WnjFZjn1v9QghJf7iEf5PNr_9qbhZO3ZL3q.png?width=960&crop=smart&format=pjpg&auto=webp&s=ffa23e2365d198c1df25f86fd788adcac23676ab', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/YTBuM2Mxd29hODVjMfOEWshS1WnjFZjn1v9QghJf7iEf5PNr_9qbhZO3ZL3q.png?width=1080&crop=smart&format=pjpg&auto=webp&s=318502c2eedd1958f4ddc4d60597e453cc571afe', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/YTBuM2Mxd29hODVjMfOEWshS1WnjFZjn1v9QghJf7iEf5PNr_9qbhZO3ZL3q.png?format=pjpg&auto=webp&s=4f3d17bcce443252968a570ea60288902175a50f', 'width': 2160}, 'variants': {}}]} | ||
mixtral-8x7b API for testing here. | 38 | Thanks, [fireworks.ai](https://fireworks.ai).
I'm posting this for anyone who wants to test it.
[https://app.fireworks.ai/models/fireworks/mixtral-8x7b](https://app.fireworks.ai/models/fireworks/mixtral-8x7b).
Sister test:
There are three sisters in a room. Anna is reading a book. Alice is playing a game of chess. What is the third sister, Amanda, doing?
Answer:
>Amanda is playing a game of chess with Alice.
>
>Explanation:
>
>The third sister, Amanda, is playing a game of chess with Alice. This is because the sentence “Anna is reading a book” implies that Anna is not playing a game of chess, and the sentence “Alice is playing a game of chess” implies that Alice is playing a game of chess. Therefore, the third sister, Amanda, must be playing a game of chess with Alice.
​ | 2023-12-09T07:47:11 | https://www.reddit.com/r/LocalLLaMA/comments/18e8huk/mixtral8x7b_api_for_testing_here/ | Narrow-Painter-9623 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18e8huk | false | null | t3_18e8huk | /r/LocalLLaMA/comments/18e8huk/mixtral8x7b_api_for_testing_here/ | false | false | self | 38 | null |
Mamba is a new neural network architecture proposed by Albert Gu and Tri Dao. (nice overview) | 1 | [removed] | 2023-12-09T06:32:37 | https://www.reddit.com/r/LocalLLaMA/comments/18e7fbj/mamba_is_a_new_neural_network_architecture/ | Easy_Butterfly2125 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18e7fbj | false | null | t3_18e7fbj | /r/LocalLLaMA/comments/18e7fbj/mamba_is_a_new_neural_network_architecture/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'LSsQihx365XIDDP3ttQkd0kHWsiRZW0cns5Vh1ALTrc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/E4dc0PpHSE8-RICrBaEL4raVv9n7XpVfN9oyMv4Zyig.jpg?width=108&crop=smart&auto=webp&s=88ae2183560b9470c1148637bac044e3b76cc69a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/E4dc0PpHSE8-RICrBaEL4raVv9n7XpVfN9oyMv4Zyig.jpg?width=216&crop=smart&auto=webp&s=3bc66b05072492988c7a9bce4ab041505d2e8501', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/E4dc0PpHSE8-RICrBaEL4raVv9n7XpVfN9oyMv4Zyig.jpg?width=320&crop=smart&auto=webp&s=a7c4069ec5d65d36362a41b9dec01c39f4006545', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/E4dc0PpHSE8-RICrBaEL4raVv9n7XpVfN9oyMv4Zyig.jpg?auto=webp&s=9ba04d1b1e4bd5a7deebf4a3b57cead27d46da8a', 'width': 480}, 'variants': {}}]} |
Mixtral appreciation post | 18 | Look, look, I know, this sounds like a low effort post, but I really like the name Mixtral and I think everyone should too
I think the boys and gals nailed it on this one. It's really clever and I like it a lot. That's all. | 2023-12-09T04:45:44 | https://www.reddit.com/r/LocalLLaMA/comments/18e5q4s/mixtral_appreciation_post/ | Dany0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18e5q4s | false | null | t3_18e5q4s | /r/LocalLLaMA/comments/18e5q4s/mixtral_appreciation_post/ | false | false | self | 18 | null |
Is Meta using llama2 in any customer-facing production applications? | 6 | I'm just wondering if it is known if Meta itself is dog-fooding llama2 anywhere? I am working with a customer and finding latency a very difficult obstacle to economically overcome. So I wanted to see if I could find some llama2 success stories and thought I would start with Meta itself. | 2023-12-09T04:34:37 | https://www.reddit.com/r/LocalLLaMA/comments/18e5jbq/is_meta_using_llama2_in_any_customerfacing/ | codefinger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18e5jbq | false | null | t3_18e5jbq | /r/LocalLLaMA/comments/18e5jbq/is_meta_using_llama2_in_any_customerfacing/ | false | false | self | 6 | null |
Llamacpp | 5 | How many layers are there in total in the Llama 7B model, and how many layers can I offload to the GPU if I have 32GB of VRAM? | 2023-12-09T04:21:44 | https://www.reddit.com/r/LocalLLaMA/comments/18e5bka/llamacpp/ | happy_dreamer10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18e5bka | false | null | t3_18e5bka | /r/LocalLLaMA/comments/18e5bka/llamacpp/ | false | false | self | 5 | null |
Have any of you had disturbing or unsettling dreams after using the LLM? | 1 | Title. I was wondering because since using it I've had more unsettling dreams. I've been giving the AI a bunch of nonsense and watching as it hallucinates some sense into it. However, I think reading this stuff is causing me to have really weird dreams. The last dream I remember, kids were dislocating their jaws and posting pictures of it on social media. They called it a "smiley". I gave the AI a break and the dreams have toned down as well. I was just wondering if anyone else has experienced this? | 2023-12-09T03:27:54 | https://www.reddit.com/r/LocalLLaMA/comments/18e4e4b/has_any_of_you_had_disturbing_or_unsettling/ | Red_Redditor_Reddit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18e4e4b | false | null | t3_18e4e4b | /r/LocalLLaMA/comments/18e4e4b/has_any_of_you_had_disturbing_or_unsettling/ | false | false | default | 1 | null |
HF transformers-compatible Mixtral model | 29 | 2023-12-09T03:26:39 | https://huggingface.co/DiscoResearch/mixtral-7b-8expert | ninjasaid13 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 18e4dcp | false | null | t3_18e4dcp | /r/LocalLLaMA/comments/18e4dcp/hf_transformerscompatible_mixtral_model/ | false | false | 29 | {'enabled': False, 'images': [{'id': 'iH74TfiZ0eTAjkuoJWlFRPPA0sWd87IMuqE2xz7DcMM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fizOw-MpmWqkoA5UcFxFDS8Kq0mlxSKIU5zfmv3qSyA.jpg?width=108&crop=smart&auto=webp&s=c34833c0e94fea32e94b1276136bc559040fd1e0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fizOw-MpmWqkoA5UcFxFDS8Kq0mlxSKIU5zfmv3qSyA.jpg?width=216&crop=smart&auto=webp&s=e1ca9b01cb9ab8ca328c627dd81105428f4455ff', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fizOw-MpmWqkoA5UcFxFDS8Kq0mlxSKIU5zfmv3qSyA.jpg?width=320&crop=smart&auto=webp&s=6940ac43b442dc3f798f52050401f1fd9def6543', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fizOw-MpmWqkoA5UcFxFDS8Kq0mlxSKIU5zfmv3qSyA.jpg?width=640&crop=smart&auto=webp&s=388989d591a7fff82b7afd662a17f8eefe229e23', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fizOw-MpmWqkoA5UcFxFDS8Kq0mlxSKIU5zfmv3qSyA.jpg?width=960&crop=smart&auto=webp&s=94116a78c221c4f1538c2f3e3ace6d36be08d63c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fizOw-MpmWqkoA5UcFxFDS8Kq0mlxSKIU5zfmv3qSyA.jpg?width=1080&crop=smart&auto=webp&s=17f82ae8a24f8862c337d3ab10e64ca6947f527f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fizOw-MpmWqkoA5UcFxFDS8Kq0mlxSKIU5zfmv3qSyA.jpg?auto=webp&s=8d2448b94fb9e12ed0ca3398e51123886e810d58', 'width': 1200}, 'variants': {}}]} | ||
Prompting LLMs to constrain output | 8 | I just started using GBNF, this has been a lifesaver - - can't believe I was using so many tokens asking the model to respond in a specific format :) I'm using TextSynth for small models, and the API endpoint has support for GBNF.
I've also noticed that using markdown in my prompt (e.g. "## TASK DESCRIPTION") for headings and subheadings, instead of just newlines, seems to make instruct models respond better. It seems that most models, even the 7b ones, work well with markdown.
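To make that concrete, a sketch of the kind of markdown-sectioned template I mean (the section names and example values are arbitrary):

```python
# A markdown-sectioned prompt template; headings separate the parts of
# the prompt so the model can tell instruction, context and format apart.
TEMPLATE = """\
## TASK
{task}

## CONTEXT
{context}

## OUTPUT FORMAT
Respond with a single JSON object and nothing else.
"""

prompt = TEMPLATE.format(
    task="Summarize the customer complaint in one sentence.",
    context="Order #1234 arrived two weeks late and the box was damaged.",
)
```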
What methods are you guys using to constrain output/improve completion reliability in a single prompt (ie without chaining a "verification" prompt)? | 2023-12-09T03:21:02 | https://www.reddit.com/r/LocalLLaMA/comments/18e49oj/prompting_llms_to_constrain_output/ | noellarkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18e49oj | false | null | t3_18e49oj | /r/LocalLLaMA/comments/18e49oj/prompting_llms_to_constrain_output/ | false | false | self | 8 | null |
Podcaster: turn your static site RSS feed into a podcast (XTTS-v2 + Modal.com) | 8 | 2023-12-09T03:10:51 | https://duarteocarmo.com/blog/you-can-now-listen-to-this-blog | iLikePortugueseTarts | duarteocarmo.com | 1970-01-01T00:00:00 | 0 | {} | 18e432d | false | null | t3_18e432d | /r/LocalLLaMA/comments/18e432d/podcaster_turn_your_static_site_rss_feed_into_a/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'JilLN92RSpbFUQILrN1kqEhNeqS72rz6KVI4Fc45upM', 'resolutions': [{'height': 105, 'url': 'https://external-preview.redd.it/amhyGNEiNAv6ipvgvcrV0foGmPMKSKdJ-gyDgK_CfiQ.jpg?width=108&crop=smart&auto=webp&s=d3aa7c7d981f4a1f078cd94af96ff8276deffaa0', 'width': 108}, {'height': 211, 'url': 'https://external-preview.redd.it/amhyGNEiNAv6ipvgvcrV0foGmPMKSKdJ-gyDgK_CfiQ.jpg?width=216&crop=smart&auto=webp&s=893f23b18f1488813d1514732fdd9a46e8e9b27a', 'width': 216}, {'height': 312, 'url': 'https://external-preview.redd.it/amhyGNEiNAv6ipvgvcrV0foGmPMKSKdJ-gyDgK_CfiQ.jpg?width=320&crop=smart&auto=webp&s=021470657c993e9f3c8b37149ef2e745eb74e98c', 'width': 320}], 'source': {'height': 352, 'url': 'https://external-preview.redd.it/amhyGNEiNAv6ipvgvcrV0foGmPMKSKdJ-gyDgK_CfiQ.jpg?auto=webp&s=a5905ad5a871c2deedf82633be3c5827b22a7d73', 'width': 360}, 'variants': {}}]} | ||
Ryzen 7 7800X3D VS. i9-13900K for self-hosted AI? | 3 | Hi, I currently have a Ryzen 7 7800X3D, x670e motherboard, 4090, and 32gb of DDR5.
Would an i9-13900K, z790 motherboard, my 4090, and 32gb of DDR5 be better for self-hosted AI?
I'm using OobaBooga.
I know the GPU matters most. But I'm wondering if the i9/z790 are better since the Ryzen is mostly for gaming?
I plan on learning how to train AI so it can assist me with my copywriting day job. Nothing major or ground breaking.
thanks | 2023-12-09T02:59:32 | https://www.reddit.com/r/LocalLLaMA/comments/18e3vhx/ryzen_7_7800x3d_vs_i913900k_for_selfhosted_ai/ | kaszebe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18e3vhx | false | null | t3_18e3vhx | /r/LocalLLaMA/comments/18e3vhx/ryzen_7_7800x3d_vs_i913900k_for_selfhosted_ai/ | false | false | self | 3 | null |
QLORA vs. Traditional Fine-Tuning | 1 | [removed] | 2023-12-09T02:44:34 | https://www.reddit.com/r/LocalLLaMA/comments/18e3ltr/qlora_vs_traditional_finetuning/ | Demo-Demologist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18e3ltr | false | null | t3_18e3ltr | /r/LocalLLaMA/comments/18e3ltr/qlora_vs_traditional_finetuning/ | false | false | self | 1 | null |
Wat | 61 | 2023-12-09T01:59:26 | xadiant | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18e2ria | false | null | t3_18e2ria | /r/LocalLLaMA/comments/18e2ria/wat/ | false | false | 61 | {'enabled': True, 'images': [{'id': 'NQOGoQW4C3roWK7tmoc-2CDGTPB_hZoipAfQPnjZmgU', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/6ql585x0f65c1.png?width=108&crop=smart&auto=webp&s=19aee5b684f3838fe6ec3eed55e326d6f98430dd', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/6ql585x0f65c1.png?width=216&crop=smart&auto=webp&s=18127d1cb5788c7c79b4fb9135c74a447dbb9e4e', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/6ql585x0f65c1.png?width=320&crop=smart&auto=webp&s=6197e18737c98b165ed7c14fa5e82f6e0cdcb9cf', 'width': 320}, {'height': 336, 'url': 'https://preview.redd.it/6ql585x0f65c1.png?width=640&crop=smart&auto=webp&s=d6861a764cfa6c55079f10b4652828f511333e96', 'width': 640}, {'height': 504, 'url': 'https://preview.redd.it/6ql585x0f65c1.png?width=960&crop=smart&auto=webp&s=f53653d7a7f07c1433af9d7a706d3614f7329cab', 'width': 960}, {'height': 568, 'url': 'https://preview.redd.it/6ql585x0f65c1.png?width=1080&crop=smart&auto=webp&s=6df3cbe4539ca43232facaa49c8894760c88bcbc', 'width': 1080}], 'source': {'height': 568, 'url': 'https://preview.redd.it/6ql585x0f65c1.png?auto=webp&s=0ff62057ecda21a8269625481447ad00a53623a0', 'width': 1080}, 'variants': {}}]} | |||
Mistral 7b MOE on Huggingface | 15 | It can be found here: https://huggingface.co/DiscoResearch/mixtral-7b-8expert | 2023-12-09T01:40:47 | https://www.reddit.com/r/LocalLLaMA/comments/18e2eud/mistral_7b_moe_on_huggingface/ | nutcustard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18e2eud | false | null | t3_18e2eud | /r/LocalLLaMA/comments/18e2eud/mistral_7b_moe_on_huggingface/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'iH74TfiZ0eTAjkuoJWlFRPPA0sWd87IMuqE2xz7DcMM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fizOw-MpmWqkoA5UcFxFDS8Kq0mlxSKIU5zfmv3qSyA.jpg?width=108&crop=smart&auto=webp&s=c34833c0e94fea32e94b1276136bc559040fd1e0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fizOw-MpmWqkoA5UcFxFDS8Kq0mlxSKIU5zfmv3qSyA.jpg?width=216&crop=smart&auto=webp&s=e1ca9b01cb9ab8ca328c627dd81105428f4455ff', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fizOw-MpmWqkoA5UcFxFDS8Kq0mlxSKIU5zfmv3qSyA.jpg?width=320&crop=smart&auto=webp&s=6940ac43b442dc3f798f52050401f1fd9def6543', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fizOw-MpmWqkoA5UcFxFDS8Kq0mlxSKIU5zfmv3qSyA.jpg?width=640&crop=smart&auto=webp&s=388989d591a7fff82b7afd662a17f8eefe229e23', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fizOw-MpmWqkoA5UcFxFDS8Kq0mlxSKIU5zfmv3qSyA.jpg?width=960&crop=smart&auto=webp&s=94116a78c221c4f1538c2f3e3ace6d36be08d63c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fizOw-MpmWqkoA5UcFxFDS8Kq0mlxSKIU5zfmv3qSyA.jpg?width=1080&crop=smart&auto=webp&s=17f82ae8a24f8862c337d3ab10e64ca6947f527f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fizOw-MpmWqkoA5UcFxFDS8Kq0mlxSKIU5zfmv3qSyA.jpg?auto=webp&s=8d2448b94fb9e12ed0ca3398e51123886e810d58', 'width': 1200}, 'variants': {}}]} |
European Union agrees to regulate potentially harmful effects of artificial intelligence | 1 | 2023-12-09T01:25:51 | https://www.cnn.com/2023/12/08/business/european-union-regulate-ai/index.html | throwaway_ghast | cnn.com | 1970-01-01T00:00:00 | 0 | {} | 18e24fg | false | null | t3_18e24fg | /r/LocalLLaMA/comments/18e24fg/european_union_agrees_to_regulate_potentially/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'HBjXE5Os9ehypurFKDIMNi53p24uYiGuyWMm5jojnXs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cJ_ToQAsuYdtR6wX_5Slbn2yYe9O-5vWTXiD-_vHZDs.jpg?width=108&crop=smart&auto=webp&s=0454a74c754c8a9cc6d9ee630eda174cc0b2899d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cJ_ToQAsuYdtR6wX_5Slbn2yYe9O-5vWTXiD-_vHZDs.jpg?width=216&crop=smart&auto=webp&s=e0e9536577d86cffc01d04e50e63840a6e3d8cd2', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cJ_ToQAsuYdtR6wX_5Slbn2yYe9O-5vWTXiD-_vHZDs.jpg?width=320&crop=smart&auto=webp&s=4dcd75270538782e8764eb9ee1d85dabdab5ad85', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cJ_ToQAsuYdtR6wX_5Slbn2yYe9O-5vWTXiD-_vHZDs.jpg?width=640&crop=smart&auto=webp&s=0324cf71749987e7e40e9f23a4ef4c2f76a75cc3', 'width': 640}], 'source': {'height': 450, 'url': 'https://external-preview.redd.it/cJ_ToQAsuYdtR6wX_5Slbn2yYe9O-5vWTXiD-_vHZDs.jpg?auto=webp&s=edabb46f3b9c42d57c455af8e8e683cdb2371a9e', 'width': 800}, 'variants': {}}]} | ||
First results of the Mixtral 7b MoEx8: Humaneval 33.54% vs previous Mistral 7b 30.5% | 40 | [Author](https://x.com/abacaj/status/1733292527904592350?s=46&t=dUCVh9akIWxxNUIkrDJwJg). Whole [results.](https://gist.github.com/abacaj/17b987286a265f63f22f03b0323d1399) | 2023-12-09T01:15:02 | https://www.reddit.com/r/LocalLLaMA/comments/18e1wwa/first_results_of_the_mixtral_7b_moex8_humaneval/ | Shir_man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18e1wwa | false | null | t3_18e1wwa | /r/LocalLLaMA/comments/18e1wwa/first_results_of_the_mixtral_7b_moex8_humaneval/ | false | false | self | 40 | {'enabled': False, 'images': [{'id': 'i7lTrP4taO8voJySMDU8lBrpnPDUZaDdsQ8Kx2QunBo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7SMIjjyaVadThhLVUtL5L4sfOQ6Mj4naXAVhrZDIbDo.jpg?width=108&crop=smart&auto=webp&s=4b62d8ae8225607ec179b2d1cc48010e12379db7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7SMIjjyaVadThhLVUtL5L4sfOQ6Mj4naXAVhrZDIbDo.jpg?width=216&crop=smart&auto=webp&s=d34cbb47f8cef4be18da8bf5860147201f9f0f2c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7SMIjjyaVadThhLVUtL5L4sfOQ6Mj4naXAVhrZDIbDo.jpg?width=320&crop=smart&auto=webp&s=bd72ba0664ffef554a5605bad0b5e36f42b690bd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7SMIjjyaVadThhLVUtL5L4sfOQ6Mj4naXAVhrZDIbDo.jpg?width=640&crop=smart&auto=webp&s=ebfa9f82f246668e3d4e899d02a698ecffd2d411', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7SMIjjyaVadThhLVUtL5L4sfOQ6Mj4naXAVhrZDIbDo.jpg?width=960&crop=smart&auto=webp&s=0dd68ab6237c513b87bdeb324b52b3998e363d04', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7SMIjjyaVadThhLVUtL5L4sfOQ6Mj4naXAVhrZDIbDo.jpg?width=1080&crop=smart&auto=webp&s=9646858be12ca0460d2dd29562b049840c09b357', 'width': 1080}], 'source': {'height': 1024, 'url': 
'https://external-preview.redd.it/7SMIjjyaVadThhLVUtL5L4sfOQ6Mj4naXAVhrZDIbDo.jpg?auto=webp&s=6598b3673fc752a69eac1e860c0af772ffaa6b98', 'width': 2048}, 'variants': {}}]} |
near fp16 performance with 2 bit quantitization? | 1 | [deleted] | 2023-12-09T01:05:54 | https://x.com/tsengalb99/status/1733222467953422702?s=20 | ninjasaid13 | x.com | 1970-01-01T00:00:00 | 0 | {} | 18e1qna | false | null | t3_18e1qna | /r/LocalLLaMA/comments/18e1qna/near_fp16_performance_with_2_bit_quantitization/ | false | false | default | 1 | null |
Mixtral-8x7B-32K-q8_0-gguf 49.6GB on HuggingFace | 17 | 2023-12-09T00:51:48 | https://huggingface.co/lxe/mixtral-8x7b-32kseqlen-GGUF | chibop1 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 18e1gej | false | null | t3_18e1gej | /r/LocalLLaMA/comments/18e1gej/mixtral8x7b32kq8_0gguf_496gb_on_huggingface/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'ZE6aaAMADWgj1PEdrTmSFWpK6gIEiAZiPtJpPuhlAbE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cyCZeSX4UYWtj6rELwXGG95hynzmKNbckashIZrAkV8.jpg?width=108&crop=smart&auto=webp&s=4052e1a70eaab0e0831d9fd360ab50d19bc93338', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/cyCZeSX4UYWtj6rELwXGG95hynzmKNbckashIZrAkV8.jpg?width=216&crop=smart&auto=webp&s=c947d19698aced05f7fdb72195e069f51b142d16', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/cyCZeSX4UYWtj6rELwXGG95hynzmKNbckashIZrAkV8.jpg?width=320&crop=smart&auto=webp&s=fa25fc6092d6d945cd1d543d76941a5f1c36800e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/cyCZeSX4UYWtj6rELwXGG95hynzmKNbckashIZrAkV8.jpg?width=640&crop=smart&auto=webp&s=3728f2f335c3416233f9e24b8fcd40514f25be81', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/cyCZeSX4UYWtj6rELwXGG95hynzmKNbckashIZrAkV8.jpg?width=960&crop=smart&auto=webp&s=79b96ec77586ff6028a95ed2a409854059d0512e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/cyCZeSX4UYWtj6rELwXGG95hynzmKNbckashIZrAkV8.jpg?width=1080&crop=smart&auto=webp&s=99001e3f5832345adbcd92e13e61d4f09a63aa32', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/cyCZeSX4UYWtj6rELwXGG95hynzmKNbckashIZrAkV8.jpg?auto=webp&s=0ac3f65d3e74fac5ad6408d137a1d795a5363401', 'width': 1200}, 'variants': {}}]} | ||
Can LLMs self-fine-tune on narrow tasks? Reinforcement Learning with Self-Feedback | 15 | In many cases, evaluation of an LLM output is far easier than generation of that output. In such cases you can get the LLM to rank its own responses, thus replacing the human in RLHF. There's been experiments with RLAIF, but these are with *better* AI models replacing the human ranking step. I'm talking about the same model ranking itself.
Take for example the fact that GPT-4 is terrible at drawing bounding boxes around objects. (Try it: get it to use its advanced data anaysis to draw a bounding box around an object. It will suck). If you however gave GPT-4 a list of bounding box attempts, it would very likely be able to pick out the winner. This basically means GPT-4 should have the ability to fine-tune itself with reinforcement learning, does it not? | 2023-12-09T00:50:34 | https://www.reddit.com/r/LocalLLaMA/comments/18e1fjg/can_llms_selffinetune_on_narrow_tasks/ | xXReggieXx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18e1fjg | false | null | t3_18e1fjg | /r/LocalLLaMA/comments/18e1fjg/can_llms_selffinetune_on_narrow_tasks/ | false | false | self | 15 | null |
Llama.cpp Mixtral Progress | 1 | I know you're all as eager as I am to run the new Mixtral model.
The place to stay up to date is actually the linked GitHub issue. You'll be the first to know when it's runnable. *cough* u/the-bloke
Coding LLM Discussion - December 2023 | 4 | I've run some local llms for uncensored fun, but I use GPT4 for Javascript and Laravel web dev and it works great (3.5 is good too but I don't mind paying the $20 for 4 because it pays for itself for its convenience). For those of you running local LLMs for coding, why and where do you see it progressing from the commercial llms? | 2023-12-09T00:45:36 | https://www.reddit.com/r/LocalLLaMA/comments/18e1byk/coding_llm_discussion_december_2023/ | gelatinous_pellicle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18e1byk | false | null | t3_18e1byk | /r/LocalLLaMA/comments/18e1byk/coding_llm_discussion_december_2023/ | false | false | self | 4 | null |
Systematic Reviews: Harnessing Local Large Language Models | 2 | I am looking into using a local LLM to help with a systematic review. I have a large Zotero library of multiple PDFs organized by topic. I think ideally this could be triggered by a Zotero plugin.
I found this paper that mentions the possibility but I’m curious if I could get some guidance on actually implementing it or if existing frameworks can be coordinated together to make this a reality.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10478591/
I’m fairly new at building software so any guidance is appreciated! | 2023-12-09T00:42:58 | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10478591/ | lincolnrules | ncbi.nlm.nih.gov | 1970-01-01T00:00:00 | 0 | {} | 18e1a6w | false | null | t3_18e1a6w | /r/LocalLLaMA/comments/18e1a6w/systematic_reviews_harnessing_local_large/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'eFMa-V1uKt_0fEaeLyZatpb8QwmDWeu0cd35rQ-BtGk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/mdgLNJC-IP-f3g0_kXaAuK-s1qKbr7wKnPDYGvTl8Dw.jpg?width=108&crop=smart&auto=webp&s=6a41c00b160bc781c73dede4c59738bd43199f11', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/mdgLNJC-IP-f3g0_kXaAuK-s1qKbr7wKnPDYGvTl8Dw.jpg?width=216&crop=smart&auto=webp&s=444160d968e46d46c0b917c8f318acf235fd8395', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/mdgLNJC-IP-f3g0_kXaAuK-s1qKbr7wKnPDYGvTl8Dw.jpg?width=320&crop=smart&auto=webp&s=4998518907d78f8cb6d956a0ccb3fb5ce76de53b', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/mdgLNJC-IP-f3g0_kXaAuK-s1qKbr7wKnPDYGvTl8Dw.jpg?width=640&crop=smart&auto=webp&s=0d3f6a434e93957613c0707b557f783f3416946a', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/mdgLNJC-IP-f3g0_kXaAuK-s1qKbr7wKnPDYGvTl8Dw.jpg?width=960&crop=smart&auto=webp&s=1a7360dd7641771390047811a6bd73433a5a94be', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/mdgLNJC-IP-f3g0_kXaAuK-s1qKbr7wKnPDYGvTl8Dw.jpg?width=1080&crop=smart&auto=webp&s=8b4ca5c073f451fd9c2d8ccbbd4317e75acf5115', 'width': 1080}], 'source': {'height': 836, 'url': 'https://external-preview.redd.it/mdgLNJC-IP-f3g0_kXaAuK-s1qKbr7wKnPDYGvTl8Dw.jpg?auto=webp&s=12103263d39335ab43e7564d31cdb5a839110fd7', 'width': 1600}, 'variants': {}}]} |