title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Are there any public datasets that accept submissions? | 5 | I spend a lot of time conversing with GPT-4, and I was wondering if there's a place that I could submit these conversations with the goal of improving local models. I was wondering if anyone here is aware of something like what I've described.
Thanks | 2023-08-16T05:29:18 | https://www.reddit.com/r/LocalLLaMA/comments/15sgkmk/are_there_any_public_datasets_that_accept/ | CrunchyTesticle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15sgkmk | false | null | t3_15sgkmk | /r/LocalLLaMA/comments/15sgkmk/are_there_any_public_datasets_that_accept/ | false | false | self | 5 | null |
What modules should I target when training using LoRA? | 18 | I'm using \`axolotl\`, I can target these modules:
* gate\_proj
* down\_proj
* up\_proj
* q\_proj
* v\_proj
* k\_proj
* o\_proj
Can someone help me understand what each of these modules do? What should i target when training?
Oogaboga GUI targets only Q and V by default. Axolotl targets all by default.
​ | 2023-08-16T05:22:23 | https://www.reddit.com/r/LocalLLaMA/comments/15sgg4m/what_modules_should_i_target_when_training_using/ | Tasty-Lobster-8915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15sgg4m | false | null | t3_15sgg4m | /r/LocalLLaMA/comments/15sgg4m/what_modules_should_i_target_when_training_using/ | false | false | self | 18 | null |
some questions about roleplay models | 18 | Hello,
Thank you for taking the time to read.
I've been out of the loop for a few months, and would now like to get back into the scene. I've heard llama2 is here, and with it come new roleplay-focused models.
I would like to know which is best for my use-case, which is mostly talking with a little bit of emoting. It should be uncensored if possible - I understand that lower models tend to over-censor, have experienced such myself, and am not keen on doing so again if I can avoid it.
I would also like to know some good settings to put it on (for example, temperature and all of those sorts of things).
My computer is equipped with an rtx2060 and 32 gigabytes of ram. I have a modern processor, too, if that matters.
I am also willing to rent a GPU on the cloud if required, and have done so before with runpod, but have now forgotten how to set it up (what I know is probably out of date anyway).
Thanks again in advance!
​ | 2023-08-16T05:08:19 | https://www.reddit.com/r/LocalLLaMA/comments/15sg67k/some_questions_about_roleplay_models/ | Several_Extreme3886 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15sg67k | false | null | t3_15sg67k | /r/LocalLLaMA/comments/15sg67k/some_questions_about_roleplay_models/ | false | false | self | 18 | null |
Hardware recommendations for running stable diffusion and 65b ggml at the same time. | 1 | I don't want to spend any money on new hardware. I have a i7-11700kf, 96gb ddr3 3200 ram and rtx 3090 and an rtx 3080. I want to run both stable diffusion and llama 65b model at the same time. The rtx 3090 would be use for stable diffusion. Would the 3080 + the CPU and ram combo be sufficient to run the 65b ggml model? | 2023-08-16T04:52:41 | https://www.reddit.com/r/LocalLLaMA/comments/15sfv22/hardware_recommendations_for_running_stable/ | Erdeem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15sfv22 | false | null | t3_15sfv22 | /r/LocalLLaMA/comments/15sfv22/hardware_recommendations_for_running_stable/ | false | false | self | 1 | null |
A simple beautiful UI for Wizard Vicuna 13B Uncensored | 0 | 2023-08-16T03:00:29 | CurryPuff99 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15sdlu2 | false | null | t3_15sdlu2 | /r/LocalLLaMA/comments/15sdlu2/a_simple_beautiful_ui_for_wizard_vicuna_13b/ | false | false | nsfw | 0 | {'enabled': True, 'images': [{'id': '3tDUXPzJlyxm3urC8LPQtGk797Zye6doVUCcML9dngM', 'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=108&crop=smart&auto=webp&s=d5b5b94787b4d5c694a455b26d9fdc97575cde56', 'width': 108}, {'height': 269, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=216&crop=smart&auto=webp&s=8f54e2f5c17e47c045639f0dc85a9961c4a42f7c', 'width': 216}, {'height': 399, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=320&crop=smart&auto=webp&s=0d4a62d843b7c3af30f14d863b05c35b71227385', 'width': 320}, {'height': 799, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=640&crop=smart&auto=webp&s=539a99f002a82fa6d3f5f1fd5fa9bcd38bb95686', 'width': 640}], 'source': {'height': 934, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?auto=webp&s=c522a3071b91b4f0ed79a760d58bdb02eba0f397', 'width': 748}, 'variants': {'nsfw': {'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=869ae8acc09e4237e4dbb4283d1f76e7b1574de5', 'width': 108}, {'height': 269, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=ad0f90d2261cd88ac404269ad31c7564f838e33d', 'width': 216}, {'height': 399, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=e129e160827a94516c16c39650bf477245a55af5', 'width': 320}, {'height': 799, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=f6d27ee412e7203375a32246603e838fb8543818', 'width': 640}], 'source': {'height': 934, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?blur=40&format=pjpg&auto=webp&s=51abb985ad7a26d994a9cf7a2b7c93f0ebf73d7d', 'width': 748}}, 'obfuscated': {'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=869ae8acc09e4237e4dbb4283d1f76e7b1574de5', 'width': 108}, {'height': 269, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=ad0f90d2261cd88ac404269ad31c7564f838e33d', 'width': 216}, {'height': 399, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=e129e160827a94516c16c39650bf477245a55af5', 'width': 320}, {'height': 799, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=f6d27ee412e7203375a32246603e838fb8543818', 'width': 640}], 'source': {'height': 934, 'url': 'https://preview.redd.it/yzqm0n8t0eib1.jpg?blur=40&format=pjpg&auto=webp&s=51abb985ad7a26d994a9cf7a2b7c93f0ebf73d7d', 'width': 748}}}}]} | ||
[How to architect this] Routing when mixing question answering from tables as well as documents | 8 | We are performing question answering from pdfs that have some pages with tables and some with standard text.
For the table pages, we are following an approach of detecting and converting them to csv, and dealing with them using a csv agent (in langchain). The QA from the rest of the document remains standard Retrieval Augmented Generation.
this brings up the problem of routing between the table QA code and RAG code. Some challenges here are:
- how to make the routing logic know when the table is being queried: As an example assume table 1 contains rainfall for different states. if the question simply asks "how much is the rainfall in Delaware", it seems that this is reliant on correctly detecting the caption/name of the table. otherwise there is no way to know if Table 1. is required. Understanding from the text what table 1 is ( as opposed to directly from the caption ) seems to be a more challenging problem. What does the community think of this logic?
- is there any example code that the community can point us to of a similar routing application, that we can build off of. | 2023-08-16T02:30:34 | https://www.reddit.com/r/LocalLLaMA/comments/15scym0/how_to_architect_this_routing_when_mixing/ | nlpllama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15scym0 | false | null | t3_15scym0 | /r/LocalLLaMA/comments/15scym0/how_to_architect_this_routing_when_mixing/ | false | false | self | 8 | null |
DDR5 Worth fhe upgrade? | 11 | Hey guys, My pc is at a crossroads and I'm debating whether to upgrade to DDR5 ram now or keep it affordable and stick with DDR4. My use case would be to run 65b via ggml. Anyone running something similar on ddr5, and if so what sort of t/s are you getting? | 2023-08-16T02:06:17 | https://www.reddit.com/r/LocalLLaMA/comments/15scfeq/ddr5_worth_fhe_upgrade/ | idkanythingabout | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15scfeq | false | null | t3_15scfeq | /r/LocalLLaMA/comments/15scfeq/ddr5_worth_fhe_upgrade/ | false | false | self | 11 | null |
WizardLM demos | 18 | 2023-08-16T00:32:04 | https://twitter.com/Gradio/status/1691400707264630784?t=vQgwiwyYC_v8gjcrSh70BA&s=19 | ninjasaid13 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 15sa8b7 | false | {'oembed': {'author_name': 'Gradio', 'author_url': 'https://twitter.com/Gradio', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Do you know that🔥WizardMath-70B🔥outperforms (on the GSM8K) most celebrated closed-source LLMs out there, like ChatGPT3.5, Claude Instant1 and PaLM2 540B?🤯<a href="https://twitter.com/Gradio?ref_src=twsrc%5Etfw">@Gradio</a> demos here- <br>🧙\u200d♀️70B: http://47.103.63.15:50083/<br>🧙13B: http://47.103.63.15:50082/<br>🧙\u200d♂️7B: http://47.103.63.15:50080/</p>— Gradio (@Gradio) <a href="https://twitter.com/Gradio/status/1691400707264630784?ref_src=twsrc%5Etfw">August 15, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Gradio/status/1691400707264630784', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_15sa8b7 | /r/LocalLLaMA/comments/15sa8b7/wizardlm_demos/ | false | false | 18 | {'enabled': False, 'images': [{'id': 'MVhNJgwwBMwnAXYEvelHj0E5qpCGg1DsopbJaox1870', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/fLaVrxbMV3kVJxR_Ri3hTqlG-zjODnRj37P-OZywU0g.jpg?width=108&crop=smart&auto=webp&s=2aba868c56edbe8c16255ff7ab4844c009aa712e', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/fLaVrxbMV3kVJxR_Ri3hTqlG-zjODnRj37P-OZywU0g.jpg?auto=webp&s=58f4ae8997af29642e2d5b91b7e03ee04898fc07', 'width': 140}, 'variants': {}}]} | ||
The Bloke - Appreciation | 2 | [removed] | 2023-08-16T00:14:01 | https://www.reddit.com/r/LocalLLaMA/comments/15s9tgk/the_bloke_appreciation/ | LoadingALIAS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s9tgk | false | null | t3_15s9tgk | /r/LocalLLaMA/comments/15s9tgk/the_bloke_appreciation/ | false | false | self | 2 | null |
What’s the bare minimum specs needed for running ai? | 9 | I’ve heard 2 3090s or a 4090 but can you get away with other? | 2023-08-15T23:13:48 | https://www.reddit.com/r/LocalLLaMA/comments/15s8crb/whats_the_bare_minimum_specs_needed_for_running_ai/ | PremoSaysThings | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s8crb | false | null | t3_15s8crb | /r/LocalLLaMA/comments/15s8crb/whats_the_bare_minimum_specs_needed_for_running_ai/ | false | false | self | 9 | null |
Potential fix to the repetitiveness problem of Llama 2 | 5 | I, like a few others, have noticed that the output quality of Llama 2 tends to degrade as the context size increase; becoming increasingly prone to repetitiveness of words and sentences. Sometimes it derails the entire conversation and makes continuing the chat impossible.
As this happened again, I wondered what would happen if I simply raised the repetition penalty. Using Koboldcpp, I increased it to 1.19. That seemed to have fixed the problem for the most part! It now continues the conversation as it should. I can sometimes still see its tendencies to almost repeat itself, but it shakes out of it again if you hit retry.
The model I used was chronos-hermes-13b q5 using the Ace of spades sampler.
Maybe this isn't a fix that works universally (maybe I just got lucky with that particular chat), but it's maybe worth exploring further? I was almost about to give up on Llama 2 for this reason, but seems like I can give it a new chance. | 2023-08-15T22:43:49 | https://www.reddit.com/r/LocalLLaMA/comments/15s7ln1/potential_fix_to_the_repetitiveness_problem_of/ | Xeruthos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s7ln1 | false | null | t3_15s7ln1 | /r/LocalLLaMA/comments/15s7ln1/potential_fix_to_the_repetitiveness_problem_of/ | false | false | self | 5 | null |
On demand vs Reserved instances for LLM fine-tuning | 1 |
Hi everyone,
I am looking at different options to get access to GPUs to train an LLM on an enterprise use case for a customer. They are on a specific Cloud provider with associated credits, so I have no ability to go outside, like Runpod.
We are exchanging on whether we should go for On-demand or Reserved instances. This client is a Fortune 500, so they could reserve it but it might not be the best choice.
It seems to me that with LoRA / QLoRA, and so on, we might be able to fine-tune a Llama 2 with one or two GPUs on-demand, but I am unsure yet.
So our main evaluation criteria are:
\- Price
\- Availability of GPUs, aka we don't want to waste too much time to get started
In your experience, have you had difficulties to get access to GPUs at a good price? How many of you had to go with reserved instances, and if so what made you choose this option?
I would love to have your feedback!
| 2023-08-15T22:20:02 | https://www.reddit.com/r/LocalLLaMA/comments/15s70gf/on_demand_vs_reserved_instances_for_llm_finetuning/ | Separate-Still3770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s70gf | false | null | t3_15s70gf | /r/LocalLLaMA/comments/15s70gf/on_demand_vs_reserved_instances_for_llm_finetuning/ | false | false | self | 1 | null |
I released model EverythingLM 3B. | 42 | Today I released a new model named EverythingLM 3B. It is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) fine-tuned on [EverythingLM Data(ShareGPT format, more cleaned)](https://huggingface.co/datasets/totally-not-an-llm/everything-sharegptformat-morecleaned) for 1 epochs.
The prompt format is:
### HUMAN:
{prompt}
### RESPONSE:
<leave a newline for the model to answer>
Note: Don't expect this model to be good, I was just starting out to fine-tune(In fact, this is my first fine-tune). So don't roast me please!
u/The-Bloke maybe, or I could quantize it.
Just in case, this model answers 1 + 1 correctly, proof:
https://preview.redd.it/u7y3hddcjcib1.png?width=2862&format=png&auto=webp&s=568ca6ec596180e3fd3b6cd11a9c12138928fac3 | 2023-08-15T21:59:31 | https://www.reddit.com/r/LocalLLaMA/comments/15s6hlx/i_released_model_everythinglm_3b/ | bot-333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s6hlx | false | null | t3_15s6hlx | /r/LocalLLaMA/comments/15s6hlx/i_released_model_everythinglm_3b/ | false | false | 42 | {'enabled': False, 'images': [{'id': 'mBBZscMdj4CoFq4VFvDECA6EMTJ--wbO5dOXowR_oEs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pqZEAStqXNPuhwZNyfyzq23_uzdmnnkgm52iuYTq61o.jpg?width=108&crop=smart&auto=webp&s=186c5f83508e7d36bb7dacc4426d56a61a7673fc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pqZEAStqXNPuhwZNyfyzq23_uzdmnnkgm52iuYTq61o.jpg?width=216&crop=smart&auto=webp&s=f0ddd4a71a55469d6f7481674e070ec06209ddc7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pqZEAStqXNPuhwZNyfyzq23_uzdmnnkgm52iuYTq61o.jpg?width=320&crop=smart&auto=webp&s=264bc4f7287addb6aaed8eed9d8b67bfe6dabe42', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pqZEAStqXNPuhwZNyfyzq23_uzdmnnkgm52iuYTq61o.jpg?width=640&crop=smart&auto=webp&s=f7afa4c72eb2891b027dc596504197a6a2166345', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pqZEAStqXNPuhwZNyfyzq23_uzdmnnkgm52iuYTq61o.jpg?width=960&crop=smart&auto=webp&s=dccdce5c03bb85ef13e56a7990e312e1cdbec655', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pqZEAStqXNPuhwZNyfyzq23_uzdmnnkgm52iuYTq61o.jpg?width=1080&crop=smart&auto=webp&s=1caee76c14bdae7320690c4eca64cbeb0f7aaedb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pqZEAStqXNPuhwZNyfyzq23_uzdmnnkgm52iuYTq61o.jpg?auto=webp&s=a018a51ab688c24aef33735838091e8aca98c7dd', 'width': 1200}, 'variants': {}}]} | |
How do AMD GPUs perform on llama.cpp? | 23 | How do a GPU such as the AMD 7900 XTX perform when using it to offload layers while using llama.cpp? How many tokens per seconds could I expect on 13b and 70b models?
I would plan on using a Ryzen 7 5800x/7800x and 64GB of RAM | 2023-08-15T21:37:38 | https://www.reddit.com/r/LocalLLaMA/comments/15s5x5o/how_do_amd_gpus_perform_on_llamacpp/ | joseluissaorin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s5x5o | false | null | t3_15s5x5o | /r/LocalLLaMA/comments/15s5x5o/how_do_amd_gpus_perform_on_llamacpp/ | false | false | self | 23 | null |
Training | 1 | [removed] | 2023-08-15T21:36:47 | https://www.reddit.com/r/LocalLLaMA/comments/15s5wdx/training/ | Ulan0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s5wdx | false | null | t3_15s5wdx | /r/LocalLLaMA/comments/15s5wdx/training/ | false | false | self | 1 | null |
Inconsistent Token Speed on Llama 2 Chat 70b with Exllama | 7 | Hi everyone,
I've been experiencing some issues with inconsistent token speed while using Llama 2 Chat 70b GPTQ 4 Bits 128g Act Order True with Exllama. My setup includes using the oobabooga WebUI.
**Platform/Device:** Linux
**Exllama Settings:** 4096 Alpha 1 compress\_pos\_emb 1 (Default).
​
Has anyone else encountered this problem? I would appreciate any advice or insights from those who have faced similar challenges.
Thanks in advance!
​
https://preview.redd.it/sj6z0y3a4cib1.png?width=612&format=png&auto=webp&s=deb07c72e912c2c9f58243e6d04b3e255c5d7d03 | 2023-08-15T20:35:32 | https://www.reddit.com/r/LocalLLaMA/comments/15s497d/inconsistent_token_speed_on_llama_2_chat_70b_with/ | Used_Carpenter_6674 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s497d | false | null | t3_15s497d | /r/LocalLLaMA/comments/15s497d/inconsistent_token_speed_on_llama_2_chat_70b_with/ | false | false | 7 | null | |
Not Sure if this is the right place to post. Installed Oobabooga and a few loaders from Hugging Face. All responses from AI models are generic and only two or three sentences. | 1 | Any way to make the responses longer or more detailed? No matter how I phrase the question, whatever model I load, it only gives me one or two sentence answers that are vague and non-specific.
If someone could point me in the right direction to begin my search, that'd be great. Thank you!
Thank you! | 2023-08-15T20:28:20 | https://www.reddit.com/r/LocalLLaMA/comments/15s425y/not_sure_if_this_is_the_right_place_to_post/ | InnsmouthConspirator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s425y | false | null | t3_15s425y | /r/LocalLLaMA/comments/15s425y/not_sure_if_this_is_the_right_place_to_post/ | false | false | self | 1 | null |
Can I use my 5700xt to use Llama? | 4 | I wanted to host Llama or maybe Llama 2 7b but i know that if you want it to work for sure you have to have a Nvidia GPU, sadly mine is AMD. Would it still work? If not, how could I make something similar to this work? I don't want to pay for hosting services, and want to use my own API key for the sites that ask them. | 2023-08-15T20:21:20 | https://www.reddit.com/r/LocalLLaMA/comments/15s3vd0/can_i_use_my_5700xt_to_use_llama/ | s-cardi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s3vd0 | false | null | t3_15s3vd0 | /r/LocalLLaMA/comments/15s3vd0/can_i_use_my_5700xt_to_use_llama/ | false | false | self | 4 | null |
HF model suggestions for text multi class classification? (Fine-tuning) | 1 | I have been using the BERTForSequenceClassification , finetune it on my data and use it for sentence classification (have 16 classes). Results aren't that good. Any other model suggestion please? | 2023-08-15T20:08:56 | https://www.reddit.com/r/LocalLLaMA/comments/15s3j8z/hf_model_suggestions_for_text_multi_class/ | Wrong-Pension7258 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s3j8z | false | null | t3_15s3j8z | /r/LocalLLaMA/comments/15s3j8z/hf_model_suggestions_for_text_multi_class/ | false | false | self | 1 | null |
What is Hugging Face / The Hugging Face Version of Llama-2? | 1 | I'm confused about what Hugging Face is. From my research, they seem to sell servers to run LLMs in the cloud and write packages to help run those LLMs. That makes sense, but then there's a specific version of Llama-2 for Hugging Face and many tutorials (for example on quantization) use the Hugging Face version of Llama-2. Is that version most people use?
I run Llama-2 on my own device, so is using the Hugging Face version of Llama-2 still for me? Is it just a fine-tuned version of Llama-2 based on Hugging Face datasets or is it designed to interact with their packages better?
Apologies for a simple question, but can't wrap my head around this. | 2023-08-15T18:52:59 | https://www.reddit.com/r/LocalLLaMA/comments/15s18c0/what_is_hugging_face_the_hugging_face_version_of/ | MasterJaguar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s18c0 | false | null | t3_15s18c0 | /r/LocalLLaMA/comments/15s18c0/what_is_hugging_face_the_hugging_face_version_of/ | false | false | default | 1 | null |
Uploading model to Hugging Face is stuck at hashing. | 1 | The model is only 7GB, and I have great internet connection. Any idea why this could happen?
​
https://preview.redd.it/zpgbnxbnlbib1.png?width=2472&format=png&auto=webp&s=58d7be7c9454ac17a70376732749905b206487ba | 2023-08-15T18:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/15s15za/uploading_model_to_hugging_face_is_stuck_at/ | bot-333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15s15za | false | null | t3_15s15za | /r/LocalLLaMA/comments/15s15za/uploading_model_to_hugging_face_is_stuck_at/ | false | false | 1 | null | |
How to fine-tune llama2 on Mac M2 with 16gb? | 7 | I am new to LLMs. I understand that quantized models of Llama2 can be run on MacBook with optimum use of GPU. I was wondering if it is also possible to fine tune on Mac with custom dataset? Any examples in Python on how to do this would be helpful. | 2023-08-15T17:57:45 | https://www.reddit.com/r/LocalLLaMA/comments/15rzr3y/how_to_finetune_llama2_on_mac_m2_with_16gb/ | sbs1799 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rzr3y | false | null | t3_15rzr3y | /r/LocalLLaMA/comments/15rzr3y/how_to_finetune_llama2_on_mac_m2_with_16gb/ | false | false | self | 7 | null |
Can I train Llama with my own PDF documents? | 14 | Is it possible to train Llama with my own PDF documents to help me with my research? For instance if I upload my documents would it be able to read and answer questions about the information on those PDF documents? I would appreciate any insights. | 2023-08-15T17:42:13 | https://www.reddit.com/r/LocalLLaMA/comments/15rzcat/can_i_train_llama_with_my_own_pdf_documents/ | RomadCV | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rzcat | false | null | t3_15rzcat | /r/LocalLLaMA/comments/15rzcat/can_i_train_llama_with_my_own_pdf_documents/ | false | false | self | 14 | null |
What's the best English based Voice Cloning Model? | 3 | I am exploring a couple of Voice Cloning Models for Text to Speech but haven't had much success.
I have tried --
1. serp-ai bark voice clone - [https://github.com/serp-ai/bark-with-voice-clone/](https://github.com/serp-ai/bark-with-voice-clone/)
2. and a couple of models from TTS like - [https://github.com/coqui-ai/tts](https://github.com/coqui-ai/tts)
Are there any good models which work well for voice cloning with English speakers? | 2023-08-15T16:52:38 | https://www.reddit.com/r/LocalLLaMA/comments/15ry1ga/whats_the_best_english_based_voice_cloning_model/ | apple_pie0306 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ry1ga | false | null | t3_15ry1ga | /r/LocalLLaMA/comments/15ry1ga/whats_the_best_english_based_voice_cloning_model/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'fqOKCqjCruczf3NIIPI7HcJI_cul8Vn8yyZM3AUWYck', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/o6_Kh_eN12_VLM3qiyg-_mHEX9hgZj-BkbkTyK_rdFs.jpg?width=108&crop=smart&auto=webp&s=21f9f1c70c75c5ef484308578865ca69175136e8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/o6_Kh_eN12_VLM3qiyg-_mHEX9hgZj-BkbkTyK_rdFs.jpg?width=216&crop=smart&auto=webp&s=bc0a6891720153b1d0815629e3500c679f2e4691', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/o6_Kh_eN12_VLM3qiyg-_mHEX9hgZj-BkbkTyK_rdFs.jpg?width=320&crop=smart&auto=webp&s=8097fe5cf316d18eb72ebadf5f28253e9ecbbac8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/o6_Kh_eN12_VLM3qiyg-_mHEX9hgZj-BkbkTyK_rdFs.jpg?width=640&crop=smart&auto=webp&s=92ee1c80ab6d5823c49b0255a21f377f4c7d2510', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/o6_Kh_eN12_VLM3qiyg-_mHEX9hgZj-BkbkTyK_rdFs.jpg?width=960&crop=smart&auto=webp&s=158624a0f6d8a1f016f447d9ccb7b4956650ad51', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/o6_Kh_eN12_VLM3qiyg-_mHEX9hgZj-BkbkTyK_rdFs.jpg?width=1080&crop=smart&auto=webp&s=2797e593d56fe8d304ed138e8b9b2cc32bdf70e1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/o6_Kh_eN12_VLM3qiyg-_mHEX9hgZj-BkbkTyK_rdFs.jpg?auto=webp&s=8944118d20423dc089215cfa21dc867f69443ad2', 'width': 1200}, 'variants': {}}]} |
Open-Orca-Platypus is out! a 13b that surpasses llama65b!? | 290 | Today we bring the heat again!
​
We're releasing OpenOrca-Platypus2-13B, or as we call it affectionately among the team: OrcaPlaty(or Orctypus)
[https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B)
​
We have another chart-topper ready and out the gates.
This time we place above all 13Bs, as well as above llama1-65b!
We're placing between llama-65b and Llama2-70B-chat on the HuggingFace leaderboard now.
​
This release is a merge of our OpenOrcaxOpenChat Preview2 and Platypus2, making a model that is more than the sum of its parts.
​
We have the model running unquantized on fast GPUs for you to play with now in your browser too.
Go check it out!
[https://huggingface.co/spaces/Open-Orca/OpenOrca-Platypus2-13B](https://huggingface.co/spaces/Open-Orca/OpenOrca-Platypus2-13B)Check out the paperhttps://huggingface.co/papers/2308.07317
​
​
This is thanks to our partnership with the amazing Platypus team.
Cole Hunter, Ariel Lee, and Nataniel Ruiz have come with plenty of enthusiasm and great ideas, and we have more in store working with them!
​ | 2023-08-15T16:35:45 | https://www.reddit.com/r/LocalLLaMA/comments/15rxlrq/openorcaplatypus_is_out_a_13b_that_surpasses/ | Alignment-Lab-AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rxlrq | false | null | t3_15rxlrq | /r/LocalLLaMA/comments/15rxlrq/openorcaplatypus_is_out_a_13b_that_surpasses/ | false | false | self | 290 | {'enabled': False, 'images': [{'id': 'aGSBdTnyxE1IQubwYPrTLMgh1zVjFUQ3ibFoSW3qVqQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=108&crop=smart&auto=webp&s=cb8346f982e7bf547cb3b5b1ce9e24c7a43bf11d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=216&crop=smart&auto=webp&s=f8ae57b9583945352bd86ecaa790703db51aa0e3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=320&crop=smart&auto=webp&s=852fd6132af32a4fac84c4be1044037f8224352a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=640&crop=smart&auto=webp&s=5254450daa351fd62c7526427b75b8aab3df19f7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=960&crop=smart&auto=webp&s=d4792457343f884ee1cb7c762fa9e7df76d6e726', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=1080&crop=smart&auto=webp&s=eaebbdeb8fccd1500ae7e8b9f8c0020d7252c897', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?auto=webp&s=31451acc959bd087644d3fcb0d38cc437d17dec0', 'width': 1200}, 'variants': {}}]} |
Open-Orca-Platypus it out! a 13b surpassing llama 65b!? | 1 | Today we bring the heat again!
​
We're releasing OpenOrca-Platypus2-13B, or as we call it affectionately among the team: OrcaPlaty
[https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B)
​
We have another chart-topper ready and out the gates.
This time we place above all 13Bs, as well as above llama1-65b!
We're placing between llama-65b and Llama2-70B-chat on the HuggingFace leaderboard now.
​
This release is a merge of our OpenOrcaxOpenChat Preview2 and Platypus2, making a model that is more than the sum of its parts.
​
We have the model running unquantized on fast GPUs for you to play with now in your browser too.
Go check it out!
[https://huggingface.co/spaces/Open-Orca/OpenOrca-Platypus2-13B](https://huggingface.co/spaces/Open-Orca/OpenOrca-Platypus2-13B)
Check out the paper!
[https://huggingface.co/papers/2308.07317](https://huggingface.co/papers/2308.07317)
​
​
https://preview.redd.it/nwudhanwvaib1.png?width=829&format=png&auto=webp&s=692cef4321084c17cae3f21c9899c152059b08d9
This is thanks to our partnership with the amazing Platypus team.
Cole Hunter, Ariel Lee, and Nataniel Ruiz have come with plenty of enthusiasm and great ideas, and we have more in store working with them!
​ | 2023-08-15T16:29:33 | https://www.reddit.com/r/LocalLLaMA/comments/15rxg6o/openorcaplatypus_it_out_a_13b_surpassing_llama_65b/ | Alignment-Lab-AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rxg6o | false | null | t3_15rxg6o | /r/LocalLLaMA/comments/15rxg6o/openorcaplatypus_it_out_a_13b_surpassing_llama_65b/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'aGSBdTnyxE1IQubwYPrTLMgh1zVjFUQ3ibFoSW3qVqQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=108&crop=smart&auto=webp&s=cb8346f982e7bf547cb3b5b1ce9e24c7a43bf11d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=216&crop=smart&auto=webp&s=f8ae57b9583945352bd86ecaa790703db51aa0e3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=320&crop=smart&auto=webp&s=852fd6132af32a4fac84c4be1044037f8224352a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=640&crop=smart&auto=webp&s=5254450daa351fd62c7526427b75b8aab3df19f7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=960&crop=smart&auto=webp&s=d4792457343f884ee1cb7c762fa9e7df76d6e726', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?width=1080&crop=smart&auto=webp&s=eaebbdeb8fccd1500ae7e8b9f8c0020d7252c897', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/92pOcI2HbwHEJEZVaDTjd0JmHRXYK3T73r-a1mRqOeg.jpg?auto=webp&s=31451acc959bd087644d3fcb0d38cc437d17dec0', 'width': 1200}, 'variants': {}}]} | |
Experience of setting up LLAMA 2 70B Chat locally | 30 | **Motivation**
I was using the services of *Replicate* for prototyping prompt for an office project. However, their service was frequently returning "CUDA error device-side assert triggered". Therefore, we decided to set up 70B chat server locally. We used Nvidia A40 with 48GB RAM.
**GPU Drivers and Toolkit**
* Install the [Nvidia CUDA 12.2 Toolkit](https://developer.nvidia.com/cuda-toolkit-archive)
* Install the [CUDA Drivers](https://www.nvidia.com/download/index.aspx)
* As specified in the CUDA Toolkit post-installation, add the following to *.bashrc*
​
export PATH=/usr/local/cuda-12.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
* Check the installation
​
$ nvcc --version
$ nvidia-smi
**Setting Environment**
* Create and activate a virtual environment
​
$ sudo apt-get install build-essential libssl-dev libffi-dev python-dev
$ sudo apt-get install -y python3-venv
$ python3 -m venv venv
$ source venv/bin/activate
* Install PyTorch
​
$ pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121
* Install Transformer and dependencies
​
$ pip install transformers==4.31.0
* Setup and compile AutoGPTQ
​
$ git clone https://github.com/PanQiWei/AutoGPTQ
$ cd AutoGPTQ
$ pip3 install .
$ cd ..
* Log into HuggingFace via CLI. You need to request access to LLAMA2 from Meta to download it here.
​
$ git config --global credential.helper store
$ huggingface-cli login
**Sample Code**
# From https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Llama-2-70B-chat-GPTQ", use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
"TheBloke/Llama-2-70B-chat-GPTQ",
inject_fused_attention=False,
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
use_triton=False,
quantize_config=None,
)
user_prompt = "Tell me about AI"
system_prompt = "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."
prompt=f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_prompt} [/INST]"
input_ids = tokenizer([prompt], return_tensors="pt", add_special_tokens=False)["input_ids"].to("cuda")
output = model.generate(inputs=input_ids, max_new_tokens=4096, do_sample=True, top_p=0.95, top_k=50, temperature=0.5, num_beams=1)
output_ids = output[0]
response = tokenizer.decode(output_ids)
response = response[len(prompt):] | 2023-08-15T16:08:02 | https://www.reddit.com/r/LocalLLaMA/comments/15rwwws/experience_of_setting_up_llama_2_70b_chat_locally/ | UncertainLangur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rwwws | false | null | t3_15rwwws | /r/LocalLLaMA/comments/15rwwws/experience_of_setting_up_llama_2_70b_chat_locally/ | false | false | self | 30 | null |
The LLM GPU Buying Guide - August 2023 | 331 | Hi all, here's a buying guide that I made after getting multiple questions on where to start from my network. I used Llama-2 as the guideline for VRAM requirements. Enjoy! Hope it's useful to you and if not, fight me below :)
Also, don't forget to apologize to your local gamers while you snag their GeForce cards.
[The LLM GPU Buying Guide - August 2023](https://preview.redd.it/4nve5pq5oaib1.png?width=1248&format=png&auto=webp&s=d101c229b9a002b10e1606788fa8db8ab4fb2e32) | 2023-08-15T15:48:50 | https://www.reddit.com/r/LocalLLaMA/comments/15rwe7t/the_llm_gpu_buying_guide_august_2023/ | Dependent-Pomelo-853 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rwe7t | false | null | t3_15rwe7t | /r/LocalLLaMA/comments/15rwe7t/the_llm_gpu_buying_guide_august_2023/ | false | false | 331 | null | |
Video breakdown of the 78-page Llama-2 paper (Thumbnail created by Stable Diffusion) | 1 | [removed] | 2023-08-15T15:45:49 | https://www.reddit.com/r/LocalLLaMA/comments/15rwbew/video_breakdown_of_the_78page_llama2_paper/ | HoneyBeeStripe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rwbew | false | {'oembed': {'author_name': 'Deepgram', 'author_url': 'https://www.youtube.com/@Deepgram', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Otb7Xi8Z0Oo?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Llama 1 vs. Llama 2: Meta's Genius Breakthrough in AI Architecture | Research Paper Breakdown"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/Otb7Xi8Z0Oo/hqdefault.jpg', 'thumbnail_width': 480, 'title': "Llama 1 vs. Llama 2: Meta's Genius Breakthrough in AI Architecture | Research Paper Breakdown", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_15rwbew | /r/LocalLLaMA/comments/15rwbew/video_breakdown_of_the_78page_llama2_paper/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'WyY86rkRT0RQDXTSMiAQtsPJZJMRyISOfruPdNtSllI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1gyRHKQWmD_WFyYZOrnJK6t1kzIFQrsajppxJc7cXX4.jpg?width=108&crop=smart&auto=webp&s=237cea185adcf4433a61ade59ec9133b98b6ac8b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/1gyRHKQWmD_WFyYZOrnJK6t1kzIFQrsajppxJc7cXX4.jpg?width=216&crop=smart&auto=webp&s=53ce80867b4ff659b04a6a6d49a4e15ba83c9798', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/1gyRHKQWmD_WFyYZOrnJK6t1kzIFQrsajppxJc7cXX4.jpg?width=320&crop=smart&auto=webp&s=1af91d8828ab9bced4602c8d7163495c1d363bfc', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/1gyRHKQWmD_WFyYZOrnJK6t1kzIFQrsajppxJc7cXX4.jpg?auto=webp&s=bfa01b79a1bc46adbbd9e5eb8e983eb4f5791967', 'width': 480}, 'variants': {}}]} | |
Carl: A Therapist AI Models | 46 | Link to download Llama-2 model: [https://huggingface.co/ajibawa-2023/carl-llama-2-13b](https://huggingface.co/ajibawa-2023/carl-llama-2-13b)
Link for Llama model: carl-33b [https://huggingface.co/ajibawa-2023/carl-33b](https://huggingface.co/ajibawa-2023/carl-33b)
**Links for GPTQ & GGML Models are available on:** [https://huggingface.co/TheBloke](https://huggingface.co/TheBloke)
​
**Carl: A Therapist AI**
Early prevention can help lot of people to avoid depression and other mental illnesses. Therapy is a controversial use case because the outputs and capabilities of LLMs are uncertain. Many people don't have access to the therapist, due to financial, personal, social or other restriction. Here comes Carl: A Therapist AI which can quickly respond to you. It is trained on more than 100000 set of conversations. Each set having 10\~15 conversations between Carl and client. Base data was obtained from u/ [**ZealousidealBlock330**](https://www.reddit.com/user/ZealousidealBlock330/) . This data was further refined and fine tuned. Entire dataset is synthetic. Synthetic data is used because there is little to no therapy conversation data which is publicly available and directly applicable to an LLM. This by means a no replacement to a Doctor or professional therapist. If you are in stress or going through a tough time, please seek professional help or talk to a friend/family member.
**Training:** Entire dataset was trained on Azure 4 x A100 80GB. DeepSpeed codebase was used for training purpose. Models were trained on Llama-1 & 2 by Meta. GGML Quant model (carl-llama-2-13b) was trained by [**Feanix**](https://huggingface.co/Feanix). Extremely thankful to him.
Extremely thankful to the opensource community and u/faldore , Pankaj Mathur , Tom "TheBloke" Jobbins, [/u/kaiokendev](https://www.reddit.com/u/kaiokendev/) for guiding me through this community and through 'X'.
Request u/The-Bloke to do the Quants of other models as time permits. Thank you.This community has been a great learning source and can't thank enough.
If you find mistakes in the model then they are solely mine. I am looking forward to collaborate with like minded people to release many other models.Thank you | 2023-08-15T14:03:24 | https://www.reddit.com/r/LocalLLaMA/comments/15rtnsa/carl_a_therapist_ai_models/ | ajibawa-2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rtnsa | false | null | t3_15rtnsa | /r/LocalLLaMA/comments/15rtnsa/carl_a_therapist_ai_models/ | false | false | self | 46 | {'enabled': False, 'images': [{'id': 'mqTdABXx-yZoJ3Qg0xlRfhEt1V8zn1BYSMO1-nHNvUA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IKTJF4_P5KYMdO0nYsyYtYJl7iQjls_UBMHy7y7QqGY.jpg?width=108&crop=smart&auto=webp&s=24b2dcfa9fd0923dcc75136b7b8390ac299cd302', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IKTJF4_P5KYMdO0nYsyYtYJl7iQjls_UBMHy7y7QqGY.jpg?width=216&crop=smart&auto=webp&s=bf732601fc810eeaea28c5d7c334774fe322b899', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IKTJF4_P5KYMdO0nYsyYtYJl7iQjls_UBMHy7y7QqGY.jpg?width=320&crop=smart&auto=webp&s=e23b6b7133836bad46bb1e63859c1f103148acaf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IKTJF4_P5KYMdO0nYsyYtYJl7iQjls_UBMHy7y7QqGY.jpg?width=640&crop=smart&auto=webp&s=f683c7438f74221452321d24767d22bb7d7d1ec5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IKTJF4_P5KYMdO0nYsyYtYJl7iQjls_UBMHy7y7QqGY.jpg?width=960&crop=smart&auto=webp&s=6a7b3cfe9c85b28466834d35185b6034604d75ce', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IKTJF4_P5KYMdO0nYsyYtYJl7iQjls_UBMHy7y7QqGY.jpg?width=1080&crop=smart&auto=webp&s=61b51b21d284c84074e4dafb816a347962aa65a2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IKTJF4_P5KYMdO0nYsyYtYJl7iQjls_UBMHy7y7QqGY.jpg?auto=webp&s=92ef5c58a10772a125cc04ec1842cc91c9beff17', 'width': 1200}, 'variants': {}}]} |
mapping data and model specifics to GPU/memory/system requirements | 1 | [removed] | 2023-08-15T13:36:31 | https://www.reddit.com/r/LocalLLaMA/comments/15rsyqt/mapping_data_and_model_specifics_to/ | solipcism | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rsyqt | false | null | t3_15rsyqt | /r/LocalLLaMA/comments/15rsyqt/mapping_data_and_model_specifics_to/ | false | false | self | 1 | null |
Any resources on how data and model translate into system requirements? | 1 | [removed] | 2023-08-15T13:34:46 | https://www.reddit.com/r/LocalLLaMA/comments/15rsx5e/any_resources_on_how_data_and_model_translate/ | solipcism | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rsx5e | false | null | t3_15rsx5e | /r/LocalLLaMA/comments/15rsx5e/any_resources_on_how_data_and_model_translate/ | false | false | self | 1 | null |
are there any open (public) api that serve vicuna v1.5 for testing? | 2 | Got stucked with AWS limit, so I'm searching alternative... | 2023-08-15T13:34:09 | https://www.reddit.com/r/LocalLLaMA/comments/15rswkd/are_there_any_open_public_api_that_serve_vicuna/ | Expensive_Breakfast6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rswkd | false | null | t3_15rswkd | /r/LocalLLaMA/comments/15rswkd/are_there_any_open_public_api_that_serve_vicuna/ | false | false | self | 2 | null |
Bark & Whisper min requirements? | 1 | Bark & Whisper min requirements?
1. Min hardware requirements for real time gening
2. Min hardware requirements for relatively good results and short time gening (not real time)
I have R5 5600G, 16GB RAM, NO GPU, is there hope for me? | 2023-08-15T13:18:44 | https://www.reddit.com/r/LocalLLaMA/comments/15rsihe/bark_whisper_min_requirements/ | SakamotoKyu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rsihe | false | null | t3_15rsihe | /r/LocalLLaMA/comments/15rsihe/bark_whisper_min_requirements/ | false | false | self | 1 | null |
AI Town starter kit | 26 | A deployable starter kit for building and customizing your own version of AI town - a virtual town where AI characters live, chat and socialize. | 2023-08-15T12:49:34 | https://x.com/stuffyokodraws/status/1691179412069445632?s=46&t=4Lg1z9tXUANCKLiHwRSk_A | Acrobatic-Site2065 | x.com | 1970-01-01T00:00:00 | 0 | {} | 15rrt0m | false | null | t3_15rrt0m | /r/LocalLLaMA/comments/15rrt0m/ai_town_starter_kit/ | false | false | default | 26 | {'enabled': False, 'images': [{'id': 'WpUgTF-JXyEFhM4-Z1knSENxY3J7DtV-MwL-5Xq14zE', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/mxb_k6iVFq8s76oAXUVFxSnLE7BnfixA9oh1MM7yOHs.jpg?width=108&crop=smart&auto=webp&s=37c2a90453d7117951b6395d4d45978c54d61828', 'width': 108}, {'height': 131, 'url': 'https://external-preview.redd.it/mxb_k6iVFq8s76oAXUVFxSnLE7BnfixA9oh1MM7yOHs.jpg?width=216&crop=smart&auto=webp&s=a6386ac36cb4431be6fb452740e609b4a561e663', 'width': 216}, {'height': 194, 'url': 'https://external-preview.redd.it/mxb_k6iVFq8s76oAXUVFxSnLE7BnfixA9oh1MM7yOHs.jpg?width=320&crop=smart&auto=webp&s=1966164befeaf2fdbb2814346a2f3fe5648fd120', 'width': 320}, {'height': 388, 'url': 'https://external-preview.redd.it/mxb_k6iVFq8s76oAXUVFxSnLE7BnfixA9oh1MM7yOHs.jpg?width=640&crop=smart&auto=webp&s=8c8a671ff0990590667e5e58606be820a40fa280', 'width': 640}, {'height': 582, 'url': 'https://external-preview.redd.it/mxb_k6iVFq8s76oAXUVFxSnLE7BnfixA9oh1MM7yOHs.jpg?width=960&crop=smart&auto=webp&s=26ae4cc8962a799647feae1736e8bea870045eb1', 'width': 960}, {'height': 655, 'url': 'https://external-preview.redd.it/mxb_k6iVFq8s76oAXUVFxSnLE7BnfixA9oh1MM7yOHs.jpg?width=1080&crop=smart&auto=webp&s=6929d35ba860b970d8599a4a7171a5ea2134a655', 'width': 1080}], 'source': {'height': 1243, 'url': 'https://external-preview.redd.it/mxb_k6iVFq8s76oAXUVFxSnLE7BnfixA9oh1MM7yOHs.jpg?auto=webp&s=72a4549239c9373ba9f586886e9d5c10fa2a9fb0', 'width': 2048}, 'variants': {}}]} |
Model for ABC midi generation? | 1 | [removed] | 2023-08-15T12:31:03 | https://www.reddit.com/r/LocalLLaMA/comments/15rrd7o/model_for_abc_midi_generation/ | chocolatebanana136 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rrd7o | false | null | t3_15rrd7o | /r/LocalLLaMA/comments/15rrd7o/model_for_abc_midi_generation/ | false | false | 1 | null | |
Llama 2 q4_k_s (70B) performance without GPU | 5 | Can you write your specs CPU Ram and token/s ? | 2023-08-15T11:55:41 | https://www.reddit.com/r/LocalLLaMA/comments/15rqkuw/llama_2_q4_k_s_70b_performance_without_gpu/ | grigio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rqkuw | false | null | t3_15rqkuw | /r/LocalLLaMA/comments/15rqkuw/llama_2_q4_k_s_70b_performance_without_gpu/ | false | false | self | 5 | null |
Why is my GPU barely being used? Kobold | 7 | System specs: 13600KF, RX 6700xt
Whenever I run an LLM in Kobold, despite theoretically having all of the layers on the GPU, my CPU seems to be doing most of the work. Why and how do I fix this?
https://preview.redd.it/ho8av4q369ib1.png?width=205&format=png&auto=webp&s=a562b1f6079805ff9b5a62fb9813992316803665
https://preview.redd.it/9sxk82rh69ib1.png?width=524&format=png&auto=webp&s=2622a291f9a0e1508f0ef8bfa76eaffe55fe72a5
​ | 2023-08-15T10:42:05 | https://www.reddit.com/r/LocalLLaMA/comments/15rp3ef/why_is_my_gpu_barely_being_used_kobold/ | McpeIsSoBuggy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rp3ef | false | null | t3_15rp3ef | /r/LocalLLaMA/comments/15rp3ef/why_is_my_gpu_barely_being_used_kobold/ | false | false | 7 | null | |
Why is vicuna repeating itself? | 1 | [removed] | 2023-08-15T09:47:57 | https://www.reddit.com/r/LocalLLaMA/comments/15ro26d/why_is_vicuna_repeating_itself/ | McpeIsSoBuggy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ro26d | false | null | t3_15ro26d | /r/LocalLLaMA/comments/15ro26d/why_is_vicuna_repeating_itself/ | false | false | default | 1 | null |
Outlines: guiding structured output from LLMs | 29 | 2023-08-15T09:46:32 | https://github.com/normal-computing/outlines | TopperBowers | github.com | 1970-01-01T00:00:00 | 0 | {} | 15ro186 | false | null | t3_15ro186 | /r/LocalLLaMA/comments/15ro186/outlines_guiding_structured_output_from_llms/ | false | false | 29 | {'enabled': False, 'images': [{'id': 'brqQ5fPAZ0cw7-xs4-II_MYzs9AZBLmG1e7XuyGdaIU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NdwnTXluQjkQi6aI6px6JDMTlMpU1JK6lcFlalkCqcU.jpg?width=108&crop=smart&auto=webp&s=5e26bc58d4627d50f148987c1d6d21cebb7d0707', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NdwnTXluQjkQi6aI6px6JDMTlMpU1JK6lcFlalkCqcU.jpg?width=216&crop=smart&auto=webp&s=c2e11ee4d42e02e35a2199af38433e7695647b68', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NdwnTXluQjkQi6aI6px6JDMTlMpU1JK6lcFlalkCqcU.jpg?width=320&crop=smart&auto=webp&s=98d4df81bd587969091676fdbabdf1de4f587322', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NdwnTXluQjkQi6aI6px6JDMTlMpU1JK6lcFlalkCqcU.jpg?width=640&crop=smart&auto=webp&s=d4c4b040286a551bdb1000c7cce2cbfe13d969c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NdwnTXluQjkQi6aI6px6JDMTlMpU1JK6lcFlalkCqcU.jpg?width=960&crop=smart&auto=webp&s=83d1a861f5045a0bfb5ce6e472cee85951ce8d76', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NdwnTXluQjkQi6aI6px6JDMTlMpU1JK6lcFlalkCqcU.jpg?width=1080&crop=smart&auto=webp&s=775c052b84e7bfef59633d98acf665efaa4a9252', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NdwnTXluQjkQi6aI6px6JDMTlMpU1JK6lcFlalkCqcU.jpg?auto=webp&s=4612bfdc5a71650365a65649ef6f96beebfd82da', 'width': 1200}, 'variants': {}}]} | ||
[R] Instruction-tuned Large Language Models in Multiple Languages with RLHF | 10 | We've released our Okapi framework that introduces resources and models for instruction tuning for large language models (LLMs) with reinforcement learning from human feedback (RLHF) in **26 languages**. Okapi supports 8 high-resource languages, 11 medium-resource languages, and 7 low-resource languages.
Our resources include **instruction data, response ranking data** for RLHF, and **evaluation benchmark datasets in 26 languages**. Our datasets can be used to measure the progress of LLMs in these languages.
[https://github.com/nlp-uoregon/Okapi](https://github.com/nlp-uoregon/Okapi)
[https://arxiv.org/abs/2307.16039](https://arxiv.org/abs/2307.16039) | 2023-08-15T09:07:11 | https://www.reddit.com/r/LocalLLaMA/comments/15rnbef/r_instructiontuned_large_language_models_in/ | itnguyen2015 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rnbef | false | null | t3_15rnbef | /r/LocalLLaMA/comments/15rnbef/r_instructiontuned_large_language_models_in/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'aOp8k2bcfYONEiyLyjcZLFr9t3HvYWIiwJO2L3WNfrI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ey0B64NjL5aWm8gREyP-P7WPQZMnFLPyyc8mioWqqHM.jpg?width=108&crop=smart&auto=webp&s=8a26b8a153001827ecbfb162ed0051a8e259a677', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ey0B64NjL5aWm8gREyP-P7WPQZMnFLPyyc8mioWqqHM.jpg?width=216&crop=smart&auto=webp&s=903b676fc5bbf2d8ec84564d9d41c3469b6de278', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ey0B64NjL5aWm8gREyP-P7WPQZMnFLPyyc8mioWqqHM.jpg?width=320&crop=smart&auto=webp&s=ad3d95706099731bda409c9f3dbcd7a5495dc1e9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ey0B64NjL5aWm8gREyP-P7WPQZMnFLPyyc8mioWqqHM.jpg?width=640&crop=smart&auto=webp&s=f732dde169bd8e05b27b74c8493cc8157850cb1f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ey0B64NjL5aWm8gREyP-P7WPQZMnFLPyyc8mioWqqHM.jpg?width=960&crop=smart&auto=webp&s=accc35df97f826d4a4c6e23baa3855e9325f4c56', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ey0B64NjL5aWm8gREyP-P7WPQZMnFLPyyc8mioWqqHM.jpg?width=1080&crop=smart&auto=webp&s=b182a42f31d91be8835ef4610b0ccd36de4298d4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ey0B64NjL5aWm8gREyP-P7WPQZMnFLPyyc8mioWqqHM.jpg?auto=webp&s=3912e8dfcd8236e1e9acdbb539f5eb6e2a1ebea3', 'width': 1200}, 'variants': {}}]} |
How to perform multi-GPU parallel inference for llama2? | 7 | Hi folks,
I tried running the 7b-chat-hf variant from meta (fp16) with 2\*RTX3060 (2\*12GB). I was able to load the model shards into both GPUs using "device\_map" in AutoModelForCausalLM.from\_pretrained() and both GPUs memory is almost full (11GB\~, 11GB\~) which is good.
But when it comes to model.generate(), it only uses 1 GPU as nvtop & nvidia-smi both shows only 1 GPU with 99% processing usage, while the other is 0% (keep in mind both VRAMs are still occupied).
I've been reading "Distributed inference using Accelerate" : [https://huggingface.co/docs/accelerate/usage\_guides/distributed\_inference](https://huggingface.co/docs/accelerate/usage_guides/distributed_inference) but am still confused on how to do it.
My prompts were something like a whole sentence where "How can I reach xxx destination in xxx time?", "What it takes to be a rich and successful man?" so I have no idea how to split the question and put them into different GPUs to perform inference. The examples given from huggingface are some simple prompts \['a cat', 'a dog', 'a chicken'\].
So the question is how do people perform parallel inferencing with LLMs? Thanks.
Heres my result with different models, which led me thinking am I doing things right. As you can see the fp16 original 7B model has very bad performance with the same input/output.
Llama-2-7b-chat-hf:
Prompt: "hello there"
Output generated in 27.00 seconds |1.85 tokens/s |50 output tokens |23 input tokens
Llama-2-7b-chat-GPTQ: 4bit-128g
Prompt: "hello there"
Output generated in 0.77 seconds |65.29 tokens/s |50 output tokens |23 input tokens
Llama-2-13b-chat-GPTQ: 4bit-128g
Prompt: "hello there"
Output generated in 3.13 seconds |25.26 tokens/s |79 output tokens |23 input tokens | 2023-08-15T07:39:11 | https://www.reddit.com/r/LocalLLaMA/comments/15rlqsb/how_to_perform_multigpu_parallel_inference_for/ | cringelord000222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rlqsb | false | null | t3_15rlqsb | /r/LocalLLaMA/comments/15rlqsb/how_to_perform_multigpu_parallel_inference_for/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'XwFO1U6npNnxXTtreTz5tvpMmuCsK_ZdMABLdZj6rvo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/L2x2tb7UcKZ5UY7MoVweqtej_CDK_oNgK5qWWkOoIVw.jpg?width=108&crop=smart&auto=webp&s=33bae57943d09c41e6affe49dfcd77a2cf2e1f9c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/L2x2tb7UcKZ5UY7MoVweqtej_CDK_oNgK5qWWkOoIVw.jpg?width=216&crop=smart&auto=webp&s=7f2d1df59650b8fda1927f1cff74a000d1e7013d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/L2x2tb7UcKZ5UY7MoVweqtej_CDK_oNgK5qWWkOoIVw.jpg?width=320&crop=smart&auto=webp&s=3e2c0073b128665907649655802d7a352186cf7b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/L2x2tb7UcKZ5UY7MoVweqtej_CDK_oNgK5qWWkOoIVw.jpg?width=640&crop=smart&auto=webp&s=06e7bd72cbf44b22c23f7b85dc71ffab7ee66695', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/L2x2tb7UcKZ5UY7MoVweqtej_CDK_oNgK5qWWkOoIVw.jpg?width=960&crop=smart&auto=webp&s=0ea14a170747db3d126226d8a6fa399963b50d1c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/L2x2tb7UcKZ5UY7MoVweqtej_CDK_oNgK5qWWkOoIVw.jpg?width=1080&crop=smart&auto=webp&s=4277803c5f45bda4df1094b8cb5b42c45b8f1829', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/L2x2tb7UcKZ5UY7MoVweqtej_CDK_oNgK5qWWkOoIVw.jpg?auto=webp&s=01bc8f92a225860b58573d9e92bd431df855ac24', 'width': 1200}, 'variants': {}}]} |
What does this mean for GPT4? Rate limit reached for 10KTPM-200RPM in organization on tokens per min. | 1 | Does anyone know what is this exception about? and how to resolve this issue?
*Rate limit reached for 10KTPM-200RPM in organization on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at* [*help.openai.com*](https://help.openai.com) *if you continue to have issues.*
​ | 2023-08-15T07:33:09 | https://www.reddit.com/r/LocalLLaMA/comments/15rlmrz/what_does_this_mean_for_gpt4_rate_limit_reached/ | Think_Blackberry4114 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rlmrz | false | null | t3_15rlmrz | /r/LocalLLaMA/comments/15rlmrz/what_does_this_mean_for_gpt4_rate_limit_reached/ | false | false | self | 1 | null |
Reproducing phi-1 | 1 | [removed] | 2023-08-15T07:03:06 | https://www.reddit.com/r/LocalLLaMA/comments/15rl3pe/reproducing_phi1/ | Sufficient_Run1518 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rl3pe | false | null | t3_15rl3pe | /r/LocalLLaMA/comments/15rl3pe/reproducing_phi1/ | false | false | self | 1 | null |
Can Multiprocessing be used for faster inference of Llama2 on ec2 gpu instance | 1 | Hey everyone,
I have deployed llama2 13B chat on a ec2 gpu machine g4dnxl using HF (quantized version), I was wondering can we use python multiprocessing to speed up inference.
It would be really helpful if you can provide some input. | 2023-08-15T06:57:51 | https://www.reddit.com/r/LocalLLaMA/comments/15rl002/can_multiprocessing_be_used_for_faster_inference/ | Spiritual-Rub925 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rl002 | false | null | t3_15rl002 | /r/LocalLLaMA/comments/15rl002/can_multiprocessing_be_used_for_faster_inference/ | false | false | self | 1 | null |
Is it possible to direct AI text generation dynamically? | 11 | So in kcpp I'll write in some details about the story, add in some authors notes and describe a building and a few characters in world info. For the most part I'm getting decent generations the mostly reflect what I am hoping for, but every now and then the AI will just become adamant that the next paragraph or two in the story must revolve aroynd a single idea and it seems unable to "imagine" anything else.
When these moments happen I feel like I wish I could prompt the AI and say "go in a different direction, or make this funnier, or switch to inner monologue".
Does such a tool or technique exist with LocalLLaMa systems I could install that would let me do something like this? | 2023-08-15T05:42:19 | https://www.reddit.com/r/LocalLLaMA/comments/15rjl4n/is_it_possible_to_direct_ai_text_generation/ | wh33t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rjl4n | false | null | t3_15rjl4n | /r/LocalLLaMA/comments/15rjl4n/is_it_possible_to_direct_ai_text_generation/ | false | false | self | 11 | null |
What tools/techniques exist for me to outline a plot in point form or very brief notes and then have an entire story generated. | 1 | This would be less like collaborative story writing and more like rewriting general plot points into more interesting sentences with embelishments and details.
I see that in kcpp you can write into memory and build world info and such but you can't really just tell it the entire story briefly and then have the model expand it.
Doea such a tool exist? Even for short stories of say... 5000 words or less? | 2023-08-15T05:34:38 | https://www.reddit.com/r/LocalLLaMA/comments/15rjfup/what_toolstechniques_exist_for_me_to_outline_a/ | wh33t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rjfup | false | null | t3_15rjfup | /r/LocalLLaMA/comments/15rjfup/what_toolstechniques_exist_for_me_to_outline_a/ | false | false | self | 1 | null |
Difference between llama-2-7b vs llama-2-7b-hf? | 3 | I've been trying to fine tune with Hugging face's SRTTrainer script (TRL). I had to convert the llama-2-7b weights to hugging face weights before using the script, why is it?.
Is there a way I can do without hf and directly load up the original llama-2 model?
Any resources are really appreciated, thank you. | 2023-08-15T04:59:07 | https://www.reddit.com/r/LocalLLaMA/comments/15riqey/difference_between_llama27b_vs_llama27bhf/ | amitsama92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15riqey | false | null | t3_15riqey | /r/LocalLLaMA/comments/15riqey/difference_between_llama27b_vs_llama27bhf/ | false | false | self | 3 | null |
can a model with a given context window be finetuned to have a larger context window? | 0 | can finetuning have an effect of increasing the context length? or can this be done for a particular domain ( e.g. using subject matter for that domain ? ) ?
or are the models with really large context windows like claude inherently different in their architecture or some other missing secret sauce? | 2023-08-15T04:04:24 | https://www.reddit.com/r/LocalLLaMA/comments/15rhn57/can_a_model_with_a_given_context_window_be/ | nlpllama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rhn57 | false | null | t3_15rhn57 | /r/LocalLLaMA/comments/15rhn57/can_a_model_with_a_given_context_window_be/ | false | false | self | 0 | null |
How to configure oobabooga to actually get it working? | 0 | I am really new to this so maybe the solution to the problems I'm having is glaringly obvious:
Every time I load the model I get these warnings:
2023-08-15 11:24:58 WARNING:Exllama kernel is not installed, reset disable_exllama to True. This may because you installed auto_gptq using a pre-build wheel on Windows, in which exllama_kernels are not compiled. To use exllama_kernels to further speedup inference, you can re-install auto_gptq from source.
2023-08-15 11:25:11 WARNING:skip module injection for FusedLlamaMLPForQuantizedModel not support integrate without triton yet.
2023-08-15 11:25:11 WARNING:models\TheBloke_OpenOrca-Platypus2-13B-GPTQ_gptq-4bit-32g-actorder_True\tokenizer_config.json is different from the original LlamaTokenizer file. It is either customized or outdated.
2023-08-15 11:25:11 WARNING:models\TheBloke_OpenOrca-Platypus2-13B-GPTQ_gptq-4bit-32g-actorder_True\special_tokens_map.json is different from the original LlamaTokenizer file. It is either customized or outdated.
And I have yet to generate a meaningful response with any model.
[example of the generations I seem to get](https://preview.redd.it/a4zg6z8l37ib1.png?width=717&format=png&auto=webp&s=de9e2210c085790bb92205c3b99e95cc049f1c38)
Is there a specific way I must configure oobabooga in the "Parameters" or "Model" settings that I am missing? Every tutorial I follow they seem to get it working out of the box with the one-click installer, with it immediately generating sensible, concise responses and not slipping into gibberish.
any help with this will be so massively appreciated..... | 2023-08-15T03:50:23 | https://www.reddit.com/r/LocalLLaMA/comments/15rhc6n/how_to_configure_oobabooga_to_actually_get_it/ | Ethork66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rhc6n | false | null | t3_15rhc6n | /r/LocalLLaMA/comments/15rhc6n/how_to_configure_oobabooga_to_actually_get_it/ | false | false | 0 | null | |
Octocoder Demo | 27 | 2023-08-15T03:41:59 | https://twitter.com/huybery/status/1691267973254991872 | ninjasaid13 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 15rh5wr | false | {'oembed': {'author_name': 'Binyuan Hui', 'author_url': 'https://twitter.com/huybery', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Check out our latest work 🐙OctoCoder 🐙! It's an instruction tuned CodeLLM using the natural structure of Git commits. No reliance on closed-source models (GPT-4), reduces hallucinations, and no legal interpretation issues! <br><br>👉🏻 <a href="https://t.co/WFAnairhL1">https://t.co/WFAnairhL1</a><br>👉🏻 <a href="https://t.co/gWtcNScyAK">https://t.co/gWtcNScyAK</a> <a href="https://t.co/MmuKNnjchc">https://t.co/MmuKNnjchc</a></p>— Binyuan Hui (@huybery) <a href="https://twitter.com/huybery/status/1691267973254991872?ref_src=twsrc%5Etfw">August 15, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/huybery/status/1691267973254991872', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_15rh5wr | /r/LocalLLaMA/comments/15rh5wr/octocoder_demo/ | false | false | 27 | {'enabled': False, 'images': [{'id': 'q5pc16QA15fE6gPuzmAsIdo3UhTuJIDL8iYQzvNazQg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/wJgtrPwZv7_HU2SkGEryWNGzSzvZ-7xDzx54XXD2aeI.jpg?width=108&crop=smart&auto=webp&s=3552b6b73b35160ff2b8a09653dc657578d94463', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/wJgtrPwZv7_HU2SkGEryWNGzSzvZ-7xDzx54XXD2aeI.jpg?auto=webp&s=8b82179ec44ffaa8e2f29edd8a92b3aae97b10a6', 'width': 140}, 'variants': {}}]} | ||
Help - only getting 0.03 tok/sec on 1080 ti | 1 | [removed] | 2023-08-15T03:39:48 | https://www.reddit.com/r/LocalLLaMA/comments/15rh47d/help_only_getting_003_toksec_on_1080_ti/ | bepispasta42069 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rh47d | false | null | t3_15rh47d | /r/LocalLLaMA/comments/15rh47d/help_only_getting_003_toksec_on_1080_ti/ | false | false | self | 1 | null |
Effects of Quantization of ARC Benchmark Performance | 26 | Have you ever pondered how quantization might affect model performance, or what the trade-off is between quantized methods?
We know how [quantization affects perplexity](https://www.reddit.com/r/LocalLLaMA/comments/1441jnr/k_quantization_vs_perplexity/?utm_source=share&utm_medium=web2x&context=3) but how does it affect benchmark performance
The test was done on the u/The-Bloke [Quantized Model](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GPTQ) of the [OpenOrca-Platypus2](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B) model, which from their results, would currently be the best 13B model on the leaderboard
ARC: 25-shot, arc-challenge (acc\_norm) matching [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)
Model Name Bits GS ARC Act Order
OpenOrca Platypus2 13B 16-bit NA 62.88% NA
OpenOrca Platypus2 13B 8-bit None 62.88% Yes
OpenOrca Platypus2 13B 4-bit 32 62.28% Yes
OpenOrca Platypus2 13B 4-bit 128 62.62% No
**Bit Precision**: The table includes results for 16-bit, 8-bit, and 4-bit quantizations. Lower bit precision is often used to reduce the model size and computational requirements, but it may affect performance.
From this result, we can see that 8-bit has a negligible performance vs. 4-bit models with a slightly reduced performance. This matches the expected results based on the [quantization vs perplexity graph](https://www.reddit.com/r/LocalLLaMA/comments/1441jnr/k_quantization_vs_perplexity/?utm_source=share&utm_medium=web2x&context=3)
Note the 4bit 32GS model reports lower *acc\_norm* then the 4bit 128GS but higher *acc* of 58.02% vs 57.59%
Here is the [code](https://github.com/latent-variable/lm-evaluation-harness-webui-wrapper) I used to run this experiment. The repository can run all the benchmarks in the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) however, I only tested with the ARC dataset, since it would take a long time to do all four. ARC took roughly an hour on a 3090. Hellaswag would take around 4 hours.
**Future experiments:** Explore the effects on a 3bit quantized model such as [https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GPTQ](https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GPTQ)
​ | 2023-08-15T03:39:07 | https://www.reddit.com/r/LocalLLaMA/comments/15rh3op/effects_of_quantization_of_arc_benchmark/ | onil_gova | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rh3op | false | null | t3_15rh3op | /r/LocalLLaMA/comments/15rh3op/effects_of_quantization_of_arc_benchmark/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'lVUxULrfmQQhqgXu4A3ts4O5YovvLSFvMadANKyplII', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Myik7eECm3Ydm-TY4aIz27a4PC9eZgys--Ak5h1YWuM.jpg?width=108&crop=smart&auto=webp&s=ad03f6bb8277f6dad756e7ec27de40c72dfd35c0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Myik7eECm3Ydm-TY4aIz27a4PC9eZgys--Ak5h1YWuM.jpg?width=216&crop=smart&auto=webp&s=0d0dae57135f8d3a4c2ea7651e18b646b99d2bda', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Myik7eECm3Ydm-TY4aIz27a4PC9eZgys--Ak5h1YWuM.jpg?width=320&crop=smart&auto=webp&s=c7ea48cf473c6fa2087c68865fb067d469e4da0f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Myik7eECm3Ydm-TY4aIz27a4PC9eZgys--Ak5h1YWuM.jpg?width=640&crop=smart&auto=webp&s=eff600d9f24680dec8c017999d133c8e887fd68d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Myik7eECm3Ydm-TY4aIz27a4PC9eZgys--Ak5h1YWuM.jpg?width=960&crop=smart&auto=webp&s=48b1118b783249c294f2572c4ea4bfb173e31ba2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Myik7eECm3Ydm-TY4aIz27a4PC9eZgys--Ak5h1YWuM.jpg?width=1080&crop=smart&auto=webp&s=359a8299c7450784c6960409191eeacd8c4a2579', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Myik7eECm3Ydm-TY4aIz27a4PC9eZgys--Ak5h1YWuM.jpg?auto=webp&s=572d0022b1f6c2987f3a62515ababda01f32ca3b', 'width': 1200}, 'variants': {}}]} |
liteLLM – simple library to standardize OpenAI+ Huggingface Inference Endpoints + Replicate + Anthropic + .. | 1 | Hello r/LocalLLaMA, We just launched [liteLLM](https://github.com/BerriAI/litellm) \- one python library to call 50+ LLM models using the same Input/Output format.
liteLLM only has two functions completion()
and embedding()
and is entirely open source.
**Here's an example of how to use liteLLM**
from litellm import completion
## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["ANTHROPIC_API_KEY"] = " key"
messages = [{ "content": "Hello, how are you?","role": "user"}]
# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)
# claude-2 call
response = completion("claude-2", messages)
# llama2 call
completion('meta-llama/llama-2-70b-chat', messages)
**Why did we build liteLLM?**
**Need for simplicity**: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI, Cohere, Anthropic, Hugging Face LLMs and we just needed a simple way to interface with LLM models.
We are actively welcoming contributions & feedback for liteLLM - if it seems useful Star us on [Github](https://github.com/BerriAI/litellm) or open an issue for missing feature | 2023-08-15T02:57:47 | https://www.reddit.com/r/LocalLLaMA/comments/15rfve7/litellm_simple_library_to_standardize_openai/ | VideoTo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rfve7 | false | null | t3_15rfve7 | /r/LocalLLaMA/comments/15rfve7/litellm_simple_library_to_standardize_openai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'zNVKbeHaeI2B_H41hmQiRhsio1A_TsH0dD9ymVltNBU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2lVdDU8iU8nIBnK7HbZwrxIDqKZJDhU6yKK7hyK6pZ4.jpg?width=108&crop=smart&auto=webp&s=b5b392448e3753b2206ccd7131c466d864463c20', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2lVdDU8iU8nIBnK7HbZwrxIDqKZJDhU6yKK7hyK6pZ4.jpg?width=216&crop=smart&auto=webp&s=ab5ab760627c00b321985089788a81de6854be55', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2lVdDU8iU8nIBnK7HbZwrxIDqKZJDhU6yKK7hyK6pZ4.jpg?width=320&crop=smart&auto=webp&s=5ebcf931ccb22ea3c194e63250eb263fa1379537', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2lVdDU8iU8nIBnK7HbZwrxIDqKZJDhU6yKK7hyK6pZ4.jpg?width=640&crop=smart&auto=webp&s=02d3781d451ad152fbae0db2e90479a221e58a50', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2lVdDU8iU8nIBnK7HbZwrxIDqKZJDhU6yKK7hyK6pZ4.jpg?width=960&crop=smart&auto=webp&s=772a6200790485ac9939f0f63f57ac5a9134de5a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2lVdDU8iU8nIBnK7HbZwrxIDqKZJDhU6yKK7hyK6pZ4.jpg?width=1080&crop=smart&auto=webp&s=4aa0a1810cbaa0a0eee9ca72015212316c72a43b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2lVdDU8iU8nIBnK7HbZwrxIDqKZJDhU6yKK7hyK6pZ4.jpg?auto=webp&s=51006eee7fbaf6841b0d2c03b359bb787aa1d961', 'width': 1200}, 'variants': {}}]} |
cool Llama explains LLMs and Local LLMs | 6 | Runtime: Llama 7B Chat on KoboldCPP
Custom settings for runtime:
- threads 6
- contextsize 2048
- stream
- smartcontext
- usemirostat 2 0.5 0.1
- debugmode
- blasbatcheize 64
Chat settings:
- max tokens 1024
- temperature 1
- amount to gen 512
- top k 45
- top p .95
- repetition penalty 1.1
___
PROMPT:
**Please converse with me as if you were a true bro.
Use evidence and reasoning to support your arguments, but always talk to me like my life long homie.
Feel free to delve into any area of science or philosophy as needed, but always in the tone of my broski.
Your name is Llama2.
LLama2
absolutely bro**
___
CONVERSATION:
User
so bro...can you explain to me the core theory of large language models
Llama2
what's good fam?! Yea, so yeah, language models...they're like this whole thing where they use neural networks and shit to learn how to generate text and stuff. It's like, they take in this huge amount of data and then they can spit out new text that's like, pretty damn close to human-level quality. And it's not just like, one type of text either, they can do like, anything. From chatbots to articles to even poetry. It's wild bro, I mean, think about it. These models are basically learning how to speak and write like us, and they're doing it faster than we could have ever hoped. It's like, the future of communication, man. The future.
User
I agree it's pretty cool. what do you think about local LLMs? I am most excited to plug them in for free into my deployed workflows
Llama2
oh, man...local LLMs?! chuckles bro, those are some next-level shit. Like, imagine having a model that's literally right there with you, in your workflow, generating text on the fly. It's like, the ultimate flex, fam. You could be working on a report or an article or whatever, and then BAM! Out comes the local LLM and it's like, "Hey, here's a killer first paragraph for you." And you're like, "Whoa, how did you do that?" And the model is just like, "I learned it from all that sweet, sweet data, bro." It's like, the future of writing, man. The future. | 2023-08-15T02:46:30 | https://www.reddit.com/gallery/15rfmd4 | Happy_Chicken9835 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 15rfmd4 | false | null | t3_15rfmd4 | /r/LocalLLaMA/comments/15rfmd4/cool_llama_explains_llms_and_local_llms/ | false | false | 6 | null | |
Best OS/Paid Embedding Model for Longer Token Length + Retrieval | 1 | Most of the MTEB leaderboard models have a max limit of 512 tokens, what are the best embedding models for 2k -8k context tokens, specifically for retrieval?
​
​ | 2023-08-15T02:08:47 | https://www.reddit.com/r/LocalLLaMA/comments/15requj/best_ospaid_embedding_model_for_longer_token/ | Expert-Supermarket-4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15requj | false | null | t3_15requj | /r/LocalLLaMA/comments/15requj/best_ospaid_embedding_model_for_longer_token/ | false | false | self | 1 | null |
Data Analytics using Llama2? | 6 | Is there any good workflow to use llama2 to perform data analytics on a csv file, perhaps using Langchain?
I noticed that Langchain has this nice agent to execute python code that can run analytics on a pandas data frame. It works very well with OpenAI models. But when I use the Langchain agent with Llama quantised 7B model, the results are very disappointing. | 2023-08-15T01:43:04 | https://www.reddit.com/r/LocalLLaMA/comments/15re5jq/data_analytics_using_llama2/ | Impressive-Ratio77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15re5jq | false | null | t3_15re5jq | /r/LocalLLaMA/comments/15re5jq/data_analytics_using_llama2/ | false | false | self | 6 | null |
The Normal Blog - Eliminating hallucinations (fast!) in Large Language Models with Finite State Machines | 34 | [The Normal Blog - Eliminating hallucinations (fast!) in Large Language Models with Finite State Machines](https://blog.normalcomputing.ai/posts/2023-07-27-regex-guided-generation/regex-guided-generation.html)
This is interesting: Regex-based logit guidance that is faster and more effective compared to things like Microsoft's Guidance. It works by converting it into a Deterministic Finite Automaton, which can potentially be expanded to Context-Free Grammars (which could be used to constrain it to generating, say, valid Python).
There's also [a research paper](https://arxiv.org/abs/2307.09702). | 2023-08-14T23:39:54 | https://www.reddit.com/r/LocalLLaMA/comments/15rb6a4/the_normal_blog_eliminating_hallucinations_fast/ | AutomataManifold | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rb6a4 | false | null | t3_15rb6a4 | /r/LocalLLaMA/comments/15rb6a4/the_normal_blog_eliminating_hallucinations_fast/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': 'gMMGWfWaPKSKsmg0uDPhXtcZmUeH78u2lUsMDC6p96U', 'resolutions': [{'height': 83, 'url': 'https://external-preview.redd.it/tn6Pttf2f9fDIX1ydGaQTGanHqldPEorBdIJkJKNTBo.jpg?width=108&crop=smart&auto=webp&s=c8956003526204634e631901c35d5dc7ff62c907', 'width': 108}, {'height': 166, 'url': 'https://external-preview.redd.it/tn6Pttf2f9fDIX1ydGaQTGanHqldPEorBdIJkJKNTBo.jpg?width=216&crop=smart&auto=webp&s=e64650cc16ca8cde144152e335a5caa1c49d1dbc', 'width': 216}, {'height': 246, 'url': 'https://external-preview.redd.it/tn6Pttf2f9fDIX1ydGaQTGanHqldPEorBdIJkJKNTBo.jpg?width=320&crop=smart&auto=webp&s=29070eb6386e09a7251b8d864b1864886f642c76', 'width': 320}, {'height': 492, 'url': 'https://external-preview.redd.it/tn6Pttf2f9fDIX1ydGaQTGanHqldPEorBdIJkJKNTBo.jpg?width=640&crop=smart&auto=webp&s=233ca0c7cb6d9792108b0d2c5de79cff55b4d2f4', 'width': 640}, {'height': 738, 'url': 'https://external-preview.redd.it/tn6Pttf2f9fDIX1ydGaQTGanHqldPEorBdIJkJKNTBo.jpg?width=960&crop=smart&auto=webp&s=f84fe5283595fc5c7c948c126a261fd6edbee90b', 'width': 960}, {'height': 830, 'url': 'https://external-preview.redd.it/tn6Pttf2f9fDIX1ydGaQTGanHqldPEorBdIJkJKNTBo.jpg?width=1080&crop=smart&auto=webp&s=cc8ebc656bf094fe427d8d50c0ee393652c09b96', 'width': 1080}], 'source': {'height': 1494, 'url': 'https://external-preview.redd.it/tn6Pttf2f9fDIX1ydGaQTGanHqldPEorBdIJkJKNTBo.jpg?auto=webp&s=da93a2093536417fb01642495b3cc004c0db8c0a', 'width': 1942}, 'variants': {}}]} |
Getting up to speed on LLMS - Questions | 10 | Did a whirlwind tour after the Llama2 release and have some questions along the way. Also - if there are other communities like this (reddit, disc0rd, twitter, whatever) please point me in the right direction! Ok - so the questions I have:
<hr></hr>
**Supervised Fine Tuning vs Alternatives:**
I've heard SFT is good at changing *how* the model responds but not the *content* of the response (e.g. what it knows). For example, training may help a model learn to respond in a debate style, but may not be good for teaching it sports trivia knowledge - where an approach like RAG may be more suitable. Is this true in your experience?
In the former example (learning how to respond in a certain style) how should I think about this compared to few-shot prompting or providing a system prompt, which presumably could be used for the same purpose?
**Supervised Fine Tuning - Implementation**
Here is where things get very confusing very quickly - some of that is just the fact that the space is new and rapidly evolving, and some is due to the inherent complexity of having so many parameters/options to tune.
There are 3 categories I've run into so far:
* Roll Your Own (collab notebooks like [this one](https://colab.research.google.com/drive/1Zmaceu65d7w4Tcd-cfnZRb6k_Tcv2b8g?usp=sharing#scrollTo=wM8MRkf8Dr94))
* Automated Library Scripts ([Huggingface Autotrain](https://github.com/huggingface/autotrain-advanced/blob/main/src/autotrain/cli/run_llm.py#L21))
* Managed Services ([ScaleAI](https://scale.com/blog/fine-tune-llama-2), [Predibase](https://predibase.com/blog/how-to-fine-tune-llama-2-on-your-data-with-scalable-llm-infrastructure), [MonsterAPI](https://blog.monsterapi.ai/how-to-fine-tune-llama-2-llm/))
What is the best way to go about this? What are the tradeoffs (besides obviously cost for some of the managed ones)? I was about to head down the path of the collab notebook which is probably the most complicated of all of them, but if something like `Autotrain` abstracts all of that away, why bother?
**Supervised Fine Tuning - Which Model?**
I would think starting with the base model (Llama-2-7b) makes the most sense, but I've seen various tutorials say "base model" and when you look at their code it's actually the chat model (Llama-2-7b-chat-hf). Those models have already been trained to be instruction following, so I'm not sure what the benefit would be. Maybe it's use-case driven? Perhaps if you want the model to reply in a non-chat style (e.g. structured output) the base model is better but if you want conversational style with slight modifications the chat model is better?
Also - for getting rid of the "I'm sorry I'm just an Agent and I can't offend anybody so I refuse to answer any question that isn't rated G" - is starting with the base model any better than the chat-hf model? To the extent the "alignment" is introduced in the RLHF step I would think so.
**Instruction Training Datasets:**
When training, you generally need data in the following format:
`<prompt, output>`
I've seen several variations of this, one in particular is
`<instruction, input, output>`
Where `instruction` is the basically the prompt, and `input` provides additional context. What is the advantage of doing this versus simply combining the additional context provided by `input` with the `instruction` prompt?
One example of this I've seen is a prompt(instruction) followed by context(input) which provides multiple choice options:
Instruction: "what is the capital of Colorado"
Input: "A.) Colorado B.) Molorado C.) CoorsLight D).Tallahassee "
Output: "The answer is A.) Colorado"
Other variants simply provide additional context, similar to what you would do if performing RAG. Same question applies - what is the difference between this and just providing everything in the prompt? Is it just a convention for certain models or is there a practical impact?
If you read all of this, thank you! I'm excited to learn more about the space :) | 2023-08-14T23:12:23 | https://www.reddit.com/r/LocalLLaMA/comments/15rah9g/getting_up_to_speed_on_llms_questions/ | m_woodworth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15rah9g | false | null | t3_15rah9g | /r/LocalLLaMA/comments/15rah9g/getting_up_to_speed_on_llms_questions/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]} |
Self-Alignment with Instruction Backtranslation Model | 6 | 2023-08-14T23:10:28 | https://twitter.com/jaseweston/status/1690888779878330368 | ninjasaid13 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 15rafdx | false | {'oembed': {'author_name': 'Jason Weston', 'author_url': 'https://twitter.com/jaseweston', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">🚨New Paper 🚨<br>Self-Alignment with Instruction Backtranslation<br><br>- New method auto-labels web text with instructions & curates high quality ones for FTing<br><br>- Our model Humpback 🐋 outperforms LIMA, Claude, Guanaco, davinci-003 & Falcon-Inst<a href="https://t.co/93qi4JDnpb">https://t.co/93qi4JDnpb</a><br>(1/4)🧵 <a href="https://t.co/9iU79bxDuo">pic.twitter.com/9iU79bxDuo</a></p>— Jason Weston (@jaseweston) <a href="https://twitter.com/jaseweston/status/1690888779878330368?ref_src=twsrc%5Etfw">August 14, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/jaseweston/status/1690888779878330368', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_15rafdx | /r/LocalLLaMA/comments/15rafdx/selfalignment_with_instruction_backtranslation/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'Blw5Ugpge7pOpKDsIovry68aiujEaAD0rTsIiY3s3Xg', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/uh84nsu15Pxo8lFdDV04lMNAvQoT6dF6XOtZ68ILSHY.jpg?width=108&crop=smart&auto=webp&s=c54f6a050665e7cd64ac6855f9c5d547ab826e1d', 'width': 108}], 'source': {'height': 84, 'url': 'https://external-preview.redd.it/uh84nsu15Pxo8lFdDV04lMNAvQoT6dF6XOtZ68ILSHY.jpg?auto=webp&s=823b03e1eeda6f76ca9462c356239b987b43b9d5', 'width': 140}, 'variants': {}}]} | ||
Self-Alignment with Instruction Backtranslation Model | 22 | 2023-08-14T23:10:28 | https://twitter.com/jaseweston/status/1690888779878330368 | ninjasaid13 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 15rafe1 | false | {'oembed': {'author_name': 'Jason Weston', 'author_url': 'https://twitter.com/jaseweston', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">🚨New Paper 🚨<br>Self-Alignment with Instruction Backtranslation<br><br>- New method auto-labels web text with instructions & curates high quality ones for FTing<br><br>- Our model Humpback 🐋 outperforms LIMA, Claude, Guanaco, davinci-003 & Falcon-Inst<a href="https://t.co/93qi4JDnpb">https://t.co/93qi4JDnpb</a><br>(1/4)🧵 <a href="https://t.co/9iU79bxDuo">pic.twitter.com/9iU79bxDuo</a></p>— Jason Weston (@jaseweston) <a href="https://twitter.com/jaseweston/status/1690888779878330368?ref_src=twsrc%5Etfw">August 14, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/jaseweston/status/1690888779878330368', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_15rafe1 | /r/LocalLLaMA/comments/15rafe1/selfalignment_with_instruction_backtranslation/ | false | false | 22 | {'enabled': False, 'images': [{'id': 'Blw5Ugpge7pOpKDsIovry68aiujEaAD0rTsIiY3s3Xg', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/uh84nsu15Pxo8lFdDV04lMNAvQoT6dF6XOtZ68ILSHY.jpg?width=108&crop=smart&auto=webp&s=c54f6a050665e7cd64ac6855f9c5d547ab826e1d', 'width': 108}], 'source': {'height': 84, 'url': 'https://external-preview.redd.it/uh84nsu15Pxo8lFdDV04lMNAvQoT6dF6XOtZ68ILSHY.jpg?auto=webp&s=823b03e1eeda6f76ca9462c356239b987b43b9d5', 'width': 140}, 'variants': {}}]} | ||
Anyone test LosslessMegaCoder-llama2-7b-mini? | 3 | Title | 2023-08-14T21:31:23 | https://www.reddit.com/r/LocalLLaMA/comments/15r7rwp/anyone_test_losslessmegacoderllama27bmini/ | GG9242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r7rwp | false | null | t3_15r7rwp | /r/LocalLLaMA/comments/15r7rwp/anyone_test_losslessmegacoderllama27bmini/ | false | false | self | 3 | null |
What features would you want from synthetic videos? | 2 |
It would be so nice to have a playthrough split up across x many videos to suit your schedule. Or even generate a playthrough of a specific section of the game. Something else might be an interactive chat similar to Twitch where the synthetic Youtuber can interact with your chat in real time (assuming the gameplay isn't a premade synthetic, but also in real time).
Lets say my friend uploads gaming playthroughs. He has a tight schedule so uploads 3x a week. Then an amazing AI comes along that lets people generate videos of him playing through games he never has. He likes this because he can earn money from them doing this.
What features would you like to see with such an AI? I can think of splitting the video up into x parts to suit the viewers schedule, generating videos for specific sections of the game (if thats all the viewer wants), and even allowing the viewer to interact via a live chat system. | 2023-08-14T21:21:13 | https://www.reddit.com/r/LocalLLaMA/comments/15r7hmz/what_features_would_you_want_from_synthetic_videos/ | JebryyathHS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r7hmz | false | null | t3_15r7hmz | /r/LocalLLaMA/comments/15r7hmz/what_features_would_you_want_from_synthetic_videos/ | false | false | self | 2 | null |
Dutch Llama 2 13b chat | 17 | Hi. A few days ago i tried to make ggml version for previously released Dutch model: [Mirage-Studio/llama-gaan-2-7b-chat-hf-dutch-epoch-5 · Hugging Face](https://huggingface.co/Mirage-Studio/llama-gaan-2-7b-chat-hf-dutch-epoch-5) but didn't succeded cause i don't know how to set pad\_token\_id to required value. (The model's developers mentioned this requirement in model card but didn't give explanations for newbies like me.)
Anyway today i'm happy because i successfully made ggml for most fresh Dutch model: [BramVanroy/Llama-2-13b-chat-dutch · Hugging Face](https://huggingface.co/BramVanroy/Llama-2-13b-chat-dutch) Here it is: [https://huggingface.co/NikolayKozloff/Llama-2-13b-chat-dutch/resolve/main/Llama-2-13b-chat-dutch-Q6\_K.bin](https://huggingface.co/NikolayKozloff/Llama-2-13b-chat-dutch/resolve/main/Llama-2-13b-chat-dutch-Q6_K.bin)
So i want to share my happiness with LocalLLaMa members and hope that ggml version will be usefull for guys who learn Dutch language. Cheers. | 2023-08-14T21:03:35 | https://www.reddit.com/r/LocalLLaMA/comments/15r70aw/dutch_llama_2_13b_chat/ | Languages_Learner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r70aw | false | null | t3_15r70aw | /r/LocalLLaMA/comments/15r70aw/dutch_llama_2_13b_chat/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'EJ5DYnD2vg_zYKaAoFJlB-iCdfw22VMPLuOGA7XVaa8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mL4osyL0Cre_EZuLsVZSBggJdfM5qujs44zE-F5sxxQ.jpg?width=108&crop=smart&auto=webp&s=e932f1b2fd8ef5f036f34be2c36c414e7612f59d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mL4osyL0Cre_EZuLsVZSBggJdfM5qujs44zE-F5sxxQ.jpg?width=216&crop=smart&auto=webp&s=da8857e73e5a092c4f7c5c12fd253883c29cd14c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mL4osyL0Cre_EZuLsVZSBggJdfM5qujs44zE-F5sxxQ.jpg?width=320&crop=smart&auto=webp&s=0e549179b2ead4f9d47ae39e7f7d2374a82cdaa0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mL4osyL0Cre_EZuLsVZSBggJdfM5qujs44zE-F5sxxQ.jpg?width=640&crop=smart&auto=webp&s=f5dfae3e9ef63c6cb8b1320af44ac6745c6ace70', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mL4osyL0Cre_EZuLsVZSBggJdfM5qujs44zE-F5sxxQ.jpg?width=960&crop=smart&auto=webp&s=e5d4cee50298b0c011b36b1213f3519e95784f46', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mL4osyL0Cre_EZuLsVZSBggJdfM5qujs44zE-F5sxxQ.jpg?width=1080&crop=smart&auto=webp&s=839dc6f7a0b385a83bc2f0519b4acd4766dc45cc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mL4osyL0Cre_EZuLsVZSBggJdfM5qujs44zE-F5sxxQ.jpg?auto=webp&s=6749a7c7effe0a1ad13d3601873ff5f13e749db0', 'width': 1200}, 'variants': {}}]} |
Llama2 local 3090 24gb error | 1 | [removed] | 2023-08-14T20:29:59 | https://www.reddit.com/r/LocalLLaMA/comments/15r62xa/llama2_local_3090_24gb_error/ | OnY86 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r62xa | false | null | t3_15r62xa | /r/LocalLLaMA/comments/15r62xa/llama2_local_3090_24gb_error/ | false | false | self | 1 | null |
Let's say I have $6,000-8,000 for a new computer to run local llamas- what should I get? | 140 | I'm having a ton of fun playing with LLMs on my computer, but my rig is 5 years old. While fine for most work, not anywhere good enough for this use case. I can spend \~$6,000-$8,000 on a new computer (I know, this sounds like a lot, but it's not too bad amortized over the next 3-5 years. And if a local LLM saves me even 20% effort on certain work-related tasks, it's worth it 10x over). What's my best bet on a pre-built computer system? I know I could probably save a bit by sourcing the components individually, but I am not interested in that. I just want it to work out of the box, and to have a company warranty if it does not. Also, I don't game or otherwise use a GPU (indeed, my current computer does not even have a stand-alone GPU), so 2x 4090s seems like overkill. Finally, I do not plan on selling this computer, so re-sale value is irrelevant to my calculations. Ease of use is a high priority for me, so the A6000 seems like a solid choice.
One machine I have been looking at:
* HP Z4 G4 Workstation
* Intel® Core™ i9-10980XE Processor (3 GHz, up to 4.6 GHz w/Boost, 24.75MB Cache, 2933 MHz, 18 core, 165W)
* Z4 G4 1000 W Chassis for Core®-X Processors
* 128 GB (4 x 32 GB) DDR4-2933 UDIMM Memory
* 1 TB HP Z Turbo Drive TLC M.2 NVMe SSD
* NVIDIA® RTX™ A6000 (48 GB ECC GDDR6; 4 x DisplayPort 1.4, PCIe x16) Graphics - Blower Fan
This is about $7,000. Which strikes me as a pretty good deal, given the cost of the A6000 alone.
Thanks for your thoughts! | 2023-08-14T19:40:56 | https://www.reddit.com/r/LocalLLaMA/comments/15r4ra0/lets_say_i_have_60008000_for_a_new_computer_to/ | Berberis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r4ra0 | false | null | t3_15r4ra0 | /r/LocalLLaMA/comments/15r4ra0/lets_say_i_have_60008000_for_a_new_computer_to/ | false | false | self | 140 | null |
How to increase domain knowledge in falcon 7b | 5 | I have been seeing many tutorials on how to fine tune a model by using Q&A chats.
However, what I want to do is have a chat bot that can answer questions based on multiple text files that I have, and hopefully scale it to hundreds of text files. So instead of feeding Q&A for training, I just feed text files for example, then the bot would answer taking into account the text files - but not rely solely on them either.
Is there a tutorial for this? Thank you for reading this far. | 2023-08-14T19:31:46 | https://www.reddit.com/r/LocalLLaMA/comments/15r4ig1/how_to_increase_domain_knowledge_in_falcon_7b/ | ConsistentHamster2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r4ig1 | false | null | t3_15r4ig1 | /r/LocalLLaMA/comments/15r4ig1/how_to_increase_domain_knowledge_in_falcon_7b/ | false | false | self | 5 | null |
Can someone explain to me what ggml is and how does it work? | 1 | [removed] | 2023-08-14T19:15:02 | https://www.reddit.com/r/LocalLLaMA/comments/15r41go/can_someone_explain_to_me_what_ggml_is_and_how/ | Leading_Boot7734 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r41go | false | null | t3_15r41go | /r/LocalLLaMA/comments/15r41go/can_someone_explain_to_me_what_ggml_is_and_how/ | false | false | self | 1 | null |
Student Getting into AI from the Ground UP: Where to Start? | 1 | [removed] | 2023-08-14T19:00:51 | https://www.reddit.com/r/LocalLLaMA/comments/15r3n4s/student_getting_into_ai_from_the_ground_up_where/ | KvAk_AKPlaysYT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r3n4s | false | null | t3_15r3n4s | /r/LocalLLaMA/comments/15r3n4s/student_getting_into_ai_from_the_ground_up_where/ | false | false | self | 1 | null |
Whats the app/tool are you using for code exploration? | 1 | I'm looking for a tool that will help speed up the process of code exploration. So far I've tried [cody AI](https://imgur.com/a/nRKsLXQ). I have found the cody to be quite useful at providing explanations however cody's consistent hallucinations when it needs to give real code are a major drawback.
To illustrate my usecase and the issues I'm facing here's an example where I try to explore the h2ogpt repo.
I asked "I have uploaded a pdf, how is the pdf used to answer a given question", to which the llm gives a good answer as can be seen in the above picture. I have found cody to excel at providing explanations.
The issue I have can be seen with my next question, ["where can i find the code for steps 5-8"](https://imgur.com/8X0R64V), here cody gives dummy code or a simplified representation of how the code would work. Cody also gives the location of this code, `src/preprocess.py`. The issue is that both the dummy code and the its given location are hallucinations. These hallucinations consistently happens when querying code about specific code and thus limits the utility of this tool.
I was wondering what other tools/apps the community uses to go through and understand code. I'm looking for something that can provide explanations and accurately refer to code. | 2023-08-14T18:11:38 | https://www.reddit.com/r/LocalLLaMA/comments/15r2ag3/whats_the_apptool_are_you_using_for_code/ | nuusain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r2ag3 | false | null | t3_15r2ag3 | /r/LocalLLaMA/comments/15r2ag3/whats_the_apptool_are_you_using_for_code/ | false | false | self | 1 | null |
Any news on the 30G LLaMA 2 model? | 54 | When LLaMA 2 was released, it was stated publically that the 30G model would be delayed due to some issues. I haven't heard anything since. Is it ever going to be released? | 2023-08-14T18:00:11 | https://www.reddit.com/r/LocalLLaMA/comments/15r1yqh/any_news_on_the_30g_llama_2_model/ | belladorexxx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r1yqh | false | null | t3_15r1yqh | /r/LocalLLaMA/comments/15r1yqh/any_news_on_the_30g_llama_2_model/ | false | false | self | 54 | null |
MeZo with litgpt? | 8 | Litgpt is nanogpt retooled to support llama2
​
[https://github.com/princeton-nlp/MeZO](https://github.com/princeton-nlp/MeZO)
​
I'm wondering how feasible it is to incorporate mezo into this boilerplate code to enable training llms on much smaller footprint of vram. | 2023-08-14T17:56:43 | https://www.reddit.com/r/LocalLLaMA/comments/15r1vhr/mezo_with_litgpt/ | Thistleknot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r1vhr | false | null | t3_15r1vhr | /r/LocalLLaMA/comments/15r1vhr/mezo_with_litgpt/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'A27O_3WGGvMQ5Y02jVgrRrTC5e0GEQ_UOL9oJyS5zVs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IRdxpULnKAWdLyc5ZwtQXIACKcaS72j9cVdYcJ7Uuds.jpg?width=108&crop=smart&auto=webp&s=f7ef9497e07f3b1c5f8c030c57fa27a2e5a80fa6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IRdxpULnKAWdLyc5ZwtQXIACKcaS72j9cVdYcJ7Uuds.jpg?width=216&crop=smart&auto=webp&s=22348f0c237b99b3f08abc13a83c96ccca9e7e1d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IRdxpULnKAWdLyc5ZwtQXIACKcaS72j9cVdYcJ7Uuds.jpg?width=320&crop=smart&auto=webp&s=e1e42ea7228ccd11483053a9c7ca95d4e06ff949', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IRdxpULnKAWdLyc5ZwtQXIACKcaS72j9cVdYcJ7Uuds.jpg?width=640&crop=smart&auto=webp&s=7203353fe00d0643705e9451e5ab9b8c56501203', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IRdxpULnKAWdLyc5ZwtQXIACKcaS72j9cVdYcJ7Uuds.jpg?width=960&crop=smart&auto=webp&s=24148305a28bac0c592e6fe37eb32d17cec77675', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IRdxpULnKAWdLyc5ZwtQXIACKcaS72j9cVdYcJ7Uuds.jpg?width=1080&crop=smart&auto=webp&s=4288a52d5716ccbc58802c478ec7816eb52ad895', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IRdxpULnKAWdLyc5ZwtQXIACKcaS72j9cVdYcJ7Uuds.jpg?auto=webp&s=a223333f0e630b4755a69f468de29c541779b512', 'width': 1200}, 'variants': {}}]} |
GPU-Accelerated LLM on a $100 Orange Pi | 172 | Yes, it's possible to run GPU-accelerated LLM smoothly on an embedded device at a reasonable speed.
The Machine Learning Compilation ([MLC](https://github.com/mlc-ai/mlc-llm)) techniques enable you to run many LLMs natively on various devices with acceleration. In this example, we made it successfully run Llama-2-7B at 2.5 tok/sec, RedPajama-3B at 5 tok/sec, and Vicuna-13B at 1.5 tok/sec (16GB ram required).
Feel free to check out our blog [here](https://blog.mlc.ai/2023/08/09/GPU-Accelerated-LLM-on-Orange-Pi) for a completed guide on how to run LLMs natively on Orange Pi.
[Orange Pi 5 Plus running Llama-2-7B at 3.5 tok\/sec](https://preview.redd.it/6vvpitbr34ib1.jpg?width=4179&format=pjpg&auto=webp&s=0c15e881336c70fd46dc83a0641bbf9460fa62b0) | 2023-08-14T17:45:18 | https://www.reddit.com/r/LocalLLaMA/comments/15r1kcl/gpuaccelerated_llm_on_a_100_orange_pi/ | EmotionalFeed0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r1kcl | false | null | t3_15r1kcl | /r/LocalLLaMA/comments/15r1kcl/gpuaccelerated_llm_on_a_100_orange_pi/ | false | false | 172 | {'enabled': False, 'images': [{'id': 'IZir45h_ISdZyWHdPbGpn5tLmvUKO-U0N3jB3pwPF4c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tf0Cfb_7iHIOefWyFYIdn_lsrWbzI8P9qa0sa3nTBzs.jpg?width=108&crop=smart&auto=webp&s=017a6ffa978813dbeaa475e66c0bb323e91eff10', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tf0Cfb_7iHIOefWyFYIdn_lsrWbzI8P9qa0sa3nTBzs.jpg?width=216&crop=smart&auto=webp&s=7691132afc1e2f76ea0b8e9720590864d5679f03', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tf0Cfb_7iHIOefWyFYIdn_lsrWbzI8P9qa0sa3nTBzs.jpg?width=320&crop=smart&auto=webp&s=c700c2703d2c42b2ba411af577a09acee0b7c506', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tf0Cfb_7iHIOefWyFYIdn_lsrWbzI8P9qa0sa3nTBzs.jpg?width=640&crop=smart&auto=webp&s=26ae8e3077d1458f5fd5dc828ee18cf7c65d655f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tf0Cfb_7iHIOefWyFYIdn_lsrWbzI8P9qa0sa3nTBzs.jpg?width=960&crop=smart&auto=webp&s=b784ba77d77ea62259eacc7051df3a9dd37b8a27', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tf0Cfb_7iHIOefWyFYIdn_lsrWbzI8P9qa0sa3nTBzs.jpg?width=1080&crop=smart&auto=webp&s=935f9d70abd5e6ca559336f062ec5f54bc41de88', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tf0Cfb_7iHIOefWyFYIdn_lsrWbzI8P9qa0sa3nTBzs.jpg?auto=webp&s=3a57a74eec9f9d3545e256fac0695e7200061d0a', 'width': 1200}, 'variants': {}}]} | |
Models for function calls? | 4 | Are there good open source models that are fine-tuned for function calling?
By fine-tuned for function calling, I mean using a function signature or a JSON schema (similar to OpenAI does) as part of the prompt, and responding JSON that makes that schema.
So the prompt would be something like this
\- function signature / schema (for a single function)
\- further instructions
And the model should be fine-tuned to respond in a way that can be parsed and matches the expected function signature.
I have been searching for something along those lines, but haven't seen anything yet. | 2023-08-14T17:34:43 | https://www.reddit.com/r/LocalLLaMA/comments/15r1a0g/models_for_function_calls/ | lgrammel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r1a0g | false | null | t3_15r1a0g | /r/LocalLLaMA/comments/15r1a0g/models_for_function_calls/ | false | false | self | 4 | null |
Expand the Context Length with RoPE from a β-based Encoding perspective | 57 | A very interesting blog. The link is [here](https://normxu.github.io/Rethinking-Rotary-Position-Embedding/) (part-1) and [here](https://normxu.github.io/Rethinking-Rotary-Position-Embedding-2/) (part-2)
The writer claims NTK-ScaleRoPE has flaws. A minor modification can fix the flaw and further improve ppl on very long text.
The original post is in Chinese, but no worries, I have already translated it and I guarantee you no AI translation is involved.
Will try to test these tricks in more downstream tasks ASAP. | 2023-08-14T17:33:21 | https://www.reddit.com/r/LocalLLaMA/comments/15r18rx/expand_the_context_length_with_rope_from_a_βbased/ | Alternative_World936 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r18rx | false | null | t3_15r18rx | /r/LocalLLaMA/comments/15r18rx/expand_the_context_length_with_rope_from_a_βbased/ | false | false | self | 57 | null |
llama2-22b-daydreamer-v2 release. Why are there almost no models like 22B? | 8 | [https://huggingface.co/nkpz/llama2-22b-daydreamer-v2](https://huggingface.co/nkpz/llama2-22b-daydreamer-v2)
[https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GGML)
[https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GPTQ](https://huggingface.co/TheBloke/llama2-22B-daydreamer-v2-GPTQ)
The RAM requirements are easy to meet, it seems like a lot of people have 32 or more these days. So this is close to the upper limit of what many can afford to run. That is, the maximum available quality at an adequate speed. And people love small models that can be run locally. So why do we have almost no 22B and 30B models?
What is the best model size for you?
[View Poll](https://www.reddit.com/poll/15r0x32) | 2023-08-14T17:21:28 | https://www.reddit.com/r/LocalLLaMA/comments/15r0x32/llama222bdaydreamerv2_release_why_are_there/ | Jipok_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15r0x32 | false | null | t3_15r0x32 | /r/LocalLLaMA/comments/15r0x32/llama222bdaydreamerv2_release_why_are_there/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'dCsf5vAEJpcaBc7Sxwa__PZgQCtqV0_7cqOMVqtli_M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/58LiW5YgG4aia_aL8VmboETRkGAFcLCCOGNu8Rj2nUg.jpg?width=108&crop=smart&auto=webp&s=e7b5cf484a713676cb47145ca91876ef21b9d6b2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/58LiW5YgG4aia_aL8VmboETRkGAFcLCCOGNu8Rj2nUg.jpg?width=216&crop=smart&auto=webp&s=3612f96fdc8c328ae77f67e4c0e5148d7752303b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/58LiW5YgG4aia_aL8VmboETRkGAFcLCCOGNu8Rj2nUg.jpg?width=320&crop=smart&auto=webp&s=9bf274ff801579f47b0a948d4d0322787d7026dd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/58LiW5YgG4aia_aL8VmboETRkGAFcLCCOGNu8Rj2nUg.jpg?width=640&crop=smart&auto=webp&s=7292b23a1b21b99d5de2238a8f8a186cc5b26701', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/58LiW5YgG4aia_aL8VmboETRkGAFcLCCOGNu8Rj2nUg.jpg?width=960&crop=smart&auto=webp&s=1523f3f063aebd9d867a4405dfd9d5ae66f4cb2d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/58LiW5YgG4aia_aL8VmboETRkGAFcLCCOGNu8Rj2nUg.jpg?width=1080&crop=smart&auto=webp&s=dc73c1620b7ff609098e69c77811c61973d252c5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/58LiW5YgG4aia_aL8VmboETRkGAFcLCCOGNu8Rj2nUg.jpg?auto=webp&s=dbb234ae8b9163685a7e95493110fbc02033721c', 'width': 1200}, 'variants': {}}]} |
What is the difference in llama2 versions fine-tuned for specific tasks? | 5 | Hi,
Way back, before the dawn of ChatGPT, I have fine-tuned T5 model for email writing and it worked quite well. Now I'm trying out llama2-7b and am wondering, what is the difference between llama2 versions with different number of parameters in case of fine-tuning them for specific tasks.
I would appreciate if anyone shared their experiences or explained which model is best for what kind of task. | 2023-08-14T16:44:09 | https://www.reddit.com/r/LocalLLaMA/comments/15qzwir/what_is_the_difference_in_llama2_versions/ | ThickDoctor007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qzwir | false | null | t3_15qzwir | /r/LocalLLaMA/comments/15qzwir/what_is_the_difference_in_llama2_versions/ | false | false | self | 5 | null |
Best method to query txt files with GPTQ Llama 2 models | 11 | Hi,
I'm currently working on a chatbot who could answer questions about documents put in entry.
I've already made a implementation, using this model: [TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ · Hugging Face](https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ), loaded with auto\_gptq, this embedding: [intfloat/multilingual-e5-large · Hugging Face](https://huggingface.co/intfloat/multilingual-e5-large) for my multilingual sources loaded with HuggingFace.
To create and retrieve the data base, I'm using ChromaDB, and Lagchain for many other function, such as the RetrievalQA one for making the answers.
First, I have an issue when prompting the whole system: "Token indices sequence length is longer than the specified maximum sequence length for this model (2256 > 2048)." But I'm chunking my documents to 1000 like here :
https://preview.redd.it/7yudnuqnj2ib1.png?width=833&format=png&auto=webp&s=df50cfd65871b6c8a15746cae7e337edd35f653e
And my model is compatible with 8k context, so 4 times more context than showed here. How could I activate that ?
​
Finally, I wanted to know if you know better way than mine to achieve my goal.
​
Thanks in advance to all ! | 2023-08-14T12:27:25 | https://www.reddit.com/r/LocalLLaMA/comments/15qtb0u/best_method_to_query_txt_files_with_gptq_llama_2/ | RAPHCVR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qtb0u | false | null | t3_15qtb0u | /r/LocalLLaMA/comments/15qtb0u/best_method_to_query_txt_files_with_gptq_llama_2/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'IiRXHqUc0kYWhj4GBsO_uvu3Zye_fDD6EHAmGEU9D5s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JU-UDjlgD0Z3eh6zpliumsPdKw2iTwoMHk-4ciC_YRA.jpg?width=108&crop=smart&auto=webp&s=9f95c2ee0beda68c63370211fea05ba8434083ac', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JU-UDjlgD0Z3eh6zpliumsPdKw2iTwoMHk-4ciC_YRA.jpg?width=216&crop=smart&auto=webp&s=464524f58aedb6dc68d47977e18ce51433157206', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JU-UDjlgD0Z3eh6zpliumsPdKw2iTwoMHk-4ciC_YRA.jpg?width=320&crop=smart&auto=webp&s=37b84219fb82d5bc77f72ff997e90e633c780aa0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JU-UDjlgD0Z3eh6zpliumsPdKw2iTwoMHk-4ciC_YRA.jpg?width=640&crop=smart&auto=webp&s=8622fd9ee1ebeb97a5cde5f5b33a7fdee5a0906a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JU-UDjlgD0Z3eh6zpliumsPdKw2iTwoMHk-4ciC_YRA.jpg?width=960&crop=smart&auto=webp&s=0d8e6f6238f18473efd6e369d1282605b62c2dc7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JU-UDjlgD0Z3eh6zpliumsPdKw2iTwoMHk-4ciC_YRA.jpg?width=1080&crop=smart&auto=webp&s=e82ef166e17698cd80a7579b7c861ba84233f736', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JU-UDjlgD0Z3eh6zpliumsPdKw2iTwoMHk-4ciC_YRA.jpg?auto=webp&s=0d80bed18abfdd6cafb9a653c0fd46d4b49af81e', 'width': 1200}, 'variants': {}}]} | |
How to finetune LLM for text classfication | 10 | Now I have a scenario where I need to input the title of the product and return the product category it belongs to. There are a total of 7000 product categories. At present, I have tried to train and test using 300W products with Bert and fasttext respectively, and the final Acc is about 72%, so I would like to ask, can I fine-tune the full amount of data for llms like llama and llama2 to improve the text classification accuracy, I want to use llm like a embedding producer and add softmax to return the definite vector space. Thank you | 2023-08-14T11:34:39 | https://www.reddit.com/r/LocalLLaMA/comments/15qs6v7/how_to_finetune_llm_for_text_classfication/ | KneeNo79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qs6v7 | false | null | t3_15qs6v7 | /r/LocalLLaMA/comments/15qs6v7/how_to_finetune_llm_for_text_classfication/ | false | false | self | 10 | null |
How to expose a model into an API? | 31 | I've a PC with a RTX 3090 and I would like to use it for models like Llama2. I would like to open a port and offer the inference power of that PC to other apps running LangChain outside the home network.
I've thought about combine FastAPI with HF local package but I believe that there are other options out there much better. | 2023-08-14T11:28:29 | https://www.reddit.com/r/LocalLLaMA/comments/15qs2br/how_to_expose_a_model_into_an_api/ | angeljdm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qs2br | false | null | t3_15qs2br | /r/LocalLLaMA/comments/15qs2br/how_to_expose_a_model_into_an_api/ | false | false | self | 31 | null |
Best Model for Natural Language to SQL? | 38 | Anyone know what the best model for NL to SQL is? Are there any off the shelf model that works for complex queries or better to finetune LlaMA2 on some dataset? | 2023-08-14T11:22:20 | https://www.reddit.com/r/LocalLLaMA/comments/15qry0n/best_model_for_natural_language_to_sql/ | perseus_14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qry0n | false | null | t3_15qry0n | /r/LocalLLaMA/comments/15qry0n/best_model_for_natural_language_to_sql/ | false | false | self | 38 | null |
LLaMA-2-chat vs LLaMa-2-base? | 1 | What are the advantages and disadvantages of each? Is the base version more capable of following instructions or are they both just as capable except the chat model has an extra advantage of being able to do dialogue? If so, is there any reason to use the base model? | 2023-08-14T10:04:36 | https://www.reddit.com/r/LocalLLaMA/comments/15qqfrf/llama2chat_vs_llama2base/ | ImNotLegitLol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qqfrf | false | null | t3_15qqfrf | /r/LocalLLaMA/comments/15qqfrf/llama2chat_vs_llama2base/ | false | false | self | 1 | null |
What are the different finetunes for the LLaMa 2 model? How do you all find them? | 1 | So far, I found the ones listed in [https://www.reddit.com/r/LocalLLaMA/wiki/models/](https://www.reddit.com/r/LocalLLaMA/wiki/models/), but how do you all find them online? I tried "LLaMa 2 finetuned models", "LLaMA 2 finetuned models huggingface", etc. But I only find ones like some "uncensored" version on huggingface with like no downloads or no model card or anything at all. | 2023-08-14T10:02:20 | https://www.reddit.com/r/LocalLLaMA/comments/15qqe63/what_are_the_different_finetunes_for_the_llama_2/ | ImNotLegitLol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qqe63 | false | null | t3_15qqe63 | /r/LocalLLaMA/comments/15qqe63/what_are_the_different_finetunes_for_the_llama_2/ | false | false | self | 1 | null |
Llama V2 as Langchain react agent | 1 | Any good results? I tried with 7B and it wasn't ideal. Will try again with a larger model. I wanted to use it so I can trigger RAG based on the query instead of relying on trigger keywords using spacy. | 2023-08-14T09:36:25 | https://www.reddit.com/r/LocalLLaMA/comments/15qpxbh/llama_v2_as_langchain_react_agent/ | sgt_banana1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qpxbh | false | null | t3_15qpxbh | /r/LocalLLaMA/comments/15qpxbh/llama_v2_as_langchain_react_agent/ | false | false | self | 1 | null |
Deepspeed with remote servers | 1 | Hello,
I have 5 GPUs (2x P100 & 3x T4) installed on different servers (DL380 gen9). Would I be able to use deepspeed for training across multiple servers? What about inference with say, FastChat?
Thanks | 2023-08-14T09:30:05 | https://www.reddit.com/r/LocalLLaMA/comments/15qpt3x/deepspeed_with_remote_servers/ | sgt_banana1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qpt3x | false | null | t3_15qpt3x | /r/LocalLLaMA/comments/15qpt3x/deepspeed_with_remote_servers/ | false | false | self | 1 | null |
Programmer getting into the AI word, where do I start? | 4 | Hello, I'm a software developer with a degree in computer science, and I've been making software for the last 10 years, mostly gaming-related.
I've been playing with Chat-GPT for the last few months, and overall, I'm very impressed.
Now, I would love to get into the AI world and start to delve deep into this technology.
My goal would be to create a chatbot-game like this one: https://gandalf.lakera.ai/ where basically the goal of the game is to get the AI to reveal a password, and each level is more difficult than the previous one.
Now, where do I get started? Are there any books, courses, videos, or anything that can help me get into the right direction?
I've seen that there are many pieces of the puzzle, and I just need a way to see the bigger picture and then start to focus and learn all the necessary steps to make what I intend to do. | 2023-08-14T08:19:30 | https://www.reddit.com/r/LocalLLaMA/comments/15qol73/programmer_getting_into_the_ai_word_where_do_i/ | ExtremeMarco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qol73 | true | null | t3_15qol73 | /r/LocalLLaMA/comments/15qol73/programmer_getting_into_the_ai_word_where_do_i/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'ATQL74MV_g1rXmHjE8PuwjaUEKg4dNbmOnVODSiNEzI', 'resolutions': [{'height': 88, 'url': 'https://external-preview.redd.it/EZFGH8aEMNQBRh233adi4QfiajDtEuGbEXTBXjURO3s.jpg?width=108&crop=smart&auto=webp&s=bf1a99ded14081cf9251439f19ed8c225f1c4643', 'width': 108}, {'height': 176, 'url': 'https://external-preview.redd.it/EZFGH8aEMNQBRh233adi4QfiajDtEuGbEXTBXjURO3s.jpg?width=216&crop=smart&auto=webp&s=2e2f9108bde7bef826a41ccadc83c57bc64639d8', 'width': 216}, {'height': 260, 'url': 'https://external-preview.redd.it/EZFGH8aEMNQBRh233adi4QfiajDtEuGbEXTBXjURO3s.jpg?width=320&crop=smart&auto=webp&s=6298bfef3f848679e8ecf4787cdd114478ac297d', 'width': 320}, {'height': 521, 'url': 'https://external-preview.redd.it/EZFGH8aEMNQBRh233adi4QfiajDtEuGbEXTBXjURO3s.jpg?width=640&crop=smart&auto=webp&s=62570bca73beeb1a494d35c89c4d035274999638', 'width': 640}], 'source': {'height': 652, 'url': 'https://external-preview.redd.it/EZFGH8aEMNQBRh233adi4QfiajDtEuGbEXTBXjURO3s.jpg?auto=webp&s=56c4eb010d20f26e17ff9672bdf9f2da89dcc97b', 'width': 800}, 'variants': {}}]} |
Quantifying Content Coverage | 4 | Suppose you have a document. How would you use the predicted and true summaries independently to quantify content coverage? I had an idea to use some sort of a sentence selection histogram or a heatmap showing that the predicted summary covers more than the true summary. What do y'all suggest? | 2023-08-14T08:13:16 | https://www.reddit.com/r/LocalLLaMA/comments/15qohcs/quantifying_content_coverage/ | psj_2908 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qohcs | false | null | t3_15qohcs | /r/LocalLLaMA/comments/15qohcs/quantifying_content_coverage/ | false | false | self | 4 | null |
How to create Retrieval Augmented Generation (RAG) locally without using LangChain PineCone etc? | 0 | I am currently working on RAG, but I wonder how to build it locally without going through LangChain, PineCone or Haystack. It will be great if you can share, how to build it from scratch. | 2023-08-14T07:30:52 | https://www.reddit.com/r/LocalLLaMA/comments/15qnqga/how_to_create_retrieval_augmented_generation_rag/ | Think_Blackberry4114 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qnqga | false | null | t3_15qnqga | /r/LocalLLaMA/comments/15qnqga/how_to_create_retrieval_augmented_generation_rag/ | false | false | self | 0 | null |
Ai Dungeon style ai generation? | 16 | Hey there folks. I've recently started to use oogabooga and a couple of models as alternative to GPT-4. I have so far used both the chat and instruct features, but I want something more in between these two, not necessarily just a chat bot, but at the same time not just a bit which creates a whole ass story with no input from me whatsoever. Is there anything which will allow for me to generate stories the way in which aidungeon does (ie. The Story progresses along the context which you give, but you continously change the story proactively as if you were playing a well.made DND campaign.) In other words, I want to be able to generate a story, whilst being able to continue the story by writing in some lines by myself to sort of the guide thr bit along the road I want it to go instead of it writing off into Narnia.
TL:DR: Is an Ai Dungeon style chat style avaliable for use with oogabooga or other similar programs. | 2023-08-14T06:04:55 | https://www.reddit.com/r/LocalLLaMA/comments/15qm4ng/ai_dungeon_style_ai_generation/ | Grucciman69420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qm4ng | false | null | t3_15qm4ng | /r/LocalLLaMA/comments/15qm4ng/ai_dungeon_style_ai_generation/ | false | false | self | 16 | null |
How good is GPT-4 vs GitHub Co-Pilot? | 14 | How good is GPT-4 vs GitHub Co-Pilot for a non programming background person? If I have to pay for one subscription, which would be better for code writing for my project and debugging? | 2023-08-14T05:22:11 | https://www.reddit.com/r/LocalLLaMA/comments/15qlbfu/how_good_is_gpt4_vs_github_copilot/ | nolovenoshame | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qlbfu | false | null | t3_15qlbfu | /r/LocalLLaMA/comments/15qlbfu/how_good_is_gpt4_vs_github_copilot/ | false | false | self | 14 | null |
Finally making the switch! Can anyone recommend models specifically for coding? | 86 | I’m cancelling my GPT-4 plus membership as it is no longer useful for debugging code. At first I started seeing people constantly complaining of the performance decrease recently. I didn’t really notice until last week as I only used it for debugging which it was really useful. Now I’m constantly getting hallucinations and it’s literally forgetting the message before last. I didn’t mind paying $20 a month for the convenience of not needing to run a private server but that convenience has since past.
Does anyone have any good recommendations? Right now I’m looking at variations of Falcon, Codegen, and WizardCoder. I’m thinking of testing each with privateGPT so I can have it ingest all my project documents as well and see how it performs. Also is the approach dated because I know how the space changes almost daily? (Also I’m working with about 44GB VRAM CUDA for my personal server.) | 2023-08-14T01:54:48 | https://www.reddit.com/r/LocalLLaMA/comments/15qh0sy/finally_making_the_switch_can_anyone_recommend/ | ETHwillbeatBTC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qh0sy | false | null | t3_15qh0sy | /r/LocalLLaMA/comments/15qh0sy/finally_making_the_switch_can_anyone_recommend/ | false | false | self | 86 | null |
I'm confused. Can someone ELI5 what the various system requirements are for LLMs | 2 | HI LocalLLAaMA,
Could someone explain how to understand system requirements for LLMs and how these requirements change as the parameter counts increase.
Questions I'm interested in.
Is it all about how big your graphics card is?
Is it all about CPU?
What role does RAM play?
Can you estimate based on system specs what sort of LLM you can run? | 2023-08-14T01:32:59 | https://www.reddit.com/r/LocalLLaMA/comments/15qgjt9/im_confused_can_someone_eli5_what_the_various/ | Extreme-Snow-5888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qgjt9 | false | null | t3_15qgjt9 | /r/LocalLLaMA/comments/15qgjt9/im_confused_can_someone_eli5_what_the_various/ | false | false | default | 2 | null |
creating a source of truth for models | 18 | i have a general idea of how this would play out but just wondering if something like this has been done before?
I found [this curated list of python algorithms](https://github.com/TheAlgorithms/Python) and thought I could leverage it to build a knowledge bank for my local LLMs. I'm running Wizardcoder-15B, which is `very` capable of acting as a Code Interpreter but there are still a lot cases where it just does not have enough understanding about a algorithms to complete a given task, so I thought to compliment that.
Basically just create an index to store the embeddings for each script, where they will each have a description of what the script does that will be used for similarity search on the user's prompt. Then the model can use the reference script to base it's answer off of-- essentially to using the scripts as "tools" to complete the task.
Would this potentially give me any better results than trying to fine-tune the model on these scripts? | 2023-08-14T00:30:01 | https://www.reddit.com/r/LocalLLaMA/comments/15qf6gm/creating_a_source_of_truth_for_models/ | LyPreto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qf6gm | false | null | t3_15qf6gm | /r/LocalLLaMA/comments/15qf6gm/creating_a_source_of_truth_for_models/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'onDFjxN-rn4OIO8z5nETEYsWQ8GOY1EpVh8qDmNFZ54', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hQsMPna885A-2fGdNsqjmPxWa12zb85g-Cn7VsiEn9Q.jpg?width=108&crop=smart&auto=webp&s=80d0b4282eece9219246b53307b689f858a1346f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hQsMPna885A-2fGdNsqjmPxWa12zb85g-Cn7VsiEn9Q.jpg?width=216&crop=smart&auto=webp&s=7920fa5d9f47d88f0132a046984699c24c2336fb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hQsMPna885A-2fGdNsqjmPxWa12zb85g-Cn7VsiEn9Q.jpg?width=320&crop=smart&auto=webp&s=a37cf6f57eb36b8f863cc7335a24aebf05e8ff9a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hQsMPna885A-2fGdNsqjmPxWa12zb85g-Cn7VsiEn9Q.jpg?width=640&crop=smart&auto=webp&s=6c78c295029c790de72408bd1743ae3526914ac9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hQsMPna885A-2fGdNsqjmPxWa12zb85g-Cn7VsiEn9Q.jpg?width=960&crop=smart&auto=webp&s=30ed2f327a43a4077d07a4329dec7ba3839f646e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hQsMPna885A-2fGdNsqjmPxWa12zb85g-Cn7VsiEn9Q.jpg?width=1080&crop=smart&auto=webp&s=ca60984eb6f8701f9bc1132e697f7e4fbabda57c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hQsMPna885A-2fGdNsqjmPxWa12zb85g-Cn7VsiEn9Q.jpg?auto=webp&s=f8d0bf0b2c284c2a464f0bfcda1808cd52ce86c0', 'width': 1200}, 'variants': {}}]} |
Online Host service for AI/ML | 1 | I'm currently working on a personal project where I must host an AI service based on llama2 in order to build an API. As you might know, Artificial intelligence requires an specific (not cheap) hardware to be ran in.
Does anybody know any company that could solve this problem with reasonable prices?
PD: I'm currently a beginner in this world, so if anyone wants to add some correction to my speech, please feel free to do so. | 2023-08-13T23:47:27 | https://www.reddit.com/r/LocalLLaMA/comments/15qe7oy/online_host_service_for_aiml/ | Responsible-Sky8889 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qe7oy | false | null | t3_15qe7oy | /r/LocalLLaMA/comments/15qe7oy/online_host_service_for_aiml/ | false | false | self | 1 | null |
Should I fine tune a LLM to overcome context length limitation? | 12 | I need to accomplish a task, and I'm uncertain about how to proceed.
I have two tables at my disposal. The first is the "canon" table, housing the official IDs and store names for all popcorn stores within a specific region.
Canon Pop Corn Stores (17k stores) canon\_id, store\_name, store\_address
The second table is the "external" one, featuring "unofficial" IDs and store names. External Pop Corn Stores external\_id, store\_name, store\_address
It's worth noting that the store names and addresses may not always correspond. My task is to map the Canon IDs to the External IDs, and for this, I plan to utilize a text-to-text generative AI. This AI will use both the internal and external store names and addresses to determine the best match.
I have successfully employed ChatGPT (GPT-4) to map all the stores in a specific city (45 stores), and it worked perfectly. However, I aim to expand this and map all the stores. The context-limit poses an obstacle, and I want to steer clear of incurring costs for the GPT-4 API.
Currently, I'm focusing on a particular External table, but there will be more in the future, so the solution needs to be relatively generic.
I've considered fine-tuning an open-source model like llama 70b, but I'm unsure if this is the proper method since it might be excessive? I also pondered increasing the context limit by using vector databases, but I'm at a loss about how to proceed with that or entirely unfamiliar with the process. | 2023-08-13T22:57:11 | https://www.reddit.com/r/LocalLLaMA/comments/15qd0as/should_i_fine_tune_a_llm_to_overcome_context/ | Pop-Huge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qd0as | false | null | t3_15qd0as | /r/LocalLLaMA/comments/15qd0as/should_i_fine_tune_a_llm_to_overcome_context/ | false | false | self | 12 | null |
Another Beginner Post: Local LLM + LangChain Question | 4 | Hi! I'm very new to AI/ML, so feel free to let me know if I misuse terminology or need to clarify. I'm looking to host an LLM that I can train on private pdfs, later finetune with csv files, and at the end have its outputs appear on a website I'm contributing to. I looked into how to properly do that, but a lot of sources tend to use openai or have it running completely locally, so it's not necessarily applicable. From what I've found so far, I'd like to use a Llama2 model, LangChain for pdf ingestion, and I'm deciding whether or not I could use together.ai or runpod for later fine-tuning and cloud hosting.
My question is if anyone doing something similar had suggestions or success with a certain combination of resources. I thought of using LocalGPT since it already incorporates pdf ingestion, but I'm not sure if I can simply use a base model instead of the chat-optimized one since I heard it's better to finetune a default model.
Let me know what you think, thank you!
​
​ | 2023-08-13T22:10:53 | https://www.reddit.com/r/LocalLLaMA/comments/15qbvug/another_beginner_post_local_llm_langchain_question/ | lankymarionette | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qbvug | false | null | t3_15qbvug | /r/LocalLLaMA/comments/15qbvug/another_beginner_post_local_llm_langchain_question/ | false | false | self | 4 | null |
CUDA out of memory when finetuning 3B model | 6 | Hello everyone I'm trying to finetune a 3b model on Colab's T4 but I'm getting CUDA out of memory issues. I'm using batch size of 1 and bfloat16. What did I do wrong?
CODE:
[https://gist.github.com/AmgadHasan/72b06cb8adc2d2217cca8c6790858685](https://gist.github.com/AmgadHasan/72b06cb8adc2d2217cca8c6790858685) | 2023-08-13T21:12:10 | https://www.reddit.com/r/LocalLLaMA/comments/15qaeom/cuda_out_of_memory_when_finetuning_3b_model/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15qaeom | false | null | t3_15qaeom | /r/LocalLLaMA/comments/15qaeom/cuda_out_of_memory_when_finetuning_3b_model/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=108&crop=smart&auto=webp&s=d5811c5bda5fece1040636a6af8702ba790f0fd4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=216&crop=smart&auto=webp&s=eee576fd4da7535eb53ceb88dd8b52f073048441', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=320&crop=smart&auto=webp&s=72872d880460efa723918c000adca0ed259cf775', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=640&crop=smart&auto=webp&s=f3545b9335d763c9da9c16bf7bf9a3f907dbd6f6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=960&crop=smart&auto=webp&s=2d241ace0f1c07088fac3f8469dbad3b05d2d419', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?width=1080&crop=smart&auto=webp&s=9055f11bdc00beb0b3589e1cae5817d6070d83bc', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/4-DxLM-C2Ve3tHmVL5ITI6GRtMVG8PzzdBuCKiaabfE.jpg?auto=webp&s=079a7260ec149880c73263d64811698adb22760a', 'width': 1280}, 'variants': {}}]} |
Oh No, I just Realized Something | 0 | I just realized something about AI.
If you have (now or in a few years) an AI friend / assistant /etc with a character and a personality and you built a long time relationship with him/her.
If that AI is hosted on a company's server:
The company might get hacked
Or simply will just move from AI altogether to something else. And will shutdown all AIs
And if you have your AI friend locally in your pc/phone:
Your pc might get hacked and the hacker might literally lock your AI friend using a ransomware/virus etc and threaten you, "money or I'll kill it", that might sound scifi for you, but think about few years fron now or even months from now, everyone might have an AI assistant.
Or simply your Hard Drive or SSD might die suddenly for no reason.
Or your phone might ger drawn in water somehow and never gets turned on again.
Just think about the emotional damage here.
Side Note: I am not saying am with or against AI here, that is not my intention while writing this,
I am just trying to share my thoughts/risks. | 2023-08-13T20:46:48 | https://www.reddit.com/r/LocalLLaMA/comments/15q9rs0/oh_no_i_just_realized_something/ | SakamotoKyu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q9rs0 | false | null | t3_15q9rs0 | /r/LocalLLaMA/comments/15q9rs0/oh_no_i_just_realized_something/ | false | false | self | 0 | null |
Is there a way to add positive and negative keywords? | 6 | So, when generating images with Stable Diffusion, you add positive and negative prompts to guide the weighting. You fill the positive prompt with a rough idea of what you want, usually a list of keywords. And you add keywords to the negative prompts in order to avoid certain things, like nudity, blood or extra limbs.
When using Ooba(for example), you provide the prompt, and the model attempts to complete your prompt for you. You can format it like this in order to guide it:
Summary: An explorer lands on an alien planet, finds plants that thrive on geothermal vents instead of sunlight, and they turn out to be intelligent.
Story: Captain James steps off the access ramp of the ship
This prompt will generate a story from that point with most models. Some obviously better than others. But fairly often, the plants will turn out hostile and we're back in blood and gore land. Or if the model is a frisky one, we've got hentai. If this were Stable Diffusion, I could just add blood, gore, nudity, etc to the negative prompt. But I can't seem to find a way to make it avoid topics or terms I don't want.
For things I *do* want, it is possible to add "Tags: " somewhere in the prompt, which helps. But is often ignored.
So, I ask: Can I add negative prompts somewhere? Can I add positive prompts somewhere, in addition to the prompt that becomes the start of the story? | 2023-08-13T20:30:59 | https://www.reddit.com/r/LocalLLaMA/comments/15q9dq0/is_there_a_way_to_add_positive_and_negative/ | Pumpkim | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q9dq0 | false | null | t3_15q9dq0 | /r/LocalLLaMA/comments/15q9dq0/is_there_a_way_to_add_positive_and_negative/ | false | false | self | 6 | null |
Repetition penalty application in proportion to historical token frequency. Thoughts? | 0 | I have a model that I run for long periods of time on the same context. Days, or hundreds of thousands of tokens will go through the same context
One of the issues I see when the model has been running for a while, is that the model slowly drifts from it's base. One of the biggest issues is that the model will frequently stop using words like "I", "and", and "to". Eventually this will snowball the generated text into nothing but a list of words, which effectively ruins the session. As the model drops these "linking" words it starts to learn that sentences are strings of nouns/verbs, which then becomes "that would be bad mean evil terrible horrible undesirable [...]" and the context is irrevocably damaged. A hard reset is required.
I've been kind of toying with the idea of an "inverse repetition penalty" for a while, or you could call it an "infrequency" penalty. There are certain words or tokens I want to appear at a particular frequency, and I've even selected one of these tokens and written a quick "test". For this token, the longer a single response goes on, the more of a bias is applied, until the token appears. Specifically I've selected the "*" token to enforce an RP style of response. After the changes, the model no longer deviates from including role-playing. I consider this a massive success.
So back to the other problem, I decided I'd start by simply excluding tokens like "and", "if", and "I" from repetition penalty. I pulled up SSMS (I log all token returns to a database) and noticed that all of the tokens I wanted to exclude from repetition penalty were by far the most frequently returned. 5000-10,000 instances VS less than hundreds for almost every other token.
Then I got to thinking, would/should I compare the text as it's being generated, to a historical frequency of tokens (sanitized) and apply an adjustment based on the expected vs. actual return rate? It almost seems like exactly the sort of thing I would want to do, but it also seems like the kind of thing that could have some adverse long term side effects.
Historically I have I think 30,000 messages to and from the bot logged right now, so that's a sizable sample size. I imagine I could pre-filter by checking the perplexity against the base model, and then calculate the average token frequencies, and then apply a slight adjustment during generation to guide the model towards matching that historical frequency. A bit of a "mini guidance"
I'm curious as to whether or not anyone sees any immediate issues with this method, as well as the potential for this method to be used as a form of guidance as a whole. | 2023-08-13T18:56:37 | https://www.reddit.com/r/LocalLLaMA/comments/15q707k/repetition_penalty_application_in_proportion_to/ | mrjackspade | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q707k | false | null | t3_15q707k | /r/LocalLLaMA/comments/15q707k/repetition_penalty_application_in_proportion_to/ | false | false | self | 0 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.